MDX queries: how to optimize their processing?
Hi there. I have a problem with MDX query processing. Currently MDX queries are processed as follows:
if I refer to a dimension in an MDX query, the OLAP processor, while creating a cube, loads the whole SID table into internal memory, and then the whole P and Q tables for the corresponding characteristic are loaded as well. It does not take into account any filters used in the query at this stage, i.e. the whole axis is loaded.
Does anyone know if there is a possibility to optimize MDX query processing in SAP BW? How can I make it take filters into account at the stage of loading master data for the cube?
We use SAP BW version 3.5.
Similar Messages
-
ADF how to display a processing page when executing large queries
The ADF application that I have written currently has the following structure:
DataPage (search.jsp) that contains a form that the user enters their search criteria --> forward action (doSearch) --> DataAction (validate) that validates the inputted values --> forward action (success) --> DataAction (performSearch) that has a refresh method dragged on it, and an action that manually sets the iterator for the collection to -1 --> forward action (success) --> DataPage (results.jsp) that displays the results of the then (hopefully) populated collection.
I am not using a database, I am using a java collection to hold the data and the refresh method executes a query against an Autonomy Server that retrieves results in XML format.
The problem that I am experiencing is that sometimes a user may submit a query that is very large and this creates problems because the browser times out whilst waiting for results to be displayed, and as a result a JBO-29000 null pointer error is displayed.
I have previously got round this using Java Servlets, whereby when a processing servlet is called, it automatically redirects the browser to a processing page with an animation on it so that the user knows something is being processed. The processing page then recalls the servlet every 3 seconds to see if the processing has been completed and, if it has, forwards to the appropriate results page.
Unfortunately I can not stop users entering large queries as the system requires users to be able to search in excess of 5 million documents on a regular basis.
I'd appreciate any help/suggestions that you may have regarding this matter as soon as possible so I can make the necessary amendments to the application prior to its pilot in a few weeks' time.
Hi Steve,
After a few attempts - yes I have a hit a few snags.
I'll send you a copy of the example application that I am working on but this is what I have done so far.
I've taken a standard application that populates a simple java collection (not database driven) with the following structure:
DataPage --> DataAction (refresh Collection) -->DataPage
I have then added this code to the (refreshCollectionAction) DataAction
protected void invokeCustomMethod(DataActionContext ctx)
{
  super.invokeCustomMethod(ctx);
  HttpSession session = ctx.getHttpServletRequest().getSession();
  Thread nominalSearch = (Thread) session.getAttribute("nominalSearch");
  if (nominalSearch == null)
  {
    synchronized (this)
    {
      // create a new instance of the search thread
      nominalSearch = new ns(ctx);
    } // end of synchronized wrapper
    session.setAttribute("nominalSearch", nominalSearch);
    session.setAttribute("action", "nominalSearch");
    nominalSearch.start();
    System.err.println("started thread, calling loading page");
    ctx.setActionForward("loading.jsp");
  }
  else
  {
    if (nominalSearch.isAlive())
    {
      System.err.println("trying to call loading page");
      ctx.setActionForward("loading.jsp");
    }
    else
    {
      System.err.println("trying to call results page");
      ctx.setActionForward("success");
    }
  }
}
Created another class called ns.java:
package view;
import oracle.adf.controller.struts.actions.DataActionContext;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.adf.model.generic.DCRowSetIteratorImpl;
public class ns extends Thread
{
  private DataActionContext ctx;
  public ns(DataActionContext ctx)
  {
    this.ctx = ctx;
  }
  public void run()
  {
    System.err.println("START");
    DCIteratorBinding b = ctx.getBindingContainer().findIteratorBinding("currentNominalCollectionIterator");
    ((DCRowSetIteratorImpl) b.getRowSetIterator()).rebuildIteratorUpto(-1);
    // b.executeQuery();
    System.err.println("END");
  }
}
and added a loading.jsp page that calls a new DataAction called "processing" every second. The processing DataAction has the following code within it:
package view;
import javax.servlet.http.HttpSession;
import oracle.adf.controller.struts.actions.DataForwardAction;
import oracle.adf.controller.struts.actions.DataActionContext;
public class ProcessingAction extends DataForwardAction
{
  protected void invokeCustomMethod(DataActionContext actionContext)
  {
    // TODO: Override this oracle.adf.controller.struts.actions.DataAction method
    super.invokeCustomMethod(actionContext);
    HttpSession session = actionContext.getHttpServletRequest().getSession();
    String action = (String) session.getAttribute("action");
    if (action.equalsIgnoreCase("nominalSearch"))
    {
      actionContext.setActionForward("refreshCollection.do");
    }
  }
}
I'd appreciate any help or guidance that you may have on this as I really need to implement a generic loading page that can be called by a number of actions within my application as soon as possible.
Thanks in advance for your help
David. -
How to optimize code for getting list of portal GP erroneous processes
Hello,
In our Web Dynpro for Java application we get the list of GP processes with status Erroneous which match the following criteria (initiator, processName, blockName, actionName, startDate, endDate, instanceName, actionProcessor) by looping over all portal users. The problem is that it takes too much time to execute. For example, with 200 users it takes about 5 minutes. Any idea how to optimize the execution?
public java.util.List<IUser> getListOfUser( ) {
  //@@begin getListOfUser()
  List<IUser> usersList = null;
  try {
    ISearchResult uniqueIDs = UMFactory.getUserFactory().getUniqueIDs();
    if (uniqueIDs.getState() == ISearchResult.SEARCH_RESULT_OK)
    {
      usersList = new ArrayList<IUser>();
      for (Iterator<?> it = uniqueIDs; it.hasNext();) {
        usersList.add(UMFactory.getUserFactory().getUser((String) it.next()));
      }
    }
  } catch (UMException ex) {
    msgMngr.reportException("Unable to get list of users!");
  }
  return usersList;
  //@@end
}
public void getErrorProcessAllUser( ) {
  IUser currentUser = null;
  try {
    if (wdContext.nodeUsers().currentUsersElement().getLogonId() == null) {
      List<IUser> userList = getListOfUser( );
      // { this loop is extremely slow
      for (int n = 0; n < userList.size(); n++) {
        String logonID = userList.get(n).getUniqueName();
        currentUser = UMFactory.getUserFactory().getUserByUniqueName(logonID);
        viewProcessDetails(currentUser);
      }
      // }
    } else {
      currentUser = UMFactory.getUserFactory().getUserByUniqueName(wdContext.nodeUsers().currentUsersElement().getLogonId());
      viewProcessDetails(currentUser);
    }
  } catch (UMException e) {
    msgMngr.reportException("No user with this logonId!");
  }
}
public void viewProcessDetails( com.sap.security.api.IUser currentUser ) {
  //@@begin viewProcessDetails()
  List<IProcessInfoElement> bindableResult = new ArrayList<IProcessInfoElement>();
  try {
    IGPRuntimeManager rtManager = GPProcessFactory.getRuntimeManager();
    IGPWorkItem[] workItems = rtManager.getWorkItems(GPWorkItemStatus.WORKITEM_STATUS_COMPLETED_BY_SYSTEM,
        GPContextFactory.getContextManager().createUserContext(currentUser));
    for (int i = 0; i < workItems.length; i++) {
      IGPProcessInstanceInfo processInfo = rtManager.getProcessInstanceInformation(workItems[i].getProcessID(), currentUser);
      if (GPBlockInstanceStatus.getStatusForCode(processInfo.getStatus()) == GPBlockInstanceStatus.BLOCK_INSTANCE_STATUS_ERROR) {
        IGPProcessInstance instance = rtManager.getProcessInstance(processInfo, GPContextFactory.getContextManager().createUserContext(currentUser));
        IGPActivityInstance[] blocksList = instance.getChildrenInformation();
        for (int j = 0; j < blocksList.length; j++) {
          IGPActivityInstance[] actionsList = ((IGPBlockInstance) blocksList[j]).getChildrenInformation();
          for (int k = 0; k < actionsList.length; k++) {
            // DO SOMETHING
Got the answers...
We used IndexedRecord instead of MappedRecord:
IndexedRecord input = rf.createIndexedRecord("input");
boolean flag = input.add("/FolderpathValue");
flag = input.add("CampusCodeValue");
Thanks,
Saravanan -
How to optimize a MDX aggregation functions containing "Exists"?
I have the following calculated measure:
sum(([D Player].[Player Name].[All],
exists([D Match].[Match Id].children,([D Player].[Player Name].currentmember,[Measures].[In Time]),"F Player In Match Stat" ))
,[Measures].[Goals])
Analyzing this calculated measure (the one with "nonempty") in MDX Studio shows: "Function 'Exists' was used inside aggregation function - this disables block computation mode".
Mosha Pasumansky spoke about this in one of his posts titled "Optimizing MDX aggregation functions", where he explains how to optimize MDX aggregation functions containing "Filter", "NonEmpty", and "Union", but he said he didn't have time to write about Exists, CrossJoin, Descendants, or EXISTING (he posted this in Oct. 2008 and the busy man hasn't had time since :P)... So does anyone know an article that continues where Mosha left off? How to optimize an MDX aggregation function containing "Exists"? What can I do to achieve the same as this calculated measure, but in block mode rather than cell-by-cell mode?
Sorry for the late reply.
I didn't check if your last proposed solution is faster or not, but I'm sorry to say that it gave the wrong result; look at this:
Player Name  | Players Team | Goals Player Scored with Team | A   | Team's Goals in Player's Played Matches
Lionel Messi | Argentina    | 28                            | 28  | 110
Lionel Messi | Barcelona    | 341                           | 330 | 978
The correct result should be like the green column; the last proposed solution is in the red column.
If you look at the query in my first post you will find that the intention is to find the total number of goals a team scored in all matches a player participated in. So in the above example Messi scored 28 goals for Argentina (before the last world cup :)) when the whole Argentinian team scored 110 goals (including Messi's goals) in those matches that Messi played even one minute in. -
How to tune the MDX queries to avoid memory pressure? Please Help!
I tried to run the following mdx queries, but kept running into Memory pressure issue. Could someone give me some suggestions to avoid the issue?
Thanks a lot
Executing the query ...
Server: The operation has been cancelled due to memory pressure.
Execution complete
======================Query 1=============================
SELECT NON EMPTY { [Measures].[Net Purchased CPP] } ON COLUMNS,
NON EMPTY { ([STATION].[Station Name].[Station Name].ALLMEMBERS
* [DAYPART].[Daypart Code].[Daypart Code].ALLMEMBERS
* [DEMOGRAPHIC].[Demo].[Demo].ALLMEMBERS ) } ON ROWS
FROM [SPOT]
WHERE ([ESTIMATE].[Estimate Number].&[3881] )
==================Query 2================================
SELECT NON EMPTY { [Measures].[Net Purchased CPP] } ON COLUMNS,
NON EMPTY { ([STATION].[Station Name].[Station Name].MEMBERS
* [DAYPART].[Daypart Code].[Daypart Code].MEMBERS
* [DEMOGRAPHIC].[Demo].[Demo].MEMBERS ) } ON ROWS
FROM [SPOT]
WHERE ([ESTIMATE].[Estimate Number].&[3881] )
=====================Query 3============================
SELECT NON EMPTY { [Measures].[Net Purchased CPP] } ON COLUMNS,
NON EMPTY { ([ESTIMATE].[Estimate Number].[Estimate Number].ALLMEMBERS
* [STATION].[Station Name].[Station Name].ALLMEMBERS
* [DAYPART].[Daypart Code].[Daypart Code].ALLMEMBERS
* [DEMOGRAPHIC].[Demo].[Demo].ALLMEMBERS ) }
ON ROWS FROM ( SELECT ( { [ESTIMATE].[Estimate Number].&[3881] } ) ON COLUMNS FROM [SPOT])
Hi BI_Eric,
The error occurs on the following scenario.
You run a Multidimensional Expressions (MDX) query that contains a Data Analysis Expressions (DAX) measure in Microsoft SQL Server 2008 R2 Analysis Services (SSAS 2008 R2).
The DAX measure has an expression that contains many levels of nested binary operators.
Applying SQL Server 2008 R2 Service Pack 1 will fix the problem, please refer to the link below.
http://support.microsoft.com/kb/2675230
Besides, you used the CrossJoin function to join multiple dimensions, which might cause the performance issue. If you crossjoin medium-sized or large-sized sets (e.g., sets that contain more than 100 items each), you can end up with a result set that contains many thousands of items, enough to seriously impair performance.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/337aea24-09ff-4354-b67d-8a90f67a13df/memory-pressure-error?forum=sqlanalysisservices
Regards,
Charlie Liao
TechNet Community Support -
Queries related to calibration inspection process
I have some queries related to the calibration business process. They are as follows:
1. Which equipment category do we choose, Q or P?
2. While creating a maintenance plan, which category do we choose, maintenance order or quality maintenance order?
3. Is it that we can't do a time confirmation before results recording? If I try to do so, I get this message: "No open operation receiving confirmation entries for order".
4. While I create my maintenance plan, I get this message: a counter could not be defined for the reference object. Why so?
5. After usage decision, if I lock my equipment for further use, the equipment has both the status AVLB and NPRT. Why?
6. Can I create my calibration order manually and assign an equipment task list? If so, how?
7. While creating my test equipment, for the PRT tab, the task list usage becomes a mandatory field. What setting is done for this task list usage to become a mandatory field?
Edited by: Pallavi Kakoti on Feb 2, 2012 8:19 AM
Hi
1. Which equipment category do we choose, Q or P?
Ans:
Q is normally used for instruments and lab equipment.
P is used for production/process related equipment that needs calibration often.
2. While creating a maintenance plan, which category do we choose, maintenance order or quality maintenance order?
Ans:
It all depends on the configuration you have done. A quality maintenance order can be used if it is linked to an inspection type;
we cannot use a maintenance order if it is not linked to inspection type 14.
3. Is it that we can't do a time confirmation before results recording? If I try to do so, I get this message: "No open operation receiving confirmation entries for order".
Ans:
You can do time confirmation after results recording, then the usage decision (UD). After UD, the order will be TECOed automatically.
4. While I create my maintenance plan, I get this message: a counter could not be defined for the reference object. Why so?
Ans:
Check whether any measuring counters are created for the equipment.
5. After usage decision, if I lock my equipment for further use, the equipment has both the status AVLB and NPRT. Why?
Ans:
After UD, the system changes the status to NPRT only for equipment that has the PRT view.
If the PRT view is not maintained in the equipment, the system status remains AVLB.
If you don't want to use the equipment after UD-Reject, you have to go for a development using user status.
6. Can I create my calibration order manually and assign an equipment task list? If so, how?
Ans:
Yes, you can.
Create a task list with inspection point 300 and assign it when you create the order;
you can also have a manual call in the maintenance plans. -
Performance problem when using MDX queries in Crystal
Dear All,
I am using Crystal reports together with an MDX query from the BI system. When I run the report in the Web then it takes 60 seconds to get results. When I use this query through MDX into a crystal report the query runs very long and in most cases ends up with the message :
Database connection error: 'No more storage space available for extending an internal table"
It looks like there is something going on with the definition of the query on the database when using the MDX drivers. Is There a way to solve this ?
I hear rumours that upgrading to EHP 1 Stack 3 solves performance problems arround MDX queries, but I cannot find prove anywhere. Without proof or an official statement I cannot advise the Customer.
Please Help,
Marcel
Hi Ingo,
This feature I did not use yet ;-).
The MDX looks like :
SELECT {[Measures].[4B78DNZSKX3JQCWJKSS9S421C]} ON COLUMNS, NON EMPTY CROSSJOIN(EXCEPT([Z0TERR2__Z0SALEM].MEMBERS, {[Z0TERR2__Z0SALEM].[All]}), CROSSJOIN(EXCEPT([Z0CUST].MEMBERS, {[Z0CUST].[All]}), CROSSJOIN(EXCEPT([0DOC_NUMBER].MEMBERS, {[0DOC_NUMBER].[All]}), CROSSJOIN(EXCEPT([Z0TERR2].MEMBERS, {[Z0TERR2].[All]}), CROSSJOIN(EXCEPT([Z0ORDITEM__Z0CNVMAN].MEMBERS, {[Z0ORDITEM__Z0CNVMAN].[All]}), CROSSJOIN(EXCEPT([Z0CICL__Z0IMLG18].MEMBERS, {[Z0CICL__Z0IMLG18].[All]}), CROSSJOIN(EXCEPT([Z0DUMOVEN__Z0MOVDECR].MEMBERS, {[Z0DUMOVEN__Z0MOVDECR].[All]}), CROSSJOIN(EXCEPT([Z0BUSTRW].MEMBERS, {[Z0BUSTRW].[All]}), CROSSJOIN(EXCEPT([Z0BUSTSTW].MEMBERS, {[Z0BUSTSTW].[All]}), EXCEPT([Z0IBUSTRW].MEMBERS, {[Z0IBUSTRW].[All]})))))))))) DIMENSION PROPERTIES [Z0CUST].[20CITY], [Z0CUST].[2Z0CUST], [0DOC_NUMBER].[2Z0SALCRON] ON ROWS FROM [Z0SD_M04/Z0SD_M04_Q0021] SAP VARIABLES [0P_COAR] INCLUDING [0CO_AREA].[3100], [0P_COCD] INCLUDING [0COMP_CODE].[3110], [0GMFROM] INCLUDING [0FISCPER].[Y42009002], [0GMTO] INCLUDING [0FISCPER].[Y42009002], [!V000005] INCLUDING [Z0REJECFP].[Y42009002], [!V000006] INCLUDING [Z0REJECFP].[Y42009002], [ZP_CSDAT] INCLUDING [0CALDAY].[20090311]
Running the query in MDXTEST transaction with Flattening gives data in 10 minutes.
Running the query in multidimensional mode gives the same result.
How to proceed ??
Kind regards
Marcel -
Please provide details of what type of questions to ask the HR department, because I don't have the HR domain knowledge. My boss sent me to the HR department but there was no response from their side. Please guide me on how to learn the process. Thanks in advance.
Hi Gayatri,
First of all let us know what exactly you are looking for?
1) Are you looking for HR process or
2) You want to know the questions that are to be asked with the business users / end users , and based on that to start the business blue print for the implementation.
If you are looking for Q&A to be sent to the end users, then you have to search SDN, where you will find lots of Q&A. But try to understand the minimum process of HR activities that happen in any organization before going ahead.
In any organization, HR plays a vital role and behaves as the backbone of the organization.
The process involves hiring employees, maintaining their database, maintaining time and attendance, running payroll, etc. Employee interaction, dealing with employee benefits, dealing with their grievances, uplifting their morale, etc. are the main functions from an HR perspective.
This is just an overview of HR activities.
Coming to the SAP HCM module, we deal with Recruitment, OM, PA, Time Management, Payroll, T&E, Travel Management and, based on the country, specific modules like US Benefits etc.
So our request is: confirm what you are looking for and let us know how we can add our inputs to your query to give a clearer picture.
All the best.
Regards,
Sri.. -
How to map import process for items
Dear All,
Can anyone guide me how to map the import process for items for our customer?
What I understand is that the process in B1 is:
Raise PO
GRPO
Landed Cost
AP Invoice
I want to know whether this is correct. Also, in this process there are clearing agents who perform the formalities at the customs office and then raise a bill to the client for the same.
Hi,
The process looks correct. For costs related to the clearing agents who perform the formalities at the customs office, use landed cost.
Thanks,
Gordon -
IPod w/Cassette Adapter-How to Optimize Sound Quality?
I have I believe a G2 iPod (15 GB), and I play it in my car using a cassette adapter. I've found that to keep the sound from degrading at above average volumes, I have to make some adjustments. One, I have to set the EQ to 'Bass Reducer'. Two, I have to make sure the volume on the iPod is at about 80%. If I go above that, the sound distorts. The lower you go below that, the more hissing you get w/ the playback. I also turn the 'Sound Check' setting on.
Any other recommendations on how to optimize the sound quality when playing it through your vehicle sound system? Also, I'm thinking about getting a G5 unit (most likely the 4 GB Nano). Anyone have any feedback on whether I can expect better sound quality, in particular in my vehicle with the setup outlined above, by upgrading to the newer unit?
Various Methods to Connect to a Car Stereo System, or Listen to Your iPod in the Vehicle
Best:
Direct connection via the dock connector or headphone jack of your iPod, to the mini-jack input (or AUX RCA input jacks) of your car stereo. Not many low/moderate-end cars have this feature yet, but it is becoming more popular. Some aftermarket auto stereo units may have this feature.
There are also some after-market, moderate to fairly expensive direct interfaces, that hook into your existing car stereo to provide a high-quality, direct connection. Most will also power/charge the iPod. Pretty slick, but can be pricey ($80-$300). If money is no object, a clean way to go. Not very portable from car to car – if at all.
http://logjamelectronics.com/ipodproducts.html
http://www.myradiostore.us/auxadapters/blitzsafe/blitzsafe-m-link-ipod-interface.html
http://www.theistore.com/ipod2car.html
http://www.mp3yourcar.com/
Better:
Connect your iPod to a cassette adaptor and play your tunes through your car's cassette player. Some new cars no longer come with a cassette player, so it may not be an option. It will provide even better audio quality if you can run the audio feed out of the dock connector (see the SendStation link below). Can be portable between cars that have a cassette player and also be used in your home cassette system. $5 to $20 for the adaptors, with large variations in quality (even with the same model).
Good:
Attach an FM transmitter to your iPod and play the tunes through an unused FM station. Convenient, but wireless FM transmitter signals are susceptible to static and outside interference, and can vary in strength and quality depending on your location. Some noticeable degradation and distortion, depending on the quality of the transmitter, the sensitivity of your ears and the airwave congestion in your area. Highly portable between cars, and may be used in a home system. FM transmitters that need to be plugged into a DC auto jack may not work in a home environment (without some sort of adaptor). You can pay from $15 to more than $80 for some of these.....but for FM quality audio, how much is too much?
Marginal:
Attach an external speaker system to the iPod and play it in the car. Workable, but not too good - unless you spring for a $300+ Bose (or similar) system. But why? Only if your vehicle has no Stereo system, perhaps.
Brave Techno-Geek:
This site gives some directions on adapting a car stereo by yourself. Risky, but it has been successfully accomplished by a forum member. Fairly inexpensive....unless you screw it up.
Whichever you choose, power the iPod through your car’s DC power -- either from a power adapter, or as part of the combined audio adaptor. Have a method to secure the iPod to the dash/console/etc. See the reviews for all the various accessories at the iLounge
You will also get better audio output if the dock connection plug is used, rather than the headphone jack. See Sendstation for a novel adaptor called a PocketDock. Others types are also available via this site.
I have read positive and negative reviews of each method, and within methods there are great variations in performance of different manufacture's systems – and peoples’ opinions of their performance. Some cassette adaptors/FM transmitters work poorly, some better.
FWIW: I have the iTrip Mini & the Newer Technology RoadTrip!+ FM transmitters, a Belkin cassette adaptor (used both with & without the PocketDock) and two vehicles with the BlitzSafe direct interface. Using the same song in the same car, I found that the FM transmitters worked, but not as well as the cassette adapter via the headphone jack. Using the PocketDock on the cassette adapter resulted in a significant audio quality improvement. As expected, the BlitzSafe direct connect was exceptionally better than everything else: less tinny, a warmer/richer sound, and close to true CD quality. -
Hello experts,
is there any documentation about variable types and their processing in i_step = 1, 2 etc.? I know there is note 492504 "Dependent customer exit-type variables", but I don't understand, whether a variable which is NOT "Ready for input" will be processed in i_step = 1 or not (quote of SAP library: "i_step = 1: Call takes place directly before variable entry."). I experienced coincidentally, that some variables not "Ready for input" will be processed there and some not.
Furthermore, isn't that an error? Why does a variable without input possibility have to be processed before input? Is this really the case?
Confused, any hints are welcomed!
Regards M.L.
for I_STEP = 1
Call before the variable screen .
for I_STEP = 2
Call after variable entry. This step is only started for variables that are not ready for input and could not be filled at I_STEP = 1.
for I_STEP = 3
In this call, you can check the values of the variables. Triggering an exception (RAISE) causes the variable screen to appear once more. Afterwards, I_STEP=2 is also called again.
for I_STEP = 0
The enhancement is not called from the variable screen. The call can come from the authorization check or from the Monitor.
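The I_STEP handling above is usually coded in the BW customer exit include ZXRSRU01 (behind EXIT_SAPLRRS0_001). A minimal sketch of such a dispatcher; the variable name ZVAR_EX and the default value chosen are hypothetical examples, and the parameter names come from the standard exit interface:

```abap
" Sketch of an I_STEP dispatcher in include ZXRSRU01
" (customer exit EXIT_SAPLRRS0_001).
" ZVAR_EX is a hypothetical variable name - adapt to your own.
DATA: l_s_range LIKE LINE OF e_t_range.

CASE i_vnam.
  WHEN 'ZVAR_EX'.
    CASE i_step.
      WHEN 1.
        " Before the variable screen: propose a default value.
        l_s_range-sign = 'I'.
        l_s_range-opt  = 'EQ'.
        l_s_range-low  = sy-datum.
        APPEND l_s_range TO e_t_range.
      WHEN 2.
        " After variable entry: derive the value for a variable
        " that is not ready for input, e.g. from another variable
        " already filled in i_t_var_range.
      WHEN 3.
        " Validate all entered values; RAISE an exception here
        " to force the variable screen to reappear.
    ENDCASE.
ENDCASE.
```

Variables that are not ready for input may or may not pass through I_STEP = 1 depending on their definition, which is why the CASE on i_step should never assume a particular call sequence.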
There is a good HOW to Guide which explains the importance of I_STEP :
http://service.sap.com/~form/sapnet?_SHORTKEY=00200797470000078090&_SCENARIO=01100035870000000112&_OBJECT=011000358700002762582003E
Another from SDN:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/bw-and-portals-05/five%20ways%20to%20enhance%20sap%20bi%20backend%20functionality%20using%20abap.pdf -
How to optimize this select statement? It's a simple select...
How to optimize this select statement, as the number of records in the earlier table is about 1 million, and this simple select statement is not executing and is taking a lot of time:
SELECT guid
       stcts
  INTO TABLE gt_corcts
  FROM /sapsll/corcts
  FOR ALL ENTRIES IN gt_mege
  WHERE /sapsll/corcts~stcts = gt_mege-ctsex
    AND /sapsll/corcts~guid_pobj = gt_sagmeld-guid_pobj.
Regards
Arora
Hi Arora,
Using PACKAGE SIZE is very simple, and you can avoid the timeout as well as the memory problem. Sometimes, if you have too many records in the internal table, you will get a short dump called TSV_TNEW_PAGE_ALLOC_FAILED.
Below is the sample code.
DATA p_size TYPE i VALUE 50000.

SELECT field1 field2 field3
  INTO TABLE itab1 PACKAGE SIZE p_size
  FROM dtab
  WHERE <condition>.
  " Other logic or processing on the internal table itab1
  FREE itab1.
ENDSELECT.
Here the only problem is you have to put the ENDSELECT.
How it works:
In the first pass it selects 50,000 records (or whatever p_size you gave) into the internal table itab1.
In the next pass it clears the 50,000 records already there and appends the next 50,000 records from the database table.
So care should be taken to do all the logic or processing within the SELECT ... ENDSELECT.
Some ABAP standards may not allow you to use SELECT ... ENDSELECT, but this is the best way to handle huge data without short dumps and memory related problems.
I am using this approach, and my data is much larger than yours: an average of at least 5 million records per select.
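Where coding standards forbid SELECT ... ENDSELECT, the same chunked processing can be done with an explicit database cursor. A minimal sketch under the same placeholder table and field names (dtab, field1..field3) as in the example above:

```abap
" Chunked read without SELECT ... ENDSELECT, using an explicit cursor.
" dtab and field1..field3 are the same placeholders as above.
DATA: l_cursor TYPE cursor,
      lt_chunk TYPE STANDARD TABLE OF dtab.

OPEN CURSOR WITH HOLD l_cursor FOR
  SELECT field1 field2 field3 FROM dtab.

DO.
  FETCH NEXT CURSOR l_cursor
    INTO CORRESPONDING FIELDS OF TABLE lt_chunk
    PACKAGE SIZE 50000.
  IF sy-subrc <> 0.
    EXIT.  " no more records
  ENDIF.
  " Process the current 50,000-row chunk here.
  FREE lt_chunk.
ENDDO.

CLOSE CURSOR l_cursor.
```

Each FETCH replaces the contents of lt_chunk with the next package, so memory stays bounded regardless of the table size.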
Good luck and hope this help you.
Regards,
Kasthuri Rangan Srinivasan -
Hi,
I have a question: how do I kill a process chain while it is running?
Thank you.
Hi
If the process chain is running in background,
goto <b>SM37</b>
Give * in the jobname
Give the username who scheduled the process chain
Job status - check the check boxes - SCHED, relased, ready, active
click <b>execute</b>.
It displays all the jobs for process chains that are scheduled and running.
You can select the process chain that need to be stopped & click STOP active job or ctrl+F1.
Hope this helps!
Kindly award points for all useful answers.
If you post the BW related queries in the <b>BI general</b> forum, you will get more answers.
Best regards,
Thangesh -
MDX queries: BW statistics are inconsistent
Hello,
We are implementing BO on top of SAP BW 3.5.
We had some major performance issues , and now we're tryning to investigate in detail.
The queries were first launched on BO only, and afterwards on SAP BW using the transaction MDXTEST; we measured a similar amount of time.
Now we want to have a closer look at the distribution of the time spent (DB %, Frontend %, Network %, etc.).
The problem is that when we have a look at transaction ST03 in BW analysis, the statistics of MDX queries are inconsistent.
I've attached a concrete example:
The left column is a BO statistic executed on SAP using MDXTEST,
the right column is a traditional BEx query.
When I add up the different time percentages, the result is 100% for the BEx query... and 44% for the MDX query.
I know that many fixes for MDX have been delivered in BI 7, but what about 3.5? I searched the SAP notes but I couldn't find any note describing this symptom, nor any fix for our BW release.
Do you know how I should interpret those results; in other words, where are the missing percentages in the MDX queries?
thank you in advance for your help.
Regards.
InfoCube Name Cube C Cube C
Name of Query Query A(BO) Query B(BEX)
Number of Navigation Steps 1 1
Total Runtime (s) 534,9 39,8
Total Runtime / Navigation Step (s) 534,9 39,8
Median of Total Runtime (s) 534,9 39,8
Initialization Time (s) 0,2 0,4
Initialization Time / Total Runtime (%) 0,04 1,01
Ø Initialization Time (s) 0,2 0,4
OLAP Runtime (s) 0,9 2,1
OLAP Time / Total Runtime (%) 0,17 5,28
OLAP Runtime / Navigation Step (s) 0,9 2,1
Database Runtime (s) 197,0 22,4
Database Runtime / Total Runtime (%) 36,83 56,28
DB Runtime per Navigation Step (s) 197,0 22,4
Frontend Runtime (s) 33,9 4,8
Frontend Runtime / Total Runtime (%) 6,34 12,06
Frontend Time per Navigation Step (s) 33,9 4,8
Master Data Runtime (s) 5,6 10,1
Master Data Time / Total Runtime (%) 1,05 25,38
Ø Master Data Time per Navigation Step 5,6 10,1
Number of records selected 18620 899
Number of Transferred Records 10037 629
Ratio of Selected to Transferred Records 1,9 1,4
Database Time per Selected Record (ms) 10,6 24,9
Number of Cells 162060 7295
Cells per Transferred Records 16,1 11,6
Number Formatted 0 72
Ratio of Formattings to Number of Cells 0,0 0,0
percent Total : 44,43% for MDX and 100,01% as expected for BEX
time 237,8 40,2
Edited by: Raoul Shiro on Feb 5, 2009 6:54 PM
Hi Raoul,
In regard to the BW statistics values, I would suggest you open an OSS case.
In regard to the performance improvements: correct, they are done for BI 7 and the XI 3.1 release.
Ingo -
Converting MDX queries to OBIEE 11g
How can I convert MDX queries to OBIEE queries? I'm migrating Hyperion/Brio reports to OBIEE 11g reports.
You can always plug in your cubes as datasources into OBIEE and then create the analyses using Answers.
A direct MDX-to-"OBIEE" path only exists if you copy your MDX queries and paste them into a direct database request which uses a connection pool pointing towards your cube. That said... you will miss out on a lot of vanilla functionality if you do this.
Cheers,
C.