Data extraction from R/3 to BW taking too much time
Hi,
We have one delta data load from R/3 to an ODS that is taking 4-5 hours. The job runs in R/3 itself for 4-5 hours, even for only 30-40 records. After the ODS, the data is updated to a cube, but since the ODS load itself takes so much time, the delta brings 0 records into the cube, so we have to update it manually.
Also, while the job for the ODS load is running, we can't check the delta records in RSA3; it gives the error "error occurs during extraction".
Can you please guide us on how to make this load faster, and if any index needs to be built, how to proceed on that front?
Thanks
Nilesh
Rahul,
I tried with RSA3; it gives me a dump with the message "Result of customer enhancement: 19571 records".
Error details are:
Short text
Function module " " not found.
What happened?
The function module " " is called,
but cannot be found in the library.
Error in the ABAP Application Program
The current ABAP program "SAPLRSA3" had to be terminated because
it came across a statement that unfortunately cannot be executed.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Similar Messages
-
Data from BSEG taking too much time
Hi,
I need to extract data from BSEG, and it is running into a timeout.
My query is:
1. On the basis of VBRK, I need to fetch data from BSEG.
loop at it_vbrk.
  awkk = it_vbrk-vbeln.
* remove leading zeros so the value matches the AWKEY format in BKPF
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
    EXPORTING
      input  = awkk
    IMPORTING
      output = awkk.
  it_awkey-vbeln = it_vbrk-vbeln.
  it_awkey-awkey = awkk.
  append it_awkey.
endloop.
2.
if it_awkey[] is not initial.
  select bukrs belnr gjahr blart awkey
    from bkpf
    into table it_bkpf
    for all entries in it_awkey
    where bukrs = '5000'
      and blart = 'RV'
      and awkey = it_awkey-awkey.
endif.
Please guide me to a solution.
Ashish Gautam
OK, you can use the secondary index tables of BSEG:
BSAS,
BSIS,
BSAD,
BSID,
BSAK,
BSIK,
BSIM.
Whatever is in BSEG is also in these tables.
And if you use BSEG, then try to match the key fields of BSEG; only then will it give the best performance,
e.g.:
BUKRS
BELNR
GJAHR
BUZEI
Use the same order as given in the database,
do not use any fields other than the key fields in the WHERE clause,
and do not use 'NE' or 'NOT IN' in the WHERE clause.
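For illustration, here is a minimal sketch of such an access using BSIS (the secondary index table for G/L open items) instead of BSEG. The company code and account values are purely illustrative, and the selection deliberately uses only the leading key fields BUKRS and HKONT:

DATA: lt_bsis TYPE STANDARD TABLE OF bsis.

* BUKRS and HKONT lead the BSIS key, so this selection is index-supported.
SELECT * FROM bsis
  INTO TABLE lt_bsis
  WHERE bukrs = '5000'           "company code (illustrative value)
    AND hkont = '0000400000'.    "G/L account (illustrative value)

The same pattern applies to BSID/BSAD for customers and BSIK/BSAK for vendors, where the account field is KUNNR or LIFNR respectively.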
regards
Sukriti.... -
Query taking too much time with dates??
hello folks,
I am trying to pull some data using a date condition, and for some reason it is taking too much time to return the data.
and trunc(al.activity_date) = TRUNC(SYSDATE, 'DD') - 1 -- if I use this, it takes too much time
and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this, it returns the data in a second. Why is that?
How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?
Presumably you've got an index on activity_date.
If you apply a function like TRUNC to activity_date, you can no longer use the index.
Post execution plans to verify.
and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
and al.activity_date < TRUNC (SYSDATE, 'DD') -
Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps, and the notification window takes too much time to open when I swipe down. Please help me resolve the issue.
The Basic Troubleshooting Steps are:
Restart... Reset... Restore...
iPhone Reset
http://support.apple.com/kb/ht1430
Try this First... You will Not Lose Any Data...
Turn the Phone Off...
Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
Wait for the Apple logo to Appear and then Disappear...
Usually takes about 15 - 20 Seconds... ( But can take Longer...)
Release the Buttons...
Turn the Phone On...
If that does not help... See Here:
Backing up, Updating and Restoring
http://support.apple.com/kb/HT1414 -
Report is taking too much time when running from parameter form
Dear All
I have developed a report in Oracle Reports Builder 10g. While running it from Report Builder, the data comes very fast.
But if it is run from a parameter form, it takes too much time to format the report as PDF.
Please suggest any configuration or setting if anybody has an idea.
Thanks
Hi,
The first thing to check is whether the query is running to completion in TOAD. By default, TOAD just selects the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and that if any security packages or VPD policies are used in the SQL, these have been initialised the same way.
If you still cannot determine the difference then trace both sessions.
Rod West -
ODS to CUBE load taking too much time..
Hi all ,
we are loading data from our ZODS to ZCUBE, but the data load is taking too much time. We haven't created any indexes; we also tried making an InfoSource for the ODS, but still the same problem. It always shows 0 of 345,674 records, i.e. the records are not getting extracted from the ODS.
Can anybody help me in this regard? It is a bit urgent.
Thanks in advance.
Hi,
there are a few things you can check. First, check with ST22 whether this job ended in a dump.
The next thing you can do, if the job doesn't end abnormally, is to reduce the number of records processed at the same time. Sometimes the system has trouble if the number of records it has to process is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value. Try to run the load again.
If the job still doesn't finish, then you have to check whether there are any ABAP routines and/or formulas involved in the update rules. Maybe they are running in a loop.
regards,
Raymond Baggen
Uphantis bv -
Job is taking too much time during Delta loads
Hi,
when I try to extract delta records from R/3 for the standard extractor 0FI_GL_4, it takes 46 minutes even though there are very few delta records (only 193).
Please find attached the R/3 job log; most of the time is spent in the call to the customer enhancement BW_BTE_CALL_BW204010_E.
Please let me know why this is taking so much time.
06:10:16 4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
06:56:46 Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
06:56:46 Result of customer enhancement: 193 records
06:56:46 Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
06:56:46 Result of customer enhancement: 193 records
06:56:46 Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
06:56:47 IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
06:56:47 IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
06:56:48 Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
06:56:48 Through selection conditions, 0 records filtered out in total
06:56:48 IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
06:56:48 IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
06:56:54 tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
06:56:54 tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
06:56:55 Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
06:56:55 IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
06:56:55 IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
06:56:55 Job finished
Regards
Atul
Hi Atul,
Have you written any customer exit code? If yes, check it for optimization.
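A common cause for this pattern (a long runtime for only a handful of records) is a database SELECT executed once per record inside the exit. A minimal sketch of the usual fix, assuming the exit loops over the data package table c_t_data; the lookup table zfi_lookup and the field zfield are purely hypothetical names used for illustration:

* Hypothetical lookup: read all rows once with FOR ALL ENTRIES
* instead of one SELECT SINGLE per record inside the loop.
TYPES: BEGIN OF ty_lookup,
         belnr     TYPE belnr_d,
         zfield(10) TYPE c,                 "hypothetical attribute
       END OF ty_lookup.
DATA: lt_lookup TYPE STANDARD TABLE OF ty_lookup,
      ls_lookup TYPE ty_lookup,
      ls_data   LIKE LINE OF c_t_data.

IF c_t_data[] IS NOT INITIAL.
  SELECT belnr zfield FROM zfi_lookup      "hypothetical table
    INTO TABLE lt_lookup
    FOR ALL ENTRIES IN c_t_data
    WHERE belnr = c_t_data-belnr.
  SORT lt_lookup BY belnr.
ENDIF.

LOOP AT c_t_data INTO ls_data.
* binary search on the sorted table replaces the per-record SELECT
  READ TABLE lt_lookup INTO ls_lookup
       WITH KEY belnr = ls_data-belnr BINARY SEARCH.
  IF sy-subrc = 0.
*   assumes zfield was appended to the extract structure
    ls_data-zfield = ls_lookup-zfield.
    MODIFY c_t_data FROM ls_data.
  ENDIF.
ENDLOOP.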
Kind Regards,
Ashutosh Singh -
Hi Experts,
I am looking to pull VBRP-VBELN (the billing doc #) based on VGBEL (the sales doc #), i.e.:
select single * from vbrp into wa_vbrp
  where vgbel = wa_vbap-vbeln
    and posnr = wa_vbap-posnr.
But as there is no secondary index on VBRP for VGBEL, and there are tons of records in VBRP, it is taking too much time.
So what is the alternative to find the billing doc # from my sales doc #?
Thanks
Mr. Srinivas,
Just a suggestion: if you need only the header details, then why not extract data from VBRK (header for billing doc) and VBAK (header for sales doc)? These 2 tables contain only a single line per billing or sales doc, and hence the performance should be better.
If my suggestion is not what you are looking for, then apologies for the same.
Regards,
Vivek
Alternatively as Mr. Eric suggests, you can use VBFA
VBFA-VBELN = VBRK-VBELN
VBFA-VBELV = VBAK-VBELN
Logic is VBFA-VBELN is the subsequent document & VBFA-VBELV is the preceding document.
Hope it helps. (But be careful: the document created after the sales order is not always a billing document; there might be delivery documents between the sales order and the billing document.)
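For illustration, a minimal sketch of that VBFA access, reusing the poster's wa_vbap work area (VBTYP_N = 'M' is the document category for invoices; verify the categories in your own document flow):

DATA: lt_vbfa TYPE STANDARD TABLE OF vbfa.

* VBELV/POSNV (the preceding document) are the leading key fields of
* VBFA, so this access is supported by the primary index.
SELECT * FROM vbfa
  INTO TABLE lt_vbfa
  WHERE vbelv   = wa_vbap-vbeln    "sales order number
    AND posnv   = wa_vbap-posnr    "sales order item
    AND vbtyp_n = 'M'.             "subsequent document category: invoice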
Edited by: Vivek on Jan 29, 2008 11:11 PM -
Taking too much time to load application
Hi,
I have deployed a J2EE application on Oracle 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
I have another 10g server (same version) on which the same application loads very fast.
When I checked the Apache error logs, I found this:
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
[Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
[Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
Please HELP ME...
Hi, this is the solution given by your link:
A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
Problem
To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
Solution
The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
Oc4jUserKeepalive on
Oc4jConnTimeout 12000 (or a similar value)
Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
ajp.keepalive=true
For example:
java -Dajp.keepalive=true -jar oc4j.jar
Please tell me where, or in which file, I should put the option
java -Dajp.keepalive=true -jar oc4j.jar ? -
Taking too much time in Rules(DTP Schedule run)
Hi,
I am scheduling a DTP which has filters to minimize the loaded data.
When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step, like "Start Routine", "Rules" and "End Routine").
It is consuming too much time in the rules mapping.
What is the problem, and are there any solutions?
regards,
Sree
Hi,
The time taken at "rules" depends on the complexity involved in your routine. If it is a complex calculation, it will take time.
Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
You can find these as follows:
go to the DTP, open the Goto menu and select "Settings for Batch Manager".
In that screen, increase the number of processes from 3 to a higher number (max 9).
Change the job class to 'A'.
If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube,
change these settings and run your DTP one more time.
You can observe the difference.
Reddy -
Taking too much time collecting in business content activation
Hi all,
I am collecting business content objects for activation. I have selected the 0FIAA_CHA object, but collecting it for activation takes too much time; it then asks for source
system authorization and then throws the error "maximum run time exceeded". I had selected "data flow before" there.
What can be the reason for it?
Please help.
Hi,
You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there
are a lot of objects to collect. Under 'Grouping' you can select the option 'Only Necessary Objects'; please check if you can
use this option to install only the objects that you need from Content.
Best Regards,
Des. -
Taking too much time using BufferedWriter to write to a file
Hi,
I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 and above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has somebody faced this problem before, and if so, what could be the problem? This is very high priority work, and it would be really helpful if someone could give me some info on this.
Thanks in advance.
public String extractItems() throws InternalServerException {
    try {
        String extractFileName = getExtractFileName();
        FileWriter fileWriter = new FileWriter(extractFileName);
        BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
        CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
        System.out.println("Before -1");
        CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
        System.out.println("After -1");
        PrintWriter out = new PrintWriter(bufferWrt);
        System.out.println("Before -2");
        TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
        System.out.println("After -2");
        XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
        Enumeration allitems = itemSet.allItems();
        System.out.println("the batch size : " + itemSet.getBatchSize());
        XDForm frm = itemSet.getXDForm();
        XDFormProperty[] props = frm.getXDFormProperties();
        System.out.println("Before -3");
        bufferWrt.newLine();
        long startTime, startTime1, startTime2, startTime3;
        startTime = System.currentTimeMillis();
        System.out.println("time here is--before-while : " + startTime);
        while (allitems.hasMoreElements()) {
            String aRow = "";
            XDItem item = (XDItem) allitems.nextElement();
            for (int i = 0; i < props.length; i++) {
                // props[i]: the [i] index was most likely swallowed by the forum's italics markup
                String value = item.getStringValue(props[i]);
                if (value == null || value.equalsIgnoreCase("null"))
                    value = "";
                if (i == 0)
                    aRow = value;
                else
                    aRow += ("\t" + value);
            }
            startTime1 = System.currentTimeMillis();
            System.out.println("time here is--before-writing to buffer --new: " + startTime1);
            bufferWrt.write(aRow.toCharArray());
            bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
            bufferWrt.newLine();
            startTime2 = System.currentTimeMillis();
            System.out.println("time here is--after-writing to buffer : " + startTime2);
        }
        startTime3 = System.currentTimeMillis();
        System.out.println("time here is--after-while : " + startTime3);
        out.close(); // added by rosmon to check extra time taken for extraction
        bufferWrt.close();
        fileWriter.close();
        System.out.println("After -3");
        return extractFileName;
    } catch (Exception e) {
        e.printStackTrace();
        throw new InternalServerException(e.getMessage());
    }
}
Hi fiontan,
Thanks a lot for the response!!!
Yeah, I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
Does it save any time to use the print() method?
The place where the delay occurs is the while loop shown below:
while (allitems.hasMoreElements()) {
    String aRow = "";
    XDItem item = (XDItem) allitems.nextElement();
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i == 0)
            aRow = value;
        else
            aRow += ("\t" + value);
    }
    startTime1 = System.currentTimeMillis();
    System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
    bufferWrt.write(aRow.toCharArray());
    out.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.newLine();
    startTime2 = System.currentTimeMillis();
    System.out.println("time here is--after-writing to buffer : " + startTime2);
}
What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that until the records are done.
Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
thanks in advance -
BPC application is taking too much time to load
Hi experts!
I'm facing a very weird problem...
We've developed a BPC application (app name: USM).
This application takes too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
There are around 100,000 records in the database, most of them coming from material master data.
If I try to load this USM application on another computer, the process loads smoothly. The computers' hardware is all the same, the server is generously over-dimensioned, and everyone is on the same network.
I talked to the infrastructure department and we made several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used wireless and cable connections (got the same result), and checked the communication between BW and BPC, but it is OK.
After all that, I tried to load the APSHEL application in the same environment, and it loaded instantly. So I guess something is wrong with my application. But if it were that, I suppose it would happen on all computers and not only on some of them.
Has anybody ever seen something like this?
Thank you in advance.
Rubens
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM
Hi Rubens,
I would try making a couple of tests:
1. Install the client on a machine that is located in the same network segment, or try using a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
2. Make a full optimize of one application to see if the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
It is very weird that it happens on some computers and not on others... also try to clean up the local cache of the applications on those computers that are giving you bad performance, and retry.
hope it helps, -
Full DTP taking too much time to load
Hi All ,
I am facing an issue where a DTP takes too much time to load data from a DSO to a cube, both via process chain and when running it manually.
There are 6 similar DTPs which load data for different countries (different DSOs and cubes as source and target respectively) for the last 7 days based on GI date. All the DTPs pull almost the same number of records and finish within 25-30 minutes, but one DTP takes around 3 hours. The problem started a couple of days back.
I have changed the parallel processes from 3 -> 4 -> 5 and the packet size from 50,000 -> 10,000 -> 100,000, but there is no improvement. I also want to mention that all the source DSOs and target cubes have the same structure, and all the transformations have field routines and end routines.
Can you all please share some pointers which can help?
Thanks
Prateek
Hi Raman,
This is what I get when I check the report. Can this be causing issues, as 2 rows have a ratio >= 100%?
ETVC0006 /BIC/DETVC00069 rows: 1.484 ratio: 0 %
ETVC0006 /BIC/DETVC0006C rows: 15.059.600 ratio: 103 %
ETVC0006 /BIC/DETVC0006D rows: 242 ratio: 0 %
ETVC0006 /BIC/DETVC0006P rows: 66 ratio: 0 %
ETVC0006 /BIC/DETVC0006T rows: 156 ratio: 0 %
ETVC0006 /BIC/DETVC0006U rows: 2 ratio: 0 %
ETVC0006 /BIC/EETVC0006 rows: 14.680.700 ratio: 100 %
ETVC0006 /BIC/FETVC0006 rows: 0 ratio: 0 %
ETVC0007 rows: 13.939.200 density: 0,0 % -
Report taking too much time in the portal
Hi freiends,
we have developed a report on the ODS, and we published it on the portal.
The problem is that when the users execute the report at the same time, it takes too much time, and because of this the performance is very poor.
Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs
so that they do not have to log in to the portal,
or can we create the same report on the cube?
What would be the main difference between a report built on the cube and one built on the ODS?
Please help me.
Thanks in advance
Sridath
Hi,
Try this to improve the performance of the query:
Find the query run-time. Where to find the query run-time:
Note 557870 - 'FAQ BW Query Performance'
Note 130696 - Performance trace in BW
This info may be helpful.
General tips:
Use aggregates and compression.
Use fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid many characteristics in the rows.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table RSDDSTATS to get the statistics.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end vs. records selected in the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
3. To check the performance of the aggregates, see the columns Valuation and Usage for the aggregates.
Open the aggregates and observe the VALUATION and USAGE columns.
The valuation is shown as plus or minus signs (e.g. -3 is a valuation of the aggregate's design and usage). The more plus signs, the more useful the aggregate and the more queries it satisfies: '+++++' means the aggregate is potentially very useful, with a good compression ratio and frequent access (in effect, good performance). The greater the number of minus signs, the worse the evaluation: '-----' means the aggregate is just overhead and can potentially be deleted.
In the Usage column, you can see how often the aggregate has been used by queries.
Thus you can check the performance of the aggregates.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
Implement the BW Statistics Business Content: you need to install it, feed it data, and use the ready-made reports for analysis.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-code DB20, which gives you all the performance-related information, like:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using the aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics and execute; when you come back you will get a STATUID. Copy this and check it in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM