GLPCA table taking too much time
Hi,
My GLPCA table has around 70 lakh (7 million) entries. When I cancel any document, it takes too much time. Is it possible to archive only the GLPCA table, or do I have to check the ABAP code as well? Please guide me on this.
Thank you
Hi,
Just try this option, as it has worked in the past:
A new index with the fields RBUKRS and AUFNR will improve the performance significantly. See Note 565582.
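As a rough illustration, this is the kind of database-level index the note describes (a sketch only; the index name is hypothetical, and in an SAP system the index should be defined in the ABAP Dictionary via SE11 rather than directly on the database):
-- Hypothetical secondary index on GLPCA; MANDT leads the field list,
-- as is conventional for SAP secondary indexes.
CREATE INDEX "GLPCA~Z01" ON GLPCA (MANDT, RBUKRS, AUFNR);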
Thank you,
Tilak
Similar Messages
-
Hello Friends,
The background: I am working as a conversion manager. We move the data from Oracle to SQL Server using SSMA, then apply the conversion logic, and then move the data to System Test, UAT and Production.
Scenario:
Moving 80 million records from the Conversion database to the System Test database (for just one transaction table) takes too long. Both databases are on the same server.
My questions are:
What is the best option?
If we use SSIS it is very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do any other processing).
Using my own script (a stored procedure) it takes only 1 hour 40 minutes. I would like to know whether there is a better way to speed this up, and why SSIS takes so long.
When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing them to the transaction log?
Thanks
Karthikeyan Jothi

http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
Processing hundreds of millions of records can be done in less than an hour.
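On the commit question: SSIS commits per batch according to the destination's 'Rows per batch' and 'Maximum insert commit size' settings, so it does not necessarily hold everything in one transaction. A minimal T-SQL sketch of doing the same by hand (the database, table and column names here are hypothetical), committing in fixed-size chunks so the transaction log never has to hold all 80 million rows at once:
DECLARE @batch INT = 500000, @rows INT = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;
    -- Copy the next chunk; the NOT EXISTS guard makes the loop restartable.
    INSERT INTO SystemTest.dbo.txn_target (id_txn, payload)
    SELECT TOP (@batch) s.id_txn, s.payload
    FROM Conversion.dbo.txn_source AS s
    WHERE NOT EXISTS (SELECT 1 FROM SystemTest.dbo.txn_target AS t
                      WHERE t.id_txn = s.id_txn);
    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;  -- each chunk is logged and committed separately
END;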
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
ACCTIT table taking too much time
Hi,
In SE16, for table ACCTIT, I entered the G/L account number and executed it in production. It takes too much time to show the result.
Thank you

Hi,
Here I am sending the details of the technical settings.
Name ACCTIT Transparent Table
Short text Compressed Data from FI/CO Document
Last Change SAP 10.02.2005
Status Active Saved
Data class APPL1 Transaction data, transparent tables
Size category 4 Data records expected: 24,000 to 89,000
Thank you -
Oracle table taking too much space
Hi
I am using Oracle 9i
I created one table using a script. The table is blank, but it takes around 1 GB of space.
When I create another table from that table, as:
create table t_name as select * from original_table;
the space taken by that table is very low (0.6 MB). Only the index is not created.
Why is my table taking so much space? How do I resolve it? What parameters should I check?

Hi Pavan,
I am trying to take a backup of an Oracle DB using an RMAN script with OSB (Oracle Secure Backup). I am facing the issue given below. I have created a storage selector and my devices are configured in OSB.
My RMAN script is:
RMAN> run {
2> allocate channel oem_sbt_backup type 'sbt_tape' format '%U';
3> backup as BACKUPSET current controlfile tag '11202008104814';
4> restore controlfile validate from tag '11202008104814';
5> release channel oem_sbt_backup;
6> }
The error message is given below:
allocated channel: oem_sbt_backup
channel oem_sbt_backup: sid=143 devtype=SBT_TAPE
channel oem_sbt_backup: Oracle Secure Backup
Starting backup at 20-NOV-08
channel oem_sbt_backup: starting full datafile backupset
channel oem_sbt_backup: specifying datafile(s) in backupset
including current control file in backupset
channel oem_sbt_backup: starting piece 1 at 20-NOV-08
released channel: oem_sbt_backup
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on oem_sbt_backup channel at 11/20/2008 22:50:05
ORA-19506: failed to create sequential file, name="07k075kr_1_1", parms=""
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: Error received from media manager layer, error text:
sbt__rpc_cat_query: Query for piece 07k075kr_1_1 failed.
(Oracle Secure Backup error: 'no preauth config found for OS user (OB tools) oracle').
Need your help.
Thanks in advance -
Select Data from BSIS table taking too long
Hi
I have to develop a report giving the details of Extended Withholding Tax (EWT) for a list of expense G/L accounts.
Each expense G/L is linked to another G/L, which is the EWT tax G/L account. This is maintained in a Z table.
I have written the following code, and it takes a lot of time to extract the data.
This gives me the G/L accounts I require:
SELECT * FROM ZSECCO_GL_EWT INTO CORRESPONDING FIELDS OF TABLE IT_GL
WHERE BUKRS = P_BUKRS
AND WT_QSCOD = P_QSCOD.
Then I select only the distinct document numbers from the BSIS table for the HKONT values in the above internal table:
SELECT DISTINCT BUKRS GJAHR BELNR FROM BSIS INTO CORRESPONDING FIELDS OF
TABLE IT_BSIS_GL
FOR ALL ENTRIES IN IT_GL
WHERE BUKRS = P_BUKRS AND HKONT = IT_GL-HKONT AND GJAHR = P_GJAHR.
Here I once again select the document details, based on the document numbers from the above internal table. This query takes a lot of time:
SELECT * FROM BSIS INTO CORRESPONDING FIELDS OF TABLE IT_BSIS
FOR ALL ENTRIES IN IT_BSIS_GL
WHERE BUKRS = P_BUKRS AND GJAHR = P_GJAHR
AND BELNR = IT_BSIS_GL-BELNR.
Please help.

Hi,
Check Note 992803; it could be that an index for the BSIS table is insufficient or missing.
Regards,
Eli -
Truncate table taking too much time
hi guys,
Thanks in advance.
Oracle version: 9.2.0.5.0
OS version: SunOS Ganesha1 5.9 Generic_122300-05 sun4u sparc SUNW,Sun-Fire-V890
Application: PeopleSoft, version 8.4
Everything was running fine until last week.
Whenever a process such as billing or d_dairy is executed, it selects some temporary tables and starts truncating them; the truncate takes 5 to 8 minutes even when a table has 0 rows.
If more than one user executes a process (even different processes), they end up locked.
Regards,
deep -
Hi all
I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables takes too much time to import.
Description:
1. The export was taken from the source database, which is Oracle 8i with character set WE8ISO8859P1.
2. I am importing into 10g with character set UTF8; the national character set is the same.
3. The dump file is about 1.5 GB.
4. I got errors like 'value too large for column', so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While importing, some tables import very fast, but one particular table gets very slow.
Please help me. Thanks in advance.

Hello,
Regarding point 4 (the 'value is too large for column' error): it is typically due to the character set conversion.
You export data in WE8ISO8859P1 and import in UTF8. In WE8ISO8859P1, characters are encoded in 1 byte, so 1 CHAR = 1 BYTE. In UTF8 (Unicode), characters are encoded in up to 4 bytes, so 1 CHAR > 1 BYTE.
For this reason you'll have to modify the length of your CHAR or VARCHAR2 columns, or add the CHAR option (the default is BYTE) in the column datatype definition of the tables, for instance VARCHAR2(100 CHAR). The NLS_LENGTH_SEMANTICS parameter may also be used, but it is not very well handled by export/import.
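A minimal sketch of the difference (the table names are hypothetical):
-- With BYTE semantics (the default), this column can overflow in UTF8,
-- because one character may need up to 4 bytes:
CREATE TABLE demo_byte (name VARCHAR2(100 BYTE));
-- With CHAR semantics the column always holds 100 characters, regardless
-- of how many bytes each character needs:
CREATE TABLE demo_char (name VARCHAR2(100 CHAR));
-- Session-level alternative before running the table-creation scripts:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;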
So, I suggest the following:
1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
2. Create all your tables (empty) from a script on the target database, without the indexes and constraints.
3. Import the data into the tables.
4. Import the indexes and constraints.
You'll find more information in the following note on MOS:
Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
Regarding point 5, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
Also, a question: why did you choose UTF8 for your target database and not AL32UTF8? AL32UTF8 is the recommended character set for Unicode.
Hope this helps.
Best regards,
Jean-Valentin -
Importing a table with a BLOB column is taking too long
I am importing a user schema from a 9i (9.2.0.6) database into a 10g (10.2.1.0) database. One of the large tables (millions of records), which has a BLOB column, is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
1. Set buffer to 500 MB
2. Pre-created the table and turned off logging
3. Set indexes=N
4. Set constraints=N
5. I have 10 online redo logs of 200 MB each
6. Even turned off logging at the database level with _disable_logging = true
It is still taking too long to load the table with the BLOB column. The BLOB column contains PDF files.
For your info:
Computer: Sun V490 with 16 CPUs, Solaris 10
Memory: 10 GB
SGA: 4 GB

Legatti,
I have feedback=10000. However, from monitoring the import, I know that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
Thanks for your reply. -
Delta Sync taking too much time on refreshing of tables
Hi,
I am working on Smart Service Manager 3.0. I have come across a scenario where the delta sync takes too much time.
The requirement is that if we update the stock quantity, the stock should be updated instantaneously.
To achieve this we have to refresh 4 stock tables at every sync so that the updated quantity is reflected on the device.
This takes a lot of time (3 to 4 minutes), which is highly unacceptable from the user's perspective.
Could anyone please suggest something so that only the tables on which an action was carried out get refreshed?
For example, the CTStock table should be refreshed only if I transfer stock, and not in any other scenario, such as changing the status from accept to driving or anything else unrelated to stocks.
Thanks,
Star
Hi fiontan,
Thanks a lot for the response!
Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
I am in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method. Does using the print() method save any time?
The place where the delay occurs is the while loop shown below:
while (allitems.hasMoreElements()) {
    String aRow = "";
    XDItem item = (XDItem) allitems.nextElement();
    // Build one tab-separated row from the item's property values.
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i == 0)
            aRow = value;
        else
            aRow += ("\t" + value);
    }
    startTime1 = System.currentTimeMillis();
    System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
    bufferWrt.write(aRow.toCharArray());
    out.flush();       // added by rosmon to check extra time taken for extraction
    bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.newLine();
    startTime2 = System.currentTimeMillis();
    System.out.println("time here is--after-writing to buffer : " + startTime2);
}
What happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that until the records are done.
Please let me know if you have any idea why this is happening! This bug is giving me a scare.
thanks in advance -
Data Archive Script is taking too long to delete a large table
Hi All,
We have data archive scripts that move data for a date range to a different table. Each script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and goes into a full table scan, yet the predicate itself is the primary key. Please help; more info below.
CREATE TABLE "APP"."MON_TXNS"
( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
"BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_PAYER" NUMBER(12,0),
"ID_PAYER_PI" NUMBER(12,0),
"ID_PAYEE" NUMBER(12,0),
"ID_PAYEE_PI" NUMBER(12,0),
"ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
"STR_TEXT" VARCHAR2(60 CHAR),
"DAT_MERCHANT_TIMESTAMP" DATE,
"STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
"DAT_EXPIRATION" DATE,
"DAT_CREATION" DATE,
"STR_USER_CREATION" VARCHAR2(30 CHAR),
"DAT_LAST_UPDATE" DATE,
"STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
"STR_OTP" CHAR(6 BYTE),
"ID_AUTH_METHOD_PAYER" NUMBER(1,0),
"AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
"BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
"ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
"ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ENABLE,
CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_DATA" ;
CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "LARGE_INDEX" ;
Data is first moved to a table in schema3.OTW, and then we delete all the copied rows in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
2 delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2798378986
----------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
----------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
Please help.
Thanks,
Banka Ravi

'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
The other solution is to stop waiting so long to delete data, so you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
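A rough sketch of the partitioning approach (the partition names and date ranges are hypothetical, and the real MON_TXNS has many more columns), where archiving becomes a metadata operation instead of a large DELETE:
CREATE TABLE mon_txns_part (
    id_txn       NUMBER(12) NOT NULL,
    dat_creation DATE       NOT NULL
    -- ... remaining columns as in MON_TXNS ...
)
PARTITION BY RANGE (dat_creation) (
    PARTITION p2012_q1 VALUES LESS THAN (DATE '2012-04-01'),
    PARTITION p2012_q2 VALUES LESS THAN (DATE '2012-07-01'),
    PARTITION pmax     VALUES LESS THAN (MAXVALUE)
);
-- Dropping a partition of old data is near-instant compared to DELETE:
ALTER TABLE mon_txns_part DROP PARTITION p2012_q1 UPDATE GLOBAL INDEXES;
-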
Dear All,
Is there any other table where I can get the cost of each cost element, apart from the GLPCA table?
It is taking too much time to fetch the data from the GLPCA table.
Thanks
NSK

Dear Shashi,
Try FM PCA_ACTUALS_DETAIL,
or PCA_ACTUAL_DOCUMENT_SHOW,
or PCA_SEND_LINES_GLPCA.
Regards,
Amit
-
Code taking too much time to output
The following code is taking too much time to execute (sometimes giving a TIME_OUT):
ind = sy-tabix.
SELECT SINGLE * FROM mseg INTO mseg
  WHERE bwart = '102' AND
        lfbnr = itab-mblnr AND
        ebeln = itab-ebeln AND
        ebelp = itab-ebelp.
IF sy-subrc = 0.
  DELETE itab INDEX ind.
  CONTINUE.
ENDIF.
Is there any other way to write this code to reduce the runtime?
Thanks

Hi,
I think you are executing this code inside a loop, which is causing the problem. The rule is: never put SELECT statements inside a loop.
Try to rewrite the code as follows:
* Outside the loop (make sure itab is not empty before using FOR ALL ENTRIES)
SELECT * FROM mseg
  INTO TABLE lt_mseg
  FOR ALL ENTRIES IN itab
  WHERE bwart = '102' AND
        lfbnr = itab-mblnr AND
        ebeln = itab-ebeln AND
        ebelp = itab-ebelp.
Then, inside the loop, do a READ on the internal table:
LOOP AT itab.
  READ TABLE lt_mseg WITH KEY lfbnr = itab-mblnr
                              ebeln = itab-ebeln
                              ebelp = itab-ebelp
                     TRANSPORTING NO FIELDS.
  IF sy-subrc = 0. " a matching 102 document exists, so drop the row as before
    DELETE itab.   " index is automatically determined here from SY-TABIX
  ENDIF.
ENDLOOP.
This should optimise performance. You can check your code's performance using SE30 or ST05.
Hope this helps! Please revert if you need anything else.
Cheers,
Shailesh.
Always provide feedback for helpful answers! -
Accessing BKPF table takes too long
Hi,
Is there another way to write a faster, more optimized SQL query accessing table BKPF? Or are there smaller tables that contain the same data?
I'm using this:
select bukrs gjahr belnr budat blart
into corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and monat in so_monat.
The report takes too long and eats up a lot of resources.
Any helpful advice is highly appreciated. Thanks!

Hi max,
I also tried using BUDAT in the WHERE clause of my SQL statement, but even that takes too long.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat in so_budat.
I also tried accessing the table day by day, but that didn't work either:
while so_budat-low le so_budat-high.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat eq so_budat-low.
so_budat-low = so_budat-low + 1.
endwhile.
I think our BKPF table contains a very large set of data. Is there any other table besides BKPF from which we could get all accounting document numbers in a given period?
Report taking too much time in the portal
Hi friends,
We have developed a report on the ODS, and we have published it on the portal.
The problem is that when several users execute the report at the same time, it takes too much time, so the performance is very poor.
Is there any way to sort out this issue? For example, could we send the report to the individual users' mail IDs so that they do not have to log in to the portal? Or could we create the same report on the cube?
What would be the main difference between a report built on the cube and one built on the ODS?
Please help me.
Thanks in advance,
Sridath

Hi
Try this to improve the performance of the query.
Find the query runtime. Where to find it:
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips:
Use aggregates and compression.
Use fewer and less complex cell definitions where possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid too many characteristics in the rows.
Use T-codes ST03 or ST03N: go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table RSDDSTATS to get the statistics.
Using the cache will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use the different parameters in ST03 to see the two important figures: the aggregation ratio and the records transferred from DB to F/E.
2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try running the RSRV checks on the cube and its aggregates.
Go to SE38 and run the program SAP_INFOCUBE_DESIGNS.
It shows the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
3. To check the performance of the aggregates, open the aggregates and look at the VALUATION and USAGE columns.
The signs in VALUATION rate the aggregate design: the more plus signs, the more useful the aggregate (good compression ratio and frequent access, so good performance); the more minus signs, the worse the evaluation (poor compression ratio and little access). "-----" means the aggregate is just overhead and can potentially be deleted, while "+++++" means the aggregate is potentially very useful.
The USAGE column tells you how often the aggregate has been used by queries.
This is how you can check the performance of an aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
Implement the BW Statistics Business Content: install it, feed it data, and analyse using the ready-made reports it provides.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-code DB20, which gives you all the performance-related information, like:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them. See:
Note 202469 - Using the aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a check of the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with 'statistics execute', come back, get the STATUID, and look it up in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it is a useless aggregate.
6. Check table RSDDAGGRDIR in SE11. You can find the last call-up in the table.
Generate the report in RSRT:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
Client import taking too much time
Hi all,
I am importing a client. It has completed copying 19,803 of 19,803 tables, but for the last four hours its status has been 'Processing'.
scc3
Target Client 650
Copy Type Client Import Post-Proc
Profile SAP_CUST
Status Processing...
User SAP*
Start on 24.05.2009 / 15:08:03
Last Entry on 24.05.2009 / 15:36:25
Current Action: Post Processing
- Last Exit Program RGBCFL01
Transport Requests
- Client-Specific PRDKT00004
- Texts PRDKX00004
Statistics for this Run
- No. of Tables 19803 of 19803
- Deleted Lines 7
- Copied Lines 0
sm50
1 DIA 542 Running Yes SAPLTHFB 650 SAP*
7 BGD 4172 Running Yes 11479 RGTBGD23 650 SAP* Sequential Read D010INC
sm66
Server No. Type PID Status Reason Sem Start Error CPU Time User Report Action Table
prdsap_PRD_00 7 BTC 4172 Running Yes 11711 SAP* RGTBGD23 Sequential Read D010INC
Please guide me on why it is taking so much time even though it has finished most of the work.
Best regards,
Khan

The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a MANDT (= client) field which needs to be changed. Depending on the size of the client, this can take a huge amount of time.
You can try to improve the speed by updating the table statistics for table D010INC.
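If the system runs on Oracle, a minimal sketch of gathering those statistics (the schema owner SAPR3 is an assumption; on newer releases it may be SAPSR3, and the supported route is brconnect -f stats or transaction DB20):
BEGIN
   -- Hypothetical schema owner; adjust to your SAP schema (SAPR3 / SAPSR3).
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3',
                                 tabname => 'D010INC',
                                 cascade => TRUE); -- also refresh index stats
END;
/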
Markus