ODS performance issue.
We have an ODS which has been facing severe performance issues. The cause is mostly the high number of writes during data capture from the mainframe, the volume of data, and the huge increase in the number of people/applications querying the database for operational planning, statistical analysis, etc. One thing I noticed about this database is that there is a huge number of unused fields in each table. Some of the fields are not used at all; sometimes the data capture operation writes 30 or 40 whitespace characters to a field (no idea why). My question is: will dropping these unnecessary fields improve performance?
Edited by: Crusoe on Sep 18, 2012 2:00 PM
If we tell them to drop those 15 unused fields will the database performance improve significantly ?

Why guess?
How can we know?
Further root cause analysis is required.
If you have system-wide performance problems, use a system-wide report like AWR or Statspack to investigate what the biggest issues are.
See http://jonathanlewis.wordpress.com/statspack-examples/ for advice on interpreting these reports.
For example, if I have a single table with 20 fields and only 5 of them hold any meaningful data, the remaining 15 are just garbage. Let's say 10 hold null values, the remaining 5 contain whitespace of differing lengths, and there are about 70 write transactions per second. If we tell them to drop those 15 unused fields, will the database performance improve significantly?

Who knows?
Will it have some effect? Probably.
Is it relevant to your issues? Who knows.
Are you even likely to notice the effect? I would GUESS not.
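To see why the effect is often smaller than hoped, here is a back-of-envelope sketch in Python. Every byte count below is an assumption for illustration (how Oracle actually stores NULLs and padded columns depends on column order and datatypes), not a measurement from any real system; only tracing and AWR can answer the question for your database.

```python
# Back-of-envelope sketch: how much row width and write volume might
# dropping unused columns save? All sizes are illustrative assumptions.

def row_bytes(columns):
    """Sum the assumed stored size of each column, in bytes."""
    return sum(size for _name, size in columns)

# Hypothetical 20-column table: 5 meaningful columns, 10 NULLs (trailing
# NULLs can cost ~0 bytes in many row formats), and 5 whitespace-padded
# columns of ~5 bytes each, as in the example above.
meaningful = [(f"col{i}", 20) for i in range(5)]     # assume ~20 bytes each
nulls      = [(f"unused{i}", 0) for i in range(10)]  # assume ~0 bytes each
padded     = [(f"blank{i}", 5) for i in range(5)]    # 5 whitespace chars

before = row_bytes(meaningful + nulls + padded)
after  = row_bytes(meaningful)
writes_per_sec = 70

saved_per_row = before - after
saved_per_day = saved_per_row * writes_per_sec * 86400

print(f"row shrinks from {before} to {after} bytes: "
      f"{saved_per_row} bytes/row, ~{saved_per_day / 2**20:.1f} MiB/day "
      f"less written at {writes_per_sec} rows/sec")
```

Under these made-up numbers the table shrinks by about 20%, but the absolute write savings per day is modest; whether that is even visible depends on what the system is actually waiting on, which is the point of the answer above.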
The reason is mostly due to high number of writes during data capture from the mainframe, volume of data and the huge increase in the number of people/applications querying the database for operational planning, statistical analysis etc

Zoom in on these areas individually.
Trace these parts of the application.
What are these areas of the application waiting on?
Are there common themes to the issues in these specific areas?
Is it inadequate resources? Increasing hardware may not be relevant to your issue.
In fact, just upgrading hardware can make things worse.
Is it specific contention?
Is it bad design leading to inefficient ways to access the required data?
Do you use partitioning?
Have you read "Scaling to Infinity" by Tim Gorman? http://www.evdbt.com/papers.htm
Is partitioning even relevant to your issues?
Who knows?
Not us. Yet.
Oracle is very well instrumented.
There should be no need to guess and throw money at hardware before you know what the problem is.
Similar Messages
-
Performance issues with 0CO_OM_WBS_1
We use BW 3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1. We always have to do a full load involving approx. 15M records, even though there are on average only 100k new records since the previous load. This takes a long time.
Is there a way to delta-enable this datasource?

Hi,
This DS is not delta-enabled and you can only do a full load. For a delta-enabled one, you need to use 0CO_OM_WBS_6. This works like other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
What you could do is use WBS_6 as a delta and extract full loads for WBS_1 only for shorter durations.
As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are open. This will reduce the data load.
You may also look at creating your own generic DataSource with delta, if you are clear on the tables and logic used.
cheers... -
While creating Billing, system is very slow..performance issue
Hi,
While creating billing, the system is very slow. How can I debug this and analyse where exactly the problem is?
It is showing up as a performance issue.
Waiting for your kind response.
Best Regards,
Padhy
Moderator Message : Duplicate post locked.
Edited by: Vinod Kumar on May 12, 2011 10:59 AM

Hi,
Check these links:
http://help.sap.com/saphelp_nw04/helpdata/en/4a/e71f39488fee0ce10000000a114084/content.htm
Re: How to create Secondary Index?
How many secondary indices can I create on the ODS?
Deletion of ODS index
Ramesh -
QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES
What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards
Guru

BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
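The "start around 50,000-100,000 and grow until gains flatten" procedure in tip 5 can be sketched as a simple search loop. Everything here is illustrative: `measure_throughput` is a hypothetical stand-in for timing a real extraction run with a given packet size, and its saturation curve is invented.

```python
# Sketch of the packet-size tuning loop described above: grow the packet
# size until the marginal throughput gain drops below a threshold.

def measure_throughput(packet_size):
    # Hypothetical model of a timed extraction run: throughput rises with
    # packet size but saturates (diminishing returns).
    return 1000 * packet_size / (packet_size + 80000)

def find_packet_size(start=50000, step=50000, min_gain=0.05, limit=500000):
    """Grow the packet size until the relative gain falls below min_gain."""
    size = start
    best = measure_throughput(size)
    while size + step <= limit:
        nxt = measure_throughput(size + step)
        if (nxt - best) / best < min_gain:   # gains have flattened; stop
            break
        size, best = size + step, nxt
    return size

print(find_packet_size())
```

In a real system the "measurement" would be an actual timed load, and the chosen value would then be entered once via SBIW (or RSCUSTV6 for flat files), as the tip describes.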
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
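The idea in tip 8 — split one big load into selection-disjoint portions and run a limited number of them in parallel — can be sketched generically. The fiscal-period values and the `load` function below are hypothetical stand-ins for scheduling one InfoPackage per selection; the worker cap mirrors the limited number of background processes on the application server.

```python
# Sketch: split a load by fiscal period and run portions in parallel,
# capped at 4 concurrent workers. Values are illustrative.
from concurrent.futures import ThreadPoolExecutor

periods = [f"2011{m:03d}" for m in range(1, 13)]   # "2011001".."2011012"

def load(period):
    # Hypothetical stand-in for one InfoPackage restricted to this period.
    return (period, "ok")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load, periods))

print(len(results), results[0])
```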
9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
10)Analyze upload times to the PSA and identify long-running uploads. When you extract data using the PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it is faster to insert data into smaller database tables. Partitioning also improves the performance of PSA table maintenance; for example, you can delete a portion of data faster. You can set the size of each partition on the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
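The advice in point 11 — replace per-record single selects with one buffered/array read — can be illustrated outside ABAP. This Python sketch just counts round trips against a dict standing in for a database table; the names and sizes are invented for illustration.

```python
# Illustration of buffered/array reads vs per-record single selects.
# The "database" is a plain dict; db_calls counts round trips to it.

db = {i: f"value{i}" for i in range(1000)}
db_calls = 0

def single_select(key):
    global db_calls
    db_calls += 1          # one round trip per record
    return db.get(key)

def array_select(keys):
    global db_calls
    db_calls += 1          # one round trip for the whole key set
    return {k: db.get(k) for k in keys}

records = list(range(500))

# Per-record lookups: 500 round trips.
slow = [single_select(k) for k in records]
calls_slow = db_calls

# Buffered lookup: read once into memory, then resolve from the buffer.
db_calls = 0
buffer = array_select(records)
fast = [buffer[k] for k in records]
calls_fast = db_calls

print(calls_slow, calls_fast)
```

Same results either way, but 500 round trips collapse to 1, which is exactly why the routine-level single selects hurt so much.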
12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it Helps
Chetan
@CP.. -
Performance Issue Executing a BEx Query in Crystal Report E 4.0
Dear Forum
I'm working for a customer with a big performance issue executing a BEx query in Crystal via a transient universe.
When query is executed directly against BW via RSRT query returns results in under 2 seconds.
When executed in Crystal, even without the use of subreports, multiple executions (calls to BICS_GET_RESULTS) are seen. Runtimes are as long as 60 seconds.
The Bex query is based on a multiprovider without ODS.
The RFC trace shows BICS connection problems; in particular, BICS_PROV_GET_INITIAL_STATE takes a lot of time.
I checked note 1399816 (Task name - prefix - RSDRP_EXECUTE_AT_QUERY_DISP), and it's not applicable because the customer has BI 7.01 SP 8 and already has:
- the domain RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters', data type CHAR, no. of characters 32, decimal digits 0;
- the data element RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters' and the previously created domain;
as described in the note.
Could you suggest me something to check, please?
Thanks in advance
Regards
Rosa

Hi,
It would be great if you would quote the ADAPT and tell the audience when it is targeted for a fix.
Generally speaking, CR for Enterprise isn't as performant as WebI because uptake was rather slow, so I'm of the opinion that there are improvements to be gained. So please work with Support via OSS.
My only recommendations are:
- Patch up to Patch 2.12 on BI 4.0.
- Define more default values on the BEx query variables.
- Implement note 1593802 (Performance optimization when loading query views) in BW.
Regards,
H -
Performance issue on Fiscal period.
HI all,
I have a multiprovider built on an InfoSet. The InfoSet is built on 3 standard ODSes (0FIGL_O02, 0PUR_O01, 0PUR_DS03). The user is running the report by company code and fiscal period.
Company code and fiscal period are available only for the FI-GL ODS. The purchasing ODSes have only the fiscal year variant time characteristic. When I try to run the report, it takes an unusually long time.
How should I resolve this performance issue?
1) Will getting fiscal period into the purchasing ODSes help improve the performance? If so, can anyone please give me a step-by-step process, as this is very urgent.
2) Or should I take some other approach to improve the performance? The FI-GL ODS already has secondary indexes on it.
Please advise.
Message was edited by:
sap novice

Duplicate post:
Performance issue on AFPO -
Oracle 11g Migration performance issue
Hello,
There is a performance issue with migration from Oracle 10g (10.2.0.5) to Oracle 11g (11.2.0.2).
A very simple statement hangs for more than a day, and we later found that the query plan is very, very bad. An example of the query is given below:
INSERT INTO TABLE_XYZ
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Looking at the cost in the explain plan:
on 10g --> 62567
on 11g --> 9879652356776
The strange thing is that:
Scenario 1: if I issue just the query as shown below, it displays rows immediately:
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Scenario 2: if I create a table as shown below, it works correctly.
CREATE TABLE TABLE_XYZ AS
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
What could be the issue here with INSERT INTO <TAB> SELECT <COL> FROM <TAB1>?

Table:
CREATE TABLE AVN_WRK_F_RENEWAL_TRANS_T (
"PKSRCSYSTEMID" NUMBER(4,0) NOT NULL ENABLE,
"PKCOMPANYCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKBRANCHCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKLINEOFBUSINESS" NUMBER(4,0) NOT NULL ENABLE,
"PKPRODUCINGOFFICELIST" VARCHAR2(2 CHAR) NOT NULL ENABLE,
"PKPRODUCINGOFFICE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKEXPIRYYR" NUMBER(4,0) NOT NULL ENABLE,
"PKEXPIRYMTH" NUMBER(2,0) NOT NULL ENABLE,
"CURRENTEXPIRYCOUNT" NUMBER,
"CURRENTRENEWEDCOUNT" NUMBER,
"PREVIOUSEXPIRYCOUNT" NUMBER,
"PREVIOUSRENEWEDCOUNT" NUMBER
) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE (
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT )
TABLESPACE "XYZ" ;
Explain plan (with INSERT statement and query):
INSERT STATEMENT, GOAL = ALL_ROWS Cost=9110025395866 Cardinality=78120 Bytes=11952360
LOAD TABLE CONVENTIONAL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS
NESTED LOOPS OUTER Cost=9110025395866 Cardinality=78120 Bytes=11952360
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=115 Cardinality=78120 Bytes=2499840
VIEW PUSHED PREDICATE Object owner=ODS Cost=116615788 Cardinality=1 Bytes=121
SORT GROUP BY Cost=116615788 Cardinality=3594 Bytes=406122
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=116615787 Cardinality=20168 Bytes=2278984
SORT GROUP BY Cost=116615787 Cardinality=20168 Bytes=4073936
NESTED LOOPS OUTER Cost=116614896 Cardinality=20168 Bytes=4073936
VIEW Object owner=SYS Cost=5722 Cardinality=20168 Bytes=2157976
NESTED LOOPS Cost=5722 Cardinality=20168 Bytes=2097472
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW PUSHED PREDICATE Object owner=ODS Cost=4 Cardinality=1 Bytes=20
FILTER
SORT AGGREGATE Cardinality=1 Bytes=21
TABLE ACCESS BY GLOBAL INDEX ROWID Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=4 Cardinality=1 Bytes=21
INDEX RANGE SCAN Object owner=ODS Object name=PK_AVN_F_TRANSACTIONS Cost=3 Cardinality=1
VIEW PUSHED PREDICATE Object owner=ODS Cost=5782 Cardinality=1 Bytes=95
SORT GROUP BY Cost=5782 Cardinality=2485 Bytes=216195
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=5781 Cardinality=2485 Bytes=216195
SORT GROUP BY Cost=5781 Cardinality=2485 Bytes=278320
HASH JOIN Cost=5780 Cardinality=2485 Bytes=278320
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=925 Cardinality=1199 Bytes=73139
SORT GROUP BY Cost=925 Cardinality=1199 Bytes=100716
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=4854 Cardinality=75507 Bytes=3850857
SORT GROUP BY Cost=4854 Cardinality=75507 Bytes=2340717
VIEW Object owner=ODS Cost=4207 Cardinality=75507 Bytes=2340717
SORT GROUP BY Cost=4207 Cardinality=75507 Bytes=1585647
PARTITION HASH ALL Cost=3713 Cardinality=75936 Bytes=1594656
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3713 Cardinality=75936 Bytes=1594656
Explain plan (query only):
SELECT STATEMENT, GOAL = ALL_ROWS Cost=62783 Cardinality=89964 Bytes=17632944
HASH JOIN OUTER Cost=62783 Cardinality=89964 Bytes=17632944
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=138 Cardinality=89964 Bytes=2878848
VIEW Object owner=ODS Cost=60556 Cardinality=227882 Bytes=37372648
HASH GROUP BY Cost=60556 Cardinality=227882 Bytes=26434312
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=54600 Cardinality=227882 Bytes=26434312
HASH GROUP BY Cost=54600 Cardinality=227882 Bytes=36005356
HASH JOIN OUTER Cost=46664 Cardinality=227882 Bytes=36005356
VIEW Object owner=SYS Cost=18270 Cardinality=227882 Bytes=16635386
HASH JOIN Cost=18270 Cardinality=227882 Bytes=32587126
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=13445038
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522
VIEW Object owner=ODS Cost=26427 Cardinality=227882 Bytes=19369970
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=18686324
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=26427 Cardinality=227882 Bytes=18686324
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=25294902
HASH JOIN Cost=20687 Cardinality=227882 Bytes=25294902
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=12826 Cardinality=34667 Bytes=2080020
HASH GROUP BY Cost=12826 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=7059 Cardinality=227882 Bytes=11621982
HASH GROUP BY Cost=7059 Cardinality=227882 Bytes=6836460
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=6836460
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522 -
Hi Gurus,
I'm working on performance tuning at the moment and want some tips regarding query performance tuning, if anyone can help me with that.
The thing is that I have got an idea about the system, and now the issues are with DB space, ABAP dumps, problems with free space in tablespaces, no number-range buffering, cubes using too many aggregates, large InfoCubes, large ODSes, and many others.
So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODSes? One more important issue is KPIs exceeding their reference values, so any idea how to deal with them?
Waiting for your valuable responses.
Thanks in advance
Regards
Amit

Hi Amit
For query performance issues you can go for:
Aggregates: they will help you a lot to make your query faster, because the query doesn't hit your cube; it hits the aggregates, which have far fewer records compared to your cube.
Secondly, I would suggest you use CKFs in place of formulas, if there are any in the query.
Another thing: avoid, to the extent possible, the use of navigational attributes. If you want to use them, use them at a minimal level. The reason I say so is that whenever there is a navigational attribute during query execution, it adds an unnecessary join to your master data and thus decreases query performance.
Be specific with rows and columns; if you are not sure of a key figure or a characteristic, better put it in the free characteristics.
Use filters if possible.
If you follow these, I'm sure your query performance will improve.
Assign points if applicable
Thanks
puneet -
Report Performance Issue - Activity
Hi gurus,
I'm developing an Activity report using Transactional database (Online real time object).
The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user ID).
In order to fulfil that requirement I've created 2 reports:
1) All Activities related to Contact -- Report A
pull in Acitivity ID , Activity Type, Status, Contact ID
2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
<Activity ID not equal to any Activity ID in Report B>
Has anyone encountered performance issues due to advanced filters in Analytics before?
Any input is really appreciated.
Thanks in advance,
Fina

Fina,
A union is always the last option. If you can get all records in one report, do not use a union.
Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
if contact id is null (or = 'Unspecified') then owner name else contact name
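That fallback is just a CASE-style expression; here is a sketch of it in Python, with hypothetical field values (the 'Unspecified' sentinel comes from the advice above):

```python
# Sketch of the suggested column logic: fall back to the owner name
# when the contact is missing or unspecified.

def display_name(contact_name, owner_name):
    if contact_name is None or contact_name == "Unspecified":
        return owner_name          # no contact: show the activity owner
    return contact_name            # otherwise the contact wins

print(display_name(None, "Alice"))        # falls back to the owner
print(display_name("Bob Corp", "Alice"))  # contact name is used
```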
Hopefully this helps. -
Report performance Issue in BI Answers
Hi All,
We have a performance issue with reports. A report runs for more than 10 minutes. We took the query from the session log and ran it in the database, and there it took no more than 2 minutes. We have verified that proper indexes exist on the WHERE-clause columns.
Could anyone suggest how to improve the performance in BI Answers?
Thanks in advance,

I hope you don't have many CASE statements and complex calculations in Answers.
The next thing you need to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, then it takes time to do the formatting in Answers, as you are going to dump huge volumes of data. A database (like Teradata) initially returns around 1-2000 records; if you hit "show all records", then even the DB is going to take a fair amount of time if you are dumping many records.
hope it helps
thanks
Prash -
BW BCS cube(0bcs_vc10 ) Report huge performance issue
Hi Masters,
I am working on a solution for a BW report developed on the 0BCS_VC10 virtual cube.
Some of the queries take 15 to 20 minutes or more to execute the report.
This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. Anyone who has faced a similar problem, please advise how you tackled it, and please describe the detailed analysis approach you used to resolve it.
The current service pack levels we are using are:
SAP_BW 350 0016 SAPKW35016
FINBASIS 300 0012 SAPK-30012INFINBASIS
BI_CONT 353 0008 SAPKIBIFP8
SEM-BW 400 0012 SAPKGS4012
Best of Luck
Chris
Ravi,
I already did that; it is not helping the performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
Regards,
Chris -
This is the question we will try to answer...
What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
I used PPBM5 as a benchmark testing template.
All the data and logs have been collected using performance counters.
First of all, let me describe my computer...
Operating System
Microsoft Windows 8 Pro 64-bit
CPU
Intel Xeon E5 2687W @ 3.10GHz
Sandy Bridge-EP/EX 32nm Technology
RAM
Corsair Dominator Platinum 64.0 GB DDR3
Motherboard
EVGA Corporation Classified SR-X
Graphics
PNY Nvidia Quadro 6000
EVGA Nvidia GTX 680 // Yes, I created bench stats for both card
Hard Drives
16.0GB Romex RAMDISK (RAID)
556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
I have other RAID installed, but not relevant for the present post...
PSU
Cosair 1000 Watts
After many days of tests, I want to share my results with the community and comment on them.
CPU Introduction
I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
Intro: I tested my Xeon E5-2687W (8 cores with Hyper-Threading - 16 threads) to know if programs can use all of it. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
The result: yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
Comment: I put the 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test...
(picture 1)
Disk Introduction
I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed) to know if I can reach the maximum % disk usage (0% idle time).
The result: as you can see in picture 2, yes, I can get the max out of my drives at ~1.2 GB/sec read/write, steady!
Comment: I put the 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
(picture 2)
Now I know my limits! It's time to dig deeper into the subject!
PPBM5 (H.264) Result
I rendered the sequence (H.264) using Adobe Media Encoder.
The result :
My CPU is not used at 100%; it hovers around 50%.
My disk is totally idle!
All processes are idle except the Adobe Media Encoder process.
The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...). // It's OK, ~5 MB/sec transfer rate!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
RAM: more than enough! 39 GB of RAM free after the test! // Excellent
~65 threads opened by Adobe Media Encoder (good; many threads are a sign that the program tries to use many cores!)
GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
GPU RAM usage is 1.2 GB (no problem with the GTX 680, and no problem with the Quadro 6000 and its 6 GB of RAM!)
Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
Other : Quadro 6000 & GTX 680 gives the same result !
(picture 3)
PPBM5 (Disk Test) Result (RAID LSI)
I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
The result :
My CPU is not used at 100%.
My disk waves up and down, but is far, far from its limit!
All processes are idle except the Adobe Media Encoder process.
The transfer rate waves up and down again, probably caused by (buffering... write... buffering... write...). // It's OK, ~375 MB/sec peak transfer rate! Easy!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
~48 threads opened by Adobe Media Encoder (good; many threads are a sign that the program tries to use many cores!)
GPU load on the card = 0 (this kind of encoding is GPU-irrelevant).
GPU RAM usage is 400 MB (no usage for encoding).
Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
(picture 4)
PPBM5 (Disk Test) Result (Direct in RAMDrive)
I rendered the same sequence (Disk Test) using Adobe Media Encoder directly on my RAM drive.
Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't even reach 30% disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%). // These kinds of results leave me REALLY confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
(picture 5)
PPBM5 (MPEG-DVD) Result
I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
The result :
My CPU is not used at 100%.
My disk is totally idle!
All processes are idle except the Adobe Media Encoder process.
The transfer rate waves up and down again, probably caused by (encode... write... encode... write...). // It's OK, ~2 MB/sec transfer rate! A real joke!
CPU power management gives 100% of the clock to the CPU during the encoding process (it's OK, the clock is stable during the process).
RAM: more than enough! 40 GB of RAM free after the test! // Excellent
~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
GPU load on the card = 100 (this uses the maximum of my GPU).
GPU RAM usage is 1 GB.
Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
Other : Quadro 6000 is slower than GTX 680 for this kind of encoding (~20 s slower than GTX).
(picture 6)
Encoding a single FULL HD AVCHD clip to H.264 (Premiere Pro CS6)
You can see the results in the picture.
Comment/Question: the CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why???? Adobe Premiere seems to have a bug in its thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the time going? My computer is willing, but the software is not!
(picture 7)
Rendering a composition with the 3D ray tracer in After Effects CS6
You can see the results in the picture.
Comment: the GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), the memory is free (60%), and it depends on the settings and the type of project.
Other: the Quadro 6000 and the GTX 680 take the same time to render the composition.
(picture 8)
Conclusion
There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give very similar results (I will probably return my GTX 680, since I don't really get any better performance from it). I have not used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried using both cards together (GTX and Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
As for Premiere Pro, I'm speechless! Premiere Pro is not able to get maximum performance out of my computer. Not just 10% or 20% short, but 60% on average. I'm a programmer; multithreaded apps are difficult to get right, and I can understand Adobe's programmers. But if anybody has a comment about this post, a trick, or any kind of solution, please comment. It seems to be a bug...
Thank you.
Patrick,
I can't explain everything, but let me give you some background as I understand it.
The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
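The effective write speed quoted above can be checked with quick arithmetic (figures from the paragraph above; treating the 33-second run as having written one third of the data follows from the "increased threefold" remark):

```python
# Effective write speed for the 3-hour direct-export run quoted above
data_mb = 37092        # MB written in the long run
seconds = 22
print(round(data_mb / seconds))       # 1686 MB/s, matching the quoted figure

# The standard-length test wrote one third of that data in 33 s
print(round((data_mb / 3) / 33))      # ~375 MB/s effective
```

The gap between 375 MB/s through the AME queue and 1,686 MB/s via direct export illustrates the overhead being discussed.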
There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
Just my $ 0.02 -
Performance Issue for BI system
Hello,
We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system I found that the program buffer swaps are increasing every day, hurting the buffer hit ratio, so I asked to change the parameter abap/buffersize from 300 MB to 500 MB. But still no major improvement is seen in the system.
There is 16 GB of RAM available; the server runs HP-UX, with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
The main problem is that running a report or creating a query takes far too long.
Kindly help me.
Hello Siva,
Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
We have 9 dialog processes, 3 background, 2 update, and 1 spool.
No one is using the system at the moment, but in ST02 I can see the swaps are in red.
Buffer              HitRatio %  Alloc. KB  Freesp. KB  % Free Sp.  Dir. Size  FreeDirEnt  % Free Dir    Swaps    DB Accs
Nametab (NTAB)
  Table definition     99,60       6.798                             20.000                             29.532    153.221
  Field definition     99,82      31.562        784        2,61      20.000      6.222       31,11      17.246     41.248
  Short NTAB           99,94       3.625      2.446       81,53       5.000      2.801       56,02           0      2.254
  Initial records      73,95       6.625        998       16,63       5.000        690       13,80      40.069     49.528
Program                97,66     300.000      1.074        0,38      75.000     67.177       89,57     219.665    725.703
CUA                    99,75       3.000        875       36,29       1.500      1.401       93,40      55.277      2.497
Screen                 99,80       4.297      1.365       33,35       2.000      1.811       90,55         119      3.214
Calendar              100,00         488        361       75,52         200         42       21,00           0        158
OTR                   100,00       4.096      3.313      100,00       2.000      2.000      100,00           0
Tables
  Generic key          99,17      29.297      1.450        5,23       5.000        350        7,00       2.219  3.085.633
  Single record        99,43      10.000      1.907       19,41         500        344       68,80          39    467.978
Export/import          82,75       4.096         43        1,30       2.000        662       33,10     137.208
Exp./Imp. SHM          89,83       4.096        438       13,22       2.000      1.482       74,10           0

SAP Memory        Curr. Use %  CurUse[KB]  MaxUse[KB]  In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio %
Roll area               2,22       5.832      22.856     131.072     131.072   IDs              96,61
Page area               1,08       2.832      24.144      65.536     196.608   Statement        79,00
Extended memory        22,90     958.464   1.929.216   4.186.112           0                     0,00
Heap memory                            0           0   1.473.767           0                     0,00

Call Stati     HitRatio %   ABAP/4 Req   ABAP Fails  DBTotCalls  AvTime[ms]  DBRowsAff.
Select single      88,59    63.073.369    5.817.659   4.322.263           0  57.255.710
Select             72,68   284.080.387            0  13.718.442           0  32.199.124
Insert              0,00       151.955        5.458     166.159           0     323.725
Update              0,00       378.161       97.884     395.814           0     486.880
Delete              0,00       389.398      332.619     415.562           0     244.495
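The Program buffer row above already tells the story: the buffer is almost completely full and swapping heavily. A quick sketch of the arithmetic (values taken from the ST02 output above, which uses European-style thousands separators):

```python
# Program buffer figures from the ST02 output above
alloc_kb = 300_000   # abap/buffersize before the change
free_kb = 1_074
swaps = 219_665

pct_free = 100 * free_kb / alloc_kb
print(f"{pct_free:.2f}% free, {swaps} swaps")
# Under 1% free space combined with ~220k swaps means the program buffer is
# effectively full; a 300 MB -> 500 MB increase may still be too small for
# the program load, which would explain why swaps stay red after the change.
```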
Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM -
RE: Case 59063: performance issues w/ C TLIB and Forte3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jmin@brio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open cursors are commonly used in today's RDBMSs. I
know that you must be testing your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering whether your Sybase expert commented on the fact that the server is
not responding to a commonly used command like 'open cursor'. According to
our developer, we are merely following the Sybase API, and an open cursor
is not something that slows a query down by several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements with an open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
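For readers unfamiliar with the pattern in question 1), here is a generic illustration using Python's DB-API, with sqlite3 standing in for Sybase; the table and query are invented for the example. The point is that a parameterized statement is prepared once and its results fetched through a cursor, which is typically cheap after the first execution:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "ACME"), (2, "ACME"), (3, "Initech")])

# Parameterized statement + cursor: the driver can prepare the plan once
# and reuse it, much like OPEN CURSOR on a prepared statement in Sybase.
cur = conn.cursor()
cur.execute("SELECT id FROM orders WHERE customer LIKE ?", ("ACME%",))
rows = cur.fetchall()
print(len(rows))  # 2
```

On its own, opening a cursor like this should not account for a multi-minute hang, which is James's point.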
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seem to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL statement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but it is not clear exactly which statement the
cursor is trying to open.
When we run the query directly against Sybase, not through Forte, it clearly
does not open any cursor.
And running it directly against Sybase many times, the response is always
consistently fast.
It is only when the query runs from Forte to Sybase that it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit every statement coming
to Sybase. Same thing: just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.min@sun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated; this would apply to all final material specs in the system, which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks
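For scale, the limits quoted above multiply out as follows (simple arithmetic, no assumptions beyond the quoted figures):

```python
cells_per_section = 1200   # cells per custom section (new count)
sections_per_spec = 30     # custom sections per spec
print(cells_per_section * sections_per_spec)  # 36000 cells per spec if fully populated
```

36,000 populated cells per spec, across ~25% of all material specs, is the load the UI would need to handle.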