Performance issue with large volume of data in report
Hi,
I have a report that processes a large amount of data, but it takes too long to build the final ALV table. Currently I'm using this logic:
select ...
select ... for all entries ...
loop at table into workarea.
  read table table2 with key key = workarea-key binary search.
  modify table.
  read table table2 with key key = workarea-key binary search.
  modify table.
endloop.
Currently I select all the data I need (only the necessary fields), run one big loop, and read the other tables to fill the fields of the final table.
Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM
Hi,
You can use field symbols instead of a work area.
If you use field symbols there is no need for the MODIFY statement.
Here are two equivalent pieces of code:
1) using work areas :
types: begin of lty_example,
         col1 type char1,
         col2 type char1,
         col3 type char1,
       end of lty_example.
data: lt_example  type standard table of lty_example,
      lwa_example type lty_example.
field-symbols: <lfs_example> type lty_example.
Suppose you have the following information in your internal table:
col1 col2 col3
1 1 1
1 2 2
2 3 4
Now you can change col2 using a work area and the MODIFY statement:
loop at lt_example into lwa_example.
lwa_example-col2 = '9'.
modify lt_example index sy-tabix from lwa_example transporting col2.
endloop.
or better using field-symbols:
loop at lt_example assigning <lfs_example>.
  <lfs_example>-col2 = '9'.
* here there is no need for a modify statement; the assignment changes the table line directly.
endloop.
The code using field symbols is about 10 times faster than using work areas and the MODIFY statement.
Similar Messages
-
Dealing with large volumes of data
Background:
I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
Studio to develop and run queries to "mine" several databases that have been created for their use. The database design (if you can call it that) is absolutely horrible. All of the data, which we receive at defined intervals from our
clients, is typically dumped into a single table consisting of 200+ varchar(x) fields. There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example, one table
contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which add another 300,000 rows of data).
Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data. I have been asked to "fix" the problems.
My experience is primarily in applications development. I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
with such a large volume of data. We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data. In the short term, I need to be able to perform a certain amount of data analysis so I can determine the proper data
types for the fields (and whether any existing data would cause a problem when converting to the new data types). So I need to know what can be done to make such analysis possible without the process consuming entire days to analyze
the data in one or two fields.
I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved. I'm also looking for information on how to load large volumes of data into the database (current processing of a typical
data file takes 10-12 hours to load 300,000 records). Any guidance that can be provided is appreciated. If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.
I don't think you will find a single magic bullet to solve all the issues. The main point is that there will be no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate
major schema changes.
I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table. Perhaps
some indexes targeted at improving that process would be a good first step.
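For illustration only, the kind of targeted index meant here might look like the following; the table and column names are hypothetical, since the real schema isn't shown in this thread:
-- Hypothetical: if the weekly load looks up existing rows by a client reference
-- before inserting, a narrow nonclustered index lets that lookup seek instead of
-- scanning the 650-million-row heap.
CREATE NONCLUSTERED INDEX IX_ClientFeed_ClientRef
    ON dbo.ClientFeed (ClientReferenceNumber)
    WITH (SORT_IN_TEMPDB = ON);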
What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly
typed columns for the columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
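As a rough sketch of that analysis (table and column names are hypothetical, and TRY_CONVERT requires SQL Server 2012 or later):
-- Count rows per column that will not convert cleanly to the guessed type.
SELECT COUNT(*) AS total_rows,
       SUM(CASE WHEN OrderQty IS NOT NULL
                 AND TRY_CONVERT(int, OrderQty) IS NULL THEN 1 ELSE 0 END) AS bad_orderqty,
       SUM(CASE WHEN OrderDate IS NOT NULL
                 AND TRY_CONVERT(date, OrderDate) IS NULL THEN 1 ELSE 0 END) AS bad_orderdate
FROM dbo.ClientFeed;
-- Then rebuild with strong types for the clean columns, keeping problem columns as varchar for now.
SELECT TRY_CONVERT(int, OrderQty)   AS OrderQty,
       TRY_CONVERT(date, OrderDate) AS OrderDate,
       CustomerName
INTO dbo.ClientFeed_Typed
FROM dbo.ClientFeed;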
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Processing large volumes of data in PL/SQL
I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
Requirement
Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
Restrictions
We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
We have no permission to use e.g. Oracle external tables, Oracle directories etc.
We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
DBAs are extremely resistant to giving us more disk space.
We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
Current approach
Source data is uploaded via SQL*Loader into static "short fat" tables.
Some initial key validation is performed on these records.
Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
Initially we did this via a SQL MERGE (sketched below), but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
This "load" process takes around 90 minutes for the same 15 million variable values.
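For reference, a minimal sketch of the MERGE variant mentioned above, with hypothetical table names based on the intermediate and target layouts described later in this thread:
MERGE INTO target_variable_values t
USING (SELECT case_num_id, variable_id, variable_value
         FROM intermediate_variable_values
        WHERE status = 'VALID') s
   ON (t.case_num_id = s.case_num_id AND t.variable_id = s.variable_id)
 WHEN MATCHED THEN
   UPDATE SET t.variable_value = s.variable_value
 WHEN NOT MATCHED THEN
   INSERT (case_num_id, variable_id, variable_value)
   VALUES (s.case_num_id, s.variable_id, s.variable_value);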
Questions
Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
Any suggestions as to a better approach, given the restrictions we are working under?
We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
So any advice would be gratefully received!
Thanks,
Chris
We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat as VARCHAR2 for the most part.
These tables consist essentially of a case key (a composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
CASE_ID VARCHAR2(13)
COL001  VARCHAR2(10)
...
COL250  VARCHAR2(10)
We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
The intermediate table looks similar to this:
CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
STATUS VARCHAR2(10) -- set during the pivot+validate process above
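Ignoring the dynamic generation, the column-by-column pivot+validate described above amounts to one INSERT ... SELECT per short-fat column, roughly like this sketch (table names and the validation function are hypothetical):
INSERT INTO intermediate_variable_values (case_num_id, variable_id, variable_value, status)
SELECT s.case_num_id,
       1001,                         -- VARIABLE_ID for COL001, looked up from the metadata definitions at runtime
       s.col001,
       validate_value(1001, s.col001) -- hypothetical PL/SQL function standing in for the per-variable validation
  FROM short_fat_1 s;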
The target table looks very similar, but holds cumulative data for many weeks etc:
CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
VARIABLE_VALUE VARCHAR2(10)
We only ever load valid data into the target table.
Chris -
Retrive SQL from Webi report and process it for large volume of data
We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is such that business users want to build their own 'Adhoc Extracts'. The only way I can think of to achieve this is: build a universe, create the query, save the report and do not run it, then write a RAS SDK program to retrieve the SQL code from the reports, save it into a .txt file and process it directly in Teradata.
Is there any predefined solution available with SAP BO or any other tool for this kind of scenario?
Hi Shawn,
Do we have a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
Any information or even direction where I can get information will be helpful.
Thanks in advance.
Ashesh -
Performance issue in correlation with hidden objects in reports
Hello,
is there a known performance issue in correlation with hidden objects in reports using conditional suppressions? (HFM version 11.1.2.1)
Using comprehensive reports, we see huge performance differences between the same reports with and without hidden objects. Furthermore, we suspect that some of the trouble with our reporting server environment stems from end users running these reports.
Every advice would be welcome!
Regards,
bsc
Edited by: 972676 on Nov 22, 2012 11:27 AM
If you said that working with EVDRE for each separate sheet is fine, that means the main problem is related to the interdependency of your custom VB macro.
I suggest adding a log (writing to a text file) for your macro; you will see that the minute is actually spent performing operations from the custom macro.
Kind Regards
Sorin Radulescu -
What is the best way to extract large volume of data from a BW InfoCube?
Hello experts,
Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx 70 million records) from an InfoCube. I've tried OpenHub and APD, but they are not working. I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
Thanks,
David
Hi David,
We had the same issue, but that was loading from an ODS to a cube. We have over 50 million records. I think there is no option like parallel loading using DTPs. As suggested earlier in the forum, the best option is to split according to the calendar year or fiscal year.
But remember, even with the above criteria, for some calendar years you might have a lot of data, and even that becomes a problem.
What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criteria like company code or sales organization.
Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
Regards
BN -
Performance issue with FDM when importing data
In the FDM Web console, a performance issue has been detected when importing data (.txt)
In less than 10 seconds the ".txt" and the ".log" files are created in the INBOX folder (the ".txt" file) and in OUTBOX\Logs (the ".log" file).
At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
It seems to be a performance issue when the system tries to show the imported data in the web page.
It has also been noted that when a user tries to import a txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
Thx in advance!
Cheers
Matteo
Hi Matteo,
How much data is being imported / displayed when users are interacting with the system?
There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
Hope this helps
Stuart -
Problem to update very large volume of data for 2LIS_04* extr.
Hi
I have a problem with jobs for the 2LIS_04* extractors using queued delta.
There is an interface between the R3 system and another production system, and 3 or 4 times a month a very large volume of data is sent to R3.
Then the job runs very long and does not pull data to RSA7.
How can we resolve this problem?
Our R3 system is PI_BASIS 2005_1_620.
Thanks
Adam
You can check these SAP Notes; they will help you:
How can downtime be reduced for setup table update
SAP Note Number: 753654
Performance improvement for filling the setup tables
SAP Note Number: 436393
LBWE: Performance for setup of extract structures
SAP Note Number: 437672 -
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using XML Schema based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for Collections ie. Varray's, Nested Tables, IOT's, LOB's, Inline, etc... and all these gave me same bad performance.
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well.;)
I would appreciate any ideas/suggestions, please help.....
Thanks;
Ramakrishna Chinta
Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Pivot - Performance Issue with large dataset
Hello,
Database version : Oracle 10.2.0.4 - Linux
I'm using a function to return a pivot query depending on an input "RUN_ID" value
For example, I consider two different "RUN_ID"s (e.g. 119 and 120) with exactly the same dataset.
I have a performance issue when I run the resulting query with "RUN_ID"=120.
Pivot:
SELECT MAX (a.plate_index), MAX (a.plate_name), MAX (a.int_well_id),
       MAX (a.row_index_alpha), MAX (a.column_index), MAX (a.is_valid),
       MAX (a.well_type_id), MAX (a.read_index), MAX (a.run_id),
       MAX (DECODE (a.value_type || a.value_index, 'CALC190', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC304050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC306050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC30050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC3011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC104050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC106050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC10050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC1011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC204050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC206050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC20050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC2011050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC80050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'CALC70050301', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'RAW0', a.this_value, NULL)),
       MAX (DECODE (a.value_type || a.value_index, 'RAW5030', a.this_value, NULL)),
       MAX (a.dose), MAX (a.unit), MAX (a.int_plate_id), MAX (a.run_name)
  FROM vw_well_data a
 WHERE a.run_id = :app_run_id
 GROUP BY a.int_well_id, a.read_index
Run the query :
SELECT Sql_FullText,(cpu_time/100000) "Cpu Time (s)",
(elapsed_time/1000000) "Elapsed time (s)",
fetches,buffer_gets,disk_reads,executions
FROM v$sqlarea
WHERE Parsing_Schema_Name ='SCHEMA';
With results :
SQL_FULLTEXT Cpu Time (s) Elapsed time (s) FETCHES BUFFER_GETS DISK_READS EXECUTIONS
query1 (RUN_ID=119) 22.15857 3.589822 1 2216 354 1
query2 (RUN_ID=120) 1885.16959 321.974332 3 7685410 368 3
Explain Plan for RUNID 119
PLAN_TABLE_OUTPUT
Plan hash value: 3979963427
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 261 | 98397 | 434 (2)| 00:00:06 |
| 1 | HASH GROUP BY | | 261 | 98397 | 434 (2)| 00:00:06 |
| 2 | VIEW | VW_WELL_DATA | 261 | 98397 | 433 (2)| 00:00:06 |
| 3 | UNION-ALL | | | | | |
|* 4 | HASH JOIN | | 252 | 21168 | 312 (2)| 00:00:04 |
| 5 | NESTED LOOPS | | 249 | 15687 | 112 (2)| 00:00:02 |
|* 6 | HASH JOIN | | 249 | 14442 | 112 (2)| 00:00:02 |
| 7 | TABLE ACCESS BY INDEX ROWID | PLATE | 29 | 464 | 2 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 29 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 13286 | 544K| 109 (1)| 00:00:02 |
| 10 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 12 | TABLE ACCESS BY INDEX ROWID| WELL | 13286 | 402K| 108 (1)| 00:00:02 |
|* 13 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 13286 | | 46 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | WELL_RAW_DATA | 26361 | 540K| 199 (2)| 00:00:03 |
|* 16 | INDEX RANGE SCAN | IDX_WELL_RAW_RUN_ID | 26361 | | 92 (2)| 00:00:02 |
| 17 | NESTED LOOPS | | 9 | 891 | 121 (2)| 00:00:02 |
|* 18 | HASH JOIN | | 9 | 846 | 121 (2)| 00:00:02 |
|* 19 | HASH JOIN | | 249 | 14442 | 112 (2)| 00:00:02 |
| 20 | TABLE ACCESS BY INDEX ROWID | PLATE | 29 | 464 | 2 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 29 | | 1 (0)| 00:00:01 |
| 22 | NESTED LOOPS | | 13286 | 544K| 109 (1)| 00:00:02 |
| 23 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 24 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 25 | TABLE ACCESS BY INDEX ROWID| WELL | 13286 | 402K| 108 (1)| 00:00:02 |
|* 26 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 13286 | | 46 (0)| 00:00:01 |
| 27 | TABLE ACCESS BY INDEX ROWID | WELL_CALC_DATA | 490 | 17640 | 9 (0)| 00:00:01 |
|* 28 | INDEX RANGE SCAN | IDX_WELL_CALC_RUN_ID | 490 | | 4 (0)| 00:00:01 |
|* 29 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("WELL_RAW_DATA"."RUN_ID"="WELL"."RUN_ID" AND
"WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
6 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
8 - access("PLATE"."RUN_ID"=119)
11 - access("RUN"."RUN_ID"=119)
13 - access("WELL"."RUN_ID"=119)
14 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
16 - access("WELL_RAW_DATA"."RUN_ID"=119)
18 - access("WELL"."RUN_ID"="WELL_CALC_DATA"."RUN_ID" AND
"WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
19 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
21 - access("PLATE"."RUN_ID"=119)
24 - access("RUN"."RUN_ID"=119)
26 - access("WELL"."RUN_ID"=119)
28 - access("WELL_CALC_DATA"."RUN_ID"=119)
29 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
Explain Plan for RUNID 120
PLAN_TABLE_OUTPUT
Plan hash value: 599334230
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 754 | 24 (5)| 00:00:01 |
| 1 | HASH GROUP BY | | 2 | 754 | 24 (5)| 00:00:01 |
| 2 | VIEW | VW_WELL_DATA | 2 | 754 | 23 (0)| 00:00:01 |
| 3 | UNION-ALL | | | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID | WELL_RAW_DATA | 1 | 21 | 3 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 84 | 9 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 63 | 6 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 58 | 6 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 27 | 3 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID| PLATE | 1 | 16 | 2 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 1 | | 1 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID | WELL | 1 | 31 | 3 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 59 | | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IDX_WELL_RAW_DATA_WELL_ID | 2 | | 2 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | WELL_CALC_DATA | 1 | 36 | 8 (0)| 00:00:01 |
| 18 | NESTED LOOPS | | 1 | 99 | 14 (0)| 00:00:01 |
| 19 | NESTED LOOPS | | 1 | 63 | 6 (0)| 00:00:01 |
| 20 | NESTED LOOPS | | 1 | 58 | 6 (0)| 00:00:01 |
| 21 | NESTED LOOPS | | 1 | 27 | 3 (0)| 00:00:01 |
| 22 | TABLE ACCESS BY INDEX ROWID| RUN | 1 | 11 | 1 (0)| 00:00:01 |
|* 23 | INDEX UNIQUE SCAN | PK_RUN | 1 | | 0 (0)| 00:00:01 |
| 24 | TABLE ACCESS BY INDEX ROWID| PLATE | 1 | 16 | 2 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_PLATE_RUN_ID | 1 | | 1 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | WELL | 1 | 31 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_WELL_RUN_ID | 59 | | 2 (0)| 00:00:01 |
|* 28 | INDEX UNIQUE SCAN | PK_WELL_TYPE | 1 | 5 | 0 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_WELL_CALC_RUN_ID | 486 | | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("WELL_RAW_DATA"."RUN_ID"=120)
10 - access("RUN"."RUN_ID"=120)
12 - access("PLATE"."RUN_ID"=120)
13 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
14 - access("WELL"."RUN_ID"=120)
15 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
16 - access("WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
17 - filter("WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
23 - access("RUN"."RUN_ID"=120)
25 - access("PLATE"."RUN_ID"=120)
26 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
27 - access("WELL"."RUN_ID"=120)
28 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
29 - access("WELL_CALC_DATA"."RUN_ID"=120)I need some advice to understand the issue and to improve the performance.
Thanks,
Grégory
Hello,
Thanks for your response.
Stats were computed recently with the DBMS_STATS package (case 2) and we have a histogram on the RUN_ID columns.
I tried to use the deprecated "analyze" method (case 1) and obtained better results!
DECLARE
   -- Get tables used in the view vw_well_data --
   CURSOR c1
   IS
      SELECT table_name, last_analyzed
        FROM user_tables
       WHERE table_name LIKE 'WELL%';
BEGIN
   FOR r1 IN c1
   LOOP
      -- Case 1 : Analyze method : Perf is good --
      EXECUTE IMMEDIATE 'analyze table '
                        || r1.table_name
                        || ' compute statistics ';
      -- Case 2 : DBMS_STATS --
      DBMS_STATS.gather_table_stats ('SCHEMA', r1.table_name);
   END LOOP;
END;
The explain plans are the same as before.
Any explanations, suggestions ?
Thanks,
Gregory -
Populate large volume of data in the quickest way.
In Oracle 9i, what is the best tool to populate a large amount of data in the minimum amount of time?
I heard that SQL*Loader direct path loading could work. Are there any other tools available? What I need to do is populate a large set of dummy data to stress test the performance of the database system.
Any suggestion will help!
Thank you very much.
Hi,
There are various options provided by Oracle, like external tables, SQL*Loader etc.
SQL*Loader is the fastest of the lot. You may refer to the docs for full help on SQL*Loader.
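If the goal is simply to generate dummy rows for a stress test, a set-based insert can also work. This is only a sketch with a hypothetical target table, using a direct-path (APPEND hint) insert and an existing dictionary view as a cheap row source:
INSERT /*+ APPEND */ INTO stress_test_orders (order_id, order_date, amount)
SELECT ROWNUM,
       SYSDATE - MOD(ROWNUM, 365),
       ROUND(DBMS_RANDOM.VALUE(1, 1000), 2)
  FROM all_objects
 WHERE ROWNUM <= 100000;
COMMIT;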
Regards. -
Performance Issue Executing a BEx Query in Crystal Report E 4.0
Dear Forum
I'm working for a customer with a big performance issue executing a BEx query in Crystal via a transient universe.
When the query is executed directly against BW via RSRT, it returns results in under 2 seconds.
When it is executed in Crystal, even without the use of subreports, multiple executions (calls to BICS_GET_RESULTS) are seen. Runtimes are as long as 60 seconds.
The BEx query is based on a MultiProvider without an ODS.
The RFC trace shows BICS connection problems, as BICS_PROV_GET_INITIAL_STATE takes a lot of time.
I checked note 1399816 (Task name - prefix - RSDRP_EXECUTE_AT_QUERY_DISP), and it's not applicable because the customer has BI 7.01 SP 8 and already has the
domain RSDR0_TASKNAME_LONG in package RSDRC with the
description: 'BW Data Manager: Task name - 32 characters', data
type: CHAR; No. Characters: 32, decimal digits: 0
data element RSDR0_TASKNAME_LONG in package RSDRC with the
description 'BW Data Manager: Task name - 32 characters' and the
previously created domain.
as described in the message.
Could you suggest something for me to check, please?
Thanks in advance
Regards
Rosa
Hi,
It would be great if you would quote the ADAPT and tell the audience when it is targeted for a fix.
Generally speaking, CR for Enterprise isn't as performant as WebI, because uptake was rather slow, so I'm of the opinion that there are improvements to be gained. So please work with Support via OSS.
My only recommendations are:
- Patch up to P2.12 in BI 4.0
- Define more default values on the BEx query variables.
- Implement this note in BW: 1593802 (Performance optimization when loading query views)
Regards,
H -
In OSB , xquery issue with large volume data
Hi ,
I am facing a problem with an XQuery transformation in OSB.
There is one XQuery transformation where I compare all the records and, if there are similar records, I club them under the same first node.
Here I am reading the input file from the FTP process. This works perfectly for small input data. When the input data is large it also works, but it takes a huge amount of time, the file moves to the error directory, and I see duplicate records created for the same input data. I am not seeing anything related to this file in the error log or the normal log.
How can I check what exactly is causing the issue here, why the file moves to the error directory, and why I get duplicate data for large input (approx 1 GB)?
My XQuery is something like the one below.
<InputParameters>
{
  for $choice in $inputParameters1/choice
  let $withSamePrimaryID := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
  let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
  return
    <choice>
    {
      if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
        let $claimID := $withSamePrimaryID[1]/ClaimID
        return <ClaimID>{ $claimID }</ClaimID>
      else
        <ClaimID>{ data($choice/ClaimID) }</ClaimID>
    }
    </choice>
}
</InputParameters>
Hi,
I understand your use case is:
a) read the file (from an FTP location, hopefully a .txt file)
b) process the file (your XQuery; I will not get into the details)
c) do something with the file (send it to a backend system via a Business Service?)
Also, I noted that large files take a long time to be processed. This depends on the memory/heap assigned to your JVM, so I can say that is expected behaviour.
On the other point, the file being moved to the error directory could be the error handler doing its job (if you have one).
If there are no error handlers, look at the timeout and error condition scenarios on your service.
HTH -
Report Performance Issue and Strange Execution Log Data
Today we have had a report suddenly start taking a long time to execute.
Looking at the Report Server executionLog3 table/view we have the following information for the query in question.
<Connection>
<ConnectionOpenTime>1</ConnectionOpenTime>
<DataSets>
<DataSet>
<Name>ReportDataset</Name>
<RowsRead>7</RowsRead>
<TotalTimeDataRetrieval>150013</TotalTimeDataRetrieval>
<ExecuteReaderTime>3</ExecuteReaderTime>
</DataSet>
</DataSets>
</Connection>
Supposedly the time taken to retrieve the data is around 150 seconds. However, running a profiler trace while running the report in SSRS shows the query executing in under 1 second.
Indeed running a profiler trace for anything on the server with a duration greater than 60 seconds isn't returning anything. I can only assume the above data is wrong when it says 150 seconds to retrieve the data. IT IS taking that long to run
the report though - so the question is - where is the time going?
Why can't I find a slow query on the server but SSRS thinks there is?
LucasF
EDIT: This was fixed by restarting the report server. Any ideas on why this might occur?
Hi Lucas,
According to your description, you find the <TotalTimeDataRetrieval> in ExecutionLog3 is larger than the profiler trace time.
In Reporting Services, to analyze the performance of the report, we usually check the TimeDataRetrieval to find the time we spend on retrieving the data. It’s the time needed for SQL Server to retrieve the data of all datasets in your report. So in your
scenario, please check if the TimeDataRetrieval is equal to the time in profiler trace.
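A quick way to pull those values for recent executions, assuming the default ReportServer catalog database name, is something like:
SELECT TOP (20)
       ItemPath,
       TimeStart,
       TimeDataRetrieval,   -- milliseconds spent retrieving data for all datasets
       TimeProcessing,
       TimeRendering,
       Status
  FROM ReportServer.dbo.ExecutionLog3
 ORDER BY TimeStart DESC;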
Reference:
More tips to improve performance of SSRS reports
If you have any question, please feel free to ask.
Best regards,
Qiuyun Yu
Qiuyun Yu
TechNet Community Support -
CDP Performance Issue-- Taking more time fetch data
Hi,
I'm working on Stellent 7.5.1.
For one of the portlets in the portal it is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
           DataException, ServiceException, VistaTemplateException
{
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(VistaFolderConstants
        .ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE))
    {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // used to add the resultset to the binder.
        addResultSetToBinder(resultSetMap, true);
    }
    else
    {
        long startTime = System.currentTimeMillis();
        // isStandard is used to decide whether the call is for Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // used to add the resultset and the total no of content items to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true"))
        {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimebinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeresultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems = " +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID =" +
                collectionID);
        }
    }
}
Edited by: 838623 on Feb 22, 2011 1:43 AM
Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve the performance on MSEG:
1. Check for the proper notes in the Service Marketplace if you are working with the CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
A possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
       EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
  FROM MSEG
  INTO CORRESPONDING FIELDS OF TABLE ITAB
  WHERE WERKS EQ P_WERKS AND
        MBLNR IN S_MBLNR AND
        BWART EQ '105'.
* The original post compares against ITAB itself; assuming the intent is to drop specific material documents by MBLNR.
DELETE ITAB WHERE MBLNR EQ '5002361303'.
DELETE ITAB WHERE MBLNR EQ '5003501080'.
DELETE ITAB WHERE MBLNR EQ '5002996300'.
DELETE ITAB WHERE MBLNR EQ '5002996407'.
DELETE ITAB WHERE MBLNR EQ '5003587026'.
DELETE ITAB WHERE MBLNR EQ '5003587026'.
DELETE ITAB WHERE MBLNR EQ '5003493186'.
DELETE ITAB WHERE MBLNR EQ '5002720583'.
DELETE ITAB WHERE MBLNR EQ '5002928122'.
DELETE ITAB WHERE MBLNR EQ '5002628263'.
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM