Tuning Data
Anyone, please help me... I have a problem.
In my database, I have 260,000,000 records.
Then, I execute this query :
Select name, gender, phone, address, bla bla bla from table_name
where name like '%sa%'
and gender = 'male'
and mothers_name like '%san%';
It needs about 1 minute 5 seconds...
Can anyone tell me how to tune it so I can execute it in under 1 minute?
thx
best regards,
Sandi
Dear Sandi,
SELECT NAME, gender, phone, address
FROM table_name
WHERE NAME LIKE '%sa%'
AND gender = 'male'
AND mothers_name LIKE '%san%';
You should always give more information about your query, your table, your indexes and your Oracle release so that others can give you a hand tuning your query.
Could you please post the explain plan of your query? The best thing you can do to post your explain plan is to execute your query as follows:
SELECT /*+ gather_plan_statistics */
NAME, gender, phone, address
FROM table_name
WHERE NAME LIKE '%sa%'
AND gender = 'male'
AND mothers_name LIKE '%san%';
and then, immediately after your query has completed, do this:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
and post here the formatted explain plan generated by the above select. Please use code tags when posting your explain plan.
Having said this, it might very well be that you need an index starting with the column gender.
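On that point: a leading-wildcard predicate like name LIKE '%sa%' can never use a plain B-tree index range scan, so an index leading on the equality column (gender) is usually the first thing to try. A small illustration of the principle, using SQLite here only because it is easy to run anywhere (the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (name TEXT, gender TEXT, phone TEXT)")
# Composite index leading on the equality column, as suggested above.
conn.execute("CREATE INDEX ix_gender_name ON persons (gender, name)")

# With gender = 'male' the planner can seek into the index and apply the
# leading-wildcard LIKE as a filter on the narrowed rows.
seek = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM persons "
    "WHERE gender = 'male' AND name LIKE '%sa%'").fetchone()[3]

# With only the leading-wildcard LIKE there is nothing to seek on:
# every row (or every index entry) has to be scanned.
scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM persons "
    "WHERE name LIKE '%sa%'").fetchone()[3]

print(seek)  # a SEARCH using ix_gender_name
print(scan)  # a full SCAN
```

The same reasoning applies to Oracle: the gender equality prunes the candidate rows, and the two LIKE predicates become filters.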
Mohamed Houri
Similar Messages
-
Where to find best practices for tuning data warehouse ETL queries?
Hi Everybody,
Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of accesses are table scans and index scans, whereas most index-tuning sites are striving for index seeks.)
I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
partition tables that are larger than 50 GB;
use minimal logging to load data precisely where you want it as fast as possible;
often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first);
rebuild statistics after every load of a table.
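For the bullet about disabling and rebuilding indexes around a large insert, the shape of the pattern is simple enough to sketch. The example below uses SQLite purely so it can be run anywhere (the table and index names are invented); on SQL Server the equivalent steps would be ALTER INDEX ... DISABLE, the bulk insert, ALTER INDEX ... REBUILD, and UPDATE STATISTICS:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (id INTEGER, amount REAL)")
conn.execute("CREATE INDEX ix_amount ON fact_sales (amount)")

# 1. Drop the secondary index before the bulk load so each insert does
#    not pay the per-row index-maintenance cost.
conn.execute("DROP INDEX ix_amount")

# 2. Bulk-load inside one transaction.
rows = [(i, float(i) * 1.5) for i in range(10_000)]
with conn:  # the connection context manager commits on success
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)

# 3. Rebuild the index once, then refresh optimizer statistics.
conn.execute("CREATE INDEX ix_amount ON fact_sales (amount)")
conn.execute("ANALYZE")

count = conn.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0]
print(count)  # 10000
```

One sorted rebuild of the index is generally far cheaper than ten thousand incremental insertions into it, which is why the practice shows up in most warehouse loading guides.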
But I still feel like I'm missing some very crucial concepts for performant ETL development.
BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
SQL" tasks.
Thanks, and any best practices you could include here would be greatly appreciated.
-Eric
Online ETL solutions are really among the most challenging to build efficiently. You can read my blogs on online DWH solutions to learn how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and also some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.:
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
Kindly let me know if any further help is needed
Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities -
TUNING DATA WAREHOUSE DATABASE INSTANCE
Hi,
I have to tune one of the DATA WAREHOUSE DATABASE INSTANCE.
Any advice for tuning this instance.
How different is tuning a data warehouse instance from a normal instance?
Regards
First of all, touch nothing until you understand what your users are doing with the data warehouse, when they are doing it and what their expectations are.
Secondly, remember that a data warehouse is, generally, much bigger than an OLTP database. This changes the laws of physics. Operations you might expect to take a few minutes might take days. This means you need to be completely certain about what you do in production before you do it.
Thirdly, bear in mind that a lot of data warehouse tuning techniques implement physical solutions, such as different types of indexes and partitioning, rather than query tweaking. These things are easier to get right at the start than to retrofit to large volumes of data.
Good luck, APC -
I have a DATA table which consists of minutely data. For every unique filename, solar_id, year and day there is a unique processing stage, and to find the highest processing stage we select the MAX(date_stamp) for that particular day, since it will be the latest touched stage.
So for any given day we can have 1440 minutes x 5 processing stages on average = ~7,200 records.
If I have a query that wants to look at the highest processing stage that exists each day over a time frame, how can I improve its performance?
SELECT DISTINCT s.NAME NAME, s.climate_id climate_id, a.solar_id solar_id,
a.processing_stage processing_stage, a.YEAR YEAR, a.DAY DAY,
TO_CHAR (a.date_stamp, 'YYYY-MM-DD HH24:MI:SS') date_stamp, a.user_id
FROM solar_station s, DATA a
WHERE a.solar_id = '636'
AND s.solar_id = '636'
AND s.climate_id = '611KBE0'
AND a.solar_id = s.solar_id
AND a.time_stamp BETWEEN TO_DATE ('2004-9-8', 'YYYY-MM-DD')
AND TO_DATE ('2004-9-20', 'YYYY-MM-DD')
AND (a.solar_id, a.filename, a.YEAR, a.DAY, a.date_stamp) IN (
SELECT b.solar_id, b.filename, b.YEAR, b.DAY, MAX (b.date_stamp)
FROM DATA b
WHERE b.solar_id = '636'
AND b.solar_id = s.solar_id
AND b.time_stamp BETWEEN TO_DATE ('2004-9-8', 'YYYY-MM-DD')
AND TO_DATE ('2004-9-20', 'YYYY-MM-DD')
GROUP BY b.solar_id, b.filename, b.YEAR, b.DAY)
ORDER BY s.NAME, a.YEAR, a.DAY;
The data table is partitioned by YEAR and sub-partitioned by solar_id.
Hmm, still a full table scan on DATA. This is probably caused by the unnecessary line
AND b.solar_id = s.solar_id
What happens if you remove this line?
Regards,
Rob.
SELECT DISTINCT s.NAME NAME, s.climate_id climate_id, a.solar_id solar_id,
a.processing_stage processing_stage, a.YEAR YEAR, a.DAY DAY,
TO_CHAR (a.date_stamp, 'YYYY-MM-DD HH24:MI:SS') date_stamp, a.user_id
FROM solar_station s, DATA a
WHERE a.solar_id = '636'
AND s.solar_id = '636'
AND s.climate_id = '611KBE0'
--AND a.solar_id = s.solar_id
and year between 2004 and 2004
AND a.time_stamp BETWEEN TO_DATE ('2004-9-8', 'YYYY-MM-DD')
AND TO_DATE ('2004-12-20', 'YYYY-MM-DD')
AND (a.solar_id, a.filename, a.YEAR, a.DAY, a.date_stamp) IN (
SELECT b.solar_id, b.filename, b.YEAR, b.DAY, MAX (b.date_stamp)
FROM DATA b
WHERE b.solar_id = '636'
--AND b.solar_id = s.solar_id
and year between 2004 and 2004
AND b.time_stamp BETWEEN TO_DATE ('2004-9-8', 'YYYY-MM-DD')
AND TO_DATE ('2004-12-20', 'YYYY-MM-DD')
GROUP BY b.solar_id, b.filename, b.YEAR, b.DAY)
ORDER BY s.NAME, a.YEAR, a.DAY;
PLAN_TABLE_OUTPUT
Plan hash value: 4121458178
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 156 | 19500 | 30539 (4)| 00:06:07 | | |
| 1 | SORT ORDER BY | | 156 | 19500 | 30539 (4)| 00:06:07 | | |
| 2 | HASH UNIQUE | | 156 | 19500 | 30538 (4)| 00:06:07 | | |
|* 3 | HASH JOIN | | 156 | 19500 | 30537 (4)| 00:06:07 | | |
| 4 | NESTED LOOPS | | 1401 | 95268 | 15279 (4)| 00:03:04 | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| SOLAR_STATION | 1 | 27 | 1 (0)| 00:00:01 | | |
|* 6 | INDEX UNIQUE SCAN | SOLAR_STATION_PK | 1 | | 0 (0)| 00:00:01 | | |
| 7 | VIEW | VW_NSO_1 | 1401 | 57441 | 15278 (4)| 00:03:04 | | |
| 8 | SORT GROUP BY | | 1401 | 82659 | 15278 (4)| 00:03:04 | | |
| 9 | PARTITION RANGE ITERATOR | | 272K| 15M| 15245 (3)| 00:03:03 | 10 | 11 |
| 10 | PARTITION HASH SINGLE | | 272K| 15M| 15245 (3)| 00:03:03 | 1 | 1 |
|* 11 | TABLE ACCESS FULL | DATA | 272K| 15M| 15245 (3)| 00:03:03 | | |
| 12 | PARTITION RANGE ITERATOR | | 272K| 14M| 15254 (4)| 00:03:04 | 10 | 11 |
| 13 | PARTITION HASH SINGLE | | 272K| 14M| 15254 (4)| 00:03:04 | 1 | 1 |
|* 14 | TABLE ACCESS FULL | DATA | 272K| 14M| 15254 (4)| 00:03:04 | | |
Predicate Information (identified by operation id):
3 - access("A"."SOLAR_ID"="$nso_col_1" AND "A"."FILENAME"="$nso_col_2" AND "A"."YEAR"="$nso_col_3" AND
"A"."DAY"="$nso_col_4" AND "A"."DATE_STAMP"="$nso_col_5")
5 - filter("S"."CLIMATE_ID"='611KBE0')
6 - access("S"."SOLAR_ID"='636')
11 - filter("B"."TIME_STAMP">=TO_DATE('2004-09-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND "YEAR"=2004 AND
"B"."SOLAR_ID"='636' AND "B"."TIME_STAMP"<=TO_DATE('2004-12-20 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
14 - filter("A"."TIME_STAMP">=TO_DATE('2004-09-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND "YEAR"=2004 AND
"A"."SOLAR_ID"='636' AND "A"."TIME_STAMP"<=TO_DATE('2004-12-20 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))>
Message was edited by:
Rob van Wijk
Additional questions:
What is the outcome of:
select count(*)
from ( SELECT s.NAME NAME
, s.climate_id climate_id
, a.solar_id solar_id
, a.processing_stage processing_stage
, a.YEAR YEAR
, a.DAY DAY
, TO_CHAR (a.date_stamp, 'YYYY-MM-DD HH24:MI:SS')
date_stamp
, a.user_id
, max(a.date_stamp) over (partition by a.solar_id,
a.file_name, a.year, a.day) max_date_stamp
FROM solar_station s
, DATA a
WHERE a.solar_id = '636'
AND s.solar_id = '636'
AND s.climate_id = '611KBE0'
AND a.solar_id = s.solar_id
AND year between 2004 and 2004
AND a.time_stamp BETWEEN TO_DATE ('2004-9-8',
'YYYY-MM-DD') AND TO_DATE ('2004-12-20',
'YYYY-MM-DD')
       )
Here we get:
COUNT(*)
1266111
1 row selected.
You might remove the subquery by issuing:
select distinct *
from ( SELECT s.NAME NAME
, s.climate_id climate_id
, a.solar_id solar_id
, a.processing_stage processing_stage
, a.YEAR YEAR
, a.DAY DAY
, TO_CHAR (a.date_stamp, 'YYYY-MM-DD HH24:MI:SS')
date_stamp
, a.user_id
, max(a.date_stamp) over (partition by a.solar_id,
a.file_name, a.year, a.day) max_date_stamp
FROM solar_station s
, DATA a
WHERE a.solar_id = '636'
AND s.solar_id = '636'
AND s.climate_id = '611KBE0'
AND a.solar_id = s.solar_id
AND year between 2004 and 2004
AND a.time_stamp BETWEEN TO_DATE ('2004-9-8',
'YYYY-MM-DD') AND TO_DATE ('2004-12-20',
'YYYY-MM-DD')
       )
where date_stamp = max_date_stamp
ORDER BY s.NAME
, a.YEAR
, a.DAY
select distinct *
from ( SELECT s.NAME NAME
, s.climate_id climate_id
, a.solar_id solar_id
, a.processing_stage processing_stage
, a.YEAR YEAR
, a.DAY DAY
, TO_CHAR (a.date_stamp, 'YYYY-MM-DD HH24:MI:SS') date_stamp
, a.user_id
, max(a.date_stamp) over (partition by a.solar_id, a.filename, a.year, a.day) max_date_stamp
FROM solar_station s
, DATA a
WHERE a.solar_id = '636'
AND s.solar_id = '636'
AND s.climate_id = '611KBE0'
AND a.solar_id = s.solar_id
AND year between 2004 and 2004
AND a.time_stamp BETWEEN TO_DATE ('2004-9-8', 'YYYY-MM-DD') AND TO_DATE ('2004-12-20', 'YYYY-MM-DD')
     )
where date_stamp = max_date_stamp
ORDER BY NAME
, YEAR
, DAY;
PLAN_TABLE_OUTPUT
Plan hash value: 3043498213
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 14702 | 990K| | 20550 (3)| 00:04:07 | | |
| 1 | SORT UNIQUE | | 14702 | 990K| 9608K| 19862 (3)| 00:03:59 | | |
|* 2 | VIEW | | 90794 | 6117K| | 18372 (3)| 00:03:41 | | |
| 3 | WINDOW SORT | | 90794 | 13M| 33M| 18372 (3)| 00:03:41 | | |
|* 4 | HASH JOIN | | 90794 | 13M| | 15251 (3)| 00:03:04 | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| SOLAR_STATION | 1 | 78 | | 1 (0)| 00:00:01 | | |
|* 6 | INDEX UNIQUE SCAN | SOLAR_STATION_PK | 1 | | | 0 (0)| 00:00:01 | | |
| 7 | PARTITION RANGE ITERATOR | | 272K| 20M| | 15245 (3)| 00:03:03 | 10 | 11 |
| 8 | PARTITION HASH SINGLE | | 272K| 20M| | 15245 (3)| 00:03:03 | 1 | 1 |
|* 9 | TABLE ACCESS FULL | DATA | 272K| 20M| | 15245 (3)| 00:03:03 | | |
Predicate Information (identified by operation id):
2 - filter("MAX_DATE_STAMP"=TO_TIMESTAMP("DATE_STAMP"))
4 - access("A"."SOLAR_ID"="S"."SOLAR_ID")
5 - filter("S"."CLIMATE_ID"='611KBE0')
6 - access("S"."SOLAR_ID"='636')
9 - filter("A"."TIME_STAMP">=TO_DATE('2004-09-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND "YEAR"=2004 AND
"A"."SOLAR_ID"='636' AND "A"."TIME_STAMP"<=TO_DATE('2004-12-20 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))The result was an error:
Error at line 0
ORA-01843: not a valid month -
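For what it's worth, the ORA-01843 above most likely comes from the outer filter comparing the TO_CHAR'd date_stamp string back against the raw max_date_stamp, forcing an implicit conversion with the session's NLS format (visible as the TO_TIMESTAMP in the plan's filter for line 2). Keeping the comparison on the raw column inside the view and formatting only in the final select avoids it. The same greatest-row-per-group pattern, sketched against made-up rows in SQLite (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (solar_id TEXT, filename TEXT, "
             "year INT, day INT, date_stamp TEXT, stage INT)")
conn.executemany(
    "INSERT INTO data VALUES (?, ?, ?, ?, ?, ?)",
    [("636", "f1", 2004, 252, "2004-09-08 10:00:00", 1),
     ("636", "f1", 2004, 252, "2004-09-08 11:00:00", 2),  # latest for day 252
     ("636", "f1", 2004, 253, "2004-09-09 09:00:00", 1)])

# Compare the raw date_stamp against the per-group maximum; format it
# for display only in the outermost select if needed.
rows = conn.execute("""
    SELECT solar_id, filename, year, day, date_stamp, stage
    FROM (SELECT d.*,
                 MAX(date_stamp) OVER
                     (PARTITION BY solar_id, filename, year, day) AS max_ds
          FROM data d)
    WHERE date_stamp = max_ds
    ORDER BY year, day""").fetchall()
for r in rows:
    print(r)
```

Only the latest row per (solar_id, filename, year, day) group survives, and no string-to-date conversion ever happens in the filter.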
Performance Tuning Data Load for ASO cube
Hi,
Anyone can help how to fine tune data load on ASO cube.
We have ASO cube which load around 110 million records from a total of 20 data files.
18 of the data files has 4 million records each and the last two has around 18 million records.
On average, to load 4 million records it took 130 seconds.
The data file has 157 data column representing period dimension.
With BSO cube, sorting the data file normally help. But with ASO, it does not seem to have
any impact. Any suggestion how to improve the data load performance for ASO cube?
Thanks,
Lian
Yes TimG, it sure looks identical - except for the last BSO reference.
Well nevermind as long as those that count remember where the words come from.
To the Original Poster and to 960127 (come on create a profile already will you?):
The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
The most recent x records that can fit in the ASO cache are still retained on the disk drive in the cache. So if the record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final dat file. Then the record would already still be on the disk.
BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped IO, so even if a record is not in the cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor); this will continue until the system runs out of "Free" memory (light blue in Resource Monitor).
So in conclusion if your system still has Free memory there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory then all you will do is slow down the other applications running on your system by increasing ASO Cache during a data load - so don't do it.
Finally, if you have enough memory so that the entire data file fits in StandBY + Free memory then don't bother to sort it first. But if you do not have enough then sort it.
Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
Finally, you did not say if you were using parallel load threads. If you need to have 20 files, read up on having parallel load buffers and parallel load scripts. That will make it faster.
But if you do not really need 20 files and just broke them up to load parallel then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck these will help even if you do go parallel and really help if you don't but still keep 20 separate files. -
How to Handle Data Loads in BW
Hi friends,
we are almost on the verge of completing the development, and are planning to go live very soon, but before that we need to make sure the number of records getting loaded (from R/3 to BW)are in sync. If the user says that there are so many records for the Area currently, how do we handle this to happen in BW?? when the number of data packets keeps increasing how do we update the same to our targets,
If there are any steps or procedures to handle the data loads, before and after go-live, can someone please let me know (data sizing, loading, and archiving)?
If the client tells that these many records are expected, how should we handle these situations??
Your input is really appreciated.
Thanks a lot in advance.
Hi,
Best way to ensure that is after your data has been loaded to BW, run a report and verify the results. They should match per the logic or any filter criteria with the R3 area numbers.
In most cases the number of records might not match, so comparing how many records were sent with how many got uploaded through the monitor will not be a good comparison.
For increasing data volumes go through the following thread on Performance tuning :
performance tuning
Data Reconciliation :
Reconciliation of a BW and R3 data
Re: Reg -- reconciliation process.
Cheers,
Kedar -
When will Hub automatically select fewer than 2500 segments for tuning and/or testing?
I have two systems (FR/IT) with 64K+ total segments uploaded.
When I specify "Auto" for tuning and testing, the system only extracts 500 (or some other number < 2500) tuning and testing segments from the TMX files that I uploaded. On visual inspection of the subsequently downloaded training, tuning,
and test corpora, I see nothing unusual: alignment is OK, repeated and partial segments are not very frequent, etc.
Similar content extracted in the same way from the same WorldServer db for other languages (PE, ES, NL, ID) works fine.
When will Hub automatically select fewer than 2500 segments for tuning and/or testing?
Thanks!
Hi Mike,
Here is how auto-selection of Testing and Tuning data works in the Hub.
1. If a segment in tuning data is identical to a segment in training data, the segment is removed from training data.
2. If a segment in tuning data is identical to a segment in testing data, the segment is removed from testing data.
3. If a segment in tuning data and testing data is identical to a segment in training data, the segment is removed from training and testing data.
4. If a segment in testing data is identical to a segment in training data, the segment is removed from training data.
The Hub auto-selects 2500 sentences and the data undergoes the adjustments above. As a result you will see fewer than 2500 sentences.
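A literal reading of the four rules above can be sketched in a few lines of Python, with short strings standing in for segments (the function name is invented for illustration). The net effect: training loses its overlaps with tuning and testing, and testing loses its overlaps with tuning.

```python
def adjust_corpora(training, tuning, testing):
    tun_set, tst_set = set(tuning), set(testing)
    # Rules 1, 3 and 4: training drops any segment that also appears
    # in the tuning data or in the testing data.
    training = [s for s in training if s not in tun_set and s not in tst_set]
    # Rules 2 and 3: testing drops any segment that also appears in tuning.
    testing = [s for s in testing if s not in tun_set]
    return training, tuning, testing

trn, tun, tst = adjust_corpora(
    ["a cat", "a dog", "a bird"],   # training
    ["a cat"],                      # tuning: overlaps training (rule 1)
    ["a dog", "a fish"])            # testing: overlaps training (rule 4)
print(trn, tun, tst)  # ['a bird'] ['a cat'] ['a dog', 'a fish']
```

So heavy overlap between the uploaded TMX files is exactly what shrinks the auto-selected sets below 2500.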
Additional details can be found in the Hub user guide: https://hub.microsofttranslator.com/Help/Download/Microsoft%20Translator%20Hub%20User%20Guide.pdf
Please let us know if you need any additional information.
Thank you -
Suggestions thread for ABAP FAQ sticky
See these threads for "online collections" in the ABAP Development forums =>
FAQ's, intros and memorable discussions in the ABAP General Forum
FAQ's, intros and memorable discussions in the ABAP Data Dictionary Forum
FAQ's, intros and memorable discussions in the ABAP UI Programming Forum
FAQ's, intros and memorable discussions in the ABAP Form Printing Forum
FAQ's, intros and memorable discussions in the Enhancements & Modifications
FAQ's, intros and memorable discussions in the Performance and Tuning Forum
FAQ's, intros and memorable discussions in the Data Transfers Forum
FAQ's, intros and memorable discussions in the ABAP Objects Forum
Edited by: Julius Bussche on Apr 21, 2009 5:22 PM
ABAP General
Subtotals in alv list. => Subtotals in ALV <= Added and moved to UI programming
Can we modify a sub-total in ALV => Subtotals text in ALV /Modification of Subtotals in ALV <= Moved down to new thread.
cl_salv_table - Save Layout => ALV OM Save layout option <= Moved down to new thread.
Report with page break. => ALV Report with page break. <= Moved down to new thread.
ALV Sorting Not Working after Adding Checkbox to ALV => ALV Sorting and Group Functionality working with Checkbox <= Moved down to new thread.
Calculating Total with Checkbox in ALV Grid => Calculating Total with Checkbox in ALV Grid <= Moved down to new thread.
DUMP WHILE SUMMATION IN ALV => General Fieldcatalog error. <= Moved down to new thread.
Problems with REUSE_ALV_FIELDCATALOG_MERGE =>Problems with REUSE_ALV_FIELDCATALOG_MERGE <= Moved down to new thread.
Must i_structure_name for LVC_FIELDCATALOG_MERGE be pre-defined in dict? => Converting SLIS fieldcatalog to LVC fieldcatalog <= Moved down to new thread.
Capture Checkbox Value in ALV Grid => Capturing the editable fields values in ALV <= Moved down to new thread.
how to refresh table display using slis and 'reuse_alv_grid_display method. => Refreshing the ALV Display <= Moved down to new thread.
fetch data in edittable field in alv using REUSE_ALV_GRID_DISPLAY_LVC FM => Reading the editable values for ALV LVC function <= Moved down to new thread.
Radibutton in ALV Report output => Radio buttons in ALV report <= Moved down to new thread.
Edit rows in ALV => Usage of Styles in ALV for disable/enable input option <= Moved down to new thread.
how to copy/delete file => executing Unix commands <= Hmmm... c-calling the 'system' does not work in my systems during normal operations. See system param 'rdisp/call_system' which can be disabled, then the code dumps. Is there a different thread with DATASET commands? SXPG?
express CR_LF => ABAP char Utilities CR_LF use , replace '##' in a application server <= Ahhh yes, I remember this series. This is one of the nicer threads from it... Added.
String Operations which contain the special character '#' in BDC Session => ABAP char Utilities Horizontal_tab use , replace '#' in application server file <= Sufficiently covered by the next thread.
Problem with statement "find all occurences of ..." => ABAP char utilites Newline, replace '#' in application server in end of line <= Added.
How to Send Email to Outlook? =>sending mails to outlook <= Added.
FM for uploading Image to SAP => upload image to mime repository <= Added.
Passing an Internal Table to a Report executed through 'Submit' => use of Import and export refer Rich Heilman's code <= Added.
Create a parameter to enter a password => Selection parameter password behaviour <= Added.
ABAP Dictionary
=>
Form Printing
Exporting Graphics from SAP =>Download Logo from SE78 <= Okay, added... but the thread is rather old and ws_download is obsolete now, isn't it?
BDC for SE78 Transaction => Upload Logo programatically <= Hmmm... external performs and guests. I would like more opinions on this one before adding.
UI Programming
[Top of page in OO ALV|TOP_OF_PAGE in ALV Using CL_GUI_ALV_GRID ]=> Top of page in OO ALV <= I understand that this blog was very helpful, but it is full of email addresses. Hmmm... tough call. Is there another example (thread) without any "here is my mail address" comments? We want to avoid those.
[Top of page Alignments in Normal ALV|Alignment of Data in TOP-OF-PAGE in ALV GRID] => Top of page in normal ALV <= This is more wiki material in a blog, but again often asked... Hmm, let's go through the others first and come back to this one.
To display URL as a hyperlink (along with simple text) in dialog program => URL display using HTML Viewer <= Added.
select-option on dynpro => Select-options usage in Module pool programming <= Added.
New! =>
Enhancements and Modifications
=>
ABAP Performance and Tuning
=>
Data Transfers
=>
ABAP Objects
=>
Edited by: Julius Bussche on Oct 3, 2008 10:31 PM
-
Problems with statement cache using OCI
Hello!
We recently changed our program to use statement cache, but we found a problem and not yet a solution.
We have problems in this situation:
OCIEnvCreate();
OCIHandleAlloc();
OCILogon2(..... OCI_LOGON2_STMTCACHE);
OCIStmtPrepare2("CREATE TABLE db_testeSP (cod_usuario INTEGER, usuario CHAR(20), dat_inclusao DATE)")
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("INSERT INTO db_testeSP (1,\'user\',CURRENT_DATE");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("SELECT * FROM db_testeSP");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("DROP TABLE db_testeSP");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("CREATE TABLE db_testeSP (cod_usuario INTEGER, usuario CHAR(20), idade INTEGER, dat_inclusao DATE)");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("INSERT INTO db_testeSP (1,\'user\',20,CURRENT_DATE");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
OCIStmtPrepare2("SELECT * FROM db_testeSP");
OCIStmtExecute();
OCIStmtRelease(... OCI_DEFAULT);
On the second SELECT, OCIStmtExecute returns -1, and if I get the error with OCIErrorGet I have: ORA-00932 - inconsistent datatypes.
Researching, I discovered that this is a statement cache problem. Is there a way to clear the cache for one table? I'm asking because I could clear it whenever there is a DROP TABLE or ALTER TABLE instruction (but I don't know which statements would need to be cleared from the cache). I can't clear the whole cache because I may have other statements from other tables in it.
This situation above is just an example, but I think that this will cause other problems too.
Our program is a gateway from the main program and database, so I don't know the SQL instructions before executing. How can we resolve this problem?
I have tested this issue with Oracle 10g (10.2.0.4.0) and 11g (11.2.0.1.0) both 64 bits and the result is the same (the OCI is version 11.2.0).
We appreciate any help.
Thanks in advance,
Daniel
After a long time searching for answers: apparently this is expected to happen, and the program should not use statement caching in this situation.
I found this in an Oracle document (Tuning Data Source Connection Pools - 11g Release 1 (10.3.6)) and we will need to review our use of statement caching.
Let this stay as a tip for others who might be in the same situation. -
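For others hitting this: if dropping statement caching entirely is too costly, one workaround for a gateway that sees all SQL is to track which tables each cached statement references and purge only those entries when a DROP TABLE or ALTER TABLE comes through. A conceptual sketch in plain Python (not OCI; the class and the crude regex are invented for illustration - a real gateway would use a proper SQL parser):

```python
import re

class StmtCache:
    def __init__(self):
        self._cache = {}    # sql text -> "prepared" handle
        self._tables = {}   # sql text -> set of referenced table names

    def prepare(self, sql):
        if sql not in self._cache:
            # Crude table-name extraction for illustration only.
            names = set(re.findall(r"(?:FROM|INTO|TABLE|UPDATE)\s+(\w+)",
                                   sql, re.IGNORECASE))
            self._cache[sql] = object()  # stand-in for a statement handle
            self._tables[sql] = names
        return self._cache[sql]

    def purge_table(self, table):
        """Call on DROP TABLE / ALTER TABLE: evict affected statements."""
        stale = [s for s, tabs in self._tables.items() if table in tabs]
        for s in stale:
            del self._cache[s]
            del self._tables[s]
        return len(stale)

cache = StmtCache()
cache.prepare("SELECT * FROM db_testeSP")
cache.prepare("SELECT * FROM other_table")
print(cache.purge_table("db_testeSP"))  # 1
```

Statements touching other tables stay cached, which is exactly the selectivity the poster was asking for.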
Update requires a valid UpdateCommand when passed DataRow collection with modified rows.
It is my
understanding that when using a TableAdapter and selecting the proper
options, Insert, Delete and Update statements are generated. The
Update statement should work just using the following:
myTableAdapter.Update(changes);
Where changes is the DataSet containing the modified data.
This logic works great for one of the tables in the DB but not for another.
Learning this new stuff is Confusing at best - Frustrating at worst.
Help appreciated,
Michael
ISiM to organize COMAD 2009 - the 15th International Conference - Call for papers
Dear All,
The 15th COMAD is organized by the International School of Information Management (ISiM), University of Mysore during December 9-11, 2009 at Infosys Technologies campus, Mysore. Similar to previous years, COMAD 2009’s scope will include all areas in the data management space including database management systems, Web and Information Retrieval and Data Mining. For details: http://www.isim.ac.in/comad2009
For close to two decades COMAD – the International Conference on Management of Data, modeled along the lines of ACM SIGMOD, has been a premier international database conference hosted in India. The first COMAD was held in 1989, and it has been held on a nearly annual basis since then. COMAD always had significant international participation with about 30% of the papers from outside India including Europe, USA, and East/South East Asia.
Call for Papers
Original research submissions are invited not only in traditional database areas but also in Data Quality, Web, Information Retrieval and Data Mining.
We invite submission of original research contributions as well as proposals for demonstrations, tutorials, industrial presentations, and panels.
Areas of interest include but are not limited to:
Data Management Systems:
* Benchmarking and performance evaluation
* Data exchange and integration
* Database monitoring and tuning
* Data privacy and security
* Data quality, cleaning and lineage
* Data warehousing
* Managing uncertain, imprecise and inconsistent information
* Multilingual data management
* Novel Data Types
* Parallel and distributed databases
* Peer-to-peer data management
* Personalized information systems
* Storage and transaction management
Web and Information Retrieval:
* Categorization, Clustering, and Filtering
* Document Representation and Content Analysis
* Information Extraction and Summarization
* IR Theory, Platform, Evaluation
* Question Answering and Cross-Language IR
* Web and IR
* Social Network Analysis
Data Mining
* Novel data mining algorithms and foundations
* Innovative applications of data mining
* Data mining and KDD systems and frameworks
* Mining data streams and sensor data
* Mining multi-media, graph, spatio-temporal and semi-structured data
* Security, privacy, and adversarial data mining
* High performance and parallel/distributed data mining
* Mining tera-/peta-scale data
* Visual data mining and data visualization
To ensure wide visibility of material published at the conference, we plan
to make arrangements with ACM SIGMOD for including the proceedings of the conference in the SIGMOD on-line and CD-ROM archives. Two awards, for Best Paper and Best Student Paper, will be presented at the conference.
Important Dates:
· Research and Student Papers-July 31, 2009
· Industrial Papers, Demo, Panel and Tutorial Proposals-August 28, 2009
· Notification to authors-September 25th, 2009
· Camera-ready copy due-October 30, 2009
· Early-Bird Registration Deadline-December 1, 2009
· Conference-December 9-11, 2009
Conference Organization:
General Chair
· Shalini Urs, ISiM, University of Mysore, India
Organizational Chair
· Srinath Srinivasa, IIIT-Bangalore, India
Program Chairs
· Sanjay Chawla, University of Sydney , Australia
· Kamal Karlapalem, IIIT-Hyderabad, India.
Contact Details:
Dr. Shalini Urs
COMAD 2009 – General Chair
Executive Director and Professor
International School of Information Management
University of Mysore, Manasagangotri
Mysore – 570 006
Tel: +91-821-2514699; +91-821-2411417
Fax: +91-821-2519209,
Email: [email protected]
Regards -
Update new entries in BCO View
Hello Experts,
I have created a Business Configuration Object (BCO) with a BCset and a BC view which I have added to the Fine Tuning.
However, the data which we have to enter is more. Is there any way to import the data in the BC view using an excel sheet?
I created another custom BO with same elements as the BCO and wrote the following code in the before_save event of the BO.
var building1:elementsof BUILDING1; // This is the BCO
building1.BUILDNO.content = this.building;
building1.LOCNO.content = this.location.content;
building1.TEXT.content = this.description;
BUILDING1.Create(building1);
However I get the error "BCO Node Root is not part of the worklist".
Any Idea on why this error is coming up or is there any other way to do this.
Thanks and Regards,
Sumeet Narang
Hello Sumeet,
There's no way from within the SDK to do it.
But if you have maintained the values once they are kept in the tenant.
Even if you request a new tenant they will be copied.
You can raise a request to SAP for an "upload of fine tuning data", but I am afraid this would in general be too flexible to be built.
Sorry,
Horst -
Query tuning for data-warehousing application in Oracle 8i.
We have to pick up 24 months old data. Each month data is kept in a different partition.
2007-May month data is kept in PRESC200705 partition
SELECT r.account_id,
p.presc_num,
spm.product_id,
p.month,
t.best_call_state,
sum(p.trx_count)
FROM rlup_assigned_account r,
temp_presc_num_TEST t,
retail.prescrip_retail partition (PRESC200705) p,
sherlock.sherlock_product_mapping spm
WHERE spm.product_id like '056%'
and spm.mds6 = p.product_id
and t.CLIENT_ID = p.presc_num
and r.ndc_pyr_id = p.payer_plan
and t.best_call_state = r.ST
GROUP BY r.account_id,
p.presc_num,
spm.product_id,
p.month,
t.best_call_state
Q: This is the query to be tuned.
SQL> SELECT table_name,
2 partition_name,
3 high_value,
4 num_rows
5 FROM user_tab_partitions
6 ;
no rows selected
I have the following task:
Requirement:
According to the client, new partitions are created every month.
So the query should cover only the 24 most recent partitions, leaving out one old partition every month.
So, the query becomes dynamic.
Every month the query has to drop the oldest partition and move ahead with the newly created one.
The total number of partitions accessed should not exceed 24.
Is this possible?
Partition# for Oct 2007:
1, 2, 3, 4, 5, 6, 7, ..., 24
Partition# for Nov 2007 (Old → New):
1 → (dropped)
2 → 1
3 → 2
4 → 3
5 → 4
6 → 5
7 → 6
...
24 → 23
(new) → 24
Secondly, with one month of data (in a partitioned table), the query takes about one hour to run.
With 24 months of data accessed by the query, it would take 24 hours to run.
I am sure that Oracle can be tuned to handle such huge volumes with ease, and that the query output can
come back within seconds. Otherwise, nobody would use Oracle for data warehousing applications.
Q. How do I write a dynamic query that references the 24 most recent partitions, based on the query provided:
abc>SELECT r.account_id,
2 p.presc_num,
3 spm.product_id,
4 p.month,
5 t.best_call_state,
6 sum(p.trx_count)
7 FROM rlup_assigned_account r,
8 temp_presc_num_TEST t,
9 retail.prescrip_retail partition (PRESC200705) p,
10 sherlock.sherlock_product_mapping spm
11 WHERE spm.product_id like '056%'
12 and t.CLIENT_ID='934759'
13 and spm.mds6 = p.product_id
14 and t.CLIENT_ID = p.presc_num
15 and r.ndc_pyr_id = p.payer_plan
16 and t.best_call_state = r.ST
17 GROUP BY r.account_id,
18 p.presc_num,
19 spm.product_id,
20 p.month,
21 t.best_call_state
22 ;
retail.prescrip_retail partition (PRESC200705) p,
The partition name PRESC200705 cannot be hardcoded into the SQL.
The SQL should cover a range of the 24 most recent partitions.
And the query should execute fast too.
Now, is that what is called a challenge?
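One common approach (a sketch only, not a tested solution for the poster's system) is to drop the hardcoded PARTITION clause and instead filter on the partition key MONTH, so that Oracle's partition pruning selects the 24 most recent partitions automatically each month. This is the same idea the poster mentions later with add_months:

```sql
-- Sketch: the hardcoded PARTITION (PRESC200705) clause is replaced by a
-- predicate on the partition key MONTH. Partition pruning then limits the
-- scan to the latest 24 monthly partitions with no hardcoded names.
SELECT r.account_id,
       p.presc_num,
       spm.product_id,
       p.month,
       t.best_call_state,
       SUM(p.trx_count)
  FROM rlup_assigned_account r,
       temp_presc_num_test t,
       retail.prescrip_retail p,
       sherlock.sherlock_product_mapping spm
 WHERE spm.product_id LIKE '056%'
   AND spm.mds6 = p.product_id
   AND t.client_id = p.presc_num
   AND r.ndc_pyr_id = p.payer_plan
   AND t.best_call_state = r.st
   AND p.month >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -24)
 GROUP BY r.account_id, p.presc_num, spm.product_id,
          p.month, t.best_call_state;
```

Whether pruning actually kicks in can be verified with an explain plan: a PARTITION RANGE ITERATOR step indicates that only the matching partitions are scanned.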
+++++++++++++++++++++++++++++++++++++++++++++++
Here are the index/constraints/explain_plan output on prescrip_retail table (which is partitioned)
as well as other tables to which prescrip_retail table is joined
This is what prescrip_retail looks like. This is the table having partitions.
It does not seem to have a primary key!
SQL> desc prescrip_retail
Name Null? Type
PRESC_NUM NUMBER
PFIER_NUM CHAR(8)
RELID NOT NULL CHAR(9)
ME_NUM NOT NULL CHAR(10)
PRODUCT_ID NOT NULL CHAR(6)
PRODUCT_FRMSTR NOT NULL CHAR(1)
PAYER_PLAN NOT NULL CHAR(6)
MONTH NOT NULL DATE
PYMT_CODE NOT NULL CHAR(1)
NRX_COUNT NOT NULL NUMBER(7)
NRX_QUANTITY NOT NULL NUMBER(9)
NRX_DOLLARS NOT NULL NUMBER(13,2)
TRX_COUNT NOT NULL NUMBER(7)
TRX_QUANTITY NOT NULL NUMBER(9)
TRX_DOLLARS NOT NULL NUMBER(13,2)
Table Size of Prescrip_Retail...
1 select table_name,tablespace_name,pct_free,pct_used,num_rows,avg_space
2 from all_tables
3* where table_name='PRESCRIP_RETAIL'
SQL> /
TABLE_NAME       TABLESPACE_NAME  PCT_FREE  PCT_USED  NUM_ROWS    AVG_SPACE
PRESCRIP_RETAIL                                       2806673860  360
Explain Plan for the query to be tuned...
22:32:31 SQL> explain plan set statement_id='vista_query'
22:43:33 2 for
22:43:35 3 SELECT r.pfier_account_id,
22:43:41 4 p.presc_num,
22:43:41 5 spm.product_id,
22:43:41 6 p.month,
22:43:41 7 t.best_call_state,
22:43:41 8 sum(p.trx_count)
22:43:41 9 FROM rlup_assigned_account r,
22:43:41 10 temp_presc_num_TEST t,
22:43:41 11 retail.prescrip_retail partition (PRESC200705) p,
22:43:41 12 sherlock.sherlock_product_mapping spm
22:43:41 13 WHERE spm.product_id like '056%'
22:43:41 14 and spm.mds6 = p.product_id
22:43:41 15 and t.CLIENT_ID = p.presc_num
22:43:41 16 and r.ndc_pyr_id = p.payer_plan
22:43:41 17 and t.best_call_state = r.ST
22:43:41 18 GROUP BY r.pfier_account_id,
22:43:41 19 p.presc_num,
22:43:41 20 spm.product_id,
22:43:41 21 p.month,
22:43:41 22 t.best_call_state;
Explained.
SQL> select statement_id,operation,options,object_name
2 from plan_table
3 where statement_id='vista_query';
22:46:03 SQL> /
STATEMENT_ID OPERATION OPTIONS OBJECT_NAME
vista_query SELECT STATEMENT
vista_query SORT GROUP BY
vista_query HASH JOIN
vista_query TABLE ACCESS FULL TEMP_PRESC_NUM_TEST
vista_query HASH JOIN
vista_query TABLE ACCESS FULL RLUP_ASSIGNED_ACCOUNT
vista_query HASH JOIN
vista_query TABLE ACCESS FULL SHERLOCK_PRODUCT_MAPPING
vista_query TABLE ACCESS FULL PRESCRIP_RETAIL
9 rows selected.
Partition pruning: This is supposed to provide insight into which partitions Oracle
visits internally...
I guess we can use "month >= add_months(sysdate,-24)" instead of naming partitions too.
I don't think Oracle is visiting any particular partitions.
I'll also search all_tab_partitions to verify this.
Explain plan for which partitions Oracle visits internally (partition pruning):
SQL> ed
Wrote file afiedt.buf
1 explain plan set statement_id='vista'
2 for select * from retail.prescrip_retail
3* where month>= add_months(sysdate,-24)
SQL> /
Explained.
Elapsed: 00:00:00.05
22:13:56 SQL> select statement_id,operation,options,object_name
22:14:28 2 from plan_table
22:14:30 3 where statement_id='vista';
STATEMENT_ID OPERATION         OPTIONS  OBJECT_NAME
vista        SELECT STATEMENT
vista        PARTITION RANGE   ITERATOR
vista        TABLE ACCESS      FULL     PRESCRIP_RETAIL
Elapsed: 00:00:01.00
Indexes/Constraints on PRESCRIP_RETAIL table:
SQL> ED
Wrote file afiedt.buf
1 SELECT TABLE_NAME,TABLE_TYPE,INDEX_NAME,INDEX_TYPE,PCT_FREE,STATUS,PARTITIONED
2 FROM ALL_INDEXES
3* WHERE TABLE_NAME IN ('PRESCRIP_RETAIL')
SQL> /
TABLE_NAME TABLE INDEX_NAME INDEX_TYPE PCT_FREE STATUS PAR
PRESCRIP_RETAIL TABLE BX6_PRESC_RELID BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX7_PRESC_ME BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX1_PRESC_PROD BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX2_PRESC_PAYER BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX3_PRESC_PAYERCD BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX4_PRESC_PRESC BITMAP N/A YES
PRESCRIP_RETAIL TABLE BX5_PRESC_PFIER BITMAP N/A YES
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 SELECT TABLE_NAME,CONSTRAINT_NAME,CONSTRAINT_TYPE,STATUS,DEFERRABLE
2 FROM ALL_CONSTRAINTS
3* WHERE TABLE_NAME IN ('PRESCRIP_RETAIL')
SQL> /
TABLE_NAME CONSTRAINT_NAME C STATUS DEFERRABLE
PRESCRIP_RETAIL SYS_C001219 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001220 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001221 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001222 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001223 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001224 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001225 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001226 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001227 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001228 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001229 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001230 C ENABLED NOT DEFERRABLE
PRESCRIP_RETAIL SYS_C001231 C ENABLED NOT DEFERRABLE
13 rows selected.
In all_tables:
NUM_ROWS:2806673860
AVG_SPACE:360
Here is the data size in the table.
SQL> select count(*) from PRESCRIP_RETAIL;
COUNT(*)
4602980312
Again, here is the partition information and the amount of data in each partition:
SQL> ed
Wrote file afiedt.buf
1 select
2 partition_name,SUBPARTITION_COUNT,PARTITION_POSITION,TABLESPACE_NAME,
3 NUM_ROWS
4 from all_tab_partitions
5 where table_name='PRESCRIP_RETAIL'
6* order by partition_name desc
SQL> /
PARTITION_NAME SUBPARTITION_COUNT PARTITION_POSITION TABLESPACE_NAME NUM_ROWS
PRESC200705 0 36 PRESC_PARTITION_29 141147085
PRESC200704 0 35 PRESC_PARTITION_28 140299317
PRESC200703 0 34 PRESC_PARTITION_27 140703128
PRESC200702 0 33 PRESC_PARTITION_26 132592733
PRESC200701 0 32 PRESC_PARTITION_25 145832356
PRESC200612 0 31 PRESC_PARTITION_24 136702837
PRESC200611 0 30 PRESC_PARTITION_23 137421767
PRESC200610 0 29 PRESC_PARTITION_22 140836119
PRESC200609 0 28 PRESC_PARTITION_21 131273578
PRESC200608 0 27 PRESC_PARTITION_20 134967317
PRESC200607 0 26 PRESC_PARTITION_19 130785504
PRESC200606 0 25 PRESC_PARTITION_18 131472696
PRESC200605 0 24 PRESC_PARTITION_17 138590581
PRESC200604 0 23 PRESC_PARTITION_16 126849798
PRESC200603 0 22 PRESC_PARTITION_15 137164667
PRESC200602 0 21 PRESC_PARTITION_14 126938544
PRESC200601 0 20 PRESC_PARTITION_13 135408324
PRESC200512 0 19 PRESC_PARTITION_12 123285100
PRESC200511 0 18 PRESC_PARTITION_11 121245764
PRESC200510 0 17 PRESC_PARTITION_10 122112932
PRESC200509 0 16 PRESC_PARTITION_09 119137399
PRESC200508 0 15 PRESC_PARTITION_08 123372311
PRESC200507 0 14 PRESC_PARTITION_07 112905435
PRESC200506 0 13 PRESC_PARTITION_06 119581406
PRESC200505 0 12 PRESC_PARTITION_05 123977315
PRESC200504 0 11 PRESC_PARTITION_04 118975597
PRESC200503 0 10 PRESC_PARTITION_03 125782688
PRESC200502 0 9 PRESC_PARTITION_02 117448839
PRESC200501 0 8 PRESC_PARTITION_01 122214436
PRESC200412 0 7 PRESC_PARTITION_36 124799998
PRESC200411 0 6 PRESC_PARTITION_35 125471042
PRESC200410 0 5 PRESC_PARTITION_34 118457422
PRESC200409 0 4 PRESC_PARTITION_33 119537488
PRESC200408 0 3 PRESC_PARTITION_32 121319137
PRESC200407 0 2 PRESC_PARTITION_31 115226621
PRESC200406 0 1 PRESC_PARTITION_30 119143031
36 rows selected.
Data in individual partitions of PRESCRIP_RETAIL:
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200704);
COUNT(*)
140299317
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200703);
COUNT(*)
140703128
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200702);
COUNT(*)
132592733
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200701);
COUNT(*)
145832356
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200606);
COUNT(*)
131472696
SQL> SELECT COUNT(*) FROM PRESCRIP_RETAIL PARTITION(PRESC200605);
COUNT(*)
138590581
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other tables info:
Indexes on other tables related to PRESCRIP_RETAIL:
SQL> SELECT TABLE_NAME,TABLE_TYPE,INDEX_NAME,INDEX_TYPE,PCT_FREE,STATUS,PARTITIONED
2 FROM ALL_INDEXES
3 WHERE TABLE_NAME IN ('RLUP_ASSIGNED_ACCOUNT','TEMP_PRESC_NUM_TEST','SHERLOCK_PRODUCT_MAPPING');
SQL> /
TABLE_NAME TABLE INDEX_NAME INDEX_TYPE PCT_FREE STATUS PAR
SHERLOCK_PRODUCT_MAPPING TABLE SHERLOCK_PRODUCT_MAPPING_PK NORMAL 10 VALID NO
SHERLOCK_PRODUCT_MAPPING TABLE SHERLOCK_PRODUCT_MAPPING_X1 NORMAL 0 VALID NO
SHERLOCK_PRODUCT_MAPPING TABLE SHERLOCK_PRODUCT_MAPPING_BX1 BITMAP 0 VALID NO
SHERLOCK_PRODUCT_MAPPING TABLE SHERLOCK_PRODUCT_MAPPING_BX2 BITMAP 0 VALID NO
SHERLOCK_PRODUCT_MAPPING TABLE SHERLOCK_PRODUCT_MAPPING_BX3 BITMAP 0 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX1_RLUP_ASSIGNED_ACCT_PYR BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX2_RLUP_ASSIGNED_ACCT_TOPLVL BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX3_RLUP_ASSIGNED_ACCT_PBM BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX4_RLUP_ASSIGNED_ACCT_AA_FLAG BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX5_RLUP_ASSIGNED_ACCT_AA_CHD BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX6_RLUP_ASSIGNED_ACCT_PBM_FLG BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE BX7_RLUP_ASSIGNED_ACCT_ACCTID BITMAP 10 VALID NO
RLUP_ASSIGNED_ACCOUNT TABLE PK_RLUP_ASSIGNED_ACCOUNT NORMAL 10 VALID NO
13 rows selected.
Constraints in other tables:
SQL> SELECT TABLE_NAME,CONSTRAINT_NAME,CONSTRAINT_TYPE,STATUS,DEFERRABLE
2 FROM ALL_CONSTRAINTS
3 WHERE TABLE_NAME IN ('RLUP_ASSIGNED_ACCOUNT','TEMP_PRESC_NUM_TEST','SHERLOCK_PRODUCT_MAPPING');
TABLE_NAME CONSTRAINT_NAME C STATUS DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637753 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637754 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637755 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637756 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637757 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637758 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637759 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637760 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637761 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT SYS_C00637762 C ENABLED NOT DEFERRABLE
RLUP_ASSIGNED_ACCOUNT PK_RLUP_ASSIGNED_ACCOUNT P ENABLED NOT DEFERRABLE
TEMP_PRESC_NUM_TEST SYS_C00640536 C ENABLED NOT DEFERRABLE
12 rows selected.
TEMP_PRESC_NUM_TEST does not contain any constraints or index. -
Performace tuning: how to pass data between different batch job programs?
Hi everyone,
Now I have a problem concerning performance tuning using threading in SAP programs: we split one big program into two programs - a main program and a sub program. Using batch jobs, we can submit multiple jobs of the sub program at the same time.
Does anybody know how to pass data between different batch jobs? I don't want to use temp files. Can ABAP memory implement this?
thanks!
Wei,
Yes, we can transfer the data by using SAP Memory OR ABAP Memory.
Example:
DATA v_count TYPE i.
v_count = 100.
LOOP AT itab.
  IF v_count EQ 25.
    " For every batch job:
    EXPORT data TO MEMORY ID 'ABC'.
    " then schedule the job with the function modules
    " JOB_OPEN, JOB_SUBMIT and JOB_CLOSE.
  ENDIF.
ENDLOOP.
In your 2nd program:
INITIALIZATION.
  IMPORT data FROM MEMORY ID 'ABC'.
  FREE MEMORY ID 'ABC'. " When you free the memory, the next import gets fresh data.
Don't forget to reward if useful. -
Performance tuning -index on big table on date column
Hi,
I am working on Oracle 10g with Oracle Apps 11i on Sun.
we have a large non-partitioned table "GL_JE_HEADERS" with 7 million rows.
Now we want to run a query that selects rows using a BETWEEN clause on a date column.
I have created a B-tree index on this table.
Now how can I tune the query? Which hint should I use for the query?
Thanks,
rane
Hi Rane,
"Now how can I tune the query?"
Indexes on DATE datatypes are tricky, as the SQL queries must match the index!
For example, an index on ship_date would NOT match queries such as:
WHERE trunc(ship_date) > trunc(sysdate-7);
WHERE to_char(ship_date,'YYYY-MM-DD') = '2004-01-04';
You may need to create a function-based index, so that the DATE reference in your SQL matches the index:
http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
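As a minimal sketch, using the illustrative ship_date column from the examples above (the table and index names here are hypothetical, not taken from the poster's schema):

```sql
-- Hypothetical names; the point is that the index expression must
-- exactly match the expression used in the WHERE clause.
CREATE INDEX orders_trunc_ship_date_idx
    ON orders (TRUNC(ship_date));

-- This predicate now matches the indexed expression and can use the index:
SELECT order_id
  FROM orders
 WHERE TRUNC(ship_date) > TRUNC(SYSDATE - 7);
```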
To start testing, go into SQL*Plus and "set autotrace on" and run the queries.
Then confirm that your index is being used.
"Which hint should I use for the query?"
Hints are a last resort!
Your query is fully tuned when it fetches the rows you need with a minimum of block touches (logical reads, consistent gets).
See here for details:
http://www.dba-oracle.com/art_sql_tune.htm
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm -
Performance tuning for Sales Order and its configuration data extraction
I write here the data fetching subroutine of an extract report.
This report takes 2.5 hours to extract 36,000 records on the quality server.
Kindly provide some suggestions for tuning its performance.
SELECT auart vkorg vtweg spart vkbur augru
kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
gwldt audat knumv
FROM vbak
INTO TABLE it_vbak
WHERE vbeln IN s_vbeln
AND erdat IN s_erdat
AND auart IN s_auart
AND vkorg = p_vkorg
AND spart IN s_spart
AND vkbur IN s_vkbur
AND vtweg IN s_vtweg.
IF NOT it_vbak[] IS INITIAL.
SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
yyequnr vbeln cuobj
FROM vbap
INTO TABLE it_vbap
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND posnr = '000010'.
SELECT bstkd inco1 zterm vbeln
prsdt
FROM vbkd
INTO TABLE it_vbkd
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln.
SELECT kbetr kschl knumv
FROM konv
INTO TABLE it_konv
FOR ALL ENTRIES IN it_vbak
WHERE knumv = it_vbak-knumv
AND kschl = 'PN00'.
SELECT vbeln parvw kunnr
FROM vbpa
INTO TABLE it_vbpa
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND parvw IN ('PE', 'YU', 'RE').
ENDIF.
LOOP AT it_vbap INTO wa_vbap.
IF NOT wa_vbap-cuobj IS INITIAL.
CALL FUNCTION 'VC_I_GET_CONFIGURATION'
EXPORTING
instance = wa_vbap-cuobj
language = sy-langu
TABLES
configuration = it_config
EXCEPTIONS
instance_not_found = 1
internal_error = 2
no_class_allocation = 3
instance_not_valid = 4
OTHERS = 5.
IF sy-subrc = 0.
* Collect the relevant configuration characteristics in one pass
LOOP AT it_config WHERE atnam = 'IND_PRODUCT_LINES'
                     OR atnam = 'IND_GQ'
                     OR atnam = 'IND_VKN'
                     OR atnam = 'IND_ZE'
                     OR atnam = 'IND_HQ'
                     OR atnam = 'IND_CALCULATED_INST_HOURS'.
  wa_char-obj   = wa_vbap-cuobj.
  wa_char-atnam = it_config-atnam.
  wa_char-atwrt = it_config-atwrt.
  APPEND wa_char TO it_char.
  CLEAR wa_char.
ENDLOOP.
ENDIF.
ENDIF.
ENDLOOP. " End of loop on it_vbap
Edited by: jaya rangwani on May 11, 2010 12:50 PM
Edited by: jaya rangwani on May 11, 2010 12:52 PM
Hello Jaya,
Here are some points that will improve the performance of the program:
1. VBAK & VBAP are header & item tables, so the relation is 1 to many. In this case, you can use an inner join instead of multiple SELECT statements.
2. If you are confident in handling inner joins, you can fetch the data from VBAK, VBAP & VBKD with a single statement using inner joins.
3. Before using FOR ALL ENTRIES, check that the internal table is not initial.
Also sort the internal table and delete adjacent duplicates.
4. Sort all the resulting internal tables on the required key fields and always read them with a binary search.
There are a number of documents that give a fair idea of what should and should not be done in a program with performance issues.
Also, there are a number of function modules and BAPIs for fetching sales order details. You can try 'BAPISDORDER_GETDETAILEDLIST'.
Regards,
Selva K.