Problem in fetching 7 million records
Hello all,
I need to fetch data from an external database (Oracle 11g) and insert it into a local database (Oracle 11g) using a db link. The query is simple:
>> Select * from table_name where column_name = date.
Can I use a cursor with the BULK COLLECT and bulk insert concept, using the LIMIT option?
I tried the above method, but sometimes I get "ORA-04030: out of process memory when trying to allocate 16408 bytes", though not always.
The fetched record count is never less than 6 million. This process needs to happen daily, and we have only limited access to the external DB: just SELECT permission on one table.
Regards
Natarajan M
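For reference, a minimal sketch of the BULK COLLECT / LIMIT pattern being asked about, with hypothetical names (remote_table, remote_db, trans_dt, local_table); the LIMIT clause caps how many rows sit in the PGA at once, which is exactly what ORA-04030 is complaining about:
DECLARE
  CURSOR c_src IS
    SELECT * FROM remote_table@remote_db
     WHERE trans_dt = TO_DATE('2013-01-01', 'YYYY-MM-DD');
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- at most 10,000 rows are held in memory per pass
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO local_table VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c_src;
END;
/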
Relational databases are about processing sets, not about processing records.
If you use a procedural approach, this will be orders of magnitude slower.
Why? Because Oracle will have to fetch the data into your PGA. The statement posted earlier fetches 0 records into your session; your approach fetches 7 million.
Sybrand Bakker
Senior Oracle DBA
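Presumably the set-based statement being advocated looks something like this; a minimal sketch using the same hypothetical names as above (the APPEND hint requesting a direct-path insert is optional and has its own prerequisites):
-- Rows stream across the db link straight into the local table,
-- so no multi-million-row collection ever has to fit in the PGA.
INSERT /*+ APPEND */ INTO local_table
SELECT *
  FROM remote_table@remote_db
 WHERE trans_dt = TO_DATE('2013-01-01', 'YYYY-MM-DD');
COMMIT;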
Similar Messages
-
Problem with Fetching Million Records from Table COEP into an Internal Table
Hi Everyone! Hope things are going well.
Table COEP has 6 million records.
I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
Regards,
Owais...
The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "where LEDNR EQ '00'" to the where clause.
Here is my select:
SELECT kokrs
belnr
buzei
ebeln
ebelp
wkgbtr
refbn
bukrs
gjahr
FROM covp CLIENT SPECIFIED
INTO TABLE i_coep
FOR ALL ENTRIES IN i_objnr
WHERE mandt EQ sy-mandt
AND lednr EQ '00'
AND objnr = i_objnr-objnr
AND kokrs = c_conarea. -
Problem while fetching more records in SAP ABAP report program
Hello Friends,
I have an SAP ABAP report program which fetches data from the usr02 table.
The program works fine with a small number of records, but in production there are more than 200000 records, and either the report gets timed out or there is a runtime error like "buffer area not available".
Below is the fetch statement:
SELECT bname FROM usr02 INTO TABLE lt_user
So, do I need to take the records in small chunks? I do not think that should be needed, as I have worked on a number of other databases where a single fetch statement handles many records and the database itself takes care of this.
Please provide me some approach to resolve this problem.
This will be very difficult for you. Since you are getting a timeout error, it looks like you are running this report in the foreground.
Try running it in background; it will work.
Otherwise you have to fetch in small chunks, but the question is how you will do it, since USR02 only has BNAME as its primary key.
Either put BNAME on the selection screen and fetch the data; that will solve your problem.
Only fetch those BNAME values which are entered on the selection screen.
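Alternatively, a minimal sketch of a chunked fetch with PACKAGE SIZE (the 10,000-row portion size is an assumption): each pass of the SELECT ... ENDSELECT loop replaces lt_user with the next chunk, so the full result set never sits in memory at once.
TYPES: BEGIN OF ty_user,
         bname TYPE usr02-bname,
       END OF ty_user.
DATA: lt_user TYPE STANDARD TABLE OF ty_user.

SELECT bname FROM usr02
  INTO TABLE lt_user
  PACKAGE SIZE 10000.
* process or write out this chunk before the next fetch replaces it
ENDSELECT.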
Hope it helps! -
Urgent: Problem in Fetching the records from ITAB3
hi,
here's the code, and the bold part is where I am facing the problem: when I append the lines of ITAB2 to ITAB3, ITAB3 ends up with 32,234 records, but in reality ITAB2 has only 39 records. ITFINAL contains 45 records and is displaying the correct data.
But why does ITAB3 contain 32,234 records?
It might hit the performance of the report.
TABLES: RSEG.
***********DECLARATION OF TABLES*************
************TABLE BKPF - ACCOUNTING HEADER ***********
DATA: BEGIN OF ITBKPF OCCURS 0,
BUKRS LIKE BKPF-BUKRS,
BELNR LIKE BKPF-BELNR,
GJAHR LIKE BKPF-GJAHR,
AWKEY LIKE BKPF-AWKEY,
BUDAT LIKE BKPF-BUDAT,
XBLNR LIKE BKPF-XBLNR,
AWTYP LIKE BKPF-AWTYP,
END OF ITBKPF.
*********TABLE BSIK - ACCOUNTING OPEN ITEMS********
DATA: BEGIN OF ITAB2 OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BSIK-BUKRS,
GJAHR LIKE BSIK-GJAHR,
BELNR LIKE BSIK-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITAB2.
**********TABLE BSAK - ACCOUNTING CLEAR ITEMS*******
DATA: BEGIN OF ITAB3 OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BSAK-BUKRS,
GJAHR LIKE BSAK-GJAHR,
BELNR LIKE BSAK-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITAB3.
DATA: BEGIN OF ITDEMO OCCURS 0,
BELNR LIKE RSEG-BELNR,
GJAHR LIKE RSEG-GJAHR,
LFBNR LIKE RSEG-LFBNR,
XBLNR LIKE RSEG-XBLNR,
END OF ITDEMO.
*****FINAL TABLE TO GATHER N DISPLAY OUTPUT*****
DATA: BEGIN OF ITFINAL OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BKPF-BUKRS,
GJAHR LIKE BKPF-GJAHR,
BELNR LIKE BKPF-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITFINAL.
**********END OF DECLARATIONS*************
SELECT-OPTIONS: P_LFBNR FOR RSEG-LFBNR.
*************FETCHING OF THE DATA*************
START-OF-SELECTION.
* BKPF
SELECT BUKRS BELNR GJAHR AWKEY BUDAT XBLNR AWTYP
FROM BKPF
INTO (ITBKPF-BUKRS,ITBKPF-BELNR,ITBKPF-GJAHR,
ITBKPF-AWKEY,ITBKPF-BUDAT,ITBKPF-XBLNR,ITBKPF-AWTYP)
WHERE AWTYP EQ 'MKPF' OR AWTYP EQ 'RMRP'.
* MKPF
************BEGIN OF TRY CODE FOR A MATERIAL DOCUMENT************
ITDEMO-BELNR = ITBKPF-AWKEY(10).
ITDEMO-GJAHR = ITBKPF-AWKEY+10(4).
ITDEMO-XBLNR = ITBKPF-XBLNR.
SELECT LFBNR FROM RSEG INTO
(ITDEMO-LFBNR) WHERE
BELNR EQ ITBKPF-AWKEY(10) AND
GJAHR EQ ITBKPF-AWKEY+10(4) AND
XBLNR EQ ITBKPF-XBLNR AND LFBNR > 0.
CHECK SY-SUBRC EQ 0 AND ITDEMO-LFBNR IN P_LFBNR.
************END OF TRY CODE FOR A MATERIAL DOCUMENT***************
ITAB2-BUKRS = ITBKPF-BUKRS.
ITAB2-GJAHR = ITBKPF-GJAHR.
ITAB2-BELNR = ITBKPF-BELNR.
ITAB3-BUKRS = ITBKPF-BUKRS.
ITAB3-GJAHR = ITBKPF-GJAHR.
ITAB3-BELNR = ITBKPF-BELNR.
* BSIK
SELECT WRBTR LIFNR FROM BSIK
INTO (ITAB2-WRBTR, ITAB2-LIFNR)
WHERE BUKRS EQ ITBKPF-BUKRS
AND GJAHR EQ ITBKPF-GJAHR
AND BELNR EQ ITBKPF-BELNR.
APPEND ITAB2.
EXIT.
ENDSELECT.
* BSAK
SELECT WRBTR LIFNR AUGBL AUGDT
FROM BSAK
INTO (ITAB3-WRBTR,ITAB3-LIFNR,ITAB3-AUGBL,ITAB3-AUGDT)
WHERE BUKRS EQ ITBKPF-BUKRS
AND GJAHR EQ ITBKPF-GJAHR
AND BELNR EQ ITBKPF-BELNR.
APPEND ITAB3.
EXIT.
ENDSELECT.
APPEND ITDEMO.
EXIT.
ENDSELECT.
APPEND ITBKPF.
ENDSELECT.
* Fields Found?
READ TABLE ITBKPF TRANSPORTING NO FIELDS INDEX 1.
IF sy-subrc NE 0.
MESSAGE i000(zmm1) WITH 'No documents found!'.
ENDIF.
* Prepare Output
LOOP AT ITBKPF.
CLEAR ITAB2.
READ TABLE ITAB2
WITH KEY BUKRS = ITBKPF-BUKRS
BELNR = ITBKPF-BELNR
GJAHR = ITBKPF-GJAHR.
CHECK sy-subrc EQ 0.
CLEAR ITAB3.
READ TABLE ITAB3
WITH KEY BUKRS = ITBKPF-BUKRS
BELNR = ITBKPF-BELNR
GJAHR = ITBKPF-GJAHR.
CHECK sy-subrc EQ 0.
READ TABLE ITDEMO
WITH KEY BELNR = ITBKPF-AWKEY(10).
CHECK sy-subrc EQ 0.
APPEND LINES OF ITAB2 TO ITAB3.
CHECK sy-subrc EQ 0.
ITFINAL-LFBNR = ITDEMO-LFBNR.
ITFINAL-BUKRS = ITBKPF-BUKRS.
ITFINAL-BELNR = ITBKPF-BELNR.
ITFINAL-GJAHR = ITBKPF-GJAHR.
ITFINAL-AWKEY = ITBKPF-AWKEY.
ITFINAL-WRBTR = ITAB3-WRBTR.
ITFINAL-LIFNR = ITAB3-LIFNR.
ITFINAL-AUGBL = ITAB3-AUGBL.
ITFINAL-AUGDT = ITAB3-AUGDT.
DELETE ITFINAL WHERE WRBTR = 0.
APPEND ITFINAL.
CLEAR ITFINAL.
ENDLOOP.
SORT ITFINAL BY AUGBL AUGDT .
* END-OF-SELECTION
END-OF-SELECTION.
* Output
LOOP AT ITFINAL.
WRITE: / ITFINAL-LFBNR,ITFINAL-BELNR, ITFINAL-GJAHR,ITFINAL-AWKEY, ITFINAL-WRBTR, ITFINAL-LIFNR,ITFINAL-AUGBL,ITFINAL-AUGDT.
ENDLOOP.
hi,
actually I have to display the open and cleared items with respect to the MATERIAL DOCUMENT.
Try to execute the code which I am displaying below:
TABLES: RSEG.
**********DECLARATION OF TABLES************
***********TABLE BKPF - ACCOUNTING HEADER **********
DATA: BEGIN OF ITBKPF OCCURS 0,
BUKRS LIKE BKPF-BUKRS,
BELNR LIKE BKPF-BELNR,
GJAHR LIKE BKPF-GJAHR,
AWKEY LIKE BKPF-AWKEY,
BUDAT LIKE BKPF-BUDAT,
XBLNR LIKE BKPF-XBLNR,
AWTYP LIKE BKPF-AWTYP,
END OF ITBKPF.
********TABLE BSIK - ACCOUNTING OPEN ITEMS*******
DATA: BEGIN OF ITAB2 OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BSIK-BUKRS,
GJAHR LIKE BSIK-GJAHR,
BELNR LIKE BSIK-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITAB2.
*********TABLE BSAK - ACCOUNTING CLEAR ITEMS******
DATA: BEGIN OF ITAB3 OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BSAK-BUKRS,
GJAHR LIKE BSAK-GJAHR,
BELNR LIKE BSAK-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITAB3.
*********TABLE BSIS - MIRO NOT PERFORMED*******
DATA: BEGIN OF ITAB4 OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BSIS-BUKRS,
GJAHR LIKE BSIS-GJAHR,
BELNR LIKE BSIS-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITAB4.
**********TABLE RSEG - FOR MATERIAL DOCUMENT********
DATA: BEGIN OF ITDEMO OCCURS 0,
BELNR LIKE RSEG-BELNR,
GJAHR LIKE RSEG-GJAHR,
LFBNR LIKE RSEG-LFBNR,
XBLNR LIKE RSEG-XBLNR,
END OF ITDEMO.
****FINAL TABLE TO GATHER N DISPLAY OUTPUT****
DATA: BEGIN OF ITFINAL OCCURS 0,
LFBNR LIKE RSEG-LFBNR,
BUKRS LIKE BKPF-BUKRS,
GJAHR LIKE BKPF-GJAHR,
BELNR LIKE BKPF-BELNR,
AWKEY LIKE BKPF-AWKEY,
WRBTR LIKE BSIK-WRBTR,
LIFNR LIKE BSIK-LIFNR,
AUGBL LIKE BSAK-AUGBL,
AUGDT LIKE BSAK-AUGDT,
END OF ITFINAL.
*********END OF DECLARATIONS************
SELECT-OPTIONS: P_LFBNR FOR RSEG-LFBNR.
************FETCHING OF THE DATA************
START-OF-SELECTION.
* BKPF
SELECT BUKRS BELNR GJAHR AWKEY BUDAT XBLNR AWTYP
FROM BKPF
INTO (ITBKPF-BUKRS,ITBKPF-BELNR,ITBKPF-GJAHR,
ITBKPF-AWKEY,ITBKPF-BUDAT,ITBKPF-XBLNR,ITBKPF-AWTYP)
WHERE AWTYP EQ 'MKPF' OR AWTYP EQ 'RMRP'.
* MKPF
***********BEGIN OF TRY CODE FOR A MATERIAL DOCUMENT***********
ITDEMO-BELNR = ITBKPF-AWKEY(10).
ITDEMO-GJAHR = ITBKPF-AWKEY+10(4).
ITDEMO-XBLNR = ITBKPF-XBLNR.
SELECT LFBNR FROM RSEG INTO
(ITDEMO-LFBNR) WHERE
BELNR EQ ITBKPF-AWKEY(10) AND
GJAHR EQ ITBKPF-AWKEY+10(4) AND
XBLNR EQ ITBKPF-XBLNR AND LFBNR > 0.
CHECK SY-SUBRC EQ 0 AND ITDEMO-LFBNR IN P_LFBNR.
***********END OF TRY CODE FOR A MATERIAL DOCUMENT**************
ITAB2-BUKRS = ITBKPF-BUKRS.
ITAB2-GJAHR = ITBKPF-GJAHR.
ITAB2-BELNR = ITBKPF-BELNR.
ITAB3-BUKRS = ITBKPF-BUKRS.
ITAB3-GJAHR = ITBKPF-GJAHR.
ITAB3-BELNR = ITBKPF-BELNR.
* BSIK
SELECT WRBTR LIFNR FROM BSIK
INTO (ITAB2-WRBTR, ITAB2-LIFNR)
WHERE BUKRS EQ ITBKPF-BUKRS
AND GJAHR EQ ITBKPF-GJAHR
AND BELNR EQ ITBKPF-BELNR.
APPEND ITAB2.
EXIT.
ENDSELECT.
* BSAK
SELECT WRBTR LIFNR AUGBL AUGDT
FROM BSAK
INTO (ITAB3-WRBTR,ITAB3-LIFNR,ITAB3-AUGBL,ITAB3-AUGDT)
WHERE BUKRS EQ ITBKPF-BUKRS
AND GJAHR EQ ITBKPF-GJAHR
AND BELNR EQ ITBKPF-BELNR.
APPEND ITAB3.
EXIT.
ENDSELECT.
* BSIS
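* NOTE: ITAB1 and XREF3 below are not declared anywhere above; presumably
* ITAB4 (the BSIS table) was intended, with an XREF3 field added to its
* declaration.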
SELECT WRBTR XREF3 FROM BSIS
INTO (ITAB1-WRBTR, ITAB1-XREF3)
WHERE BUKRS EQ ITBKPF-BUKRS
AND GJAHR EQ ITBKPF-GJAHR
AND BELNR EQ ITBKPF-BELNR.
APPEND ITAB1.
EXIT.
ENDSELECT.
CHECK sy-subrc EQ 0.
APPEND ITDEMO.
EXIT.
ENDSELECT.
APPEND ITBKPF.
ENDSELECT.
* Fields Found?
READ TABLE ITBKPF TRANSPORTING NO FIELDS INDEX 1.
IF sy-subrc NE 0.
MESSAGE i000(zmm1) WITH 'No documents found!'.
ENDIF.
* Prepare Output
LOOP AT ITBKPF.
CLEAR ITAB2.
READ TABLE ITAB2
WITH KEY BUKRS = ITBKPF-BUKRS
BELNR = ITBKPF-BELNR
GJAHR = ITBKPF-GJAHR. " BINARY SEARCH
CHECK sy-subrc EQ 0.
CLEAR ITAB3.
READ TABLE ITAB3
WITH KEY BUKRS = ITBKPF-BUKRS
BELNR = ITBKPF-BELNR
GJAHR = ITBKPF-GJAHR. " BINARY SEARCH
CHECK sy-subrc EQ 0.
READ TABLE ITDEMO
WITH KEY BELNR = ITBKPF-AWKEY(10).
CHECK sy-subrc EQ 0.
APPEND LINES OF ITAB2 TO ITAB3.
CHECK sy-subrc EQ 0.
ITFINAL-LFBNR = ITDEMO-LFBNR.
ITFINAL-BUKRS = ITBKPF-BUKRS.
ITFINAL-BELNR = ITBKPF-BELNR.
ITFINAL-GJAHR = ITBKPF-GJAHR.
ITFINAL-AWKEY = ITBKPF-AWKEY.
ITFINAL-WRBTR = ITAB3-WRBTR.
ITFINAL-LIFNR = ITAB3-LIFNR.
ITFINAL-AUGBL = ITAB3-AUGBL.
ITFINAL-AUGDT = ITAB3-AUGDT.
DELETE ITFINAL WHERE WRBTR = 0.
APPEND ITFINAL.
CLEAR ITFINAL.
ENDLOOP.
SORT ITFINAL BY AUGBL AUGDT .
* END-OF-SELECTION
END-OF-SELECTION.
* Output
WRITE: /' OPEN ITEMS -> PAYMENTS ARE NOT DONE'.
ULINE.
WRITE: / 'MAT.DOC. A/C DOC. YEAR REF.KEY AMOUNT VENDOR CLR.DOC. CLR.DATE' .
ULINE.
LOOP AT ITFINAL.
WRITE: / ITFINAL-LFBNR,ITFINAL-BELNR, ITFINAL-GJAHR,ITFINAL-AWKEY, ITFINAL-WRBTR, ITFINAL-LIFNR,ITFINAL-AUGBL,ITFINAL-AUGDT.
ENDLOOP. -
SQL Query to fetch records from tables which have 75+ million records
Hi,
I have the explain plan for a SQL statement. I require your suggestions to improve it.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 340 | 175K| 19075 |
| 1 | TEMP TABLE TRANSFORMATION | | | | |
| 2 | LOAD AS SELECT | | | | |
| 3 | SORT GROUP BY | | 32M| 1183M| 799K|
| 4 | TABLE ACCESS FULL | CLM_DETAIL_PRESTG | 135M| 4911M| 464K|
| 5 | LOAD AS SELECT | | | | |
| 6 | TABLE ACCESS FULL | CLM_HEADER_PRESTG | 1 | 274 | 246K|
| 7 | LOAD AS SELECT | | | | |
| 8 | SORT UNIQUE | | 744K| 85M| 8100 |
| 9 | TABLE ACCESS FULL | DAILY_PROV_PRESTG | 744K| 85M| 1007 |
| 10 | UNION-ALL | | | | |
| 11 | SORT UNIQUE | | 177 | 97350 | 9539 |
| 12 | HASH JOIN | | 177 | 97350 | 9538 |
| 13 | HASH JOIN OUTER | | 3 | 1518 | 9533 |
| 14 | HASH JOIN | | 1 | 391 | 8966 |
| 15 | TABLE ACCESS BY INDEX ROWID | CLM_DETAIL_PRESTG | 1 | 27 | 3 |
| 16 | NESTED LOOPS | | 1 | 361 | 10 |
| 17 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
| 18 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 19 | VIEW | | 1 | 259 | 2 |
| 20 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 21 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 22 | TABLE ACCESS BY INDEX ROWID| CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
| 23 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 24 | INDEX RANGE SCAN | CLM_DETAIL_PRESTG_IDX | 6 | | 2 |
| 25 | VIEW | | 32M| 934M| 8235 |
| 26 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 27 | VIEW | | 744K| 81M| 550 |
| 28 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
| 29 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 227K| 5 |
| 30 | SORT UNIQUE | | 163 | 82804 | 9536 |
| 31 | HASH JOIN | | 163 | 82804 | 9535 |
| 32 | HASH JOIN OUTER | | 3 | 1437 | 9530 |
| 33 | HASH JOIN | | 1 | 364 | 8963 |
| 34 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
| 35 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 36 | VIEW | | 1 | 259 | 2 |
| 37 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 38 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 39 | TABLE ACCESS BY INDEX ROWID | CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
| 40 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 41 | VIEW | | 32M| 934M| 8235 |
| 42 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 43 | VIEW | | 744K| 81M| 550 |
| 44 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
| 45 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 149K| 5 |
The CLM_DETAIL_PRESTG table has 100 million records and the CLM_HEADER_PRESTG table has 75 million records.
Any suggestions on how to fetch huge record sets from tables of this size will help.
Regards,
Narayan
WITH CLAIM_DTL
AS ( SELECT
ICN_NUM,
MIN (FIRST_SRVC_DT) AS FIRST_SRVC_DT,
MAX (LAST_SRVC_DT) AS LAST_SRVC_DT,
MIN (PLC_OF_SRVC_CD) AS PLC_OF_SRVC_CD
FROM CCP_STG.CLM_DETAIL_PRESTG CD WHERE ACT_CD <>'D'
GROUP BY ICN_NUM),
CLAIM_HDR
AS (SELECT
ICN_NUM,
SBCR_ID,
MBR_ID,
MBR_FIRST_NAME,
MBR_MI,
MBR_LAST_NAME,
MBR_BIRTH_DATE,
GENDER_TYPE_CD,
SBCR_RLTNSHP_TYPE_CD,
SBCR_FIRST_NAME,
SBCR_MI,
SBCR_LAST_NAME,
SBCR_ADDR_LINE_1,
SBCR_ADDR_LINE2,
SBCR_ADDR_CITY,
SBCR_ADDR_STATE,
SBCR_ZIP_CD,
PRVDR_NUM,
CLM_PRCSSD_DT,
CLM_TYPE_CLASS_CD,
AUTHO_NUM,
TOT_BILLED_AMT,
HCFA_DRG_TYPE_CD,
FCLTY_ADMIT_DT,
ADMIT_TYPE,
DSCHRG_STATUS_CD,
FILE_BILLING_NPI,
CLAIM_LOCATION_CD,
CLM_RELATED_ICN_1,
SBCR_ID||0
|| MBR_ID
|| GENDER_TYPE_CD
|| SBCR_RLTNSHP_TYPE_CD
|| MBR_BIRTH_DATE
AS MBR_ENROLL_ID,
SUBSCR_INSGRP_NM ,
CAC,
PRVDR_PTNT_ACC_ID,
BILL_TYPE,
PAYEE_ASSGN_CODE,
CREAT_RUN_CYC_EXEC_SK,
PRESTG_INSRT_DT
FROM CCP_STG.CLM_HEADER_PRESTG P WHERE ACT_CD <>'D' AND SUBSTR(CLM_PRCSS_TYPE_CD,4,1) NOT IN ('1','2','3','4','5','6') ),
PROV AS ( SELECT DISTINCT
PROV_ID,
PROV_FST_NM,
PROV_MD_NM,
PROV_LST_NM,
PROV_BILL_ADR1,
PROV_BILL_CITY,
PROV_BILL_STATE,
PROV_BILL_ZIP,
CASE WHEN PROV_SEC_ID_QL='E' THEN PROV_SEC_ID
ELSE NULL
END AS PROV_SEC_ID,
PROV_ADR1,
PROV_CITY,
PROV_STATE,
PROV_ZIP
FROM CCP_STG.DAILY_PROV_PRESTG),
MBR_XREF AS (SELECT SUBSTR(MBR_ENROLL_ID,1,17)||DECODE ((SUBSTR(MBR_ENROLL_ID,18,1)),'E','1','S','2','D','3')||SUBSTR(MBR_ENROLL_ID,19) AS MBR_ENROLLL_ID,
NEW_MBR_FLG
FROM CCP_STG.CCP_MBRSHP_XREF)
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL,(select * FROM CCP_STG.CLM_DETAIL_PRESTG WHERE ACT_CD <>'D') CLM_DETAIL_PRESTG, CLAIM_HDR,CCP_STG.MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE
CLAIM_HDR.ICN_NUM = CLM_DETAIL_PRESTG.ICN_NUM
AND CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
AND CLM_DETAIL_PRESTG.FIRST_SRVC_DT >= 20110101
AND MBR_XREF.NEW_MBR_FLG = 'Y'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0
UNION ALL
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL, CLAIM_HDR,MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
-- AND TRUNC(CLAIM_HDR.PRESTG_INSRT_DT) = TRUNC(SYSDATE)
AND CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK = 123638.000000000000000
AND MBR_XREF.NEW_MBR_FLG = 'N'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0; -
Hi,
Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, but the table gains a record count of 3 million records per year.
The table has the following key Columns for the Select (sample Table)
Client_Visit
ID Number(12,0) --sequence generated number
EFF_DTE DATE --effective date of the customer (sometimes the client becomes invalid and he will be valid again)
Create_TS Timestamp(6)
Client_ID Number(9,0)
Cascade_Flg varchar2(1)
On most of the reports the records are fetched by Max(eff_dte) and Max(create_ts) and cascade flag ='Y'.
I have the following queries, but both of them are not cost-effective and take 8 minutes to display the records.
Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
          (SELECT MAX (au_subtyp2.create_ts)
             FROM au_subtyp au_subtyp2
            WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
              AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
              AND au_subtyp2.eff_dte =
                     (SELECT MAX (au_subtyp3.eff_dte)
                        FROM au_subtyp au_subtyp3
                       WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                         AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                         AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
Explain Plan
Plan hash value: 2534321861
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 3 | HASH JOIN | | 1404K| 121M| 19M| 33178 (1)| 00:06:39 |
|* 4 | HASH JOIN | | 307K| 16M| 8712K| 23708 (1)| 00:04:45 |
| 5 | VIEW | VW_SQ_1 | 307K| 5104K| | 13493 (1)| 00:02:42 |
| 6 | HASH GROUP BY | | 307K| 13M| 191M| 13493 (1)| 00:02:42 |
|* 7 | INDEX FULL SCAN | AUSU_PK | 2809K| 125M| | 13493 (1)| 00:02:42 |
|* 8 | INDEX FAST FULL SCAN| AUSU_PK | 2809K| 104M| | 2977 (2)| 00:00:36 |
|* 9 | TABLE ACCESS FULL | AU_SUBTYP | 1404K| 46M| | 5336 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
"AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
Code 2:
I already raised a thread a week back and Dom suggested the following query; it is cost-effective, but the performance is the same and it used the same amount of temp tablespace.
select au_id_k,pgm_struct_id_k from (
SELECT au_id_k
, pgm_struct_id_k
, ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
create_ts, eff_dte,exists_flg
FROM au_subtyp
WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
AND eff_dte <= TO_DATE('2012-12-31','YYYY-MM-DD')
) d where rn =1 and exists_flg = 'Y'
--Explain Plan
Plan hash value: 4039566059
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 1 | VIEW | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 2 | WINDOW SORT PUSHED RANK| | 2809K| 133M| 365M| 40034 (1)| 00:08:01 |
|* 3 | TABLE ACCESS FULL | AU_SUBTYP | 2809K| 133M| | 5345 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Thanks,
Vijay
Hi Justin,
Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment right now. The test environment holds 2809605 records (about 2.8 million).
The query output count is 281699 records (about 282 thousand) and the selectivity is 0.099. The number of distinct (create_ts, eff_dte, exists_flg) combinations is 2808905. I am sure the index scan is not going to help out much, as you said.
The core problem is that both queries use a lot of temp tablespace. When we use this query to join the tables, the other table has the same design as below, so the temp tablespace grows even bigger.
Both the production and test environments are 3-node RAC.
First Query...
CPU used by this session 4740
CPU used when call started 4740
Cached Commit SCN referenced 21393
DB time 4745
OS Involuntary context switches 467
OS Page reclaims 64253
OS System time used 26
OS User time used 4562
OS Voluntary context switches 16
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 2487
bytes sent via SQL*Net to client 15830
calls to get snapshot scn: kcmgss 37
consistent gets 52162
consistent gets - examination 2
consistent gets from cache 52162
enqueue releases 19
enqueue requests 19
enqueue waits 1
execute count 2
ges messages sent 1
global enqueue gets sync 19
global enqueue releases 19
index fast full scans (full) 1
index scans kdiixs1 1
no work - consistent read gets 52125
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time cpu 1
parse time elapsed 1
physical write IO requests 69
physical write bytes 17522688
physical write total IO requests 69
physical write total bytes 17522688
physical write total multi block requests 69
physical writes 2139
physical writes direct 2139
physical writes direct temporary tablespace 2139
physical writes non checkpoint 2139
recursive calls 19
recursive cpu usage 1
session cursor cache hits 1
session logical reads 52162
sorts (memory) 2
sorts (rows) 760
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 1
user calls 11
workarea executions - onepass 1
workarea executions - optimal 9
Second Query
CPU used by this session 1197
CPU used when call started 1197
Cached Commit SCN referenced 21393
DB time 1201
OS Involuntary context switches 8684
OS Page reclaims 21769
OS System time used 14
OS User time used 1183
OS Voluntary context switches 50
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 767
bytes sent via SQL*Net to client 15745
calls to get snapshot scn: kcmgss 17
consistent gets 23871
consistent gets from cache 23871
db block gets 16
db block gets from cache 16
enqueue releases 25
enqueue requests 25
enqueue waits 1
execute count 2
free buffer requested 1
ges messages sent 1
global enqueue get time 1
global enqueue gets sync 25
global enqueue releases 25
no work - consistent read gets 23856
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time elapsed 1
physical read IO requests 27
physical read bytes 6635520
physical read total IO requests 27
physical read total bytes 6635520
physical read total multi block requests 27
physical reads 810
physical reads direct 810
physical reads direct temporary tablespace 810
physical write IO requests 117
physical write bytes 24584192
physical write total IO requests 117
physical write total bytes 24584192
physical write total multi block requests 117
physical writes 3001
physical writes direct 3001
physical writes direct temporary tablespace 3001
physical writes non checkpoint 3001
recursive calls 25
session cursor cache hits 1
session logical reads 23887
sorts (disk) 1
sorts (memory) 2
sorts (rows) 2810365
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 2
user calls 11
workarea executions - onepass 1
workarea executions - optimal 5
Thanks,
Vijay
-
Handling internal table with more than 1 million records
Hi All,
We are facing a dump because the storage parameters are set wrongly.
Basically, the dump is due to the internal table having more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump is still happening.
Please advise: is there any other way to handle these kinds of internal tables?
P.S. We have tried the option of using a hashed table; this does not suit our scenario.
Thanks and Regards,
Vijay
Hi,
your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
Hope this code helps.
DATA: DB_CURSOR      TYPE CURSOR,
      IT_ZTABLE      TYPE STANDARD TABLE OF ZTABLE,
      G_PACKAGE_SIZE TYPE I.

G_PACKAGE_SIZE = 50000.
* Using a DB cursor to fetch data in batches.
OPEN CURSOR WITH HOLD DB_CURSOR FOR
  SELECT * FROM ZTABLE.
DO.
  FETCH NEXT CURSOR DB_CURSOR
    INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
    PACKAGE SIZE G_PACKAGE_SIZE.
  IF SY-SUBRC NE 0.
    CLOSE CURSOR DB_CURSOR.
    EXIT.
  ENDIF.
* Process IT_ZTABLE here; each fetch replaces its contents.
ENDDO.
Update 1 million records in batches of 1000
Hello,
I have to update 1 million records in groups of 1000. This is the best code I've got but it takes about 1hr and 15 minutes. (I'm also doing a commit every 5k for rollback purposes.) Does anybody have any better ideas?
thanks,
-Kevin
set time on
spool c:\testrowbatch.log
declare
vrow_id varchar2(15);
pcount number:=0;
icount number:=0;
vbnum number:=1;
CURSOR cs_01 is
select row_id from eim_activity where if_row_batch_num = '10';
BEGIN
OPEN cs_01;
LOOP
FETCH cs_01 INTO vrow_id;
IF cs_01%NOTFOUND THEN
dbms_output.put_line('End of Data');
CLOSE cs_01;
EXIT; -- without this, the next FETCH would hit an already-closed cursor
END IF;
Update eim_activity set if_row_batch_num = vbnum where row_id = vrow_id;
icount:=icount + 1;
pcount:=pcount + 1;
IF icount = 5000 THEN
commit;
icount:=0;
CLOSE cs_01;
OPEN cs_01;
END IF;
IF pcount = 1000 THEN
vbnum := vbnum +1;
pcount:=0;
end if;
END LOOP;
commit;
end;
There are three problems with committing inside the loop, particularly when you are updating the table that you have created the cursor on.
First, as everyone pointed out, it makes it slower.
Second, you run a serious risk of getting an ORA-01555 "snapshot too old" error, in which case you may not be able to reliably restart the procedure.
Third, it actually takes more rollback doing it that way than doing it in a single transaction.
The problem with Todd's approach is that you will update some records multiple times, whether you commit or not. You will set if_row_batch_num = 10 for 1000 records during the 10th iteration of the loop. These records will then be found in a subsequent iteration.
If you are doing this in PL/SQL just to increment the variable for every 1000 records, then this single update will solve that problem. It will be faster than your procedure, it will only update each row once per run, and it will always be an all or nothing update. You will still set if_row_batch_num = 10 for 1000 existing records.
UPDATE eim_activity
SET if_row_batch_num = FLOOR(rownum/1001) +1
WHERE if_row_batch_num = 10;
TTFN
John -
Update performance on a 38 million records table
Hi all,
I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned, and I just have to update one CHAR(1 byte) field and set it to 'N'.
The Database is 10g r2 running on a Unix TRU64.
The script I created has a LOOP over a CURSOR that bulk-collects 200,000 records per pass and does a FORALL to update the table by ROWID.
The problem is, in the performance tests that method took about 20 minutes to update 1 million rows, so it should take about 13 hours to update the whole table.
My question is: Is that any way to improve the performance?
The Script:
DECLARE
CURSOR C1
IS
SELECT ROWID
FROM RTG.TCLIENTE_RTG;
type rowidtab is table of rowid;
d_rowid rowidtab;
v_char char(1) := 'N';
BEGIN
OPEN C1;
LOOP
FETCH C1
BULK COLLECT INTO d_rowid LIMIT 200000;
FORALL i IN d_rowid.FIRST..d_rowid.LAST
UPDATE RTG.TCLIENTE_RTG
SET CLI_VALID_IND = v_char
WHERE ROWID = d_rowid(i);
COMMIT;
EXIT WHEN C1%NOTFOUND;
END LOOP;
CLOSE C1;
END;
Kind Regards,
Fabio
I'm just curious... Is this a new varchar2(1) column that has been added to that table? If so, will the value of this column remain 'N' into the future for the majority of the rows in that table?
Has this column specifically been introduced to support one of the business functions in your application, i.e. will it not be used everywhere the table is currently in use?
If your answers to the above questions contain many yeses, then why did you choose to add a column for this that needs to be initialized to 'N' for all existing rows?
Why not add a new single-column table for this requirement: the single column being the pk-column(s) of the existing table. And the meaning being if a pk is present in this new table, then the "CLI_VALID_IND" for this client is 'yes'. And if a pk is not present, then the "CLI_VALID_IND" for this client is 'no'.
That way you only have to add the new table and do nothing more. Of course, the SQL statements supporting the business logic of this new function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
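A minimal sketch of that suggested design, with hypothetical names (cli_valid, client_id); the presence of a key in the new table means "valid", its absence means "not valid":
-- The whole "set 38 million flags to 'N'" step disappears: absence is the default.
CREATE TABLE cli_valid (
  client_id NUMBER(9) NOT NULL,
  CONSTRAINT cli_valid_pk PRIMARY KEY (client_id)
);

-- "Is this client valid?" becomes an existence test instead of a flag column:
SELECT c.*
  FROM rtg.tcliente_rtg c
 WHERE EXISTS (SELECT 1
                 FROM cli_valid v
                WHERE v.client_id = c.client_id);
-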
Fetching many records all at once is no faster than fetching one at a time
Hello,
I am having a problem getting NI-Scope to perform adequately for my application. I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
I have the following software and equipment:
LabView 8.5
NI-Scope 3.4
PXI-1033 chassis
PXI-5105 digitizer card
DELL Latitude D830 notebook computer with 4 GB RAM.
I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
http://zone.ni.com/devzone/cda/epd/p/id/5273. The result was 101 MB/s.
I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered. I wish to acquire these individually triggered waveforms indefinitely. Furthermore, I wish to maximize the rate at which the triggers occur. In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16). The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 256 MB/s. So clearly, in this case the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from memory to my PC.
To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time. To compare the rates at which I can transfer records using the all-at-once versus one-at-a-time methods, I modified the niScope EX Timestamps.vi to allow me to choose between these transfer methods by changing the constant wired to the Fetch Number of Records property node to either -1 or 1 respectively. I also added a loop that ensures that all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate. This modified VI is attached to this post.
I have the following results for acquiring 10k records. My measurements are done using the Profile Performance and Memory Tool.
I am using a 250kHz analog pulse source.
Fetching 10000 records 1 record at a time, the niScope Multi Fetch Cluster takes a total time of 1546.9 milliseconds, or 155 microseconds per record.
Fetching 10000 records at once, the niScope Multi Fetch Cluster takes a total time of 1703.1 milliseconds, or 170 microseconds per record.
I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record, regardless of whether I transfer one at a time or all at once. But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as the number of records fetched at once increases.
With this, my application will be limited to a trigger rate of 5 kHz for running indefinitely, when it should be capable of closer to a 200 kHz trigger rate for extended periods of time. I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
Attachments:
Timestamps.vi 73 KB
Hi ESD,
Your numbers for testing the PXI bandwidth look good. A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks. This may decrease a little when working with MXI in comparison to using an embedded PXI controller. I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
Acquiring multiple triggered records is a little different. There are a few techniques that will help to make sure that you are able to fetch your data fast enough to keep up with the acquired data or desired reference trigger rate. You are certainly correct that it is more efficient to transfer larger amounts of data at once, instead of small amounts of data more frequently, as the overhead due to DMA transfers becomes significant.
The trend you saw, that fetching fewer records was more efficient, sounded odd. So I ran your example and tracked down what was causing that trend. I believe it is actually the for loop that you had in your acquisition loop. I made a few modifications to the application to display the total fetch time to acquire 10000 records. The best fetch time is when all records are pulled in at once. I left your code in the application but temporarily disabled the for loop to show the fetch performance. I also added a loop to ramp the fetch number up and graph the fetch times. I will attach the modified application as well as the fetch results I saw on my system for reference. When the for loop is enabled, the performance was worst at 1-record fetches; the fetch time dipped around 500 records/fetch and began to ramp up again as the records/fetch increased to 10000.
Note I am using the 2D I16 fetch as it is more efficient to keep the data unscaled. I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal so I used the trigger holdoff property to simulate a given trigger rate.
Hope this helps. I was working in LabVIEW 8.5; if you are working with an earlier version, let me know.
Attachments:
RecordFetchingTest.vi 143 KB
FetchTrend.JPG 37 KB -
Product.Category dimension has 4 child nodes: Accessories, Bikes, Clothing and Components. My problem is that when I have thousands of first-level nodes, my application takes a lot of time to load. Is there a way to fetch only, say, 100 records at a time, so that when I click a next button I get the next 100?
E.g., on the 1st click of a button I fetch 2 members:
WITH MEMBER [Measures].[ChildrenCount] AS
[Product].[Category].CurrentMember.Children.Count
SELECT [Measures].[ChildrenCount] ON 1
,TopCount([Product].[Category].Members, 2) on 0
FROM [Adventure Works]
This fetches only Accessories. Is there a way to fetch the next two records, Bikes and Clothing, on click, then Components on the next click, and so forth?
Hi Tsunade,
According to your description, there are thousands of members in your cube. It will take a long time to retrieve all the members at once, so in order to improve the performance you are looking for a way to fetch 100 records at a time, right? Based on my research, there is no such functionality to work around this requirement currently.
If you have any concern about this behavior, you can submit a feedback at
http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of service pack or product. Your feedback enables Microsoft to make software and services the best that they can be, Microsoft might consider to add this feature
in the following release after official confirmation.
Regards,
Charlie Liao
TechNet Community Support -
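For what it's worth, standard MDX does offer one building block for this kind of paging: the Subset function, which returns a slice of a set given a start index and a count (the client still has to issue a new query per page). A minimal sketch against the same query, with the page size of 2 from the example above:
WITH MEMBER [Measures].[ChildrenCount] AS
    [Product].[Category].CurrentMember.Children.Count
SELECT [Measures].[ChildrenCount] ON 1,
       // second page: skip the first 2 members, return the next 2
       Subset([Product].[Category].Members, 2, 2) ON 0
FROM [Adventure Works]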
Internal Table with 22 Million Records
Hello,
I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
Any tips on how I can optimize my coding? I have attached the Short-Dump.
Thanks,
SD
DATA: ls_source TYPE y_source_fields,
ls_target TYPE y_target_fields.
DATA: it_source_tmp TYPE yt_source_fields,
et_target_tmp TYPE yt_target_fields.
TYPES: BEGIN OF IT_TAB1,
BPARTNER TYPE /BI0/OIBPARTNER,
DATEBIRTH TYPE /BI0/OIDATEBIRTH,
ALTER TYPE /GKV/BW01_ALTER,
ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
END OF IT_TAB1.
DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
WITH NON-UNIQUE KEY BPARTNER,
WA_XX_TAB1 TYPE IT_TAB1.
it_source_tmp[] = it_source[].
SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
DELETE ADJACENT DUPLICATES FROM it_source_tmp
COMPARING /B99/S_BWPKKD.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
LOOP AT it_source INTO ls_source.
READ TABLE IT_XX_TAB1
INTO WA_XX_TAB1
WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
IF sy-subrc = 0.
ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
ENDIF.
MOVE-CORRESPONDING ls_source TO ls_target.
APPEND ls_target TO et_target.
CLEAR ls_target.
ENDLOOP.Hi SD,
Please put the SELECT query inside the condition shown below.
IF it_source_tmp[] IS NOT INITIAL.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
ENDIF.
This will solve your performance issue. Previously, when the internal table it_source_tmp had no records, the SELECT was fetching all the records from the database. Now, with this condition, it will not select any records if the driver table contains no records.
Regards,
Pravin -
ABAP Proxy for 10 million records
Hi,
I am running an extract program for my inventory, which has about 10 million records.
I am sending the data through an ABAP proxy, and the job is cancelled due to a memory problem.
I am breaking up the records while sending them through the ABAP proxy:
I am calling the proxy about 2000 times, breaking up the number of records per call.
Do you think the ABAP proxy would be able to handle 10 million records?
Any advice would be highly appreciated.
Thanks and Best Regards,
M
Hi,
I am facing the same problem. My temporary solution is to break up the selected data into portions of 30,000 records and send those portions by ABAP proxy to PI.
I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
Although breaking up the data seems to work for me now, it gives me another issue: I have to combine the data back again in PI.
So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
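A minimal sketch of that chunking, with assumed names (zinventory_row as the extract row type; the proxy call itself is left as a comment, since the generated proxy class and message types differ per interface):
CONSTANTS lc_portion TYPE i VALUE 30000.
DATA: lt_all   TYPE STANDARD TABLE OF zinventory_row, " full extract
      lt_chunk TYPE STANDARD TABLE OF zinventory_row.

WHILE lt_all IS NOT INITIAL.
* move the next portion of rows into the chunk table
  APPEND LINES OF lt_all FROM 1 TO lc_portion TO lt_chunk.
  DELETE lt_all FROM 1 TO lc_portion.
* call the generated outbound proxy method here, passing lt_chunk
  CLEAR lt_chunk.
ENDWHILE.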
Regards,
Arjan Aalbers -
How to update more than 5 million records without error message ORA-00257:
Hi ,
I need to update some columns in my table, which contains about 5 million records.
I've already tried this:
Update AAA_CDR
Set RoamFload = Null;
but the problem is I got the error message ("ORA-00257: archiver error. Connect internal only, until freed.") and the update consumed about 6 hours with no results.
Then I ran the command (Alter system set db_recovery_file_dest_size=50G) and the problem was solved,
but I need to update about 15 columns of this table to be null. What should I do to overcome this message and update the table in a reasonable time?
Please help me.
The best way would be to allocate sufficient disk space for your archive log destination. Your database is not sized properly. The NOLOGGING option will not do much for you, because it only applies to direct load operations where the data inserted into a nologging table is selected from another table. UPDATE will be logged regardless of the NOLOGGING status. Here is the quote from the manual:
<quote>
LOGGING|NOLOGGING
LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
INSERT operations against a nonpartitioned index, a range or hash index partition, or
all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
or not logged (NOLOGGING) in the redo log file.
In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
and to record dictionary changes). When applied during media recovery, the extent
invalidation records mark a range of blocks as logically corrupt, because the redo data
is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
after the operation in NOLOGGING mode.
If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
operation in LOGGING mode will re-create the index. However, media recovery from a backup
taken before an operation in NOLOGGING mode will not re-create the index.
An index segment can have logging attributes different from those of the base table and
different from those of other index segments for the same base table.
</quote>
If you are really desperate, you can try the following undocumented/unsupported command:
ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 in version 10g. DO NOT USE IN A PRODUCTION ENVIRONMENT!!!!
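Separately, whatever is done about archive space, the 15 columns should be set to NULL in a single pass rather than 15 separate full-table UPDATEs; a minimal sketch, with col_2 through col_15 as placeholder column names:
-- one full-table pass instead of fifteen; every extra UPDATE would
-- repeat the full scan and generate its own round of redo and undo
UPDATE aaa_cdr
   SET roamfload = NULL,
       col_2     = NULL,
       -- ... repeat for the remaining columns ...
       col_15    = NULL;
COMMIT;
-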
Performance issues in million records table
I have a scenario wherein I have some 20 tables, each with a million or more records. [ Historical ]
On average I add 1500 - 2500 records a day, i.e. close to a million records a year on average.
I am looking for archival solutions for these master tables.
Operations on the archival tables would be limited to reads.
Expected benefits:
The user base would be around 2500 users in total, with 300 - 500 parallel users at the max.
Very limited usage of historical data compared to operations on current data.
Performance of operations on current data is more important than performance on historical data.
Environment - Oracle 9i - Should be migrating to Oracle 10g sooner.
Some solutions i cud think of ...
[ 1 ] Put every archived record into an archival table and fetch it from there,
i.e. clearly distinguish searches as current or archival prior to searching.
The impact, I feel, is that the archival tables are again ever-increasing, by approximately a million records a year.
[ 2 ] Put records into various archival tables, each differentiated by year.
For instance, every year I replicate the set of tables and that year's data goes into that table.
But how do I do a fetch?
Note - I do have a unique way of identifying each record in my master table: the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
The major concern is that I currently get very good response times based on indexing and other common techniques, and I do not want this to degrade in a year or more; rather, I expect to improve on the current response timings and ensure they continue over a period of time.
Also, I don't want to make changes to every query in my app until there is no way out.
Hi,
Read the following documentation link about Partitioning in Oracle.
Best Regards,
Alex
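In that spirit, a minimal sketch of range partitioning keyed off the YYYYMMXXXXXXXXXX primary key format mentioned above (hypothetical table and column names; on 9i the yearly partitions must be added by hand, since interval partitioning only arrives in 11g):
-- The leading YYYYMM digits of the key put each year's rows in their own
-- partition, so queries on current data prune the historical partitions.
CREATE TABLE master_txn (
  id       NUMBER(16) NOT NULL,   -- e.g. 2008070000562330
  txn_data VARCHAR2(100),         -- placeholder for the real columns
  CONSTRAINT master_txn_pk PRIMARY KEY (id)
)
PARTITION BY RANGE (id)
( PARTITION p2007 VALUES LESS THAN (2008000000000000),
  PARTITION p2008 VALUES LESS THAN (2009000000000000),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);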