Is it OK to have 42 million records in a single fact table?
Hello,
We have three outstanding fact tables and we need to add one more fact type. We are debating whether to create two different fact tables or to put the values into one of the existing, similar fact tables; if we combine them, the table grows to about 42 million records. So my question is: should we keep a single fact table with all the records, or break it down into two different ones? Thanks!
I am not sure what an "outstanding fact" or a "fact type" is. A 42M-row fact table doesn't necessarily indicate you are doing something wrong, although it does sound odd. I would expect most facts to be small, as they should carry aggregated measures to speed up reports. In some cases you may want to drill down to the detailed transaction level, in which case you may find these large facts. But care should be taken not to let users query this fact without using the "transaction ID", which obviously should be indexed and should guarantee that queries will be quick.
Guessing from your post (as it is not clear or descriptive enough), it would seem that you are adding a new dimension to your fact, and that will cause the fact to increase its row count to 42M. That probably means you are changing the granularity of the fact, which may or may not be correct, depending on your model.
Similar Messages
-
How to identify missing records in a single-column table?
How do you identify missing records in a single-column table?
The column consists of numbers in an ordered manner, but some numbers have been deleted from the table at random, and we need to identify those rows. Something like:
WITH t AS (
  SELECT  1 id FROM dual UNION ALL
  SELECT  2 id FROM dual UNION ALL
  SELECT  3 id FROM dual UNION ALL
  SELECT  5 id FROM dual UNION ALL
  SELECT  8 id FROM dual UNION ALL
  SELECT 10 id FROM dual UNION ALL
  SELECT 11 id FROM dual
)
-- end of on-the-fly data sample
SELECT '[' || (id + 1) || ' - ' || (next_id - 1) || ']' gap
  FROM (
        SELECT id,
               LEAD(id, 1, id + 1) OVER (ORDER BY id) next_id
          FROM t
       )
 WHERE id != next_id - 1;

GAP
[4 - 4]
[6 - 7]
[9 - 9]
SY.
P.S. I assume the sequence's lower and upper limits are always present; otherwise the query needs a little adjustment. -
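The same LEAD-style comparison can be sketched outside the database; a minimal Python equivalent (illustrative only), which reports each missing range from a list of IDs:

```python
def find_gaps(ids):
    """Return (low, high) ranges missing from a list of IDs.

    Mirrors the LEAD-based query above: compare each ID with its
    successor and report the range in between when they are not
    consecutive.
    """
    gaps = []
    ordered = sorted(ids)
    for current, nxt in zip(ordered, ordered[1:]):
        if nxt != current + 1:
            gaps.append((current + 1, nxt - 1))
    return gaps

print(find_gaps([1, 2, 3, 5, 8, 10, 11]))  # [(4, 4), (6, 7), (9, 9)]
```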
Loading 3 million records into the database via external table
I am loading 3+ million records into the database using external tables. It is a very slow process. How can I make it faster?
Hi,
1. Break the file down into several files, say 10 files (300,000 records each).
2. Disable all indexes on the target table, if possible.
3. Disable foreign keys if possible; you can check these later using an exceptions table.
4. Make sure FREELISTS and INITRANS are 10 for the target table, if the table resides in a manual segment space management tablespace.
5. Create 10 processes, each reading from its own file; run these 10 processes concurrently, and use LOG ERRORS with an unlimited reject limit so the insert continues until it finishes.
Hope this helps. -
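Steps 1 and 5 above (split the file, one loader process per chunk) can be sketched in Python; the file names and chunking policy here are assumptions for illustration, not part of the original post:

```python
def split_file(path, n_parts, prefix="part_"):
    """Split a flat file into n_parts roughly equal chunk files so
    each loader process can read its own file. Returns the list of
    chunk file names written."""
    with open(path) as src:
        lines = src.readlines()
    size = -(-len(lines) // n_parts)  # ceiling division
    names = []
    for i, start in enumerate(range(0, len(lines), size)):
        name = f"{prefix}{i:02d}.dat"
        with open(name, "w") as out:
            out.writelines(lines[start:start + size])
        names.append(name)
    return names
```

Each resulting chunk file can then be attached to its own external table (or loader session) and run concurrently.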
Reg: Joining all internal table records into a single internal table
Hi all
I have 5 internal tables and I want to put all their entries into a single internal table. My requirement is that for each and every record it has to go through all the internal tables; if an entry is missing in one, it should continue with the other internal tables, leave the missing fields blank, and display the rest of the contents. Can anyone please give me some logic for how to do this?
Thanks in advance
Don't have time or will to deliver turnkey solutions, but here is a frame:
LOOP AT itab1...
READ TABLE itab2 WITH TABLE KEY... (fields linking itab1 and itab2)
READ TABLE itab3 WITH TABLE KEY... (fields linking itab1 and itab3)
LOOP AT itab4 WHERE... (fields linking itab1 and itab4)
READ TABLE itab5 WITH TABLE KEY... (fields linking itab4 and itab5)
MOVE-CORRESPONDING... (all five work areas to target work area)
APPEND itabtarget...
ENDLOOP.
ENDLOOP.
So use READ when there is a 1:1 relationship (e.g. a check table entry), and LOOP when there is a 1:N relationship (e.g. items for a header).
Thomas -
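For readers outside ABAP, Thomas's frame — a keyed lookup for 1:1 relations, iteration for 1:N — can be sketched in Python (table and field names are invented for illustration):

```python
def merge(itab1, itab2_by_key, itab3_by_key, itab4, itab5_by_key):
    """Dicts stand in for READ TABLE ... WITH TABLE KEY (1:1 lookups),
    a filtered iteration for LOOP AT ... WHERE (1:N). Missing entries
    contribute nothing, leaving those fields blank, as required."""
    target = []
    for r1 in itab1:
        r2 = itab2_by_key.get(r1["k2"], {})  # 1:1 - blank if missing
        r3 = itab3_by_key.get(r1["k3"], {})  # 1:1 - blank if missing
        for r4 in (r for r in itab4 if r["k1"] == r1["k1"]):  # 1:N
            r5 = itab5_by_key.get(r4["k5"], {})
            # MOVE-CORRESPONDING of all five work areas, then APPEND
            target.append({**r1, **r2, **r3, **r4, **r5})
    return target
```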
Regarding reading multiple records into a single internal table..
Hi experts,
I need your help; I have a requirement like this:
I will have an input file like this:
D 123 Suresh 12/01/2008
E ven sha 5432
E ven sha 5432
D 153 Sachin 11/01/2008
E ven sha 5432
Now I need to consider everything from a D record up to (but not including) the next D record as a single record, and I need to prepare a separate Excel file.
The number of E records can be up to 9, so we can't expect it to always be 2, 3 or 4.
How can I do this upload and processing?
give some idea.
Thanks,
Suresh
Hi,
Once you transfer the values from the input file to an internal table,
loop through all records of the internal table and
check the first letter using an offset,
e.g. IF itab-field+0(1) = 'D'.
... ELSEIF itab-field+0(1) = 'E'.
ENDIF.
Populate the work area until you find the next 'D'; once you find 'D' again, append the work area to the internal table and clear the work area.
Regards,
Rahul -
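Rahul's offset-based grouping can be sketched in Python; the D/E prefixes follow the sample file above, everything else is illustrative:

```python
def group_records(lines):
    """Group a 'D' header line with its following 'E' lines (up to the
    next 'D'). Returns a list of groups, each a list of lines starting
    with the D record. Works for any number of E lines per D."""
    groups = []
    for line in lines:
        if line.startswith("D"):
            groups.append([line])       # next 'D' starts a new record
        elif line.startswith("E") and groups:
            groups[-1].append(line)     # attach E line to current D
    return groups

data = [
    "D 123 Suresh 12/01/2008",
    "E ven sha 5432",
    "E ven sha 5432",
    "D 153 Sachin 11/01/2008",
    "E ven sha 5432",
]
print(len(group_records(data)))  # 2
```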
OBIEE 11g: Fact table does not have a properly defined primary key
Hi,
We have two fact tables and two dimension tables.
We have joined the tables like
D1-->F1
D2-->F2
D1-->D2
We don't have any hierarchies.
It is throwing an error in the consistency check:
[nQSError: 15001] Could not load navigation space for subject area ABC.
[nQSError: 15033] Logical table Fact1 does not have a properly defined primary key.
It is not like a star schema; it's like a snowflake schema. How do we define a primary key for the fact table?
Thanks.
Hi,
My suggestion would be to bring both facts into the same logical table source and have a single fact table in the BMM layer joined with multiple dimensions.
Build a dimension hierarchy for the dimensions, and then in the content tab of the logical layer mapping, map the dimensions to the fact tables at the detail level/grand total.
Refer the below link-
http://108obiee.blogspot.com/2009/08/joining-two-fact-tables-with-different.html
Hope this helps.
Thanks,
Satya -
Reading data from a table which can have millions of records
Hi,
I need to write a query in my report that fetches records from a Z table with around 20 lakh (= 2 million) records. As far as I know, we cannot handle such a huge amount of data in our internal tables, because the size of an internal table is limited and can only be increased by the Basis team.
So can anybody tell me an approach I should follow to split the data from this table?
Any other approach is also welcome.
Please reply when you have the time.
Thanks And Regards,
Mayank
Edited by: Thomas Zloch on Mar 6, 2010 9:25 PM
Hi Mayank,
Reduce the data selected to the fields you really need and avoid SELECT *. See the online help on SELECT, addition PACKAGE SIZE.
SELECT field1 field2 field3
  FROM (your Z table)
  INTO CORRESPONDING FIELDS OF TABLE lt_table
  PACKAGE SIZE nnn
  WHERE ...
* process lt_table here
ENDSELECT.
Regards,
Clemens
Edited by: Rob Burbank on Mar 9, 2010 12:29 PM -
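The PACKAGE SIZE idea — process the result set in fixed-size portions instead of one huge internal table — can be sketched generically in Python (illustrative only):

```python
def in_packages(rows, package_size):
    """Yield rows in fixed-size packages, the way SELECT ... PACKAGE
    SIZE nnn hands the program one filled internal table per round
    trip. Works over any iterable, so the full result set never has
    to fit in memory at once."""
    package = []
    for row in rows:
        package.append(row)
        if len(package) == package_size:
            yield package       # process this package, then discard
            package = []
    if package:                 # final partial package
        yield package

for pkg in in_packages(range(7), 3):
    print(pkg)  # [0, 1, 2] then [3, 4, 5] then [6]
```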
Tuning the SQL query when we have 30 million records
Hi Friends,
I have a query which takes around 25 to 30 minutes to retrieve 9 million records.
Oracle version=11.2.0.2
OS=Solaris 10 64bit
query details
CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
AS
SELECT A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
a.lastExecutionRole, a.MDEntryID, a.PegOffset,
a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
A.ISPEX, A.CLEARINGHANDLING, B.qsid
FROM tibex_Order A,
tibex_Participant b
WHERE a.participantID = b.participantID
AND (A.MessageSequence, A.OrderID) IN (
SELECT max(C.MessageSequence), C.OrderID
FROM tibex_Order C
WHERE LastInstRejectCode = 'OK'
GROUP BY C.OrderID)
AND a.OrderStatus IN (
SELECT OrderStatus
FROM tibex_orderStatusEnum
WHERE ShortDesc IN (
'ORD_OPEN', 'ORD_EXPIRE', 'ORD_CANCEL', 'ORD_FILLED','ORD_CREATE','ORD_PENDAMD','ORD_PENDCAN'))
UNION ALL
SELECT A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
a.lastExecutionRole, A.MDEntryID, a.PegOffset,
a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
A.ISPEX, A.CLEARINGHANDLING, B.qsid
FROM tibex_Order A,
tibex_Participant b
WHERE a.participantID = b.participantID
AND orderstatus in (
SELECT orderstatus
FROM tibex_orderStatusEnum
WHERE ShortDesc in ('ORD_REJECT'))
AND 1 IN (
SELECT count(*)
FROM tibex_order c
WHERE c.orderid=a.orderid
AND c.instrumentID=a.instrumentID);
/
I tried modifying the query; it was quicker (6 minutes), but it did not return the same results. Can somebody check where I am going wrong?
CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
AS
WITH REJ AS (
SELECT ROWID RID
FROM TIBEX_ORDER
WHERE ORDERSTATUS = (SELECT ORDERSTATUS
FROM TIBEX_ORDERSTATUSENUM
WHERE SHORTDESC = 'ORD_REJECT')),
REJ1 AS (
SELECT ROWID RID
FROM TIBEX_ORDER
WHERE ORDERSTATUS NOT IN (SELECT ORDERSTATUS
FROM TIBEX_ORDERSTATUSENUM
WHERE SHORTDESC = 'ORD_NOTFND'
OR SHORTDESC = 'ORD_REJECT'))
SELECT O.*,
P.QSID
FROM TIBEX_ORDER O,
TIBEX_PARTICIPANT P
WHERE O.PARTICIPANTID = P.PARTICIPANTID
AND O.ROWID IN (
      SELECT RID
      FROM (
            SELECT ROWID RID,
                   ORDERSTATUS,
                   RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE ASC) R
            FROM TIBEX_ORDER
           )
      WHERE R = 1
        AND RID IN (SELECT RID FROM REJ))
UNION ALL
SELECT O.*,
P.QSID
FROM TIBEX_ORDER O,
TIBEX_PARTICIPANT P
WHERE O.PARTICIPANTID = P.PARTICIPANTID
AND O.ROWID IN (
      SELECT RID
      FROM (
            SELECT ROWID RID,
                   ORDERSTATUS,
                   RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE DESC) R
            FROM TIBEX_ORDER
           )
      WHERE R = 1
        AND RID IN (SELECT RID FROM REJ1));
Regards
NM
Hi Satish,
CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
(ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, COMPARISONPRICE,
ENTEREDPRICETYPE, ISPEX, CLEARINGHANDLING, QSID)
AS
SELECT A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
a.lastExecutionRole, a.MDEntryID, a.PegOffset,
a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
A.ISPEX, A.CLEARINGHANDLING, B.qsid
FROM tibex_Order A,
tibex_Participant b
WHERE a.participantID = b.participantID
AND (A.MessageSequence, A.OrderID) IN ( SELECT MAX (C.MessageSequence), C.OrderID
FROM tibex_Order C
WHERE c.LastInstRejectCode = 'OK'
and a.OrderID=c.OrderID
GROUP BY C.OrderID)
AND a.OrderStatus IN (2,4,5,6,1,9,10)
UNION ALL
SELECT A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
a.lastExecutionRole, A.MDEntryID, a.PegOffset,
a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
A.ISPEX, A.CLEARINGHANDLING, B.qsid
FROM tibex_Order A,
tibex_Participant b
WHERE a.participantID = b.participantID
AND orderstatus=3
AND 1 IN (
SELECT count(*)
FROM tibex_order c
WHERE c.orderid=a.orderid
AND c.instrumentID=a.instrumentID);
select * from TIBEX_ORDERSBYQSIDVIEW where participantid='NITE';
Current SQL using a temp segment; look at the TEMPSEG_SIZE_MB column:
SID TIME OPERATION ESIZE MEM MAX MEM PASS TEMPSEG_SIZE_MB
183 11/10/2011:13:38:44 HASH-JOIN 43 43 1556 1 1024
183 11/10/2011:13:38:44 GROUP BY (HASH) 2043 2072 2072 0 4541
Edited by: NM on 11-Oct-2011 04:38 -
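Both view variants ultimately isolate, per OrderID, the row with the highest MessageSequence. A simplified Python sketch of that reduction (field names taken from the view; all joins and status filters omitted):

```python
def latest_per_order(rows):
    """Keep, for each OrderID, the row with the highest
    MessageSequence - the set the MAX()/RANK() subqueries isolate.
    A single pass with a dict replaces the self-join on the 30M table."""
    best = {}
    for row in rows:
        key = row["OrderID"]
        if key not in best or row["MessageSequence"] > best[key]["MessageSequence"]:
            best[key] = row
    return list(best.values())
```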
SQL Query to fetch records from tables which have 75+ million records
Hi,
I have the explain plan for a SQL statement; I'd appreciate your suggestions to improve it.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 340 | 175K| 19075 |
| 1 | TEMP TABLE TRANSFORMATION | | | | |
| 2 | LOAD AS SELECT | | | | |
| 3 | SORT GROUP BY | | 32M| 1183M| 799K|
| 4 | TABLE ACCESS FULL | CLM_DETAIL_PRESTG | 135M| 4911M| 464K|
| 5 | LOAD AS SELECT | | | | |
| 6 | TABLE ACCESS FULL | CLM_HEADER_PRESTG | 1 | 274 | 246K|
PLAN_TABLE_OUTPUT
| 7 | LOAD AS SELECT | | | | |
| 8 | SORT UNIQUE | | 744K| 85M| 8100 |
| 9 | TABLE ACCESS FULL | DAILY_PROV_PRESTG | 744K| 85M| 1007 |
| 10 | UNION-ALL | | | | |
| 11 | SORT UNIQUE | | 177 | 97350 | 9539 |
| 12 | HASH JOIN | | 177 | 97350 | 9538 |
| 13 | HASH JOIN OUTER | | 3 | 1518 | 9533 |
| 14 | HASH JOIN | | 1 | 391 | 8966 |
| 15 | TABLE ACCESS BY INDEX ROWID | CLM_DETAIL_PRESTG | 1 | 27 | 3 |
| 16 | NESTED LOOPS | | 1 | 361 | 10 |
| 17 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
PLAN_TABLE_OUTPUT
| 18 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 19 | VIEW | | 1 | 259 | 2 |
| 20 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 21 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 22 | TABLE ACCESS BY INDEX ROWID| CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
| 23 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 24 | INDEX RANGE SCAN | CLM_DETAIL_PRESTG_IDX | 6 | | 2 |
| 25 | VIEW | | 32M| 934M| 8235 |
| 26 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 27 | VIEW | | 744K| 81M| 550 |
| 28 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
PLAN_TABLE_OUTPUT
| 29 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 227K| 5 |
| 30 | SORT UNIQUE | | 163 | 82804 | 9536 |
| 31 | HASH JOIN | | 163 | 82804 | 9535 |
| 32 | HASH JOIN OUTER | | 3 | 1437 | 9530 |
| 33 | HASH JOIN | | 1 | 364 | 8963 |
| 34 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
| 35 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 36 | VIEW | | 1 | 259 | 2 |
| 37 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 38 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 39 | TABLE ACCESS BY INDEX ROWID | CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
PLAN_TABLE_OUTPUT
| 40 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 41 | VIEW | | 32M| 934M| 8235 |
| 42 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 43 | VIEW | | 744K| 81M| 550 |
| 44 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
| 45 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 149K| 5 |
The CLM_DETAIL_PRESTG table has 100 million records and the CLM_HEADER_PRESTG table has 75 million records.
Any suggestions on how to fetch huge record sets from tables of this size would help.
Regards,
Narayan
WITH CLAIM_DTL
AS ( SELECT
ICN_NUM,
MIN (FIRST_SRVC_DT) AS FIRST_SRVC_DT,
MAX (LAST_SRVC_DT) AS LAST_SRVC_DT,
MIN (PLC_OF_SRVC_CD) AS PLC_OF_SRVC_CD
FROM CCP_STG.CLM_DETAIL_PRESTG CD WHERE ACT_CD <>'D'
GROUP BY ICN_NUM),
CLAIM_HDR
AS (SELECT
ICN_NUM,
SBCR_ID,
MBR_ID,
MBR_FIRST_NAME,
MBR_MI,
MBR_LAST_NAME,
MBR_BIRTH_DATE,
GENDER_TYPE_CD,
SBCR_RLTNSHP_TYPE_CD,
SBCR_FIRST_NAME,
SBCR_MI,
SBCR_LAST_NAME,
SBCR_ADDR_LINE_1,
SBCR_ADDR_LINE2,
SBCR_ADDR_CITY,
SBCR_ADDR_STATE,
SBCR_ZIP_CD,
PRVDR_NUM,
CLM_PRCSSD_DT,
CLM_TYPE_CLASS_CD,
AUTHO_NUM,
TOT_BILLED_AMT,
HCFA_DRG_TYPE_CD,
FCLTY_ADMIT_DT,
ADMIT_TYPE,
DSCHRG_STATUS_CD,
FILE_BILLING_NPI,
CLAIM_LOCATION_CD,
CLM_RELATED_ICN_1,
SBCR_ID||0
|| MBR_ID
|| GENDER_TYPE_CD
|| SBCR_RLTNSHP_TYPE_CD
|| MBR_BIRTH_DATE
AS MBR_ENROLL_ID,
SUBSCR_INSGRP_NM ,
CAC,
PRVDR_PTNT_ACC_ID,
BILL_TYPE,
PAYEE_ASSGN_CODE,
CREAT_RUN_CYC_EXEC_SK,
PRESTG_INSRT_DT
FROM CCP_STG.CLM_HEADER_PRESTG P WHERE ACT_CD <>'D' AND SUBSTR(CLM_PRCSS_TYPE_CD,4,1) NOT IN ('1','2','3','4','5','6') ),
PROV AS ( SELECT DISTINCT
PROV_ID,
PROV_FST_NM,
PROV_MD_NM,
PROV_LST_NM,
PROV_BILL_ADR1,
PROV_BILL_CITY,
PROV_BILL_STATE,
PROV_BILL_ZIP,
CASE WHEN PROV_SEC_ID_QL='E' THEN PROV_SEC_ID
ELSE NULL
END AS PROV_SEC_ID,
PROV_ADR1,
PROV_CITY,
PROV_STATE,
PROV_ZIP
FROM CCP_STG.DAILY_PROV_PRESTG),
MBR_XREF AS (SELECT SUBSTR(MBR_ENROLL_ID,1,17)||DECODE ((SUBSTR(MBR_ENROLL_ID,18,1)),'E','1','S','2','D','3')||SUBSTR(MBR_ENROLL_ID,19) AS MBR_ENROLLL_ID,
NEW_MBR_FLG
FROM CCP_STG.CCP_MBRSHP_XREF)
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL, (SELECT * FROM CCP_STG.CLM_DETAIL_PRESTG WHERE ACT_CD <> 'D') CLM_DETAIL_PRESTG, CLAIM_HDR, MBR_XREF, PROV, CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT, CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE
CLAIM_HDR.ICN_NUM = CLM_DETAIL_PRESTG.ICN_NUM
AND CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
AND CLM_DETAIL_PRESTG.FIRST_SRVC_DT >= 20110101
AND MBR_XREF.NEW_MBR_FLG = 'Y'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0
UNION ALL
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL, CLAIM_HDR,MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
-- AND TRUNC(CLAIM_HDR.PRESTG_INSRT_DT) = TRUNC(SYSDATE)
AND CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK = 123638.000000000000000
AND MBR_XREF.NEW_MBR_FLG = 'N'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0; -
I am designing a table, for which I am loading data from different tables via joins. I have a Status column with about 16 different statuses drawn from different tables; for each case there is a condition, and if it is satisfied, that particular status is shown in the Status column, so I need to write the query as 16 different cases.
Now, my question is: what is the best way to write these cases so they satisfy all the conditions and still load the data quickly? The data mostly comes from big tables of about 7 million records, and if we write the logic as a CASE it will scan the table for each case, about 16 times. How can I make this faster? Can anyone help me out?
Here is the code I have written to get the data from temp tables, which draw from the 7-million-row tables filtered to records from 2013. It takes more than an hour to run. I am posting the part of the code which is running slow, mainly
the Status column.
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.TAX_ID IN
(SELECT DISTINCT zxc.TIN
FROM .dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT DISTINCT [Subsystem Name]
FROM .dbo.SQS_Provider_Tracking zxc
WHERE z.TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
-- --Is Puerto Rico of Lifesynch
WHEN z.TAX_ID IN
(SELECT DISTINCT a.TAX_ID
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT DISTINCT a.Bucket
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.TAX_ID = z.TAX_ID)
--**Amendment Mailed**
WHEN z.TAX_ID IN
(SELECT DISTINCT b.PROV_TIN
FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN
(SELECT DISTINCT b.Mailing
FROM .dbo.SQS_Mailed_TINs_010614 b
WHERE z.TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3-5**
WHEN z.TAX_ID In
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (3rd Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (4th Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (5th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (5th Wave)'
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
THEN 'Top Objecting Systems'
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H'
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.TAX_ID = obj.TIN
and z.Hosp_Ind = 'P'))
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Rejecting Hospitals'
--****Rejecting Physciains****
WHEN
(z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE z.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
WHERE
(z.TAX_ID = h.TAX_ID)
OR h.SMG_ID IS NOT NULL)
and z.Hosp_Ind = 'H'
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When z.TAX_ID in
(select distinct
p.TAX_ID
from dbo.SQS_CoC_Potential_Mail_List p
where p.amendmentrights <> 'Unilateral'
AND z.TAX_ID = p.TAX_ID)
THEN 'Non-Objecting Bilateral Physicians'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.TAX_ID)
THEN 'More Research Needed'
WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [STATUS Column]
Please advise on this. -
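One way to avoid rescanning the lookup tables once per CASE branch is to materialize each TIN set once and then bucket rows in a single pass. A simplified Python sketch of that idea (the bucket names and sets here are invented for illustration; first match wins, as in a CASE expression):

```python
def bucket_status(rows, buckets):
    """buckets: an ordered list of (status_label, tin_set) pairs.
    Each row gets the first label whose set contains its TAX_ID; the
    lookup sets are built once up front instead of being re-scanned
    for every CASE branch of every row."""
    out = []
    for row in rows:
        label = next(
            (name for name, tins in buckets if row["TAX_ID"] in tins),
            "Market Review/Preparing to Mail",  # the ELSE branch
        )
        out.append((row["TAX_ID"], label))
    return out
```

In SQL the equivalent move is to pre-stage each subquery's TINs into an indexed temp table (or use LEFT JOINs to one pre-built mapping table) so the 7M-row table is scanned once, not 16 times.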
Which is the best way to upload BPs for 3+ million records?
Hello Gurus,
We have 3+ million records of data to be uploaded into CRM, coming from Informatica. Which is the best way to upload the data into CRM that takes the least time and is easiest? Please help me.
Thanks,
Naresh
Use BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.
-
Best way to insert millions of records in SQL Azure on a daily basis?
I am maintaining millions of records in SQL Server 2008 R2 and now intend to migrate them to SQL Azure.
In the existing system on SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then insert into the table, which holds approx. 26 million records, in about 30 minutes on a daily basis (as the system demands).
When I migrate these to SQL Azure, I am unable to perform these operations as quickly as I did on SQL 2008; sometimes I get a request timeout error.
While searching for a faster way, many people suggest batch processing or BCP, but batch processing is not suitable in my case because it takes too long to insert those records. I need a faster and more efficient approach on SQL Azure.
Hoping for some good suggestions.
Thanks in advance :)
Ashish Narnoli
+1 to Frank's advice.
Also, please upgrade your Azure SQL Database server to V12, as you will get higher performance on the premium tiers. As you scale up your database for the bulk insert, remember that SQL Database charges by the hour; to minimize costs, scale back down when the inserts have completed. -
Internal Table with 22 Million Records
Hello,
I am faced with the problem of working with an internal table which has 22 million records, and it keeps growing. The following code was written in an APD. I have tried every possible way to optimize the code using sorted/hashed tables, but it ends in a dump as a result of insufficient memory.
Any tips on how I can optimize my code? I have attached the short dump.
Thanks,
SD
DATA: ls_source TYPE y_source_fields,
ls_target TYPE y_target_fields.
DATA: it_source_tmp TYPE yt_source_fields,
et_target_tmp TYPE yt_target_fields.
TYPES: BEGIN OF IT_TAB1,
BPARTNER TYPE /BI0/OIBPARTNER,
DATEBIRTH TYPE /BI0/OIDATEBIRTH,
ALTER TYPE /GKV/BW01_ALTER,
ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
END OF IT_TAB1.
DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
WITH NON-UNIQUE KEY BPARTNER,
WA_XX_TAB1 TYPE IT_TAB1.
it_source_tmp[] = it_source[].
SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
DELETE ADJACENT DUPLICATES FROM it_source_tmp
COMPARING /B99/S_BWPKKD.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
LOOP AT it_source INTO ls_source.
READ TABLE IT_XX_TAB1
INTO WA_XX_TAB1
WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
IF sy-subrc = 0.
ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
ENDIF.
MOVE-CORRESPONDING ls_source TO ls_target.
APPEND ls_target TO et_target.
CLEAR ls_target.
ENDLOOP.
Hi SD,
Please wrap the SELECT query in the condition shown below:
IF it_source_tmp[] IS NOT INITIAL.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
ENDIF.
This should solve your performance issue. When the internal table it_source_tmp has no records, FOR ALL ENTRIES was fetching all the records from the database table. With this condition, the SELECT will not run at all when the driver table is empty.
Regards,
Pravin -
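The pitfall Pravin describes is not ABAP-specific: a query built from an empty key list must return nothing, never fall back to scanning the whole table. A minimal sketch of the same guard in Python (the table and column names are hypothetical stand-ins for the ABAP ones above):

```python
def select_birthdates(execute, partner_keys):
    """Fetch (BPARTNER, DATEBIRTH) rows for the given keys only.
    `execute` is any callable taking (sql, params).
    The early return plays the role of IF ... IS NOT INITIAL:
    with no keys, we never touch the database at all."""
    if not partner_keys:
        return []
    placeholders = ", ".join("?" * len(partner_keys))
    return execute(
        "SELECT BPARTNER, DATEBIRTH FROM ABW00GO0600"
        f" WHERE BPARTNER IN ({placeholders})",
        partner_keys)
```

The guard belongs in front of the query, not after it: once an unbounded SELECT has run against a 22-million-row table, the memory is already spent.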
ABAP Proxy for 10 million records
Hi,
I am running an extract program for my inventory, which has about 10 million records.
I am sending the data through an ABAP proxy, and the job is cancelled due to a memory problem.
I am breaking up the records while sending them through the ABAP proxy,
calling the proxy about 2,000 times with smaller portions of the records.
Do you think the ABAP proxy can handle 10 million records?
Any advice would be highly appreciated.
Thanks and Best Regards,
M
Hi,
I am facing the same problem. My temporary solution is to break the selected data into portions of 30,000 records and send those portions by ABAP proxy to PI.
I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
Although breaking up the data seems to work for me for now, it gives me another issue: I have to combine the data back together in PI.
So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
Regards,
Arjan Aalbers -
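Arjan's "break up, then recombine in PI" problem is easier if each portion carries a sequence number and the total count, so the receiver can reorder and detect missing pieces. A language-agnostic sketch in Python (the portion envelope is an assumption for illustration, not a PI structure):

```python
def split_for_proxy(records, portion_size=30_000):
    """Split a large extract into numbered portions. Each portion
    carries (seq, total) so the receiving side can reassemble
    in order and detect a missing piece."""
    total = (len(records) + portion_size - 1) // portion_size
    return [
        {"seq": i + 1, "total": total,
         "rows": records[i * portion_size:(i + 1) * portion_size]}
        for i in range(total)
    ]

def reassemble(portions):
    """Receiver side: sort by sequence number and verify completeness
    before stitching the rows back together."""
    portions = sorted(portions, key=lambda p: p["seq"])
    expected = list(range(1, portions[0]["total"] + 1))
    if [p["seq"] for p in portions] != expected:
        raise ValueError("missing or duplicate portion")
    return [row for p in portions for row in p["rows"]]
```

Whether the portions travel as proxy calls or as one dataset file picked up by the file adapter, the envelope is what makes out-of-order or lost messages detectable.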
Approx how much time should JDBC adapter take to insert 1.5 million record?
Hi All,
What is the optimum time for inserting 1.5 million records into an Oracle staging table? My ECC-to-Oracle scenario is taking 3 hours.
Based on your previous experience, what do you think about this? Is there scope for improvement?
We have a simple insert through the JDBC data type, i.e. Action = INSERT.
Kindly Advice.
Regards,
XIer
Edited by: XIer on Mar 27, 2008 9:20 AM
Edited by: XIer on Mar 27, 2008 10:02 AM
Hi,
>What do you think is the optimum time with your experience...
We had a similar situation; after adding an application server, the time was reduced to 1 hour. How many application servers are available in your XI system?
Regards
Sangeetha