Query taking a long time (more than 24 hours) to extract data

Hi,
This query has been running for more than 24 hours to extract data. Please find the query and explain plan details below. Even though indexes are available on the tables, the plan goes for FULL TABLE SCANs. Please suggest.
SQL> explain plan for
select a.account_id, round(a.account_balance,2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
                   to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
  from account a, account_history ah, invoice i
 where a.account_id = ah.account_id
   and a.account_type_id = 1000002
   and round(a.account_balance,2) > 0
   and (ah.invoice_id is not null or ah.adjustment_id is not null)
   and ah.current_balance > ah.previous_balance
   and ah.invoice_id = i.invoice_id(+)
   and a.account_balance > 0
 order by a.account_id, ah.effective_start_date desc;

Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
     where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA

I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
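To see the redundancy for yourself, list each index's columns in position order; any index whose column list is a leading prefix of another index on the same table is a candidate for dropping (taking care with indexes that back primary key or unique constraints). A small dictionary sketch:

select table_name, index_name, column_name, column_position
  from dba_ind_columns
 where table_name in ('ACCOUNT', 'ACCOUNT_HISTORY', 'INVOICE')
 order by table_name, index_name, column_position;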
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
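As a quick sanity check of that cutoff (Oracle's ROUND on NUMBER rounds half away from zero, so round(x, 2) > 0 first becomes true at x = 0.005):

select round(0.0049, 2) as below_cutoff,  -- returns 0
       round(0.0050, 2) as at_cutoff      -- returns .01
  from dual;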
So the formatted query is:

select a.account_id,
       round(a.account_balance, 2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
               'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
  from account a, account_history ah, invoice i
 where a.account_id = ah.account_id
   and a.account_type_id = 1000002
   and (ah.invoice_id is not null or ah.adjustment_id is not null)
   and ah.current_balance > ah.previous_balance
   and ah.invoice_id = i.invoice_id(+)
   and a.account_balance >= .005
 order by a.account_id, ah.effective_start_date desc;

You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY.
2. From ACCOUNT_HISTORY next. We want to limit the rows from this table as much as possible because of the outer join.
3. From INVOICE last, because it seems to be the least restricted, it is the biggest, and it carries the outer join condition, so it will manufacture rows to match however many rows come back from ACCOUNT_HISTORY. (A hint sketch for checking this order follows below.)
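If you want to check whether the optimizer will actually follow that order once the indexes exist, here is a cut-down sketch (only the join skeleton of your query with a LEADING hint; the remaining predicates are omitted in this illustration):

explain plan for
select /*+ leading(a ah i) */ a.account_id
  from account a, account_history ah, invoice i
 where a.account_id = ah.account_id
   and a.account_type_id = 1000002
   and ah.invoice_id = i.invoice_id(+);

select * from table(dbms_xplan.display);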
Try the query above after creating the following composite indexes. The order of the columns is important:

create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);

All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
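After creating them, it may also help to refresh the optimizer statistics so the new indexes are costed properly. A sketch, assuming the OPS$SVM_SRV4 schema shown in your index listing (repeat for the other two tables):

begin
  dbms_stats.gather_table_stats(ownname => 'OPS$SVM_SRV4',
                                tabname => 'ACCOUNT_HISTORY',
                                cascade => true); -- cascade gathers the index statistics too
end;
/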
A final suggestion is to try larger sort and hash area sizes with a manual workarea policy:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647;
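You can confirm that the session picked up those values, and switch back to automatic PGA management once the extract has finished:

select name, value
  from v$parameter
 where name in ('workarea_size_policy', 'sort_area_size', 'hash_area_size');

alter session set workarea_size_policy = auto;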

Similar Messages

  • Query taking a long time to fetch the results

    Hi!
    When I run the query, it takes a very long time to fetch the result sets.
    Please find the query below:
    SELECT
    A.BUSINESS_UNIT,
    A.JOURNAL_ID,
    TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
    A.UNPOST_SEQ,
    A.FISCAL_YEAR,
    A.ACCOUNTING_PERIOD,
    A.JRNL_HDR_STATUS,
    C.INVOICE,
    C.ACCT_ENTRY_TYPE,
    C.LINE_DST_SEQ_NUM,
    C.TAX_AUTHORITY_CD,
    C.ACCOUNT,
    C.MONETARY_AMOUNT,
    D.BILL_SOURCE_ID,
    D.IDENTIFIER,
    D.VAT_AMT_BSE,
    D.VAT_TRANS_AMT_BSE,
    D.VAT_TXN_TYPE_CD,
    D.TAX_CD_VAT,
    D.TAX_CD_VAT_PCT,
    D.VAT_APPLICABILITY,
    E.BILL_TO_CUST_ID,
    E.BILL_STATUS,
    E.BILL_CYCLE_ID,
    TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
    TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
    TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
    E.ENTRY_TYPE,
    E.ENTRY_REASON,
    E.AR_LVL,
    E.AR_DST_OPT,
    E.AR_ENTRY_CREATED,
    E.GEN_AR_ITEM_FLG,
    E.GL_LVL, E.GL_ENTRY_CREATED,
    (Case when c.account in ('30120000','30180050','30190000','30290000','30490000',
    '30690000','30900040','30990000','35100000','35120000','35150000','35160000',
    '39100050','90100000')
    and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
    When c.account not in ('30120000','30180050','30190000','30290000',
    '30490000','30690000','30900040','30990000','35100000','35120000','35150000',
    '35160000','39100050','90100000')
    and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end)
    FROM
    sysadm.PS_JRNL_HEADER A,
    sysadm.PS_JRNL_LN B,
    sysadm.PS_BI_ACCT_ENTRY C,
    sysadm.PS_BI_LINE D,
    sysadm.PS_BI_HDR E
    WHERE A.BUSINESS_UNIT = '&BU'
    AND A.JOURNAL_DATE BETWEEN TO_DATE('&From_date','YYYY-MM-DD')
    AND TO_DATE('&To_date','YYYY-MM-DD')
    AND A.SOURCE      = 'BI'
    AND A.BUSINESS_UNIT = B.BUSINESS_UNIT
    AND A.JOURNAL_ID      = B.JOURNAL_ID
    AND A.JOURNAL_DATE = B.JOURNAL_DATE
    AND A.UNPOST_SEQ      = B.UNPOST_SEQ
    AND B.BUSINESS_UNIT = C.BUSINESS_UNIT
    AND B.JOURNAL_ID = C.JOURNAL_ID
    AND B.JOURNAL_DATE = C.JOURNAL_DATE
    AND B.JOURNAL_LINE = C.JOURNAL_LINE
    AND C.ACCT_ENTRY_TYPE = 'RR'
    AND C.BUSINESS_UNIT = '&BU'
    AND C.BUSINESS_UNIT = D.BUSINESS_UNIT
    AND C.INVOICE = D.INVOICE
    AND C.LINE_SEQ_NUM = D.LINE_SEQ_NUM
    AND D.BUSINESS_UNIT = '&BU'
    AND D.BUSINESS_UNIT = E.BUSINESS_UNIT
    AND D.INVOICE = E.INVOICE
    AND E.BUSINESS_UNIT = '&BU'
    AND
    ((c.account in ('30120000','30180050','30190000','30290000','30490000',
    '30690000','30900040','30990000','35100000','35120000','35150000','35160000',
    '39100050','90100000')
    and D.TAX_CD_VAT_PCT <> 0)
    OR
    (c.account not in ('30120000','30180050','30190000','30290000','30490000',
    '30690000','30900040','30990000','35100000','35120000','35150000','35160000',
    '39100050','z')
    and D.TAX_CD_VAT_PCT <> 25))
    GROUP BY
    A.BUSINESS_UNIT,
    A.JOURNAL_ID,
    TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
    A.UNPOST_SEQ, A.FISCAL_YEAR,
    A.ACCOUNTING_PERIOD,
    A.JRNL_HDR_STATUS,
    C.INVOICE,
    C.ACCT_ENTRY_TYPE,
    C.LINE_DST_SEQ_NUM,
    C.TAX_AUTHORITY_CD,
    C.ACCOUNT,
    D.BILL_SOURCE_ID,
    D.IDENTIFIER,
    D.VAT_TXN_TYPE_CD,
    D.TAX_CD_VAT,
    D.TAX_CD_VAT_PCT,
    D.VAT_APPLICABILITY,
    E.BILL_TO_CUST_ID,
    E.BILL_STATUS,
    E.BILL_CYCLE_ID,
    TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
    TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
    TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
    E.ENTRY_TYPE, E.ENTRY_REASON,
    E.AR_LVL, E.AR_DST_OPT,
    E.AR_ENTRY_CREATED,
    E.GEN_AR_ITEM_FLG,
    E.GL_LVL,
    E.GL_ENTRY_CREATED,
    C.MONETARY_AMOUNT,
    D.VAT_AMT_BSE,
    D.VAT_TRANS_AMT_BSE
    having
    (Case when c.account in ('30120000','30180050','30190000','30290000',
    '30490000','30690000','30900040','30990000','35100000','35120000','35150000',
    '35160000','39100050','90100000')
    and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
    When c.account not in ('30120000','30180050','30190000','30290000','30490000',
    '30690000','30900040','30990000','35100000','35120000','35150000','35160000',
    '39100050','90100000')
    and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end) is not null
    Could you please provide a solution to fix this issue?
    Thanks
    senthil

    When your query takes too long ...: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
    Regards,
    Rob.
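    For reference, that thread asks posters to supply the actual run-time execution plan. A minimal way to capture it in SQL*Plus on 10g or later (a sketch, not part of the original reply):

    set serveroutput off
    alter session set statistics_level = all;
    -- run the slow query here, then immediately afterwards:
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));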

  • Taking a long time to get the data when drilling down on free characteristics

    Hi experts,
    I have a query on 0SD_C03 with PLANT and SALES ORG. as free characteristics. Whenever I try to drill down on plant or sales organization, it takes a lot of time to retrieve the data.
    What is the problem? Please give me suggestions to solve it.
    Thanks in advance,
    venkat

    From RSA1, double-click the InfoObject and go to the Business Explorer tab.
    For the option "Query Execution Filter Val. Selection", select "Only posted values for navigation".
    Try whether this setting solves your problem.

  • Taking a long time to load the data

    Hi All,
    When my BI colleague tries to load data from a source system, it takes too long; the corresponding job BI_BOOK* has already been running for around 90,000 seconds.
    Can anyone suggest how to tackle this problem?
    Thanks so much in advance.
    Best Regards, Pradeep

    Hi
    If the load from your flat file source system takes too long, there is no option but to set the request to red, delete it, and repeat the upload.
    Make sure that your flat file is closed during the upload.
    If the InfoPackage loads data from a flat file on your workstation, you cannot execute the InfoPackage in a process chain; you need to put the flat file on the application server (AL11) and change the InfoPackage to load from there instead of from the workstation.
    The file should be closed while scheduling.
    The file path mentioned on the external data tab page of the InfoPackage should be correct.
    Records in the file should be in uppercase; you can use lowercase for a particular characteristic provided the lowercase checkbox on the general tab page of the InfoObject's maintenance screen is checked.
    Calendar day should be in the format YYYYMMDD in the file.
    Hope it helps.
    Edited by: Aduri on Jan 30, 2009 11:19 AM

  • Impdp taking a long time for only a few MB of data

    Hi All,
    I have a query related to impdp. I have an expdp dump file of size 47M. When I restore this dump using impdp, it takes a long time. The table data finishes loading very fast initially, but then the alter function/procedure/view steps take a lot of time, almost 4 to 5 hours.
    I have no idea why it is taking so long. Earlier I saw that one DB link had failed with a "TNS name could not be resolved" error, so I created the DB link before running impdp, but got the same result. Can anyone suggest what could cause a 47MB import to take this long?
    Note - Both the expdp and impdp database versions are 11.2.0.3.0. If I import the same dump file into 11.2.0.1.0, it is done in a few minutes.
    Thanks...

    Also Read
    Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
    DataPump Import (IMPDP) is Very Slow at Object/System/Role Grants, Default Roles [ID 1267951.1]
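    If the time is going into the post-data DDL phases, one generic thing worth trying (not a guaranteed fix) is to skip importing optimizer statistics and gather them with dbms_stats afterwards. A sketch only; the directory object and file names are placeholders, not from the original post:

    impdp system directory=DPUMP_DIR dumpfile=export_47m.dmp logfile=imp_47m.log exclude=statistics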

  • HT4759 iCloud: I cannot disconnect my mobile during upload and it takes a long time to upload my data

    Hello. I've been subscribed to iCloud for $20 per year and I find it useless for many reasons: I cannot disconnect my mobile during the upload process, and it takes a long time to upload my data. It's not a reliable system, which is why I need to deactivate the storage service and get my money back. Thanks.

    The "issues" you've raised are nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • VA05 & VA05N taking a long time to give the output

    Dear All,
    VA05 & VA05N take a long time to give the output for a single date and a single sales office.
    If I create a Z-program (on VBAK), it also takes a long time for a single date and a single sales office.
    Please give me some ideas for optimizing VA05 & VA05N.
    Please give your valuable solution.
    Thanks,
    Durai.V

    Dear Lakshmipathi,
    At my previous client (ECC 5.0), VA05N executed very fast for a one-month date range across all sales offices. They had been running SAP for around 3 years and their data was also huge, but it still gave fast output.
    My current client is also on ECC 5.0, running SAP for around 2.5 years, but here it takes a long time to give the output for a single date and one sales office. Yet the billing details report VF05N executes very fast.
    Thanks,
    Durai.V

  • Program SAPLSBAL_DB taking a long time on BALHDR table entries

    Hi Guys,
    I am running a Z program in both the Quality and Production systems, which uploads data from the desktop.
    In the Quality system the Z program uploads data successfully, but in the Production system it takes a very long time and sometimes even times out.
    As per the trace analysis, program SAPLSBAL_DB is taking a long time on BALHDR table entries.
    Can anybody provide a suggestion?
    Regards,
    Shyamal.

    These are QA screenshots where there is no issue, but we are seeing very long times in CRP.
    Regards,
    Shyamal

  • You are running a report and it is taking a long time to execute

    You are running a report. It is taking a long time to execute. What steps will you take to reduce the execution time?
    Please explain clearly.

    Avoid loops inside loops.
    Avoid SELECTs inside loops.
    Select only the data that is required instead of using SELECT *.
    Select the fields in the sequence in which they are present in the database, and also specify the fields in the WHERE clause in the same sequence.
    When you are using FOR ALL ENTRIES in a SELECT statement, check that the internal table you are referring to is not initial.
    Replace SELECT ... ENDSELECT with SELECT ... INTO TABLE.
    Avoid SELECT SINGLE inside a loop; instead, select all the data before the loop and read the internal table inside the loop using a binary search.
    Sort the internal tables wherever necessary.

  • Middleware: taking a long time to generate runtime objects (SMOGTOTAL)

    Hi Experts,
    I am doing the middleware settings for connecting CRM 2007 with R/3 4.7.
    When I generate all the required objects (replication objects, publications, ...) using transaction code SMOGTOTAL, the system takes a very long time to generate them. Generally it takes 4 to 6 hours, but in our case it has already run for more than 36 hours and is still running.
    Can anybody tell me what I need to do to make the generation process faster?
    Regards
    Nadh

    What I read in the best practice:
    "It is not required for a new installation. Typically this activity has already been executed during the system installation or upgrade. Use transaction SMOGLASTLOG to check whether an initial generation has already been executed; in that case you can skip this activity."
    I checked transaction SMOGLASTLOG, and in our case the initial generation had not yet been executed. I also couldn't continue with the next steps.
    That's why I started the job; it finally finished after 104 hours.
    Thanks for your fast reply.
    Jasper.

  • We are running a report; it is taking a long time to execute. What steps can we take?

    We are running a report and it is taking a long time to execute. What steps can we take to reduce the execution time?

    Hi,
    Performance can be improved in many ways:
    First, try to select based on the key fields if it is a very large table.
    If not, create a secondary index for the selection.
    Don't perform SELECTs inside a loop; instead use FOR ALL ENTRIES IN.
    Try to perform many operations in one loop rather than running different loops over the same internal table.
    All these steps, and many more, can be implemented to improve performance. We would need to look at your code to see how it can be improved in your case.
    Regards,
    Vivek Shah

  • Cisco WS-C6513 taking a long time to save the configuration

    Hi,
    Our Cisco WS-C6513 is taking a long time to save the configuration.
    Any ideas?
    Thank you


  • Workflow to send an email for a lead unattended for more than 24 hours in MS Dynamics CRM 2011?

    I want to create a workflow so that when a lead is unattended for more than 24 hours, an email is sent automatically to a user, the BM (Branch Manager).
    My business unit hierarchy is:
    Main Organisation >> RBH Trading (Head) >> BM Trading (Branch Manager) >> RM Trading (Relational Manager)
    So, if any RM does not attend his lead within 24 hours, an automatic email should be sent to his BM.
    The problem is how to set the BM's email in the email template's "To" field; I cannot fix any one BM there.
    PLEASE HELP!


  • Account-based COPA datasource taking a long time to extract data

    Hi
    We have created an account-based COPA datasource, but it does not extract data in RSA3 even though the underlying tables have data in them.
    If the COPA datasource is created using fields only from CE4 (segment) and not CE1 (line items), then it extracts data, but that too after a very long time.
    If the COPA datasource is created using fields from both CE4 (segment) and CE1 (line items), then it does not extract any records and RSA3 gives a time-out error.
    Also, a job scheduled from the BW side to extract data runs for days but neither fetches any data nor gives any error.
    The COPA tables hold a huge amount of data, so performance could be an issue. But we have also created the indexes on them, and still it is not helping.
    Please suggest a solution to this.
    Thanks
    Gaurav

    Hi Gaurav
    Check note 392635; it might be useful.
    Regards
    Jagadish
    Symptom
    The process of selecting the data source (line item, totals table or summarization level) by the extractor is unclear.
    More Terms
    Extraction, CO-PA, CE3XXXX, CE1XXXX, CE2XXXX, costing-based, account-based, profitability analysis, reporting, BW reporting, extractor, plug-in, COEP, performance, upload, delta method, full update, CO-PA extractor, read, datasource, summarization level, init, delta init
    Cause and Prerequisites
    At the time of the data request from BW, the extractor determines the data source that should be read. The data source to be used depends on the update mode (full, initialization of the delta method, or delta update), on the definition of the DataSource (line item characteristics (except for the REC_WAERS field) or calculated key figures), and on the existing summarization levels.
    Solution
    The extractor always tries to select the most favorable source, that is, the one with the lowest dataset. The following restrictions apply:
    o Only the 'Full' update mode from summarization levels is supported during extraction from the account-based profitability analysis up to and including Release PI2001.1. Therefore, you can only ever load individual periods for a controlling area. You can also use the delta method as of Release PI2001.2. However, the delta process is only possible as of Release 4.0. The delta method must still be initialized from a summarization level; the following delta updates then read line items. In the InfoPackage, you must continue to select the controlling area as a mandatory field. You then no longer need to make a selection on individual periods. However, the period remains a mandatory field for the selection. If you do not want this, you can proceed as described in note 546238.
    o To enable reading from a summarization level, all characteristics that are to be extracted with the DataSource must also be contained in this level (entry * in the KEDV maintenance transaction). In addition, the summarization level must have status 'ACTIVE' (this also applies to the search function in the maintenance transaction for CO-PA data sources, KEB0).
    o For DataSources of the costing-based profitability analysis, data can only be read from a summarization level if no other characteristics of the line item were selected (the exception here is the 'record currency' (REC_WAERS) field, which is always selected).
    o An extraction from the object level, that is, from the combination of tables CE3XXXX/CE4XXXX ('XXXX' is the name of the result area), is only performed for full updates if (as with summarization levels) no line item characteristics were selected. During the initialization of the delta method this is very difficult to do because of the requirements for a consistent dataset (see below).
    o During initialization of the delta method and subsequent delta updates, the data needs to be read up to a defined time. There are two possible sources for the initialization of the delta method:
    - Summarization levels manage the time of the last update/data reconstruction. If no line item characteristics were selected and a suitable, active summarization level (see above) exists, the DataSource 'inherits' the time information of the summarization level. However, time information can only be 'inherited' for the delta method of the old logic (time stamp administration in the profitability analysis). As of Plug-In Release PI2004.1 (Release 4.0 and higher), a new logic is available for the delta process (generic delta). For DataSources with the new logic (converted DataSources or DataSources recreated as of Plug-In Release PI2004.1), the line items that appear between the time stamp of the summarization level and the current time minus the security delta (usually 30 minutes) are also read after the suitable summarization level is read. The current time minus the security delta is set as the time stamp.
    - The system reads line items if it cannot read from a summarization level. Since data can continue to be updated during the extraction, the object level is not a suitable source, because other updates can be made on profitability segments that were already updated. The system would have to recalculate these values by reading line items, which would result in a considerable extension of the extraction time.
    In the case of delta updates, the system always reads from line items.
    o During extraction from line items, the CE4XXXX object table is read as an additional table for the initialization of the delta method and full updates so that possible realignments can be taken into account. In principle, the CE4XXXX object table is not read for delta updates. If a realignment is performed in the OLTP, no further delta updates are possible, as they would make the data inconsistent between OLTP and BW. In this case, a new initialization of the delta method is required.
    o When the system reads data from the line items, make sure that the indexes from note 210219 for both the CE1XXXX (actual data) and CE2XXXX (planning data) line item tables have been created. Otherwise, you may encounter long-running selections. For archiving, appropriate indexes are delivered in the dictionary as of Release 4.5. These indexes are delivered with the SAP standard system but still have to be created on the database.

  • Hyperion System 9.3.1 reports taking a long time the very first time

    We are on Hyperion System 9.3.1. The financial reports take a long time (like 2 to 3 minutes) the very first time for each login. Subsequent reports work faster.
    The behaviour is the same in the Production and Development environments.
    All the reporting services have been given enough JVM heap size.
    FYI, Reporting and Workspace run on the same server, and Workspace/Reporting is clustered across two servers. The HFM app runs on a different server, HFM web on another, and Shared Services on yet another.
    Any help would be greatly appreciated.
    Thanks.

    The reason they run quicker the subsequent times, is because the data has already been cached in the system.
    You could try the usual tricks to speed the report up:
    - move items into POV
    - have children and parent in the same row
    - arrange dimensions in inverse outline order
    - remove excessive formatting
    - push report calculations back to the data source
    We have found that using lots of dynamically calculated members also slows down reports, so try to limit the number of these.
    Hope this helps. If not, give us an idea of how the report is created, and we can see whether other changes could be made.
