Long time to extract 0FI_AP_4/0FI_AR_4

Hi buddies,
    When I use RSA3 to check the data in the source system, I found that the process stops while updating a table called ROOSGENQ and then appears to hang.
Martin Xie

Hi Martin,
A short dump might have been posted in the background as well; check ST22 to confirm.
At the same time, why don't you try the debug option while you execute RSA3? It helps you identify where and why the extraction is getting stuck.
Have a look and gather some detail about the core process so you can get more specific responses in our forum.
Cheers,
Pattan.

Similar Messages

  • Query taking a long time (more than 24 hours) to extract the data

    Hi,
    A query is taking a long time (more than 24 hours) to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes to a FULL TABLE SCAN. Please suggest.
    SQL> explain plan for
    select a.account_id, round(a.account_balance,2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date,
                       to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
     where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and round(a.account_balance,2) > 0
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.current_balance > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       and a.account_balance > 0
     order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
    2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
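    If it helps to see why the two predicates collapse into one: Oracle's ROUND on NUMBER rounds half away from zero, so for positive balances round(x, 2) > 0 is true exactly when x >= 0.005. A quick check from SQL*Plus:
    select round(0.0049, 2) below_cutoff,  -- 0:   would fail round(x,2) > 0
           round(0.005, 2)  at_cutoff      -- .01: passes
      from dual;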
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
     order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY.
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
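    If it helps, a quick way to confirm the new indexes are picked up is to repeat the explain plan from the top of the thread against the rewritten query, and check that the TABLE ACCESS FULL operations have given way to index scans (a sketch reusing the thread's own statements; the actual plan will depend on your statistics):
    explain plan for
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
     where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.current_balance > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       and a.account_balance >= 0.005
     order by a.account_id, ah.effective_start_date desc;
    select * from table(dbms_xplan.display);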
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;
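    Worth noting: these settings affect only the current session, and the two *_area_size values are honored only while workarea_size_policy is manual. A quick way to confirm what the session is actually running with:
    select name, value
      from v$parameter
     where name in ('workarea_size_policy', 'sort_area_size', 'hash_area_size');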

  • Account based COPA datasource taking long time to extract data

    Hi
    We have created an account-based COPA DataSource, but it is not extracting data in RSA3 even though the underlying tables have data in them.
    If the COPA DataSource is created using fields only from CE4 (segment) and not CE1 (line items), then it extracts data, but even that takes a very long time.
    If the COPA DataSource is created using fields from both CE4 (segment) and CE1 (line items), then it does not extract any records and RSA3 gives a timeout error.
    Also, the job scheduled from the BW side for extracting data runs for days but does not fetch any data, and it does not give any error either.
    The COPA tables hold a huge amount of data, so performance could be an issue. But we have also created indexes on them, and it is still not helping.
    Please suggest a solution to this.
    Thanks
    Gaurav

    Hi Gaurav
    Check note 392635, it might be useful.
    Regards
    Jagadish
    Symptom
    The process of selecting the data source (line item, totals table or summarization level) by the extractor is unclear.
    More Terms
    Extraction, CO-PA, CE3XXXX, CE1XXXX, CE2XXXX, costing-based, account-based, profitability analysis, reporting, BW reporting, extractor, plug-in, COEP, performance, upload, delta method, full update, CO-PA extractor, read, datasource, summarization level, init, DeltaInit, Delta Init
    Cause and Prerequisites
    At the time of the data request from BW, the extractor determines the data source that should be read. In this case, the data source to be used depends on the update mode (full, initialization of the delta method, or delta update), on the definition of the DataSource (line item characteristics (except for the REC_WAERS field) or calculated key figures), and on the existing summarization levels.
    Solution
    The extractor always tries to select the most favorable source, that is, the one with the smallest dataset. The following restrictions apply:
    o Only the 'Full' update mode from summarization levels is supported during extraction from the account-based profitability analysis up to and including Release PI2001.1. Therefore, you can only ever load individual periods for a controlling area. You can also use the delta method as of Release PI2001.2. However, the delta process is only possible as of Release 4.0. The delta method must still be initialized from a summarization level. The following delta updates then read line items. In the InfoPackage, you must continue to select the controlling area as a mandatory field. You then no longer need to make a selection on individual periods. However, the period remains a mandatory field for the selection. If you do not want this, you can proceed as described in note 546238.
    o To enable reading from a summarization level, all characteristics that are to be extracted with the DataSource must also be contained in this level (entry * in the KEDV maintenance transaction). In addition, the summarization level must have status 'ACTIVE' (this also applies to the search function in the maintenance transaction for CO-PA DataSources, KEB0).
    o For DataSources of the costing-based profitability analysis, data can only be read from a summarization level if no other characteristics of the line item were selected (the exception here is the 'record currency' (REC_WAERS) field, which is always selected).
    o An extraction from the object level, that is, from the combination of tables CE3XXXX/CE4XXXX ('XXXX' is the name of the result area), is only performed for full updates if (as with summarization levels) no line item characteristics were selected. During the initialization of the delta method this is very difficult to do because of the requirements for a consistent dataset (see below).
    o During initialization of the delta method and subsequent delta update, the data needs to be read up to a defined time. There are two possible sources for the initialization of the delta method:
    - Summarization levels manage the time of the last update/data reconstruction. If no line item characteristics were selected and if a suitable, active summarization level (see above) exists, the DataSource 'inherits' the time information of the summarization level. However, time information can only be 'inherited' for the delta method of the old logic (time stamp administration in the profitability analysis). As of Plug-In Release PI2004.1 (Release 4.0 and higher), a new logic is available for the delta process (generic delta). For DataSources with the new logic (converted DataSources or DataSources recreated as of Plug-In Release PI2004.1), the line items that appear between the time stamp of the summarization level and the current time minus the security delta (usually 30 minutes) are also read after the suitable summarization level is read. The current time minus the security delta is set as the time stamp.
    - The system reads line items if it cannot read from a summarization level. Since data can continue to be updated during the extraction, the object level is not a suitable source because other updates can be made on profitability segments that were already updated. The system would have to recalculate these values by reading line items, which would result in a considerable extension of the extraction time.
    In the case of delta updates, the system always reads from line items.
    o During extraction from line items, the CE4XXXX object table is read as an additional table for the initialization of the delta method and full update so that possible realignments can be taken into account. In principle, the CE4XXXX object table is not read for delta updates. If a realignment is performed in the OLTP, no further delta updates are possible as they would make the data inconsistent between OLTP and BW. In this case, a new initialization of the delta method is required.
    o When the system reads data from the line items, make sure that the indexes from note 210219 for both the CE1XXXX (actual data) and CE2XXXX (planning data) line item tables have been created. Otherwise, you may encounter long-running selections. For archiving, appropriate indexes are delivered in the dictionary as of Release 4.5. These indexes are delivered with the SAP standard system but still have to be created on the database.

  • 0CRM_SALES_ACT_1 takes a long time to extract data from CRM5.0 system

    Hi Team,
    We have CRM 5.0 and BW 7.01.
    We are extracting data from CRM to BW using extractor 0CRM_SALES_ACT_1 in CRM, but it is taking a long time.
    Please suggest what can we do to improve the performance.

    Hi,
    It depends on the data volume and on the load on the CRM server; if the CRM server is busy during your load, it may delay the load.
    Try to trigger the load at a time when there is less burden on the servers (CRM/BW).
    Before triggering, check in SM50 that application servers are available to process your request.
    If this is a full load with huge data, then split your load with selections and load the pieces into BW.
    You can also increase the processed-records count and the InfoPackage wait time at the InfoPackage level:
    go to the InfoPackage --> menu Scheduler --> wait time and data settings for data transfer.
    During your load, ask the Basis team to keep an eye on transaction SM58 (CRM).
    Thanks

  • 0CRM_SALES_ACT_1 takes a long time to extract data from CRM system

    Hi gurus,
    I am using the DataSource 0CRM_SALES_ACT_1 to extract activities data from the CRM side. However, it is taking too long to get any information there.
    I applied the SAP NOTE 829397 (Activity extraction takes a long time: 0crm_sales_act_1) but it did not solve my problem.
    Does anybody knows something about that?
    Thanks in advance,
    Silvio Messias.

    Hi Silvio,
    I've experienced a similar problem with this extractor.  I attempted to Initialize Delta with Data Transfer to no avail.  The job ran for 12+ hours and stayed in "yellow" status (0 records extracted).  The following steps worked for me:
    1.  Initialize Delta without Data Transfer
    2.  Run Delta Update
    3.  Run Full Update and Indicate Request as Repair Request
    Worked like a champ, data load finished in less than 2 minutes.
    Hopefully this will help.
    Regards.
    Jason

  • MIRO takes a long time when entering an invoice for a PO with GR-based invoice

    Hi,
    In my client's system, the system takes a long time to extract PO data while booking an invoice via MIRO for purchase orders that have GR-based IV marked. However, the system takes only a few seconds to extract PO data when entering an invoice via MIRO for a PO that does not have GR-based IV. Please note the following points while providing the solution:
    - The problem exists only for purchase orders related to one company code. The system works perfectly for other company codes in the same client. Hence we assume that some company-code-level configuration is missing.
    - The problem exists for POs with account assignment K.
    - We have a one-to-one mapping of purchasing organization to company code to plant.
    Appreciate your quick response. Thanks in advance.
    Regards,
    sp sahu

    Hi,
    Please check with your FI colleague the G/L account and cost centers which you are using to create the PO with account assignment K.
    If the problem still persists, check with your ABAP person.
    Regards,
    Mohd Ali.

  • Data extraction is taking a long time

    Hi,
    I am extracting data into an InfoCube from a datamart. It is a full upload, extracting almost 24 lakh (2.4 million) records. Generally it should take much less time, but it is taking more than 6 hours to upload. Data selection and scheduling happen correctly, but the acknowledgement from the source InfoCube (maybe the datamart) is taking more time.
    BW statistics are not activated for this InfoCube.
    Here is the job log for this data load.
    01:32:26 'SAPGGB', TABNAME => '"/BI0/0P00000050"',
    01:32:26 ESTIMATE_PERCENT => 10 , METHOD_OPT => 'FOR ALL
    01:32:26 COLUMNS SIZE 75', DEGREE => 1 , GRANULARITY =>
    01:32:26 'ALL', CASCADE => TRUE ); END;
    01:32:27 SQL-END: 2009.09.02 01:32:27 00:00:01
    06:35:44 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
    06:35:44 Result of customer enhancement: 10,101 records
    06:35:44 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
    06:35:44 tRFC: Data Package = 0, TID = , Duration = 00:00:00, ARFCSTATE =
    06:35:44 tRFC: Start = 2009.09.02 01:30:14, End = 2009.09.02 01:30:14
    06:35:45 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks
    06:35:46 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
    06:35:46 Result of customer enhancement: 10,101 records
    06:35:46 tRFC: Data Package = 0, TID = , Duration = 00:00:01, ARFCSTATE =
    06:35:46 tRFC: Start = 2009.09.02 06:35:45, End = 2009.09.02 06:35:46
    06:35:46 Asynchronous send of data package 2 in task 0004 (1 parallel tasks)
    06:36:55 tRFC: Data Package = 1, TID = 0A1401543C764A9E124057C6, Duration = 00:0
    06:36:55 tRFC: Start = 2009.09.02 06:35:45, End = 2009.09.02 06:36:55
    06:36:55 Asynchronous transmission of info IDoc 4 in task 0005 (1 parallel tasks
    06:36:56 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 10,101 records
    06:36:56 Result of customer enhancement: 10,101 records
    Please advise me where I can check, and the possible reasons for it taking so long.
    Thanks,
    Kasi

    Hello Kasi,
    I am facing a similar issue of a long-running data load, but in my case the data source is 2LIS_13_VDITM.
    The background job in the ERP system runs long and is taking time in the step below.
    00:56:56 Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 91.869 records
    00:56:56 Result of customer enhancement: 104.153 records
    00:56:56 Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 91.869 records
    02:13:59 Result of customer enhancement: 104.153 records
    02:14:02 PSA=0 USING & STARTING SAPI SCHEDULER
    02:14:02 Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
    02:14:05 IDOC: Info IDoc 2, IDoc No. 348602556, Duration 00:00:00
    02:14:05 IDoc: Start = 16.02.2010 00:56:34, End = 16.02.2010 00:56:34
    02:14:06 Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
    Please note that this long-running issue does not occur daily, and there were no recent changes to enhancement EXIT_SAPLRSAP_001 in CMOD.
    Kindly let me know how to overcome this issue.
    Thanks in advance...

  • Extracting classfiles on startup takes a long time

    Hello,
    When I start my Weblogic server, it now takes several minutes to start up. It hangs for a long time with the line "extracting classfiles to .../_tmp_war_mydomain_app."
    It only recently started doing this, and I'm wondering if I've accidentally changed something in my config.xml file to cause this. Can anyone help me understand why it might suddenly be doing this?
    Thanks a lot!
    Rocky

    Thanks, Mark.
    When you say the "System Classpath" do you mean the classpath that is specified
    in my startWeblogic.sh script? What do you mean when you say my classes should
    go in the application?
    My environment is pretty simple. I'm on Solaris. I am using a content management
    system - FatWire's UpdateEngine - running on WebLogic. I have a couple of servlets
    in .../WEB-INF/classes and a jar file with some helper classes in .../WEB-INF/lib.
    My files just handle security, some personalization, dynamic HTML page building,
    etc. on top of the UpdateEngine product. I also have a couple of jar files that
    help with some AS400 stuff I'm doing. Those are also in the .../WEB-INF/lib directory.
    So what is WebLogic doing when it says, "Extracting classfiles ...?" Where should
    my jar files be referenced if not in the classpath variable in startWeblogic.sh?
    Thanks, again.
    Rocky
    "Mark Griffith" <[email protected]> wrote:
    Your application classes and jar files should not go in the system classpath, but in the application.
    If you provide some more info, like the thread dumps, or more about your env, we can continue to troubleshoot.
    cheers
    mbg
    "Rocky" <[email protected]> wrote in message
    news:[email protected]...
    Hi!
    Both my development and production servers do this. I'm on WL 6.1. My application is very small, so I just jar my files up and put them in my /lib directory. I put my jar file in my Weblogic classpath in my startWeblogic script. Should I do this differently?
    Thanks!
    Rocky
    "Mark Griffith" <[email protected]> wrote:
    Take thread dumps when it's starting up to see where it is sitting.
    Are you in development or production? (My guess is the former.)
    What version of WLS are you on?
    Are you deploying an exploded or archived war? (In development you should deploy the former.)
    Cheers
    mbg
    "Rocky" <[email protected]> wrote in message
    news:[email protected]...
    Hello,
    When I start my Weblogic server, it now takes several minutes to start up. It hangs for a long time with the line "extracting classfiles to .../_tmp_war_mydomain_app."
    It only recently started doing this and I'm wondering if I've accidentally changed something in my config.xml file to cause this. Can anyone help me understand why it might suddenly be doing this?
    Thanks a lot!
    Rocky

  • Extracting classfiles on start takes a long time (Command and Service)

    I added a new .jar to the lib folder and now it takes a really long time to start
    weblogic, it seems to get hung up on Extracting classfiles to . . .
    Any Ideas ?
    WL 6.1
    Thanks


  • Some Master and transaction loads taking long time in R/3 extraction and BW

    Hi Experts,
    We have some master and transaction loads which take a long time in the source system (R/3) as well as in BW in the daily process chains. These loads run on a daily basis.
    In extraction they take a long duration to finish, and then BW also takes much time.
    So please give some suggestions by which we can reduce this duration and improve the performance of these loads. Even a small suggestion can improve load performance.
    Thanks in advance.

    hi naveen,
    go back and have a look at your infopackage, at the number of data packets it is picking up each time.
    this can be done by going to Scheduler in the menu, where you can find the number of data packets and the size as well.
    hope this works,
    snigdha

  • I am extracting the data from ECC to BW, but data loading is taking a long time

    Hi All,
    I am extracting data from ECC to the BI system, but data loading is taking a long time; the InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set it to red, deleted the request, and applied a repeat of the last delta again, but the same problem occurs. In the status it shows that the background job has not finished in the source system. We asked Basis, and they killed that job; we scheduled the chain again, and the same problem came back. How can I solve this issue?
    Thanks ,
    chandu

    Hi,
    There are different places to track your job. Once your job is triggered in BW, you can track where exactly the load is taking more time and why. Follow the steps below:
    1) After the InfoPackage is triggered, take the request number and go to the source system to check your extraction job status.
    You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Give the request number with '*' at the beginning and at the end, and give '*' as the user name.
    Job name:  *REQ_XXXXXX*
    User Name: *
    Check whether the job is completed, cancelled, or ended in a short dump. If the job is still running, check in SM66 whether you can see any process for it. If not, check in ST22 or SM21 in ECC accordingly. If the job is complete, then check the same on the BW side.
    2) Check whether the data arrived in the PSA; if not, check whether the transfer routines or start routines have bad SQL or code. Similarly in the update rules.
    3) Once it is through the source system (ECC), transfer rules and update rules, the next task, updating the data, might sometimes take more time depending on some parameters (the number of parallel processes used to update the database). Check whether updating the database is taking more time; you may also need to check with the DBA.
    At all times you should see at least one process running in SM66 until your job completes. If not, you will see a log in ST22.
    Let me know if you still have questions.
    Assigning points is the only way of saying thanks in SDN.
    Thanks,
    Kumar.

  • App-V 5 Full Infrastructure Apps take a long time to stream to the client

    Hi, I was wondering if anyone has the same issue as I am having, or knows a fix for it; below is my problem and the troubleshooting I have done.
    Overview of problem
    App-V 5 apps delivered via App-V 5 full infrastructure take a long time to stream to the client, and this means the user has to wait if they try to run an application before it has streamed to the client. Users sometimes have to wait 2 or 3 minutes for an application to stream, and this is about 40 times slower than basic SMB and HTTP transfer tests show the system is capable of (see performance results below).
    App-V 4.6 apps delivered via App-V 4.6 full infrastructure and HTTP streaming are fine.
    Overview of environment
    App-V 5.0 SP1 Full Infrastructure.
    App-V servers are running Server 2012 on Hyper-V 3 or ESX 5.1 with 2 x vCPU and 4GB RAM.
    SQL servers are a SQL 2012 cluster.
    Separate servers for SQL, management, publishing, content and reporting.
    Management, Publishing and Content servers have two servers per role and NLB to provide load balancing. So 7 servers (2 x Man, 2 x Pub, 2 x Content, 1 x Reporting)
    Two further sites with 2 x Pub and 2 x Content each. All publishing servers pointed at the load balance address for management.
    Content delivered via HTTP
    Clients are physical desktops and laptops running Windows 7 SP1 x86 and Windows 8 x86
    App-V client is 5.0SP1
    Clients are pointed at their nearest publishing server NLB via a script which looks up the client IP address and uses PowerShell to configure the publishing server
    Content is streamed from the nearest content server NLB by setting the PackageSourceRoot to the nearest content NLB (via the same PowerShell script above).
    App-V apps delivered per-user via AD group. One AD group per application. Approximately 200 App-V apps published so far - will eventually reach 400 as we sequence more. About 9000 users.
    Analysis performed so far
    Servers not heavily loaded. CPU averages 5%. Lots of RAM free. Very low disk IO. Problem also occurs out-of-hours so we are 99.9% certain that server resources are not a cause.
    Streaming performance is the same from all 6 content servers and all 3 NLB addresses (tested by changing the value of PackageSourceRoot). Wireshark was used to confirm packages are really streaming from the correct location, reinforcing our belief that the problem isn't at the server end (unless all 6 servers are affected).
    Streaming via both HTTP and SMB2.1 is approximately the same (tested by changing the value of PackageSourceRoot between http://xxxx and \\server\AppVContent).
    Wireshark was used to confirm we really are using the protocol we think we are using.
    All clients exhibit the same behaviour. Issue reported by many users. 5 test PCs chosen at random at all 3 sites confirmed to have the slow streaming problem.
    Slow streaming from both Hyper-V and VMware ESX servers.
    Client not heavily loaded.
    Affects all App-V apps although it obviously affects the larger ones more.
    All App-V apps have a Feature Block 1 setup.
    If we copy the ".appv" file from the server to the client via either HTTP or SMB then it's reasonably quick (up to 480Mb/s). So we don't believe the network or servers are at fault. For example:
    We can copy a 149MB .appv file via SMB from the content server to the client in 5 seconds.
    We can HTTP-download the .appv file from the content server using IE on the client in 5 seconds.
    But if you ask the App-V 5 client to fully download the sequence then it takes 2 - 3 minutes.
    The App-V 4.6 client takes about 8 - 10 seconds to fully download a similar sized application.
    App-V 5 publishing works fine - when a new user logs on they get their list of applications straight away; it's just the streaming which is slow.
    Once the App-V app has streamed locally it runs fine and with a decent performance.
    Looking at a Wireshark trace of the streaming you can see that the slow performance is due to the transfer stopping and starting a lot. You only notice this when you zoom into the performance graph a fair bit.
    Each time the HTTP server stops sending traffic, it doesn't start again until the client sends a "TCP Window update". Each "stop" is of a different length, but just taking a few from the middle I get 0.06s, 0.11s, 0.13s wasted, etc.
    I can see that it's the client stopping the transfer by reducing its advertised TCP Window Size. I'll provide an example:
    Server sends 9 x 1514 bytes. Client responds with an ACK and a Window size of 54016 bytes (256x211)
    Server sends 11 x 1514 bytes. Client responds with an ACK and a Window size of 37888 bytes (256x148)
    Server sends 10 x 1514 bytes. Client responds with an ACK and a Window size of 23296 bytes (256x91)
    Server sends 15 x 1514 bytes. Client responds with an ACK and a Window size of 1280 bytes (5 x 256)
    Server stops sending (I'm guessing because the client advertised Window size was less than a single packet's worth of bytes)
    <0.1 seconds passes>
    Client sends a "TCP Window Update" re-advertising a TCP window size of 65536 (256x256).
    Server starts transmitting again
    So the way I see this is that the App-V 5 client is controlling the transfer speed by utilising TCP Window flow control. The trace was taken at the client end so there's no room for anything on the network to be fiddling with flow control (and we've confirmed there are no traffic shapers in the loop).
    We've also tried streaming directly from the local client by copying some App-V 5 apps down to the client, creating an SMB share on the client and changing PackageSourceRoot to \\localhost\AppVContent (i.e. streaming directly from the client to the client, to remove the network from the equation), and there is only an improvement of 5 to 10 seconds. So we know it's nothing to do with the network or the servers.
    We've tried turning off TCP auto-tuning on the client with:
    netsh interface tcp set global autotuninglevel=disabled
    and turning off TCP chimney offloading (which is off anyway because the NIC doesn't support it and Netstat -t output shows "InHost" for offload state for all connections) with:
    netsh int tcp set global chimney=disabled
    and nothing has improved.
    So we've now focussed on the extraction of the .appv (ZIP) file on the client.
    Using Windows Explorer it takes 75 seconds to extract the ZIP file
    Using 7ZIP it takes 9 seconds to extract the ZIP file
    Yeah we've always known that the Explorer ZIP engine is terrible. That's why we use 7ZIP or WinRAR on our clients.
    So we've started to wonder if the problem with the slow App-V 5 streaming is that the client is downloading the .appv file and extracting it as it goes along in a single thread. If the App-V 5 client is using the same terrible ZIP engine that Explorer does, then that would explain the slow performance. The "download" appears to take a long time because the client is using TCP flow control to slow the transfer, since it's extracting the .appv file with a very slow ZIP engine and it's all in a single thread.

    Guys,
    Just wanted to give you a brief update and basically close this thread as Answered.
    We had submitted 4 App-V 5 bugs to Microsoft; these were reproducible, and an explanation was given on how to work around them. Microsoft sent down an App-V developer to have a look at our problems. They said they will try to include the bug fixes in SP2, which should be out in a few weeks, or they will definitely be included in SP3.
    In regards to the slow streaming, it all came down to the disk IO.
    We found that you could simply enable "Turn off Windows write-cache buffer flushing on the device", then start streaming the app, and then disable "Turn off Windows write-cache buffer flushing on the device" immediately after (we don't want to leave it on), and that basically fixed the issue.
    But a normal user would not have permissions to do this, so code was written to enable and disable this option.
    Apologies for not going into detail like my opening post; it's very late, but if you would like a detailed analysis please message me.
    I would like to thank the talented consultant who designed and implemented our App-V infrastructure, who found the bugs, created all the workarounds, and emailed the detailed analysis of the problems to Microsoft that got them interested: Simon Bond from Ultima Business Solutions.
    Thank you

  • Report running for a long time & performance tuning

    Hi All,
    (1) A WebI report is running for a long time. What are the steps I need to check?
    (2) Can you tell me about performance tuning in BO?
    Please help me.
    Thanks
    Kumar

    (1) A WebI report is running for a long time. What are the steps I need to check?
    The first step is to see if the problem lies in the query on the data source or in WebI itself. Depending on the data source there are different ways to extract the query and try to run it against the database. Which source does your report use?
    (2) Can you tell me about performance tuning in BO?
    I would recommend starting by reading the administrator's guide. There is a section about how to improve performance.
    Regards,
    Stratos

  • Discoverer reports taking a long time!!!

    Hi all,
    One of our clients is complaining that the Discoverer reports have been taking a long time to run for the last few days; a report that used to take 30 minutes is now running for hours!
    I have checked the SGA and I have killed the idle sessions, but still there was no improvement in performance.
    The version of BI Discoverer is 10, the database is also 10g, and the platform is Windows Server 2003.
    I have checked the forums and they talk about explain plan and TKPROF and other commands, but my problem is that I am unable to find the query that Discoverer is running. I mean, once the report is clicked, the query runs and gives an estimate of the time it would take. Can someone tell me where this query is stored so that I can check it?
    Also, there were no changes made to the query or to the database.
    The temp space fills up to 100%; I increased the size of the temp space but it still goes to 100%. I also noticed that the CPU utilisation goes to 100%.
    I also increased the SGA, but still no go.
    Can someone kindly help me as to what could be causing this problem?
    Also kindly guide me to some good documents for tuning Discoverer.
    thanks in advance,
    regards,

    Hi,
    The fact that the report used to run fast and now does not can be related to many things, but my guess is that the database statistics changed and so the explain plan has changed.
    This can be due to a change in the volume of data that crossed a threshold where the Oracle optimizer changes its behavior, but it can be other things as well.
    Anyway, it is not that relevant, since it will be easier to tune the SQL than to find what has changed.
    In order to find whether the problem is with Discoverer or with the SQL, extract the SQL as described above and run it in a SQL tool (SQL*Plus, TOAD, SQL Developer and so on).
    The best way to get to the problem is to run a trace on your session and then use the TKPROF command to translate it into a text file you can analyze; ask your DBA team for assistance, they should have no problem doing that.
    By doing that you will get the problematic statements/functions/procedures that the report uses.
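    A minimal sketch of that trace step from SQL*Plus, assuming a 10g database and execute privilege on DBMS_MONITOR (the tracefile identifier is just an example name):
    SQL> alter session set tracefile_identifier = 'disco_slow_report';
    SQL> exec dbms_monitor.session_trace_enable(waits => true, binds => false)
    SQL> -- run the slow report's SQL here, then:
    SQL> exec dbms_monitor.session_trace_disable
    The raw trace file lands in user_dump_dest; format it with, for example: tkprof <tracefile>.trc disco_report.txt sort=fchela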
    From there you can start working on improving the performance.
    Performance is an expertise in itself, so I'm sorry I can't tell you where to start; I guess the start would be understanding the meaning of the explain plan.
    Hope I helped a little, although I wish I had a magic answer for you.
    BTW, until you resolve the problem you can use the Discoverer scheduler to run the reports in the background, so the users will still get the data.
    Tamir

  • XI - J2EE takes a long time to start up after SP19

    Hi everybody,
    we have patched our XI development system to SP19; during startup of the J2EE engine it takes a long time registering methods in com.sap.security.core.server.vsi.service.jni.VirusScanInterface... about 10 minutes!
    Here follows an extract of the dev_server0 trace file:
    JHVM_BuildArgumentList: main method arguments of node [server0]
    [Thr 3600] Wed Jan 24 15:10:14 2007
    [Thr 3600] JHVM_RegisterNatives: registering methods in com.sap.bc.proj.jstartup.JStartupFramework
    [Thr 3600] JLaunchISetClusterId: set cluster id 5501650
    [Thr 3600] JLaunchISetState: change state from [Initial (0)] to [Waiting for start (1)]
    [Thr 3600] JLaunchISetState: change state from [Waiting for start (1)] to [Starting (2)]
    [Thr 31100] Wed Jan 24 15:10:48 2007
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.rfc.driver.CpicDriver
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.jco.util.SAPConverters
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.jco.util.SAPCharToNUCByteConverter
    [Thr 31100] Wed Jan 24 15:10:50 2007
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.rfc.engine.Compress
    [Thr 25960] Wed Jan 24 15:11:02 2007
    [Thr 25960] JHVM_RegisterNatives: registering methods in com.sap.security.core.server.vsi.service.jni.VirusScanInterface
    [Thr 3600] Wed Jan 24 15:21:40 2007
    [Thr 3600] JLaunchISetState: change state from [Starting (2)] to [Starting applications (10)]
    [Thr 23390] Wed Jan 24 15:24:40 2007
    [Thr 23390] JLaunchISetState: change state from [Starting applications (10)] to [Running (3)]
    Is there any way to speed up this process, or to deactivate the VirusScanInterface service? What can I check to understand what happens while the VirusScanInterface methods are being registered?
    We are using XI on AIX platform, version 5.3 ML5 with Oracle database.
    Thanks in advance.
    Best regards.
    Tiziano

