ODS takes huge time

Hi guys,
  The data lookup in the ODS is taking a huge amount of time. It was working fine a week back; the ODS has around 200 million records.
Thanks,
Your help will be greatly appreciated.

For two records, yes, it should not take that much time.
Also check the overall system performance: sometimes Basis might be running backup activities, which can also degrade system performance.
So check whether this is happening to everyone in your project, and also check the content directly from the ODS active table instead of ListCube, for comparison.
Also check how heavily your system is loaded.
Thanks,
Hope this helps
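
To compare "directly from the ODS" at the table level, a count on the active table is the quickest cross-check; a minimal sketch, with the table name purely hypothetical (the real name follows the /BIC/A<ODS>00 pattern and can be taken from the ODS maintenance screen):

    -- Hypothetical active-table count for the lookup comparison;
    -- replace /BIC/AZODS100 with the actual active table of the ODS.
    SELECT COUNT(*)
    FROM   "/BIC/AZODS100";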

Similar Messages

  • Query on Cube jumps to Query on ODS; Query on ODS takes a long time

    Hi All,
    Performance issue: a query on a Cube jumps to a query on an ODS, and the query on the ODS takes a long time (jump query).
    Specific to the ODS query: when I checked the query on the ODS individually, it also takes a long time.
    The ODS contains quite a lot of data. Indexes are already maintained.
    I have checked the RSRT Execute SQL and Debug options as well; the indexes are maintained perfectly.
    The order of objects in the ODS indexes matches the order of objects in the SQL statement in RSRT. In spite of that, it takes a long time.
    I have checked it both ways, as a jump query as well as individually.
    My question is: when the query jumps from the cube to the query on the ODS, how do I check the performance, and how does the query execute in the background when switching over to the second query? Moreover, a calculated key figure is used for jumping to the target query.
    How can the ODS query's runtime be optimized, i.e. how can performance be improved when jumping from the query on the Cube?
    Can anybody help?
    Rgds,
    C.V.

    What I understand is that you need to optimize the query jump time. But this will be very small compared to the time taken by the query on the ODS.
    Ideally you shouldn't be building a BEx query on an ODS, as this takes a long time. What you can do is execute the BEx query on the ODS by itself to find out where the issue lies. If that query is taking a long time, there is not much that you can do here.

  • First WES7 start takes huge time on some machines

    Hi,
    we have a WES7 image that resembles full-blown W7. We don't use EWF/HORM or other embedded features. Everything is OK, except that on the recent HP desktop units, on some machines the first boot takes ages (literally, 4-5 hours to complete!).
    Windows boots normally and then sits at the 'Setup is installing devices 69%' screen for a very long time. It doesn't hang, as the numbers keep counting (up to 75% or so), and then it works perfectly. The disk is not loaded during the wait, and probably not the CPU either; at least the cooler is not running fast.
    Of the 10 machines in one batch, it happens on approximately half. It was not observed before (on older HP machines or on any other customer machines); it started only with the new HP EliteDesk G1 800 SFF machines.
    It is quite a weird issue, and while in house we can simply wait 5 hours per machine, in the field that is not acceptable at all.
    Does anybody have any idea what to check or what to do to resolve the issue?
    Thanks in advance,
    Nikolai

    Hello Sean,
    sorry for the long delay, I was busy with the release etc. So, last Friday I managed to reproduce the case. As I said before, OOBE went smoothly until ~60% of driver installation and then drastically slowed down. It finally finished after 3 hours. I did the re-installation around 10 times in 2 days and it hung twice.
    I checked the panther folder and first found this info in setupact.log
    (here is the link to the full log http://www.webcctv.com/files/%7B70F7C68D-EF19-45F7-85EB-E3F571EFB7EA%7D/setupact.zip, it's 21 kB zipped):
    2015-02-06 17:45:34, Info                  SYSPRP Re-checking active status...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Starting...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 0, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:35, Info                  SYSPRP Status: Active
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 1, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP SPPNP: Reinstalling devices on the system...
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 2, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 3, Complete = 0, Progress = 0% (0%)
    That was OK; then, as you can see, the speed of driver installation decreased:
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 172, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:46:55, Info                  SYSPRP Status: Active
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 173, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 105, Progress = 60% (60%)
    2015-02-06 17:56:55, Info                  SYSPRP Status: Active
    2015-02-06 17:56:55, Info                  SYSPRP Status: Total = 173, Complete = 106, Progress = 61% (61%)
    2015-02-06 18:01:55, Info                  SYSPRP Status: Active
    2015-02-06 18:01:55, Info                  SYSPRP Status: Total = 173, Complete = 107, Progress = 61% (61%)
    2015-02-06 18:06:55, Info                  SYSPRP Status: Active
    2015-02-06 18:06:55, Info                  SYSPRP Status: Total = 173, Complete = 108, Progress = 62% (62%)
    2015-02-06 18:11:56, Info                  SYSPRP Status: Active
    2015-02-06 18:11:56, Info                  SYSPRP Status: Total = 173, Complete = 109, Progress = 63% (63%)
    2015-02-06 18:16:56, Info                  SYSPRP Status: Active
    2015-02-06 18:16:56, Info                  SYSPRP Status: Total = 173, Complete = 110, Progress = 63% (63%)
    and it finished ~3 hours later:
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 171, Progress = 98% (98%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 172, Progress = 99% (99%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: End
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = -1, Complete = -1, Progress = 100% (100%)
    2015-02-06 21:37:08, Info                  SYSPRP Callback canceled operation, Err = 1223!
    2015-02-06 21:37:08, Info                  SYSPRP DeviceInstallStatusThreadProc failed, Err = 1223
    2015-02-06 21:37:09, Info                  SYSPRP SPPNP: Devices reinstallation completed.
    2015-02-06 21:37:09, Info                  SYSPRP Waiting for thread to exit
    2015-02-06 21:37:09, Info                  SYSPRP Thread exited
    I checked the other files in that folder but don't see anything specific.
    Any ideas?
    Nikolai

  • Shutdown takes huge time

    Hi All,
    I am using 11.1.1.3 on Windows Server 2003. When I start the Essbase, Planning, HFR and Web Analysis related services and, after completing my work, try to shut down, it takes 1-2 hours to shut down.
    Please can anyone help me resolve this challenge.
    Thanks,

    Are you stopping the services before shutting down the machine? I know there was an issue with OpenLDAP: if you didn't stop the service, it would take a long time for the machine to shut down.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Loading in Info Cube takes huge time

    Hi,
    We are loading transactional data into the InfoCube ZFI_C01 from a DSO ZFI_O05.
    We loaded around 1.8 million records (1,776,444) through a request into the DSO ZFI_O05, which took around 13 minutes including activation of that request. (Note that the flag "SIDs Generation upon Activation" was checked, so SIDs were generated for all the characteristics in the DSO during activation.)
    When we loaded the same request into the cube ZFI_C01, the request took around 3 hours to finish.
    I ran RSRV checks for the InfoCube ZFI_C01 to find out the ratio between the fact table and the dimension tables
    (go to RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> enter the cube name; the log gives the ratio between the fact table and the dimension tables).
    I got the following results for two of the dimensions involved in the cube:
    Table /BIC/DZFI_C013 has 1564356 entries; size corresponds to 86% of the InfoCube
    Table /BIC/DZFI_C012 has 1649990 entries; size corresponds to 91% of the InfoCube
    When I checked the properties of both these dimensions, the checkboxes "High Cardinality" and "Line Item Dimension" were unchecked.
    I cannot check the "Line Item Dimension" checkbox, as both these dimensions contain more than one characteristic.
    Shall I check the "High Cardinality" checkbox and retry loading, since the ratio of the dimension table to fact table size is more than 20%?
    But I am a bit unclear what impact it will have on reporting.
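
    For reference, the ratio RSRV reports can also be approximated directly on the database; a minimal sketch, assuming standard BW naming (fact table /BIC/FZFI_C01, dimension table /BIC/DZFI_C013) and database-level access:

    -- Hypothetical dimension-to-fact ratio check; a dimension whose row
    -- count approaches the fact table's (ratio well above ~20%) is a
    -- candidate for the "High Cardinality" setting.
    SELECT d.cnt AS dim_rows,
           f.cnt AS fact_rows,
           ROUND(100 * d.cnt / f.cnt) AS ratio_pct
    FROM  (SELECT COUNT(*) AS cnt FROM "/BIC/DZFI_C013") d,
          (SELECT COUNT(*) AS cnt FROM "/BIC/FZFI_C01") f;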

    Hi there,
    Check if you have any routines with code (start routine, end routine, etc.) that could hurt load performance.
    Also check in SM50, while the load to the InfoCube runs, whether there are many reads on the table NRIV and/or on some specific dimension tables. If so, find out which number range objects belong to those dimension tables, cancel the load, go to transaction SNRO and set a buffer of 500 numbers for those number range objects, then repeat the load to the InfoCube.
    Try it and see if it helps,
    Diogo.
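
    To see which intervals a number range object currently has (and how far it has advanced), the NRIV table can be inspected directly; a minimal sketch, with the object name purely hypothetical (the real dimension number range object has to be taken from SNRO or the SM50 trace):

    -- NRIV holds one row per number range interval; NRLEVEL is the last
    -- number handed out. 'BIM0000123' below is only a placeholder.
    SELECT OBJECT, SUBOBJECT, NRRANGENR, FROMNUMBER, TONUMBER, NRLEVEL
    FROM   NRIV
    WHERE  OBJECT = 'BIM0000123';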

  • Data load to DSO takes long time to finish

    Dear All,
    We have a data load from a DataSource to a standard DSO. The load takes 5 hours to complete 6,000 records in a single data package, which is a very long time.
    The process monitor shows yellow status at one of the steps for a long time, "No message: Transformation End", and after approximately 5 hours it completes successfully.
    Please find a snapshot of the process monitor attached (Process monitor.png).
    There is an end routine, and the transformation is a direct mapping except for one target object, exchange rate, which is a master data lookup on a DSO (attached file: Transformation rule.png).
    The lookup DSO /BI0/AFIGL_DS00 in the code below has DOCNUM as a primary key, but not POSKY. Since one of the fields is not part of the primary key, a secondary index was created on the lookup DSO. But it still takes a huge amount of time to finish the last step mentioned in the snapshot.
    The setting for parallel processes is 1.
    DTP -> Update tab -> Error handling is set to "No update, no reporting". There is also an error DTP present, which I believe is of no use when "No update, no reporting" is chosen.
    Can you please suggest the reason for such a long runtime, and how to find the exact place where it consumes so much time?
    End routine Logic:
        IF NOT RESULT_PACKAGE IS INITIAL.
          REFRESH IT_FIG.
          SELECT DOCNUM  POSKY DEBCRE LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    Thanks in advance
    Regards
    Pradeep

    Hi,
    Check the code below and try the load again; the lookup table is sorted once so the READ below can use a binary search.
    IF RESULT_PACKAGE IS NOT INITIAL.
      REFRESH IT_FIG.
      SELECT DOCNUM
             POSKY
             DEBCRE
             LOCC
        FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE DOCNUM = RESULT_PACKAGE-BILNO
          AND POSKY  = '02'.
      " Sort once so the READ below can use a fast binary search
      SORT IT_FIG BY DOCNUM.
      LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
        READ TABLE IT_FIG INTO WA_FIG
             WITH KEY DOCNUM = <RESULT_FIELDS>-BILNO
             BINARY SEARCH.
        IF SY-SUBRC EQ 0.
          <RESULT_FIELDS>-DEB        = WA_FIG-DEBCRE.
          <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
        ENDIF.
      ENDLOOP.
    ENDIF.
    If you are still getting an error, please let us know.
    1. Decrease the data package size in the DTP, e.g. to 10,000 or 20,000.
    2. Increase the number of parallel processes at DTP level.
    Thanks,
    Phani.
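
    Since the lookup filters on both DOCNUM and POSKY, the secondary index on the lookup DSO only helps if it covers both columns. A hypothetical database-level definition of such an index (the name and its ~Z01 suffix are placeholders; in practice it would be created via SE11 on the active table):

    -- Hypothetical covering index for the /BI0/AFIGL_DS00 lookup.
    CREATE INDEX "/BI0/AFIGL_DS00~Z01"
      ON "/BI0/AFIGL_DS00" (DOCNUM, POSKY);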

  • UPDATE proc taking HUGE TIME

    Hi
    An Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
      -- Declarations assumed; they were not shown in the original post.
      v_c     NUMBER := 0;
      err_num NUMBER;
      err_msg VARCHAR2(100);
      CURSOR C1 IS
        SELECT tim.c_col, mp.t_n
        FROM   Materialized_VW tim, MP_Table mp
        WHERE  tim.R_id = mp.R_id
        AND    tim.P_id = mp.P_id
        AND    tim.t_id = mp.t_id
        AND    mp.t_date BETWEEN wk_comm AND wk_end; -- wk_comm/wk_end defined elsewhere
    BEGIN
      FOR i IN C1
      LOOP
        IF v_c = 100000 THEN
          v_c := 0;
          COMMIT; -- intermediate commit every 100,000 rows
        END IF;
        v_c := v_c + 1;
        UPDATE MP_Table mp
        SET    mp.c_col = i.c_col
        WHERE  mp.t_n   = i.t_n;
      END LOOP;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        err_num := SQLCODE;
        err_msg := SUBSTR(SQLERRM, 1, 100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW: it has 4 separate indexes, on the columns R_id, P_id, t_id, c_col.
    MP_Table: it has 4 separate indexes, on the columns R_id, P_id, t_id, t_n.
    The explain plan shows (whenever the number of records is larger):
    SELECT STATEMENT ALL_ROWS Cost: 17,542 Bytes: 67 Cardinality: 1
      3 HASH JOIN Cost: 17,542 Bytes: 67 Cardinality: 1
        1 TABLE ACCESS FULL MP_TABLE Cost: 14 Bytes: 111,645 Cardinality: 4,135
        2 TABLE ACCESS FULL MATERIALIZED_VW Cost: 16,957 Bytes: 178,668,800 Cardinality: 4,466,720
    The explain plan shows (whenever the number of records is smaller):
    SELECT STATEMENT ALL_ROWS Cost: 2,228 Bytes: 67 Cardinality: 1
      6 NESTED LOOPS Cost: 2,228 Bytes: 67 Cardinality: 1
        1 TABLE ACCESS FULL MP_TABLE Cost: 3 Bytes: 12,015 Cardinality: 445
        5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW Cost: 2,228 Bytes: 40 Cardinality: 1
          4 AND-EQUAL
            2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
            3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This intermittent switching of the explain plan is what makes the procedure take a huge amount of time whenever the number of records is larger.
    This strange behaviour is causing problems, as 10 hours is too much for any UPDATE (especially when the number of records is only a 6-digit number).
    But we cannot use a direct UPDATE either, as that would result in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above, ASAP.
    Also, is there any way to establish consistent plan behaviour that takes less time?
    Thanks
    Arnab
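
    A set-based rewrite usually removes the row-by-row overhead entirely; a minimal sketch, not from the thread, assuming t_n is unique in the driving query (MERGE raises ORA-30926 otherwise) and reusing Arnab's table and column names, with wk_comm/wk_end passed as binds:
    /**********************CODE***************************/
    -- Hypothetical single-statement alternative: one MERGE instead of a
    -- cursor loop with per-row UPDATEs and intermediate commits.
    MERGE INTO MP_Table mp
    USING (SELECT tim.c_col, mp2.t_n
           FROM   Materialized_VW tim, MP_Table mp2
           WHERE  tim.R_id = mp2.R_id
           AND    tim.P_id = mp2.P_id
           AND    tim.t_id = mp2.t_id
           AND    mp2.t_date BETWEEN :wk_comm AND :wk_end) src
    ON    (mp.t_n = src.t_n)
    WHEN MATCHED THEN UPDATE SET mp.c_col = src.c_col;
    COMMIT;
    /**********************CODE***************************/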

    Hi BluShadow,
    I followed up on your example, extending it to bulk processing.
    I tested both insert and update operations.
    Here are the insert results:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_insert NUMBER;
      4    TYPE t_nt_x IS TABLE OF NUMBER;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rownum as x, 'test1' as z FROM DUAL CONNECT BY ROWNUM <= 1000000;
      9  BEGIN
    10 
    11    -- Single insert
    12    v_insert := 0;
    13    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    14    v_sysdate := SYSDATE;
    15    INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
    16    v_insert := SQL%ROWCOUNT;
    17    COMMIT;
    18    DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    19 
    20    -- Multi insert
    21    v_insert := 0;
    22    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    23    v_sysdate := SYSDATE;
    24    FOR i IN 1..1000000
    25    LOOP
    26      INSERT INTO mytable (x,z) VALUES (i,'test1');
    27      v_insert := v_insert+SQL%ROWCOUNT;
    28    END LOOP;
    29    COMMIT;
    30    DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    31 
    32    -- Multi insert using bulk
    33    v_insert := 0;
    34    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    35    v_sysdate := SYSDATE;
    36    OPEN c1;
    37    LOOP
    38      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    39      EXIT WHEN C1%NOTFOUND;
    40      FORALL i IN 1..v_nt_x.count
    41        INSERT INTO mytable (x,z) VALUES (v_nt_x(i),v_nt_z(i));
    42        v_insert := v_insert+SQL%ROWCOUNT;
    43    END LOOP;
    44    COMMIT;
    45    DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    46 
    47  END;
    48  /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.
    And here are the update results:
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_update NUMBER;
      4    TYPE t_nt_x IS TABLE OF ROWID;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rowid as ri, 'test4' as z FROM mytable;
      9  BEGIN
    10 
    11    -- Single update
    12    v_update := 0;
    13    v_sysdate := SYSDATE;
    14    UPDATE mytable SET z='test2';
    15    v_update := SQL%ROWCOUNT;
    16    COMMIT;
    17    DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    18 
    19    -- Multi update
    20    v_update := 0;
    21    v_sysdate := SYSDATE;
    22    FOR rec IN (SELECT ROWID AS ri FROM mytable)
    23    LOOP
    24      UPDATE mytable SET z='test3' WHERE ROWID=rec.ri;
    25      v_update := v_update+SQL%ROWCOUNT;
    26    END LOOP;
    27    COMMIT;
    28    DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    29 
    30    -- Multi update using bulk
    31    v_update := 0;
    32    v_sysdate := SYSDATE;
    33    OPEN c1;
    34    LOOP
    35      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    36      EXIT WHEN C1%NOTFOUND;
    37      FORALL i IN 1..v_nt_x.count
    38        UPDATE mytable SET z=v_nt_z(i) WHERE ROWID=v_nt_x(i);
    39        v_update := v_update+SQL%ROWCOUNT;
    40    END LOOP;
    41    COMMIT;
    42    DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    43 
    44  END;
    45  /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.
    The single statement still has the better performance, but with bulk processing the cursor approach has improved dramatically (in the update case the bulk processing is even slightly faster than the single statement).
    I guess that with bulk processing there is far less switching between the SQL and PL/SQL engines.
    It would be interesting to test it with more rows; I might do it tomorrow.
    Just thought it would be interesting to share the results with you guys.
    Cheers,
    Davide
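
    One caveat on the bulk loops above, worth noting: with FETCH ... BULK COLLECT ... LIMIT, testing c1%NOTFOUND immediately after the fetch exits before a final partial batch is processed; it is harmless here only because 1,000,000 divides evenly by the LIMIT of 100,000. A minimal sketch of the safer pattern, using the same names as in the update test:

    OPEN c1;
    LOOP
      FETCH c1 BULK COLLECT INTO v_nt_x, v_nt_z LIMIT 100000;
      EXIT WHEN v_nt_x.COUNT = 0;  -- exit only when nothing was fetched
      FORALL i IN 1..v_nt_x.COUNT
        UPDATE mytable SET z = v_nt_z(i) WHERE ROWID = v_nt_x(i);
      v_update := v_update + SQL%ROWCOUNT;
    END LOOP;
    CLOSE c1;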

  • Taking huge time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table takes a huge amount of time and throws a TIME_OUT dump.
    I expect this table has more than a million entries. Is there any alternative way to find out the number of entries?
    We are selecting the data from CDHDR with the following conditions:
    Objectclas - classify
    Udate - a given date
    Utime - a given time (even a selection of 1 minute)
    We also tried to create an index on the UDATE field, but that also takes a huge amount of time (more than 6 hours, and still incomplete).
    Can you suggest any alternative way to find the entries?
    Regards,
    VS

    Hello,
    in SE16, on the initial screen, enter your selection criteria and run the count in the background, creating a spool request:
    SE16 -> Table Contents, enter the selection criteria, then Program -> Execute in Background.
    Best regards,
    Peter
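
    If database-level access is available, the same count can also be run directly in SQL, where it is easy to see that only an index covering (OBJECTCLAS, UDATE) will make it fast; a minimal sketch with placeholder values:

    -- Hypothetical direct count against CDHDR; OBJECTCLAS, UDATE and
    -- UTIME are the actual column names, the literals are placeholders.
    SELECT COUNT(*)
    FROM   CDHDR
    WHERE  OBJECTCLAS = 'CLASSIFY'
    AND    UDATE      = '20150206'
    AND    UTIME BETWEEN '120000' AND '120100';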

  • Data loading takes a long time

    Hi All,
    Good morning. I am new to SDN.
    Currently I am using the DataSource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 of those fields in my report.
    Can I hide the remaining fields at the DataSource level itself (transaction RSA6)?
    Currently the data load takes a long time to load the data from the PSA to the ODS (ODS 1).
    I am also pulling some data from another ODS (ODS 2) via a lookup. It takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the data loading performance in this case?
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them: just check the Hide box for those fields. Are you on BI 7.0 or BW 3.x? Either way, is the number of records huge?
    If so, you can split the records and execute, i.e. use the same InfoPackage but execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. Then go to SM37 and check whether any long-running job exists and whether that job is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21 and short dumps in ST22.
    To improve performance, you can try to increase the virtual memory or the number of servers if possible; that will increase the number of work processes, since when many jobs run at a time there may be no free work processes left to proceed.
    Regards,
    Debjani

  • BPM Process chain takes long time to process

    We have BI7, NetWeaver 2004s on Oracle and Sun Solaris.
    There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs, and it should ideally complete before or around 0830 hrs. Now the problem is that every alternate day this chain behaves normally and completes well before 0830 hrs, but every other day it fails... There are almost 40 chains running daily; some are event-triggered (dependent on each other) and some run in parallel. In this BPM process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues, with very few records transferred. The first full upload runs from 0034 hrs to approximately 0130 hrs and the second from 0130 hrs to 0230 hrs. If the first upload gets delayed, the people initiating these chains stop the second full upload and continue it after all the process chains are completed. This entire BPM process chain sometimes takes 17-18 hrs to complete!
    No other loads run in CRM or BW while these process chains are running.
    CRM has background jobs to push IDocs to BW which run every 2 minutes and complete successfully.
    Yesterday this chain completed successfully (well within the stipulated time) with over 3,300,000 records transferred, but sometimes it has failed to transfer even 1,200,000 records!
    I am attaching a zip file; please refer to "21 to 26 Analysis screen shot.doc" in it.
    Also within the zip file is "Normal timings of daily process chains.xls"; the name explains its contents.
    Also within the zip file, see "BPM Infoprovider and data source screen shot.doc": the InfoPackage (page 2) which was used in the process chain is not displayed later, on page 6, BUT THE CHAIN COMPLETED SUCCESSFULLY.
    We have analyzed:
    1) The PSA data for the BPM process chain for the past few days
    2) The InfoProviders for the BPM process chain for the past few days
    3) The ODS entries for the BPM process chain for the past few days
    4) The point of failure of the BPM process chain for the past few days
    5) The overall performance of all the process chains for the past few days
    6) The number of requests in BW for this process chain
    7) The load on the CRM system for the past few days when this process chain ran on the BW system
    As per our analysis, there are a couple of things which can be fixed in the BW system:
    1) The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 specifies, for both message types RSSEND and RSINFO, "collect IDocs" and pack size = 1. Since pack size = 1 generates one tRFC call per IDoc, it should be changed to 10 so that fewer tRFC calls are generated, meaning less overhead for the BW server and an increase in performance.
    2) In the definition of the destination for the concerned RFC in BW (SM59), the "Technical Settings" tab has the "Load balancing" option set to "No". We are planning to set it to "Yes".
    But we believe that though these changes will bring some increase in performance, they are not the root cause of the abnormal behaviour of this chain, as the chain runs successfully every alternate day with approximately the same load.
    I was not able to attach the many screenshots or the other information I gathered during my analysis. Please advise how I can attach these files.
    Best Regards,

    Hi,
    Normally, index creation or deletion can take a long time when the database statistics are not up to date. So, after your data load has completed and index generation is done, check the statistics and re-create the database statistics if needed.
    Then try again and re-check.
    Regards,
    Satya

  • The Application Takes long time to Start

    Hello All,
    We are on Unix, 64-bit, Essbase 11.1.1.3.
    Problem description: the application takes long to start up, around 5 to 6 minutes. This is the very first time this has happened.
    There were no specific changes made to the application in the recent releases.
    I have tried all the options: 1. compacting the outline, 2. purging the application log, etc. All other applications respond well on this host except this one. Usually an application should not take more than 1 to 2 minutes to start up.
    There are no specific errors or XCP files recorded in the logs and folders.
    I appreciate your suggestions.
    MS

    Thanks Jitendra and Prabhas,
    I know I posted this thread some time back; later I had to jump onto a new release, so I did not get time to check your inputs.
    Well, I am back on this issue again. I have been working on various options to get this issue solved: "start of app takes long time".
    Here are some details. We are on SunOS 64-bit, 12 CPUs with dual cores, with Essbase 11.1.1.3 running on it. This is an ASO application with just 7 dimensions, of which the ORGANIZATION dimension is pretty huge, with multiple hierarchies enabled (both stored and dynamic) and more than 2,000,000 members including the alternate hierarchies (shared members).
    I did a smoke test by building the outline dimension by dimension: the app was starting up in just 40 seconds, until I reached the ORG dimension and added more than 70,000 members; there it falls sick, and the app goes back to its old behaviour (takes more than 10 minutes to start).
    CPU usage ranges between 3.1% and 4%:
    PID USER NLWP PRI NI VSZ RSS S STIME ELAPSED %CPU COMMAND
    4424 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep COMMAND
    4428 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep ESS
    4766 user1 88 55 20 6814168 5684200 O 17:37:48 35:45 3.1 /path/xyz/masked/ASO_APP hgfedc NOCREAT.
    But my question here is: last month the cube had a similar number of members, and nothing had really changed.
    Essbase gurus, please give me some hints to think outside the box now.
    Thanks
    MS

  • Setting ECHO ON takes more time?

    Hi all,
    Recently we had to run a huge file of INSERTs in the production database. But before that, when the same file was run in the test database with ECHO set to ON in SQL*Plus, it took more time; the difference was huge, in fact. I wish to know if setting ECHO to ON takes more time than setting ECHO to OFF. Does this have an effect on the time it takes to run the INSERTs?
    Regards,
    ...

    Yingkuan,
    Thanks for the reply. In fact, I know what ECHO does. Now suppose I have 121,000 lines of INSERT statements in a file called "inserts.sql" and I am going to execute it in SQL*Plus against a remote server, the server being 9.2.0.8.0. Will there be a time difference in completing the script if I set ECHO to ON versus setting it to OFF? Consider the following scenarios:
    Scenario 1
    ========
    SQL> SET ECHO ON;
    SQL> @inserts.sql;
    Elapsed: 02:00:00.00
    Scenario 2
    ========
    SQL> SET ECHO OFF;
    SQL> @inserts.sql;
    Elapsed: 01:00:00.00
    Please note the "Elapsed" time between the two scenarios. Will the ECHO setting impact the elapsed time? I would think this setting should not cause the file to take longer to complete, as it is just a client-side setting. Please clarify.
    Regards,
    ...
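
    For what it's worth, ECHO and the other client-side display settings do cost real wall-clock time on large scripts, because SQL*Plus must render every echoed line to the terminal. A common way to time such a script quietly, using only standard SQL*Plus settings (file names as in the thread):

    -- Typical "quiet" settings for running a large INSERT script;
    -- suppressing terminal output removes most of the client-side overhead.
    SET ECHO OFF
    SET FEEDBACK OFF
    SET TERMOUT OFF
    SET TIMING ON
    SPOOL inserts.log
    @inserts.sql
    SPOOL OFF
    SET TERMOUT ON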

  • Oracle coherence first read/write operation take more time

    I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions, for writes to any local, distributed or near cache, the first read/write operation takes more time compared to the subsequent consecutive read/write operations. Is this because of bootstrap operations happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting the Coherence cache.

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that microsecond-level performance on a fully configured Coherence cluster running across multiple machines and going via proxies for C++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nanosecond/microsecond-level processing. There are much better product stacks out there for that type of processing, if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast, literally as fast as the processor in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out-gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching) can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirement of "1 microsecond for 100000 data collection" on a continuous basis when fully configured in a cluster.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)

  • Problem: Export to Excel takes a long time and more space

    Hello All,
    When we export reports to Excel in the portal, it takes a long time and the files take a lot of space.
    How can we overcome this problem? If anyone knows, please provide a proper solution for these issues.
    Regards,
    Ch.

    Hi Chetans,
    I have had the same problem, and I had to create an OSS message to SAP in order to solve this issue; until now I don't have an answer. They made me change a lot of configuration in Java and BW, without luck. To tell you more, when we try to export to Excel, the Java instance restarts automatically.
    We have this problem when we try to export a huge quantity of data, so I found a note which describes the limitations of exporting to Excel. Pay special attention to the Performance section.
    Note 1178857
    https://service.sap.com/sap/support/notes/1178857
    I recommend that you create a message to SAP. If you find a solution, please let me know.
    Regards, Federico

  • When starting up a Mountain Lion iMac, it takes a long time before network logins are available

    When starting up a Mountain Lion iMac, it takes a long time before network logins are available. There seems to be a huge delay getting the network up and running, and I'm not sure why. This isn't an issue for identical iMacs running Snow Leopard.
    If I log in with a local account, I see the Ethernet registered as disconnected for at least a minute, then it pops up. If I switch to Snow Leopard, using the exact same hardware, network cable, jack, etc., it's instant.
    Has anybody encountered anything similar?

    Fulcrum Media wrote:
    Has anybody encountered anything similar?
    No. I have a rMBP and a Mac Mini on a home LAN. The Mac Mini is connected via ethernet and the rMBP is WiFi. Both connect instantly to my network. Both are running OS X Mountain Lion 10.8.2.
