Huge time

Windows
10.2.0.4
ASM
We started the data upload yesterday; it has been running for almost 48 hours and has still not finished.
The data size is only 855 MB.
How can I find out exactly where it went wrong?
What can I check from the database end?

SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
NLS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Current log# 101 seq# 70888 mem# 1: +FRA01/osondema/onlinelog/group_101.256.687377391
Thu Nov 17 04:58:53 2011
Thread 1 cannot allocate new log, sequence 70889
Checkpoint not complete
Current log# 101 seq# 70888 mem# 0: +DATA01/osondema/onlinelog/group_101.351.687377391
Current log# 101 seq# 70888 mem# 1: +FRA01/osondema/onlinelog/group_101.256.687377391
Thu Nov 17 04:58:55 2011
Thread 1 advanced to log sequence 70889 (LGWR switch)
Current log# 102 seq# 70889 mem# 0: +DATA01/osondema/onlinelog/group_102.352.687377391
Current log# 102 seq# 70889 mem# 1: +FRA01/osondema/onlinelog/group_102.259.687377393
Thu Nov 17 04:59:08 2011
Thread 1 cannot allocate new log, sequence 70890
Checkpoint not complete
Current log# 102 seq# 70889 mem# 0: +DATA01/osondema/onlinelog/group_102.352.687377391
Current log# 102 seq# 70889 mem# 1: +FRA01/osondema/onlinelog/group_102.259.687377393
Thu Nov 17 04:59:11 2011
Thread 1 advanced to log sequence 70890 (LGWR switch)
Current log# 103 seq# 70890 mem# 0: +DATA01/osondema/onlinelog/group_103.353.687377393
Current log# 103 seq# 70890 mem# 1: +FRA01/osondema/onlinelog/group_103.260.687377393
Thu Nov 17 04:59:24 2011
Thread 1 advanced to log sequence 70891 (LGWR switch)
Current log# 101 seq# 70891 mem# 0: +DATA01/osondema/onlinelog/group_101.351.687377391
Current log# 101 seq# 70891 mem# 1: +FRA01/osondema/onlinelog/group_101.256.687377391
Thu Nov 17 04:59:39 2011
Thread 1 cannot allocate new log, sequence 70892
Checkpoint not complete
Current log# 101 seq# 70891 mem# 0: +DATA01/osondema/onlinelog/group_101.351.687377391
Current log# 101 seq# 70891 mem# 1: +FRA01/osondema/onlinelog/group_101.256.687377391
Thu Nov 17 04:59:41 2011
Thread 1 advanced to log sequence 70892 (LGWR switch)
Current log# 102 seq# 70892 mem# 0: +DATA01/osondema/onlinelog/group_102.352.687377391
Current log# 102 seq# 70892 mem# 1: +FRA01/osondema/onlinelog/group_102.259.687377393
Thu Nov 17 04:59:54 2011
Thread 1 cannot allocate new log, sequence 70893
Checkpoint not complete
Current log# 102 seq# 70892 mem# 0: +DATA01/osondema/onlinelog/group_102.352.687377391
Current log# 102 seq# 70892 mem# 1: +FRA01/osondema/onlinelog/group_102.259.687377393
Thu Nov 17 04:59:56 2011
Thread 1 advanced to log sequence 70893 (LGWR switch)
Current log# 103 seq# 70893 mem# 0: +DATA01/osondema/onlinelog/group_103.353.687377393
Current log# 103 seq# 70893 mem# 1: +FRA01/osondema/onlinelog/group_103.260.687377393
Please let me know any queries I can run to find out why it is slow.
Or can you tell me roughly how long a data upload like this should take? Any estimate?
Thanks for your help.
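
The "Checkpoint not complete" / "cannot allocate new log" messages in the alert log above usually mean the online redo logs are too small (or too few) for the redo rate of the load, so each log switch has to wait for the previous checkpoint to finish. As a starting point, here is a minimal sketch of queries, assuming you can query the standard v$log, v$log_history and v$session_wait views as a privileged user; the :upload_sid bind is a placeholder for the SID of the upload session:

-- Current online redo log groups, their sizes and states
SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
FROM   v$log
ORDER  BY group#;

-- Log switches per hour over the last day (dozens per hour usually means the logs are far too small)
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

-- What the upload session is currently waiting on (:upload_sid is a placeholder)
SELECT sid, event, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :upload_sid;

If the switch rate turns out to be very high, the usual remedy is to add more and/or larger online redo log groups so checkpoints can complete before a log has to be reused; the exact sizing is a DBA decision, not a fixed recipe.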

Similar Messages

  • ODS takes huge time

    Hi guys,
    The data lookup in the ODS is taking a huge amount of time. It was working fine a week back; the ODS has around 200 million records.
    Thanks,
    Your help will be greatly appreciated

    For two records, it should not take that much time.
    Check the overall system performance as well; sometimes Basis may be running backup activities, which can also affect system performance.
    Check whether this is happening to everyone in your project, and also read the content directly from the ODS (rather than through LISTCUBE) for comparison.
    Also check how heavily your system is loaded.
    Thanks,
    Hope this helps

  • Central confirmation is taking huge time for particular users in SRM

    Hi Gurus.
    I am facing an issue in the production system. For some users, central confirmation is taking a huge amount of time;
    around 10 users have reported the issue so far, and it takes about 10 times longer than usual. Any suggestions would be a great help. Has anyone else faced this issue?

    Hi Prabhakar,
    As Konstantin rightly mentioned, kindly check those BAdI implementations, especially BBP_WF_LIST. In addition, please check whether you are getting a dump such as the one below:
    TSV_TNEW_PAGE_ALLOC_FAILED
    Best Regards,
    Bharathi

  • UPDATE proc taking HUGE TIME

    Hi
    An Oracle UPDATE procedure is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
    CURSOR C1 IS
    select tim.c_col,mp.t_n
    from Materialized_VW tim, MP_Table mp
    where tim.R_id = mp.R_id
    and tim.P_id = mp.P_id
    and tim.t_id = mp.t_id
    and mp.t_date between wk_comm and wk_end;
    BEGIN
    FOR I IN C1
    LOOP
    IF v_c=100000 THEN
    v_c:=0;
    COMMIT;
    END IF;
    v_c:=v_c+1;
    UPDATE MP_Table mp
    SET c_col = i.c_col
    WHERE mp.t_n = i.t_n;
    END LOOP;
    COMMIT;
    EXCEPTION
    WHEN OTHERS THEN
    ROLLBACK;
    err_num := SQLCODE;
    err_msg := SUBSTR(SQLERRM,1,100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
    MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
    The explain plan shows (NOTE: whenever the number of records is large):
    SELECT STATEMENT ALL_ROWS
    Cost: 17,542 Bytes: 67 Cardinality: 1
    3 HASH JOIN
    Cost: 17,542 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE
    Cost: 14 Bytes: 111,645 Cardinality: 4,135
    2 TABLE ACCESS FULL MATERIALIZED_VW
    Cost: 16,957 Bytes: 178,668,800 Cardinality: 4,466,720
    The explain plan shows (NOTE: whenever the number of records is small):
    SELECT STATEMENT ALL_ROWS
    Cost: 2,228 Bytes: 67 Cardinality: 1
    6 NESTED LOOPS Cost: 2,228 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE Cost: 3 Bytes: 12,015 Cardinality: 445
    5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW Cost: 2,228 Bytes: 40 Cardinality: 1
    4 AND-EQUAL
    2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
    3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This intermittent change in the execution plan causes the procedure to take a huge amount of time whenever the number of records is large.
    This strange behaviour is causing problems, as 10 hours is too long for any UPDATE (especially when the record count is only a six-digit number).
    However, we cannot use a direct single-statement UPDATE either, as that would result in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above.
    Also, is there any way to establish consistent behaviour that takes less time?
    Thanks
    Arnab

    Hi BluShadow,
    I followed up on your example, extending it to bulk processing.
    I have tested both insert and update operations.
    Here are the insert results:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_insert NUMBER;
      4    TYPE t_nt_x IS TABLE OF NUMBER;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rownum as x, 'test1' as z FROM DUAL CONNECT BY ROWNUM <= 1000000;
      9  BEGIN
    10 
    11    -- Single insert
    12    v_insert := 0;
    13    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    14    v_sysdate := SYSDATE;
    15    INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
    16    v_insert := SQL%ROWCOUNT;
    17    COMMIT;
    18    DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    19 
    20    -- Multi insert
    21    v_insert := 0;
    22    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    23    v_sysdate := SYSDATE;
    24    FOR i IN 1..1000000
    25    LOOP
    26      INSERT INTO mytable (x,z) VALUES (i,'test1');
    27      v_insert := v_insert+SQL%ROWCOUNT;
    28    END LOOP;
    29    COMMIT;
    30    DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    31 
    32    -- Multi insert using bulk
    33    v_insert := 0;
    34    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    35    v_sysdate := SYSDATE;
    36    OPEN c1;
    37    LOOP
    38      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    39      EXIT WHEN C1%NOTFOUND;
    40      FORALL i IN 1..v_nt_x.count
    41        INSERT INTO mytable (x,z) VALUES (v_nt_x(i),v_nt_z(i));
    42        v_insert := v_insert+SQL%ROWCOUNT;
    43    END LOOP;
    44    COMMIT;
    45    DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    46 
    47  END;
    48  /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.
    And here are the update results:
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_update NUMBER;
      4    TYPE t_nt_x IS TABLE OF ROWID;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rowid as ri, 'test4' as z FROM mytable;
      9  BEGIN
    10 
    11    -- Single update
    12    v_update := 0;
    13    v_sysdate := SYSDATE;
    14    UPDATE mytable SET z='test2';
    15    v_update := SQL%ROWCOUNT;
    16    COMMIT;
    17    DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    18 
    19    -- Multi update
    20    v_update := 0;
    21    v_sysdate := SYSDATE;
    22    FOR rec IN (SELECT ROWID AS ri FROM mytable)
    23    LOOP
    24      UPDATE mytable SET z='test3' WHERE ROWID=rec.ri;
    25      v_update := v_update+SQL%ROWCOUNT;
    26    END LOOP;
    27    COMMIT;
    28    DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    29 
    30    -- Multi update using bulk
    31    v_update := 0;
    32    v_sysdate := SYSDATE;
    33    OPEN c1;
    34    LOOP
    35      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    36      EXIT WHEN C1%NOTFOUND;
    37      FORALL i IN 1..v_nt_x.count
    38        UPDATE mytable SET z=v_nt_z(i) WHERE ROWID=v_nt_x(i);
    39        v_update := v_update+SQL%ROWCOUNT;
    40    END LOOP;
    41    COMMIT;
    42    DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    43 
    44  END;
    45  /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.
    The single statement still has the better performance, but with bulk processing the cursor-based approach has improved dramatically
    (in the update case the bulk processing is even slightly better than the single statement).
    I guess that with bulk processing there is much less switching between the SQL and PL/SQL engines.
    It would be interesting to test it with more rows; I might do it tomorrow.
    Just thought it would have been interesting sharing the result with you guys.
    Cheers,
    Davide
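
    For comparison, the row-by-row update in the original procedure can often be collapsed into a single MERGE statement, which avoids both the per-row context switching and the commit-inside-loop logic. This is only a minimal sketch: it reuses the table and column names from the original post, assumes wk_comm and wk_end are supplied as bind values, and assumes the join returns at most one row per t_n (otherwise MERGE raises ORA-30926):

    -- Single-statement alternative to the cursor loop in Update_SP (sketch, not tested against the real schema)
    MERGE INTO mp_table mp
    USING (SELECT tim.c_col, src_mp.t_n
           FROM   materialized_vw tim,
                  mp_table        src_mp
           WHERE  tim.r_id = src_mp.r_id
           AND    tim.p_id = src_mp.p_id
           AND    tim.t_id = src_mp.t_id
           AND    src_mp.t_date BETWEEN :wk_comm AND :wk_end) src
    ON    (mp.t_n = src.t_n)
    WHEN MATCHED THEN
      UPDATE SET mp.c_col = src.c_col;
    COMMIT;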

  • Taking huge time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table takes a huge amount of time and ends in a TIME_OUT dump.
    I believe this table has more than a million entries. Is there any alternative way to find the number of entries?
    We are selecting the data from CDHDR with the following conditions:
    Objclass - classify
    Udate     - 'X' date
    Utime     - 'X' (even when selecting a 1-minute range)
    We also tried to create an index on the UDATE field, but that also takes a huge amount of time (more than 6 hours and still incomplete).
    Can you suggest any alternative way to find the entries?
    Regards,
    VS

    Hello,
    In SE16, on the initial selection screen, enter your selection criteria; you can then run it as a background job and create a spool request.
    SE16 > Table Contents, enter the selection criteria, and then Program > Execute in Background.
    Best regards,
    Peter
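
    As a side note, the TIME_OUT dump comes from the dialog work process time limit, so besides running SE16 as a background job, the entry count can also be taken directly at the database level, where no dialog timeout applies. A minimal sketch, assuming an Oracle-based system, the standard CDHDR columns (MANDT, OBJECTCLAS, UDATE, UTIME) and placeholder bind values; without a suitable index on OBJECTCLAS/UDATE this will still do a full scan, just without the ABAP timeout:

    -- Count change-document headers for one object class and date (bind values are placeholders)
    SELECT COUNT(*)
    FROM   cdhdr
    WHERE  mandt      = :client
    AND    objectclas = :object_class
    AND    udate      = :change_date;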

  • 10.5.2 huge Time Machine backup

    Has anybody else noticed that the first backup Time Machine does after installing the 10.5.2 and graphics updates is huge? Mine was 15.7 GB, though I hadn't changed anything else on my system other than installing the update.
    I thought that it might just be replacing the 10.5.1 system files with the new 10.5.2 files and then delete the old ones, but it has simply taken a 15.7 GB chunk out of my backup drive! How annoying...

    It will take a significant amount of time to perform the first TM backup after the update as it touches a large number of files.
    From my experience, the preparing phase will take well over 15-20 minutes on the fastest machines - it took around 20 on my Quad G5 and around the same on my MBP C2D - and the resulting backup will generally be well over 1 GB in size.
    In general, if you can just walk away for at least 90 minutes, the backup should be done when you return.

  • Huge time taken for insertion

    Hi,
    We have a framework where we can run inserts in bulk (we build a set of statements and insert them as a batch).
    There is a scenario where we need to insert a product into a product table, preceded by some SELECT and INSERT statements.
    The problem is that the first set of INSERT statements is 5,000 in number and the second set is 45,000.
    This is taking a huge amount of time: the first set (5,000) takes somewhere around 5-7 minutes, whereas the second set takes more than 7 hours.
    Could someone help explain why it is taking so long? Is the reason the indexes on both tables, or is there some other problem as well?
    Regards

    Indexes seem a most unlikely reason for this. I have some suspicions, but I need more details...
    1.    Exact TimesTen version (output of ttVersion command).
    2.    All DSN attributes (from sys.odbc.ini file) for the database in question.
    3.    Output of 'ttAdmin -query DSN' for the DSN in question.
    4.    Please describe in fine detail how your INSERT program is working (down to the individual SQL statement and API call) and particularly when it connects to, and disconnects from the database.
    Thanks,
    Chris

  • Huge Time Machine Back Up

    I have been running Lion on my mid-2009 15" MacBook Pro for about three weeks.  Time Machine was able to find my previous back up volume on an external HD and keep on recording to it. 
    This morning I saw a 61.3 GB back up in progress!  I do not use this machine all that much, just when I travel, but I recently upgraded to 10.7.2.  I tend to hook up the external drive to it for back up about once a week or so since this is not my primary work machine and I do not create much content on it.   Therefore I am a bit at a loss to understand why this backup is so large.   My entire boot volume on this MBP is only 66GB. 
    Perhaps 10.7.2 reindexed the drive and the entire drive is being recorded this time. Perhaps it is transferring the internally stored snapshots that Time Machine takes on MacBook Pro machines. My last backup for this MBP was 10 days ago, but other than updating software I have not done much with it in that time.
    Has anyone else experienced a very large Time Machine back up after upgrading to 10.7.2? 
    Thanks in advance for your responses. 
    Jay

    Jay Kappmeier wrote:
    This morning I saw a 61.3 GB back up in progress!
    Check Time Machine Troubleshooting for help in identifying why that backup was so big.
    I recently upgraded to 10.7.2.
    If you installed the "combo" updater that would change a lot of files.
    I tend to hook up the external drive to it for back up about once a week or so
    I may be mistaken, but I believe a long interval between backups can cause Time Machine to do a full backup.
    Perhaps 10.7.2 reindexed the drive
    That wouldn't cause Time Machine backups to be larger.

  • Aperture 3.0.2 image export takes a huge time

    Hi everyone!
    Yesterday I postprocessed 12 RAWs of portraits from 5D mk2 using similar techniques for each photo:
    - exposure +20
    - brightness +7
    - vibrancy +10
    - levels on eyes
    - skin retouching/cloning ~150-250 brush strokes
    - skin smoothing tool
    - edge sharpening
    - vignetting
    I was so impressed with the results on each processed photo that I realised my Photoshop days are behind me.
    After that I started exporting all 12 photos to JPEG and it took 2 hours! That is an unbelievable price for the good retouching tools that appeared in A3!
    Has anyone else suffered the same way?

    It's likely to be the skin smoothing: this adjustment is extremely intensive to render, and images that use it can take a long time.
    I guess it's the price to pay for real-time rendering on export and non-destructive editing.
    I've found I can reduce this time if I make sure I export with nothing else open, and preferably after a reboot, so Aperture has the maximum amount of available RAM and nothing else is eating CPU cycles.
    YMMV, but that works for me. If I'm going to be exporting a large number of images where skin smoothing has been used, I tend to start it and walk away for a while! A bit of extra rendering time is worth it for me for the power of the tools I have at my disposal!

  • WebIntelligence - Refresh taking huge time

    Hi Team,
    I have an issue in BI Launch Pad. I created a WebI report using BEx as a source and enabled the 'Refresh on open' option. The issue is that when a user opens the report, it takes a long time to display the prompt screen (approximately 30 minutes), and we have only one prompt variable.
    But when I create the same report in Advanced Analysis for OLAP, it runs fast in BI Launch Pad.
    Is there any option to resolve this issue? Please advise.
    Awaiting your reply,
    Thanks in Advance..
    Krishna.

    Hi Mahesh,
    Please go through this once.
    Add two new registry entries (string values) under the following keys:
    For the 32-bit Rich Client: [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
    For the 64-bit Web Intelligence server: [HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\default\WebIntelligence\Calculator]
    "BICSLOVChunkSize"="4000"
    "BICSResultChunkSize"="100000"
    BICSLOVChunkSize is defined as a string. It groups the list of values into chunks of at most the given size. The bigger the number, the more likely the retrieval of the list of values will crash, so consider using a value between 100 and 5000. If you have 1000 values in your database and set BICSLOVChunkSize to 200, you will see a hierarchical view of 5 groups, each containing 200 values.
    BICSResultChunkSize is defined as a string. It limits the number of results that can be retrieved for a list of values. If a list of values contains 1 million entries and you set BICSResultChunkSize to 1000, only the first 1000 rows will be retrieved, so you will get a partial result set for the list of values.
    Regards,
    Krishna.K

  • First WES7 start takes huge time on some machines

    Hi,
    we have a WES7 image that resembles full-blown Windows 7. We don't use EWF/HORM or other embedded features. Everything is OK, except that on recent HP desktop units the first boot on some machines takes ages (literally, 4-5 hours to complete!).
    Windows boots normally and then sits on the 'Setup is installing devices 69%' screen for a very long time. It doesn't hang, as the numbers keep counting (up to 75% or so), and afterwards it works perfectly. The
    disk is not loaded during the wait, and probably not the CPU either; at least the cooler is not running fast.
    Out of 10 machines from one batch, it happens on approximately half. It was not observed before (on older HP machines or on any other customer machines); it started only with the new HP EliteDesk G1 800 SFF machines.
    It is quite a weird issue; in house we can simply wait 5 hours per machine, but in the field that is not acceptable at all.
    Does anybody have any idea what to check or what to do to resolve the issue?
    Thanks in advance,
    Nikolai
    WBR, Nikolai

    Hello Sean,
    sorry for the long delay; I was busy with the release. Last Friday I managed to reproduce the case. As I said before, OOBE went smoothly until about 60% of the driver installation and then drastically slowed down. It finally finished after 3 hours. I did the re-installation around 10 times in 2 days and it hung twice.
    I checked the Panther folder and first found this info in setupact.log
    (here is the link to the full log: http://www.webcctv.com/files/%7B70F7C68D-EF19-45F7-85EB-E3F571EFB7EA%7D/setupact.zip; it's 21 kB zipped).
    2015-02-06 17:45:34, Info                  SYSPRP Re-checking active status...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Starting...
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 0, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:35, Info                  SYSPRP Status: Active
    2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 1, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP SPPNP: Reinstalling devices on the system...
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 2, Complete = 0, Progress = 0% (0%)
    2015-02-06 17:45:46, Info                  SYSPRP Status: Active
    2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 3, Complete = 0, Progress = 0% (0%)
    That was OK; then, as you can see, the speed of driver installation decreased:
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 172, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:46:55, Info                  SYSPRP Status: Active
    2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 173, Complete = 103, Progress = 59% (59%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)
    2015-02-06 17:51:55, Info                  SYSPRP Status: Active
    2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 105, Progress = 60% (60%)
    2015-02-06 17:56:55, Info                  SYSPRP Status: Active
    2015-02-06 17:56:55, Info                  SYSPRP Status: Total = 173, Complete = 106, Progress = 61% (61%)
    2015-02-06 18:01:55, Info                  SYSPRP Status: Active
    2015-02-06 18:01:55, Info                  SYSPRP Status: Total = 173, Complete = 107, Progress = 61% (61%)
    2015-02-06 18:06:55, Info                  SYSPRP Status: Active
    2015-02-06 18:06:55, Info                  SYSPRP Status: Total = 173, Complete = 108, Progress = 62% (62%)
    2015-02-06 18:11:56, Info                  SYSPRP Status: Active
    2015-02-06 18:11:56, Info                  SYSPRP Status: Total = 173, Complete = 109, Progress = 63% (63%)
    2015-02-06 18:16:56, Info                  SYSPRP Status: Active
    2015-02-06 18:16:56, Info                  SYSPRP Status: Total = 173, Complete = 110, Progress = 63% (63%)
    and finished about 3 hours later:
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 171, Progress = 98% (98%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: Active
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 172, Progress = 99% (99%)
    2015-02-06 21:37:08, Info                  SYSPRP Status: End
    2015-02-06 21:37:08, Info                  SYSPRP Status: Total = -1, Complete = -1, Progress = 100% (100%)
    2015-02-06 21:37:08, Info                  SYSPRP Callback canceled operation, Err = 1223!
    2015-02-06 21:37:08, Info                  SYSPRP DeviceInstallStatusThreadProc failed, Err = 1223
    2015-02-06 21:37:09, Info                  SYSPRP SPPNP: Devices reinstallation completed.
    2015-02-06 21:37:09, Info                  SYSPRP Waiting for thread to exit
    2015-02-06 21:37:09, Info                  SYSPRP Thread exited
    I checked the other files in that folder but don't see anything specific.
    Any ideas?
    Nikolai
    WBR, Nikolai

  • Loading in Info Cube takes huge time

    Hi,
    We are loading transactional data in the info cube ZFI_C01  from a DSO ZFI_O05
    We loaded around 1.8 million records (1,776,444) through a request into the DSO ZFI_O05; it took around 13 minutes, including activation of that request. (Note that the flag 'SIDs Generation upon Activation' was checked, so SIDs were generated for all the characteristics in the DSO during activation.)
    When we loaded the same request to the cube ZFI_C01, the request took around 3 hours to finish.
    I ran RSRV checks for the InfoCube ZFI_C01 to find out the ratio between the fact table and the dimension tables
    (go to RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> enter the cube name and check the log; it gives the ratio between the fact table and the dimension tables).
    I got the following results for the two of the dimensions that are involved in the cube:
    Table /BIC/DZFI_C013 has 1564356 entries; size corresponds to 86% of the InfoCube
    Table /BIC/DZFI_C012 has 1649990 entries; size corresponds to 91% of the InfoCube
    When I checked the properties for both these dimensions, the "High Cardinality" and "Line Item Dimension" checkboxes were unchecked.
    I cannot check the "Line Item Dimension" checkbox, as both these dimensions contain more than one characteristic.
    Should I check the "High Cardinality" checkbox and retry the load, given that the ratio of the dimension tables to the fact table is more than 20%?
    But I am a bit unclear what impact that will have on reporting.

    Hi there,
    Check whether you have any routines with code (start routine, end routine, etc.) that could hurt load performance.
    Also check in SM50, while the load to the InfoCube is running, whether there are many reads on table NRIV and/or on some specific dimension tables. If so, find the number range objects for those dimension tables, cancel the load, go to transaction SNRO and set a buffer size of 500 for those number ranges, and then repeat the load.
    See if that helps,
    Diogo.
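
    For reference, the dimension-to-fact ratio that RSRV reports can also be cross-checked with a direct count on the underlying tables. This is only a sketch: it assumes a BW system on Oracle and that the F fact table follows the standard naming /BIC/FZFI_C01 (the dimension table names are taken from the RSRV output above):

    -- Row counts for the two large dimension tables and the (assumed) F fact table
    SELECT (SELECT COUNT(*) FROM "/BIC/DZFI_C013") AS dim3_rows,
           (SELECT COUNT(*) FROM "/BIC/DZFI_C012") AS dim2_rows,
           (SELECT COUNT(*) FROM "/BIC/FZFI_C01")  AS fact_rows
    FROM   dual;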

  • Huge Time Machine backup after crash?

    As usual, I logged out of my account on my MacBook Pro last night before I put it to sleep. When I opened it up this morning, I discovered that the machine had shut down at some point during the night. After starting it up, it told me that it had shut down due to some problem. Everything seems to be working fine, but my first Time Machine backup of the day to a Time Capsule is now backing up 156 GB!
    Has anyone seen this before? What the heck does it think it needs to back up? Should I be concerned? Is there a way to tell what it is backing up during the backup (which is going to take some time!)?

    BastiChina wrote:
    After some research I found Pondini's website (http://web.me.com/pondini/Time_Machine/Home.html)
    Just for reference, there are links to it in the *User Contributed Tips* box at the top of this forum, too.
    that the Time Machine's minimum backup size seems to be around 160MB - even when backups are only 5 or 10 minutes apart.
    That's not a lot, and shouldn't take long.
    And they shouldn't be 5-10 minutes apart, unless you mean one runs for 50 or 55 minutes, then the next one starts. If it's taking that long to back up that little, something is terribly wrong.
    Full backups are another story. There are a few things that can cause them, such as doing a full restore. Are you sure they're full backups of your entire system, or "only" backups of something very large, like your home folder? Those may have different causes, and fixes, so try to narrow it down.
    See #A1 in [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] for a handy widget that will display the backup messages from your logs; that may help clarify what's been going on with those.
    I had the theory that Carbonite somewhat screws up the backup. To be sure, I paused Carbonite and suddenly the incremental backups dropped from 160MB to a few KB.
    See #A2 in Troubleshooting for a couple of 3rd-party apps that will show exactly what's being backed-up.
    Also see #D2 there -- your backups sound awfully slow.

  • Huge Time Machine Backups?

    Greetings! Why is Time Machine backing up over 31 GB of data every backup? On my MBP, I have about 70 GB of data and every single backup since Oct 26 has been for over 31 GB!!! I only plug in the USB hard drive (backup) once at night, so that's one backup per day; that's over 150 GBs of backups in less than a week??? Hello. I haven't changed anything, added any apps, deleted any, etc so what's up? Anybody have any ideas why Time Machine would be backing up so much data??? Thanks & God Bless!
    Rick

    Axel, I appreciate your response and speculation; however, what you speculate does not jibe with what is stated on Apple.com, to wit, the two quotes following:
    "For the initial backup, Time Machine copies the entire contents of the computer to your backup drive. It copies every file exactly (without compression), skipping caches and other files that aren’t required to restore your Mac to its original state. Following the initial backup, Time Machine makes only incremental backups — copying just the files that have changed since the previous backup. Time Machine creates links to any unchanged files, so when you travel back in time you see the entire contents of your Mac on a given day.
    When your mobile Mac is connected to your backup drive, Time Machine works as you’d expect. When it isn’t connected, Time Machine also works as you’d expect. It keeps track of which files have changed since the last backup and backs them up to your backup drive the next time you connect."
    My interpretation of these statements by Apple, is that Time Machine should be "making note" of only changes in files on my MBP and then when connected to my Backup Drive, making the appropriate backup file. I certainly have not "changed" over 31 GB of data in one day!!!

  • Shut down to take huge time

    Hi All,
    I am using 11.1.1.3 on Windows Server 2003. I start the Essbase, Planning, HFR and Web Analysis related services, and after completing my work, shutting them down takes 1-2 hours.
    Please can anyone help me resolve this?
    Thanks,

    Are you stopping the services before shutting down the machine? I know there was an issue with OpenLDAP where, if you didn't stop the service, it would take a long time for the machine to shut down.
    Cheers
    John
    http://john-goodwin.blogspot.com/
