First WES7 boot takes a very long time on some machines

Hi,
we have a WES7 image that resembles a full-blown Windows 7 install. We don't use EWF/HORM or other embedded features. Everything is OK, except that on some of the recent HP desktop units the first boot takes ages (literally, 4-5 hours to complete!!).
Windows boots normally and then sits at the 'Setup is installing devices 69%' screen for a very long time. It doesn't hang, as the numbers keep counting (up to 75% or so), and afterwards everything works perfectly. The disk is not loaded during the wait, and probably not the CPU either; at least the fan is not spinning fast.
Of the 10 machines in one batch, it happens on approximately half. It was never observed before (on older HP machines or on any other customer machines); it started only with the new HP EliteDesk 800 G1 SFF machines.
Quite a weird issue: in house we can simply wait 5 hours per machine, but in the field that's not acceptable at all.
Does anybody have any idea what to check, or what to do to resolve the issue?
Thanks in advance,
WBR, Nikolai

Hello Sean,
sorry for the long delay; I was busy with the release etc. Last Friday I managed to reproduce the case. As I said before, OOBE went smoothly until ~60% of the driver installation and then slowed down drastically. It finally finished after 3 hours. I did the re-installation around 10 times over 2 days, and it hung like this twice.
I checked the Panther folder and first found this in setupact.log
(here is a link to the full log http://www.webcctv.com/files/%7B70F7C68D-EF19-45F7-85EB-E3F571EFB7EA%7D/setupact.zip, 21 kB zipped):
2015-02-06 17:45:34, Info                  SYSPRP Re-checking active status...
2015-02-06 17:45:35, Info                  SYSPRP Status: Starting...
2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 0, Complete = 0, Progress = 0% (0%)
2015-02-06 17:45:35, Info                  SYSPRP Status: Active
2015-02-06 17:45:35, Info                  SYSPRP Status: Total = 1, Complete = 0, Progress = 0% (0%)
2015-02-06 17:45:46, Info                  SYSPRP SPPNP: Reinstalling devices on the system...
2015-02-06 17:45:46, Info                  SYSPRP Status: Active
2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 2, Complete = 0, Progress = 0% (0%)
2015-02-06 17:45:46, Info                  SYSPRP Status: Active
2015-02-06 17:45:46, Info                  SYSPRP Status: Total = 3, Complete = 0, Progress = 0% (0%)
That was OK; then, as you can see, the speed of driver installation decreased:
2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 172, Complete = 103, Progress = 59% (59%)
2015-02-06 17:46:55, Info                  SYSPRP Status: Active
2015-02-06 17:46:55, Info                  SYSPRP Status: Total = 173, Complete = 103, Progress = 59% (59%)
2015-02-06 17:51:55, Info                  SYSPRP Status: Active
2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)
2015-02-06 17:51:55, Info                  SYSPRP Status: Active
2015-02-06 17:51:55, Info                  SYSPRP Status: Total = 173, Complete = 105, Progress = 60% (60%)
2015-02-06 17:56:55, Info                  SYSPRP Status: Active
2015-02-06 17:56:55, Info                  SYSPRP Status: Total = 173, Complete = 106, Progress = 61% (61%)
2015-02-06 18:01:55, Info                  SYSPRP Status: Active
2015-02-06 18:01:55, Info                  SYSPRP Status: Total = 173, Complete = 107, Progress = 61% (61%)
2015-02-06 18:06:55, Info                  SYSPRP Status: Active
2015-02-06 18:06:55, Info                  SYSPRP Status: Total = 173, Complete = 108, Progress = 62% (62%)
2015-02-06 18:11:56, Info                  SYSPRP Status: Active
2015-02-06 18:11:56, Info                  SYSPRP Status: Total = 173, Complete = 109, Progress = 63% (63%)
2015-02-06 18:16:56, Info                  SYSPRP Status: Active
2015-02-06 18:16:56, Info                  SYSPRP Status: Total = 173, Complete = 110, Progress = 63% (63%)
and it finished about 3 hours later:
2015-02-06 21:37:08, Info                  SYSPRP Status: Active
2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 171, Progress = 98% (98%)
2015-02-06 21:37:08, Info                  SYSPRP Status: Active
2015-02-06 21:37:08, Info                  SYSPRP Status: Total = 173, Complete = 172, Progress = 99% (99%)
2015-02-06 21:37:08, Info                  SYSPRP Status: End
2015-02-06 21:37:08, Info                  SYSPRP Status: Total = -1, Complete = -1, Progress = 100% (100%)
2015-02-06 21:37:08, Info                  SYSPRP Callback canceled operation, Err = 1223!
2015-02-06 21:37:08, Info                  SYSPRP DeviceInstallStatusThreadProc failed, Err = 1223
2015-02-06 21:37:09, Info                  SYSPRP SPPNP: Devices reinstallation completed.
2015-02-06 21:37:09, Info                  SYSPRP Waiting for thread to exit
2015-02-06 21:37:09, Info                  SYSPRP Thread exited
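One pattern worth noting in the excerpts above: each device completes almost exactly five minutes after the previous one (17:46:55, 17:51:55, 17:56:55, ...), which could indicate a per-device timeout rather than real work. A small sketch, purely illustrative and assuming one log entry per line as shown, that computes the gap between successive SYSPRP progress lines:

```python
import re
from datetime import datetime

# Matches entries like:
# "2015-02-06 17:51:55, Info  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)"
LINE_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}), Info\s+SYSPRP Status: "
    r"Total = (-?\d+), Complete = (-?\d+)"
)

def device_install_gaps(log_lines):
    """Return (timestamp, completed_count, seconds_since_previous_entry) tuples."""
    out, prev = [], None
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip "Status: Active" and other non-progress lines
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        gap = (ts - prev).total_seconds() if prev else 0.0
        out.append((m.group(1), int(m.group(3)), gap))
        prev = ts
    return out

sample = [
    "2015-02-06 17:46:55, Info  SYSPRP Status: Total = 173, Complete = 103, Progress = 59% (59%)",
    "2015-02-06 17:51:55, Info  SYSPRP Status: Total = 173, Complete = 104, Progress = 60% (60%)",
    "2015-02-06 17:56:55, Info  SYSPRP Status: Total = 173, Complete = 106, Progress = 61% (61%)",
]
for ts, done, gap in device_install_gaps(sample):
    print(ts, done, gap)  # gaps of 300 seconds = one device every 5 minutes
```

Feeding the full setupact.log through this would show which device installs eat the time.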
I checked the other files in that folder, but I don't see anything specific.
Any ideas?
WBR, Nikolai

Similar Messages

  • ODS takes huge time

    Hi guys,
    The data lookup in the ODS is taking a huge amount of time. It was working fine a week back; it has around 200 million records.
    Thanks,
    Your help will be greatly appreciated

    For two records, yes, it should not take that much time.
    Also check the system performance: sometimes Basis may be running backup activities, which can also hurt system performance.
    So check whether this is happening to everyone in your project, and also check the content directly from the ODS contents instead of ListCube, for comparison.
    Also check how heavily your system is loaded.
    Thanks,
    hope this helps

  • Shutdown takes a long time

    Hi All,
    I am using 11.1.1.3 on Windows Server 2003. I start the Essbase, Planning, HFR and Web Analysis related services, and after completing my work, shutting down takes 1-2 hours.
    Please, can anyone help me resolve this?
    Thanks,

    Are you stopping the services before shutting down the machine? I know there was an issue with OpenLDAP: if you didn't stop the service, it would take a long time for the machine to shut down.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Query takes a long time from one machine but 1 sec from another machine

    I have an update query, part of an application patch, which takes 1 sec from one machine. I need to apply it on another machine where the application is installed.
    Both applications are the same and connect to the same DB server. The query run from the second machine takes a very long time,
    but I can update other things from the second machine.
    Does this have anything to do with PAGESIZE or LINESIZE?
    Urgent, please.

    Hi,
    Everything is the same except for the different machine.
    Could it be a client version issue? The script is quite wide, around 240 chars.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | UPDATE STATEMENT | | | | |
    | 1 | UPDATE | IDI_INTERFACE_MST | | | |
    | 2 | INDEX UNIQUE SCAN | PK_IDI_INTMST | | | |
    Note: rule based optimization, 'PLAN_TABLE' is old version
    10 rows selected.
    Message was edited by: Maran.E

  • Loading in Info Cube takes huge time

    Hi,
    We are loading transactional data in the info cube ZFI_C01  from a DSO ZFI_O05
    We loaded around 1.8 million records ( 1776444 )  through a request in the DSO ZFI_O05  that took around 13 minutes including activation of that request.( Note that flag for SIDs Generation upon Activation was checked  and so, SIDs were generated for all the characteristics in the DSO during activation )
    When we loaded the same request to the cube ZFI_C01, the request took around 3 hours to finish.
    I did RSRV Checks for the infocube ZFI_C01 to find out the ratio between fact table and dimension tables.
    (Go to RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> enter the cube name, then check the log; it gives the ratio between the fact table and the dimension tables.)
    I got the following results for the two of the dimensions that are involved in the cube:
    Table /BIC/DZFI_C013 has 1564356 entries; size corresponds to 86% of the InfoCube
    Table /BIC/DZFI_C012 has 1649990 entries; size corresponds to 91% of the InfoCube
    When I checked the properties of both these dimensions, the checkboxes "High Cardinality" and "Line Item Dimension" were unchecked.
    I cannot check the "Line Item Dimension" checkbox, as both these dimensions contain more than one characteristic.
    Shall I check the "High Cardinality" checkbox and retry loading, as the ratio of dimension table to fact table size is more than 20%?
    But I am a bit unclear what impact it will have on reporting.
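    For what it's worth, the ratio check that RSRV reports can be sketched as below. This is a hypothetical illustration: the fact-table row count of 1,776,444 is assumed from the DSO load quoted above, and the 20% threshold is the rule of thumb already mentioned.

```python
def dimension_ratio_pct(dim_rows, fact_rows):
    """Size of a dimension table as a percentage of the fact table."""
    return 100.0 * dim_rows / fact_rows

def worth_flagging(dim_rows, fact_rows, threshold_pct=20.0):
    # Rule of thumb from the post above: a dimension table larger than
    # ~20% of the fact table is a candidate for line item / high cardinality.
    return dimension_ratio_pct(dim_rows, fact_rows) > threshold_pct

# The two dimensions quoted above, against an assumed fact table of 1,776,444 rows:
for name, rows in [("/BIC/DZFI_C013", 1564356), ("/BIC/DZFI_C012", 1649990)]:
    print(name, round(dimension_ratio_pct(rows, 1776444)), worth_flagging(rows, 1776444))
```

    Both dimensions land far above the threshold, which matches the 86% and 91% figures RSRV reported.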

    Hi there,
    Check whether you have any routines with code (start routine, end routine, etc.) that could hurt load performance.
    Also check in SM50, while the load to the InfoCube is running, whether there are many reads on table NRIV and/or on some specific dimension tables. If so, find out which number-range buffer entries belong to those dimension tables, cancel and repeat the load, and in transaction SNRO set a buffer size of 500 for those number ranges. Then repeat the load.
    See if it helps,
    Diogo.

  • Opening BEx Analyzer in Excel takes a long time on some clients

    Hi,
    We have installed BEx Analyzer on several client PCs. On my PC, BEx opens in 2 or 3 seconds; on others it can take 30 or 40 seconds.
    We have checked the configuration, and the problem is the same on two PCs with identical parameters.
    Has anyone seen this problem and solved it?
    Thank you for your help!

    Hi,
    The OS installed for everyone is XP SP2, and we are using SAP Logon 710 Final Release, version 7100.1.3.1029.
    Every user works on a PC with 512 MB RAM and 2.8 GHz.
    We try to maintain the same configuration for everyone; that's why I don't understand this difference when opening BEx Analyzer.
    Thank you for your help.

  • Taking huge time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table takes a huge amount of time and throws a TIME_OUT dump.
    I believe this table has more than a million entries. Is there any alternative way to find out the number of entries?
    We are selecting the data from CDHDR with the following conditions:
    Objclass - classify
    Udate     - 'X' date
    Utime     - 'X' (even selecting a 1-minute range)
    We also tried to index the UDATE field, but it takes a huge time (more than 6 hours, and incomplete).
    Can you suggest any alternative way to find the entries?
    Regards,
    VS

    Hello,
    in SE16, on the initial display screen, enter your selection criteria and run it in the background, creating a spool request:
    SE16 > contents, enter selection criteria, then Program > Execute in Background.
    Best regards,
    Peter

  • Data load to DSO takes long time to finish

    Dear All,
    We have a data load from a data source to a standard DSO. The data load takes 5 hours to complete 6000 records in a single data package, which is a very long time.
    The process monitor shows yellow status at one step for a long time, "No message: Transformation End", and after approximately 5 hours it completes successfully.
    Please find the snapshot of the process monitor (attached file: Process monitor.png).
    There is an end routine, and the transformation uses direct mapping except for one target object, exchange rate, which is a master data lookup on a DSO (attached file: Transformation rule.png).
    The lookup DSO /BI0/AFIGL_DS00 in the code below has DOCNUM as a primary key, but not POSKY. Since one of the fields is not a primary key, a secondary index was created on the lookup DSO. But it still takes a huge time to finish the last step shown in the snapshot.
    The setting for parallel processes is 1.
    DTP -> Update tab -> Error handling -> "No update, no reporting". There is also an error DTP present, which I believe is not used when "No update, no reporting" is chosen.
    Can you please suggest the reason for such a long runtime? Also, please suggest how to find the exact place where it consumes so much time.
    End routine Logic:
        IF NOT RESULT_PACKAGE IS INITIAL.
          REFRESH IT_FIG.
          SELECT DOCNUM  POSKY DEBCRE LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    Thanks in advance
    Regards
    Pradeep
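    A side note on the routine above: READ TABLE ... WITH KEY against a standard internal table is a linear scan for every result row. Below is a language-neutral sketch (Python, with field names mirroring the ABAP; purely illustrative, not SAP code) of the same lookup done through a hash map, which is what declaring IT_FIG as a HASHED TABLE keyed on DOCNUM would give:

```python
def enrich_linear(result_package, it_fig):
    """Per-row linear scan: like READ TABLE ... WITH KEY on a standard table."""
    for row in result_package:
        for fig in it_fig:
            if fig["DOCNUM"] == row["BILNO"]:
                row["DEB"] = fig["DEBCRE"]
                row["LOC_CURRC2"] = fig["LOCC"]
                break

def enrich_hashed(result_package, it_fig):
    """Build the index once, then each lookup is O(1): like a HASHED TABLE."""
    by_docnum = {fig["DOCNUM"]: fig for fig in it_fig}
    for row in result_package:
        fig = by_docnum.get(row["BILNO"])
        if fig is not None:
            row["DEB"] = fig["DEBCRE"]
            row["LOC_CURRC2"] = fig["LOCC"]
```

    The classic ABAP equivalent, when a hashed table is not an option, is sorting IT_FIG and adding BINARY SEARCH to the READ TABLE.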

    Hi,
    check the code below and try the load again:
    IF RESULT_PACKAGE IS NOT INITIAL.
          SELECT DOCNUM 
                          POSKY
                          DEBCRE
                          LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    If you are getting any errors, please let us know.
    1. Decrease the data packet size in the DTP, e.g. to 10,000 or 20,000.
    2. Increase the number of parallel processes at DTP level.
    Thanks,
    Phani.

  • UPDATE proc taking HUGE TIME

    Hi
    An Oracle UPDATE proc is taking over 10 hours to update 130,000 records:
    /**********************CODE***************************/
    PROCEDURE Update_SP IS
    v_c NUMBER := 0;
    err_num NUMBER;
    err_msg VARCHAR2(100);
    CURSOR C1 IS
    select tim.c_col,mp.t_n
    from Materialized_VW tim, MP_Table mp
    where tim.R_id = mp.R_id
    and tim.P_id = mp.P_id
    and tim.t_id = mp.t_id
    and mp.t_date between wk_comm and wk_end;
    BEGIN
    FOR I IN C1
    LOOP
    IF v_c=100000 THEN
    v_c:=0;
    COMMIT;
    END IF;
    v_c:=v_c+1;
    UPDATE MP_Table mp
    SET c_col = i.c_col
    WHERE mp.t_n = i.t_n;
    END LOOP;
    COMMIT;
    EXCEPTION
    WHEN OTHERS THEN
    ROLLBACK;
    err_num := SQLCODE;
    err_msg := SUBSTR(SQLERRM,1,100);
    END Update_SP;
    /**********************CODE***************************/
    Materialized_VW :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, c_col
    MP_Table :- It has 4 SEPARATE indexes on the columns R_id, P_id, t_id, t_n
    The Explain Plan shows (NOTE: whenever the NUMBER OF RECORDS is larger):
    SELECT STATEMENT ALL_ROWS
    Cost: 17,542 Bytes: 67 Cardinality: 1
    3 HASH JOIN
    Cost: 17,542 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE
    Cost: 14 Bytes: 111,645 Cardinality: 4,135
    2 TABLE ACCESS FULL MATERIALIZED_VW
    Cost: 16,957 Bytes: 178,668,800 Cardinality: 4,466,720
    The Explain Plan shows (NOTE: whenever the NUMBER OF RECORDS is smaller):
    SELECT STATEMENT ALL_ROWS
    Cost: 2,228 Bytes: 67 Cardinality: 1
    6 NESTED LOOPS Cost: 2,228 Bytes: 67 Cardinality: 1
    1 TABLE ACCESS FULL MP_TABLE Cost: 3 Bytes: 12,015 Cardinality: 445
    5 TABLE ACCESS BY INDEX ROWID MATERIALIZED_VW Cost: 2,228 Bytes: 40 Cardinality: 1
    4 AND-EQUAL
    2 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX1
    3 INDEX RANGE SCAN NON-UNIQUE MATERIALIZED_VW_INDX2
    This INTERMITTENT behaviour of the explain plan causes it to take a huge time whenever the number of records is large.
    This strange behaviour is causing problems, as 10 hours is too much for any UPDATE (especially when the number of records is only a 6-digit number).
    But we cannot use a direct UPDATE either, as that would result in Oracle exceptions.
    Please suggest ways of reducing the time, or any other method of doing the above, ASAP.
    Also, is there any way to establish consistent behaviour which takes less time?
    Thanks
    Arnab
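    Independent of the plan flip, the shape of the procedure (fetch a cursor, then issue one UPDATE per fetched row) is usually the bigger cost: a single set-based statement (a correlated UPDATE, or MERGE on Oracle) lets the optimizer resolve the join once instead of probing per row. A minimal sqlite3 sketch with invented table and column names, contrasting the two shapes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mp_table (t_n INTEGER PRIMARY KEY, c_col TEXT)")
conn.execute("CREATE TABLE tim (t_n INTEGER PRIMARY KEY, c_col TEXT)")
conn.executemany("INSERT INTO mp_table VALUES (?, NULL)", [(i,) for i in range(5)])
conn.executemany("INSERT INTO tim VALUES (?, ?)", [(i, "v%d" % i) for i in range(5)])

def row_by_row(conn):
    # What the cursor loop in the procedure does: one UPDATE per fetched row.
    for t_n, c_col in conn.execute("SELECT t_n, c_col FROM tim").fetchall():
        conn.execute("UPDATE mp_table SET c_col = ? WHERE t_n = ?", (c_col, t_n))

def set_based(conn):
    # One correlated UPDATE: the engine resolves the lookup as a single join.
    conn.execute("""
        UPDATE mp_table
           SET c_col = (SELECT c_col FROM tim WHERE tim.t_n = mp_table.t_n)
         WHERE t_n IN (SELECT t_n FROM tim)
    """)

set_based(conn)
print(conn.execute("SELECT c_col FROM mp_table ORDER BY t_n").fetchall())
```

    Both produce identical results; on large row counts the set-based form avoids the per-row statement overhead that dominates the 10-hour run described above.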

    Hi BluShadow,
    I followed up your example extending it to the bulk processing.
    I have tested insert and update operations.
    Here are the insert result:
    SQL> CREATE TABLE mytable (x number, z varchar2(5));
    Table created.
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_insert NUMBER;
      4    TYPE t_nt_x IS TABLE OF NUMBER;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rownum as x, 'test1' as z FROM DUAL CONNECT BY ROWNUM <= 1000000;
      9  BEGIN
    10 
    11    -- Single insert
    12    v_insert := 0;
    13    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    14    v_sysdate := SYSDATE;
    15    INSERT INTO mytable (x,z) SELECT rownum,'test1' FROM DUAL CONNECT BY ROWNUM <= 1000000;
    16    v_insert := SQL%ROWCOUNT;
    17    COMMIT;
    18    DBMS_OUTPUT.PUT_LINE('Single insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    19 
    20    -- Multi insert
    21    v_insert := 0;
    22    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    23    v_sysdate := SYSDATE;
    24    FOR i IN 1..1000000
    25    LOOP
    26      INSERT INTO mytable (x,z) VALUES (i,'test1');
    27      v_insert := v_insert+SQL%ROWCOUNT;
    28    END LOOP;
    29    COMMIT;
    30    DBMS_OUTPUT.PUT_LINE('Multi insert--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    31 
    32    -- Multi insert using bulk
    33    v_insert := 0;
    34    EXECUTE IMMEDIATE 'TRUNCATE TABLE mytable';
    35    v_sysdate := SYSDATE;
    36    OPEN c1;
    37    LOOP
    38      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    39      EXIT WHEN C1%NOTFOUND;
    40      FORALL i IN 1..v_nt_x.count
    41        INSERT INTO mytable (x,z) VALUES (v_nt_x(i),v_nt_z(i));
    42        v_insert := v_insert+SQL%ROWCOUNT;
    43    END LOOP;
    44    COMMIT;
    45    DBMS_OUTPUT.PUT_LINE('Multi insert using bulk--> Row Inserted: '||v_insert||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    46 
    47  END;
    48  /
    Single insert--> Row Inserted: 1000000 Time Taken: 3
    Multi insert--> Row Inserted: 1000000 Time Taken: 62
    Multi insert using bulk--> Row Inserted: 1000000 Time Taken: 10
    PL/SQL procedure successfully completed.
    And here is the update result:
    SQL> DECLARE
      2    v_sysdate DATE;
      3    v_update NUMBER;
      4    TYPE t_nt_x IS TABLE OF ROWID;
      5    TYPE t_nt_z IS TABLE OF VARCHAR2(5);
      6    v_nt_x t_nt_x;
      7    v_nt_z t_nt_z;
      8    CURSOR c1 IS SELECT rowid as ri, 'test4' as z FROM mytable;
      9  BEGIN
    10 
    11    -- Single update
    12    v_update := 0;
    13    v_sysdate := SYSDATE;
    14    UPDATE mytable SET z='test2';
    15    v_update := SQL%ROWCOUNT;
    16    COMMIT;
    17    DBMS_OUTPUT.PUT_LINE('Single update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    18 
    19    -- Multi update
    20    v_update := 0;
    21    v_sysdate := SYSDATE;
    22    FOR rec IN (SELECT ROWID AS ri FROM mytable)
    23    LOOP
    24      UPDATE mytable SET z='test3' WHERE ROWID=rec.ri;
    25      v_update := v_update+SQL%ROWCOUNT;
    26    END LOOP;
    27    COMMIT;
    28    DBMS_OUTPUT.PUT_LINE('Multi update--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    29 
    30    -- Multi update using bulk
    31    v_update := 0;
    32    v_sysdate := SYSDATE;
    33    OPEN c1;
    34    LOOP
    35      FETCH c1 BULK COLLECT INTO v_nt_x,v_nt_z LIMIT 100000;
    36      EXIT WHEN C1%NOTFOUND;
    37      FORALL i IN 1..v_nt_x.count
    38        UPDATE mytable SET z=v_nt_z(i) WHERE ROWID=v_nt_x(i);
    39        v_update := v_update+SQL%ROWCOUNT;
    40    END LOOP;
    41    COMMIT;
    42    DBMS_OUTPUT.PUT_LINE('Multi update using bulk--> Row Updated: '||v_update||' Time Taken: '||ROUND(((SYSDATE-v_sysdate)*(24*60*60)),0));
    43 
    44  END;
    45  /
    Single update--> Row Updated: 1000000 Time Taken: 39
    Multi update--> Row Updated: 1000000 Time Taken: 60
    Multi update using bulk--> Row Updated: 1000000 Time Taken: 32
    PL/SQL procedure successfully completed.
    The single statement still has the better performance, but with bulk processing the cursor performance improved dramatically (in the update case, bulk processing is even slightly better than the single statement).
    I guess that with bulk processing there is much less switching between the SQL and PL/SQL engines.
    It would be interesting to test it with more rows; I might do it tomorrow.
    Just thought it would be interesting to share the results with you guys.
    Cheers,
    Davide

  • Errors with USMT 4 Loadstate on some machines. Win 7 OSD

    In my test lab, my Win 7 OSD works fine restoring user data with USMT hardlinking. As I started to test it on real machines in our test lab, with real user profiles and data, I started to see failures on some machines at the USMT data restore step. Below is a section from my LoadState.log file that stood out to me, with fatal "out of memory exception" errors.
    Has anyone run into this issue before with USMT 4? Again, in my test lab I see it restoring my profile fine, but on machines with more profiles I have this problem.
    If the above image is too small, please click here for the SkyDrive image.
    Command line used for loadstate as generated from SCCM read at the top of this logfile:
    Command line: C:\_SMSTaskSequence\Packages\AGR00009\x86\loadstate.exe C:\userState /ue:USPHXAWDMJ7298H\* /v:5 /c /hardlink /nocompress /l:C:\Windows\CCM\Logs\SMSTSLog\loadstate.log /progress:C:\Windows\CCM\Logs\SMSTSLog\loadstateprogress.log /i:C:\_SMSTaskSequence\Packages\AGR00009\x86\AGMigUser.xml
    /i:C:\_SMSTaskSequence\Packages\AGR00009\x86\MigApp.xml /config:C:\_SMSTaskSequence\Packages\AGR00009\x86\Apollo.xml /ue:Administrator /ue:AdminSupport /ue:AAdmin

    Hi,
    If you receive an error message that resembles the following:
    Error [0x000000] Unhandled exception: OutOfMemoryException: (no exception message provided) class UnBCL::String *__stdcall Mig::pGetDateAndTime(void)[gle=0x00000091]
    Error [0x000000] Unhandled exception, abnormal termination[gle=0x00000091]
    This issue occurs because of an error in the code for the progress log in USMT 4.0.
    This hotfix will fix the issue.
    http://support.microsoft.com/kb/2023591
    Error                 [0x0802f5] CMediaManager::Close: m_pSelectedTransport->Close(1) failed
    with Exception Win32Exception: Device was open with readonly access.: Acceso denegado. [0x00000005] void __thiscall Mig::CMediaManager::CloseTransport(int)
    void __thiscall Mig::CUNCTransport::Close(int).[gle=0x00000006]
    Have you joined the domain yet? You need to be joined to the domain before loadstate runs so that it can map the SID to the username. I believe this is your problem: just move the domain join before your restore step.

  • HT201232 My iPhone restarts by itself; sometimes the display turns blue after the Apple logo

    My iPhone restarts after some time. Sometimes I find it switched off, and it only restarts if I press the Home + Power buttons. Sometimes, after the Apple logo appears, it restarts again and the display turns blue. I have restored through iTunes and also restored an iCloud backup, but I still have the same problem.

    Try once again to restore using iTunes, but this time without using a backup, to make sure that this is not software related.
    If the phone still restarts on its own, get it checked; it might be hardware related after all.
    Use iTunes to restore your iOS device to factory settings - Apple Support

  • The Application Takes long time to Start

    Hello All,
    We are on Unix, 64-bit, Essbase 11.1.1.3.
    Problem description: the application is taking long to start up, around 5 to 6 minutes. This is the very first time it has happened.
    There were no specific changes made to the application in the recent releases.
    I have tried all the options: 1. compacting the outline, 2. purging the application log, etc. All other applications respond well on this host except this one. Usually an application should not take more than 1 to 2 minutes to start up.
    There are no specific errors or XCP files recorded in the logs and folders.
    Appreciate your suggestions
    MS

    Thanks Jitendra and Prabhas,
    I know I posted this thread some time back, and later I had to jump onto a new release, so I did not get time to check your inputs.
    Well, I am back on this issue again. I have been working on various options to solve this "start of app takes long time" issue.
    Here are some details. We are on SunOS, 64-bit, 12 CPUs with dual cores, with Essbase 11.1.1.3 running on it. This is an ASO application with just 7 dimensions, out of which the ORGANIZATION dimension is pretty huge, with multiple hierarchies enabled (both stored and dynamic), and has more than 2,000,000 members including the alternate hierarchies (shared members).
    I did a smoke test, building it up dimension by dimension: the app was starting in just *40* seconds, until I reached the ORG dimension and added more than 70,000 members. There it falls over, and the app is back to its old issue (it takes more than 10 minutes to start).
    CPU usage ranges between 3.1% and 4%:
    PID USER NLWP PRI NI VSZ RSS S STIME ELAPSED %CPU COMMAND
    4424 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep COMMAND
    4428 user1 1 59 20 1608 1032 S 18:13:33 00:00 0.0 grep ESS
    4766 user1 88 55 20 6814168 5684200 O 17:37:48 35:45 3.1 /path/xyz/masked/ASO_APP hgfedc NOCREAT.
    But my question here is: last month's cube had a similar number of members, and nothing had really changed.
    Essbase gurus, please give me some hint to think outside the box now.
    Thanks
    MS

  • Oracle Coherence first read/write operation takes more time

    I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions, for writes to any local, distributed or near cache, the first read/write operation takes more time compared to the subsequent read/write operations. Is this because of bootstrap operations happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques for boosting Coherence cache performance.
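    The usual explanation for a slow first operation is lazy initialization: the first call pays one-time costs (connection establishment, serializer setup, JIT warm-up) that later calls skip. A generic sketch of the pattern, with an invented class rather than the Coherence API:

```python
class LazyCache:
    """Toy cache whose first operation pays a one-time setup cost."""

    def __init__(self):
        self.setup_count = 0
        self._store = None

    def _ensure(self):
        if self._store is None:
            # One-time work: in a real client this would be connecting to the
            # cluster, negotiating the protocol, building serializers, etc.
            self.setup_count += 1
            self._store = {}
        return self._store

    def put(self, key, value):
        self._ensure()[key] = value

    def get(self, key):
        return self._ensure().get(key)

c = LazyCache()
c.put("a", 1)          # first operation: triggers setup
c.put("b", 2)          # subsequent operations: no setup
print(c.get("a"), c.setup_count)  # 1 1
```

    A common mitigation is issuing a throwaway warm-up operation at startup, so real traffic never pays this cost; timing the first and second operations separately tells you whether setup, rather than steady-state performance, is the problem.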

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements, when fully configured in a cluster, of "1 micro second for 100000 data collection" on a continuous basis.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)

  • When starting up a Mountain Lion iMac, it takes a long time before network logins are available

    When starting up a Mountain Lion iMac, it takes a long time before network logins are available. There seems to be a huge delay getting the network up and running, and I'm not sure why. This isn't an issue for identical iMacs running Snow Leopard.
    If I login with a local account, I see the ethernet registered as disconnected for at least a minute, then it pops up. If I switch to Snow Leopard, using the exact same hardware, network cable, jack, etc, it's instant.
    Has anybody encountered anything similar?

    Fulcrum Media wrote:
    Has anybody encountered anything similar?
    No. I have a rMBP and a Mac Mini on a home LAN. The Mac Mini is connected via ethernet and the rMBP is WiFi. Both connect instantly to my network. Both are running OS X Mountain Lion 10.8.2.

  • Takes more time to start & shutdown the database

    Hi All,
    I have created a database in Oracle9i by following the manual steps. Everything was created successfully, and I am able to start the database and shut it down.
    But the problem is that the STARTUP command takes a long time to start the database, and the same happens during shutdown. Can anyone help me?
    The following are the pfile specifications:
    db_name=practice
    instance_name=practice
    control_files= 'E:\practice\control\control1.ctl',
              'D:\practice\control\control2.ctl'
    db_block_size=2048
    db_cache_size=20m
    shared_pool_size=20m
    background_dump_dest='E:\practice\bdump'
    user_dump_dest='E:\practice\udump'
    Thanks in Advance

    > Everything was created successfully and am able to start the database and shutdown also.
    Please restate the above.
    > problem is while giving the startup command it takes more time to start the database and the same during the shutdown
    How have you compared? Could it be O/S resources, or the installation of additional software? You have not mentioned the O/S and the complete version of your database.
    You can review the following, although I am a bit unclear:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#440322
    Adith
