Performance problems related to Timesheet entry and Time Admin processing.

They are implementing PeopleSoft 9.0 and are in UAT, experiencing performance delays in Time Admin and on the Timesheet page when using the Apply Rules button. They have a large number of rules, and when the load increases to 30 concurrent users, severe performance issues appear on the Timesheet page. At this point they are more concerned with Timesheet performance than with Time Admin performance, and they have delayed their go-live date until this issue is resolved.
In the Performance Monitor data we are getting several failed statuses for the PMU 'JOLT Request' and PMU Details 'ICPanel'. The additional data area states:
Error Status Code:
Jolt ServiceException: Jolt Errno 100 JoltException.TPEJOLT
PeopleSoft 9.0
Weblogic 9.2
Database: SQL Server 2005 SP3
Windows Server 2003 SP2

Have you tried raising an SR on Oracle Support?
Also, Timesheet performance is a known issue, and multiple such issues are reported on Metalink. You can look at these notes for potential solutions:
https://support.oracle.com/CSP/main/article?cmd=show&id=659033.1&type=NOT
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=857761.1
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=PROBLEM&id=961924.1

Similar Messages

  • Performance Problems with "For all Entries" and a big internal table

    We have serious performance problems with the following statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    The internal table gt_zmon_help contains over 1,000,000 entries.
    Does anyone have an idea how to improve the performance?
    Thank you!

    You can't expect miracles. With over a million entries in your itab, any select is going to take some time. Do you really need all of these records in the itab? How many records is the select bringing back? I'm assuming that you have, and are using, indexes on your ZEEDMT_ZMON table.
    In this situation I'd first try to think of another way of running the query and restricting the amount of data, but if that were not possible I'd just run it in the background and accept that it is going to take a long time.
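    The generic shape of that fix, de-duplicating and batching the driver data, can be sketched outside ABAP as well. Here is a minimal Java sketch, under the assumption that you query the database yourself in bounded chunks; the table and column names come from the post, but the `buildQueries` helper and the chunk size are hypothetical:

    ```java
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.LinkedHashSet;
    import java.util.List;

    public class ChunkedSelect {
        // De-duplicate the driver keys, then emit one bounded IN-list query per
        // chunk. This is essentially what FOR ALL ENTRIES does internally;
        // doing it explicitly lets you shrink the key set before querying.
        static List<String> buildQueries(List<String> ztrackKeys, int chunkSize) {
            List<String> distinct = new ArrayList<>(new LinkedHashSet<>(ztrackKeys));
            List<String> queries = new ArrayList<>();
            for (int i = 0; i < distinct.size(); i += chunkSize) {
                int end = Math.min(i + chunkSize, distinct.size());
                // One '?' placeholder per key in this chunk.
                String marks = String.join(",", Collections.nCopies(end - i, "?"));
                queries.add("SELECT * FROM zeedmt_zmon"
                        + " WHERE status = 'IAI200' AND ztrack IN (" + marks + ")");
            }
            return queries;
        }

        public static void main(String[] args) {
            List<String> keys = new ArrayList<>();
            for (int i = 0; i < 2500; i++) keys.add("T" + (i % 1000)); // heavy duplication
            System.out.println(buildQueries(keys, 400).size()); // 1000 distinct keys -> 3 queries
        }
    }
    ```

    The point is that after de-duplication the number of round trips is bounded by the distinct key count, not by the raw itab size.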

  • 1.1 performance problems related to system configuration?

    It seems like a lot of people are having serious performance problems with Aperture 1.1 in areas where they didn't have any (or at least not as many) problems in the previous 1.0.1 release.
    Most often these problems show up as slow behaviour when switching views (especially into and out of full view), loading images into the viewer, or doing image adjustments. In most cases Aperture works normally for some time and then gradually slows down, up to the point where images are no longer refreshed correctly or the whole application crashes. Most of the time simply restarting Aperture doesn't help; one has to restart the OS.
    Most of the time the problems occur together with CPU usage rates that are much higher than in 1.0.1.
    For some people even other applications seem to be affected, to the point where the whole system has to be restarted to get everything working at full speed again. Shutdown times also seem to increase dramatically after such an Aperture slowdown.
    My intention in this thread is to collect information about the system configurations of users who are experiencing such problems. At the moment it does not look like these problems are limited to particular configurations, but maybe we can find a common point if we collect as much information as possible about systems where Aperture 1.1 shows this behaviour.
    Before I continue with my configuration, I would like to point out that this thread is not about general speed issues with Aperture. If you're not able to work smoothly with 16MPix RAW files on G5 systems with Radeon 9650 video cards, or Aperture is generally slow on your 14" iBook where you installed it with a hack, then this is not the right thread. I fully understand if you want to complain about those general speed issues, but please refrain from doing so in this thread.
    Here I only want to collect information from people who either know that some things worked considerably faster in the previous release or who notice that Aperture 1.1 really slows down after some time of use.
    Enough said, here is my information:
    - Powermac G5 Dualcore 2.0
    - 2.5 GB RAM
    - Nvidia 7800GT (flashed PC version)
    - System disk: Software RAID0 (2 WD 10000rpm 74GB Raptor drives)
    - Aperture library on a hardware RAID0 (2 Maxtor 160GB drives) connected to Highpoint RocketRAID 2320 PCIe adapter
    - Displays: 17" and 15" TFT
    I do not think we need more information than that; things like external drives (apart from ones used for the actual library), superdrive types, and connected USB gear like printers and scanners shouldn't make any difference, so no need to report those. It is also self-evident that Mac OS X 10.4.6 is used.
    Of interest might be any internal cards (PCIe/PCI/PCI-X...) built into your system, like my RAID adapter, Decklink cards (wasn't there a report about problems with them?), or any other special video, audio, or additional graphics cards.
    Again, please only post here if you're experiencing any of the mentioned problems and please try to keep your information as condensed as possible. This thread is about collecting data, there are already enough other threads where the specific problems (or other general speed issues) are discussed.
    Bye,
    Carsten
    BTW: Within the next week I will perform some tests which will include replacing my 7800GT with the original 6600 and removing as much extra stuff from my system as possible to see if that helps.

    Yesterday I had my first decent run in 1.1 and was pleased to have avoided a lot of the performance issues that seemed to affect others.
    After I posted, I got hit by a big slow-down in system performance. I tried to quit Aperture but couldn't; it had no tasks in its activity window. However, Activity Monitor showed Aperture as a 30-thread, 1.4GB-virtual-memory hairball soaking up 80-90% of my 4 CPUs. Given the high CPU activity I suspected the cause was not my 2GB of RAM, although more is obviously better. So what caused the sudden decrease in system performance after 6 hours of relatively trouble-free editing and sorting with 1.1?
    This morning I re-created the issue. Before I go further: when I ran 1.1 for the first time I did not migrate my whole library to the new RAW algorithm (it's not called the bleeding edge for nothing). So this morning I selected one project and migrated all of its RAW images to 1.1, and after the progress bar completed its work, the CPUs ramped up and the system got bogged down again.
    So Aperture is doing a background task that consumes large amounts of CPU power, shows nothing in its activity window, and takes a very long time to complete. My project had 89 RAW images migrated to the 1.1 algorithm, and it took 4 minutes to complete those 'background processes' (more reconstituting of images?). I'm not sure what it's doing, but it takes a long time and gives no obvious sign that this is normal. If you leave it to complete its work, the system returns to normal. The bigger issue is that the system lets you continue to work while the background processes crank, compounding the heavy workload.
    A bit of a guess, but is this what is causing people's system problems? As I said, if I left my quad alone for 4 minutes everything returned to normal. It's just that you don't think it will ever end, so you do more and compound the slow-down.
    In the interests of research I did another project, migrating 245 8MB RAWs to the 1.1 algorithm, and it took 8 minutes. The first 5 minutes consumed 1GB of virtual memory over 20 threads at an average of 250% CPU usage for Aperture alone. The last three minutes saw the CPUs ramp higher to 350% and virtual memory increase to 1.2GB. After the 8 minutes everything returned to normal and the fans slowed down (excellent fan/noise behaviour on these quads).
    Is this what others are seeing ?
    When you force-quit Aperture during these system slow-downs, what effect does it have on your images? Do the uncompleted background processes restart when you next try to view them?
    If I get time I'll try to compare with my MBP.

  • Performance problem between Oracle.DataAccess v1 and v2

    Hi, I have a serious performance problem with OracleDataReader when I use the GetValues method.
    My server is Oracle 9.2.0.7, and I use ODAC v10.2.0.221.
    I created a dummy table for the benchmark:
    create table test (a varchar2(50), b number);
    begin
      for i in 1..62359 loop
        insert into test values ('Values ' || i, i);
      end loop;
      commit;
    end;
    /
    I use the same code to benchmark Framework v1 and Framework v2.
    Code :
    try {
        OracleConnection c = new OracleConnection("user id=saturne_dbo;password=***;data source=satedfx;");
        c.Open();
        go(c);
        c.Close();
    } catch (Exception ex) {
        MessageBox.Show(ex.Message);
    }

    private void go(IDbConnection c) {
        IDbCommand cmd = c.CreateCommand();
        cmd.CommandText = "select * from test";
        cmd.CommandType = CommandType.Text;
        DateTime dt = DateTime.Now;
        IDataReader reader = cmd.ExecuteReader();
        int count = 0;
        while (reader.Read()) {
            object[] fields = new object[reader.FieldCount];
            reader.GetValues(fields);
            count++;
        }
        reader.Close();
        TimeSpan eps = DateTime.Now - dt;
        MessageBox.Show("Time " + count + " : " + eps.TotalSeconds);
    }
    The results are:
    Framework v1 with Oracle.DataAccess 1.10.2.2.20: "Time 62359 : 0.5"
    Framework v2 with Oracle.DataAccess 2.10.2.2.20: "Time 62359 : 3.57", a factor of 6 slower!
    I notice the same problem with the OleDb provider and the Microsoft Oracle Client provider.
    It's a serious problem for my production server; the calculation time explodes.
    Where is the explanation? Does anyone know a solution?

    Can you please try the following:
    1. Create a .NET 1.x DLL with your benchmark code. This will obviously use ODP.NET for .NET 1.x.
    2. Call this assembly routine from a .NET 1.x executable and note the results.
    3. Now call this assembly routine from a .NET 2.0 executable and note the results.
    The idea is to always use ODP.NET for .NET 1.x, even in the .NET 2.0 runtime. This will tell us whether the performance degradation is a runtime issue.

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s), and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    - persistent cache across each app server -> cluster table,
    - update cache in delta process is checked -> group on InfoProvider type,
    - use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure, which should have a static result for a day).
    => Do you know how I can get more detail than what is in 0TCT_C02 to break down the read and write cache event times, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders, and no master data loads (change run) either. Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Performance Problems on Faces Navigation Diagram and Hyperthreading query

    Am I the only one having performance problems when dealing with faces-config diagrams with about 35 JSPs displayed on the sheet, using JDeveloper 10.1.3? It takes my workstation about a minute and a half to update the name of an arrow. The most stressed component during this task seems to be the CPU.
    And another question: has anybody investigated how JDeveloper's performance is affected by enabling or disabling hyperthreading? In my case CPU usage only reaches 50%, so I'm tempted to switch HT off to let JDeveloper use all the CPU power, if that would help.

    Hello Diego,
    you mentioned that you compared a BEx query with the Web Intelligence report. Could you provide more details here?
    - What are the elements in the rows, columns, and free characteristics of the BEx query?
    - Was the query executed as designed in the BEx Query Designer with BEx Web Reporting?
    - What are the elements in the Web Intelligence query panel?
    thanks
    Ingo

  • Problem with daisy chaining HDDs and Time Machine

    I have purchased 2 WD My Passport Studio FireWire 2TB HDDs, which I have daisy chained to my MacBook Pro.  One I have assigned to be the Time Machine backup, the other to be an additional data drive.  Both are formatted as Mac OS Extended (Journaled).  I set up Time Machine to back up both the internal and external drives.  When I set it up everything looked OK in Time Machine Preferences, but the daisy-chained data drive displayed on the desktop with the Time Machine backup logo, and the backup drive displayed as a standard FireWire data drive. However, the backup ran successfully and backed everything up to the correct backup drive.
    I have since established the following:
    1.  If both drives are daisy chained and then plugged in, the data drive shows up with the backup icon and the backup drive shows as a data drive, but Time Machine backs up to the correct drive.  This is the case regardless of what order the drives are daisy chained.
    2.  If the backup drive is plugged in on its own it shows up correctly with the backup drive icon.  If the data drive is then daisy chained to the backup drive both drives show up as backup drives.  Time Machine backs up to the correct backup drive.
    3. If the data drive is plugged in on its own it shows up incorrectly as the backup drive.  If the backup drive is then daisy chained to the data drive the backup drive shows as a normal FireWire data drive. But Time Machine still backs up to the correct drive.  Exactly the same as case 1 but the drives have been daisy chained after connecting to the MacBook Pro and the result is the same.
    4. However if the data drive is plugged in on its own, showing up as the backup drive, Time Machine backs up to the data drive incorrectly.
    The first three points are just annoying as the drive icons do not display correctly, although it does indicate to me that Time Machine is confused.  But what I don't like is the fact that I can't just use the external data drive without the backup drive being present.
    How do I overcome this problem?

    Launch Disk Utility and select one of the volumes in question. Click the Info button in the toolbar. In the Information window that opens, note the Universal Unique Identifier. Do the same with the other volume. Are the identifiers the same?

  • Performance problem with Integration with COGNOS and Bex

    Hi Gems
    I have a performance problem with some of my queries when integrating with the COGNOS
    My query is simple which gets the data for the date interval : "
    From Date: 20070101
    To date:20070829
    When executing the query in the Bex it takes 2mins but when it is executed in the COGNOS it takes almost 10mins and above..
    Any where can we debug the report how the data is sending to the cognos. Like debugging the OLEDB ..
    and how to increase the performance.. of the query in the Cognos ..
    Thanks in Advance
    Regards
    AK

    Hi,
    Please check the following CA Unicenter config files on the SunMC server:
    - Is the Event Adapter (ea-start) running? Without this daemon, no event forwarding to CA Unicenter is done, nor does discovery from CA Unicenter work.
    How to debug:
    - Run ea-start in debug mode:
    # /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
    - Check whether the Event Adapter has been set up:
    # /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
    - Check the CA log file:
    # /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
    Once that is all fine, check this page; it explains how to discover a SunMC agent from CA Unicenter.
    http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
    Kind Regards

  • Problem with getting current date and time using oracle.jbo.domain.Date

    I'd like to get the current date and time using the oracle.jbo.domain.Date method getCurrentDate(), but it always returns the current date with a time of 12:00:00. I also need the current time.

    I think you should use java.sql.Timestamp domain.
    (And set database type to TIME or DATETIME.)
    Jan
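    A minimal sketch of Jan's suggestion, showing that java.sql.Timestamp keeps the time-of-day component that a date-only value truncates (the formatting pattern is just for illustration):

    ```java
    import java.sql.Timestamp;
    import java.text.SimpleDateFormat;

    public class CurrentDateTime {
        public static void main(String[] args) {
            // Unlike a date-only value (which comes back as 12:00:00),
            // java.sql.Timestamp carries the full time-of-day.
            Timestamp now = new Timestamp(System.currentTimeMillis());
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            System.out.println(fmt.format(now));
        }
    }
    ```

    As suggested above, the corresponding change on the model side is to use the java.sql.Timestamp domain for the attribute and a database column type that stores time.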

  • DB connect problem,Disp+Work.exe dies and the work processes die on startup

    Dear Experts,
    I am getting an error on SAP startup: Disp+Work.exe dies and the work processes die on startup.
    I found the following error in my dev_w0 trace:
    *failed to establish conn to np:(local).*
    *C  Retrying without protocol specifier: (local)*
    *C  Provider SQLNCLI could not be initialized. See note #734034 for more information.*
    *C  Using provider SQLOLEDB instead.*
    *C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']*
    *C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.*
    *C  Procname: [ExecuteAndFlush - no proc]*
    But I can see that the user 'pec' is available in my database users list, with db_owner and public permissions assigned to it.
    However, when trying to delete a user on SQL Server 2000, we receive the message "You cannot drop the selected login ID because that login ID owns objects in one or more databases.".
    Please refer to the work process trace below and suggest a possible solution.
    trc file: "dev_w0", trc level: 1, release: "640"
    ACTIVE TRACE LEVEL           1
    ACTIVE TRACE COMPONENTS      all, M

    B Sat Feb 14 15:10:21 2009
    B  create_con (con_name=R/3)
    B  Loading DB library 'E:\usr\sap\PEC\SYS\exe\run\dbmssslib.dll' ...
    B  Library 'E:\usr\sap\PEC\SYS\exe\run\dbmssslib.dll' loaded
    B  Version of 'E:\usr\sap\PEC\SYS\exe\run\dbmssslib.dll' is "640.00", patchlevel (0.195)
    B  New connection 0 created
    M sysno      00
    M sid        PEC
    M systemid   560 (PC with Windows NT)
    M relno      6400
    M patchlevel 0
    M patchno    196
    M intno      20020600
    M make:      multithreaded, Unicode
    M pid        3844
    M
    M  ***LOG Q0Q=> tskh_init, WPStart (Workproc 0 3844) [dpxxdisp.c   1162]
    I  MtxInit: -2 0 0
    M  DpSysAdmExtCreate: ABAP is active
    M  DpShMCreate: sizeof(wp_adm)          21120     (1320)
    M  DpShMCreate: sizeof(tm_adm)          29558776     (14772)
    M  DpShMCreate: sizeof(wp_ca_adm)          24000     (80)
    M  DpShMCreate: sizeof(appc_ca_adm)     8000     (80)
    M  DpShMCreate: sizeof(comm_adm)          1160000     (580)
    M  DpShMCreate: sizeof(vmc_adm)          0     (424)
    M  DpShMCreate: sizeof(wall_adm)          (384056/329560/64/184)
    M  DpShMCreate: SHM_DP_ADM_KEY          (addr: 07800040, size: 31492672)
    M  DpShMCreate: allocated sys_adm at 07800040
    M  DpShMCreate: allocated wp_adm at 07801B88
    M  DpShMCreate: allocated tm_adm_list at 07806E08
    M  DpShMCreate: allocated tm_adm at 07806E30
    M  DpShMCreate: allocated wp_ca_adm at 09437628
    M  DpShMCreate: allocated appc_ca_adm at 0943D3E8
    M  DpShMCreate: allocated comm_adm_list at 0943F328
    M  DpShMCreate: allocated comm_adm at 0943F340
    M  DpShMCreate: allocated vmc_adm_list at 0955A680
    M  DpShMCreate: system runs without vmc_adm
    M  DpShMCreate: allocated ca_info at 0955A6A8
    M  DpShMCreate: allocated wall_adm at 0955A6B0
    X  EmInit: MmSetImplementation( 2 ).
    X  <ES> client 0 initializing ....
    X  Using implementation flat
    M  <EsNT> Memory Reset disabled as NT default
    X  ES initialized.

    M Sat Feb 14 15:10:22 2009
    M  calling db_connect ...
    C  Thread ID:3868
    C  Thank You for using the SLOLEDB-interface
    C  Using dynamic link library 'E:\usr\sap\PEC\SYS\exe\run\dbmssslib.dll'
    C  dbmssslib.dll patch info
    C    patchlevel   0
    C    patchno      195
    C    patchcomment DBCON: database names must not start with digits (1078650)
    C  np:(local) connection used on CAMBSVR15
    C  CopyLocalParameters: dbuser is 'pec'

    C Sat Feb 14 15:10:23 2009
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  OpenOledbConnection: MARS property was not set.

    C Sat Feb 14 15:10:26 2009
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  failed to establish conn to np:(local).
    C  Retrying without protocol specifier: (local)
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  Provider SQLNCLI could not be initialized. See note #734034 for more information.
    C  Using provider SQLOLEDB instead.
    C  ExecuteAndFlush return code: 0x80040e14 Stmt: [if user_name() != 'pec' setuser 'pec']
    C  sloledb.cpp [ExecuteAndFlush,line 5989]: Error/Message: (err 4604, sev 0), There is no such user or group 'pec'.
    C  Procname: [ExecuteAndFlush - no proc]
    C  setuser 'pec' failed -- connect terminated
    C  failed to establish conn. 0
    B  ***LOG BY2=> sql error 0      performing CON [dbsh#3 @ 1204] [dbsh    1204 ]
    B  ***LOG BY0=> <message text not available> [dbsh#3 @ 1204] [dbsh    1204 ]
    B  ***LOG BY2=> sql error 0      performing CON [dblink#3 @ 428] [dblink  0428 ]
    B  ***LOG BY0=> <message text not available> [dblink#3 @ 428] [dblink  0428 ]
    M  ***LOG R19=> tskh_init, db_connect ( DB-Connect 000256) [thxxhead.c   1280]
    M  in_ThErrHandle: 1
    M  *** ERROR => tskh_init: db_connect (step 1, th_errno 13, action 3, level 1) [thxxhead.c   9621]

    M  Info for wp 0

    M    stat = 4
    M    reqtype = 1
    M    act_reqtype = -1
    M    rq_info = 0
    M    tid = -1
    M    mode = 255
    M    len = -1
    M    rq_id = 65535
    M    rq_source = 255
    M    last_tid = 0
    M    last_mode = 0
    M    int_checked_resource(RFC) = 0
    M    ext_checked_resource(RFC) = 0
    M    int_checked_resource(HTTP) = 0
    M    ext_checked_resource(HTTP) = 0
    M    report = >                                        <
    M    action = 0
    M    tab_name = >                              <

    M  *****************************************************************************
    M  *
    M  *  LOCATION    SAP-Server cambsvr15_PEC_00 on host cambsvr15 (wp 0)
    M  *  ERROR       tskh_init: db_connect
    M  *
    M  *  TIME        Sat Feb 14 15:10:26 2009
    M  *  RELEASE     640
    M  *  COMPONENT   Taskhandler
    M  *  VERSION     1
    M  *  RC          13
    M  *  MODULE      thxxhead.c
    M  *  LINE        9806
    M  *  COUNTER     1
    M  *
    M  *****************************************************************************

    M  PfStatDisconnect: disconnect statistics
    M  Entering TH_CALLHOOKS
    M  ThCallHooks: call hook >ThrSaveSPAFields< for event BEFORE_DUMP
    M  *** ERROR => ThrSaveSPAFields: no valid thr_wpadm [thxxrun1.c   730]
    M  *** ERROR => ThCallHooks: event handler ThrSaveSPAFields for event BEFORE_DUMP failed [thxxtool3.c  254]
    M  Entering ThSetStatError
    M  Entering ThReadDetachMode
    M  call ThrShutDown (1)...
    M  ***LOG Q02=> wp_halt, WPStop (Workproc 0 3844) [dpnttool.c   357]
    I would greatly appreciate your quick response, since it is a production issue.
    Many Thanks,
    Vinod

    Hi Rohit and Subhadip,
    My problem is solved, thanks for your efforts.
    The reason was that user 'pec' was not associated with a login user on the SQL Server.
    Subha: when I executed the first command you suggested, I couldn't find any list of users, so I didn't try the second command. As a final attempt I executed the script below and it worked:
    USE PEC
    GO
    EXEC sp_change_users_login 'update_one','pec','pec'
    GO
    Now I see that all work processes are starting. Thank you so much, Rohit and Subhadip.
    I could have tried this first when you suggested it, but since the first command gave no output I didn't proceed with the second one.
    Since deleting and re-creating the user didn't seem like a good approach, I just gave the second command a try and it worked.
    Thanks for your time and patience
    Many Thanks,
    Vinod

  • BDB read performance problem: lock contention between GC and VM threads

    Problem: BDB read performance is really bad once the size of the BDB crosses 20GB. At that point it takes more than one hour to read/delete/add 200K keys.
    Of these 200K keys, about 15-30K are new; this number should eventually come down, and after a point there should be no new keys at all.
    Application:
    Transactional Data Store application. A single-threaded process that reads one key's data, deletes the data, and adds new data. The keys are really small (20 bytes) and the data is large (grows from 1KB to 100KB).
    On one machine I have a total of 3 processes running, with each process accessing its own BDB on a separate RAID 1+0 drive. So, as far as I can tell, there should be no disk I/O wait slowing down the reads.
    Past 20GB there are about 4-5 million keys in my BDB, and the data associated with each key can be anywhere between 1KB and 100KB. Eventually every key will have 100KB of data associated with it.
    Hardware:
    16 core Intel Xeon, 96GB of RAM, 8 drive, running 2.6.18-194.26.1.0.1.el5 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
    BDB config: BTREE
    bdb version: 4.8.30
    bdb cache size: 4GB
    bdb page size: experimented with 8KB, 64KB.
    3 processes, each process accesses its own BDB on a separate RAIDed(1+0) drive.
    envConfig.setAllowCreate(true);
    envConfig.setTxnNoSync(ourConfig.asynchronous);
    envConfig.setThreaded(true);
    envConfig.setInitializeLocking(true);
    envConfig.setLockDetectMode(LockDetectMode.DEFAULT);
    When writing to BDB (asynchronous transactions):
    TransactionConfig tc = new TransactionConfig();
    tc.setNoSync(true);
    When reading from BDB (allowing reads from uncommitted pages):
    CursorConfig cc = new CursorConfig();
    cc.setReadUncommitted(true);
    BDB stats: BDB size 49GB
    $ db_stat -m
    3GB 928MB Total cache size
    1 Number of caches
    1 Maximum number of caches
    3GB 928MB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    60M Clean pages forced from the cache (60775446)
    2661382 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    500593 Current total page count
    500593 Current clean page count
    0 Current dirty page count
    524287 Number of hash buckets used for page location
    4096 Assumed page size used
    2248M Total number of times hash chains searched for a page (2248788999)
    9 The longest hash chain searched for a page
    2669M Total number of hash chain entries checked for page (2669310818)
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    63M The number of page allocations (63937431)
    181M The number of hash buckets examined during allocations (181211477)
    16 The maximum number of hash buckets examined for an allocation
    63M The number of pages examined during allocations (63436828)
    1 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: lastPoints
    8192 Page size
    0 Requested pages mapped into the process' address space
    2127M Requested pages found in the cache (97%)
    57M Requested pages not found in the cache (57565917)
    6371509 Pages created in the cache
    57M Pages read into the cache (57565917)
    75M Pages written from the cache to the backing file (75763673)
    $ db_stat -l
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    856M Records entered into the log (856697337)
    941GB 371MB 67KB 112B Log bytes written
    2GB 262MB 998KB 478B Log bytes written since last checkpoint
    31M Total log file I/O writes (31624157)
    31M Total log file I/O writes due to overflow (31527047)
    97136 Total log file flushes
    686 Total log file I/O reads
    96414 Current log file number
    4482953 Current log file offset
    96414 On-disk log file number
    4482862 On-disk log file offset
    1 Maximum commits in a log flush
    1 Minimum commits in a log flush
    160KB Log region size
    195 The number of region locks that required waiting (0%)
    $ db_stat -c
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    $ db_stat -CA
    Default locking region information:
    7 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    160 Number of lock object partitions
    0 Number of current locks
    1218 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    8 Maximum number of lockers at any one time
    0 Number of current lock objects
    1218 Maximum number of lock objects at any one time
    5 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    400M Total number of locks requested (400062331)
    400M Total number of locks released (400062331)
    0 Total number of locks upgraded
    1 Total number of locks downgraded
    0 Lock requests not available due to conflicts, for which we waited
    0 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    1MB 544KB The size of the lock region
    0 The number of partition locks that required waiting (0%)
    0 The maximum number of times any partition lock was waited for (0%)
    0 The number of object queue operations that required waiting (0%)
    0 The number of locker allocations that required waiting (0%)
    0 The number of region locks that required waiting (0%)
    5 Maximum hash bucket length
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock REGINFO information:
    Lock Region type
    5 Region ID
    __db.005 Region name
    0x2accda678000 Region address
    0x2accda678138 Region primary address
    0 Region maximum allocation
    0 Region allocated
    Region allocations: 6006 allocations, 0 failures, 0 frees, 1 longest
    Allocations by power-of-two sizes:
    1KB 6002
    2KB 0
    4KB 0
    8KB 0
    16KB 1
    32KB 0
    64KB 2
    128KB 0
    256KB 1
    512KB 0
    1024KB 0
    REGION_JOIN_OK Region flags
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock region parameters:
    524317 Lock region region mutex [0/9 0% 5091/47054587432128]
    2053 locker table size
    2053 object table size
    944 obj_off
    226120 locker_off
    0 need_dd
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Lock conflict matrix:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by lockers:
    Locker Mode Count Status ----------------- Object ---------------
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    Locks grouped by object:
    Locker Mode Count Status ----------------- Object ---------------
    Diagnosis:
    I'm seeing way too much lock contention on the Java Garbage Collector threads and also on the VM thread when I strace my Java process, and I don't understand the behavior.
    We are spending more than 95% of the time trying to acquire locks, and I don't know what these locks are. Any info here would help.
    Earlier I thought the overflow pages were the problem, as the 100KB data size was exceeding the overflow page limits. So I implemented the duplicate-keys approach, chunking my data to fit within the overflow page limits.
    Now I don't see any overflow pages in my system, but I still see bad BDB read performance.
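    For reference, the chunking workaround mentioned above can be sketched as follows. This is a minimal illustration (the class and method names are mine, not from the original application): each returned piece carries a 4-byte big-endian sequence prefix and would be stored as a sorted duplicate under the same key, so that no single data item exceeds the overflow-page threshold and the pieces come back in order.

    ```java
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class Chunker {
        // Split a large value into pieces of at most chunkSize bytes, each
        // prefixed with a 4-byte big-endian sequence number. With sorted
        // duplicates, the prefix keeps the pieces in insertion order, so the
        // original value can be reassembled by a duplicate cursor scan.
        static List<byte[]> chunk(byte[] value, int chunkSize) {
            List<byte[]> out = new ArrayList<>();
            for (int off = 0, seq = 0; off < value.length; off += chunkSize, seq++) {
                int len = Math.min(chunkSize, value.length - off);
                ByteBuffer b = ByteBuffer.allocate(4 + len);
                b.putInt(seq).put(value, off, len);  // [seq][payload]
                out.add(b.array());
            }
            return out;
        }
    }
    ```

    Each element of the returned list would then be written with a single `Database.put` call against the same key.
    
    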
    $ strace -c -f -p 5642 --->(607 times the lock timed out, errors)
    Process 5642 attached with 45 threads - interrupt to quit
    % time     seconds  usecs/call     calls    errors syscall
    98.19    7.670403        2257      3398       607 futex
     0.84    0.065886           8      8423           pread
     0.69    0.053980        4498        12           fdatasync
     0.22    0.017094           5      3778           pwrite
     0.05    0.004107           5       808           sched_yield
     0.00    0.000120          10        12           read
     0.00    0.000110           9        12           open
     0.00    0.000089           7        12           close
     0.00    0.000025           0      1431           clock_gettime
     0.00    0.000000           0        46           write
     0.00    0.000000           0         1         1 stat
     0.00    0.000000           0        12           lseek
     0.00    0.000000           0        26           mmap
     0.00    0.000000           0        88           mprotect
     0.00    0.000000           0        24           fcntl
    100.00    7.811814                 18083       608 total
    The above stats show that there is too much time spent locking (the futex calls), and I don't understand that, because the application is really single-threaded. I have turned on asynchronous transactions, so the writes might be flushed asynchronously in the background, but spending that much time locking and timing out seems wrong.
    So there is possibly something I'm not setting, or something weird about the way the JVM behaves on my box.
    I grep-ed for futex calls in one of my strace log snippets, and I see that there is a VM thread that grabbed the mutex the maximum number of times (223), followed by the Garbage Collector threads. The following are the lock counts and thread PIDs within the process:
    These are the GC threads (each thread has grabbed the lock 85 times on average):
      86 [8538]
      85 [8539]
      91 [8540]
      91 [8541]
      92 [8542]
      87 [8543]
      90 [8544]
      96 [8545]
      87 [8546]
      97 [8547]
      96 [8548]
      91 [8549]
      91 [8550]
      80 [8552]
    "VM Periodic Task Thread" prio=10 tid=0x00002aaaf4065000 nid=0x2180 waiting on condition (the main problem??)
     223 [8576] ==> grabbing a lock 223 times -- not sure why this is happening…
    "pool-2-thread-1" prio=10 tid=0x00002aaaf44b7000 nid=0x21c8 runnable [0x0000000042aa8000] -- the main worker thread
       34 [8648] (the main thread grabs the futex only 34 times, compared to all the other threads)
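    A per-thread tally like the one above can be produced from the raw strace log. Here is a minimal sketch, assuming `strace -f` output where each line is prefixed with the thread's PID (e.g. `[pid  8576] futex(...`); the class name is illustrative:

    ```java
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FutexCount {
        // Matches a leading thread PID ("[pid  8576]" or a bare "8576")
        // followed by a futex syscall.
        private static final Pattern LINE =
            Pattern.compile("^\\[?(?:pid\\s+)?(\\d+)\\]?\\s+futex\\(");

        // Tally futex calls per thread PID from strace -f output lines.
        static Map<String, Integer> count(List<String> lines) {
            Map<String, Integer> counts = new TreeMap<>();
            for (String line : lines) {
                Matcher m = LINE.matcher(line);
                if (m.find()) {
                    counts.merge(m.group(1), 1, Integer::sum);
                }
            }
            return counts;
        }
    }
    ```

    Feeding it the full log reproduces the counts-by-PID listing above and makes it easy to map the hottest PIDs back to the `nid=0x…` values in a thread dump.
    
    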
    The load average seems OK, though my system thinks it has very little memory left, and I think that is because it's using up a lot of memory for the file system cache?
    top - 23:52:00 up 6 days, 8:41, 1 user, load average: 3.28, 3.40, 3.44
    Tasks: 229 total, 1 running, 228 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.2%us, 0.9%sy, 0.0%ni, 87.5%id, 8.3%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 98999820k total, 98745988k used, 253832k free, 530372k buffers
    Swap: 18481144k total, 1304k used, 18479840k free, 89854800k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    8424 rchitta 16 0 7053m 6.2g 4.4g S 18.3 6.5 401:01.88 java
    8422 rchitta 15 0 7011m 6.1g 4.4g S 14.6 6.5 528:06.92 java
    8423 rchitta 15 0 6989m 6.1g 4.4g S 5.7 6.5 615:28.21 java
    $ java -version
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)
    Maybe I should make my application a Concurrent Data Store app, as there is really only one thread doing the writes and reads. But I would like to understand why my process is spending so much time in locking.
    Can I try any other options? How do I prevent such heavy locking from happening? Has anyone seen this kind of behavior? Maybe this is all normal; I'm pretty new to using BDB.
    If there is a way to disable locking, that would also work, as there is only one thread that's really doing all the work.
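    For what it's worth, the Concurrent Data Store idea can be sketched like this. This is a minimal sketch, assuming the com.sleepycat.db (BDB core, not JE) Java binding; the environment path is a placeholder, and the exact configuration method names should be checked against your BDB version's javadoc:

    ```java
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import java.io.File;

    public class CdsEnvSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);
            cfg.setInitializeCache(true);  // DB_INIT_MPOOL: keep the buffer cache
            cfg.setInitializeCDB(true);    // DB_INIT_CDB: Concurrent Data Store mode
            // Note what is deliberately NOT set: setTransactional,
            // setInitializeLocking, setInitializeLogging. CDS replaces full
            // transactional locking with a single read/write lock per database,
            // which is sufficient for one writer plus any number of readers.
            Environment env = new Environment(new File("/path/to/env"), cfg);  // placeholder path
            // ... open databases, do work ...
            env.close();
        }
    }
    ```

    One caveat: since the strace attributes most of the futex time to the GC and VM threads rather than the worker thread, some of that contention may be JVM-internal synchronization rather than BDB locking, so it is worth confirming the source before restructuring the store.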
    Should I disable the file system cache? One thing is that my application does not utilize the cache very well: once I visit a key, I don't visit it again for a very long time, so it's very possible that the key has to be read again from disk.
    It is possible that I'm thinking about this completely wrong, focusing too much on the locking behavior, and the problem is elsewhere.
    Any thoughts/suggestions etc. are welcome. Your help on this is much appreciated.
    Thanks,
    Rama

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Task related to timesheet entries

    I have the task TS 20000460, which is related to time sheet entries, with the default rule 00000157 for determining the superior position and the custom user exit (CATS0008) for determining the approver. The task is set in the data entry profile, and everything has been fine in the production environment, but suddenly we got an issue from the end user saying that the notifications to the managers are not being sent. What could be the probable reason?
    RH_ACT_LEADING_POSITION is the FM used in the rule.
    Thanks in advance.

    What do you mean by notification? The task, or an e-mail notifying the user of new tasks? If it is the task, check SWIA or SWI1 to see whether the task is started around the time the time sheet is saved. You can then check the log for any problems.
    If it is an e-mail notification, check in SCOT whether the mails are being sent, or even whether the e-mails are being created.
    Regards,
    Martin

  • N78 problem related to software updater and softwa...

    I have recently purchased an N78, and I have updated the software to 20.168 (something like that), and now I am facing the following problems:
    1) Camera zoom in and zoom out are not working properly; they don't even work when you reset the phone by dialing *#7370* or *#7780#.
    2) I have installed the CoreCodec mobile player, which plays back all formats like AVI, MPEG, MPEG-2, MP4, etc., but it's not working properly on my N78 (S60 v3). The version of this player is 1.12 for S60 v3, and it still creates problems, like the sound getting stuck and video issues. It's not that I'm using a cracked version; it's the genuine one. I have also installed the CoreCodec player on my N73 (S60 v2), and it works fine there; I don't know why it doesn't work on the N78.
    3) Finally, I want to know if there is any later firmware for the N78, so please let me know; it might solve my problem. But I still need help from you guys, so please focus.
    Thanks,
    Lina

    The code is *#7370# and not * at the end. It is a bug; I have heard of it, but there is no fix for it as of yet. You will need a fix in the next firmware, but no one really knows when that might come out and be released.
    Keep in mind that the N73 is S60 3rd Edition FP1 and the N78 is S60 3rd Edition FP2, and that is where the issue might be. Make sure the application is compatible with Feature Pack 2 phones and not just Feature Pack 1 phones.
    /discussions/board/message?board.id=swupdate&thread.id=42894&view=by_date_ascending&page=5
    Also read this link; it is all about your firmware. At the end of the thread (page 5 or 6), v20-something appears to be the latest for your phone, and it appears it was released around January 2009.
    Message Edited by radical24 on 01-Apr-2009 02:43 PM

  • Performance Problems with UI Element Tabstrip and IE

    Hi,
    I use the UI Element "Tabstrip" in a Java Web Dynpro Application. The application gets slower the more I jump from one tab to the next. All the other UI Elements (e.g. ComboBox, TextField) are affected, too.
    A system trace on the Web AS seems to indicate that it is a client problem.
    We are using Internet Explorer 6.0.2900.2180 on XP SP2 and the J2EE Engine 6.40.
    It seems to be a problem with the Tabstrip element in combination with IE (Firefox 3 works fine). We created a test application with only the Tabstrip element, three tabs, and a combo box. After several clicks on the tabs, the test application gets slower...
    Has anyone had the same problem with tabstrips, or does anyone have an idea what the reason might be?
    Thanks,
    Sabine

    Open an OSS message (BC-WD-UR).
    Armin

  • I have Problem uploading photos to craigslist. It says it has a problem communicating with the server and times out. It works in explorer but not Firefox and I just recently downloaded the new version of firefox. Can you be of any help?

    Photos will not upload and the server times out.

    Well, two thoughts.
    The battery has completely discharged and will take overnight to recharge, so don't expect much in an hour.
    The battery has deep discharged and is now dead. It will never recharge and should be replaced.
    If your iPod Touch, iPhone, or iPad is Broken
    Apple does not fix iDevices. Instead they exchange yours for a refurbished or new replacement depending upon the age of your device and refurbished inventories. On rare occasions when there are no longer refurbished units for your older model, they may replace it with the next newer model.
    You may take your device to an Apple retailer for help or you may call Customer Service and arrange to send your device to Apple:
    Apple Store Customer Service at 1-800-676-2775 or visit online Help for more information.
    To contact product and tech support: Contacting Apple for support and service - this includes international calling numbers.
    iPod Service Support and Costs
    iPhone Service Support and Costs
    iPad Service Support and Costs
    There are third-party firms that do repairs on iDevices, and there are places where you can order parts to DIY if you feel up to the task. Start with Google to search for these.
    The flat fee for a battery exchange is, I believe, $99.00 USD.

Maybe you are looking for

  • Ps CS4 keeps crashing

    Hi everyone, I just bought a new Macbook pro and the adobe creative suite 4 recently. I'm a design student and use photoshop and illustrator regularly. I use photoshop for digital drawings and often experience crashing. It was not so apparent in the

  • I cannot write (only read) on an external hard drive when I'm running Windows on BootCamp.

    I'm running Windows thanks to BootCamp perfectly, but I have a problem. I can open and read files that I have on an external hard drive (HFS+) but I can't create or modify files in it. It's like I don't have permissions. Please, how can I solve this?

  • Noob.. GB '08 on MBP Won't Record Right Channel Audio

    Hi, I'm a noob to Macs so please forgive me if I've missed something obvious. I wanted to do some light, goofing around with GB '08 V 4.1.2 with my Roland V-Drums. The Roland V-Drum module output uses two, 1/4" jacks (a "Left" and "Right" channel) wh

  • How to Prepare for Leopard?

    I finally got Leopard. What should I do to prepare for installing it on my iMac G5? I have 37 gigabytes of space left on my hard drive and one gigabyte of RAM. I understand that the Classic support is no longer available in Leopard. How can I identif

  • Recurring error ORA-01654

    Dear all We are running a batch process and consistently getting this error. (Unexpected error in package pk_rts_con_daily_rates, program "p_create_cpr_record". Error is "ORA-01654: unable to extend index RTS.CRP_CPA_FK_I by 1280 in tablespace RTS_IN