Long runtimes due to P to BP integration

Hi all,
The folks on my project are wondering whether any of the experts out there have faced the following issue before. We have raised an OSS message for it but have yet to receive a concrete solution from SAP, so we are exploring other avenues of resolving this matter.
Currently, we are facing an issue where a standard infotype BAdI causes extremely long runtimes for programs that update certain affected infotypes. The BAdI is HR_INTEGRATION_TO_BP, which SAP recommends activating when E-Recruitment is implemented. A fairly detailed technical description follows.
1. Within the IN_UPDATE method of the BAdI, function module HCM_P_BP_INTEGRATION is called to create linkages between a person object and a business partner object.
2. Function module RH_ALEOX_BUPA_WRITE_CP is then called within HCM_P_BP_INTEGRATION to perform the database updates.
3. Inside RH_ALEOX_BUPA_WRITE_CP there are several subroutines of interest, such as CP_BP_UPDATE_SMTP_BPS and CP_BP_UPDATE_FAX_BPS. These subroutines are structured similarly and call function module BUPA_CENTRAL_EXPL_SAVE_HR to create the database entries.
4. In BUPA_CENTRAL_EXPL_SAVE_HR, subroutine ADDRESS_DATA_SAVE_ES_NOUPDTASK calls function module BUP_MEMORY_PREPARE_FOR_UPD_ADR, which is where the problem begins.
5. BUP_MEMORY_PREPARE_FOR_UPD_ADR contains two subroutines, PREPARE_BUT020 and PREPARE_BUT021. Both contain similar code: a LOOP is performed over a global internal table (GT_BUT020_MEM_SORT/GT_BUT021_MEM_SORT) and the entries are appended to another global internal table (GT_BUT020_MEM/GT_BUT021_MEM). These tables (GT_BUT020_MEM/GT_BUT021_MEM) are later used to update database tables BUT020 and BUT021_FS. However, we noticed that these two tables are not cleared after the database update, which results in an ever-increasing number of entries being written to the database, even though many of them have already been updated. A simplified sketch of this pattern is shown right after this list.
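To make point 5 easier to picture, here is a minimal, self-contained sketch of the behaviour. It is illustrative only and is NOT the actual SAP source: the report name, the dummy key assignment and the two simulated save cycles are invented for demonstration, and only the global table names and the missing CLEAR correspond to what we observed in the debugger.

* Simplified sketch of the pattern described in point 5 (NOT the SAP source).
REPORT zdemo_but020_mem_growth.

DATA: gt_but020_mem_sort TYPE STANDARD TABLE OF but020,
      gt_but020_mem      TYPE STANDARD TABLE OF but020,
      gs_but020          TYPE but020,
      gv_lines           TYPE i.

START-OF-SELECTION.
* Simulate two consecutive save cycles (e.g. two infotype saves).
  DO 2 TIMES.
*   One new entry is collected for the current save.
    CLEAR gs_but020.
    gs_but020-partner = sy-index.                     " dummy key
    APPEND gs_but020 TO gt_but020_mem_sort.

*   PREPARE_BUT020-like step: copy everything into the global update table.
    LOOP AT gt_but020_mem_sort INTO gs_but020.
      APPEND gs_but020 TO gt_but020_mem.
    ENDLOOP.

*   At this point the standard code updates database table BUT020 from
*   GT_BUT020_MEM. Because neither GT_BUT020_MEM nor GT_BUT020_MEM_SORT is
*   cleared afterwards, the number of rows written grows with every save,
*   even though most of them were already written before.
    DESCRIBE TABLE gt_but020_mem LINES gv_lines.
    WRITE: / 'Save cycle', sy-index, '- rows passed to the DB update:', gv_lines.

*   A CLEAR at this point would prevent the growth, e.g.:
*   CLEAR: gt_but020_mem, gt_but020_mem_sort.
  ENDDO.

Running this prints 1 row for the first cycle and 3 for the second. During a run that creates many new personnel numbers (and hence new business partners), each save adds entries that are never cleared, which is exactly the growth pattern we observed.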
If any of you are interested in checking whether this issue affects you (and E-Recruitment is implemented in your system), simply run a program that updates infotype 0000, 0001, 0002, 0006 subtype 1, 0009 subtype 0, or 0105 subtype 0001, 0005, 0010 or 0020. Not many infotype updates are required to see the issue; two are enough to tell whether the tables in point 5 are being cleared. (We have observed that this issue occurs during the creation of a new personnel number, and hence a new business partner. For existing personnel numbers, the same code is executed but the internal tables in point 5 are not populated.)
System details: SAP ECC 6.0 (Support package: SAPKA70021) with E-Recruitment (Support package: SAPK-60017INERECRUIT) implemented.
Thanks for reading.

Hi Annabelle,
We have a similar setup, but are on SAPK-60406INERECRUIT.  Although the issue does not always occur, we do have a case where the error ADDRESS_DATA_SAVE_ES is thrown.
Did you ever resolve your issue? I'm hoping your solution can help guide me.
Thanks
Shane

Similar Messages

  • Long runtimes while performing CCR

    Hello All,
    After running the delta report job, we found some inconsistencies for stocks. When we try to delete them or push them to APO (after performing the iteration), the entries are neither deleted nor pushed, and the job takes a long time to run. We don't see this issue for any elements other than stocks. Please let me know why this might be happening, and also whether there is any way to rectify this stock inconsistency between ECC and APO.
    Thanks
    Uday

    Uday,
    I had one experience several years back with long CCR runtimes for Stock elements that might apply to you.
    For CCR, you have 6 categories of stocks to check.  If any of these stock category elements is not actually contained in any of your integration models, the CCR search can take a long time searching through ALL integration models trying to find a 'hit'.
    There are two possible solutions.  Ensure that you ONLY select CCR stock types that are contained in your CFM1 integration models.  If possible, deselect the CCR stock types that have no actual stocks within the integration models (where such stocks do not actually exist in ECC).  If this does not meet your business requirement, then try performing your CCR ONLY on the integration model(s) that contain the stock entries.  Do not leave the CCR field "Model Name" blank.
    With respect to the stock inconsistencies, 'how bad is it'?  It is common to have one or two Stock inconsistencies every day if you have hundreds of thousands of stock elements to keep in synch.  The most common reason I see for excessive stock entries in CCR is improperly coded enhancements.
    Best Regards,
    DB49

  • The Command 'Get-ADUser -Identity <username> -Properties *' No Longer Works Due to a Bug in PowerShell 4 and Win8.1 Pro

    It produces the following error:
    Get-ADUser : One or more properties are invalid.
    Parameter name: msDS-AssignedAuthNPolicy
    At line:1 char:1
    + Get-ADUser -Identity ********** -Properties *
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidArgument: (**********:ADUser) [Get-ADUser], ArgumentException
        + FullyQualifiedErrorId : ActiveDirectoryCmdlet:System.ArgumentException,Microsoft.ActiveDirectory.Management.Commands.GetADUser
    This is already documented in these forums:
    1. http://social.technet.microsoft.com/Forums/systemcenter/en-US/1bf9568e-6adc-495d-a37c-48877f86985a/powershell-40-and-the-activedirectory-ps-module?forum=w81previtpro
    2. https://connect.microsoft.com/PowerShell/feedback/details/806452/windows-8-1-powershell-4-0-get-adcomputer-properties-bug
    Unfortunately, in typical style, Microsoft has archived number 1 without bothering to respond with advice. Can someone at Microsoft please advise your customers here whether this is being investigated and whether any workaround or fix is available?
    -- huddie "If you're not seeking help or offering it, you probably shouldn't be here."

    Did you consider using one of the "workarounds" below to run the existing version of the AD module for PowerShell under a specific PowerShell version:
    a. #requires -version 3.0    (in a ps1 script)
    b. powershell -version 3.0
    Please share with us whether this helps.
    Desmond, did you miss my reply below? I still haven't heard back from you:
    >> "Desmond,
    >> 
    >> Thanks for your quick response.
    >> 
    >> I'm running this just as a command, not in a script:
    >> 
    >> Get-ADUser -Identity <username> -Properties *
    >> 
    >> When I try to run powershell -version 3.0 first, then run the above command, it still fails with the same error. When I then run Get-Host, the version still shows as 4.0, so maybe there's more I need to do to launch a 3.0 host. Anyway, from what I've read it seems your command is more aimed at script compatibility.
    >> 
    >> Can you help ?"
    -- huddie "If you're not seeking help or offering it, you probably shouldn't be here."

  • I've forgotten my security questions and my rescue email address is no longer available because that address is from many years ago; what can I do about purchasing in the iTunes Store?


    You won't be able to change your rescue email address until you can answer your questions, so you will need to contact iTunes Support / Apple in your country to get the questions reset.
    Contacting Apple about account security : http://support.apple.com/kb/HT5699
    When they've been reset you can then use the steps half-way down this page to update your rescue email address for potential future use : http://support.apple.com/kb/HT5312

  • Long runtime report SMIGR_CREATE_DDL

    Hi SAP Experts.
    I am migrating an SAP ERP 6.0 SR3 system from 32-bit to x64 with a system copy export/import. But the report SMIGR_CREATE_DDL has a long runtime and doesn't finish.
    How can I solve the problem?
    Best regards.
    Luis Gomez.

    Hi
    As far as I know, the report is primarily needed only on BI systems. As long as you don't have partitioned tables or bitmap indexes, you don't have to run SMIGR_CREATE_DDL; you will only end up with an empty directory.
    But to troubleshoot your problem, can you please tell us which database/version you have? Can you see which SQL statements are running?
    Best regards
    Michael
    Edit: I just tested the report on an ERP 6.0 system (on Oracle 10.2.0.2); it took ~2 hrs to run and the output was empty.

  • CDB Upgrade 4.0 - 5.0: Long Runtime

    Hello all,
    We are in the middle of a CRM upgrade from 4.0 -> 7.0 and are currently doing the CDB upgrade from 4.0 -> 5.0. As part of the segment download, I am downloading CAPGEN_OBJECT_WRITE and it has created a few lakh (several hundred thousand) entries in SMQ2.
    The system has been processing those entries for the last 3 days, and although the entries are being processed, we cannot afford such a long runtime during go-live. Did I miss something?
    Have you ever faced such scenario? Appreciate your valuable feedback on this.
    Thanks in advance,
    Regards
    Pijush

    Hi William,
    COBRAS has its limitations when it comes to internet subscribers, as noted in the link: Internet subscribers, Bridge, AMIS, SMTP users and such will not be included.
    http://www.ciscounitytools.com/Applications/General/COBRAS/Help/COBRAS.htm
    You might try using the Subscriber Information Dump under Tools Depot > Administration Tools > Subscriber Information Dump, and export and import to the new Unity server.
    Rick Mai

  • Web Application Designer 7 - Long Runtime

    Hi,
    I'm working in a BI 7 environment, and to fulfil the users' requirements we have developed a web template containing almost 30 queries.
    We are facing very long runtimes for that report on the web. After analysing with BI statistics, we found that the DB and OLAP are not taking very long to run; it is the front end (web template) which is causing the delay. Another observation is that most of the time is consumed while the web template is being loaded/initialized; once loaded, flipping between the different tabs (reports) doesn't take much time.
    My questions are:
    What can I do to reduce the web template initialization/loading time?
    Is there any way I can get the time taken by the front end in the statistics? (Currently we get DB and OLAP time through the BI statistics cube and consider the remaining time as front-end time, because the standard BI statistics cube cannot capture front-end time when the report is running in a browser.)
    What are the technical processes involved when information moves from the DB back to the browser?
    Your earliest help would be highly appreciated. Please let me know if you require any further information.
    Regards,
    Shabbar
    0044 (0) 7856 048 843

    Hi,
    It asks you for a login to the Portal because the output of Web Templates can be viewed only through the Enterprise Portal. This is perfectly normal. The BI-EP configuration should be set up properly, and you need a login ID and password for the Portal.
    For using WAD to design the front end, go through the link below. It should help you.
    http://help.sap.com/saphelp_nw70/helpdata/en/b2/e50138fede083de10000009b38f8cf/frameset.htm

  • How do I get refunded for the apps that no longer work because I can't upgrade my phone?

    How do I get refunded for the apps that I purchased that no longer work because I can't upgrade my 3.1.3 iPhone?

    I have never bought an app, but for everything else Apple somewhere has, in tiny print, the system requirements for running the software. If you're going to live in the world of outdated technology the way I do, you will quickly learn to always read the system requirements.
    iTunes Customer Service Contact - http://www.apple.com/support/itunes/contact.html - Apple states all sales are final, but you can always try.

  • BPS0 - very long runtime

    Hi gurus,
    During manual planning in BPS0, long runtimes occur.
    FOX formulas are used.
    There is a lot of data selected, but that is a business requirement.
    Memory is OK as far as I can see in ST02 (only 10-15% of resources are usually used) and there are no dumps, but the runtimes are very long.
    I have examined the hardware, system and DB with different methods and found nothing unusual.
    Could you please give me more advice on how I can do extra checks on the system (preferably from a Basis point of view)?
    BW 3.1. - patch 22
    SEM-BW 3.5 - patch 18
    Thanks in advance
    Elena

    Hello Elena,
    you need to take a structured approach. "Examining" things is fine but usually does not lead to results quickly.
    Performance tuning works best as follows:
    1) Check statistics or run a trace
    2) Find the slowest part
    3) Make this part run faster (better, eliminate it)
    4) Back to #1 until it is fast enough
    For the first round, use the BPS statistics. They will tell you if BW data selection or BPS functions are the slowest part.
    If BW is the problem, use aggregates and do all the things to speed up BW (see course BW360).
    If BPS is the problem, check the webinar I did earlier this year: https://www.sdn.sap.com/irj/sdn/webinar?rid=/webcontent/uuid/2ad07de4-0601-0010-a58c-96b6685298f9 [original link is broken]
    Also the BPS performance guide is a must read: https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7c85d590-0201-0010-20b5-f9d0aa10c53f
    Next, would be SQL trace and ABAP performance trace (ST05, SE30). Check the traces for any custom coding or custom tables at the top of the runtime measurements.
    Finally, you can often see from the program names in the ABAP runtime trace, which components in BPS are the slowest. See if you can match this to the configuration that's used in BPS (variables, characteristic relationships, data slices, etc).
    Regards
    Marc
    SAP NetWeaver RIG

  • Long runtime for CU50

    Hi there, is there any way we can update the statistics for table CABN? We encounter long runtimes when executing transaction CU50, and we found that the process keeps accessing the CABN table, which contains more than 10k characteristic records. Thanks

    If you are running on IBM i (i5/OS, OS/400), there is no need to update statistics for a database table, because that is done automatically by the database.
    If you have a slow transaction, you can analyze it through transaction ST05 and then use the Explain function on the longest running statement. Within the Explain, there is a function "Index advised", that might help in your case.
    Kind regards,
    Christian Bartels.

  • Labview Built Executable - Long Runtime Startup Time

    Hi All,
    I have a LV 2011 SP1 application that has been built (executable) on a development machine running Windows 7 Professional. The application is copied to the target runtime machine. This machine has the LV 2011 runtime plus the other DAQmx prerequisites. The target machine is also Windows 7 Professional, a quad-core 3.3 GHz Xeon machine that, on paper, is significantly faster than the development machine.
    I run the built application on the development machine. It takes around 1-2 seconds for the Startup VI Front panel to show. The application loads and runs. So far so good.
    I run the built application on the runtime machine. It takes just over 90 seconds for the Startup VI Front panel to show. The exe process during this time is showing 0% CPU usage and a very small memory footprint (around 32MB). Eventually the Startup VI is shown and CPU usage and Memory consumption climb almost immediately to around 2% and 80MB. This is normal. From this point on the application runs as it should.
    My question: why is the application startup time so dramatically different on the target machine? Is there some other startup process inherent in the runtime engine that is taking longer? I have tried loading the evaluation version of LabVIEW 2011 SP1 on the target machine, but this appears to make no difference. I know that this delay is more an annoyance than a show-stopper, but my clients are asking questions and it would be good to provide some answers.
    Some basic web searches have revealed others having similar problems and often the problem is related to some Windows service or other. I have also disabled the firewall on the target PC (though it is not connected to the internet, just a small IO network with ethernet chassis CompactDAQ modules) with no apparent difference. Unfortunately I cannot disable the virus scanner due to company policies.
    Thanks for your help all.

    I have been having this problem and it is very annoying. I am unable to figure out what exactly is slowing the load time of an exe on a target machine. My target machines (3 of them) are not connected to the network. I have LabVIEW 2012 and am pretty sure all drivers (DAQmx, VISA) and the runtime have been installed correctly.
    I have noticed this issue isn't there when the entire development environment is installed. To troubleshoot, I am using my personal laptop as a test site (because I can't travel to other cities to fix it without knowing the solution), and my laptop has no previous installations of LabVIEW. I install the application and drivers using the installer I build, but it exhibits the same behavior. I must note here that I did not see this problem with LabVIEW 2010, which I was previously using, but my application design has changed since then. Nonetheless, I have checked the functionality of my application and am absolutely sure it has nothing to do with the slow load times.
    I am beginning to suspect some component has a bug in it for LabVIEW 2012, but I am in no position to validate that. Has anyone found a concrete solution that makes their application open instantly and run?
    Thanks a bunch!
    V
    I may not be perfect, but I'm all I got!

  • URL webdynpro was not called due to an error after integration

    Hi All,
    I have integrated an SRM (ABAP) system with an EP (dual-stack) portal system (both on EHP1), freshly integrated using BS: SRM-Portal (Basic Configuration) V1.
    I've checked the SRM web-based GUI "http://<hostname>.<FQCN>:8002/sap/bc/gui/sap/its/webgui"
    and it's working, but when I tried to check Web Dynpro,
    "http://<hostname>.<FQCN>:8002/sap/bc/webdynpro", I couldn't connect to the link. I have already read SAP Note 1088717 (Active services for Web Dynpro ABAP in transaction SICF) and compared all services in SRM (ABAP) that should be activated, but the error still persists. I have also already changed the parameters icm/host_name_full and SAPLOCALHOSTFULL.
    Error in accessing webdynpro link:
    Note
    The following error text was processed in the system SR2 : WebDynpro Exception: Application // Does Not Exist
    The error occurred on the application server MDCSAP05_SR2_02 and in the work process 0 .
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: RAISE of program CX_WD_GENERAL=================CP
    Method: IF_WDR_RUNTIME~GET_RR_APPLICATION of program CL_WDR_MAIN_TASK==============CP
    Method: CREATE_APPLICATION of program CL_WDR_CLIENT_ABSTRACT_HTTP===CP
    Method: IF_HTTP_EXTENSION~HANDLE_REQUEST of program CL_WDR_MAIN_TASK==============CP
    Method: EXECUTE_REQUEST of program CL_HTTP_SERVER================CP
    Function: HTTP_DISPATCH_REQUEST of program SAPLHTTP_RUNTIME
    Module: %_HTTP_START of program SAPMHTTP
    Any solution for this? Please help thanks!
    Best regards,
    Tony

    Hi,
    If you called this URL: "http://<FQDN>:<port>/sap/bc/webdynpro",
    the error is normal: you tried to call the Web Dynpro runtime without telling it which Web Dynpro application you want.
    A real Web Dynpro URL looks like: http://<FQDN>:<port>/sap/bc/webdynpro/<namespace>/<application> (for standard SAP applications the namespace is sap).
    Regards,
    Olivier

  • DSO activation - long runtime

    Hello guys,
    in our BW system, activation of a DSO request takes a long time (> 1 hr), although only a small number of records (a few hundred) have been loaded. When examining the job log in SM37, I found that there are no entries for the time in question (note the gap between 08:21:18 and 09:25:55):
    08:21:13 Job started
    08:21:13 Step 001 started (program RSPROCESS, variant &0000001044887, user ID BWBATCH)
    08:21:18 Activation is running: Data target CUSDSO06, from 132,353 to 132,353
    09:25:55 Overlapping check with archived data areas for InfoProvider CUSDSO06
    09:25:55 Check not necessary, as no data has been archived for CUSDSO06
    09:25:55 Data to be activated successfully checked against archiving objects
    09:25:57 Status transition 2 / 2 to 7 / 7 completed successfully
    ... (further lines concerning SID generation etc. omitted)
    The actual activation is executed within several seconds, so I wondered what happens between 08:21 and 09:25. I tried to find out more by tracing the process with transaction ST05. The trace shows that for each request that has ever been loaded into the DSO, several tables are read (see below). Since there are more than 4,000 requests and reading the data for each request takes around 1 second, the runtime adds up to more than one hour.
    Yet only ONE request has to be activated (all the other requests were activated in the months before, so they should be irrelevant to the actual activation job).
    | Duration | Table      | Op     | Recs | RC   | Statement
    |      226 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        7 | RSSTATMAN  | REOPEN |      |    0 | SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = '/BIC/B0000302' AND "DTA_SOURCE_TYPE" = 'TFSTRU' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      263 | RSSTATMAN  | FETCH  |    0 | 1403 |
    |        6 | RSSTATMAN  | REOPEN |      |    0 | SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = 'SALES_CUSTOMERS_DS            FILE_HU' AND "DTA_SOURCE_TYPE" = 'DTASRC' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      232 | RSSTATMAN  | FETCH  |    1 | 1403 |
    |        5 | RSSTATMAN  | REOPEN |      |    0 | SELECT WHERE "RNR" = 'DTPR_4669H4NELXUK91DGMUIWCY6FY' AND "DTA_SOURCE" = 'SALES_CUSTOMERS_DS            FILE_HU' AND "DTA_SOURCE_TYPE" = 'DTASRC' AND "DTA_DEST" = 'CUSDSO06' AND "DTA_DEST_TYPE" = 'ODSO'
    |      227 | RSSTATMAN  | FETCH  |    1 | 1403 |
    |        6 | RSSELDONE  | REOPEN |      |    0 | SELECT WHERE "RNR" = 'REQU_4669GG41VH6YPLG04AL85I5GE' AND ROWNUM <= 1
    |      902 | RSSELDONE  | FETCH  |    1 |    0 |
    |        6 | / /RREQUID | REOPEN |      |    0 | SELECT WHERE "SID" = 119751
    |      220 | / /RREQUID | FETCH  |    1 |    0 |
    |        5 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      230 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751 AND ROWNUM <= 1
    |      684 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      201 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      194 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      195 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      195 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      309 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      264 | RSBKREQUE  | FETCH  |    1 | 1403 |
    |        7 | RSBMNODES  | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751' AND "NODE" = 0
    |      410 | RSBMNODES  | FETCH  |    1 |    0 |
    |        5 | RSBMNODES  | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751' AND "NODE" = 0
    |      242 | RSBMNODES  | FETCH  |    1 |    0 |
    |        6 | RSBMLOG    | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751'
    |      247 | RSBMLOG    | FETCH  |    1 |    0 |
    |       12 | RSBMNODES  | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751'
    |      761 | RSBMNODES  | FETCH  |   30 | 1403 |
    |        6 | RSBMONMESS | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751' ORDER BY "NODE" , "POSIT"
    |      645 | RSBMONMESS | FETCH  |   17 | 1403 |
    |        6 | RSBMLOGPAR | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751'
    |      431 | RSBMLOGPAR | FETCH  |    7 | 1403 |
    |        5 | RSBKDATAP  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      353 | RSBKDATAP  | FETCH  |    2 | 1403 |
    |        6 | RSBKDATAP  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      246 | RSBKDATAP  | FETCH  |    0 | 1403 |
    |        6 | RSBKDATA_V | REOPEN |      |    0 | SELECT WHERE "REQUID30" = 'DTPR_4668XBCECQQT3RHYI54CCMUQM'
    |  314.804 | RSBKDATA_V | FETCH  |    0 | 1403 |
    |       13 | RSBMNODES  | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751'
    |    1.114 | RSBMNODES  | FETCH  |   30 | 1403 |
    |        6 | RSBMONMESS | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751' ORDER BY "NODE" , "POSIT"
    |      639 | RSBMONMESS | FETCH  |   17 | 1403 |
    |        7 | RSBMLOGPAR | REOPEN |      |    0 | SELECT WHERE "LOGID" = 'DTPR_119751'
    |      374 | RSBMLOGPAR | FETCH  |    7 | 1403 |
    |        6 | RSBKDATAP  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      329 | RSBKDATAP  | FETCH  |    2 | 1403 |
    |        6 | RSBKDATAP  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      242 | RSBKDATAP  | FETCH  |    0 | 1403 |
    |        6 | RSBKDATA_V | REOPEN |      |    0 | SELECT WHERE "REQUID30" = 'DTPR_4668XBCECQQT3RHYI54CCMUQM'
    |  312.963 | RSBKDATA_V | FETCH  |    0 | 1403 |
    |        8 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    |      589 | RSBKREQUE  | FETCH  |    1 |    0 |
    |        6 | RSBKSELECT | REOPEN |      |    0 | SELECT WHERE "REQUID" = '                       119751'
    |      287 | RSBKSELECT | FETCH  |    0 | 1403 |
    |        6 | RSBKREQUE  | REOPEN |      |    0 | SELECT WHERE "REQUID" = 119751
    Any ideas?
    Many thanks,
    Regards,
    Günter

    Check the profile parameter settings in RZ10. You can also try number range buffering for better performance.

  • CMS collector taking too long pauses due to fragmentation

    We are using WebLogic 10gR3 servers with JDK 1.6.0_23 for an ODSI application, with the CMS collector for garbage collection. However, we are seeing ParNew (promotion failed) events due to fragmentation, and CMS ends up with stop-the-world pauses of more than 30 seconds every 12-13 hours; other than that, CMS normally causes only 0.03 - 0.05 seconds of application pauses. Here are the JVM arguments we are using and the GC logs that show the ParNew promotion failures.
    /opt/oracle/10gR3/jdk160_23/jre/bin/java -Dweblogic.Name=member3MS1 -Djava.security.policy=/opt/oracle/10gR3/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.management.server=http://wdcsn443a.sys.cigna.com:7001 -Djava.library.path=/opt/oracle/10gR3/jdk160_23/jre/lib/sparc/client:/opt/oracle/10gR3/jdk160_23/jre/lib/sparc:/opt/oracle/10gR3/jdk160_23/jre/../lib/sparc:/opt/oracle/10gR3/patch_wlw1030/profiles/default/native:/opt/oracle/10gR3/patch_wls1030/profiles/default/native:/opt/oracle/10gR3/patch_cie670/profiles/default/native:/opt/oracle/10gR3/patch_aldsp1030/profiles/default/native:/opt/oracle/10gR3/patch_wlw1030/profiles/default/native:/opt/oracle/10gR3/patch_wls1030/profiles/default/native:/opt/oracle/10gR3/patch_cie670/profiles/default/native:/opt/oracle/10gR3/patch_aldsp1030/profiles/default/native:.:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc/oci920_8:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc/oci920_8:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc:/opt/oracle/10gR3/wlserver_10.3/server/native/solaris/sparc/oci920_8:/usr/jdk/packages/lib/sparc:/lib:/usr/lib -Djava.class.path=/opt/oracle/10gR3/user_projects/lib/commons-lang-2.4.jar:/opt/oracle/10gR3/user_projects/lib/log4j-1.2.15.jar:/opt/oracle/10gR3/modules/com.bea.common.configfwk_1.2.0.0.jar:/opt/oracle/10gR3/modules/com.bea.core.xquery.beaxmlbeans-interop_1.3.0.0.jar:/opt/oracle/10gR3/modules/com.bea.core.xquery.xmlbeans-interop_1.3.0.0.jar:/opt/oracle/10gR3/modules/com.bea.core.binxml_1.3.0.0.jar:/opt/oracle/10gR3/modules/com.bea.core.sdo_1.1.0.0.jar:/opt/oracle/10gR3/modules/com.bea.core.xquery_1.3.0.0.jar:/opt/oracle/10gR3/modules/com.bea.alsb.client_1.1.0.0.jar:/opt/oracle/10gR3/modules/com.bea.common.configfwk.wlinterop_10.3.0.0.jar:/opt/oracle/10gR3/patch_wss110/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/opt/oracle/10gR3/patch_wls1001/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/opt/oracle/10gR3/patch_cie650/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/opt/oracle/10gR3/patch_aldsp320/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/opt/oracle/10gR3/jdk160_23/lib/tools.jar:/opt/oracle/10gR3/wlserver_10.3/server/lib/weblogic_sp.jar:/opt/oracle/10gR3/wlserver_10.3/server/lib/weblogic.jar:/opt/oracle/10gR3/modules/features/weblogic.server.modules_10.0.1.0.jar:/opt/oracle/10gR3/modules/features/com.bea.cie.common-plugin.launch_2.1.2.0.jar:/opt/oracle/10gR3/wlserver_10.3/server/lib/webservices.jar:/opt/oracle/10gR3/modules/org.apache.ant_1.6.5/lib/ant-all.jar:/opt/oracle/10gR3/modules/net.sf.antcontrib_1.0b2.0/lib/ant-contrib.jar:/opt/oracle/10gR3/modules/features/aldsp.server.modules_3.2.0.0.jar:/opt/oracle/10gR3/odsi_10.3/lib/ld-server-core.jar:/opt/oracle/10gR3/wlserver_10.3/common/eval/pointbase/lib/pbclient51.jar:/opt/oracle/10gR3/wlserver_10.3/server/lib/xqrl.jar:/opt/oracle/10gR3/user_projects/lib/db2jcc.jar:/opt/oracle/10gR3/user_projects/lib/db2jcc_license_cisuz.jar:/opt/oracle/10gR3/properties -Dweblogic.system.BootIdentityFile=/opt/oracle/10gR3/user_projects/domains/DataFabricDomain/servers/member3MS1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=true -Dweblogic.ReverseDNSAllowed=false -Xms2048m -Xmx2048m -Xmn640m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
-XX:-UseBiasedLocking -XX:ParallelGCThreads=16 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/logs/oracle/10gR3/DataFabricDomain/ManagedServer/member3agc.log_* -da -Dplatform.home=/opt/oracle/10gR3/wlserver_10.3 -Dwls.home=/opt/oracle/10gR3/wlserver_10.3/server -Dweblogic.home=/opt/oracle/10gR3/wlserver_10.3/server -Dwli.home=/opt/oracle/10gR3/wlserver_10.3/integration -Daldsp.home=/opt/oracle/10gR3/odsi_10.3 -Djavax.xml.soap.MessageFactory=weblogic.xml.saaj.MessageFactoryImpl -Dweblogic.management.discover=false -Dweblogic.management.server=http://wdcsn443a.sys.cigna.com:7001 -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=true -Dweblogic.ext.dirs=/opt/oracle/10gR3/patch_wss110/profiles/default/sysext_manifest_classpath:/opt/oracle/10gR3/patch_wls1001/profiles/default/sysext_manifest_classpath:/opt/oracle/10gR3/patch_cie650/profiles/default/sysext_manifest_classpath:/opt/oracle/10gR3/patch_aldsp320/profiles/default/sysext_manifest_classpath -Dweblogic.system.BootIdentityFile=/opt/oracle/10gR3/user_projects/domains/DataFabricDomain/security/boot.properties -DDB2_USE_LEGACY_TOP_CLAUSE=true -Dlog4j.configuration=file:/opt/oracle/10gR3/user_projects/domains/DataFabricDomain/properties/log4j.xml -Ddeploymentsite=prod -DLOG4J_LEVEL=WARN -DLOG4J_ROOT=/logs/oracle/10gR3/DataFabricDomain -DLOG4J_NODENAME=member3a weblogic.Server
    48461.245: [GC 48461.245: [*ParNew (promotion failed)*: 559017K->551408K(589824K), 1.1880458 secs]48462.433: [CMS: 1294242K->895754K(1441792K), 28.3698618 secs] 1852617K->895754K(2031616K), [CMS Perm : 122026K->120411K(262144K)], 29.5587684 secs] [Times: user=29.93 sys=0.04, real=29.56 secs]
    Total time for which application threads were stopped: 29.5661221 seconds
    109007.379: [GC 109007.380: [ParNew: 531521K->8922K(589824K), 0.0181922 secs] 1805634K->1283302K(2031616K), 0.0187539 secs] [Times: user=0.22 sys=0.01, real=0.02 secs]
    Total time for which application threads were stopped: 0.0285263 seconds
    Application time: 33.9224151 seconds
    Total time for which application threads were stopped: 0.0086703 seconds
    Application time: 8.5028806 seconds
    109049.842: [GC 109049.842: [ParNew: 533210K->8861K(589824K), 0.0181380 secs] 1807590K->1283332K(2031616K), 0.0187288 secs] [Times: user=0.22 sys=0.01, real=0.02 secs]
    Total time for which application threads were stopped: 0.0283473 seconds
    Application time: 42.6375077 seconds
    109092.508: [GC 109092.508: [ParNew: 533149K->8811K(589824K), 0.0161865 secs] 1807620K->1283418K(2031616K), 0.0167544 secs] [Times: user=0.19 sys=0.00, real=0.02 secs]
    Total time for which application threads were stopped: 0.0264697 seconds
    109122.582: [GC 109122.583: [*ParNew (promotion failed)*: 533099K->532822K(589824K), 1.2159460 secs]109123.799: [CMS: 1274986K->928935K(1441792K), 30.2900798 secs] 1807706K->928935K(2031616K), [CMS Perm : 127780K->126922K(262144K)], 31.5070045 secs] [Times: user=31.72 sys=0.04, real=31.51 secs]
    Total time for which application threads were stopped: 31.5171276 seconds
    Even though we cannot avoid fragmentation entirely, what would be the best way to reduce these stop-the-world pauses?
    Edited by: user12844507 on Mar 31, 2011 6:19 AM
    Edited by: user12844507 on Mar 31, 2011 6:46 AM

    The problem appears to be that CMS works best if it can start before it is forced to start. The -XX:CMSInitiatingOccupancyFraction= setting determines at what point it should start, before the heap is full. However, it appears that this value is too high, i.e. you are creating objects too fast and CMS runs out of space before it finishes.
    In particular you have "533099K->532822K(589824K)", which indicates to me that you are filling the eden space with medium-term-lived objects very quickly (more than 1/2 GB of them).
    I would try to increase the young generation space until it appears to be too large. I would try "-XX:NewSize=2g -mx3g" to give it a much larger young generation space. This will stop some medium-lived objects from being promoted and flooding the tenured space (which then has to be cleaned up, resulting in fragmentation).
    Perhaps you have enough memory to try larger sizes. I use "-XX:NewSize=7g -mx8g" and I have no objects being promoted after startup.
    BTW -mx == -Xmx
    You might find this interesting http://blogs.sun.com/jonthecollector/entry/when_the_sum_of_the

  • Long runtime with M_VMVLI using external ids

    We are currently experiencing ORA-1555 errors when shipment_create jobs are run; they sequentially read the M_VMVLI table, which has over 3.3 million rows. Due to this long read time, and because our database is in high-update mode 24x7, the reads fail with the 1555. Does anyone have experience with this? We are on 46C, DI 46C1, patch level 38.

    With real problems like this it's better to use OSS for support questions.
