Online and batch job running concurrently in SAP FI-CA

Hi experts,
Could anyone let me know of any issues that might crop up when online and batch jobs are executed concurrently in SAP FI-CA?
If yes, then what design considerations should be made to avoid such issues?
Many thanks,
Sanjay Misra

Hi Sanjay,
William is correct in this scenario. You can't run some processes online while the object is locked by a background run.
Some of these processes in FI-CA are FPMA, FPVA, FPINTM1, FPRW, etc.
The only way to overcome this is to run these programs as parallel jobs, assigning the maximum allowable number of jobs, and to break the runs into short intervals.
Regards,
Akhil

Similar Messages

  • Difference: Job run in foreground, job run in background and batch job

    Hi  Gurus,
Can you please help me understand the differences between a job run in foreground, a job run in background, and a batch job? Do foreground jobs run on the presentation server? Do background or batch jobs run on the application server?
    Thanks,
    Kumar

A foreground run may crash or fail if the job is too big, the server is busy, or it takes too long; it also occupies one SAP session.
A background job runs based on the status of the requested server, does not occupy your SAP session, and does not normally fail.
You can get the result via SM37.
My experience shows that big reports normally run faster in background than in foreground.
    Edited by: JiQing Zhao on Sep 3, 2010 4:13 AM

  • Application batch job running got error ORA-03113

Problem: while an application batch job is running, the application system always receives error ORA-03113 and the job stops.
Application system: Microsoft Dynamics AX
    ORACLE system: ORACLE 10.2.0.4.0
The listener configuration settings are:
    INBOUND_CONNECT_TIMEOUT_LISTENER = 0
    SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER = OFF
Could this be a problem with the listener settings? Does "INBOUND_CONNECT_TIMEOUT_LISTENER = 0" mean 0 seconds or no limit?

I can only find the error message on the client (application) server.
Below are some examples:
    The database reported (session 56 (mtihz)): ORA-03114: not connected to ORACLE
    . The SQL statement was: "SELECT A.PURCHID,A.PURCHNAME,A.ORDERACCOUNT,A.INVOICEACCOUNT,A.FREIGHTZONE,A.EMAIL,A.DELIVERYDATE,A.DELIVERYTYPE,A.ADDRESSREFRECID,A.ADDRESSREFTABLEID,A.INTERCOMPANYORIGINALSALESID,A.INTERCOMPANYORIGINALCUSTACCO12,A.CURRENCYCODE,A.PAYMENT,A.CASHDISC,A.PURCHPLACER,A.INTERCOMPANYDIRECTDELIVERY,A.VENDGROUP,A.LINEDISC,A.DISCPERCENT,A.DIMENSION,A.DIMENSION2
    Object Server 01: The database reported (session 58 (zhlhz)): ORA-03113: end-of-file on communication channel
    . The SQL statement was: "SELECT A.SPECTABLEID,A.LINENUM,A.CODE,A.BALANCE01,A.REFTABLEID,A.REFRECID,A.SPECRECID,A.PAYMENT,A.PAYMENTSTATUS,A.ERRORCODEPAYMENT,A.FULLSETTLEMENT,A.CREATEDDATE,A.CREATEDTIME,A.RECVERSION,A.RECID FROM SPECTRANS A WHERE ((SUBSTR(NLS_LOWER(DATAAREAID),1,7)=NLS_LOWER(:in1)) AND ((REFTABLEID=:in2) AND (REFRECID=:in3)))"
But when I use PL/SQL Developer to run the same scripts, there is no problem.
We always hit these errors when the application team runs long batches of about 20-30 minutes or more.
When they run 5- or 10-minute jobs, no error occurs.
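    For what it's worth: in Oracle 10.2 a value of 0 for the inbound connect timeout means no limit (the default since 10.2.0.1 is 60 seconds), and this parameter only limits the time a client may take to complete authentication; it does not terminate established sessions, so it is unlikely to be killing long-running batches. ORA-03113 on long batches more often points to a firewall or network device dropping idle connections. A sketch of the relevant Oracle Net settings (parameter names are standard; the values below are illustrative only and should be verified against your environment):

    ```
    # listener.ora -- 0 disables the inbound connect timeout (no limit)
    INBOUND_CONNECT_TIMEOUT_LISTENER = 0

    # sqlnet.ora (server side) -- companion parameter, default 60 seconds in 10.2
    SQLNET.INBOUND_CONNECT_TIMEOUT = 0

    # sqlnet.ora (server side) -- send a keep-alive probe every 10 minutes so
    # idle connections in long-running batch jobs are not dropped by firewalls
    SQLNET.EXPIRE_TIME = 10
    ```

    If SQLNET.EXPIRE_TIME makes the 20-30 minute jobs survive, the root cause was an idle-connection timeout somewhere between the application server and the database.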

  • Batch job run-

    Hi gurus,
How do I schedule and run a batch job using SM36 and SM37?
In the MM area, which transactions/programs are used most?
    Regards,
    Deepak.

    hi,
You can create any new program as per your requirement, but you have to provide the technical ABAP name of the program while making the settings in SM36...
Also, please go through this SDN thread: Re: Scheduling Batch Job
    Regards
    Priyanka.P

  • How to disconnect DB connections before batch job runs

    Hi All,
I have a batch job which generates some static reports at a specified location. My question: before my batch job runs, I want to disconnect all DB connections, then run the batch job, and then bring the system back up for availability. Could you please suggest how I disconnect the connections before the batch runs? I am using an Oracle 9i DB. Your help would be much appreciated.

What you want to do does not make sense. Oracle is intended as a multi-user, multi-processing server. It is designed that way, developed that way, and sold and used that way.
    Why would you want to kick off all other processes (sessions) just to run a single batch process? To make it faster?
    If so, and the idea is to make the batch process faster, then
    a) WHAT makes the batch process slow?
    b) did you determine that this is caused by other processes?
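    That said, if the exclusive window is still a hard requirement, one common approach is to put the instance into restricted session and kill the remaining user sessions before the batch starts. This is only a sketch (it assumes SYSDBA access and uses the documented ALTER SYSTEM syntax; test it in a sandbox first, and note that killed sessions lose uncommitted work):

    ```sql
    -- Block new ordinary logins; users with the RESTRICTED SESSION
    -- privilege (e.g. the batch user) can still connect
    ALTER SYSTEM ENABLE RESTRICTED SESSION;

    -- Kill remaining user sessions, skipping background processes and ourselves
    BEGIN
      FOR s IN (SELECT sid, serial# FROM v$session
                WHERE type = 'USER'
                  AND sid <> (SELECT sid FROM v$mystat WHERE ROWNUM = 1))
      LOOP
        EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION '''
                          || s.sid || ',' || s.serial# || ''' IMMEDIATE';
      END LOOP;
    END;
    /

    -- ... run the batch job here ...

    -- Re-open the database to everyone
    ALTER SYSTEM DISABLE RESTRICTED SESSION;
    ```

    But the previous reply stands: before doing this, determine whether the batch is actually slowed down by other sessions at all.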

  • How to get the list of batch jobs running in a specific application server

    Hi Team,
I am checking in SM37 whether there is any specific selection to get the list of batch jobs assigned to a specific target server, but I can't find any.
Is there a way to find the list of batch jobs assigned to run on one specific application server (the target server specified in SM36 during job creation)?

    Hello,
    This is what you can do in SM37.
    Execute the list of batch jobs, when the result appears on the screen edit the ALV grid via CTRL+F7.
    Now add the following columns "TargetServ" and "Executing server".
    You will now have two extra columns in your result list.
TargetServ contains the application server where the job should run, when you have explicitly filled it in.
Often this is empty, which means that at runtime SAP will itself determine on which application server the job will run (depending, of course, on where the BGD processes are defined).
Executing server is always filled in for all executed jobs; it is the actual application server where the job ran.
    You can also add these two fields in your initial selection screen of SM37 by using the "Extended job selection" button.
I hope this is useful.
    Wim

  • Batch Jobs Running during upgrade from 46C to ECC 6!

    Hi All
    We are currently due to run our upgrade in the next two weeks and I have one final concern. The only testing we have not done relates to batch jobs.
Our upgrade strategy is downtime-minimized.
Does anyone know of any best practices or issues that could affect our upgrade?
    Cheers
    Phil

    Hi Phil,
If I remember correctly, all background jobs in SAP R/3 are de-scheduled by the upgrade process. You have to reschedule them manually after the upgrade, so make sure you have good scheduling documentation (or consider the central job scheduling provided by SAP NetWeaver for the future).
    One tip is that you need to make sure that jobs started from other systems are postponed until after the upgrade too (e.g. data extractions to BI). The upgrade process cannot determine job schedules and dependencies in other systems/scheduling tools.
    Cheers,
    Mike.

  • Turn off Spool Printing for MRP Batch Job run

    Hi Experts,
Please tell me how to turn off printing for a batch job that is an MRP batch run. Can this be done while creating the job? We really don't need the printout for this job. Thanks.
    Points will be awarded.
    Regards,
    LM

    Yes, this can be done while scheduling a job. When creating a job step (transaction SM36) there is a button "Printer specifications". Select any printer there and option "Send to SAP spool". Spool request will be created but it won't be printed out.
    If you are using RMMRP000 program, you also might want to uncheck "Display material list" checkbox on the selection screen. If using other program, see if there is an option to disable the log.

  • How to get all AD User accounts, associated with any application/MSA/Batch Job running in a Local or Remote machine using Script (PowerShell)

    Dear Scripting Guys,
I am working on an AD migration project (migration from old legacy AD domains to a single AD domain) and we are in the transition phase. Our infrastructure contains lots of users, servers, and workstations. Authentication is done through AD only. Many UNIX- and Linux-based boxes are authenticated through an AD bridge.
We have a lot of applications in our environment. Many applications are configured to use Managed Service Accounts (MSAs). Many workstations and servers run batch jobs with AD user credentials. Many applications use AD user accounts to carry out their processes.
We need to find all those AD users which are configured as MSAs, which are configured for batch jobs, and which are used by different applications on our network (we need to check every machine on the network).
These identified AD users will be migrated to the new domain with top priority. I am stuck with this requirement and your support will be deeply appreciated.
I hope a well-designed PS script can achieve this.
    Thanks in advance...
    Thanks & Regards Bedanta S Mishra

    Hey Satyajit,
Thank you for your valuable reply. Enabling account logon auditing and collecting those events for analysis is a great notion, but it becomes tedious when thousands of users come into the picture. You can imagine how complex the analysis will be with more than 200,000 users logging in through AD. It is true that when a batch job, MSA, or application successfully uses a domain user's credentials, a successful logon event is recorded on the associated DC. But there are also many users that are not tied to MSAs, batch jobs, or any application, so we would have to sift through unwanted events.
Recently jrv provided me with a nice script to find all MSAs on a machine, or on a list of machines, in an AD environment (this covers the MSA part):
    $Report = 'Audit_Report.html'
    $Computers = Get-ADComputer -Filter 'Enabled -eq $True' | Select -Expand Name
    # Build the HTML head as a here-string (the closing '@ terminator below is required)
    $head = @'
    <title>Non-Standard Service Accounts</title>
    <style>
    BODY{background-color:#FFFFFF}
    TABLE{border-width:thin;border-style:solid;border-color:Black;border-collapse:collapse;}
    TH{border-width:1px;padding:2px;border-style:solid;border-color:black;background-color:ThreeDShadow}
    TD{border-width:1px;padding:2px;border-style:solid;border-color:black;background-color:Transparent}
    </style>
    '@
    $sections = @()
    foreach ($Computer in $Computers) {
        $sections += Get-WmiObject -ComputerName $Computer -Class Win32_Service -ErrorAction SilentlyContinue |
            Select-Object -Property StartName,Name,DisplayName |
            ConvertTo-Html -PreContent "<H2>Non-Standard Service Accounts on '$Computer'</H2>" -Fragment
    }
    $body = $sections | Out-String
    ConvertTo-Html -Body $body -Head $head | Out-File $Report
    Invoke-Item $Report
A script can be designed to get all scheduled background batch jobs on a machine, from which the author/owner of each scheduled job can be extracted, like the one below...
    Function Get-ScheduledTasks {
        Param(
            [Alias("Computer","ComputerName")]
            [Parameter(Position=1,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
            [string[]]$Name = $env:COMPUTERNAME,
            [switch]$RootOnly = $false
        )
        Begin {
            $tasks = @()
            $schedule = New-Object -ComObject "Schedule.Service"
        }
        Process {
            # Helper that recurses through Task Scheduler folders
            Function Get-Tasks {
                Param($path)
                $out = @()
                $schedule.GetFolder($path).GetTasks(0) | % {
                    $xml = [xml]$_.xml
                    $out += New-Object psobject -Property @{
                        "ComputerName"   = $Computer
                        "Name"           = $_.Name
                        "Path"           = $_.Path
                        "LastRunTime"    = $_.LastRunTime
                        "NextRunTime"    = $_.NextRunTime
                        "Actions"        = ($xml.Task.Actions.Exec | % { "$($_.Command) $($_.Arguments)" }) -join "`n"
                        "Triggers"       = $(If ($xml.task.triggers) { ForEach ($task in ($xml.task.triggers | gm | Where { $_.membertype -eq "Property" })) { $xml.task.triggers.$($task.name) } })
                        "Enabled"        = $xml.task.settings.enabled
                        "Author"         = $xml.task.principals.Principal.UserID
                        "Description"    = $xml.task.registrationInfo.Description
                        "LastTaskResult" = $_.LastTaskResult
                        "RunAs"          = $xml.task.principals.principal.userid
                    }
                }
                If (!$RootOnly) {
                    $schedule.GetFolder($path).GetFolders(0) | % {
                        $out += Get-Tasks($_.Path)
                    }
                }
                $out
            }
            ForEach ($Computer in $Name) {
                If (Test-Connection $Computer -Count 1 -Quiet) {
                    $schedule.Connect($Computer)
                    $tasks += Get-Tasks "\"
                } Else {
                    Write-Error "Cannot connect to $Computer. Please check its network connectivity."
                    Break
                }
            }
            $tasks
        }
        End {
            [System.Runtime.Interopservices.Marshal]::ReleaseComObject($schedule) | Out-Null
            Remove-Variable schedule
        }
    }
    Get-ScheduledTasks -RootOnly | Format-Table -Wrap -AutoSize -Property RunAs,ComputerName,Actions
So my question is whether a PS script can be designed to report all running applications that use domain accounts for authentication. From that result we could filter out the AD accounts being used by those applications. These three individual modules could then be combined into a single script that provides the desired output as a single report.
    Thanks & Regards Bedanta S Mishra

  • Transaction CJI3 AND  - batch job and output in excel format

We are trying to schedule a batch job and would like to have the output in an Excel file. Is there a way to enhance CJI3 and FMEDDW to produce the output as an Excel file? I've looked at the layout for creating the display variant, and I know you can get the output in Excel format in the foreground, but I'm looking to put an Excel file in the SAP directory.
    Has anyone done this?
    Thank you.
    Linda

Talking about enhancement options, I believe you can achieve this using an enhancement implementation. There must be one towards the end of the program, just before the ALV is displayed. You can create an implementation of the implicit enhancement spot and output the file to a location maintained in some variable (to keep the output location dynamic, since we don't have it on the selection screen).
    My reply is not to the point but I hope you find a way using enhancement spots.
    Should you need any help with enhancement implementation, following blog is good enough:
    Blog - [Enhancement Framework|http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=%28J2EE3417800%29ID1759109750DB10206591421434314571End?blog=/pub/wlg/3595]
    or you can ask back here.
    regards,
    Aabhas
    Edited by: Aabhas K Vishnoi on Sep 24, 2009 10:40 AM

  • Recorded Actions and Batch jobs

    Photoshop has an Actions panel feature that allows you to record and save a set of commands (eg: resize an image to 400 x 600, then turn it black & white).  Anytime you want to perform the same set of commands, just select and run the saved Action.  In addition, Photoshop has a Batch jobs feature that allows you to select a directory of files and run the saved action on all files in the directory.  In this way, you could resize a thousand images to 400 x 600 and turn them all black & white with one click.
    Does Captivate have any similar capability?  I have about 3,000 swf files and I wish to set the preloader percentage on all of them to the same value. I would hate to open, edit, save, and close the files manually one at a time.  If this feature does not exist, can you guys think of an efficient way to accomplish the task?
    I have Captivate 5 but can upgrade if necessary.
    Thanks for the help

Since you mention that you have both CPT and CPTX files, it might be worth noting that your goal of just opening them and changing the preloader percentage before saving and closing again might not work out as planned.
    Have you considered the following issues?
    Are you intending to open the CPT files in the same version of Captivate that created them, or were you thinking of upgrading them to a later version such as Cp5.5?  If so, you'll need to do more than just change the preloader percentage.  You'll also need to change the path to the preloader itself since each version of Captivate is installed in a different directory in Program Files. 
    Were all of the CPT projects set up as AS3 with all AS3 components, or were some of them AS2 and therefore would need to be reviewed and tested to make sure upgrading to a Cp5x version did not have other impacts?  Projects that were set up as AS2 would need to change preloaders, skins, and any animations over to AS3 before the upgrade would be successful.
    CPT to CPTX files will also experience font size issues since the font rendering technology changed in Cp5x.  So you'll probably need to go through those projects and fix a lot of issues in text captions.
    And another one to watch out for:  After upgrading projects from CPT to CPTX the Continue button on the Quiz Results will often cease to work unless you remove and replace the Quiz Results slide.
    This is not a complete list of potential upgrade issues.  What I'm trying to sound here is a warning that, unless you use exactly the same version of Captivate that created them, making any changes to these files could turn into a much bigger job than you hope.  There could be weeks of work involved trawling through each project file one by one checking for issues.

  • Any problems having Admin Optimization and Proactive caching run concurrently

    Hi,
    We've recently enabled proactive caching refreshing every 10 minutes and have seen data in locked versions changing after Full Admin Optimization runs. Given how the data reverts back to a prior submitted number, I suspect having proactive caching occur while the Full Admin Optimization runs may be the culprit.
    here's an example to depict what is happening.
    original revenue is $10M.
    user submits new revenue amount of $11M.
    version is locked.
    data in locked version is copied into a new open version.
full optimization runs at night and takes 60 minutes; all the while, proactive caching runs every 10 minutes.
    user reports the revenue in the previously locked version is $10M and the new version shows $11M.
    We've never experienced this prior to enabling proactive caching which leads me to believe the 2 processes running concurrently may be the source of the problem.
    Is proactive caching supposed to be disabled while Full Admin Optimization process is running?
    Thanks,
    Fo

    Hi Fo
    When a full optimization is run, the following operations take place:
    - data is moved from wb and fac2 tables to the fact table
    - the cube is processed
If users are loading data while full optimization occurs, then a certain discrepancy is to be expected. Note that even with proactive caching enabled, the OLAP cube will not be 100% accurate 100% of the time.
    Please have a look at this post which explains the details of proactive caching:
    http://www.bidn.com/blogs/MMilligan/bidn-blog/2468/near-real-time-olap-using-ssas-proactive-caching
    Also - depending on how they are built, the BPC reports may generate a combination of MDX and SQL queries which will retrieve data from the cube and data from the backend tables.
I would suggest preventing users from loading data and running reports while the optimization takes place.
    Stefan

  • EIMAdaptiveProcessingService is unresponsive and Data Services jobs slow down

    Hi all,
I have BO DS 4.0 SP3 and IPS 4.0 installed on Windows with SQL Server 2008 R2.
The CPU is an Intel(R) Xeon(R) E5-2660 @ 2.20 GHz (4 processors) with 16 GB RAM.
For several weeks, new DS jobs have been running in production; a job sometimes takes 1 hour and sometimes 5 hours.
When it runs for more than 2 hours, we find the BODS service does nothing for several hours. I also found errors saying "'WebIntelligenceProcessingServer' is being marked as down because it is unresponsive" in the event viewer.
The AdaptiveProcessingService log is below:
    GC: [4612]  03:39:34    Suspiciously long running GC (turn this off via -XX:-DumpGCStatisticForLongGCs or change the time threshold via -XX:LongGCTime=<time in seconds>):
    GC: [4612]  03:39:34    GC Nr.                                : 58
    GC started                            : Tue May 20 03:37:35 2014
    Nr. of full GCs                       : 5
    GC algorithm                          : -XX:+UseParallelGC
    Reason                                : Allocation Failure
    Type                                  : partial
    Forced SoftRef clearing               : no
    Duration                              : 119471.03 ms
    Cumulative duration                   : 123392.10 ms
    CPU time                              : 78.00 ms
    Cumulative CPU time                   : 7878.03 ms
    Page faults during GC                 : 15
    Cumulative page faults during GCs     : 94406
    Allocation goal                       : 48 B (48)
    Used in Java heap before GC           : 223.57 MB (234431584)
    Used in Java heap after GC            : 110.65 MB (116024464)
    Bytes freed in Java heap during GC    : 112.92 MB (118407120)
    Committed in Java heap before GC      : 336.31 MB (352649216)
    Committed in Java heap after GC       : 313.56 MB (328794112)
    Bytes decommitted in Java heap        : 22.75 MB (23855104)
    Nr. of unloaded classes               : 0
    Nr. of non-array classes before GC    : 8580
    Nr. of non-array classes after GC     : 8580
    Nr. of array classes before GC        : 574
    Nr. of array classes after GC         : 574
    Cumulative unloaded non-array classes : 32
    Cumulative unloaded array classes     : 0
    Used in eden before GC                : 113.13 MB (118620160), 100.00% of committed, 33.14% of young gen max
    Used in eden after GC                 : 48 B (48),  0.00% of committed,  0.00% of young gen max
    Bytes freed in eden                   : 113.12 MB (118620112), 33.14% of young gen max
    Committed in eden before GC           : 113.13 MB (118620160), 33.14% of young gen max
    Committed in eden after GC            : 113.13 MB (118620160), 33.14% of young gen max
    TLAB waste in eden space before GC    : 15.48 MB (16228992), 13.68% of used in eden
    Used in 'from' space before GC        : 2.31 MB (2424880),  9.74% of committed,  0.68% of young gen max
    Used in 'from' space after GC         : 2.41 MB (2523184), 98.72% of committed,  0.71% of young gen max
    Bytes freed in 'from' space           : -96.00 kB (-98304), -0.03% of young gen max
    Committed in 'from' space before GC   : 23.75 MB (24903680),  6.96% of young gen max
    Committed in 'from' space after GC    : 2.44 MB (2555904),  0.71% of young gen max
    Used in 'to' space before GC          : 0 B (0),  0.00% of committed,  0.00% of young gen max
    Used in 'to' space after GC           : 0 B (0),  0.00% of committed,  0.00% of young gen max
    Bytes added in 'to' space             : 0 B (0),  0.00% of young gen max
    Committed in 'to' space before GC     : 25.31 MB (26542080),  7.42% of young gen max
    Committed in 'to' space after GC      : 23.88 MB (25034752),  7.00% of young gen max
    Max in young gen                      : 341.31 MB (357892096)
    Used in young gen before GC           : 115.44 MB (121045040), 33.82% of max
    Used in young gen after GC            : 2.41 MB (2523232),  0.71% of max
    Bytes freed in young gen              : 113.03 MB (118521808), 33.12% of max
    Committed in young gen before GC      : 136.88 MB (143523840), 40.10% of max
    Committed in young gen after GC       : 115.56 MB (121176064), 33.86% of max
    Tenuring threshold                    : 15
    Max in old gen                        : 682.69 MB (715849728)
    Used in old gen before GC             : 108.13 MB (113386544), 62.10% of committed, 15.84% of max
    Used in old gen after GC              : 108.24 MB (113501232), 62.16% of committed, 15.86% of max
    Bytes freed in old gen                : -112.00 kB (-114688), -0.02% of max
    Committed in old gen before GC        : 174.13 MB (182583296), 25.51% of max
    Committed in old gen after GC         : 174.13 MB (182583296), 25.51% of max
    Max in perm gen                       : 256.00 MB (268435456)
    Used in perm gen before GC            : 60.30 MB (63226432), 65.14% of committed, 23.55% of max
    Used in perm gen after GC             : 60.30 MB (63226432), 65.14% of committed, 23.55% of max
    Bytes freed in perm gen               : 0 B (0),  0.00% of max
    Committed in perm gen before GC       : 92.56 MB (97058816), 36.16% of max
    Committed in perm gen after GC        : 92.56 MB (97058816), 36.16% of max
    Bytes allocated in non-perm until now : 6.77 GB (7273341072)
    Bytes freed in non-perm until now     : 6.67 GB (7160953417)
    Bytes allocated in perm until now     : 60.45 MB (63388408)
    Bytes freed in perm until now         : 176.66 kB (180899)
    Nr. of soft refs cleared until now    : 1754
    Nr. of soft refs enqueued until now   : 61
    Nr. of weak refs cleared until now    : 48369
    Nr. of weak refs enqueued until now   : 47736
    Nr. of final refs enqueued until now  : 52559
    Nr. of phantom refs enqueued until now: 0
    Number of parallel scavenge threads   : 4
    Nr. of GC events                      : 3
    GC Event Nr.                          : 1
    GC event name                         : Young generation reference handling
    Event start time                      : Tue May 20 03:39:34 2014
    Duration                              : 12.36 ms
    CPU time                              : 15.60 ms
    Page faults during event              : 0
    Max last used time (ms) of SoftRefs   : 70000
    Nr. of SoftRefs found                 : 35
    Nr. of SoftRefs kept alive by policy  : 20
    Nr. of dead SoftRefs cleared          : 15
    Nr. of dead SoftRefs not cleared      : 0
    Nr. of live SoftRefs                  : 20
    Nr. of SoftRefs newly enqueued        : 1
    Nr. of new alive SoftRefs found       : 0
            during SoftRef handling
    Nr. of new alive WeakRefs found       : 0
            during SoftRef handling
    Nr. of new alive FinalRefs found      : 0
            during SoftRef handling
    Nr. of new alive PhantomRefs found    : 0
            during SoftRef handling
    Nr. of WeakRefs found                 : 1193
    Nr. of dead WeakRefs                  : 1192
    Nr. of live WeakRefs                  : 1
    Nr. of WeakRefs newly enqueued        : 1185
    Nr. of FinalRefs found                : 1121
    Nr. of live FinalRefs                 : 21
    Nr. of FinalRefs newly enqueued       : 1100
    Nr. of new alive SoftRefs found       : 0
            during FinalRef handling
    Nr. of new alive WeakRefs found       : 0
            during FinalRef handling
    Nr. of new alive FinalRefs found      : 0
            during FinalRef handling
    Nr. of new alive PhantomRefs found    : 0
            during FinalRef handling
    Nr. of PhantomRefs found              : 1
    Nr. of live PhantomRefs               : 1
    Nr. of PhantomRefs newly enqueued     : 0
    Nr. of new alive SoftRefs found       : 0
            during PhantomRef handling
    Nr. of new alive WeakRefs found       : 0
            during PhantomRef handling
    Nr. of new alive FinalRefs found      : 0
            during PhantomRef handling
    Nr. of new alive PhantomRefs found    : 0
            during PhantomRef handling
    Soft reference handling duration      : 0.01 ms
    Soft reference handling CPU time      : 0.00 ms
    Weak reference handling duration      : 0.47 ms
    Weak reference handling CPU time      : 0.00 ms
    Final reference handling duration     : 11.67 ms
    Final reference handling CPU time     : 15.60 ms
    Phantom reference handling duration   : 0.01 ms
    Phantom reference handling CPU time   : 0.00 ms
    JNI weak reference handling duration  : 0.16 ms
    JNI weak reference handling CPU time  : 0.00 ms
    GC Event Nr.                          : 2
    GC event name                         : Parallel scavenge (young generation GC)
    Event start time                      : Tue May 20 03:37:35 2014
    Duration                              : 119470.94 ms
    CPU time                              : 78.00 ms
    Page faults during event              : 15
    Used in eden before event             : 113.13 MB (118620160)
    Used in eden after event              : 0 B (0)
    Freed in eden                         : 113.13 MB (118620160)
    Used in 'from' space before event     : 2.31 MB (2424880)
    Used in 'from' space after event      : 2.41 MB (2523184)
    Freed in 'from' space                 : -96.00 kB (-98304)
    Used in 'to' space before event       : 0 B (0)
    Used in 'to' space after event        : 0 B (0)
    Added in 'to' space                   : 0 B (0)
    Used in young generation before event : 115.44 MB (121045040)
    Used in young generation after event  : 2.41 MB (2523184)
    Freed in young generation             : 113.03 MB (118521856)
    Used in old generation before event   : 108.13 MB (113386544)
    Used in old generation after event    : 108.24 MB (113501232)
    Freed in old generation               : -112.00 kB (-114688)
    Used in perm generation before event  : 60.30 MB (63226432)
    Used in perm generation after event   : 60.30 MB (63226432)
    Freed in perm generation              : 0 B (0)
    System load average                   : 0.03, 0.03, 0.64
    Promotion failed                      : no
    Number of successful steal operations : 123
    Number of failed steal operations     : 4
    Number of failed steals and yields    : 0
    Size of 'to' space PLAB (per thread)  : 32.00 kB (32768 bytes)
    Objects of age  0                     : 22228, 1.02 MB (1071744 bytes)
    Objects of age  1                     : 4917, 188.57 kB (193096 bytes)
    Objects of age  2                     : 3006, 100.47 kB (102880 bytes)
    Objects of age  3                     : 1775, 147.48 kB (151016 bytes)
    Objects of age  4                     : 2384, 71.31 kB (73024 bytes)
    Objects of age  5                     : 2362, 65.82 kB (67400 bytes)
    Objects of age  6                     : 3254, 107.34 kB (109912 bytes)
    Objects of age  7                     : 1972, 56.80 kB (58168 bytes)
    Objects of age  8                     : 2720, 83.50 kB (85504 bytes)
    Objects of age  9                     : 2210, 62.80 kB (64312 bytes)
    Objects of age 10                     : 2646, 81.98 kB (83944 bytes)
    Objects of age 11                     : 1863, 59.31 kB (60736 bytes)
    Objects of age 12                     : 2751, 110.47 kB (113120 bytes)
    Objects of age 13                     : 2419, 67.50 kB (69120 bytes)
    Objects of age 14                     : 2872, 92.19 kB (94400 bytes)
    Objects of age 15                     : 2766, 95.27 kB (97552 bytes)
    Objects of age 16                     : 0, 0 B (0 bytes)
    Objects of age 17                     : 0, 0 B (0 bytes)
    Objects of age 18                     : 0, 0 B (0 bytes)
    Objects of age 19                     : 0, 0 B (0 bytes)
    Objects of age 20                     : 0, 0 B (0 bytes)
    Objects of age 21                     : 0, 0 B (0 bytes)
    Objects of age 22                     : 0, 0 B (0 bytes)
    Objects of age 23                     : 0, 0 B (0 bytes)
    Objects of age 24                     : 0, 0 B (0 bytes)
    Objects of age 25                     : 0, 0 B (0 bytes)
    Objects of age 26                     : 0, 0 B (0 bytes)
    Objects of age 27                     : 0, 0 B (0 bytes)
    Objects of age 28                     : 0, 0 B (0 bytes)
    Objects of age 29                     : 0, 0 B (0 bytes)
    Objects of age 30                     : 0, 0 B (0 bytes)
    Objects of age 31                     : 0, 0 B (0 bytes)
    GC Event Nr.                          : 3
    GC event name                         : Skipped parallel scavenge (young generation GC)
    Event start time                      : Tue May 20 03:39:34 2014
    Reason to skip GC                     : Too much time spend in GC
    Thanks for your help
    Xin

    Hello Xin SUN,
    Try checking whether any other database application is running at the back end.
    When that particular job is running, watch the CPU usage; if it is high, try to find out which process is taking the most memory.
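    When the JVM skips a scavenge because too much time is already being spent in GC, a first step is to quantify what is actually surviving in the young generation. Below is a minimal sketch that totals the per-age survivor lines from a report like the one quoted above; the line format is assumed from this snippet only, not from any official SAP JVM log specification:

    ```python
    import re

    def survivor_totals(log_lines):
        """Sum object counts and byte sizes across the 'Objects of age N'
        lines of a scavenge report (format assumed from the log above)."""
        pat = re.compile(r"Objects of age\s+\d+\s+:\s+(\d+),.*\((\d+) bytes\)")
        count = size = 0
        for line in log_lines:
            m = pat.search(line)
            if m:
                count += int(m.group(1))   # object count for this age
                size += int(m.group(2))    # exact byte size in parentheses
        return count, size

    sample = [
        "Objects of age  0 : 22228, 1.02 MB (1071744 bytes)",
        "Objects of age  1 : 4917, 188.57 kB (193096 bytes)",
    ]
    print(survivor_totals(sample))  # -> (27145, 1264840)
    ```

    Running this over the full report (or over successive GC events) makes it easier to see whether survivor volume is growing between scavenges, which is one common reason the collector ends up spending too much time in GC.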

  • Client Copy and Batch Jobs

    I'm planning to do a refresh of our QAS client using the PRD client (a remote copy at this point).
    I would like to know whether the jobs that are set up in the QAS client will be removed or replaced by the client copy from PRD, and whether the jobs in the PRD client will be copied to the QAS client.
    Thanks
    Laurie McGinley

    And will those jobs that are in the PRD client, be copied to the QAS client.
    No, they will not be copied. I did a remote copy last week, creating a new client, and the jobs in PRD were not copied to that client.
    I would like to know if the jobs that are setup in the QAS client will be removed/replaced by the client copy from PRD
    So I guess it's safe to assume that the current job definitions will not be overwritten. The only thing I can think of that might be affected are your job variants, which might not work after the configuration copy.
    Hope this helps.
    Thanks,
    Naveed

  • Batch Jobs fail because User ID is either Locked or deleted from SAP System

    Business users release batch jobs under their user IDs.
    When these user IDs are deleted or locked by the system administrator, the batch jobs fail because the user is either locked or has been deleted from the system.
    Is there any way to stop these batch jobs from being cancelled, or any standard SAP report to check whether any batch jobs are running under a specific user ID?

    Ajay,
    What you can do is this: if you still want the jobs to run under the particular user's name (I know people crib about anything and everything) and not worry about them failing when the user is locked out, you can achieve this by creating a system user (e.g. bkgrjobs) and running the steps in the jobs under that system user's name. You can do this while defining the step in SM36.
    This way, the jobs will keep running under the business user's name and will not fail if he/she is locked out. But make sure that the system user has the necessary authorizations, or the job will fail. SAP_ALL should be fine, but again, it really depends on your company.
    Kunal
