Background process initialization takes much longer in CS6

Hello,
The title says it all.
When launching a render, the message "background processes initializing / this may take a while" gets stuck for more than 70 seconds in CS6, whatever the project, whereas it never stayed up for more than 10 seconds in CS5.
We're back to CS4-era delays here. Is this a known issue?
On a side note, the multiprocessing is a bit weird:
It used to be pretty straightforward: when 10 cores were used for the render, the frames were rendered in batches of 10. With CS6 it's different: a first batch of 3 frames gets rendered, then another one of 14 (!!), and then I lost track of the process.
Is multiprocessing really so different in CS6?
Thanks to whoever can answer...
JM
Mac Pro 12-core / 32 GB RAM
Mac OS X 10.6.8
AE MP settings:
2.5 GB per core / RAM reserved for other applications: 4 GB
CPUs reserved for other applications: 4
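For what it's worth, those settings work out roughly as follows (a back-of-the-envelope check, assuming AE spawns one background render process per unreserved core): 12 cores - 4 reserved = 8 render processes; 8 x 2.5 GB = 20 GB for the render processes, plus the 4 GB reserved for other applications, leaving about 8 GB of the 32 GB for the main AE process and its caches.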

It seems that the latest update (11.0.2) did not fix this. Are Adobe actually going to acknowledge this problem exists, or just bury their heads in the sand over it? I just tried rendering out a TIFF sequence and a WAV file into an MP4. After 3 minutes of it saying 'background processes initializing, this may take some time' I cancelled it, turned off multiprocessing, and it then rendered straight away. This was on an i7 on Windows, using 6 cores to multiprocess with. Most of the time, the delay in RAM previewing with multiprocessing turned on completely negates the speed advantage over leaving it off, especially with small, easy comps. I also see the same problem on the Mac Pros at work. Could you please spend less time inflating the software with plugins and components we don't need, i.e. the Camera Tracker (I already have Boujou, PFMatchIt, Matchmover and The Foundry Camera Tracker) and that god-awful slow raytracing engine, and actually speed the whole programme up, which is what we all want?
And one last thing: when are we going to see proper playback from disk, not RAM, like a Smoke, Flame or Avid system does? I have 5 SSDs in RAID 0, more than quick enough to play back uncompressed 2K+. Why, if I'm working in any comp longer than 10 seconds, even with 32 GB of RAM, am I constantly having to re-render the front part of the comp because AE has used the RAM elsewhere? Just store it and play it back directly from disk. This is the only thing stopping AE from being a proper online finishing system.
Rant over. Breathe.....

Similar Messages

  • Background process taking a very long time to complete.

    Dear All,
    Platform: HP UX
    Version: 12.0.6
    While shutting down the instance, the background process below takes a very long time to complete.
    What is the process mentioned below? Can I kill it? I am getting 3 such processes in total when running ps -ef | grep applpre (applpre is the apps instance's owner):
    applpre/apps/tech_st/10.1.3/appsutil/jdk/bin/IA64N/java -DCLIENT_PROCESSID=5457 -server -Xmx384m -XX:+UseSerialGC -Dor
    Thanks in Advance,
    Sandeep.

    Sandeep,
    Please see (Note: 567551.1 - Configuring various JVM tuning parameters for Oracle E-Business suite 11i and R12).
    You can safely kill those processes from the OS.
    Thanks,
    Hussein
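    For what it's worth, a minimal sketch of doing that from the OS (assuming, as above, that applpre owns the instance and the stragglers are the appsutil JDK processes):

        # list the leftover JVM processes owned by the apps user
        # (the [a] in the pattern keeps grep from matching itself)
        ps -ef | grep '[a]pplpre.*appsutil/jdk'
        # send them a TERM by PID; escalate to kill -9 only if they ignore it
        ps -ef | grep '[a]pplpre.*appsutil/jdk' | awk '{print $2}' | xargs kill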

  • AE CC - saving projects takes much longer than in CS6

    I've noticed that After Effects CC takes much longer to save my current project than it used to (about 2-3 minutes). I'm used to saving quite often, so this is slowing down my workflow.
    The project was imported from AE CS6. I've tried starting a new project and importing the previous one manually (instead of a conversion). Same lag.
    All unused footage has been taken out, and the footage has been consolidated.
    The AE CS6 project file and the AE CC file are both about 15 MB, so the CC one is not any bigger.
    There are about 50 files in Arri Raw 3K; the rest is DPX and ProRes 1080p.
    I'm working from a MacBook Pro Retina with 16 GB RAM, and all media and caches are on an external LaCie Big Disk Thunderbolt RAID SSD.
    My impression is that AE CC is indexing all the files before saving, so it's going through all the RAW sequence files. This could slow down the saving process quite a lot.
    Is anyone experiencing the same lag while saving projects?

    I'm having the same issue on my PC: saving is taking much longer, and it's also happening every time I RAM preview. Very strange, and in fact now it's having trouble accessing the save location, a local hard drive. Any answers for this issue?

  • I moved my music from the C drive to the D drive. All of my music is in iTunes, but my iPod won't sync with iTunes. The syncing process is taking much longer than usual too. I left my iPod overnight to sync and it didn't finish. It fails to sync every time.

    I moved my music from the C drive to the D drive. All of my music is in iTunes, but my iPod won't sync with iTunes. The syncing process is taking much longer than usual too. I left my iPod overnight to sync and it didn't finish. It fails to sync every time. I tried to restore my iPod and it didn't help.

    Ignore.  I figured it out:)

  • Photoshop CS6 still running in Background Processes, won't close

    Lately, I've been having a problem with Photoshop CS6 where, even after I close the program, Photoshop.exe is still running under Background Processes in the Task Manager. Hitting "End task" on the process doesn't do anything. It takes up quite a big portion of my processing power, and it usually closes on its own if I just wait a good 5-10 minutes. Even weirder, while this "ghost Photoshop" is running in the background, I can launch a new instance of Photoshop that will run at the same time without any trouble (aside from the expected slowness when a chunk of my processing power is being eaten up by whatever the ghost Photoshop is doing). I'm running Photoshop CS6 on Windows 8, with just one extension installed: Coolorus 2.0.

    They were using the little red "x"; File > Quit allowed her to reopen the program, etc. I find it weird that it won't close with the "x", but if I right-click the icon on the taskbar it won't allow me to reopen it or do anything with it, so I was under the assumption it was stuck open.

  • RAC instance won't start: ORA-00443: background process "VKRM" did not start

    I've logged an SR with Oracle, but while waiting for a response from them... I'm stumped, and I can't find much out here about this error.
    On the Oracle knowledge base, any search for VKRM gives about the same 3 articles, all relating to RDA (Remote Diagnostic Assistant).
    Not sure what went on here.
    I have a 5-node RAC cluster. All other instances seem to be running just fine.
    On one instance, some applications were getting an error like:
    ORA-01033: ORACLE initialization or shutdown in progress
    I looked in GRID... and it indicated that only two of the 5 nodes had this instance running, which was strange in that srvctl showed all 5 up and running:
    [oracle@server2 bin]$ ./srvctl status database -d INSTANCE
    Instance INSTANCE1 is running on node server1
    Instance INSTANCE2 is running on node server2
    Instance INSTANCE3 is running on node server3
    Instance INSTANCE4 is running on node server4
    Instance INSTANCE5 is running on node server5
    Anyway, I thought I'd poke around. I started by trying to get srvctl to stop instance #2; in GRID it seemed that instances 2, 4 and 5 weren't working.
    srvctl stop instance -d INSTANCE -i INSTANCE2
    This just hung...
    I thought I'd cycle all the nodes... so I did a Ctrl-C out of that one, and did:
    [oracle@server2 bin]$ ./srvctl stop database -d INSTANCE -o abort
    PRCD-1124 : Failed to stop database INSTANCE and its services
    PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.instance.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.instance.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
    CRS-2675: Stop of 'ora.instance.db' on 'server5' failed
    CRS-2675: Stop of 'ora.instance.db' on 'server4' failed
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    The ORA-01034 message repeats a number of times... and the lines for server4 and server5 repeated again too.
    I also got this:
    CRS-2680: Clean of 'ora.instance.db' on 'server2' failed
    CRS-2675: Stop of 'ora.instance.db' on 'server5' failed
    CRS-2675: Stop of 'ora.instance.db' on 'server4' failed
    ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    Linux-x86_64 Error: 2: No such file or directory
    Process ID: 0
    Session ID: 0 Serial number: 0
    I get similar messages when I try to restart.
    The cluster seems to be up, and the other instances seem to be OK.
    Looking at the alert log, I found some strangeness in the traces:
    Starting background process VKRM
    Errors in file /u01/app/oracle/diag/rdbms/instance/INSTANCE2/trace/INSTANCE2_dbrm_26982.trc:
    ORA-00443: background process "VKRM" did not start
    Errors in file /u01/app/oracle/diag/rdbms/instance/INSTANCE2/trace/INSTANCE2_ora_27467.trc:
    ORA-00450: background process '' did not start
    Errors in file /u01/app/oracle/diag/rdbms/instance/INSTANCE2/trace/INSTANCE2_ora_27467.trc:
    ORA-00450: background process '' did not start
    Error 450 happened during db open, shutting down database
    USER (ospid: 27467): terminating the instance due to error 450
    LGWR waiting for instance termination
    Instance terminated by USER, pid = 27467
    ORA-1092 signalled during: ALTER DATABASE OPEN...
    opiodr aborting process unknown ospid (27467) as a result of ORA-1092
    Looking at the trace listed above:
    2011-05-09 12:17:18.305726 :84271119:db_trace:ksb.c@2157:ksbs1p_real(): [10254:6:464] KSBS1P: process DBRM trying to start background VKRM
    2011-05-09 12:17:18.305729 :8427111A:db_trace:ksb.c@2220:ksbs1p_real(): [10254:6:464] KSBS1P: process DBRM obtained PR enqueue to start background VKRM
    2011-05-09 12:17:18.306021 :8427111D:db_trace:ksb.c@2354:ksbs1p_real(): [10254:6:464] KSBS1P: creation error posted OER(1089)
    2011-05-09 12:17:18.306029 :8427111F:db_trace:ksb.c@2424:ksbs1p_real(): [10254:6:464] KSBS1P: out of loop: process did not start
    Trace Bucket Dump End: default bucket for process 6 (osid: 26982, DBRM)
    ORA-00443: background process "VKRM" did not start
    kskdbrmpa: reply error 450
    Any ideas? Again, I can't seem to find much of ANY information out there on the VKRM background process not starting...
    Thanks in advance,
    cayenne
    Edited by: cayenne on May 9, 2011 11:34 AM

    Anyone? Anyone? Bueller?
    OK... I have been on the phone with Oracle support, and have them stumped so far.
    I've checked: the other instances are running (except one other that failed to cleanly shut down with srvctl, with the same error messages).
    I've checked: ASM is running on all 5 nodes. I've used crsctl to check CRS on all nodes; clustering seems OK.
    Memory, while somewhat high, should have enough room; the system has never complained before, and it has been up on this config for over a year.
    I was able, on node one, to fire up the first node's instance using the pfile there... it came up. I started it restricted, promptly ran a Data Pump export, and then shut it back down.
    While Oracle support goes through the logs and trace files I sent, I've got another 11gR2 environment (a 3-node RAC) I've been using as a test environment, and I am recreating the instance there temporarily to let my developers test and get past the upcoming deadline.
    But I'm still puzzled as to the solution on the main cluster. Everything seems to be normal except these two instances...
    Any ideas on where and what to look for?
    cayenne
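    For what it's worth: since VKRM is the virtual-time scheduler that DBRM spawns for the Database Resource Manager, one hedged thing to check while waiting on support is whether a Resource Manager plan is set for that instance (a minimal sketch; the instance name and the commented-out workaround are assumptions to test, not a confirmed fix):

        # from the failing node, as the oracle OS user: is a plan set?
        echo "show parameter resource_manager_plan" | sqlplus -s "/ as sysdba"
        # hypothetical test: clear the plan for this one instance via the spfile,
        # then retry the open (uncomment only if support agrees):
        # echo "alter system set resource_manager_plan='' scope=spfile sid='INSTANCE2';" | sqlplus -s "/ as sysdba"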

  • Background process BCTL_4CEDE3M8RTW5WPMQ01YVZ4QYE terminated due to missing confirmation

    Hi,
    I am getting the following error when activating an ODS (BI 7.0):
    Background process BCTL_4CEDE3M8RTW5WPMQ01YVZ4QYE terminated due to missing confirmation.
    I tried reloading and activating it.
    I tried the RSRV adjustments.
    I tried changing the package size.
    I tried reducing the parallel executions.
    Still not able to activate. Can someone help me?

    Here are the short dump details
    Short text of error message:
    Invalid PBT server group name: PARALLEL_GENERATORS
    Long text of error message:
    Diagnosis
         The system tried to initialize the environment for processing
         parallel RFCs using the function module SPBT_INITIALIZE. The server
         group name specified was found to be invalid. The server group name
         describes a group of servers on which the parallel RFCs should be
         processed.
    System Response
    Procedure
         You can use Transaction RZ12 to determine which PBT server groups
         are configured in your system. You should only use one of the
         server group names listed there to process parallel RFCs.
    But I am activating serially, with a value of 1 in that option.

  • Compressor 2 and FCP 4.5 "background process" error

    So I switched to my laptop, which is running 10.4.11 and Compressor 2. I get "Cannot submit batch: Unable to connect to background process." I don't know if I should try the same solutions that are suggested for the newer versions of the software? Any other suggestions?

    VAR,
    1. When you can spare another 5 minutes to solve your problem, instead of coming here and complaining that no one helped you (though Tom did offer a very direct response), you might take the time to read your DVDSP manual.
    In it, you will discover that DVDSP is totally capable of converting QuickTime movies to m2v and aif files. You have full control over all aspects of bitrate, etc. Indeed, it is the same engine as in Compressor. If you need an AC3 file, use A-Pack on the aif that DVDSP will create.
    2. When you say Compressor does not work, do you mean A. it does not launch from within FCP, or B. you cannot launch it independently?
    If it will launch independently, export a reference QT file, open Compressor, and have at it.
    QT7 is a problematic issue with FCP 4.5. The long-term solution is to upgrade to FCP 5 to go with 10.4 and QT7. They all fit together much more neatly.
    fwiw -
    1. The people who hang out here and help fellow users are not employees of Apple. If you want to complain about the software, a good place to do it is on the FCP feedback page. This forum is to provide technical support.
    2. The people who hang out here and help fellow users are not compensated. Their efforts are volunteer, and no one is under any obligation to respond to you in any way. If you don't like the advice, I'm sure Apple technical support would be glad to have a $199 conversation with you.
    3. Tom is a mainstay of the board and one of the most knowledgeable people you will find regarding FCP. As such, he is highly respected by the serious users of the forum. You will not find a great deal of sympathy or support here if you continue on in this vein...
    You have a technical issue, so post the details. For example, what have you tried to solve the problem? The intellectual resources here are quite amazing. I've not seen many posts go unresolved for lack of trying.
    good luck.
    x

  • Why may any user leave background processes running at will?

    Hi all,
    yesterday I encountered a rather strange problem with Linux in general, at least I think so.
    In my .xinitrc, I'm starting offlineimap - a console-based mail synchronization tool - in the background. Being naive, I expected it to be killed along with the GUI applications started in that file. Yet that assumption proved wrong, and I started asking for help on how to kill that process on #archlinux.
    The guys there (again, thanks for your help and patience!) all came up with plenty of ideas on how to avoid starting more than one instance of the program, but that wasn't really what I was looking for. The only usable option came from anrxc, who suggested killing the program from awesome's logout hooks.
    Not fully satisfied with the solutions, I started thinking and came up with the following question:
    Why is every user allowed to leave background processes running on the machine just as he pleases, even after he logs out?
    I even tried this over ssh, where the launched commands have some sort of "parent" process, but even in this circumstance it was possible to leave background processes behind after logging out.
    I mean, on my desktop system this is not a big issue... I shut it down at least once a day, and there are no users on it besides my girlfriend and me. But this seems like a fundamental problem to me. Why is this allowed at all? Does it make sense to do it that way? What are the consequences?
    Let's discuss!
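    For what it's worth, the mechanism is easy to observe (a minimal sketch; offlineimap stands in for any program):

        # from a terminal: start a background job, then log out
        offlineimap &
        exit
        # from a fresh login: the process is still alive, reparented to init
        # (PPID 1), because nothing ever sent it a SIGHUP
        ps -o pid,ppid,sess,comm -C offlineimap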

    JohannesSM64 wrote:Really, you need to find a better way to manage offlineimap than starting it in xinitrc. Automatically killing any background processes on any logout will not make linux better.
    Hmm... to me, .xinitrc is the place to start apps which should live just as long as the graphical login lasts. On #archlinux, several other places were discussed, but none of them were "the thing":
    .bashrc
    Doesn't work, because a) the process would only get started when I open a shell, not when I log in, and b) finding a place to stop the process would be even harder.
    .bash_profile
    Only gets executed for a login shell, which I wouldn't count as a graphical login at all.
    wm startup script (in this case awesome's rc.lua)
    Possible, but not much better. It would fork the process all the same, merely moving the problem. If X got killed, not even awesome's logout hooks would apply.
    Also, this approach isn't wm-agnostic, so trying out or switching to another wm would bring the problem back all over again.
    So what do you suggest? Do you have a good idea?
    pseudonomous wrote:
    As to the question of "why" things act this way:
    I believe this is Linux displaying its heritage as a multi-user operating system that people used terminals to log into to run programs on. A big place where Unix used to be (and is, to some degree, still) used was in universities, where a professor or graduate student might have logged onto the system to run some program to process some large set of data. You wouldn't want to sit around and wait for this program to finish; you'd want to run it in the background, leave, and come back to look at the results a week later, when the program finished running. One of my friends doing applied math research still does this sort of thing. I'd imagine it's relatively common.
    Process management was largely handled by systems administrators, and commonly you were being billed for CPU time, so it was in your interest not to leave programs you didn't want to run running when you logged out.
    Hmmm... that seems like a rational explanation. But if that is the reason for Linux's behavior: why isn't there some kind of mode setting? Like one which allows any user to keep processes alive, and another one that doesn't?
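    For what it's worth, something close to such a mode does exist at the shell level, and the original .xinitrc problem has a wm-agnostic fix (a minimal sketch, assuming bash and that the wm is launched from .xinitrc):

        # in ~/.bash_profile: make the login shell SIGHUP its remaining jobs on exit
        shopt -s huponexit

        # or, directly in ~/.xinitrc: tie offlineimap's lifetime to the X session
        offlineimap &
        OFFLINEIMAP_PID=$!
        awesome                                # blocks until the wm exits
        kill "$OFFLINEIMAP_PID" 2>/dev/null    # reached on any normal logout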

  • Essbase Background processes

    Hi,
    This is regarding the background processes of Essbase.
    We can find the status of background jobs once they have finished or failed in the Background Processes window, but while a job is running, is there any way to find out whether it is running properly, or, if it is taking a long time, why?
    Could anybody please advise me on this?

    Hi,
    1- Create a calculation script and use calculation commands like SET MSG SUMMARY|DETAIL and SET NOTICE HIGH|LOW, then calculate the database as per your requirement. Then create a MaxL file, define the MaxL command spool on to 'Log.txt', and execute the calculation script defined above. All the details of the calculation will be captured in the Log.txt file, i.e. how much of the calculation has completed.
    2- You can also verify whether the calculation is working by checking whether the essx.pag and essx.ind files (located at ARBORPATH\App\appName\dbName) are being created/updated or not (see the sketch below).
    Atul K,
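    For what it's worth, point 2 can be scripted rather than eyeballed (a minimal sketch; Sample/Basic are placeholder application/database names, and it assumes ARBORPATH is set in the environment):

        # re-list the page and index files every 30 seconds; growing sizes and
        # fresh timestamps mean the calculation is still writing data
        watch -n 30 "ls -l $ARBORPATH/app/Sample/Basic/ess*.pag $ARBORPATH/app/Sample/Basic/ess*.ind"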

  • The request could not be submitted for background processing.

    Post Author: Chriss
    CA Forum: Administration
    It's a BOE XI SR2 on a Win2k3 server, with a print cluster with two print spools handling 3000+ printers. I discovered this error to be intermittent and only on one of the spools. It turned out that the only common factor was an HP 4250 print driver. I backed all the 4250s down to 4200 drivers, and the intermittent error ("Error in File. The request could not be submitted for background processing.") went from about 100 a day to zero. The other spool had a different version of the HP 4250 driver and would, on rare occasion, cause this error: "Error in File ... Page header or footer longer than a page." but never the background-processing error.
    For reference, when I got this error in XI R1, this was the solution for 'the error with one name and many causes': the error "The request could not be submitted for background processing" can be related to a corrupt or wrong-versioned crpe32.dll in the Crystal bin folder. Renaming it to crpe32.dll_bak and using the Repair command in the "Add/Remove Programs" tool in the Control Panel will reinstall the correct DLL. Then restart the Crystal services.

    Post Author: krishna.moorthi
    CA Forum: Administration
    For Crystal Reports:
    Error: "The request could not be submitted for background processing"
    I think this was not related to a corrupt or wrong-versioned crpe32.dll; the below is another of the reasons for getting this error.
    I got the error when the main report (Crystal Reports 10) had more than 2 subreports whose tables were not properly assigned.
    Example (this code raises the above-mentioned error):
    rpt.SetDataSource(Exdataset);
    rpt.Subreports["subreportname1"].SetDataSource(Exdataset); // should have been Exdataset.Tables[1]
    rpt.Subreports["subreportname2"].SetDataSource(Exdataset); // should have been Exdataset.Tables[2]

  • The 0CO_OM_OPA_6 InfoPackage in the process chains takes a long time to run

    Hi experts,
    The 0CO_OM_OPA_6 InfoPackage in the process chains takes a long time to run, around 5 hours, in production.
    I have checked note 382329:
    -> indexes 1 and 4 are active
    -> index 4 was set to "Index does not exist in database system ORACLE"; I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev; it took the same 2-1/2 hours to run as it did earlier, so I didn't find much difference in performance.
    As per Note 549552 (CO line item extractors: performance), I have checked the table BWOM_SETTINGS; these are the settings in the ECC system:
    -> OLTPSOURCE - is blank
       PARAM_NAME - OBJSELSIZE
       PARAM_VALUE - is blank
    -> OLTPSOURCE - is blank
       PARAM_NAME - NOTSSELECT
       PARAM_VALUE - is blank
    -> OLTPSOURCE - 0CO_OM_OPA_6
       PARAM_NAME - NOBLOCKING
       PARAM_VALUE - is blank
    Could you please check if any other settings need to be done?
    Also, the IP has a selection criterion on FISCALYEAR/PERIOD from 2004-2099, and an init was done for the same period; as a result it is becoming difficult for me to load a single year.
    Please suggest.

    The problem was that index 4 was not active at the database level. It was recommended by the SAP team to activate it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully: the index should be activated, not created.
    The OBJSELSIZE entry in the table BWOM_SETTINGS has to be marked 'X' to improve the selection quality, and index 4 should also be activated at the ABAP level, i.e. in table COEP -> Indexes -> Index 4, select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP level, you can activate the same index at the database level.
    Be very careful while executing this in SE14; it is best to have Basis do it via DB02, as they tend to make fewer mistakes there.
    Thanks, hope this helps.

  • DSO activation in process chains takes a long time

    Dear All,
    We have included DSO activation in our process chains. This process takes a long time to execute; we manually cancel the corresponding process in the Process Overview and repeat it from the chain. The chain runs daily, and this issue also occurs on a daily basis.
    Does anyone have an idea of how to deal with this performance issue?
    Regards.

    Figure out which class your process falls in:
    Class A - high priority
    Class B - medium priority
    Class C - least priority
    Background processes - Class A jobs:
    The number of work processes reserved for job class A is a subset of the number of background processes. You should only reserve work processes for job class A if it makes sense within your system organization; work processes reserved for class A jobs are no longer available for job classes B or C.
    To set parallel processing for a specific BW process in the (variant) maintenance of the process, call the function for setting the parallel processes. You can call the function in the process variant maintenance of a process chain or in the process maintenance; the function call varies for the different BW processes. For example, in the data transfer process you call the function with Goto --> Background Manager Settings.
    The Settings for Parallel Processing dialog box appears.
    Under Number of Processes, define the maximum number of work processes that should be used to process the BW process. If you enter 1, the BW process is processed serially; if you enter a number greater than 1, the BW process is processed in parallel.
    In the Parallel Processing group frame, make the relevant settings for parallel processing in the background: enter a job class to define the job priority. The job priority defines how the jobs are distributed among the available background work processes.
    In the same group frame, for the processes ODSACTIVAT, ODSSID and ODSREQUDEL for the DataStore object, you can define whether parallel processing should take place in dialog work processes or in background work processes.
    Transport: the entries in tables RSBATCHPARALLEL and RSBATCHSERVER are written to a transport request of the Change and Transport System.
    Edited by: ram.pch on Oct 7, 2011 9:55 PM

  • Process chain taking a long time to complete

    Hi,
    I am having the following issues with the daily process chain loads:
    1) The PSA deletion takes a very long time on Fridays only (nearly 3 hours). On other days it gets deleted in 1 hour max.
    2) Loading just 381 records via DTP from DSO to cube takes nearly 2 hours (delta load). No major routines are written in the transformation.
    How do I analyse a process in a process chain which does not fail, but takes a very long time to complete? Is there any tool or transaction in BW which can help us analyse why the process chain takes so long to complete on different days? One day it completes in 8 hours; another day it takes 12 hours.
    None of the transactions - SM37, SM50, SLG1, the logs, etc. - are giving me any help in analysing this issue.
    If it fails, we have error logs to check and analyse, but without a failure, how can we analyse and fix the delay and reduce the data load time? Please guide me.
    Thanks in advance.
    Vishwanath

    Hi,
    1) "This might be due to poor performance of the system; there won't be enough work processes available in the system." - What needs to be done if there are not enough work processes available? Though I can see that lots of dialog processes were free during these jobs, and some background processes were also free.
    Look to increase work processes. You can do two things: cancel some jobs which are not progressing, or which are not very important for the time being; or, if possible, increase the number of servers.
    Note that when a job starts, it runs in the background. You can start it in dialog as well, but a dialog job will eventually hit the time-out limit. Many child jobs may be created for a single background job; they may run in dialog, and you can monitor them through SM66.
    2) "Check ST04 for lock waits and deadlocks; if the same lock persists for long, check with the Basis team." - What needs to be conveyed to the Basis team when these locks happen?
    Generally these are temporary locks; after some time they are released. If they persist for a long time, you can contact the Basis people. Or, if you can work out which job is creating the lock (double-click on the job, click on Job Details, find the PID there, copy it, and check in ST04 whether it matches) and that job is not very important, you can cancel the job.
    "Check also the tablespace available in ST04; the usage should not be more than 90%." - What needs to be conveyed to the Basis team if the usage is more than 90%?
    You can ask them to increase the tablespace.
    "Check the SM37 delay column; it should not be high." - What needs to be done if the delay column is high? It is actually high for these jobs (PSA deletion and the load from DSO to cube via DTP (delta load)).
    If the delay is high, it means there are no free work processes and jobs are sitting in the released state; I have already given you the solution for that.
    "Check whether the PIDs reflected in lock waits in SM66 are progressing or not." - If they are not progressing, what action needs to be taken?
    Suppose a dialog job is running; its status may change, i.e. that dialog job will go to Stopped status and some other job will start. In any case, if it doesn't progress, you can cancel the job.
    "Check OS07 for the DB; if the idle time is less than 20%, it's a problem." - If the idle time is less than 20%, what action needs to be taken?
    You can contact the Basis people.
    "Check SM21 and the RFC connections with the other source systems in SM59." - What needs to be checked in SM21 and SM59 specifically? Which parameters do I need to check?
    In SM21, if a red status is there, check the log beside it; it may be something like a terminal disconnect. In SM59, click on Test Connection.
    "If the problem is occurring with only one source system, then check the performance of that system." - How do I check the performance of these systems? Are any tools available in the R/3 system to check its performance?
    In this way you can check all the source systems: go to SM59, double-click on the desired source system, and click Test Connection.
    "If all of these persist, then it is a performance problem; check with the Basis team." - Are there any special settings which need to be maintained to achieve better performance of process chain loads?
    You can improve the performance of a process chain with parallel processing, i.e. split the loads by giving selections and execute them in parallel.
    Regards,
    Debjani

  • InfoPackage in process chain taking a long time to run

    HI Experts,
    One of the elements (an InfoPackage) in a process chain (the daily process chain) is taking much longer than usual (6 hours and still running), when it generally takes 15-20 minutes to complete. The status is yellow and still running in the process monitor, without giving any clear picture of an error. Manually we set the status to red and update from PSA, and then it completes in the time specified.
    The flow is from PSA to the data target (DSO), in series.
    For the last week we have been facing the same issue, and I would like to mention that we don't have access to SM37 or SM12 to look at the logs and any locks.
    Without those, I need to investigate the root cause of this.
    with regards,
    murali

    Hi,
    please find the job log.
    Date        Time      MsgID/No./Ty  Message
    13.12.2010  21:45:30  00/516/S      Job started
    13.12.2010  21:45:30  00/550/S      Step 001 started (program SBIE0001, variant &0000000109368, user ID RFCUSER)
    13.12.2010  21:45:30  R3/413/S      Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    13.12.2010  21:45:30  R3/299/S      DATASOURCE = 2LIS_17_I3HDR
    13.12.2010  21:45:30  R3/299/S      RLOGSYS    = PBACLNT200
    13.12.2010  21:45:30  R3/299/S      REQUNR     = REQU_D9GSSV08BTS93Z6Y7XWP1ZEUJ
    13.12.2010  21:45:30  R3/299/S      UPDMODE    = D
    13.12.2010  21:45:30  R3/299/S      LANGUAGES  = *
    13.12.2010  21:45:30  R8/048/S
    13.12.2010  21:45:30  R8/049/S      Current Values for Selected Profile Parameters *
    13.12.2010  21:45:30  R8/048/S
    13.12.2010  21:45:30  R8/050/S      abap/heap_area_nondia......... 0 *
    13.12.2010  21:45:30  R8/050/S      abap/heap_area_total.......... 25500319744 *
    13.12.2010  21:45:30  R8/050/S      abap/heaplimit................ 40000000 *
    13.12.2010  21:45:30  R8/050/S      zcsa/installed_languages...... ED *
    13.12.2010  21:45:30  R8/050/S      zcsa/system_language.......... E *
    13.12.2010  21:45:30  R8/050/S      ztta/max_memreq_MB............ 2047 *
    13.12.2010  21:45:30  R8/050/S      ztta/roll_area................ 3000320 *
    13.12.2010  21:45:30  R8/050/S      ztta/roll_extension........... 2000000000 *
    13.12.2010  21:45:30  R8/048/S
    13.12.2010  21:45:31  RSQU/036/S    70 LUWs confirmed and 70 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
    13.12.2010  21:45:33  R3/407/S      Call customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 9,895 records
    13.12.2010  21:45:33  R3/408/S      Result of customer enhancement: 9,895 records
    13.12.2010  21:45:33  R3/407/S      Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 9,895 records
    13.12.2010  21:45:33  R3/408/S      Result of customer enhancement: 9,895 records
    13.12.2010  21:45:33  R3/299/S      PSA=0 USING & STARTING SAPI SCHEDULER
    13.12.2010  21:45:33  R3/409/S      Asynchronous send of data package 1 in task 0002 (1 parallel tasks)
    13.12.2010  21:45:34  R3/088/S      IDOC: Info IDoc 2, IDoc No. 5136702, Duration 00:00:00
    13.12.2010  21:45:34  R3/089/S      IDoc: Start = 13.12.2010 21:45:30, End = 13.12.2010 21:45:30
    13.12.2010  21:45:35  R3/413/S      Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)
    13.12.2010  21:45:35  RSQU/037/S    Altogether, 0 records were filtered out through selection conditions
    13.12.2010  21:45:35  R3/088/S      IDOC: Info IDoc 3, IDoc No. 5136703, Duration 00:00:00
    13.12.2010  21:45:35  R3/089/S      IDoc: Start = 13.12.2010 21:45:35, End = 13.12.2010 21:45:35
    13.12.2010  21:55:37  R3/038/S      tRFC: Data Package = 1, TID = 0AF0842B00C84D0694000F00, Duration = 00:10:03, ARFCSTATE = SYSFAIL
    13.12.2010  21:55:37  R3/039/S      tRFC: Start = 13.12.2010 21:45:34, End = 13.12.2010 21:55:37
    13.12.2010  21:55:37  R3/414/S      Synchronized transmission of info IDoc 4 (0 parallel tasks)
    13.12.2010  21:55:37  R3/088/S      IDOC: Info IDoc 4, IDoc No. 5136717, Duration 00:00:00
    13.12.2010  21:55:37  R3/089/S      IDoc: Start = 13.12.2010 21:55:37, End = 13.12.2010 21:55:37
    13.12.2010  21:55:37  00/517/S      Job finished
    Note the telling entry: tRFC data package 1 ran for 00:10:03 and ended with ARFCSTATE = SYSFAIL, i.e. the extraction stalled on a failed tRFC call on the source system; the matching tRFC entry (SM58) or short dump (ST22) on PBACLNT200 is where the root cause should show up.
