Configure long-running EJB process

Hi,
I have a stateless session bean that is invoked by the WebSphere Application Server scheduler configuration. From this EJB I am calling a servlet using the HttpURLConnection class. The servlet takes a minimum of 15 minutes to complete its task, but about two minutes after the servlet is invoked, the EJB throws a timeout error.
There is no way I can divide the task performed in the servlet into multiple parts and make multiple calls from the EJB.
How can I solve this? Please suggest.

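For reference, here is a minimal sketch of the kind of call described, with explicit timeouts set on the HttpURLConnection; the servlet URL, class name, and timeout values are hypothetical. Note that the HTTP read timeout is independent of the EJB container's transaction timeout, which must be raised separately in the server configuration:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LongTaskCaller {
    public void callServlet() throws Exception {
        // Hypothetical servlet URL, for illustration only
        URL url = new URL("http://appserver:9080/myapp/LongTaskServlet");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(10000);          // fail fast if the server is unreachable
        conn.setReadTimeout(20 * 60 * 1000);    // allow up to 20 minutes for the response
        InputStream in = conn.getInputStream(); // blocks until the servlet responds
        try {
            while (in.read() != -1) {
                // drain the response
            }
        } finally {
            in.close();
            conn.disconnect();
        }
    }
}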

Similar Messages

  • Checking for status of a long-running BPEL process

    Hi experts,
    I have one BPEL process for originating customers' loans, which involves various steps and takes around 15-20 minutes to complete per instance. I would like to implement a user interface that checks the process's progress and displays it to the user.
    I have thought of using an asynchronous process that updates a global variable with the status of each step inside the process, and using a poller to invoke another operation (such as a "checkStatus" operation) of this process to retrieve this variable and display the value to the users. This could be achieved with an "OnEvent" activity waiting for the "checkStatus" operation; it would run in parallel with the main process, have access to the global variable, and reply with this variable immediately to the caller.
    It sounds fine in theory, but invoking a web service operation to poll for status is heavyweight and may impact performance under high load.
    I am wondering if there is another solution to this problem, as status checking is a very common requirement for long-running BPEL processes. What is the best practice for implementing this?
    Look forward to your responses.
    Thanks and regards,

    Hi.
    I apologize for the slow reply. I am just back from overseas and did not have a chance to visit the forum!
    Thank you a lot for your responses!
    BAM looks like the suggested out-of-the-box solution, but it is tied too closely to the Oracle API and would be hard to customise in how it presents status to end users. If you want to display the process status with BAM in a web application interface, ADF is the only solution (please correct me if I am wrong). I would prefer a stand-alone solution, free of proprietary APIs, so that we can build a screen that is technology-independent, as the ADF UI is not our only supported view technology; for example, a PHP web client should be able to interpret the BPEL process status.
    I will really appreciate your suggested solutions.
    Thanks and regards,

  • Online redo logs fill-up due to long running SCM processes

    DB2 v9.7
    AIX v6.1 P570
    Running Standard SAP Code
    We are going live with an SCM system running SNP/DP GATP. How can we avoid having our online redo logs fill up due to long-running processes (i.e., CTM, process chains, BOP, DP runs)? We need to know the SAP best practice regarding this.
    Per our developers, BOP jobs perform a commit only when everything is processed, updating the table in one go at the end; hence it is not possible to commit regularly. They will eventually have to run BOP for 3,200 materials with 98,000 sales order items.
    But after running BOP for just 40 materials with 22,000 sales order line items, we are getting DBIF_RSQL_SQL_ERROR and CX_SY_OPEN_SQL_DB dumps.
    Any assistance on this would be helpful...
    Thanks -

    Fred
    Check OSS notes for this error. The BOP component has a steady stream of corrections from SAP, especially in older releases.
    Also make sure your liveCache build is consistent with your SCM release/patch level.
    Rishi Menon

  • UCE backup: long-running vacuumdb process

    If I kill the vacuumdb child process, the next process to run is reindexdb. Should I allow this process to run, or do I need to kill all the related backup processes? If I allow the reindexdb process to run, what will happen? Will the database be corrupted? I have already killed vacuumdb (it had not completed fully). When I check the backup.sh script, the step after vacuumdb is reindexdb.
    Thanks

    There is a risk that killing vacuumdb can lead to database corruption. It can normally take a long time for vacuumdb to complete, especially if the backup script isn't run regularly or the database is very large.

  • Long Running BTC process in SAP R3 PROD system

    Hello All,
    In our PRD environment, one BTC job has been running for a long time without collecting any data.
    When I look into the job log and its associated BTC work process, its status is active and the action performed is a sequential read on table RESB. The job's current duration is 100,000+.
    A few points I have observed: 1> When I look into the expensive SQL statements, I see the table RESB most of the time; below is the screen I observed there:
    SQL Statement
    SELECT
      "XLOEK" , "MATNR" , "WERKS" , "AUFNR" , "POSTP"
    FROM
      "RESB"
    WHERE
      "MANDT" = :A0 AND "AUFNR" IN ( :A1 , :A2 , :A3 , :A4 , :A5 ) AND "POSTP" IN ( :A6 , :A7 , :A
      :A9 )#
    Execution Plan
    Explain from v$sql_plan: Address: 07000000CE24FD58 Hash_value:  940166482 Child_number:  1
    SELECT STATEMENT ( Estimated Costs = 29,530 , Estimated #Rows = 0 )
            1 TABLE ACCESS FULL RESB
              ( Estim. Costs = 29,529 , Estim. #Rows = 75 )
              Estim. CPU-Costs = 2,848,540,469 Estim. IO-Costs = 29,284
    2> The statement shows TABLE ACCESS FULL RESB; what does this mean?
    3> Previous executions of this job averaged around 1,500 seconds.
    4> This job relates to BW; it collects data from R3 remotely and sends it to the BW system.
    Can anyone suggest how to fix this issue? It is causing performance problems on the system.
    Best Regards
    Rakesh

    Hi Rakesh,
    This job is hung. You need to cancel it. If the job hangs repeatedly, then analyze (update the statistics for) table RESB.
    TABLE ACCESS FULL RESB means the database is reading the entire RESB table rather than using an index. If analyzing the table does not help, run a full DB statistics update. If you still face the problem, consider table buffering for RESB (SE13).
    Also check the number of tRFC and qRFC entries; if it is huge, that can be a problem. Send the data part by part, and run it on the weekend rather than on weekdays.
    Thanks,
    Suman

  • Long running MGP process / Tuning?

    Our MGP compose process runs for 25 minutes, which seems like a long time. I have read about consperf, but it seems you can only enable it by going into each user's publication item and enabling it there.
    Is there any global setting that will do the same thing?

    What version are you on?
    We use 10.0 live (testing out 10.2, but found a (to us) showstopper bug).
    To run consperf (for 10.0; I think it is similar for earlier versions), you can run it from the command line on the server, but that is a bit temperamental unless you are using Windows servers. Otherwise, from the Mobile Manager, try the following route:
    1) Look at the MGP apply/compose logs (preferably one with a number of logged tables processed) and drill into one of the users. This should give you a list of publication items (WTGPI_nnnnn names), each followed by the number of milliseconds it took to compose. On the whole, ignore anything less than 2000 for now, but see if there are any with large numbers; anything with five digits or more needs looking at. If they are all low numbers, there is not a lot to be done - you must have a lot of users!
    2) Assuming you found something, go to Data Synchronisation > Performance and press the link behind the word "users" in the middle of the text describing consperf.
    3) Select the user you looked at in step 1 and press the Subscriptions button.
    4) Check the application that is the problem and press the Consperf performance analysis button.
    5) Take the Set Parameters link. For now leave everything as is, except in the pubitemlist field type wtgpi_nnnnn, where nnnnn is the number of the slow-running publication item from step 1 (note: the wtgpi prefix is required). Press OK and it will run with a spinning timer. This is not fast and may take 10 minutes, but you can switch to other pages, do other things, and come back.
    When finished, you should see links to two files: a timing file and an execution plan. Look at the timing file first. It has two sections: SYNC is about the synchronisation (i.e. download), while the LDEL and LINS entries relate to compose performance. For each area (the file tells you about them) there are a number of different templates, and it gives example timings for them. If all of LDEL_1 to LDEL_4 show -10000 then you have a problem (it means consperf gave up). One of the values in each set will have chevrons around it; this is its default.
    The execution plan file gives the corresponding SQL statements and explain plans for them.

  • Long running 'defragment' process?

    I've got a 'defragment' process that's been running since 8:55am (it's now 11:39am).
    I can't seem to find any docs on what defragment is, other than mentions of it in relation to MTA channels.
    last pid: 18923; load averages: 1.22, 1.23, 1.25 11:39:15
    101 processes: 99 sleeping, 2 on cpu
    CPU states: % idle, % user, % kernel, % iowait, % swap
    Memory: 16G real, 13G free, 1137M swap in use, 19G swap free
    PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
    9492 mailsrv 1 20 0 35M 13M cpu/0 162:37 12.46% defragment
    Here's what we're running:
    jeffw@oak:/ims/log# imsimta version
    Sun Java(tm) System Messaging Server 6.2-3.04 (built Jul 15 2005)
    libimta.so 6.2-3.04 (built 01:43:03, Jul 15 2005)
    SunOS oak.usg.tufts.edu 5.9 Generic_118558-08 sun4u sparc SUNW,Netra-T12

    Hi Jay:
    We've killed off the defragment process, and it restarted and is back to its old behavior (10% of CPU, just looping over the queued files).
    It looks to me like it's a complete set of files:
    jeffw@oak:/ims/queue/defragment# grep 01C5EC07 *
    ZZe1R7Qpi_4rZ.00:Content-type: message/partial; number=2; id="01C5EC07.159946A0@������"; total=5
    ZZe1R7Qqi_3pm.01:Content-type: message/partial; number=1; id="01C5EC07.159946A0@������"; total=5
    ZZe1R7Qqi_4r1.00:Content-type: message/partial; number=3; id="01C5EC07.159946A0@������"; total=5
    ZZe1R7Qqi_6pq.00:Content-type: message/partial; number=4; id="01C5EC07.159946A0@������"; total=5
    ZZe1R7Qqi_6r3.00:Content-type: message/partial; number=5; id="01C5EC07.159946A0@������"; total=5
    The headers on the first message look a little odd, and I'm wondering if that's what is throwing off defragment. It has a second set of 'headers' in the message body (i.e. the text one blank line after the real headers).
    Here's what it looks like:
    t;1132303375
    p;3
    *;4
    u;mailsrv
    c;tcp_local
    s;dacite.usg.tufts.edu ([130.64.1.203])
    h;<004e01c5ec07$193d6f70$b039eecb@LocalHost>
    d;ims-ms
    m;[email protected]
    d;20
    *;1028
    (;209715200
    );-2
    j;rfc822
    f;[email protected]
    h;[email protected]
    skim20@ims-ms-daemon
    Boundary_(ID_ZeeVeOhkiiJTeDOyEc+YGQ)
    Received: from dacite.usg.tufts.edu ([130.64.1.203]) by prod-ims.usg.tufts.edu
    (Sun Java System Messaging Server 6.2-3.04 (built Jul 15 2005))
    with ESMTPS id <[email protected]> for
    [email protected]; Fri, 18 Nov 2005 03:42:55 -0500 (EST)
    Received: from granite.tufts.edu ([130.64.1.47]) by dacite.usg.tufts.edu with
    esmtp (Exim 4.20) id 1Ed1pZ-0003Mr-c4 for [email protected];
    Fri, 18 Nov 2005 03:42:49 -0500
    Received: from hold-daemon.granite.tufts.edu by granite.tufts.edu
    (iPlanet Messaging Server 5.2 HotFix 1.25 (built Mar 3 2004))
    id <[email protected]> for [email protected]
    (ORCPT [email protected]); Fri, 18 Nov 2005 03:42:49 -0500 (EST)
    Received: from conversion-daemon.granite.tufts.edu by granite.tufts.edu
    (iPlanet Messaging Server 5.2 HotFix 1.25 (built Mar 3 2004))
    id <[email protected]> (original mail from [email protected])
    for [email protected]; Fri, 18 Nov 2005 01:14:32 -0500 (EST)
    Received: from dacite.usg.tufts.edu (dacite.usg.tufts.edu [130.64.1.203])
    by granite.tufts.edu
    (iPlanet Messaging Server 5.2 HotFix 1.25 (built Mar 3 2004))
    with ESMTPS id <[email protected]> for
    [email protected]; Fri, 18 Nov 2005 01:14:29 -0500 (EST)
    Received: from [203.238.57.9] (helo=mailsvr.mk.co.kr)
    by dacite.usg.tufts.edu with esmtp (Exim 4.20)
    id 1EczVG-0007Xw-Zc for [email protected]; Fri, 18 Nov 2005 01:13:42 -0500
    Received: from LocalHost ([203.238.57.176]) by mailsvr.mk.co.kr with Microsoft
    SMTPSVC(6.0.3790.1830); Fri, 18 Nov 2005 15:05:13 +0900
    Resent-date: Fri, 18 Nov 2005 03:42:49 -0500
    Date: Fri, 18 Nov 2005 15:12:36 +0900
    Resent-from: [email protected]
    From: Jenny Ju-young Kim <[email protected]>
    Subject: Re: Fw: Advertisement in the Boston KAEA/ASSA Program brochure
    ad4_wkforum2006bw.eps [1/5]
    Resent-to: [email protected]
    To: Sunghyun Henry Kim <[email protected]>
    Resent-message-id: <[email protected]>
    Message-id: <004e01c5ec07$193d6f70$b039eecb@LocalHost>
    Organization: Maeil Business Newspaper
    MIME-version: 1.0
    X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2900.2180
    X-Mailer: Microsoft Outlook Express 6.00.2900.2180
    Content-type: message/partial; number=1; id="01C5EC07.159946A0@������"; total=5
    X-Priority: 3
    X-MSMail-priority: Normal
    References: <005c01c5ea7e$030ade70$b039eecb@LocalHost>
    <[email protected]>
    X-OriginalArrivalTime: 18 Nov 2005 06:05:13.0656 (UTC)
    FILETIME=[0A4ACB80:01C5EC06]
    From: "Jenny Ju-young Kim" <[email protected]>
    To: "Sunghyun Henry Kim" <[email protected]>
    References: <005c01c5ea7e$030ade70$b039eecb@LocalHost> <[email protected]>
    Subject: Re: Fw: Advertisement in the Boston KAEA/ASSA Program brochure
    Date: Fri, 18 Nov 2005 15:12:36 +0900
    Organization: Maeil Business Newspaper
    MIME-Version: 1.0
    Content-Type: multipart/mixed;
    boundary="----=_NextPart_000_004C_01C5EC52.84802070"
    X-Priority: 3
    X-MSMail-Priority: Normal
    X-Mailer: Microsoft Outlook Express 6.00.2900.2180
    Disposition-Notification-To: "Jenny Ju-young Kim" <[email protected]>
    X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2180
    This is a multi-part message in MIME format.
    ------=_NextPart_000_004C_01C5EC52.84802070
    Content-Type: text/plain;
    charset="ISO-8859-1"
    Content-Transfer-Encoding: base64
    RGVhciBQcm9mZXNzb3IgU3VuZ2h5dW4gS2ltLA0KDQpUaGFua3MgZm9yIHlvdXIgcmVwbHkuIEkg
    YXR0YWNoZWQgdGhlIGFkIGRlc2lnbiBvZiBNYWVpbCBCdXNpbmVzcyBOZXdzcGFwZXIgZm9yIHRo

  • Long running message processing

    Hi,
    I am receiving messages from a queue using Peek Lock. The processing of the message could take longer than 5 minutes in some cases. I have found that a queue can have a maximum message lock time of 5 minutes.
    Can anyone tell me how I can extend the time a lock is placed on a message?
    I am using Queue.OnMessageAsync() to receive a message.
    Thanks for your help.
    Graham

    Hi Graham,
    Use the following code to renew the message lock:
    // Create a CTS to launch a task in charge of renewing the message lock
    var brokeredMessageRenewCancellationTokenSource = new CancellationTokenSource();
    var brokeredMessage = _client.Receive();
    try
    {
        var brokeredMessageRenew = Task.Factory.StartNew(() =>
        {
            while (!brokeredMessageRenewCancellationTokenSource.Token.IsCancellationRequested)
            {
                // Use the LockedUntilUtc property to determine whether the lock expires soon
                if (DateTime.UtcNow > brokeredMessage.LockedUntilUtc.AddSeconds(-10))
                {
                    // If so, renew the message lock
                    brokeredMessage.RenewLock();
                }
                Thread.Sleep(500);
            }
        }, brokeredMessageRenewCancellationTokenSource.Token);
        // Perform the lengthy operation
        RunLongRunningOperation();
        // Mark the message as completed
        brokeredMessage.Complete();
    }
    catch (MessageLockLostException)
    {
        // The lock expired before the message could be completed
    }
    catch (Exception)
    {
        brokeredMessage.Abandon();
    }
    finally
    {
        // Stop the lock-renewing task
        brokeredMessageRenewCancellationTokenSource.Cancel();
    }
    Thanks, Subhendu De

  • Terminating long running oracle process

    Hi,
    I want to kill all the Oracle sessions that have been running for more than 10 hours. Is it possible to run a script and schedule it using the DBMS_JOB package? Any advice would be greatly appreciated.
    Thanks,
    Karthik

    The reason is that people open a session through the browser and do not close it, which leaves the database connection open and eventually exceeds the maximum number of connections to the database. So I want to terminate connections that were made more than 10 hours ago. How do I find the user connections that were made more than 10 hours ago?
    Thanks in advance.
    Yes. If coded properly and executed with proper authority.
    However, you should address the need for such a job in the first place. Generally, solving symptoms alone is not sound database administration.
    Michael
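    If such a job is written anyway, finding the candidate sessions comes down to V$SESSION.LOGON_TIME. Below is a minimal JDBC sketch under that assumption; the connection details are hypothetical, the account needs SELECT on V$SESSION, and actually killing a session additionally requires the ALTER SYSTEM privilege:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class StaleSessionFinder {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "admin_user", "secret");
            Statement st = con.createStatement();
            // User sessions whose logon time is more than 10 hours in the past
            ResultSet rs = st.executeQuery(
                    "SELECT sid, serial#, username FROM v$session"
                    + " WHERE type = 'USER' AND logon_time < SYSDATE - 10/24");
            while (rs.next()) {
                // Each row is a candidate for: ALTER SYSTEM KILL SESSION '<sid>,<serial#>'
                System.out.println(rs.getInt(1) + "," + rs.getInt(2)
                        + " (" + rs.getString(3) + ")");
            }
            con.close();
        }
    }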

  • Long running threads (Jasper Reports) and AM-Pooling

    Hi,
    We are developing a fairly large application with ADF and BC. We have quite a lot of reports generated through Jasper that take a long time to complete. The result is a PDF document that the user gets on the UI and can download via a download link. Reports that take over an hour to finish are never completed and returned to the user on the UI. I think the problem is in AM pooling, because we are using the default AM-Pooling settings:
    <AM-Pooling jbo.ampool.maxinactiveage="600000" jbo.ampool.monitorsleepinterval="600000" jbo.ampool.timetolive="3600000"/>
    The AM is destroyed or returned to the pool before the report finishes. How do I configure those settings so that even long-running threads can do their jobs to the end?
    We also modified web.xml as follows:
      <session-config>
        <session-timeout>300</session-timeout>
      </session-config>
    Any help appreciated.
    Regards, Tadej

    Your problem is not related to ADF ApplicationModules. AMs are returned to the pool no earlier than the end of request, so for sure they are not destroyed by the framework while the report is running. The AM timeout settings you are referring to are applicable only to idle AMs in the pool but not to AMs that have been checked out and used by some active request.
    If you are using MS Internet Explorer, then most probably your problem is related to IE's ReceiveTimeout setting, which defines a timeout for receiving a response from the server. I have had such problems with long-running requests (involving DB processing running for more than 1 hour) and solved them by increasing this timeout. By default the timeout is as follows:
    IE4 - 5 minutes
    IE5, 6, 7, 8 - 60 minutes
    I cannot find the default value for IE9 and IE10; some people claim it is only 10 seconds, although this does not sound reasonable or reliable! Anyway, the real value is hardly greater than 60 minutes.
    You should increase the ReceiveTimeout registry value to an appropriate value (greater than the time necessary for your report to complete). Follow the instructions of MS Support here:
    Internet Explorer error "connection timed out" when server does not respond
    I have searched the Internet for similar timeout settings for Google Chrome and Mozilla Firefox, but I have not found anything, so I instructed my customers (who execute long-running DB processing) to configure and use IE for these requests.
    Dimitar

  • How to delay a long-running task's start until the display is updated?

    I am having a problem: a progress bar that I want to use to show the progress of a long-running background process does not show up for a long time (up to 10 seconds) after the long-running process is started. This is in an AIR application, and the background process is an external native process, so once it is launched the UI thread is free to run, but the launch of the process itself can take time.
    Below is the current state of the relevant code.
    In addition to the current format I have also tried using the CREATION_COMPLETE, EXIT_FRAME and RENDER events with the same results.
    If I raise the value in setTimeout to 500 ms, the progress bar displays quickly, but I would prefer not to delay the launch of the background process for no reason.
    If I comment out the loadProject call, the progress bar is displayed instantly.
    Any help is appreciated.
    private function continueLoad(evt:Event):void
    {
         // We are about to start some potentially long running process
         CursorManager.setBusyCursor();
         curPopup = new SyncProgress();
         curPopup.addEventListener(Event.ENTER_FRAME, popupLoadedHandler);
         PopUpManager.addPopUp(curPopup, parentView, true);
         PopUpManager.centerPopUp(curPopup);
         curPopup.stage.invalidate();
    }
    private function popupLoadedHandler(event:Event):void
    {
         curPopup.removeEventListener(Event.ENTER_FRAME, popupLoadedHandler);
         // Defer the heavy call so the popup has a chance to render first
         setTimeout(function():void {
             syncManager.loadProject(mainViewModel.selectedUserItem.id, projectFile.nativePath, overwrite);
         }, 0);
    }

    DBMS_SCHEDULER is very powerful and can be a bit unwieldy. I tend to use DBMS_SCHEDULER for jobs which are purely "in the database", i.e. not specifically APEX-related; in addition, I find it is better for work that needs to run regularly without human intervention (some sort of refresh process, daily cleanup, etc.).
    If you intend to run this process from APEX as a pseudo "on demand" process (i.e. generated by a user request) and have quite simple requirements (e.g. there are no dependencies on other jobs), it might be worth checking out the APEX scheduling API, namely the package APEX_PLSQL_JOB:
    http://download.oracle.com/docs/cd/E14373_01/apirefs.32/e13369/apex_plsql_job.htm#BGBCJIJI
    It generates a unique job number which you can use to reference its progress, and it is much simpler to use.
    P.S. Using DBMS_SCHEDULER, yes, the job name has to be unique, but you can generate one using either a sequence or data not likely to be repeated, like the current timestamp, as sketched below.
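    As a rough illustration of the timestamp-based naming idea, the JDBC sketch below builds a job name from the current timestamp and submits a DBMS_SCHEDULER job; the connection details and the procedure my_long_running_proc are hypothetical:
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    public class SubmitJob {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "apex_user", "secret");
            // Derive a (practically) unique job name from the current timestamp
            String jobName = "LONG_TASK_"
                    + new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date());
            CallableStatement cs = con.prepareCall(
                    "BEGIN DBMS_SCHEDULER.CREATE_JOB("
                    + " job_name => ?,"
                    + " job_type => 'PLSQL_BLOCK',"
                    + " job_action => 'BEGIN my_long_running_proc; END;'," // hypothetical body
                    + " enabled => TRUE); END;");
            cs.setString(1, jobName);
            cs.execute();
            con.close();
            System.out.println("Submitted job " + jobName);
        }
    }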

  • Identifying the BW report behind a long-running DIA wp

    Hello. If I see a long-running work process in SM66/SM50, I can correlate it to the Oracle session with ST04, and then I can see the actual query, explain plan, etc.
    However, I am having trouble tying this back to the BW "report" which the user launched. I know it's a web report, and all I could find in ST03 was RFC time.
    I looked in SE16 at RSDDSTAT for the timeframe in question, but couldn't find a record that correlated to what I saw running actively in SM66/SM50/ST04. Is there any other way to do this (other than calling the end user directly)?
    thanks

    Hi Ben,
    I believe now I understand your issue... and I only see two possible options:
    1. Activate a trace for the user that is running the long process (you can do it from RSRTRACE, and those logs are not very big). If you see queries that spend a very long time on CMD_PROCESS WAIT, those are candidates to review.
    The main problem with this option is that you can't catch the user instantly, because the user must log out and log back in for the trace to take effect, and you'll need to analyze the logs constantly to catch the offending query.
    2. Debug the dialog process and find the WRITEQUERY execution (by using F7). There, in g_s_repkey, you'll find the query's technical name in compid. The problem here is getting authorization from your Basis team to debug on a production system (it is not recommended).
    Maybe you can also try to analyze BW Statistics... let's see other opinions.
    Hope this helps,
    David.

  • Considerations for long running publication extensions

    We are considering implementing a post-processing publication extension which may take several minutes to execute. One of our concerns with this strategy is that the publication extension may bog down the Adaptive Processing Server.
    Are there any general considerations or recommendations for long-running post-processing publication extensions?
    Thanks!

    Generally, creating a new thread is an expensive operation. Well, everything is relative. My laptop can create, run, and stop 7,000+ threads per second (test program below), YMMV. If you are dealing with thousands of thread creations per second, pooling may be sensible; if not, premature optimization is the root of all evil, etc.
    public class ThreadSpeed
    {
        public static void main(String args[]) throws Exception
        {
            System.out.println("Ignore the first few timings.");
            System.out.println("They may include Hotspot compilation time.");
            System.out.println("I hope you are running me with \"java -server\"!");
            for (int n = 0; n < 5; n++)
                doit();
            System.out.println("Did you run me with \"java -server\"?  You should!");
        }
        public static void doit() throws Exception
        {
            long start = System.currentTimeMillis();
            for (int n = 0; n < 10000; n++) {
                Thread thread = new Thread(new MyRunnable());
                thread.start();
                thread.join();
            }
            long end = System.currentTimeMillis();
            System.out.println("thread time " + (end - start) + " ms");
        }
        // Deliberately empty: we are timing thread creation, not the work done
        static class MyRunnable implements Runnable
        {
            public void run()
            {
            }
        }
    }

  • ORABPEL-05002 for long running process

    Hi everybody,
    My question concerns a long-running process I have designed which, after running for a couple of days, fails with the ORABPEL-05002 error:
    ===============================================================
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================
    Looking at the Manual Recovery screen, I can see an activity I can recover. It is an assign activity performing a single boolean assignment.
    Of course, together with the ORABPEL-05002 error I also got the 'Transaction was rolled back: timed out' message. Note that I have increased the transaction-timeout value to 180000. The error occurs during the night, when there is no heavy load on the server.
    Recovering the assign activity brings the process back to the running state.
    My process pattern:
    while (1 == 1) {
        do activity;
        wait_timeout();
    }
    So, I have the following questions:
    1. What is the cause of this error?
    2. How can I automatically recover this lost activity? RecoveryAgent?
    Any suggestion is appreciated.
    Regards,
    amo
    P.S. The full stack of error messages reported in domain.log:
    ===============================================================
    <2006-09-18 08:08:34,101> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:873)
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:871)
         ... 10 more
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:08:34,129> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,236> <ERROR> <SRH.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "activity manager": Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,274> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,275> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================

    These are the possible causes of the problem and their solutions:
    1. Poor performance of the dehydration database. If you are using Oracle Lite as the dehydration store, switch to Oracle 9i or 10g. If Oracle 9i/10g is already in use, check the database parameters 'processes' and 'sessions' to make sure the database can handle the expected throughput.
    2. OC4J has too few available connections to the dehydration database. Increase the maxConnection number of the BPELServerDataSource in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/data-sources.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/data-sources.xml (mid-tier installation).
    3. The size of the message is too big. There are two ways to deal with this:
    - Increase the transaction timeout in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/server.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/server.xml (mid-tier installation).
    - Decrease the auditLevel from BPELConsole -> Manage BPEL Domain -> Configurations tab. Doing so reduces the amount of data saved to the dehydration store.
    Cheers
    Anirudh Pucha

  • Query on long running client import process

    Hi Gurus,
    I have a few queries regarding the parameter PHYS_MEMSIZE. Let me briefly describe the SAP server configuration before I get to the actual problem.
    We are running ECC 6.0 on Windows 2003 SP2 (64-bit); the DB is SQL Server 2005, with 16 GB RAM and a 22 GB page file.
    As per Zero Administration Memory Management, the rule of thumb is to split SAP/DB = 70/30 and set the parameter PHYS_MEMSIZE to approximately 70% of the installed main memory. Should I change the parameter as described in the Zero Administration memory guide? If so, what precautions should we take when changing memory parameters? Are there any major dependencies or known issues associated with this parameter?
    Currently the PHYS_MEMSIZE parameter is set to 512 MB.
    A few days ago we had to perform a client copy using the export/import method. The export was normal and went well; however, the import took almost 15 hours to complete. Any clues as to the possible reasons for a long-running client copy in a SQL Server environment? I suspect the PHYS_MEMSIZE setting of 512 MB, which appears to be very low.
    Please share your ideas and suggestions in case anyone has experienced this sort of issue; we are going to perform a client copy again in the next 10 days, so I really need your input.
    Thanks & Regards,
    Vinod

    Hi Nagendra,
    Thanks for your quick response.
    Our production environment runs in active/active clustering, with one central instance and one dialog instance. The database size is 116 GB with one data file, and the log file is 4.5 GB; both are shared in the cluster.
    As you suggested, I would modify PHYS_MEMSIZE to 11 or 12 GB (70% of physical RAM). What precautions should I consider? I see there are many dependencies associated with this parameter, per its documentation.
    The standard values of the following parameters are calculated according to PHYS_MEMSIZE:
    em/initial_size_MB = PHYS_MEMSIZE (extension by PHYS_MEMSIZE / 2)
    rdisp/ROLL_SHM
    rdisp/ROLL_MAXFS
    rdisp/PG_SHM
    rdisp/PG_MAXFS
    Should I make the changes on both the central instance and the dialog instance? Please clarify. Also, are there any other parameters I should adjust to speed up the client copy process?
    Many Thanks...
    Thanks & Regards,
    Vinod
