ESB performance issue: takes too long to select and insert records in DBs

Hi,
I have an ESB service which has to select data from seven different tables (using join operations) in one database and insert it into a single table in another database.
This operation takes an unduly long time.
For example, it takes over 2 hours to select and insert 3,000 records.
When I ran the same select query against those tables in SQL Developer, it took only 23 seconds.
Do I need to change any configuration settings in Enterprise Manager or somewhere else to improve performance? Someone please advise.
I am using Oracle SOA Suite 10.1.3.4.
Thanks,
RV
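
For comparison, if a database link between the two databases is an option, the whole copy can be pushed down into one set-based SQL statement instead of row-by-row processing through the ESB; a minimal sketch (all table, column, and link names here are hypothetical):

    -- Hypothetical names: target_table is the destination table; source_t1..source_t7
    -- are the seven source tables reachable over the database link src_link.
    INSERT INTO target_table (col_a, col_b, col_c)
    SELECT s1.col_a, s2.col_b, s7.col_c
    FROM   source_t1@src_link s1
    JOIN   source_t2@src_link s2 ON s2.id = s1.id
    -- ... joins to the remaining source tables ...
    JOIN   source_t7@src_link s7 ON s7.id = s1.id;
    COMMIT;

A single statement like this moves the join work to the database and avoids the per-record overhead that usually dominates ESB insert times.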

Similar Messages

  • HT204406 What if I do not want artwork for any song? It takes too long to upload and I personally don't care to retain or see the "artwork". Can you disable this in iTunes Match?

    What if I do not want artwork for any song? It takes too long to upload and I personally don't care to retain or see the "artwork". Can you disable this in iTunes Match?

    Agreed. I hope they allow this some time soon or iTunes Match will remain a big disappointment for me.

  • Performance issues; waited too long for a row cache enqueue lock!

    Hi Experts,
    OS: Oracle Solaris on SPARC (64-bit)
    DB version:
    SQL> select * from V$VERSION;
    BANNER
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for Solaris: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
    We have seen 100% CPU usage and high database load, so I checked the instance and saw that there were many blocking sessions and more than 71 sessions running the same select:
        select tablespace_name as tbsname
        from (select tablespace_name, sum(bytes)/1024/1024 free_mb, 0 total_mb, 0 max_mb
              from dba_free_space
              group by tablespace_name
              union
              select tablespace_name, 0 current_mb, sum(bytes)/1024/1024 total_mb,
                     sum(decode(maxbytes, 0, bytes, maxbytes))/1024/1024 max_mb
              from dba_data_files
              group by tablespace_name)
        group by tablespace_name
        having round((sum(total_mb)-sum(free_mb))/sum(max_mb)*100) > 95
    The blocking sessions are running queries like this:
        SELECT * FROM MYTABLE WHERE MYCOL=:1 FOR UPDATE;
    These select statements come from a cron job that runs every 10 minutes to check the tablespaces, so I first killed (kill -9 pid) those selects, and CPU usage dropped to 13%. The blocking sessions were still there, and I didn't kill them while waiting for confirmation from the app team. After a few hours the CPU usage still had not dropped below 13%, and I saw many errors:
        WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=...System State dumped to trace file .....trc
    After that, we decided to restart the DB to release the locks!
    I would like to understand why we were not able to run those select statements during the load, and why the statspack scheduled snapshot reports and the automatic database statistics were not able to finish... why did 5 FOR UPDATE statements lock the whole DB?
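    For reference, a safer way to kill the blockers than kill -9 on their server processes is to let Oracle terminate the sessions and roll back their transactions; a minimal sketch (the sid/serial# values are placeholders read from v$session):
        -- Find the blockers: blocked sessions expose their blocker's SID.
        SELECT sid, serial#, username, sql_id
        FROM   v$session
        WHERE  sid IN (SELECT blocking_session FROM v$session);
        -- Terminate one; Oracle rolls back its transaction and releases its locks.
        ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;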

    user12035575 wrote:
    SELECT FOR UPDATE will only lock the table rows until the transaction is completed.
    "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" happens when it needs to acquire a lock on the data dictionary. Did you check the trace file associated with the statement?

    The trace file is too long; which information should I focus on?

  • Firefox will let me surf the web, but certain sites such as Facebook and eBay take too long to respond and I cannot log in. How can I fix this?

    The connection has timed out.
    The server at signin.ebay.com is taking too long to respond.
    * The site could be temporarily unavailable or too busy. Try again in a few
    moments.
    * If you are unable to load any pages, check your computer's network
    connection.
    * If your computer or network is protected by a firewall or proxy, make sure
    that Firefox is permitted to access the Web.
    I can still look up support on the internet, but anything that requires a log-on will not load, and FF gives me this error message. This has happened on and off for a few weeks now. Now it has been 3 days of no log-ons whatsoever. PLEASE HELP ME! I have homework and eBay auctions ending!
    == URL of affected sites ==
    http://ebay.com, http://facebook.com, http://comcast.net

    This has been going on for months for me. Time to drop Firefox & go back to IE.

  • TS1702 Since updating my iPad to iOS 7, videos take too long to download and often stop when playing

    Since updating my iPad to iOS 7, videos are too slow to download and often stop when playing. Any suggestions?

  • iPhoto takes too long to open and close

    I just updated to iPhoto 5.0.4 from 4.0.3 (using an iLife 05 disk) on my iBook G4 (specs below). I have about 10,000 photos in my library. It took 5 or more hours (I finally had to go to bed!) to update all my photos when the program first opened. This newer version opens, closes, and hides slower than the older version, up to 4 minutes sometimes. Also when open, it slows down other processes on the computer quite a bit. But so far, the processes I've tried to do within iPhoto have worked nicely, including rotating, editing, e-mailing, slideshow.
    I have already deleted the .plist file, repaired permissions, and rebuilt the library (successfully) with iPhoto Library Manager, and of course restarted the computer several times. Does anyone have any other suggestions on how to keep this program from clogging up my computer and get it running more quickly? If the only suggestion is to try reinstalling iPhoto 5, would it have to update all my photos again, and would the reinstalled program look at my iPhoto library or the rebuilt library?
    I have 4.7 GB of hard disk space left of a 60GB hard drive (which is really only 55GB). iPhoto seems to be automatically looking at the rebuilt library now. I guess Library Manager made it that way?? So another question is can I just drag my original iPhoto library from my "Pictures" folder to the trash? This would free up about 15GB.

    Thanks so much, TD, for your quick reply. I have about 25 albums. Would you consider that "a lot"? I don't know if they are "smart" though. I just created each album and dragged the appropriate photos from my library over to the folder icon of each one. I don't think that involves any of that "smart" stuff, does it?
    What do you mean by "if you have all your Rolls open"? How do I "open" rolls or "close" them? I always leave the little triangle next to the "Library" in the left column closed. In that column, listed under the Library are the last roll, last 12 months roll, etc. then all my albums.
    I will trash my original library and hopefully that will help some.

  • Accessing BKPF table takes too long

    Hi,
    Is there another way to have a faster and more optimized sql query that will access the table BKPF? Or other smaller tables that contain the same data?
    I'm using this:
       select bukrs gjahr belnr budat blart
       into corresponding fields of table i_bkpf
       from bkpf
       where bukrs eq pa_bukrs
       and gjahr eq pa_gjahr
       and blart in so_DocTypes
       and monat in so_monat.
    The report is taking too long and is eating up a lot of resources.
    Any helpful advice is highly appreciated. Thanks!

    Hi max,
    I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
        select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat in so_budat.
    I also tried accessing the table per day, but that didn't work either...
       while so_budat-low le so_budat-high.
         select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat eq so_budat-low.
         so_budat-low = so_budat-low + 1.
       endwhile.
    I think our BKPF table contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period?

  • Sometimes my computer takes too long to connect to a new website. I am running a pretty powerful work program at the same time; what is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I need to "clean out" the computer?

    Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... not too computer savvy. It is a MacBook Pro, OS X 10.6.8 (late 2010).

    Almost certainly none of the above!  Try each of the following in this order:
    Select 'Reset Safari' from the Safari menu.
    Close down Safari;  move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
    Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
    Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
              defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false

  • RPURMP00 program takes too long

    Hi Guys,
    Need some help on this one, guys. I'm not getting anywhere with this issue.
    I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run), and even running it in test mode for 1 employee takes too long.
    I ran this in the background during off hours, but it takes 19,000+ seconds and then cancels.
    The long text message is "No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844" and "Job cancelled after system exception ERROR_MESSAGE".
    I checked the program, found a nested loop within it (include RPURMP02), and decided to debug it with a breakpoint.
    It short dumped, and here are the ST22 message and source code extract.
          ----- Message -----
    "Time limit exceeded."
    "The program "RPURMP00" has exceeded the maximum permitted runtime without
    interruption and has therefore been terminated."
          ----- Source code extract -----
    Include RPURMP02
    *&---------------------------------------------------------------------*
    *&      Form  get_advice_info
    *&---------------------------------------------------------------------*
    *       text
    *----------------------------------------------------------------------*
    *  -->  p1        text
    *  <--  p2        text
    *----------------------------------------------------------------------*
    FORM get_advice_info .

    * get information for advice form only if vendor sub-group and
    * employee detail is maintained
      IF ( NOT t51rh-lifsg IS INITIAL ) AND
         ( NOT t51rh-hrper IS INITIAL ).

    *   get remittance items employee number
        SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
    *     get payroll seqno determined by PERNR and RDATN
    >>>>>   SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
                                  AND rdatn = t51r5-rdatn
                                  ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
          EXIT.
        ENDSELECT.
    Has anyone ever come across this situation? Any input from anyone on this?
    Regards.
    CJ

    Hi,
    What is your SAP version?
    Have you checked whether there are any OSS Notes on this performance issue?
    Regards,
    Atish

  • Query designer takes too long to save a query

    Hi dear SDN friends.
    I'm working with Query Designer in BI 7 and sometimes it takes too long to save a query, about ten minutes. Sometimes it never finishes saving, and other times it saves the same query in 1 minute.
    Can anybody please give advice about this behavior of the Query Designer?
    We have recently updated BI to SP18. In Query Designer I have SP5, revision 529.
    Best regards,
    Karim Reyes

    Hello Karim,
    I would suggest testing this again in the latest Frontend Patch available (FEP 602).  In FEPs 600, 601, & 602 there were some performance and stability improvements made which may correct your issue.  If the issue persists, I would suggest then opening a Customer Message via Service Marketplace.
    It can be downloaded from:
        http://service.sap.com/swdc
    → Download
    → Support Packages and Patches
    → Entry by Application Group
    → SAP Frontend Components
    → BI ADDON FOR SAPGUI
    → BI 7.0 ADDON FOR SAPGUI 7.10
    → BI 7.0 ADDON FOR SAPGUI 7.10
    → Win32
    See SAP Note 1085218 for planned FEP releases.
    I hope that helps.
    Regards,
    Tanner Spaulding
    SAP NetWeaver RIG Americas, BI

  • Quantity conversion takes too long

    Dear Gurus,
    I'm having a problem with query execution time when I convert material quantities into KG.
    I have done all the steps to set up material conversion with reference InfoObject 0MATERIAL, using dynamic determination of the conversion factor (from central units of measurement (T006) if available, otherwise from the reference InfoObject).
    With these settings the query takes too long to execute because of the large number of material codes. If I remove this conversion, the query executes very fast.
    Any ideas? Do I have to create an index on the UOM0MATE ODS? What am I missing here?
    Regards,
    Panos

    Hi Panos,
    I had the same issue, but it's solved for me now. I tried the same approach you did, creating a secondary index on the active table of the DSO. The only difference is that I included all the SID fields in the index.
    Did you mark your index as unique? Also make sure that the index really is created on the DB.
    If performance still doesn't improve, check the statistics in RSRT to see whether the unit conversion really is the problem.
    Kindly regards,
    Matthias
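    For reference, the "make sure that the index really is created on the DB" check can be run directly against an Oracle-based BW system; a minimal sketch (the active-table name follows the usual /BIC/A<ODS>00 convention and is an assumption here):
        -- Verify the secondary index exists and is VALID on the DSO active table.
        SELECT index_name, uniqueness, status
        FROM   dba_indexes
        WHERE  table_name = '/BIC/AUOM0MATE00';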

  • Unlike IE, when the back button is pressed it takes too long. Please do something about that. Thanks.

    Unlike IE, when the back button is pressed it takes too long. Please do something about that. I like Firefox and I use the back button often.
    Thanks

    In order to be able to find the correct solution to your problem, we require some more non-personal information from you. Please do the following:
    *Click the Firefox button at the top left, then click the ''Help'' menu and select ''Troubleshooting Information'' from the submenu. If you don't have a Firefox button, click the Help menu at the top and select ''Troubleshooting Information'' from the menu.
    Now, a new tab containing your troubleshooting information should open.
    *At the top of the page, you should see a button that says "Copy text to clipboard". Click it.
    *Now, go back to your forum post and click inside the reply box. Press Ctrl+V to paste all the information you copied into the forum post.
    If you need further information about the Troubleshooting information page, please read the article [[Use the Troubleshooting Information page to help fix Firefox issues]].
    Thanks in advance for your help!

  • Update statement takes too long to run

    Hello,
    I am running this simple update statement, but it takes too long. It had been running for 16 hours when I cancelled it, and it still was not finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K of them. If I add ROWNUM < 20 to the update statement, it works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
    UPDATE DEV_OCS.DOCMETA IPM
    SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
                             FROM CAP.ESS_LOOKUP@<dblink> LKP
                             WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1
                               AND IPM.XIPMSYS_APP_ID = 2)
    WHERE IPM.XIPMSYS_APP_ID = 2;
    Thanks,
    Ilya
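    A common rewrite for this pattern is a MERGE, which reads the remote table once instead of once per target row; a sketch using the column names from the post (the database link name is elided, as in the original):
        MERGE INTO dev_ocs.docmeta ipm
        USING (SELECT DISTINCT doc_num, doc_status
               FROM cap.ess_lookup@<dblink>) lkp   -- pulled across the link once
        ON (lkp.doc_num = ipm.xipm_app_2_1 AND ipm.xipmsys_app_id = 2)
        WHEN MATCHED THEN
          UPDATE SET ipm.xipm_app_2_17 = lkp.doc_status;
        -- Unlike the original UPDATE, rows with no lookup match keep their current
        -- value instead of being set to NULL; and if one DOC_NUM maps to several
        -- statuses, MERGE raises ORA-30926 where the subquery raised ORA-01427.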

    matthew_morris wrote:
    In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.

    Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behavior. For example:
    {code}
    SQL> set linesize 132
    SQL> explain plan for
    2 update emp e
    3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL>
    {code}
    As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
    {code}
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3568118945
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
    | 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
    | 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    |* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
    | 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    6 - filter("T"."DEPTNO"=:B1)
    Remote SQL Information (identified by operation id):
    PLAN_TABLE_OUTPUT
    5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
    25 rows selected.
    SQL>
    {code}
    I know the materialize hint is not documented, but I don't know any other way besides splitting the statement in two to materialize it.
    SY.

  • Report takes too long

    Hi!
    I am in trouble. The following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
    This query returns 32,000 records. When I run it in Reports 10g, it takes 10 to 20 minutes to generate the report.
    How can I optimize it?

    Hi Waqas Attari,
    Please study and try this: When your query takes too long ...
    Hope it helps....
    Regards,
    Abdetu...
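    The first thing that linked thread asks for is the query's execution plan; a minimal way to capture one (with the &wc lexical parameter already expanded into the query text):
        EXPLAIN PLAN FOR
          SELECT /* the full report query from above, with &wc expanded */ ... ;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);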

  • Web application deployment takes too long?

    Hi All,
    We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish, which seems too long. Here is the output from one of the two managed servers' system logs. Could anyone tell me whether this is normal or not? If not, how can I improve it?
    Thanks in advance,
    John
    ####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
    ####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>
    ####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>
    ####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>
    ####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>
    ####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>
    ####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>

    Hi John,
    There may be some circumstances like this when there are many files in the WEB-INF folder and the JSPs don't use TLDs.
    I don't think a 1-hour deployment is normal; it should be much faster.
    Since you are using 10.3.5, I suggest you install the corresponding patch:
    1. Download patch 10118941 (p10118941_1035_Generic.zip).
    2. Uncompress the file p10118941_1035_Generic.zip
    3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
    4. Rename the file patch-catalog_XXXXX.xml into patch-catalog.xml .
    5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
    6. Select "Work Offline" mode.
    7. Go to File->Preferences, and select "Patch Download Directory".
    8. Click "Manage Patches" on the right panel.
    9. You will see the patch in the panel below (Downloaded Patches)
    10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
    11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
    12. Restart servers.
    Hope this helps.
    Thanks,
    Cris
