Log "Failed to obtain lock" msgs for baseline

Hi All,
Presently I have a system for logging baseline success and failure messages through a Java program. But if the baseline fails to obtain the lock before a run (in cases where the lock may not have been released during the last run), it logs that run as successfully completed as well. I therefore need a "failure to obtain lock" message, or just a regular failure message as in the other cases, so that such issues can be tracked. I would like to know where to put the check for this and what the check should be. Thanks a lot.
Regards,
Anuj

The common pattern here is to add code to the else branch of the BaselineUpdate script so that it does something extra with the failure to get the lock. For example, other threads in this forum have mentioned using a mailer class to generate email messages. I have also seen implementations add a wait/retry function.
Here is an example mailer modification:
<script id="BaselineUpdate">
<log-dir>./logs/provisioned_scripts</log-dir>
<provisioned-script-command>./control/baseline_update.sh</provisioned-script-command>
<bean-shell-script>
<![CDATA[
    log.info("Starting baseline update script.");
    // obtain lock
    if (LockManager.acquireLock("update_lock")) {
      if (LockManager.acquireLock("baseline_data_ready")) {
... code snipped ...
        } else {
log.warning("Baseline data not ready for processing.");
Mailer.sendSimpleMsg("Baseline update failed", "Failed to obtain baseline_data_ready lock - means that the incoming data is not ready.");
// release lock
LockManager.releaseLock("update_lock");
log.info("Baseline update script finished.");
} else {
log.warning("Failed to obtain lock.");
Mailer.sendSimpleMsg("Baseline update failed", "Failed to obtain update_lock - may mean that another baseline is running or stalled.");

Similar Messages

  • Build and Capture task sequence fails at 'Prepare ConfigMgr Client for Capture' step

    The scenario is as follows:
    Task Sequence is a Build and Capture of Windows 7 SP1 x64 Enterprise
    There are 17 Applications installed split into 2 'Install Application' steps due to the 9 item limit
    129 Software Updates are installed
    SCCM 2012 RTM
    I have built and captured on this system previously however there were no 'applications', just traditional SCCM 'packages'
    All task sequence steps run successfully until it gets to the 'Prepare ConfigMgr Client for Capture' step and it fails here. Here are the relevant parts of the SMSTS.log file:
    <![LOG[No certificates to delete]LOG]!><time="15:30:12.531-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:1013">
    <![LOG[Deleting Client properties from file C:\Windows\SMSCFG.INI succeeded.]LOG]!><time="15:30:12.531-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:922">
    <![LOG[Reseting the Trusted Root Key successful]LOG]!><time="15:30:12.531-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:1088">
    <![LOG[Deleting instance of 'CCM_Client' successful]LOG]!><time="15:30:12.531-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:170">
    <![LOG[Successfully reset Registration status flag to "not registered"]LOG]!><time="15:30:12.531-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:309">
    <![LOG[Successfully disabled provisioning mode.]LOG]!><time="15:30:12.546-660" date="11-13-2012" component="PrepareSMSClient" context="" type="1" thread="2880" file="preparesmsclient.cpp:1273">
    <![LOG[Start to cleanup TS policy]LOG]!><time="15:30:12.546-660" date="11-13-2012" component="PrepareSMSClient" context="" type="0" thread="2880" file="utils.cpp:2773">
    <![LOG[getPointer()->ExecQuery( BString(L"WQL"), BString(pszQuery), lFlags, pContext, ppEnum ), HRESULT=ffffffff (e:\nts_sccm_release\sms\framework\core\ccmcore\wminamespace.cpp,389)]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient"
    context="" type="0" thread="2880" file="wminamespace.cpp:389">
    <![LOG[ns.Query(sQuery, &spEnum), HRESULT=ffffffff (e:\nts_sccm_release\sms\framework\tscore\utils.cpp,2800)]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="0" thread="2880" file="utils.cpp:2800">
    <![LOG[Wmi query 'select * from CCM_Policy where PolicySource = 'CcmTaskSequence'' failed, hr=0xffffffff]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="3" thread="2880" file="utils.cpp:2800">
    <![LOG[End TS policy cleanup]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="0" thread="2880" file="utils.cpp:2821">
    <![LOG[TS::Utility::CleanupPolicyEx(false), HRESULT=ffffffff (e:\nts_sccm_release\sms\client\osdeployment\preparesmsclient\preparesmsclient.cpp,457)]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="0" thread="2880"
    file="preparesmsclient.cpp:457">
    <![LOG[Failed to delete policies compiled by TaskSequence (0xffffffff)]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="3" thread="2880" file="preparesmsclient.cpp:457">
    <![LOG[Failed to prepare SMS Client for capture, hr=ffffffff]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="3" thread="2880" file="preparesmsclient.cpp:472">
    <![LOG[pCmd->Execute(), HRESULT=ffffffff (e:\nts_sccm_release\sms\client\osdeployment\preparesmsclient\main.cpp,136)]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="0" thread="2880" file="main.cpp:136">
    <![LOG[Failed to prepare SMS Client for capture, hr=ffffffff]LOG]!><time="15:35:21.000-660" date="11-13-2012" component="PrepareSMSClient" context="" type="3" thread="2880" file="main.cpp:136">
    <![LOG[Process completed with exit code 4294967295]LOG]!><time="15:35:21.015-660" date="11-13-2012" component="TSManager" context="" type="1" thread="2152" file="commandline.cpp:1098">
    <![LOG[!--------------------------------------------------------------------------------------------!]LOG]!><time="15:35:21.015-660" date="11-13-2012" component="TSManager" context="" type="1" thread="2152" file="instruction.cxx:3011">
    <![LOG[Failed to run the action: Prepare ConfigMgr Client for Capture. 
    Unknown error (Error: FFFFFFFF; Source: Unknown)]LOG]!><time="15:35:21.046-660" date="11-13-2012" component="TSManager" context="" type="3" thread="2152" file="instruction.cxx:3102">
    It looks like there is a similar issue over here (http://www.windows-noob.com/forums/index.php?/topic/5906-task-sequence-error-0xffffffff/) that was resolved by removing the Application Installations one by one until a problematic one was found
    and removed - however I would rather avoid this very time consuming process if possible.
    I have also tested this using 'capture media' in the full OS and that also fails at the same step with the same errors in the smsts.log.
    The only workaround I have for now is to let the task sequence fail at the 'Prepare ConfigMgr Client for Capture' step, then I uninstall the SCCM client using 'ccmsetup.exe /uninstall' and use the 'capture media' method - obviously with no SCCM client installed,
    it skips the 'Prepare ConfigMgr Client for Capture' step and goes straight to the 'Prepare OS' step and then to the actual capture to the WIM step.
    Anybody else seeing this or have any suggestions?
    My Microsoft Core Infrastructure & Systems Management blog -
    blog.danovich.com.au

    It looks like there is a similar issue over here (http://www.windows-noob.com/forums/index.php?/topic/5906-task-sequence-error-0xffffffff/) that was resolved by removing the Application Installations one by one until a problematic one was found
    and removed - however I would rather avoid this very time consuming process if possible.
    You don't have to test it 18 times....
    Just test with 50% of the applications first; if it fails, test with 25%, and keep halving like that...
    You should be able to find the application within 4-5 tests...
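    To see why the halving approach needs only about log2(17), i.e. 4-5 rounds, here is a purely illustrative Java sketch with made-up application names (it has nothing to do with ConfigMgr itself; each "test" stands for one build-and-capture run with a subset of the applications):
    import java.util.ArrayList;
    import java.util.List;

    public class FindBadApplication {
        // A "test" = build and capture with only this subset of applications installed;
        // it fails exactly when the subset contains the problematic application.
        static boolean buildFails(List<String> subset, String badApp) {
            return subset.contains(badApp);
        }

        public static void main(String[] args) {
            List<String> candidates = new ArrayList<>();
            for (int i = 1; i <= 17; i++) candidates.add("App" + i); // 17 applications, as in the thread
            String badApp = "App13";                                  // made-up culprit

            int tests = 0;
            while (candidates.size() > 1) {
                List<String> firstHalf = new ArrayList<>(candidates.subList(0, candidates.size() / 2));
                List<String> secondHalf = new ArrayList<>(candidates.subList(candidates.size() / 2, candidates.size()));
                tests++;
                candidates = buildFails(firstHalf, badApp) ? firstHalf : secondHalf;
            }
            System.out.println("Found " + candidates.get(0) + " in " + tests + " tests");
        }
    }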
    Ronni Pedersen | Microsoft MVP - ConfigMgr | Blogs:
    http://www.ronnipedersen.com/ and SCUG.dk/ | Twitter
    @ronnipedersen

  • Failed to activate authorization check for user SAPSYS

    Hi Experts
    I am trying to run SDCC; it was throwing a TIME_OUT error. I have increased the work process runtime.
    Now I am getting an error: "Failed to activate authorization check for user SAPSYS".
    Please help me to solve this issue.
    Regards
    Venkat

    Hi, Mr. Joe Bo.
    Thanx for your reply. We are using ECC6 (HP Unix with Oracle)
    Basis Patch - 15, Kernel 159
    I have seen the note, but it shows CCMS method definition settings. In my case we are yet to go live and have not made any settings from SAP; they are planning to run a session for the go-live. When I run SDCC I get an error in the system log: "Failed to activate authorization check for user SAPSYS".
    Thanks & Regards
    Venkatesan J

  • Oracle Service Bus 10gR3 - Failed to obtain WLS Edit lock

    My question is, how do I release this lock?
    The gory details follow ...
    Oracle Service Bus 10gR3 on Windows 2003 Enterprise.
    I am following Tutorial 1. (Sheesh, I can't even get that right!)
    Around page 3-28 - "To test the Routing of the Loan Application ManagerLoanReviewService".
    Point 1 says to start the server. Without thinking, I did just that.
    The problem is, it was already running, otherwise, I would not have been using the sbconsole to do the editing.
    Of course, it detected it was already running and didn't start again.
    Problem is, after the server shuts down, the command files shutdown PointBase.
    So, after a mistaken startup, I ended up with a server with no database underneath it.
    Fine, I restarted it.
    Now, using the activate session ("sbconsole/sbconsole.portal?_nfpb=true&_windowLabel=ViewChangesPortlet&ViewChangesPortlet_actionOverride=%2Fchangemgmt%2FSessionActivate&_pageLabel=ChangeManagement&ViewChangesPortletactivate=activate")
    I cannot activate my session, as it says the following :-
    Failed to obtain WLS Edit lock; it is currently held by user weblogic. This indicates that you have either started a WLS change and forgotten to activate it, or another user is performing WLS changes which have yet to be activated. The WLS Edit lock can be released by logging into WLS console and either releasing the lock or activating the pending WLS changes.
    I show outstanding sessions; there is only one.
    I switch to that session.
    I still cannot activate this session.
    I have tried re-starting the AdminServer.
    It talks about releasing the lock in the WLS Admin console. Where? How?
    Thanks in advance.
    ...Lyall
    PS. I gave up on this and simply blew the whole lot away and started again (returned to post installation and re-started the entire tutorial). However, it would be useful to know how to fix this properly, if ever I end up in the same scenario again.

    Although "Undo all changes" might work in some cases, it can happen that some lock files still remain and then even the AdminServer cannot be started.
    The lock files are usually located in the domain directories.
    For example
    Domain demo:
    $BEA_HOME/user_projects/domains/demo/edit.lok or $BEA_HOME/user_projects/domains/demo/weblogic_eval.lck
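    If you just want to check whether a stale lock file is what is blocking you, a small sketch along these lines can report what it finds (the domain path is an assumption based on the example above; point it at your own $BEA_HOME and domain):
    import java.io.File;

    public class FindStaleLockFiles {
        public static void main(String[] args) {
            // assumption: adjust to your $BEA_HOME/user_projects/domains/<domain>
            String domainDir = "/opt/bea/user_projects/domains/demo";
            String[] lockFiles = { "edit.lok", "weblogic_eval.lck" };
            for (String name : lockFiles) {
                File f = new File(domainDir, name);
                if (f.exists()) {
                    System.out.println("Found " + f.getAbsolutePath()
                            + " (last modified " + new java.util.Date(f.lastModified()) + ")");
                }
            }
        }
    }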
    --olaf

  • Determination of file status for /oracle/SID/102_64/rdbms/admin/log failed

    Hello all,
    When using brtools, the following error pops up. I'd appreciate it if you could share some hints. Thanks.
    > brtools
    BR0651I BRTOOLS 7.00 (43)
    BR0252E Function stat() failed for '/oracle/QAA/102_64/rdbms/admin/log' at location BrFileStatGet-1
    BR0253E errno 13: Permission denied
    BR0273E Determination of file status for /oracle/QAA/102_64/rdbms/admin/log failed
    BR0280I BRTOOLS time stamp: 2010-05-07 14.20.41
    BR0654I BRTOOLS terminated with errors
    Kind regards
    Raymond

    >
    Raymond Yuan wrote:
    > BR0252E Function stat() failed for '/oracle/QAA/102_64/rdbms/admin/log' at location BrFileStatGet-1
    > BR0253E errno 13: Permission denied
    >
    Hello,
    now this message is pretty obvious. So the needed information would be:
    ls -ld /oracle/QAA/102_64/rdbms/admin
    ls -ld /oracle/QAA/102_64/rdbms/admin/log
    ls -ld brtools
    id
    and then compare if the user executing brtools has appropriate rights.
    Most likely you did not run saproot.sh so brtools might not be set suid to oraqaa
    and you are executing as qaaadm.
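    If you prefer to script the same comparison, here is a minimal sketch (assuming a POSIX filesystem; the path is the one from the error message) that reports whether the current user can actually reach and read the directory:
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermissions;

    public class CheckLogDirAccess {
        public static void main(String[] args) throws IOException {
            Path dir = Paths.get("/oracle/QAA/102_64/rdbms/admin/log");
            System.out.println("running as: " + System.getProperty("user.name"));
            System.out.println("exists:     " + Files.exists(dir));
            System.out.println("readable:   " + Files.isReadable(dir)); // false corresponds to errno 13
            if (Files.exists(dir)) {
                System.out.println("owner:      " + Files.getOwner(dir));
                System.out.println("perms:      " + PosixFilePermissions.toString(
                        Files.getPosixFilePermissions(dir)));
            }
        }
    }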
    Best regards
    Volker

  • TRANSFER failed!.Error: Error obtaining distribution information for asset_

    I am getting the following error while running the Asset Transfer API in R11:
    TRANSFER failed!.
    Error: Error obtaining distribution information for asset_id &ASSET_ID.
    I want to transfer the asset location, and the code is as follows:
    l_asset_dist_tbl(1).distribution_id := 2385128;
    l_asset_dist_tbl(1).transaction_units := -1;
    l_asset_dist_tbl(2).transaction_units := 1;
    l_asset_dist_tbl(2).assigned_to := null;
    l_asset_dist_tbl(2).expense_ccid := null;
    Any idea is highly appreciated.

    Below is the code which I used to transfer the asset. I used the same code mentioned in Metalink.
    declare
    l_return_status varchar2(1);
    l_msg_count number:= 0;
    l_msg_data varchar2(4000);
    l_trans_rec fa_api_types.trans_rec_type;
    l_asset_hdr_rec fa_api_types.asset_hdr_rec_type;
    l_asset_dist_tbl fa_api_types.asset_dist_tbl_type;
    temp_str varchar2(512);
    begin
    --fnd_profile.put('PRINT_DEBUG', 'Y');
    dbms_output.enable(1000000);
    fa_srvr_msg.init_server_message;
    fa_debug_pkg.initialize;
    -- fill in asset information
    l_asset_hdr_rec.asset_id := 2000068;
    l_asset_hdr_rec.book_type_code := 'US ASSET SERIAL';
    -- transaction date must be filled in if performing
    -- prior period transfer
    --l_trans_rec.transaction_date_entered :=to_date('01-JAN-1999 10:54:22','dd-mon-yyyy hh24:mi:ss');
    l_asset_dist_tbl.delete;
    /* fill in distribution data for existing distribution lines
    affected by this transfer txn. Note: You need to fill in
    only affected distribution lines.
    For source distribution, you must fill in either existing
    distribution id or 2 columns(expense_ccid,location_ccid) or
    3-tuple columns(assigned_to,expense_ccid,and location_ccid)
    depending on the makeup of the particular distribution
    of the asset. */
    --l_asset_dist_tbl(1).distribution_id := 108422;
    --l_asset_dist_tbl(1).transaction_units := -1;
    l_asset_dist_tbl(1).transaction_units := -1;
    l_asset_dist_tbl(1).expense_ccid :=108422;
    /* either above 2 lines or below 4 lines must be provided
    for source distribution:
    l_asset_dist_tbl(1).transaction_units := -2;
    l_asset_dist_tbl(1).assigned_to := 11;
    l_asset_dist_tbl(1).expense_ccid :=15338;
    l_asset_dist_tbl(1).location_ccid := 3; */
    -- fill in dist info for destination distribution
    l_asset_dist_tbl(2).transaction_units := 1;
    --l_asset_dist_tbl(2).assigned_to := NULL;
    l_asset_dist_tbl(2).expense_ccid :=109260;
    --l_asset_dist_tbl(2).location_ccid := 3;
    --l_asset_dist_tbl(3).transaction_units := 1;
    --l_asset_dist_tbl(3).assigned_to := 10;
    --l_asset_dist_tbl(3).expense_ccid := 24281;
    --l_asset_dist_tbl(3).location_ccid := 3;
    l_trans_rec.who_info.last_updated_by := 25728;--FND_GLOBAL.USER_ID;
    l_trans_rec.who_info.last_update_login := 25728; --FND_GLOBAL.LOGIN_ID;
    FA_TRANSFER_PUB.do_transfer(
    p_api_version => 1.0,
    p_init_msg_list => FND_API.G_FALSE,
    p_commit => FND_API.G_FALSE,
    p_validation_level =>FND_API.G_VALID_LEVEL_FULL,
    p_calling_fn => NULL,
    x_return_status => l_return_status,
    x_msg_count => l_msg_count,
    x_msg_data => l_msg_data,
    px_trans_rec => l_trans_rec,
    px_asset_hdr_rec => l_asset_hdr_rec,
    px_asset_dist_tbl => l_asset_dist_tbl);
    if (l_return_status != FND_API.G_RET_STS_SUCCESS) then
    dbms_output.put_line('TRANSFER failed!.');
    l_msg_count := fnd_msg_pub.count_msg;
    if (l_msg_count > 0) then
    temp_str := substr(fnd_msg_pub.get(fnd_msg_pub.G_FIRST,
    fnd_api.G_FALSE),1,512);
    dbms_output.put_line('Error: '||temp_str);
    for I in 1..(l_msg_count -1) loop
    temp_str :=
    substr(fnd_msg_pub.get(fnd_msg_pub.G_NEXT,
    fnd_api.G_FALSE),1,512);
    dbms_output.put_line('Error: '||temp_str);
    end loop;
    end if;
    else
    dbms_output.put_line('TRANSFER completed successfully!');
    dbms_output.put_line('THID = ' ||
    to_char(l_trans_rec.transaction_header_id));
    end if;
    fnd_msg_pub.delete_msg();
    end;
    Thanks

  • Problem with EJB and JMS - Failed to obtain/create connection

    hello ejb and jms programmers,
    My problem is that my topic MDB keeps on retrieving the same message when there is a database connection failure. Maybe somebody could help me prevent it from retrieving the same data?
    Given:
    - I purposely turned off the PointBase database because I'm testing my error handling.
    - I'm using SJSAS 8 as my application server.
    - My message-driven bean is of topic type.
    - I'm using CMP for my entity bean.
    Here is the scenario of what's happening - step by step:
    1. A separate application publishes a message to JMS queue server
    2. My MDB retrieves this message, does some processing, then inserts a record (transaction history) in my database
    3. But my db is turned off or down
    4. My MDB sends a successful processing reply to the JMS queue server
    5. Then I noticed that my server.log keeps on growing, so when I opened it, I saw the record was not inserted and the stack trace below was printed: "RAR5117 : Failed to obtain/create connection. Reason : javax.transaction.SystemException" (more complete stack trace below)
    6. I understand the cause of the stack trace is that the DB is turned off. But what I don't understand is that my MDB keeps on reading the same message. Since my MDB is of topic type, isn't a topic MDB supposed to read a message only once?
    So my questions are:
    1. How do I handle the database insert error?
    2. How can I stop my MDB from processing the same message?
    3. Any better suggestions?
    Thank you in advance :)
    leigh
    *** more complete stack trace ***
    [#|2005-01-09T15:35:57.097+0800|WARNING|sun-appserver-pe8.0.0_01|javax.enterprise.system.core.transaction|_ThreadID=17;|JTS5041: The resource manager is doing work outside a global transaction
    javax.transaction.xa.XAException
         at com.pointbase.xa.xaException.getXAException(Unknown Source)
         at com.pointbase.xa.xaConnectionResource.start(Unknown Source)
         at com.sun.gjc.spi.XAResourceImpl.start(XAResourceImpl.java:162)
    [#|2005-01-09T15:35:57.167+0800|SEVERE|sun-appserver-pe8.0.0_01|javax.enterprise.resource.resourceadapter|_ThreadID=17;|RAR5027:Unexpected exception in resource pooling
    javax.transaction.SystemException
         at com.sun.jts.jta.TransactionImpl.enlistResource(TransactionImpl.java:185)
         at com.sun.enterprise.distributedtx.J2EETransaction.enlistResource(J2EETransaction.java:360)
         at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.enlistResource(J2EETransactionManagerImpl.java:303)
         at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.enlistResource(J2EETransactionManagerOpt.java:115)
    [#|2005-01-09T15:35:57.177+0800|WARNING|sun-appserver-pe8.0.0_01|javax.enterprise.resource.resourceadapter|_ThreadID=17;|RAR5117 : Failed to obtain/create connection. Reason : javax.transaction.SystemException|#]
    [#|2005-01-09T15:35:57.227+0800|WARNING|sun-appserver-pe8.0.0_01|javax.enterprise.resource.resourceadapter|_ThreadID=17;|RAR5114 : Error allocating connection : [Error in allocating a connection. Cause: javax.transaction.SystemException]|#]
    [#|2005-01-09T15:35:57.237+0800|SEVERE|sun-appserver-pe8.0.0_01|javax.enterprise.system.container.ejb|_ThreadID=17;|EJB5071: Some remote or transactional roll back exception occurred
    com.sun.jdo.api.persistence.support.JDODataStoreException: JDO77006: SQL exception: state = null, error code = 0.
    NestedException: java.sql.SQLException: Error in allocating a connection. Cause: javax.transaction.SystemException
    FailedObjectArray: [[email protected]5ac]
         at com.sun.jdo.spi.persistence.support.sqlstore.impl.TransactionImpl.getConnectionInternal(TransactionImpl.java:1444)
         at com.sun.jdo.spi.persistence.support.sqlstore.impl.TransactionImpl.getConnection(TransactionImpl.java:1339)

    Hi annie,
    Wherever you are handling database transactions, you would not be able to create a Connection if the database is closed (I think you mentioned turning off the database). In that condition you should certainly throw a system-level exception and stop all processing, with some meaningful flow to indicate a failure (like displaying a message on the UI). Even network problems are handled by exceptions... so I don't see a reason why you didn't wrap it in the first place.
    Anyway, try handling specific exceptions rather than the general Exception... this will give you a better idea of what to do in case of an exception.
    Yes, I know this. I am practicing this in my non-J2EE server applications. But in the J2EE app I'm making, I just pass the DB URL in the descriptor and the app server automatically creates the connection for my app. So where would I put the exception handling?
    2. how can I stop my MDB from processing the same message?
    Guaranteed delivery is not supposed to stop processing. It will continue to process the message after certain intervals till the message is delivered. You shouldn't deliver it at all, if you are able to detect that the database is off.
    The problem here is that my MDB automatically retrieves the message from the JMS queue server. I'm not the one retrieving the messages manually.
    My assumed behavior of a topic MDB is that once a certain MDB retrieves a message, it will not retrieve the same message anymore.
    thank you in advance.
    leigh
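    A rough sketch of the first suggestion for an EJB 2.x-style MDB (class name and log text are made up): catch the data-store failure inside onMessage() instead of letting it propagate or calling setRollbackOnly(), because an uncaught exception or a rolled-back transaction is exactly what makes the container redeliver the message. Note this is only enough if the failed resource enlistment has not already marked a container-managed transaction rollback-only; in that case you may need bean-managed transactions or to park the message in an error store for later replay.
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    public class TransactionHistoryMDB implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
        public void ejbCreate() { }
        public void ejbRemove() { }

        public void onMessage(Message msg) {
            try {
                // ... parse the message and insert the transaction-history record ...
            } catch (Exception e) { // e.g. the connection failure while the DB is down
                // Log it (or divert the message somewhere for later replay) instead of
                // rethrowing or calling ctx.setRollbackOnly(), so the container does not
                // treat the delivery as failed and push the same message again.
                System.err.println("Could not store transaction history: " + e);
            }
        }
    }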

  • [Help]: Failed to compute stored msgs & volume...

    [15/Jul/2003:15:43:41 +0200] scmail1 snmpcoll[338]: General Warning: Failed to get path info (No such file or directory).
    [15/Jul/2003:15:43:41 +0200] scmail1 snmpcoll[338]: General Warning: Failed to compute stored msgs & volume in (/app/server415/msg-posta-new/queue/messages) (No such file or directory).
    These are two rows of the default log of the Netscape Messaging Server STORE 4.15 Patch 7 (built Sep 11 2001). They are repeated once to three times a day.
    What do these messages mean? The services are on and the SMTP works well. But, sometimes, in the deferred queue thousands of "messages" are created in the subdirectory "Error-Handler". This is a result shown by sendmail -bp:
        121       Mailbox-Deliver    /app/server415/msg-posta-new/queue/
    704325      Error-Handler      /app/server415/msg-posta-new/queue/
    In the directory of the deferred queue I find a subdirectory called "Error-Handler" which contains:
    scmail1# ls -l
    total 371712
    ----------   1 mailsrv  nsgroup        0 Jul  8 06:41 __lock__
    -rw-rw----   1 mailsrv  nsgroup  80090311 Jul  8 08:35 env-20030708
    -rw-rw----   1 mailsrv  nsgroup  110102528 Jul  8 06:03 env-20030708.rq
    Each env file contains repeated rows such as these:
    Message-Id: HHEDV400.3EG
    Parent: 1
    Header-Size: 988
    Body-Size: 301285
    Header-Flush: 0
    Function: Error-Handler
    Control-Type: Mail
    Priority: 2
    Submitted-Date: Wed, 2 Jul 2003 14:40:16 +0200
    MIME-Encoding: 7BIT
    Host-From: 10.102.241.20 [xx.xxx.xxx.20]
    User-From: SMTP<[email protected]>
    Message-Size: 302275
    MTA-Hops: 1
    Received-From-MTA: dns; xxx.xxx.it (xx.xx.xxx.20)
    MAIL-Exts:
    RCPT-Exts:
    Trace: SMTP-Accept
    Trace: SMTP-Router
    Trace: Mailbox-Deliver
    Trace: Error-Handler
    Trace: Error-Handler
    Channel-To:
    Channel-To! SMTP <[email protected]>
    Account-To! da043900!scuole
    Account-To! ds043900!scuole
    Deliver-To: Mailbox da043900!scuole
    RCPT-Exts!
    Diagnostic-Code:
    Error: Mailbox-Deliver:QueuedTooLong (ReturnMail)
    Error-Text: The queue containing messages destined for Mailbox-Deliver
    Error-Text: was expired by the Postmaster of posta-new.it.
    Error-Token: mtalocal:DBT_SMTPDelQueuedTooLong_
    Error-Local-Text: The queue containing messages destined for Mailbox-Deliver
    Error-Local-Text: was expired by the Postmaster of posta-new.it.
    Error-Local-Text: The following recipients did not receive your message:
    Action: failed
    These messages are repeated and appended to the env file continually.
    It's a strange behaviour... I would appreciate any comment or suggestion on this problem.
    Thanks a lot
    Best Regards
    Marco Favero

    Now I am creating the Jar with the STORED method, and the size is exactly the same as the one generated by bpelc. But I still get the same exception. Please let me know what I am missing. Is running bpelc from my Java program the only way out?
    Please help.
    Meghana

  • Routing failed to locate next hop for ICMP from outside:10.60.30.111/1 to inside:10.89.30.41/0

    ASA 5505 Split tunneling stopped working when upgraded from 8.3(1) to 8.4(3).
    When a user was connecting to the old 8.3(1) appliance they could access all of our subnets: 10.60.0.0/16, 10.89.0.0/16, 10.33.0.0/16, 10.1.0.0/16
    but now they cannot and in the logs I can just see
    6          Oct 31 2012          08:17:59          110003          10.60.30.111          1          10.89.30.41          0          Routing failed to locate next hop for ICMP from outside:10.60.30.111/1 to inside:10.89.30.41/0
    Any hints? I have tried almost everything. The running configuration is:
    : Saved
    ASA Version 8.4(3)
    hostname asa
    names
    interface Ethernet0/0
    switchport access vlan 2
    interface Ethernet0/1
    interface Ethernet0/2
    interface Ethernet0/3
    interface Ethernet0/4
    interface Ethernet0/5
    interface Ethernet0/6
    interface Ethernet0/7
    interface Vlan1
    nameif inside
    security-level 100
    ip address 10.60.70.1 255.255.0.0
    interface Vlan2
    nameif outside
    security-level 0
    ip address 80.90.98.217 255.255.255.248
    ftp mode passive
    clock timezone GMT 0
    dns domain-lookup inside
    dns domain-lookup outside
    same-security-traffic permit intra-interface
    object network obj_any
    subnet 0.0.0.0 0.0.0.0
    object network NETWORK_OBJ_10.33.0.0_16
    subnet 10.33.0.0 255.255.0.0
    object network NETWORK_OBJ_10.60.0.0_16
    subnet 10.60.0.0 255.255.0.0
    object network NETWORK_OBJ_10.89.0.0_16
    subnet 10.89.0.0 255.255.0.0
    object network NETWORK_OBJ_10.1.0.0_16
    subnet 10.1.0.0 255.255.0.0
    object network tetPC
    host 10.60.10.1
    description test        
    object network NETWORK_OBJ_10.60.30.0_24
    subnet 10.60.30.0 255.255.255.0
    object network NETWORK_OBJ_10.60.30.64_26
    subnet 10.60.30.64 255.255.255.192
    object network SSH-server
    host 10.60.20.6
    object network SSH_public
    object network ftp_public
    host 80.90.98.218
    object network rdp
    host 10.60.10.4
    object network ftp_server
    host 10.60.20.2
    object network ssh_public
    host 80.90.98.218
    object service FTP
    service tcp destination eq 12
    object network NETWORK_OBJ_10.60.20.3
    host 10.60.20.3
    object network NETWORK_OBJ_10.60.40.192_26
    subnet 10.60.40.192 255.255.255.192
    object network NETWORK_OBJ_10.60.10.10
    host 10.60.10.10
    object network NETWORK_OBJ_10.60.20.2
    host 10.60.20.2
    object network NETWORK_OBJ_10.60.20.21
    host 10.60.20.21
    object network NETWORK_OBJ_10.60.20.4
    host 10.60.20.4
    object network NETWORK_OBJ_10.60.20.5
    host 10.60.20.5
    object network NETWORK_OBJ_10.60.20.6
    host 10.60.20.6
    object network NETWORK_OBJ_10.60.20.7
    host 10.60.20.7
    object network NETWORK_OBJ_10.60.20.29
    host 10.60.20.29
    object service port_tomcat
    service tcp source range 8080 8082
    object network TBSF
    subnet 172.16.252.0 255.255.255.0
    object network MailServer
    host 10.33.10.2
    description Mail Server
    object service HTTPS
    service tcp source eq https
    object network test
    object network access_web_mail
    host 10.60.50.251
    object network downtown_Interface_host
    host 10.60.50.1
    description downtown Interface Host
    object service Oracle_port
    service tcp source eq sqlnet
    object network NETWORK_OBJ_10.60.50.248_29
    subnet 10.60.50.248 255.255.255.248
    object network NETWORK_OBJ_10.60.50.1
    host 10.60.50.1
    object network NETWORK_OBJ_10.60.50.0_28
    subnet 10.60.50.0 255.255.255.240
    object network brisel
    subnet 10.191.191.0 255.255.255.0
    object network NETWORK_OBJ_10.191.191.0_24
    subnet 10.191.191.0 255.255.255.0
    object network NETWORK_OBJ_10.60.60.0_24
    subnet 10.60.60.0 255.255.255.0
    object-group service TCS_Service_Group
    description This Group of available Services is for TCS Clients
    service-object object port_tomcat
    object-group service HTTPS_ACCESS tcp
    port-object eq https
    object-group network DM_INLINE_NETWORK_1
    network-object 10.1.0.0 255.255.0.0
    network-object 10.33.0.0 255.255.0.0
    network-object 10.60.0.0 255.255.0.0
    network-object 10.89.0.0 255.255.0.0
    access-list outside_1_cryptomap extended permit ip 10.60.0.0 255.255.0.0 10.33.0.0 255.255.0.0
    access-list outside_2_cryptomap extended permit ip 10.60.0.0 255.255.0.0 10.89.0.0 255.255.0.0
    access-list outside_3_cryptomap extended permit ip 10.60.0.0 255.255.0.0 10.1.0.0 255.255.0.0
    access-list OUTSIDE_IN extended permit icmp any any time-exceeded
    access-list OUTSIDE_IN extended permit icmp any any unreachable
    access-list OUTSIDE_IN extended permit icmp any any echo-reply
    access-list OUTSIDE_IN extended permit icmp any any source-quench
    access-list OUTSIDE_IN extended permit tcp 194.2.20.0 255.255.255.0 host 80.90.98.220 eq smtp
    access-list OUTSIDE_IN extended permit tcp host 194.25.12.0 host 80.90.98.220 eq smtp
    access-list OUTSIDE_IN extended permit icmp host 80.90.98.222 host 80.90.98.217
    access-list OUTSIDE_IN extended permit tcp host 162.162.4.1 host 80.90.98.220 eq smtp
    access-list OUTSIDE_IN extended permit tcp host 98.85.125.2 host 80.90.98.221 eq ssh
    access-list OAKDCAcl standard permit 10.60.0.0 255.255.0.0
    access-list OAKDCAcl standard permit 10.33.0.0 255.255.0.0
    access-list OAKDCAcl remark backoffice
    access-list OAKDCAcl standard permit 10.89.0.0 255.255.0.0
    access-list OAKDCAcl remark maint
    access-list OAKDCAcl standard permit 10.1.0.0 255.255.0.0
    access-list osgd standard permit host 10.60.20.4
    access-list osgd standard permit host 10.60.20.5
    access-list osgd standard permit host 10.60.20.7
    access-list testOAK_splitTunnelAcl standard permit 10.60.0.0 255.255.0.0
    access-list snmp extended permit udp any eq snmptrap any
    access-list snmp extended permit udp any any eq snmp
    access-list downtown_splitTunnelAcl standard permit host 10.60.20.29
    access-list webMailACL standard permit host 10.33.10.2
    access-list HBSC standard permit host 10.60.30.107
    access-list HBSC standard deny 10.33.0.0 255.255.0.0
    access-list HBSC standard deny 10.89.0.0 255.255.0.0
    access-list outside_4_cryptomap extended permit ip 10.60.0.0 255.255.0.0 10.191.191.0 255.255.255.0
    access-list OAK-remote_splitTunnelAcl standard permit 10.1.0.0 255.255.0.0
    access-list OAK-remote_splitTunnelAcl standard permit 10.33.0.0 255.255.0.0
    access-list OAK-remote_splitTunnelAcl standard permit 10.60.0.0 255.255.0.0
    access-list OAK-remote_splitTunnelAcl standard permit 10.89.0.0 255.255.0.0
    pager lines 24
    logging enable
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    ip local pool OAKPRD_pool 10.60.30.110-10.60.30.150 mask 255.255.0.0
    ip local pool mail_sddress_pool 10.60.50.251-10.60.50.255 mask 255.255.0.0
    ip local pool test 10.60.50.1 mask 255.255.255.255
    ip local pool ipad 10.60.30.90-10.60.30.99 mask 255.255.0.0
    ip local pool TCS_pool 10.60.40.200-10.60.40.250 mask 255.255.255.0
    ip local pool OSGD_POOL 10.60.50.2-10.60.50.10 mask 255.255.0.0
    ip local pool OAK_pool 10.60.60.0-10.60.60.255 mask 255.255.0.0
    ip verify reverse-path interface inside
    ip verify reverse-path interface outside
    ip audit name ThreatDetection attack action alarm
    ip audit interface inside ThreatDetection
    ip audit interface outside ThreatDetection
    no failover
    icmp unreachable rate-limit 1 burst-size 1
    icmp permit any inside
    icmp permit any echo inside
    icmp permit any echo outside
    asdm history enable
    arp timeout 14400
    nat (inside,outside) source static NETWORK_OBJ_10.60.0.0_16 NETWORK_OBJ_10.60.0.0_16 destination static NETWORK_OBJ_10.33.0.0_16 NETWORK_OBJ_10.33.0.0_16
    nat (inside,outside) source static NETWORK_OBJ_10.60.0.0_16 NETWORK_OBJ_10.60.0.0_16 destination static NETWORK_OBJ_10.89.0.0_16 NETWORK_OBJ_10.89.0.0_16
    nat (inside,outside) source static NETWORK_OBJ_10.60.0.0_16 NETWORK_OBJ_10.60.0.0_16 destination static NETWORK_OBJ_10.1.0.0_16 NETWORK_OBJ_10.1.0.0_16
    nat (inside,outside) source static any any destination static NETWORK_OBJ_10.60.30.0_24 NETWORK_OBJ_10.60.30.0_24
    nat (inside,outside) source static any any destination static NETWORK_OBJ_10.60.30.64_26 NETWORK_OBJ_10.60.30.64_26
    nat (inside,outside) source static NETWORK_OBJ_10.60.20.29 NETWORK_OBJ_10.60.20.29 destination static NETWORK_OBJ_10.60.40.192_26 NETWORK_OBJ_10.60.40.192_26 service any port_tomcat
    nat (inside,outside) source static any any destination static NETWORK_OBJ_10.60.50.1 NETWORK_OBJ_10.60.50.1
    nat (inside,outside) source static MailServer MailServer destination static NETWORK_OBJ_10.60.50.248_29 NETWORK_OBJ_10.60.50.248_29
    nat (inside,outside) source static any any destination static NETWORK_OBJ_10.60.50.0_28 NETWORK_OBJ_10.60.50.0_28
    nat (inside,outside) source static NETWORK_OBJ_10.60.0.0_16 NETWORK_OBJ_10.60.0.0_16 destination static NETWORK_OBJ_10.191.191.0_24 NETWORK_OBJ_10.191.191.0_24
    nat (inside,outside) source static DM_INLINE_NETWORK_1 DM_INLINE_NETWORK_1 destination static NETWORK_OBJ_10.60.60.0_24 NETWORK_OBJ_10.60.60.0_24 no-proxy-arp route-lookup
    object network obj_any
    nat (inside,outside) dynamic interface
    route outside 0.0.0.0 0.0.0.0 80.90.98.222 1
    timeout xlate 3:00:00
    timeout pat-xlate 0:00:30
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    user-identity default-domain LOCAL
    http server enable
    http 192.168.1.0 255.255.255.0 inside
    http 10.60.10.10 255.255.255.255 inside
    http 10.33.30.33 255.255.255.255 inside
    http 10.60.30.33 255.255.255.255 inside
    snmp-server host inside 10.33.30.108 community ***** version 2c
    snmp-server host inside 10.89.70.30 community *****
    no snmp-server location
    no snmp-server contact
    snmp-server community *****
    snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
    crypto ipsec ikev1 transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
    crypto ipsec ikev1 transform-set ESP-DES-SHA esp-des esp-sha-hmac
    crypto ipsec ikev1 transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto ipsec ikev1 transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
    crypto ipsec ikev1 transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
    crypto ipsec ikev1 transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
    crypto ipsec ikev1 transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
    crypto ipsec ikev1 transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
    crypto ipsec ikev1 transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
    crypto ipsec ikev1 transform-set TRANS_ESP_3DES_SHA mode transport
    crypto ipsec ikev1 transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto ipsec ikev1 transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
    crypto ipsec ikev1 transform-set lux_trans_set esp-aes esp-sha-hmac
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set pfs group1
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set ikev1 transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5
    crypto map outside_map 1 match address outside_1_cryptomap
    crypto map outside_map 1 set peer 84.51.31.173
    crypto map outside_map 1 set ikev1 transform-set ESP-3DES-SHA
    crypto map outside_map 2 match address outside_2_cryptomap
    crypto map outside_map 2 set peer 98.85.125.2
    crypto map outside_map 2 set ikev1 transform-set ESP-3DES-SHA
    crypto map outside_map 3 match address outside_3_cryptomap
    crypto map outside_map 3 set peer 220.79.236.146
    crypto map outside_map 3 set ikev1 transform-set ESP-3DES-SHA
    crypto map outside_map 4 match address outside_4_cryptomap
    crypto map outside_map 4 set pfs
    crypto map outside_map 4 set peer 159.146.232.122
    crypto map outside_map 4 set ikev1 transform-set lux_trans_set
    crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
    crypto map outside_map interface outside
    crypto ikev1 enable outside
    crypto ikev1 policy 5
    authentication pre-share
    encryption 3des
    hash sha
    group 2
    lifetime 86400
    crypto ikev1 policy 20
    authentication pre-share
    encryption aes-256
    hash sha
    group 5
    lifetime 86400
    crypto ikev1 policy 30
    authentication pre-share
    encryption 3des
    hash sha
    group 2
    lifetime 28800
    crypto ikev1 policy 50
    authentication pre-share
    encryption aes
    hash sha
    group 1
    lifetime 86400
    crypto ikev1 policy 70
    authentication pre-share
    encryption aes
    hash sha
    group 5
    lifetime 86400
    telnet 10.60.10.10 255.255.255.255 inside
    telnet 10.60.10.1 255.255.255.255 inside
    telnet 10.60.10.5 255.255.255.255 inside
    telnet 10.60.30.33 255.255.255.255 inside
    telnet 10.33.30.33 255.255.255.255 inside
    telnet timeout 30
    ssh 10.60.10.5 255.255.255.255 inside
    ssh 10.60.10.10 255.255.255.255 inside
    ssh 10.60.10.3 255.255.255.255 inside
    ssh timeout 5
    console timeout 0
    dhcpd auto_config outside
    dhcpd dns 155.2.10.20 155.2.10.50 interface inside
    dhcpd auto_config outside interface inside
    threat-detection basic-threat
    threat-detection scanning-threat shun duration 3600
    threat-detection statistics
    threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200
    tftp-server inside 10.60.10.10 configs/config1
    webvpn
    group-policy testTG internal
    group-policy testTG attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol ikev1
    group-policy DefaultRAGroup_1 internal
    group-policy DefaultRAGroup_1 attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol l2tp-ipsec
    group-policy TcsTG internal
    group-policy TcsTG attributes
    vpn-idle-timeout 20
    vpn-session-timeout 120
    vpn-tunnel-protocol ikev1
    ipsec-udp disable
    ipsec-udp-port 10000
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value testOAK_splitTunnelAcl
    address-pools value TCS_pool
    group-policy downtown_interfaceTG internal
    group-policy downtown_interfaceTG attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol ikev1
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value downtown_splitTunnelAcl
    group-policy HBSCTG internal
    group-policy HBSCTG attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol ikev1
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value HBSC
    group-policy OSGD internal
    group-policy OSGD attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-session-timeout none
    vpn-tunnel-protocol ikev1
    group-lock value OSGD
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value testOAK_splitTunnelAcl
    group-policy OAKDC internal
    group-policy OAKDC attributes
    vpn-tunnel-protocol ikev1
    group-lock value OAKDC
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value OAKDCAcl
    intercept-dhcp 255.255.0.0 disable
    address-pools value OAKPRD_pool
    group-policy mailTG internal
    group-policy mailTG attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol ikev1
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value webMailACL
    group-policy OAK-remote internal
    group-policy OAK-remote attributes
    dns-server value 155.2.10.20 155.2.10.50
    vpn-tunnel-protocol ikev1
    split-tunnel-policy tunnelspecified
    split-tunnel-network-list value OAK-remote_splitTunnelAcl
    vpn-group-policy OAKDC
    service-type nas-prompt
    tunnel-group DefaultRAGroup general-attributes
    address-pool OAKPRD_pool
    address-pool ipad
    default-group-policy DefaultRAGroup_1
    tunnel-group DefaultRAGroup ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group 84.51.31.173 type ipsec-l2l
    tunnel-group 84.51.31.173 ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group 98.85.125.2 type ipsec-l2l
    tunnel-group 98.85.125.2 ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group 220.79.236.146 type ipsec-l2l
    tunnel-group 220.79.236.146 ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group OAKDC type remote-access
    tunnel-group OAKDC general-attributes
    address-pool OAKPRD_pool
    default-group-policy OAKDC
    tunnel-group OAKDC ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group TcsTG type remote-access
    tunnel-group TcsTG general-attributes
    address-pool TCS_pool
    default-group-policy TcsTG
    tunnel-group TcsTG ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group downtown_interfaceTG type remote-access
    tunnel-group downtown_interfaceTG general-attributes
    address-pool test
    default-group-policy downtown_interfaceTG
    tunnel-group downtown_interfaceTG ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group TunnelGroup1 type remote-access
    tunnel-group mailTG type remote-access
    tunnel-group mailTG general-attributes
    address-pool mail_sddress_pool
    default-group-policy mailTG
    tunnel-group mailTG ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group testTG type remote-access
    tunnel-group testTG general-attributes
    address-pool mail_sddress_pool
    default-group-policy testTG
    tunnel-group testTG ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group OSGD type remote-access
    tunnel-group OSGD general-attributes
    address-pool OSGD_POOL
    default-group-policy OSGD
    tunnel-group OSGD ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group HBSCTG type remote-access
    tunnel-group HBSCTG general-attributes
    address-pool OSGD_POOL
    default-group-policy HBSCTG
    tunnel-group HBSCTG ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group 159.146.232.122 type ipsec-l2l
    tunnel-group 159.146.232.122 ipsec-attributes
    ikev1 pre-shared-key *****
    tunnel-group OAK-remote type remote-access
    tunnel-group OAK-remote general-attributes
    address-pool OAK_pool
    default-group-policy OAK-remote
    tunnel-group OAK-remote ipsec-attributes
    ikev1 pre-shared-key *****
    policy-map global_policy
    prompt hostname context
    no call-home reporting anonymous
    hpm topN enable
    : end
    asdm history enable

    Dear Darko,
    The problem here is an overlap issue with the internal network.
    Since the VPN pool is:
    ip local pool OAKPRD_pool 10.60.30.110-10.60.30.150 mask 255.255.0.0
    And the local network is:
    interface Vlan1
         nameif inside
         security-level 100
         ip address 10.60.70.1 255.255.0.0
    So since you have some NAT rules telling the FW that 10.60.0.0/16 is connected to the inside, we need to change that and force it to know that 10.60.30.0/24 is actually reachable via the outside.
    On the other hand, yes, you could point it to the outside interface, but that is not good practice.
    Thanks.
    Portu.
    In case you do not have any further questions, please mark this post as answered.

  • Track & log failed jobs on Report Server 9.0.4

    Hi
    Oracle Reports Services 9.0.4 on Windows
    I'm trying to figure out how to log failed jobs, so I can automate it and send the information to developers.
    The problem is: how does showjobs display the job IDs together with the OHS log info?
    http://localhost:7777/reports/rwservlet/showjobs?
    I can collect the job IDs with "Terminated with error" from rwserver.trc,
    but how do I connect these jobs with the actual RDF/PDF entries in the Apache log
    (or other logs), for example, so I can track the failed requests with their respective parameters?
    Any ideas?
    thanks

    thanks desgard
    That info is totally great; I even found this link that may help others with Notification:
    http://www.oracle.com/technology/products/reports/apis/plugNotification/NOT_3.html
    But I don't want to send a msg for each run, and I would need to alter the URL format to use this feature in rwservlet,
    so that's not what I want. I want to filter the trace log (rwserver.trc) and extract the history of failed jobs for the past month or so.
    I can get the list of jobs, for example:
    "Job 104870 status is: Terminated with error:"
    How can I connect the job IDs with their URL requests?
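    For the "extract the history of failed jobs" part, here is a small sketch; the file name and the message format are taken from the lines quoted above, so the regex may need adjusting for your exact trace format. Correlating the extracted job IDs with the Apache/OHS access-log entries would still need a common key (for example the request timestamp), which this thread leaves open.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class FailedReportJobs {
        public static void main(String[] args) throws IOException {
            // matches lines such as: Job 104870 status is: Terminated with error:
            Pattern failed = Pattern.compile("Job\\s+(\\d+)\\s+status is:\\s+Terminated with error");
            List<String> lines = Files.readAllLines(Paths.get("rwserver.trc"));
            for (String line : lines) {
                Matcher m = failed.matcher(line);
                if (m.find()) {
                    System.out.println("Failed job id: " + m.group(1) + "  ->  " + line.trim());
                }
            }
        }
    }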

  • PC locks up for 5-10 min when clicking anywhere in Files...

    Hello,
    I have a client that recently had his user profile in XP Pro copied from a domain to a local profile. It appears that perhaps something got corrupted; I have read the instructions on removing the DW cache file and/or the User Configuration File.
    Now, when he starts DW version 6 and clicks anywhere in the Files panel, with any of his sites, DW locks up for 5-10 minutes. We can access those files manually over the network using Explorer just fine.
    I should mention that this user has multiple versions of Dreamweaver and CS/Photoshop on his PC.
    Two questions:
    1- Will doing either of these procedures require the user to have to recreate his Site Files?
    2- Are there any other procedures or items I should look at for this issue?
    Thank you!
    Johnathon

    Start with the basics... Are you installing from Software Update or have you downloaded the combo update and manually run it?
    I would recommend you download and run the combo, and not use Software Update.
    Prior to install, backup everything! Then double check that you have backed up everything! You've been lucky 6 times - don't wait for #7 to be unlucky and render your Mac inoperable before thinking of backup.
    Before you run the updater, power down and restart it. Run Disk util and repair permissions. Check the disk to see if there are any issues with it too. If so, repair these prior to updating (post here for more help as disk issues may not be as easy to repair as clicking "repair disk").
    Just let it do its thing. If it hangs for more than 30 mins I'd say restart it (it shouldn't take all that long really - hats off to you for having 12 hrs worth of patience!). If it fails again, come back and let us know, and we can help you through checking out some logs.

  • Trouble with iCal.  It locks up for minutes at a time.

    I can no longer use iCal. It is extremely slow and often locks up for minutes at a time. Has any one experienced this and is there a way to fix it?

    The next time you have the problem, note the exact times when it starts and ends: hour, minute, second.
    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left.
    Scroll back in the log to the time you noted above. Post any messages timestamped during that interval – the text, please, not a screenshot.
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into a message.
    Important: Some private information, such as your name, may appear in the log. Edit it out by search-and-replace in a text editor before posting.

  • Failed to obtain/create connection from connection pool after redeploy

    I have a web application (.war) that uses a jdbc connection pool. The application works fine, however after I redeploy it using the administration console I get "Failed to obtain/create connection from connection pool [ Datavision_Pool ]. Reason : null" followed by "Error allocating connection : [Error in allocating a connection. Cause: null]" from javax.enterprise.resource.resourceadapter and I am forced to restart the instance. I am running Sun Java System Application Server 9.1 (build b58g-fcs)
    using a connection pool to a Microsoft SQL 2000 database using inet software's JDBC drivers. I need to be able to redeploy applications without having to restart the instance. Any help is appreciated.

    I have turned on some additional diagnostics and found out some answers and a work-around, but I think that there may be a bug in the way JDBC connection pool classes are loaded. The actual error was a null pointer in the JDBC driver class in the perpareStatement method. The only line in this method is "return factory.createPreparedStatement( this , sql );" and the only possible NPE would be if the factory was null, which should be impossible because it is a static variable and it is initialized when the class is loaded. The problem occurs because we deploy the JDBC driver .jar file within our .war file, for use when a client doesn't have or want to use connection pooling. Apparently, the connection pool must have picked up some of these classes and when the .war was redeployed, the reference to the factory was lost for existing connections (not sure how). If I remove the JDBC .jar file from the .war, it works, but that wasn't an ideal solution, the other way to get it to work was to change the sun-web.xml file to have <class-loader delegate="true">. We previously had it set to false in version 8.1 because of interference with a different version of the apache Tiles classes, which has now been addressed in version 9.1.
    I still think there is an issue, because the connection pool should never use the application specific classloaders. Am I wrong to believe this?
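    As background for the classloader theory above, here is a minimal sketch (hypothetical jar path and class name, not the actual inet driver) showing that the same class loaded by two different classloaders yields two distinct Class objects and therefore two independent copies of its static state. It does not reproduce the exact NPE, but it illustrates why a pool that caches classes from an application classloader can end up looking at different static state than the redeployed application expects:
    import java.net.URL;
    import java.net.URLClassLoader;

    public class ClassLoaderStaticsDemo {
        public static void main(String[] args) throws Exception {
            URL jar = new URL("file:/tmp/driver.jar");                     // hypothetical jar
            ClassLoader a = new URLClassLoader(new URL[] { jar }, null);   // null parent: no delegation
            ClassLoader b = new URLClassLoader(new URL[] { jar }, null);
            Class<?> c1 = Class.forName("com.example.Driver", true, a);    // hypothetical class
            Class<?> c2 = Class.forName("com.example.Driver", true, b);
            // Same binary name, but two Class objects, each with its own static fields.
            System.out.println("same class object? " + (c1 == c2));        // prints false
        }
    }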

  • My friend's iPod touch 5th gen is locked somehow for 41 years and she does not have a backup. Is there a way to unlock the iPod without restoring it or losing the pictures and videos from the camera roll?

    My friend's iPod touch 5th gen is locked somehow for 41 years and she does not have a backup. Is there a way to unlock the iPod without restoring it or losing the pictures and videos from the camera roll? The applications can be redownloaded and she had no music. Is there any known bug about iPods locking on their own? No one seems to have tried to bypass the 4-digit code, and it is locked for 26 million minutes. So thank you if anyone has any information or help for us.

    Your friend may have placed the iPod next to an object that repeatedly pressed the screen. Under certain circumstances this can lead to what appears to the iPod touch to be failed attempts to access the system. After repeated failed attempts, the iPod increases the waiting time required for the password and may ultimately lock in the way you describe.
    It's been a while since this was posted, but this might be informative to someone else who finds themself in a similar predicament.

  • ORA-00349: failure obtaining block size for '+Z'  in Oracle XE

    Hello,
    I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" ( Oracle Database 10g Express Edition Release 10.2.0.1.0).
    When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Please let me know how to go about resolving this issue.
    Thank you.
    See below for detail:
    Connected.
    SQL> @?/sqlplus/admin/movelogs;
    SQL> Rem
    SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
    SQL> Rem
    SQL> Rem movelogs.sql
    SQL> Rem
    SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
    SQL> Rem
    SQL> Rem NAME
    SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
    SQL> Rem
    SQL> Rem DESCRIPTION
    SQL> Rem This script can be used to move online logs from old online
    log
    SQL> Rem location to Flash Recovery Area. It assumes that the database
    SQL> Rem instance is started with new Flash Recovery Area location.
    SQL> Rem
    SQL> Rem NOTES
    SQL> Rem For use to rename online logs after moving Flash Recovery
    Area.
    SQL> Rem The script can be executed using following command
    SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
    SQL> Rem
    SQL> Rem MODIFIED (MM/DD/YY)
    SQL> Rem banand 01/19/06 - Created
    SQL> Rem
    SQL>
    SQL> SET ECHO ON
    SQL> SET FEEDBACK 1
    SQL> SET NUMWIDTH 10
    SQL> SET LINESIZE 80
    SQL> SET TRIMSPOOL ON
    SQL> SET TAB OFF
    SQL> SET PAGESIZE 100
    SQL> declare
    2 cursor rlc is
    3 select group# grp, thread# thr, bytes/1024 bytes_k
    4 from v$log
    5 order by 1;
    6 stmt varchar2(2048);
    7 swtstmt varchar2(1024) := 'alter system switch logfile';
    8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
    9 begin
    10 for rlcRec in rlc loop
    11 stmt := 'alter database add logfile thread ' ||
    12 rlcRec.thr || ' size ' ||
    13 rlcRec.bytes_k || 'K';
    14 execute immediate stmt;
    15 begin
    16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
    17 execute immediate stmt;
    18 exception
    19 when others then
    20 execute immediate swtstmt;
    21 execute immediate ckpstmt;
    22 execute immediate stmt;
    23 end;
    24 execute immediate swtstmt;
    25 end loop;
    26 end;
    27 /
    declare
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+Z'
    ORA-06512: at line 14
    Can someone point me in the right direction as to what I may be doing wrong here - Thank you!

    888442 wrote:
    I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
    On primary, we have done the changes, i.e. we added a new logfile with a bigger size and 3 members. When trying to do the same on standby we are getting this error.
    Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
    I have deferred the log apply and cancelled the managed recovery, and DG is in manual mode.
    SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
    alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
    ERROR at line 1:
    ORA-00349: failure obtaining block size for '+DT_DG1'
    First, why are you dropping & recreating online redo log files on standby?
    On standby only standby redo log files will be used. Not sure what you are trying to do.
    here is example how to create online redo log files, Check that diskgroup is mounted and have sufficient space to create.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    sys@ORCL> alter database add logfile group 4 (
      2     'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
      3     'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
      4     'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
    Database altered.
    sys@ORCL> select member from v$logfile;
    MEMBER
    C:\ORACLE\ORADATA\ORCL\REDO03.LOG
    C:\ORACLE\ORADATA\ORCL\REDO02.LOG
    C:\ORACLE\ORADATA\ORCL\REDO01.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
    C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
    6 rows selected.
    sys@ORCL>
    Your profile:-
    888442      
         Newbie
    Handle:      888442
    Status Level:      Newbie
    Registered:      Sep 29, 2011
    Total Posts:      12
    Total Questions:      8 (7 unresolved)
    Close the threads if answered, Keep the forum clean.
