E2E BI HOUSEKEEPING Job failing due to timeout

Hi Folks,
Reviewing our Solman system, I found that some tables are getting really large; SMD_HASH_TABLE is over 71 GB compressed. I found SAP Note 1958979 (Solution Manager - Tablespace increase), which in turn directs you to the document attached to SAP Note 1480588 (ST: E2E Diagnostics - BI Housekeeping - Information).
Per the document, I executed transaction TAANA, ran an analysis on the table, and got the following data:
Per the next steps, I added the following entry in the table E2E_BI_DELETE.
On the next run of the job, I got an ABAP dump with a timeout.
It looks like the job timed out while selecting the data for deletion.
Has anyone faced this issue, and can you suggest a way to run this cleanup once outside the job? I believe the subsequent runs will be fine as long as the housekeeping job runs regularly.
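In case it helps while you wait for a proper fix: the generic way around this kind of timeout is to delete in small chunks with a commit after each chunk, instead of one huge DELETE that exceeds the job's time limit. A minimal sketch of the pattern (using Python with SQLite purely as a stand-in; the table and column names here are made up, and on the SAP side the equivalent would be an ABAP report deleting in packages with intermittent COMMIT WORK):

```python
import sqlite3

def delete_in_chunks(conn, table, predicate, chunk_size=10000):
    """Delete matching rows in small batches, committing after each batch,
    so no single statement runs long enough to hit a timeout."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE {predicate} LIMIT ?)",
            (chunk_size,),
        )
        conn.commit()  # release locks and reset the per-statement clock
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

# Demo with a toy stand-in for the oversized table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE smd_hash_table (id INTEGER, age_days INTEGER)")
conn.executemany("INSERT INTO smd_hash_table VALUES (?, ?)",
                 [(i, i % 200) for i in range(1000)])
deleted = delete_in_chunks(conn, "smd_hash_table", "age_days > 90",
                           chunk_size=100)
print(deleted)
```

The chunk size is a tuning knob: small enough that each statement stays well under the timeout, large enough that the total runtime is acceptable.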
Regards,
Yogesh

Hello Yogesh,
I'm experiencing exactly the same issue. Can you share the solution if you resolved it?
Thank you,
Paul.

Similar Messages

  • The Job Failed Due to  MAX-Time at ODS Activation Step

    Hi
I'm getting the errors "The Job Failed Due to MAX-Time at ODS Activation Step" and
"Max-time Failure".
How can I resolve this failure?

Hi,
You can check the ODS activation logs in the ODS batch monitor >> click on that job >> job log.
First, check in SM37 how many jobs are running. If there are many other long-running jobs, the ODS activation will happen very slowly because the system is overloaded; the long-running jobs cause poor system performance.
For checking the performance of the system:
check the lock waits in ST04, and whether they are progressing or not;
check SM66 for the number of processes running on the system;
check ST22 for short dumps;
check OS07 for the CPU idle time (if it is less than 20%, the CPU is overloaded);
check SM21 (the system log);
check the tablespace available in ST04.
If the system is overloaded, the ODS won't get enough work processes to create its background jobs (the BI_BCTL* jobs); the update will still happen, but very slowly.
In this case you can kill a few long-running jobs which are not important, and kill a few ODS activations as well.
Don't run 23 ODS activations all at a time; run some of them at a time.
As for the key points to check for data loading: check ST22, check the job in R/3, check SM58 for tRFC, and check SM59 for RFC connections.
    Regards,
    Shikha

  • LIS job failed due to server restart

    Hi Experts,
One of our LIS jobs on queue MCEX11 failed due to a system restart in ECC. What steps do we need to take to make sure no data is lost, and how can we confirm that there was no data loss due to this restart?
    Regards
    Jitendra

    Hi Rama,
Please find attached screenshots from the failed job, LBWQ & RSA7. Looking at them, I guess there will surely be some data loss.
    Failed Job
    LBWQ
    RSA7
    Regards
    Jitendra

  • DB13 job failed due to SXPG_COMMAND_EXECUTE error

    Hi,
I receive an error when I try to run the BRCONNECT Database Action Calendar (tx. DB13), whether it's a database check, statistics update, etc. The error messages are as follows:
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000166, user ID SETIR0)
    No application server found on database host - rsh/Gateway will be used
    Execute logical command BRCONNECT On host tdcwsxd1
    Parameters: -u / -c -f check
    SXPG_COMMAND_EXECUTE failed for BRCONNECT - Reason: program_start_error: For More Information, See SYS
    Job cancelled after system exception ERROR_MESSAGE
FYI, we're running the database instance (DB) and central instance (CI) on separate boxes in a Unix environment, on SAP Basis release 620 with SP level SAPKB62053.
If I execute the command directly on the DB instance host it works fine, but not from SAP transaction DB13.
    Command: brconnect -u / -c -f check.
    Thanks and best regards,
    Mark

    Hi Mark,
Sorry, I see you are running on Unix and I am only familiar with the Windows environment; however, on Windows I would do the following:
1) Install a so-called standalone gateway on the DB host.
2) Copy the sapxpg executable to the directory where the gateway is installed.
3) Change the SAPXPG_DBDEST... RFC connection to point to, or use, this gateway (gateway options).
That's all.
(How to install a standalone gateway or gateway instance is normally described in the SAP installation guide.)
But as I said, I am not sure if this is also possible under Unix, and whether it is the way to go...
    Regards
    Rolf

  • Extract sCSM server name Job failing due to sealed MP ?

The extract-of-the-management-server job is failing on the last extract of a CI sealed MP we created.
Looking at the Event Viewer log I see this:
    ETL Module Execution failed:
    ETL process type: Extract
    Batch ID: 36408
    Module name: Extract_C3LOBCI_C3SCSMR2
    Message: The given value of type String from the data source cannot be converted to type uniqueidentifier of the specified target column.
    Stack:    at System.Data.SqlClient.SqlBulkCopy.ConvertValue(Object value, _SqlMetaData metadata, Boolean isNull, Boolean& isSqlType, Boolean& coercedToDataFeed)
       at System.Data.SqlClient.SqlBulkCopy.ReadWriteColumnValueAsync(Int32 col)
       at System.Data.SqlClient.SqlBulkCopy.CopyColumnsAsync(Int32 col, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.CopyRowsAsync(Int32 rowsSoFar, Int32 totalRows, CancellationToken cts, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.CopyBatchesAsyncContinued(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.CopyBatchesAsync(BulkCopySimpleResultSet internalResults, String updateBulkCommandText, CancellationToken cts, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestContinuedAsync(BulkCopySimpleResultSet internalResults, CancellationToken cts, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalRestAsync(CancellationToken cts, TaskCompletionSource`1 source)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternalAsync(CancellationToken ctoken)
       at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServerAsync(Int32 columnCount, CancellationToken ctoken)
       at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)
       at Microsoft.SystemCenter.Warehouse.Utility.SqlBulkOperation.Insert(String sourceConnStrg, String sourceQuery, String destinationTable, Dictionary`2 mapping, String sqlConnectionStrg, Boolean& readerHasRows, DomainUser sourceSecureUser,
    DomainUser destSecureUser)
       at Microsoft.SystemCenter.Warehouse.Etl.ADOInterface.Insert(DomainUser sourceConnectionUser, DomainUser destinationConnectionUser)
       at Microsoft.SystemCenter.Warehouse.Etl.ADOInterface.Execute(IXPathNavigable config, Watermark wm, DomainUser sourceConnectionUser, DomainUser destinationConnectionUser)
       at Microsoft.SystemCenter.Warehouse.Etl.ExtractModule.Execute(IXPathNavigable config, Watermark wm, DomainUser sourceConnectionUser, DomainUser destinationConnectionUser, Int32 batchSize)
       at Microsoft.SystemCenter.Warehouse.Etl.ExtractModule.Execute(IXPathNavigable config, Watermark wm, DomainUser sourceConnectionUser, DomainUser destinationConnectionUser)
       at Microsoft.SystemCenter.Etl.ETLModule.OnDataItem(DataItemBase dataItem, DataItemAcknowledgementCallback acknowledgedCallback, Object acknowledgedState, DataItemProcessingCompleteCallback completionCallback, Object completionState)
    Management pack import failed.
    Management pack ID: 417daf9b-9505-8824-4419-6c45925a1deb
    Management pack name: C3LineofBusinessCI
    Management pack version: 1.0.0.4
    Data source name: DW_C3SCSMR2
    Message: Verification failed with 4 errors:
    Error 1:
    Found error in 2|C3LineofBusinessCI|1.0.0.1|C3LineofBusinessCI|| with message:
    Version 1.0.0.4 of the management pack is not upgrade compatible with older version 1.0.0.1. Compatibility check failed with 3 errors:
    Error 2:
    Found error in 1|C3LineofBusinessCI/a6ae647f076efa3a|1.0.0.0|C3LOBCI|| with message:
    The property Type (C3SupportLocation) has a value that is not upgrade compatible. OldValue=enum, NewValue=string.
    Error 3:
    Found error in 1|C3LineofBusinessCI/a6ae647f076efa3a|1.0.0.0|C3LOBCI|| with message:
    The property Key (C3Client) has a value that is not upgrade compatible. OldValue=True, NewValue=False.
    Error 4:
    Found error in 1|C3LineofBusinessCI/a6ae647f076efa3a|1.0.0.0|C3LOBCI|| with message:
    New Key ClassProperty item (ClientID) has been added in the newer version of this management pack.
    Stack trace:
        at Microsoft.EnterpriseManagement.Common.Internal.ServiceProxy.HandleFault(String methodName, Message message)
       at Microsoft.EnterpriseManagement.Common.Internal.ManagementPackServiceProxy.ImportManagementPackWithResources(String managementPack, String keyToken, Dictionary`2 resources)
       at Microsoft.EnterpriseManagement.SystemCenter.Warehouse.Synchronization.Library.BaseManagementGroup.ImportManagementPack(ManagementPackObject mpObj)
Any help would be appreciated. Thanks,
    David Ulrich

    Version 1.0.0.4 of the management pack is not upgrade compatible with older version 1.0.0.1. Compatibility check failed with 3 errors:
    The property Type (C3SupportLocation) has a value that is not upgrade compatible. OldValue=enum, NewValue=string.
    The property Key (C3Client) has a value that is not upgrade compatible. OldValue=True, NewValue=False.
    New Key ClassProperty item (ClientID) has been added in the newer version of this management pack.
You have ignored the error in the management server that your MP was not upgrade compatible, deleted the old version of the management pack from the management server, and imported the new version into the management server without waiting for the MPSync job to run and clean up the data warehouse. The DW server is seeing the new version in the management server and is trying to upgrade an MP with a non-upgrade-compatible upgrade.
The normal resolution to this is to delete the MP from the management server, run the MPSync job twice, verify the old version has been cleaned up from the DW server, and then re-import the new version. However, this isn't going to work for you, because you are changing the primary key and changing the type of a property, neither of which is EVER upgrade compatible.
If this is a lab or new system, you can rebuild the Data Warehouse and associate it with the management server again from scratch. If this system is older, or is production, you will lose data doing this, so consider carefully.

  • Jobs failing due to "incorrect settings: QUOTED_IDENTIFIER"

    I have three jobs that have all failed on the second step, for the same reason: "[Command] failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes
    on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations." Two are INSERTs and the third is a MERGE. The statements run fine directly, and they run fine when jobs run stored procedures
    that contain them. The database has QUOTED_IDENTIFIER turned on. What is the problem and how do I resolve it?

Hi, I have the same problem with various jobs on a production box. When I add a job to execute some SQL it usually works, but occasionally it fails with the same error: "[Command] failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER' . . ." For example, an index rebuild or a delete statement. Explicitly adding the SET command (and, by the way, nice that it doesn't tell you which way it should be set) is an inconvenience. It's easily the sort of thing a person could forget (when so many jobs succeed without the option explicitly set), leaving jobs to fail. These jobs are running very basic SQL with the table and column names already wrapped in square brackets.
The only solution I have is to now remember to always set that option at the top of the SQL in a job. :(
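For reference, the workaround described above boils down to prefixing the job step's T-SQL with the SET option the error complains about. A sketch (the INSERT is a placeholder; dbo.TargetTable and dbo.SourceTable are made-up names):

```sql
-- SQL Agent job steps can run with QUOTED_IDENTIFIER OFF, which breaks
-- statements touching indexed views, filtered indexes, indexes on computed
-- columns, etc. Setting it explicitly at the top of the step avoids the error.
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;  -- commonly required together with QUOTED_IDENTIFIER

INSERT INTO dbo.TargetTable (Col1, Col2)  -- placeholder statement
SELECT Col1, Col2
FROM dbo.SourceTable;
```

Alternatively, wrapping the statement in a stored procedure works because the procedure captures the QUOTED_IDENTIFIER setting in effect when it was created, which matches the observation above that the jobs succeed when they call stored procedures.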

  • RMAN Delete Obsolete job fails due to Error Allocating Device

    Experts, I need help, please.
    This is 10.2.0.1 on Windows
    RMAN> DELETE NOPROMPT OBSOLETE;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 3 days
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=130 devtype=DISK
    released channel: ORA_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of delete command at 05/15/2012 14:10:17
    ORA-19554: error allocating device, device type: SBT_TAPE, device name:
    ORA-27211: Failed to load Media Management Library
    I ran RMAN> configure device type 'SBT_TAPE' clear;
    but I still get the same error.
I back up to disk, not tape.
Notice the reference to SBT_TAPE:
    RMAN configuration parameters are:
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 1;
    CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    What do I need to do to be able to run DELETE OBSOLETE; ?
    Thanks, John

As Anand mentioned, you have to reset this parameter to its default value; the CONFIGURE ... CLEAR command will remove it from the configuration parameters.
See below:
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    *CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;*
    CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10/db/dbs/snapcf_GG1.f'; # default
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of delete command at 05/16/2012 12:04:34
    ORA-19554: error allocating device, device type: SBT_TAPE, device name:
    ORA-27211: Failed to load Media Management Library
    Additional information: 2
    RMAN>  CONFIGURE DEVICE TYPE 'SBT_TAPE' clear;
    old RMAN configuration parameters:
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
    RMAN configuration parameters are successfully reset to default value
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10/db/dbs/snapcf_GG1.f'; # default
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=1591 instance=GG1 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=1627 instance=GG1 devtype=DISK
    no obsolete backups found

  • SQL Agent Job fails due to excel import

I have SQL Server 2012 SP1. I have a package which imports a generic spreadsheet into an import table and then does other stuff with it.
I can happily run the SSIS package through BIDS, but it fails when I try to run it through SQL Agent,
with errors such as: Microsoft.ACE.OLEDB.12.0 provider is not registered on this machine.
Within the SSIS package I use the Excel connection manager and have modified the connection string to
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=D:\FileDropTest\Fusion.xlsx;Extended Properties="EXCEL 12.0 XML;HDR=YES;IMEX=1";
and redeployed, and I still get the same issue.
    I then enabled xp_cmdshell and tried from TSQL
    DECLARE @Qry VARCHAR(8000)
    SELECT @Qry = 'dtexec /FILE "D:\SSIS\export\Fusion.dtsx"'
    EXECUTE xp_cmdshell @Qry
    and got
    output
    Microsoft (R) SQL Server Execute Package Utility
    Version 11.0.5058.0 for 64-bit
    Copyright (C) Microsoft Corporation. All rights reserved.
    NULL
    Started:  5:49:25 PM
    Error: 2015-02-09 17:49:25.43
       Code: 0xC001000E
       Source: Fusion
       Description: The connection "{3076028A-D7D9-4432-AADB-CAC6D9AD5995}" is not found. This error is thrown by Connections collection when the specific connection element is not found.
As a different approach, it was recommended to use OLE DB; within BIDS the connection tests fine, but when I run it through BIDS it fails:
[OLE DB Source [79]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "D:\FileDropTest\Fusion.xlsx" failed with error code 0xC0202009.  There may be error messages posted before this with more information on why the AcquireConnection method call failed.
For reference, I've also set delay validation to true.
I'm a bit stuck, as I thought importing a spreadsheet, being a Microsoft product ;), would be fairly easy. Any help on this matter would be great.

If your source Excel file were .xls, it would import successfully. However, in your case the Excel file is .xlsx, hence it fails.
>> I can happily run the SSIS package through BIDS but it fails when I try to run it through SQL Agent, with errors such as: Microsoft.ACE.OLEDB.12.0 provider is not registered on this machine
Your local server, where you run the package in BIDS, has the 32-bit ACE OLEDB 12.0 driver installed correctly.
But the target server, where you run the package through SQL Agent, does not have the correct driver. Please download the
32-bit Microsoft Access Database Engine 2010 Redistributable and install it on the target server.
>> I then enabled xp_cmdshell, tried it from T-SQL, and got the error "Connections collection when the specific connection element is not found"
Please look at the connection managers in your package. There must be some connection manager whose name is missing. This can happen if you copy-paste a connection from the same or another SSIS package.
-Vaibhav Chaudhari

  • DBA: ANALYZETAB and DBA:CHECKOPT jobs failed.

    Hi Guys,
    Good day!
I would like to seek assistance on how I can deal with the issue below:
    For JOB: DBA: ANALYZETAB
    27.05.2010 04:00:20 Job started
    27.05.2010 04:00:28 Step 001 started (program RSDBAJOB, variant &0000000000113, use
    27.05.2010 04:00:28 Action unknown in SDBAP <<< Job failed due to this error.
    27.05.2010 04:00:28 Job cancelled after system exception ERROR_MESSAGE
    For JOB: DBA:CHECKOPT
    Date       Time     Message text
    26.05.2010 18:15:07 Job started
    26.05.2010 18:15:08 Step 001 started (program RSDBAJOB, variant &0000000000112)
    26.05.2010 18:15:09 Action ANA Unknown in database ORACLE <<<< Job failed due to this error.
    26.05.2010 18:15:09 Job cancelled after system exception ERROR_MESSAGE
I also checked those DBA jobs, but I couldn't find any of them in DB13, nor in table SDBAP.
    Appreciate your help on this matter.
    Cheers,
    Virgilio
    Edited by: Virgilio Padios on May 29, 2010 3:24 PM

    Hello,
I may have the same scenario, because I have the same two jobs getting cancelled in DB13. I checked them, and it is because the jobs were created in a client which no longer exists: "Logon of user XXXX in client XXX failed when starting a step"
    DBA:ANALYZETAB
    DBA:CHECKOPT
Can you please tell me the importance of these jobs? They are not included in SAP Note 16083 ("Standard jobs, reorganization jobs") as standard jobs. If they are required, I need to reschedule them in DB13, which I do not know how to do; DB13 does not have options that will create these types of jobs. So far, this is all I can see in my DB13:
    Whole database offline + r
    Whole database offline bac
    Whole database online + re
    Whole database online back
    Redo log backup          
    Partial database offline b
    Partial database online ba
    Check and update optimizer - this is for DBA:UDATESTATS
    Adapt next extents - this is for DBA:NEXTEXTENT
    Check database - this is for DBA:CHECKDB        
    Verify database  - this is for DBA:VERIFYDB       
    Cleanup logs  - this is for DBA:CLEANUPLOGS           
So where should these two jobs be generated from? I would appreciate any help regarding this.
    Thanks,
    Tony

  • Batch Jobs fail because User ID is either Locked or deleted from SAP System

Business users release batch jobs under their user IDs.
When these user IDs are deleted or locked by the system administrator, the batch jobs fail because the user is locked or has been deleted from the system.
Is there any way these batch jobs can be stopped from cancelling, or any SAP standard report to check whether any batch jobs are running under a specific user ID?

    Ajay,
What you can do, if you want the jobs to still be released under the particular user's name (I know people crib about anything and everything) without worrying about them failing when that user is locked out, is to create a system user (e.g. bkgrjobs) and run the steps in the jobs under that system user name. You can do this while defining the step in SM37.
This way the jobs will keep running under the business user's name and will not fail if he or she is locked out. But make sure the system user has the necessary authorizations, or the job will fail. SAP_ALL should be fine, but again, it really depends on your company.
    Kunal

  • Job Failed // PLAAP5BULCOD_CONS_CHECK_CCR_MPOR  in APO

    Hello All,
The job PLAAP5BULCOD_CONS_CHECK_CCR_MPOR failed for the reason below:
    Iteration executed
    Spool request (number 0000029847) created without immediate output
    Results saved successfully
    Step 003 started (program /SAPAPO/CIF_DELTAREPORT3, variant PR4_MANUF_ORD2, user ID BTCH_ADM)
    Background job terminated due to communication error
    Job cancelled after system exception ERROR_MESSAGE
    Please help me to resolve the issue.
    Regards
    Mohsin M

Hi,
Check this similar issue: background job PLAAP5BULCOD_CONS_CHECK_CCR_MPOR cancelling
Thanks and Regards,
Purna

  • Weblogic invoking web service failed due to socket timeout

    Hi,
I encountered an error when invoking a web service from OBIEE 11g. The web service resides on WebSphere running on another machine.
The error says "Invoking web service failed due to socket timeout.", and it seems that it stopped after just 40 seconds.
Is there any WebLogic server setting to avoid this? This web service normally runs for more than 60 seconds.
I have checked several parameters in the WebLogic admin console and changed their values, but I still receive the same errors.
    Regards,
    Fujio Sonehara

    Hey Eason,
As I had previously mentioned, I have checked the FE server certs and noted the signing algorithm used to sign them, which was sha1DSA and not sha1RSA. I even checked my CA's list of issued certs and found that all certs are signed the same way:
Signature algorithm: sha1DSA
Signature hash algorithm: sha1
Public key: RSA (1024 bit)
I could run request-and-reinstall all day long; it will still produce certs signed with the same algorithm.
Doing some research, I attempted to see if I could change the signing algorithm for the specific cert template being used to issue the Lync FE certs; however, according to
this, it seems that I'd have to completely rebuild my CA before I'd be able to request and issue a cert with the proper signing algorithm?!
This
says it's possible but not supported. What do I do in this situation? Is my only option to rebuild the entire CA and cert infrastructure?
I noticed my CSP is set to Microsoft Base DSS Cryptographic Provider, and under the CSP folder there is no "CNGHashAlgorithm" key, so apparently I'm using a "Next Gen CSP"? Is this CSP good enough to support Lync? And straight up, where is the Lync documentation on the CA setup requirements?
This Google link doesn't tell you how you should set up a CA for Lync, what settings need to be set, etc.

  • Locker failed to offer due to timeout.Please increase worker thread count by increasing DeferredWorkerThreadCount in mediator-config.xml

Can someone please assist me in getting rid of the below error when trying to bring up the SOA services in WebLogic?
"Locker failed to offer due to timeout.Please increase worker thread count by increasing DeferredWorkerThreadCount in mediator-config.xml"
We tested the agent and node manager, and everything seems to be working fine. The issue is persistent in the UAT environment, which is clustered. The first managed server for SOA is having the issue, while the second one is working fine. Below is the list of managed servers in the environment:
2 managed servers for ADF,
2 managed servers for BAM,
2 managed servers for SOA,
2 managed servers for WSM.
The affected SOA1 managed server was working fine until last week, and no changes were made to the environment. Please let me know if I can furnish any further info.
Thanks!
Nidhi Gangadhar.


  • E2E BI HOUSEKEEPING

    Hello,
In our Solman 7.1 system, the job E2E BI HOUSEKEEPING finished, but the spool shows various errors, and there are ST22 dumps too.
I found note 1844955, but my questions are:
1. We have not been using the End-to-End Analysis tab in the Solman work center (tcode -> Root Cause Analysis), so is this job still necessary to run in Solman?
2. If not, it's been running for a while; what would be the effect of un-scheduling the job now?
3. Do I have to carry out any steps to check that it doesn't affect anything in Solman or connected systems?
    Spool Errors :
    Target Infocube: 0DB4_C04D
    Source Infocube: 0DB4_C04H
    Destination    : NONE
    Aggregator     :
    E2E_HK_AGGREGATE_MDX
    Delta          :         90
    HK: Calling SMD_HOUSEKEEPER at NONE
    HK: Cube aggregation: SMD_HK_AGGREGATE_MDX
    HK: Error detected by SMD_HK_AGGREGATE_MDX
    HK: Status :     3-
    HK: Message:
    Target 0DB4_C04D does not exist
    Runtime [s]:   0
    Target Infocube: 0ORA_C03D
    Source Infocube: 0ORA_C03H
    Destination    : NONE
    Aggregator     :
    E2E_HK_AGGREGATE_MDX
    Delta          :         90
    HK: Calling SMD_HOUSEKEEPER at NONE
    HK: Cube aggregation: SMD_HK_AGGREGATE_MDX
    System Failure:
    Time limit exceeded.
    Runtime [s]:      17.985
    ST22 ( two dumps)
    Category          
    ABAP Programming Error
    Runtime Errors    
    DBIF_RSQL_TABLE_UNKNOWN
    ABAP Program      
    SAPLRSDRD
    Application Component  BW-BEX-OT
    Date and Time     
    03/19/2014 05:31:31
    Short text
    A table is unknown or does not exist.
    What happened?
    Error in the ABAP Application Program
    The current ABAP program "SAPLRSDRD" had to be terminated because it has
    come across a statement that unfortunately cannot be executed.
    What can you do?
    Note down which actions and inputs caused the error.
    To process the problem further, contact you SAP system
    administrator.
    Using Transaction ST22 for ABAP Dump Analysis, you can look
    at and manage termination messages, and you can also
    keep them for a long time.
    Error analysis
    A table is referred to in an SAP Open SQL statement that either does not
    exist or is unknown to the ABAP Data Dictionary.
    The table involved is "/BI0/F0ORA_C02D" or another table accessed in the
    statement.
    Category          
    ABAP Programming Error
    Runtime Errors    
    TIME_OUT
    ABAP Program      
    SAPLRSDRD
    Application Component  BW-BEX-OT
    Date and Time     
    03/19/2014 05:30:23
    Short text
    Time limit exceeded.
    What happened?
    The program "SAPLRSDRD" has exceeded the maximum permitted runtime without
    interruption and has therefore been terminated.
    Thanks
    Shradha

    Hi Shradha,
    E2E BI HOUSEKEEPING is must run job in solution manager. it is actually needed for your BI house cleaning.
    1. We have not been using End to End analysis tab in solman workcenter tcode -> root cause analysis  so is this job still necessary to run in solman ?
    In 7.1, there is no dedicated guided concept for RCA, if you have done managed system configuration iteself, most of the extractors were active for the system, the data collection and storing to BI has started. You must have this housekeeping job. Recently solution manager has UPL integration too. All these features would go long term if you have proper housekeeping activity scheduled. hence please make this job running without fail.
    2. If not, its been running for a while what would be the affect of un-scheduling the job now ?
    3. Do I have to carry out any steps to check if doesn't affect any steps in solman or connected systems ?
    This is the independent job, will not affect any other scheduled jobs.
    Refer to my blog How Is The Health Of Your SAP Solution Manager BI Content? and SAP Note 1480588 - ST: E2E Diagnostics - BI Housekeeping - Information
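    If the one-off cleanup keeps timing out inside the housekeeping job, a common approach is to delete the old SMD_HASH_TABLE entries in small packages with intermediate commits, so that no single database transaction runs long enough to hit the runtime limit. Below is a minimal sketch of that pattern; the report name, the cutoff value, and the assumption that the table can be filtered on a TIMESTAMP field are all hypothetical, so check the actual key fields in SE11 and your TAANA analysis first, and test it in a non-production system before touching Prod.

    ```abap
    REPORT z_smd_hash_cleanup.

    " Hypothetical one-off cleanup sketch, not a standard SAP report.
    " Adjust the WHERE clause to match your TAANA analysis and the
    " retention period you configured in E2E_BI_DELETE.
    CONSTANTS gc_package TYPE i VALUE 100000.   " rows per package

    DATA: lt_rows    TYPE STANDARD TABLE OF smd_hash_table,
          gv_deleted TYPE i.

    DO.
      " Read one package of candidate rows (example cutoff: before 2013).
      SELECT * FROM smd_hash_table
        UP TO gc_package ROWS
        INTO TABLE lt_rows
        WHERE timestamp < '20130101000000'.
      IF sy-subrc <> 0.
        EXIT.                    " nothing left to delete
      ENDIF.

      DELETE smd_hash_table FROM TABLE lt_rows.
      gv_deleted = gv_deleted + sy-dbcnt.

      " Commit after each package so no single LUW exceeds the
      " runtime limit and database locks are released regularly.
      COMMIT WORK.
    ENDDO.

    WRITE: / 'Rows deleted:', gv_deleted.
    ```

    Once the backlog is cleared this way, the regular E2E BI HOUSEKEEPING runs should stay within their runtime limit, since they then only have the current retention window to process.
    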
    Please check
    Thanks
    Jansi

  • CO_COSTCTR Archiving Write Job Fails

    Hello,
    The CO_COSTCTR archiving write job fails with the error messages below. 
    Input or output error in archive file \\HOST\archive\SID\CO_COSTCTR_201209110858
    Message no. BA024
    Diagnosis
    An error has occurred when writing the archive file \\HOST\archive\SID\CO_COSTCTR_201209110858 in the file system. This can occur, for example, as the result of temporary network problems or of a lack of space in the file system.
    The job logs do not indicate any other possible causes, and the OS and system logs don't show anything either. When I ran it in test mode, it finished successfully after a long 8 hours. However, the error only happens in production mode, where the system actually generates the archive files. The strange thing is that I do not have this issue on our QAS system (a DB copy of our Prod). I was able to archive successfully in our QAS using the same path name and logical name (we transport the settings).
    Considering the above, I suspect some system or OS-related parameter that is unique to, or different from, our QAS system: a parameter that is not stored in the database, since our QAS is a DB copy of our Prod system. Such a parameter could affect archiving write jobs (which read from and write to the file system).
    I already checked the network session timeout settings (CMD > net server config) and the settings are the same between our QAS and Prod servers.  No problems with disk space.  The archive directory is a local shared folder \\HOST\archive\SID\<filename>.  The HOST and SID are variables which are unique to each system.  The difference is that our Prod server is HA configured (clustered) while our QAS is just standalone.  It might have some other relevant settings I am not aware of.  Has anyone encountered this before and was able to resolve it?
    We're running SAP R3 4.7 by the way.
    Thanks,
    Tony

    Hi Rod,
    We tried a couple of times already. They all got cancelled due to the error above. As much as we wanted to trim down the variant, CO_COSTCTR only accepts an entire fiscal year. The data it has to go through is quite a lot, and the test run took us more than 8 hours to complete. I have executed the same in our QAS without errors. This is why I am a bit confused that I am getting this error in our Production system. Even though our QAS is refreshed from our PRD using a DB copy, it can run the archiving without any problems. So I am led to think that there might be unique contributing factors or parameters, not saved in the database, that affect the archiving. Our PRD is configured for high availability; the hostname is not actually the physical host but rather a virtual host of two clustered servers. But this was no concern with the other archiving objects; only CO_COSTCTR is giving us this error. QAS has archive logs turned off, if that's relevant.
    Archiving the 2007 fiscal year cancels after around 7,200 seconds every time, while the 2008 fiscal year cancels earlier, at around 2,500 seconds. I think that while the write program is going through the data in loops, by the time it needs to access the archive file again, the connection has been disconnected or has timed out. And the reason it cancels almost consistently after the same amount of time is the variant: there is not much leeway to trim down the data, so the program is reading the same set of data objects. When it reaches that one point of failure (after the expected time), it cancels out. If this is true, I may need to find where to extend that timeout, or whatever it is that is causing the above error.
    Thanks for all your help.  This is the best way I can describe it.  Sorry for the long reply.
    Tony
