Issue with scheduled jobs

Hello Team,
After creating a new protection group, the jobs froze and were never kicked off.
What could be the reason?
Regards,
Suman Rout

Hi,
Please see the blog below, which may assist with troubleshooting scheduled jobs.
Blog:
http://blogs.technet.com/b/dpm/archive/2014/10/08/how-to-troubleshoot-scheduled-backup-job-failures-in-dpm-2012.aspx
Previous forum post:
https://social.technet.microsoft.com/Forums/en-US/ed65d3e0-c7d7-488b-ba34-4a2083522bae/dpm-2010-scheduled-jobs-disappear-rather-than-run?forum=dataprotectionmanager
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
Regards, Dwayne Jackson II [MSFT]. This posting is provided "AS IS" with no warranties, and confers no rights.

Similar Messages

  • Issues with scheduling job in sm36 for a standard report...

    Hi,
    After creating a variant for a program, I execute SM36 -> Define step, select the ABAP program, and enter the variant name associated with it. What do I need to do next to schedule a job for that report?

    Hi,
    After entering the program and variant,
    press the Start condition button.
    Press Immediate to run the job immediately,
        or
    choose the date and time you want the job to run.
    After that, press Save.
    Then press Save again in SM36 for the job. This will release the job.
    Thanks,
    Naren

  • Issue with background job--taking more time

    Hi,
    We have a custom program which runs as a background job every 2 hours.
    It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes more time while executing native SQL on DBA_EXTENTS. When we tried to fetch a smaller number of records from DBA_EXTENTS, it worked fine,
    but we need the program to fetch all the records.
    It works fine, however, on ECC5 on 10.2.0.2 & 10.2.0.4.
    Here is the SQL statement:
    EXEC SQL PERFORMING SAP_GET_EXT_PERF.
      SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
             SEGMENT_TYPE, TABLESPACE_NAME,
             EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
       FROM SYS.DBA_EXTENTS
       WHERE OWNER LIKE 'SAP%'
       INTO
       :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
       :EXTENTS_TBL-PARTITION_NAME,
       :EXTENTS_TBL-SEGMENT_TYPE , :EXTENTS_TBL-TABLESPACE_NAME,
       :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
       :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
    ENDEXEC.
    Can somebody suggest what has to be done?
    Has something changed in SAP 7 (with respect to background jobs, etc.), or do we need to fine-tune the SQL statement?
    Regards,
    Vivdha

    Hi,
    there was an issue with LMTs (locally managed tablespaces), but that was fixed in 10.2.0.4, aside from missing system statistics.
    But why do you collect this information every 2 hours? The DBA_EXTENTS view is based on heavily used system tables.
    Normally, you would run a query of this type against DBA_EXTENTS only occasionally, e.g. to identify corrupt blocks:
    SELECT  owner , segment_name , segment_type
            FROM  dba_extents
           WHERE  file_id = &AFN
             AND  &BLOCKNO BETWEEN block_id AND block_id + blocks -1
    Not sure what you want to achieve with it.
    There are monitoring tools (OEM ?) around that may cover your needs.
    Bye
    yk
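To make the reply's corruption-style lookup concrete, the same query can be issued from any client with bind variables instead of the `&AFN`/`&BLOCKNO` substitution placeholders. A minimal sketch against the Python DB-API (the cursor is assumed to come from an Oracle driver such as `oracledb`; all names are illustrative):

```python
# Bind-variable version of the DBA_EXTENTS lookup from the reply:
# given an absolute file number and a block number, find the owning segment.
CORRUPT_BLOCK_SQL = """
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = :afn
       AND :blockno BETWEEN block_id AND block_id + blocks - 1
"""

def find_segment(cursor, afn, blockno):
    """Map a (file#, block#) pair to its segment via DBA_EXTENTS.

    `cursor` is any DB-API cursor opened by a suitably privileged user.
    """
    cursor.execute(CORRUPT_BLOCK_SQL, {"afn": afn, "blockno": blockno})
    return cursor.fetchall()
```

Run occasionally and with tight predicates like this, DBA_EXTENTS is cheap; it is the unfiltered every-2-hours scan in the original program that hammers the heavily used underlying system tables.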

  • Virsa CC 5.1: Issue with Background Job

    Dear All,
      I have almost finished configuring our new Compliance Calibrator dashboard (Java stack of NW '04) but, unfortunately, now have an issue with SoD analysis.
      Using SAP Best Practice recommendations, we uploaded all functions, business processes and risks into the CC database, and then successfully generated new rules (there are about 190 active ones in our Analysis Engine).
      I also configured JCo to the R/3 backend and was able to extract the full list of our users, roles and profiles. But somehow the background analysis job fails.
      In the job history table I see the following message: "Error while executing the Job:null", and in the job log there is an entry saying "Daemon idle time longer than RFC time out, terminating daemon 0". This is quite strange, as we use the default values: RFC Time Out = 30 min, and daemons are invoked every 60 seconds.
      Please advise whether you have had similar issues in your SAP environment and, if yes, how you resolved them.
    Thanks,
    Laziz

    Hi Laziz
    I am now just doing the first part of CC. Could you share the details of how you configured JCo to the R/3 backend? Do you need to create an SM59 destination of connection type T in R/3? If so, I am lacking the details and would appreciate your help. Thank you.
    Regards
    Florence

  • Issue with canceled jobs

    Hi all,
    I've got an issue with data loads in BI 7.
    We have transaction data loads scheduled in process chains. During the execution of an InfoPackage in the process chain, the connection to ECC was lost and the BI server also went down.
    When the servers were recovered, the status of the running InfoPackages was yellow.
    Now I have two questions.
    1. What happens if the InfoPackage status is yellow, i.e. half of the records are transferred to BI and half are left in ECC?
       Will executing the LUWs in TRFC solve the issue?
    2. What happens if the request becomes RED with half of the records transferred?
        How can we recover the other half of the records?

    Hi,
    See, that is not the case. If your connectivity goes down while the data load is in progress, no matter whether it is a delta load or a full load, you always have the option of deleting the red request and reloading. It is as simple as that.
    If it is a delta load, it will prompt you with a pop-up saying that the last delta was incomplete and asking whether you want to repeat the delta load.
    Once you do the repeat delta, the entire data will be reloaded. From the delta queue, all records that failed during the previous load, plus all new records created up to that time, will be loaded into BW.
    Also, for your information, there is no such thing as a separate repeat delta queue and a normal delta queue; only one delta queue exists.
    Hope it is clear.
    Rgds,
    Amit Kr.
    Edited by: Amit Kr on Jul 26, 2010 1:04 PM

  • SAP BW structure/table name change issue with BODS Jobs promotion

    Dear All, one of my clients has an issue with the promotion of BODS jobs from one environment to another. They move SAP BW projects/tables along with BODS jobs (separately) from DEV to QA to Prod.
    In SAP BW, the structures and tables get a different postfix when they are transported to the next environment. The promotion (transport) in SAP BW is an automated process, and it is the BW deployment mechanism that causes the postfixes to change. As a result of the transport from Dev to QA in SAP BW, we end up with table name changes (be they only suffixes), which means that when we deploy our Data Services jobs we immediately have to change them for the target environments.
    Please let me know if someone has deployed some solution for this.
    Thanks

    This is an issue with the SAP BW promotion process. The SAP Basis team should not turn on the setting that suffixes the system ID onto table names during promotion.
    Thanks,
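Since the suffix change is deterministic per environment, one common workaround is to keep the environment-specific part of each table name in configuration (in Data Services, a substitution parameter) rather than hard-coding it in the job. A toy sketch of the idea; the suffix scheme and names here are hypothetical, not SAP's actual convention:

```python
# Hypothetical per-environment suffix that the BW transport appends to
# generated table names; the real scheme is site-specific.
ENV_SUFFIX = {"DEV": "D", "QA": "Q", "PROD": "P"}

def retarget_table(name, src_env, dst_env):
    """Rewrite a BW table name from the source environment's suffix
    to the target environment's, refusing names that don't match."""
    src, dst = ENV_SUFFIX[src_env], ENV_SUFFIX[dst_env]
    if not name.endswith(src):
        raise ValueError(f"{name!r} does not carry the {src_env} suffix {src!r}")
    return name[: -len(src)] + dst
```

In a real deployment the same effect is achieved by pointing the job at a per-environment substitution parameter, so the job definition itself never changes between environments.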

  • DS job runs longer when scheduled than when run manually

    I have scheduled a job through the Management Console (MC) to run once a day at
    a certain time. After some time, maybe after 15 days of running, the execution
    time doubled in a single jump from 17 mins to 67 mins. After
    that, the job kept taking 67 mins to complete.
    The job generates around 400 output flat files from a
    source DB2 table. At the efficient running time, one file took around 2 seconds to
    generate; now it takes 8 seconds per file. The data
    volume and nature of the source table didn't change, so that is not the root
    cause of the increased time.
    I have done several investigations, with these results:
    1) I scheduled this job again in MC as a test: it took 67 mins
    to complete. However, if I run the same job manually through MC, it takes
    17 mins.
    2) I replicated this job as a copy. When I scheduled the copied job in
    MC, it took 67 mins to run, but running it manually through MC took
    17 mins.
    3) I created another test repo and loaded this job into it. Scheduled
    in this new repo, the job took 67 mins to run; run manually through
    MC, it took only 17 mins.
    4) Finally, I manually executed the job through the Unix job script,
    which is one of the scheduled job entries in the cron file, such as
    ./DI__4c553b0d_6fe5_4083_8655_11cb0fe230f4_2_r_3_w_n_6_40.sh; the job also took 17
    mins to finish.
    5) I recreated the repo to make it clean, reloaded the jobs and
    recreated the schedule. It still took 67 mins to run the scheduled job.
    So the question is: why does the job take so much longer when run by the
    scheduler than when run manually?
    Please provide me a way to troubleshoot this problem. Thank you.
    OS : HPUX 11.31
    DS : BusinessObjects Data Services 12.1.1.0
    Database : DB2 9.1

    Yesterday we ran another test and indirectly made the problem go
    away. We changed the generated output flat file directory from the current
    directory /fdminst/cmbc/fdm_d/bds/gl to the /fdminst/cmbc/fdm_d/bds/config directory,
    to see whether it would make any difference. We changed the directory
    in the Substitution Parameter Configurations window. Surprisingly, the job
    started to run fast and completed in 15 minutes, not 67 minutes.
    Then we pointed the output directory back to the original
    /fdminst/cmbc/fdm_d/bds/gl, and the job has run fast ever since, always
    completing in 15 minutes. Even when we created an ad hoc schedule, it was
    still fast.
    We are not sure why shifting the directory away and back solved it,
    or whether it had to do with a BODS problem or the HP-UX system environment.
    Nonetheless, the job now runs normally and fast in our tests.
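A classic first step for "fast when run manually, slow when scheduled" problems like this is to compare the environment the job sees in each case, since schedulers typically start jobs with a much thinner environment than an interactive shell. A sketch (the capture files are hypothetical; produce each with `env | sort > <file>` from an interactive shell and from inside the scheduled job script):

```python
def load_env(path):
    """Parse `env` output (NAME=value lines) into a dict."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if "=" in line:
                name, _, value = line.partition("=")
                env[name] = value
    return env

def diff_envs(a, b):
    """Return variables that are missing from, or differ between, two dumps."""
    keys = sorted(set(a) | set(b))
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}
```

Variables that differ (PATH, library paths, locale, DB client settings) are the usual suspects for behavior differences between scheduler-launched and manual runs.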

  • Issue with Scheduled Reports

    Having an issue with running reports. We have created the folder under C:\Program Files\Cisco CRS Historical Reports\reports to resolve a bug, but now the user gets the attached error message every morning. Any ideas?
    Thanks,
    Joe

    Hi Joe
    No attachment?
    Aaron

  • Issue with brconnect jobs

    Hi All,
    we have recently upgraded Oracle from 9i to 10.2.0.4, and our 640 kernel to patch level 347.
    Since the upgrade, my DB13 jobs have not been running fine.
    I am getting this error:
    brconnect: error while loading shared libraries: libclntsh.so.10.1: cannot open shared object file: No such file or directory
    I checked that the file libclntsh.so.10.1 is available at /oracle/PD0/102_64/lib, but I do not know why I am getting this error.
    Can you please suggest how to resolve this issue?
    Regards,
    Shivam Mittal

    Shivam Mittal wrote:
    Hi Orkun,
    >
    > We are using Linux. I checked and we have LD_LIBRARY_PATH set in our environment, pointing to this directory.
    >
    > LD_LIBRARY_PATH=/usr/sap/PD0/SYS/exe/run:/usr/sap/PD0/SYS/exe/runU:/oracle/PD0/102_64/lib
    >
    > Please suggest do we have to set any other variable also to resolve the issue.
    >
    > Shivam
    Hi Shivam,
    Are you facing this error while executing "brconnect" as the ora<sid> user?
    What about the permission on the libclntsh.so.10.1?
    Do you have any problem on that file, after you executed "relink all"?
    Best regards,
    Orkun Gedik
    Edited by: Orkun Gedik on Jun 15, 2011 2:46 PM
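Errors like this usually mean the directory holding the library is not in the loader's search path for the user and process that actually runs brconnect, even though the file exists on disk. A quick way to check what a given LD_LIBRARY_PATH value would resolve; this is a sketch, not SAP tooling:

```python
import os

def find_in_ld_library_path(libname, ld_library_path):
    """Return the directories in a colon-separated LD_LIBRARY_PATH value
    that actually contain `libname`."""
    hits = []
    for d in ld_library_path.split(":"):
        if d and os.path.isfile(os.path.join(d, libname)):
            hits.append(d)
    return hits
```

If the list is empty for the executing user (here, likely ora<sid> or <sid>adm), the variable is set in the wrong profile or not exported to the job's environment; `ldd brconnect` run as that user shows the same resolution from the shell.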

  • Issue with scheduling a quarterly report in Central Management Console

    Hi there,
    I'm new to BusinessObjects, a brand new member of the forum, and this is my maiden post; I'm hoping the community can help with an issue I have encountered. Basically, we are using BusinessObjects Enterprise XI Release 2 and had a report schedule set up to run every N months where N is 3. The start date was set as 22:30 on 31/03/2009, and the report ran as scheduled on that date; however, it did not subsequently run at the end of Q2. I suspect this is because the start date was the 31st of March: when the schedule rolls forward 3 months, it tries to land on the 31st of June, which is an invalid date. Has anyone seen this issue before, and can you advise on the possible reason and a workaround/solution?
    Thanks in advance...

    Hope you have already found out how to add new programs to the CMC (CMC -> Folders -> Manage -> Add -> Program file -> Browse and select the program).
    To replace...
    If you are using SAP BO XI R3.x, you can use "Import wizard" or LCM for replacing \ overwriting the existing version of any objects (Programs, Crystal, webi reports etc..,) from one environment to another (Dev-QA, QA-UAT, UAT-Prod).
    If you are using SAP BI 4.x, you can use Promotion Management, which is integrated within the CMC, to replace or overwrite the existing version of any objects from one environment to another.
    To know more about using Import Wizard - http://scn.sap.com/docs/DOC-20523
    To know more about LCM or Promotion Management - http://help.sap.com/businessobject/product_guides/boexir31/en/xi31_LCM_User_en.pdf
    Note: If you are using LCM in R3.x or Promotion Management in BI 4.x, you can also use Version Management to keep track of your versions.
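The month-end rollover suspected in the original question is easy to reproduce: naive "add 3 to the month" arithmetic turns 31 March into the nonexistent 31 June. Day-clamped month arithmetic avoids it; a minimal sketch in Python (illustrative only, not how the BO scheduler computes recurrences):

```python
import calendar
from datetime import date

def add_months(start, months):
    """Add `months` to `start`, clamping the day to the target month's
    length, so Mar 31 + 3 months yields Jun 30 rather than an invalid Jun 31."""
    month_index = start.month - 1 + months
    year = start.year + month_index // 12
    month = month_index % 12 + 1
    day = min(start.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```

The common workaround on the scheduling side is the same idea: anchor the schedule on a day that exists in every month (e.g. the 1st, or "last day of month" if the scheduler offers it) instead of the 31st.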

  • Issue with Archiving_deletion job

    Team
    I am facing an issue because an archiving deletion job was cancelled halfway.
    We ran archiving for SD_VBAK and it completed successfully, generating three archive file sessions. Now I am running the deletion job. One archive deletion session completed successfully. The other two jobs were cancelled due to the weekly system restart. I tried scheduling a new deletion via SARA, and it runs successfully in test mode. When I uncheck test mode and run it, the jobs are cancelled with the following error:
    "Step 001 started (program S3VBAKDL, variant ZZCVB
    Archive file 000144-002SD_VBAK is being verified 
    Archive file 000144-002SD_VBAK is being processed
    New fill for archive for which fill was not completed
    Text 7600293102 ID ZH06 language EN not found    
    Job canceled after system exception ERROR_MESSAGE"
    Can somebody throw some light on how to fix this issue? ZH06 is a document type.
    I am unable to proceed with other objects because of this issue.
    Please help.
    Regards
    Mathi

    Dear Mathi,
    Kindly check the error log file and let us know. Also check the Statistics tab in the SARA transaction, and check the job status (whether it is green or yellow).
    Thanks and warm regards,
    Basavaraj Evani

  • Issue with Scheduling agreements created with reference to Contract

    We create scheduling agreements only with reference to contracts. A contract may have multiple scheduling agreements linked to it. The problem is that when a scheduling agreement is being modified, it creates a lock on all the other scheduling agreements that refer to the same contract. So until the user leaves change mode in that scheduling agreement, no other user is able to perform any transactions on the other scheduling agreements referring to the same contract.
    Is there any way we can address this? Please suggest.

    The record locks are needed to avoid data inconsistencies. If two users could have the same document open in update mode, how would you decide which entry is used for the table update? And what would you tell the other person? ("Oh sorry, another user was quicker, please start all over again.")
    For some high-frequency transactions SAP has developed options like late locks for goods movements, but more often you have to take organizational measures, such as educating the users, or reducing the number of items in a document and creating more documents instead.
    If it is a really big headache, then you have to talk to SAP, explain the situation, and hope that they either already have a solution, or realize that locking everything is not needed and can create a solution via an OSS note.
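The reply's argument — two simultaneous editors with no lock means one user's save silently wins — is the classic lost-update problem. A toy sketch of the optimistic alternative (version checks instead of SAP's enqueue locks; all names here are illustrative):

```python
class Conflict(Exception):
    """Raised when a save is based on a stale read of the document."""

class Document:
    def __init__(self, data):
        self.data = data
        self.version = 0

    def read(self):
        # Each editor receives the payload plus the version it is based on.
        return self.data, self.version

    def save(self, new_data, based_on):
        # Reject the write if someone else saved since `based_on` was read;
        # a pessimistic lock (SAP's approach) prevents ever reaching this point.
        if based_on != self.version:
            raise Conflict("document changed since you read it")
        self.data = new_data
        self.version += 1
```

Optimistic checks let both users open the document but push the conflict to save time, which is exactly the "sorry, please start all over again" experience the reply describes; that trade-off is why SAP locks the shared contract up front.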

  • Reading offline form from mailbox and processing with scheduled job.

    Hi,
    I'm trying to prepare an application that runs on a schedule on the portal.
    This application will get an attached PDF form from a mailbox and parse and process it.
    I can receive the attached PDF form from the mailbox, and I can develop a scheduled application for the portal.
    My problem is parsing the PDF form's fields in Java for use in the scheduled portal app.
    Thanks.

    hi Chandra,
    I store the URL in one of the hidden fields in the form, then copy the JavaScript below into the click event; button control type: Regular.
    Just replace all the "<>" placeholders in the code below.
    // The name of the data connection is taken from the WSDLConnection name in the ConnectionSet packet.
    var sWSDLName = <DataConnection name>;
    // Clone and modify the connection.
    var vConnection = xfa.connectionSet[sWSDLName].clone(true);
    vConnection.soapAddress.value = <url>;
    // Execute the connection without remerging data after the result.
    var ws_rc = vConnection.execute(true);
    For multiple data connections, repeat this code with a different data connection name and its URL address.
    You may use a script object (something like a subroutine).
    Regards,
    Kathy Lau

  • Issue while scheduling job of Delete program of Archiving Object

    Please help me.
    I am working on SARA and clicked on delete program for Purchase Order.
    When the job was executed, i checked logs and i found that there is some error and the job was cancelled.
    The main problem is when i checked the EKKO table, the entries were also deleted which should not happen.
    The Error Log was:
    Job started
    Step 001 started (program RM06ED47, variant SAP&PROD, user ID ABAPUSER)
    Archive file 001273-001MM_EKKO cannot be processed because of its status
    Job cancelled after system exception ERROR_MESSAGE
    Can anyone suggest me what shud i do to avoid such scenario??

    i know that..but  i was working in production mode and the data got deleted when the delete job status = cancelled..
    Usually when delete fails, the data isnt deleted but this time the data got deleted which is wrong i guess..

  • Email delievery Issues with scheduled advanced reports XLS and PDF format

    Has anyone experienced problems with not receiving XLS- or PDF-formatted advanced reports via email?
    Is anyone using the TAW/WEB-INF/web.xml parameters:
    <context-param>
      <param-name>scheduledReportUrl</param-name>
      <param-value>http://reportservers.mydomain.com</param-value>
    </context-param>
    <context-param>
      <param-name>reportServerUrl</param-name>
      <param-value>http://reportservers.mydomain.com</param-value>
    </context-param>
    <context-param>
      <param-name>isReportServer</param-name>
      <param-value>true</param-value>
    </context-param>

    Thanks for the reply, Sebastian.
    I created an APS with only the CVOM service and changed the parameter to Xmx6g, but the result is the same. There is something strange, though: at the time of the delivery and PDF generation on the dedicated APS, the value of the metric "Maximum Memory (MB)" was 5,461 MB, which corresponds to the Xmx6g parameter, but the metric "Total Memory (MB)" was 1,792 MB and "Free Memory (MB)" was 1,052 MB. Is it possible that the APS does not use all the memory that is allocated to it? At the time of generation the server had memory available to grow.
    Thanks,
    Calres
