Automated Monitoring Scheduled Job "In Progress" - PC

Hello Experts,
When I schedule a job for automated control monitoring in NWBC, the job status shows as "In Progress" and the job never finishes executing. The job log is also empty.
Screenshots are as follows.
Could anyone suggest what the reason might be?
Regards,
Ramakrishna Chaitanya

Hi Ramakrishna,
It could be a connector issue.
You can try creating a new connector in the system and maintaining the newly created connector in SPRO under Governance, Risk and Compliance -> Process Control -> Legacy Automated Monitoring -> Automated Testing and Monitoring -> Register Connectors.
Schedule a new job in the planner and check the results.
Regards,
Silky Sharma

Similar Messages

  • Automated Monitoring Scheduled Job not completing - PC

    Hello Experts,
    When we schedule an automated monitoring job, it does not execute completely and the status is shown as "In Progress". Also, when we open the "Job Step Log" tab of the scheduled job in Automated Monitoring, it is blank.
    What could be the reason for it?
    Regards,
    Ramakrishna Chaitanya

    Hi
    Try with Sox export role. Asynchronous mode?

  • Monitor Scheduled Jobs in OEM

    Does anyone use OEM to monitor the success or failure of Scheduled Jobs?
    Can somebody point me in the right direction, either documentation or how you do this at your place of work.
    Documentation out there seems to be a little scarce with regard to this.
    Thanks!

    I should make myself a little clearer. I would like a job failure to automatically notify me - whether by email or any other means.
    The small bit of documentation that I found is in the OEM Advanced Configuration Guide (B16242-01), section 12.4, Passing Job Execution Status Information.
    Thanks.

  • Business Objects Scheduler Jobs Monitoring

    Hi,
    Is there a management pack available for integration with System Centre (SCOM) such that we can monitor failed scheduled jobs in Business Objects via SCOM?
    Is there any other way to monitor failed scheduled Business Objects jobs straight from the database or via any other method?
    Thanks,
    Chandan

    Hi Chandan,
    I'm not a SCOM expert but I understand it is basically a monitoring tool.
    Normally you would check for failed schedules in Business Objects using the Instance Manager, to filter for failed scheduled reports. Alternatively you could use the Query Builder to report directly against the XI3.1 Repository and return a list of failed instances.
    The above workflows can also be coded against the SDK to provide customised feedback on schedule failures.
    The final option would be to set up email notifications on the schedules so that the Administrator is emailed in the event of a schedule failure.
    With any of the above options it should be possible to link in your SCOM system to read/monitor this output.
    I hope this helps.
    Kind regards,
    John

  • How can you schedule or monitor Background jobs from OS level?

    Hello,
    I was recently asked a question in an IBM interview: how can you schedule or monitor background jobs at the OS level? Can anyone please help me?
    regards,
    balaram

    To my knowledge, background jobs in Windows can be scheduled with the Task Scheduler.
    In UNIX flavours, cron jobs or the at command can be used.
    Again, it depends on which OS version you use.
    hope this helps.
    inspire by rewarding

  • Oracle schedule jobs monitoring & restarting if necessary

    Hi Gurus,
    I need to monitor the schedule jobs.
    I can see them under Scheduler icon in SQL Developer.
    But I need to see them by executing SQL so I can get the states and all job details (because I am guessing some jobs are not running properly).
    I tried, as SYS or as the job owner, the following views: ALL_SCHEDULER_JOBS, ALL_SCHEDULER_SCHEDULES, ALL_SCHEDULER_PROGRAMS, ALL_SCHEDULER_JOB_CLASSES, ALL_JOBS; none worked.
    Also, after I see their states, how can I restart some scheduled jobs if they are not enabled or active?
    Thanks
    Amitava.

    amitavachatterjee1975 wrote:
    I tried as SYS or user the following views ALL_SCHEDULER_JOBS, ALL_SCHEDULER_SCHEDULES, ALL_SCHEDULER_PROGRAMS, ALL_SCHEDULER_JOB_CLASSES, ALL_JOBS; none worked.
    "none worked" is not a useful statement.
    I don't know what you have.
    I don't know what you do.
    I don't know what you see.
    It is really, Really, REALLY difficult to fix a problem that cannot be seen.
    Use COPY & PASTE so we can see what you do and how Oracle responds.
    What exactly do you expect/desire to see when the SQL works for you?
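    For illustration, a minimal sketch of the kind of SQL the poster is after, assuming the querying account has been granted access to the DBA_* views (otherwise substitute the USER_*/ALL_* equivalents); MY_JOB is a hypothetical job name:
    -- List scheduler jobs with their current state and next run time.
    SELECT owner, job_name, enabled, state, last_start_date, next_run_date, failure_count
      FROM dba_scheduler_jobs
     ORDER BY owner, job_name;
    -- Check the outcome and error text of recent runs.
    SELECT job_name, log_date, status, error#, additional_info
      FROM dba_scheduler_job_run_details
     WHERE log_date > SYSTIMESTAMP - INTERVAL '7' DAY
     ORDER BY log_date DESC;
    -- Re-enable a disabled job and trigger an immediate run.
    -- 'MY_JOB' is a placeholder; substitute the actual job name (and owner if needed).
    BEGIN
        dbms_scheduler.enable(name => 'MY_JOB');
        dbms_scheduler.run_job(job_name => 'MY_JOB', use_current_session => FALSE);
    END;
    /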

  • DPM 2010 Scheduled Jobs Disappear rather than Run

    I have a situation where I have a DPM server that appears to be functioning fine, but none of the scheduled jobs run.  No errors are given, there are no Alerts, and there is nothing in the Event log (Application and System) which indicates a failure. 
    All my Protection Groups show a green tick to indicate that they are fine, but the last successful backup for all of them is Friday the 8th of February.
    If I go to Monitoring and Jobs I see the jobs scheduled, but when the time comes for a job to run, it does not go into "All jobs in progress"; it just disappears, like this:
    And a few minutes later,
    As you can see, the jobs disappear from the queue, and the total number of jobs decreases accordingly. These jobs do not go into any of the other 3 statuses (Completed, Failed or In Progress); they just disappear without a trace.
    There is some unallocated space, albeit not much (Used space: 21 155,05 GB Unallocated space: 469,16 GB). If space was an issue I would expect to see errors to indicate this.
    DPM 2010 running version 3.0.8193.0 (hotfix rollup package 6) using a remote instance of SQL 2008 which is functioning fine. I have tried stopping/starting the services, and even rebooted the server twice. The remote instance of SQL Server is using a domain account as its service account. There are no pending Windows updates, i.e. it is fully up to date.
    The System Center Data Protection Manager 2010 Troubleshooting Guide (July 2010) does not show how to troubleshoot this particular problem.
    Does anybody know how to resolve this issue or which logs might help me troubleshoot it?

    OK,
    Did you change the SQL Agent user account?
    If so, DPM enters the SQL Agent account name into the registry, and we later check that account each time the DPM engine launches. The internal interfaces to DPM are secured using this account, so the account name needs to match the account the SQL Agent is using.
    Step 1
    In the registry under HKLM\Software\Microsoft\Microsoft Data Protection Manager\Setup, alter both the
    SqlAgentAccountName and SchedulerJobOwnerName keys to reflect the SQL Agent user account being used.
    Step 2
    Update DCOM launch and access permissions to match what was granted to the Microsoft$DPM$Acct account.
    Regards, Mike J. [MSFT]

  • PC: Regulation and Regulation Configuration fields appear blank while scheduling a job

    Hi All,
    The Regulation drop-down does not show any entries while scheduling a job (Automated Monitoring -> Continuous Monitoring Scheduler).
    So I tried to create a regulation, and got the error below while creating a Regulation Configuration.
    I have already been through SAP GRC Process Control - Not Displaying Regulations in Organizations menu.
    I understand that it might be due to incomplete configuration in SPRO, so could anyone advise on this?
    FYI, I have activated all BC sets.
    Regards
    Plaban

    Hello Sahoo,
    Check the note below and perform the technical steps to make the fields available for regulations:
    1799427 - How to make fields available to be selected as regulation specific.
    Regulations will not appear until you assign objects to your organization, such as subprocesses, policies and Indirect Entity-Level Controls that have been assigned to regulations.
    Regards
    Baithi

  • Background Processing? How to schedule a job for "System Error" messages

    Hello everyone,
    In SAP Help I have read:
    http://help.sap.com/saphelp_nw04/helpdata/en/5a/f72040599a8f5ce10000000a155106/frameset.htm
    PCK> Monitoring>Message Monitoring-->Background Processing
    You can schedule jobs for various background processing tasks:
    ●     Archiving of messages processed successfully
    ●     Deletion of messages that are not to be archived
    ●     Restarting of messages with errors
    ●     Rescheduling of lost messages
    Can anyone explain this documentation?
    Please give me some introduction: how can I define and schedule these jobs?
    thx in advance!!
    best regards
    Yaning

    Background Processing
    Prerequisites
    You have started the message monitor on the initial screen of the PCK and are in Background Processing.
    Features
    Archiving
    You require two archiving sessions to archive messages:
    ●     One session to write the messages to the archive
    ●     One session to delete the persisted messages that have been archived
    To do this, you schedule an archiving job, which implicitly schedules the sessions to write to the archive and delete the archived messages.
    You can define one or more rules for each archiving job; these rules contain conditions that a message must meet in order to be archived by the job. At least one of the defined rules must be met for archiving to take place.
    All information that is displayed for a message in message monitoring is archived, in addition to the audit log for each message.
    Deleting
    A standard delete job is created automatically. It runs once a day. You can schedule additional delete jobs; however, you cannot define rules for them.
    Restarting
    Instead of restarting messages with errors manually with message monitoring, you can schedule a job to automatically restart these messages. This is possible for all messages for which the number of defined restart attempts has been exceeded (messages with the system error status).
    You can define one or more rules for each restart job; these rules contain conditions that a message must meet in order to be restarted by the job. At least one of the defined rules must be met for the restart to take place.
    Rescheduling
    A standard job to reschedule messages is created automatically. The job runs once a day and ensures that messages lost as a result of database failure, for example, are rescheduled. You can schedule additional rescheduling jobs; however, you cannot define rules for them.
    Thanks Aamir.
    But I mean the messages with errors in the Adapter Engine, not in the Integration Engine.
    The situation is like the one in Naveen Pandrangi's weblog "XI: How to Re-Process failed XI Messages Automatically", section II, Errors in Adapter Engine:
    Till now we have seen how to resubmit/restart messages that failed in the Integration Engine. Once a message makes it from the Integration Engine to the Adapter Engine, the message is flagged as checked in the Integration Engine. The status of the message in the Adapter Engine does not affect the processed state in the Integration Engine. Now, if this message was asynchronous, XI will by default try to restart the message 3 times at intervals of 5 minutes before the status of the message is changed from Waiting to System Error.
    How can I schedule a job to automatically restart these messages with errors?
    best regards
    Yaning
    Edited by: Yaning Liu on Aug 18, 2008 1:43 PM

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB version is:
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, we have users uploading files, resulting in inserts of records into a table. A file could contain anywhere from 10,000 to 1 million records.
    I have written a procedure to bulk insert these records into this table using the LIMIT clause. After the insert, I noticed my queries run slowly against these tables if huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables which grow based on user file uploads. I would like to schedule a job to gather stats for these two tables during a non-peak hour, in addition to the nightly automated Oracle job.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
    TYPE ttab
    IS
        TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(ownname => USER, tabname => ltab(i),
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto',
                degree           => dbms_stats.auto_degree,
                cascade          => TRUE );
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
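    As an illustration of keeping the default collection job away from these two volatile tables, here is a minimal hedged sketch (one option, not necessarily what Dan suggested): lock the statistics so the automatic job skips them, then gather them from your own job, or right after the bulk load, with force => TRUE. TAB1 and TAB2 are the tables from the original post.
    BEGIN
        -- Stop the automatic stats collection job from touching these tables.
        dbms_stats.lock_table_stats(ownname => USER, tabname => 'TAB1');
        dbms_stats.lock_table_stats(ownname => USER, tabname => 'TAB2');
    END;
    /
    BEGIN
        -- In the manual job (or at the end of the bulk-load procedure),
        -- force => TRUE overrides the lock so the stats are refreshed when you choose.
        dbms_stats.gather_table_stats(ownname => USER, tabname => 'TAB1',
            estimate_percent => dbms_stats.auto_sample_size,
            cascade => TRUE, force => TRUE);
    END;
    /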

  • IDOC Monitoring issue - job BPM_DATA_COLLECTION* not running.

    Hi all,
    We are facing an issue with BPM "IDOC Monitoring" (under application monitoring), which we have setup to monitor Inbound and Outbound Idocs in 2 separate R/3 systems.
    In one system it works fine, and measured values are returned each time the monitor is set to run according to the specified schedule.
    However, for another R/3 system, the monitor has never run, even though the settings are identical in the monitoring setup.
    From reading the Interfaces Monitoring Setup guide, I found that this monitor depends on a job called BPM_DATA_COLLECTION* which runs in the monitored system. In the system where monitoring is functioning correctly, I can see this job is completing successfully at the time the monitor is set to run. All I find is completed jobs - no scheduled or released jobs present.
    However, for the system where the monitoring is not functioning, I found that this job is not running, but that the job is sitting in scheduled status instead.
    When I tested manually running the job in this system, it ran successfully, and in Solution Manager the monitor brought back the measured value, so the monitor only ran successfully when I manually ran the job in the monitored system.
    I read notes 1321015 & 1339657 relating to IDOC monitoring. 1321015 appears to be more relevant, yet it does not exactly describe my issue - it mentions the job BPM_DATA_COLLECTION* failing rather than just remaining in scheduled status which is what I see.
    Anyone else see this issue before?
    On a more general point - the standard BPM Setup guide doesn't really go into much detail on IDOC Monitoring, and makes no mention of what is happening in the background, i.e. the job BPM_DATA_COLLECTION* being created and run as per schedule. This info is found in a separate document "Interface Monitoring Setup Guide".
    Is there any single document which describes fully what happens both in the Solution Manager and the Monitored systems when BPM is activated? For example, to describe which monitors require jobs to be run, which monitors require additional setup in monitored system, etc? A document such as this which describes exactly the process flow for each monitor would be very useful in troubleshooting issues going forward.
    Thanks,
    John

    Hello John,
    Most probably the user assigned to the corresponding READ RFC connection that connects SolMan with the backend system doesn't have the proper authorization to release a job. That's why the job is only created/scheduled but not released. Verify that the RFC user on the backend has the latest CSMREG profile assigned according to SAP note 455356.
    You can also check whether the latest ST-PI support package is installed on your backend system, as ST-PI usually contains the latest definition of CSMREG.
    Best Regards
    Volker

  • Disable scheduler jobs during the database refresh

    Chaps,
    I have a strange issue. We have certain scheduled jobs which monitor other jobs and, when those aren't running, send emails using utl_smtp to the whole DBA group. All is working fine on Production, but the moment we restore the database to QA, soon after the database is recovered it sends an email saying that those jobs aren't running.
    How do I disable the scheduler jobs? Can it be done while the database is in mount state? Or is there any parameter to do so?

    Hi,
    Although you can't disable the entire scheduler, you can disable individual jobs or all jobs in a job class using dbms_scheduler.disable which will prevent the jobs from running (but not stop already running jobs).
    It should be straightforward to have a table of jobs that should be disabled and have procedures which run over the table either disabling or enabling them.
    -Ravi
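    A minimal sketch of what Ravi describes, assuming a hypothetical driver table QA_JOBS_TO_DISABLE that lists the jobs to switch off after a refresh (the scheduler dictionary is not accessible while the database is only mounted, so this has to run once the database is open):
    -- Hypothetical driver table: populate it with the job names to switch off on QA.
    CREATE TABLE qa_jobs_to_disable (job_name VARCHAR2(128));
    CREATE OR REPLACE PROCEDURE disable_listed_jobs AS
    BEGIN
        FOR r IN (SELECT job_name FROM qa_jobs_to_disable) LOOP
            -- force => TRUE disables the job even if an instance of it is currently running
            -- (the running instance itself is not stopped).
            dbms_scheduler.disable(name => r.job_name, force => TRUE);
        END LOOP;
    END disable_listed_jobs;
    /
    -- Run once the refreshed QA database is open:
    EXEC disable_listed_jobs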

  • Problem with  monitor a job

    Hi all,
    I am running a job which loads data from one DSO to another DSO. The update rules have a start routine which extracts data from APO liveCache using 2-3 function modules for some characteristics of the target DSO.
    Earlier this load took 1.5 hours to finish, but now it takes more than 4 hours, and the job occupies all system resources, so other jobs are left hanging.
    I want to know:
    1. How can I monitor the jobs? I mean, how do I monitor the job's system resource utilization (number of processors)?
    2. How can I reduce the load time?
    Thanks
    Sam
    Edited by: Samir Bihari on Nov 12, 2009 10:40 AM

    Hi,
    With job monitoring, you can monitor individual jobs in the Alert Monitor with regard to status and runtime. This is an important extension of the monitoring of the background runtime environment (see Background Processing Monitor), in which the general status of the SAP background processing is monitored (for example, with regard to utilization or job terminations).
    The job monitoring is used to monitor jobs that run regularly, that are always scheduled using a particular name or name pattern (such as a fixed job name with a timestamp). A job of this type is called a job chain.
    For more info go through the below link
    http://help.sap.com/saphelp_nw70/helpdata/en/60/cd49ff274aa240a7291286ec797618/content.htm
    If your data loads are independent of each other, you can schedule them in parallel, which will reduce your data load time compared to a serial load. When you are loading data in parallel, you have to keep your background work processes in mind. If you have three work processes and four parallel data loads in a process chain, the fourth will wait until it gets a free work process.
    The staging process of any significant volume of data into SAP BW presents challenges to system resource utilization and timeliness of data. This session discusses the causes of data load performance issues, highlights the troubleshooting process, and offers tuning solutions to help maximize throughput. Many aspects of data load performance analysis and tuning are covered including extraction, packaging, transformation, parallel processing, as well as change run and aggregate rollup.
    For more info go through the below link
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b&overridelayout=true
    Regards,
    Marasa.

  • Auto alert mechanism for ATG scheduled job failure and BCC project failure

    Hello all,
    Could you please confirm if there are auto alert mechanisms for ATG scheduled job failure and BCC project failure?
    Waiting for reply.
    Thanks and regards,

    Hi,
    You need to write custom code to get alerts if an ATG Scheduler fails.
    For BCC project deployment monitoring, please refer to the below link in the documentation.
    Oracle ATG Web Commerce - Configure Deployment Event Listeners
    Thanks,
    Gopinath Ramasamy

  • Bat file for running scheduled jobs

    Hi
    I am not entirely sure whether this is the correct forum to post this question, so apologies if I have posted this question in the wrong place.
    Anyhow, I would like to know how to create two automated bat file scripts that will execute a PL/SQL package that will tell my Oracle 10g R2 database to run a scheduled job. Equally, I need another bat file to tell it to drop the scheduled job.
    I already have a PL/SQL package that creates a scheduled job using dbms_scheduler, and I can execute the scheduled job by going into SQL*Plus and running the EXECUTE command against the package. It is this latter bit, the execute part, that I want to automate in a bat file.
    Can someone show me how to do this?

    I'm in complete agreement with Hans. Oracle has two facilities, DBMS_JOB and DBMS_SCHEDULER, neither of which benefits in any way from being called by a batch file.
    What is your version number (all of it) and why are you considering this idea?
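    To illustrate the point: once the job exists in the scheduler, it can be run and dropped entirely from inside the database, so no bat wrapper is strictly needed. A minimal sketch, using a hypothetical job name MY_PKG_JOB (SQL*Plus or any SQL client can execute these blocks):
    BEGIN
        -- Run the existing scheduler job on demand.
        dbms_scheduler.run_job(job_name => 'MY_PKG_JOB', use_current_session => TRUE);
    END;
    /
    BEGIN
        -- Remove the job when it is no longer needed; force => TRUE stops a running instance first.
        dbms_scheduler.drop_job(job_name => 'MY_PKG_JOB', force => TRUE);
    END;
    /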
