Critical Action and Role/Profile Analysis job is not running in GRC 5.3

Hi Team,
I am working for a client where GRC 5.3 is installed (Support Pack 4, Patch 1).
The installation is complete and the post-installation processing has also been done.
We have scheduled a periodic (weekly) incremental background job for Critical Action and Role/Profile Analysis.
The following parameter settings are used:
Task: Risk Analysis - Batch
Batch Mode: Incremental
The first run completed successfully on 28 June 2009, including the spool. The next run was supposed to happen on 4 July 2009, but it did not, and the job has remained in the same state since.
I am not able to find any reason why it behaves this way while other incremental jobs are running successfully.
It would be helpful if anyone could guide me towards a solution.
Regards,
Kakali

Hi Varun,
I went to the Job History button. It shows only the following data:
2009-06-28 00:00:59 Done Job Completed successfully
2009-06-27 23:45:00 Started RAR_PE1CLNT100_Critical Action and Role/Profile Analysis started :threadid: 0
Under the Last Run column it shows 28 June (status: Completed).
Under the Next Run Date column it shows 4 July.
Following is the list of updates available from SP05:
1. When executing the critical roles/profile jobs in the background, a message "error while executing the Job: null" comes up. (This one is for the jobs run under the Informer tab.)
2. Background job spools are not available after upgrade from 5.2 to 5.3.
3. Critical action and critical role/profile analysis cannot be run in the background by system. (But in my case it did run once.)
4. Selection parameters (System, User and User Group) have been provided for "Critical Action and Role/Profile Analysis" in Configuration -> Background Job -> Schedule Job. (This means it runs as usual.)
5. The Critical Actions report in detail view shows no results after executing the Risk Analysis job in the background; the same report shows data when executed in the foreground. (This one also applies to the Informer tab.)
6. When there is only one periodic job configured in RAR, this job fails to start after the first time at the specified time. (This is not true in my case, because other periodic jobs are running successfully.)
7. Unable to run the Informer audit reports for critical roles and profiles with logical systems. (This is again under the Informer tab.)
I had already gone through this list, but I am not able to match any of the updates with my problem. If you have any other suggestions, please share them.
Is there any way to check the job log so that I can see what the problem is? The View Log option is also greyed out, as we have SAP Logger set as the default logger parameter. I enabled it just to check, but there was nothing.
Please guide.
Regards,
Kakali
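
Since the View Log button stays greyed out while SAP Logger is the default logger, the job messages can usually still be read from the RAR log files on the server file system. Below is a minimal sketch for scanning such a log for the job name; it assumes the relevant file is one of the ccappcomp*.log files mentioned later on this page, and the file name and path used here are placeholders only.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Minimal sketch: print every log line that mentions the background job.
    // The default file name below is a placeholder; adjust it to the actual RAR/CC log location.
    public class ScanRarLog {
        public static void main(String[] args) throws IOException {
            String logFile = args.length > 0 ? args[0] : "ccappcomp.0.log";
            String needle = args.length > 1 ? args[1] : "Critical Action and Role/Profile";
            BufferedReader in = new BufferedReader(new FileReader(logFile));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains(needle)) {
                    System.out.println(line);
                }
            }
            in.close();
        }
    }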

Similar Messages

  • Critical Action and Role/Profile Analysis

    Hi,
    I want to know the purpose of the Batch Risk Analysis background job "Critical Action and Role/Profile Analysis" in RAR 5.3.
    I am assuming that I need not run this job if I do not want the critical roles/profiles defined in Rule Architect, such as SAP_ALL, to be analysed.
    Please let me know if there is any other purpose for running the background job "Critical Action and Role/Profile Analysis".
    Thank you,
    Partha

    Hello Partha,
      You got this right. It will analyze the defined critical actions/roles/profiles.
    Regards, Varun

  • Full Synch risk analysis job is not running well

    Hi
    Recently we started the user_full_synch_risk_analysis job. It runs, but with many errors.
    I have checked the SLD and it is running, and the JCo connections also tested fine. I checked in the debugger and tried to look at the data; that works.
    But the log says "cannot extract the data from DEV system".
    It skips the risk analysis for each user ID with the above error and continues with the other user IDs. Please help.
    INFO:  Job ID:18 : Exec Risk Analysis
    Sep 27, 2010 7:51:38 AM com.virsa.cc.xsys.riskanalysis.AnalysisEngine performActPermAnalysis
    INFO:  Job ID:18 : Analysis starts: XRPM_PM2
    Sep 27, 2010 7:53:04 AM com.virsa.cc.comp.BgJobInvokerView onActionsearchObj
    INFO: Number of users matched:132
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.meng.MatchingEngine matchActRisks
    WARNING:  Error :
    com.virsa.cc.dataextractor.dao.DataExtractorException: Cannot extract data from system (DEV); for more details, refer to ccappcomp.n.log
         at com.virsa.cc.dataextractor.bo.DataExtractorSAP.searchUser(DataExtractorSAP.java:551)
         at com.virsa.cc.dataextractor.bo.DataExtractorSAP.userIsIgnored(DataExtractorSAP.java:529)
         at com.virsa.cc.xsys.meng.MatchingEngine.getObjActions(MatchingEngine.java:682)
         at com.virsa.cc.xsys.meng.MatchingEngine.matchActRisks(MatchingEngine.java:112)
         at com.virsa.cc.xsys.riskanalysis.AnalysisEngine.performActPermAnalysis(AnalysisEngine.java:1104)
         at com.virsa.cc.xsys.riskanalysis.AnalysisEngine.riskAnalysis(AnalysisEngine.java:297)
         at com.virsa.cc.xsys.bg.BatchRiskAnalysis.performBatchRiskAnalysis(BatchRiskAnalysis.java:1047)
         at com.virsa.cc.xsys.bg.BatchRiskAnalysis.performBatchSyncAndAnalysis(BatchRiskAnalysis.java:1331)
         at com.virsa.cc.xsys.bg.BgJob.runJob(BgJob.java:402)
         at com.virsa.cc.xsys.bg.BgJob.run(BgJob.java:264)
         at com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob.scheduleJob(AnalysisDaemonBgJob.java:240)
         at com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob.start(AnalysisDaemonBgJob.java:80)
         at com.virsa.cc.comp.BgJobInvokerView.wdDoModifyView(BgJobInvokerView.java:436)
         at com.virsa.cc.comp.wdp.InternalBgJobInvokerView.wdDoModifyView(InternalBgJobInvokerView.java:1225)
         at com.sap.tc.webdynpro.progmodel.generation.DelegatingView.doModifyView(DelegatingView.java:78)
         at com.sap.tc.webdynpro.progmodel.view.View.modifyView(View.java:337)
         at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.doModifyView(ClientComponent.java:480)
         at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.doModifyView(WindowPhaseModel.java:551)
         at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.processRequest(WindowPhaseModel.java:148)
         at com.sap.tc.webdynpro.clientserver.window.WebDynproWindow.processRequest(WebDynproWindow.java:335)
         at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:143)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:299)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingStandalone(ClientSession.java:711)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:665)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:232)
         at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:152)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:62)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doGet(DispatcherServlet.java:46)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.riskanalysis.AnalysisEngine riskAnalysis
    WARNING:  Job ID:18 : Failed to run Risk Analysis
    java.lang.Exception: Cannot extract data from system (DEV); for more details, refer to ccappcomp.n.log
         at com.virsa.cc.xsys.meng.MatchingEngine.matchActRisks(MatchingEngine.java:118)
         at com.virsa.cc.xsys.riskanalysis.AnalysisEngine.performActPermAnalysis(AnalysisEngine.java:1104)
         at com.virsa.cc.xsys.riskanalysis.AnalysisEngine.riskAnalysis(AnalysisEngine.java:297)
         at com.virsa.cc.xsys.bg.BatchRiskAnalysis.performBatchRiskAnalysis(BatchRiskAnalysis.java:1047)
         at com.virsa.cc.xsys.bg.BatchRiskAnalysis.performBatchSyncAndAnalysis(BatchRiskAnalysis.java:1331)
         at com.virsa.cc.xsys.bg.BgJob.runJob(BgJob.java:402)
         at com.virsa.cc.xsys.bg.BgJob.run(BgJob.java:264)
         at com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob.scheduleJob(AnalysisDaemonBgJob.java:240)
         at com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob.start(AnalysisDaemonBgJob.java:80)
         at com.virsa.cc.comp.BgJobInvokerView.wdDoModifyView(BgJobInvokerView.java:436)
         at com.virsa.cc.comp.wdp.InternalBgJobInvokerView.wdDoModifyView(InternalBgJobInvokerView.java:1225)
         at com.sap.tc.webdynpro.progmodel.generation.DelegatingView.doModifyView(DelegatingView.java:78)
         at com.sap.tc.webdynpro.progmodel.view.View.modifyView(View.java:337)
         at com.sap.tc.webdynpro.clientserver.cal.ClientComponent.doModifyView(ClientComponent.java:480)
         at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.doModifyView(WindowPhaseModel.java:551)
         at com.sap.tc.webdynpro.clientserver.window.WindowPhaseModel.processRequest(WindowPhaseModel.java:148)
         at com.sap.tc.webdynpro.clientserver.window.WebDynproWindow.processRequest(WebDynproWindow.java:335)
         at com.sap.tc.webdynpro.clientserver.cal.AbstractClient.executeTasks(AbstractClient.java:143)
         at com.sap.tc.webdynpro.clientserver.session.ApplicationSession.doProcessing(ApplicationSession.java:299)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessingStandalone(ClientSession.java:711)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doApplicationProcessing(ClientSession.java:665)
         at com.sap.tc.webdynpro.clientserver.session.ClientSession.doProcessing(ClientSession.java:232)
         at com.sap.tc.webdynpro.clientserver.session.RequestManager.doProcessing(RequestManager.java:152)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doContent(DispatcherServlet.java:62)
         at com.sap.tc.webdynpro.serverimpl.defaultimpl.DispatcherServlet.doGet(DispatcherServlet.java:46)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.bg.BatchRiskAnalysis performBatchRiskAnalysis
    WARNING: Error: while executing BatchRiskAnalysis for JobId=18 and object(s):XRPM_PM2: Skipping error to continue with next object: Cannot extract data from system (DEV); for more details, refer to ccappcomp.n.log
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.bg.BgJob updateJobHistory
    FINEST: --- @@@@@@@@@@@ Updating the Job History -
    2@@Msg is Error while executing the Job for Object(s) :XRPM_PM2:Cannot extract data from system (DEV); for more details, refer to ccappcomp.n.log
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.bg.dao.BgJobHistoryDAO insert
    INFO: -
    Background Job History: job id=18, status=2, message=Error while executing the Job for Object(s) :XRPM_PM2:Cannot extract data from system (DEV); for more details, refer to ccappcomp.n.log
    Sep 27, 2010 7:58:30 AM com.virsa.cc.xsys.riskanalysis.AnalysisEngine riskAnalysis

    Dear Harikrishna,
    Please mention which Support Pack you are on. This is an identified bug in SP12, especially when synchronizing the users from the backend and doing the risk analysis at the user level. We had the same problem, but for some reason we are now able to do the synchronization from the backend and the management reports are getting updated.
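
    Since the failure comes from DataExtractorSAP, it can also be worth ruling out a broken RFC destination before chasing a support-pack bug. Below is a minimal sketch, assuming SAP JCo 3.x is on the classpath and a destination named DEV has been registered (for example via a DEV.jcoDestination properties file or another destination data provider); none of this is confirmed for the poster's landscape.

        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;
        import com.sap.conn.jco.JCoException;

        // Minimal connectivity check for the destination named in the error message.
        public class PingDev {
            public static void main(String[] args) {
                try {
                    JCoDestination dest = JCoDestinationManager.getDestination("DEV");
                    dest.ping(); // throws JCoException if the RFC connection is broken
                    System.out.println("Destination DEV is reachable.");
                } catch (JCoException e) {
                    System.err.println("Destination DEV failed: " + e.getMessage());
                }
            }
        }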

  • Job is not running in the source system.

    Hi Experts,
    I have an issue because of which I am not able to load data into the DataSources.
    I am in a BI 7.0 environment.
    When I execute the InfoPackage, the total and technical statuses stay yellow.
    In R/3 I found that the job is not running, based on the following statements in SM37:
    Call customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 0 records
    Result of customer enhancement: 0 records
    IDOC: Info IDoc 2, IDoc No. 3136, Duration 00:00:00
    IDoc: Start = 14.05.2009 07:23:39, End = 14.05.2009 07:23:39
    Synchronized transmission of info IDoc 3 (0 parallel tasks)
    IDOC: Info IDoc 3, IDoc No. 3137, Duration 00:00:00
    IDoc: Start = 14.05.2009 07:23:39, End = 14.05.2009 07:23:39
    Job finished
    The job start time and end time are the same, so the job is not running in the source system. Am I right? Let me know if I am wrong.
    It is a standard DataSource, but the statement above says *result of customer enhancement: 0 records*.
    Please help me resolve this issue so that I can load data into BI.
    1. What do I have to do to get the job to run on the source system side?
    2. Should I take any help from Basis?
    Regards
    Vijay

    Hi Rupesh,
    In RSA3 the data is available, but it is not coming to BI.
    Messages from source system
    see also Processing Steps Request
    These messages are sent by IDoc from the source system. Both the extractor itself as well as the service API can send messages. When errors occur, several messages are usually sent together.
    From the source system, there are several types of messages that can be differentiated by the so-called Info-IDoc-Status. The IDoc with status 2 plays a particular role here; it describes the number of records that have been extracted in a source system and sent to BI. The number of the records received in BI is checked against this information.
    The above message is what I am getting in the Details tab.
    Regards
    Vijay

  • IDOC Monitoring issue - job BPM_DATA_COLLECTION* not running.

    Hi all,
    We are facing an issue with BPM "IDOC Monitoring" (under application monitoring), which we have setup to monitor Inbound and Outbound Idocs in 2 separate R/3 systems.
    In one system it works fine, and measured values are returned each time the monitor is set to run according to the specified schedule.
    However, for another R/3 system, the monitor has never run, even though the settings are identical in the monitoring setup.
    From reading the Interfaces Monitoring Setup guide, I found that this monitor depends on a job called BPM_DATA_COLLECTION* which runs in the monitored system. In the system where monitoring is functioning correctly, I can see this job is completing successfully at the time the monitor is set to run. All I find is completed jobs - no scheduled or released jobs present.
    However, for the system where the monitoring is not functioning, I found that this job is not running, but that the job is sitting in scheduled status instead.
    When I tested manually running the job in this system, it ran successfully, and in Solution Manager the monitor brought back the measured value, so the monitor only ran successfully when I manually ran the job in the monitored system.
    I read notes 1321015 & 1339657 relating to IDOC monitoring. 1321015 appears to be more relevant, yet it does not exactly describe my issue - it mentions the job BPM_DATA_COLLECTION* failing rather than just remaining in scheduled status which is what I see.
    Anyone else see this issue before?
    On a more general point - the standard BPM Setup guide doesn't really go into much detail on IDOC Monitoring, and makes no mention of what is happening in the background, i.e. the job BPM_DATA_COLLECTION* being created and run as per schedule. This info is found in a separate document "Interface Monitoring Setup Guide".
    Is there any single document which describes fully what happens both in the Solution Manager and the Monitored systems when BPM is activated? For example, to describe which monitors require jobs to be run, which monitors require additional setup in monitored system, etc? A document such as this which describes exactly the process flow for each monitor would be very useful in troubleshooting issues going forward.
    Thanks,
    John

    Hello John,
    most probably the user assigned to the corresponding RFC READ connection that connects SolMan with the backend system doesn't have proper authorization to release a job. That's why it is only created/scheduled but not released. Verify if the RFC user on the backend has the latest CSMREG profile assigned according to SAP note 455356.
    You can also check if the latest ST-PI support package is installed on your backend system, as ST-PI usually contains the latest definition of CSMREG.
    Best Regards
    Volker

  • Report job does not run at the scheduled time

    OS: Windows Server 2008 R2
    Oracle BI 11g (11.1.1.6.0)
    CT defined a report job:
    ====
    Frequency: Daily
    Start: 2013/01/14 06:00:00 AM
    End: 2020/01/14 11:59:00 PM
    Timezone: [GMT+09:00] Osaka, Tokyo
    ====
    but he found that the report did not run at 06:00:00.
    At 2013/01/16 08:50 AM he logged in to BIP and went to the Report Job History.
    He found that the status of this report was Running and the start processing time was 2013/01/16 08:51:32 AM.
    It seems the report job only ran after the login.
    In CT's environment, the following work is done every day:
    22:00 stop OracleBI  (opmnctl.bat→stopManagedWebLogic.cmd→stopWebLogic.cmd)
    00:00 restart server
    05:45 clear cache (nQCmd.exe)
    05:45 start OracleBI(BI_START.bat)
    06:00 report job does not run (no mail received) --> ★
    Could you tell me why the report job does not run at 06:00 AM?
    Can anyone give me some advice on investigating this issue, and which information is needed?

    Any suggestion on this issue?

  • Scheduled jobs are not running in DPM 2012 R2

    Hi,
    I recently upgraded my DPM 2012 SP1 to 2012 R2. The upgrade went well, but I got 'Connection to the DPM service has been lost' (event ID 917, along with other event IDs in the event log such as 999 and 997). A few DPM backups succeed, but most of the DPM consistency checks fail.
    After investigating the log files I found two SQL Server services running on the DPM 2012 R2 server: the 'SQL Server 2010' and 'SQL Server 2012' services. I stopped the SQL 2010 service and started only the SQL Server 2012 service (using .\MICROSOFT$DPM$Acct).
    Now the DPM console issue (event ID 917) has gone, but a new issue occurred: none of the scheduled jobs are running, although I can run all backups manually without any issues. I am getting the event log errors below.
    Log Name:      Application
    Source:        SQLAgent$MSDPM2012
    Date:          7/20/2014 4:00:01 AM
    Event ID:      208
    Task Category: Job Engine
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      
    Description:
    SQL Server Scheduled Job '7531f5a5-96a9-4f75-97fe-4008ad3c70a8' (0xD873C2CCAF984A4BB6C18484169007A6) - Status: Failed - Invoked on: 2014-07-20 04:00:00 - Message: The job failed.  The Job was invoked by Schedule 443 (Schedule 1).  The last step to
    run was step 1 (Default JobStep).
     Description:
    Fault bucket , type 0
    Event Name: DPMException
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: TriggerJob
    P2: 4.2.1205.0
    P3: TriggerJob.exe
    P4: 4.2.1205.0
    P5: System.UnauthorizedAccessException
    P6: System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal
    P7: 33431035
    P8: 
    P9: 
    P10: 
    Log Name:      Application
    Source:        MSDPM
    Date:          7/20/2014 4:00:01 AM
    Event ID:      976
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      
    Description:
    The description for Event ID 976 from source MSDPM cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event: 
    The DPM job failed because it could not contact the DPM engine.
    Problem Details:
    <JobTriggerFailed><__System><ID>9</ID><Seq>0</Seq><TimeCreated>7/20/2014 8:00:01 AM</TimeCreated><Source>TriggerJob.cs</Source><Line>76</Line><HasError>True</HasError></__System><Tags><JobSchedule
    /></Tags></JobTriggerFailed>
    the message resource is present but the message is not found in the string/message table
    Please help me resolve this error.
    jacob

    Hi,
    I would try to reinstall DPM:
    1. Back up the DPM DB
    2. Uninstall DPM
    3. Install the same DPM version as before
    4. Restore the DPM DB
    5. Run dpmsync.exe -sync
    Finished.
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • Oracle automatic optimizer statistics job is not running after full import

    Hi All,
    I did a full import into our QA database. The import was successful; however, GATHER_STATS_JOB has not run since 18 Sep 2010, even though it is enabled and scheduled. I queried LAST_ANALYZED to check, and it confirmed that the job has not run after 18 Sep 2010.
    Please refer below for the output:
    OWNER JOB_NAME ENABL STATE START_DATE END_DATE LAST_START_DATE NEXT_RUN_D
    SYS GATHER_STATS_JOB TRUE SCHEDULED 18-09-2010 06:00:02
    Oracle defined automatic optimizer statistics collection job
    =======
    SQL> select OWNER, JOB_NAME, STATUS, REQ_START_DATE,
                to_char(ACTUAL_START_DATE, 'dd-mm-yyyy HH24:MI:SS') ACTUAL_START_DATE, RUN_DURATION
           from dba_scheduler_job_run_details
          where job_name = 'GATHER_STATS_JOB'
          order by ACTUAL_START_DATE asc;
    OWNER  JOB_NAME          STATUS     REQ_START_DATE  ACTUAL_START_DATE    RUN_DURATION
    SYS    GATHER_STATS_JOB  SUCCEEDED                  16-09-2010 22:00:00  +000 00:00:22
    SYS    GATHER_STATS_JOB  SUCCEEDED                  17-09-2010 22:00:02  +000 00:00:18
    SYS    GATHER_STATS_JOB  SUCCEEDED                  18-09-2010 06:00:02  +000 00:00:26
    What could be the reason for GATHER_STATS_JOB not running, although it is set to run automatically?
    SQL> select dbms_stats.get_param('AUTOSTATS_TARGET') from dual;
    DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET')
    AUTO
    Does anybody have this kind of experience? Please share.
    I appreciate your responses.
    Regards
    srh

    >So basically you are saying that if none of the tables are changed then GATHER_STATS_JOB will not run, but I see tables are updated and still the job is not running. I did query dba_scheduler_jobs and the state of the job is enabled and scheduled. Please see my previous post for the output.
    >Am I missing anything here? Do I need to look at some parameter settings?
    GATHER_STATS_JOB will run, and if there is any table with roughly a 10 percent change in its data, it will gather statistics on that table. If a table's data has changed by less than 10 percent, it will not gather statistics for it.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    Hope this helps.
    -Anantha
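
    To see which tables have accumulated enough DML to be considered stale under the roughly 10 percent rule described above, the monitoring counters can be flushed and queried. Below is a minimal JDBC sketch, assuming the Oracle thin driver is on the classpath; the connection details are placeholders, not the poster's environment.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class StaleTableCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details -- adjust host, SID, user and password.
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:QA", "system", "password");
                Statement st = con.createStatement();
                // Flush the in-memory DML monitoring counters so the view is current.
                st.execute("BEGIN DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO; END;");
                // Tables with recorded inserts/updates/deletes since the last stats gather.
                ResultSet rs = st.executeQuery(
                        "SELECT table_owner, table_name, inserts, updates, deletes"
                        + " FROM dba_tab_modifications ORDER BY table_owner, table_name");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "." + rs.getString(2)
                            + "  ins=" + rs.getLong(3) + " upd=" + rs.getLong(4)
                            + " del=" + rs.getLong(5));
                }
                con.close();
            }
        }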

  • Scheduled Brio/IR Jobs are not running as expected in EPM 11.1.2.2

    Hi All,
    We are observing that some scheduled jobs are not running as expected; they do not run at the scheduled time.
    Given below are more details.
    We have two jobs running at 6:00 AM daily, based on a single Recurring Time Event, and it has been observed that these jobs are not running on time (e.g. 6:06 AM EST).
    I tried verifying the Consolidated Job Status in the Workspace admin options and the v8_RecurtimeEvent table for any mismatch in the 'Next Run Time', but they had the scheduled date/time of the run. No clue from the EPM logs.
    The interesting observation is that the same jobs run after moving the scheduled time slightly (e.g. from 6:06 AM to 6:07 AM), but again after some days the updated job (6:07 AM) stops running.
    Thanks,

    Put the IR job logs in TRACE:32 mode, which will give more information in the logs. Refer to this document for how to enable TRACE mode: http://docs.oracle.com/cd/E17236_01/epm.1112/epm_install_troubleshooting_1112200.pdf
    Thanks,
    KK

  • Rule set import - Background job did not run

    Hi,
    I am setting up my CC 5.2 production system. I have downloaded the rule set from the dev CC and imported it into production. However, the background job that was generated did not run. I am implementing SAP note 999785 to fix this, but I am wondering what I should do about the rule set. Do I need to delete the rule set and re-import it? As this background job did not run, I notice that the permission rules were not generated.
    Any advice is welcome.
    Thanks

    Hi,
    As the background job never ran and no rules were created, I was able to simply re-import the rule set and let the job run. I tried this and the rules were created successfully.

  • Analysis Engine showing as not running

    Analysis Engine is not running and giving Error:
    Error: getAnalysisEngineStatistics : ct-sensorApp.598 not responding, please check system processes - The connect to the specified Io::ClientPipe failed

    I fixed this issue once using the following procedure:
    https://supportforums.cisco.com/docs/DOC-3589
    If the above procedure or reload does not fix the issue as suggested on the following link:
    https://supportforums.cisco.com/docs/DOC-5121/diff;jsessionid=82FA4EB3696EC0C97B6394F996EEAA5E.node0?secondVersionNumber=2
    You have to contact TAC, as mentioned below:
    http://www.cisco.com/en/US/docs/security/ips/6.0/installation/guide/hwTS.html#wp1122031
    Regards
    Farrukh

  • The SSP Timer Job Distribution List Import Job was not run.

    The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer.
    We are facing the above-mentioned problem. I have added the service account to Backup Operators and did everything like updating the farm credentials, updating the managed account, etc., but still no luck. Please note that we haven't made any changes to the account.
    The application pool keeps going into stopped mode when we try to access the site. Both the CA and the web application are down with the error 'Service Unavailable'. Please help.
    Srini

    Sounds like the account can only be used on certain computers ("has not been granted the requested logon type at this computer")... check Active Directory.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs

  • Scheduled jobs do not run as expected after upgrading to 10.2.0.3 or 4

    FYI, we had a ticket open for several days because our scheduled jobs (dbms_scheduler) would no longer run as scheduled after an upgrade to 10.2.0.4 on HP-UX. We couldn't find the solution by searching Metalink, nor did I find it here or using Google - obviously I wasn't searching correctly. There is a note that references a set of steps that appears to have resolved our problem. I am posting it here so that, if you encounter the same difficulty, you may come across the note earlier in your troubleshooting efforts rather than later. The full title of the note is 'Scheduled jobs do not run as expected after upgrading to either 10.2.0.3 or 10.2.0.4'. The Doc ID is 731678.1.

    Thanks - our ticket should be getting closed out (our DBA will be updating it). The scheduler has been running reliably since we took the steps in the doc mentioned.
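
    To confirm whether the scheduler is picking the jobs up again after applying the steps from that note, the job state and run timestamps can be read from the data dictionary. A minimal JDBC sketch with placeholder connection details, assuming the Oracle thin driver is available:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class SchedulerJobStatus {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details -- adjust for your database.
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "password");
                // State and run timestamps for every DBMS_SCHEDULER job.
                PreparedStatement ps = con.prepareStatement(
                        "SELECT job_name, enabled, state, last_start_date, next_run_date"
                        + " FROM dba_scheduler_jobs ORDER BY job_name");
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println(rs.getString("JOB_NAME")
                            + " enabled=" + rs.getString("ENABLED")
                            + " state=" + rs.getString("STATE")
                            + " last_start=" + rs.getString("LAST_START_DATE")
                            + " next_run=" + rs.getString("NEXT_RUN_DATE"));
                }
                con.close();
            }
        }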

  • I just upgraded to Mountain Lion OS X and now my applications are crossed out and out of date. They will not run. How do I fix this?

    I just upgraded to Mountain Lion OS X and now my applications are crossed out and out of date. They will not run. How do I fix this?

    Here is a recent post I assembled for a similar question:
    Unfortunately you got caught up in the minor miracle of Rosetta. Originally licensed by Apple when it migrated from the PowerPC CPU platform, which it had used since the mid-1990s, to the Intel CPU platform in 2006, Rosetta allowed Mac users to continue to use their library of PPC software transparently in emulation.
    However, Apple's license to continue to use this technology expired with the new releases of OS X commencing with Lion (and now Mountain Lion). While educational efforts have been made over the last 6 years, the fact is that Rosetta was SO successful that many users were caught unaware UNTIL they upgraded to Lion or Mountain Lion.
    Workarounds:
    1. If your Mac will support it, restore OS X Snow Leopard;
    2. If your Mac will support it, partition your hard drive or add an external hard drive, install Snow Leopard on it, and use the "dual-boot" method to choose between your PowerPC software and Lion/Mountain Lion;
    3. Upgrade your software to Intel-compatible versions if they are available, or find alternative software that will open, modify and save your data files;
    4. Install Snow Leopard (with Rosetta) into Parallels.
    Full Snow Leopard installation instructions here:
    http://forums.macrumors.com/showthread.php?t=1365439
    NOTE: Computer games with complex, 3D or fast-motion graphics may not work well, or at all, in virtualization.

  • User, Role, Profile Synchronization Job Fails

    Hi Gurus,
    When I schedule the User, Role, and Profile Synchronization job, it fails with the error
    "Cannot assign a java.lang.String object of length 53 to host variable 5 which has JDBC type VARCHAR(40)."
    This happens when the synchronization runs against a portal system. We don't have a rule set for the portal system, so if I put in a "*" it includes this system and results in the error; if I manually select all the other systems, it works fine. Is there any way to remove this error so that I can schedule the jobs without having to select every system manually?
    Regards,
    Chinmaya

    Hi,
    As per my knowledge, for a portal system you should perform only the user sync. The role/profile sync will not work, since the portal has workset roles.
    Please refer to SAP Note 1168120, which may help you understand the limitations.
    Hope this helps!!
    Rgds,
    Raghu
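
    For reference, the error in the original question simply means that a 53-character value is being bound to a parameter mapped to a VARCHAR(40) column. Below is a hypothetical sketch of the kind of defensive length check that avoids this; the table and column names are made up for illustration and are not the real CC/RAR schema.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class SafeInsert {
            // Column width taken from the error message: VARCHAR(40).
            private static final int MAX_LEN = 40;

            // Truncate oversized values instead of letting the driver reject them.
            static String fit(String value) {
                return (value != null && value.length() > MAX_LEN)
                        ? value.substring(0, MAX_LEN) : value;
            }

            // Illustrative insert; GRC_ROLES/ROLE_NAME are hypothetical names.
            static void insertRole(Connection con, String roleName) throws SQLException {
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO GRC_ROLES (ROLE_NAME) VALUES (?)");
                ps.setString(1, fit(roleName)); // keep the bound value within the 40-character limit
                ps.executeUpdate();
                ps.close();
            }
        }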
