Session purge jobs

A recent upgrade from 3.1.2 to 4.0.1 went fine, but the two database jobs that purge old sessions and push the email queue are still pointing to the 3.1 version.
I followed Jason's instructions in "Re: 3.2 upgrade error" but they didn't quite work. Logging in as FLOWS_030100 was required to successfully execute the dbms_job.remove step, so the jobs are now gone.
But running apex_040000.wwv_flow_upgrade.create_jobs('APEX_040000') as both SYS and APEX_040000 reports 'PL/SQL procedure successfully completed.', yet the jobs are not created.
Help? Thanks

Joel - Sorry, I should have included that information in my original post. When I run
SELECT upgrade_date, upgrade_sequence, upgrade_action, upgrade_error, upgrade_command
  FROM apex_040000.wwv_flow_upgrade_progress
 WHERE upgrade_error IS NOT NULL
 ORDER BY upgrade_date DESC, upgrade_sequence
I get this: http://screencast.com/t/w5x1HkoBfwI
ORA-27477: "APEX_040000.ORACLE_APEX_PURGE_SESSIONS" already exists.
But when I do SELECT min(session_created) FROM  apex_workspace_sessions I see sessions created more than 10 days ago.
When I do
SELECT owner, object_name, object_type, created, last_ddl_time
  FROM dba_objects
 WHERE object_name = 'ORACLE_APEX_PURGE_SESSIONS'
I get http://screencast.com/t/8NFrJb5LCH
Is there something on the database scheduler side that needs to be turned on? Are the jobs supposed to be visible in DBA_JOBS? I am sorry, but I haven't gotten up to speed on the new scheduling features introduced in Oracle 10g; my knowledge is limited to the older DBMS_JOB interface. Any help appreciated. Thanks
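
For reference, the ORA-27477 in the upgrade log suggests a leftover DBMS_SCHEDULER job from an earlier run. A hedged sketch of how one might clear it and retry, with the job and schema names taken from the post (verify against your own environment before running anything):

```sql
-- Drop the leftover scheduler job reported by ORA-27477, then retry.
BEGIN
  DBMS_SCHEDULER.DROP_JOB(
    job_name => 'APEX_040000.ORACLE_APEX_PURGE_SESSIONS',
    force    => TRUE);
END;
/
EXEC apex_040000.wwv_flow_upgrade.create_jobs('APEX_040000')

-- APEX 4.0 uses DBMS_SCHEDULER, so the jobs appear in DBA_SCHEDULER_JOBS,
-- not in the older DBA_JOBS view:
SELECT owner, job_name, enabled, state, last_start_date
  FROM dba_scheduler_jobs
 WHERE job_name LIKE 'ORACLE_APEX%';
```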

Similar Messages

  • Cancel Auto-Purge Job?

    Is there a way to cancel an Auto-Purge job? The process is a background process (MMON), so I can't kill it with a kill-session command.

    I started with that note over a week ago. The cancel task worked for the shrink task, but I finally killed the delete task after 2 days because it had generated 33 GB of undo and I couldn't tell when it might finish. (It was trying to remove over 30 GB of data from wri$_adv_objects.)
    I've been struggling with this problem for a while now and finally decided to just turn off logging on wri$_adv_objects and its index and delete from wri$_adv_objects (over 187 million rows) where task_id = 14128 (since that's the SQL that was showing when I tried dbms_advisor.delete_task) - this is the shrink task id that is taking up so much space.
    That was going well yesterday (I had the delete statement in a loop, deleting 100k rows at a time and then committing), but then this auto-purge job kicked in last night and put a lock on wri$_adv_objects, so I had to kill my little delete script. I can't restart it until this auto-purge job finishes; in the meantime it's generating redo at an enormous rate.
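
    The batched delete loop described above can be sketched as follows (the task_id and batch size are taken from the post; note that wri$_adv_objects is a SYS-owned dictionary table, so treat this purely as an illustration of the commit-in-batches pattern, not a recommendation to run DML against the data dictionary):

```sql
-- Delete in batches of 100k rows, committing between batches to cap undo.
DECLARE
  l_deleted PLS_INTEGER;
BEGIN
  LOOP
    DELETE FROM wri$_adv_objects
     WHERE task_id = 14128
       AND ROWNUM <= 100000;
    l_deleted := SQL%ROWCOUNT;
    COMMIT;
    EXIT WHEN l_deleted = 0;
  END LOOP;
END;
/
```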

  • LMS 3.2 Syslog purge job failed.

    We installed LMS 3.2 on Windows Server 2003 Enterprise Edition. The RME version is 4.3.1. We created a daily syslog purge job from RME > Admin > Syslog > Set Purge Policy. This job fails with the following error:
    "Drop table failed: SQL Anywhere Error -210: User 'DBA' has the row in 'SYSLOG_20100516' locked 8405 42W18
    [ Sat Jun 19  01:00:07 GMT+05:30 2010 ],ERROR,[main],Failed to purge syslogs"
    After a server restart the job will again execute normally for a few days.
    Please check the attached log file of the specified job.
    Regards
    Regards

    The only reports of this problem in the past have been with two jobs being created during an upgrade. This would not have been something you did yourself. I suggest you open a TAC service request so a database locking analysis can be done to find which process is keeping that table locked.

  • LMS 3.2 DFM Fault History purge Job

    Hey experts,
    the online Help for LMS 3.2 says "Data for Fault History remains in the DFM database for 31 days".
    How can I increase or reduce the number of days the data remains in the DFM Fault History database?
    Is it possible to keep only 10 days in Fault History, or can I set it up to 60 days?
    Is there a specific config file that could be edited for my purposes?
    I know about the purge job for Fault History, but it purges data older than 31 days and I can't adjust this job.
    thx,
    Patrick

    No, this is immutable.  The only thing you can change is the TIME at which the purge happens.

  • DB purge jobs

    Hello,
    I have a live UCCE 8.0 system.
    Some tables, like the TerminationCallDetail table, are being deleted every month. Is this normal behaviour? If I expand the size of the HDS, will the purge happen less often?
    Does somebody have a reference or a few details on this matter?
    Thanks,
    Justine.

    Hi Justine,
    I don't know the size of your contact center, but I believe that 3 GB is not enough for your HDS DB.
    In fact, once the database is more than 80% full, ICM starts to purge data from the HDS. The issue is that it doesn't clear the oldest records first: it deletes records based on the alphabetical order of the tables' names (a little weird, I find).
    I think you need to expand the HDS Database size. This can be done using the ICMDBA tool from the HDS server.
    This link may help you. It shows how to expand the HDS DB size using ICMDBA :
    http://www.cisco.com/en/US/products/sw/custcosw/ps1001/products_tech_note09186a0080094927.shtml
    I hope this helps.
    Best Regards.
    Wajih.

  • ESYU: How to purge WIP Work Orders or Discrete Jobs in Canceled status

    Purpose
    Oracle Work in Process - Version: 11.5.2 to 11.5.10
    Some WIP users want to purge Discrete Jobs or WIP Work Orders that are in Canceled status.
    However, the standard form can only purge closed discrete jobs.
    According to the WIP user manual, only information about discrete jobs closed within an already-closed accounting period can be purged.
    The following is a workaround.
    Solution
    Work Around:
    1. Navigate to WIP: Discrete > Close Discrete Jobs > Close Discrete Jobs (Form).
    2. Enter the Find Discrete Jobs parameters as follows:
    From Job = the job you want to close
    To Job = the job you want to close
    Status = Canceled
    3. From the Tools menu, select Close 1.
    Selecting Close 1 closes the canceled Discrete Job.
    Remember and note the job number at this point.
    4. When the GL and Inventory accounting periods are closed, the canceled discrete jobs (now in Closed status) can be purged using the Purge Jobs form:
    WIP: Discrete > Purge Discrete Jobs > Purge Discrete Jobs form or SRS
    Reference
    Note 429489.1

  • ORA-00600 in AQ background job

    Hi all,
    I am getting an ORA-00600 in AQ background job responsible for dequeuing messages.
    The trace output is as follows:
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [kwqidjqp0], [], [], [], [], [], [], []
    Current SQL statement for this session:
    DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN := FALSE; BEGIN sys.dbms_aqadm_sys.register_driver(); :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    6713D69C 7249 package body SYS.DBMS_AQADM_SYS
    6713D69C 7280 package body SYS.DBMS_AQADM_SYS
    66EB50B4 1 anonymous block
    ----- Call Stack Trace -----
    calling call entry argument values in hex
    location type point (? means dubious value)
    I have job_queue_processes set to 4 and aq_tm_processes set to 2.
    Has anyone seen this? Please help.
    Thanks,
    Atul

    Maybe a bit late to help the original poster, but just for reference, this looks very similar to the following described in Metalink:
    Bug 3033868 OERI[kwqidjqp0] from DBMS_AQADM_SYS.REGISTER_DRIVER
    This note gives a brief overview of bug 3033868.
    The content was last updated on: 20-JAN-2004
    Affects:
    * Range of versions believed to be affected: versions < 10.1.0.2
    * Versions confirmed as being affected:
      - 9.0.1.4
      - 9.2.0.4
    * Platforms affected: Generic (all / most platforms affected)
    Fixed:
    * This issue is fixed in:
      - 9.2.0.5 (Server Patch Set)
      - 10.1.0.2 (Base Release)
    Edited by: Anthony Maslowski on Feb 5, 2010 1:26 PM

  • RME 4.0.6 - Archive Purge not working

    Hi,
    I have noticed that our archive purge job is failing completely, for all devices, and it appears to have been failing for some time now.
    Looking at the message appearing in the Job Details for one of the devices, I get:
    *** Device Details for mydevice ***
    Protocol ==> Unknown / Not Applicable
    Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Resource Manager Essentials -> Admin -> Config Mgmt -> Archive Mgmt ->Fetch Settings
    The Maximum time to wait for job results per device is set to 120 seconds.
    This may explain why the job takes several days to run if every device is timing out after 2 minutes.
    My assumption, obviously incorrect, was that the purge would just go through the database and look for any archives older than a year, which is what we have set the purge options to. This message implies that RME is trying to connect to each device.
    Is it trying to connect to each device?
    Next question would be where do I start looking so that I can fix this.
    Regards
    Jeff

    Hi Joe
    Thanks for the response. I have made the change as suggested, then disabled and re-enabled the job, so it is now running as 8597 instead of 1106.
    JobManager.properties:
    ConfigJobManager.jobFileStorage=/rme/jobs
    ConfigJobManager.debugLevel=0
    ConfigJobManager.enableCorba=true
    ConfigJobManager.heapsize=384m
    I set this running at 9:50 this morning, and with the time now at 4:14 in the afternoon, there still does not seem to be a lot happening. If I look at the job details, all devices are in the Pending list - none Successful or Failed.
    If I look at the ..\CSCOpx\files\rme\jobs\ArchivePurge\8597\1\log file, I have:
    [ Mon May 24  09:50:08 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutor,getJobExecutionImpl,160, Executor implementation class com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor
    [ Mon May 24  09:50:08 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,setJobInfo,167,DcmaJobExecutor: Initializing 8597
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutor,,119,Job listener for daemon manager messages started
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[8597],com.cisco.nm.rmeng.config.ccjs.executor.dmgtJobRunner,run,31, DMGT Job Listener running..
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,583,Notification policy is ON, recipients :
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,603,Execution policy is : Parallel Execution
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,616,Getting managed devices list from DM
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.util.db.DatabaseConnectionPool,getConnection,59,Inside ICSDatabaseConnection, MAX_COUNT =20
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[Thread-4],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,run,55,ESS Message listener started
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,626,Num Threads = 1, Task = Purge Archive, Num Devices = 2696
    [ Mon May 24  09:50:25 NZST 2010 ],ERROR,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,sendMail,949,sendEmailMessage: Null recipient list
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,run,212,Job initialization complete, starting execution
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,,113,Constructing ExecutorThread DcmaJobExecThread 0
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,handleMultiDeviceExecution,143,JobExecutorThread - MultiDeviceExec DcmaJobExecThread 0 : Running
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,purgeArchive,706,Purging Archive....
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,177,PURGE SETTINGS: ByVersion = false ByAge = true PurgeLabelledFiles = false
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,201,Purge files between start time and Fri May 29 09:50:26 NZST 2009
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,207,Num Versions to keep = 0
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,208,Purge files older than 12 Months
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,handleMultiDeviceExecution,149,Completed executeJob(), updating Results
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,getNumCyclesToPoll,1018,getNumCyclesToPoll Function Started.
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,updateMultiDeviceExecResults,781,Awaiting Job results: req Id = 0 Poll time = 2022 min(s)
    [ Mon May 24  12:39:42 NZST 2010 ],INFO ,[Tibrv Dispatcher],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,onMessage,66,Listener waiting for message :
    [ Mon May 24  12:42:48 NZST 2010 ],INFO ,[Tibrv Dispatcher],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,onMessage,66,Listener waiting for message :
    Is this normal? Most of the other jobs at least give you an updated success or failure status for the devices.
    Regards
    Jeff

  • Difference: Job run in foreground, job run in background and batch job

    Hi  Gurus,
    Can you please help me understand the differences between a job run in the foreground, a job run in the background, and a batch job? Do foreground jobs run on the presentation server? Do background jobs and batch jobs run on the application server?
    Thanks,
    Kumar

    Running a job in the foreground may cause it to crash or fail if it is too big, or if the server is busy and it takes too long. It also ties up one SAP session while it runs.
    A background job runs based on the status of the requested server, does not take up your SAP session, and does not normally fail.
    You can check the result in transaction SM37.
    My experience shows that a big report normally runs faster in the background than in the foreground.
    Edited by: JiQing Zhao on Sep 3, 2010 4:13 AM

  • Session Integrity during Patching

    I was wondering about what logged-in users of an HTML DB application experience if the application is being patched while they are working? I know there has been some discussion on this forum before about the fact that the deployment of a new version of an HTML DB application is effectively atomic, but surely there could be an issue if the user is in the middle of a "flow" through some application logic - e.g. in the middle of a wizard-type application or similar.
    If, say, the patched application now contains some new items on a page the user has been to that are used on a subsequent page, the user's session will not have those items. There could be an issue about data integrity etc., or some failed validation/process that would not normally be able to occur for a user logging in fresh to the newly patched application.
    While we like to minimise downtime, is there a way to forcibly kick users from the application during the patch deployment, or another approach that anyone might suggest? I know that you can set the application to Unavailable, but I believe that if the user doesn't perform any action during the period until the app is Available again, then their session state still persists.
    Sorry if what I'm asking is unclear, but I'd like to make sure we have as watertight a patching process as we can.
    Jon.

    Jon,
    The easiest way to avoid this issue is to take down the web server. Any other "workaround" has the potential for something to slip through the cracks.
    Having said that, you could do one of the following:
    - Run the script late at night, thus minimizing the number of active sessions
    - Purge all sessions, immediately make the application Unavailable, and then upgrade. This is likely the most fool-proof method, as all sessions will become invalid immediately, and as long as you can change the application status faster than it takes for someone to log in, you should be good.
    - Put a notification of the planned downtime on every page of your application, but never actually take down the server. If it's in their face enough, most users will plan on doing something else during the downtime, thus minimizing the number of actual live sessions. (Yes, it's sneaky, but it works...)
    Thanks,
    - Scott -

  • Delete old records in oracle database using jobs

    Hi,
    Is it possible to delete old records in an Oracle database using jobs?
    I need to delete old records on a weekly basis and then rebuild my index.
    Thanks!

    933633, while it is possible to do a great deal with dbms_scheduler, your shop should have a scheduler like CA Unicenter that is used to run the application job schedules. Purge jobs should probably be part of the normal application schedule rather than contained in the database, if your shop has a scheduler in use.
    As far as rebuilding the indexes after the purge, keep in mind that freshly rebuilt indexes often have to split when inserts are performed, because the compacted index blocks do not have room to hold the newly inserted keys in the appropriate locations. So just because you purge weekly does not automatically mean the indexes should be rebuilt weekly. You need to look at the index key DML pattern and at the total percentage of the index that is held by deleted rows.
    HTH -- Mark D Powell --
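
    A minimal DBMS_SCHEDULER sketch of the weekly purge the question asks about; the table, column, and index names here are placeholders, and, as Mark notes, the index should only be rebuilt after checking how much of it is deleted space:

```sql
-- Hypothetical weekly purge job (table/column names are placeholders).
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'WEEKLY_PURGE_OLD_ROWS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM my_app_log WHERE created_date < SYSDATE - 90; COMMIT; END;',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
    enabled         => TRUE);
END;
/

-- Before deciding to rebuild, measure the deleted-row percentage of the index:
ANALYZE INDEX my_app_log_idx VALIDATE STRUCTURE;
SELECT name, lf_rows, del_lf_rows,
       ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
  FROM index_stats;
```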

  • HUM 1.2.0 - Unable to Purge Quick Reports

    Hello,
    When I go to HUM -> Admin -> Purge Details, it shows 'Next Data Purge job scheduled at: -'. Even though I have set the purge for Quick Reports to older than 3 days, the quick reports folder still has files dating back a month and a half. How can I ensure the purge policy executes as configured?
    Thanks.

    Send a screenshot of the HUM -> Admin -> System Preferences -> Purge Details before making changes.
    1. Go to HUM --> Admin --> System Preferences --> Job Purge --> change the "Run Type" to Daily. Change the Data Purge to 3 days, or whatever the desired value is.
    2. Go to Admin --> System Preferences --> Data Purge --> and modify the values for the different reports till it suits your needs and wait for the job to run.

  • CCMS Monitor to Pager for Failed Background Jobs

    Hello Experts,
    I am currently leveraging Central CCMS monitoring to alert us via email whenever a background job fails in production using the MTE Class R3BPServerSpecAbortedJobs.
    I am trying to find a way that I can tweak the monitor to alert me ONLY when specific background jobs fail.
    We want this alert to notify the oncall pager only when a handful of critical jobs fail. Does anyone know how I can delimit this MTE?
    For example, we will be creating jobs that begin with Z_ALERT* that I will tie to an auto-react method that will email a pager.
    Thanks in advance.
    Bill

    Hi Sundara.
    From the following link, you can download step-by-step setup information.
    Please check the following documents:
    http://www.service.sap.com/bpm
    => Media Library => Technical Information
    => 1. Business Process Monitoring - Setup Roadmap
    => 2. Setup Guide - Business Process Monitoring
    Before you start the setup, I recommend you check SAP Note 521820 to ensure you already fulfill the prerequisites.
    Basically, what you have to do is the following:
    1. Describe your business process under the following area.
       (T-CD DSWP => Solution Landscape => Solution landscape maintenance)
    2. Set up a BPMon session.
       (T-CD DSWP => Operation Setup => Solution Monitoring =>
        Business Process Monitoring)
       In the BPMon session, select job monitoring and define the background
       job that you want to monitor.
       In BPMon job monitoring, you can monitor cancellations, delays, durations,
       unexpected parallelization, the job log, and so on.
    I hope this information helps you.
    Best Regards
    Keiji Mishima

  • Saving session state

    Hi Everyone!
    We are looking for an easy-to-use solution for saving the session state as a whole.
    (We are aware that it is possible to save the session state for a single item.)
    Our goal is a single command that restores a whole session into another session. We would like to be able to save a session, purge the session (because of old age), and later restore its session state into a new session.
    any ideas?

    Hi,
    "I was thinking that a pointer to the Session object could be stored in a database on the initial login" - Are you by any chance referring to a pointer similar to the one that is used in C? If so, I guess it is not available in Java.
    One possible solution where you can persist data across sessions is by the use of stateful session beans. Also you could experiment with the different scopes of the object for achieving your goal.
    I hope this helps you. In case I have missed out something please do post again.
    Cheers
    Giri :-)
    Creator Team

  • LMS 3.2 Archive Update Job Failing

    The scheduled archive update job is failing for all devices. Every one that I've checked is failing with the same message:
    Execution Result:
    Unable to get results of job  execution for device. Retry the job after increasing the job result wait  time using the option:Resource Manager Essentials -> Admin ->  Config Mgmt -> Archive Mgmt ->Fetch Settings
    This setting is at 120 seconds. I've tried adjusting it and get the same results.
    Attaching job logs from most recent failure.
    Thanks for any help.

    Hi ,
    Archive purge can fail for many reasons. I can suggest a few things; if they do not work, you can open a TAC case for troubleshooting.
    Try this:
    Increase ConfigJobManager.heapsize to "1024m" in the following file:
    NMSROOT/MDC/tomcat/webapps/rme/WEB-INF/classes/JobManager.properties (i.e., ConfigJobManager.heapsize=1024m)
    Restart the daemon manager.
    Once the daemon manager has started successfully, go to Admin > Network > Purge Settings > Config Archive Purge Settings, increase "Purge versions older than:" to 12 months (also configure a large value for the number of versions that you would like to keep per device) and trigger the job.
    Once the job is completed, decrease the number of months gradually until the desired number of days and number of versions is reached. This exercise reduces the number of archives loaded into memory during a purge job, which is what causes the job to hang.
    Thanks-
    Afroz
    [Do rate the useful post]
