DB purge jobs

Hello,
I have a live UCCE 8.0 system.
Data in some tables, such as the TerminationCallDetail table, is being deleted after about one month. Is this normal behaviour? If I expand the size of the HDS, will the purge happen less often?
Does anybody have a reference or a few details on this matter?
Thanks,
Justine.

Hi Justine,
I don't know the size of your contact center, but I believe that 3 GB is not enough for your HDS database.
In fact, once less than 20% of the database space is free (i.e., it is more than 80% full), ICM starts to purge data from the HDS. The issue is that it does not simply clear the oldest records: it deletes records table by table, in alphabetical order of table name (a little weird, I find).
I think you need to expand the HDS Database size. This can be done using the ICMDBA tool from the HDS server.
This link may help you. It shows how to expand the HDS DB size using ICMDBA :
http://www.cisco.com/en/US/products/sw/custcosw/ps1001/products_tech_note09186a0080094927.shtml
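A quick way to see how full the HDS database is before and after resizing it with ICMDBA is to query it directly. A minimal T-SQL sketch, assuming you can connect to the HDS SQL Server (the database name "ucce_hds" is a placeholder, and the table name follows the standard ICM schema; verify both in your environment):
-- Overall database size and unallocated space (placeholder DB name).
USE ucce_hds;
EXEC sp_spaceused;
-- Oldest retained TCD record, to see how much history survives the purge.
SELECT MIN([DateTime]) AS oldest_tcd_row
FROM Termination_Call_Detail;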
I hope this helps.
Best Regards.
Wajih.

Similar Messages

  • LMS 3.2 Syslog purge job failed.

    We installed LMS 3.2 on Windows Server 2003 Enterprise Edition; the RME version is 4.3.1. We created a daily syslog purge job from RME > Admin > Syslog > Set Purge Policy. This job fails with the following error:
    "Drop table failed:SQL Anywhere Error -210: User 'DBA' has the row in 'SYSLOG_20100516' locked 8405 42W18
    [ Sat Jun 19  01:00:07 GMT+05:30 2010 ],ERROR,[main],Failed to purge syslogs"
    After a server restart the job executes normally again for a few days.
    Please check attached log file of specified job.
    Regards

    The only reports of this problem in the past have been with two jobs being created during an upgrade. This would not have been something you did yourself. I suggest you open a TAC service request so database locking analysis can be done to find which process is keeping that table locked.
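    If you want to take a first look yourself before opening the case, here is a minimal sketch, assuming you can connect to the RME SQL Anywhere database (e.g. with dbisql); sa_locks and sa_conn_info are standard SQL Anywhere system procedures:
    CALL sa_locks();      -- lists current locks, including the table name and the connection holding each lock
    CALL sa_conn_info();  -- details of each connection, to identify the one holding the lock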

  • LMS 3.2 DFM Fault History purge Job

    Hey experts,
    The online help for LMS 3.2 says "Data for Fault History remains in the DFM database for 31 days".
    How can I increase or reduce the number of days the data remains in the DFM Fault History database?
    Is it possible to keep only 10 days of Fault History, or can I set it up to 60 days?
    Is there a specific config file that could be edited for this purpose?
    I know about the purge job for Fault History, but it purges data older than 31 days and I can't adjust this job.
    thx,
    Patrick

    No, this is immutable.  The only thing you can change is the TIME at which the purge happens.

  • Cancel Auto-Purge Job?

    Is there a way to cancel an Auto-Purge job? The process runs under the background process MMON, so I can't kill it using a kill session.

    I started with that note over a week ago. The cancel task worked for the shrink task, but I finally killed the delete task after two days because it had generated 33 GB of undo and I couldn't tell when it might finish (it was trying to remove over 30 GB of data from wri$_adv_objects).
    I've been struggling with this problem for a while now and finally decided to just turn off logging on wri$_adv_objects and its index and delete from wri$_adv_objects (over 187 million rows) where task_id = 14128, since that's the SQL that was showing when I tried dbms_advisor.delete_task; this is the shrink task that is taking up so much space.
    That was going well yesterday (I had the delete statement in a loop deleting 100k rows at a time and then committing), but then this auto-purge job kicked in last night and put a lock on wri$_adv_objects, so I had to kill my little delete script. I can't restart it until this auto-purge job finishes, and in the meantime it's generating redo at an enormous rate.
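    For reference, a minimal PL/SQL sketch of the batched-delete loop described above (task_id 14128 is the value from this thread; deleting directly from a SYS-owned WRI$ table is the poster's own workaround, not a supported procedure, so treat it accordingly):
    BEGIN
      LOOP
        -- Delete in 100k-row batches to keep undo/redo per transaction bounded.
        DELETE FROM sys.wri$_adv_objects
         WHERE task_id = 14128
           AND ROWNUM <= 100000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /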

  • Session purge jobs

    A recent upgrade from 3.1.2 to 4.0.1 went fine, but the two database jobs that purge old sessions and push the email queue are still pointing to the 3.1 version.
    I followed Jason's instructions in Re: 3.2 upgrade error but it didn't quite work. Logging in as FLOWS_030100 was required to successfully execute the dbms_job.remove step, so those jobs are now gone.
    But running apex_040000.wwv_flow_upgrade.create_jobs('APEX_040000') as both SYS and APEX_040000 says 'PL/SQL procedure successfully completed.', yet the jobs are not created.
    Help? Thanks

    Joel - Sorry, I should have included that information in my original post. When I run
    SELECT upgrade_date, upgrade_sequence, upgrade_action, upgrade_error, upgrade_command
      FROM apex_040000.wwv_flow_upgrade_progress
     WHERE upgrade_error IS NOT NULL
     ORDER BY upgrade_date DESC, upgrade_sequence
    I get this: http://screencast.com/t/w5x1HkoBfwI
    ORA-27477: "APEX_040000.ORACLE_APEX_PURGE_SESSIONS" already exists.
    But when I do SELECT min(session_created) FROM  apex_workspace_sessions I see sessions created more than 10 days ago.
    When I do
    SELECT owner, object_name, object_type, created, last_ddl_time
      FROM dba_objects
     WHERE object_name = 'ORACLE_APEX_PURGE_SESSIONS'
    I get http://screencast.com/t/8NFrJb5LCH
    Is there something on the database scheduler side that needs to be turned on? Are the jobs supposed to be visible in DBA_JOBS? I'm sorry, but I haven't got up to speed on the new scheduling features introduced in Oracle 10g; my knowledge is limited to the older DBMS_JOB interface. Any help is appreciated. Thanks
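    For what it's worth, the ORA-27477 suggests the purge job was created through DBMS_SCHEDULER rather than the older DBMS_JOB interface, so it would appear in DBA_SCHEDULER_JOBS rather than DBA_JOBS. A quick check, as a sketch against the standard data dictionary views (not from the original thread):
    SELECT owner, job_name, enabled, state, last_start_date, next_run_date
      FROM dba_scheduler_jobs
     WHERE job_name = 'ORACLE_APEX_PURGE_SESSIONS';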

  • ESYU: How to purge WIP Work Orders or Discrete Jobs in Canceled status

    Purpose
    Oracle Work in Process - Version: 11.5.2 to 11.5.10
    Some WIP users want to purge Discrete Jobs or WIP Work Orders that are in Canceled status.
    However, the standard form can only purge closed discrete jobs.
    According to the WIP user manual, only information for discrete jobs that were closed within an already-closed accounting period can be purged.
    The following is a workaround.
    Solution
    Workaround:
    1. Navigate to WIP: Discrete > Close Discrete Jobs > Close Discrete Jobs (Form).
    2. In the Find Discrete Jobs parameters, enter the following (a query to list Canceled jobs is sketched after these steps):
    From Job = the job you want to close
    To Job = the job you want to close
    Status = Canceled
    3. From the Tools menu, select Close 1. Selecting Close 1 closes the canceled Discrete Job; note the job number at this point.
    4. When the GL and Inventory accounting periods are closed, the canceled discrete jobs (now in Closed status) can be purged using the Purge Discrete Jobs form:
    WIP: Discrete > Purge Discrete Jobs > Purge Discrete Jobs form or SRS
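    As a convenience for step 2, a hedged SQL sketch to list discrete jobs currently in Canceled status (wip_discrete_jobs and wip_entities are standard WIP tables; status_type 7 is the usual code for Cancelled, but verify the lookup values in your own environment before relying on them):
    -- List Cancelled discrete jobs so their names can be entered in the
    -- Close Discrete Jobs form (status code assumed; verify in your instance).
    SELECT we.wip_entity_name, wdj.organization_id, wdj.status_type
      FROM wip_discrete_jobs wdj
      JOIN wip_entities we ON we.wip_entity_id = wdj.wip_entity_id
     WHERE wdj.status_type = 7;   -- 7 = Cancelled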
    Reference
    Note 429489.1

  • RME 4.0.6 - Archive Purge not working

    Hi,
    I have noticed that our archive purge job is failing completely, for all devices, and it appears to have been failing for some time now.
    Looking at the message appearing in the Job Details for one of the devices, I get:
    *** Device Details for mydevice ***
    Protocol ==> Unknown / Not Applicable
    Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option:Resource Manager Essentials -> Admin -> Config Mgmt -> Archive Mgmt ->Fetch Settings
    The Maximum time to wait for job results per device is set to 120 seconds.
    This may explain why the job takes several days to run if every device is timing out after 2 minutes.
    My assumption, obviously incorrect, is that the purge would just go through the database and look for any archives older than a year, which is what we have set the purge options to be. This message implies that RME is trying to connect to each device.
    Is it trying to connect to each device?
    Next question would be where do I start looking so that I can fix this.
    Regards
    Jeff

    Hi Joe
    Thanks for the response. I have made the change as suggested and then disabled and re-enabled the job, so it is now running as job 8597 instead of 1106.
    JobManager.properties:
    ConfigJobManager.jobFileStorage=/rme/jobs
    ConfigJobManager.debugLevel=0
    ConfigJobManager.enableCorba=true
    ConfigJobManager.heapsize=384m
    I set this running at 9:50 this morning, and with the time now at 4:14 in the afternoon there still does not seem to be much happening. If I look at the job details, all devices are in the Pending list - none Successful or Failed.
    If I look at the ..\CSCOpx\files\rme\jobs\ArchivePurge\8597\1\log file I have:
    [ Mon May 24  09:50:08 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutor,getJobExecutionImpl,160, Executor implementation class com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor
    [ Mon May 24  09:50:08 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,setJobInfo,167,DcmaJobExecutor: Initializing 8597
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutor,,119,Job listener for daemon manager messages started
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[8597],com.cisco.nm.rmeng.config.ccjs.executor.dmgtJobRunner,run,31, DMGT Job Listener running..
    [ Mon May 24  09:50:10 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,583,Notification policy is ON, recipients :
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,603,Execution policy is : Parallel Execution
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,616,Getting managed devices list from DM
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.util.db.DatabaseConnectionPool,getConnection,59,Inside ICSDatabaseConnection, MAX_COUNT =20
    [ Mon May 24  09:50:11 NZST 2010 ],INFO ,[Thread-4],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,run,55,ESS Message listener started
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,626,Num Threads = 1, Task = Purge Archive, Num Devices = 2696
    [ Mon May 24  09:50:25 NZST 2010 ],ERROR,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,sendMail,949,sendEmailMessage: Null recipient list
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,run,212,Job initialization complete, starting execution
    [ Mon May 24  09:50:25 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,,113,Constructing ExecutorThread DcmaJobExecThread 0
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,handleMultiDeviceExecution,143,JobExecutorThread - MultiDeviceExec DcmaJobExecThread 0 : Running
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,purgeArchive,706,Purging Archive....
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,177,PURGE SETTINGS: ByVersion = false ByAge = true PurgeLabelledFiles = false
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,201,Purge files between start time and Fri May 29 09:50:26 NZST 2009
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,207,Num Versions to keep = 0
    [ Mon May 24  09:50:26 NZST 2010 ],INFO ,[Thread-8],com.cisco.nm.rmeng.dcma.client.ConfigArchivePurger,purgeConfigs,208,Purge files older than 12 Months
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,handleMultiDeviceExecution,149,Completed executeJob(), updating Results
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,getNumCyclesToPoll,1018,getNumCyclesToPoll Function Started.
    [ Mon May 24  10:33:50 NZST 2010 ],INFO ,[Thread-7],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecThread,updateMultiDeviceExecResults,781,Awaiting Job results: req Id = 0 Poll time = 2022 min(s)
    [ Mon May 24  12:39:42 NZST 2010 ],INFO ,[Tibrv Dispatcher],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,onMessage,66,Listener waiting for message :
    [ Mon May 24  12:42:48 NZST 2010 ],INFO ,[Tibrv Dispatcher],com.cisco.nm.rmeng.config.ccjs.executor.CfgJobExecutionESSListener,onMessage,66,Listener waiting for message :
    Is this normal? Most of the other jobs at least give you an updated success or failure status for the devices.
    Regards
    Jeff

  • Delete old records in oracle database using jobs

    Hi,
    Is it possible to delete old records in an Oracle database using jobs?
    I need to delete old records on a weekly basis and then rebuild my indexes.
    Thanks!

    933633, while it is possible to do a great deal with dbms_scheduler, your shop should have a scheduler like CA Unicenter that is used to run the application job schedules. If your shop has such a scheduler in use, purge jobs should probably be part of the normal application schedule rather than contained in the database.
    As for rebuilding the indexes after the purge, keep in mind that freshly rebuilt indexes often have to split when inserts are performed, because the compacted index blocks do not have room to hold the newly inserted keys in the appropriate locations. So purging weekly does not automatically mean the indexes should be rebuilt weekly. You need to look at the index key DML pattern and at the percentage of the index held by deleted rows.
    HTH -- Mark D Powell --
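    If the purge does end up living inside the database, here is a minimal dbms_scheduler sketch of a weekly purge job (the table name my_log, the created column, and the 90-day cutoff are hypothetical placeholders, not from the original post):
    BEGIN
      DBMS_SCHEDULER.create_job(
        job_name        => 'WEEKLY_PURGE_MY_LOG',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DELETE FROM my_log WHERE created < SYSDATE - 90; COMMIT; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2',
        enabled         => TRUE,
        comments        => 'Weekly purge of rows older than 90 days');
    END;
    /
    Index rebuilds, if still warranted after reviewing the points above, can be scheduled as a separate job rather than bundled into the purge.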

  • HUM 1.2.0 - Unable to Purge Quick Reports

    Hello,
    When I go to HUM -> Admin -> Purge Details, it shows 'Next Data Purge job scheduled at: -'. Even though I have set the purge for Quick Reports to older than 3 days, the quick reports folder still has files dating back a month and a half. How can I ensure the purge policy executes as configured?
    Thanks.

    Send a screenshot of the HUM -> Admin -> System Preferences -> Purge Details before making changes.
    1. Go to HUM --> Admin --> System Preferences --> Job Purge --> Change the "Run Type" to Daily. Change the Data Purge to 3 days or whatever the desired value is .
    2. Go to Admin --> System Preferences --> Data Purge --> and modify the values for the different reports till it suits your needs and wait for the job to run.

  • LMS 3.2 Archive Update Job Failing

    The scheduled archive update job is failing for all devices. Every one that I've checked is failing with the same message:
    Execution Result:
    Unable to get results of job execution for device. Retry the job after increasing the job result wait time using the option: Resource Manager Essentials -> Admin -> Config Mgmt -> Archive Mgmt -> Fetch Settings
    This setting is at 120 seconds. I've tried adjusting this setting and get the same results.
    Attaching job logs from most recent failure.
    Thanks for any help.

    Hi,
    Archive purge can fail for many reasons. I can suggest a few things; if they do not work, you can open a TAC case for troubleshooting.
    Try this:
    1. Increase ConfigJobManager.heapsize to "1024m" in the following file: NMSROOT/MDC/tomcat/webapps/rme/WEB-INF/classes/JobManager.properties (i.e., ConfigJobManager.heapsize=1024m).
    2. Restart the daemon manager.
    3. Once the daemon manager has started successfully, go to Admin > Network > Purge Settings > Config Archive Purge Settings, increase "Purge versions older than:" to 12 months (also configure a large value for the number of versions to keep per device) and trigger the job.
    4. Once the job has completed, decrease the number of months gradually until you reach the desired number of days and versions. This exercise reduces the number of archives loaded into memory during a purge job, which is what causes the job to hang.
    Thanks-
    Afroz
    [Do rate the useful post]

  • RME 4.0.6 - Baseline Compliance jobs now failing

    I have now started getting the following error when trying to run a baseline comparison job:
    In the Job Browser I get Unknown in the Compliant/Deployed Devices column and Failed in the Status column.
    When I click on the Unknown link, I get the error: No compliance report available. Reason: Device(s) may not have any configurations archived.
    This is a little puzzling, as it used to work (until I started resolving the issue of the Archive Purge job failing; see the discussion RME 4.0.6 - Archive Purge not working).
    The devices have at least 1 Startup config, and hundreds of Running configs archived.
    Any ideas where to start looking?
    Regards
    Jeff

    Hi Joe,
    Below are the entries from the dcmaservice.log for the job I just ran after restarting the ConfigMgmtServer process.
    [ Thu May 27  16:55:55 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.util.logger.ServiceLogLevelChanger,notifyLevelChange,35,publishing urn for archive.service
    [ Thu May 27  16:55:56 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.util.logger.ServiceLogLevelChanger,notifyLevelChange,42,published urn as archive.service-RMELogLevelChange
    [ Thu May 27  16:55:56 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaService,main,54,Initializing dummy IOSPlatform
    [ Thu May 27  16:55:56 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaService,,103,ArchivePath : D:\CSCOpx\files\rme\dcma
    [ Thu May 27  16:56:00 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.util.db.DatabaseConnectionPool,getConnection,59,Inside ICSDatabaseConnection, MAX_COUNT =20
    [ Thu May 27  16:56:24 NZST 2010 ],ERROR,[main],com.cisco.nm.xms.xdi.SdiEngine,initDtMatcher,148,java.lang.Exception: inconsistent enumeration extension SFS_SW(63)java.lang.Exception: inconsistent enumeration extension SFS_SW(63)
    at com.cisco.nm.xms.xdi.ags.imageparser.ImageType.extendWith(ImageType.java:101)
    at com.cisco.nm.xms.xdi.pkgs.SharedSwimSFS.Descriptor.g$init(Descriptor.java:26)
    at com.cisco.nm.xms.xdi.dapi.DescriptorBD.init(DescriptorBD.java:70)
    at com.cisco.nm.xms.xdi.SdiEngine.initDtMatcher(SdiEngine.java:134)
    at com.cisco.nm.xms.xdi.SdiEngine.makeNewEngine(SdiEngine.java:86)
    at com.cisco.nm.xms.xdi.SdiEngine.getEngine(SdiEngine.java:73)
    at com.cisco.nm.rmeng.dcma.cats.CATS.(CATS.java:83)
    at com.cisco.nm.rmeng.dcma.configmanager.ConfigManager.(ConfigManager.java:391)
    at com.cisco.nm.rmeng.dcma.configmanager.DcmaService.(DcmaService.java:140)
    at com.cisco.nm.rmeng.dcma.configmanager.DcmaService.main(DcmaService.java:72)
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaService,,145,Starting Baseline Migration
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaService,,149,Got Instancecom.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator@13dd208
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,checkMigrationNeeded,281,No RESTORE Happening
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,checkMigrationNeeded,325,select count(name) from sysobjects where name in ('Config_BaseLine_Version','Config_New_BaseLine')
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,checkMigrationNeeded,345,COUNT (No. of Baseline tables present) :1
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,checkMigrationNeeded,361,The BaseLine Table does not Exists
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,doMigration,163,Check Migration result false
    [ Thu May 27  16:56:24 NZST 2010 ],INFO ,[main],com.cisco.nm.rmeng.dcma.configmanager.DcmaBaselineTemplateMigrator,doMigration,225,Baseline Migration not needed
    Also, the only entry in the job directory ..\CSCOpx\files\rme\jobs\ArchiveMgmt\8628 is the job.obj file.
    Given the error in the log file, perhaps this is no surprise.
    Regards
    Jeff

  • CiscoWorks - Job Browser very slow

    Hello,
    We are running LMS 3.2 and I am trying to access Common Services -> Server -> Admin -> Job Browser, but it is extremely slow. I waited for half an hour and it is still processing the response page.
    Please assist.
    Thanks.

    This sounds as if you have been running LMS for quite a while but job purging is not configured (or not configured properly).
    When the Job Browser page opens, how many jobs are listed? You will see the job count in the upper right corner above the table.
    One starting point to look at is:
        Resource Manager Essentials > Admin > System Preferences > Job Purge
    If no purge jobs are configured, schedule and enable them.
    With LMS 3.2 you can also filter for specific jobs (CS > Server > Admin > Job Browser), select those with many instances, and delete the historic job information. But DO NOT DELETE the instances of jobs that are scheduled to run in the future; otherwise those jobs are deleted and will need to be reconfigured.

  • Job Browser failing in Cisco Prime LMS 4.2.2

    Hi,
    We have a customer who cannot access Admin/Jobs/Browser. It shows this error:
    I have read that the reason could be that the customer has too many jobs scheduled. But if we can't access the Job Browser, we can't delete the scheduled jobs.
    Besides, we have huge memory consumption (15 of 16 GB). I don't know if this could be related to the Job Browser access problem.
    Thank you

    Hello Diego,
    You may try to purge the old jobs.
    In general, LMS 4.2 runs VRF collection jobs and Summarizer jobs periodically. When you hit a high number of jobs (15K, if I'm not wrong), you start to get an error message on the Job Browser page. That is where purging comes in handy.
    Go to [Admin] > [Network] > [Purge Settings].
    To purge the jobs related to VRF, see the topic "Purging VRF Management Reports Jobs and Archived Reports" in the following document:
    http://www.cisco.com/en/US/docs/net_mgmt/ciscoworks_lan_management_solution/4.2/user/guide/admin/purgeset.pdf
    To purge Summarizer jobs, take a look at the topic "Performance Purge Jobs" in the same document.
    Let me know if it helped.

  • LMS 3.2 Job Information Status

    Hello
    In the LMS portal, the Job Information Status portlet shows system jobs with "Missed start" as the status. These are old versions of jobs (Mon Jun 29 03:00:00 CEST 2009); the new versions have been running as they should.
    The question is how to delete these missed-start jobs from the preview window. Deleting the job in the Job Browser results in:
    Can not delete the following jobs:
    1039.26 - Can not delete system job 1039
    Thanks in advance.

    Hello Joe
    This is how it looks in Job Information Status:
    1039.26 ChangeAuditDefaultPurge Missed start ChangeAudit Records - default purge job eydfinnur
    Mon Jun 29 03:00:00 CEST 2009
    Job details:
    Change Audit Report  
    Job Summary
    Runtype:  Daily
    Start Time:  Mar 09 2010 03:00:00
    Purge Change Audit data older than:  180 days
    Purge Audit Trail data older than:  180 days
    Job Type:  Change Audit Default Purge

  • Collection_set_3_upload_upload job continuously failing

    I have SQL Server 2008 R2 SP2 CU5 (10.50.4276) on Windows 2008 R2. The collection_set_3_upload_upload job is continuously failing with the errors below; it keeps retrying, and each retry takes from a minimum of 40 minutes up to a few hours. Although the job is failing, it is still uploading the data and I can see the data in the MDW graphs. What could be the cause, and why does it fail? Below is the error message.
    Message
    Executed as user: ServiceAccount. Failed to log error for log id: 406725, generating message:
    The thread "ExecMasterPackage" has timed out 3600 seconds after being signaled to stop.. Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1"
    Procedure: "sp_syscollector_event_onerror" Line#:  111 Error Number:  14684 Error State:   1 Error Severity:  16 Error Message: "Caught error#: 14262, Level: 16, State: 1, in Procedure: sp_syscollector_verify_event_log_id,
    Line: 32, with Message: The specified @log_id ('406725') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.  The thread "ExecMasterPackage"
    has timed out 3600 seconds after being signaled to stop.  Unable to execute a stop collection event for log Id: 406725. Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance:
    "VServer\INS1" Procedure: "sp_syscollector_verify_event_log_id" Line#:   32 Error Number:  14262 Error State:   1 Error Severity:  16 Error Message: "The specified @log_id ('406725') does not exist." Help File:
    "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.  Process Exit Code 259.
    Executed as user: ServiceAccount. Failed to log error for log id: 406698, generating message: Failed to log package end event for package: "QueryActivityUpload". Inner Error ------------------>  Row#:    0 Source:
    "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1" Procedure: "sp_syscollector_verify_event_log_id" Line#:   22 Error Number:  14262 Error State:   1 Error Severity:  16 Error Message: "The
    specified @log_id ('406700') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.. Inner Error ------------------>  Row#:    0
    Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1" Procedure: "sp_syscollector_event_onerror" Line#:  111 Error Number:  14684 Error State:   1 Error Severity:  16 Error Message: "Caught
    error#: 14262, Level: 16, State: 1, in Procedure: sp_syscollector_verify_event_log_id, Line: 32, with Message: The specified @log_id ('406698') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.
     OLE DB Error Record dump end.  Failed to log package end event for package: "QueryActivityUpload". Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1"
    Procedure: "sp_syscollector_verify_event_log_id" Line#:   22 Error Number:  14262 Error State:   1 Error Severity:  16 Error Message: "The specified @log_id ('406700') does not exist." Help File: "(null)",
    Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.Failed to log error for log id: 406698, generating message: Failed to log package end event for package: "Set_{2DC02BD6-E230-4C05-8516-4E8C0EF21F95}_Master_Package_Upload".
    Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1" Procedure: "sp_syscollector_verify_event_log_id" Line#:   22 Error Number:  14262
    Error State:   1 Error Severity:  16 Error Message: "The specified @log_id ('406699') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump
    end.. Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1" Procedure: "sp_syscollector_event_onerror" Line#:  111 Error Number:  14684
    Error State:   1 Error Severity:  16 Error Message: "Caught error#: 14262, Level: 16, State: 1, in Procedure: sp_syscollector_verify_event_log_id, Line: 32, with Message: The specified @log_id ('406698') does not exist." Help File: "(null)",
    Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.  Failed to log package end event for package: "Set_{2DC02BD6-E230-4C05-8516-4E8C0EF21F95}_Master_Package_Upload". Inner Error ------------------>
     Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance: "VServer\INS1" Procedure: "sp_syscollector_verify_event_log_id" Line#:   22 Error Number:  14262 Error State:   1 Error Severity:
     16 Error Message: "The specified @log_id ('406699') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record dump end.  Failed to log error for
    log id: 406698, generating message: The master package exited with error, previous error messages should explain the cause.. Inner Error ------------------>  Row#:    0 Source: "Microsoft SQL Server Native Client 10.0" Instance:
    "VServer\INS1" Procedure: "sp_syscollector_event_onerror" Line#:  111 Error Number:  14684 Error State:   1 Error Severity:  16 Error Message: "Caught error#: 14262, Level: 16, State: 1, in Procedure: sp_syscollector_verify_event_log_id,
    Line: 32, with Message: The specified @log_id ('406698') does not exist." Help File: "(null)", Help context:    0 GUID: {0C733A63-2A1C-11CE-ADE5-00AA0044773D}.  OLE DB Error Record d

    Hi Jayakumar,
    According to your description, the collection_set_3_upload_upload job is failing. Based on my research, the issue can occur when the data is collected using an interim cache file, or when the data collection set collects data very frequently.
    To troubleshoot this issue, you could try the three solutions below; a sketch after this list shows how to check the current collection mode and frequencies.
    1. Change the Server Activity collection to non-cached. This ensures that the data is uploaded as soon as it is collected, without using an interim cache file.
    2. Increase the collection interval. The collection frequency is set to a default of 15 seconds; this can be increased to 30 seconds or higher to prevent very large amounts of data being collected between the scheduled upload intervals.
    3. Modify the sp_purge_data stored procedure so that the purge job runs faster. An updated version of sp_purge_data is available here. This will ensure that the purge job completes faster.
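    As a starting point for options 1 and 2, a hedged T-SQL sketch to inspect the Server Activity collection set's current upload mode and item frequencies before changing anything (syscollector_collection_sets and syscollector_collection_items are the standard msdb catalog views; collection_mode 0 = cached, 1 = non-cached):
    SELECT cs.name            AS collection_set,
           cs.collection_mode,                        -- 0 = cached, 1 = non-cached
           ci.name            AS collection_item,
           ci.frequency       AS frequency_seconds
      FROM msdb.dbo.syscollector_collection_sets  cs
      JOIN msdb.dbo.syscollector_collection_items ci
        ON ci.collection_set_id = cs.collection_set_id
     WHERE cs.name LIKE 'Server Activity%';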
    Regards,
    Michelle Li
