SCEP Client Activity Log Files - Retention Policy?

In SCEP 2012:
1. Where are client activity log files stored?
2. What is the default retention policy?
I seem to remember that with FCS, historical data was kept for 14 months by default. Is that the same for SCEP?
Andrew Marcos

Logs are in C:\ProgramData\Microsoft\Microsoft Antimalware\Support.
Not sure on retention, as I am working with non-persistent VDIs that get their logs reset after logoff!
Cheers
Paul | sccmentor.wordpress.com
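
To eyeball what the client actually keeps locally, here is a minimal sketch (assuming Python is available on the box; the folder path is the one from the reply above, and the script simply lists everything in it rather than assuming particular log file names):

import datetime
from pathlib import Path

# Path taken from the reply above; adjust if your install differs.
support_dir = Path(r"C:\ProgramData\Microsoft\Microsoft Antimalware\Support")

# List every file with its last-modified time, newest first, to see how far
# back the locally retained activity/detection logs actually go.
for f in sorted(support_dir.glob("*"), key=lambda p: p.stat().st_mtime, reverse=True):
    mtime = datetime.datetime.fromtimestamp(f.stat().st_mtime)
    print(f"{mtime:%Y-%m-%d %H:%M}  {f.name}")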

Similar Messages

  • Endpoint Protection Client Activity Log

    Hello
    I'd like to know how long SCCM 2012 keeps the Endpoint Protection client activity logs (logs of scans, detections, quarantines, etc.) and whether I can change it.
    Thanks

    Hi,
    Endpoint Protection history data is deleted after 365 days by default; it is controlled by the site maintenance task "Delete Aged Endpoint Protection Health Status History Data".
    There is also a "Delete Aged Threat Data" task, which is set to 30 days. It depends on the level of detail you are after, but it sounds like you should increase "Delete Aged Threat Data".
    Regards,
    Jörgen
    -- My System Center blog ccmexec.com -- Twitter
    @ccmexec

  • SCEP client not updating settings after policy retrieval

    I have a computer assigned a SCEP policy that, judging by the registry, seems to have been found and applied fine by the SCCM client.
    I find the policy in the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\EPAgent\GeneratedPolicy, with the DWORD values
    Just a test to my computer (Excluded)                   REG_DWORD         0x00000002 (2)
    Just a test to my computer (Scan Schedule)           REG_DWORD         0x00000002 (2)
    What I have configured in this test policy is just "Limit CPU usage during scan to: 10%" and "Start the scheduled scan only when my PC is on but not in use"
    But the SCEP client settings do not show the correct values: the CPU limit is set to 20% and the "Start the scheduled scan" setting is unchecked; these settings come from the "Default Client Antimalware Policy".
    The EndpointProtectionAgent.log says:
    Endpoint is triggered by WMI notification. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    EP State and Error Code didn't get changed, skip resend state message. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    State 1, error code 0 and detail message are not changed, skip updating registry value EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Previous state is same with current one: 1, skip notification. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    File C:\Windows\ccmsetup\SCEPInstall.exe version is 4.5.216.0. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    EP version 4.6.305.0 is already installed. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    EP 4.6.305.0 is installed, version is higher than expected installer version 4.5.216.0. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    The trigger 10 doesn't make ANY state change. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Handle EP AM policy. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Policy group lose, group name: Scan Schedule, settingKey: {d6961d76-070d-46af-b898-6d24562fb219}_201_201 EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Policy deployment result: <?xml version="1.0"?>
    <Group Name="Scan Schedule"><Policy Name="Just a test to my computer" State=2/><Policy Name="Default Client Antimalware Policy" State=1/></Group>
    <Group Name="Threat Default Action"><Policy Name="Default Client Antimalware Policy" State=2/></Group>
    <Group Name="Excluded"><Policy Name="Default Client Antimalware Policy" State=2/><Policy Name="Just a test to my computer" State=2/></Group>
    <Group Name="Realtime Config"><Policy Name="Default Client Antimalware Policy" State=2/></Group>
    <Group Name="Advance Setting"><Policy Name="Default Client Antimalware Policy" State=2/></Group>
    <Group Name="Spynet"><Policy Name="Default Client Antimalware Policy" State=2/></Group>
    <Group Name="Signature Update"><Policy Name="Default Client Antimalware Policy" State=2/></Group>
    <Group Name="Scan"><Policy Name="Default Client Antimalware Policy" State=2/></Group> EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Generate Policy XML successfully at C:\Windows\CCM\EPAMPolicy.xml EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Generate AM Policy XML while EP is disabled. EndpointProtectionAgent 28.10.2014 16:54:39 3504 (0x0DB0)
    Any idea what happened to the new settings?
    Freddy

    Antimalware Client Version: 4.6.305.0
    Engine Version: 1.1.11104.0
    Antivirus definition: 1.187.618.0
    Antispyware definition: 1.187.618.0
    Network Inspection System Engine Version: 2.1.11005.0
    Network Inspection System Definition Version: 113.5.0.0
    Policy Name: Antimalware Policy
    Policy Applied: 02.09.2014 at 14:16
    The above is information in "About"
    This is the information about the Antimalware policies assigned to this computer
    Name                                Collection name    Priority  Policy application state  Last update time      Policy application return code
    Default Client Antimalware Policy                      10000     Succeeded                 02.09.2014 16:16:00   0x00000000
    Just a test to my computer          VITN-SC-OSL-112    1
    This tells me that there is no policy application return code for the custom policy I am testing, and that is something I would like to solve. Any ideas? Thank you.
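
    A quick way to see exactly what the EP agent generated, without guessing from the UI, is to dump the registry key quoted in the post above. A minimal sketch, assuming Python is available on the client (the key path is the one from the post; the access flags just avoid 32/64-bit redirection):

    import winreg

    # Key path quoted in the post above.
    KEY_PATH = r"SOFTWARE\Microsoft\CCM\EPAgent\GeneratedPolicy"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_READ | winreg.KEY_WOW64_64KEY) as key:
        count = winreg.QueryInfoKey(key)[1]          # number of values under the key
        for i in range(count):
            name, value, vtype = winreg.EnumValue(key, i)
            print(f"{name} = {value} (type {vtype})")

    The generated EPAMPolicy.xml mentioned in the log (C:\Windows\CCM\EPAMPolicy.xml) is worth reading alongside that output.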

  • Archive Log Backups Retention Policy

    How can I define Retention Policy for archive log backups?

    The question asked and the answer you have marked as "Correct answer" are misleading.
    For the archived log deletion policy, see "Configuring the RMAN Environment".
    For the actual question posted ("How can I define Retention Policy for archive log backups?"), see the link below:
    http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfb.htm#i1019318
    Also, did you start the thread to clear your doubt or test the other forum users?
    Thank you!!
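
    Both settings live in RMAN's persistent configuration. A rough sketch driving RMAN from Python (assumes the rman binary is on PATH and OS authentication to the target database; the two CONFIGURE lines are copied from SHOW ALL output quoted further down this page, so substitute your own window and device):

    import subprocess

    # CONFIGURE lines taken from SHOW ALL output elsewhere on this page; adjust before use.
    rman_commands = """
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
    SHOW ALL;
    EXIT;
    """

    # Feed the commands to RMAN on stdin, exactly as if typed at the RMAN> prompt.
    subprocess.run(["rman", "target", "/"], input=rman_commands, text=True, check=True)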

  • Mac OS X 10.5 Clients - Active Directory Login - Password Policy

    Hi,
    I wonder if anyone can help me or give me some pointers.
    I have a client with a number of Mac OS X 10.5 Leopard clients who sign in and authenticate against a Windows Active Directory server, which has a password policy that prompts users to change their login password every 30 days.
    Today is the day they are required to change their login password, and they do get a message that says something like "0 days to change your password", but they are not getting the subsequent dialogue box that allows them to change their password.
    Any ideas?

    Oops, I missed which one we were talking about, sorry.
    Does it boot to Single User Mode (CMD+S keys at bootup)? If so, try...
    /sbin/fsck -fy
    Repeat until it shows no errors fixed.
    (The space between fsck and -fy is important.)
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck...
    http://docs.info.apple.com/article.html?artnum=106214

  • Problem with logging in log files.

    Hi,
    Ours is a client/server application.
    Our application has many clients.
    Each client has a separate web page.
    When a client downloads any file from their site, it is logged (client name, time, file name, etc.) to a common log file (access.log) and a client-specific log file (access.client_name).
    Logging to the common file (access.log) works fine.
    Logging to the client-specific log file works for some clients and not for others.
    For example, there is a client called 'candy'.
    Sometimes it logs to the log file access.candy,
    and sometimes it doesn't.
    Can you tell me what the problem is?
    If you want more information about my problem, I can send it.
    Please suggest a solution.
    Thank you.

    Third Party Client: ((__null) != m_lock && 0 == (*__error())) Can't create semaphore lock
    There seems to be something wrong with the handling of the semaphore mechanism (m_lock is the lock object). It appears to be a program issue, either on your local machine or in a remote web site's pages.
    'Semaphore', per Apple's definition (quoted from ADC):
    A programming technique for coordinating activities in which multiple processes compete for the same kernel resources. Semaphores are commonly used to share a common memory space and to share access to files. Semaphores are one of the techniques for interprocess communication in BSD.
    In short, it is a flag used to terminate a task/thread cleanly before another task/thread starts; a synchronization mechanism among cooperating threads/tasks. (You might need some understanding of the basic concepts of locks and semaphores.)
    I would temporarily uninstall any suspect applications to see whether the erratic events still appear in Console. Perhaps VLC player?
    Fumiaki
    Tokyo
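
    The original post does not say what the application is written in, so purely as an illustration of the pattern it describes (one shared access log plus one file per client), here is a minimal Python logging sketch; the logger names and file paths are assumptions:

    import logging

    _FMT = logging.Formatter("%(asctime)s %(name)s %(message)s")   # time, client logger name, message

    def get_client_logger(client_name):
        """Logger that writes to access.<client> and, via propagation, to the shared access.log."""
        shared = logging.getLogger("access")
        if not shared.handlers:                       # attach the common handler only once
            h = logging.FileHandler("access.log")
            h.setFormatter(_FMT)
            shared.addHandler(h)
            shared.setLevel(logging.INFO)

        client = logging.getLogger("access." + client_name)
        if not client.handlers:                       # attach the per-client handler only once
            h = logging.FileHandler("access." + client_name)
            h.setFormatter(_FMT)
            client.addHandler(h)
        return client                                 # propagate=True by default, so both files get the record

    # Every download ends up in both access.candy and access.log.
    get_client_logger("candy").info("downloaded report.pdf")

    The usual culprit for "sometimes it logs, sometimes it doesn't" in this layout is the per-client handler being created more than once or by concurrent writers, which is why the sketch attaches each handler exactly once.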

  • Moving the log file of a publisher database SQL Server 2008

    There are many threads on this here. Most of them are not at all helpful and some of them are wrong, thus a fresh post.
    This post concerns SQL Server 2008 (10.0.5841).
    The publisher database's primary log file, which is currently only 3 blocks and not extendable,
    must be moved because the LUN is going away.
    The database has several TB of data and a large number of push transactional replications as well as a couple of bi-directional replications.
    While the primary log file is active, it is almost never (if ever) used due to its small fixed size.
    We are in the 20,000 TPS range at peak (according to perfmon). This is a non-trivial installation.
    This means that:
    backup/restore is not even a remotely viable option (it never is in the real world);
    downtime minimization is critical - measured in minutes or less;
    dismantling and recreating the replications is doable, but I have to say I have zero trust in the script writer to generate accurate scripts. Many of these replications were originally set up in older versions of SQL Server and have come along for the ride as upgrades have occurred. I consider scripting everything and dismantling the whole lot pretty high risk. In any case, I do not want to have to reinitialize any replications, as this takes, effectively, an eternity.
    Possible solution:
    The only option I can think of is to wind everything down, such that there are zero outstanding uncommitted transactions, detach the database, delete the offending log file, and reattach using the CREATE DATABASE xyz ATTACH_REBUILD_LOG option.
    This should, if I have understood things correctly, cause SQL Server to recreate the default log file in the same directory as the .mdf file. I am not sure what will happen to the secondary log file, which is not moving anywhere at this point.
    The hard bit is ensuring that every transaction in the active log files has been replicated before shutdown. This is probably doable. I do not know how to manually flush any leftover transactions to replication. I expect that if I shut down all "real" activity and wait for a certain amount of time, eventually all the replications will show "No replicated transactions are available" and then I would be good to go.
    Hillary, if you happen to be there, comments appreciated.

    Hi Philip,
    You should try the long-suggested approach of stopping replication and restoring the DB, then renaming or using detach/attach:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/6731803b-3efa-4820-a303-4ffb7edf154a/detaching-a-replicated-database?forum=sqlreplication
    Thanks
    Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem
    I do not wish to be rude, but which part of the OP didn't you understand?
    Specifically the bit about 20,000 transactions a second and database size of several TB. Do you have any concept whatsoever of what this means? I will answer for you, "no, you are clueless" as your answer clearly shows.
    Stop wasting bandwidth by proposing pointless and wrong solutions which indicate that you did not read the OP, or do you just do this to generate points?
    Also, you clearly failed to notice that I was on the thread to which you referred, and I had some pointed comments to make. This thread was an attempt to garner some input for an alternative proposal.
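
    Purely to make the detach / ATTACH_REBUILD_LOG step of the plan described above concrete (not the replication wind-down, which is the hard part), here is a rough sketch. It assumes pyodbc, a sysadmin connection, and hypothetical server, database, and file names; treat it as an outline of the poster's own idea, not a recommendation:

    import pyodbc

    # Hypothetical connection string and names; replace with your own.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=PUBLISHER01;Trusted_Connection=yes", autocommit=True
    )
    cur = conn.cursor()

    # 1. Detach once all activity has been wound down and every transaction replicated.
    #    Note: SQL Server will normally refuse to detach a database that is still
    #    published, which is exactly what the linked thread discusses.
    cur.execute("EXEC sp_detach_db @dbname = N'PubDB';")

    # 2. Delete or relocate the old .ldf outside SQL Server, then reattach;
    #    ATTACH_REBUILD_LOG recreates the log next to the .mdf.
    cur.execute(
        "CREATE DATABASE PubDB ON (FILENAME = N'E:\\Data\\PubDB.mdf') "
        "FOR ATTACH_REBUILD_LOG;"
    )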

  • Log Groups/Log Files Oracle8i

    Sorry for the basic question but I need clarification help.
    The Oracle8i DBA Certification Exam Guide indicates:
    1. You must have at least two Log Groups, each with at least one Log File. For recoverability you
    must have more than one Log File in each Log Group (and obviously the same number in each Log Group).
    2. LGWR writes to only one Log Group at a time. If more than one Log File is in the Log Group, LGWR will write to each at the same time. For recoverability it is advised that each Log File within a Log Group should
    reside on a separate disk.
    3. As the Log File fills, a Checkpoint occurs and LGWR will then begin writing to the next Log Group as
    previously described.
    PROBLEM:
    My current DBA Instructor maintains the understanding above is incorrect. He maintains:
    1. LGWR will write to the first Log File in each Log Group simultaneously (for recoverability).
    2. As the first log file is filled, a Checkpoint occurs, and LGWR will write to the second (etc) Log File in
    each Log Group.
    3. All Log Files within one Log Group must reside on the same disk (Syntax problem if you don't).
    4. Each Log Group should be placed on a different disk for recoverability.
    Who is correct? Did I misunderstand the DBA Certification Exam Guide?
    Thanks for any help.
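
    One way to check this against a live instance rather than the books is to list the groups and their members from V$LOG and V$LOGFILE; LGWR writes to all members of the current group in parallel, so the member layout is what matters for the multiplexing question. A minimal sketch driving sqlplus from Python (assumes sqlplus is on PATH and '/ as sysdba' access works; old-style join so it also runs on 8i):

    import subprocess

    # Show every member of every redo log group and where each member lives.
    sql = """
    SET PAGESIZE 100 LINESIZE 200
    SELECT l.group#, l.status, f.member
    FROM   v$log l, v$logfile f
    WHERE  f.group# = l.group#
    ORDER  BY l.group#, f.member;
    EXIT;
    """
    subprocess.run(["sqlplus", "-s", "/ as sysdba"], input=sql, text=True, check=True)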

    Hi,
    When you take an online backup including logs (which is the default option for online backups in 700 systems),
    a backup is taken and all the logs written to during the backup are also included in it. This means you have the backup image and the logs needed to restore and roll forward through the logs to a consistent point in time. If you took an online backup without including the logs, you would need to ensure you have the logs from the time of that backup in order to restore that image and roll forward to ensure consistency.
    One major point: never ever delete or touch active log files in /db2/NMS/log_dir, as this will leave the system inconsistent because the logging mechanism is interrupted. If such a case does occur, please contact support.
    Regards,
    Paul

  • Parse XMLFormatter log files back into LogRecords

    My app uses a java.util.logging.FileHandler and XMLFormatter to generate XML log files, which is great. Now I want to display those logs in my app's web admin console.
    I assumed that there would be an easy way to read my XML logs files back into a List of LogRecord objects, but my initial investigation has revealed nothing.
    Anyone have any advice?

    If you remove an "active" log file, this can cause problems. If you remove an archived log file, it is OK.
    If you change the log directory, you SHOULD inform all your applications that use this directory. Depending on the service, this information is usually stored inside the config files of the services.
    Mihalis.
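
    Not the List of LogRecord objects the poster is after, but as an illustration of how easy the XMLFormatter output is to walk, here is a sketch in Python (element names assumed from java.util.logging's logger.dtd: record, date, level, logger, message; a file still being written may lack the closing </log> tag, which this does not handle; the file name is hypothetical):

    import xml.etree.ElementTree as ET

    def read_records(path):
        """Yield (date, level, logger, message) tuples from an XMLFormatter log file."""
        tree = ET.parse(path)                     # expects a well-formed, closed <log> document
        for rec in tree.getroot().iter("record"):
            yield (
                rec.findtext("date"),
                rec.findtext("level"),
                rec.findtext("logger"),
                rec.findtext("message"),
            )

    for row in read_records("app.log.xml"):       # hypothetical file name
        print(row)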

  • Portal log files

    Where do I find the activity log files?
    Thanks

    Here are my setting via the Logging tab in PSConsole:
    Log Viewer
    Search Criteria: Nothing comes up in the drop down after I select my instance
    Common Logger Settings
    General:
    Log Level: SEVERE
    Default Handler: n/a
    File Handler Properties
    File Pattern: n/a
    File Count: blank
    Append: TRUE
    Filter: blank
    Formatter: n/a
    Other:
    Custom Handlers: blank
    Use Web Container Log File: Radio button to use portal log files is selected
    Specific Logger Settings
    No Loggers in the List

  • How do we track client deployment via Group Policy by referring to log files globally

    How do we track client deployment via Group Policy by referring to log files centrally?

    I need an answer for both CM07 and CM12 using GPO.
    There is no centralized tracking for GPOs.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ
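
    As Garth says, GPO itself gives you nothing central, so about the only log-based option is to reach out to each machine's ccmsetup log over the admin share. A rough sketch (assumes Python, admin rights on the targets, and that the log sits under C:\Windows\ccmsetup; the exact subfolder varies between client versions, so both locations are tried, and the machine names are placeholders):

    import datetime
    from pathlib import Path

    machines = ["PC001", "PC002"]          # hypothetical names; feed in your own list

    for m in machines:
        # ccmsetup.log lives under C:\Windows\ccmsetup (sometimes in a Logs subfolder),
        # reachable over the C$ admin share if you have rights on the target.
        candidates = [
            Path(rf"\\{m}\C$\Windows\ccmsetup\ccmsetup.log"),
            Path(rf"\\{m}\C$\Windows\ccmsetup\Logs\ccmsetup.log"),
        ]
        log = next((p for p in candidates if p.exists()), None)
        if log is None:
            print(f"{m}: no ccmsetup.log found (client install may never have started)")
            continue
        stamp = datetime.datetime.fromtimestamp(log.stat().st_mtime)
        lines = log.read_text(errors="ignore").splitlines()
        tail = lines[-1] if lines else ""
        print(f"{m}: last written {stamp:%Y-%m-%d %H:%M} | {tail[:120]}")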

  • SCEP manager is not showing current logs for any SCEP clients

    I have installed SCEP manager on one machine and it is managing one client, which is on another machine.
    The client shows virus-detection logs in the SCEP client UI, but the same events/logs are not being stored in the SCEP manager database. I tried pulling records out of the database and there is no entry for detected viruses, and the SCEP manager UI monitoring
    tab is also not showing any detected events.

    Hi,
    Active means that it has been active and has communicated with the MP within the last 7 days, not that it is active right now.
    That means that you either haven't extended Active Directory or haven't created the System Management container in AD and delegated permission on that container and all its child objects to the ConfigMgr primary site server computer account. But that isn't a
    requirement, only a recommendation.
    If you look at the ClientLocation.log file on the client, can the client find an MP to communicate with? Are there any more errors in the MPControl.log file on the server?
    Regards,
    Jörgen
    -- My System Center blog ccmexec.com -- Twitter
    @ccmexec

  • (Cisco Historical Reporting / HRC ) All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054

    Hi All,
    I am getting the error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has reporting capabilities). I checked the log files and this is what I found:
    The log file stated that there were ongoing connections from HRC to the CCX (I am sure there aren't any active logins to HRC).
    || When you tried to log in, the following error was displayed because the maximum number of connections was reached for the server. We can see that a total of 5 connections have been configured. ||
    1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
    || Below we can see all 5 connections being used. ||
    2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
    || Once the maximum number of connections was reached, it threw an error. ||
    3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
    4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
    Current exact UCCX Version 9.0.2.11001-24
    Current CUCM Version 8.6.2.23900-10
    Business impact  Not Critical
    Exact error message  All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
    What is the OS version of the PC you are running, and is it a physical or virtual machine that is running the HRC client?
    OS version: Windows 7 Home Premium 64-bit, and it's a physical machine.
    The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
    I wanted to know if there is a way to find the HRC sessions that are active now and terminate one or more (or all) of those sessions from the server end.
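
    The connection list in line 2 of that log already tells you which machines hold the five sessions. A small sketch (Python, with a regex keyed to the 'username'/'hostname' format shown above) that counts sessions per host so you know where to go looking:

    import re
    from collections import Counter

    log_line = (
        "2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|"
        "[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|"
        "[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|"
        "[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|"
        "[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|"
        "[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']"
    )

    # Pull out every hostname and count how many of the 5 connections each machine holds.
    hosts = re.findall(r"'hostname'='([^']+)'", log_line)
    for host, count in Counter(hosts).most_common():
        print(f"{host}: {count} connection(s)")

    In the line quoted above, one workstation is holding three of the five slots, which at least tells you which client PCs to check for stray HRC/Scheduler sessions.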

    We have had this "PRX5" problem with Exchange 2013 since the RTM version.  We recently applied CU3, and it did not correct the problem.  We have seen this problem on every Exchange 2013 we manage.  They are all installations where all roles
    are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
    We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth.  None of those "solutions" made any difference whatsoever.  The occurrence of the temporary error PRX5 seems totally random. 
    About 2 out of 20 incoming mail test by Microsoft Connectivity Analyzer fail with this PRX5 error.
    Most people don't ever notice the issue because remote mail servers retry the connection later.  However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
    simply fail.  Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
    Is Microsoft totally oblivious to this problem?
    PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
    JSB

  • Recovery window retention policy deletes archive logs before a backup?

    Hi All,
    Oracle 11G on Windows 2008 R2
    This afternoon I changed my RMAN retention policy from 'RETENTION POLICY REDUNDANCY 3' to 'RETENTION POLICY RECOVERY WINDOW OF 3 DAYS'.
    However, I checked tonight, and after my daily backup ran, all the archive logs prior to the backup had been deleted! That means I don't think I can restore to any point in time prior to my daily backup. All the .arc logs were there after the backup. So I tried another test and kicked off the daily backup again. During the backup process, the archive logs got deleted again! Now I don't have any archive logs.
    Is this proper behaviour of RETENTION POLICY RECOVERY WINDOW? I thought it would keep all the files needed for me to restore to any point in time within the previous 3 days. When I used REDUNDANCY with my daily backups, it kept 3 days' worth of backups plus archive logs, so I could restore to any point in time. How can I use RECOVERY WINDOW so that I can actually restore to any point in time within the 3 days?
    I wanted to change to RECOVERY WINDOW since I read that with REDUNDANCY it only keeps X copies of a backup (so if I ran the backup 3 times in a day, I would only have those 3).
    Thanks in advance.

    Hi All,
    Here is the SHOW ALL output:
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name MMSPRD7 are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLE\DATABASE\ORA11G\DATABASE\SNCFMMSPRD7.ORA'; # default
    Here is the RMAN script:
    Recovery Manager: Release 11.2.0.3.0 - Production on Mon Jan 20 23:03:12 2014
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: XXX (DBID=)
    RMAN> CROSSCHECK ARCHIVELOG ALL;
    2> CROSSCHECK BACKUPSET;
    3> CROSSCHECK BACKUP;
    4> CROSSCHECK COPY;
    5> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
    6> DELETE NOPROMPT EXPIRED BACKUPSET;
    7> DELETE NOPROMPT OBSOLETE;
    8> BACKUP CURRENT CONTROLFILE;
    9> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG DELETE INPUT;
    Also, not sure if you needed the whole RMAN output, but here is the deletion part:
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to recovery window of 3 days
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type                 Key    Completion Time    Filename/Handle
    Backup Set           1392   15-JAN-14        
      Backup Piece       1392   15-JAN-14          F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_15\O1_MF_NNNDF_TAG20140115T190054_9FG89R8N_.BKP
    Backup Set           1393   15-JAN-14        
      Backup Piece       1393   15-JAN-14          F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_15\O1_MF_ANNNN_TAG20140115T192204_9FG9KDHX_.BKP
    Backup Set           1397   16-JAN-14        
      Backup Piece       1397   16-JAN-14          F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_16\O1_MF_ANNNN_TAG20140116T190027_9FJWNW6L_.BKP
    Backup Set           1400   17-JAN-14        
      Backup Piece       1400   17-JAN-14          F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_17\O1_MF_ANNNN_TAG20140117T190138_9FMK349M_.BKP
    deleted backup piece
    backup piece handle=F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_15\O1_MF_NNNDF_TAG20140115T190054_9FG89R8N_.BKP RECID=1392 STAMP=836938856
    deleted backup piece
    backup piece handle=F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_15\O1_MF_ANNNN_TAG20140115T192204_9FG9KDHX_.BKP RECID=1393 STAMP=836940124
    deleted backup piece
    backup piece handle=F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_16\O1_MF_ANNNN_TAG20140116T190027_9FJWNW6L_.BKP RECID=1397 STAMP=837025228
    deleted backup piece
    backup piece handle=F:\ORAFRA\MMSPRD7\BACKUPSET\2014_01_17\O1_MF_ANNNN_TAG20140117T190138_9FMK349M_.BKP RECID=1400 STAMP=837111700
    Deleted 4 objects

  • Retention policy and deletion of archived files

    Hello, I am a novice DBA with a backup question.
    Our retention policy is set to 14 days; weekly we have a full database backup; daily we have an incremental backup.
    See the scripts below.
    The drive "\\is003s012\rhea\rman", where the backup files reside, has files older than 2 months.
    I expected RMAN to have deleted these files already.
    I can't find the reason why these files are not removed.
    The database backup jobs are running fine; the log files do not contain any errors, and once in a while you see that RMAN removes obsolete files.
    LIST EXPIRED / REPORT OBSOLETE do not return any files.
    thanks for the assistance
    chris
    ================= full db
    $rman_script="backup incremental level 0 cumulative device type disk tag '%TAG' database;
    backup device type disk tag '%TAG' archivelog all not backed up delete all input;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    &br_save_agent_env();
    &br_prebackup($l_db_connect_string, $l_is_cold_backup, $l_use_rcvcat, $l_db_10_or_higher, $l_backup_strategy, "TRUE");
    my $result = &br_backup();
    exit($result);
    ================ incr db
    $rman_script="backup incremental level 1 cumulative device type disk tag '%TAG' database;
    recover copy of database;
    backup device type disk tag '%TAG' archivelog all not backed up delete all input;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    &br_save_agent_env();
    &br_prebackup($l_db_connect_string, $l_is_cold_backup, $l_use_rcvcat, $l_db_10_or_higher, $l_backup_strategy, "TRUE");
    my $result = &br_backup();
    exit($result);
    ================== rman setting
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name RHEA are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '\\is003s012\rhea\rman\%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '\\is003s012\rhea\rman\%U';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLE\PRODUCT\11.2.0\DBHOME_1\DATABASE\SNCFRHEA.ORA'; # default

    Chris,
    Thanks for the info; my current setting for this parameter:
    SQL> show parameter CONTROL_FILE_RECORD_KEEP_TIME;
    NAME                                 TYPE        VALUE
    control_file_record_keep_time        integer     7
    I found another thread that elaborated on this: Re: CONTROL_FILE_RECORD_KEEP_TIME vs Retention Policy in RMAN
    a) An RMAN retention policy set to a recovery window of 30 days means that backups and archivelogs needed for a point-in-time recovery to any time within the last 30 days do not become obsolete.
    b) CONTROL_FILE_RECORD_KEEP_TIME=7 will keep (potentially, if there is space pressure in the controlfile) the records of these files for only 7 days.
    Effect: your old backups and archivelogs (older than 7 days) are no longer known by the system and will not get deleted (automatically or by DELETE OBSOLETE), and your recovery area/backup location/archive destination fills up with old files.
    Therefore set b) > a).
    Based on this I changed the parameter:
    alter system set CONTROL_FILE_RECORD_KEEP_TIME=35 scope=both;
    Even though the 'stale' backup sets are still invisible (I am still getting 'no obsolete backups found' when doing REPORT OBSOLETE), I guess that's because it will take a while (23 days?) for CONTROL_FILE_RECORD_KEEP_TIME to catch up to the RMAN retention policy of 30 days...
    Barry
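
    If you want to verify the b) > a) relationship on a system instead of remembering it, the two values can be read and compared directly. A rough sketch (Python driving sqlplus; assumes sqlplus on PATH, sysdba access, and that the retention policy is expressed as a recovery window):

    import re
    import subprocess

    sql = """
    SET HEADING OFF FEEDBACK OFF
    SELECT value FROM v$parameter WHERE name = 'control_file_record_keep_time';
    SELECT value FROM v$rman_configuration WHERE name = 'RETENTION POLICY';
    EXIT;
    """
    out = subprocess.run(["sqlplus", "-s", "/ as sysdba"],
                         input=sql, text=True, capture_output=True, check=True).stdout

    # First bare number is the keep time; the retention value contains the window, if any.
    keep_time = int(re.search(r"^\s*(\d+)\s*$", out, re.M).group(1))
    window = re.search(r"RECOVERY WINDOW OF (\d+) DAYS", out)

    if window and keep_time <= int(window.group(1)):
        print(f"CONTROL_FILE_RECORD_KEEP_TIME ({keep_time}) should be larger than the "
              f"recovery window ({window.group(1)} days)")
    else:
        print(f"keep_time={keep_time}, retention policy="
              f"{window.group(0) if window else 'not a recovery window'}")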
