Log purge

Hi,
We have AWT cache groups in a TimesTen data store.
The replication and cache agents are started.
At first, log purging worked well, but later I found more and more log files accumulating in the log directory.
So I ran:
Command> call ttLogHolds;
< 245, 40265728, Replication , PERF7420:_ORACLE >
< 841, 48642048, Checkpoint , rccpp.ds0 >
< 841, 48646144, Checkpoint , rccpp.ds1 >
Command> call ttRepStart;
12026: The agent is already running for the data store.
The command failed.
This shows that the replication agent is running. I tried stopping and restarting it, then ran ttCkpt and ttLogHolds again.
Command> call ttLogHolds;
< 414, 56417344, Replication , PERF7420:_ORACLE >
< 845, 35186688, Checkpoint , rccpp.ds0 >
< 845, 38623232, Checkpoint , rccpp.ds1 >
3 rows found.
It seems there is some issue with the replication agent.
Regards,
Nesta
Edited by: nesta on Mar 4, 2010 6:50 AM

Hi Nesta,
Just because the repagent is running doesn't mean it is able to apply the updates to Oracle. Is Oracle up? Is it accepting connections? Check the TimesTen daemon log and the dsname.awterrs file to see what they show.
Chris
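A quick way to act on Chris's suggestion is to grep the tail of the relevant logs for problems. A minimal shell sketch; the TimesTen daemon log and awterrs file locations vary by installation, so the commented paths below are assumptions to substitute:

```shell
#!/bin/sh
# recent_errors: print lines matching common error keywords from the
# tail of a log file.
# Usage: recent_errors <logfile> [lines-to-scan]
recent_errors() {
    file=$1
    lines=${2:-200}
    tail -n "$lines" "$file" | grep -iE 'error|fail|warn'
}

# Assumed paths -- substitute your daemon log and <dsname>.awterrs:
# recent_errors /var/TimesTen/tt1122/ttmesg.log
# recent_errors /path/to/datastore/rccpp.awterrs
```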

Similar Messages

  • ODI - Logs purged while ETL is in progress.

    Hi All,
    We are using ODI 11.1.6 to execute ETL. We found that while a load plan is in progress, after some time the ODI agent automatically stops executing it, purges the logs, and writes a message in the agent's command prompt window: "Agent <AGENT NAME> executing load plan log purge for work repository <Work Repo Name>". After this, the work repository goes down and ETL execution halts.
    Is there a parameter that controls this behavior of ODI?
    Regards
    Gaurav.


  • Alert & Audit Log Purging sample script

    Hi Experts,
    Can somebody point to sample scripts for
    1. alert & audit log purging?
    2. Listener log rotation?
    I am sorry if these questions look too naive; I am new to DBA activities. Please let me know if more details are required.
    For now, the script needs to be independent of version and platform.
    Regards,

    34MCA2K2 wrote:
    Thanks a lot for your reply!
    If auditing is enabled in Oracle, does it generate an audit log file, or does it insert into a SYS-owned table?
    Well, what do your "audit" initialization parameters show?
    For the listener log "rotation", just rename listener.log to something else (there is an OS command for that), then bounce the listener.
    You don't want to purge the alert log, you want to "rotate" it as well.  Just rename the existing file to something else. (there is an OS command for that)
    So this has to be handled at the operating system level rather than with a utility. Also, if that is the case, does all this have to be done when the database is shut down?
    No, the database does not have to be shut down to rotate the listener log.  The database doesn't give a flying fig about the listener log.
    No, the database does not have to be shut down to rotate the alert log.  If the alert log isn't there when it needs to write to it, it will just start a new one.  BTW, beginning with 11g, there are two alert logs .. the old familiar one, now located at $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace, and the xml file used by adrci.  There are adrci commands and configurations to manage the latter.
    Again, I leave the details as an exercise for the student to practice his research skills.
    Please confirm my understanding.
    Thanks in advance!
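    The rename-and-restart approach described above is easy to script. This is a minimal sketch, not the version/platform-proof utility the original poster asked for; the `lsnrctl` steps and file locations are commented out as assumptions to adapt:

```shell
#!/bin/sh
# rotate_log: rename a log file with a timestamp suffix so the writer
# starts a fresh file on its next write.
rotate_log() {
    log=$1
    [ -f "$log" ] || return 1
    mv "$log" "${log}.$(date +%Y%m%d%H%M%S)"
}

# Listener log (assumed pre-11g default location):
#   lsnrctl set log_status off
#   rotate_log "$ORACLE_HOME/network/log/listener.log"
#   lsnrctl set log_status on
#
# Alert log: just rename it; the database starts a new one on the
# next write, with no shutdown needed.
#   rotate_log "$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/alert_$ORACLE_SID.log"
```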

  • LDAP LOG PURGE (Housekeeping)

    I have 2 large tables in LDAP:
    1. ods.ods_chg_log (size: 9375M)
    2. ods.ds_attrstore (size: 2431M)
    Can anyone help me with how to housekeep these two tables?
    Thanks,
    Bryan

    Hi Bryan,
    1. If I am correct, the changes to OID are stored in the ODS.ODS_CHG_LOG table. You may want to have a look at Metalink Note 301727.1, "Change Log Purging - Overview", to purge some of the change log entries.
    You may also get information about the same from the Oracle Internet Directory Administrator's Guide at this link :-
    http://docs.oracle.com
    2. Again, if I am correct, ods.ds_attrstore stores information about OID entry attributes. I am not sure if we can or should purge the data in this table, but 2431M for the table seems unusual. How many records does it currently hold?
    Regards,
    Sandeep

  • 12c Cloud Control Alert Log Purge Error

    I'm trying to purge old alert log events in EM 12c, but I'm getting an error: "The alert(s) could not be purged. Please ensure you have Edit privileges on this target while purging."
    What am I doing wrong? I can't find anything close to this in the EM documentation.
    Thank you.

    Hi,
    What privileges do you have on the target? You'll need at least Manage Target Events (a subprivilege of Operator) in order to clear events on the target.
    regards,
    Ana

  • Automating Job Log Purges

    I was wondering if there are any standard practices regarding the purging of Job History Logs through the Automation Server? The # of logs gets pretty high pretty quickly on our system and I am looking for a way to automatically purge them weekly, or at least on some schedule.
    Does anyone know of a way to do this? I was thinking I could just purge them from the database using SQL, but I don't know what else this might affect.
    Thanks in advance!
    Jeff
    Jeffrey Turmelle <[email protected]>
    International Research Institute for Climate & Society
    Earth Institute at Columbia University

    Turns out it didn't work after all.
    I set the PTSERVERCONFIG setting "days before job logs should be expired" to 7.
    But after that, whenever the job log reached 7 days of entries, it wouldn't expire the old jobs. It would simply fail all new jobs with an error message saying the joblogs folder was full.
    There must be another setting somewhere?
    Jeffrey Turmelle <[email protected]>
    International Research Institute for Climate & Society
    Earth Institute at Columbia University

  • Frequency of /var/log/auth.log purges (question answered)

    I check /var/log/auth.log almost daily for break-in attempts - always unsuccessful (knock on wood) thanks to DenyHosts. However, I recently found that the log had been emptied!
    Being a little paranoid as I usually am, I thought at first that someone had broken in and was trying to cover their tracks. But then I realized it was November 1...
    Is /var/log/auth.log cleared monthly? How can I control how often it is cleared, if at all?
    Last edited by deconstrained (2009-11-01 17:06:36)

    I agree with vacant. And you can run this command to see your earlier auth.log files:
    ls -la /var/log/auth.log*
    If you care about log files then you should definitely read more about logrotate, cron, anacron and syslog.
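    Following up on the logrotate pointer above: a hypothetical drop-in rule for /etc/logrotate.d/ could look like the sketch below. The monthly schedule and keep-count are assumptions to adjust; the helper writes the rule to a path you choose so it can be staged and dry-run before installing:

```shell
#!/bin/sh
# write_auth_rotate: write a sample logrotate rule for /var/log/auth.log
# to the given file (installing it as /etc/logrotate.d/auth needs root).
write_auth_rotate() {
    cat > "$1" <<'EOF'
/var/log/auth.log {
    monthly
    rotate 6
    compress
    missingok
    notifempty
}
EOF
}

# Stage and dry-run (logrotate -d reports what would happen without rotating):
# write_auth_rotate /tmp/auth.rotate
# logrotate -d /tmp/auth.rotate
```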

  • Flashback log purge

    Hi
    I've set db_recovery_file_dest_size to 20m and the flashback retention target to 60 minutes.
    As far as I know, flashback logs are automatically deleted when the FRA fills up.
    I notice that there are 4 flashback logs, and they have not been deleted in the last two days.
    What is the reason for that?
    log_1.287.760141739
    log_2.281.759973635
    log_2.285.760141745
    log_3.298.760145161

    It's only possible that it will reuse them.
    Most of the time it will create a new log, in my experience.
    It all depends on your settings.
    I watch v$flash_recovery_area_usage fairly close on my systems and the flashback logs are a fairly small piece of my FRA.
    Archive and RMAN backups use up most of the space. For the most part I let Oracle just handle the flashback logs.
    Best Regards
    mseberg
    Query I like for this:
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED) AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED,
      SPACE_RECLAIMABLE;
    And:
    column FILE_TYPE format a20
    select * from v$flash_recovery_area_usage;
    Edited by: mseberg on Aug 24, 2011 9:01 PM

  • Clearing out logs on filesystem and database

    We've got SOA Suite up and running well enough, but I was wondering how to control the various log levels and more importantly how to delete old logs for the various components.
    I assume that some logs are stored in the file system and some in the underlying SOA support database. Our test support database orabpel and users tablespaces have grown to 4.5 gig and 2 gig respectively. Clearly something is being logged in the database. Also, I can see entire soap messages looking at the SOA console app (can't remember which one) and I need to turn that logging down because our SOAP messages contain proprietary data that needs to be encrypted while at rest.
    I've looked at some of the documentation and googled too. I must be dense or something because I'm not finding much. I did find a procedure to delete some logs on the file system, but it required a shutdown of one of the services. That can't be right.
    What's the proper procedure for getting rid of old data in the database, file system logs and tuning down the content logged in the various SOA components?
    Anyone have a pointer to the docs or a a how-to?
    Thanks

    OK, nothing like a little crisis to get the mind working.
    First, our policy set had logging turned on at the "envelope" level. The Oracle consultants who helped set this up didn't stress the logging pieces of the policy. Anyway I just disabled the logging steps in the policy definition, and committed.
    Next I purged the logs using OWSM console Operational Management>Overall Statistics>Message Logs>Purge Message Logs.
    Finally, I went into Grid Control and reorganized the 4.5 gigabyte log_objects table, which for some reason still had 10 rows after repeated purges.
    My log_objects table is now 0.13 mb and NOT growing because I've turned off logging.
    In my defense this is our first SOA implementation and we didn't get a lot of operational "knowledge transfer" from our consultants. Regardless, I'm a big dufus for not figuring this out before.
    Hope this helps someone else in the future.

  • Clearing Parental Control Log

    One of our kids got into some web sites he should not have, and thanks to the Parental Log, we've had a l-o-n-g conversation and a teachable moment was had. I would now like to clear the logs, purging my computer of icky-ness. Is there a way to do this? Thanks in advance....Fourwoods

    I'm not sure about that...I checked on my husband's MBPro in the Library/Support......folder and there is but one folder in that as well: Airport (which is the same folder in that location on our other computers). There must be a slight difference in our machines? Considering he had no clue that his actions were being monitored (actually, I did not either; I must have set that up long ago and stumbled upon this log while looking for something else), I know he cleared the browser history, but that's it. Since I had the log set to record only the past week, I wonder if the items will automatically clear themselves after one week? The offending date's one-week anniversary will be this Sunday, so I'll post back. Let me know if you come up with anything else....Thanks! kraen

  • Operation question about log

    If I set RamPolicy to inUse mode, checkpoint frequency to 0, and LogPurge to true, will the logs be automatically purged when all connections disconnect from the data store?
    Message was edited by:
    knights

    ChrisJenkins wrote:
    No logging mode (Logging=0) is a 'special use' option and has many, many limitations. For example:
    1. Replication is not available
    2. XLA and XLA/JMS is not available
    3. All locking is at the database level; even queries acquire an exclusive database-level lock, so the database essentially becomes single-threaded
    4. Transactions are not available, so there is no commit or rollback, and any errors can leave multi-row updates/deletes partially done
    5. The only persistence is via checkpoints; but fuzzy checkpointing is not available, so all checkpoints are blocking and will impede application access to the database
    Essentially, no-logging mode is only intended for bulk data load operations when initially populating your datastore.
    Hi Chris, I'm facing exactly this case.
    While loading a cache group from Oracle into the data store, I wanted to disable logging for the load and re-enable it afterwards. But I found that after disabling logging (by setting Logging=0), I can't run the "load cache" command; it returns an error.
    Would you kindly tell me how to disable logging temporarily? Many thanks!
    Regards,
    Michael
    It is not, in general, usable for any kind of operational running. TimesTen does not support purely in-memory operation: checkpointing and logging are fundamental mechanisms and are required for any real use of TimesTen.
    Chris

  • Concurrent Program not executing

    Hi All,
    I have created a new custom concurrent program of type SQL*Plus to purge data. To my surprise, when I submit the program it is not actually executed, which I can confirm because the data is not deleted. The log messages in the script are also not displayed in the LOG.
    No debug message is displayed in either the LOG or the OUTPUT.
    The executable file name is given correctly, and all other mandatory registration steps were taken care of.
    Below is the script.
    I am passing Number of Days (500) as a parameter to this program, so here &1 = 500:
    DECLARE
      l_deleted_rec_cnt NUMBER;
    BEGIN
      fnd_file.put_line (fnd_file.LOG, ' Conc Program Starts');
      DELETE FROM xxx_table_name
       WHERE TRUNC (creation_date) < TRUNC (SYSDATE - &1);
      l_deleted_rec_cnt := SQL%ROWCOUNT;
      IF l_deleted_rec_cnt > 0 THEN
        COMMIT;
        fnd_file.put_line (fnd_file.LOG, l_deleted_rec_cnt || ' Records purged');
      ELSE
        fnd_file.put_line (fnd_file.LOG, ' No Records to purge');
      END IF;
      fnd_file.put_line (fnd_file.OUTPUT, ' Conc Program End');
    EXCEPTION
      WHEN OTHERS THEN
        fnd_file.put_line (fnd_file.LOG, 'Error in purging ' || SQLCODE || ' ' || SQLERRM);
    END;
    /
    Please advise.
    Regards,
    Ram

    It is 11i, and the LOG shows the concurrent program as completed successfully. There is no error reported in the LOG.
    Nothing is written to the OUT file either.
    Content in LOG file:
    XXXX Customer Advocacy: Version : 1.0 - Development
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    XXCC module: XXXX Error Log Purge
    Current system time is 28-DEC-2009 04:55:46
    +-----------------------------
    | Starting concurrent program execution...
    +-----------------------------
    Arguments
    450
    Start of log messages from FND_FILE
    End of log messages from FND_FILE
    Executing request completion options...
    ------------- 1) PRINT   -------------
    Printing output file.
    Request ID : 52374634      
    Number of copies : 0      
    Printer : noprint
    Finished executing request completion options.
    Concurrent request completed successfully
    Current system time is 28-DEC-2009 04:55:47
    Content in Out FIle:
    Input truncated to 2 characters
    Regards,
    Ram

  • Upgrade to 10.5 fails - Not enough diskspace in Common Partition - db_hist using most of the space

    Hi All,
    Tried upgrading our UCCX 9.0(2) SU1 HA deployment to 10.5(1) last night and it failed with the following message -
    There is not enough disk space in the common partition to perform the upgrade. Please use either the Platform Command Line Interface or the Real-Time Monitoring Tool (RTMT) to free space on the common partition.
    RTMT reported the Common diskspace used as 94%. Using RTMT we set the LogPartitionHighWaterMarkExceeded to 55% and waited for diskspace to free up. No such luck.
    A little more investigation and it turns out there is a database called db_hist_dbs in the Common partition that is 33GB in size. This is the command and returned output we used to find this out -
    show diskusage common
    8.0K    /common/moh
    33G     /common/var-uccx/dbc/db_hist_dbs
    1.6G    /common/var-uccx/dbc/temp_uccx_dbs
    201M    /common/var-uccx/dbc/uccx_er_dbs
    1.6G    /common/var-uccx/dbc/uccx_ersb_dbs
    37G     /common/var-uccx/dbc
    37G     /common/var-uccx
    16M     /common/ontape_backup.gz
    8.0K    /common/cancel_upgrade
    Using this command - show uccx dbserver disk - we can see that the db_hist database file is actually 80% free! Output from the command here -
    SNO.  DATABASE NAME   TOTAL SIZE (MB)   USED SIZE (MB)   FREE SIZE (MB)   PERCENT FREE
    4     db_hist         34508.6           6836.9           27671.7          80%
    Does anyone have any ideas on how to shrink the file size of the database in question? Is it even possible? Or what are our options to get around this and perform the upgrade?
    Thanks in advance
    Jeff

    You may be running into this bug: CSCul18667
    Symptom:
    CUIC log purge does not clean up JMX folder. Upgrade may fail due to lack of space.
    The "show status" command shows that logging is consuming 99% of the space.
    Workaround:
    You can delete them by going to RTMT -> Trace & Log Central -> Remote Browse -> Nodes -> "specific node" -> UCCX -> Cisco Unified Intelligence Center Serviceability Service -> jmx -> select all -> Delete
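    Outside of RTMT, the kind of hunt shown by `show diskusage` above can be approximated with plain `du` wherever you have shell access. A generic sketch (the /common path in the usage comment is just the partition from this thread):

```shell
#!/bin/sh
# biggest: list the largest files and directories under a path,
# largest first, sizes in KB.
# Usage: biggest <dir> [count]
biggest() {
    dir=$1
    count=${2:-10}
    du -ak "$dir" 2>/dev/null | sort -rn | head -n "$count"
}

# Example: biggest /common 15
```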

  • After Trigger Not Fired

    Hi All,
    I have created a BEFORE and an AFTER row-level trigger on a table.
    The BEFORE trigger fires when certain columns are updated.
    In the BEFORE trigger, I change the value of one column, say delvstatus.
    The AFTER trigger should fire when delvstatus is changed.
    However, I noticed that the AFTER trigger does not fire when delvstatus is changed inside the BEFORE trigger.
    Is this normal trigger execution behavior in Oracle?
    Any comments are welcome.
    Thanks.

    Both the before trigger and the after trigger should fire, as in the following test case.
    If not, please post your test case and your 4-digit Oracle version.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
    SQL> drop table t purge;
    Table dropped.
    SQL> drop table log purge;
    Table dropped.
    SQL>
    SQL> create table log(txt varchar2(50));
    Table created.
    SQL>
    SQL> create table t (x int primary key, s1 int, s2 int);
    Table created.
    SQL>
    SQL> create trigger tbu
      2  before update of s1
      3  on t
      4  for each row
      5  begin
      6  :new.s1 := 1;
      7  insert into log values('tbu fired');
      8  end;
      9  /
    Trigger created.
    SQL> show errors
    No errors.
    SQL>
    SQL> create trigger tau
      2  after update of s1
      3  on t
      4  for each row
      5  begin
      6  insert into log values('tau fired');
      7  end;
      8  /
    Trigger created.
    SQL> show errors
    No errors.
    SQL>
    SQL> insert into t values(0,0,0);
    1 row created.
    SQL> update t set s1 = 2 where x = 0;
    1 row updated.
    SQL> select * from t ;
             X         S1         S2
             0          1          0
    SQL> select * from log;
    TXT
    tbu fired
    tau fired
    SQL>

  • How reports are generated for inventory/syslog in RME

    Hi,
    When we generate inventory/syslog reports in RME, how are these reports generated? That is, what is the name of the database storing them?
    We run scheduled jobs so that when the logs reach a certain size they are purged, and we sometimes purge the logs manually because automatic log purging does not always work. Manually rotated logs are saved on a different drive from the CiscoWorks root directory, i.e. on the E: drive, while the CiscoWorks root directory is on the C: drive. In this case, when we generate syslog/inventory reports from RME, how can we be sure we get correct reports?
    Please advise on the correct procedure for generating accurate reports.
    I am using LMS 3.2.1; RME 4.3.1.
    Thanks

