OID (iasdb) 9.0.1.4.0 generating tons of archive logs

Hi,
When the tspurge jobs scheduled in dba_jobs run, it looks like a large volume of archive logs is being generated. Is this normal? If so, are records being deleted from certain log and audit tables (PORTAL, ORASSO, etc.)? I have a situation where, when the job tspurgeDirObject ('cn=secrefresh events purgeconfig' ); END; runs, 125 MB of archive logs are generated every 2 minutes, and this is filling up disk space.
Thanks for your help.
Ramesh.
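
One way to check which purge jobs are scheduled and how often they fire is to query dba_jobs directly. This is only a sketch, run as a DBA user, and the filter on the WHAT column is a guess at how the OID purge jobs are named:

SELECT job, what, last_date, next_date, interval
FROM   dba_jobs
WHERE  LOWER(what) LIKE '%purge%'
ORDER  BY job;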


Similar Messages

  • Are there any possible reasons RMAN generates corrupt archive log files?

    Dear all,
    Are there any possible reasons RMAN generates corrupt archive log files?
    Best Regards,
    Amy

    Because I tried to perform the daily backup at lunch time and found that it had made no progress after more than an hour. Normally it takes around 40 minutes. The following is the log file:
    RMAN> Run
    2> {
    3> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    4> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    5> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    6> allocate channel ch1 type disk format '/u03/db/backup/RMAN/backup_%d_%t_%s_%p_%U.bck';
    7> backup incremental level 1 cumulative database plus archivelog delete all input;
    8> backup current controlfile;
    9> backup spfile;
    10> release channel ch1;
    11> }
    12> allocate channel for maintenance type disk;
    13> delete noprompt obsolete;
    14> delete noprompt archivelog all backed up 2 times to disk;
    15>
    16>
    using target database controlfile instead of recovery catalog
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters are successfully stored
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/db/backup/RMAN/%F.bck';
    new RMAN configuration parameters are successfully stored
    old RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    new RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
    new RMAN configuration parameters are successfully stored
    allocated channel: ch1
    channel ch1: sid=99 devtype=DISK
    Starting backup at 31-MAR-09
    current log archived
    After that I went to the archive log directory "/u02/oracle/uat/uatdb/9.2.0/dbs" and used ls -lt to see how many archive logs there were, and my screen just hung. We then found that ls -lt hangs on the archive log arch1_171.dbf; the rest of the archive logs can be listed with ls -lt without any problem.
    We cannot delete this file either. We shut the database down with abort, ran a disk check, fixed the disk error and then opened the database again. Everything seems back to normal and we can use ls -lt to read arch1_171.dbf.
    The strange thing is that we have the same problem in Development and Production: one or more archive logs seem to be corrupted under the same directory, /u02/oracle/uat/uatdb/9.2.0/dbs.
    Does anyone encounter the same problem?
    Does anyone encounter the same problem?
    Amy

  • Create procedure is generating too many archive logs

    Hi
    The following procedure was run on one of our databases and it hung because there were too many archive logs being generated.
    What would be the answer? The DB must remain in archivelog mode.
    I understand the NOLOGGING concept, but as far as I know it applies to creating tables, views, indexes and tablespaces. This script creates a procedure.
    CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
    ,P_GRE NUMBER
    ,P_SDATE VARCHAR2
    ,P_EDATE VARCHAR2
    ,P_ssn VARCHAR2
    ) IS
    CURSOR MainCsr IS
    SELECT DISTINCT
    PPF.NATIONAL_IDENTIFIER SSN
    ,ppf.full_name FULL_NAME
    ,ppa.effective_date Pay_date
    ,ppa.DATE_EARNED period_end
    ,pet.ELEMENT_NAME
    ,SUM(TO_NUMBER(prv.result_value)) VALOR
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAf.ASSIGNMENT_ID ASSG_ID
    ,paf.ORGANIZATION_ID
    FROM
    pay_element_classifications pec
    , pay_element_types_f pet
    , pay_input_values_f piv
    , pay_run_result_values prv
    , pay_run_results prr
    , pay_assignment_actions paa
    , pay_payroll_actions ppa
    , APPS.pay_all_payrolls_f pap
    ,Per_Assignments_f paf
    ,per_people_f ppf
    WHERE
    ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND ppa.payroll_id = pap.payroll_id
    AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
    AND ppa.payroll_action_id = paa.payroll_action_id
    AND paa.action_status = 'C'
    AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
    AND ppa.action_status = 'C'
    --AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
    AND paa.assignment_action_id = prr.assignment_action_id
    AND prr.run_result_id = prv.run_result_id
    AND prv.input_value_id = piv.input_value_id
    AND piv.name = 'Pay Value'
    AND piv.element_type_id = pet.element_type_id
    AND pet.element_type_id = prr.element_type_id
    AND pet.classification_id = pec.classification_id
    AND pec.non_payments_flag = 'N'
    AND prv.result_value <> '0'
    --AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
    -- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
    AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
    AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
    ------------------------------------------------------------------TO get emp.
    AND ppf.person_id = paf.person_id
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
    ------------------------------------------------------------------TO get emp. ASSIGNMENT
    --AND paf.assignment_status_type_id NOT IN (7,3)
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
    GROUP BY PPF.NATIONAL_IDENTIFIER
    ,ppf.full_name
    ,ppa.effective_date
    ,ppa.DATE_EARNED
    ,pet.ELEMENT_NAME
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAF.ASSIGNMENT_ID
    ,paf.ORGANIZATION_ID
    BEGIN
    DELETE cust.DFC_PAYROLL_DW
    WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND tax_unit_id = NVL(p_GRE, tax_unit_id)
    AND ssn = NVL(p_ssn, ssn);
    COMMIT;
    FOR V_REC IN MainCsr LOOP
    INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
    VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
    COMMIT;
    END LOOP;
    END ;
    So, how could I assist our developer with this, so that she can run it again without generating a ton of logs?
    Thanks
    Oracle 9.2.0.5
    AIX 5.2

    The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' mbytes of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
    I would question the performance of the procedure shown: using a cursor loop with a commit after every row is going to be a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo.
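    As a rough sketch of the set-based alternative (the column list comes from the INSERT above; the cursor's query text is not repeated here and goes where the comment indicates), the loop and per-row commits collapse into one statement and one commit:
    INSERT INTO cust.DFC_PAYROLL_DW
           (ssn, full_name, pay_date, period_end, element_name,
            element_information_category, classification_id, element_information1,
            valor, tax_unit_id, assg_id, element_type_id, organization_id)
    SELECT ssn, full_name, pay_date, period_end, element_name,
           element_information_category, classification_id, element_information1,
           valor, tax_unit_id, assg_id, element_type_id, organization_id
    FROM   (
            -- paste the full SELECT ... GROUP BY query of MainCsr here, unchanged
           );
    COMMIT;  -- one commit for the whole batch instead of one per row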

  • Generating lots of archive logs

    Hi Friends,
    We have an EBS 11i instance on AIX 5L which has just been set up and is ready for UAT, but the Apps DBA/functional consultant who did the setup is no longer around to answer questions. I just noticed that there are archive logs generated every day, around 30 of them, even though the application is not being used. Are there concurrent programs that have been set up to update data in a background process, something like recursive updating which is not really necessary? How do I check whether any updates are being done?
    Thanks a lot

    Do not stop this concurrent program as it is used to synchronize the Workflow local tables with the user and role information stored in the product application tables until each affected product performs the synchronization automatically.
    More details can be found in the following note:
    Note: 171703.1 - 11.5.x: Implementing Oracle Workflow Directory Service Synchronization
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=171703.1
    Did you check the total size of the log files? I believe you should not be worried until the system is delivered to the users; you can then monitor the number of log files generated daily and, based on that, start your investigation.
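    If you want a rough number to watch, the daily archive volume can be read straight from v$archived_log; this is just a sketch and assumes SELECT access to the v$ views:
    SELECT TRUNC(completion_time) AS day,
           COUNT(*) AS logs,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
    FROM   v$archived_log
    GROUP  BY TRUNC(completion_time)
    ORDER  BY 1;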

  • One process generating too many archives

    Hi all,
    DB-Oracle 9.2
    OS- Aix
    One of my friends asked me how to find out which process is generating too many archive logs.
    below are some of the parameters
    log_checkpoint_interval integer 30000
    log_checkpoint_timeout integer 1800
    log_checkpoints_to_alert boolean FALSE
    log_buffer  1048576
    log_archive_max_processes  2
    fast_start_mttr_target 0
    Please advise.

    Hi,
    You have killed the session, but are the insert operations really unwanted? If you don't want any inserts from that user, then grant privileges accordingly, because the import may be needed by some application. Otherwise, consider adding or enlarging the redo log files.
    To find which process is responsible, check the holding session in dba_waiters and see whether that session is running any INSERT or UPDATE statement (find the query). You can also check the archiver processes from the OS with ps -ef | grep arch.
    Instead of killing the session, you could make some other change, or ask the user to run the import at a time of lower database load.
    regards,
    Deepak
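    A more direct way to see which session is producing the redo behind those archives is the 'redo size' statistic per session; a sketch, assuming SELECT privileges on the v$ views:
    SELECT s.sid, s.serial#, s.username, s.program, st.value AS redo_bytes
    FROM   v$sesstat st, v$statname sn, v$session s
    WHERE  sn.statistic# = st.statistic#
    AND    s.sid = st.sid
    AND    sn.name = 'redo size'
    ORDER  BY st.value DESC;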

  • Generated archive logs are not in sequence?

    Last Friday the latest archive log was ARC00024.ARC. When I came back the next day, archive logs ARC00001.ARC and ARC00002.ARC had been generated by Oracle itself. I would expect the archive log numbers to stay in sequence. What is happening?
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination C:\oracle\ora92\RDBMS
    Oldest online log sequence 1
    Next log sequence to archive 3
    Current log sequence 3
    SQL>
    FAN
    Edited by: user623471 on Jun 7, 2009 7:35 PM

    Khurram,
    It's our production instance and we haven't issued the resetlogs option, but when listing the archives it shows different sequence numbers...
    Also, while copying the archives, RMAN does not copy them in sequence:
    -rw-r----- 1 xxx dba 69363859 May 28 19:16 2_10373.arc.gz
    -rw-r----- 1 xxx dba 43446622 May 28 19:16 1_10553.arc.gz
    -rw-r----- 1 xxx dba 52587365 May 28 19:16 1_10578.arc.gz
    -rw-r----- 1 xxx dba 45251820 May 28 19:16 1_10543.arc.gz
    -rw-r----- 1 xxx dba 60890256 May 28 19:17 1_10579.arc.gz
    -rw-r----- 1 xxx dba 46659008 May 28 19:17 1_10548.arc.gz
    -rw-r----- 1 xxx dba 116899466 May 28 19:17 2_10353.arc.gz
    -rw-r----- 1 xxx dba 77769517 May 28 19:17 1_10531.arc.gz
    -rw-r----- 1 xxx dba 66401923 May 28 19:18 1_10530.arc.gz
    -rw-r----- 1 xxx dba 45972697 May 28 19:18 1_10605.arc.gz
    -rw-r----- 1 xxx dba 55082543 May 28 19:18 1_10600.arc.gz
    -rw-r----- 1 xxxq dba 42682207 May 28 19:19 1_10547.arc.gz
    thanks,
    baskar.l
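    For what it's worth, file names such as 1_10553 and 2_10373 normally come from different redo threads (one per instance in RAC), and each thread keeps its own sequence, so a mixed directory listing only looks out of order. A quick check, assuming access to v$archived_log:
    SELECT thread#, sequence#, completion_time
    FROM   v$archived_log
    ORDER  BY thread#, sequence#;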

  • To generate an email alert when archive logs fill up

    Hi. I was away yesterday and the archive logs were filling up and were at 92% when I got in this morning.
    I need to have an email generated once the /archlogs directory fills past 90%.
    The total size of that directory is 8064M. So once it hits around 7257M I would like an email fired to me and my boss.
    Oracle 9.2.0.5.0
    UNIX AIX 5.2

    In addition to Sybrand's reply, you might want to consider some other factors.
    First, when setting up email notifications, you need to adjust your expectations to account for the possible latency in email delivery. I have seen 'critical' emails take several hours to get through the system. This is not a function of Oracle, but of the email servers.
    Second, if archivelog destination filling up is an ongoing problem, I'd be looking at how my archivelogs are backed up and deleted - how I do my housekeeping on that destination. You should have more important things to do than constant monitoring and responding to 'destination full' conditions.

  • Importing tables generates too many archives

    I am using 9i/10g.
    Please suggest how to import big tables without generating archives.
    Can something like direct=y be set?
    Thanks in advance,
    Aj

    Pre-create the objects before the import and change them to NOLOGGING.
    This way the amount of redo generated will be lower even in archivelog mode.
    Use BUFFER and COMMIT=N, which will speed up the import process.
    If possible, change the database to NOARCHIVELOG mode and proceed with the import.
    In the same way, create the indexes with NOLOGGING and PARALLEL to speed up the process.
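    A minimal sketch of the pre-create step (the table name and columns are only placeholders, and NOLOGGING is ignored if the tablespace or database is in FORCE LOGGING mode):
    CREATE TABLE scott.big_import_tab
    ( id    NUMBER,
      descr VARCHAR2(100)
    ) NOLOGGING;
    SELECT table_name, logging FROM dba_tables WHERE table_name = 'BIG_IMPORT_TAB';
    -- after the import completes, restore normal logging
    ALTER TABLE scott.big_import_tab LOGGING;
    Keep in mind that NOLOGGING only reduces redo for direct-path operations; a conventional-path import still generates full redo for its inserts.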

  • How can I turn off the archive logs that are being generated by the system? (urgent)

    Dear all,
    How can I turn off the archive logs that are being generated by the system?
    Best Regards,
    Amy

    Sorry, Kamran, that was meant for the OP, not you; I accidentally pressed the reply button on your post.
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area  171966464 bytes
    Fixed Size                   787988 bytes
    Variable Size             145750508 bytes
    Database Buffers           25165824 bytes
    Redo Buffers                 262144 bytes
    Database mounted.
    SQL> alter database noarchivelog
      2  /
    Database altered.
    SQL>
    Khurram

  • Reduce amount of archived log generated.

    RDBMS version : 9.2.0.8
    SQL> SELECT tablespace_name, force_logging FROM dba_tablespaces;
    TABLESPACE_NAME FORCE_LOGGING
    SYSTEM NO
    The above is the status of the database, but when I do the maintenance work of rebuilding the index tablespace I get a day or two's worth of archived log files,
    and I don't think ALTER DATABASE NO FORCE LOGGING will reduce the amount of log generated.
    Is there any other method available?
    thanks

    Hi,
    if you force logging for a tablespace or for the database, then this means only that any nologging clause that comes with statements related to segments in that tablespace/database is ignored. No force logging is the default.
    In order to reduce the amount of redo generated, you may consider using NOLOGGING for the rebuild of your indexes:
    create index <indexname> on <table(column)> nologging;
    Or you put the tablespace in which the indexes are created into NOLOGGING:
    alter tablespace <indextablespace> nologging;
    Or (perhaps even better) simply leave the indexes as they are. Most indexes do not need a rebuild anyway.
    Kind regards
    Uwe
    http://uhesse.wordpress.com
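    A sketch of what the rebuild itself could look like (the index name is only an example); switching the attributes back afterwards keeps future maintenance logged:
    ALTER INDEX scott.emp_name_ix REBUILD NOLOGGING PARALLEL 4;
    ALTER INDEX scott.emp_name_ix NOPARALLEL;  -- reset the degree after the rebuild
    ALTER INDEX scott.emp_name_ix LOGGING;     -- restore normal logging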

  • Refresh Materialized Views without generating Archive Logs

    Hello Gurus,
    I'm facing an embarrassing situation.
    I have a job every night that executes dbms_refresh on several materialized views, but there are 2 MViews that take almost 30 minutes each and, as another issue, they generate almost 10 GB of data, so my file system becomes full within a minute.
    I've done an ALTER MATERIALIZED VIEW ... NOLOGGING but it doesn't change anything.
    Does someone have an idea how to do a refresh without generating archive logs, or without logging?
    Thanks in advance
    C.

    Since this is a TRUNCATE and INSERT, any user / application querying the MV while the refresh is running will see ZERO rows. Furthermore, if the INSERT fails (e.g. insufficient space to add an extent), the MV remains with ZERO rows.
    A DELETE and INSERT avoids such situations (even if the refresh fails, both INSERT and DELETE are rolled-back and the MV is reverted to the state as it was before the Refresh began -- thus showing data from the previous refresh)
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Just to clarify : My paragraphs above are not about data "missing" but about precautions to be taken. A COMPLETE Refresh is done during defined outages -- users/applications are made aware that data is not available while the Refresh is running.
    Edited by: Hemant K Chitale on Sep 1, 2009 10:53 AM
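    If the redo volume of the complete refresh is the real problem, one option (subject to the availability trade-off Hemant describes above) is a non-atomic complete refresh, which truncates the MView and reloads it with a direct-path insert; combined with the NOLOGGING attribute on the MView this can generate far less redo than the default DELETE plus conventional INSERT. A sketch, with a hypothetical MView name:
    BEGIN
      DBMS_MVIEW.REFRESH(list           => 'SCOTT.MY_BIG_MV',  -- hypothetical name
                         method         => 'C',                -- complete refresh
                         atomic_refresh => FALSE);             -- TRUNCATE + direct-path INSERT
    END;
    /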

  • Archive log generating views

    Our database is running in archivelog mode.
    I want to know which sessions generated / are generating the most archive log between sysdate and sysdate - 2.
    Can I get this from a database view?

    855516 wrote:
    our database is running in archive log mode.
    i want to know which are the sessions generated/generating(sysdate and sysdate -2) more archive log .
    can i get it from the database view???
    Use v$archived_log / v$log_history:
    sys@ORCL>  select name,completion_time from v$archived_log where completion_time > sysdate-2;
    NAME                                                                             COMPLETIO
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000042_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000043_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000044_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000045_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000046_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000048_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000049_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000050_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000051_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000052_0776788597.0001                      28-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000053_0776788597.0001                      28-MAR-12
    11 rows selected.

  • Huge number of archive logs generated, EBS R12 11gR2 DB

    Hi all,
    Yesterday a huge (abnormal) number of archive logs was generated during the night, when there is no load on the server, and it caused my disk to become full. How can I determine which concurrent program caused this huge number of archive logs?
    Regards,
    Mohanad.

    Hi Mohanad,
    Please check thread:
    https://forums.oracle.com/message/10834762
    https://forums.oracle.com/thread/2417003
    Thanks &
    Best Regards,

  • HTML output for archive logs generated

    Hi All,
    Greetings of the day,
    I have a SQL script scheduled in cron which reports the number of archive logs generated in each hour. I have to modify the shell script to include HTML commands so that the output comes out in HTML format.
    Any ideas on how I can do this?
    Thanks ,
    baskar.l

    Please take the time to read the documentation. There is a link to "Generating HTML Reports in SQL*Plus" which also has examples.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14357/ch7.htm#CHDCECJG
    Edited by: Hemant K Chitale on May 21, 2009 5:08 PM
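    As a minimal sketch (the hourly query and the spool file name are assumptions, not taken from the original cron script), SQL*Plus can write the HTML itself with SET MARKUP, so the shell script only needs to run it and mail the spooled file:
    SET MARKUP HTML ON SPOOL ON
    SPOOL arch_per_hour.html
    SELECT TO_CHAR(TRUNC(first_time, 'HH24'), 'DD-MON-YYYY HH24:MI') AS hour,
           COUNT(*) AS archive_logs
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 1
    GROUP  BY TRUNC(first_time, 'HH24')
    ORDER  BY TRUNC(first_time, 'HH24');
    SPOOL OFF
    SET MARKUP HTML OFF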

  • How can I generate and/or retrieve log files from iPad

    How can I generate and/or retrieve log files from iPad?
    Note: there are NO files appearing in ~/Library/Logs/CrashReporter/MobileDevice/<name of iPad>, so where else can I find it?
    I want to force it to produce a log, or find it within the iPad.
    It is needed for support of an app.

    Not sure about getting the log data off the device, but you can find it under General->About->Diagnostic&Usage->Diagnostic&Usage Data. It will give you a list of your log data, and you can get additional details by selecting the applicable log you are looking for. Hope this helps.
