Minimize archivelog generation

I have changed two big tables to NOLOGGING in order to reduce the production of archive logs, but there is no difference.
I can't use INSERT /*+ APPEND */ because the core code can't be changed. What can I do at the database level to minimise the generation of archived redo logs?
Whenever these two big tables are used, my archivelog directory becomes full. There is 15 GB of space for archive logs.
Please give me a way to reduce archive log generation at the database level.

Hi,
Changing the LOGGING property of a table does not mean Oracle will stop generating archive logs. NOLOGGING only affects certain direct-path operations; conventional inserts, updates and deletes still generate redo.
So it is better to back up your archive logs regularly, or move them to another destination.
You can also change the LOGGING property at the tablespace level in 9i.
Thanks and Regards
Kuljeet pal singh
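To see which sessions are actually producing the redo before deciding what to change, a query like the following can help (a sketch using the standard V$ views; it needs SELECT privilege on them):

```sql
-- Top redo-generating sessions, largest first
SELECT s.sid, s.username, s.program, t.value AS redo_bytes
FROM   v$sesstat  t
       JOIN v$statname n ON n.statistic# = t.statistic#
       JOIN v$session  s ON s.sid = t.sid
WHERE  n.name = 'redo size'
ORDER  BY t.value DESC;
```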

Similar Messages

  • How simulate massive archivelog generation

    Hi,
We're in the process of testing our physical standby database, to see whether the network link can cope with the archive log generation at peak load.
I'm looking for a sample script to run to simulate massive archive log generation, maybe 1 GB every minute. We're on version 11.2.0.1.
    The database is currently empty, no load , no data.
    Regards,
    dula

    user13005731 wrote:
    Hi,
We're in the process of testing our physical standby database, to see whether the network link can cope with the archive log generation at peak load.
I'm looking for a sample script to run to simulate massive archive log generation, maybe 1 GB every minute. We're on version 11.2.0.1.
    The database is currently empty, no load , no data.
    Regards,
dula
Just write a little PL/SQL procedure with a loop that iterates 10 million times, and inside the loop do a little DML - an insert or update.
Pseudo code:
for i = 1 to 10000000
  insert into testtable ('hello world');
next i
The above is not valid PL/SQL, but it demonstrates the process. I leave it to the student to pull out the PL/SQL manual found at tahiti.oracle.com and work out the exact syntax. If you don't have space to grow testtable by 10,000,000 rows, use an update instead. It will still generate the necessary redo.
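For reference, a runnable version of that sketch might look like this (assuming a scratch table TESTTABLE with a single VARCHAR2 column; the commit interval is an arbitrary choice to keep undo usage down):

```sql
CREATE TABLE testtable (txt VARCHAR2(30));

BEGIN
  FOR i IN 1 .. 10000000 LOOP
    INSERT INTO testtable (txt) VALUES ('hello world');
    -- commit periodically so undo does not grow unbounded
    IF MOD(i, 10000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;
END;
/
```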

  • More Archivelog generation at a particular time?

    Dear Experts,
OS: Solaris
DB version: 9.2.0.6
Our database generates more archive logs at night, between 2 AM and 3 AM. During this period no jobs are scheduled to run, either from the OS or from the database itself. I want to investigate what makes the database generate more archive logs at this time. What do I have to do to find the reason for the extra archive log generation?
    Thank You All

Do you do a hot backup during the night? If you find entries like 'ALTER TABLESPACE xxx BEGIN BACKUP' before 2 AM and entries like 'ALTER TABLESPACE xxx END BACKUP' after 3 AM in your alert.log, that is the key.
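You can also check directly whether any datafiles are in hot-backup mode while the spike is happening - a quick sketch against the standard V$BACKUP and V$DATAFILE views:

```sql
-- Datafiles currently in BEGIN BACKUP mode
SELECT b.file#, d.name, b.status, b.time
FROM   v$backup   b
       JOIN v$datafile d ON d.file# = b.file#
WHERE  b.status = 'ACTIVE';
```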

  • Archivelog generation double after dataguard

    Hi Gurus,
Our archive log generation roughly doubled after we configured Data Guard. Does DG cause this? We added standby redo log files.
    Thanks in advance.

    Hi
>>Our archive log generation roughly doubled after we configured Data Guard. Does DG cause this? We added standby redo log files.
    Did you enable FORCE LOGGING at Database level?
    Were there many NOLOGGING operations before DG Setup ?
Force logging at the database level overrides NOLOGGING operations, and that may be the reason you are seeing more redo generation.
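To check it, a one-liner against V$DATABASE:

```sql
-- Returns YES if FORCE LOGGING is enabled database-wide
SELECT force_logging FROM v$database;
```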
    HTH,
    Pradeep

  • Dataguard:How to check the timestamp of archivelog generation?

    Hi ,
    I want to test my DR database.
Is there any dictionary view providing the three pieces of information below?
1. When was the archive log created on the primary?
2. When was the archive log created on the standby server? (Network latency)
3. When were the logs applied on the DR database?
    Thanks & Regards,
    Vinoth

    Hi
Use the queries below.
On the standby database, query the V$ARCHIVED_LOG view to verify the redo data was received and archived:
    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# FIRST_TIME NEXT_TIME
    8 11-JUL-02 17:50:45 11-JUL-02 17:50:53
    9 11-JUL-02 17:50:53 11-JUL-02 17:50:58
    10 11-JUL-02 17:50:58 11-JUL-02 17:51:03
    11 11-JUL-02 17:51:03 11-JUL-02 18:34:11
4 rows selected.
Then verify the new archived redo log files were applied. On the standby database, query the V$ARCHIVED_LOG view:
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# APP
    8 YES
    9 YES
    10 YES
    11 YES
    4 rows selected.
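For the timing questions specifically, V$ARCHIVED_LOG also records COMPLETION_TIME, so comparing it between primary and standby gives a rough view of the transport latency (a sketch; run on each site):

```sql
-- COMPLETION_TIME is when the file finished archiving on that site
SELECT sequence#, first_time, completion_time, applied
FROM   v$archived_log
ORDER  BY sequence#;
```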

  • Archivelog generation of RAC database

    Hello All,
Can you please give me a query to get the number of archive logs generated in a RAC database?
    Thanks in advance for your help.
    Regards,
    Alok

    hi,
I am using the following query:
    set termout off;
    column 00 format 9999
    col 01 format 9999
    col 02 format 9999
    col 03 format 9999
    col 04 format 9999
    col 05 format 9999
    col 06 format 9999
    col 07 format 9999
    col 08 format 9999
    col 09 format 9999
    col 10 format 9999
    col 11 format 9999
    col 12 format 9999
    col 13 format 9999
col 14 format 9999
col 15 format 9999
col 16 format 9999
    col 17 format 9999
    col 18 format 9999
    col 19 format 9999
    col 20 format 9999
    col 21 format 9999
    col 22 format 9999
    col 23 format 9999
    spool /tmp/redolog_history_doc.html
    set markup html on;
    select to_char(FIRST_TIME,'YYYY/MM/DD') day,
    to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'99') "00",
    to_char(sum(decode(to_char(first_time,'hh24'),'01',1,0)),'99') "01",
    to_char(sum(decode(to_char(first_time,'hh24'),'02',1,0)),'99') "02",
    to_char(sum(decode(to_char(first_time,'hh24'),'03',1,0)),'99') "03",
    to_char(sum(decode(to_char(first_time,'hh24'),'04',1,0)),'99') "04",
    to_char(sum(decode(to_char(first_time,'hh24'),'05',1,0)),'99') "05",
    to_char(sum(decode(to_char(first_time,'hh24'),'06',1,0)),'99') "06",
    to_char(sum(decode(to_char(first_time,'hh24'),'07',1,0)),'99') "07",
    to_char(sum(decode(to_char(first_time,'hh24'),'08',1,0)),'99') "08",
    to_char(sum(decode(to_char(first_time,'hh24'),'09',1,0)),'99') "09",
    to_char(sum(decode(to_char(first_time,'hh24'),'10',1,0)),'99') "10",
    to_char(sum(decode(to_char(first_time,'hh24'),'11',1,0)),'99') "11",
    to_char(sum(decode(to_char(first_time,'hh24'),'12',1,0)),'99') "12",
    to_char(sum(decode(to_char(first_time,'hh24'),'13',1,0)),'99') "13",
    to_char(sum(decode(to_char(first_time,'hh24'),'14',1,0)),'99') "14",
    to_char(sum(decode(to_char(first_time,'hh24'),'15',1,0)),'99') "15",
    to_char(sum(decode(to_char(first_time,'hh24'),'16',1,0)),'99') "16",
    to_char(sum(decode(to_char(first_time,'hh24'),'17',1,0)),'99') "17",
    to_char(sum(decode(to_char(first_time,'hh24'),'18',1,0)),'99') "18",
    to_char(sum(decode(to_char(first_time,'hh24'),'19',1,0)),'99') "19",
    to_char(sum(decode(to_char(first_time,'hh24'),'20',1,0)),'99') "20",
    to_char(sum(decode(to_char(first_time,'hh24'),'21',1,0)),'99') "21",
    to_char(sum(decode(to_char(first_time,'hh24'),'22',1,0)),'99') "22",
    to_char(sum(decode(to_char(first_time,'hh24'),'23',1,0)),'99') "23"
    from v$log_history where FIRST_TIME > sysdate - 45
    group by to_char(FIRST_TIME,'YYYY/MM/DD') order by substr(to_char(FIRST_TIME,'YYYY/MM/DD'),1,10) desc ;
    spool off
    set markup html off;
    set termout on;
but the output comes out hashed, like this:
    DAY Mid 1AM 2AM 3AM 4AM 5AM 6AM 7AM 8AM 9AM 10A 11A Noo 1PM 2PM 3PM 4PM 5PM 6PM 7PM 8PM 9PM 10P 11P
    2010/06/07 ### ### 94 ### 2 2 0 5 11 1 4 98 89 63 ### 37 31 20 ### ### ### ### ### 0
    2010/06/06 22 69 17 16 15 10 10 7 7 10 6 6 4 7 4 6 4 4 4 5 ### ### 58 27
    2010/06/05 2 9 ### 3 2 5 16 71 87 63 44 ### ### ### 85 88 22 ### ### 2 3 4 ### 23
    2010/06/04 1 5 1 2 3 2 4 2 3 3 2 2 2 ### 68 29 9 1 29 ### 39 15 2 13
    2010/06/03 30 46 35 26 22 50 43 23 ### ### ### ### ### ### 24 23 23 22 23 26 ### ### ### 10
    2010/06/02 ### ### 83 49 59 32 23 22 17 ### ### ### ### ### ### ### ### 63 ### ### ### ### ### 30
    2010/06/01 1 5 15 8 ### ### ### ### ### ### ### ### 38 71 ### 62 40 ### ### ### ### ### ### 95
    2010/05/31 32 4 0 1 4 1 2 3 28 ### ### ### ### 97 50 ### ### ### 4 9 2 4 2 6
    2010/05/30 1 5 4 4 6 5 62 6 4 4 43 ### ### 18 9 6 3 ### ### ### 3 4 1 12
    2010/05/29 ### 64 40 57 51 88 2 5 2 3 ### ### 13 7 7 17 21 59 ### 10 10 12 7 3
    2010/05/28 1 4 6 7 4 5 3 1 4 5 1 3 35 59 85 59 59 58 47 37 37 ### ### 5
    2010/05/27 1 69 ### 4 5 3 1 4 2 4 57 8 11 11 9 3 11 60 2 4 3 5 0 7
    2010/05/26 2 5 1 3 7 2 8 6 2 6 14 40 53 7 18 45 ### ### 9 8 3 5 2 4
    2010/05/25 ### 53 35 8 6 4 5 18 11 3 7 12 4 5 7 4 5 4 60 4 4 5 2 2
    2010/05/24 0 4 2 3 5 2 0 2 2 2 4 2 2 9 4 1 4 4 40 ### 76 61 41 14
    2010/05/23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
    could you please help with this.
    Thanks
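The ### entries are almost certainly a format overflow: TO_CHAR(..., '99') can only display values up to 99, so any hour with 100 or more log switches prints hashes. Widening the mask (and, since this is RAC, querying GV$LOG_HISTORY so all instances are counted) should fix it. A sketch of one hour column with the wider mask:

```sql
-- '999' allows counts up to 999; widen every hour column the same way
col 00 format 99999
select to_char(first_time,'YYYY/MM/DD') day,
       to_char(sum(decode(to_char(first_time,'hh24'),'00',1,0)),'999') "00"
from   gv$log_history
where  first_time > sysdate - 45
group  by to_char(first_time,'YYYY/MM/DD')
order  by 1 desc;
```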

  • Import performance and archive logs

Well, we are working with Oracle 10g R2 on Solaris.
During import (impdp) it generates a huge volume of archive logs.
Our database size is in terabytes.
How can we stop archive log generation during the import, or at least minimize it?

    Hello,
If you can restart your database, then you may set it to NOARCHIVELOG mode.
Then, after the import is finished, you'll have to set the database back to ARCHIVELOG mode (you'll need to restart the database again).
Afterwards, you'll have to back up your database.
Otherwise, without changing the archive mode of the database, you can back up and compress your archived logs.
    For instance, with RMAN:
    connect target /
    backup
      as compressed backupset
      device type disk
      tag 'BKP_ARCHIVE'
      archivelog all not backed up
      delete all input;
exit;
That way you'll save space on disk.
    Hope this help.
    Best regards,
    Jean-Valentin

  • RMAN Backup Failed.

    Hi Gurus,
My RMAN backup failed with the error below. Please suggest.
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of sql command on default channel at 01/26/2013 22:48:56
    RMAN-11003: failure during parse/execution of SQL statement: alter system archive log current
    ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
    RMAN>
    Recovery Manager complete.
    Thanks,

    The message indicates that your database is in NOARCHIVELOG mode. You cannot make an RMAN Backup of a database that is OPEN if it is in NOARCHIVELOG mode.
    You have two options
    1. Make consistent RMAN Backups with the database only in MOUNT mode (SHUTDOWN ; STARTUP MOUNT)
    OR
    2. Configure for ArchiveLogs (create a target filesystem or FRA), set the database to ARCHIVELOG mode, monitor the volume of ArchiveLog generation, configure both Database and ArchiveLog backups.
    Hemant K Chitale
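If you go with option 2, switching to ARCHIVELOG mode looks roughly like this (a sketch; configure an archive destination or FRA first, and take a fresh backup afterwards):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- confirm the change:
SELECT log_mode FROM v$database;
```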

  • OEM 12c using lots of DB Space and generating over 100Gbytes logs daily

We have upgraded to OEM 12c, but we notice that:
1. The tablespace MGMT_TABLESPACE is growing very rapidly, about 5 GB every week.
2. We are generating many archive logs, in my case about 100 GB every day.
Is this normal, and is there anything I can do to reduce the space usage?
Is anyone else having this problem, or is it just me?
    Many thanks

    Excessive archivelog generation on an EM12c repository database may mean you are hitting bug 14726136. Please see MOS note 1502370.1. I would suggest filing an SR with support to confirm whether or not you are experiencing this bug; they can help you with some additional analysis.

  • GWTDOMAIN using lots of swaps space in WLE 4.2!!!

We are having a problem in one of our non-production environments where GWTDOMAIN is using lots of swap space and not releasing it. It got so bad that swap usage reached 100% and we had to "kill -9" GWTDOMAIN and bounce it to release the space. Is this a known bug? If not, what can I do to resolve this problem, or do we need to open a trouble ticket with BEA?
    Thank for your help.
    Bayo Alege
    Senior Analyst, Systems
    CIMG
    513-723-2954


  • I need help on how to setup hardware raid for ASM.

In the "Recommendations for Storage Preparation" section of the following documentation: http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmprepare.htm
    It mentions:
    --Use the storage array hardware RAID 1 mirroring protection when possible to reduce the mirroring overhead on the server.
    Which is a good raid 1 configuration considering my machine setup?
    “I put my Machine info below.”
    Should I go for something like:
    5 * raid 1 of 2 disks in each raid: disk group DATA
    5 * raid 1 of 2 disks in each raid: disk group FRA
Then ASM will take care of all the striping across the 5 RAID sets inside a disk group, right?
    OR, I go for:
    1 * raid 1 of 10 disks: disk group DATA
    1 * raid 1 of 10 disks: disk group FRA
In the second configuration, does ASM recognize that there are 10 disks in my RAID configuration and stripe across those disks? Or, to use ASM striping, do I need multiple RAID sets in a disk group?
    Here is my Machine Characteristics:
    O/s is Oracle Enterprise Linux 4.5 64 bit
    Single instance on Enterprise Edition 10g r2
    200 GIG database size.
    High "oltp" environment.
    Estimated growth of 60 to 80GIG per year
    50-70GIG archivelogs generation per Day
    Flashback time is 24 hours: 120GIG of flashback space in avg
    I keep a Local backup. Then push to another disk storage, then on tape.
    General Hardware Info:
    Dell PowerEdge 2950
    16 GIG RAM
    2 * 64 bit dual core CPU's
    6 * local 300G/15rpm disks
    Additional Storage:
    Dell PowerVault MD1000
    15 * 300G/15rpm Disks
    So I have 21 Disks in total.

    I would personally prefer the first configuration and let ASM stripe the disks. Generally speaking, many RAID controllers will stripe then mirror (0+1) when you tell it to build a striped and mirrored RAID set on 10 disks. Some will mirror then stripe (1+0) which is what most people prefer. That's because when a 1+0 configuration has a disk failure, only a single RAID 1 set needs to be resync'd. The other members of the stripe won't have to be resynchronized.
So, I'd prefer to have ASM manage 5 LUNs and let ASM stripe across those 5 LUNs in each disk group. It also increases your ability to reorganize your storage: if you need 20% more space in DATA and can afford 20% less in FRA, you can move one of your RAID 1 LUNs from FRA to DATA easily.
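With the first layout, each RAID 1 pair is presented as one LUN and ASM does the striping. Creating the groups might look like this (a sketch - the device paths are hypothetical, and EXTERNAL REDUNDANCY is used because mirroring is already done in the hardware):

```sql
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/lun1', '/dev/rdsk/lun2', '/dev/rdsk/lun3',
       '/dev/rdsk/lun4', '/dev/rdsk/lun5';

CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/lun6', '/dev/rdsk/lun7', '/dev/rdsk/lun8',
       '/dev/rdsk/lun9', '/dev/rdsk/lun10';
```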
    That's my 0.02.

  • DELETING ARCHIVE LOG

    Hi all,
I am running out of space on my system. I also have RMAN backups. What is the best practice to delete the archive logs?
    Regards,
    Sakthivel G

    Hi,
Take an archive log backup (using RMAN) every 2 hours and delete the logs with the 'delete input' option.
There are a few things to check:
1. Do you have a standby database? If so, make sure it is in sync before you delete.
2. The backup policy of the database.
3. Whether the archive log generation is more than expected.
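For instance, the backup-and-delete step as a single RMAN command (the archive logs are removed from disk only after they are written into the backupset):

```sql
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
```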
    Anand

  • Generation of numerous archivelogs when running a batch risk analysis

Is it a normal process to generate so many archive logs when running a batch risk analysis? If the jobs are broken up into three separate processes, we use approximately 7 GB. If we run all the processes as one, we use 100+ GB. What storage requirements should we anticipate in DEV and PRD for the generation of archive logs?

    I found a forum question that has information that resolved our issue.  The following is the link:  https://forums.sdn.sap.com/click.jspa?searchID=14313475&messageID=5373262
    Q: Background jobs to analyze users against our rules are taking forever. Our feeling is that the problem lies in the R/3 backend, however we need to know how to improve performance.
    A: Step 1
    Please ask your SAP BASIS to apply the following notes:
    Note: 1044174 - Recommendation for CC 5.x running on Oracle 10G Database
    Note: 1121978 - Recommended settings to improve performance risk analysis
    Note: 1044173 - Recommended NetWeaver Setting for Access Control 5.x
    Note: 723909 - Java VM settings for J2EE 6.40/7.0
    Step 2
    Once you applied all of the above SAP notes. Please ask your SAP DBA/BASIS to do the following actions:
Truncate the table VIRSA_CC_PRMVL; this is the table which stores all the analysis results.
Gather statistics on all VIRSA_CC* tables. Example: exec dbms_stats.gather_table_stats('SAPSR3DB','VIRSA_CC_PRMVL')

  • Report generation vi's-save report to file.vi

    I am a new user of LabVIEW. I'm using the Report Generation Toolkit VI's to do some customized reports.
1. How can I stop the report from popping up on the front panel when I initialize the report? I know you can use "minimized", but that takes a few seconds, and I would like to not see the generated report pop up at all.
2. I want to use this customized report and constantly save data when a reset button is pushed - that is, save the time stamp, the change in data, etc. on a continuous basis in one folder. Right now, when I use "Save Report to File.vi", it overwrites the previously saved data and all that data is lost. I need to be constantly saving all the generated data to be able to look back at it.
    Thanks.

    Hi shaef,
    Which version of LV and Report Generation Toolkit are you using?
Assuming you are using LV 8.5 and RGT 1.1.2, the attached screenshot should offer some help. Basically, you need to wire in the existing file as your template so that you don't just overwrite the old data. You also need to make use of the series of 'Append' VIs in the toolkit.
    Let us know if this works for you,
    David_B
    Applications Engineer
    National Instruments
    Attachments:
2008-03-23_165242.png (5 KB)

  • What type of wireless router is the 1st generation time capsule? Is it B, G or N. I'm trying to understand why our wifi signal is a bit erratic. Paul

What type of wireless router is the 1st generation Time Capsule? Is it B, G or N? I'm trying to establish whether it's causing signal degradation as a result of conflicts with my BT Home Hub router.

    The 1st generation Time Capsule is an 802.11"n" wireless router, but in default settings it produces a signal that is also compatible with "g" and "b" wireless devices.
If your Home Hub is in close proximity to the Time Capsule and it is also producing a wireless network, either the wireless on the Home Hub or the Time Capsule should be disabled to minimize the chances of wireless interference.
    Interference may also be coming from any cordless phones you may have, or another nearby wireless network as well.
