Reduce the amount of archived log generated.

RDBMS version : 9.2.0.8
SQL> SELECT tablespace_name, force_logging FROM dba_tablespaces;
TABLESPACE_NAME   FORCE_LOGGING
SYSTEM            NO
The above shows the status of the database, but when I do maintenance work rebuilding the index tablespace I get a day or two's worth of archived log files.
I don't think ALTER DATABASE NO FORCE LOGGING will reduce the amount of log generated.
Is there any other method available?
thanks

Hi,
If you force logging for a tablespace or for the database, that means only that any NOLOGGING clause on statements against segments in that tablespace/database is ignored. NO FORCE LOGGING is the default.
To reduce the amount of redo generated, you may consider using NOLOGGING for the rebuild of your indexes:
create index <indexname> on <table>(<column>) nologging;
Or put the tablespace in which the indexes are created into NOLOGGING mode:
alter tablespace <indextablespace> nologging;
Or (perhaps even better) simply leave the indexes as they are. Most indexes do not need a rebuild anyway.
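If an index really must be rebuilt, a minimal sketch of the low-redo variant (placeholder names; note that NOLOGGING operations are not recoverable from archived logs, so back up the affected tablespace afterwards):
alter index <indexname> rebuild nologging;
alter index <indexname> logging;  -- restore normal logging for subsequent DML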
Kind regards
Uwe
http://uhesse.wordpress.com

Similar Messages

  • Huge archive logs generated, EBS R12 11gR2 DB

    Hi all,
    Yesterday a huge number of archive logs was generated (abnormally high) during the night, when there is no load on the server, and it caused my disk to become full. How can I determine which concurrent program caused this huge number of archive logs?
    Regards,
    Mohanad.

    Hi Mohanad,
    Please check these threads:
    https://forums.oracle.com/message/10834762
    https://forums.oracle.com/thread/2417003
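    To narrow down the time window first, a quick sketch: count log switches per hour, then correlate the busy hours with the concurrent requests that ran at that time (e.g. in FND_CONCURRENT_REQUESTS):
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') hour, COUNT(*) log_switches
    FROM v$log_history
    WHERE first_time > SYSDATE - 2
    GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER BY 1;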
    Thanks &
    Best Regards,

  • HTML output for archive logs generated

    Hi All,
    Greetings of the day,
    I have a SQL script scheduled in cron which reports the number of archive logs generated in each hour. I have to modify the shell script to include HTML commands so the output comes out in HTML format.
    Any ideas on how I can do this?
    Thanks ,
    baskar.l

    Please take time to read the documentation. There is a link to "Generating HTML Reports in SQL*Plus", which also has examples.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14357/ch7.htm#CHDCECJG
    Edited by: Hemant K Chitale on May 21, 2009 5:08 PM
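    In short, SQL*Plus can produce the HTML itself; a minimal sketch (hypothetical spool file name) to wrap around the existing hourly query in the cron script:
    SET MARKUP HTML ON SPOOL ON
    SPOOL archlog_per_hour.html
    SELECT TO_CHAR(first_time, 'HH24') hour, COUNT(*) logs
    FROM v$log_history
    WHERE first_time > SYSDATE - 1
    GROUP BY TO_CHAR(first_time, 'HH24')
    ORDER BY 1;
    SPOOL OFF
    SET MARKUP HTML OFF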

  • Archive log generating views

    Our database is running in archive log mode.
    I want to know which sessions generated/are generating (between sysdate and sysdate-2) the most archive log.
    Can I get it from a database view?

    855516 wrote:
    our database is running in archive log mode.
    i want to know which are the sessions generated/generating(sysdate and sysdate -2) more archive log .
    can i get it from the database view???
    Use v$archived_log / v$log_history:
    sys@ORCL>  select name,completion_time from v$archived_log where completion_time > sysdate-2;
    NAME                                                                             COMPLETIO
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000042_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000043_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000044_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000045_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000046_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000048_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000049_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000050_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000051_0776788597.0001                      27-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000052_0776788597.0001                      28-MAR-12
    C:\ORACLE\FLASH_RECOVERY_AREA\ARC0000000053_0776788597.0001                      28-MAR-12
    11 rows selected.

  • Generating lots of archive logs

    Hi Friends,
    We have an EBS 11i on AIX 5L which has just been set up and is ready for UAT, but the AppsDBA/Functional Consultant who did it are not around anymore to ask questions. I just noticed there are archive logs generated every day, like 30 logs, when in fact the application is not being used. Are there concurrent programs
    that have been set up to update data in a background process, like recursive updating, which is not really necessary? How do I check if there are updates being done.
    Thanks a lot

    Do not stop this concurrent program as it is used to synchronize the Workflow local tables with the user and role information stored in the product application tables until each affected product performs the synchronization automatically.
    More details can be found in the following note:
    Note: 171703.1 - 11.5.x: Implementing Oracle Workflow Directory Service Synchronization
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=171703.1
    Did you check the total size of the log files? I believe you should not be worried until the system is delivered to the users; you can then monitor the number of log files generated daily and, based on that, start your investigation.
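    For that monitoring, a small sketch of the daily archived-log volume:
    SELECT TRUNC(completion_time) day,
    ROUND(SUM(blocks * block_size)/1024/1024) mb
    FROM v$archived_log
    GROUP BY TRUNC(completion_time)
    ORDER BY 1;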

  • How to pre calculate archive logging?

    Hello,
    We are trying to initialize a BI workflow load which fills the table BPMWITSTP. The extractor uses insert and update statements. The table looks like this:
    MANDT             CLNT     3     0     Client
    SWW_WIID     NUMC     12     0     Work item ID
    SWFRMETS     DEC     21     7     Workflow: method-end timestamp
    Our Q&A machine runs Oracle 10.2.0.2.0 on Solaris. Our hardware hosting party created a ZFS mount for the archive logs with a fixed size of 25 GB; the log files are written into this directory and eventually moved to tier-4 storage (tape) when the directory is almost full.
    When I start the extractor the table gets filled, and so does the archive log destination. After about 30 minutes the ZFS mount is full and Oracle crashes because of the limited space on the mount (no archive log could be written because the tape backup run was too slow).
    I understand the problem with the archive logs (no space available), but what I don't understand is how the extractor generates 25 GB of logging with simple inserts into this tiny table. We don't have the problem with other extractors which generate a lot more data (setup table for 2LIS e.g.).
    I might be able to get our hosting party to increase the ZFS mount size, but first I have to know how much archive logging will be created.
    Is there even a way to pre-calculate the size of the logging created by an insert statement?
    Kind regards,
    Edited by: Matthias Gamsjager on Nov 24, 2009 9:43 AM
    Edited by: Matthias Gamsjager on Nov 24, 2009 9:51 AM
    Edited by: Matthias Gamsjager on Nov 24, 2009 9:52 AM

    Hi,
    1st: 25 GB for archiving isn't much.
    On a productive system it should be several hundred GB.
    How big are your online redo logs? Check the recommendations by SAP.
    2nd:
    if your INSERT process contributes most of the workload, we can roughly estimate the amount of redo generated (be aware that a lot of other things can cause archive logging, so I would recommend checking
    what else is going on).
    What goes into the archived logs is the redo information of your INSERT statement:
    INSERT INTO BPMWITSTP (MANDT, SWW_WIID, SWFRMETS) VALUES ( .... )
    You should also check why this excessive number of INSERTs is going into the table.
    The query below shows you the amount of archived-log space in GB for a time interval.
    You can correlate the number of inserts done with the inserts expected and get a rough estimate
    (add 30% more GB as a safety buffer).
    select to_char(min(completion_time),'HH24:MI:SS DD.MM.YYYY') comptimeMin,
           to_char(max(completion_time),'HH24:MI:SS DD.MM.YYYY') comptimeMax,
           count(recid) ArchNo,
           round((max(completion_time) - min(completion_time)) * 24 * 60, 2) log_switch_comp_timelag,
           round(((max(completion_time) - min(completion_time)) * 24 * 60), 2) / count(recid) log_switch_minutes,
           round(sum(blocks * block_size)/1024/1024/1024, 2) GBRedo
    from V$ARCHIVED_LOG
    where completion_time between
          to_date('24.11.2009 00:00:09','DD.MM.YYYY HH24:MI:SS') and
          to_date('24.11.2009 23:59:59','DD.MM.YYYY HH24:MI:SS');
    bye
    yk
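    To answer the last question directly: you can estimate redo per insert by measuring the 'redo size' session statistic around a representative test batch and scaling up. A minimal sketch (run in the same session as the test inserts, before and after):
    SELECT b.value redo_bytes
    FROM v$mystat b, v$statname n
    WHERE n.statistic# = b.statistic#
    AND n.name = 'redo size';
    -- run e.g. 10,000 of the extractor's inserts between the two runs;
    -- (after - before) / 10000 = redo bytes per insert, then scale up.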

  • Rman backup archive log

    Hi Guys,
    Can you advise on the syntax to perform an RMAN backup of the archive logs generated in the last 2 days?
    Should it be 1 or 2?
    thanks!
    1. BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';
    2. BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';

    What prevents you from trying both?
    I'm not trying to be difficult here, but why take the time to ask people in a forum, without even supplying a version number, rather than just finding out?
    It took me less than 60 seconds to cut and paste both of your command lines into RMAN and look at the output.
    Edited by: damorgan on Jan 19, 2013 4:11 PM
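    For the record, the two commands select different sets of logs (semantics per the RMAN documentation):
    # logs generated during the last 2 days -- what was asked for:
    BACKUP ARCHIVELOG FROM TIME 'SYSDATE-2';
    # logs whose completion time is older than 2 days ago:
    BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2';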

  • Urgent: Huge diff in total redo log size and archive log size

    Dear DBAs
    I have a concern regarding the size of redo log and archive log generated.
    Is the equation below correct?
    total size of redo generated by all sessions = total size of archive log files generated
    I am experiencing a situation where, when I look at the total size of redo generated by all the sessions and the size of archive logs generated, there is a huge difference.
    My total all-session redo size is 780 MB, whereas my archive log directory has consumed 23 GB.
    Before I started measuring I cleared the archive directory and began monitoring from a specific time.
    Environment: Oracle 9i Release 2
    How I tracked the sizing information is below.
    Log on as the SYS user and run the following statements:
    DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
    CREATE TABLE REDOSTAT
    (
    AUDSID NUMBER,
    SID NUMBER,
    SERIAL# NUMBER,
    SESSION_ID CHAR(27 BYTE),
    STATUS VARCHAR2(8 BYTE),
    DB_USERNAME VARCHAR2(30 BYTE),
    SCHEMANAME VARCHAR2(30 BYTE),
    OSUSER VARCHAR2(30 BYTE),
    PROCESS VARCHAR2(12 BYTE),
    MACHINE VARCHAR2(64 BYTE),
    TERMINAL VARCHAR2(16 BYTE),
    PROGRAM VARCHAR2(64 BYTE),
    DBCONN_TYPE VARCHAR2(10 BYTE),
    LOGON_TIME DATE,
    LOGOUT_TIME DATE,
    REDO_SIZE NUMBER
    )
    TABLESPACE SYSTEM
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    GRANT SELECT ON REDOSTAT TO PUBLIC;
    CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    INSERT INTO SYS.REDOSTAT
    (AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
    SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
    LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
    FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
    WHERE
    A.SID = B.SID
    AND
    B.STATISTIC# = C.STATISTIC#
    AND
    C.NAME = 'redo size'
    AND
    A.AUDSID = sys_context ('USERENV', 'SESSIONID');
    COMMIT;
    END TR_SESS_LOGOFF;
    /
    Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size. This is at a time when no other user is logged in except myself.
    Is there anything wrong with the query for collecting redo information, or are there some hidden processes which don't provide redo information on a session basis?
    I have seen similar implementations to the above at many sites.
    Kindly suggest a mechanism whereby I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
    If I don't find a solution I will raise an SR with Oracle.
    Thanks
    [V]

    You can query v$sess_io, column block_changes, to find out which session is generating how much redo.
    The following query gives you the session redo statistics:
    select a.sid, b.name, sum(a.value) from v$sesstat a, v$statname b
    where a.statistic# = b.statistic#
    and b.name like '%redo%'
    and a.value > 0
    group by a.sid, b.name;
    If you want, you can look only at the redo size for all the current sessions.
    Jaffar
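    A minimal sketch of the v$sess_io approach mentioned above:
    SELECT s.sid, s.username, i.block_changes
    FROM v$sess_io i, v$session s
    WHERE s.sid = i.sid
    ORDER BY i.block_changes DESC;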

  • Question about only new archive logs backed up in backup

    Hi,
    We take two online backups daily. We are running the database in ARCHIVELOG mode, configured as PRIMARY and PHYSICAL STANDBY. Until now we were including all archive logs in the backup, but it was causing a lot of disk space utilization.
    So, based on a search in this forum, I am planning to back up only the new archive logs generated since the last backup, using the following command:
    BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
    I am not sure how this impacts restore and recovery when we take only the new archive logs in the backup.
    We restore the database and then always perform incomplete recovery up to the latest SCN captured in the backup, using the following commands:
    RESTORE DATABASE;
    RECOVER DATABASE UNTIL SCN $BACKUP_LAST_SCN;
    Do you see any problem/risk in implementing this solution going forward?
    Please provide your thoughts/inputs on this.
    Thanks.
    Shardul

    Hi,
    We are not deleting archive logs from the actual location after backup; we keep the latest 6 days of archive logs there. But we are planning to put only the new archive logs, not yet backed up, in the backup image because of the disk size problem.
    For your reference, below are our database backup RMAN commands. We are taking a full database backup.
    run {
    ALLOCATE CHANNEL C1 TYPE DISK;
    delete noprompt archivelog all completed before 'sysdate-5';
    SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP INCREMENTAL LEVEL=0 CUMULATIVE format '$dir/level0_%u' DATABASE include current controlfile
    for standby force;
    SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP ARCHIVELOG all not backed up 1 times format '$dir/archivelogs_%s_%t' FORCE;
    BACKUP CURRENT CONTROLFILE format '$dir/control_primary' FORCE;
    }
    With this policy, do you see any problem when we restore the database as PRIMARY or PHYSICAL STANDBY on a server? We are using Oracle 10.2.0.3.
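    In general this is safe as long as every archive log between the level-0 backup SCN and the recovery target SCN is available from some backup or on disk. A quick sanity check before relying on it (a sketch; the SCN is hypothetical):
    RESTORE DATABASE PREVIEW;   # lists the backups and archived logs RMAN would use
    RESTORE DATABASE;
    RECOVER DATABASE UNTIL SCN 1234567;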

  • Enormous amount of archive log produced for inserts

    Hi there,
    This is my first post on a forum ever. I have set up a test database that we will be installing on a customer site soon. One table is going to be very large (about 520 million rows). It is partitioned by range (on date) 53 ways (one for every week of the year + 1). I have written a PL/SQL script to load the table with about 1 million rows, but I am baffled by the amount of archive log produced. Here is the PL/SQL script:
    declare
    TYPE hash_a IS TABLE OF VARCHAR2(15) INDEX BY VARCHAR(15);
    TYPE array_a IS TABLE OF VARCHAR2(15) INDEX BY PLS_INTEGER;
    min_to_meid hash_a;
    min_array array_a;
    temp VARCHAR2(20);
    begin_time TIMESTAMP(0);
    end_time TIMESTAMP(0);
    begin
    -- Initialise
    FOR i IN 1..3000 LOOP
    temp := TO_CHAR(6162500000 + i);
    min_array(i) := temp;
    -- Convert to hex char :
    SELECT TO_CHAR(130791510000+i, 'xxxxxxxxx') INTO min_to_meid(temp) FROM DUAL;
    END LOOP;
    SELECT SYSTIMESTAMP INTO begin_time FROM DUAL;
    begin_time := begin_time - interval '1' year;
    end_time := begin_time + interval '1' minute;
    FOR i IN 1..365 LOOP
    FOR j IN 1..3000 LOOP
    INSERT INTO SESSIONTAB (ms_msid, session_type, start_time, finish_time, call_type, error_type, cm_instance, term_state)
    VALUES (min_array(j), 21, begin_time, end_time, 'OTAPA', 0, 1, 'ok');
    END LOOP;
    begin_time := begin_time + interval '1' day;
    end_time := end_time + interval '1' day;
    END LOOP;
    end;
    Essentially it is putting 3000 entries for each day into the table. Each partition increases in size by about 2.5 MB, i.e. 130 MB in total. Yet the amount of archive log produced is crazy: about 1 GB (and that is multiplexed two ways, meaning 2 GB appear on disk!). All tablespaces for the table and its indexes are LOGGING. Surely this amount is excessive though? Any ideas what is going on?

    Thanks for getting back so quickly.
    Here are the index creation statements :
    rem #--
    rem # Indexes for sessiontab
    rem #--
    CREATE INDEX session_ms_msid_index ON sessiontab(ms_msid) LOCAL;
    CREATE INDEX session_device_id_index ON sessiontab(device_id) LOCAL;
    CREATE INDEX session_new_ms_msid_index ON sessiontab(new_ms_msid) LOCAL;
    CREATE INDEX session_new_mdn_index ON sessiontab(new_mdn) LOCAL;
    And here is the table creation. Note that the partition there is a hack; the real partitions are created later in this SQL script. As you can see there are quite a few indexes, and as they are local they reside in each partition's tablespace. The only index that resides in its own tablespace is the primary key index. As I understand it, changing the tablespace to NOLOGGING wouldn't make a difference anyway, because conventional INSERT statements always generate redo.
    rem #---
    rem # Session Table
    rem #---
    CREATE TABLE sessiontab (
    record_id number(38),
    ms_msid VARCHAR2(15) NOT NULL,
    session_type NUMBER(3) NOT NULL,
    start_time TIMESTAMP(0) NOT NULL,
    finish_time TIMESTAMP(0) NOT NULL,
    call_type CHAR(5) NOT NULL,
    error_type NUMBER(3) NULL,
    error_data NUMBER(3) NULL,
    sms_acc_den_reason NUMBER(2) NULL,
    otaf_id VARCHAR2(32) NULL,
    cm_instance NUMBER(2) NULL,
    term_state VARCHAR2(64) NOT NULL,
    trn CHAR(8) NULL,
    activation_ms_msid VARCHAR2(15) NULL,
    csc_id VARCHAR2(64) NULL,
    msc_addr VARCHAR2(12) NULL,
    mdn VARCHAR2(15) NULL,
    device_id VARCHAR2(15) NULL,
    msc_id NUMBER(8) NULL,
    job_id NUMBER(10) NULL,
    nam_download CHAR(1) NULL,
    succ_name CHAR(1) NULL,
    new_ms_msid VARCHAR2(15) NULL,
    new_mdn VARCHAR2(15) NULL,
    home_sid_changed CHAR(1) NULL,
    old_home_sid NUMBER(5) NULL,
    new_home_sid NUMBER(5) NULL,
    sspr_download CHAR(1) NULL,
    old_prl_id NUMBER(5) NULL,
    new_prl_id NUMBER(5) NULL,
    sspr_p_rev NUMBER(5) NULL,
    unlocked_phone CHAR(1) NULL,
    old_spc CHAR(6) NULL,
    changed_spc CHAR(1) NULL,
    new_spc CHAR(6) NULL,
    generated_akey CHAR(1) NULL,
    ssd_updated CHAR(1) NULL,
    re_auth CHAR(1) NULL,
    succ_ms_commit CHAR(1) NULL,
    succ_akey_commited CHAR(1) NULL,
    succ_spasm CHAR(1) NULL,
    succ_spasm_init CHAR(1) NULL,
    succ_welcom_msg CHAR(1) NULL,
    extra0 VARCHAR2(64) NULL,
    extra1 VARCHAR2(64) NULL,
    extra2 VARCHAR2(64) NULL,
    CONSTRAINT sessiontab_pk PRIMARY KEY (record_id) USING INDEX TABLESPACE sessiontab_indexes
    )
    PARTITION BY RANGE (start_time)(
    PARTITION DEFAULT_PART VALUES LESS THAN (TO_DATE('31/12/2000','DD/MM/YYYY'))
    );
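    For a load like this, much of the redo comes from maintaining the primary key and the four local indexes row by row. A lower-redo alternative is a direct-path load (a sketch, assuming a hypothetical staging table sessiontab_stage with matching columns; NOLOGGING changes are not recoverable from archived logs, so back up afterwards):
    ALTER TABLE sessiontab NOLOGGING;
    INSERT /*+ APPEND */ INTO sessiontab
    SELECT * FROM sessiontab_stage;
    COMMIT;  -- required after a direct-path insert before the table can be re-read
    ALTER TABLE sessiontab LOGGING;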

  • RMAN Alert Log Message: ALTER SYSTEM ARCHIVE LOG

    Created a new Database on Oracle 10.2.0.4 and now seeing "ALTER SYSTEM ARCHIVE LOG" in the Alert Log only when the online RMAN backup runs:
    Wed Aug 26 21:52:03 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:52:03 2009
    Thread 1 advanced to log sequence 35 (LGWR switch)
    Current log# 2 seq# 35 mem# 0: /u01/app/oracle/oradata/aatest/redo02.log
    Current log# 2 seq# 35 mem# 1: /u03/oradata/aatest/redo02a.log
    Wed Aug 26 21:53:37 2009
    ALTER SYSTEM ARCHIVE LOG
    Wed Aug 26 21:53:37 2009
    Thread 1 advanced to log sequence 36 (LGWR switch)
    Current log# 3 seq# 36 mem# 0: /u01/app/oracle/oradata/aatest/redo03.log
    Current log# 3 seq# 36 mem# 1: /u03/oradata/aatest/redo03a.log
    Wed Aug 26 21:53:40 2009
    Starting control autobackup
    Control autobackup written to DISK device
         handle '/u03/exports/backups/aatest/c-2538018370-20090826-00'
    I am not issuing a log switch command. The RMAN commands I am running are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/exports/backups/aatest/%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u03/exports/backups/aatest/%d_%U';
    BACKUP DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-2';
    I do not see this message on any other 10.2.0.4 instances. Has anyone seen this and if so why is this showing in the log?
    Thank you,
    Curt Swartzlander

    There's no problem with the log switch. Please refer to the documentation for more information on the syntax "PLUS ARCHIVELOG":
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup003.htm#sthref377
    Adding BACKUP ... PLUS ARCHIVELOG causes RMAN to do the following:
    1. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    2. Runs BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, then RMAN skips logs that it has already backed up to the specified device.
    3. Backs up the rest of the files specified in the BACKUP command.
    4. Runs the ALTER SYSTEM ARCHIVE LOG CURRENT command.
    5. Backs up any remaining archived logs generated during the backup.
    This guarantees that datafile backups taken during the command are recoverable to a consistent state.

  • Archive Log vs Full Backup Concept

    Hi,
    I just need some clarification on how backups and archive logs work. Let's say starting at 1PM I have archive logs 1,2,3,4,5, and then I perform a full backup at 6PM.
    Then I resume generating archive logs at 6PM to get logs 6,7,8,9,10, and stop at 11PM.
    If my understanding is correct, the archive logs should allow me to restore Oracle to a point in time anywhere between 1PM and 11PM. But if I only have the full backup, then I can only restore to a single point, which is 6PM. Is my understanding correct?
    Do the archive logs only get applied to the datafiles when the backup occurs, or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
    Thanks in advance.

    thelok wrote:
    Thanks for the great explanation! So I can do a point-in-time restore from any time since the datafiles were last written (or from when I have the last set of backed-up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by doing a checkpoint with "alter system archive log current" or "backup database plus archivelog"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN on the datafiles. Is this true? This would be for the purposes of preserving disk space.
    Hi,
    See this example. I hope it explains your doubt.
    # My current date is 06-11-2011 17:15
    # I do not have a backup of this database
    # My retention policy is to keep 1 backup
    # I start by listing archive logs.
    RMAN> list archivelog all;
    using target database control file instead of recovery catalog
    List of Archived Log Copies
    Key     Thrd Seq     S Low Time            Name
    29      1    8       A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
    30      1    9       A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
    31      1    10      A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
    32      1    11      A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
    33      1    12      A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
    ## See, I have archive logs from "29-10-2011 12:01:58" until "05-11-2011 23:28:49", but I don't have any backup of the database.
    # So I perform a backup of the database including archive logs.
    RMAN> backup database plus archivelog delete input;
    Starting backup at 06-11-2011 17:15:21
    ## Note above that RMAN forces archiving of the current log; the archive log generated here would be usable only with a previous backup.
    ## That is not my case... I don't have a backup of the database.
    current log archived
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=159 devtype=DISK
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=8 recid=29 stamp=766018840
    input archive log thread=1 sequence=9 recid=30 stamp=766278027
    input archive log thread=1 sequence=10 recid=31 stamp=766366111
    input archive log thread=1 sequence=11 recid=32 stamp=766516067
    input archive log thread=1 sequence=12 recid=33 stamp=766516350
    input archive log thread=1 sequence=13 recid=34 stamp=766516521
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
    piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
    archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
    archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
    Finished backup at 06-11-2011 17:15:38
    ## RMAN finished the backup of the archive logs and starts the backup of the database.
    ## My backup starts at "06-11-2011 17:15:38"
    Starting backup at 06-11-2011 17:15:38
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
    input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
    input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
    input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
    input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
    piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
    Finished backup at 06-11-2011 17:16:03
    ## And finished at "06-11-2011 17:16:03", so I can recover my database from this time.
    ## I will need the archive logs (transactions) which were generated during the backup of the database.
    ## Note that during the backup some blocks are copied before others; the SCNs are in an inconsistent state.
    ## To make it consistent I need to apply the archive logs which have all the transactions recorded.
    ## Starting another backup, of the archived logs generated during the database backup.
    Starting backup at 06-11-2011 17:16:04
    ## So RMAN automatically forces another "checkpoint" after the backup has finished,
    ## archiving the current log, because this archive log has all the transactions needed to bring the database to a consistent state.
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=14 recid=35 stamp=766516564
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
    piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
    Finished backup at 06-11-2011 17:16:06
    ## Note: I can recover my database from time "06-11-2011 17:16:03" (end of the full backup)
    ## until "06-11-2011 17:16:04" (last archive log generated); that is my recovery window in this scenario.
    ## Listing the backups I have:
    ## Archive logs in backupset before the full backup started - BP Key: 40
    ## Full database backup in backupset - BP Key: 41
    ## Archive logs in backupset after the full backup finished - BP Key: 42
    RMAN> list backup;
    List of Backup Sets
    ===================
    BS Key  Size       Device Type Elapsed Time Completion Time
    40      196.73M    DISK        00:00:15     06-11-2011 17:15:37
        BP Key: 40   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171521
            Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
      List of Archived Logs in backup set 40
      Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
      1    8       766216     29-10-2011 12:01:58 855033     31-10-2011 23:00:30
      1    9       855033     31-10-2011 23:00:30 896458     03-11-2011 23:00:23
      1    10      896458     03-11-2011 23:00:23 937172     04-11-2011 23:28:23
      1    11      937172     04-11-2011 23:28:23 976938     05-11-2011 23:28:49
      1    12      976938     05-11-2011 23:28:49 1023057    06-11-2011 17:12:28
      1    13      1023057    06-11-2011 17:12:28 1023411    06-11-2011 17:15:21
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    41      Full    565.66M    DISK        00:00:18     06-11-2011 17:15:57
        BP Key: 41   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171539
            Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
      List of Datafiles in backup set 41
      File LV Type Ckp SCN    Ckp Time            Name
      1       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
      2       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
      3       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
      4       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
      5       Full 1023422    06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
    BS Key  Size       Device Type Elapsed Time Completion Time
    42      3.00K      DISK        00:00:02     06-11-2011 17:16:06
        BP Key: 42   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171604
            Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
      List of Archived Logs in backup set 42
      Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
      1    14      1023411    06-11-2011 17:15:21 1023433    06-11-2011 17:16:04
    ## Here what I was trying to explain makes sense.
    ## As I don't have a backup of the database older than my last backup, all archive logs generated before my full backup are useless.
    ## Deleting what is obsolete in my environment, RMAN chooses backupset 40 (i.e. all archived logs generated before my full backup).
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type                 Key    Completion Time    Filename/Handle
    Backup Set           40     06-11-2011 17:15:37
      Backup Piece       40     06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
    Do you really want to delete the above objects (enter YES or NO)? yes
    deleted backup piece
    backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
    Deleted 1 objects
    In the above example I could have run "delete archivelog all" before starting the backup, because those logs would not be needed; but to demonstrate I followed this unnecessary way (back up the archive logs, then delete them afterwards).
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Nov 7, 2011 1:02 AM

  • System I/O and Too Many Archive Logs

    Hi all,
    This is frustrating me. Our production database suddenly began to produce too many archived redo logs -- again. This happened before: two months ago our database was producing too many archive logs; just then we began to get async I/O errors, and we consulted a DBA who restarted the database server, telling us it was caused by the system (???).
    After this restart the amount of archive logs decreased drastically. I was deleting the logs by hand (350 GB DB, 300 GB arch area), and after this the archive logs never exceeded 10% of the 300 GB archive area. Right now the logs are increasing 1% (3 GB) per 7-8 minutes, which is too much.
    I checked from Enterprise Manager: the System I/O graph is continuous, and the details show processes like ARC0, ARC1, LGWR (log file sequential read and db file parallel write are the most active events). Also physical reads are very inconsistent and can exceed 30000 KB at times. The undo tablespace is full nearly all of the time, causing ORA-01555.
    The above symptoms all began today. The database is closed at 3:00 am for an offline backup and opened at 6:00 am every day.
    Nothing has changed in the database (9.2.0.8), applications (11.5.10.2) or OS (AIX 5.3).
    What is the reason for this most senseless behaviour? Please help me.
    Thanks in advance.
    Regards.
    Burak

    Selam Burak,
    A high number of archive logs is being created because you may have massive redo creation on your database. Do you have an application that updates, deletes or inserts into any kind of table?
    What is written in the alert.log file?
    Do you have the undo tablespace with the guaranteed retention option, btw?
    Have you ever checked the log file switch frequency map?
    Please use the SQL below to determine the switch frequency:
    SELECT * FROM (
    SELECT * FROM (
    SELECT   TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
    , TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
    FROM V$LOG_HISTORY
    WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
    GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
    ) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
    ) WHERE ROWNUM < 8;
    Ogan

  • Create procedure is generating too many archive logs

    Hi
    The following procedure was run on one of our databases and it hung, since there were too many archive logs being generated.
    What would be the answer? The db must remain in archivelog mode.
    I understand the nologging concept, but as far as I know it applies to creating tables, views, indexes and tablespaces. This script is creating a procedure.
    CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
    ,P_GRE NUMBER
    ,P_SDATE VARCHAR2
    ,P_EDATE VARCHAR2
    ,P_ssn VARCHAR2
    ) IS
    CURSOR MainCsr IS
    SELECT DISTINCT
    PPF.NATIONAL_IDENTIFIER SSN
    ,ppf.full_name FULL_NAME
    ,ppa.effective_date Pay_date
    ,ppa.DATE_EARNED period_end
    ,pet.ELEMENT_NAME
    ,SUM(TO_NUMBER(prv.result_value)) VALOR
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAf.ASSIGNMENT_ID ASSG_ID
    ,paf.ORGANIZATION_ID
    FROM
    pay_element_classifications pec
    , pay_element_types_f pet
    , pay_input_values_f piv
    , pay_run_result_values prv
    , pay_run_results prr
    , pay_assignment_actions paa
    , pay_payroll_actions ppa
    , APPS.pay_all_payrolls_f pap
    ,Per_Assignments_f paf
    ,per_people_f ppf
    WHERE
    ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND ppa.payroll_id = pap.payroll_id
    AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
    AND ppa.payroll_action_id = paa.payroll_action_id
    AND paa.action_status = 'C'
    AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
    AND ppa.action_status = 'C'
    --AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
    AND paa.assignment_action_id = prr.assignment_action_id
    AND prr.run_result_id = prv.run_result_id
    AND prv.input_value_id = piv.input_value_id
    AND piv.name = 'Pay Value'
    AND piv.element_type_id = pet.element_type_id
    AND pet.element_type_id = prr.element_type_id
    AND pet.classification_id = pec.classification_id
    AND pec.non_payments_flag = 'N'
    AND prv.result_value <> '0'
    --AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
    -- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
    AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
    AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
    ------------------------------------------------------------------TO get emp.
    AND ppf.person_id = paf.person_id
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
    ------------------------------------------------------------------TO get emp. ASSIGNMENT
    --AND paf.assignment_status_type_id NOT IN (7,3)
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
    GROUP BY PPF.NATIONAL_IDENTIFIER
    ,ppf.full_name
    ,ppa.effective_date
    ,ppa.DATE_EARNED
    ,pet.ELEMENT_NAME
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAF.ASSIGNMENT_ID
    ,paf.ORGANIZATION_ID;
    BEGIN
    DELETE cust.DFC_PAYROLL_DW
    WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND tax_unit_id = NVL(p_GRE, tax_unit_id)
    AND ssn = NVL(p_ssn, ssn);
    COMMIT;
    FOR V_REC IN MainCsr LOOP
    INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
    VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
    COMMIT;
    END LOOP;
    END;
    /
    So, how can I assist our developer with this, so that she can run it again without generating a ton of logs?
    Thanks
    Oracle 9.2.0.5
    AIX 5.2

    The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' MB of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
    I would question the performance of the procedure shown: using a cursor loop with a commit after every row is going to be a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo.
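    To illustrate the loop point, a minimal sketch (hypothetical names; the same pattern applies to cust.DFC_PAYROLL_DW and the MainCsr query above). One set-based INSERT with a single commit does the same work with far less overhead, though the redo for the inserted rows themselves is unchanged:
    INSERT INTO target_table (ssn, full_name, pay_date)
    SELECT ssn, full_name, pay_date FROM source_view;
    COMMIT;  -- one commit at the end instead of one per row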

  • Reduce Archive logs

    Hi,
    I have a database with a schema that gets refreshed nightly (user-level export/import). This is generating a lot of redo. I have a space crunch at the archive log location but cannot turn off archiving.
    How can I reduce the redo generation of the job?
    Thanks,
    Prachi

    If you already have the index then yes, Oracle will maintain it during the import unless it is unusable; but if you don't have the index, Oracle will not create it if you exported with indexes=n.
    Well, let's see if that's correct or not...
    SQL> create table exp_test(id number,name varchar2(20));
    Table created.
    SQL> create index exp_test_idx on exp_test(id);
    Index created.
    SQL> insert into exp_test values(1,'larry');
    1 row created.
    SQL> insert into exp_test values(2,'bill');
    1 row created.
    SQL> commit;
    Commit complete.
    --export table without index
    C:\junk>exp scott/tiger file=exp_test_noindex.dmp indexes=n tables=exp_test
    . . exporting table EXP_TEST 2 rows exported
    Export terminated successfully without warnings.
    --export table with index
    C:\junk>exp scott/tiger file=exp_test_index.dmp indexes=y tables=exp_test
    . . exporting table EXP_TEST 2 rows exported
    Export terminated successfully without warnings.
    SQL> select index_name from user_indexes where table_name='EXP_TEST';
    INDEX_NAME
    EXP_TEST_IDX
    SQL> DROP TABLE EXP_tEST;
    Table dropped.
    --import table with indexes
    C:\junk>imp scott/tiger file=exp_test_index.dmp tables=exp_test
    . . importing table "EXP_TEST" 2 rows imported
    Import terminated successfully without warnings.
    --index is created
    SQL> select index_name from user_indexes where table_name='EXP_TEST';
    INDEX_NAME
    EXP_TEST_IDX
    SQL> drop table exp_test;
    Table dropped.
    --import table without any indexes
    C:\junk>imp scott/tiger file=exp_test_noindex.dmp tables=exp_test
    . . importing table "EXP_TEST" 2 rows imported
    Import terminated successfully without warnings.
    --there is no index
    SQL> select index_name from user_indexes where table_name='EXP_TEST';
    no rows selected
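    Building on that: to cut redo during the nightly refresh, one option (a sketch using the demo objects above) is to import without indexes and re-create them NOLOGGING afterwards; NOLOGGING work is not recoverable from archived logs, so back up after the refresh:
    imp scott/tiger file=exp_test_noindex.dmp tables=exp_test
    -- then, in SQL*Plus, re-create the index with minimal redo:
    CREATE INDEX exp_test_idx ON exp_test(id) NOLOGGING;
    ALTER INDEX exp_test_idx LOGGING;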
