Max SCN Number in redolog file

Hi,
I have configured a Data Guard environment with the following configuration:
STANDBY TYPE :- PHYSICAL STANDBY
LOG TRANSPORT SERVICE :- ARCH [ ARCHIVER PROCESS ]
STANDBY LOG :- NO STANDBY REDO LOGS ON PRIMARY OR STANDBY
SYNC STATUS OF PRIMARY AND STANDBY :- FULLY IN SYNC
OPERATION :- FAILOVER, CANCELLING RECOVERY WITH 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;'
             THEN ACTIVATING THE STANDBY WITH 'ALTER DATABASE ACTIVATE STANDBY DATABASE;'
THE PRIMARY AND STANDBY ARE FULLY IN SYNC.
ON PRIMARY
THE LAST ARCHIVED SEQUENCE NUMBER IS 12, AND THE FIRST AND LAST SCN ASSOCIATED WITH SEQUENCE 12 ARE AS BELOW:
SELECT SEQUENCE#,FIRST_CHANGE#,NEXT_CHANGE# FROM V$ARCHIVED_LOG WHERE SEQUENCE#=12;
SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
       12        669447       670246
ON STANDBY
THE ARCHIVE LOG WITH SEQUENCE NUMBER 12 HAS BEEN ARCHIVED AND APPLIED ON THE STANDBY DATABASE SUCCESSFULLY.
NOW I AM DOING A FAILOVER USING THE COMMANDS BELOW:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALERT LOG ON STANDBY DATABASE
Media Recovery Log /data/PRD_DR/arch/arch_1_11_834360625.arch
Media Recovery Log /data/PRD_DR/arch/arch_1_12_834360625.arch
Media Recovery Waiting for thread 1 sequence 13
Error 12154 received logging on to the standby
FAL[client, MRP0]: Error 12154 connecting to PRD for fetching gap sequence
Errors in file /apps/oracle/diag/rdbms/stand/PRD/trace/PRD_mrp0_7865.trc:
ORA-12154: TNS:could not resolve the connect identifier specified
Thu Dec 26 18:00:36 2013
alter database recover managed standby database cancel
Thu Dec 26 18:00:36 2013
MRP0: Background Media Recovery cancelled with status 16037
Errors in file /apps/oracle/diag/rdbms/stand/PRD/trace/PRD_mrp0_7865.trc:
ORA-16037: user requested cancel of managed recovery operation
Shutting down recovery slaves due to error 16037
Recovery interrupted!
Errors in file /apps/oracle/diag/rdbms/stand/PRD/trace/PRD_mrp0_7865.trc:
ORA-16037: user requested cancel of managed recovery operation
MRP0: Background Media Recovery process shutdown (PRD)
Waiting for MRP0 pid 7865 to terminate
Managed Standby Recovery Canceled (PRD)
Completed: alter database recover managed standby database cancel
Thu Dec 26 18:00:59 2013
alter database activate standby database
ALTER DATABASE ACTIVATE [PHYSICAL] STANDBY DATABASE (PRD)
tkcrrxms: Killing 2 processes (all RFS)
RESETLOGS after incomplete recovery UNTIL CHANGE 670246
Resetting resetlogs activation ID 1898010833 (0x712158d1)
Online log /data/PRD_DR/REDOLOG11.LOG: Thread 1 Group 1 was previously cleared
Online log /data/PRD_DR/REDOLOG21.LOG: Thread 1 Group 2 was previously cleared
Online log /data/PRD_DR/REDOLOG33.LOG: Thread 1 Group 3 was previously cleared
Standby became primary SCN: 670244
Thu Dec 26 18:01:01 2013
Setting recovery target incarnation to 3
Converting standby mount to primary mount.
ACTIVATE STANDBY: Complete - Database mounted as primary (PRD)
Completed: alter database activate standby database
IN THE STANDBY ALERT LOG I CAN SEE THE FOLLOWING:
RESETLOGS after incomplete recovery UNTIL CHANGE 670246
Standby became primary SCN: 670244
MY QUESTION IS ABOUT THE SCN IN 'Standby became primary SCN: 670244'.
I HAVE CHECKED THE SCNs OF THE ARCHIVE LOG OF SEQUENCE 12 [ USING LOGMINER ], AND THE MAX SCN ASSOCIATED WITH THE ARCHIVE LOG IS 670242:
SELECT MAX(SCN) FROM V$LOGMNR_CONTENTS;
[ FOR LOGMINER I HAVE USED 'EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.DICT_FROM_REDO_LOGS);' ]
  MAX(SCN)
    670242
- WHY DOES LOGMINER NOT SHOW MAX(SCN) AS 670246?
- HOW CAN I SEE SCN 670244 IN THE ARCHIVE LOG WITH SEQUENCE NUMBER 12?
Thanks,

> IN THE STANDBY ALERT LOG I CAN SEE THE FOLLOWING:
> RESETLOGS after incomplete recovery UNTIL CHANGE 670246
> Standby became primary SCN: 670244
> MY QUESTION IS ABOUT THE SCN IN 'Standby became primary SCN: 670244'.
> I HAVE CHECKED THE SCNs OF THE ARCHIVE LOG OF SEQUENCE 12 [ USING LOGMINER ], AND THE MAX SCN ASSOCIATED WITH THE ARCHIVE LOG IS 670242.
In fact, it is a really intelligent question.
First, you have to know that the NEXT_CHANGE# of sequence 12 does not belong to sequence 12; it is the FIRST_CHANGE# of sequence 13.
So in reality, sequence 12 contains changes only up to 670245, and change 670246 is the starting change of sequence 13.
You are not using real-time apply. Per my conclusion above, the last change of sequence 12 is only 670245. As per recovery concepts, if you want to recover changes up to 100, you specify "until 100 + 1", i.e. 101; if you specify 101, recovery is performed up to 100.
1) The max change of sequence 12 is 670245.
2) When recovery is performed until that sequence, it usually recovers up to 670244, as per the recovery rules.
From http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12033.htm
UNTIL CHANGE integer
Processes managed recovery up to but not including the specified system change number (SCN).
Still, at this point I am not drawing a 100% conclusion; I am testing the same thing as you with LogMiner and will let you know for sure.
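To check that relationship yourself (a minimal sketch; the sequence numbers are this thread's example), you could compare adjacent rows in V$ARCHIVED_LOG on the primary:
SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
  FROM V$ARCHIVED_LOG
 WHERE SEQUENCE# IN (12, 13)
 ORDER BY SEQUENCE#;
-- Expected per the explanation above: NEXT_CHANGE# of sequence 12
-- (670246) equals FIRST_CHANGE# of sequence 13.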
> WHY DOES LOGMINER NOT SHOW MAX(SCN) AS 670246?
When you analyzed the archived redo log file, did you use STARTTIME and ENDTIME? Note that if you give an end time that is a bit too early, LogMiner may truncate the information it gathers. Also, Oracle writes checksum and change information as metadata into the redo headers, and it uses some redo records for internal system changes, so some SCNs may not be visible.
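For reference, a minimal LogMiner session sketch; the file name follows the alert log above, and leaving out STARTTIME/ENDTIME (or STARTSCN/ENDSCN) avoids truncating the mined range:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/data/PRD_DR/arch/arch_1_12_834360625.arch', -
  OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING);
SELECT MIN(SCN), MAX(SCN) FROM V$LOGMNR_CONTENTS;
EXECUTE DBMS_LOGMNR.END_LOGMNR;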
HTH.

Similar Messages

  • Dynamic number of xml files with a specified max size

    Hi all,
I'm using a custom report to generate xml files (via an ST program and a CALL TRANSFORMATION) containing
data belonging to IS-U invoices (some are much more complex and richer in data than others).
I'm asked to generate the minimum number of xml files; the only limit is their maximum size (40 MB).
How can I dynamically tell when I need to save an xml file and start the next one, if, as far as I know, the
size is only readable after the transformation?
    Thanks a lot in advance for your ideas.
    Angelo

    My problem is that different users may be loading different files.
    As there may be many simultaneous users and the xml document nodes do not get edited in any way, I only want to have one copy of each file in memory .... hence the application scope.
    If I make the var name fixed then a user loading xml2.xml would overwrite any previous file xml1.xml or whatever.
    I could switch to session scope but that would load possibly hundreds of copies of any given file at any given time and the files are not small :-(
    Keith

  • How can I determine what is the minimum SCN number I need to restore up to.

Say I have a full database backup and I know I have file inconsistency; I want to know the minimum time or SCN I need to roll forward to in order to be able to open the database.
    For example: I do a database restore.
    restore database ;
    RMAN> sql 'alter database open read only';
    sql statement: alter database open read only
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of sql command on default channel at 03/16/2009 15:00:04
    RMAN-11003: failure during parse/execution of SQL statement: alter database open read only
    ORA-16004: backup database requires recovery
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I need to apply archive log files. All references I find for ORA-01194 state the solution is to "apply more logs until the file is consistent". But HOW MANY logs, or more appropriately, up to what time or SCN? How does one determine what TIME or SCN is required to get all files consistent?
    I thought this query might provide the answer, but it doesn't
select max(checkpoint_change#) from v$datafile_header;
MAX(CHECKPOINT_CHANGE#)
             7985876903
    --It applies a bit more redo, but not enough to make my datafiles consistent.
    recover database until SCN=7985876903 ;
    Starting recover at 03/16/09 15:04:54
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    using channel ORA_DISK_3
    using channel ORA_DISK_4
    using channel ORA_DISK_5
    using channel ORA_DISK_6
    using channel ORA_DISK_7
    using channel ORA_DISK_8
    starting media recovery
    channel ORA_DISK_1: starting archive log restore to default destination
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=18436
    channel ORA_DISK_1: reading from backup piece /temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=/temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1 tag=TAG20090315T041716
    channel ORA_DISK_1: restore complete, elapsed time: 00:02:26
    archive log filename=/u01/app/oracle/flash_recovery_area/P1/archivelog/2009_03_16/o1_mf_1_18436_4vxd81yc_.arc thread=1 se quence=18436
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I've discovered I need to apply archive logs until this query reports all datafiles as FUZZY=NO, but this only works by guessing at some time period to roll forward to, then checking the FUZZY column, and trying again. Is there a way to know the specific SCN I have to roll forward to in order for all my datafiles to be consistent?
    select file#
         , status
         , checkpoint_change#
         , checkpoint_time
         , FUZZY
         , RECOVER
     , LAST_DEALLOC_SCN
    from v$datafile_header
    order by checkpoint_time
    Thanks,
    Jason

    The minimum point in time is the time when the last backup piece for datafiles in that backup was completed.
    Your alert.log should show the redo log sequence number at that time.
You can query V$ARCHIVED_LOG and get the FIRST_CHANGE# of the first archived log generated after that backup piece completed.
A LIST BACKUP; in RMAN should also show you the SCNs at the time of the backups.
You can also use TIMESTAMP_TO_SCN -- e.g.
select timestamp_to_scn(to_timestamp('15-MAR-09 09:24:01','DD-MON-RR HH24:MI:SS')) from dual;
will return an approximation of the SCN.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Edited by: Hemant K Chitale on Mar 17, 2009 9:41 AM
    added the LIST BACKUP command from RMAN.
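A minimal sketch of that V$ARCHIVED_LOG lookup (the timestamp is just this thread's example backup completion time; FIRST_TIME and FIRST_CHANGE# are standard columns):
SELECT MIN(FIRST_CHANGE#)
  FROM V$ARCHIVED_LOG
 WHERE FIRST_TIME >= TO_DATE('15-MAR-09 09:24:01','DD-MON-RR HH24:MI:SS');
-- Rolling forward past this SCN (until all datafiles report FUZZY=NO)
-- should make the restored datafiles consistent.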

  • Restore with brtools - need more archive redolog files

Good day
I make an online backup of my Oracle database with brtools
(Oracle 10g, BRBACKUP 7.00 (39)):
brbackup -c -d util_file_online -t online -m all -u /
    bdztoexw anf  2009-01-23 15.27.40 ; 2009-01-23 16.47.21 ; 1  ...............     57    56     0     17671        226215105    17676        226268039  ALL
    online          util_file_online -
    7.00 (39)
    BR0280I BRBACKUP time stamp: 2009-01-23 16.43.38
    BR0232I 57 of 57 files saved by backup utility
    BR0230I Backup utility called successfully
    BR0280I BRBACKUP time stamp: 2009-01-23 16.43.40
    BR0340I Switching to next online redo log file for database instance VPP ...
    BR0321I Switch to next online redo log file for database instance VPP successful
    BR0117I ARCHIVE LOG LIST after backup for database instance VPP
    Parameter                      Value
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /oracle/VPP/oraarch/VPParch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17673
    Next log sequence to archive   17676
    Current log sequence           17676            SCN: 226268039
    Database block size            8192             Thread: 1
    Current system change number   226268041        ResetId: 603135330
After brbackup, I run "brarchive" in the same script:
    brarchive -c -d util_file -sd -u / > $br_out_file
    #ARCHIVE.. 17670  /oracle/VPP/oraarch/VPParch1_17670_603135330.dbf ; 2009-01-23 15.10.36 ; 43450368         226202112  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17671  /oracle/VPP/oraarch/VPParch1_17671_603135330.dbf ; 2009-01-23 15.36.12 ; 43430912         226215105  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17672  /oracle/VPP/oraarch/VPParch1_17672_603135330.dbf ; 2009-01-23 15.40.27 ; 43515904         226227928  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17673  /oracle/VPP/oraarch/VPParch1_17673_603135330.dbf ; 2009-01-23 15.41.06 ; 43729408         226238784  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17674  /oracle/VPP/oraarch/VPParch1_17674_603135330.dbf ; 2009-01-23 16.06.06 ; 43450368         226250315  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    #ARCHIVE.. 17675  /oracle/VPP/oraarch/VPParch1_17675_603135330.dbf ; 2009-01-23 16.43.40 ; 13243904         226263012  1
    #SAVED.... adztolzu svd  *VXF1232718526    2009-01-23 16.51.11 ........... ............
    #COPIED... ........ ...  ................. .......... ........ ........... ............
    #DELETED.. adztolzu svd  2009-01-23 16.51.11
    VPP  util_file  adztolzu svd  2009-01-23 16.47.22 ; 2009-01-23 16.54.47 ; 1  ...........     17670    17675        0        0  ------- 7.00 (39)  @0603135330
    BR0280I BRARCHIVE time stamp: 2009-01-23 16.51.11
    BR0232I 6 of 6 files saved by backup utility
    BR0230I Backup utility called successfully
    BR0016I 6 offline redo log files processed, total size 220.128 MB
Then I take the tape with this data and try to restore only from this tape.
All datafiles and the relevant archive redolog files (17670-17675) are restored without errors,
but at the end this ERROR occurred:
    ERROR at line 1:
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/VPP/sapdata1/system_1/system.data1'
    ORA-00279: change 226268039 generated at 01/23/2009 16:43:40 needed for thread
    1
    ORA-00289: suggestion : /oracle/VPP/oraarch/VPParch1_17676_603135330.dbf
    ORA-00280: change 226268039 for thread 1 is in sequence #17676
My online backup ended before the switch to redolog 17676.
Why do I need this file? I think files 17670-17675 should be enough.
Besides, change 226268039 was generated exactly at the moment of switching to the next online redo log.
Can I try to open the database regardless of this error?
    Thank you for your prompt response
    Andrey Timofeev
    Edited by: Andrey Timofeev on Jul 21, 2009 3:52 PM


  • Hyp FR Error: 5200 : Error executing query.  Exceed max row number 100000

    Hi,
    I am getting the error
    5200 : Error executing query. Exceed max row number 100000
    when I run the report on Financial Reporting. It gives the same error when run on Workspace.
    Have you guys encountered this error before? What are the best ways to tackle it? Help is much appreciated guys.
    -- Adi
Edit 1 - I tried to simplify the parameters, but I still get the same error, making me suspect that the issue is not the 100000-row limit.
    Edited by: Aditya26 on Apr 11, 2012 9:02 AM

    Hi Adi,
    This is from My Oracle Support:
    How to Increase Row Limit to Avoid Error "Exceed Max Row Number 100000" [ID 866832.1]
    Modified 23-FEB-2012 Type HOWTO Status PUBLISHED
    In this Document
    Goal
    Solution
    Applies to:
    Hyperion BI+ - Version: 9.3.1.0.00 to 11.1.1.3.00 - Release: 9.3 to 11.1
    Information in this document applies to any platform.
    Goal
    How do you increase the maximum row limit to avoid the error "5200: Error executing query: Exceed max row number 100000"?
    Solution
1. Edit \Hyperion\common\ADM\<version>\lib\ADM.properties as follows:
Change MAX_ROW_NUMBERS=100000 to MAX_ROW_NUMBERS=500000.
If you are running extremely large reports, you can increase the limit further.
2. Restart Reporting and Analysis services.
    For version 11.1.2.x
In these versions, the ADM.properties file is located under:
%Oracle_Home%\Middleware\EPMSystem11R1\common\ADM\11.1.2.0\lib
    Cheers,
    Mehmet

  • ORA-22290: operation would exceed the maximum number of opened files or LOB

I am getting this error in a procedure:
    ORA-22290: operation would exceed the maximum number of opened files or LOBs
    22290, 00000, "operation would exceed the maximum number of opened files or LOBs"
    // *Cause: The number of open files or LOBs has reached the maximum limit.
    // *Action: Close some of the opened files or LOBs and retry the operation.
    NAME TYPE VALUE
    session_max_open_files integer 10
    Procuedure:
    CREATE OR REPLACE PROCEDURE WMSOWN."PROC_WMS_XML_READ"
    P_EVENT_KEY IN VARCHAR2,
    X_STATUS_MSG OUT VARCHAR2,
    X_STATUS OUT NUMBER
    )AS
    l_parser dbms_xmlparser.Parser;
    domdoc xmldom.DOMDocument;
    nodelist XMLDOM.DOMNODELIST;
    node XMLDOM.DOMNODE;
    n_child XMLDOM.DOMNODE;
    elements XMLDOM.DOMELEMENT;
    name_node_map XMLDOM.DOMNAMEDNODEMAP;
    parent_seg varchar2(4000);
    tag_name_bkp varchar2(4000); -- LOOK OUT BRAD IS CODING AGAIN
    chile_seg VARCHAR2(4000);
    p_seg VARCHAR2(4000);
    p_seg1 VARCHAR2(4000);
    p_seg2 VARCHAR2(30);
    p_int_name VARCHAR2(50);
    col_value VARCHAR2(100):=NULL;
    len1 NUMBER;
    cnt NUMBER;
    seg_id_bkp NUMBER; -- LOOK OUT BRAD IS CODING AGAIN
    sequence_bkp NUMBER; -- LOOK OUT BRAD IS CODING AGAIN
    prev_sequence NUMBER; -- LOOK OUT BRAD IS CODING AGAIN
    prev_seq_set VARCHAR2(3); --brad coding
    parent_id number; ---brad coding
    valid_seg NUMBER; -- LOOK OUT BRAD IS CODING AGAIN
    data_status VARCHAR2(10);
    v_main_seg VARCHAR2(50);
    v_seq_no NUMBER;
    V_CLOBLOCATOR CLOB;
    V_FILELOCATOR BFILE;
    v_amount_to_load NUMBER;
    dest_offset NUMBER := 1;
    src_offset NUMBER := 1;
    lang_context NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
    warning NUMBER;
    v_event_name USR_OUB_FILE_PROCESS_DETAILS.EVENT_NAME%TYPE;
    v_file_name USR_OUB_FILE_PROCESS_DETAILS.FILE_NAME%TYPE;
    DIRECTORY_PATH_INVALID EXCEPTION;
    PRAGMA EXCEPTION_INIT(DIRECTORY_PATH_INVALID,-22285);
    NO_PRIVILEGES EXCEPTION;
    PRAGMA EXCEPTION_INIT(NO_PRIVILEGES,-22286);
    INVALID_DIRECTORY EXCEPTION;
    PRAGMA EXCEPTION_INIT(INVALID_DIRECTORY,-22287);
    FILE_NOT_FOUND EXCEPTION;
    PRAGMA EXCEPTION_INIT(FILE_NOT_FOUND,-22289);
    P_DIRECTORY VARCHAR2(50) :='WMS_XML_DIR_OUB';
    v_whid poldat_view.wh_id%type;
    BEGIN
    --NAME :  PROC_WMS_XML_READ.PLS
    --DESCRIPTION :
    -- Procedure PROC_WMS_XML_READ search XML files from remote location.
    -- Open,Parse and Read XML files. Store all XML values into tables.
    -- Developed by Dharmesh Patidar(jw782)
    -- History: New condition is added i.e. p_seg:=parent_seg to maintain PARENT and CHILD relationship
    -- by Vishwanath Dubey(jl246) on 17-June-2011
    -- BRAD_XML_DEBUG table removed for CLEANING Activity by DHARMESH PATIDAR(JW782) ON 29-JUNE-2011.
    /*BLOCK FOR CAPTURING EVENT NAME BASED ON EVENT ID START*/
    BEGIN
    SELECT event_name,file_name,WAREHOUSE_ID
    INTO v_event_name, v_file_name,v_whid
    FROM usr_oub_file_process_details
    WHERE event_id=p_event_key
    AND process_flag='U';
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    x_status_msg:=SQLCODE||':'||' Error while selecting event name and event id in Procedure PROC_WMS_XML_READ : Record is not available in USR_OUB_FILE_PROCESS_DETAILS table for event id '|| P_EVENT_KEY;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN TOO_MANY_ROWS THEN
    x_status_msg:=SQLCODE||':'||' Error while selecting event name and event id in Procedure PROC_WMS_XML_READ : More than one Records found in USR_OUB_FILE_PROCESS_DETAILS table for event id '|| P_EVENT_KEY;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN VALUE_ERROR THEN
    x_status_msg:=SQLCODE||':'||' Error while selecting event name and event id in Procedure PROC_WMS_XML_READ : Varibale length is small or data type mismatch while selecting event id and event name in USR_OUB_FILE_PROCESS_DETAILS table for event id '|| P_EVENT_KEY;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN OTHERS THEN
    x_status_msg:=SQLCODE||':'||'Error in Procedure PROC_WMS_XML_READ while selecting event name and event id ';
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    END;
    /*BLOCK FOR CAPTURING EVENT NAME BASED ON EVENT ID END*/
    /*LOGIC TO READ XML FROM REMOTE LOCATION START*/
    DBMS_LOB.CREATETEMPORARY(V_CLOBLOCATOR, TRUE);
    V_FILELOCATOR := BFILENAME(P_DIRECTORY,V_FILE_NAME);
    DBMS_LOB.OPEN(V_FILELOCATOR,DBMS_LOB.FILE_READONLY);
    V_AMOUNT_TO_LOAD := DBMS_LOB.GETLENGTH(V_FILELOCATOR);
    DBMS_LOB.LOADCLOBFROMFILE(V_CLOBLOCATOR,
    V_FILELOCATOR ,
    V_AMOUNT_TO_LOAD,
    DEST_OFFSET,
    SRC_OFFSET,
    0,
    LANG_CONTEXT,
    WARNING);
    dbms_lob.close(V_FILELOCATOR);
    /*LOGIC TO READ XML FROM REMOTE LOCATION END*/
    /*Temporary Code to help with debug Clear the table before populating it with new data*/
    --delete table BRAD_XML_DEBUG;
    cnt:=1;
    seg_id_bkp:=0;
    data_status:='N';
    v_seq_no:=0;
    prev_seq_set:='NO';
    /*create new parser.*/
    l_parser := dbms_xmlparser.newParser;
    dbms_xmlparser.parseClob(l_parser, replace(V_CLOBLOCATOR,'&','1x2x3x4x5'));
    /*Parse the document and create a new DOM document.*/
    domdoc :=dbms_xmlparser.getDocument(l_parser);
    /* get all elements in the DOM*/
    nodelist := XMLDOM.getElementsByTagName(DOMDoc, '*');
    len1 := XMLDOM.getLength(nodelist);
    /* loop through elements of the DOM */
    FOR j in 1..len1-1 LOOP --MAIN LOOP START
    BEGIN
    /*below sql will fetch Node from table to travel xml data*/
    BEGIN
    SELECT int_name,tag_name
    INTO p_int_name, p_seg1
    FROM usr_wms_tag_det
    WHERE int_name=v_event_name
    AND seq_no =cnt;
    EXCEPTION
    --PLEASE DO NOT HANDLE ANY EXCEPTION APART MENTIONED BELOW
    WHEN OTHERS THEN
    NULL;
    END;
    IF cnt=1 THEN
    v_main_seg:=p_seg1;
    END IF;
    EXCEPTION
    --PLEASE DO NOT HANDLE ANY EXCEPTION APART MENTIONED BELOW
    WHEN no_data_found THEN
    null;
    WHEN OTHERS THEN
    x_status_msg:=SQLCODE||':'||'Error in Procedure PROC_WMS_XML_READ while selecting interface name and tag name'||sqlerrm;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    END;
    /*LOGICS TO READ XML START*/
    node:=XMLDOM.item(nodelist, j);
    elements:=XMLDOM.makeElement(node);
    parent_seg:=(xmldom.getTagName(elements));
    tag_name_bkp:=(xmldom.getTagName(elements));
    name_node_map:=xmldom.getAttributes(node);
    n_child:=xmldom.getFirstChild(node);
    col_value:=xmldom.getNodeValue(n_child);
    /*get the sequence number from the interface hierarchy table */
    SELECT count(1)
    INTO valid_seg
    FROM usr_wms_tag_det
    WHERE int_name=v_event_name
    AND tag_name = tag_name_bkp;
    if valid_seg>0 then
    begin
    SELECT seq_no
    INTO sequence_bkp
    FROM usr_wms_tag_det
    WHERE int_name=v_event_name
    AND tag_name = tag_name_bkp;
    seg_id_bkp:=seg_id_bkp+1;
    p_seg:=parent_seg;--Modified by Vishwanath Dubey dated 16-jun-2011
    end;
    end if;
    if prev_seq_set = 'NO' then
    begin
    prev_sequence := sequence_bkp;
    prev_seq_set := 'YES';
    end;
    end if;
    if sequence_bkp < prev_sequence then --you just moved up level(s) in the message structure
    begin
    select max(seg_id)
    into parent_id
    from usr_wms_global_xml_det
    where seg_sequence = sequence_bkp-1;
    prev_sequence := sequence_bkp;
    end;
    end if;
    if sequence_bkp > prev_sequence then --you just moved down a level in the message structure
    parent_id := seg_id_bkp-1;
    prev_sequence := sequence_bkp;
    end if;
    /*end getting the hierarchy table sequence */
    /*LOGICS TO READ XML END */
    IF (parent_seg =p_seg1) or (parent_seg=p_seg2) THEN
    if parent_seg=v_main_seg then
    v_seq_no:=v_seq_no+1;
    end if;
    BEGIN
    /* INSERTING DATA LOGICS TO READ XML END */
    INSERT INTO usr_wms_global_xml_det values(p_int_name,tag_name_bkp,parent_seg,seg_id_bkp,sequence_bkp,parent_id,'','','',J,v_seq_no,data_status,cnt);
    EXCEPTION
    WHEN OTHERS THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ while inserting records in USR_WMS_GLOBAL_XML_DET table for interface name and parent segment '||P_INT_NAME||','||PARENT_SEG;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    END ;
    p_seg:=parent_seg;
    p_seg2:=P_SEG1;
    cnt:=cnt+1;
    ELSE
    chile_seg:=parent_seg;
    BEGIN
    /* INSERTING DATA LOGICS TO READ XML END */
    INSERT INTO usr_wms_global_xml_det values(p_int_name,tag_name_bkp,p_seg,seg_id_bkp,sequence_bkp,parent_id,'',chile_seg,replace(TRIM(Col_Value),'1x2x3x4x5','&'),J,v_seq_no,data_status,cnt);
    EXCEPTION
    WHEN OTHERS THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ while inserting records in USR_WMS_GLOBAL_XML_DET table for interface name and parent segment '||P_INT_NAME||','||PARENT_SEG;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    END;
    END IF;
    END LOOP; --MAIN LOOP END
    dbms_xmldom.freeDocument(DOMDoc);
    x_status:=0;
    EXCEPTION
    WHEN DIRECTORY_PATH_INVALID THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ DIRECTORY PATH IS INVALID';
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN FILE_NOT_FOUND THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ INVALID XML FILE NAME OR FILE DOES NOT EXISTS';
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN NO_PRIVILEGES THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ Insufficient privileges on file or directory NAME- '||p_directory||' to perform FILEOPEN operation.';
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    RETURN;
    WHEN OTHERS THEN
    x_status_msg:=SQLCODE||' : Error in Procedure PROC_WMS_XML_READ '|| SQLERRM;
    x_status:=SQLCODE;
    proc_wms_error_trace(v_whid, --warehouse id
    null , --event id
    v_event_name , --event name
    x_status, --error code
    x_status_msg ); --error message
    dbms_xmlparser.freeParser(l_parser);
    dbms_xmldom.freeDocument(DOMDoc);
    RETURN;
    END PROC_WMS_XML_READ;
    Edited by: user13427480 on Feb 8, 2013 7:08 PM

When you post a SQL statement, please also check similar threads:
ORA-22290: operation would exceed the maximum number of opened files or LOB
https://kr.forums.oracle.com/forums/thread.jspa?messageID=10842417
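For what it's worth, a hedged sketch of the usual remedies for ORA-22290, based on the documented action above (close some of the opened files or LOBs); the parameter value is only an example:
EXECUTE DBMS_LOB.FILECLOSEALL;  -- close every BFILE opened in this session
SHOW PARAMETER session_max_open_files
ALTER SYSTEM SET SESSION_MAX_OPEN_FILES = 20 SCOPE = SPFILE;  -- static parameter, needs a restart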

  • What is the max size of a zip file with the JDK1.5 ?

Hello everybody,
I'm a French student and for a project I need to create a zip file, but I don't know in advance the number or the size of the files to include in my zip.
I wish to know if someone has the answer to my question: what is the max size of a zip file with JDK 1.5? I believe that with JDK 1.3, the size limit of a zip was about 2 GB, wasn't it?
Thank you for any answer!
Good day!
PS: sorry for my very poor English ;-)

Here is all I have found for the moment:
...Okay, what about my suggestion of creating your own 10GB file?
Try this:
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Random;

class Main {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int mbs = 1024;
        writeFile("E:/Temp/data/1GB.dat", mbs);
        long end = System.currentTimeMillis();
        System.out.println("Done writing "+mbs+" MB's to disk in "+
                ((end-start)/1000)+" seconds.");
    }

    private static void writeFile(String fileName, int numMegaBytes) {
        try {
            int numBytes = numMegaBytes*1024*1024;
            File file = new File(fileName);
            FileChannel rwChannel =
                    new RandomAccessFile(file, "rw").getChannel();
            // Map the whole target size, then fill it with random bytes 1 KB at a time.
            ByteBuffer buffer = rwChannel.map(
                    FileChannel.MapMode.READ_WRITE, 0, numBytes);
            Random rand = new Random();
            for(int i = 1; i <= numMegaBytes; i++) {
                for(int j = 1; j <= 1024; j++) {
                    byte[] bytes = new byte[1024];
                    rand.nextBytes(bytes);
                    buffer.put(bytes);
                }
            }
            rwChannel.close();
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
}
On my machine it took me 43 seconds to create a 1GB file, so it shouldn't take too long to create your own 10GB. Then try zipping that file.
Good luck.

  • Increase redolog file size - Merits and Demerits

    Hi
Currently we are on Oracle version 9.2.0.7.0 and have redolog file sizes (mirrlog and origlog) of 100 MB.
Now we are planning to increase the size to 200 MB so that we can reduce the number of archive log files.
Can you please let me know what the demerits of a bigger redolog file size would be?
And also let me know the step-by-step process for increasing the size of the redolog files?
    Thank you

    > I understand what you are saying but in our situation our backup policy is one time online backup  and one time offline backup in a week.....Online backup is on Thu and Offline backup is on Sunday.......
    >
    > In case of system crash if needed we would need to apply archive log files; If we have lesser number of archive logs; recover database would be faster.......correct me if am wrong.
    You are wrong.
    Ok, let's see an example:
    You took your backup on sunday midnight and your DB needs recovery on wednesday.
    Meanwhile you created say, 800 M worth of redolog data per day.
    That sums up to (monday, tuesday, wednesday) 3x800 M = 2400 M that need to be recovered.
    Going with your current setup (100 M redolog size) the largest archivelog file can be 100 M, makes 24 files to restore and recover.
    After changing the redologsize to, say 200 M, you only have 12 files to restore and recover.
But you know what? It's still 2400 M of data.
    Since you will likely not put every archivelog file to its own tape, but rather change the tape each day (just an assumption) or maybe don't use manually operated tapes at all, the little latency overhead in handling tapes doesn't count in to your overall recovery time.
    All in all you still need to feed the same amount of data to the recovery process.
    Apart from this:
if you're discussing short recovery times, then you'd never perform just two data backups a week.
You'd make online backups every day - maybe incremental ones.
You'd use the flashback recovery area.
An additional thing often overlooked: in many cases the ultimate performance killer for a restore/recovery scenario is not the technology in use.
It's that when the case arises, the DBA is no longer sure what to do.
    He wonders:
    Where the good backups are.
    How to get them back from the 3rd party backup tool.
    How to check them.
    Where to get a different storage system because the original one is broken.
    How to figure out what needs recovery
    How the tools work
By ensuring that you always master the theory and the how-to of restore and recovery - that's how you make it quick, painless, and free of data loss.
    regards,
    Lars
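For the step-by-step part of the original question, a minimal sketch of the usual resize procedure; the group numbers, paths and sizes are examples only, and you should check STATUS in V$LOG before dropping anything:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oracle/VPP/origlogA/log_g4m1.dbf') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/oracle/VPP/origlogB/log_g5m1.dbf') SIZE 200M;
ALTER SYSTEM SWITCH LOGFILE;   -- repeat until no 100 MB group is CURRENT
ALTER SYSTEM CHECKPOINT;       -- wait for the old groups to become INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;   -- then drop and re-add the remaining old groups the same way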

  • Limit on number of XML files used ?

    Hi,
I have been a user of 4.5 Workgroup for 2 years and am now also testing our models on Engage 2008. We make data dashboards with sometimes large datasets (25 columns, 10000 rows in Excel), sometimes using XML, sometimes all contained.
Is there a limit on the number of XML files - mapped in Excel - that can be handled by Xcelsius? There also seems to be a difference in loading time; when the XML files are mapped to one single sheet in Excel, the loading/importing into Xcelsius seems to go a lot faster.
Any advice on this?
    kind regards
    Marc Vanderkeel, BE

    Anil,
It is not always the 10000 rows; we have several dashboards with XML files (max 1335 rows) that work fine. The question still is whether there is a limit on the actual number of XML files attached, or a limit on the total linked data volume.
There also seems to be a difference when the XML files are mapped on the same Excel sheet (faster import).
    wkr
    Marc

  • How to get endtime of a redolog file

How do I get the end time of a redolog file?
I get the start time from the following statement, but the high time shows as
01-JAN-1988 00:00:00
select TO_CHAR(LOW_TIME,'DD-MON-YYYY HH24:MI:SS'),TO_CHAR(HIGH_TIME,'DD-MON-YYYY HH24:MI:SS') from v$logmnr_logs;
Where do I get the correct end time?

I need the high time of the current redo log. Using the following query I get:
select min(to_char(first_time, 'DD-MON-YY HH24:MI')),
max(to_char(first_time, 'DD-MON-YY HH24:MI')),MEMBERS from v$log
group by members;
MIN(TO_CHAR(FIR MAX(TO_CHAR(FIR MEMBERS
28-NOV-07 09:33 28-NOV-07 12:19 1
It returns the max and min low time among all redo logfiles.
What I need is the high time of the current log file.
Using the next query I get:
select min(TO_CHAR(LOW_TIME,'DD-MON-YYYY HH24:MI:SS')),max(TO_CHAR(LOW_TIME,'DD-MON-YYYY HH24:MI:SS')) from v$logmnr_logs;
28-NOV-2007 12:19:09 28-NOV-2007 12:19:09
Since I add only the current log file to the LogMiner session, v$logmnr_logs has information about that file only (one row), so min and max return the same time here. What I need is the HIGH_TIME of the current redo log file.
    Any idea?
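For what it's worth, a hedged sketch: once a log has been archived, its end time is exposed as NEXT_TIME in V$ARCHIVED_LOG; the CURRENT online log has no end time yet, so SYSDATE is the only practical upper bound:
SELECT SEQUENCE#,
       TO_CHAR(FIRST_TIME,'DD-MON-YYYY HH24:MI:SS') AS low_time,
       TO_CHAR(NEXT_TIME, 'DD-MON-YYYY HH24:MI:SS') AS high_time
  FROM V$ARCHIVED_LOG
 ORDER BY SEQUENCE#;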

  • Question on recovery of redolog file

Hi,
How do I open the database
when there is a block corruption error in a redolog file?
There are two archive redo log groups,
and the database is in noarchivelog mode.
Thanks in advance

I think this is a very difficult situation. You cannot proceed to open your database as your redo log is corrupt (the current one, I guess :( ). Your database is in noarchivelog mode, which makes it more difficult.
When you say you have two archive log groups, do you mean redo log groups?
Are those groups multiplexed?
In that case you can still recover from this error by dropping the corrupt redo log member and using the valid surviving redo log member.
If you have only two redo log groups, with non-multiplexed members and no archivelog mode, then I suggest you dump the contents of your redo log file and detect up to which point you can still find a valid SCN, so you can execute an incomplete recovery until that SCN.
The procedure to dump the contents of your redo log file is:
alter system dump logfile '/xxxx';
This will create a text file with the contents of your redo log file; once you have it, you can check up to which valid SCN you can get. Next perform an incomplete recovery until that found SCN.
~ Madrid.

  • How can I add Redolog files in standby

    Hi,
I am about to create a standby database in Standard Edition of Oracle9i through RMAN. I can't find any entry regarding redologs in the DUPLICATE command. Can I add redologs after creating the standby database, or does a standby database not need redologs? Please clarify this for me.
Thanks in advance.
    Khawar

    Hi,
you can add redolog files for a variety of purposes, and this is not mandatory for creating standby databases.
Anyway, you can use the commands given below.
    ALTER DATABASE ADD STANDBY LOGFILE
    ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500K;
    You can also specify a number that identifies the group using the GROUP option:
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
    ('/oracle/dbs/log1c.rdo','/oracle/dbs/log2c.rdo') SIZE 500K;
Thanks.

  • Maximum number of open files..

    I'm looking for some help...probably a consultant to give us a call.
    I need to know the following:
For Solaris 2.6 and 7, the default and maximum number of open files per process.
The procedure to change the default setting to the maximum.
The amount of RAM required to handle the max setting.
The risks inherent in setting this parameter to the max.
Any info on test environments where the max setting has been utilized (e.g. database TPC benchmarks, etc.).
    Feel free to call 408.861.1103 - happy to pay for the advice.

    Hi!
    The maximum number of file descriptors per process is set by two parameters:
    rlim_fd_cur (soft limit, defaults to 64)
    rlim_fd_max (hard limit, defaults to 1024)
    Processes may raise their soft limit up to the hard limit using setrlimit(2).
Setting rlim_fd_cur high is not a problem, as the file descriptors are allocated in chunks of 24 as required, not all in one go. They don't actually require that much memory either.
    As administrator you may set the limits by adding an entry to /etc/system, eg:
    set rlim_fd_max=600
    and rebooting.
    Note however on 32 bit solaris, the significant limitation is that the stdio library FILE structure limits your process to 256 fds. This is increased to 65536 for 64bit programs on solaris 7.
    Select(3c) can use up to 65536 fds (#define FD_SETSIZE 65536 in your code for 32bit solaris 7).
    Hope that helps.
    Ralph
    SUN DTS

  • Instance fast recovery and redolog file size

    Hi,
Could you please explain to me how the size of the redolog files affects how fast the instance recovers?
    Thanks
    KSG

Very quickly, I shall try to explain.
The answer lies in the number of dirty buffers needed for recovery. That number is limited by the checkpoint process, when DBWR is pinged to write some of them to the datafiles. With a log switch you are going to hit a checkpoint. So if the log files are smaller, the checkpoint frequency is higher, making the buffers get written to the datafiles more aggressively and limiting the instance recovery time. The bigger you make them, the longer it takes for a checkpoint to happen, and thus the more time instance recovery can require. That said, using small log files can also lead to "checkpoint incomplete" errors, since DBWR may not be able to keep up with the rate at which checkpoint events are generated.
    HTH
    Aman....
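To watch this trade-off live, a small sketch; the columns are from V$INSTANCE_RECOVERY, and the interpretation follows the explanation above:
SELECT ACTUAL_REDO_BLKS,   -- redo blocks crash recovery would replay right now
       TARGET_REDO_BLKS,   -- limit enforced by incremental checkpointing
       ESTIMATED_MTTR      -- current estimated recovery time, in seconds
  FROM V$INSTANCE_RECOVERY;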

  • Checkpoint number and SCN number

    Hi,
I am getting confused between these two terms; I have asked a couple of people and everywhere I get a different explanation.
    Can anyone please clarify these -
a) Are the checkpoint number and the SCN the same kind of number (will the SCN# be greater than the checkpoint#)?
b) I was told that the checkpoint also gets incremented when a log switch happens, but when I issue ALTER SYSTEM SWITCH LOGFILE, the CHECKPOINT_CHANGE# in V$DATABASE does not get incremented. It does get incremented when I issue ALTER SYSTEM CHECKPOINT.
    Thanks in advance
    Neel

    816153 wrote:
    Thank you all.
Can someone help me understand why the CHECKPOINT_CHANGE# of V$DATABASE does not get incremented when I issue "alter system switch logfile"?
    What do you think can be the reason? Let's hear from you first. And by the time you prepare the answer, please have a read of this pdf as well,
http://prutser.files.wordpress.com/2008/12/checkpointsukoug.pdf
    HTH
    Aman....
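A quick sandbox sketch to observe the difference yourself (CURRENT_SCN in V$DATABASE assumes 10g or later):
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
ALTER SYSTEM SWITCH LOGFILE;   -- CHECKPOINT_CHANGE# typically stays put
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
ALTER SYSTEM CHECKPOINT;       -- full checkpoint: CHECKPOINT_CHANGE# advances
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;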
