Is your RAC database working well?

Hi, all.
I have a 2-node RAC database, 10.2.0.2.0, on 32-bit Windows 2003 EE SP1.
I am wondering if your RAC database is working well.
In my case, there are a number of gc-related wait events such as gc buffer busy and gc cr request, so I need to restart one node from time to time (on average, once a week).
How about yours?
Thanks and Regards.

In my case, there are a number of gc related wait events such as gc buffer busy and gc cr request. Thus, I need to restart one node from time to time (on average, once a week).
Seeing excessive gc-related wait events usually indicates poor interconnect performance: the interconnect may not have the required bandwidth and is the likely bottleneck. In other words, the data requests going over the interconnect are much higher than it can service without waits. The holding instance may not be able to make the requested block available immediately, and so the wait occurs.
Just curious, how did you decide that these gc waits are causing performance problems and that restarting the nodes would fix it? Are you noticing better performance after restarting the nodes?
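If you want to quantify the gc waits before deciding on a restart, a quick sketch against the standard GV$ views (10g; adjust as needed for your environment) is something like:
select inst_id, event, total_waits,
       round(time_waited / 100) as seconds_waited   -- time_waited is in centiseconds
from   gv$system_event
where  event like 'gc%'
order  by time_waited desc;
Comparing the two instances, and trending this between restarts, would show whether the interconnect really is the issue.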
Thanks
Chandra

Similar Messages

  • DBA panel not working correctly for RAC databases

    I hope this is the correct forum for this stuff... I learned that there is a DBA panel in SQL Developer, which is a great idea. But when trying it on a RAC database, the output is quite confusing. For example, the init parameters are displayed twice, or rather once for every instance, but a column to identify the instance is missing. I assume that the query goes to gv$parameter but doesn't make use of the INST_ID column. Or am I missing some configuration trick...?
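    For reference, this is roughly the instance-aware query I would expect the panel to run (just a sketch on my part, assuming it simply reads gv$parameter):
    select inst_id, name, value
    from   gv$parameter
    order  by name, inst_id;
    With INST_ID in the result set it would be obvious which instance each row belongs to.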

    This probably has nothing to do with your problem but something is screwed up in the way CSCC handles Configurator panels.
    Converted a panel made for CS6 using Configurator 4. Got it loaded into CC. It was not quite right, so I made some minor changes. The changes never showed up in CC. Deleted the original folder under Panels, changed the name of the panel and reinstalled. Whoa! The original deleted panel was the only one that showed up under Extensions, not the new one. This in spite of the fact that that folder no longer existed.
    I have tried all sorts of maneuvers, but the fact is that the original panel is the only one that PS recognizes even though it does not exist on the drive. It is locked into PS and nothing I have done will change it.
    Larry

  • Jdbc 9.2 work with RAC database

    Dear Sirs,
    I notice that JDBC 9.2 supports the use of a service name in the connection string, i.e. host:port_no:service_name; prior to 9.2, the SID had to be used to make the connection.
    Questions:
    1. After using JDBC 9.2 with a service name, can it benefit from the load-balancing and failover features of a 9i RAC database?
    2. Is JDBC 9.2 compatible with an Oracle 8.1.7 database?
    Thanks

    I am having a number of problems using JDBC 9.2 with Oracle Server 8.1.7. Specifically, it is not recognizing existing column names and is applying unrelated constraints when attempting to access tables. It appears to function correctly when using Oracle Server 9.0.1. Is anyone having similar problems?

  • Convert RAW Files in Your Aperture Database to Adobe DNG Files

    The following describes how to convert all the RAW images in your Aperture database from manufacturer formats, such as Sony's ARW and Canon's CR2, to Adobe's DNG while retaining all the Adjustments already applied to your RAW files.  In the example below I am assuming that your Aperture Library has ARW and CR2 files.  These steps work with the latest version of Aperture, being Version 3.3, and have not been tested with earlier versions (in fact, it probably will not work because the database structure changed in 3.3 - however, this means that the steps below can also be applied to your iPhoto library).  The steps are:
    1. Within Finder select the Aperture Library and Secondary Click to bring up the Shortcut Menu.  From this select "Show Package Contents"; this will open a Window showing all the files/directories contained within your Aperture Library.
    2. Drag the "Masters" folder out of the Package and place it on your Desktop.  The purpose of this step is so that Applications, such as Adobe DNG Converter, can "see" the "Masters" folder, which they cannot do if it is located within the Aperture Library Package.
    3. Run the Adobe DNG Converter, select the above "Masters" folder with the "Select Folder" button, make sure you have selected the option "Save in the Same Location", it is also a good idea to select the option "Skip source image if the destination already exists", check your Preferences then select the "Convert" button.
    4. Adobe DNG Converter will now convert all the RAW files to Adobe DNG files and save them in the same location as your existing RAW files.  Once complete, take a note of (a) the number of files converted and (b) the types of files converted, such as if the conversion includes ARW, CR2, NEF files etc.  In this example I will assume that the converter only found ARW and CR2 files; if your system is different then modify the steps below to make sure it covers all the RAW file types converted in your particular system.
    5. Select the "Masters" folder and in the Finder Window Search Field search for all the files that end in .ARW and .CR2 (this filename search list should match the types of files found by the Adobe DNG Converter in step (4)(b) above).  The number of files returned by the search must match the number of files recorded by the Adobe DNG Converter in step (4)(a) above.  Do NOT put the .DNG files in your search criteria.  Select all the files found in the search and move them to the Trash.  This will delete all the original manufacturer's RAW files from your Aperture Library leaving behind all the new DNG files.
    6. Move the "Masters" folder on your Desktop back to the root directory of the Aperture Library Package Content directory.
    7. Select the Finder Window containing the Aperture Library Package Contents.
    8. If there is a file called "ApertureData.xml" then open it with a text editor.  Search and Replace ".arw" with ".dng", ".ARW" with ".DNG", ".cr2" with ".dng" and ".CR2" with ".DNG" (note, do not use the " marks in your search).  Make sure you cover all the file types incorporated in your particular system.  Save the "ApertureData.xml" file.
    9. Traverse to the Database/apdb directory.  Select the "BigBlobs.apdb" file and open it with a Hex editor.  In this example I will use Hex Fiend by Ridiculous Fish (see http://ridiculousfish.com/hexfiend/).  Once the file is open perform a Find and Replace ensuring you are finding and replacing Text and not Hex.  In Hex Fiend this means selecting Edit/Find from the menu and then selecting the "Text" button to the top/left of the window.  In your Find/Replace field you will need to find ".arw" and replace it with ".dng", make sure you select "Replace All" (note, do not use the " marks in your search).  Do exactly the same for ".ARW" with ".DNG", ".cr2" with ".dng" and ".CR2" with ".DNG" (and whatever particular RAW files were in your system).
    10. Perform exactly the same steps in (9) for the files "History.apdb", "ImageProxies.apdb", "Library.apdb" and "Properties.apdb".
    That is it, your Aperture Library now contains DNG files instead of your original manufacturer files while still retaining all the Adjustments originally made in Aperture to those manufacturer files.  Of course, you can repeat the same step and replace your DNG files with the original RAW manufacturer files if you wish.  This process works because:
    1. Aperture does not store the Adjustments in the RAW files, it keeps these in its internal SQLite database.
    2. By using a Hex Editor you (a) don't have to play with SQLite to gain access to Aperture's data and (b) because you are replacing text that has exactly the same number of characters you are not invalidating the format of the underlying data file - this is why you use a Hex Editor instead of a simple text editor.
    Think of Aperture as being a repository that holds Adjustments which then link to the original RAW source.  Therefore, the above process simply replaces your RAW source and therefore all the Aperture Adjustments are still valid; same Adjustments, new source.  In case you ask, no, you cannot transfer Adjustments in and out of Aperture because there is no standard to transform adjustments between different photographic applications.

    A rather involved method, David.
    I am sure it works, and compliments for figuring it out, but I think one critical step is missing in your workflow: Before you begin - backup, backup, backup!
    And I think all the edits in your database that you are doing so diligently are what you bought Aperture to do for you; why do it yourself?
    I convert selected raw files this way - without manually patching the Aperture Library:
    Export the originals of the raw images that I want to convert.
    Run dng-converter.
    Import the converted originals back, flag them,  and move them to the project they came from.
    Sort the project by capture date, so that identical images are shown side by side.
    Then I use the Lift & Stamp tool to transfer all adjustments and tags from the original raw file to the DNG copy. I check whether any edits are left to do, then delete the original.
    It may take a little longer than your method, but this way all edits in the library are done by Aperture, and I am protected from accidental slips when editing the property list files, which requires very careful work.
    Patching the database files inside the library may be justified as a last resort, when you need to fix and rescue a broken Aperture library and none of the provided tools is working, but not as a routine operation for batch conversion of image files. It is very error-prone. One wrong entry in the library files and your Aperture library may become unreadable.
    Regards
    Léonie

  • Cannot start RAC database

    Hi,
    Oracle RAC database 10.2.0.3/RedHat4 with 2 nodes.
    In the beginning we had an error ORA-600 [12803], so only SYS could connect to the database. I found note 1026653.6, which says that we need to create the AUDSES$ sequence, but before that we have to restart the database.
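    For reference, the AUDSES$ definition we were planning to recreate is roughly the one in sql.bsq (we would take the exact DDL from our own $ORACLE_HOME/rdbms/admin/sql.bsq rather than from this sketch):
    create sequence audses$ increment by 1 start with 1 maxvalue 2000000000 cycle cache 20;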
    When we stopped the database we got another ORA-600, and now it is impossible to start it!
    Here is a copy of our alert file:
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 310378496
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = +DATA/osista/controlfile/control01.ctl, +DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 184549376
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:04:30 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=8560
    DIAG started with pid=3, OS id=8562
    PSP0 started with pid=4, OS id=8566
    LMON started with pid=5, OS id=8570
    LMD0 started with pid=6, OS id=8574
    LMS0 started with pid=7, OS id=8576
    LMS1 started with pid=8, OS id=8580
    MMAN started with pid=9, OS id=8584
    DBW0 started with pid=10, OS id=8586
    LGWR started with pid=11, OS id=8588
    CKPT started with pid=12, OS id=8590
    SMON started with pid=13, OS id=8592
    RECO started with pid=14, OS id=8594
    CJQ0 started with pid=15, OS id=8596
    MMON started with pid=16, OS id=8598
    Wed Jun 13 11:04:31 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=8600
    Wed Jun 13 11:04:31 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:04:31 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:04:31 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:04:31 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:04:31 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:04:31 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:04:31 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:04:31 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=8877
    Wed Jun 13 11:04:43 2012
    alter database mount
    Wed Jun 13 11:04:43 2012
    This instance was first to mount
    Wed Jun 13 11:04:43 2012
    Starting background process ASMB
    ASMB started with pid=25, OS id=10068
    Starting background process RBAL
    RBAL started with pid=26, OS id=10072
    Wed Jun 13 11:04:47 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:04:51 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:04:52 2012
    Successful mount of redo thread 2, with mount id 3005749259
    Wed Jun 13 11:04:52 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: alter database mount
    Wed Jun 13 11:05:06 2012
    alter database open
    Wed Jun 13 11:05:06 2012
    This instance was first to open
    Wed Jun 13 11:05:06 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:05:07 2012
    Started redo scan
    Wed Jun 13 11:05:07 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:05:07 2012
    Started redo application at
    Thread 1: logseq 7924, block 3, scn 506098125
    Wed Jun 13 11:05:07 2012
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 7924 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_2.372.742132543
    Wed Jun 13 11:05:07 2012
    Completed redo application
    Wed Jun 13 11:05:07 2012
    Completed crash recovery at
    Thread 1: logseq 7924, block 64, scn 506118186
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Switch log for thread 1 to sequence 7925
    Picked broadcast on commit scheme to generate SCNs
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Thread 1 advanced to log sequence 7926
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Thread 1 advanced to log sequence 7927
    Wed Jun 13 11:05:11 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=31, OS id=12747
    Wed Jun 13 11:05:11 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=32, OS id=12749
    Wed Jun 13 11:05:12 2012
    Thread 2 opened at log sequence 7176
    Current log# 4 seq# 7176 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Wed Jun 13 11:05:12 2012
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:05:12 2012
    Successful open of redo thread 2
    Wed Jun 13 11:05:12 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:05:12 2012
    ARC0: Becoming the heartbeat ARCH
    Wed Jun 13 11:05:12 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:05:15 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:05:15 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:05:15 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:05:16 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:05:16 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 9174
    ORA-1092 signalled during: alter database open...
    Wed Jun 13 11:06:16 2012
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 314572800
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = +DATA/osista/controlfile/control01.ctl, +DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 180355072
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:06:16 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=18682
    DIAG started with pid=3, OS id=18684
    PSP0 started with pid=4, OS id=18695
    LMON started with pid=5, OS id=18704
    LMD0 started with pid=6, OS id=18721
    LMS0 started with pid=7, OS id=18735
    LMS1 started with pid=8, OS id=18753
    MMAN started with pid=9, OS id=18767
    DBW0 started with pid=10, OS id=18788
    LGWR started with pid=11, OS id=18796
    CKPT started with pid=12, OS id=18799
    SMON started with pid=13, OS id=18801
    RECO started with pid=14, OS id=18803
    CJQ0 started with pid=15, OS id=18805
    MMON started with pid=16, OS id=18807
    Wed Jun 13 11:06:17 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=18809
    Wed Jun 13 11:06:17 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:06:17 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:06:17 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=18816
    Wed Jun 13 11:06:18 2012
    ALTER DATABASE MOUNT
    Wed Jun 13 11:06:18 2012
    This instance was first to mount
    Wed Jun 13 11:06:18 2012
    Reconfiguration started (old inc 2, new inc 4)
    List of nodes:
    0 1
    Wed Jun 13 11:06:18 2012
    Starting background process ASMB
    Wed Jun 13 11:06:18 2012
    Global Resource Directory frozen
    Communication channels reestablished
    ASMB started with pid=22, OS id=18913
    Starting background process RBAL
    * domain 0 valid = 0 according to instance 0
    Wed Jun 13 11:06:18 2012
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    RBAL started with pid=23, OS id=18917
    Reconfiguration complete
    Wed Jun 13 11:06:22 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:06:26 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:06:26 2012
    Successful mount of redo thread 2, with mount id 3005703530
    Wed Jun 13 11:06:26 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: ALTER DATABASE MOUNT
    Wed Jun 13 11:06:27 2012
    ALTER DATABASE OPEN
    This instance was first to open
    Wed Jun 13 11:06:27 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:06:27 2012
    Started redo scan
    Wed Jun 13 11:06:27 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:06:28 2012
    Started redo application at
    Thread 2: logseq 7176, block 3
    Wed Jun 13 11:06:28 2012
    Recovery of Online Redo Log: Thread 2 Group 4 Seq 7176 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Wed Jun 13 11:06:28 2012
    Completed redo application
    Wed Jun 13 11:06:28 2012
    Completed crash recovery at
    Thread 2: logseq 7176, block 64, scn 506138248
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Picked broadcast on commit scheme to generate SCNs
    Wed Jun 13 11:06:28 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=28, OS id=19692
    Wed Jun 13 11:06:28 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=29, OS id=19695
    Wed Jun 13 11:06:28 2012
    Thread 2 advanced to log sequence 7177
    Thread 2 opened at log sequence 7177
    Current log# 3 seq# 7177 mem# 0: +DATA/osista/onlinelog/group_3.291.742134597
    Successful open of redo thread 2
    Wed Jun 13 11:06:28 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:06:28 2012
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:06:28 2012
    ARC1: Becoming the heartbeat ARCH
    Wed Jun 13 11:06:28 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:06:28 2012
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Wed Jun 13 11:06:31 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:06:31 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:06:31 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:06:31 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:06:32 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 19596
    ORA-1092 signalled during: ALTER DATABASE OPEN...
    Wed Jun 13 11:11:35 2012
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 318767104
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = +DATA/osista/controlfile/control01.ctl, +DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 176160768
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:11:35 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=16101
    DIAG started with pid=3, OS id=16103
    PSP0 started with pid=4, OS id=16105
    LMON started with pid=5, OS id=16107
    LMD0 started with pid=6, OS id=16110
    LMS0 started with pid=7, OS id=16112
    LMS1 started with pid=8, OS id=16116
    MMAN started with pid=9, OS id=16120
    DBW0 started with pid=10, OS id=16132
    LGWR started with pid=11, OS id=16148
    CKPT started with pid=12, OS id=16169
    SMON started with pid=13, OS id=16185
    RECO started with pid=14, OS id=16203
    CJQ0 started with pid=15, OS id=16219
    MMON started with pid=16, OS id=16227
    Wed Jun 13 11:11:36 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=16229
    Wed Jun 13 11:11:36 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:11:36 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:11:36 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:11:36 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:11:36 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:11:36 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:11:36 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:11:36 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=16235
    Wed Jun 13 11:11:37 2012
    ALTER DATABASE MOUNT
    Wed Jun 13 11:11:37 2012
    This instance was first to mount
    Wed Jun 13 11:11:37 2012
    Starting background process ASMB
    ASMB started with pid=22, OS id=16343
    Starting background process RBAL
    RBAL started with pid=23, OS id=16347
    Wed Jun 13 11:11:44 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:11:49 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:11:49 2012
    Successful mount of redo thread 2, with mount id 3005745065
    Wed Jun 13 11:11:49 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: ALTER DATABASE MOUNT
    Wed Jun 13 11:22:25 2012
    alter database open
    This instance was first to open
    Wed Jun 13 11:22:26 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:22:26 2012
    Started redo scan
    Wed Jun 13 11:22:26 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:22:26 2012
    Started redo application at
    Thread 1: logseq 7927, block 3
    Wed Jun 13 11:22:26 2012
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 7927 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_1.283.742132543
    Wed Jun 13 11:22:26 2012
    Completed redo application
    Wed Jun 13 11:22:26 2012
    Completed crash recovery at
    Thread 1: logseq 7927, block 64, scn 506178382
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Switch log for thread 1 to sequence 7928
    Picked broadcast on commit scheme to generate SCNs
    Wed Jun 13 11:22:27 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=31, OS id=13010
    Wed Jun 13 11:22:27 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=32, OS id=13033
    Wed Jun 13 11:22:27 2012
    Thread 2 opened at log sequence 7178
    Current log# 4 seq# 7178 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Successful open of redo thread 2
    Wed Jun 13 11:22:27 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:22:27 2012
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:22:27 2012
    ARC1: Becoming the heartbeat ARCH
    Wed Jun 13 11:22:27 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:22:30 2012
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Wed Jun 13 11:22:31 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:22:31 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:22:32 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:22:32 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:22:33 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 11751
    ORA-1092 signalled during: alter database open...
    regards,

    Hi;
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
    Did you check the trace file?
    ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
    You are getting an Oracle internal error (ORA-600), which means you may need to work with the Oracle Support team. Please see the note below; if it does not help, I suggest logging an SR:
    Troubleshoot an ORA-600 or ORA-7445 Error Using the Error Lookup Tool [ID 153788.1]
    For your future RAC issues, please use Forum Home » High Availability » RAC, ASM & Clusterware Installation, which is the dedicated RAC forum.
    Regards
    Helios

  • Multiple RAC databases on same GI using different subnets for Public i/face

    Hello. We are configuring a 2 node cluster. That cluster will host several RAC databases. For security reasons our networking team want to create separate subnets for the application traffic to each specific RAC database on the cluster.
    E.g. application 1 has 2 application servers that will connect to RAC database PROD1 via one subnet, application 2 has 3 application servers that will connect to RAC database PROD2 via a different subnet, etc.
    In addition the networking team want to configure a separate management subnet that DBAs etc. will use to administer all RAC databases and infrastructure in the cluster.
    Grid Infrastructure version 11.2.0.2. Database versions will vary from 10.2.0.x to 11.2.0.2. All databases will utilise RAC.
    We want to take advantage of SCAN listener functionality to support connectivity to all databases on the cluster. Forum thread 2199620 [https://cn.forums.oracle.com/forums/thread.jspa?threadID=2199620] suggests that 11gR2 supports multiple subnets, which looks to be exactly the feature we need. Please can you confirm how this works and point us to any documentation (standard docs, white papers, MOS, etc.) that might help us configure this.
    Document referenced in thread 2199620 was not exactly what we were looking for, and didn't translate too well in Google Translate.
    Any guidance much appreciated. Thanks, Rich.
    Similar threads:
    https://cn.forums.oracle.com/forums/thread.jspa?messageID=9846298? (Dual SCAN on multi homed cluster)
    https://cn.forums.oracle.com/forums/thread.jspa?threadID=2199620 (scan listener in OAM VLAN)
    Edited by: 887449 on 26-Sep-2011 01:41

    Thanks Levi. Your advice is very much appreciated.
    Your statement that we can only have one SCAN listener listening on one public network is actually the clarification I was looking for.
    For anyone else reading this thread I believe this gives us 3 options:
    1) Configure a SCAN listener and have all applications, and all management/administration, connecting to the corresponding database on the same cluster via that SCAN listener, all on the same subnet.
    2) Configure a SCAN listener for use by all applications connecting to the corresponding databases on the same cluster, and use TNSNAMES/VIP for management/administration traffic, the two on separate subnets (by configuring the LISTENER_NETWORKS parameter; see the sketch after this list).
    3) Configure a SCAN listener for use by applications connecting to one of the databases on the cluster via one subnet, use TNSNAMES/VIP for all other applications connecting to other databases, each using their own subnet. Plus, the management/administration could be via another subnet utilising TNSNAMES/VIP.
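    For options 2 and 3, the piece that ties each network's listeners together on 11.2 is the LISTENER_NETWORKS parameter. A minimal sketch, with made-up listener and SCAN names purely for illustration (check the 11.2 Reference for the exact syntax, and note that the pre-11.2 databases on the cluster do not have this parameter):
    ALTER SYSTEM SET LISTENER_NETWORKS=
      '((NAME=appnet1)(LOCAL_LISTENER=LISTENER_APP1)(REMOTE_LISTENER=scan-app1:1521))',
      '((NAME=mgmtnet)(LOCAL_LISTENER=LISTENER_MGMT))'
      SCOPE=BOTH SID='*';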
    From our perspective we will work out the best one for us and implement accordingly.
    Thanks again for your timely and comprehensive response.

  • Oracle 11.1.0.6 - Can more than one database work off the same installation

    I am in the process of setting up 3 production 11.1.0.6 databases in a RAC Linux RH4 environment (approx. 40 GB each in size). The question is: can I get all 3 databases to work off the one Oracle Database installation, i.e. can all three separate databases work off the same set of binaries and from the same ORACLE_HOME?
    As far as I know, the only items that will differ are the values in the tnsnames and listener files.
    Thanks

    Let's back up and clarify terms.
    A 3 node RAC cluster has one (1) and only one (1) database. Three (3) instances but only one (1) database.
    If you are talking about something else you need to be very clear in your description.
    Assuming one database and three instances, the listener and tnsnames files should be identical. And, as Oracle creates all of this during DBCA installation, why is it a concern?
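    A quick way to see the distinction on a running RAC system (a minimal sketch using the standard dictionary views):
    select name, dbid from v$database;                          -- one row: the single database
    select inst_id, instance_name, host_name from gv$instance; -- one row per running instance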

  • Configured TNS in oracle client to RAC database

    Hi Experts,
    I have a 4-node Oracle RAC on Red Hat 5.0;
    both the Oracle database and the client are 10.2.0.4.
    I created a TNS entry that works in our office, but it does not work in our remote branch office.
    Following the experts' instructions, I set up a SQL*Net trace on the client side, but no trace file is generated.
    On the Linux RAC database, I see entries in the listener.log file such as:
    28-AUG-2009 11:00:01 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=sale)(INSTANCE_NAME=sale1)(CID=(PROGRAM=D:\oracle\product\10.2.0\client_1\BIN\sqlplusw.exe)(HOST=TRAC3)(USER=JIM))) * (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.18.10.xx)(PORT=3289)) * establish * sale * 0
    In the client PC's sqlnet.ora my settings are:
    SQLNET.AUTHENTICATION_SERVICES= (NTS)
    TRACE_LEVEL_CLIENT = 16
    TRACE_FILE_CLIENT = client
    TRACE_DIRECTORY_CLIENT = C:\TEMPODBC
    TRACE_TIMESTAMP_CLIENT = ON
    SQLNET.EXPIRE_TIME =10
    SQLNET.INBOUND_CONNECT_TIMEOUT = 500
    SQLNET.SEND_TIMEOUT = 500
    SQLNET.RECV_TIMEOUT = 500
    I get an error:
    ERROR:
    ORA-03135: connection lost contact
    What is wrong in my case?
    I can connect to all other Windows databases from this PC, but I cannot connect to the RAC database from this PC.
    However, I can ping the RAC nodes' IPs and VIPs from this PC.
    I can also connect to the RAC database with this TNS entry from other Windows PCs.
    What do I need to do?
    Thanks for any help.
    JIM

    Hi Sb92075,
    Thanks for your help.
    I get the same error after commenting out SQLNET.AUTHENTICATION_SERVICES= (NTS) in the client sqlnet.ora.
    The server side does not have a sqlnet.ora file on the Linux platform.
    I also checked the listener services; all instances show status READY.
    I did see a sqlnet.log file on the server side with this error:
    Fatal NI connect error 12170.
    VERSION INFORMATION:
    TNS for Linux: Version 10.2.0.4.0 - Production
    Oracle Bequeath NT Protocol Adapter for Linux: Version 10.2.0.4.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 10.2.0.4.0 - Production
    Time: 28-AUG-2009 13:21:06
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12535
    TNS-12535: TNS:operation timed out
    ns secondary err code: 12606
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
    Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.18.xx.xx)(PORT=3513))
    What do I need to do?
    Thanks
    JiM
    Edited by: user589812 on Aug 28, 2009 10:22 AM

  • Uninstall Oracle 11gr2 RAC database in grid infrastructure

    Hi all,
    After several attempts to install my Oracle RAC database with Grid Infrastructure, I now want to do a fresh installation; I have attempted it 3 times and now have the whole procedure for installing the database and RAC.
    Actually I have installed it correctly, but now I want to clean up my servers, remove all Oracle installation directories and do a fresh installation.
    My question is: what is the procedure to uninstall an Oracle RAC database and Clusterware with Grid Infrastructure and clean up the Oracle base installation?
    The architecture is:
    GRID and clusterware: Oracle grid 11gR2
    Database: Oracle database 11gR2
    Database and grid storage: ASM
    OS: linux centos 6
    Thank you.
    Raluce.

    The deinstallation of Oracle GI may not be an easy thing to do, because it contains many components one should be aware of. A proper deinstall is important because it will save you from many issues with the next install on these servers.
    In general we need to be sure that:
    1. all software is stopped properly
    2. removed from oraInventory
    3. binaries removed
    4. /etc/oracle cleared
    5. ocr and votes cleared using dd
    6. /etc/oratab updated
    7. .profile updated
    8. init.d files in /etc cleared
    Usually it's recommended to use the deconfigure scripts; if they fail for some reason, the manual procedure should be followed.
    How to Deconfigure/Reconfigure (Rebuild OCR) or Deinstall Grid Infrastructure [ID 1377349.1]
    How to Deinstall Oracle Clusterware Home Manually [ID 1364419.1]
    As a general recommendation, it's a good idea to save your CRS configuration for future reference.
    Regards
    Ed Rudans
    http://erudans.blogspot.com

  • Another RMAN duplicate problem - RAC database to single instance

    Hi,
    I have a problem with the RMAN duplicate procedure and was hoping someone can help.
    I would like to create a duplicate of our production RAC database on a separate, stand-alone, database server on another site. This duplicate will be used for intensive querying by another business unit who I don't want to have access to our production database.
    My procedure goes like this:
    1. Create a disk (not ASM) based backup of the datafiles and any archived redo logs:
    "run {
    allocate channel d1 type disk;
    backup format '/u02/stage/df_t%t_s%s_p%p' database plus archivelog delete input;
    release channel d1;
    2. Tar and scp these files to the same location on the stand-alone database server.
    3. In the meantime, work has been happening on the production database and further archived redo logs have been generated. I don't really care about these logs for the purposes of the duplicate, however; I just want to duplicate up to the point of the recent backup. To do this, I run the following SQL to determine the sequence number that I should be duplicating up to:
    "select max(sequence#), thread# from v$archived_log where deleted='YES' group by thread#;"
    4. Duplicate the production database to the stand-alone database (all the SQL Net stuff is working).
    "run {
    set until sequence (value returned by above SQL statement);
    duplicate target database to XXX;
    However, my problem arises because I don't know how to handle the fact that there are two threads. I understand that each thread relates to one of the RAC instances; I just don't know which one to specify for the duplicate. We have a database service which the client application connects through, and that service runs on one or other of the instances. Should I just care about the logs from the instance where the service is currently running?
    Am I even approaching this in the correct way?
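    One variation I have been considering, rather than picking a single thread's sequence, is to set the duplicate to an SCN that both threads have reached; roughly this (just a sketch, assuming v$archived_log accurately reflects what was backed up and deleted):
    select min(max_next_change) as until_scn
    from (select thread#, max(next_change#) as max_next_change
          from   v$archived_log
          where  deleted = 'YES'
          group  by thread#);
    and then use SET UNTIL SCN with that value instead of SET UNTIL SEQUENCE. I am not sure whether that is the right approach either.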
    I look forward to any help that people may be able to offer.
    Regards,
    Phil

    Hi Werner,
    Thanks again for your help, there is still something wrong though. "list backup of archivelog all;" shows:
    BS Key Size Device Type Elapsed Time Completion Time
    3784 202.34M DISK 00:00:08 28-OCT-09
    BP Key: 3784 Status: AVAILABLE Compressed: NO Tag: TAG20091028T111718
    Piece Name: /u02/stage/df_t701435838_s3820_p1
    List of Archived Logs in backup set 3784
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 9746 569095777 28-OCT-09 569150229 28-OCT-09
    1 9747 569150229 28-OCT-09 569187892 28-OCT-09
    1 9748 569187892 28-OCT-09 569231956 28-OCT-09
    1 9749 569231956 28-OCT-09 569259816 28-OCT-09
    2 7931 569095774 28-OCT-09 569187902 28-OCT-09
    2 7932 569187902 28-OCT-09 569259814 28-OCT-09
    BS Key Size Device Type Elapsed Time Completion Time
    3787 1.04M DISK 00:00:02 28-OCT-09
    BP Key: 3787 Status: AVAILABLE Compressed: NO Tag: TAG20091028T112222
    Piece Name: /u02/stage/df_t701436142_s3823_p1
    List of Archived Logs in backup set 3787
    Thrd Seq Low SCN Low Time Next SCN Next Time
    1 9750 569259816 28-OCT-09 569261110 28-OCT-09
    2 7933 569259814 28-OCT-09 569261108 28-OCT-09
    You can see that the highest sequence number is 9750 for thread 1, and that its Low and Next SCNs are 569259816 and 569261110. However, when I look at the output of the RMAN duplicate command:
    contents of Memory Script:
    set until scn 569505448;
    recover
    clone database
    delete archivelog
    executing Memory Script
    executing command: SET until clause
    Starting recover at 28-OCT-09
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=39 devtype=DISK
    starting media recovery
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 4 needs more recovery to be consistent
    ORA-01110: data file 4: '/u02/sca-standby/data/users.260.623418479'
    RMAN-03002: failure of Duplicate Db command at 10/28/2009 16:12:55
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of log thread 2 seq 7936 lowscn 569411744 found to restore
    RMAN-06025: no backup of log thread 2 seq 7935 lowscn 569321987 found to restore
    RMAN-06025: no backup of log thread 2 seq 7934 lowscn 569261108 found to restore
    RMAN-06025: no backup of log thread 1 seq 9758 lowscn 569471890 found to restore
    RMAN-06025: no backup of log thread 1 seq 9757 lowscn 569440076 found to restore
    RMAN-06025: no backup of log thread 1 seq 9756 lowscn 569411439 found to restore
    RMAN-06025: no backup of log thread 1 seq 9755 lowscn 569378529 found to restore
    RMAN-06025: no backup of log thread 1 seq 9754 lowscn 569358970 found to restore
    RMAN-06025: no backup of log thread 1 seq 9753 lowscn 569321882 found to restore
    RMAN-06025: no backup of log thread 1 seq 9752 lowscn 569284238 found to restore
    RMAN-06025: no backup of log thread 1 seq 9751 lowscn 569261110 found to restore
    you can see that something is setting the recovery SCN to 569505448, which is higher even than any of the archived logs mentioned above. If I select current_scn from the production database, it gives me 569528258, which is closer to the value RMAN is expecting to recover to than any of the archived redo logs.
    Can you think what might be causing RMAN to try to recover to this value? and why does it appear to be ignoring the SET UNTIL SEQUENCE command?
    Cheers,
    Phil

  • Create RAC Database Connection in Sql Developer

    Hi All,
    Can you please let me know how to create a new connection for a RAC database? When we use the SCAN IP and service name, it always displays "the network adapter could not establish the connection", even though the listener and database are up and running. Only after repeatedly trying, at least 20 times, will it connect to the database. There are no issues while connecting to a single-instance database.
    Thanks & Regards,
    Krishna

    Hi Krishna,
    You don't give any details about version numbers but, in general, the Basic connection type should work -- there is no need to use TNSNames. If you can connect to the RAC system via SQL*Plus with an EZConnect string, one would hope the same host/port/service works in SQL Developer for a Basic connection type. But that may be just a hope, I suppose, depending on your overall configuration for SCAN. Or perhaps you pass the full TNS descriptor to connect to SQL*Plus?
    In the case of a long thin RAC JDBC URL, presumably you mean including the full TNS descriptor in the URL. I really did not find very much on the forums regarding SCAN, but searching more widely on the internet I note someone resolving a JDBC-specific issue (where SQL*Plus connects just fine) with comments like this: "The host name in the connection string should be the same as the init.ora parameter remote_listener and it should also match the SCAN name. We should not include any domain names with the SCAN name, remote_listener, or the HOST setting in the connection string." So apparently you have to be very careful with the URL specification.
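    One quick sanity check along those lines is to compare what the instances are actually configured to register with (a small sketch against the standard GV$ view; adjust as needed):
    select inst_id, name, value
    from   gv$parameter
    where  name in ('local_listener', 'remote_listener', 'service_names');
    If the remote_listener value does not match the SCAN name and port used in the connection string, that would be the first thing to reconcile.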
    Hope this helps,
    Gary

  • How to stop and start RAC Database

    Hi,
    Can anybody tell me how to stop and start a RAC database?
    Please give me the exact procedure for OS: IBM AIX 5.3.
    Rajkumar

    Burleson,
    It appears, from the numerous times you have published incorrect, untested information and non-working scripts, that you are not an Oracle guru,
    but only a self-proclaimed one.
    It also appears nothing on your 'CV' can be easily verified.
    You state you are an 'adjunct professor emeritus', however anyone who was kicked out after a few months can call himself a 'professor emeritus'.
    Looking at the enormous amount of insults and slander you have posted, directed at me and others on this forum, showing a highly unprofessional attitude toward your peers and violating this forum's rules of conduct, don't you think you deserve to be banned completely?
    After all, you are always promoting your own publications here, which is also in violation of the rules for this forum.
    What makes you think that "Sybrand" is a man's name? How many male Sybrand's do you know? I don't know any, not one.
    That's your problem. I'm as much male as you are a fat, ugly, self-proclaimed Oracle guru, which you are clearly not; you are only a fraud, and this glares from all of your publications.
    Sybrand Bakker
    Senior Oracle DBA

  • Data pump export full RAC database  in window single DB by network_link

    Hi Experts,
    I have a Windows 32-bit 10.2 database.
    I am trying to export a full RAC database (350 GB, same version as the Windows DB) into the Windows single-instance database over a database link.
    The expdp syntax is:
    expdp salemanager/********@sale FULL=y DIRECTORY=dataload NETWORK_LINK=sale.net DUMPFILE=sale20100203.dmp LOGFILE=salelog20100203.log
    I created a database link fixed to instance 3. It was working for two days and then displayed this message:
    ORA-31693: Table data object "SALE_AUDIT"."AU_ITEM_IN" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEPOPULATE callout
    ORA-01555: snapshot too old: rollback segment number with name "" too small
    ORA-02063: preceding line from sale.net
    I stopped the export and checked the Windows target alert log.
    I saw some message as
    kupprdp: master process DM00 started with pid=16, OS id=4444
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_02', 'SYSTEM', 'KUPC$C_1_20100202235235', 'KUPC$S_1_20100202235235', 0);
    Tue Feb 02 23:56:12 2010
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=17, OS id=4024
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SALE', 'KUPC$C_1_20100202235612', 'KUPC$S_1_20100202235612', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=18, OS id=2188
    to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_FULL_01', 'SALE');
    In the RAC instance alert.log I saw messages such as:
    SELECT /*+ NO_PARALLEL ("KU$") */ "ID","RAW_DATA","TRANSM_ID","RECEIVED_UTC_DATE","RECEIVED_FROM","ACTION","ORAUSER",
    "ORADATE" FROM RELATIONAL("SALE_AUDIT"."AU_ITEM_IN") "KU$"
    How do I fix this error?
    Should I add more undo tablespace space in RAC instance 3 or in the Windows database?
    Thanks
    Jim
    Edited by: user589812 on Feb 4, 2010 10:15 AM

    I usually increase undo space. Is your undo retention set smaller than the time it takes to run the job? If it is, I would think you would need to increase it. If not, then I would think it would be the space. You were in the process of exporting data when the job failed, which is what I would have expected. Basically, Data Pump wants to export each table consistent with itself. Let's say that one of your tables is partitioned and it has a large partition and a smaller partition. Data Pump attempts to export the larger partitions first and it remembers the SCN for that partition. When the smaller partitions are exported, it will use that SCN to get the data from each partition as it would have looked when the first partition was exported. If you don't have partitioned tables, then do you know if some of the tables in the export job (I know it's a full export, so that includes just about all of them) are having data added to them or removed? I can't think of anything else that would need undo while exporting data.
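    As a rough way to check whether retention or undo space is the limiting factor on the source RAC instances, something like this (a sketch using the standard views; the UNDOTBS% pattern assumes the default undo tablespace naming):
    select inst_id, name, value
    from   gv$parameter
    where  name in ('undo_retention', 'undo_tablespace');
    select tablespace_name, round(sum(bytes)/1024/1024) as mb
    from   dba_data_files
    where  tablespace_name like 'UNDOTBS%'
    group  by tablespace_name;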
    Dean

  • Separate table and index data in RAC database

    Hi Experts,
    Our database is an Oracle 11g RAC database. I need your expertise on this:
    Do we need to keep table and index data in two different tablespaces from a performance perspective in a RAC database too?
    Please share your practical experience…Thanks in advance.
    Regards
    Richard

    g777 wrote:
    In my opinion, if there is striping implemented then performance shouldn't degrade even if the index and table blocks are in one tablespace.
    Exactly. Striping is NOT a good idea at the tablespace level, as a tablespace is a logical storage device. It is very difficult to stripe comprehensively/correctly at that level, if not impossible.
    Striping is a function of the actual storage system and needs to happen at the physical level: a proper RAID 0 implementation.
    So the question about multiple tablespaces for a performance increase should not be about striping - but about issues such as data management, block sizes, transportable tablespaces and so on.
    Thus my question (to the OP): what performance problems are expected, and are these relevant to the number of tablespaces?
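    If it helps to see how tables and indexes are currently spread across tablespaces before deciding, a small sketch (non-partitioned segments only; extend the segment_type list if you use partitioning):
    select tablespace_name, segment_type, count(*) as segments,
           round(sum(bytes)/1024/1024) as mb
    from   dba_segments
    where  segment_type in ('TABLE', 'INDEX')
    group  by tablespace_name, segment_type
    order  by tablespace_name, segment_type;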

  • RAC database patchset 10.2.0.5 on windows

    HI,
    I have two RAC environments hosting 13 production databases, with version 10.2.0.4 of the database, Clusterware and ASM on Windows Server 2008 64-bit.
    Now I need to apply patchset 10.2.0.5 to ONLY one database, due to a bug. Instead of applying the patchset to all RAC databases, I am planning the upgrade for only the one affected database. I need clarification on the points below:
    1. I am planning to apply patchset 10.2.0.5 on clusterware and ASM only.
    2. Planning to install 10.2.0.5 on new oracle home.
    3. Planning to upgrade one affected database from 10.2.0.4 to 10.2.0.5.
    My questions:
    1. Will the above plan work for only the one database?
    2. Will Clusterware and ASM (10.2.0.5) support databases in both ORACLE_HOMEs (10.2.0.4 and 10.2.0.5)?
    3. How can we set up Clusterware for the databases in this new ORACLE_HOME (10.2.0.5)?
    If anyone has a plan or suggestion, please share.

    Hi,
    Yes, it is possible to have multiple ORACLE_HOMEs under one Oracle Grid Infrastructure. Check My Oracle Support for the supported combinations and compatibility.
    1. Check compatibility of the components and design your listener setup.
    2. Upgrade ASM and Clusterware to 10.2.0.5.
    3. Install the new Oracle software tree, 10.2.0.5.
    4. Upgrade the one database to 10.2.0.5.
    Keep in mind that the Oracle Cluster Registry entry for this database has to change.
    Cheers,
    Jos van den Oord
    Blog : http://joordsblog.vandenoord.eu/
    Company : http://www.transfer-solutions.com/
    "Don't believe it, test it!"
