Streams RAC / non-RAC

Hi,
Before I do more analysis, I wanted to get some feedback on the Streams setup below.
Source: Oracle 9.2.0.4 on Solaris, non-RAC
Target: Oracle 9.2.0.4 on Linux, RAC
Does anyone have working experience with this setup?
Thanks,

Similar Messages

  • RAC - NON RAC ebiz clone

    The title says it all... We are running eBusiness Suite 11.5.10.2 in Production on RAC 10gR2 (Red Hat Linux, split application and database tiers). Is there a tried, tested and reliable way to clone from the Production RAC environment to a development environment which is non-RAC?
    Thanks in advance
    IH

    That scenario is covered in the rapid clone doc:
    Cloning Oracle Applications Release 11i with Rapid Clone
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1
    Note: To clone from RAC to non-RAC, follow the same above steps but copy the master ORACLE_HOME to one target node only, and answer "No" to the question "Target instance is a Real Application Cluster (RAC) instance (y/n)", when prompted by adcfgclone.pl.
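    For reference, a minimal sketch of the database-tier invocation where that prompt appears (paths and prompts vary by release; verify against the Rapid Clone note above):
    cd $ORACLE_HOME/appsutil/clone/bin
    perl adcfgclone.pl dbTier
    # answer "No" when asked: Target instance is a Real Application Cluster (RAC) instance (y/n)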

  • RAC to NON-RAC Cloning in R 12 using Rapid Clone

    Hi,
    We are using EBS 12.0.4 with DB version Oracle Database 10g Enterprise Edition Release 10.2.0.4.0.
    Initially, RAC to non-RAC cloning was not supported using Rapid Clone for this. But as of Oct 19, Oracle has certified it.
    Can anybody point me to the document that contains the step-by-step process for RAC to non-RAC cloning using Rapid Clone? I can't find it.
    Regards,
    Susmit

    Hi,
    Please check the previous threads below:
    Cloning RAC to non RAC
    RAC -> NON RAC ebiz clone
    Re: Clone Oracle Apps 11.5.10.2 RacDB to Non-RAC DB
    Regards,
    Helios

  • Can a non-RAC database use SCAN ?

    I have a 4-node 11gR2 cluster which hosts all different kinds of versions (10g, 11gR1 & 11gR2), both RAC and non-RAC.
    I know for sure that I can use SCAN for non-11gR2 RAC databases.
    For standalone DBs it seems to work intermittently, but is this supported? The reason I ask is that there is no documentation on how to configure or troubleshoot SCAN for non-RAC.
    Appreciate your help
    Ravi

    I was trying to find some information about the issue, but I think SCAN is really designed for RAC databases.
    You don't really need to use the SCAN for what you just described:
    you can update the client tnsnames with the VIP addresses of all nodes with FAILOVER = ON
    Example:
    (DESCRIPTION=
      (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=node3-vip)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=node4-vip)(PORT=1521))
        (FAILOVER = ON)
      )
      (CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME=DB.WORLD))
    )
    However, you can try this (I never tried it, but I think it should work):
    - See the document http://download.oracle.com/docs/cd/E11882_01/install.112/e10813/undrstnd.htm#BEICFAIC
      "... If you do not set LOCAL_LISTENER, then the Database Agent automatically keeps the database associated with the Grid home's node listener updated..."
    So don't set the LOCAL_LISTENER in your db.
    - Update the client tnsnames with the SCAN if the version of the database is 11.2.0.1
    - Try to see if it works fine
    - Move your db to another node and try again
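    For the "update the client tnsnames with the SCAN" step, a hypothetical client entry might look like this (the SCAN name cluster-scan.example.com is made up; the service name reuses the single-instance DB from the entry above):
    DB.WORLD=
      (DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=cluster-scan.example.com)(PORT=1521))
        (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=DB.WORLD))
      )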

  • Configuring multiple listeners in a non-RAC env.

    Hi,
    Our database receives as many as 35 connection requests per second from the application during peak hours. This is causing the listener to die. Metalink suggested configuring more listeners so that they can balance the load. When I tried to configure 2 listeners on the local server, one of the listeners is not working properly.
    I am posting the listener.ora and the listener statuses along with the error messages here. Can someone help me understand the problem?
    Listener.ora
    LISTENER2 =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbdev.website.com)(PORT = 1522))
        )
      )
    SID_LIST_LISTENER2 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = TESTDB)
          (ORACLE_HOME = /u02/app/oracle)
          (PROGRAM = extproc)
        )
      )
    LISTENER1 =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = dbdev.website.com)(PORT = 1521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        )
      )
    SID_LIST_LISTENER1 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /u02/app/oracle)
          (PROGRAM = extproc)
        )
      )
    lsnrctl status <listener_name>
    oracle@ DBDEV $ lsnrctl status LISTENER1
    LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 02-AUG-2009 20:08:03
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbdev.website.com)(PORT=1521)))
    STATUS of the LISTENER
    Alias                     LISTENER1
    Version                   TNSLSNR for Linux: Version 10.2.0.3.0 - Production
    Start Date                29-JUL-2009 22:28:31
    Uptime                    3 days 21 hr. 39 min. 32 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u02/app/oracle/network/admin/listener.ora
    Listener Log File         /u02/app/oracle/network/log/arogya1.log
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbdev.website.com)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC0)))
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
      Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "TESTDB" has 1 instance(s).
      Instance "TESTDB", status READY, has 1 handler(s) for this service...
    Service "TESTDB_XPT" has 1 instance(s).
      Instance "TESTDB", status READY, has 1 handler(s) for this service...
    The command completed successfully
    oracle@ DBDEV$ lsnrctl status LISTENER2
    LSNRCTL for Linux: Version 10.2.0.3.0 - Production on 02-AUG-2009 20:08:07
    Copyright (c) 1991, 2006, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbdev.website.com)(PORT=1522)))
    STATUS of the LISTENER
    Alias                     LISTENER2
    Version                   TNSLSNR for Linux: Version 10.2.0.3.0 - Production
    Start Date                31-JUL-2009 02:37:10
    Uptime                    2 days 17 hr. 30 min. 56 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u02/app/oracle/network/admin/listener.ora
    Listener Log File         /u02/app/oracle/network/log/arogya2.log
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbdev.website.com)(PORT=1522)))
    Services Summary...
    Service "TESTDB" has 1 instance(s).
      Instance "TESTDB", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    ps -ef | grep tnslsnr
    oracle@ DBDEV $ ps -ef | grep tnslsnr
    oracle    7951     1  0 Jul29 ?        00:00:00 /u02/app/oracle/bin/tnslsnr LISTENER1 -inherit
    oracle   18080     1  0 Jul31 ?        00:00:00 /u02/app/oracle/bin/tnslsnr LISTENER2 -inherit
    Client message
    C:\Oracle\product\10.1.0\Client_1\BIN>sqlplus system/manager@asridb1
    SQL*Plus: Release 10.1.0.2.0 - Production on Mon Aug 3 09:32:18 2009
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL>
    SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    C:\Oracle\product\10.1.0\Client_1\BIN>sqlplus system/manager@asridb2
    SQL*Plus: Release 10.1.0.2.0 - Production on Mon Aug 3 09:32:25 2009
    Copyright (c) 1982, 2004, Oracle.  All rights reserved.
    ERROR:
    ORA-28547: connection to server failed, probable Net8 admin error
    Enter user-name:
    Thanks,
    Aswin.

    ice_cold_aswin wrote:
    "But shared server is not always better than a dedicated server."
    I did not say it was - only that if you're expecting loads of brand new connection requests to be serviced fast by the Listener, then shared server will be significantly faster than dedicated server.
    "Frequency of connections may be handled better by the use of shared server, but how about holding thousands and thousands of connections and serving all their requests by using some preemptive mechanism?"
    Many years ago, I configured an Oracle 8i instance on my desktop PC with shared server and told the Java guys to try their worst - see how many threads they could open against my desktop PC instance versus the big HP server instance. The latter was using dedicated server, and they were kind of thinking that this was somehow better than shared server. They could open fewer than a few hundred connections to the big server. They opened over a thousand connections to my desktop instance.
    Shared Server is not, and never was, a poor second choice to Dedicated Server as some seem to suggest. It is extremely capable and ideally suited when dealing with a huge number of connections.
    "In our application, the number of connections would reach thousands during peak hours and a large number of REALLY BIG queries would be fired by each of them in a small time period. I would say these BIG QUERIES can be termed OLAP queries, while our mainstream application is an OLTP system. It must be apparent to you by now that our database (non-RAC) is supporting an application that issues OLAP and OLTP queries at a very high frequency."
    A big query in terms of what? Resource footprint? Source code size of the query? Execution time of the query?
    If you have 100's of OLAP type queries (complex and long running queries) hitting the server every few seconds, then your server will die. Unless you are running some seriously expensive hardware.
    That aside - it is only a sensible approach to use shared server in an environment that deals with 1000's of OLTP type connections. And if there is an OLAP mix too, then configure those connections to request dedicated server connections.
    Why? The shared server scales a lot better than dedicated server. And scalability is what you want in this type of environment.
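    To illustrate that mix, a hedged sketch (alias names are made up; the host and service are reused from the listener output above): the OLTP connection pool could use an alias that requests shared server, while the OLAP/batch jobs use one that forces dedicated server:
    OLTP.WORLD=
      (DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=dbdev.website.com)(PORT=1521))
        (CONNECT_DATA=(SERVER=SHARED)(SERVICE_NAME=TESTDB)))
    OLAP.WORLD=
      (DESCRIPTION=
        (ADDRESS=(PROTOCOL=TCP)(HOST=dbdev.website.com)(PORT=1521))
        (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TESTDB)))
    This assumes DISPATCHERS and SHARED_SERVERS are already configured on the instance; otherwise (SERVER=SHARED) connections will not find a handler.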

  • Anything special for Streams configuration in RAC?

    We are going to configure streams in a 2 node RAC environment.
    When the node on which CAPTURE or APPLY or PROPAGATE runs goes down, does it automatically fail over to an available node? What kind of extra configuration needs to be taken care of specifically for RAC to handle this scenario?

    I do run Streams on RAC and on non-RAC. The only thing that changes is that the node that processes Streams is determined by queue table ownership; also remember that the V$ Streams views will be empty on the nodes where Streams is not running.
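    A quick way to see which instance currently owns a Streams queue table (the LIKE filter is just an example; adjust it to your queue table name):
    SELECT queue_table, owner_instance, primary_instance, secondary_instance
      FROM dba_queue_tables
     WHERE queue_table LIKE 'STREAMS%';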

  • Streams Setup from RAC to Single instance

    Does anyone have a document on setting up Streams from RAC to non-RAC? I successfully set up Streams between two single instances, but here I am having issues replicating: Streams is set up on node 1 of the RAC, the apply process is set up on the single-instance node, but data is not replicating.
    Appreciate any suggestions.

    From Metalink Note 418755.1:
    Additional Configuration for RAC Environments for a Source Database
    Archive Logs
    The archive log threads from all instances must be available to any instance
    running a capture process. This is true for both local and downstream capture.
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned by
    the same instance. The Streams components (capture/propagation/apply) all
    use that same owning instance to perform their work. This means that
    + a capture process is run at the owning instance of the source queue.
    + a propagation job must run at the owning instance of the queue
    + a propagation job must connect to the owning instance of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE
    and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE.
    If the primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Capture will automatically follow the ownership of the queue. If the ownership
    changes while capture is running, capture will stop on the current instance
    and restart at the new owner instance.
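    A sketch of pinning ownership this way (the queue table name and instance numbers are hypothetical):
    exec DBMS_AQADM.ALTER_QUEUE_TABLE(queue_table => 'STRMADMIN.STREAMS_QUEUE_TABLE', primary_instance => 1, secondary_instance => 2);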
    For queues created with Oracle Database 10g Release 2, a service will be
    created with the service name= schema.queue and the network name
    SYS$schema.queue.global_name for that queue. If the global_name of the
    database does not match the db_name.db_domain name of the database, be sure
    to include the global_name as a service name in the init.ora.
    For propagations created with the Oracle Database 10g Release 2 code with
    the queue_to_queue parameter set to TRUE, the propagation job will deliver only
    to the specific queue identified. Also, the source dblink for the target
    database connect descriptor must specify the correct service (global name of
    the target database ) to connect to the target database. For example, the
    tnsnames.ora entry for the target database should include the CONNECT_DATA
    clause in the connect descriptor for the target database. This clause should
    specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')).
    Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
    For example, consider the tnsnames.ora file for a database with the global name
    db.mycompany.com. Assume that the alias name for the first instance is db1 and
    that the alias for the second instance is db2. The tnsnames.ora file for this
    database might include the following entries:
    db.mycompany.com=
    (description=
    (load_balance=on)
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)))
    db1.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db1)))
    db2.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db2)))
    Use the db.mycompany.com alias (the entry without instance_name) in the target database link USING clause.
    DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies
    all active services for the database. In non-RAC configurations, the service
    name will typically be the global_name. However, it is possible for users to
    manually create alternative services and use them in the TNS connect_data
    specification. For RAC configurations, the service will appear in these views
    as SYS$schema.queue.global_name.
    Propagation Restart
    Use the procedures START_PROPAGATION and STOP_PROPAGATION from
    DBMS_PROPAGATION_ADM to enable and disable the propagation schedule.
    These procedures automatically handle queue_to_queue propagation.
    Example:
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation'); or
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation',force=>true);
    exec DBMS_PROPAGATION_ADM.start_propagation('name_of_propagation');
    If you use the lower level DBMS_AQADM procedures to manage the propagation schedule,
    be sure to explicitly specify the destination_queue name when queue_to_queue propagation has been configured.
    Example:
    DBMS_AQADM.UNSCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.SCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    Changing the GLOBAL_NAME of the Source Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that any propagations
    are dropped and recreated with the queue_to_queue parameter set to TRUE.
    In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the
    database, include the global_name for the queue (NETWORK_NAME in DBA_QUEUES)
    in the list of services for the database in the database parameter
    initialization file.
    Section 4. Target Site Configuration
    The following recommendations apply to target databases, ie, databases in which
    Streams apply is configured.
    1. Privileges
    Grant Explicit Privileges to APPLY_USER for the user tables
    Examples:
    Privileges for table level DML: INSERT/UPDATE/DELETE,
    Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY) INDEX,
    CREATE (ANY) PROCEDURE
    2. Instantiation
    Set instantiation SCNs manually if not using export/import. If manually
    configuring the instantiation SCN for each table within the schema, use the
    RECURSIVE=>TRUE option on the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    procedure.
    For DDL, set the instantiation SCN at the next higher level (ie, SCHEMA or GLOBAL level).
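    A minimal sketch of that call (the schema, source database name and destination database link are hypothetical; the block is run at the source so the SCN is current there):
    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@target_db_link(
        source_schema_name   => 'HR',
        source_database_name => 'SRC.EXAMPLE.COM',
        instantiation_scn    => iscn,
        recursive            => TRUE);  -- also sets the SCN for each table in the schema
    END;
    /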
    3. Conflict Resolution
    If updates will be performed in multiple databases for the same shared
    object, be sure to configure conflict resolution. See the Streams
    Replication Administrator's Guide Chapter 3 Streams Conflict Resolution,
    for more detail.
    To simplify conflict resolution on tables with LOB columns, create an error
    handler to handle errors for the table. When registering the handler using
    the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the
    ASSEMBLE_LOBS parameter as TRUE.
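    A hedged sketch of such a registration (the table, handler procedure and apply names are hypothetical):
    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'HR.EMP_DOCS',
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => TRUE,
        user_procedure => 'STRMADMIN.EMP_DOCS_ERROR_HANDLER',
        apply_name     => 'APPLY_EX',
        assemble_lobs  => TRUE);
    END;
    /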
    See the Streams Concepts manual 10.2, Chapter 22 (Monitoring Apply), section
    "Displaying Detailed Information About Apply Errors".
    4. Apply Process Configuration
    A. Rules
    If the maintain_* procedures are not suitable for your environment,
    please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES,
    ADD_GLOBAL_RULES (for DML and DDL), ADD_SUBSET_RULES (DML only)).
    These procedures minimize the number of steps required to configure Streams
    processes. Also, it is possible to create rules for non-existent objects,
    so be sure to check the spelling of each object specified in a rule carefully.
    APPLY can be configured with or without a ruleset. The ADD_GLOBAL_RULES can
    be used to apply all changes in the queue for the database. If no ruleset is
    specified for the apply process, all changes in the queue are processed by the apply process.
    A single Streams apply can process rules for multiple tables or schemas
    located in a single queue that are received from a single source database .
    For best performance, rules should be simple. Rules that include LIKE clauses are
    not simple and will impact the performance of Streams.
    To eliminate changes for particular tables or objects, specify the
    include_tagged_lcr clause along with the table or object name in the
    negative rule set for the Streams process. Setting this clause will
    eliminate all changes, tagged or not, for the table or object.
    B. Parameters
    Set the following parameters after an apply process is created:
    + DISABLE_ON_ERROR=N Default: Y
    If Y, then the apply process is disabled on the first unresolved error,
    even if the error is not fatal.
    If N, then the apply process continues regardless of unresolved errors.
    + PARALLELISM = 3 * (number of CPUs) Default: 1
    Apply parameters can be set using the SET_PARAMETER procedure from the
    DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter
    of the streams apply process named APPLY_EX, use the following syntax while
    logged in as the Streams Administrator:
    exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
    Change the apply parallelism parameter recommendation to a lower number.
    In general, try 4 or 8 and increase or decrease as necessary for your workload.
    In some cases, performance can be improved by setting the following hidden
    parameter. This parameter should be set when the major workload is UPDATEs
    and the updates are performed on just a few columns of a many-column table.
    + _DYNAMIC_STMTS=Y Default: N
    If Y, then for UPDATE statements, the apply process will optimize the
    generation of SQL statements based on required columns.
    + _CHECKPOINT_FREQUENCY=1000
    Increase the frequency of logminer checkpoints especially in a
    database with significant LOB or DDL activity.
    exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
    5. Additional Configuration for RAC Environments for an Apply Database
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned
    by the same instance. The Streams components (capture/propagation/apply)
    all use that same owning instance to perform their work. This means that
    the database link specified in the propagation must connect to the owning
    instance of the target queue, and the apply process is run at the owning
    instance of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE and
    SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the
    primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Apply will automatically follow the ownership of the queue. If the ownership
    changes while apply is running, apply will stop on the current instance and
    restart at the new owner instance.
    Changing the GLOBAL_NAME of the Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that the queue is
    empty before changing the name and that the apply process is dropped and
    recreated with the apply_captured parameter = TRUE. In addition, if the
    GLOBAL_NAME does not match the db_name.db_domain of the database, include
    the GLOBAL_NAME in the list of services for the database in the database
    parameter initialization file.

  • Can I use data guard to create a RAC standby database for a non RAC primary

    Hi,
    we need to RAC our production database but the normal methods will mean a long outage. Is it possible to create a standby as a single-node RAC database and, when ready, do a graceful failover to the standby database and open it for business? The next step would be to create another RAC node from this on the original server.
    servers are already cluster aware, using ASM etc
    Oracle 10.2

    Yes, you will be able to set up a RAC standby for a non-RAC primary. The primary just needs an available destination for redo shipping; it doesn't matter whether it's RAC-enabled or not. And of course, since you are using 10.2, only one node will be running MRP anyway, and that too in standby mount mode.
    You may follow the sequence below.
    1. Setup a new standby as RAC enabled.
    2. Perform a switchover.
    3. Shutdown the Old primary (which is standby now).
    4. Install CRS and RDBMS on the old primary and it's new node.
    5. Modify cluster_database=TRUE and cluster_database_instances=<required number of instances>.
        With the above modification, mount the standby database in standby mode and start MRP.
    6. Introduce the database and instances to the OCR using the SRVCTL add commands (a sketch follows after this list).
    7. Once your database is synchronized with the primary, do a switchover.
    8. Now you can repeat steps 3 to 6 on the other site too.   <- if you need your secondary site to be RAC enabled too
    9. Finally both the sites should be RAC enabled.
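    A hedged sketch of steps 5 and 6 (the db_unique_name racdr, instance racdr1 and node name node1 are hypothetical):
    SQL> alter system set cluster_database=TRUE scope=spfile sid='*';
    SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';
    SQL> shutdown immediate
    SQL> startup mount
    SQL> alter database recover managed standby database disconnect from session;
    $ srvctl add database -d racdr -o $ORACLE_HOME
    $ srvctl add instance -d racdr -i racdr1 -n node1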
    Hope this is helpful!!!
    Thanks,
    Asif Haliyal

  • Steps to do the R12 on 11.2.0.2 RAC to Non-Rac cloning on Win2008 64bit

    Please brief the steps to do the R12 on 11.2.0.2 RAC to Non-Rac cloning on Windows 2008 64-bit.
    There is a Metalink document, but it is written for the Linux environment, not Windows. Please brief the steps if someone has done it on Windows.

    ora-appsdba wrote:
    Please brief the steps to do the R12 on 11.2.0.2 RAC to Non-Rac cloning on Windows 2008 64-bit.
    There is a Metalink document, but it is written for the Linux environment, not Windows. Please brief the steps if someone has done it on Windows.
    The steps should be the same for Windows (with minor changes, like you may need to create Oracle service before duplicating the database).
    Cloning Oracle Applications Release 12 with Rapid Clone [ID 406982.1]
    Rapid Clone Documentation Resources For Release 11i and 12 [ID 799735.1]
    Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone [ID 559518.1]
    Certified RAC Scenarios for E-Business Suite Cloning [ID 783188.1]
    Thanks,
    Hussein

  • Instance name in non-RAC environment

    Hi!
    In a non-RAC environment, V$INSTANCE.INSTANCE_NAME does not actually display the name of the instance that was set in the INSTANCE_NAME parameter.
    It always displays DB_NAME instead.
    Is there any way to get the instance_name of the service the user has connected to in this environment?
    LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 09:16:25
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
    Start Date 28-JAN-2010 09:15:36
    Uptime 0 days 0 hr. 0 min. 48 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
    Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
    Services Summary...
    Service "EMCOR" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "EMCOR_XPT" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "RESXDB" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV1" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV2" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    The command completed successfully
    And SQL*Plus says:
    C:\Documents and Settings\oradba>sqlplus
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jan 28 09:44:59 2010
    Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
    Enter user-name: emcos@emcor_srv2
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    09:45:04 EMCOS@emcor_srv2 >select name from v$database;
    NAME
    EMCOR
    Elapsed: 00:00:00.00
    09:45:07 EMCOS@emcor_srv2 >select instance_name from v$instance;
    INSTANCE_NAME
    emcor
    Elapsed: 00:00:00.01
    09:45:21 EMCOS@emcor_srv2 >select service_name from v$session where sid=(select unique sid from v$mystat);
    SERVICE_NAME
    SRV2

    Hemant K Chitale wrote:
    The documentation on INSTANCE_NAME in the 10gR2 Reference says :
    "In a single-instance database system, the instance name is usually the same as the database name."
    (this after
    "In a Real Application Clusters environment, multiple instances can be associated with a single database service. Clients can override Oracle's connection load balancing by specifying a particular instance by which to connect to the database. INSTANCE_NAME specifies the unique name of this instance.")
    This would imply that setting INSTANCE_NAME in non-RAC is ignored. The usage of the word "usually" is weak.
    Hemant K Chitale
    But what does lsnrctl say - it says that it is not weak:
    11:33:28 SYS@EMCOR_SRV1 >show parameter instance_name
    NAME TYPE VALUE
    instance_name                        string      INST0
    11:33:36 SYS@EMCOR_SRV1 >host lsnrctl status
    LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 11:33:50
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
    Start Date 28-JAN-2010 09:15:36
    Uptime 0 days 2 hr. 18 min. 14 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
    Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
    Services Summary...
    Service "EMCOR" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "EMCOR_XPT" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "RESXDB" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV1" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV2" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    The command completed successfully
    11:33:50 SYS@EMCOR_SRV1 >select sys_context('USERENV','INSTANCE_NAME') from dual;
    SYS_CONTEXT('USERENV','INSTANCE_NAME')
    emcor
    Elapsed: 00:00:00.00
    11:34:42 SYS@EMCOR_SRV1 >select service_name from v$session where sid=sys_context('USERENV','SID');
    SERVICE_NAME
    SRV1
    Best regards, Sergey

  • The CSSD does not start automatically on Non-RAC - AIX 5L

    Hi all,
    Database: Oracle 10.2.0.4
    O.S AIX 5.3 TL 10
    I am facing a problem with the CSSD on non-RAC/Clusterware.
    When we restarted the server, the CSSD service did not start automatically and there are no relevant messages in the CSSD log files.
    Running "localconfig reset $ORACLE_HOME" executes successfully but does not start the CSSD.
    Running "/etc/init.cssd start" does not work - nothing happens when we run it.
    I manually ran "/etc/init.cssd run" and the CSSD service (ocssd.bin) was started without errors.
    No errors on errpt from AIX.
    I've opened a SR but I am waiting for response from Oracle.

    Hi,
    I believe this problem is not with Oracle but with AIX. It may be that a process that starts at boot is holding up the other process. You need to identify which process/application that is.
    Check which processes in inittab are already active, and which are inactive that should be active.
    One clue: it is very common for a process that is started in wait mode to hold up the server boot.
    wait
    When the init command enters the run level that matches the entry's run level,
    start the process and wait for its termination.
    All subsequent reads of the /etc/inittab file while the init
    command is in the same run level will cause the init command to ignore this entry.
    http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.files/doc/aixfiles/inittab.htm
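    For comparison, the CSSD entry that localconfig normally adds to /etc/inittab looks roughly like this (the identifier, run-level field and redirections vary by platform and release - treat it as a sketch, and check your own file with "grep cssd /etc/inittab" or "lsitab -a" on AIX):
    h1:2:respawn:/etc/init.cssd run >/dev/null 2>&1 </dev/null
    Note that it uses respawn, not wait, so it should not itself block the boot.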
    Regards,
    Levi Pereira

  • Best way to have resizeable LUNS for datafiles - non RAC system

    All,
    (thanks Avi for the help so far - I know it's a holiday there, so I will wait for your return and see if any other users can chip in also)
    one of our systems (many on the go here) is being provided by an external vendor - I am reviewing their design and I have some concerns about the LUN's to house the datafiles:
    they don't want to pre-assign full-size LUNs - sized for future growth - and want more flexibility to give each env less disk space in the beginning and allocate more as each env grows
    they are not going to use RAC (the system has nowhere near the uptime/capacity reqs - and we are removing it as it has caused enormous issues with the previous vendors and their lack of skills with it - we want simplicity)
    They have said they do not want to use ASM (I have asked for that previously; I think they have never used it before - I may be able to change their minds on this, but they are saying that as it is not RAC it is not needed)
    but they are wondering how they give smaller LUNs to each env and increase the size as they grow - they don't want to forever be adding /u0X /u0Y /u0Z extra filesystems (eBusiness Suite Rapid Clone doesn't like working with many filesystems anyway, and I find it inelegant to have so many mount points)
    they have suggested using large OVM repos and serving the data filesystems out of that (I have told them to use the repos just for the guest OSs and use directly phy-attached LUNs for the datafiles (5TB of them))
    now they have suggested creating a large LUN (large enough for many envs at the same time [dev / test1 / test2 etc]) .... and putting OCFS2 on it so that they can mount it to all the domU/guests and allocate space as needed out of that:
    so that they have guests/VMs (DEV1 - DEV2 - TEST1 say) (all separate VMs), all mounting the same OCFS2 cluster filesystem (as /u01 maybe), and they can share that for the datafiles under a separate dir so that each DB VM would see:
    /u01/ and as subdirectories to that DEV1 DEV2 TEST1 so:
    /u01/DEV1
    /u01/DEV2
    /u01/TEST1
    and only use the right directory for each guest's datafiles (thus sharing the space in /u01 (the big LUN) as needed per env)....
    I really don't like that, as each guest is going to have the same oracle unix user details and be able to write to each other's dirs - I'd prefer dedicated LUNs for each VM, not LUNs mounted to many VMs
    so I am looking for a way to suggest something better....
    should I just insist on ASM (but this is a risk as I fear they are not experienced with it)
    or go with OEL/RHEL LVM and standard ext filesystems that can be extended - what are the risks with this? (On A Linux Guest For OVM, Which Partitions Can Be LVM? [ID 1080783.1]) - it seems to say there is little performance impact
    or is there another option?
    Thanks all
    Martin
    Edited by: Martin Brambley on 11-Jun-2012 08:53

    Martin, what route did you end up going?
    We are about to deploy several hundred OEL VMs that are going to run non-RAC database instances. We don't plan to use ASM either. Our plan right now is to use one large 3TB LUN virtual disk to carve out the operating system space for the VMs, and then have a separate physically attached LUN for each VM that will host a /u01 filesystem using LVM. I have concerns with this, as we don't know how much space /u01 will ultimately need, and if we end up having to extend /u01 on all of these VMs, that sounds like it will be messy. Right now I've got 400 separate 25GB LUNs presented to all of my OVM servers that we plan to use for /u01 filesystems.
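    For what it's worth, extending a /u01 that lives on LVM is a short procedure once a new LUN is presented - a hedged sketch, with hypothetical VG/LV/device names and an ext3/ext4 filesystem assumed:
    pvcreate /dev/xvdc                    # initialise the newly presented LUN
    vgextend vg_u01 /dev/xvdc             # add it to the existing volume group
    lvextend -L +25G /dev/vg_u01/lv_u01   # grow the logical volume
    resize2fs /dev/vg_u01/lv_u01          # grow the ext3/ext4 filesystem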

  • RAC to non-RAC standby file name convert option

    Hi,
    RAC to non-RAC, with ASM on both; please find the details below.
    Primary Setup.
    diskgroup name for all files, for both primary and standby: +DATA
    pmon, unique and service names at the primary - rac1, rac2
    db version - 11.2.0.1
    platform - HP-UX
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_rac1
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM1
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_rac2
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM2
    database and Unique name - RAC, so all files on ASM created under +DATA/RAC
    Stanby setup
    diskgroup name for all files, for both primary and standby: +DATA
    pmon, unique and service name at the standby - racdr
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_racdr
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM
    database name and unique name is racdr, so all files on ASM are created under +DATA/racdr when cloned with RMAN from a primary backup (by setting log_file_name_convert and db_file_name_convert from '+DATA/RAC' to '+DATA/racdr').
    so now everything is OK... but how do we create new datafiles at the primary so that the above setup keeps working, and they are reflected on the standby?
    1. create tablespace tbs size 10g;
    2. create tablespace tbs datafile '+DATA/RAC/test.dbf' size 10g; ??
    3. create tablespace tbs datafile '+DATA' size 10g; ???
    4. are directory names on ASM, as specified in the convert options, case sensitive ???
    Thanks in advance...

    that's fine John,
    log_file_name_convert and db_file_name_convert are set from '+DATA/RAC' to '+DATA/racdr'
    my primary is rac
    my standby is racdr
    so I mentioned the above folders in the convert options. If I create a datafile with or without specifying a diskgroup, and with or without specifying the directory, as mentioned below,
    will they all be created under '+DATA/racdr' on the standby??
    create tablespace test datafile '+DATA' size 10m; --> not specifying a full path such as '+DATA/RAC', so will it go to '+DATA/racdr'?? or be created directly under the +DATA diskgroup on the standby????
    Thanks,
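    One common way to sidestep the question (a sketch, not from this thread; assumes OMF): set db_create_file_dest on both sites and leave standby_file_management=AUTO on the standby, so new files are named by Oracle under each site's own directory and no convert entry is needed for them:
    alter system set db_create_file_dest='+DATA' scope=both sid='*';
    alter system set standby_file_management=AUTO scope=both sid='*';   -- on the standby
    create tablespace tbs datafile size 10g;    -- OMF picks the file name and path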

  • IS IT SUPPORTED TO CENTRALLY MOUNT THE ORACLE_HOME IN A NON-RAC ENVIRONMENT

    SR 7250090.993 : (http://qmon.oraclecorp.com/qmon3/quickpicks.pl?t=t&q=7250090.993)
    Technical Summary:
    Customer is planning to install Oracle 10.2.0.4 and 11.1.0.x software on Red Hat 5 with NetApp storage.
    Customer came across the following :
    For single instance installations (as opposed to RAC installations), you must create a separate Oracle home directory for each installation. Run the software in this Oracle home directory only from the system that you used to install it. For Oracle Real Application Clusters (RAC) installations, you can use a single Oracle home directory mounted from each node in the cluster. You must mount this Oracle home directory on each node so that it has the same directory path on all nodes.
    mentioned in the 10gR2 documentation link :
    http://download.oracle.com/docs/cd/B19306_01/install.102/b15667/app_nas.htm#BCFIDEJA
    Requirements/Expectations:
    As the above statement that the customer came across is not present in the 9i documentation, the customer wants to understand if it is actually supported to centrally mount the 10g/11g ORACLE_HOME on many servers that are not RAC-enabled.
    Also, I would like to understand if the statements in the documentation indicate that it is not generally recommended to centrally mount the 10g/11g ORACLE_HOME, or do they mean that it is not supported to centrally mount the ORACLE_HOME in a non-RAC environment?
    Please advise.

    The binaries (executables) in an Oracle home are "linked" (link-edited?) to the OS libraries on each server where the software is installed.
    Unless the OS is IDENTICAL on each of the IDENTICAL (HW) servers that would share the Oracle home, you could be in trouble.
    The only supported configuration (I know of) where the Oracle binaries are shared between servers is 9i RAC. On 10g RAC the binaries are installed on each server.
    Otherwise I'd say it's NOT recommended; besides, you don't save anything (except a couple of gigs of disk space).
    :p

  • OCFS2 in a non-RAC environment

    Here's a question I've been batting around with a sysadmin. I thought I'd run it past this group.
    We're spec'ing out the hardware for a very large Oracle database (3+ TB) running on Linux. This will be a single, non-RAC instance. This particular SA has very bad feelings about the ext3 filesystem (I am not exactly sure what his objections are). He has heard about ocfs2 and wonders if it would be superior to ext3.
    What do you think? Would ocfs2 be appropriate in a non-clustered environment? Can you suggest any arguments for or against this approach?
    Thanks, I appreciate any thoughts anyone might have.

    Note that paper was written in 2004, and their tests used Pentium III processors. The OS was RHAS 2.1 and their kernel was 2.4.9
    I'm not saying that invalidates the paper's conclusions, but I'm certainly saying you'd have to be very careful indeed in assuming that conclusions drawn from that combination of hardware and software had anything much to say about today's quad cores, multi-gigabyte RAM, kernel 2.6.x setups.
    I'm concerned when I see their graphs on page 8: no axis scales in either x or y directions. If they're going to come to the conclusion that ext2/3 performance tails off, it would be nice to know whether it does so when the number of users hits 10, 50, 100, 500 or whatever, wouldn't it? But you can't tell anything from their graphs on that score.
    And I don't see any mention of them adjusting the buffer cache when using raw, which (having just abolished the file system cache) is something you should sensibly do when making such comparisons.
    And, of course, they were testing OCFS not OCFS2, so again... the applicability of anything concluded in that paper to today's situation is highly questionable, at least in my mind. But I do note that OCFS with "high numbers" of users ended up clobbering the CPU to death, whereas ext2/3 didn't...
    And they seem to claim that transactions per second levelled out with ext2/3 because they hit a lot of free buffer waits -and there are ways of dealing with them that don't involve getting rid of the file system! The fact that they didn't report a lot of write complete waits as well as free buffer waits might indicate that they just need more DBWRs, and not a switch of file system, for example.
    Anyway... short answer is, I wouldn't read too much into that report, which looks to me to be poorly devised and written and very much 'of its time and place' with little general applicability outside of that.

    I am going to do a self-repair to an iBook G3 because the machine is out of warranty. I have the necessary tools and repair guide thanks to iFixIt. I am just wondering if anyone can answer these questions about using an ESD wristband, to prevent an e