FRM-92119 in cluster, not in single instance

Hello everyone,
Background: Forms 11.1.2.1.0, JDK 1.7u11, 64-bit, Windows.
I have a 2-machine cluster set up behind a load balancer (LBR). Both OHS instances are configured to route to both WLS_FORMS managed servers. I have followed the instructions in the HA guide for setting this up.
I've defined a couple of configuration sections in formsweb.cfg and have corresponding environment files. When only one of the WLS_FORMS servers is running, things work fine, regardless of which one it is. However, when both WLS_FORMS managed servers are up and running, I get FRM-92119 complaining that ORACLE_INSTANCE is not set. The log files don't contain anything more informative than that.
Any clue as to what may be going on or even how to debug the issue?
Kind regards,
John

I guess you stumbled upon Note 989118.1, "Tuning / Load Balancing: How to Manually Create a New WLS Forms Managed Server in 11g (11.1.1.2 and Above)"?
It mentions that you need to set the -Doracle.instance Java property (amongst others) in the WebLogic Administration Console. However, I found that Java properties set in the WebLogic console are not picked up by the Forms installation; you have to set them manually in one of the batch files used to start the managed servers.
For the original WLS_FORMS this is done in one of those files (startWebLogic.cmd, startManagedWebLogic.cmd or setDomainEnv.cmd; I don't remember which) inside an if condition. If you named your cloned server WLS_FORMS1, or something like it as mentioned in the note above, that condition almost certainly didn't match, so the property was never set for the clone.
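For illustration, the kind of block to look for in setDomainEnv.cmd is roughly the following - a sketch only, assuming the clone is named WLS_FORMS1 and an instance directory of D:\Oracle\Middleware\asinst_1; check your own file for the exact variable and server names:
if "%SERVER_NAME%"=="WLS_FORMS" (
  set EXTRA_JAVA_PROPERTIES=-Doracle.instance=D:\Oracle\Middleware\asinst_1 %EXTRA_JAVA_PROPERTIES%
)
rem The clone needs an equivalent branch; without it, ORACLE_INSTANCE is never passed to the JVM:
if "%SERVER_NAME%"=="WLS_FORMS1" (
  set EXTRA_JAVA_PROPERTIES=-Doracle.instance=D:\Oracle\Middleware\asinst_1 %EXTRA_JAVA_PROPERTIES%
)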
cheers

Similar Messages

  • Creating windows cluster after Oracle single instance installation

    Hi,
    I have installed and created an Oracle 11g R1 single-instance server, and the database was created on that server. Microsoft Windows 2003 Enterprise was not clustered during that installation.
    Now the requirement has changed and, per the new architecture, we want to create a Microsoft cluster with 2 nodes in it.
    We want to move the current database instance's data files to node 1 of the new cluster.
    My question is: what will be the effect on the current architecture, and how can we do it?
    Thanks in advance.

    Would anybody like to share their experience?

  • CloudControl 12c: when adding cluster-database single instances are missing

    Hello,
    I have installed the 12c agent on every host of our RAC. When adding a database to Cloud Control 12c, only the RAC instance gets discovered - not the single database instances on each host.
    Because of this, when clicking targets -> databases I see only the cluster database - not every single instance of the database.
    Compared to another RAC database this is completely different behaviour.
    Any help will be appreciated
    Rgds
    JH

    Hello,
    maybe I should try to explain the issue a little more clearly...
    We have one RAC database called PROD02. When clicking "targets -> databases" I see the following:
    PROD02 of type Cluster Database
    PROD021 of type Database Instance
    PROD022 of type Database Instance
    That's the way it should be - that's the way I expected it...
    For the newly configured RAC database, called PREP02, it looks different when clicking "targets -> databases":
    PREP02 of type Cluster Database
    The single instances don't show up. Of course I see the single instances when clicking on "PREP02" under the item "Instances".
    Now I wonder why these two RAC databases are displayed differently under "targets -> databases". When I try to add the single instances (clicking "Add" under "targets -> databases") and then select either "only on the host <hostname>" or "on all hosts in the cluster", no single instances are found or displayed...
    Rgds
    Jan

  • Rconfig: converting a single instance to RAC instance

    Hi,
    I am trying to use the "rconfig" utility to convert a single instance to a RAC instance in an existing RAC cluster.
    I have modified the .xml file and am trying to run the conversion from the 1st node of the 2-node cluster (where the single instance resides).
    The only error message I seem to be getting is below:
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    ORCL_DATA_ORCLCLN The specified diskgroup is not mounted.
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    Now I don't really understand why I would be getting that message, as the instance is up and running and the ASM disk group is mounted on node1 at the time I run the rconfig command. It's not clear to me, though, whether I also need to mount the ASM disk group on the second node before running the rconfig command?
    node1:
    bash-3.00$ asmcmd -p
    ASMCMD [+] > lsdg
    State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
    MOUNTED EXTERN N N 512 4096 1048576 10181 7442 0 7442 0 ORCL_DATA_ORCLCLN/
    node2:
    ASMCMD [+] > lsdg
    State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
    I have attached the output of the alert log during the rconfig conversion of the target database, but it all looks pretty standard to me (keep in mind I am an Oracle novice!).
    alert.log
    Completed: ALTER DATABASE OPEN
    Thu Jul 23 13:51:55 2009
    Shutting down instance (abort)
    License high water mark = 2
    Instance terminated by USER, pid = 15030
    Thu Jul 23 13:51:57 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 e1000g1 10.128.113.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g0 10.128.113.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/10.2.0/db_1/dbs/arch
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
    processes = 150
    __shared_pool_size = 121634816
    __large_pool_size = 4194304
    __java_pool_size = 4194304
    __streams_pool_size = 0
    sga_target = 440401920
    control_files = +ORCL_DATA_ORCLCLN/control01.ctl
    db_block_size = 8192
    __db_cache_size = 306184192
    compatible = 10.2.0.2.0
    log_archive_format = %t_%s_%r.dbf
    db_file_multiblock_read_count= 16
    cluster_database = FALSE
    cluster_database_instances= 1
    db_recovery_file_dest_size= 2147483648
    norecovery_through_resetlogs= TRUE
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain = netapp.com
    job_queue_processes = 10
    background_dump_dest = /u01/app/oracle/admin/orcldb/bdump/ORCLCLN
    user_dump_dest = /u01/app/oracle/admin/orcldb/udump/ORCLCLN
    core_dump_dest = /u01/app/oracle/admin/orcldb/cdump/ORCLCLN
    db_name = ORCLCLN
    open_cursors = 300
    pga_aggregate_target = 145752064
    Cluster communication is configured to use the following interface(s) for this instance
    10.128.113.200
    Thu Jul 23 13:51:59 2009
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=15085
    DIAG started with pid=3, OS id=15091
    PSP0 started with pid=4, OS id=15094
    LMON started with pid=5, OS id=15097
    LMD0 started with pid=6, OS id=15102
    MMAN started with pid=7, OS id=15112
    DBW0 started with pid=8, OS id=15114
    LGWR started with pid=9, OS id=15116
    CKPT started with pid=10, OS id=15125
    SMON started with pid=11, OS id=15128
    RECO started with pid=12, OS id=15130
    CJQ0 started with pid=13, OS id=15134
    MMON started with pid=14, OS id=15143
    MMNL started with pid=15, OS id=15146
    Thu Jul 23 13:52:03 2009
    lmon registered with NM - instance id 1 (internal mem no 0)
    Thu Jul 23 13:52:04 2009
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    0
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Resources and enqueues cleaned out
    Resources remastered 0
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Reconfiguration complete
    Thu Jul 23 13:52:04 2009
    ALTER DATABASE MOUNT
    Thu Jul 23 13:52:04 2009
    Starting background process ASMB
    ASMB started with pid=17, OS id=15157
    Starting background process RBAL
    RBAL started with pid=18, OS id=15169
    Thu Jul 23 13:52:09 2009
    SUCCESS: diskgroup ORCL_DATA_ORCLCLN was mounted
    Thu Jul 23 13:52:13 2009
    Setting recovery target incarnation to 2
    Thu Jul 23 13:52:13 2009
    Successful mount of redo thread 1, with mount id 4437636
    Thu Jul 23 13:52:13 2009
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Thu Jul 23 13:52:14 2009
    ALTER DATABASE OPEN
    Thu Jul 23 13:52:14 2009
    Beginning crash recovery of 1 threads
    Thu Jul 23 13:52:14 2009
    Started redo scan
    Thu Jul 23 13:52:14 2009
    Completed redo scan
    105 redo blocks read, 32 data blocks need recovery
    Thu Jul 23 13:52:14 2009
    Started redo application at
    Thread 1: logseq 2, block 929
    Thu Jul 23 13:52:15 2009
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 2 Reading mem 0
    Mem# 0 errs 0: +ORCL_DATA_ORCLCLN/redo_2_1.log
    Mem# 1 errs 0: +ORCL_DATA_ORCLCLN/redo_2_0.log
    Thu Jul 23 13:52:15 2009
    Completed redo application
    Thu Jul 23 13:52:15 2009
    Completed crash recovery at
    Thread 1: logseq 2, block 1034, scn 613579
    32 data blocks read, 25 data blocks written, 105 redo blocks read
    Thu Jul 23 13:52:15 2009
    Thread 1 advanced to log sequence 3
    Thread 1 opened at log sequence 3
    Current log# 1 seq# 3 mem# 0: +ORCL_DATA_ORCLCLN/redo_1_1.log
    Current log# 1 seq# 3 mem# 1: +ORCL_DATA_ORCLCLN/redo_1_0.log
    Successful open of redo thread 1
    Thu Jul 23 13:52:15 2009
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Thu Jul 23 13:52:15 2009
    SMON: enabling cache recovery
    Thu Jul 23 13:52:17 2009
    Successfully onlined Undo Tablespace 1.
    Thu Jul 23 13:52:17 2009
    SMON: enabling tx recovery
    Thu Jul 23 13:52:17 2009
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=21, OS id=15328
    Thu Jul 23 13:52:23 2009
    Completed: ALTER DATABASE OPEN
    Any help would be greatly appreciated!!!!

    Ok,
    So I managed to get the disk group mounted on the second node, and re-ran the rconfig process.
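    For reference, mounting the diskgroup on node 2 came down to something like this (a sketch, assuming the ASM instance on the second node is named +ASM2):
    export ORACLE_SID=+ASM2
    sqlplus / as sysdba
    SQL> ALTER DISKGROUP ORCL_DATA_ORCLCLN MOUNT;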
    I got a little further, but encountered another error which is displayed below:
    -bash-3.00$ rconfig racconv.xml
    <?xml version="1.0" ?>
    <RConfig>
    <ConvertToRAC>
    <Convert>
    <Response>
    <Result code="1" >
    Operation Failed
    </Result>
    <ErrorDetails>
    /u01/app/oracle/product/10.2.0/db_1/dbs Data File is not shared across all nodes in the cluster
    </ErrorDetails>
    </Response>
    </Convert>
    </ConvertToRAC></RConfig>
    I am not using a shared Oracle home; each node in the cluster has its own Oracle installation residing on local disk. Is a shared Oracle home a prerequisite for using rconfig?
    I have provided the .xml file I am using below:
    -bash-3.00$ cat racconv.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <n:RConfig xmlns:n="http://www.oracle.com/rconfig"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.oracle.com/rconfig">
    <n:ConvertToRAC>
    <!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
    <n:Convert verify="YES">
    <!--Specify current OracleHome of non-rac database for SourceDBHome -->
    <n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
    <!--Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
    <n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
    <!--Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
    <n:SourceDBInfo SID="ORCLCLN">
    <n:Credentials>
    <n:User>oracle</n:User>
    <n:Password>password</n:Password>
    <n:Role>sysdba</n:Role>
    </n:Credentials>
    </n:SourceDBInfo>
    <!--ASMInfo element is required only if the current non-rac database uses ASM Storage -->
    <n:ASMInfo SID="+ASM1">
    <n:Credentials>
    <n:User>oracle</n:User>
    <n:Password>password</n:Password>
    <n:Role>sysdba</n:Role>
    </n:Credentials>
    </n:ASMInfo>
    <!--Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
    <n:NodeList>
    <n:Node name="sol002"/>
    <n:Node name="sol003"/>
    </n:NodeList>
    <!--Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
    <n:InstancePrefix>ORCLCLN</n:InstancePrefix>
    <!--Specify port for the listener to be configured for rac database. If port="", a listener existing on localhost will be used for rac database. The listener will be extended to all nodes in the nodelist -->
    <n:Listener port=""/>
    <!--Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
    <n:SharedStorage type="ASM">
    <!--Specify Database Area Location to be configured for rac database.If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
    <n:TargetDatabaseArea></n:TargetDatabaseArea>
    <!--Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
    <n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
    </n:SharedStorage>
    </n:Convert>
    </n:ConvertToRAC>
    </n:RConfig>
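    (Side note: the verify attribute in the file above also accepts ONLY, which runs just the prechecks without attempting the conversion - a quick way to test requirements such as shared storage. A sketch: change the element to <n:Convert verify="ONLY"> and re-run rconfig racconv.xml.)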

  • App Server 8.1 cluster not all members starting up.

    Hi,
    Solaris 10.
    Two instances of app servers, each in a separate zone, on same machine.
    App servers were just configured with the configure-ha-cluster command to operate in HA mode.
    When restarting the cluster with two members, only one of them comes up:
    ./asadmin start-cluster --user admin --passwordfile domain1.pwd idm-cluster
    Not all server instances in cluster idm-cluster were successfully started.
    Failed to retrieve RMIServer stub: javax.naming.NameNotFoundException: management/rmi-jmx-connector
    Command start-cluster executed successfully.
    Funnily enough, it's the instance on the local machine that will not start up. I had a similar problem earlier where creating an hadb domain could not contact the local hadb instance...
    The node-agent is running though, but the instance will not start up.
    ./asadmin list-clusters --user admin --passwordfile domain1.pwd
    idm-cluster partially running
    Command list-clusters executed successfully.
    ./asadmin list-node-agents --user admin --passwordfile domain1.pwd
    idm1 running
    idm2 running
    Command list-node-agents executed successfully.
    ./asadmin start-instance --user admin --passwordfile domain1.pwd idm1-instance
    Operation 'startServerInstance' failed in 'servers' Config Mbean.
    Target exception message: Failed to retrieve RMIServer stub: javax.naming.NameNotFoundException: management/rmi-jmx-connector
    CLI137 Command start-instance failed.
    Any pointers or ideas?
    Thanks!

    Hmmm. I seem to be having the same problem.
    Both node agents are running fine, but the DAS is unable to talk to the cluster.
    Exactly the same error occurs.
    Doesn't seem particularly robust software, imho. Happened several times.

  • Multiple Single Instance DB Conversion to Cluster Database RAC

    Hi,
    Good Day.
    We are in the planning phase of an activity in which 8 existing (single-instance) databases and 1 new database will be consolidated into ONE Oracle RAC cluster database.
    Kindly share your ideas on performing the consolidation smoothly with minimal downtime.
    Can anyone share the pros/cons? What if we create MULTIPLE RAC cluster databases instead of one cluster database?
    Thank You.
    Regards,

    The cons are pretty much the same as running multiple instances on a single server: tuning will be much harder because you will have to look for bottlenecks and depletion of common resources (CPU, RAM, IO) across all your instances instead of one (where you have lots of nice and integrated features). There is the potential for contention between resources, and since each instance/database has its own SGA (and they are not shared) you will potentially waste a lot of RAM on stuff that is redundant between the instances.
    But if you still think this is something you want to do, you should take a look at the 'server pool' feature, sketched below. It will give you some more control over how your databases/instances are distributed across your RAC (there is probably no need for all databases to be running on all servers at the same time).
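    For illustration only (11.2 syntax; the pool name, limits and path are made up):
    srvctl add srvpool -g consol_pool -l 2 -u 4
    srvctl add database -d PROD1 -o /u01/app/oracle/product/11.2.0/db_1 -g consol_pool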
    Bjoern

  • Cluster with one 2 Node RAC and a Single Instance using ASM

    Hi there,
    I am not sure about one planned installation and want to ask whether I am on the right track.
    Some Facts:
    Clusterware 11g
    ASM 11g
    Database 10gR2
    AIX 5.3
    3 Machines
    2 Storages DS4700
    My Plan
    On Node 1 and Node 2 we install a RAC database for an ERP software.
    On Node 3 we install a single-instance database for a logistics software.
    So I will install Clusterware on all three nodes and a 3-instance ASM cluster.
    I create 2 diskgroups, one for the FRA and one for the data, both on LUNs on the DS4700s.
    The RAC database and the logistics database use the same diskgroups.
    Is this the way to go in these circumstances?
    The alternative, as far as I can see, is:
    Clusterware on all 3 servers
    one 2-node ASM cluster for the ERP software
    one single-node ASM for the logistics
    4 diskgroups, because with the 2 ASM setups there are 2 for the RAC and 2 for the single instance.
    Please give me some hints as to which way I should prefer.
    My tendency is towards the first alternative. I like the idea of sharing the diskgroups across more than one database because of easier administration.
    The load of the 2 databases is completely different; the logistics software will do nearly nothing compared to the ERP software, so this shouldn't be a problem.
    But maybe I am overlooking something, so please do not hesitate to tell me if I am completely wrong ;)
    Thanks a lot
    Jörg

    Chris Slattery wrote:
    > why clusterware on 3rd machine?
    > I'd have separate DGs but that's just me.
    If you wish to install ASM you need OCS installed on the machine, even if it is just a single node.
    It is a kind of dependency: no OCS, no ASM.
    cu
    Jörg

  • Not able to install Adobe 17x32. Getting message "only a single instance of this application can run"

    Sir, I can't install Adobe 17X32. I have a Win 7 system. All previous versions have been uninstalled and yet I am getting the message "ONLY A SINGLE INSTANCE OF THIS APPLICATION CAN RUN". How do I remedy the situation?
    Thanks,
    Message was edited by: Maria Vargas
    Removed personal email address, which should not be included in a public forum.

    Hi Viraj,
    Please try the offline installer posted at the bottom of the Installation problems | Flash Player | Windows page, in the 'Still having problems' section.
    Maria

  • Oracle single instance installation  with RHEL Cluster suite

    Hi ,
    Can anyone help regarding an Oracle single-instance installation with the RHEL Cluster Suite? I need to know which factors must be considered in the RHEL Cluster configuration for an Oracle installation. And is it certified by Oracle?
    Aungshu

    Would anybody like to share their experience?

  • How does a RAC DB 'spread' from single instance to multiple instances ?

    GI/RDBMS Version: 11.2.0.3
    OS: Oracle Linux 6.3
    Filesystem : ASM
    When a RAC database is created using DBCA, manually, or via RMAN restore, the DB is created on Node1 first with cluster_database=FALSE.
    Then you run the following commands (for a 3-node RAC) from Node1:
    srvctl add database -d lmnprod -o $ORACLE_HOME -p +LMNPROD_DATA01/lmnprod/spfilelmnprod.ora
    srvctl add instance -d lmnprod -i lmnprod1 -n hwcarser290
    srvctl add instance -d lmnprod -i lmnprod2 -n hwcarser291
    srvctl add instance -d lmnprod -i lmnprod3 -n hwcarser292
    Once the DB is created, mounted and opened on Node1 and the above commands are executed, you set cluster_database=TRUE and start up Instance2 and Instance3 on Node2 and Node3.
    I just want to know how Node2 and Node3 become aware of the DB and join the DB cluster. What happens internally?

    Generally speaking, registering in the OCR is not required for a database to be a cluster database.
    Migration from a single-instance database to a cluster consists of creating redo logs and an undo tablespace for the new instance and enabling that instance (thread). If the database is policy-managed, this is done automatically for the new node.
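    For an admin-managed database, the manual steps for a second instance look roughly like this (a sketch reusing the lmnprod names from the question; group numbers and sizes are made up):
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 SIZE 512M, GROUP 6 SIZE 512M;
    CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE SIZE 500M;
    ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='lmnprod2';
    ALTER SYSTEM SET instance_number=2 SCOPE=SPFILE SID='lmnprod2';
    ALTER SYSTEM SET thread=2 SCOPE=SPFILE SID='lmnprod2';
    ALTER DATABASE ENABLE PUBLIC THREAD 2;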

  • 10.2.0.4 patch on single instance standby database

    Hi All,
    We have a production database, 10.2.0.3, running with RAC (2 nodes) on Solaris 10. We have patched these databases to 10.2.0.4 (both nodes) on production without any issue.
    We have a physical standby database (10.2.0.3) with 2 nodes on Solaris 10, but we stopped node2 some time back and currently it's a single-instance standby database. When we try to apply the patch to the standby database, the installer shows both nodes during installation and fails when it tries to reach node2.
    What's the solution to this problem? Is there any document on how to patch a single-instance standby database when production is running RAC?

    I think you are basically saying that you have a 2-node RAC cluster with 1 node down (possibly permanently), and you want to just patch 1 of the nodes?
    It's not much of a surprise that the installer is trying to patch the other node when, as far as your inventory is concerned, you have a 2-node cluster.
    Have you tried running the installer with the -local option?
    This should just patch your local node. Obviously the dictionary changes will get applied via MRP from the primary db.
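    Something like the following (a sketch; the exact invocation depends on the patch set installer):
    ./runInstaller -local
    With -local, the installer operates only on the local node and its inventory.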
    jason.
    http://jarneil.wordpress.com

  • Is a cluster proxy a single-point-of-failure?

    Our group is planning on configuring a two-machine cluster to host servlets/JSPs and a single backend app server to host all EJBs and a database.
    IIS is going to be configured on each of the two cluster machines with a cluster plugin. IIS is being used to optimize performance of static HTTP requests. All servlet/JSP requests would be forwarded to the WebLogic cluster. Resonate's Central Dispatch is also going to be installed on the two cluster machines. Central Dispatch is being used to provide HTTP request load balancing and to provide failover in case one of the IIS servers fails (because the IIS process fails or the cluster machine it's on fails).
    Will this configuration work? I'm most concerned about the failover of the IIS cluster proxy. If one of the proxies is managing a sticky session (X), what happens when the machine (that the proxy is on) dies and we fail over to the other proxy? Is that proxy going to have any awareness of session X? Probably not. The new proxy is probably going to believe this request is new and forward it to a machine which may not host the existing primary session. I believe this is an error?
    Is a cluster proxy a single point of failure? Is there any way to avoid this? Does the same problem exist if you use WebLogic's HTTP server (as the cluster proxy)?
    Thank you.
    Marko.

    We found our entity bean bottlenecks using JProbe Profiler. It's great for watching the application and seeing what methods it spends its time in. We found an exceedingly high number of calls to ejbLoad were taking a lot of time, probably due to the fact that our EBs don't all have bulk-access methods.
    We also had to do some low-level method tracing to watch WebLogic thrash EB locks; basically it locks the EB instance every time it is accessed in a transaction. Our DBA says that Oracle is seeing a LOT of lock/unlock activity also. Since much of our EB data is just configuration information, we don't want to incur the overhead of Java object locks, excess queries, and Oracle row locks just to read some config values. Deadlocks were also a major issue because many transactions would access the same config data.
    Our data is also very normalized, and also very recursive, so using EBs makes it tricky to do joins and recursive SQL queries. It's possible that we could get good EB performance using bulk-access methods and multi-table EBs that use custom recursive SQL queries, but we'd still have the lock-thrashing overhead. Your app may differ; you may not run into these problems, and EBs may be fine for you.
    If you have a cluster proxy you don't need to use sticky sessions with your load balancer. We use sticky sessions at the load-balancer level because we don't have a cluster proxy. For our purposes we decided that the minimal overhead of hardware IP-sticky-session load balancing was more tolerable than the overhead of a dog-slow cluster proxy on WebLogic. If you do use the proxy, then your load balancer can do round-robin or any other algorithm amongst all the proxies.
    Marko Milicevic wrote:
    > Sorry Grant. I meant to reply to the newsgroup. I am putting this reply back on the group.
    > Thanks for your observations. I will keep them all in mind.
    > Is there any easy way for me to tell if I am getting acceptable performance with our configuration? For example, how do I know if my use of entity beans is slow? Will I have to do 2 implementations - one using entity beans and another that replaces all entity use with session beans - and then compare the performance?
    > One last question about the cluster proxy. You mentioned that you are using LocalDirector with sticky sessions. We too are planning on using sticky sessions with Central Dispatch. But since the cluster proxy is stateless, does it matter if sticky sessions are used by the load balancer? No matter which cluster proxy the request is directed to (by load balancing), the cluster proxy will in turn redirect the request to the correct machine (with the primary session). Is this correct? If I do not have to incur the cost of sticky sessions (with the load balancer) I would rather avoid it.
    > Thanks again Grant.
    > Marko.
    Grant Kushida had replied:
    > We haven't had too many app server VM crashes, although our web server typically needs to be restarted every day or so due to socket strangeness or flat-out process hanging. Running 2 app server processes on the same box would help with the VM stuff, but remember to get 2 NICs, because all servers in a cluster need to run on the same port with different IP addresses.
    > We use only stateless session beans and entity beans - we have had a number of performance problems with entity beans, though, so we will be migrating away from them shortly, at least for our configuration-oriented tables. Since each entity (unique row in the database) can only be accessed by one transaction at a time, we ran into many deadlocks. There was also a lot of lock thrashing because of this transaction locking. And of course the performance hit of the naive database synching (read/write for each method call). We're using bean-managed persistence in 4.5.1, so no read-only beans for us yet.
    > It's not the servlets that are slower, it's the response time due to the funneling of requests through the ClusterProxy servlet running on a WebLogic proxy server. You don't have that configuration, so you don't really need to worry. Although I have heard about performance issues with the cluster proxy on IIS/Netscape, we found performance to be just fine with the Netscape proxy.
    > We're currently using no session persistence. I have a philosophical issue with going to vendor-specific servlet extensions that tie us to WebLogic. We do the session-sticky load balancing with a Cisco LocalDirector; meanwhile we are investigating alternative servlet engines (Apache/JRun being the frontrunner). We might set up Apache as our proxy server running the Apache-WL proxy plugin once we migrate up to 5.1, though.
    Marko Milicevic had written earlier:
    > Thanks for the info Grant.
    > That is good news. I was worried that the proxy maintained state, but since it is all in the cookie, I guess we are OK.
    > As for the app server, you are right. It is a single point of failure, but the machine is a beast (HP/9000 N-class) with hardware redundancy up the yin-yang. We were unsure how much benefit we would get if we clustered beans. There seems to be a lot of overhead associated with clustered entity beans, since every bean read includes a synch with the database, and there is no failover support. Stateful session beans are not load balanced and do not support failover. There seems to be real benefit only for stateless beans and read-only entities, neither of which we have many of. We felt that we would probably get better performance by locating all of our beans on the same box as the data source. We are considering creating a two-instance cluster within the single app server box to protect against a VM crash. What do you think? Do you recommend a different configuration?
    > Thanks for the servlet performance tip. So you are saying that running servlets without clustering is 6-7x faster than with clustering? Are you using in-memory state replication for the session? Is this performance behavior under 4.5, 5.1, or both? We are planning on implementing under 5.1.
    > Thanks again Grant.
    > Marko.
    Grant Kushida wrote in the first reply:
    > Seems like you'll be OK as far as session clustering goes. The cluster proxies running on your IIS servers are pretty dumb - they just analyze the cookie and determine the primary/secondary IP addresses of the WebLogic web servers that hold the session data for that request. If one goes down, the other is perfectly capable of analyzing the cookie too. As long as one proxy and one of your two clustered WL web servers survives, your users will have intact sessions.
    > You do, however, have a single point of failure at the app server level, and at the database server level, compounded by the fact that both are on a single machine.
    > Don't use WebLogic to run the cluster servlet. Its performance is terrible - we experienced a 6-7x performance degradation, and WL support had no idea why. They wanted us to run a version of ClusterServlet with timing code in it so that we could help them debug their code. I don't think so.

  • Benefit of RAC over single instance

    Dear,
    There is going to be an upgrade of our main system from Oracle 10.2.0.4 to 11.2.0.4. Management wants to stay on a single instance as it is now, but Oracle RAC is something that is a must for all critical systems. I need to present the benefits of RAC over a single instance, considering the high cost. Can you show me the benefits of RAC that make people go for it instead of a single instance?

    Hi,
    The benefits of Oracle Real Application Clusters:
    High Availability - Oracle Real Application Clusters 11g provides the foundation for data center high availability. It is also an integral component of Oracle's Maximum Availability Architecture, which provides best practices for the highest availability in your data center. Oracle Real Application Clusters provides the following key characteristics essential for highly available data management:
    Reliability - The Oracle Database is known for its reliability. Oracle Real Application Clusters takes this a step further by removing the database server as a single point of failure. If an instance fails, the remaining instances in the server pool remain open and active. Oracle Clusterware monitors all Oracle processes and immediately restarts any failed component.
    Recoverability - The Oracle Database includes many features that make it easy to recover from all types of failures. If an instance fails in an Oracle RAC database, it is recognized by another instance in the server pool and recovery starts automatically. Fast Application Notification (FAN) and Fast Connection Failover (FCF) or Transparent Application Failover (TAF) make it easy for applications to mask component failures from the user.
    Error Detection - Oracle Clusterware automatically monitors Oracle RAC databases as well as other Oracle processes (ASM, listener, etc.) and provides fast detection of problems in the environment. It also automatically recovers from failures, often before users notice that a failure has occurred. Fast Application Notification (FAN) provides the ability for applications to receive immediate notification of cluster component failures in order to re-issue the transaction before the failure surfaces.
    Continuous Operations - Oracle Real Application Clusters provides continuous service for both planned and unplanned outages. If a server (or an instance) fails, the database remains open and the application is able to access data. Most database maintenance operations can be completed without downtime and are transparent to the user. Many other maintenance tasks can be done in a rolling fashion so application downtime is minimized or removed. Fast Application Notification and Fast Connection Failover assist applications in meeting service levels.
    Scalability - Oracle Real Application Clusters provides a unique technology for scaling applications. Traditionally, when database servers ran out of capacity, they were replaced with new and larger servers. As servers grow in capacity, they become more expensive. For databases using Oracle RAC, there are alternatives for increasing the capacity. Applications that have traditionally run on large SMP servers can be migrated to run on pools of small servers. Alternatively, you can maintain the investment in the current hardware and add new servers to the pool (or create a server pool) to increase the capacity. Adding servers to a server pool with Oracle Clusterware and Oracle RAC does not require an outage, and as soon as the new instances are started, the application can take advantage of the extra capacity. All servers in the server pool must run the same operating system and the same version of Oracle, but they do not have to be of exactly the same capacity. Customers today run server pools that fit their needs, often using servers of (slightly) different characteristics.
    http://www.oracle.com/technetwork/database/clustering/overview/twp-rac11gr2-134105.pdf

  • How to convert from single instance to RAC?

    Hi, I have one environment at version 10.2.0.4 on SUSE Linux.
    I decided to move this environment to 10.2.0.4 Real Application Clusters with two nodes on Red Hat Linux.
    First,
    I installed 10.2.0.1 RAC on two new machines on Red Hat.
    Then I upgraded my RAC from 10.2.0.1 to 10.2.0.4.
    Now I want to move the single instance to RAC.
    Is there another way, different from export/import?

    Besides the obvious export/import, which you do not want to consider (although it would be the neatest method!), look at the 4th conversion scenario under paragraph D.3.2 "Single-Instance to RAC Conversion Scenarios" at
    http://download.oracle.com/docs/cd/B19306_01/install.102/b14203/cvrt2rac.htm#BABFCAHF
    The 4th conversion scenario is "Converting a single-instance Oracle Database 10g Release 2 (10.2) to a 10g Release 2 (10.2) RAC database, running out of a different Oracle home, and where the host where the single-instance database is running is not one of the nodes of the RAC database".
    See section D of the "Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux":
    http://download.oracle.com/docs/cd/B19306_01/install.102/b14203/toc.htm

  • Scan listener for single instance

    Hi,
    in my 6-node 11.2 cluster, I have 1 single instance (11.2).
    This instance uses the GI listener and can be accessed using the host VIP.
    Using the SCAN I get ORA-12514.
    Why?
    Thanks.

    Hi,
    "SCAN listeners can run on any node in the cluster. SCANs provide location independence for
    databases so that the client configuration does not have to depend on which nodes run a
    particular database.
    Oracle Database 11g Release 2 and later instances register with SCAN listeners only as
    remote listeners. Upgraded databases register with SCAN listeners as remote listeners, and
    also continue to register with all other listeners."
    "When a client submits a connection request, the SCAN listener listening on a SCAN IP
    address and the SCAN port are contacted on the client’s behalf. Because all services on the
    cluster are registered with the SCAN listener, the SCAN listener replies with the address of
    the local listener on the least-loaded node where the service is currently being offered.
    Finally, the client establishes a connection to the service through the listener on the node
    where service is offered. All these actions take place transparently to the client without any
    explicit configuration required in the client."
    So the SCAN listener redirects to the VIP of the node on which the instance is running.
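    ORA-12514 through the SCAN usually means the instance never registered with the SCAN listeners, so it's worth checking the remote_listener parameter. A sketch, assuming a SCAN name of mycluster-scan on port 1521:
    SQL> show parameter remote_listener
    SQL> ALTER SYSTEM SET remote_listener='mycluster-scan:1521' SCOPE=BOTH;
    SQL> ALTER SYSTEM REGISTER;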
    Edited by: Mr.D. on 22-May-2013 1.39
