Thin Client connection not failing over

I'm using the following thin client connection string and the sessions do not fail over. When I test with SQL*Plus, the sessions do fail over. One difference I see between the two connections is that the thin connection shows NONE for failover_method and failover_type, while the SQL*Plus connection shows BASIC for failover_method and SELECT for failover_type.
Are there any known issues with the thin client? The version is 10.2.0.3.
jdbc:oracle:thin:@(description=(address_list=(load_balance=YES)(address=(protocol=tcp)(host=crpu306-vip.wm.com)(port=1521))(address=(protocol=tcp)(host=crpu307-vip.wm.com)(port=1521)))(connect_data=(service_name=ocsqat02)(failover_mode=(type=select)(method=basic)(DELAY=5)(RETRIES=180))))

You have to add (FAILOVER=on) to the JDBC URL as well.
http://download.oracle.com/docs/cd/B19306_01/network.102/b14212/advcfg.htm#sthref1292
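Applied to the URL from the question, that change would look something like the following (same hosts and service name as the original post; line breaks are added here only for readability, since the actual JDBC URL must be a single line):

```text
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=
    (LOAD_BALANCE=yes)
    (FAILOVER=on)
    (ADDRESS=(PROTOCOL=tcp)(HOST=crpu306-vip.wm.com)(PORT=1521))
    (ADDRESS=(PROTOCOL=tcp)(HOST=crpu307-vip.wm.com)(PORT=1521)))
  (CONNECT_DATA=
    (SERVICE_NAME=ocsqat02)
    (FAILOVER_MODE=(TYPE=select)(METHOD=basic)(DELAY=5)(RETRIES=180))))
```

FAILOVER can also sit directly at the DESCRIPTION level, as in the documentation example below.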
Example: TAF with Connect-Time Failover and Client Load Balancing
Implement TAF with connect-time failover and client load balancing for multiple addresses. In the following example, Oracle Net connects randomly to one of the protocol addresses on sales1-server or sales2-server. If the instance fails after the connection, the TAF application fails over to the other node's listener, preserving any SELECT statements in progress.
sales.us.acme.com=
(DESCRIPTION=
  (LOAD_BALANCE=on)
  (FAILOVER=on)
  (ADDRESS=
    (PROTOCOL=tcp)
    (HOST=sales1-server)
    (PORT=1521))
  (ADDRESS=
    (PROTOCOL=tcp)
    (HOST=sales2-server)
    (PORT=1521))
  (CONNECT_DATA=
    (SERVICE_NAME=sales.us.acme.com)
    (FAILOVER_MODE=
      (TYPE=select)
      (METHOD=basic))))
Example: TAF Retrying a Connection
With the RETRIES and DELAY parameters, TAF also provides the ability to retry the connection automatically if the first attempt fails. In the following example, Oracle Net tries to reconnect to the listener on sales1-server. If the failover connection fails, Oracle Net waits 15 seconds before trying again, and attempts to reconnect up to 20 times.
sales.us.acme.com=
(DESCRIPTION=
  (ADDRESS=
    (PROTOCOL=tcp)
    (HOST=sales1-server)
    (PORT=1521))
  (CONNECT_DATA=
    (SERVICE_NAME=sales.us.acme.com)
    (FAILOVER_MODE=
      (TYPE=select)
      (METHOD=basic)
      (RETRIES=20)
      (DELAY=15))))

Similar Messages

  • NIC not failing Over in Cluster

    Hi there... I have configured a 2-node cluster with the SoFS role for VM clustering and HA using Windows Server 2012 Datacenter. In the current setup, each host server has 3 NICs (2 with a default gateway configured, 192.x.x.x; the 3rd NIC is for the heartbeat, 10.x.x.x). I have configured CSV (I can also see the shortcut in C:\). I am planning to set up a few VMs pointing to the disks in the 2 separate storage servers (1 NIC in the 192.x.x.x network and 2 NICs in the 10.x.x.x network), and I am able to install a VM and point its disk to the share in cluster volume 1.
    I have created 2 VM switches for the 2 separate host servers (using Hyper-V Manager). When I test the functionality by taking Node 2 down, I can see the disk owner node changing to Node 1, but VM NIC 2 is not failing over automatically to VM NIC 1 (although I can see VM NIC 1 showing up unselected in the VM settings). When I go to VM Settings > Network Adapter, I get this error:
    An error occurred for resource VM "VM Name". Select the "information details" action to view events for this resource. The network adapter is configured to a switch which no longer exists, or a resource pool that has been deleted or renamed (with a configuration error in the "Virtual Switch" drop-down menu).
    Can you please let me know of any resolution to fix this issue? Hoping to hear from you.
    VT

    Hi,
    From your description “My another thing I would like to test is...I also would like to bring a disk down (right now, I have 2 disk - CSV and one Quorum disk) for that 2 node
    cluster. I was testing by bringing a csv disk down, the VM didnt failover” Are you trying to test the failover cluster now? If so, please refer the following related KB:
    Test the Failover of a Clustered Service or Application
    http://technet.microsoft.com/en-us/library/cc754577.aspx
    Hope this helps.

  • VIP is not failed over to surviving nodes in oracle 11.2.0.2 grid infra

    Hi ,
    It is a 8 node 11.2.0.2 grid infra.
    While pulling both cables from the public NIC, the VIP is not failing over to the surviving nodes for 2 of the nodes, but for the remaining nodes in the same cluster the VIP does fail over. Please help me with this.
    If we remove the power from these servers, the VIP does fail over to the surviving nodes.
    The public NICs are in bonding.
    grdoradr105:/apps/grid/grdhome/sh:+ASM5> ./crsstat.sh |grep -i vip |grep -i 101
    ora.grdoradr101.vip ONLINE OFFLINE
    grdoradr101:/apps/grid/grdhome:+ASM1> cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth0
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:1e
    Slave Interface: eth1
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:20
    Thanks
    Bala

    Please check MOS note 1276737.1 for this issue.
    HTH

  • GSLB Zone-Based DNS Payment Gw - Config Active-Active: Not Failing Over

    Hello All:
    Currently having a bit of a problem, have exhausted all resources and brain power dwindling.
    Brief:
    Two geographically diverse sites. Different AS's, different front ends. Migrated from one site with two CSS 11506's to two sites with one 11506 each.
    Flow of connection is as follows:
    Client --> FW Public Destination NAT --> CSS Private content VIP/destination NAT --> server/service --> CSS Source VIP/NAT --> FW Public Source NAT --> client.
    Using the load balancers as DNS servers, authoritative for the zones, due to the requirement for second-level domain DNS load balancing (i.e. xxxx.com AND FQDNs such as http://www.xxxx.com). Thus, the CSS is configured to respond as authoritative for xxxx.com, http://www.xxxx.com, postxx.xxxx.com, tmx.xxxx.com, etc., but of course cannot do MX records, so it is also configured with dns-forwarders, which consequently were the original DNS servers for the domains. Those DNS servers have had their zone files changed to reflect that the new DNS servers are in fact the CSSes. Domain records (i.e. NS records in the zone file), and the records at the registrar (i.e. Tucows, which I believe resells .com, .net and .org for NetSol), have been changed to reflect the same. That part of the equation has already been tested and works according to standard DNS behavior. The reason for the forwarders is of course for things such as non-load-balanced domain names, as well as MX records, etc.
    Due to design, which unfortunately cannot be changed, dns-record configuration uses kal-ap, example:
    dns-record a http://www.xxxx.com 0 111.222.333.444 multiple kal-ap 10.xx.1.xx 254 sticky-enabled weightedrr 10
    So, to explain so we're absolutely clear:
    - 111.222.333.444 is the public address returned to the client.
    - multiple is configured so we return both site addresses for redundancy (unless I'm misunderstanding that configuration option)
    - kal-ap and the 10.xx.1.xx address because due to the configuration we have no other way of knowing the content rule/service is down and to stop advertising the address for said server/rule
    - sticky-enabled because we don't want to lose a payment and have it go through twice or something crazy like that
    - weightedrr 10 (and on the other side weightedrr 1) because we want to keep most of the traffic on the site that is closer to where the bulk of the clients are
    So, now, the problem becomes that the clients (i.e. something like an Interac machine, RFID tags...) need to be able to fail over almost instantly to either of the sites should one lose connectivity and/or servers/services. However, this does not happen. The CSS changes its advertisement, and this has been confirmed by running nslookups/digs directly against the CSSes... however, the client does not recognize this and ends up returning a "DNS Error/Page not found".
    Thinking this may have something to do with the "sticky-enabled" and/or the fact that DNS doesn't necessarily react very well to a TTL of "0".
    Any thoughts... comments... suggestions... experiences???
    Much appreciated in advance for any responses!!!
    Oh... should probably add:
    nslookups to some DNS servers consistently - ALWAYS the same ones - take 3 lookups before getting a reply. Other DNS servers are instant....
    Cheers,
    Ben Shellrude
    Sr. Network Analyst
    MTS AllStream Inc

    Hi Ben,
    if I got your posting right, the CSSes are doing their job and do advertise the correct IP for a DNS query, right?
    If some of your clients are having a problem, this might be related to DNS caching. Some clients cache the DNS response and do not refresh it until they fail or the timeout expires.
    Even worse, if the request fails you sometimes have to restart the client's DNS daemon so that it requests IP addresses from scratch. I had this issue with some Unix boxes. If I remember it correctly, you can configure the DNS behaviour for Unix boxes and forbid them to cache DNS responses.
    Kind Regards,
    joerg
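    As one concrete example of the "forbid them to cache" suggestion: on Linux hosts whose name resolution goes through glibc's nscd, host caching can be switched off in /etc/nscd.conf (this assumes nscd is the caching layer in play; other resolver daemons have their own settings):

    ```text
    # /etc/nscd.conf -- disable caching of host lookups entirely
    enable-cache  hosts  no
    ```

    Alternatively, running `nscd -i hosts` invalidates the current hosts cache without disabling caching altogether.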

  • Stateful bean not failing over

    I have a cluster of two servers and an Admin server. Both servers are running NT 4 SP6 and WLS 6 SP1.
    When I stop one of the servers, the client doesn't automatically fail over to the other server; instead it fails, unable to contact the server that has gone down.
    My bean is configured to have its home clusterable and is a stateful bean. My client holds onto the remote interface and makes calls through it. If server B fails, then it should automatically fail over to server A.
    I have tested my multicast address and all seems to be working fine between the servers; my stateless beans work well, load balancing between servers nicely.
    Does anybody have any ideas regarding what could be causing the stateful bean remote interface not to provide failover info?
    Also, is it true that you can have only one JMS destination queue/topic per cluster? The JMS cluster targeting doesn't work at the moment, so you need to deploy to individual servers?
    Thanks

    Did you enable stateful session bean replication in the weblogic-ejb-jar.xml?
    -- Rob
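    For reference, a minimal sketch of the relevant weblogic-ejb-jar.xml stanza (the bean name here is hypothetical; check the element names against the WLS 6.x descriptor DTD for your service pack):

    ```xml
    <weblogic-ejb-jar>
      <weblogic-enterprise-bean>
        <!-- ejb-name must match the bean's name in ejb-jar.xml -->
        <ejb-name>MyStatefulBean</ejb-name>
        <stateful-session-descriptor>
          <stateful-session-clustering>
            <!-- cluster the home stub and replicate session state in memory -->
            <home-is-clusterable>true</home-is-clusterable>
            <replication-type>InMemory</replication-type>
          </stateful-session-clustering>
        </stateful-session-descriptor>
      </weblogic-enterprise-bean>
    </weblogic-ejb-jar>
    ```

    Without a replication-type of InMemory, the remote stub has no secondary to fail over to, which matches the symptom described above.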

  • BGP in Dual Homing setup not failing over correctly

    Hi all,
    we have dual homed BGP connections to our sister company network but the failover testing is failing.
    If I shut down the WAN interface on the primary router, after about 5 minutes everything converges and fails over fine.
    But if I shut the LAN interface down on the primary router, we never regain connectivity to the sister network.
    Our two ASRs have an iBGP relationship, and I can see that after a certain amount of time the BGP routes with a next hop of the primary router get flushed from BGP, and the preferred exit path is through the secondary router. This bit works OK, but I believe that the return traffic is still attempting to return over the primary link...
    To add to this, we have two inline firewalls on each link which are only performing IPS, no packet filtering.
    Any pointers would be great.
    thanks
    Mario                

    Hi John,
    right... please look at the output below, which is the partial BGP table during a link failure...
    10.128.0.0/9 is the problematic summary that still keeps getting advertised when we do not want it to during a failure...
    Now, there are prefixes in the BGP table which fall within that large summary address space, but I am sure that they are all routes being advertised to us from the eBGP peer...
    *> 10.128.0.0/9     0.0.0.0                            32768 i
    s> 10.128.56.16/32  172.17.17.241                 150      0 2856 64619 i
    s> 10.128.56.140/32 172.17.17.241                 150      0 2856 64619 i
    s> 10.160.0.0/21    172.17.17.241                 150      0 2856 64611 i
    s> 10.160.14.0/24   172.17.17.241                 150      0 2856 64611 i
    s> 10.160.16.0/24   172.17.17.241                 150      0 2856 64611 i
    s> 10.200.16.8/30   172.17.17.241                 150      0 2856 65008 ?
    s> 10.200.16.12/30  172.17.17.241                 150      0 2856 65006 ?
    s> 10.255.245.0/24  172.17.17.241                 150      0 2856 64548 ?
    s> 10.255.253.4/32  172.17.17.241                 150      0 2856 64548 ?
    s> 10.255.253.10/32 172.17.17.241                 150      0 2856 64548 ?
    s> 10.255.255.8/30  172.17.17.241                 150      0 2856 6670 ?
    s> 10.255.255.10/32 172.17.17.241                 150      0 2856 ?
    s> 10.255.255.12/30 172.17.17.241                 150      0 2856 6670 ?
    s> 10.255.255.14/32 172.17.17.241                 150      0 2856 ?
    I would not expect summary addresses to still be advertised if the specific prefixes are coming from eBGP... am I wrong?
    thanks for everything so far...
    Mario De Rosa

  • GetConnection on clustered DS not failing over as documentation indicates

    Hi,
    (Weblogic 7 SP2 )
    I have a Connection pool and datasource both targeted to a cluster.
    The cluster is working as a cluster, HTTP session failover is working
    for instance, as is context failover. However, ds.getConnection() does
    not fail over to another DS when the machine hosting the ds is down. I
    don't expect connection failover, I just expect DS failover as indicated
    by the docs.
    The JDBC documentation indicates that the datasource is cluster aware
    (see the quotes below). However, I do not observe this behavior.
    I have a client connecting to the cluster using the cluster address.
    If I look up the datasource, then in a loop do
    Connection ccc = ds.getConnection();
    ccc.close();
    Thread.sleep(500);
    (with exception handling omitted)
    Then I expect that when the server to which the ds is connected (the
    server on which it was looked up, which is determined by the context
    which I used to look it up, which is determined by round robin) goes
    down, that the getConnection() will failover to another machine.
    That is, I expected getConnection() to return a connection from another
    pool to which the datasource was targeted. This does not happen.
    Instead I get a connection error:
    weblogic.rmi.extensions.RemoteRuntimeException: Unexpected Exception
    - with nested exception:
    [java.rmi.ConnectException: Could not establish a connection with
    -4447598948888936840S:10.0.10.10:[7001,7001,7002,7002,7001,7002,-1]:10.0.10.10:7001,10.0.10.14:7001:my2:ServerA,
    java.rmi.ConnectException: Destination unreachable; nested exception is:
         java.net.ConnectException: Connection refused: connect; No available
    router to destination]
    The datasource is after all a cluster-aware stub?
    What is the intended behaviour?
    Thanks,
    Q     
    "Clustered JDBC eases the reconnection process: the cluster-aware nature
    of WebLogic data sources in external client applications allow a client
    to request another connection from them if the server instance that was
    hosting the previous connection fails."
    "External Clients Connections: External clients that require a database
    connection perform a JNDI lookup and obtain a replica-aware stub for the
    data source. The stub for the data source contains a list of the server
    instances that host the data source, which should be all of the Managed
    Servers in the cluster. Replica-aware stubs contain load balancing logic
    for distributing the load among host server instances."

    This is a bug in WL 7 (& 8).
    "QQuatro" <[email protected]> wrote in message
    news:[email protected]...

  • Thin client connection to Unix with RMI

    Hi All and thanks for reading this.
    I am trying to run an application with a Swing user interface on the client (a Windows box) and the work engine on the server (a Unix box). I am trying to use RMI to communicate between the two. When I stay in the Windows world, everything works just fine. Unfortunately, it is not so fine when mixing the Windows and Unix worlds.
    I have started the rmi registry on the server and am able to bind the work engine to the registry.
    When I start up the client GUI and submit a job to the server, using an oracle jdbc thin client, I get the following null pointer exception at the time the application asks for its first connection:
    // message when rebind occurs
    DrillDownEngine bound
    // start of execution of Implementation class
    [dde.DrillDownEngine] Entered DrillDownEngine.executeTask [dde.OrderEntryList] Entered execute() in OrderEntryList
    [dde.OrderEntryList] Have a new thread LIST: Order: 084146Qty: 1.0
    // from within the thread as it is run by the server
    [dde.OrderEntryList] Entered run() for thread in OrderEntryList
    [dde.OrderEntryList] Value of currentThread = LIST: Order: 084146Qty: 1.0
    [dde.OrderEntryList] About to look for sales report with report name = Cost Of Sales
    [dde.OrderEntryList] About to execute SalesDrilldownRun
    [dde.SalesDrilldownRun] Entered DrilldownRun with runType of Cost of Sales
    // just prior to first attempt to get a connection
    [dde.SalesDrilldownRun] Run initialization prior to Table Instantiation
    [dde.SalesDrilldownRun] Drilldown Run terminated
    Error: null
    Stack:
    java.lang.NullPointerException
    at oracle.jdbc.ttc7.O3log.marshal(O3log.java:612)
    at oracle.jdbc.ttc7.TTC7Protocol.logon(TTC7Protocol.java:258)
    at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:362)
    at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:536)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:328)
    at java.sql.DriverManager.getConnection(DriverManager.java:512)
    at java.sql.DriverManager.getConnection(DriverManager.java:171)
    // from here down the stack is in my application code
    at pool.JDBCConnectionPool.getConnection(JDBCConnectionPool.java:132)
    at data.DbTbl.<init>(DbTbl.java:92)
    at ifsData.IfsDetTbl.<init>(IfsDetTbl.java:45)
    at dde.DrilldownRun.init(DrilldownRun.java:121)
    at dde.DrilldownRun.<init>(DrilldownRun.java:103)
    at dde.SalesDrilldownRun.<init>(SalesDrilldownRun.java:36)
    at dde.OrderEntryList.run(OrderEntryList.java:131)
    None of the 3 arguments passed in the getConnection method is null.
    Couldn't find anything about this type of error in Java or Oracle documentation.
    Do you think this is a JDBC problem or an RMI problem?
    Am now trying to make a connection using OCI instead of the thin client, but am now encountering security permission problems - which I will ask about in another post just to try to keep things simpler.
    Thanks in advance for any suggestions.

    It turns out that there is a problem with the Oracle 9.2 drivers for JDBC. If anyone experiences similar problems, roll back to version 9.0.1.
    I was using jdk1.4 and Oracle 9i.
    Thanks

  • Http cluster servlet not failing over when no answer received from server

    I am using WebLogic 5.1.0 SP9. I have a WebLogic server proxying all requests to a WebLogic cluster using the HttpClusterServlet.
    When I kill the WebLogic process servicing my request, I see the next request get failed over to the secondary server, and all my session information has been replicated. In short, I see the behavior I expect.
    However, when I either disconnect the primary server from the network or just switch the server off, I just get a message back to the browser: "unable to connect to servers".
    I don't really understand why the behaviour should be different. I would expect both to fail over in the same manner. Does the cluster servlet only handle TCP reset failures?
    Has anybody else experienced this, or have any ideas?
    Thanks
    r.troon

    I think I might have found the answer......
    The AD objects for the clusters had been moved from the Computers OU into a newly created OU. I suspect that the cluster node computer objects didn't have permissions on the cluster object within that OU, and that was causing the issue. I know I've seen cluster object issues before when moving to a new OU.
    All has started working again for the moment so I now just need to investigate what permissions I need on the new OU so that I can move the cluster object in.

  • Why DML not failed over in TAF??

    Hi,
    I have an OLTP application running on a 2-node 10gR2 RAC (10.2.0.3) on AIX 5.3L ML8. I have configured TAF here for SESSION failover. I would like to know two things from you all:
    1) Since each instance is able to read the other instance's undo tablespace data and redo logs, why is TAF not able to fail over DML transactions?
    2) As of now, is there any way to fail over DML other than catching the error thrown back to the application and re-executing the statement? Is it possible in 11gR1?
    I am grateful to you all for sparing your valuable time to answer this.
    Thanks and Regards,
    Vijay Shanker

    Re: Failover DML on RAC
    The reason is transaction processing and its implications.
    Imagine that you updated a row, then waited idly, then some other session wanted that same row and waited for you to either rollback or commit.
    You failed.
    Automatically, Oracle will rollback your transaction and release all your locks.
    What should the other session do: wait to see that maybe you have TAF or FCF and will reconnect and rerun your uncommitted DML, or should it proceed with its own work?
    Failed session rollback currently happens regardless of whether you or anybody else have TAF, FCF, or even whether you have RAC.
    But in order for you to be able to replay your DML safely after reconnect, that transaction rollback had to be prevented, and your new failed over session should magically re-attach to the failed session's transaction.
    Maybe some day Oracle will implement something like that, but it's not easy, and Oracle leaves it up to the application to decide what to do (TAF-specific error codes).
    On the other hand, replaying selects is fairly easy: re-executing the query (with scn as of the originally failed cursor to ensure read-consistency) and re-fetching up to the point of last fetch.

  • Clients connections not balanced across proxies

    Guys,
    I have 8 Extend clients connecting to 6 proxies. With 3.7, my understanding was that by default client connections should be balanced across the available proxies, but that's not always the case. Sometimes I see that client connections are not load balanced at all.
    Have been through -- http://docs.oracle.com/cd/E24290_01/coh.371/e22839/gs_configextend.htm#BEBBIJAI
    I've specified two proxies in the client side config
              <tcp-initiator>             
               <remote-addresses>
                   <socket-address>
                      <address>35.54.42.46</address>
                      <port>9099</port>
                   </socket-address>
                   <socket-address>
                      <address>35.54.42.47</address>
                      <port>9099</port>
                   </socket-address>
                </remote-addresses>
                   <connect-timeout>10s</connect-timeout>
                </tcp-initiator>
    I haven't configured a load balancer on the proxy, so by default it should be set to 'proxy'. I have even tried explicitly configuring the load balancer to 'proxy', but no luck.
    At certain times I see the following, which is perfect:
         proxy 1 -- 1 client connection
         proxy 2 -- 2 client connection's
         proxy 3 -- 1 client connection
         proxy 4 -- 1 client connection
         proxy 5 -- 2 client connection's
         proxy 6 -- 1 client connection
    But I also see
         proxy 1 -- 1 client connection
         proxy 2 -- 3 client connection's
         proxy 3 -- 0 client connection
         proxy 4 -- 0 client connection
         proxy 5 -- 1 client connection
         proxy 6 -- 3 client connection's
    Any ideas to fix this would be appreciated.
    Thanks
    K

    Hi Venkat,
    I have been monitoring RequestTotalCount, but I still think the connections are imbalanced...
    Currently, I have 3 clients and 5 proxy nodes and i see
    Proxy node 1 ---  ConnectionCount=2, RequestTotalCount=25
    Proxy node 2 ---  ConnectionCount=0, RequestTotalCount=23
    Proxy node 3 ---  ConnectionCount=0, RequestTotalCount=4
    Proxy node 4 ---  ConnectionCount=0, RequestTotalCount=18
    Proxy node 5 ---  ConnectionCount=1, RequestTotalCount=19
    Would appreciate your clarification. Also, shouldn't client connections be balanced based on 'ConnectionCount' and other attributes like CPU usage, etc.?
    Thanks
    K
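    For completeness, explicitly selecting the built-in proxy-based balancing looks roughly like this on the proxy side (scheme name, service name, and port here are hypothetical; the <load-balancer> element is described in the Coherence 3.7 cache configuration reference):

    ```xml
    <proxy-scheme>
      <scheme-name>example-proxy</scheme-name>
      <service-name>ExtendTcpProxyService</service-name>
      <!-- 'proxy' lets the proxies themselves balance client connections;
           'client' leaves the choice to the client's address list order -->
      <load-balancer>proxy</load-balancer>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>0.0.0.0</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
    ```

    Note that the proxy balancer weighs connection count together with message backlog and utilization, so a small, short-lived imbalance in raw ConnectionCount is not by itself proof that balancing is broken.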

  • Problems with Oracle FailSafe - Primary node not failing over the DB to the

    I am using 11.1.0.7 on a Windows 64-bit OS, with two nodes clustered at the OS level. The cluster is working fine at the Windows level and the shared drive fails over. However, the database does not fail over when the primary node is shut down or restarted.
    The Oracle software is on local drive on each box. The Oracle DB files and Logs are on shared drive.

    Is the database listed in your cluster group that you are failing over?

  • Extend-TCP client not failing over to another proxy after machine failure

    I have a configuration of three hosts: on A is the client; on B and C are a proxy and a cache instance each. I've defined an AddressProvider that returns the address of B and then C. The client just repeatedly calls the cache (read-only). The client configuration is:
    <?xml version="1.0"?>
    <cache-config
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
        xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
                            coherence-cache-config.xsd">
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>cache1</cache-name>
          <scheme-name>extend-near</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <!-- Use ExtendTCP to connect to a proxy. -->
      <caching-schemes>
        <near-scheme>
          <scheme-name>extend-near</scheme-name>
          <front-scheme>
            <local-scheme>
              <high-units>1000</high-units>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <remote-cache-scheme>
              <scheme-ref>remote-cache1</scheme-ref>
            </remote-cache-scheme>
          </back-scheme>
          <invalidation-strategy>all</invalidation-strategy>
        </near-scheme>
        <remote-cache-scheme>
          <scheme-name>remote-cache1</scheme-name>
          <service-name>cache1ExtendedTcpProxyService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <address-provider>
                  <class-name>com.foo.clients.Cache1AddressProvider</class-name>
                </address-provider>
              </remote-addresses>
              <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
              <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
          </initiator-config>
        </remote-cache-scheme>
      </caching-schemes>
    </cache-config>
    If I shut down the proxy that the client is connected to on host B, failover occurs quickly via the AddressProvider. But if I shut down the network on host B (or drop the proxy's TCP port on B) to simulate a machine failure, failover does not occur: the client simply keeps trying to contact B and dutifully times out after 5 seconds. It never asks the AddressProvider for another address.
    How do I get failover to kick in?

    Hello,
    If you are testing Coherence*Extend failover in the face of a network, machine, or NIC failure, you should enable connection heartbeats on both the <tcp-initiator/> and <tcp-acceptor/>. For example:
    Client cache config:
    <remote-cache-scheme>
      <scheme-name>extend-direct</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address system-property="tangosol.coherence.extend.address">localhost</address>
              <port system-property="tangosol.coherence.extend.port">9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>2s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <heartbeat-interval>10s</heartbeat-interval>
          <heartbeat-timeout>5s</heartbeat-timeout>
          <request-timeout>15s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
    Proxy cache config:
    <proxy-scheme>
      <scheme-name>example-proxy</scheme-name>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count system-property="tangosol.coherence.extend.threads">2</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address system-property="tangosol.coherence.extend.address">localhost</address>
            <port system-property="tangosol.coherence.extend.port">9099</port>
          </local-address>
        </tcp-acceptor>
        <outgoing-message-handler>
          <heartbeat-interval>10s</heartbeat-interval>
          <heartbeat-timeout>5s</heartbeat-timeout>
          <request-timeout>15s</request-timeout>
        </outgoing-message-handler>
      </acceptor-config>
      <autostart system-property="tangosol.coherence.extend.enabled">true</autostart>
    </proxy-scheme>
    This is because it may take the TCP/IP stack a considerable amount of time to detect that its peer is unavailable after a network, machine, or NIC failure (this is O/S dependent).
    Jason
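Jason's point about heartbeats can be illustrated with a small Python sketch (the classes and names here are hypothetical, not part of the Coherence API): an application-level ping that goes unanswered past the heartbeat timeout marks the connection dead immediately, instead of waiting for the TCP stack, and the client can then ask its address provider for the next proxy.

```python
class AddressProvider:
    """Cycles through candidate proxy addresses, mimicking a Coherence AddressProvider."""
    def __init__(self, addresses):
        self.addresses = addresses
        self.index = 0

    def next_address(self):
        addr = self.addresses[self.index % len(self.addresses)]
        self.index += 1
        return addr


class HeartbeatMonitor:
    """Declares the peer dead once a ping has gone unanswered for heartbeat_timeout seconds."""
    def __init__(self, heartbeat_interval, heartbeat_timeout):
        self.heartbeat_interval = heartbeat_interval  # how often pings are sent (not exercised here)
        self.heartbeat_timeout = heartbeat_timeout    # how long to wait for an ack
        self.last_ack = None

    def tick(self, now, peer_responding):
        if self.last_ack is None:
            self.last_ack = now
        if peer_responding:
            self.last_ack = now          # an ack resets the timer
            return "healthy"
        if now - self.last_ack >= self.heartbeat_timeout:
            return "dead"                # timeout exceeded: give up without waiting for TCP
        return "waiting"


provider = AddressProvider(["hostB:9099", "hostC:9099"])
monitor = HeartbeatMonitor(heartbeat_interval=10, heartbeat_timeout=5)

current = provider.next_address()                    # connect to B first
assert monitor.tick(0, peer_responding=True) == "healthy"
# B's machine drops off the network: pings go unanswered but the socket looks open.
assert monitor.tick(2, peer_responding=False) == "waiting"
state = monitor.tick(8, peer_responding=False)       # 8s since last ack > 5s timeout
if state == "dead":
    current = provider.next_address()                # fail over to the next proxy
```

Without the heartbeat elements (as in the questioner's original config), nothing ever flips the connection state to dead, which matches the observed behavior of the client retrying host B forever.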

  • ASA 5520 Not Failing over

        Hi All
    I'm preparing a lab and I have two ASA 5520s. I have configured them for failover so the primary's config replicates over to the secondary. They are connected via a 3560 switch; the switch ports are configured as access ports on VLAN 1, and spanning-tree portfast is enabled.
    Firewall (Primary)
    Cisco Adaptive Security Appliance Software Version 9.1(1)
    Device Manager Version 7.1(2)
    Compiled on Wed 28-Nov-12 10:38 by builders
    System image file is "disk0:/asa911-k8.bin"
    Config file at boot was "startup-config"
    DEO-FW-01 up 5 hours 1 min
    failover cluster up 5 hours 1 min
    Hardware:   ASA5520, 2048 MB RAM, CPU Pentium 4 Celeron 2000 MHz,
    Internal ATA Compact Flash, 256MB
    BIOS Flash M50FW080 @ 0xfff00000, 1024KB
    Encryption hardware device : Cisco ASA-55xx on-board accelerator (revision 0x0)
                                 Boot microcode        : CN1000-MC-BOOT-2.00
                                 SSL/IKE microcode     : CNLite-MC-SSLm-PLUS-2.03
                                 IPSec microcode       : CNlite-MC-IPSECm-MAIN-2.08
                                 Number of accelerators: 1
    0: Ext: GigabitEthernet0/0  : address is 001e.f762.bc44, irq 9
    1: Ext: GigabitEthernet0/1  : address is 001e.f762.bc45, irq 9
    2: Ext: GigabitEthernet0/2  : address is 001e.f762.bc46, irq 9
    3: Ext: GigabitEthernet0/3  : address is 001e.f762.bc47, irq 9
    4: Ext: Management0/0       : address is 001e.f762.bc43, irq 11
    5: Int: Not used            : irq 11
    6: Int: Not used            : irq 5
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 150            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    Encryption-DES                    : Enabled        perpetual
    Encryption-3DES-AES               : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 750            perpetual
    Total VPN Peers                   : 750            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    Cluster                           : Disabled       perpetual
    This platform has an ASA 5520 VPN Plus license.
    Here is the failover config
    failover
    failover lan unit primary
    failover lan interface SFO GigabitEthernet0/3
    failover replication http
    failover link SFO GigabitEthernet0/3
    failover interface ip SFO 10.10.16.25 255.255.255.248 standby 10.10.16.26
    Here is the Show failover output
    Failover On
    Failover unit Primary
    Failover LAN Interface: SFO GigabitEthernet0/3 (Failed - No Switchover)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 5 seconds, holdtime 25 seconds
    Interface Policy 1
    Monitored Interfaces 3 of 160 maximum
    failover replication http
    Version: Ours 9.1(1), Mate Unknown
    Last Failover at: 12:53:27 UTC Mar 14 2013
            This host: Primary - Active
                    Active time: 18059 (sec)
                    slot 0: ASA5520 hw/sw rev (2.0/9.1(1)) status (Up Sys)
                      Interface inside (10.10.16.1): No Link (Waiting)
                      Interface corporate_network_traffic (10.10.16.21): Unknown (Waiting)
                      Interface outside (193.158.46.130): Unknown (Waiting)
                    slot 1: empty
            Other host: Secondary - Not Detected
                    Active time: 0 (sec)
                      Interface inside (10.10.16.2): Unknown (Waiting)
                      Interface corporate_network_traffic (10.10.16.22): Unknown (Waiting)
                      Interface outside (193.158.46.131): Unknown (Waiting)
    Stateful Failover Logical Update Statistics
            Link : SFO GigabitEthernet0/3 (Failed)
    Here is the output for the secondary firewall
    Cisco Adaptive Security Appliance Software Version 9.1(1)
    Device Manager Version 6.2(5)
    Compiled on Wed 28-Nov-12 10:38 by builders
    System image file is "disk0:/asa911-k8.bin"
    Config file at boot was "startup-config"
    ciscoasa up 1 hour 1 min
    failover cluster up 1 hour 1 min
    Hardware:   ASA5520, 2048 MB RAM, CPU Pentium 4 Celeron 2000 MHz,
    Internal ATA Compact Flash, 256MB
    BIOS Flash M50FW080 @ 0xfff00000, 1024KB
    Encryption hardware device : Cisco ASA-55xx on-board accelerator (revision 0x0)
                                 Boot microcode        : CN1000-MC-BOOT-2.00
                                 SSL/IKE microcode     : CNLite-MC-SSLm-PLUS-2.03
                                 IPSec microcode       : CNlite-MC-IPSECm-MAIN-2.08
                                 Number of accelerators: 1
    0: Ext: GigabitEthernet0/0  : address is 0023.0477.12e4, irq 9
    1: Ext: GigabitEthernet0/1  : address is 0023.0477.12e5, irq 9
    2: Ext: GigabitEthernet0/2  : address is 0023.0477.12e6, irq 9
    3: Ext: GigabitEthernet0/3  : address is 0023.0477.12e7, irq 9
    4: Ext: Management0/0       : address is 0023.0477.12e3, irq 11
    5: Int: Not used            : irq 11
    6: Int: Not used            : irq 5
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 150            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    Encryption-DES                    : Enabled        perpetual
    Encryption-3DES-AES               : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 750            perpetual
    Total VPN Peers                   : 750            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    Cluster                           : Disabled       perpetual
    This platform has an ASA 5520 VPN Plus license.
    Here is the failover config
    failover
    failover lan unit secondary
    failover lan interface SFO GigabitEthernet0/3
    failover replication http
    failover link SFO GigabitEthernet0/3
    failover interface ip SFO 10.10.16.26 255.255.255.248 standby 10.10.16.25
    Here is the Show failover output
    Failover On
    Failover unit Secondary
    Failover LAN Interface: SFO GigabitEthernet0/3 (up)
    Unit Poll frequency 1 seconds, holdtime 15 seconds
    Interface Poll frequency 5 seconds, holdtime 25 seconds
    Interface Policy 1
    Monitored Interfaces 0 of 160 maximum
    failover replication http
    Version: Ours 9.1(1), Mate Unknown
    Last Failover at: 12:58:31 UTC Mar 14 2013
    This host: Secondary - Active
    Active time: 3630 (sec)
    slot 0: ASA5520 hw/sw rev (2.0/9.1(1)) status (Up Sys)
    slot 1: empty
    Other host: Primary - Not Detected
    Active time: 0 (sec)
    Stateful Failover Logical Update Statistics
    Link : SFO GigabitEthernet0/3 (up)
    Interface g0/3 is up on both units (via the no shutdown command). However, I get the error "No Active mate detected".
    Please could someone help?
    Many thanks

    Hello James,
    You have configured the IPs on the failover interface incorrectly.
    Let me point it out
    failover
    failover lan unit primary
    failover lan interface SFO GigabitEthernet0/3
    failover replication http
    failover link SFO GigabitEthernet0/3
    failover interface ip SFO 10.10.16.25 255.255.255.248 standby 10.10.16.26
    You are telling the primary device to use IP address 10.10.16.25, with 10.10.16.26 as the standby for the secondary firewall.
    Now let's look at the configuration on the secondary unit:
    failover
    failover lan unit secondary
    failover lan interface SFO GigabitEthernet0/3
    failover replication http
    failover link SFO GigabitEthernet0/3
    failover interface ip SFO 10.10.16.26 255.255.255.248 standby 10.10.16.25
    On the secondary you are saying the active IP will be 10.10.16.26 and the standby will be 10.10.16.25.
    You have it backwards, and based on the output I would say you configured all of the interfaces that way.
    So please change it and make it identical on all of the interfaces, so both devices agree on which IP they should use when active and when standby (these HAVE to match).
    Hope that I could help
    Julio Carvajal
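For reference, the failover addressing on the secondary should match the primary exactly — the active IP first, the standby IP second — with only the unit designation line differing:

```
failover
failover lan unit secondary
failover lan interface SFO GigabitEthernet0/3
failover replication http
failover link SFO GigabitEthernet0/3
failover interface ip SFO 10.10.16.25 255.255.255.248 standby 10.10.16.26
```

Apply the same active/standby ordering to each monitored data interface as well.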

  • 2 node Sun Cluster 3.2, resource groups not failing over.

    Hello,
    I am currently running two V490s connected to a Sun StorageTek 6540 array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can process the resource groups properly. The other node appears to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month, and have learned a lot and fixed a lot of other issues that came up, but the cluster is still not working properly. Here is some output.
    bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
    clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
    bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
    clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
    clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
    bash-3.00# clresource status
    === Cluster Resources ===
    Resource Name       Node Name          State     Status Message
    -------------       ---------          -----     --------------
    ftp-rs              coins01:zftp01     Offline   Offline
                        coins02:zftp01     Offline   Offline - LogicalHostname offline.
    xprcoins            coins01:zcoins01   Offline   Offline
                        coins02:zcoins01   Offline   Offline - LogicalHostname offline.
    xprcoins-rs         coins01:zcoins01   Offline   Offline
                        coins02:zcoins01   Offline   Offline - LogicalHostname offline.
    DataWatch-hasp-rs   coins01:zcoins01   Offline   Offline
                        coins02:zcoins01   Offline   Offline
    BDSarchive-res      coins01:zcoins01   Offline   Offline
                        coins02:zcoins01   Offline   Offline
    I am really at a loss here. Any help appreciated.
    Thanks

    My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
    From your description I can guess that you want to failover a resource group between non-global zones. And it looks like the zone coins01:zcoins01 is reported to not be in cluster membership.
    Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify if the zone zcoins01 is really properly up and running.
    Specifically you need to verify that it reached the multi-user milestone and all cluster related SMF services are running correctly (ie. verify "svcs -x" in the non-global zone).
    You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e. in the global zone); a VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups are mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
    Regards
    Thorsten
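The checks above can be run roughly as follows (zone and resource-group names taken from the thread; exact steps depend on your configuration):

```
# From the global zone: confirm coins01 is a cluster member
clnode status

# Verify the non-global zone reached multi-user and its SMF services are healthy
zlogin zcoins01 svcs -x

# Then retry the switchover using the node:zonename form
clresourcegroup switch -n coins01:zcoins01 DataWatch-rg
```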
