Fail-Over?

Hi, I'm looking into a fail-over solution for our Essbase server. I'm currently looking at EDS's clustering solution. Could you please share your experience with it if any of you have implemented it? Also, I'm wondering whether anyone has been successful in implementing an unsupported fail-over solution such as MS Clustering or anything else. If you have any unique ideas for fail-over, I'd appreciate it if you could share them with me.
Thank you,
Tony

Hi Tony, I believe the EDS solution uses industry-standard J2EE clustering, which has wide vendor support and has been around longer (i.e. is more mature). I haven't personally set up J2EE clustering (or MS clustering either), so I can't offer any personal advice on the topic.
Tim Tow
Applied OLAP, Inc

Similar Messages

  • SQL Server 2014 AlwaysOn HA takes 8-14 seconds to fail over; application-side timeouts occur

    Hi All,
    I have a very similar post in the SQL Server 2014 forums too (https://social.technet.microsoft.com/Forums/sqlserver/en-US/adb5e338-907e-4405-aa62-d3ea93c7a98a/sql-server-2014-always-on-ha-takes-814-seconds-to-fail-over-application-side-timeouts-occur?forum=sqldisasterrecovery) -
    advice in the end was to post a question here.
    SQL Server Nodes, 2014 (12.0.2480.0)
    1 Share witness (on separate subnet)
    1 Cluster
    1 Listener
    I have been testing the response time to failovers - both manual (right-click, fail over in SSMS) and automatic (shut down the primary host). The way I am testing response is to have an SSMS query running on my desktop, connected to the listener, querying a small table, and hitting Execute.
    The query response time, from execute to receiving the result, has been between 8 and 14 seconds in my testing. My previous experience (in a separate environment) showed around 2-second failover times in a very similar configuration.
    The availability DB is 200 MB and is not actively used. The nodes are synchronised.
    SQL Server Hosts: Windows 2012, 2 cpu, 8gb RAM.
    Questions:
    1: It's a big question, but what should I expect for a 'normal' failover time? Keep in mind this scenario is about as simple as it gets.
    2: As it stands, an 8 to 14 second 'outage' could cause some applications to time out. Or am I being unreasonable? I am seeing this very simple query in SSMS time out with:
    Msg 983, Level 14, State 1, Line 2
    Unable to access availability database 'DATABASE' because the database replica is not in the PRIMARY or SECONDARY role. Connections to
    an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later.
    Cluster logs are long - this section accounts for 8 seconds of the 11-second outage I experienced. I can supply the full log if required. Also, this log covers just the 2 cluster nodes; I removed the witness share to keep things as simple as possible.
    00001090.00002128::2015/02/25-03:05:08.255 INFO  [GEM] Node 2: Deleting [1:65 , 1:71] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:10.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:11.888 INFO  [GEM] Node 2: Deleting [1:72 , 1:73] (both included) as it has been ack'd by every node
    00001090.00002698::2015/02/25-03:05:11.889 INFO  [GUM] Node 2: Processing RequestLock 2:49
    00001090.00002128::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 67)
    00001090.00002698::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: executing request locally, gumId:68, my action: /dm/update, # of updates: 1
    00001090.00002128::2015/02/25-03:05:12.890 INFO  [GEM] Node 2: Deleting [1:74 , 1:74] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:15.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:16.988 INFO  [GUM] Node 2: Processing RequestLock 1:28
    Thanks in advance.
    Keegan

    Hi Keegan,
    From these event logs, what I can see is that "Sending request Netname" is where the time went.
    Could you please tell us the network configuration of the cluster nodes?
    If I recall correctly, it is recommended to keep only the TCP/IP protocol and disable NetBIOS over TCP/IP on the "Private Network", and also not to configure DNS/WINS or a default gateway for the "Private Network":
    https://support.microsoft.com/kb/258750?wa=wsignin1.0
    After that, please test again.
    Best Regards,
    Elton JI
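    For the application-side timeouts, one mitigation that works regardless of the exact failover duration is a short client-side retry loop around transient errors such as Msg 983, issued while the listener redirects to the new primary. Below is a minimal sketch in Java/JDBC; the listener host name, credentials, table name, and retry budget are illustrative assumptions, not values from this thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: ride out an 8-14 second availability-group failover by
    // retrying a transient SQLException instead of surfacing it.
    public class ListenerRetryDemo {
        // Placeholder listener DNS name, database, and credentials.
        static final String URL =
            "jdbc:sqlserver://aglistener.example.com:1433;databaseName=MyAgDb";

        public static void main(String[] args) throws Exception {
            int maxAttempts = 10;                  // ~20 s budget with a 2 s backoff
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try (Connection con = DriverManager.getConnection(URL, "user", "password");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT TOP 1 1 FROM dbo.SmallTable")) {
                    while (rs.next()) { /* consume the row */ }
                    System.out.println("Succeeded on attempt " + attempt);
                    return;
                } catch (SQLException e) {
                    // e.g. error 983: replica not yet in PRIMARY or SECONDARY role.
                    System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                    Thread.sleep(2000);            // back off, then try again
                }
            }
            throw new RuntimeException("Gave up after " + maxAttempts + " attempts");
        }
    }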

  • Front End pool failed over

    Hi all,
    1. I set up a pool with three Front End servers (the FQDN of the pool is pool.site1.sip96x2.com and it points to the IP addresses of the three Front End servers). Everything works fine, but when I disable the network interface on FE1 and FE2, the Lync clients are disconnected. I don't clearly understand how Lync clients fail over within a pool. Please clarify this for me.
    2. I have two central sites (a Root site and a Primary site; they have different domains, sip96x2.com and site1.sip96x2.com). The simple URL dialin points to the Front End server at the Root site. So if the link between the Root site and the Primary site is down, how can the users at the Primary site connect to the dialin URL?
    3. In building the topology for the Front End pool, I checked "Override FQDN internal web service" and the FQDN is "poolint.site1.sip96x2.com". I created three A records for "poolint.site1.sip96x2.com" pointed to the three IP addresses of the Front End servers. Is that right?
    Thanks so much!

    Ah OK. Well, first thing, if I am reading this correctly: pool pairing Standard with Enterprise is not supported. You should only pair Standard with Standard and Enterprise with Enterprise (even though Topology Builder won't stop you). Take a look here for supported scenarios: http://technet.microsoft.com/en-us/library/jj204697.aspx
    To deal with the simple URLs in the event of failover you need to add them using Powershell. Take a look at this article which explains and gives an example: http://blogs.perficient.com/microsoft/2012/01/configuring-simple-urls-for-multiple-lync-pools/
    Georg Thomas | Lync MVP

  • How Front End pool deals with fail over to keep user state?

    Hello to all. I searched a lot of articles to understand how Lync 2010 keeps user state if a failure happens on a Front End pool node, but didn't find anything clear.
    I found some Microsoft information about this topic: "The Front End Servers maintain transient information—such as logged-on state and control information for an IM, Web, or audio/video (A/V) conference—only for the duration of a user's session. This configuration is an advantage because in the event of a Front End Server failure, the clients connected to that server can quickly reconnect to another Front End Server that belongs to the same Front End pool."
    As I read it, the client uses DNS to reconnect to another Front End in the pool. When it reconnects to an available server, do they lose what they were doing in the Lync client? Can the server that is now hosting their session recover all of the "user's session data"? If so, how?
    Regards, EEOC.

    The presence information and other dynamic user data is stored in the RTCDYN database on the backend SQL database in a 2010 pool:
    http://blog.insidelync.com/2011/04/the-lync-server-databases/  If you fail over to another pool member, this pool member has access to the same data.
    Ongoing conversations and the like are cached at the workstation.
    SWC Unified Communications

  • Is it possible to add hyper-V fail over clustering afterwards?

    Hi,
    We are testing Windows 2012 R2 Hyper-V using only one standalone host, without failover clustering, with a few virtual machines. Is it possible to add failover clustering afterwards, add a second Hyper-V node and a shared disk, and move the virtual machines there, or do we have to install both nodes from scratch?
    ~ Jukka ~

    Hi Jukka,
    In addition, before you build a Hyper-V failover cluster, please review the requirements in the article below:
    http://technet.microsoft.com/en-us/library/jj863389.aspx
    Best Regards
    Elton Ji

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody,
    in case at your site you:
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks), where you have yourself created additional CRS resources to handle single-node db instances, their listeners, their disks and so on (which are started only on one node at a time, can fail on that node, and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
    - experience delays (especially with multipath) and find fake CRS resources, etc., whenever you dismount diskgroups from one node and mount them on another.
    So (if both conditions are true) you might be interested in this story; please keep reading for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances, which means that any db you create usually has one instance started on each node, and all instances read / write the same disks from each node.
    So, the ASM instance on each node will mount diskgroups in Shared Mode, because the same diskgroups are also mounted by the ASM instances on the other nodes. ASM instances have an spfile parameter CLUSTER_DATABASE=true (and this parameter implies, among other things, that every diskgroup is mounted in Shared Mode).
    In this context, it is quite obvious that Oracle strongly recommends putting OCR and voting disks inside ASM: this diskgroup (usually called CRS_DATA) will become diskgroup number 1, and ASM instances will mount it before CRS starts. Then, additional diskgroups will be added by users for the DATA, REDO, FRA etc. of each RAC db, and will be mounted later when a RAC db instance starts on the specific node.
    In the case of a fail-over cluster, where instances are not RAC type and there is only one instance running (on one of the nodes) at any time for each db, it is different. The diskgroups of db instances don't need to be mounted in Shared Mode, because they are used by only one instance at a time (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle's advice and put OCR and voting disks inside ASM, then:
    - at installation, OUI will start the ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted in Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted in Shared Mode too, even if you take care that they are mounted by one ASM instance at a time.
    At our site, for our three-node cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc.);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA), so 30 diskgroups per node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should get the resources of the failing node; in the worst case: one node with 60 diskgroups (20 instances), the other with 30 diskgroups (10 instances);
    - in case two nodes failed, the only surviving node would not be able to mount the additional diskgroups (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted and their db instances stopped (they are not Production instances).
    But it didn't work, since ASM has the parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups; you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63, and diskgroups mounted on other nodes cannot reuse that number). So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups from one node and mount them on another, there are delays in the range of seconds (especially with multipath). Also, we found in the CRS log that, whenever we mounted diskgroups (on one node only), additional fake resources of type ora*.dg were created on the fly behind the scenes, maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted (once again, instances are single-node here, not RAC type).
    That's all.
    Did anyone run into similar problems?
    We opened an SR with Oracle asking what options we have here, and we were disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files also be in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): if the DATA dg gets corrupted, you can restore a full backup plus archived redo logs plus online redo logs (otherwise you will stop at the latest archived log).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for any additional resources you need (Oracle Agents, backup agents, file systems, monitoring tools, etc.).
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution;
    - then they told us to use RAC One Node, which actually has some better features: in case of a planned fail-over it might be able to migrate client sessions without causing a reconnect (for SELECTs only, not for a running transaction). But we already have a few fail-over clusters; we cannot change them all.
    So we plan to move OCR and voting disks onto block devices (we think that the other solution, which needs a shared file system, would take longer).
    Thanks Marko for pointing us to the OCFS2 pros / cons.
    We asked Oracle for confirmation that this is supported; they said yes, but it is discouraged (and it also doesn't work with OUI or ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here and, if everything is fine, after a while we'll do it on the Prod ones too.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • Failover cluster server - File Server role is clustered - Shadow copies do not seem to travel to other node when failing over

    Hi,
    New to 2012 and implementing a clustered environment for our File Services role. I have got to the point where I have successfully configured the shadow copy settings.
    Have a large (15 TB) disk: S:
    Have a VSS drive (volume shadow copy drive): V:
    Have successfully configured the shadow copy settings through Windows Explorer.
    Created dependencies in the Failover Cluster Manager console whereby S: depends on V:
    However, when I fail over the resource and browse the Client Access Point share, there are no entries under the "Previous Versions" tab.
    When I visit the S: drive in Windows Explorer and open the shadow copy dialogue box, there are entries showing the times and dates of the shadow copies that ran on the original node. So the disk knows about the shadow copies that ran on the original node, but the "Previous Versions" tab has no entries to display.
    This is on a 2012 server (NOT the R2 version).
    Can anyone explain what might be the reason? Do I have an "issue" or is this by design?
    All help appreciated!
    Kathy
    Kathleen Hayhurst Senior IT Support Analyst

    Hi,
    Please first check the requirements in the following article:
    Using Shadow Copies of Shared Folders in a server cluster
    http://technet.microsoft.com/en-us/library/cc779378(v=ws.10).aspx
    Cluster-managed shadow copies can only be created in a single quorum device cluster on a disk with a Physical Disk resource. In a single node cluster or majority node set cluster without a shared cluster disk, shadow copies can only be created and managed
    locally.
    You cannot enable Shadow Copies of Shared Folders for the quorum resource, although you can enable Shadow Copies of Shared Folders for a File Share resource.
    The recurring scheduled task that generates volume shadow copies must run on the same node that currently owns the storage volume.
    The cluster resource that manages the scheduled task must be able to fail over with the Physical Disk resource that manages the storage volume.

  • Which role do I need, DFS or File Server, on a failover cluster on Server 2012 R2?

    What I want to achieve is to share all my user data files in a central location and have them highly available all the time, whether it's a general share or folder-redirection data. But I'm a bit confused. I have failover clustering set up on Server 2012, and now I would like to add DFS as a role; but there is another role called File Server, and virtually it does the same thing as DFS, i.e. it creates a namespace share that can be accessed even if one of the nodes goes down. Now I am thinking that DFS does replication between two physical locations, while failover clustering works slightly differently, and with File Server it pretty much does the same thing except for replicating data from one drive to another. What do you suggest I do, or did I get the concept wrong like a noob?

    DFS and Failover Clustering for file shares provide a similar end result for file access, but they are significantly different implementations.
    Clustering provides high availability to files by presenting shared access to a set of files served from a cluster. With 2012 R2, Microsoft added the ability to create a Scale-Out File Server that even allows all nodes of the cluster to serve access to the files, for a higher level of performance and other great things. The bottom line with failover clusters for files is that there is a single copy of the file presented from the cluster.
    DFS, on the other hand, provides high availability to files by keeping multiple copies of the file (a copy in two or more locations) and presenting a namespace that allows access to the file through any of the network paths. DFS works very well for files that are primarily read-only. When you get into a situation where there is a lot of updating of the shared files, DFS is not a very good solution. There are ways to implement DFS for read/write files, but it generally requires a good knowledge of how the files are used and how you want to manage them.
    The key to answering your question comes in your first sentence: "I want to share all my user data files in a central location and to be highly available all the time". My initial reaction to this is that "central location" means a failover cluster - there is only a single copy of the file. However, "all the time" can be compromised by network failures to the central site. Remote sites would not have access if they can't access the central site. DFS provides the ability to have copies remotely, but then if you allow updating at multiple sites, you have to manage the merging of the changes, among other things.
    . : | : . : | : . tim

  • Time out fail over

    On this system:
    OS: Solaris 10 11/06 s10s_u3wos_10 SPARC
    Cluster version: 3.1u4
    A - Normally, after how much time is a resource moved to the other node if IPMP fails (e.g. the gateway is unreachable)?
    B - What happens if IPMP fails on both servers? Are packages kept on their nodes?
    C - Is there a timeout over 10 minutes in the cluster configuration?

    You have 2 options - you could increase the back-end timeout to a very large value, so that the server waits rather than timing out and failing over, or do something like:
    <Object name="default">
    NameTrans fn=map from=/ name=reverse-proxy-/
    </Object>
    <Object name="reverse-proxy-/">
    Route fn=set-origin-server server=server1
    ObjectType fn=http-client-config timeout=600
    </Object>
    see - http://docs.sun.com/app/docs/doc/820-4841/gdhrg?a=view
    (or simply disable any fail-over, but have different individual servers distributing the load across different applications): split your URI or application so that each application goes to 1 back-end server. For example, let us say you have 2 Java applications that you would like JBoss to do the job for; you could edit your obj.conf (or <vs>-obj.conf, depending on your configuration) so that it looks like this:
    <Object name="default">
    NameTrans fn=map from=/ name=reverse-proxy-/
    </Object>
    <Object name="reverse-proxy-/">
    <If $uri =~ /foo1>
    Route fn=set-origin-server server=server1
    </If>
    <If $uri =~ /foo2>
    Route fn=set-origin-server server=server2
    </If>
    </Object>
    btw - I will file an RFE on your behalf for this feature.
    btw - i will file a RFE on your behalf for this feature.

  • Are replica-aware stubs stuck in an infinite loop on fail-over?

              Hi
              Any help on this Appreciated
              Consider this scenario: four WebLogic instances run in a cluster, and a replica-aware stub (a stateless bean with idempotent methods) finds that a particular method fails on one server, so it redirects the request to another server, but the same method fails on all the servers. What is going to happen? Will it throw some exception, or will it keep looping, redirecting the method request to all the servers round-robin?
              Regards
              Aruna
              

              Aruna,
              A stateless session bean whose methods have been declared idempotent will automatically
              retry on another service provider in a fail-over situation. When a fail-over situation
              occurs, the stub refreshes its list of service providers. Note: Just because your
              method call fails, doesn't mean it's a fail-over situation.
              Jane
              
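    To make the retry semantics concrete: the behaviour described above means each provider in the stub's list is tried at most once per call, so a method that fails everywhere ends in an exception rather than an endless round-robin. A simplified, self-contained sketch follows; this is not WebLogic's actual stub code, and every name in it is invented for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of fail-over retry for an idempotent method: each replica is
    // tried at most once, and when the list is exhausted the failure is
    // surfaced to the caller instead of looping forever.
    public class IdempotentRetryDemo {

        interface Provider {
            String invoke(String request) throws Exception;
        }

        static String invokeWithFailover(List<Provider> replicas, String request)
                throws Exception {
            Exception last = null;
            for (Provider p : replicas) {          // one attempt per replica
                try {
                    return p.invoke(request);
                } catch (Exception e) {
                    last = e;                      // remember, then try the next one
                }
            }
            // All replicas failed: give up rather than redirect again.
            throw new Exception("all replicas failed", last);
        }

        public static void main(String[] args) {
            List<Provider> replicas = new ArrayList<>();
            for (int i = 1; i <= 4; i++) {         // four servers, all failing
                final int n = i;
                replicas.add(req -> { throw new Exception("server " + n + " failed"); });
            }
            try {
                invokeWithFailover(replicas, "doWork");
            } catch (Exception e) {
                System.out.println("Caller sees: " + e.getMessage());
            }
        }
    }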

  • Exception while failing over to 2nd RAC Node

    We are using WebLogic 10.3.4. Our setup is that we have a web application (a Tapestry front-end web UI) and an EJB 2.1 back end talking to the Oracle database. The EJBs are CMP. Our product was always just standalone, and it wasn't until this release that we needed to make it work with RAC. To get this to work we followed the model of having a multi data source with data sources pointing to our RAC nodes. We have two types of data sources that we use: persistent and non-persistent. And we are using the Oracle thin driver - non-XA for RAC service instances, supporting global transactions.
    When we fail over to the 2nd node we get a nasty exception in our GUI, but after logging out and logging back in we are fine.
    My question is: I assumed I shouldn't have to restart our web application and it should have stayed up? Or is there something wrong with our setup?
    Thanks,
    Ian

    Showing us the exception and/or the error messages at the server might help...
    Note that failing over does not save any ongoing connection or transaction that had been made to the dead RAC node... Does your web-app get-use-close JDBC connections on a per-user-invoke basis, or does it hold onto connections?
    Joe
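    For reference, the "get-use-close on a per-user-invoke basis" pattern Joe describes looks roughly like the sketch below; the JNDI name, table, and column are placeholders. Because the connection is borrowed and returned within a single request, a connection that died with the failed RAC node is discarded by the pool instead of being reused by the application.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // Sketch of per-invoke get-use-close (modern try-with-resources syntax):
    // the connection lives only for the duration of one request.
    public class PerInvokeJdbc {
        public String lookupName(int id) throws Exception {
            DataSource ds = (DataSource)
                new InitialContext().lookup("jdbc/MyMultiDataSource"); // placeholder JNDI name
            try (Connection con = ds.getConnection();                  // get
                 PreparedStatement ps =
                     con.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setInt(1, id);                                      // use
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }                                                          // close -> back to pool
        }
    }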

  • Firefox Proxy Fail-over is not working correctly

    I am in a corporate environment where we must use a complex auto-proxy, configured via an automatic proxy configuration URL of http://proxyconf/proxy.pac. I am seeing an intermittent failure with Firefox 3.6.13 where a site that fails in Firefox will load in IE after a delay (e.g. it works for half an hour, then fails for a while, etc.).
    By using Wireshark and tracing the packets, I have identified that a proxy server is intermittently failing, and Firefox is failing to try the second proxy. The auto proxy rule that is being invoked is:
    if (!isResolvable(host)) return "PROXY 172.16.39.201:8080; PROXY 10.241.32.28:8080";
    The problem is that Firefox is never failing over - it tries the 172 address 6 times in a row, then gives up and displays the "The proxy server is refusing connections" "Firefox is configured to use a proxy server that is refusing connections." "* Check the proxy settings to make sure that they are correct." "* Contact your network administrator to make sure the proxy server is working." error message. It continues with this behavior regardless of how many attempts, reloads, restarts are tried.
    IE on the other hand will try and fail with the 172 address, and then start using the 10. address (which works correctly). Several other applications also work correctly, such as IRC clients.
    Obviously the corporate proxy that is failing must be fixed, however Firefox is failing to utilitize the 2nd proxy after the first one fails.
    Seems like a bug.
    Is there some easy way for me to replace the proxy file with my own file? E.g. replace http://proxyconf/proxy.pac with file://c:\..., or use some add-on?
    It must be an autoproxy script, as there is no single proxy that I can use for all addresses.

    You can correct this issue by forcing the file blocklist.xml to update, or wait until Firefox updates the file.
    That update will remove the severity="0" flags in the file that cause the problem.
    See:
    * [/questions/832793?page=2#answer-198407]
    * http://forums.mozillazine.org/viewtopic.php?p=10899869#p10899869
    *[https://bugzilla.mozilla.org/show_bug.cgi?id=663722 Bug 663722] - The blocklist output is including severity="0" where it shouldn't be

  • Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()

    Hi
    I am getting below error in my issproxy.log file. I wanted to see the source of this URL.cpp file to find out why it is failing. I am not able to open them using DLL decompiler as well.
    Could anyone tell me where can I get the source code for iisproxy.dll and iisforward.dll ?
    This request is failing only when the request is routed from IIS.
    ================New Request: [/GLMS/index.jsp.wlforward] =================
    Mon Nov 24 14:19:48 2014 <503614168189882> SSL must be used
    Mon Nov 24 14:19:48 2014 <503614168189882> Initializing SSL
    Mon Nov 24 14:19:48 2014 <503614168189881> INFO: Initializing SSL library
    Mon Nov 24 14:19:48 2014 <503614168189881> timer thread starting
    Mon Nov 24 14:19:48 2014 <503614168189881> Loaded 1 trusted CA's
    Mon Nov 24 14:19:48 2014 <503614168189881> sysMkdirs() on 'C:\windows\TEMP\_wl_proxy':
    Mon Nov 24 14:19:48 2014 <503614168189881> getWLFilePath: Complete File name = [C:\windows\TEMP\_wl_proxy\orbrandom.txt]
    Mon Nov 24 14:19:48 2014 <503614168189881> INFO: Successfully initialized SSL
    Mon Nov 24 14:19:48 2014 <503614168189882> SSL configured successfully
    Mon Nov 24 14:19:48 2014 <503614168189882> resolveRequest: wlforward: /TEST/index.jsp
    Mon Nov 24 14:19:48 2014 <503614168189882> URI is /GLMS/index.jsp, len=15
    Mon Nov 24 14:19:48 2014 <503614168189882> Request URI = [/TEST/index.jsp]
    Mon Nov 24 14:19:48 2014 <503614168189882> attempt #0 out of a max of 50
    Mon Nov 24 14:19:48 2014 <503614168189882> Trying a pooled connection for 'XX.XX.XX.XX/7002/7002'
    Mon Nov 24 14:19:48 2014 <503614168189882> getPooledConn: No more connections in the pool for Host[XX.XX.XX.XX] Port[7002] SecurePort[7002]
    Mon Nov 24 14:19:48 2014 <503614168189882> general list: trying connect to '192.168.17.180'/7002/7002 at line 1306 for '/GLMS/index.jsp'
    Mon Nov 24 14:19:48 2014 <503614168189882> New SSL URL: match = 0 oid = 22
    Mon Nov 24 14:19:48 2014 <503614168189882> Connect returns -1, and error no set to 10035, msg 'Unknown error'
    Mon Nov 24 14:19:48 2014 <503614168189882> EINPROGRESS in connect() - selecting
    Mon Nov 24 14:19:48 2014 <503614168189882> Setting peerID for new SSL connection
    Mon Nov 24 14:19:48 2014 <503614168189882> c0a8 11b4 5a1b 0000                          ....Z...
    Mon Nov 24 14:19:48 2014 <503614168189882> Local Port of the socket is 57397
    Mon Nov 24 14:19:48 2014 <503614168189882> Remote Host xx.xx.xx.xx Remote Port 7002
    Mon Nov 24 14:19:48 2014 <503614168189882> general list: created a new connection to 'XX.XX.XX.XX'/7002 for '/GLMS/index.jsp', Local port: 57397
    Mon Nov 24 14:19:48 2014 <503614168189882> WLS info in sendRequest:  XX.XX.XX.XX:7002 recycled? 0
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept]=[application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept-Encoding]=[gzip, deflate]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept-Language]=[en-IN]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Cookie]=[ADMINCONSOLESESSION=9fTkJypQ229r1ZHx6cQZG8cwHb0T0ssW8TkM7zyzzCVvNzjzDsf2!1779325670; JSESSIONID=GcZVJyXT8WMyv9pT8xGNzndSPCbBCcy1tfm5yRG1DSv8PhT97gv9!1779325670; _WL_AUTHCOOKIE_ADMINCONSOLESESSION=WcL9RbOJFiDqn3LiZO0g]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Host]=[localhost]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)]
    Mon Nov 24 14:19:48 2014 <503614168189882> URL::sendHeaders(): meth='GET' file='/GLMS/index.jsp' protocol='HTTP/1.1'
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept]=[application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept-Encoding]=[gzip, deflate]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept-Language]=[en-IN]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Cookie]=[ADMINCONSOLESESSION=9fTkJypQ229r1ZHx6cQZG8cwHb0T0ssW8TkM7zyzzCVvNzjzDsf2!1779325670; JSESSIONID=GcZVJyXT8WMyv9pT8xGNzndSPCbBCcy1tfm5yRG1DSv8PhT97gv9!1779325670; _WL_AUTHCOOKIE_ADMINCONSOLESESSION=WcL9RbOJFiDqn3LiZO0g]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Host]=[localhost]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Connection]=[Keep-Alive]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-Client-IP]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Proxy-Client-IP]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-Forwarded-For]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-Client-Keysize]=[128]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-WebLogic-KeepAliveSecs]=[30]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-WebLogic-Force-JVMID]=[unset]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-SSL]=[true]
    Mon Nov 24 14:19:48 2014 <503614168189881> WARN: GetSessionCallback: No session match found
    Mon Nov 24 14:19:48 2014 <503614168189881> WARN: DeleteSessionCallback: No match found!!
    Mon Nov 24 14:19:48 2014 <503614168189882> ERROR: SSLWrite failed
    Mon Nov 24 14:19:48 2014 <503614168189882> SEND failed (ret=-1) at 805 of file ..\nsapi\.\URL.cpp
    Mon Nov 24 14:19:48 2014 <503614168189882> *******Exception type [WRITE_ERROR_TO_SERVER] raised at line 806 of ..\nsapi\.\URL.cpp
    Mon Nov 24 14:19:48 2014 <503614168189882> Marking xx.xx.xx.xx:7002 as bad
    Mon Nov 24 14:19:48 2014 <503614168189882> Exception occurred for backend host 'XX.XX.XX.XX/7002/0' while sending request : 'WRITE_ERROR_TO_SERVER [os error=0,  line 806 of ..\nsapi\.\URL.cpp]: '
    Mon Nov 24 14:19:48 2014 <503614168189882> got exception in sendRequest phase: WRITE_ERROR_TO_SERVER [os error=0,  line 806 of ..\nsapi\.\URL.cpp]:  at line 1019; last_error 0
    Mon Nov 24 14:19:48 2014 <503614168189882> INFO: Closing SSL context
    Mon Nov 24 14:19:48 2014 <503614168189882> Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()

    Yes, that is right.
    Essentially you should be doing one of the following on the WebLogic side:
    1) Install certs on WebLogic that were obtained from a commercial CA (like VeriSign, Thawte, etc.). In this case, you will receive the rootCA cert along with the other bundled certs and the private key; these rootCA certs are publicly available (your browser will already be using them).
    2) Use certs signed by your company (companies can maintain their own CA). In this case you should have a rootCA cert from your company.
    3) Use the demo certs that shipped with WebLogic. In this case, the rootCA cert can be obtained from DemoTrust.jks.
    This is documented at http://e-docs.bea.com/wls/docs90/plugins/isapi.html#114851 (it should be the same for any of the plug-ins).
    The Apache plug-in can understand the .crt extension.
    -Vijay
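    If you go with option 3, one way to pull the demo root CA out of DemoTrust.jks is the standard java.security.KeyStore API, as in the sketch below. The keystore path and the well-known demo passphrase are the usual WebLogic defaults; verify both against your own installation before relying on them.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.security.KeyStore;
    import java.security.cert.Certificate;
    import java.util.Base64;
    import java.util.Enumeration;

    // Sketch: list the aliases in DemoTrust.jks and dump each certificate
    // in PEM form so it can be handed to the proxy plug-in as a trusted CA.
    public class DumpDemoTrust {
        public static void main(String[] args) throws Exception {
            String path = args.length > 0 ? args[0]
                : "wlserver_10.3/server/lib/DemoTrust.jks";   // adjust to your WL_HOME
            KeyStore ks = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream(path)) {
                ks.load(in, "DemoTrustKeyStorePassPhrase".toCharArray()); // default passphrase
            }
            Enumeration<String> aliases = ks.aliases();
            while (aliases.hasMoreElements()) {
                String alias = aliases.nextElement();
                Certificate cert = ks.getCertificate(alias);
                try (FileOutputStream out = new FileOutputStream(alias + ".crt")) {
                    out.write("-----BEGIN CERTIFICATE-----\n".getBytes());
                    out.write(Base64.getMimeEncoder(64, "\n".getBytes())
                                    .encode(cert.getEncoded()));
                    out.write("\n-----END CERTIFICATE-----\n".getBytes());
                }
                System.out.println("Wrote " + alias + ".crt");
            }
        }
    }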

  • Failing over Oracle connections in a pool

              Hi,
              This message is probably a bit out of context (I've already posted
              it to the JDBC group). I post here as well, since I guess it's
              the place where people have the most experience with clustering
              and HA. Original posting below...
              Could you please tell me whether, yes or no, connections to an
              Oracle database should fail over (when the database fails over
              to another machine)? I use Oracle's Transparent Application Failover
              (configured via Net8) with Weblogic 6 on Linux and Oracle 8.1.7
              on Solaris/SPARC.
              If this doesn't work in my configuration, is there any configuration
              where it should work? (Another version of Oracle, WLS, OS, ...)
              When I try TAF using the PetStore application, I get exceptions related to not being connected to the database.
              If TAF doesn't work with WebLogic, is there a way to work around
              the problem? Can I catch these exceptions and renew the connections
              in the pool? Or, what else is possible...?
              I'd appreciate any help. I'd like to demonstrate our HA product
              with WLS. If it doesn't work, I'll turn to iPlanet instead. Pity,
              I really like WLS!
              Thanks in advance for any help or advice!
              Regards, Frank Olsen
              

              Hi (Frank ;-)
              I got carried away a bit too fast...
              Some more testing shows that it doesn't work in all cases:
              - when someone is trying to check out the shopping cart when the database fails (and fails over), I get exceptions once the database has restarted on the backup node;
              - the exceptions are related to some transactions being rolled back and Oracle stating that it couldn't safely replay the transactions;
              - browsing the categories still works fine;
              - all access to the shopping cart and sign-in/sign-out causes time-outs and exceptions.
              Any ideas what may cause this problem, please?
              Regards,
              Frank Olsen
              "Frank Olsen" <[email protected]> wrote:
              >TAF worked with WLS 6 on NT with the Oracle 8.1.7 client!
              >Has anyone tested it on Solaris/SPARC?
              >
              >"Frank Olsen" <[email protected]> wrote:
              >>Most of my question below is still valid (in particular concerning whether TAF should work with WLS on some or all platforms and versions).
              >>However, when I tested TAF with the Oracle client (sqlplus) there was also no failover of the (one) connection. I then checked the `V$SESSION' view, and the columns related to failover showed that TAF was not correctly configured. Strange, because I copied the `tnsnames.ora' parameters from the Oracle documentation for TAF.
              >>Has anyone managed to configure and use TAF, with or without WLS?!
              >>Regards,
              >>Frank Olsen
              
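    On the "can I catch these exceptions and renew the connections in the pool?" question above: even where TAF is not transparent through WebLogic's pool, the application can treat post-failover errors as retryable, discard the stale connection, and redo the work on a fresh one. A rough sketch follows; the error codes checked (ORA-25402 "transaction must roll back", ORA-03113 end-of-file, and the Oracle JDBC vendor code 17002) are examples, not an exhaustive or authoritative list.

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Sketch: retry a unit of work on a fresh pool connection when the
    // database has failed over mid-call. In a real deployment the
    // DataSource would come from JNDI; the error-code test is illustrative.
    public class TafAwareRetry {
        interface Work<T> { T run(Connection con) throws SQLException; }

        static <T> T withRetry(DataSource ds, Work<T> work) throws SQLException {
            SQLException last = null;
            for (int attempt = 0; attempt < 2; attempt++) {    // one retry only
                try (Connection con = ds.getConnection()) {    // fresh from the pool
                    return work.run(con);
                } catch (SQLException e) {
                    if (!isFailoverError(e)) throw e;          // real error: rethrow
                    last = e;                                  // stale node: retry once
                }
            }
            throw last;
        }

        static boolean isFailoverError(SQLException e) {
            int code = e.getErrorCode();
            return code == 25402    // ORA-25402: transaction must roll back
                || code == 3113     // ORA-03113: end-of-file on channel
                || code == 17002;   // Oracle JDBC: I/O exception
        }
    }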

  • How do I prevent fail-over?

    This is actually a two-part question.
    First - I need to upgrade a 4402 wireless controller, and I need to update the boot loader and the software. Can I do both at the same time?
    Second - How can I prevent the APs from failing over when I reboot, without having to go into each AP and remove the secondary controller?

    Specific to the AP failover: why don't you deploy AP fallback? When a controller goes offline for any reason, the APs join other controllers. However, when the controller comes back online, they will fall back to the controllers you want with NO intervention from you.
    FYI:
    Note: When an access point's primary controller comes back online, the access point disassociates from the backup controller and reconnects to its primary controller. The access point falls back to its primary controller and not to any secondary controller for which it is configured. For example, if an access point is configured with primary, secondary, and tertiary controllers, it fails over to the tertiary controller when the primary and secondary controllers become unresponsive and waits for the primary controller to come back online so that it can fall back to the primary controller. The access point does not fall back from the tertiary controller to the secondary controller if the secondary controller comes back online; it stays connected to the tertiary controller until the primary controller comes back up.

  • Multiple types of database and fail over clustering

    Hi,
    I have a few questions here.
    1) Can I have 2 types of databases (e.g. OLTP and OLAP) running at the same time on the same machine?
    2) Can I implement a cross fail-over cluster in this situation? Meaning I have 2 machines with OLAP and OLTP database instances installed on them (replicas of each other), the 1st machine running OLTP and the 2nd running OLAP. In the situation where one of the machines fails, the passive instance on the other machine takes over (back to the situation in question 1).
    Thanks
    Regards
    Lai Ling

    Dear All,
    My problem is solved by disabling antivirus.
    thanks for the support
    Sunil
    SUNIL PATEL SYSTEM ADMINISTRATOR
