ASA5505 2nd tunnel to fail over to 2nd network

Hello
We have remote sites using Cisco ASA5505s; we link into the network via Data Centre 'A'. We now have a disaster recovery server at Data Centre 'B' (in a different geographical location). Is it possible to configure the ASA5505 so that if Data Centre 'A' goes down, the ASA5505 would pick up Data Centre 'B', either automatically or on a reboot (the peer address being different at each data centre)?
Help would be appreciated. Many thanks in anticipation.
derek

DNS typically isn't used for security appliances. They want fixed IP addresses, possibly because DNS would allow the potential for a VPN to be redirected to another host, compromising the security of the channel.
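That said, the ASA does support a backup LAN-to-LAN peer: you can list a second peer address on the crypto map, and the 5505 will try the peers in order, moving to Data Centre 'B' when 'A' stops responding. A minimal sketch, assuming an IKEv1 site-to-site tunnel (the ACL, transform-set name, and peer addresses 203.0.113.1 for DC 'A' and 198.51.100.1 for DC 'B' are all examples, not from the post):

    ! Peers are tried in the order listed; the second is used when the
    ! first becomes unreachable.
    crypto ipsec transform-set ESP-AES-SHA esp-aes esp-sha-hmac
    crypto map VPNMAP 10 match address DC_TRAFFIC
    crypto map VPNMAP 10 set peer 203.0.113.1 198.51.100.1
    crypto map VPNMAP 10 set transform-set ESP-AES-SHA
    crypto map VPNMAP interface outside
    ! Each peer needs its own tunnel-group holding the pre-shared key.
    tunnel-group 203.0.113.1 type ipsec-l2l
    tunnel-group 203.0.113.1 ipsec-attributes
     pre-shared-key <key>
    tunnel-group 198.51.100.1 type ipsec-l2l
    tunnel-group 198.51.100.1 ipsec-attributes
     pre-shared-key <key>

The head-end at Data Centre 'B' would of course need a mirror-image tunnel and routes for the remote subnets.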

Similar Messages

  • Exception while failing over to 2nd RAC Node

    We are using WebLogic 10.3.4. Our setup is a web application (a Tapestry front-end web UI) and an EJB 2.1 back-end talking to the Oracle database. The EJBs are CMP. Our product was always stand-alone, and it wasn't until this release that we needed to make it work with RAC. To get this to work we followed the model of having a multi data source with data sources pointing to our RAC nodes. We have two types of data sources that we use, persistent and non-persistent, and we are using the Oracle thin driver (non-XA for RAC service instances, supporting global transactions).
    When we fail over to the 2nd node we get a nasty exception in our GUI, but after logging out and logging back in we are fine.
    My question: I assumed I shouldn't have to restart our web application and that it should have stayed up? Or is there something wrong with our setup?
    Thanks,
    Ian

    Showing us the exception and/or the error messages at the server might help...
    Note that failing over does not preserve any ongoing connection or transaction that
    had gone to the dead RAC node... Does your web-app get-use-close JDBC
    connections on a per-user-invoke basis, or does it hold onto connections?
    Joe
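
    A hedged sketch of the get-use-close pattern Joe asks about (the JNDI name and query are examples, not from the post): the DataSource - here the WebLogic multi data source - is cached, but every user invocation acquires its own connection and always returns it to the pool, so a dead RAC node only breaks calls already in flight:

        // OrderDao.java - illustrative only; assumes the multi data source
        // is bound in JNDI under jdbc/myMultiDS (an example name).
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class OrderDao {
            private final DataSource ds;

            public OrderDao() throws Exception {
                ds = (DataSource) new InitialContext().lookup("jdbc/myMultiDS");
            }

            public int countOrders() throws Exception {
                Connection con = ds.getConnection();          // get
                try {
                    PreparedStatement ps =
                        con.prepareStatement("SELECT COUNT(*) FROM orders");
                    try {
                        ResultSet rs = ps.executeQuery();     // use
                        rs.next();
                        return rs.getInt(1);
                    } finally {
                        ps.close();                           // also closes rs
                    }
                } finally {
                    con.close();                              // close: back to the pool
                }
            }
        }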

  • After adding 2nd WiSM and failing over AP's some apps don't work

    We have a dual core made up of two 6513s. In 6513#1 we have WiSM#1, which we have had for some time now. We have added a 2nd WiSM in 6513#2 for redundancy purposes; we are also going to be re-configuring the WiSM in 6513#1 to more closely match the new WiSM in 6513#2. We have installed the new WiSM and failed over the APs from 6513#1 so we can re-configure its WiSM. The failover went great with no issues, except that a web application or two didn't function from wireless clients, and users were having issues getting to some mapped drives.
    The only difference between the new WiSM config and the old one is that on the old WiSM the APs were in the same VLAN as the controller management interfaces. With the new WiSM, the controllers' AP management interface IP addresses are in a different VLAN from the APs; we are doing this based on Cisco best practices. If we revert the APs back to the original WiSM/controllers, where they are on the same VLAN/subnet, the applications and shares that were having issues the other way work. We have placed a call with Cisco TAC and they say our configs look good; we even sent them some packet captures and they said everything looks normal. The wireless clients can ping and resolve the server hosting the application database just fine.
    Thanks

    We did create the mobility groups, and we are using DHCP option 43. The APs find the 2nd WiSM just fine, associate to the controllers, and all the WLANs work just fine. The only issue is that after the APs are on the new WiSM and controllers, there is an application or two having trouble locating its database server, and some shares are not working. Again, the only difference in this new setup is that the APs are now on a different subnet/VLAN from the controller management addresses, whereas before they were in the same subnet/VLAN and the application and shares worked fine. It's almost like a routing issue?
    Thanks

  • Firefox Proxy Fail-over is not working correctly

    I am in a corporate environment where we must use a complex auto-proxy, configured through an automatic proxy configuration URL of http://proxyconf/proxy.pac. I am seeing an intermittent failure with Firefox 3.6.13, where the same site will load (after a delay) in IE (e.g. it works for half an hour, then fails for a while, etc.).
    By using Wireshark and tracing the packets, I have identified that a proxy server is intermittently failing, and Firefox is failing to try the second proxy. The auto proxy rule that is being invoked is:
    if (!isResolvable(host)) return "PROXY 172.16.39.201:8080; PROXY 10.241.32.28:8080";
    The problem is that Firefox never fails over - it tries the 172 address 6 times in a row, then gives up and displays the "The proxy server is refusing connections" error page ("Firefox is configured to use a proxy server that is refusing connections." "* Check the proxy settings to make sure that they are correct." "* Contact your network administrator to make sure the proxy server is working."). It continues with this behavior regardless of how many attempts, reloads, or restarts are tried.
    IE on the other hand will try and fail with the 172 address, and then start using the 10. address (which works correctly). Several other applications also work correctly, such as IRC clients.
    Obviously the corporate proxy that is failing must be fixed; however, Firefox is failing to utilize the 2nd proxy after the first one fails.
    Seems like a bug.
    Is there some easy way for me to replace the proxy file with my own file? E.g. replace http://proxyconf/proxy.pac with file://c:\..., or use some add-on?
    It must be an autoproxy script, as there is no single proxy that I can use for all addresses.
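
    If the corporate PAC can't be fixed quickly, one workaround is to save a local copy and point the automatic proxy configuration URL at it (Firefox of that era accepted file:// URLs there, e.g. file:///c:/proxy.pac). A minimal sketch that just reorders the proxies so the healthy one is tried first; the DIRECT fallback is an assumption, the real corporate script has more rules:

        // proxy.pac (local copy). FindProxyForURL is the entry point every
        // PAC file must define; the addresses come from the corporate rule above.
        function FindProxyForURL(url, host) {
            if (!isResolvable(host))
                return "PROXY 10.241.32.28:8080; PROXY 172.16.39.201:8080";
            return "DIRECT";
        }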

    You can correct this issue by forcing the file blocklist.xml to update, or by waiting until Firefox updates the file.
    That update will remove the severity="0" flags in the file that cause the problem.
    See:
    * [/questions/832793?page=2#answer-198407]
    * http://forums.mozillazine.org/viewtopic.php?p=10899869#p10899869
    * [https://bugzilla.mozilla.org/show_bug.cgi?id=663722 Bug 663722] - The blocklist output is including severity="0" where it shouldn't be

  • Multiple types of database and fail over clustering

    Hi,
    I have a few questions here.
    1) Can I have 2 types of databases (e.g. OLTP and OLAP) running at the same time on the same machine?
    2) Can I implement a cross fail-over cluster in this situation? Meaning I have 2 machines with OLAP and OLTP database instances installed on them (replicas of each other), the 1st machine running OLTP and the 2nd running OLAP. In the situation where one of the machines fails, the passive instance on the other machine takes over (back to the situation in question 1).
    Thanks
    Regards
    Lai Ling

    Dear All,
    My problem is solved by disabling antivirus.
    thanks for the support
    Sunil
    SUNIL PATEL SYSTEM ADMINISTRATOR

  • How to 'fail-over' CSS11503-AC when ALL 5 Reals Servers (Services) die

    Hi all,
    Could anyone out there possibly provide an idea/config of how it is possible to 'fail over' a CSS11503 set-up in Active/Standby mode with "ASR" enabled when:
    - ALL your real servers (services) for a particular VIP 'die' OR the NIC is faulty.
    - So NOT just 1 of the real servers, but when ALL 5 are not reachable, I need to 'failover'.
    My initial thoughts are to use the "critical reporter" or "critical service" to report back to the 'active' CSS.
    Anyone who has done this scenario before, please advise...
    thanks

    Thanks very much Syed for this. I was thinking that no-one could answer this query.
    After a little testing, I set the following config in the lab and it works, but it is different to yours. I cannot seem to configure the service as "type local". When I input 'type ?' I get options such as nci-direct-return, nci-info-only, proxy-cache, redirect, etc... NO 'local'...!!
    Please advise. Thanks in advance.
    ************************* INTERFACE *************************
    interface 1/1
    bridge vlan 800
    phy 1Gbits-FD-no-pause
    interface 1/2
    phy 1Gbits-FD-no-pause
    bridge vlan 20
    interface Ethernet-Mgmt
    description "Management Interface"
    interface 2/1
    description "1st ASR Link"
    isc-port-one
    interface 2/3
    description "2nd ASR Link"
    isc-port-two
    ************************** CIRCUIT **************************
    circuit VLAN800
    description "FE_CORE"
    ip address 192.168.83.249 255.255.255.0
    ip virtual-router 1 priority 110
    ip redundant-vip 1 192.168.83.148
    ip redundant-vip 1 192.168.83.158
    ip critical-service 1 DTSFE01
    ip critical-service 1 DTSFE02
    ip critical-service 1 DTSFE03
    ip critical-service 1 DTSFE04
    ip critical-service 1 DTSFE05
    ip critical-reporter 1 Physical_if_DWN
    ip critical-reporter 1 r1
    circuit VLAN20
    description "LBAL"
    ip address 192.168.20.1 255.255.255.0
    ip virtual-router 2 priority 110
    ip redundant-interface 2 192.168.20.3
    ip critical-service 2 DTSFE01
    ip critical-service 2 DTSFE02
    ip critical-service 2 DTSFE03
    ip critical-service 2 DTSFE04
    ip critical-service 2 DTSFE05
    ip critical-reporter 2 Physical_if_DWN
    ip critical-reporter 2 r1
    ************************** REPORTER **************************
    reporter Physical_if_DWN
    type critical-phy-all-up
    phy 1/1
    phy 1/2
    active
    reporter r1
    type vrid-peering
    vrid 192.168.83.249 1
    vrid 192.168.20.1 2
    active
    ************************** SERVICE **************************
    service FE01
    ip address 192.168.20.183
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 4
    service FE02
    ip address 192.168.20.184
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 5
    service FE03
    ip address 192.168.20.185
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 6
    service FE04
    ip address 192.168.20.186
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 7
    service NWFE02
    ip address 192.168.20.204
    keepalive frequency 2
    keepalive retryperiod 2
    keepalive maxfailure 2
    redundant-index 10
    active
    !*************************** OWNER ***************************
    owner SERVICES
    content DTS_192.168.83.148_443
    add service DTSFE01
    add service DTSFE02
    add service DTSFE03
    add service DTSFE04
    add service DTSFE05
    vip address 192.168.83.148
    port 443
    protocol tcp
    advanced-balance sticky-srcip
    redundant-index 1
    sticky-inact-timeout 5
    owner NW_SERVICES
    content NWCS_192.168.83.158_443
    add service NWCSFE01
    add service NWCSFE02
    vip address 192.168.83.158
    protocol tcp
    port 443
    sticky-inact-timeout 5
    redundant-index 2
    advanced-balance sticky-srcip
    active

  • Active/active Fail over monitoring

    Guys,
    I have a small concern about my active/active failover. Below are the details of the setup.
    1) I have two pairs of ASA 5520, each carrying three contexts (CTXT 1-3 in the 1st ASA, CTXT 4-6 in the 2nd ASA).
    2) I have created two failover groups in each ASA for active/active failover: failover groups 1 and 2 in the primary ASA, failover groups 3 and 4 in the 2nd ASA.
    3) I have assigned two contexts (CTXT 1-2) to failover group 1 and the remaining context (CTXT 3) to failover group 2 in the primary ASA.
    4) Same in the 2nd ASA.
    My questions are:
    1) How can I configure monitoring for failover?
    2) Is it based on the interfaces of the contexts or on the number of contexts?
    Thanks
    swap
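
    For what it's worth, ASA failover health monitoring is configured per interface inside each context, and the failover group that owns a context fails over when its monitored interfaces go down. A minimal sketch (interface names and timer values are examples, not from the post):

        ! Inside each context (CTXT1 ... CTXT6), pick the interfaces to watch:
        monitor-interface inside
        monitor-interface outside
        ! In the system execution space, tune how interfaces are polled:
        failover polltime interface 5 holdtime 25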

    Hi Felipe,
    Yes, it could be the NAT configuration. I've tried creating a backup NAT rule before, but that wasn't successful either.
    nat (inside,outside) source static NETWORK_OBJ_10.80.1.0_24 NETWORK_OBJ_10.80.1.0_24 destination static DM_INLINE_NETWORK_5 DM_INLINE_NETWORK_5 no-proxy-arp route-lookup
    nat (inside,outside) source static Branch_Inside Branch_Inside destination static DM_INLINE_NETWORK_4 DM_INLINE_NETWORK_4 no-proxy-arp route-lookup
    nat (inside,outside) source static Branch_Inside Branch_Inside destination static Roswell Roswell no-proxy-arp route-lookup
    object network inside-net
    nat (inside,outside) dynamic interface
    nat (inside,outside) after-auto source dynamic any interface
    access-group outside_access_in_1 in interface outside control-plane
    access-group outside_access_in in interface outside
    access-group outside_access_out out interface outside
    access-group outside_access_ipv6_in in interface outside
    access-group outside_access_ipv6_out out interface outside
    access-group outside_access_in in interface inside
    access-group outside_access_out out interface inside
    access-group outside_access_ipv6_in in interface inside
    access-group outside_access_ipv6_out out interface inside
    access-group outside_p_access_in in interface outside_p
    access-group outside_p_access_out out interface outside_p
    access-group global_access global
    access-group global_access_ipv6 global
    route outside_p 0.0.0.0 0.0.0.0 y.y.y.y 1 track 1
    route outside 0.0.0.0 0.0.0.0 x.x.x.x 255
    Before, I was really looking at the license being Base, as I may need to upgrade to Security Plus.

  • SQL Server 2014 Always on HA takes 8-14 seconds to fail over. Application side timeouts occur

    Hi All,
    I have a very similar post in the SQL Server 2014 forums too (https://social.technet.microsoft.com/Forums/sqlserver/en-US/adb5e338-907e-4405-aa62-d3ea93c7a98a/sql-server-2014-always-on-ha-takes-814-seconds-to-fail-over-application-side-timeouts-occur?forum=sqldisasterrecovery) -
    advice in the end was to post a question here.
    SQL Server Nodes, 2014 (12.0.2480.0)
    1 Share witness (on separate subnet)
    1 Cluster
    1 Listener
    I have been testing the response time to failovers - both manual (right-click, fail over in SSMS) and automatic (shut down the primary host). The way I am testing response is to have an SSMS query running on my desktop, connected to the listener, querying a small table, and hitting execute.
    The Query response time, from execute to receiving the result, has been between 8 and 14 seconds based on my testing. My previous experience (in a separate environment) showed around 2 second fail over times in a very similar configuration.
    Availability DB is 200Mb and is not actively used. The nodes are synchronised.
    SQL Server Hosts: Windows 2012, 2 cpu, 8gb RAM.
    Questions:
    1: It's a big question, but what should I expect for a 'normal' failover time? Keep in mind this scenario is about as simple as it gets.
    2: As it stands, an 8 to 14 second 'outage' could cause some applications to time out. Or am I being unreasonable? I am seeing the very simple query in SSMS time out with this:
    Msg 983, Level 14, State 1, Line 2
    Unable to access availability database 'DATABASE' because the database replica is not in the PRIMARY or SECONDARY role. Connections to
    an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later.
    Cluster logs are long - this section accounts for 8 seconds of the 11 second outage I experienced. I can supply the full log if required. Also this log is just the 2 cluster nodes, I removed the witness share to make sure it was as simple as possible.
    00001090.00002128::2015/02/25-03:05:08.255 INFO  [GEM] Node 2: Deleting [1:65 , 1:71] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:10.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:11.888 INFO  [GEM] Node 2: Deleting [1:72 , 1:73] (both included) as it has been ack'd by every node
    00001090.00002698::2015/02/25-03:05:11.889 INFO  [GUM] Node 2: Processing RequestLock 2:49
    00001090.00002128::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 67)
    00001090.00002698::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: executing request locally, gumId:68, my action: /dm/update, # of updates: 1
    00001090.00002128::2015/02/25-03:05:12.890 INFO  [GEM] Node 2: Deleting [1:74 , 1:74] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:15.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:16.988 INFO  [GUM] Node 2: Processing RequestLock 1:28
    Thanks in advance.
    Keegan

    Hi Keegan,
    From these event logs, what I can see is that "Sending request Netname" wasted the time.
    Could you please tell us the network configuration of the cluster nodes?
    If I recall correctly, it is recommended to keep only the TCP/IP protocol and disable NetBIOS over TCP/IP for the "Private Network", and also not to configure DNS/WINS or a default gateway for the "Private Network":
    https://support.microsoft.com/kb/258750?wa=wsignin1.0
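    For the NetBIOS part, a minimal PowerShell sketch (the heartbeat address is an example; use whatever identifies your private adapter):

        # TcpipNetbiosOptions = 2 disables NetBIOS over TCP/IP on that adapter.
        $nic = Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
            Where-Object { $_.IPAddress -contains "10.0.0.1" }  # private/heartbeat IP (example)
        $nic | Invoke-CimMethod -MethodName SetTcpipNetbios -Arguments @{ TcpipNetbiosOptions = 2 }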
    After that, please test again.
    Best Regards,
    Elton JI
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Front End pool failed over

    Hi all,
    1. I set up a pool with three Front End servers (the FQDN of the pool is pool.site1.sip96x2.com and it points to the IP addresses of the three Front End servers). Everything works fine, but when I disable the network interface on FE1 and FE2, the Lync clients are disconnected.
    I haven't clearly understood how the Lync clients fail over within a pool. Please clarify this for me.
    2. I have two central sites (Root site and Primary site; they have different domains, sip96x2.com and site1.sip96x2.com). The simple URL dialin points to the Front End server at the Root site. So if the link between the Root site and the Primary site is down, how can the
    users at the Primary site connect to the dialin URL?
    3. In building the topology for the Front End pool, I checked Override FQDN internal web service and the FQDN is "poolint.site1.sip96x2.com". I created three A records "poolint.site1.sip96x2.com" pointing to the three IP addresses of the Front End
    servers. Is that right?
    Thanks so much!

    Ah ok, well first thing, if I am reading this correctly, pool pairing Standard with Enterprise is not supported. You should only pair Standard with Standard and Enterprise with Enterprise (even though Topology Builder won't stop you). Take a look here for
    supported scenarios: http://technet.microsoft.com/en-us/library/jj204697.aspx
    To deal with the simple URLs in the event of failover you need to add them using PowerShell. Take a look at this article, which explains and gives an example: http://blogs.perficient.com/microsoft/2012/01/configuring-simple-urls-for-multiple-lync-pools/
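    A hedged sketch of what that article describes (the dialin URL is an example built from the sip96x2.com names above, not from a working deployment):

        # Add a pool-independent dialin simple URL, then refresh the Front Ends.
        $entry  = New-CsSimpleUrlEntry -Url "https://dialin.sip96x2.com"
        $dialin = New-CsSimpleUrl -Component "dialin" -Domain "*" -SimpleUrlEntry $entry -ActiveUrl "https://dialin.sip96x2.com"
        Set-CsSimpleUrlConfiguration -SimpleUrl @{ Add = $dialin }
        Enable-CsComputer   # run on each Front End to apply the change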
    If this helped you please click "Vote As Helpful" if it answered your question please click "Mark As Answer"
    Georg Thomas | Lync MVP
    Blog www.lynced.com.au | Twitter
    @georgathomas
    Lync Edge Port Check (Beta)

  • How Front End pool deals with fail over to keep user state?

    Hello to all. I searched a lot of articles to understand how Lync 2010 keeps user state if a failure happens on a Front End pool node, but didn't find anything clear.
    I found some MS info about this topic: "The Front End Servers maintain transient information - such as logged-on state and control information for an IM, Web, or audio/video (A/V) conference - only for the duration of a user's session. This configuration
    is an advantage because in the event of a Front End Server failure, the clients connected to that server can quickly reconnect to another Front End Server that belongs to the same Front End pool."
    As I read it, the client uses DNS to reconnect to another Front End in the pool. When it reconnects to an available server, does the user lose what he/she was doing in the Lync client? Can the server that is now hosting the session recover all the
    "user's session data"? If so, how?
    Regards, EEOC.

    The presence information and other dynamic user data is stored in the RTCDYN database on the backend SQL database in a 2010 pool:
    http://blog.insidelync.com/2011/04/the-lync-server-databases/  If you fail over to another pool member, this pool member has access to the same data.
    Ongoing conversations and the like are cached at the workstation.
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question please click "Mark As Answer".
    SWC Unified Communications

  • Is it possible to add hyper-V fail over clustering afterwards?

    Hi,
    We are testing Windows 2012 R2 Hyper-V using only one stand-alone host (no failover clustering) with a few virtual machines. Is it possible to add failover clustering afterwards, add a second Hyper-V node and a shared disk, and move the virtual
    machines there, or do we have to install both nodes from scratch?
    ~ Jukka ~

    Hi Jukka,
    In addition, before you build a Hyper-V failover cluster, please refer to the requirements in the article below:
    http://technet.microsoft.com/en-us/library/jj863389.aspx
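    To the original question: yes, the cluster can be added afterwards. A hedged sketch of the usual order of operations (host/cluster names and the address are examples):

        # On both hosts, add the feature, validate, then build the cluster.
        Install-WindowsFeature Failover-Clustering -IncludeManagementTools
        Test-Cluster -Node HV01, HV02
        New-Cluster -Name HVCL01 -Node HV01, HV02 -StaticAddress 192.168.1.50
        # Present the shared disk to both nodes, move the VM storage onto it,
        # then make each existing VM highly available:
        Get-ClusterAvailableDisk | Add-ClusterDisk
        Add-ClusterVirtualMachineRole -VMName "VM01"   # repeat per VM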
    Best Regards
    Elton Ji
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    and can fail over from that node and restart on another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
    then you might have problems (as we had) because you might:
    - reach max number of diskgroups handled by an ASM instance (63 only, above which you get ORA-15068);
    - experience delays (especially in case of multipath), find fake CRS resources, etc.
    whenever you dismount disks from one node and mount to another;
    So (if both conditions are true) you might be interested in this story;
    please keep reading for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroups will be added by users, for DATA, REDO, FRA, etc. of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
    At our site, for our three-nodes cluster, this fact has two consequences.
    One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
    - none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, the surviving two should get the resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other with 30 diskgroups (10 instances);
    - in case two nodes failed, the only surviving node would not be able to mount additional diskgroups
    (because of the limit of max 63 diskgroups mounted by an ASM instance), so all the others would remain unmounted
    and their db instances stopped (they are not Production instances).
    But it didn't work, since ASM has parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups;
    you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
    and other diskgroups mounted on other nodes cannot reuse that number).
    So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
    The second consequence is that, every time our handmade CRS scripts dismount diskgroups
    from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
    Also we found in the CRS log that, whenever we mounted diskgroups (on one node only),
    additional fake resources of type ora*.dg were created on the fly behind the scenes,
    maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted
    (once again, instances are single-node here, and not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practices require that online redo log files are also in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case the DATA dg gets corrupted, you can restore Full backup plus Archived RedoLog plus Online Redolog (otherwise you will stop at the latest Archived).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
    - in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at your will; you might also create additional scripts for additional resources you might need (Oracle Agents, backup agents, file systems, monitoring tools, etc.)
    About our problem, the only solution is to move OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
    Oracle's answers were a little bit odd:
    - first they told us to use Grid Standalone (without CRS, OCR, voting at all), but we told them that we needed a fail-over solution
    - then they told us to use RAC One Node, which actually has some better features; in case of a planned fail-over it might be able to migrate
    client sessions without causing a reconnect (for SELECTs only, not in case of a running transaction), but we already have a few fail-over clusters, we cannot change them all
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
    We asked Oracle for confirmation that it is supported; they said yes, but it is discouraged (and also doesn't work with OUI nor ASMCA).
    Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here and if everything is fine, after a while we'll do it on the Prod ones too.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar
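
    A hedged sketch of that relocation (device paths are examples; the Notes above give the supported procedure, and ocrconfig/crsctl run as root):

        # Add an OCR copy on a block device, then drop the copy inside ASM:
        ocrconfig -add /dev/mapper/ocr1
        ocrconfig -delete +CRS_DATA
        # 11.2 can replace the voting files out of ASM in one step:
        crsctl replace votedisk /dev/mapper/vote1
        # Then each ASM instance can stop mounting diskgroups shared:
        # SQL> alter system set cluster_database=false scope=spfile sid='*';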

  • Failover cluster server - File Server role is clustered - Shadow copies do not seem to travel to other node when failing over

    Hi,
    New to 2012 and implementing a clustered environment for our File Services role.  Have got to a point where I have successfully configured the Shadow copy settings.
    Have a large (15tb) disk.  S:
    Have a VSS drive (volume shadow copy drive) V:
    Have successfully configured through Windows Explorer the Shadow copy settings.
    Created dependencies in the Failover Cluster Manager console whereby S: depends on V:
    However, when I fail over the resource and browse the Client Access Point share, there are no entries under the "Previous Versions" tab.
    When I visit the S: drive in Windows Explorer and open the Shadow Copy dialogue box, there are entries showing the times and dates of the shadow copies that ran on the original node. So the disk knows about the shadow copies that were run on the
    original node, but the "Previous Versions" tab has no entries to display.
    This is on a 2012 server (NOT the R2 version).
    Can anyone explain what might be the reason? Do I have an "issue" or is this by design?
    All help appreciated!
    Kathy
    Kathleen Hayhurst Senior IT Support Analyst

    Hi,
    Please first check the requirements in the following article:
    Using Shadow Copies of Shared Folders in a server cluster
    http://technet.microsoft.com/en-us/library/cc779378(v=ws.10).aspx
    Cluster-managed shadow copies can only be created in a single quorum device cluster on a disk with a Physical Disk resource. In a single node cluster or majority node set cluster without a shared cluster disk, shadow copies can only be created and managed
    locally.
    You cannot enable Shadow Copies of Shared Folders for the quorum resource, although you can enable Shadow Copies of Shared Folders for a File Share resource.
    The recurring scheduled task that generates volume shadow copies must run on the same node that currently owns the storage volume.
    The cluster resource that manages the scheduled task must be able to fail over with the Physical Disk resource that manages the storage volume.
    If you have any feedback on our support, please send to [email protected]

  • Which role do I need DFS or File server on fail over cluster server 2012 R2?

    What I want to achieve: I want to share all my user data files in a central location and have them highly available all the time, whether it's a general share or folder redirection data. BUT I'm a bit confused. I have failover clustering set up
    on Server 2012; now I would like to add DFS as a role, but then we have another role called File Server, and virtually it does the same thing as DFS? I mean, it creates a namespace share that can be accessed even if one of the nodes goes down. Now I am thinking
    that DFS does the replication between two physical locations, but failover clustering works slightly differently, and with File Server it pretty much does the same thing except for replicating data from one drive to another. Now what do you suggest I do, or
    did I get the concept wrong like a noob?

    DFS and Failover Clustering for file shares provide a similar end result for file access, but they are significantly different implementations.
    Clustering provides high availability to files by presenting shared access to a set of files served from a cluster.  With 2012 R2, Microsoft added the ability to create a Scale-Out File Server that allows all nodes of the cluster to serve access to
    the files for a higher level of performance and other great things.  Bottom line with failover clusters for files: there is a single copy of the file presented from the cluster.
    DFS on the other hand provides high availability to files by presenting multiple copies of the file: it makes a copy in two or more locations and presents a namespace that allows access to the file through any of the network paths.  DFS works very
    well for files that are primarily read-only.  When you get into a situation where there is a lot of updating of the shared files, DFS is not a very good solution.  There are ways to implement DFS for read/write files, but it generally requires a
    good knowledge of how the files are used and how you want to manage them.
    The key to answering your question comes in your first sentence: "I want to share all my user data files in a central location and to be highly available all the time".  My initial reaction to this is that central location means Failover Cluster
    - there is only a single copy of the file.  However, "all the time" can be compromised by network failures at the central site.  Remote sites would not have access if they can't reach the central site.  DFS provides the ability to
    have copies remotely, but then if you allow updating at multiple sites, you have to manage the merging of the changes, among other things.
    . : | : . : | : . tim

  • Time out fail over

    On this system:
    OS: Solaris 10 11/06 s10s_u3wos_10 SPARC
    Cluster version: 3.1u4
    A - Normally, after how much time is a resource moved to the other node if IPMP fails (e.g. the gateway is unreachable)?
    B - What happens if IPMP fails on both servers? Are packages kept on their nodes?
    C - Is there a timeout over 10 minutes in the cluster configuration?

    You have 2 options: you could increase the back-end timeout to a very large value so that the server waits rather than timing out and failing over, or do something like
    <Object name="default">
    NameTrans fn=map from=/ name=reverse-proxy-/
    </Object>
    <Object name="reverse-proxy-/">
    Route fn=set-origin-server server=server1
    ObjectType fn=http-client-config timeout=600
    </Object>
    see - http://docs.sun.com/app/docs/doc/820-4841/gdhrg?a=view
    (or simply disable any failover and have different individual servers distributing load across different applications)
    Split your URI or application so that each application goes to one back-end server. For example, let us say you have 2 Java applications that you would like JBoss to serve.
    Now, you could edit your obj.conf (or <vs>-obj.conf, depending on your configuration) so that it looks like this:
    <Object name="default">
    NameTrans fn=map from=/ name=reverse-proxy-/
    </Object>
    <Object name="reverse-proxy-/">
    <If $uri =~ /foo1>
    Route fn=set-origin-server server=server1
    </If>
    <If $uri =~ /foo2>
    Route fn=set-origin-server server=server2
    </If>
    </Object>
    BTW - I will file an RFE on your behalf for this feature.
