ASA fails over upon anyconnect image activation

I'm running into something odd here that I can't find any reference to in a search.  I am setting up AnyConnect on an active/standby pair of ASA 5510s running 8.3(2).  Everything works great and I've got the Mac OS package installed.  The odd part is that when I try to enter the "svc image" command for the Windows package, it causes the firewalls to fail over every time.  I'm working with the 3.1 package and have tried both 3.1.07021 and 3.1.08009.  I've got plenty of flash space, since these packages are sitting by themselves on a 2 GB card.  I thought that maybe the CPU was getting pegged installing the package, causing it to miss a failover poll, so I increased the poll time to 15 seconds and still no go.  The failover occurs instantly when I enter the config command.  Interestingly, the Windows 2.5 client installs just fine, but I need to be able to use it with Windows 8.1, so I need the 3.1 client.
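For reference, the activation sequence that triggers the failover is just the standard image commands under webvpn; the exact package filenames below are approximate:
webvpn
 svc image disk0:/anyconnect-macosx-i386-3.1.08009-k9.pkg 1
 svc image disk0:/anyconnect-win-3.1.08009-k9.pkg 2
The failover happens the instant the second svc image line (the Windows package) is entered.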
Would certainly appreciate any insight that someone might have.
Thanks,
  Brian

I actually don't have an xml profile defined at all.
The failover log looks like this.  There's more, but these seem to be the relevant bits from when I attempt to activate the pkg.
15:21:39 EDT May 1 2015   Standby Ready          -> Just Active             (HELLO not heard from mate)
15:21:39 EDT May 1 2015   Just Active            -> Active Drain            (HELLO not heard from mate)
15:21:39 EDT May 1 2015   Active Drain           -> Active Applying Config  (HELLO not heard from mate)
15:21:39 EDT May 1 2015   Active Applying Config -> Active Config Applied   (HELLO not heard from mate)
15:21:39 EDT May 1 2015   Active Config Applied  -> Active                  (HELLO not heard from mate)
As for an upgrade, I realize it might be necessary, but this is a tightly controlled environment with only quarterly maintenance windows and a long RFC process.  I'd have to point to a known bug of some sort to push an upgrade through.  Unfortunately, I can't just try it to see if it works.
Thanks for taking the time on this.

Similar Messages

  • ASA Fail over

    Hi Team,
Can anyone answer the simple questions below? I am new to security, so please assist me with this.
For a firewall configured with active/standby failover:
1. What are the default timers?
2. If the active device exceeds the hold-down timer, the standby comes up. After that, if the original active device comes back up, which one acts as the active ASA?
3. What is failover mainly based on?
    Thanks and Regards,
    Mohamed kabeer.S

    Hi,
    Please find my answers.
1. What are the default timers?
Health checks and monitoring (hello messages) run continuously between the failover pair and are exchanged every second.
The default values on the ASA security appliance are as follows (example commands to view or change them are shown just below the list):
•The unit poll time is 1 second.
•The unit holdtime is 15 seconds.
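For example, the timers can be displayed with show failover and tuned with commands along these lines (the values shown are simply the unit defaults above plus the interface defaults):
show failover
failover polltime unit 1 holdtime 15
failover polltime interface 5 holdtime 25
Shorter times detect a failed peer faster, at the cost of being more sensitive to a busy unit missing a hello.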
2. If the active device exceeds the hold-down timer, the standby comes up. After that, if the original active device comes back up, which one acts as the active ASA?
If the active device fails over, the standby device becomes active. Even after the primary unit comes back up, the pair does not automatically fail back to the primary unit.
3. What is failover mainly based on?
Failover provides redundancy for the network: if the primary unit fails, the secondary device takes over the traffic. There are two variants: LAN-based failover and stateful failover (a configuration sketch is shown below).
LAN-based failover means the two units monitor each other over a dedicated failover LAN link; if the active unit or one of its monitored interfaces goes down, the standby unit takes over, but existing connections are not preserved.
Stateful failover adds replication of the session table over a state link, so traffic keeps passing without interruption during a failover; with only LAN-based failover you may lose the active connections during the failover.
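As a rough configuration sketch (interface names and addresses here are made up), LAN-based failover with an optional stateful failover link looks like this on the primary unit:
failover lan unit primary
failover lan interface FOLINK Ethernet0/3
failover interface ip FOLINK 10.1.1.1 255.255.255.252 standby 10.1.1.2
failover link STATELINK Ethernet0/2
failover interface ip STATELINK 10.1.2.1 255.255.255.252 standby 10.1.2.2
failover
Leaving out the failover link lines gives plain LAN-based failover with no connection-state replication.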
    Regards
    Karthik

  • ASA fail over inconsistency

The active primary is changing to secondary after a few minutes. The HA link is up (the secondary ASA is not down), and when pinging the ASA the latency is very high.
Please help

show failover history
show tech-support
The ASA changes from active to standby, not really from primary to secondary; basically it is just failing over.
If you would like to talk through what you have, or do a WebEx, please reply with your phone number.

  • Active/active Fail over monitoring

    Guys,
I have a small concern about my active/active failover. Below are the details of the setup.
1)- I have two ASA 5520s (a failover pair), each carrying three contexts (CTXT 1-3 in the 1st ASA, CTXT 4-6 in the 2nd ASA).
2)- I have created two failover groups in each ASA for active/active failover: failover groups 1 & 2 in the primary ASA and failover groups 3 & 4 in the 2nd ASA.
3)- I have assigned two contexts (CTXT 1-2) to failover group 1 and the remaining context (CTXT-3) to failover group 2 in the primary ASA.
4)- The same in the 2nd ASA.
My questions are:
1)- How can I configure monitoring for failover?
2)- Is it based on the interfaces of the contexts or on the number of contexts?
    Thanks
    swap

Hi Felipe,
Yes, it could be the NAT configuration. I've tried creating a backup NAT rule before, but that wasn't successful either.
    nat (inside,outside) source static NETWORK_OBJ_10.80.1.0_24 NETWORK_OBJ_10.80.1.0_24 destination static DM_INLINE_NETWORK_5 DM_INLINE_NETWORK_5 no-proxy-arp route-lookup
    nat (inside,outside) source static Branch_Inside Branch_Inside destination static DM_INLINE_NETWORK_4 DM_INLINE_NETWORK_4 no-proxy-arp route-lookup
    nat (inside,outside) source static Branch_Inside Branch_Inside destination static Roswell Roswell no-proxy-arp route-lookup
    object network inside-net
    nat (inside,outside) dynamic interface
    nat (inside,outside) after-auto source dynamic any interface
    access-group outside_access_in_1 in interface outside control-plane
    access-group outside_access_in in interface outside
    access-group outside_access_out out interface outside
    access-group outside_access_ipv6_in in interface outside
    access-group outside_access_ipv6_out out interface outside
    access-group outside_access_in in interface inside
    access-group outside_access_out out interface inside
    access-group outside_access_ipv6_in in interface inside
    access-group outside_access_ipv6_out out interface inside
    access-group outside_p_access_in in interface outside_p
    access-group outside_p_access_out out interface outside_p
    access-group global_access global
    access-group global_access_ipv6 global
    route outside_p 0.0.0.0 0.0.0.0 y.y.y.y 1 track 1
    route outside 0.0.0.0 0.0.0.0 x.x.x.x 255
Before this I was really looking at the license being Base, as I may need to upgrade to Security Plus.
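(Side note: the track 1 keyword on the first default route implies an SLA monitor and track object somewhere in the config, roughly along these lines; the object numbers and probe address here are placeholders:
sla monitor 1
 type echo protocol ipIcmpEcho 203.0.113.1 interface outside_p
sla monitor schedule 1 life forever start-time now
track 1 rtr 1 reachability
If that probe fails, the tracked route is withdrawn and the metric-255 route through outside takes over.)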

  • GSLB Zone-Based DNS Payment Gw - Config Active-Active: Not Failing Over

    Hello All:
    Currently having a bit of a problem, have exhausted all resources and brain power dwindling.
    Brief:
    Two geographically diverse sites. Different AS's, different front ends. Migrated from one site with two CSS 11506's to two sites with one 11506 each.
    Flow of connection is as follows:
    Client --> FW Public Destination NAT --> CSS Private content VIP/destination NAT --> server/service --> CSS Source VIP/NAT --> FW Public Source NAT --> client.
We are using the load balancers as DNS servers, authoritative for the zones, due to the requirement for second-level domain DNS load balancing (i.e. xxxx.com AND FQDNs such as www.xxxx.com). Thus, the CSS is configured to respond as authoritative for xxxx.com, www.xxxx.com, postxx.xxxx.com, tmx.xxxx.com, etc., but of course cannot do MX records, so it is also configured with dns-forwarders, which are the original DNS servers for the domains. Those DNS servers have had their zone files changed to reflect that the new DNS servers are in fact the CSSs. Domain records (i.e. NS records in the zone file) and the records at the registrar (i.e. Tucows, which I believe resells .com, .net and .org for NetSol) have been changed to reflect the same. That part of the equation has already been tested and behaves as DNS should. The reason for the forwarders is, of course, things such as non-load-balanced domain names, as well as MX records, etc.
    Due to design, which unfortunately cannot be changed, dns-record configuration uses kal-ap, example:
dns-record a www.xxxx.com 0 111.222.333.444 multiple kal-ap 10.xx.1.xx 254 sticky-enabled weightedrr 10
    So, to explain so we're absolutely clear:
    - 111.222.333.444 is the public address returned to the client.
    - multiple is configured so we return both site addresses for redundancy (unless I'm misunderstanding that configuration option)
    - kal-ap and the 10.xx.1.xx address because due to the configuration we have no other way of knowing the content rule/service is down and to stop advertising the address for said server/rule
    - sticky-enabled because we don't want to lose a payment and have it go through twice or something crazy like that
- weightedrr 10 (and on the other side weightedrr 1) because we want to keep most of the traffic on the site that is closer to where the bulk of the clients are
So, now, the problem becomes that the clients (i.e. something like an Interac machine, RFID tags...) need to be able to fail over almost instantly to either of the sites should one lose connectivity and/or servers/services. However, this does not happen. The CSS changes its advertisement, and this has been confirmed by running "nslookups/digs" directly against the CSSs... however, the client does not recognize this and ends up returning a "DNS Error/Page not found".
    Thinking this may have something to do with the "sticky-enabled" and/or the fact that DNS doesn't necessarily react very well to a TTL of "0".
    Any thoughts... comments... suggestions... experiences???
    Much appreciated in advance for any responses!!!
    Oh... should probably add:
    nslookups to some DNS servers consistently - ALWAYS the same ones - take 3 lookups before getting a reply. Other DNS servers are instant....
    Cheers,
    Ben Shellrude
    Sr. Network Analyst
    MTS AllStream Inc

    Hi Ben,
If I got your posting right, the CSSes are doing their job and advertise the correct IP for a DNS query, right?
If some of your clients are having a problem, this might be related to DNS caching. Some clients cache the DNS response and do not refresh it until it fails or the timeout expires.
Even worse, if the request fails you sometimes have to restart the client's DNS daemon so that it requests IP addresses from scratch. I had this issue with some Unix boxes. If I remember correctly, you can configure the DNS behaviour of Unix boxes and forbid them to cache DNS responses.
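For example (the exact commands depend on the client OS):
ipconfig /flushdns       (clears the Windows resolver cache)
service nscd restart     (restarts the nscd caching daemon on many Linux boxes; other OSes differ)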
    Kind Regards,
    joerg

  • No Audio on either end Cisco Jabber for Windows over Cisco AnyConnect

    Our telephony staff is replacing our aging/unsupported VoIP system with a Cisco system and as the network tech, I'm trying to get Jabber for Windows to work over our AnyConnect VPN client.  Jabber to Cisco phone and Jabber to Jabber calls work fine within our LAN.  
    However, when I take a laptop to a separate internet connection and connect to the network via the VPN, I can't get any audio to pass across the system, in either direction.  If I call a phone on our LAN using the Jabber client (via AnyConnect), the phone rings and when I answer it, it's just dead air on both ends.  If I reverse the process, calling from the phone to the Jabber client, the same thing, Jabber client rings, but dead air both ways once I answer.  
    Things I can do from the laptop over the VPN connection:
    I'm able to get to the phone's web interface using that same laptop.
    I can ping the phone as well.  In fact, the VPN profile I'm using has full access to the entire VoIP Vlan including all IP traffic (all ~65,000 ports).
    Searching the address book also works fine.  I can search for staff and it's pulling directly from our Active Directory environment.
Are there any special settings on the firewall that I need to set up to allow the voice traffic (which I assume is RTP traffic)?  I tried to add a service policy for RTP traffic, but that didn't seem to work... unless I built it wrong.
    Jabber for Windows - 10.6.0
    Cisco Anyconnect - 3.1.06079
    Cisco 5515-x ASA - 9.2

    I was able to resolve this on my own.  I thought that SIP traffic needed to be inspected via the global inspection policy in order for it to pass through the firewall. I ran into the same issue with ICMP traffic from an Anyconnect client to LAN devices. I had to enable ICMP in that policy for us to be able to ping LAN devices over the VPN tunnel. So when I saw that SIP was already being inspected by this policy, I moved on looking for other solutions. Then I stumbled deep within a Google search (almost hit the end of the Internet doing so) where someone mentioned that SIP shouldn’t be inspected by that policy. So I unchecked it and bam! Voice is now working over the anyconnect client to phones on the LAN. 
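For reference, the CLI equivalent of unchecking SIP inspection in the default global policy (assuming the default policy and class names) is roughly:
policy-map global_policy
 class inspection_default
  no inspect sip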

  • Reboot ASA 8.4 (asdm 6.4) Active/Standby pair

    Hi,
I manage a pair of ASAs (8.4, ASDM 6.4) and am having trouble with traffic going through a tunnel.  It was recommended to me that perhaps a reboot is in order.  I found the instructions at http://www.cisco.com/en/US/docs/security/asa/asa84/configuration/guide/admin_swconfig.html#wp1355970 (which I followed without actually upgrading the software image, as all I wanted was for both devices to reboot, one at a time, without causing connection resets), but when I attempted it, the device that rebooted was always the same IP.  My question is about step 3: "When the standby unit has finished reloading and is in the Standby Ready state, force the active unit to fail over to the standby unit by entering the following command on the active unit."
    active# no failover active
    But there is a note "Use show failover command to verify that the standby unit is in the standby ready state"  which I did. 
    This is the result of show failover from the 0.5 (primary) unit BEFORE issuing no failover active:
    Last Failover at: 05:32:10 EST Feb 9 2012
            This host: Primary - Active
                    Active time: 3732124 (sec)
                    slot 0: ASA5510 hw/sw rev (2.0/8.4(3)) status (Up Sys)
                      Interface management (192.168.200.249): No Link (Not-Monitored)
                      Interface outside (63.146.180.5): Normal (Monitored)
                      Interface inside (172.16.0.5): Normal (Monitored)
                      Interface DBDMZ (192.168.60.5): Normal (Monitored)
                      Interface WEBDMZ (192.168.50.5): Normal (Monitored)
                    slot 1: ASA-SSM-4GE hw/sw rev (1.0/1.0(0)10) status (Up)
            Other host: Secondary - Standby Ready
                    Active time: 0 (sec)
                    slot 0: ASA5510 hw/sw rev (2.0/8.4(3)) status (Up Sys)
                      Interface management (0.0.0.0): Normal (Not-Monitored)
                      Interface outside (63.146.180.6): Normal (Monitored)
                      Interface inside (172.16.0.6): Normal (Monitored)
                      Interface DBDMZ (192.168.60.6): Normal (Monitored)
                      Interface WEBDMZ (192.168.50.6): Normal (Monitored)
                    slot 1: ASA-SSM-4GE hw/sw rev (1.0/1.0(0)10) status (Up)
    So far so good.  Then I entered (on the PRIMARY-ACTIVE unit) the command no failover active and I got the following:
    NMEC-ASA5510-COLOVA# sho failover
    Failover On
    Failover unit Secondary
    Failover LAN Interface: failover Ethernet0/0 (up)
    Unit Poll frequency 500 milliseconds, holdtime 3 seconds
    Interface Poll frequency 5 seconds, holdtime 25 seconds
    Interface Policy 1
    Monitored Interfaces 4 of 110 maximum
    Version: Ours 8.4(3), Mate 8.4(3)
    Last Failover at: 11:02:26 EDT Mar 23 2012
            This host: Secondary - Active
                    Active time: 140 (sec)
                    slot 0: ASA5510 hw/sw rev (2.0/8.4(3)) status (Up Sys)
                      Interface management (192.168.200.249): No Link (Not-Monitored)
                      Interface outside (63.146.180.5): Normal (Monitored)
                      Interface inside (172.16.0.5): Normal (Monitored)
                      Interface DBDMZ (192.168.60.5): Normal (Monitored)
                      Interface WEBDMZ (192.168.50.5): Normal (Monitored)
                    slot 1: ASA-SSM-4GE hw/sw rev (1.0/1.0(0)10) status (Up)
            Other host: Primary - Standby Ready
                    Active time: 3732178 (sec)
                    slot 0: ASA5510 hw/sw rev (2.0/8.4(3)) status (Up Sys)
                      Interface management (0.0.0.0): Normal (Not-Monitored)
                      Interface outside (63.146.180.6): Normal (Monitored)
                      Interface inside (172.16.0.6): Normal (Monitored)
                      Interface DBDMZ (192.168.60.6): Normal (Monitored)
                      Interface WEBDMZ (192.168.50.6): Normal (Monitored)
                    slot 1: ASA-SSM-4GE hw/sw rev (1.0/1.0(0)10) status (Up)
      Thinking all was well, I now issued (from the same 172.16.0.5 unit) the reload command.  Unfortunately my continuous pings to .0.5 and .0.6 show that 0.6 rebooted AGAIN!?! 
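For reference, my understanding of the sequence from that doc boils down to this (the prompts are just labels for which unit the command is entered on):
standby# reload                 (reload the standby first and wait for Standby Ready)
active# no failover active      (force the failover; this unit becomes standby)
former-active# reload           (reload the unit that is now standby)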
    Can someone tell me what I am doing wrong? 
    Thanks,
    Sue

I guessed that might be the case, but am still unsure.  The IP I was pinging was the inside LAN interface.
LAN failover is configured using Eth0/0 (IPs 10.0.254.253 and .254) and state failover uses Eth0/1 (IPs 10.0.253.253 and .254). "Inside" is Gig1/1 with IP 172.16.0.5 (and .6 on the second unit). I would have expected either the LAN failover or the state failover IPs to change, but not the LAN interface. But perhaps I've got it backwards. Thanks for your response, Patrick.
Sue

  • ACE MIB Value for Fail over

    Hi,
May I know which MIB value the ACE (Application Control Engine) will generate when a failover occurs? Is the value the same for a context failover as well?
    Regards
    Jithesh

    Hi,
I think this is the error code you're looking for:
    727012
    Error Message   %ACE-2-727012: HA: FT Group group ID changed state to NewState. Reason:
    reason str.
    Table 2-2 NewState Values and Descriptions
NewState Value: Description
FSM_FT_STATE_INIT: The initial state. Visible only when the configuration for the FT group exists but it is not in service.
FSM_FT_STATE_ELECT: After you enter the inservice command when you are configuring an FT group, the ACE enters the ELECT state. The redundancy state machine negotiates with its peer context in the FT group to determine the redundancy role (active or standby).
FSM_FT_STATE_ACTIVE: The active member of the FT group.
FSM_FT_STATE_STANDBY_COLD: This state can be entered if the FT VLAN is down but the peer device is still alive, or if a configuration or application state synchronization failure has occurred.
FSM_FT_STATE_STANDBY_CONFIG: The standby context is waiting to receive configuration information. Upon entering this state, the active context will be notified to send a copy of the running configuration.
FSM_FT_STATE_STANDBY_BULK: The standby context is waiting to receive state information. Upon entering this state, the active context will be notified to send a copy of the current state information for all applications.
FSM_FT_STATE_STANDBY_HOT: The standby context is ready to become active in a failover situation.
    Values returned for the reason str variable can be one of the following:
    •FSM_FT_EV_PEER_DOWN
    •FSM_FT_EV_PEER_FT_VLAN_DOWN
    •FSM_FT_EV_PEER_SOFT_RESET
    •FSM_FT_EV_STATE
    •FSM_FT_EV_TIMEOUT
    •FSM_FT_EV_CFG_SYNC_STATUS
    •FSM_FT_EV_BULK_SYNC_STATUS
    •FSM_FT_EV_COUP
    •FSM_FT_EV_RELINQUISH
    •FSM_FT_EV_TRACK_STATUS
    •FSM_FT_EV_UPDATE
    •FSM_FT_EV_ENABLE_INSERVICE
    •FSM_FT_EV_DISABLE_INSERVICE
    •FSM_FT_EV_SWITCHOVER
    •FSM_FT_EV_PEER_COMPATIBLE
    •FSM_FT_EV_MAINT_MODE_OFF
    •FSM_FT_EV_MAINT_MODE_PARTIAL
    •FSM_FT_EV_MAINT_MODE_FULL
    Check from the above ID onwards for more details around ft status.
    Cheers
    Scott

  • SQL Server 2014 Always on HA takes 8-14 seconds to fail over. Application side timeouts occur

    Hi All,
    I have a very similar post in the SQL Server 2014 forums too (https://social.technet.microsoft.com/Forums/sqlserver/en-US/adb5e338-907e-4405-aa62-d3ea93c7a98a/sql-server-2014-always-on-ha-takes-814-seconds-to-fail-over-application-side-timeouts-occur?forum=sqldisasterrecovery) -
    advice in the end was to post a question here.
    SQL Server Nodes, 2014 (12.0.2480.0)
    1 Share witness (on separate subnet)
    1 Cluster
    1 Listener
I have been testing the response time to failovers, both manual (right-click, fail over in SSMS) and automatic (shut down the primary host). The way I am testing response is to have an SSMS query running on my desktop, connected to the listener, querying a small table, and hit execute.
The query response time, from execute to receiving the result, has been between 8 and 14 seconds based on my testing. My previous experience (in a separate environment) showed around 2-second failover times in a very similar configuration.
    Availability DB is 200Mb and is not actively used. The nodes are synchronised.
    SQL Server Hosts: Windows 2012, 2 cpu, 8gb RAM.
    Questions:
1: It's a big question, but what should I expect for a 'normal' failover time? Keep in mind this scenario is about as simple as it gets.
2: As it stands, an 8 to 14 second 'outage' could cause some applications to time out. Or am I being unreasonable? I am seeing the very simple query in SSMS time out with this:
    Msg 983, Level 14, State 1, Line 2
    Unable to access availability database 'DATABASE' because the database replica is not in the PRIMARY or SECONDARY role. Connections to
    an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later.
    Cluster logs are long - this section accounts for 8 seconds of the 11 second outage I experienced. I can supply the full log if required. Also this log is just the 2 cluster nodes, I removed the witness share to make sure it was as simple as possible.
    00001090.00002128::2015/02/25-03:05:08.255 INFO  [GEM] Node 2: Deleting [1:65 , 1:71] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:10.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:11.888 INFO  [GEM] Node 2: Deleting [1:72 , 1:73] (both included) as it has been ack'd by every node
    00001090.00002698::2015/02/25-03:05:11.889 INFO  [GUM] Node 2: Processing RequestLock 2:49
    00001090.00002128::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: Processing GrantLock to 2 (sent by 1 gumid: 67)
    00001090.00002698::2015/02/25-03:05:11.890 INFO  [GUM] Node 2: executing request locally, gumId:68, my action: /dm/update, # of updates: 1
    00001090.00002128::2015/02/25-03:05:12.890 INFO  [GEM] Node 2: Deleting [1:74 , 1:74] (both included) as it has been ack'd by every node
    00001ee4.00002130::2015/02/25-03:05:15.107 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:5b81e7bd-58fe-4be9-a68a-c48ba2aa552b:Netbios
    00001090.00002128::2015/02/25-03:05:16.988 INFO  [GUM] Node 2: Processing RequestLock 1:28
    Thanks in advance.
    Keegan

    Hi Keegan,
From these event log entries, what I can see is that the "Sending request Netname" steps are where the time is going.
Could you please tell us the network configuration of the cluster nodes?
If I recall correctly, it is recommended to keep only the TCP/IP protocol and disable NetBIOS over TCP/IP for the "Private Network", and also not to configure DNS/WINS or a default gateway for the "Private Network":
https://support.microsoft.com/kb/258750?wa=wsignin1.0
After that, please test again.
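One client-side setting that may also be worth double-checking (just a suggestion; the listener and database names below are placeholders) is that applications connect to the availability group listener with MultiSubnetFailover enabled and a reasonable connect timeout, for example:
Server=tcp:aglistener.domain.local,1433;Database=MyDatabase;Integrated Security=SSPI;MultiSubnetFailover=True;Connect Timeout=30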
    Best Regards,
    Elton JI
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
    in case at your site you :
    - have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
    where you have yourself created additional CRS resources to handle single-node db instances,
    their listener, their disks and so on (which are started only on one node at a time,
    can fail from that node and restart to another);
    - have put OCR and voting disks into an ASM diskgroup (as strongly suggested by Oracle);
then you might have problems (as we had) because you might:
- reach the max number of diskgroups handled by an ASM instance (only 63, above which you get ORA-15068);
- experience delays (especially in case of multipath), find fake CRS resources, etc.,
whenever you dismount disks from one node and mount them on another.
    So (if both conditions are true) you might be interested in this story,
    then please keep reading on for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
    Then, additional diskgroup will be added by users, for DATA, REDO, FRA etc of each RAC db,
    and will be mounted later when a RAC db instance starts on the specific node.
    In case of fail-over cluster, where instances are not RAC type and there is
    only one instance running (on one of the nodes) at any time for each db, it is different.
    All diskgroups of db instances don't need to be mounted in Shared Mode,
    because they are used by one instance only at a time
    (on the contrary, they should be mounted in Exclusive Mode).
    Yet, if you follow Oracle advice and put OCR and voting inside ASM, then:
    - at installation OUI will start ASM instance on each node with CLUSTER_DATABASE=true;
    - the first diskgroup, which contains OCR and votings, will be mounted Shared Mode;
    - all other diskgroups, used by each db instance, will be mounted Shared Mode, too,
    even if you'll take care that they'll be mounted by one ASM instance at a time.
At our site, for our three-node cluster, this fact has two consequences.
One consequence is that we hit the ORA-15068 limit (max 63 diskgroups) earlier than expected:
- none of the instances on this cluster are Production (only Test, Dev, etc);
    - we planned to have usually 10 instances on each node, each of them with 3 diskgroups (DATA, REDO, FRA),
    so 30 diskgroups each node, for a total of 90 diskgroups (30 instances) on the cluster;
    - in case one node failed, surviving two should get resources of the failing node,
    in the worst case: one node with 60 diskgroups (20 instances), the other one with 30 diskgroups (10 instances)
    - in case two nodes failed, the only node survived should not be able to mount additional diskgroups
    (because of limit of max 63 diskgroup mounted by an ASM instance), so all other would remain unmounted
    and their db instances stopped (they are not Production instances);
But it didn't work, since ASM has the parameter CLUSTER_DATABASE=true, so you cannot mount 90 diskgroups; you can mount 62 globally (once a diskgroup is mounted on one node, it is given a number between 2 and 63,
and diskgroups mounted on other nodes cannot reuse that number).
So as a matter of fact we can mount only 21 diskgroups (about 7 instances) on each node.
The second consequence is that, every time our handmade CRS scripts dismount diskgroups from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
We also found in the CRS log that, whenever we mounted diskgroups (on one node only), additional fake resources of type ora*.dg were created on the fly behind the scenes, maybe to accommodate the fact that on the other nodes those diskgroups were left unmounted (once again, instances here are single-node, not RAC type).
    That's all.
    Did anyone go into similar problems?
    We opened a SR to Oracle asking about what options do we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
    - best practises require that also online redolog files are in a separate diskgroup, in case of ASM logical corruption (we are a little bit paranoid): in case DATA dg gets corrupted, you can restore Full backup plus Archived RedoLog plus Online Redolog (otherwise you will stop at the latest Archived).
    So we have 3 diskgroups for each db instance: DATA, REDO, FRA.
- in case of a fail-over cluster (active-passive), Oracle provides some templates of CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you might also create additional scripts for additional resources you might need (Oracle agents, backup agents, file systems, monitoring tools, etc.)
About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
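Concretely, if the ASM instances use an spfile, that change is a sketch like the following (followed by a restart of the ASM instances); with a pfile you would simply edit the parameter:
alter system set cluster_database=false scope=spfile sid='*';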
Oracle's answers were a little bit odd:
- first they told us to use Grid Standalone (without CRS, OCR, or voting disks at all), but we told them that we needed a fail-over solution
- then they told us to use RAC One Node, which actually has some better features: in case of a planned fail-over it might be able to migrate
client sessions without causing a reconnect (for SELECTs only, not in the case of a running transaction). But we already have a few fail-over clusters; we cannot change them all.
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
    Thanks Marko for pointing us to OCFS2 pros / cons.
We asked Oracle for confirmation that it is supported; they said yes, but that it is discouraged (and also doesn't work with OUI or ASMCA).
Anyway, that's the simplest approach. This is a non-Prod cluster, so we'll start here and, if everything is fine, after a while we'll do it also on the Prod ones.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()

    Hi
I am getting the error below in my iisproxy.log file. I wanted to see the source of this URL.cpp file to find out why it is failing, but I am not able to open the DLLs with a decompiler either.
Could anyone tell me where I can get the source code for iisproxy.dll and iisforward.dll?
This request fails only when it is routed through IIS.
    ================New Request: [/GLMS/index.jsp.wlforward] =================
    Mon Nov 24 14:19:48 2014 <503614168189882> SSL must be used
    Mon Nov 24 14:19:48 2014 <503614168189882> Initializing SSL
    Mon Nov 24 14:19:48 2014 <503614168189881> INFO: Initializing SSL library
    Mon Nov 24 14:19:48 2014 <503614168189881> timer thread starting
    Mon Nov 24 14:19:48 2014 <503614168189881> Loaded 1 trusted CA's
    Mon Nov 24 14:19:48 2014 <503614168189881> sysMkdirs() on 'C:\windows\TEMP\_wl_proxy':
    Mon Nov 24 14:19:48 2014 <503614168189881> getWLFilePath: Complete File name = [C:\windows\TEMP\_wl_proxy\orbrandom.txt]
    Mon Nov 24 14:19:48 2014 <503614168189881> INFO: Successfully initialized SSL
    Mon Nov 24 14:19:48 2014 <503614168189882> SSL configured successfully
    Mon Nov 24 14:19:48 2014 <503614168189882> resolveRequest: wlforward: /TEST/index.jsp
    Mon Nov 24 14:19:48 2014 <503614168189882> URI is /GLMS/index.jsp, len=15
    Mon Nov 24 14:19:48 2014 <503614168189882> Request URI = [/TEST/index.jsp]
    Mon Nov 24 14:19:48 2014 <503614168189882> attempt #0 out of a max of 50
    Mon Nov 24 14:19:48 2014 <503614168189882> Trying a pooled connection for 'XX.XX.XX.XX/7002/7002'
    Mon Nov 24 14:19:48 2014 <503614168189882> getPooledConn: No more connections in the pool for Host[XX.XX.XX.XX] Port[7002] SecurePort[7002]
    Mon Nov 24 14:19:48 2014 <503614168189882> general list: trying connect to '192.168.17.180'/7002/7002 at line 1306 for '/GLMS/index.jsp'
    Mon Nov 24 14:19:48 2014 <503614168189882> New SSL URL: match = 0 oid = 22
    Mon Nov 24 14:19:48 2014 <503614168189882> Connect returns -1, and error no set to 10035, msg 'Unknown error'
    Mon Nov 24 14:19:48 2014 <503614168189882> EINPROGRESS in connect() - selecting
    Mon Nov 24 14:19:48 2014 <503614168189882> Setting peerID for new SSL connection
    Mon Nov 24 14:19:48 2014 <503614168189882> c0a8 11b4 5a1b 0000                          ....Z...
    Mon Nov 24 14:19:48 2014 <503614168189882> Local Port of the socket is 57397
    Mon Nov 24 14:19:48 2014 <503614168189882> Remote Host xx.xx.xx.xx Remote Port 7002
    Mon Nov 24 14:19:48 2014 <503614168189882> general list: created a new connection to 'XX.XX.XX.XX'/7002 for '/GLMS/index.jsp', Local port: 57397
    Mon Nov 24 14:19:48 2014 <503614168189882> WLS info in sendRequest:  XX.XX.XX.XX:7002 recycled? 0
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept]=[application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept-Encoding]=[gzip, deflate]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Accept-Language]=[en-IN]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Cookie]=[ADMINCONSOLESESSION=9fTkJypQ229r1ZHx6cQZG8cwHb0T0ssW8TkM7zyzzCVvNzjzDsf2!1779325670; JSESSIONID=GcZVJyXT8WMyv9pT8xGNzndSPCbBCcy1tfm5yRG1DSv8PhT97gv9!1779325670; _WL_AUTHCOOKIE_ADMINCONSOLESESSION=WcL9RbOJFiDqn3LiZO0g]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[Host]=[localhost]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs from client:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)]
    Mon Nov 24 14:19:48 2014 <503614168189882> URL::sendHeaders(): meth='GET' file='/GLMS/index.jsp' protocol='HTTP/1.1'
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept]=[application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept-Encoding]=[gzip, deflate]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Accept-Language]=[en-IN]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Cookie]=[ADMINCONSOLESESSION=9fTkJypQ229r1ZHx6cQZG8cwHb0T0ssW8TkM7zyzzCVvNzjzDsf2!1779325670; JSESSIONID=GcZVJyXT8WMyv9pT8xGNzndSPCbBCcy1tfm5yRG1DSv8PhT97gv9!1779325670; _WL_AUTHCOOKIE_ADMINCONSOLESESSION=WcL9RbOJFiDqn3LiZO0g]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Host]=[localhost]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[User-Agent]=[Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0)]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Connection]=[Keep-Alive]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-Client-IP]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[Proxy-Client-IP]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-Forwarded-For]=[::1]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-Client-Keysize]=[128]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-WebLogic-KeepAliveSecs]=[30]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[X-WebLogic-Force-JVMID]=[unset]
    Mon Nov 24 14:19:48 2014 <503614168189882> Hdrs to WLS:[WL-Proxy-SSL]=[true]
    Mon Nov 24 14:19:48 2014 <503614168189881> WARN: GetSessionCallback: No session match found
    Mon Nov 24 14:19:48 2014 <503614168189881> WARN: DeleteSessionCallback: No match found!!
    Mon Nov 24 14:19:48 2014 <503614168189882> ERROR: SSLWrite failed
    Mon Nov 24 14:19:48 2014 <503614168189882> SEND failed (ret=-1) at 805 of file ..\nsapi\.\URL.cpp
    Mon Nov 24 14:19:48 2014 <503614168189882> *******Exception type [WRITE_ERROR_TO_SERVER] raised at line 806 of ..\nsapi\.\URL.cpp
    Mon Nov 24 14:19:48 2014 <503614168189882> Marking xx.xx.xx.xx:7002 as bad
    Mon Nov 24 14:19:48 2014 <503614168189882> Exception occurred for backend host 'XX.XX.XX.XX/7002/0' while sending request : 'WRITE_ERROR_TO_SERVER [os error=0,  line 806 of ..\nsapi\.\URL.cpp]: '
    Mon Nov 24 14:19:48 2014 <503614168189882> got exception in sendRequest phase: WRITE_ERROR_TO_SERVER [os error=0,  line 806 of ..\nsapi\.\URL.cpp]:  at line 1019; last_error 0
    Mon Nov 24 14:19:48 2014 <503614168189882> INFO: Closing SSL context
    Mon Nov 24 14:19:48 2014 <503614168189882> Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()

Yes, that is right.
Essentially you should be doing one of the following on the WebLogic side:
1) Install certs on WebLogic that were obtained from a commercial CA (like Verisign, Thawte, etc.).
In this case, you will receive the rootCA cert along with the other bundled certs and the private key;
these rootCA certs are publicly available (your browser will already be using them).
2) Use certs signed by your company (companies can maintain their own CA).
In this case you should have a rootCA cert from your company.
3) Use the demo certs that were shipped with WebLogic.
In this case, the rootCA cert can be obtained from DemoTrust.jks.
This is documented at http://e-docs.bea.com/wls/docs90/plugins/isapi.html#114851 (it should be the same for any of the plug-ins).
The Apache plug-in can understand the .crt extension.
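For reference, the relevant plug-in settings end up looking something like this in iisproxy.ini (the host, port, and certificate path here are placeholders):
WebLogicHost=xx.xx.xx.xx
WebLogicPort=7002
SecureProxy=ON
TrustedCAFile=C:\certs\trustedCA.pem
RequireSSLHostMatch=false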
    -Vijay

  • How to overlay color image over a grayscale image without IMAQ?

I would like to display a color image over a grayscale image, and I would like the color image to look translucent. How would I do this without using IMAQ functions? I am currently displaying my grayscale image using an Intensity Graph.

    > I would like to display a color image over a grayscale image; I would
    > like the color image look translucent. How would I do this without
    > using IMAQ functions? I am currently displaying my grayscale image
    > using Intensity Graph.
    Transparency and overlays are really just arithmetic on the images. It
    is either done by the windowing system or by you. At the moment you
    can't set transparency on LV controls. You can set it on LV floating
    windows on some OSes, but then you will need to have the windows lined up.
    A more direct approach is to lighten or darken the color image elements
    based upon whether they will display over white, black, or a shade of
    gray. If the images don't have the same size pixels, this will have a
first step of resampling the images so they do. Then combine the pixels
    using the transparency you were going to apply to the color and the
    shade of gray beneath it. I'm being vague here because there are lots
    of physical models for combining colors.
    If I look in my paint program I see about fifteen, so this is where you
    get to make a choice and decide if black behind a red pixel is black, or
    dark red. It all depends on whether these are two transparent
    images(acetate sheets) backlit, or is it a transparency over a
    nontransmitting media like paper. Anyway, if you can be more specific
    about what you want, none of this is hard, typically just scaling the
    int32s, adding, and some sort of normalization.
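As a concrete example of that scaling and adding, with a blend factor a between 0 and 1 (a = 0.3 here, i.e. a mostly transparent overlay), each output channel is just
    out_R = a*color_R + (1-a)*gray
    out_G = a*color_G + (1-a)*gray
    out_B = a*color_B + (1-a)*gray
so a pure red pixel (255, 0, 0) over a mid-gray value of 128 comes out at roughly (166, 90, 90).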
    Greg McKaskle

  • Is my installation of SQL Server Fail Over cluster correct?

I made a 2-node SQL Server 2012 failover cluster but had some problems during installation, so I wanted to know if the steps I performed below are correct.
    Hardware
    Node1 192.168.1.10
    Node2 192.168.1.11
    Added following entries in DNS
    cluster.domain.local 192.168.1.12 (for Windows Cluster)
    msdtc.domain.local 192.168.1.13 (for MSDTC)
    sql.domain.local 192.168.1.14 (for SQL Server Cluster)
    Cluster Storage
    Disk1 (for Quorum)
    Disk2 (for MSDTC
    Disk3 (for SQL Server)
    Now comes the installation. I am performing all these steps as DOMAIN ADMIN.
    1. First I installed clustering role on both nodes
2. Then I ran the failover validation wizard on Node1, adding both nodes, which went fine (there were some warnings)
3. Then I made a Windows cluster on Node1 using these two nodes. I gave the cluster the name and IP which I wrote above, i.e. cluster.domain.local 192.168.1.12
4. The cluster was created and both nodes are UP (the PowerShell equivalent of steps 2-4 is shown at the end of this post).
    Now I want to ask a question here. Is it best practice to perform the above operation using DOMAIN ADMIN? Or if I use a standard domain user account with local admin rights, will it work? If not then exactly what rights are required to perform this operation.
    5. Then I installed "Application Server" role on both Node1 and Node2 and also added "Distributed Transaction" feature
    6. Then I right clicked on Windows Cluster I created and added a new role/feature which is "DTC"
    7. I gave it the same name which I wrote above i.e. msdtc.domain.local 192.168.1.13
8. MSDTC was created, but when it tried to bring up its service, it threw an error. Upon investigation it turned out the Windows cluster cluster.domain.local didn't have the proper rights to create some objects in AD. I didn't know what rights to give, so I gave it full permission, and after that, when I created MSDTC again, the service came up fine.
So I want to know: what rights does cluster.domain.local require to create MSDTC?
    Am I doing good so far?
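For reference, the PowerShell equivalent of steps 2-4, using the same node names and cluster IP as above, would be roughly:
Test-Cluster -Node Node1, Node2
New-Cluster -Name cluster -Node Node1, Node2 -StaticAddress 192.168.1.12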

    Hello,
    >>Then I made a Windows Cluster on Node1 using these two nodes. I gave the name and IP to this cluster which I wrote above i.e. cluster.domain.local 192.168.1.10
Hello, I suppose this IP was a physical node IP; the Windows cluster IP was 192.168.1.12, and I suppose you must have given that IP as the Windows cluster IP. .10 and .11 are the physical nodes in the cluster, but .12 is the cluster IP. Correct me if I am wrong.
Did you do a failover and failback to check whether the cluster is configured correctly or not? If not, please do it.
    >>Then I ran fail over validation wizard on Node1 adding both nodes which went fine (there were some warnings)
Please remove the warnings also; they might cause issues. It's not necessarily correct every time, but make sure cluster validation is free of errors and warnings.
    >>Now I want to ask a question here. Is it best practice to perform the above operation using DOMAIN ADMIN?
You can do it with a domain admin account, as this is required to create the Cluster Name Object (CNO) in the domain, and a local account might not have that right, so I would say it's OK.
    >>I gave it the same name which I wrote above i.e. msdtc.domain.local
    192.168.1.11
Again, this IP is the node 2 IP; how can you give it to MSDTC? Use the link below for reference:
    http://blogs.msdn.com/b/cindygross/archive/2009/02/22/how-to-configure-dtc-for-sql-server-in-a-windows-2008-cluster.aspx
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • How do the application servers connect the new database after failing over from primary DB to standby DB

How do the application servers connect to the new database after failing over from the primary DB to the standby DB?
We have set up a DR environment with a standalone primary server and a standalone physical standby server on RHEL Linux 6.4. Now our application team would like to know:
When the primary DB server crashes, the standby DB server will take over the role of the primary DB through Data Guard fast-start failover. Since the applications were connected to the primary DB IP before, and the physical standby uses a different IP and listener, they would need to stop their application servers and re-configure their connections to point at the new DB server; they cannot tolerate this workaround.
Does Oracle have a better solution for this, so that the application can automatically detect the role transition and switch to the new IP without reconfiguring any connections or shutting down the application?
    Oracle support provides us the answer as following:
    ==================================================================
    Applications connected to a primary database can transparently failover to the new primary database upon an Oracle Data Guard role transition. Integration with Fast Application Notification (FAN) provides fast failover for integrated clients.
    After a failover, the broker publishes Fast Application Notification (FAN) events. These FAN events can be used in the following ways:
    Applications can use FAN without programmatic changes if they use one of these Oracle integrated database clients: Oracle Database JDBC, Oracle Database Oracle Call Interface (OCI), and Oracle Data Provider for .NET ( ODP.NET). These clients can be configured for Fast Connection Failover (FCF) to automatically connect to a new primary database after a failover.
    JAVA applications can use FAN programmatically by using the JDBC FAN application programming interface to subscribe to FAN events and to execute event handling actions upon the receipt of an event.
    FAN server-side callouts can be configured on the database tier.
    FAN events are published using Oracle Notification Services (ONS) and Oracle Streams Advanced Queuing (AQ).
    =======================================================================================
Does anyone have experience, related documentation, or other solutions? We don't have any background on FAN.
Thanks very much in advance.

    Hi mesbeg,
Thanks a lot.
For example, for a JBoss application server connecting to the DB, we just added the standby IP to the datasource configuration file, rather than adding a service on the DB side, like the following:
            <subsystem xmlns="urn:jboss:domain:datasources:1.0">
            <datasources>
                    <datasource jta="false" jndi-name="java:/jdbc/idserverDatasource" pool-name="IDServerDataSource" enabled="true" use-java-context="true">
                        <connection-url>jdbc:oracle:thin:@<primay DB IP>:1521:testdb</connection-url>
                        <connection-url>jdbc:oracle:thin:@<standby DB IP>:1521:testdb</connection-url>
                        <driver>oracle</driver>
                        <pool>
                            <min-pool-size>2</min-pool-size>
                            <max-pool-size>10</max-pool-size>
                            <prefill>true</prefill>
                        </pool>
                        <security>
                            <user-name>TEST_USER</user-name>
                            <password>Password1</password>
                        </security>
                        <validation>
                            <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/>
                            <validate-on-match>false</validate-on-match>
                            <background-validation>false</background-validation>
                            <use-fast-fail>false</use-fast-fail>
                            <stale-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleStaleConnectionChecker"/>
                            <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/>
                        </validation>
                    </datasource>
                    <drivers>
                        <driver name="oracle" module="com.oracle.jdbc">
                            <xa-datasource-class>oracle.jdbc.OracleDriver</xa-datasource-class>
                        </driver>
                    </drivers>
                </datasources>
            </subsystem>
If a failover occurs, JBoss will automatically be pointed to the standby DB. No additional actions are needed.
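For what it's worth, the same idea can also be expressed as a single connection-url using an Oracle address list in the thin URL (host names below are placeholders), so the driver walks through the hosts itself:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=primarydb)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=standbydb)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)))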

  • Physical standby database fail-over

    Hi,
    I am working on Oracle 10.2.0.3 on Solaris SPARC 64-bit.
I have a Data Guard configuration with a single physical standby database that uses real-time apply. We had a major application upgrade yesterday and, before the start of the upgrade, we cancelled the media recovery and disabled log_archive_dest_n so that it doesn't ship the archive logs to the standby site. We left the Data Guard configuration in this mode in case of a rollback.
    Primary:
    alter system set log_archive_dest_state_2='DEFER';
    alter system switch logfile;
    Standby:
alter database recover managed standby database cancel;
Due to problems induced by the application upgrade, we had to fail over to the physical standby, which had not been in sync with the primary since yesterday. I used the following method to fail over, since I did not want to apply any redo from yesterday.
    Standby:
    alter database activate physical standby database;
    alter database open;
    shutdown immediate;
startup
So, after this step, the database was a standalone database with no standby databases yet (it still has the log_archive_config and log_archive_dest_n parameters set, but I have DEFERred the log_archive_dest_n pointing to the old primary). I have even changed the archivelog deletion policy to NONE:
RMAN> configure archivelog deletion policy to none;
After the fail-over was completed, the log sequence started from sequence 1. We cleared the FRA to make space for the new archive logs and started off a FULL database backup (backup incremental level 0 database plus archivelog delete input). The backup succeeded, but we got these alerts in the backup log that RMAN cannot delete the archive logs:
RMAN-08137: WARNING: archive log not deleted as it is still needed
My questions here are:
1) Even though I have disabled the log_archive_dest_n parameters, why is RMAN not able to delete the archive logs after backup when there is no standby database for this failed-over database?
2) Are all the old backups marked unusable after a fail-over is performed?
FYI, flashback database was not used in this case as it did not serve our purpose.
    Any information or documentation links would be greatly appreciated.
    Thanks,
    Harris.

    Thanks for the reply.
The FINISH FORCE works in some cases, but if there is an archive gap (though it didn't report one in our case), it might not work sometimes (DOC ID 846087.1). So we followed the switch-over and fail-over best practices, which mention this "ACTIVATE PHYSICAL STANDBY" approach for a fail-over when you intend not to apply any archive logs. The process we followed is the right one.
    Anyhow, we got the issue resolved. Below is the resolution path.
1) Even if you DEFER the LOG_ARCHIVE_DEST_STATE_N parameters on the primary, there are some situations where the primary database in a Data Guard configuration will not delete the archive logs due to SCN issues. This issue may or may not arise in all fail-over scenarios. If it does, then do the following checks:
Follow DOC ID 803635.1, which describes a PL/SQL procedure to check for problematic SCNs in a Data Guard configuration even though the physical standby databases are no longer available (i.e., the Data Guard parameters log_archive_config and log_archive_dest_n='SERVICE=...' are still set, even though the corresponding LOG_ARCHIVE_DEST_STATE_N parameters are DEFERRED).
If this procedure returns any rows, then the primary database is not able to delete the archive logs because it still thinks there is a standby database and is trying to retain the archive logs because of the SCN conflict.
So, the best thing to do is remove the DG-related parameters from the spfile (log_archive_config and the log_archive_dest_n parameters).
After I made these changes, I ran a test backup using "backup archivelog all delete input", and the archive logs were deleted after the backup without any issues.
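For anyone else hitting this, the cleanup amounted to statements roughly like the following (the exact dest numbers depend on your configuration, and spfile-only changes need an instance restart to take effect):
SQL> alter system reset log_archive_config scope=spfile sid='*';
SQL> alter system reset log_archive_dest_2 scope=spfile sid='*';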
    Thanks,
    Harris.
    Edited by: user11971589 on Nov 18, 2010 2:55 PM
