Failover scenarios for AlwaysOn

What are the failover scenarios/situations in which failover happens when the databases are configured with AlwaysOn?
In our situation, the Windows failover cluster fails over from node 1 to node 2, but the databases are still pointing to node 1.
Thanks

Hi,
When a failure occurs, whether an availability group fails over immediately depends on both
the failover mode and the availability mode of the replica.
Please check the article below for more information:
http://msdn.microsoft.com/en-us/library/hh213151.aspx#Overview
After a failover, client applications that need to access the primary databases must connect to the new primary replica. Also, if the new secondary replica is configured to allow read-only access, read-only client applications can connect to it. For information about how clients connect to an availability group, see
Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server).
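As a quick check, the T-SQL below (a minimal sketch; the availability group name [AG1] is a placeholder) lists each replica's failover mode, availability mode and current role. Only a synchronous-commit replica configured for automatic failover will fail over without manual intervention, and clients should connect through the availability group listener name rather than an individual node name so they always reach the current primary.
-- Run on any replica: role, failover mode and availability mode per replica
SELECT ag.name                    AS availability_group,
       ar.replica_server_name,
       ar.failover_mode_desc,     -- AUTOMATIC or MANUAL
       ar.availability_mode_desc, -- SYNCHRONOUS_COMMIT or ASYNCHRONOUS_COMMIT
       ars.role_desc              -- PRIMARY or SECONDARY
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar
     ON ar.group_id = ag.group_id
LEFT JOIN sys.dm_hadr_availability_replica_states ars
     ON ars.replica_id = ar.replica_id;
-- Manual failover: run on the secondary that should become primary
-- ALTER AVAILABILITY GROUP [AG1] FAILOVER;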
Hope the information helps.
Tracy Cai
TechNet Community Support

Similar Messages

  • AlwaysOn Failover Scenarios

    Hi,
    I have implemented the AlwaysOn feature between two standalone SQL Server instances hosted on two clustered nodes in two different subnets (multisite clustering with a Node and File Share Majority quorum). I have configured AlwaysOn for
    automatic failover between the primary and the only secondary replica. There are two databases in it. The implementation went through successfully.
    Now, before going live, I wanted to test the failover scenarios. The first was to fail over manually between the nodes, both from the SQL Server side and from the Failover Cluster Manager console. Both went perfectly.
    The second test was to stop the SQL Server service and observe the result. When I stopped the primary, the resource group in Cluster Manager failed, and the databases were not connecting either. I expected the availability group to fail
    over to the secondary node and the databases to come up and keep running.
    Did I miss something in the implementation above, or is this the expected behavior of AlwaysOn? If so, what does automatic failover actually imply?
    Thanks & Regards

    Upon re-reading the question, I think we need some clarity. What do you mean by "failing over the node"? You do not fail over the node; you fail over the services running on the node. In your case, you said you have standalone SQL instances, which
    is what AlwaysOn requires. So by "failing over the node", do you mean you took the node offline, either by taking the SQL service offline or by taking the entire node offline? What did the AlwaysOn status show when you connected to the SQL instance running on the other node? It should
    now be the primary.
    When you take the first node offline, or one SQL service offline, the SQL service itself will not fail over, because this is not a clustered SQL instance; only the databases that are set up for AlwaysOn fail over, and not even all databases, just the ones in the availability group.
    In other words, it fails over the availability group, which determines which replica of the AG databases is primary and which is secondary.
    The AG dashboard on the secondary instance should tell you the status.
    Hope it helps!
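    As a concrete check during this kind of test, the query below (a minimal sketch using the standard AlwaysOn DMVs) can be run on the surviving instance to confirm whether it has taken over the primary role and whether the databases are healthy:
    -- Run on the secondary instance after stopping SQL Server on the primary
    SELECT ar.replica_server_name,
           ars.role_desc,                   -- should show PRIMARY on the surviving replica
           ars.operational_state_desc,
           ars.synchronization_health_desc
    FROM sys.dm_hadr_availability_replica_states ars
    JOIN sys.availability_replicas ar
         ON ar.replica_id = ars.replica_id;
    If the role never changes, verify that the replica pair is configured for synchronous commit with automatic failover and that the cluster still has quorum after the node is lost.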

  • Cluster Adobe Document Services for Failover scenario

    Hello All,
    We have ADS installed and working in our Dev/QA environment, deployed on its own standalone Web AS Java. Going forward, we want to set up our production systems in a clustered environment for an HA/failover scenario.
    From the notes I have read on the Service Marketplace, it is possible to cluster a standalone Web AS Java by manually clustering the SCS and installing the Java CI on one physical node (using local disk) and a Java dialog instance on the other physical node.
    I would like to know: once we have clustered the standalone Java manually, will there be any difference deploying ADS on this Java, since only the SCS is clustered and the CI is installed on a physical host?
    Has someone already implemented this kind of scenario?
    ECC 5.0, Web AS 6.40
    Database: SQL Server 2005
    OS: Windows 2003
    Please help. Your replies will be greatly appreciated with lots of points.
    Thanks,
    Fahad

    Hello Samrat,
    Thank you for your quick reply. You really helped me out.
    Unfortunately, I haven't marked this thread as a question, so I cannot give you any points. Is there any possibility of changing this thread into a question?
    Cheers,
    Matthias

  • Need test documents for RAC failover Scenarios

    Hello friends...
    By the end of this week I have to produce some test documents for RAC and the database server, including Sun Cluster failover scenarios.
    Can someone guide me to a link where I can get enough help?
    I have already managed to gather a fair amount of information, but I want to make sure I cover most of the topics.
    Thanks, Regards
    Monu Koshy

    Please check the following links.
    http://download-uk.oracle.com/docs/cd/B19306_01/rac.102/b14197/toc.htm
    http://download-uk.oracle.com/docs/cd/B19306_01/install.102/b14205/toc.htm
    -aijaz

  • Solaris 10 Cluster 3.2 with  2 zones in a failover scenario

    Hi
    I am looking for the best way to set things up for the following scenario.
    I have 2 M5000 servers with internal storage and a 6140 array for shared storage.
    I need to create 2 zones on each in a failover scenario (active/standby):
    On Server1, 3 out of 4 CPUs for Oracle Database Server 11g and 1 out of 4 CPUs for Oracle Application Server.
    On Server2, 3 out of 4 CPUs for Oracle Application Server and 1 out of 4 CPUs for Oracle Database Server 11g.
    Database files will be placed on the shared storage. In case of failure of Server1, the Oracle Database will fail over to Server2, and in case Server2 is down, the Oracle Application Server will fail over to Server1.
    Would a zone cluster using clzonecluster be better? If yes, how can I achieve the difference in CPU power in case of failure?
    Where is it best to keep the zone root path: on the internal storage or on the shared storage?
    What about the swap space for both zones?
    Is it better to use exclusive IPs, or will shared IPs be fine?
    Would it be better to do a sparse zone installation, or a whole root installation?
    What is the best way to achieve the required CPU assignments, and how much should be left for the global zone?
    Thanks in advance
    vangelis

    Hi Vangelis,
    Building a cluster requires some planning and an understanding of the concepts.
    A good start would be reading some of the documents linked from this URL: http://docs.sun.com/app/docs/doc/819-2969/gcbkf?a=view
    Regards,
    Davy

  • Test scenario for Physical Standby

    Dear Team,
    Could you please share the test scenarios for a physical standby:
    1. Read-only.
    2. Switchover.
    3. Failover.
    Many Thanks in Advance
    Arjun.

    You didn't mention the version.
    Oracle 10g Release 2, Data Guard Switchover and Failover best practices:
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
    Oracle9i, Data Guard Switchover & Failover best practices:
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_9iSwitchoveFailoverBestPractices.pdf
    Khurram
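    For a quick hands-on test, the classic SQL*Plus steps look roughly like the sketch below. This is a minimal outline assuming a single-instance 10g physical standby managed without the Data Guard broker; verify the exact commands against the MAA papers above before using them.
    -- 1. Read-only test (on the standby): stop redo apply, then open read only
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    -- when finished, restart the standby in MOUNT mode and resume redo apply:
    -- ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    -- 2. Switchover test
    ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;  -- on the primary
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;                                 -- on the standby
    ALTER DATABASE OPEN;                                                            -- on the new primary
    -- (restart the old primary in MOUNT mode so it becomes the new standby)
    -- 3. Failover test (only when the primary is really lost)
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    ALTER DATABASE OPEN;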

  • Uplink failover scenarios - The correct behavior

    Hello all,
    I'm somewhat confused about the failover scenarios related to the uplinks and the Fabric Interconnect (FI) switches, as we have a lot of failover points, whether in the vNIC, the FEX, the FI or the uplinks.
    I have some questions and I hope that someone can clear up this confusion:
    A. Fabric Interconnect failover
    1. As I understand it, when I create a vNIC it can be configured to use FI failover, which means that if FI A is down, or the uplink from the FEX to the FI is down, the same vNIC will fail over to the other FI via the second FEX (is that correct, and is that the first stage of the failover?).
    2. This vNIC will be seen by the OS as one NIC, and the OS will not notice or detect anything about the failover. Is that correct?
    3. Assume that I have 2 vNICs for the same server (a bare-metal blade with no ESX or VMware) and I have configured the 2 vNICs to work as a team (in the OS). Does that mean that if the primary FI or FEX is down, vNIC1 fails over to the 2nd FI, and if for any reason vNIC1 is still down (for example, if its uplink is down), the OS goes to the 2nd vNIC using teaming?
    B. FEX failover
    1. As I understand it, the blade server uses an uplink from the FEX to the FI based on its location in the chassis. So if this link is down, does that mean FI failover will trigger, or will the blade be assigned to another uplink (from the FEX to the FI)?
    C. Fabric Interconnect uplink failover
    1. Using a static pin LAN group, the vNIC is associated with an uplink. What happens if this uplink is down? Will the vNIC:
    a. be brought down, as per the Network Control policy applied, in which case the OS will go to the second vNIC;
    b. fail over to the second FI, with the OS not detecting anything; or
    c. be re-pinned by FI A to another uplink on the same FI with no failover?
    I found all three of these scenarios in different documents and posts, and I have not had the chance to test them yet, so it would be great if anyone who has tested this could explain.
    Finally, I need to know whether the correct scenario from the above also applies to the vHBA, or whether it uses another methodology.
    Thanks in advance for your support.
    Moamen

    Moamen
    A few things to keep in mind about Fabric Failover (FF) before I address your questions.
    FF is only supported on the M71KR and the M81KR.
    FF is only applicable/supported in End Host Mode of operation and applies only to Ethernet traffic. For FC traffic, one has to use multipathing software (the way FC failover has always worked). In End Host Mode, if anything along the path (adapter port, FEX-IOM link, uplinks) fails, FF is initiated for Ethernet traffic *by the adapter*.
    FF is an event triggered by a vNIC going down: the vNIC is brought down and the adapter initiates the failover, i.e. it sends a message to the other fabric to activate the backup veth (switchport), and that FI sends out gARPs for the MAC as part of it. Because it is adapter-driven, FF is only available on a few adapters, i.e. for now the ones whose firmware is done by Cisco.
    For the M71KR (Menlo), the firmware on the Menlo chip is made by Cisco; the Oplin and FC parts of the card are controlled by Intel/Emulex/Qlogic respectively.
    The M81KR is made by Cisco exclusively for UCS, and hence its firmware is done by us.
    Now to your questions -
    >1-      As I understand when I create a vNIC , it can be configured to use FI failover , which means if FI A is down , or the uplink from the FEX to the >FI is down , so using the same vNIC it will failover to the other FI via the second FEX ( is that correct , and is that the first stage of the failover ?).
    Yes
    > 2-      This vNIC will be seen by the OS as 1 NIC and it will not feel or detect anything about the failover done , is that correct ?
    Yes
    >3-      Assume that I have 2 vNICs for the same server (metal blade with no ESX or vmware), and I have configured 2 vNICs to work as team (by the >OS), does that mean that if primary FI or FEX is down , so using the vNIC1 it will failover to the 2nd FI, and for any reason the 2nd vNIC is down (for >example if uplink is down), so it will go to the 2nd vNIC using the teaming ?
    Instead of FF vNICs you can use NIC teaming. You bond the two vNICs, which creates a bond interface, and you specify an IP on it.
    With NIC teaming you do not configure the vNICs (in the Service Profile) with FF. So FF will not kick in; on a fabric failure the vNIC simply goes down, the teaming software sees it, and the teaming driver comes into effect.
    > B-      FEX failover
    > 1-      As I understand the blade server uses the uplink from the FEX to the FI based on their location in the chassis, so what if this link is down, > >does that mean FI failover will trigger, or it will be assigned to another uplink ( from the FEX to the FI)
    Yes, we use static pinning between the adapters and the IOM uplinks, and the pinning depends on the number of links.
    For example, if you have 2 links between the IOM and the FI:
    Link 1 - Blades 1, 3, 5, 7
    Link 2 - Blades 2, 4, 6, 8
    If Link 1 fails, blades 1, 3, 5, 7 move to the other IOM,
    i.e. they will not fail over to the other links on the same IOM-FI; it is not a port-channel.
    The vNIC-down event will be triggered. Whether FF is initiated depends on the setting (see the explanation above).
    > C-      Fabric Interconnect Uplink failover
    > 1-      Using static pin LAN group, the vNIC is associated with an uplink, what is the action if this uplink is down ? will the vNIC:
    > a.       Brought down , as per the Network Control policy applied , and in this case the OS will go for the second vNIC
    If you are using a static pin group, yes.
    If you are not using static pin groups, the same FI will re-pin it to another available uplink.
    Why? Because by defining static pinning you are deliberately defining the uplink/oversubscription ratio etc., and you don't want that vNIC to go to any other uplink. Both fabrics are active at any given time.
    > b.      FI failover to the second FI , the OS will not detect anything.
    Yes.
    > c.       The FI A will re-pin the vNIC to another uplink on the same FI with no failover
    For dynamic pinning, yes. For static pinning, no, as above.
    >I found all theses 3 scenarios in a different documents and posts, I did not have the chance it to test it yet, so it will be great if anyone tested it and >can explain.
    I would still highly recommend testing it. Maybe it's just me, but I don't believe anything until I have tried it.
    > Finally I need to know if the correct scenarios from the above will be applied to the vHBA or it has another methodology.
    A multipathing driver, as I mentioned before.
    FF *only* applies to Ethernet.
    Thanks
    --Manish

  • L2 Failover Plan for Point to Point

    We have 2 point-to-point links from 2 different ISPs for the purpose of connecting our office to our data center. We would like to configure a failover scenario using these two P2P links.
    I have been trying the STP method, but for some reason we could not succeed; it could be because of the distance.
    Please help me implement failover P2P connectivity using these two links.

    Hello,
    Is there a specific reason you are using Layer 2 between sites? If the links have the same bandwidth you could create a port-channel, where both physical links become one logical link. This is ideal because the failover is seamless, without losing any connectivity, and it is far superior to spanning tree.
    To configure this:
    interface Port-channel1
    switchport trunk encapsulation dot1q
    switchport mode trunk
    interface GigabitEthernet1/0/1
    description Link to ISP1
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    channel-group 1 mode active
    interface GigabitEthernet1/0/2
    description Link to ISP2
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    channel-group 1 mode active
    This could also be accomplished with IP routing. You can configure the interfaces as Layer 3 ports and set up a routing protocol between sites, like EIGRP. EIGRP can do equal- and unequal-cost load balancing, or simply route all traffic via one link and automatically route via the other if the primary goes down.
    If you provide more details I could offer more assistance with design and configuration.

  • SC3.2, S10, V40z - failover scenarios/troubles

    Hi,
    A few days ago I finished the setup above (2 nodes, SC 3.2, Solaris 10 x86 - all updates).
    There are two shared storage units connected to the cluster: a T3 (fibre) and a 3310 (SCSI RAID).
    Configured at the moment is a single MySQL instance in active/standby mode.
    Later, an NFS service might be added.
    There are three global mounts.
    The system needs to go into production ASAP, but there were some problems while testing the failover scenarios.
    = Disconnecting any combination of interconnect cables - WORKS
    = Disconnecting any combination of network cables - WORKS
    = Shutting down any node - WORKS
    = Powering off any node - WORKS
    = RGs and resources are switched as expected and any node is able to take ownership
    The one test which failed was when the SCSI and FC cables were unplugged.
    In both cases, both nodes rebooted almost instantly.
    Is this behavior configurable or expected?
    Any suggestions for another test scenario?
    I found a single forum thread describing a similar problem, which was tracked down to bad grounding... does anyone else have experience with that?
    I can send more details if anyone is able to help.
    Thanks in advance !!!
    Paul.

    OK, I repeated the scenario yesterday.
    For some reason, only the node that was mastering the RG rebooted, after panicking about state database replicas.
    I'm not sure whether this is normal behavior or whether there is some misconfiguration.
    Please, see the logs and cluster info below:
    ========
    Cluster Info
    ========
    -- Cluster Nodes --
    Node name Status
    Cluster node: CLNODE2 Online
    Cluster node: CLNODE1 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: CLNODE2:ce1 CLNODE1:ce1 Path online
    Transport path: CLNODE2:bge1 CLNODE1:bge1 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: CLNODE2 1 1 Online
    Node votes: CLNODE1 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d7s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: new_mysql CLNODE1 CLNODE2
    Device group servers: new_ibdata CLNODE1 CLNODE2
    Device group servers: new_binlog CLNODE1 CLNODE2
    -- Device Group Status --
    Device Group Status
    Device group status: new_mysql Online
    Device group status: new_ibdata Online
    Device group status: new_binlog Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: mysql-failover-rg mysql-has mysql-lh mysql-res
    -- Resource Groups --
    Group Name Node Name State Suspended
    Group: mysql-failover-rg CLNODE2 Offline No
    Group: mysql-failover-rg CLNODE1 Online No
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: mysql-has CLNODE2 Offline Offline
    Resource: mysql-has CLNODE1 Online Online
    Resource: mysql-lh CLNODE2 Offline Offline - LogicalHostname offline.
    Resource: mysql-lh CLNODE1 Online Online - LogicalHostname online.
    Resource: mysql-res CLNODE2 Offline Offline
    Resource: mysql-res CLNODE1 Online Online - Service is online.
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: CLNODE2 sc_ipmp0 Online ce0 Online
    IPMP Group: CLNODE2 sc_ipmp0 Online bge0 Online
    IPMP Group: CLNODE1 sc_ipmp0 Online ce0 Online
    IPMP Group: CLNODE1 sc_ipmp0 Online bge0 Online
    =========
    Devices
    =========
    ===
    DIDs
    ===
    CLNODE1:root[]didadm -L
    1 CLNODE2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1 INTERNAL DISKS/HARDWARE RAID
    2 CLNODE2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2 INTERNAL DISKS/HARDWARE RAID
    3 CLNODE2:/dev/rdsk/c4t60020F200000C51E48874D0D000DA3EEd0 /dev/did/rdsk/d3 FC VOLUMES
    3 CLNODE1:/dev/rdsk/c4t60020F200000C51E48874D0D000DA3EEd0 /dev/did/rdsk/d3 FC VOLUMES
    4 CLNODE2:/dev/rdsk/c4t60020F200000C51E48874D49000686F0d0 /dev/did/rdsk/d4 FC VOLUMES
    4 CLNODE1:/dev/rdsk/c4t60020F200000C51E48874D49000686F0d0 /dev/did/rdsk/d4 FC VOLUMES
    5 CLNODE1:/dev/rdsk/c5t1d0 /dev/did/rdsk/d5 SCSI RAID
    5 CLNODE2:/dev/rdsk/c5t1d0 /dev/did/rdsk/d5 SCSI RAID
    6 CLNODE1:/dev/rdsk/c5t0d0 /dev/did/rdsk/d6 SCSI RAID
    6 CLNODE2:/dev/rdsk/c5t0d0 /dev/did/rdsk/d6 SCSI RAID
    7 CLNODE2:/dev/rdsk/c4t60020F200000C51E48874DA900088862d0 /dev/did/rdsk/d7 FC VOLUMES
    7 CLNODE1:/dev/rdsk/c4t60020F200000C51E48874DA900088862d0 /dev/did/rdsk/d7 FC VOLUMES
    8 CLNODE2:/dev/rdsk/c4t60020F200000C51E48874DDD000CE109d0 /dev/did/rdsk/d8 FC VOLUMES
    8 CLNODE1:/dev/rdsk/c4t60020F200000C51E48874DDD000CE109d0 /dev/did/rdsk/d8 FC VOLUMES
    11 CLNODE1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d11 INTERNAL DISKS/HARDWARE RAID
    12 CLNODE1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d12 INTERNAL DISKS/HARDWARE RAID
    ===
    metasets
    ===
    CLNODE1:root[]metaset -s new_binlog
    Set name = new_binlog, Set number = 8
    Host Owner
    CLNODE1 Yes
    CLNODE2
    Driv Dbase
    d5 Yes
    d6 Yes
    CLNODE1:root[]metaset -s new_mysql
    Set name = new_mysql, Set number = 5
    Host Owner
    CLNODE1 Yes
    CLNODE2
    Driv Dbase
    d3 Yes
    d4 Yes
    CLNODE1:root[]metaset -s new_ibdata
    Set name = new_ibdata, Set number = 7
    Host Owner
    CLNODE1 Yes
    CLNODE2
    Driv Dbase
    d7 Yes
    d8 Yes
    ===
    metadb info
    ===
    CLNODE1:root[]metadb -s new_binlog
    flags first blk block count
    a m luo r 16 8192 /dev/did/dsk/d5s7
    a luo r 16 8192 /dev/did/dsk/d6s7
    CLNODE1:root[]metadb -s new_mysql
    flags first blk block count
    a m luo r 16 8192 /dev/did/dsk/d3s7
    a luo r 16 8192 /dev/did/dsk/d4s7
    CLNODE1:root[]metadb -s new_ibdata
    flags first blk block count
    a m luo r 16 8192 /dev/did/dsk/d7s7
    a luo r 16 8192 /dev/did/dsk/d8s7
    ===
    md.tab - 3 configured mirrors mounted as global
    ===
    d110 -m d103 d104
    new_mysql/d110 -m new_mysql/d103 new_mysql/d104
    d120 -m d115 d116
    new_binlog/d120 -m new_binlog/d115 new_binlog/d116
    d130 -m d127 d128
    new_ibdata/d130 -m new_ibdata/d127 new_ibdata/d128
    =========
    Log at time of failure - CLNODE1 is the master - SCSI cable disconnected - CLNODE2 takes over RG after CLNODE1's panic
    =========
    Aug 7 14:16:27 CLNODE1 scsi: [ID 107833 kern.warning] WARNING: /pci@1d,0/pci1022,7450@1/pci1000,1010@1/sd@0,0 (sd81):
    Aug 7 14:16:27 CLNODE1 disk not responding to selection
    Aug 7 14:16:28 CLNODE1 scsi: [ID 107833 kern.warning] WARNING: /pci@1d,0/pci1022,7450@1/pci1000,1010@1/sd@1,0 (sd82):
    Aug 7 14:16:28 CLNODE1 disk not responding to selection
    Aug 7 14:16:33 CLNODE1 scsi: [ID 107833 kern.warning] WARNING: /pci@1d,0/pci1022,7450@1/pci1000,1010@1/sd@0,0 (sd81):
    Aug 7 14:16:33 CLNODE1 disk not responding to selection
    Aug 7 14:16:35 CLNODE1 scsi: [ID 107833 kern.warning] WARNING: /pci@1d,0/pci1022,7450@1/pci1000,1010@1/sd@1,0 (sd82):
    Aug 7 14:16:35 CLNODE1 disk not responding to selection
    Aug 7 14:16:38 CLNODE1 scsi: [ID 107833 kern.warning] WARNING: /pci@1d,0/pci1022,7450@1/pci1000,1010@1/sd@0,0 (sd81):
    Aug 7 14:16:38 CLNODE1 disk not responding to selection
    Aug 7 14:16:38 CLNODE1 md: [ID 312844 kern.warning] WARNING: md: state database commit failed
    Aug 7 14:16:39 CLNODE1 cl_dlpitrans: [ID 624622 kern.notice] Notifying cluster that this node is panicking
    Aug 7 14:16:39 CLNODE1 unix: [ID 836849 kern.notice]
    Aug 7 14:16:39 CLNODE1 ^Mpanic[cpu1]/thread=fffffe800030bc80:
    Aug 7 14:16:39 CLNODE1 genunix: [ID 268973 kern.notice] md: Panic due to lack of DiskSuite state
    Aug 7 14:16:39 CLNODE1 database replicas. Fewer than 50% of the total were available,
    Aug 7 14:16:39 CLNODE1 so panic to ensure data integrity.
    Aug 7 14:16:39 CLNODE1 unix: [ID 100000 kern.notice]
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bb80 md:mddb_commitrec_wrapper+8c ()
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bbc0 md_mirror:process_resync_regions+16a ()
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bbf0 md_mirror:check_resync_regions+df ()
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bc50 md:md_daemon+10b ()
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bc60 md:start_daemon+e ()
    Aug 7 14:16:39 CLNODE1 genunix: [ID 655072 kern.notice] fffffe800030bc70 unix:thread_start+8 ()
    Aug 7 14:16:39 CLNODE1 unix: [ID 100000 kern.notice]
    Aug 7 14:16:39 CLNODE1 genunix: [ID 672855 kern.notice] syncing file systems...
    Aug 7 14:16:39 CLNODE1 genunix: [ID 733762 kern.notice] 1
    Aug 7 14:16:40 CLNODE1 genunix: [ID 904073 kern.notice] done
    Aug 7 14:16:41 CLNODE1 genunix: [ID 111219 kern.notice] dumping to /dev/dsk/c1t0d0s1, offset 429391872, content: kernel
    Aug 7 14:16:52 CLNODE1 genunix: [ID 409368 kern.notice] ^M100% done: 148178 pages dumped, compression ratio 4.10,
    Aug 7 14:16:52 CLNODE1 genunix: [ID 851671 kern.notice] dump succeeded
    Aug 7 14:19:39 CLNODE1 genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_118855-36 64-bit
    Aug 7 14:19:39 CLNODE1 genunix: [ID 172907 kern.notice] Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    =========================================================
    Anyone?
    Thanks in advance!

  • How to use pre-defined scenarios for RosettaNet with XI 3.0

    Hi all.
    I am working at Comgroup Shanghai Co., Ltd., which is a partner of SAP China.
    We have a potential customer who uses RosettaNet as their supply chain EDI system.
    I would like to build a demo to demonstrate the pre-defined scenarios from the RosettaNet RNIF 2.0 packages.
    Where can I find such a guideline? I have checked the article "Implementing RosettaNet with XI 3.0", but it does not help me.
    Another question is how I can develop my own scenarios for RNIF standards which are not included in the XI RosettaNet business package.

    Hi Andy,
    See the instructions below, to be followed after you have installed the RosettaNet STK.
    Below is a sample configuration for the PIP3B2 scenario. Part I uses the STK and Part II uses 2 XI systems.
    Part I. Test Using STK
    In your XI ID:
    1) Define both parties, 1) Shipper (your partner) and 2) Receiver (your company), with identifiers such as DUNS numbers.
    2) In the ID, go to Tools > Transfer Integration Scenarios from IR --> select the scenario "PIP3B2_Receiver" from the drop-down list.
    3) Follow the 4 steps in the config wizard.
    4) While creating your CC, create it from the channel template delivered with the RosettaNet BP.
       4.1) Specify the URL for the STK, which should be in the following format:
    http://<STK Server>:<port>/rosettanet/servlet/listenerServlet?userId=<Party Name>
       4.2)In the location fields, enter your location and your partner location.
    In your STK
    1) Start the RosettaNet STK.
    2) Enter the user ID (the partner name in XI).
    3) Select the test scenario, 3B2V01.01-AdvanceShipmentNotification-0001-Scenario-Shipper.
    4) Enter the Global Business ID (DUNS number) and Location ID for both partners. These fields should be the same as in your R/3 party configuration.
    5) Enter the URL as follows:
    http://<XI Server:<J2EE_Port>/MessagingSystem/receive/RNIFAdapter/RNIF
    If everything is configured correctly as mentioned, you should be able to test your single-action scenario.
    Part II. Test Using Another XI System
    For this, follow the XI configuration steps above.
    Configure one XI System as PIP3B2 Shipper using the Scenario "PIP3B2_Shipper" and config wizard as mentioned above.
    Configure the other XI as PIP3B2 Receiver using the scenario "PIP3B2_Receiver" and config wizard.
    In the Url field, specify the URL as follows:
    http://<XI host>:<J2EE_Port>/MessagingSystem/receive/RNIFAdapter/RNIF
    Hope this helps.
    Regards,
    Sam Raju

  • Business scenarios for daily, weekly and nightly loads!!

    Can anybody tell me some business scenarios for daily, weekly and nightly loads where we monitor the loads in production support?
    Regards
    srikanth.ch

    Hi,
    It all depends on your business needs. In general, on a daily basis you will load SD and MM, which covers all the stock and sales for the day. Most clients use LO Cockpit extraction for both modules. FI chains run at month end; for example, the AP (Accounts Payable) and AR (Accounts Receivable) chains are run at month end.
    Some SD DataSources: 2lis_13_vdhdr, 2lis_13_vditm, 2lis_13_vdkon - all of these pull billing-related data.
    Some MM DataSources: 2lis_03_bx and 2lis_03_bf.
    Regards,
    Harish Raju

  • Typical business case scenarios for SharePoint 2013 Apps

    What are the typical real-world business case scenarios for 1. SharePoint-hosted apps, 2. auto-hosted apps, and 3. provider-hosted apps? Why do people choose any of these app models in comparison to a typical on-premise solution?
    Another question: when should one go for an on-premise solution rather than apps? What are the real-world scenarios?

    Let's discuss first whether to develop apps or a full-trust solution.
    On the technical side, an on-premise (full-trust) solution is the popular choice for organizations that have SharePoint hosted on premise (not SharePoint Online). If you are using SharePoint Online, there is no choice except apps (and possibly sandboxed solutions, but those are deprecated).
    So for SharePoint Online the only viable option is developing apps. For on-premise you can develop either full-trust solutions or apps; however, since apps require a bit more configuration and do not yet offer the full feature set you get with full trust, most people are still
    developing full-trust solutions on premise.
    Regarding the business requirement at hand, you can develop apps (even on-premise) if your requirement really looks like an app. For example, say you would like to develop a SharePoint solution that shows data from SAP inside SharePoint. The solution
    has mostly nothing to do with SharePoint itself but calls SAP services to do its work, so you can develop it as an app and deploy it inside SharePoint. Another example is importing CSV files into a SharePoint list; you can write an app for that. Mostly, though, I think you need to decide whether
    to develop apps based on what the apps API can do. The API is still very limited in features/functionality, so even if your requirement looks like an app, you can't build it as an app if the current SharePoint apps API doesn't support the functionality.
    Now let's talk about which app model you would use. SharePoint-hosted apps mean everything is hosted inside SharePoint and no external infrastructure is required; one example is the CSV file upload app. However, with SharePoint-hosted apps
    you can't use C#; you only have JavaScript to develop the app. There are rumours that auto-hosted apps will be deprecated (I am not sure), but there is no real conceptual difference apart from where the app is hosted (both are hosted outside SharePoint). Provider-/auto-hosted
    apps run outside SharePoint (i.e., in an IIS web site) and communicate with SharePoint through REST or the client object model. One provider-hosted app in the Office app market is the Adlib PDF conversion app. It allows you to select multiple files and
    convert them to a PDF file. Basically, the conversion site is hosted somewhere outside (in the provider's infrastructure). When you select the files and click 'convert' from the ribbon, the user is redirected to the provider site (a different URL) and the files are merged
    and converted to PDF. Since it runs outside of SharePoint, you can use any development language (C#, Java, PHP, etc.). The provider-hosted app talks to SharePoint through REST or the client object model.
    So in summary, a SharePoint-hosted app has less complexity, as it doesn't require extra infrastructure and runs inside SharePoint, but you are limited to JavaScript to develop your app. On the other hand, with provider-hosted apps you can use
    any development language, but you need to consider the extra level of security/complexity to integrate the app with SharePoint.
    Thanks,
    Sohel Rana
    http://ranaictiu-technicalblog.blogspot.com
    Thanks for this nice reply. Please clarify whether only auto-hosted apps may be deprecated, or whether provider-hosted apps may also be deprecated by MS?

  • Complicated Free goods scenario for Beverage

    Hi
    These are scenarios for the F&B business - mostly they have complicated promotions with free gifts which I think standard SAP cannot cover. If any of you have handled similar cases, please share how you did it, thanks.
    1. Buy soda water, size 10 oz, regardless of flavor, for 1 pack (1 pack consists of 24 bottles), then get 1 bottle free.
    2. Buy soda water, size 10 oz, regardless of flavor, for 1 pack, then get 1 bottle free - with a maximum of 2 sets.
    3. Buy soda water, either size 10 oz or 15 oz, for 1 pack, and get 1 bottle of 10 oz free.
    4. Buy soda water, regardless of flavor, as a set, i.e. for every 36 packs of combined sizes 10 oz, 15 oz and 1 liter, get free items of 2 bottles per pack for 10 oz/15 oz and 1 bottle per pack for 1 liter.
    5. Similar to 4, with the additional condition that 1 flavor must be more than 50% - for example, it must be at least 50% coke.
    6. Sales quota - buy 1 pack, get 1 bottle free - with a maximum of 100 bottles per customer.
    Please kindly share your experience or ideas. Thank you.
    Chanchana

    HI Chanchana,
    I had a similar problem while configuring free goods in the ceramics industry (sanitaryware division).
    For example, in the case of wash basins, basins of different colors are defined as different materials, and the requirement was:
    2 black & 2 red = 1 tap free
    or
    1 red, 1 white & 1 black = 2 taps free.
    We did a lot of R&D on the standard SAP free goods scenario, from which I can conclude the following:
    1) Standard SAP only allows 1 + 1 (same material, inclusive) or 1 + 1 (different material, exclusive) to be configured as free goods.
    In your case, free goods configuration will only work if the main material and the free goods are defined as two different materials and no third material comes into the picture.
    For example: For 10 qty of material A, 1 qty of material A free - possible.
                 For 10 qty of material A, 1 qty of material B free - possible.
                 For 10 qty of material A plus 5 qty of material B, 1 qty of material C free - not possible in standard SAP.
    You can explain this to your client, but if they still insist on having it, you will have to develop a custom 'Z' application to control it. However, that might turn out to be very complex, since pricing and COGS of the free material have to be taken care of along with a lot of other factors; also, delivery and billing have to go through standard SAP, and mapping that will be complex again.
    All the best.
    Regards,
    Amit
    Edited by: Amit Iyer on Sep 20, 2010 10:17 AM

  • Logic to retrieve batch number in batch split scenario for a material

    Hi All!
    LIPS-CHARG gives the batch number for a material in the normal scenario. But in a batch split scenario for a material, what should the logic be to retrieve the batch number based on POSNR, VBELN and UECHA in LIPS?
    UECHA corresponds to the higher-level item of a batch.
    I was given the following logic, but it is not pulling any values in spite of a batch split being available for the material. The logic is:
    Select lips-charg (batch number) where lips-posnr = lips-uecha (higher-level item batch).
    Please advise.
    Regards
    Praneeth

    Hi Praneeth,
    The way LIPS records look in a batch split scenario is this: let us say you have a delivery with one line item, 00010. If it splits into two batch split items, then you will find 3 records in LIPS: one with line item 00010, one with 90001 and another with 90002. Both 90001 and 90002 will have a UECHA of 00010, whereas for 00010 this field is blank. So the logic is to create two internal tables, one where UECHA is blank and another where it is not blank, and use them together.
    * Declarations assumed for this sketch (header-line tables, as used in the original snippet)
    DATA: I_LIPS      TYPE TABLE OF LIPS WITH HEADER LINE,
          I_LIPS_TEMP TYPE TABLE OF LIPS WITH HEADER LINE.

    SELECT * FROM LIPS
             INTO TABLE I_LIPS
            WHERE VBELN = P_VBELN.

    I_LIPS_TEMP[] = I_LIPS[].
    DELETE I_LIPS_TEMP WHERE UECHA IS INITIAL.
    *-- This table now holds only the batch split items, not the main items.

    LOOP AT I_LIPS WHERE UECHA IS INITIAL.
    *-- Loop at the main items only.
      IF I_LIPS-CHARG IS INITIAL.
    *-- No batch on the main item; see if it is there on any batch split item.
        READ TABLE I_LIPS_TEMP WITH KEY UECHA = I_LIPS-POSNR.
        IF SY-SUBRC = 0.
    *--   I_LIPS_TEMP-CHARG now holds the batch of the first split item; process it here.
        ENDIF.
      ENDIF.
    *-- ... further processing of the main item here.
    ENDLOOP.

  • ABAP HR-test scenario for payslip-urgent

    Hi,
    please send me some test scenarios for payslip by which payslip can be affected.
    I have designed a PAyslip with retro and i want to test it.
    eg. retro. means if retro will run then it will affect payslip.
    Also send me some unit test cases in case of any payslip requirement.
    Points will be rewarded.
    Regards
    Monika

    Hi,
    Just make an entry in infotype 8, 14 or 15 for the previous month and run the payroll; retro will then run, and you can compare the results using the wage type reporter.
    When you run the payroll, generate the standard SAP remuneration statement and compare the results between the standard SAP payslip and the payslip you have developed.
    If you have developed the payslip using PE51, then the retro amounts are picked up automatically.
    Regards,
    Ramu N.
