N+1 redundancy + AP groups

Hello all,
I am running WLC 7.4.100.60 and want to complete the following scenario:
I have three controllers: A, B and BACKUP. BACKUP is the N+1 backup controller for both A and B.
A and B each have a WLAN with a different profile name but the same SSID, for example "OPEN".
When the APs fail over to the shared backup controller BACKUP, I want to keep the IP and client separation on the OPEN SSID (the clients are not allowed to end up on the same IP interface, because a special application running on the clients does configuration based on source IP address).
APs on controller A are put in AP group A on interface A; APs on controller B are put in AP group B on interface B.
In theory, I should be able to create both WLAN profiles (with different names but the same SSID) on the backup controller, assign a different interface to each (for example backup int A to A, backup int B to B), and assign the SSIDs to different AP groups on the backup controller (A and B).
Then when the APs fail over to the backup controller, they will join, keep their AP group and, based on the AP group, run the OPEN SSID on either backup int A or backup int B.
So in theory, this should work.
However, when I try to configure this on my controller, it won't let me enable the second WLAN (even though it has a different WLAN profile name and is assigned to a different interface and a different AP group), because it gives me the error message:
"The following errors occurred while updating the WLAN: WLAN with duplicate SSID and L2 security policy found".
If a duplicate SSID exists, but the WLAN profile is assigned to a different AP group and interface, why am I not allowed to create two active profiles?
An AP can only be part of one AP group, so it can only run one of these two WLAN profiles...
regards,
Geert

I just realised my question is rather stupid. The solution to the problem is: you only need to create the WLAN profile once. Then within the different AP group definitions you map that same WLAN profile to two different interfaces, and it works perfectly!
The only problem I noted is that this doesn't work if the two SSIDs are the same but, for example, the pre-shared keys need to be different (the password would then be the same for both the A and B clients).
To solve that, the trick Scott describes above works: you create the second WLAN with an ID > 16, and then you can enable both and assign them to different AP groups. Thanks Scott.
regards,
Geert

Similar Messages

  • What is the usable space of a normal redundancy disk group with two uneven-capacity fail groups

    Hi,
    I have a normal redundancy disk group (DATA) with two uneven-capacity fail groups (FG1 and FG2). FG1 is 10GB and FG2 is 100GB.
    In this case, what will be the usable space of the disk group (DATA)? Is it 10G or 55G?
    Thanks,
    Mahi

    Please don't post duplicates of the same question.
    This question was answered in your previous thread:
    Re: ASM normal redundancy disk group

  • WiSM redundancy, mobility groups and RF groups

    Hi there
    we would like to implement the following:
    - Support for about 2000 LAP's
    - 1 x Catalyst 6509
    - 1 x Sup 720
    - 7 x WiSM's
    What I'm interested in are the following points:
    1. I thought we would build the setup completely redundant, so we have two WLAN switches (switch A and B) with 7 WiSMs each. That way I can guarantee N+N redundancy --> each LAP has its primary controller on switch A and its secondary controller on switch B. The LAPs can be split across the two switches, but for your understanding there is 1:1 redundancy. What do you think of this design, is this too much or is it appropriate?
    2. As far as I know you can build a mobility group of a maximum of 24 controllers or 12 WiSMs. I would put only those controllers in a mobility group where Layer 2 roaming can occur.
    3. But what about the RF groups - there is a maximum of 1000 LAPs, so I can put only 3 WiSMs in one group. That would not work for me, because then I would have 2 WiSMs on switch A and only 1 WiSM on switch B in an RF group (not 1:1 redundancy). First, is it possible to put WiSM-A and WiSM-B into different RF groups? I think so, because they are logically separated, aren't they?
    And what RF group design would be best (just as a reference)? I thought it would make sense to form an RF group for each of the seven pairs (1 WiSM on switch A and 1 WiSM on switch B) for redundancy. What do you think of that approach?
    4. So I would have 1 mobility group and 7 RF groups. Or do you recommend forming the mobility groups like the RF groups? But what happens with Layer 2 roaming in that case?
    I'm sorry for the long and messy text, but I hope you can see my design questions.
    Thanks a lot in advance.
    Dominic

    It sounds like you already have some good replies. Personally I like N+1 redundancy, but that is a designer's choice. One thing I should point out is that the 6500 can only support 5 WiSM cards each. In this case a 4 WiSM x 3 chassis option would give you more spare capacity with only 12 total cards. The lower WiSM count (12 vs 14) would help offset the cost of the extra chassis. You could also support 2400 APs with 8 WiSM cards even if one switch is down.
    Not too long ago Cisco added the ability to set the priority of APs, so your critical ones would join a controller and the less critical ones would go down if a controller failed and there was no redundancy. That is something to keep in mind when designing wireless. You may not need redundancy for all APs, and that could affect your design and costs.
    Randy

  • ASM normal redundancy disk group

    Hi,
    One of my colleagues created an ASM disk group with normal redundancy using 2 disks. But the disks are not the same size: one disk is 100GB and the other is 10GB.
    Now the usable space of the disk group shows 55GB. When I checked the disk group properties it shows 2 fail groups, one DATA1_0000 with 10GB and the other DATA1_0001 with 100GB.
    My question is: why is it showing 55GB as usable space?
    My assumption is that since there are 2 fail groups with disks of different sizes, the second fail group, even though it is 100GB, should only be able to use 10GB of its 100GB in order to maintain redundancy with the other, smaller FG (10G).
    So the effective size of the 2nd FG should also be 10GB, and the usable space should show as 10GB rather than 55GB (i.e. not (100+10)/2).
    Please clarify.
    Thanks,
    Mahi

    Hi,
    Please see the below output, I am talking about DATA1 disk group with number 2.
    SQL> select group_number,name, type,total_mb, free_mb, required_mirror_free_mb, usable_file_mb from v$asm_diskgroup;
    GROUP_NUMBER NAME    TYPE     TOTAL_MB   FREE_MB  REQUIRED_MIRROR_FREE_MB  USABLE_FILE_MB
               1 DATA    NORMAL       6144      5344                     2048            1648
               2 DATA1   NORMAL     112640    108604
    SQL> select group_number, name, mount_status, state, redundancy,  failgroup, failgroup_type, os_mb, total_mb, free_mb from v$asm_disk;
    GROUP_NUMBER NAME        MOUNT_S STATE   REDUNDA FAILGROUP    FAILGROUP_TYPE    OS_MB  TOTAL_MB   FREE_MB
               1 DATA_0000   CACHED  NORMAL  UNKNOWN DATA_0000    REGULAR            2048      2048      1781
               1 DATA_0001   CACHED  NORMAL  UNKNOWN DATA_0001    REGULAR            2048      2048      1782
               1 DATA_0002   CACHED  NORMAL  UNKNOWN DATA_0002    REGULAR            2048      2048      1781
               2 DATA1_0000  CACHED  NORMAL  UNKNOWN DATA1_0000   REGULAR           10240     10240      8222
               2 DATA1_0001  CACHED  NORMAL  UNKNOWN DATA1_0001   REGULAR          102400    102400    100382
    Thanks,
    Mahipal
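    For reference, the ~55 GB follows directly from how ASM derives USABLE_FILE_MB for normal redundancy, roughly (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2; with only two fail groups REQUIRED_MIRROR_FREE_MB is typically reported as 0. A rough sanity check against the output above (same view and columns as the first query):
    SQL> select name, free_mb, required_mirror_free_mb,
                (free_mb - required_mirror_free_mb) / 2 as approx_usable_file_mb
           from v$asm_diskgroup
          where name = 'DATA1';
    This comes out at roughly 108604 / 2 = 54302 MB (about 55 GB), even though only around 10 GB of that can actually be mirrored across both fail groups - the formula simply does not account for fail groups of uneven size.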

  • Disk Group from normal to external in a RAC environment

    Hello,
    my environment is based on 11.2.0.3.7 RAC SE with two nodes.
    Currently I have 4 DG, all in NORMAL redundancy, to contain respectively:
    ARCH
    REDO
    DATA
    VOTING
    At the moment I focus only on the non-VOTING DGs.
    Each of them has 2 failure groups that physically maps to disks in 2 different server rooms.
    The storage arrays are EMC VNX (one in each server room).
    They will be substituted by a VPLEX system that will be configured as a single storage entity with DGs in external redundancy.
    I see from document
    How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) (Doc ID 438580.1)
    that apparently it is not possible to do this online.
    Can you confirm?
    I also read the thread in this forum:
    https://community.oracle.com/message/10173887#10173887
    that seems to confirm this too.
    I have some pressure to free up the old storage arrays, but in the short term I will not be able to stop the RAC RDBMSs I have in place.
    So the question is: can I proceed in steps, so that I
    1) add a third failure group composed of the VPLEX disks
    2) wait for the data of the third failure group to sync (rebalance)
    3) drop one of the two old failure groups (ASM should let me do this, correct?)
    4) brutally remove all the disks of the remaining old storage failure group
    and then run with reduced redundancy for some time until I can afford the maintenance window (a rough SQL sketch of these steps is shown below).
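    A rough SQL sketch of steps 1-3, assuming a disk group named DATA; the failgroup names and device paths are placeholders:
    SQL> alter diskgroup DATA add failgroup FG_VPLEX
           disk '/dev/vplex_disk1', '/dev/vplex_disk2'
           rebalance power 8;
    SQL> -- wait until this returns no rows before dropping anything
    SQL> select * from v$asm_operation;
    SQL> alter diskgroup DATA drop disks in failgroup FG_ROOM1
           rebalance power 8;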
    Inside the ASM administrator guide I see this:
    Normal redundancy disk group - It is best to have enough free space in your disk
    group to tolerate the loss of all disks in one failure group. The amount of free
    space should be equivalent to the size of the largest failure group.
    and also
    Normal redundancy
    Oracle ASM provides two-way mirroring by default, which means that all files are
    mirrored so that there are two copies of every extent. A loss of one Oracle ASM
    disk is tolerated. You can optionally choose three-way or unprotected mirroring.

    When you create an external table you must specify the location that the external table will use to access the external data.
    This is done with the LOCATION and/or DEFAULT_DIRECTORY parameters.
    If you want every instance in your cluster to be able to use one specific external table, then the location specified in the create external table command needs to be visible/accessible to all servers in your cluster, probably through some shared OS disk/storage configuration, e.g. mounting remote disks; this could easily make external table performance slower than it would be if the specified location were on the DB server.
    This is the one and only way, because it is impossible to specify a remote location, either when creating the directory or in any parameter of the create external table statement.

  • ASM redundancy confusion

    Dear All,
    o/s:redhat linux 5.6
    db:10.2.0.3 with ASMlib
    I always thought that ASM NORMAL redundancy would have only 2 failure groups and HIGH would have 3 failure groups. Is that right or wrong? I ask because I saw the SQL statement below on a website:
    SQL> create diskgroup DATA_NRML normal redundancy
         failgroup flgrp1 disk
         '/dev/rdsk/c3t19d3s4', '/dev/rdsk/c3t19d4s4', '/dev/rdsk/c3t19d5s4',
         '/dev/rdsk/c3t19d6s4'
         failgroup flgrp2 disk
         '/dev/rdsk/c4t20d3s4', '/dev/rdsk/c4t20d4s4', '/dev/rdsk/c4t20d5s4',
         '/dev/rdsk/c4t19ds4'
         failgroup flgrp3 disk
         '/dev/rdsk/c5t21d3s4', '/dev/rdsk/c5t21d4s4', '/dev/rdsk/c5t21d5s4',
         '/dev/rdsk/c5t21ds4'
         failgroup flgrp4 disk
         '/dev/rdsk/c6t22d3s4', '/dev/rdsk/c6t22d4s4', '/dev/rdsk/c6t22d5s4',
         '/dev/rdsk/c6t22ds4';
    If the above SQL statement is correct, then how does normal redundancy work?
    If failure group 1 has a disk failure, the mirror extents are in failure group 2, which will allow the DB to keep running.
    If failure group 2 then also has a disk failure, will the DB still be up?
    Regards

    Hi,
    - Can there be 3 failure groups in NORMAL redundancy? If yes, can the failure of 2 failgroups be tolerated?
    No.
    E.g.:
    FAILGROUP controller1 DISK
       '/devices/diska1','/devices/diska2','/devices/diska3','/devices/diska4'
    FAILGROUP controller2 DISK
       '/devices/diskb1','/devices/diskb2','/devices/diskb3','/devices/diskb4'
    FAILGROUP controller3 DISK
       '/devices/diskc1','/devices/diskc2','/devices/diskc3','/devices/diskc4'
    In this case, with normal redundancy, the loss of one or all disks of one failgroup at a time is tolerated. If disks from two or more failgroups fail at the same time, the diskgroup will fail.
    See this example:
    {message:id=10185895}
    - If the failure of 2 failgroups cannot be tolerated, then what is the use of adding a 3rd failure group? We would just go with 2.
    When must you use separate failgroups? When all the disks (LUNs) in a failgroup share the same single point of failure, e.g. the same array, the same controller, the same storage and so on.
    Useful doc:
    Use a Simple Disk and Disk Group Configuration
    http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_storage.htm#CDEEFBHG
    Use Redundancy to Protect from Disk Failure
    http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_storage.htm#CDEBCDBD
    Regards,
    Levi Pereira
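    As a quick illustration of the failgroup layout being discussed, the disk-to-failgroup mapping of every disk group can be listed with a standard query (nothing specific to this setup):
    SQL> select group_number, failgroup, count(*) as disks, sum(total_mb) as total_mb
           from v$asm_disk
          group by group_number, failgroup
          order by group_number, failgroup;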

  • ASM Quorum Failgroup Setup is Mandatory for Normal and High Redundancy?

    Hi all,
    Since I have worked with version 11.2 I have had a concept of the Quorum Failgroup and its purpose; now, reading the 12c documentation, I'm confused about some aspects and want your views on this subject.
    My Concept About Quorum Failgroup:
    The Quorum Failgroup was introduced in 11.2 for setups with Extended RAC and/or for setups with diskgroups that have only 2 ASM disks using Normal redundancy or 3 ASM disks using High redundancy.
    But if we are not using Extended RAC, and/or have a Normal redundancy diskgroup with 3 or more ASM disks or a High redundancy diskgroup with 5 or more ASM disks, the use of a Quorum Failgroup is optional and most likely not used.
    ==============================================================================
    The documentation isn't clear about WHEN we must use a Quorum Failgroup.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
    From the documentation above, I could understand that ANY diskgroup using Normal or High redundancy MUST have a Quorum failgroup (no matter what the setup is).
    In my view, if a Quorum Failgroup is used to ENSURE that a quorum of the specified failure groups is available, then we must use it - in other words, it is mandatory.
    What's your view on this matter?
    ==============================================================================
    Another Issue:
    Suppose the following scenario (example using NORMAL Redundancy).
    Example 1
    Diskgroup Normal Redundancy with 3 ASM disks.
    DSK_0000  - FG1 (QUORUM FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (REGULAR FAILGROUP)    
    ASM will allow creating only one Quorum Failgroup and two Regular Failgroups (one failgroup per ASM disk).
    Storing the votedisks in this diskgroup, all three ASM disks will be used, with one votedisk on each ASM disk.
    Storing the OCR in this diskgroup, only the two Regular Failgroups will be used: a single OCR, with its primary extents and the mirrors of its extents spread across the two failgroups (the quorum failgroup will not be used for the OCR).
    Example 2
    Diskgroup Normal Redundancy with 5 ASM disks.
    DSK_0000  - FG1 (REGULAR FAILGROUP)
    DSK_0001  - FG2 (REGULAR FAILGROUP)
    DSK_0002  - FG3 (QUORUM FAILGROUP) 
    DSK_0003  - FG4 (QUORUM FAILGROUP)
    DSK_0004  - FG5 (QUORUM FAILGROUP)
    ASM will allow creating up to three Quorum Failgroups and two Regular Failgroups.
    Storing the votedisks in this diskgroup, all three QUORUM FAILGROUPs will be used; the REGULAR FAILGROUPs will not be used.
    Storing the OCR in this diskgroup, only the two Regular Failgroups will be used: a single OCR, with its primary extents and the mirrors of its extents spread across the two failgroups (no quorum failgroup will be used for the OCR).
    This part right below is confusing to me.
    https://docs.oracle.com/database/121/CWLIN/storage.htm#CWLIN287
    7.4.1 Configuring Storage for Oracle Automatic Storage Management
      2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
    The quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups.
    Normal redundancy: For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group. For most installations, Oracle recommends that you select normal redundancy disk groups.
    High redundancy:  For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.
    The documentation says:
    minimum of three disk devices: two of the three disks are used by failure groups and all three disks are used by the quorum failure group, for normal redundancy.
    minimum of five disk devices: three of the five disks are used by failure groups and all five disks are used by the quorum failure group, for high redundancy.
    Questions:
    What does USED mean here?
    How are all the disks USED by the quorum failgroup?
    Does USED mean used to determine if the disk group can be mounted?
    How does the quorum failgroup determine whether a diskgroup can be mounted - what is the math?
    Consider the following scenario:
    Diskgroup Normal Redundancy with 3 ASM disks (two Regular failgroups and one Quorum Failgroup).
    If we lose the Quorum failgroup, we can mount that diskgroup using the force option.
    If we lose one Regular failgroup, we can mount that diskgroup using the force option.
    We cannot lose two failgroups at the same time.
    If I don't use a Quorum failgroup (i.e. only Regular Failgroups), the result of the test is the same.
    I see no difference between using a Quorum Failgroup and only Regular Failgroups in this respect.
    ======================================================================================
    When the Oracle docs say:
    one OCR and mirror of the OCR for normal redundancy
    one OCR and two mirrors of the OCR for high redundancy
    what this means is that we have ONLY ONE OCR file plus the mirrors of its extents, yet the documentation speaks of 1 mirror of the OCR (normal redundancy) and 2 mirrors of the OCR (high redundancy).
    What does that sound like: a single file, or two or more files?
    Please don't confuse it with ocrconfig mirror location.

    Hi Levi Pereira,
    Sorry for the late answer. And as per 12c1 documentation, yes you are right, only the VD will be placed on the quorum fail groups:
    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    Managing Oracle Cluster Registry and Voting Files
    Regarding your question "is it mandatory to use a Quorum Failgroup when using Normal or High Redundancy?": no, it isn't. I have a normal redundancy diskgroup in which I store the VD with no Quorum Failgroup. Indeed, a quorum failgroup would prevent you from storing data on the disks within that kind of failgroup, as per the documentation:
    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted if there is a loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.
    Managing Oracle Cluster Registry and Voting Files
    And as per the documentation, my answers follow each quoted question below:
    Could you explain what the documentation meant:
    minimum of three disk devices: two of the three disks are used by failure groups and all three disks are used by the quorum failure group, for normal redundancy.
    How are all three disks used by the quorum failgroup? [I don't think this is correct; it sounds a bit strange and it is the opposite of what comes right before it...]
    Regards.

  • Ace redundancy with different software licences

    Hi,
    We have 4710 with ACE-4710-1F-K9.
    1G Bundle: Includes ACE 4710 Hardware, 1 Gbps  Throughput, 5,000 SSL TPS, 500 Mbps Compression, 5 Virtual Devices, 50  Application Acceleration Connection License, Embedded Device Manager
    We have another 4710 with ACE-4710-2F-K9.
    2G Bundle: Includes ACE 4710 Hardware, 2 Gbps  Throughput, 7,500 SSL TPS, 1Gbps Compression, 5 Virtual Devices, 50  Application Acceleration Connection License, Embedded Device Manager
    Is it possible to set up redundancy (an FT group) between two devices that have different software bundles?

    Hello-
    When you initially set up the ACEs in an FT pair, they first figure out who is master based on priority, then they check whether the licenses they each have installed are the same.  If there is a mismatch, FT will continue to check the configuration and will eventually go into a "standby warm" state.  It will not config-sync the startup or running configurations until you install the correct license and toggle config sync.
    This is what you would see:
    ACE-A/Admin# show ft group 1 status
    FT Group                     : 1
    Configured Status            : in-service
    Maintenance mode             : MAINT_MODE_OFF
    My State                     : FSM_FT_STATE_ACTIVE
    Peer State                   : FSM_FT_STATE_STANDBY_WARM
    Peer Id                      : 1
    No. of Contexts              : 1
    Running cfg sync status      : Detected license mismatch with peer, disabling running-config auto sync
    Startup cfg sync status      : Detected license mismatch with peer, disabling running-config auto sync
    If you disable config sync, it will still stay in a warm state and ignore the license mismatch:
    ACE-A/Admin# show ft group 1 status
    FT Group                     : 1
    Configured Status            : in-service
    Maintenance mode             : MAINT_MODE_OFF
    My State                     : FSM_FT_STATE_ACTIVE
    Peer State                   : FSM_FT_STATE_STANDBY_WARM
    Peer Id                      : 1
    No. of Contexts              : 1
    Running cfg sync status      : Sync disabled by CLI.
    Startup cfg sync status      : Sync disabled by CLI.
    It is not recommended to run with 2 different licenses, because it is possible that you fail over and don't have enough resources to carry the traffic that the active was handling - however - if you disable configuration sync, it will let you do so.
    Regards,
    Chris Higgins

  • [ios pw redundancy with xr mc-lag termination]

    hi, all:
    first of all, thanks in advance and please take a look at the attached diagram.
    I'm trying to set up pseudowire redundancy between an ME3800 and two ASR9000s that build an mLACP etherchannel towards a Cat4500, 4500-2. When the primary pseudowire is up, everything works as expected. The problem is that when you cause a switchover from the primary asr9k-1 (say by shutting down the link to the 4500-2) to the secondary asr9k-2, traffic does not pass from one end of the PW to the other. If we bring the failed link back up, the primary PW works again.
    All 'show' commands check out and the PW switches over as expected. As a test, I have a 3rd ASR9k connected in parallel to the ME3800 and we have no problem with that: when we cause the exact same failure scenario, the primary PW switches over to the secondary and everything works exactly as I would expect. Traffic passes on both the primary and standby PWs when using the parallel ASR9k.
    As you will be able to see from the attached configs, the PWs from the ME3800 and the ASR9k are a little different: the ME3800 PW is port-based and the ASR9k PW is VLAN-based, but since both primary PWs work I see no obvious problem with that.
    Now, I know both ends of the MC-LAG work, because the ASR9k PW redundancy setup works.
    If I build a single PW (no redundancy) from the ME3800 to asr9k-1, connectivity works, AND if I build a single PW from the ME3800 to asr9k-2 on the same exact VLAN, connectivity works as well.
    Hopefully one of you will take the time to look at the configs and let me know if you see something wrong (I suspect the ME3800 config). Please keep in mind that everything works perfectly with ASR9k-to-ASR9k PW redundancy (XR on both ends of the PW).
    c.
    ============
    ME3800 (pw-redundancy)
    3800#show run | section pseudowire
    pseudowire-class mpls
    encapsulation mpls
    status peer topology dual-homed
    ! tried it without above status command, didn't work either.
    3800#show run int g0/24          
    Building configuration...
    Current configuration : 175 bytes
    interface GigabitEthernet0/24
    no switchport
    no ip address
    xconnect 207.x.y.9 1100 encapsulation mpls pw-class mpls
      backup peer 207.x.y.17 1101 pw-class mpls
    end
    3800#
    3800#show xconnect all
    Legend:    XC ST=Xconnect State  S1=Segment1 State  S2=Segment2 State
      UP=Up       DN=Down            AD=Admin Down      IA=Inactive
      SB=Standby  HS=Hot Standby     RV=Recovering      NH=No Hardware
    XC ST  Segment 1                         S1 Segment 2                         S2
    ------+---------------------------------+--+---------------------------------+--
    UP pri   ac Gi0/24:78(Ethernet)          UP mpls 207.x.y.9:1100            UP
    IA sec   ac Gi0/24:78(Ethernet)          UP mpls 207.x.y.17:1101           DN
    3800_sw_pruebas#
    ============
    ASR-3 (pw-redundancy)
    RP/0/RSP1/CPU0:ASR-3#show run l2vpn
    Sat Jun 15 09:07:10.183 CST
    l2vpn
    xconnect group PRUEBAS-XXXX
      p2p ESC-MTZ
       interface Bundle-Ether1000.28
       neighbor 207.x.y.9 pw-id 1128
        backup neighbor 207.x.y.17 pw-id 1228
    RP/0/RSP1/CPU0:ASR-3#show l2vpn xconnect
    Sat Jun 15 09:20:26.183 CST
    Legend: ST = State, UP = Up, DN = Down, AD = Admin Down, UR = Unresolved,
            SB = Standby, SR = Standby Ready
    XConnect                   Segment 1                   Segment 2               
    Group      Name       ST   Description            ST   Description            ST
    PRUEBAS-XXXX
               to4500-2    UP   BE1000.28              UP   207.x.y.9    1128   UP
                                                           Backup                  
                                                           207.x.y.17   1228   DN
    RP/0/RSP1/CPU0:ASR-3#
    ============
    asr9k-1  (pw-termination)
    RP/0/RSP0/CPU0:asr9k-1#show run l2vpn
    Sat Jun 15 09:09:10.555 CST
    l2vpn
    pw-status
    pw-class mpls
      encapsulation mpls
       redundancy
        one-way
    xconnect group PRUEBAS-XXXX
      p2p toASR-3
       interface Bundle-Ether1000.28
       neighbor 207.x.y.1 pw-id 1128
      p2p toME3800
       interface Bundle-Ether1000.26
       neighbor 207.x.y.30 pw-id 1100
    RP/0/RSP0/CPU0:asr9k-1#show run redundancy
    Sat Jun 15 09:09:16.659 CST
    redundancy
    iccp
      group 1000
       mlacp node 1
       mlacp system mac 000d.000e.000f
       mlacp system priority 1
       member
        neighbor 207.x.y.17
       backbone
        interface Bundle-Ether1
    ============
    RP/0/RSP0/CPU0:asr9k-2#show run l2vpn
    Sat Jun 15 09:13:39.908 CST
    l2vpn
    pw-status
    pw-class mpls
      encapsulation mpls
       redundancy
        one-way
    xconnect group PRUEBAS-XXXX
      p2p toASR-3
       interface Bundle-Ether1000.28
       neighbor 207.x.y.1 pw-id 1228
      p2p toME3800
       interface Bundle-Ether1000.26
       neighbor 207.x.y.30 pw-id 1101
    RP/0/RSP0/CPU0:asr9k-2#show run redundancy
    Sat Jun 15 09:13:43.656 CST
    redundancy
    iccp
      group 1000
       mlacp node 2
       mlacp system mac 000d.000e.000f
       mlacp system priority 1
       member
        neighbor 207.x.y.9
       backbone
        interface Bundle-Ether1

    Hard to tell where and why the traffic gets dropped. If I were to guess, the ME might still be sending traffic down the wrong PW due to MAC learning, so the MAC table might need to be flushed.
    I thought, however, that a MAC flush is initiated as part of the PW switchover.
    Either way, you want to set up a stream of, say, 1000 pps so it is easy to verify, and check the NP counters to see where and why these packets are getting dropped and whether it is the 9k or the ME in that regard.
    I suspect a PW signaling and MAC flushing issue here.
    xander

  • Is there a way to create different diskgroups in exadata?

    We have a need to have some other diskgroups other than +DATA and +RECO.
    How do we do that? Exadata version is x3.
    Thanks

    user569151 -
    As 1188454 states, this can be done. I would first ask why you need to create additional disk groups beyond the DATA, RECO and DBFS disk groups created by default. I often see Exadata users question the default disk groups and want to add more, or change the disk groups to follow what they've previously done on non-Exadata RAC/ASM environments. However, usually the DATA and RECO disk groups are sufficient and allow for the best flexibility, growth and performance. One reason to create multiple disk groups could be wanting two different redundancy options for a data disk group - for example, to have a prod database on high redundancy and a test database on normal redundancy; but there aren't many reasons to change it.
    To add disk groups you will also need to re-organize and add new grid disks. You should keep the grid disk prefix and the corresponding disk group names equivalent. Keep in mind that all of the Exadata storage is allocated to the existing grid disks and disk groups - and this is needed to keep the necessary balanced configuration and maximize performance. So adding and resizing the grid disks and disk groups is not a trivial task if you already have running DB environments on the Exadata, especially if you do not have sufficient free space in DATA and RECO to allow dropping all the grid disks in a failgroup - because that would require removing data before doing the addition and resize of the grid disks. I've also encountered problems with resizing grid disks that end up forcing you to move data off the disks - even if you think you have enough space to allow dropping an entire fail group.
    Be sure to accurately estimate the size of the disk groups - factoring in the redundancy, fail groups and reserving space to handle cell failure - as well as the anticipated growth of data on the disk groups. Because if you run out of space in a disk group you will need to either go through the process again of resizing all the grid disks and disk groups accordingly - or purchase an Exadata storage expansion or additional Exadata racks. This is one of the reasons why it is often best to stick with just the existing Data and Reco.
    To add new grid disks and disk groups and resize the others, become very familiar with, and follow the steps in, the "Resizing Storage Griddisks" section of Ch. 7 of the Exadata Machine Owner's Guide, as well as the information and examples in MOS Note "Resizing Grid Disks in Exadata: Examples (Doc ID 1467056.1)". I also often like to refer to MOS note "How to Add Exadata Storage Servers Using 3 TB Disks to an Existing Database Machine (Doc ID 1476336.1)" when doing grid disk addition or resize operations. The use case may not match, but many steps given in this note are helpful, as it discusses adding new grid disks and even creating a new disk group for occasions when you have cell disks of different sizes.
    Also, be sure you stay true to the Exadata best practices for the storage as documented in "Oracle Exadata Best Practices (Doc ID 757552.1)". For example, the total number of griddisks per storage server for a given prefix name (e.g: DATA) should match across all storage servers where the given prefix name exists. Also, to maximize performance you should have each grid disk prefix, and corresponding disk group, spread evenly across each storage cell. You'll also want to maintain the fail group integrity, separating fail groups by storage cell allowing the loss of cell without losing data.
    Hope that helps. Good luck!
    - Kasey
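    For what it's worth, once the new grid disks exist on every cell (created with CellCLI per the MOS notes referenced above), the ASM side is a normal create diskgroup. A rough sketch only, assuming a hypothetical grid disk prefix of DATA2 and typical Exadata disk group attributes:
    SQL> create diskgroup DATA2 normal redundancy
           disk 'o/*/DATA2*'
           attribute 'compatible.asm'          = '11.2.0.4',
                     'compatible.rdbms'        = '11.2.0.4',
                     'cell.smart_scan_capable' = 'TRUE',
                     'au_size'                 = '4M';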

  • Quorum Query

    Environment
    2 Node RAC Cluster with each containing:
    Oracle Linux 6 Update 5 (x86-64)
    Oracle Grid Infrastructure 12R1 (12.1.0.2.0)
    Oracle Database 12R1 (12.1.0.2.0)
    I am not understanding the role of the quorum in the quorum failure group for Voting Files with Oracle ASM.
    Failure groups (FGs) provide assurance that there is separation of the risk you are trying to mitigate. For user data the separation is at the extent level. For voting files, the file is separated onto one disk per FG within the disk group (DG). If voting files are stored in a disk group at normal redundancy, a minimum of two FGs is required, and it is recommended that 3 FGs be used, so that the Partner Status Table (PST) has at least one other FG where the PST is maintained for comparison in the event of one FG failing. Do the FGs storing voting files need to be QUORUM failgroups? What is the role of the quorum? When is it needed?

    Hi,
    I'll start with what quorum means:
    A quorum is the minimum number of members (a majority) of a set necessary to prevent a failure (an IT concept).
    There are many quorums, such as the votedisk quorum, OCR quorum, network quorum, PST quorum, etc.
    We need to separate which quorum we are concerned with.
    The quorum of the PST is different from the quorum of the votedisks, although they all work together.
    Quorum PST:
    A PST contains information about all ASM disks in a diskgroup - disk number, disk status, disk partner number, heartbeat info and failgroup info.
    A disk group must be able to access a quorum of the Partner Status Tables (PSTs) to mount the diskgroup.
    When a diskgroup mount is requested, the instance reads all disks in the disk group to find and verify all available PSTs. Once it verifies that there are enough PSTs for a quorum, it mounts the disk group.
    There is a nice post here: ASM Support Guy: Partnership and Status Table
    Quorum Votedisk:
    This is the minimum number of votes for the cluster to be operational. There is always a votedisk quorum.
    When you set up votedisks in a normal redundancy diskgroup you have 3 votedisks, one in each failgroup. For the cluster to stay operational you need a quorum of at least 2 votes online.
    Quorum Failgroup (clause):
    The Quorum Failgroup is an option in the setup of a diskgroup.
    This option must not be confused with the voting quorum, because the voting quorum and the quorum failgroup are different things.
    For example: in a normal redundancy diskgroup I can lose my Quorum Failgroup and the cluster will remain online with the 2 regular failgroups; so the quorum failgroup is just a setup choice.
    Oracle named a failgroup whose specific purpose is to store only votedisks (due to the infrastructure deployment) a "Quorum Failgroup".
    Using a "Quorum Failgroup" in a diskgroup that holds votedisks is not mandatory.
    Now back to your question:
    If your failure groups only have 1 ASM disk each, shouldn't the recommendation be to use High Redundancy (5 failure groups), so that in the event of an ASM disk failure a quorum of PSTs (3 PSTs) would still be possible?
    About the PST quorum: you must be aware that if you have 5 PSTs you will need a quorum of at least 3 PSTs to mount the diskgroup.
    If you have 5 failgroups and each failgroup has only one ASM disk, you will have one PST per ASM disk, which means you can lose up to 2 PSTs and still be able to make a quorum with 3 PSTs and keep the diskgroup mounted (or mount it).
    The bold italicized Oracle documentation above seems to say that if you allocate 3 disk devices, 2 will be used by failure groups in Normal Redundancy. Further, a quorum failure group will exist that will use all disk devices.  What does this mean?
    The documentation here is confusing to me as well. I'll try to contact someone at Oracle to check it.
    But I will try to clarify some things:
    Suppose you set up a diskgroup as follows:
    Diskgroup  DATA
    Failgroup data01
    * /dev/hdisk1 and /dev/hdisk2
    Failgroup data02
    * /dev/hdisk3 and /dev/hdisk4
    Failgroup data03
    * /dev/hdisk5 and /dev/hdisk6
    Quorum Failgroup data_quorum
    * /nfs/votedisk
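    (In SQL, that layout would look roughly like the following - same placeholder paths, and compatible.asm must be at least 11.2 for the quorum clause:)
    SQL> create diskgroup DATA normal redundancy
           failgroup data01 disk '/dev/hdisk1', '/dev/hdisk2'
           failgroup data02 disk '/dev/hdisk3', '/dev/hdisk4'
           failgroup data03 disk '/dev/hdisk5', '/dev/hdisk6'
           quorum failgroup data_quorum disk '/nfs/votedisk'
           attribute 'compatible.asm' = '12.1';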
    When you add the votedisks to this diskgroup DATA, CSSD will store them as follows:
    CSSD will randomly pick one ASM disk per failgroup and store a votedisk on it, but it will always pick the ASM disk from the Quorum Failgroup (if one exists).
    So, after adding the votedisks to the diskgroup above, you could have:
    * Failgroup data01 (/dev/hdisk2)
    * Failgroup data03 (/dev/hdisk5)
    * Failgroup data_quorum (/nfs/votedisk)
    To mount diskgroup DATA you need failgroups data01, data03 and data_quorum available; otherwise the diskgroup does not mount.
    About the documentation (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD91889), it is a bit confusing:
    Normal redundancy
    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.
    What it is saying is: in case you use a Quorum Failgroup, you will have a higher minimum number of failure groups than other disk groups...
    But remember that the Quorum Failgroup is optional for those who use a single storage array or an odd number of storage arrays.
    For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR and mirror of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.
    Trying to clarify:
    - Votedisks in a diskgroup with normal redundancy require three disk devices. (In case a Quorum Failgroup is used: two of the three disks are used by regular failgroups and one of the three disks is used by the Quorum Failgroup, but all three disks (regular and quorum failgroups that store votedisks) count when mounting that diskgroup.)
    - "and one OCR and mirror of the OCR":
    This is really confusing, because a mirror of the OCR must be placed in a different diskgroup; the OCR itself is stored similarly to how Oracle Database files are stored: its extents are spread across all the disks in the diskgroup.
    I don't know what it is talking about - the mirroring of extents due to diskgroup redundancy, or an OCR mirror.
    According to the note and documentation below, it is not possible to store the OCR and an OCR mirror in the same diskgroup:
    RAC FAQ (Doc ID 220970.1)
    How is the Oracle Cluster Registry (OCR) stored when I use ASM?
    And (https://docs.oracle.com/database/121/CWADD/votocr.htm#CWADD90964)
    * At least two OCR locations if OCR is configured on an Oracle ASM disk group. You should configure OCR in two independent disk groups. Typically this is the work area and the recovery area.
    High redundancy:
    For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR and two mirrors of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.
    Three of the five disks are used??? And two mirrors of the OCR?? In a single diskgroup?
    Now things get confusing.
    As far as I can test and see, when you use a quorum votedisk, four (not three) of the five disks are used, and all five count.

  • Configuring APS for 10G ITU w/ Splitter

    Does anyone have experience configuring APS for the 10G ITU card with splitter, so that in case of signal degrade it switches over to the protection path? The default switchover mechanism is based on LOL (Loss-Of-Light), but that won't help if the signal degrades, for example, to -25dB. All experiences and best practices are welcome.

    I know the protection schemes (client, splitter, Y-cable etc.), but my question is how to configure splitter protection so that in case of signal degradation it switches over to the protection path.
    See below an output from ONS15530:
    ONS15530_1>sh aps
    AR : APS Role, Wk: Working, Pr: Protection
    AS : APS State, Ac: Active, St: Standby, NA: Not Applicable
    IS : Interface State, Up: Up, Dn: Down
    MPL: Minimum Protection Level, SD: Signal Degrade, SF: Signal Failure
    LOX: Loss of Light/Loss of (CDR) Lock/Loss of Frame/Sync,
    LOL: Loss of Light, - not currently protected
    Interface AR AS IS MPL Redundant Intf Group Name
    ~~~~~~~~~~~~~~~~~~~~~ ~~ ~~ ~~ ~~~ ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~
    Wavepatch1/0/0 Wk Ac Up LOX Wavepatch1/0/1 CH1
    Wavepatch1/0/1 Pr St Up - Wavepatch1/0/0 CH1
    So APS will switch to the protection path only in case of Loss of Light / Loss of (CDR) Lock / Loss of Frame/Sync. So it WON'T fail over if the RX signal level drops below the minimum RX level (-23dB for the 10G ITU card), because there is still light coming in (not an LOL case). So how do I protect my system against this kind of signal degrade?
    rgds,
    Jp

  • 11gR2 OCR and ASM, recommendation please

    For 11gR2 - storing OCR on ASM.
    I see posts recommending that the OCR be stored in a separate diskgroup from database or recovery files, but no detail on why.
    I'm merely seeking to understand the recommendation, I'm not questioning it.
    Please can someone detail why the OCR should be separated?
    Thanks

    One reason could be to separate access to Oracle Database files from the Oracle Clusterware files. Originally, I believe it was recommended to use separate disk groups for Oracle Clusterware files and Oracle Database data files because storing the voting disk in Oracle ASM requires more failure groups than is required for other disk groups.
    A normal redundancy disk group normally requires 2 failure groups (or two independent disk devices), but when you store a voting disk in a normal redundancy disk group 3 failure groups (or 3 disk devices) are required.
    For example, if you have a normal redundancy disk group that stores the OCR, voting disks, and data files, and you want 150 GB of space for the database files, then you would need 3 disks with a total size of 450 GB. If you use separate disk groups for the Oracle Clusterware files and Oracle Database files, and both disk groups are normal redundancy, then you would still need three disks, but only 306 GB of disk space (assuming each disk in the Oracle Clusterware disk group is a 2 GB partition).

  • ASM - Concept - Clarification Request

    Hello All,
    I'm about to go ahead and install ASM for one of my clients. After going through the book ASM - Under the hood, I have a few clarifications, which I hope can be answered by the experts here.
    1- Since ASM uses its own algorithm for mirroring - can I have an odd (unpaired) number of disks in the +DATA diskgroup, say 11 disks?
    2- In regards to failure groups, what is the concept? Say I have 1 diskgroup +DATA with 4 disks - do failure groups mean that if disk 1 goes, the primary extents are moved to another disk, say disk 3?
    - Can failure groups be in different diskgroups, i.e. could the failure group for the DATA disks be a disk in RECOVERY?
    - Or are failure groups additional disks which just sit there and are activated in case of a disk failure?
    3- On installation of ASM 10gR2, are there any things a first-timer should watch out for?
    4- Should I have a hot spare disk on a 15-disk Dell MD1000 array - is this really necessary, and why? If one disk goes bad, we can simply change it. Does this make sense if I have 4-hour gold support on site with a new disk?
    Thanks in advance for any assistance.
    Jan

    1. Yes, ASM will determine the most suitable block mirroring strategy regardless of the number of disks in the diskgroup.
    2. Failure groups affect how ASM mirrors blocks across them. By default, each disk is in its own failure group - it is assumed that each disk can fail independently of others. If you assign two different disks to the same failure group, you indicate that they are likely to fail together (for example, if they share the access path and controller for that access path fails,) so ASM will only create single mirror on them and will try to create another mirror in another failure group. For example, you assign disk1 and disk2 to the same failure group: ASM will never create a mirror of a block from disk1 on disk2, it will only mirror to a different failure group. Note that if your storage is already RAIDed, EXTERNAL redundancy diskgroups are pretty safe: hardware RAIDs are usually more efficient than NORMAL redundancy ASM groups while maintaining the same level of protection, thanks to hardware acceleration and large caches they sport these days.
    3. Not really, as long as you follow the documented procedures and have Oracle patched to the current patchset level. However, if you employ ASMLIB, there might be issues that differ by the storage vendor.
    4. If you are sure that no other disk will fail within those 4 hours, a hot spare is probably not that necessary. If availability is a concern, though, always plan for the worst case. Having a hot spare will protect you from such a second failure while the replacement is en route.
    Regards,
    Vladimir M. Zakharychev

  • Adding quorum disk causing wasted space?

    Hi,
    Any idea whether this is a bug or an expected behavior?
    Seeing this with ASM 11.2.0.4 and 12.1.0.4
    Have a Normal redundancy disk group (FLASHDG22G below) with two disks of equal size. With no data on the disk group the Usable_file_MB is equal to the size of one disk, as expected.
    But if I add a small quorum disk to the disk group, the Usable_file_MB decreases to 1/2 of the disk size. So, half of the capacity is lost.
    Thoughts?
    [grid@symcrac3 ~]$ asmcmd lsdsk -k
    Total_MB  Free_MB  OS_MB  Name        Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
       20980    20878  20980  SYMCRAC3_A  FG1        REGULAR         System                         UNKNOWN  /dev/symcrac3-a-22G
         953      951    953  QUORUMDISK  FGQ        QUORUM          System                         UNKNOWN  /dev/symcrac3-a-quorum
       20980    20878  20980  SYMCRAC4_A  FG2        REGULAR         System                         UNKNOWN  /dev/symcrac4-a-22G
    [grid@symcrac3 ~]$ asmcmd lsdg
    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  NORMAL  N         512   4096  1048576     42913    42707            20980           10388              0             N  FLASHDG22G/

    There are two separate issues:
    1) ASMCMD silently fails to add quorum failure groups. Adds them, but as regular failure groups.
    2) Even if a quorum failure group is added with SQLPlus, the space is still lost – I have just confirmed it. And it doesn’t matter whether I add a quorum disk to an existing group or create a new group with a quorum disk.
    For #2 here is the likely source of the problem. Usable_File_MB = (FREE_MB – REQUIRED_MIRROR_FREE_MB ) / 2.
    REQUIRED_MIRROR_FREE_MB is computed as follows (per ASM 12.1 user guide):
    –Normal redundancy disk group with more than two failure groups
       The value is the total raw space for all of the disks in the largest failure group. The largest failure group is the one with the largest total raw capacity. For example, if each disk is in its own failure group, then the value would be the size of the largest capacity disk.
    Instead, it should be "with more than two regular failure groups".
    With just two failure groups it is not possible to restore full redundancy after one of them fails. So, REQUIRED_MIRROR_FREE_MB = 0 in this case.
    Also REQUIRED_MIRROR_FREE_MB should remain 0 even when there are three failure groups if one of them is a quorum failure group. But the logic seems to be broken here.
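    As a rough check of the numbers above: usable_file_mb is approximately (free_mb - required_mirror_free_mb) / 2, i.e. about (42707 - 20980) / 2 = 10864 MB here, close to the 10388 MB reported. And for point 1, adding the small disk as a true quorum failure group has to go through SQL*Plus rather than ASMCMD; a sketch using the disk group and path from the lsdsk output (this fixes the failgroup type, but as point 2 notes, the usable-space accounting stays the same):
    SQL> alter diskgroup FLASHDG22G
           add quorum failgroup FGQ disk '/dev/symcrac3-a-quorum';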
