Multicast ASM & SSM

Hi,
Just need to ask: if we have both an SSM and an ASM range defined in an MPLS multicast domain, which one would be used to establish the MDT tunnel in case both ranges are the same? Is there any preference to use SSM first and then ASM, or vice versa?
ip pim ssm range ssm-range
ip access-list standard ssm-range
10 permit 239.232.0.0 0.0.255.255
ip pim rp-address x.x.x.x asm-range
ip access-list standard asm-range
10 permit 239.232.0.0 0.0.255.255
Hope I made the question clear.

Hi Manish,
SSM will take precedence as long as it is configured.
Why would you need to do that? ASM will never be used for the overlapping groups, so there is no point in implementing such a configuration.
Thanks
Laurent.
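To make the scenario concrete, here is a minimal sketch of the overlapping configuration being asked about (the RP address is a hypothetical placeholder). With both statements present, any group matching the SSM range is handled as SSM: (*,G) joins for it are ignored, so the RP mapping never comes into play for those groups:

```
ip pim ssm range ssm-range
ip pim rp-address 10.0.0.1 asm-range
!
ip access-list standard ssm-range
 10 permit 239.232.0.0 0.0.255.255
!
ip access-list standard asm-range
 10 permit 239.232.0.0 0.0.255.255
```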

Similar Messages

  • Multicast SSM in 6500

    Hi,
I have a problem with a simple multicast (SSM) topology on a 6500.
The multicast traffic is not routed between two VLANs (SVIs).
The 6500 sees the IGMP report from the receiver host but does not create the mroute. I think this should be done automatically.
    VSS6500#show ip mroute
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
           V - RD & Vector, v - Vector
    Outgoing interface flags: H - Hardware switched, A - Assert winner
     Timers: Uptime/Expires
     Interface state: Interface, Next-Hop or VCD, State/Mode
    (10.111.33.1, 232.17.1.1), 00:39:04/00:02:51, flags: sPT
      Incoming interface: Vlan1033, RPF nbr 0.0.0.0, RPF-MFD
      Outgoing interface list: Null
    VSS6500#show ip igmp snooping explicit-tracking vlan 1032
    Source/Group                    Interface       Reporter        Filter_mode
    10.111.33.1/232.17.1.1          Vl1032:Po31     10.111.32.1     INCLUDE
0.0.0.0/224.0.1.40              Vl1032:VPLS15/-1  10.111.32.252   EXCLUDE
    VSS6500#show version
    Cisco IOS Software, s72033_rp Software (s72033_rp-IPSERVICESK9_WAN-M), Version 12.2(33)SXJ5, RELEASE SOFTWARE (fc2)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2013 by Cisco Systems, Inc.
    Compiled Thu 31-Jan-13 14:30 by prod_rel_team
    Any help?
    Thanks

Check out the following link, Multicast over IPsec VPN Design Guide; this should help:
    http://www.cisco.com/application/pdf/en/us/guest/netsol/ns656/c649/cdccont_0900aecd80402f07.pdf
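Independently of the design guide, here is a minimal checklist-style sketch of what SSM forwarding between SVIs on IOS normally requires (interface numbers taken from the output above; the range statement is an assumption):

```
ip multicast-routing
ip pim ssm default
!
interface Vlan1032
 ip pim sparse-mode
 ip igmp version 3
!
interface Vlan1033
 ip pim sparse-mode
 ip igmp version 3
```

One thing to verify: with SSM the receiver must send an IGMPv3 INCLUDE report that names the source; an IGMPv2 report for a group in the SSM range will not build the (S,G) mroute.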

  • Invalid group used for source specific multicast

Is there a difference between the commands ipv6 pim accept-register list <ACL>
and ipv6 pim ssm range <ACL>? If so, what is it?
    Thank you in advance

    Hi Forrest,
These two commands serve completely different purposes and are used in different multicast modes (ASM vs. SSM).
"ipv6 pim accept-register list" is configured on the RP and dictates which first-hop routers are allowed to send Register messages to the RP. It is used in any-source multicast (ASM) mode.
"ipv6 pim ssm range" is used to change the default source-specific multicast (SSM) group range.
    Regards
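A minimal sketch of where each command sits (addresses and ACL names are hypothetical):

```
! On the RP (ASM): accept PIM Register messages only from permitted sources
ipv6 access-list ALLOWED-SOURCES
 permit ipv6 host 2001:DB8::10 any
ipv6 pim accept-register list ALLOWED-SOURCES
!
! On all routers: replace the default SSM group range (FF3x::/32)
ipv6 access-list SSM-GROUPS
 permit ipv6 any FF35::/16
ipv6 pim ssm range SSM-GROUPS
```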

  • PIM-SSM Group Mapping

The command 'show pim group-map' shows that the default mapping for SSM is 232.0.0.0/8. According to this page you can change it:
https://supportforums.cisco.com/document/100301/introduction-ios-xr-multicast
with the "ssm range" command under multicast-routing. But that command isn't available at the top level. The multicast package is active, and other multicast commands work. Has the command moved to a different spot, or has something else changed?

Do it under the address family:
    RP/0/RSP0/CPU0:ariad(config-mcast-default-ipv4)#pwd
    multicast-routing
     address-family ipv4
    RP/0/RSP0/CPU0:ariad(config-mcast-default-ipv4)#ssm range ?
      WORD  Access list specifying SSM group range
    Regards,
    /A
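Putting it together, the full IOS XR configuration would look roughly like this (the ACL name is hypothetical):

```
ipv4 access-list SSM-RANGE
 10 permit ipv4 any 232.0.0.0 0.255.255.255
!
multicast-routing
 address-family ipv4
  ssm range SSM-RANGE
```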

  • SSM and ASM

Can a 3560 switch support source-specific multicast (SSM) and any-source multicast (ASM) simultaneously?

    Hi Manish,
SSM will take precedence as long as it is configured.
Why would you need to do that? ASM will never be used, so there is no point in implementing such a configuration.
    Thanks
    Laurent.

  • Configuring ssm multicast

Hi,
We are getting ready to implement the Nexus 7000 with OTV at two sites. Since multicast is required to support this configuration, I am currently testing how to implement SSM multicast on our core network. I am having problems joining the SSM group. Here is the output from the 6509 I am using:
    hw-dc-vss-cs6509-1(config-if)#ip igmp join-group 232.1.1.1
    Ignoring request to join group 232.1.1.1, SSM group without source specified
    hw-dc-vss-cs6509-1(config-if)#ip igmp join-group 232.1.1.1 ?
      <cr>
    hw-dc-vss-cs6509-1(config-if)#ip igmp join-group 232.1.1.1
As you can see, the source option is not available and I can't figure out why.
Here is a copy of my running config and the multicast show commands:
    sh runn
    Building configuration...
    Current configuration : 6830 bytes
    ! Last configuration change at 18:37:28 UTC Thu Dec 16 2010
    upgrade fpd auto
    version 12.2
    service timestamps debug datetime msec
    service timestamps log datetime msec
    no service password-encryption
    service counters max age 5
    hostname hw-dc-vss-cs6509-1
    boot-start-marker
    boot system flash sup-bootdisk:s72033-ipservicesk9_wan-mz.122-33.SXI3.bin
    boot-end-marker
    security passwords min-length 1
    no logging console
    enable secret 5 $1$dZ1J$6KkcatZ2tXk055vswN1Kb1
    no aaa new-model
ip subnet-zero
    ip multicast-routing
    mls netflow interface
    mls cef error action reset
    spanning-tree mode rapid-pvst
    spanning-tree portfast edge default
    spanning-tree portfast edge bpduguard default
    spanning-tree extend system-id
    spanning-tree pathcost method long
    spanning-tree vlan 1,5,245,501-502 priority 16384
spanning-tree vlan 1,5,245,501-502 forward-time 9
    spanning-tree vlan 1,5,245,501-502 max-age 12
    diagnostic bootup level minimal
    redundancy
    main-cpu
      auto-sync running-config
    mode sso
    ip access-list standard ssm-groups
    permit 232.0.0.0 0.255.255.255
    permit 239.232.0.0 0.0.255.255
    vlan internal allocation policy ascending
    vlan access-log ratelimit 2000
    interface Loopback1
    ip address 10.255.255.1 255.255.255.255
    interface GigabitEthernet3/1
    description adcore-4503 2/1
mtu 9216
    ip address 159.233.253.106 255.255.255.252
    ip pim sparse-mode
    ip igmp version 3
    interface GigabitEthernet3/2
    description pwcore-6509 3/2
    mtu 9216
    ip address 159.233.253.110 255.255.255.252
    ip pim sparse-mode
    ip igmp version 3
    interface GigabitEthernet3/3
    description p101-4503 1/1
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 1,5,245,501,502
    switchport mode trunk
    mtu 9216
    spanning-tree guard root
    interface GigabitEthernet3/4
    no ip address
!
    interface GigabitEthernet3/5
    no ip address
    interface GigabitEthernet3/6
    no ip address
    interface GigabitEthernet3/7
    no ip address
    interface GigabitEthernet3/8
    no ip address
    interface GigabitEthernet3/9
    no ip address
    interface GigabitEthernet3/10
    no ip address
    interface GigabitEthernet3/11
    no ip address
    interface GigabitEthernet3/12
no ip address
    interface GigabitEthernet3/13
    no ip address
    interface GigabitEthernet3/14
    no ip address
    interface GigabitEthernet3/15
    no ip address
    interface GigabitEthernet3/16
    no ip address
    interface GigabitEthernet3/17
    no ip address
    interface GigabitEthernet3/18
    no ip address
    interface GigabitEthernet3/19
    no ip address
interface GigabitEthernet3/20
    no ip address
    interface GigabitEthernet3/21
    no ip address
    interface GigabitEthernet3/22
    no ip address
    interface GigabitEthernet3/23
    no ip address
    interface GigabitEthernet3/24
    no ip address
    interface GigabitEthernet5/1
    no ip address
    shutdown
    interface GigabitEthernet5/2
    no ip address
    shutdown
interface GigabitEthernet8/1
    switchport
    switchport access vlan 5
    switchport mode access
    interface GigabitEthernet8/2
    switchport
    switchport access vlan 245
    switchport mode access
    interface GigabitEthernet8/3
    no ip address
    shutdown
    interface GigabitEthernet8/4
    no ip address
    shutdown
    interface GigabitEthernet8/5
    no ip address
    shutdown
    interface GigabitEthernet8/6
no ip address
    shutdown
    interface GigabitEthernet8/7
    no ip address
    shutdown
    interface GigabitEthernet8/8
    no ip address
    shutdown
    interface GigabitEthernet8/9
    no ip address
    shutdown
    interface GigabitEthernet8/10
    no ip address
    shutdown
    interface GigabitEthernet8/11
    no ip address
    shutdown
interface GigabitEthernet8/12
    no ip address
    shutdown
    interface GigabitEthernet8/13
    no ip address
    shutdown
    interface GigabitEthernet8/14
    no ip address
    shutdown
    interface GigabitEthernet8/15
    no ip address
    shutdown
    interface GigabitEthernet8/16
    no ip address
    shutdown
    interface GigabitEthernet8/17
    no ip address
    shutdown
!
    interface GigabitEthernet8/18
    no ip address
    shutdown
    interface GigabitEthernet8/19
    no ip address
    shutdown
    interface GigabitEthernet8/20
    no ip address
    shutdown
    interface GigabitEthernet8/21
    no ip address
    shutdown
    interface GigabitEthernet8/22
    no ip address
    shutdown
    interface GigabitEthernet8/23
    no ip address
shutdown
    interface GigabitEthernet8/24
    no ip address
    shutdown
    interface GigabitEthernet8/25
    no ip address
    shutdown
    interface GigabitEthernet8/26
    no ip address
    shutdown
    interface GigabitEthernet8/27
    no ip address
    shutdown
    interface GigabitEthernet8/28
    no ip address
    shutdown
    interface GigabitEthernet8/29
no ip address
    shutdown
    interface GigabitEthernet8/30
    no ip address
    shutdown
    interface GigabitEthernet8/31
    no ip address
    shutdown
    interface GigabitEthernet8/32
    no ip address
    shutdown
    interface GigabitEthernet8/33
    no ip address
    shutdown
    interface GigabitEthernet8/34
    no ip address
    shutdown
interface GigabitEthernet8/35
    no ip address
    shutdown
    interface GigabitEthernet8/36
    no ip address
    shutdown
    interface GigabitEthernet8/37
    no ip address
    shutdown
    interface GigabitEthernet8/38
    no ip address
    shutdown
    interface GigabitEthernet8/39
    no ip address
    shutdown
    interface GigabitEthernet8/40
    no ip address
    shutdown
!
    interface GigabitEthernet8/41
    no ip address
    shutdown
    interface GigabitEthernet8/42
    no ip address
    shutdown
    interface GigabitEthernet8/43
    no ip address
    shutdown
    interface GigabitEthernet8/44
    no ip address
    shutdown
    interface GigabitEthernet8/45
    no ip address
    shutdown
    interface GigabitEthernet8/46
    no ip address
shutdown
    interface GigabitEthernet8/47
    no ip address
    shutdown
    interface GigabitEthernet8/48
    no ip address
    shutdown
    interface Vlan1
    no ip address
    shutdown
    interface Vlan5
    mtu 9216
    ip address 159.233.5.1 255.255.255.0
    no ip redirects
    no ip unreachables
    no ip proxy-arp
    ip flow ingress
    ip pim sparse-mode
    ip igmp join-group 239.1.1.1
ip igmp version 3
    arp timeout 200
    interface Vlan245
    mtu 9216
    ip address 159.233.245.1 255.255.255.0
    no ip redirects
    no ip unreachables
    no ip proxy-arp
    ip flow ingress
    ip pim sparse-mode
    ip igmp join-group 239.1.1.1
    ip igmp version 3
    arp timeout 200
    interface Vlan501
    mtu 9216
    ip address 159.233.62.1 255.255.255.224
    no ip redirects
    no ip unreachables
    no ip proxy-arp
    ip flow ingress
    arp timeout 200
!
    interface Vlan502
    mtu 9216
    ip address 159.233.1.1 255.255.255.240
    no ip redirects
    no ip unreachables
    no ip proxy-arp
    ip flow ingress
    arp timeout 200
    router eigrp 241
    network 159.233.0.0
    no auto-summary
    redistribute static
    ip classless
    no ip http server
    no ip http secure-server
    ip pim rp-address 10.255.255.1
    ip pim ssm default
!
    control-plane
    dial-peer cor custom
    line con 0
    line vty 0 4
    password f1v3c3nt2
    login
    line vty 5 15
    password f1v3c3nt2
    login
    end
    hw-dc-vss-cs6509-1#
    sh ip mroute
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
           V - RD & Vector, v - Vector
    Outgoing interface flags: H - Hardware switched, A - Assert winner
    Timers: Uptime/Expires
    Interface state: Interface, Next-Hop or VCD, State/Mode
    (*, 239.1.1.1), 00:20:20/00:02:55, RP 10.255.255.1, flags: SJCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Vlan5, Forward/Sparse, 00:19:14/00:02:55
    (*, 239.255.255.250), 00:26:33/00:02:35, RP 10.255.255.1, flags: SP
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list: Null
    (159.233.245.100, 232.1.1.1), 00:06:48/00:02:55, flags: sPT
      Incoming interface: Vlan245, RPF nbr 0.0.0.0, RPF-MFD
      Outgoing interface list: Null
    (*, 224.0.1.40), 02:25:53/00:02:33, RP 10.255.255.1, flags: SJCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        GigabitEthernet3/1, Forward/Sparse, 02:25:53/00:02:30
    hw-dc-vss-cs6509-1#sh ip igmp groups
    IGMP Connected Group Membership
    Group Address    Interface                Uptime    Expires   Last Reporter   Group Accounted
    239.1.1.1        Vlan5                    00:19:22  00:02:48  159.233.5.1    
    239.1.1.1        Vlan245                  00:20:14  00:02:25  159.233.245.1  
    239.255.255.250  Vlan245                  00:26:41  00:02:28  159.233.245.100
    224.0.1.40       Vlan245                  01:30:34  00:02:25  159.233.245.105
    224.0.1.40       GigabitEthernet3/1       1w1d      00:02:22  159.233.253.105
    hw-dc-vss-cs6509-1#
Any help would be greatly appreciated. Thank you.
I did some more digging and found the answer to my question: you have to use the following command instead of the join-group command:
ip igmp static-group
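For reference, a sketch of that workaround on a receiver SVI, using the source and group from the output above (a join-group variant with a source keyword exists only in later IOS releases, so treat that as release-dependent):

```
interface Vlan245
 ip igmp version 3
 ip igmp static-group 232.1.1.1 source 159.233.245.100
```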


  • IGMP V3, SSM Multicast Boundaries config questions..

    Guys,
I am having some difficulty understanding the SSM boundary config samples from this Cisco doc.
In example 1, I am not sure why they have the host 0.0.0.0 line. Also, why are the ACL entries based on ip, udp and igmp?
Example 2 is similar to example 1, but it also has a deny pim entry.
Could someone explain why we need to use udp, pim and igmp in the boundary filter? Normally I would use ip instead (e.g. permit ip host 181.1.2.201 host 232.1.1.1).
    eg 1:
    The following example permits outgoing traffic for (181.1.2.201, 232.1.1.1) and (181.1.2.202, 232.1.1.1) and denies all other (S,G)s.
    configure terminal
    ip access-list extended acc_grp1
    permit ip host 0.0.0.0 232.1.1.1 0.0.0.255
    permit ip host 181.1.2.201 host 232.1.1.1
    permit udp host 181.1.2.202 host 232.1.1.1
    permit ip host 181.1.2.202 host 232.1.1.1
    deny igmp host 181.2.3.303 host 232.1.1.1
    interface ethernet 2/3
    ip multicast boundary acc_grp1 out
    eg 2:
The following example permits outgoing traffic for (181.1.2.201, 232.1.1.5) and (181.1.2.202, 232.1.1.5).
    configure terminal
    ip access-list extended acc_grp6
permit ip host 0.0.0.0 232.1.1.5 0.0.0.255
    deny udp host 181.1.2.201 host 232.1.1.5
    permit ip host 181.1.2.201 host 232.1.1.5
    deny pim host 181.1.2.201 host 232.1.1.5
    permit ip host 181.1.2.202 host 232.1.1.5
    deny igmp host 181.2.3.303 host 232.1.1.1
    interface ethernet 2/3
    ip multicast boundary acc_grp6 out
    Cisco Doco:
    http://www.cisco.com/univercd/cc/td/doc/product/software/ios124/124cg/himc_c/chap05/hmcbnd.htm

I hope this doc will help you:
    http://www.cisco.com/en/US/products/ps6350/products_configuration_guide_chapter09186a00805a3624.html
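As a point of comparison, a boundary ACL that filters purely on ip (the form the poster says they would normally use) would look like the sketch below. Whether the protocol-specific entries (udp, igmp, pim) in the documented examples are strictly required depends on the platform and release, so this is a sketch, not a recommendation:

```
ip access-list extended acc_grp_ip_only
 permit ip host 0.0.0.0 host 232.1.1.1
 permit ip host 181.1.2.201 host 232.1.1.1
 permit ip host 181.1.2.202 host 232.1.1.1
!
interface ethernet 2/3
 ip multicast boundary acc_grp_ip_only out
```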

  • ASM instance on one node can not startup

Using dbca to create the database fails:
    [oracle@rac1 ~]$ dbca -silent -responseFile /home/oracle/dbca.rsp
    Look at the log file "/opt/ora/product/10.2.0/db_1/cfgtoollogs/dbca/mydb.log" for further details.
    [oracle@rac1 ~]$ cat "/opt/ora/product/10.2.0/db_1/cfgtoollogs/dbca/mydb.log"
    PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]
    ORA-03135: connection lost contact
    [oracle@rac2 bin]$ ./crs_stat -t
    Name           Type           Target    State     Host
    ora....SM1.asm application    ONLINE    ONLINE    rac1
    ora....C1.lsnr application    ONLINE    ONLINE    rac1
    ora.rac1.gsd   application    ONLINE    ONLINE    rac1
    ora.rac1.ons   application    ONLINE    ONLINE    rac1
    ora.rac1.vip   application    ONLINE    ONLINE    rac1
    ora....SM2.asm application    ONLINE    OFFLINE
    ora....C2.lsnr application    ONLINE    ONLINE    rac2
    ora.rac2.gsd   application    ONLINE    ONLINE    rac2
    ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
so I try to start the ASM instance on rac2:
    [oracle@rac2 bin]$ ./srvctl start asm -n rac2
    PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jul 25 17:58:21 2011
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:Enter user-name: Connected to an idle instance.
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:SQL> ORA-03113: end-of-file on communication channel
    rac2:ora.rac2.ASM2.asm:SQL> Disconnected
    rac2:ora.rac2.ASM2.asm:
    CRS-0215: Could not start resource 'ora.rac2.ASM2.asm'.]]
      [PRKS-1009 : Failed to start ASM instance "+ASM2" on node "rac2", [rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jul 25 17:58:21 2011
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:Enter user-name: Connected to an idle instance.
    rac2:ora.rac2.ASM2.asm:
    rac2:ora.rac2.ASM2.asm:SQL> ORA-03113: end-of-file on communication channel
    rac2:ora.rac2.ASM2.asm:SQL> Disconnected
rac2:ora.rac2.ASM2.asm:
Checking the alert log, I find:
    Mon Jul 25 17:58:23 2011
    Maximum Tranmission Unit (mtu) of the ether adapter is different
    on the node running instance 1, and this node.
    Ether adapters connecting the cluster nodes must be configured
    with identical mtu on all the nodes, for Oracle.
    Please ensure the mtu attribute of the ether adapter on all
    nodes are identical, before running Oracle.
    Mon Jul 25 17:58:23 2011
    Errors in file /opt/ora/admin/+ASM/bdump/+asm2_lmon_29711.trc:
    ORA-27550: Target ID protocol check failed. tid vers=%d, type=%d, remote instance number=%d, local instance number=%d
    LMON: terminating instance due to error 27550
    Mon Jul 25 17:58:25 2011
    Shutting down instance (abort)
    License high water mark = 0
    Mon Jul 25 17:58:28 2011
    Instance terminated by LMON, pid = 29711
    Mon Jul 25 17:58:30 2011
Instance terminated by USER, pid = 29831
and I checked my ethernet interfaces:
    [root@rac1 oracle]# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:3C:42:0C
              inet addr:192.168.40.102  Bcast:192.168.40.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:9762 errors:0 dropped:0 overruns:0 frame:0
              TX packets:6028 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1193646 (1.1 MiB)  TX bytes:1037252 (1012.9 KiB)
              Interrupt:67 Base address:0x2400
    eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:3C:42:0C
              inet addr:192.168.40.201  Bcast:192.168.40.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              Interrupt:67 Base address:0x2400
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:3C:42:16
              inet addr:10.10.17.221  Bcast:10.10.17.225  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:30185 errors:0 dropped:0 overruns:0 frame:0
              TX packets:37936 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:19988844 (19.0 MiB)  TX bytes:35126056 (33.4 MiB)
              Interrupt:51 Base address:0x2480
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:52537 errors:0 dropped:0 overruns:0 frame:0
              TX packets:52537 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:4380651 (4.1 MiB)  TX bytes:4380651 (4.1 MiB)
    [root@rac2 ~]# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:9C:CC:90
              inet addr:192.168.40.103  Bcast:192.168.40.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3665 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1489 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:563952 (550.7 KiB)  TX bytes:243127 (237.4 KiB)
              Interrupt:67 Base address:0x2400
    eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:9C:CC:90
              inet addr:192.168.40.202  Bcast:192.168.40.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              Interrupt:67 Base address:0x2400
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:9C:CC:9A
              inet addr:10.10.17.222  Bcast:10.10.17.225  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:38281 errors:0 dropped:0 overruns:0 frame:0
              TX packets:29683 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:35182270 (33.5 MiB)  TX bytes:19905266 (18.9 MiB)
              Interrupt:51 Base address:0x2480
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:44387 errors:0 dropped:0 overruns:0 frame:0
              TX packets:44387 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
RX bytes:2299383 (2.1 MiB)  TX bytes:2299383 (2.1 MiB)
Any idea?
I tried running dbca on rac2, and it seems OK.
============
The ASM instance still has a problem: no ASM instance was created on rac1.
    [oracle@rac2 bin]$ ./crs_stat -t
    Name           Type           Target    State     Host
    ora.mydb.db    application    ONLINE    ONLINE    rac1
    ora....b1.inst application    ONLINE    OFFLINE
    ora....b2.inst application    ONLINE    ONLINE    rac2
    ora....SM1.asm application    ONLINE    OFFLINE
    ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora....C2.lsnr application    ONLINE    OFFLINE              =>  ora.rac1.LISTENER_RAC2.lsnr   (I do not know where this resource comes from??)
    ora.rac1.gsd   application    ONLINE    ONLINE    rac1
    ora.rac1.ons   application    ONLINE    ONLINE    rac1
    ora.rac1.vip   application    ONLINE    ONLINE    rac1
    ora....SM2.asm application    ONLINE    ONLINE    rac2
    ora....C2.lsnr application    ONLINE    ONLINE    rac2
    ora.rac2.gsd   application    ONLINE    ONLINE    rac2
    ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
and here is my dbca.rsp file content:
    [GENERAL]
    RESPONSEFILE_VERSION = "10.0.0"
    OPERATION_TYPE = "createDatabase"
    [CREATEDATABASE]
    GDBNAME = "mydb.us.oracle.com"
    SID = "mydb"
    NODELIST=rac1,rac2
    TEMPLATENAME = "/home/oracle/mydb.dbc"
    SYSPASSWORD = "Myss123456"
    SYSTEMPASSWORD = "Myss123456"
    EMCONFIGURATION = "NONE"
    DBSNMPPASSWORD = "Myss123456"
    STORAGETYPE=ASM
    DISKLIST=/dev/oracleasm/disks/VOL1,/dev/oracleasm/disks/VOL2
    DISKGROUPNAME=ORADG
    REDUNDANCY=EXTRENAL
    DISKSTRING="/dev/oracleasm/disks/*"
    ASM_SYS_PASSWORD="Myss123456"
LISTENERS = "listener1 listener2"

    Hi,
    Use this tech note on MOS.
Instance Crash on startup with ORA-27550: Target ID protocol check failed [ID 730516.1]
    Regards,
    Levi Pereira

  • The Script root.sh problem - ora.asm and ASM and Clusterware Stack failed

    Folks,
Hello. I am installing Oracle 11gR2 RAC using 2 VMs (rac1 and rac2), whose OS is Oracle Linux 5.6, in VMPlayer, following http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
I am installing the Grid Infrastructure. On step 9 of 10 I execute the script /u01/app/grid/root.sh on both VMs, rac1 and rac2.
After running root.sh on rac1 successfully, I run root.sh on rac2 and get the error below:
    [root@rac2 grid]# ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= ora11g
    ORACLE_HOME= /u01/app/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]: /usr/local/bin
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-03-05 16:32:52: Parsing the host name
    2012-03-05 16:32:52: Checking for super user privileges
    2012-03-05 16:32:52: User has super user privileges
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
    CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
    CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
    CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
    CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
    Start action for octssd aborted
    CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
    CRS-2672: Attempting to start 'ora.asm' on 'rac2'
    CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
    CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
    CRS-2664: Resource 'ora.ctssd' is already running on 'rac2'
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl start resource ora.asm -init
    Start of resource "ora.asm -init" failed
    Failed to start ASM
    Failed to start Oracle Clusterware stack
    [root@rac2 grid]#
As we can see at the end of the output above:
    1) Start of resource ora.asm -init failed
    2) Failed to start ASM
    3) Failed to start Oracle Clusterware stack
The runInstaller is on the first VM, rac1. My question is:
Does anyone understand how to solve this root.sh problem on rac2 (the 3 failures above: ora.asm, ASM, and the Clusterware stack)?
    Thanks.

Please check that there is no firewall in place.
Try this link:
root.sh fails on second node
MOS notes:
11gR2 Grid: root.sh Fails to Start the Clusterware on the Second Node Due to Firewall on Private Network [ID 981357.1]
Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1] (most probably this issue)

  • Asm weird errors

We have 2 SUN FIRE 6900s with Clusterware 10.1.0.4.2, ASM and database.
Node 1 runs Forms and Reports Services for Solaris 10.
Node 2 runs Application Server.
Node 1 and node 2 run different databases: node 1 runs spdb and node 2 runs turbodb.
Recently we have seen the following sequence of errors in the files below, following a loss of service of the Reports server; eventually we had to reboot machine 1.
    ASM ALERT FILE
    Wed Mar 17 09:51:12 2010
    Errors in file /opt/orabase/asmhome/admin/+ASM/udump/+asm1_ora_5759.trc:
    ORA-07445: exception encountered: core dump [__lwp_kill()+8] [SIGIOT] [unknown code] [0x167F00000000] [] []
    Wed Mar 17 09:51:12 2010
    Trace dumping is performing id=[cdmp_20100317095112]
    DB ALERT
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_12196.trc.
    Wed Mar 17 09:51:12 2010
    Thread 1 advanced to log sequence 41853
    Current log# 2 seq# 41853 mem# 0: +DATA/spisdb/onlinelog/group_22
    Current log# 2 seq# 41853 mem# 1: +DATA/spisdb/onlinelog/group_2
    Wed Mar 17 09:51:21 2010
    Errors in file /opt/orabase/dbhome/admin/SPISDB/bdump/spisdb_arc3_6847.trc:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+DATA/spisdb/onlinelog/group_11'
    ORA-17503: ksfdopn:2 Failed to open file +DATA/spisdb/onlinelog/group_11
    ORA-03113: end-of-file on communication channel
    Wed Mar 17 09:56:40 2010
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_4820.trc.
    Wed Mar 17 09:56:41 2010
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_18667.trc.
    Wed Mar 17 10:01:49 2010
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_1200.trc.
    Wed Mar 17 10:17:31 2010
    Thread 1 advanced to log sequence 41854
    Current log# 3 seq# 41854 mem# 0: +DATA/spisdb/onlinelog/group_33
    Current log# 3 seq# 41854 mem# 1: +DATA/spisdb/onlinelog/group_3
    Wed Mar 17 10:22:14 2010
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_25503.trc.
    Wed Mar 17 10:24:29 2010
    ORA-00060: Deadlock detected. More info in file /opt/orabase/dbhome/admin/SPISDB/udump/spisdb_ora_11719.trc.
    dmesg is full of these lines ....
    Mar 17 10:23:03 e6900ap3 scsi: [ID 243001 kern.warning] WARNING: /scsi_vhci (scsi_vhci0):
    Mar 17 10:23:03 e6900ap3 /scsi_vhci/ssd@g600a0b80002624f4000034224b11ee2f (ssd73): Command Timeout on path /ssm@0,0/pci@1b,700000/SUNW,qlc@2/fp@0,0 (fp2)
    Mar 17 10:23:03 e6900ap3 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/ssd@g600a0b80002624f4000034224b11ee2f (ssd73):
    Mar 17 10:23:03 e6900ap3 SCSI transport failed: reason 'timeout': retrying command
    Mar 17 10:26:36 e6900ap3 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/ssd@g600a0b80002624f4000034224b11ee2f (ssd73):
    Mar 17 10:26:36 e6900ap3 SCSI transport failed: reason 'tran_err': retrying command
    -bash-3.00$ cat /opt/orabase/asmhome/admin/+ASM/udump/+asm1_ora_5759.trc
    /opt/orabase/asmhome/admin/+ASM/udump/+asm1_ora_5759.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP and Data Mining options
    ORACLE_HOME = /opt/orabase/asmhome
    System name: SunOS
    Node name: e6900ap3
    Release: 5.10
    Version: Generic_118833-20
    Machine: sun4u
    Instance name: +ASM1
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 5759, image: oracle@e6900ap3
    Exception signal: 6 (SIGIOT), code: -1 (unknown code), addr: 0x167f00000000, exception issued by pid: 5759, uid: 200, PC: [0xffffffff7aace500, __lwp_kill()+8]
    *** 2010-03-17 09:51:12.165
    ksedmp: internal or fatal error
    ORA-07445: exception encountered: core dump [__lwp_kill()+8] [SIGIOT] [unknown code] [0x167F00000000] [] []
    Current SQL information unavailable - no session.
    ----- Call Stack Trace -----
    calling call entry argument values in hex
    location type point (? means dubious value)
    ksedmp()+744 CALL ksedst() 000000840 ?
    FFFFFFFF7FFF517C ?
    000000000 ?
    FFFFFFFF7FFF1C70 ?
    FFFFFFFF7FFF09D8 ?
    FFFFFFFF7FFF13D8 ?
    ssexhd()+1000 CALL ksedmp() 000106000 ? 106323304 ?
    106323000 ? 000106323 ?
    000106000 ? 106323304 ?
    __sighndlr()+12 PTR_CALL 0000000000000000 000380007 ?
    FFFFFFFF7FFF8EF0 ?
    000000067 ? 000380000 ?
    000000006 ? 106323300 ?
    call_user_handler() CALL __sighndlr() 000000006 ?
    +992 FFFFFFFF7FFF8EF0 ?
    FFFFFFFF7FFF8C10 ?
    10032D860 ? 000000000 ?
    000000005 ?
    raise()+16 CALL pthread_kill() FFFFFFFF7AC02000 ?
    Has anyone seen something like this?
    I manage the machine from 2000 miles away, so a physical look is not an option.

    It doesn't seem to be an instance-startup error; compare:
    Startup Fails with ORA-7445 [__lwp_kill()+8] on Sun V440 Machines [ID 330082.1]
    Bug 4422028: ORA-7445 [__LWP_KILL] DURING STARTUP NOMOUNT OF ASM INSTANCE
    The instance has already been running for ages, and Bug 4422028 refers to configuring and creating an ASM instance, so we don't fall into that case either.
    Regarding this sequence of errors:
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+DATA/spisdb/onlinelog/group_11'
    ORA-17503: ksfdopn:2 Failed to open file +DATA/spisdb/onlinelog/group_11
    ORA-03113: end-of-file on communication channel
    can you please advise?

  • Install ASM on Solaris RAC 10g

    Hello,
    I installed CRS and database software on two nodes RAC 10.2.0.4 Solaris x86-64 5.10, latest updates for Solaris. I have no error with crs.
    Problems description:
    1) When I run DBCA to create ASM it fails to create it on the nodes, with error ORA-03135: connection lost contact.
    2) I see ORA-29702 into the logs (error occurred in Cluster Group Service operation
    Cause: An unexpected error occurred while performing a CGS operation.
    Action: Verify that the LMON process is still active. Also, check the Oracle LMON trace files for errors.)
    Question:
    Do you think the problem is that interface 10.0.0.21 node1-priv-fail2 is up? (See below, bdump/alert_+ASM1.log.)
    Is this interface, 10.0.0.21 node1-priv-fail2, the one that freezes the prompt when I try ssh oracle@node1-priv-fail2 from node2?
    Possible solution: I saw Metalink 283684.1 but don't know if/what to change in my interfaces.
    Details:
    I think it is something with the interfaces, but I don't know what.
    - One thing I noticed is that it is not possible to ssh from node1 to node2-priv-fail2 (which, I was told, is the private standby loopback interface). The same happens from node2 to node1-priv-fail2: it gives a frozen prompt.
    - in /etc/hosts on both nodes I have:
    127.0.0.1    localhost
    172.17.1.17  node1
    172.17.1.18  node1-fail1
    172.17.1.19  node1-fail2
    172.17.1.20  node1-vip
    172.17.1.29  node2 loghost
    172.17.1.30  node2-fail1
    172.17.1.31  node2-fail2
    172.17.1.32  node2-vip
    10.0.0.1     node1-priv
    10.0.0.11    node1-priv-fail1
    10.0.0.21    node1-priv-fail2
    10.0.0.2     node2-priv
    10.0.0.12    node2-priv-fail1
    10.0.0.22    node2-priv-fail2
    From bdump/alert_+ASM1.log:
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 e1000g2 10.0.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g3 10.0.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 e1000g0 172.17.0.0 configured from OCR for use as a public interface
    Interface type 1 e1000g1 172.17.0.0 configured from OCR for use as a public interface
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    large_pool_size = 12582912
    instance_type = asm
    cluster_database = TRUE
    instance_number = 1
    remote_login_passwordfile= EXCLUSIVE
    background_dump_dest = /opt/app/oracle/db/admin/+ASM/bdump
    user_dump_dest = /opt/app/oracle/db/admin/+ASM/udump
    core_dump_dest = /opt/app/oracle/db/admin/+ASM/cdump
    Cluster communication is configured to use the following interface(s) for this instance
    10.0.0.1
    10.0.0.21
    node1:oracle$ oifcfg getif
    e1000g0 172.17.0.0 global public
    e1000g1 172.17.0.0 global public
    e1000g2 10.0.0.0 global cluster_interconnect
    e1000g3 10.0.0.0 global cluster_interconnect
    node1:oracle$ ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.17.1.17 netmask ffff0000 broadcast 172.17.255.255
    groupname orapub
    e1000g0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
    inet 172.17.1.18 netmask ffff0000 broadcast 172.17.255.255
    e1000g0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2
    inet 172.17.1.20 netmask ffff0000 broadcast 172.17.255.255
    e1000g1: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 3
    inet 172.17.1.19 netmask ffff0000 broadcast 172.17.255.255
    groupname orapub
    e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
    inet 10.0.0.1 netmask ff000000 broadcast 10.255.255.255
    groupname oracle_interconnect
    e1000g2:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
    inet 10.0.0.11 netmask ff000000 broadcast 10.255.255.255
    e1000g3: flags=39040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 5
    inet 10.0.0.21 netmask ff000000 broadcast 10.255.255.255
    groupname oracle_interconnect

    Hi,
    for a 10g RAC you need:
    - a host-IP for every node
    - a private IP for every node
    - a virtual IP for every node
    The host IP and private IP must be assigned on both hosts, and connections between the hosts over either the host IP or the private IP must be possible.
    Is it possible to build the RAC and ASM with only the private IP, without the public and the virtual IP, and if yes, how? The terms "private" and "public" here do not refer to public (routable) IPs: "private" means the address is used only for communication between the nodes, while "public" is for communication between the client and the database.
    For a successful installation you need at least these three IPs on each system.
    So, for instance, your public IPs could reside in the network 192.168.1.0/255.255.255.0 and your private interconnect network could be 192.168.2.0/255.255.255.0. Both networks consist of private (i.e. non-routable) IPs.
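The three-addresses-per-node rule above can be sanity-checked mechanically against a hosts-style mapping. A minimal sketch; the hostnames, suffixes, and addresses below are illustrative, not taken from the poster's environment:

```python
# Every 10g RAC node needs a public name, a VIP name, and a private name
# resolvable on all nodes. The suffix convention here follows this thread;
# adjust it to the real naming scheme.
REQUIRED_SUFFIXES = ("", "-vip", "-priv")

def missing_rac_entries(hosts, nodes):
    """Return the required hostnames absent from the hosts mapping."""
    return [node + suffix
            for node in nodes
            for suffix in REQUIRED_SUFFIXES
            if node + suffix not in hosts]

hosts = {
    "node1": "172.17.1.17", "node1-vip": "172.17.1.20", "node1-priv": "10.0.0.1",
    "node2": "172.17.1.29", "node2-vip": "172.17.1.32",
}
print(missing_rac_entries(hosts, ["node1", "node2"]))  # → ['node2-priv']
```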

  • Mobility groups with multicast - 7.4.100.60

    Hello all,
    I am not sure if this is a bug, but here we go. I configured two controllers in a mobility group using multicast signalling.
    I expect that from the moment I enable multicast, the controller joins this group AND STAYS IN IT, so that it receives messages from all other controllers.
    On the first hop router, i can see the join:
    #sh ip igmp mem 239.194.248.10
    Flags: A  - aggregate, T - tracked
           L  - Local, S - static, V - virtual, R - Reported through v3
           I - v3lite, U - Urd, M - SSM (S,G) channel
           1,2,3 - The version of IGMP, the group is in
    Channel/Group-Flags:
           / - Filtering entry (Exclude mode (S,G), Include mode (G))
    Reporter:
           <mac-or-ip-address> - last reporter if group is not explicitly tracked
           <n>/<m>      - <n> reporter in include mode, <m> reporter in exclude
    Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
    *,239.194.248.10               10.102.78.98    00:00:05 02:54 2A     Vl350
    However, this entry does not get refreshed. I have the impression that the controller does not reply to the IGMP general queries:
    IGMP(0): Send v2 general Query on Vlan350 -> no replies
    #sh ip igmp mem 239.194.248.10
    Flags: A  - aggregate, T - tracked
           L  - Local, S - static, V - virtual, R - Reported through v3
           I - v3lite, U - Urd, M - SSM (S,G) channel
           1,2,3 - The version of IGMP, the group is in
    Channel/Group-Flags:
           / - Filtering entry (Exclude mode (S,G), Include mode (G))
    Reporter:
           <mac-or-ip-address> - last reporter if group is not explicitly tracked
           <n>/<m>      - <n> reporter in include mode, <m> reporter in exclude
    Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
    *,239.194.248.10               10.102.78.98    00:02:35 00:24 2A     Vl350
    >> 20 seconds from expiry.
    #sh ip igmp mem 239.194.248.10
    Flags: A  - aggregate, T - tracked
           L  - Local, S - static, V - virtual, R - Reported through v3
           I - v3lite, U - Urd, M - SSM (S,G) channel
           1,2,3 - The version of IGMP, the group is in
    Channel/Group-Flags:
           / - Filtering entry (Exclude mode (S,G), Include mode (G))
    Reporter:
           <mac-or-ip-address> - last reporter if group is not explicitly tracked
           <n>/<m>      - <n> reporter in include mode, <m> reporter in exclude
    Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
    >> 20 seconds later, gone
    >> At this moment the controller doesn't receive any messages anymore from remote controllers:
    >>sh ip mroute 239.194.248.10 
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
           V - RD & Vector, v - Vector
    Outgoing interface flags: H - Hardware switched, A - Assert winner
    Timers: Uptime/Expires
    Interface state: Interface, Next-Hop or VCD, State/Mode
    (*, 239.194.248.10), 00:05:38/00:00:21, RP 10.102.90.20, flags: SP
      Incoming interface: Port-channel30, RPF nbr 10.102.78.13, RPF-MFD
    Outgoing interface list: Null
    This is seen on 8500 platform running 7.4.100.60
    We also have WiSMs running 7.0.230.0, and there the entry refreshes every minute, as it should:
    IGMP(0): Received v2 Report on Vlan950 from 10.96.9.67 for 239.194.240.1
    IGMP(0): Received Group record for group 239.194.240.1, mode 2 from 10.96.9.67 for 0 sources
    IGMP(0): Updating EXCLUDE group timer for 239.194.240.1
    IGMP(0): MRT Add/Update Vlan950 for (*,239.194.240.1) by 0
    Shouldn't the controller stay joined to its mobility group at all times?

    OK, some more debugging: I loaded 7.4.100.60 onto a 5500 controller and got the same result.
    NOTE: my global multicast settings are:
    Controller->General: AP Multicast Mode: UNICAST
    Controller->Multicast->Global Multicast is NOT enabled
    Controller->Mobility->Multicast Messaging: enabled and group is 239.194.240.3
    With this config, the IGMP entry is not retained, and I see this message on the controller (debug bcast all):
    >>processEthernetIGMPpacket Received IGMP Pkt from DS when either igmp snooping or global multicast is disabled.
    So I then enabled Global Multicast (Controller->Multicast->Global Multicast), and then the IGMP entry is refreshed:
    bcastReceiveTask: Sep 05 13:11:17.734:  IGMP packet received over vlanid = 102 from DS side
    *bcastReceiveTask: Sep 05 13:11:17.734:  received an IGMP query for multicast vlan = 102 address 0.0.0.0 intfnum = 3
    *bcastReceiveTask: Sep 05 13:11:17.734: IGMP report scheduled for grp=0xefc2f003, vlan=102, intf=3, slot=70, maxRespTime=100
    >>sh ip igmp membership 239.194.240.3                                     
    004586: Sep  5 15:11:50.710 CEST: IGMP(0): Send v2 general Query on Vlan101
    Flags: A  - aggregate, T - tracked
           L  - Local, S - static, V - virtual, R - Reported through v3
           I - v3lite, U - Urd, M - SSM (S,G) channel
           1,2,3 - The version of IGMP, the group is in
    Channel/Group-Flags:
           / - Filtering entry (Exclude mode (S,G), Include mode (G))
    Reporter:
           - last reporter if group is not explicitly tracked
           /      - reporter in include mode, reporter in exclude
    Channel/Group                  Reporter        Uptime   Exp.  Flags  Interface
    *,239.194.240.3                10.102.180.218  00:38:51 02:28 2A     Vl102
    Now, on the 5500 controller I can enable Global Multicast while AP multicast mode is UNICAST.
    However, on the 8500 controller, running the same firmware (7.4.100.60), when I try to enable "Global Multicast", I get:
    "Multicast-Unicast mode does not support IGMP/MLD snooping.  Config mode to multicast-multicast first". I did not get this message on the 5500.
    After switching to Multicast-Multicast mode and then enabling "Global Multicast", it works.
    This means that to get mobility messaging working on the 8500 in multicast mode, you MUST set the AP Multicast Mode to Multicast AND enable "Global Multicast"; otherwise it won't work.
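For reference, keeping an IGMP membership alive is normally the host IP stack's job: once an application joins a group, the kernel answers IGMP general queries for as long as the membership is held, which is exactly the refresh behavior missing on the controller above. A minimal Python sketch of such a join (the group address is reused from the thread; the interface address is an assumption):

```python
import socket
import struct

def join_group(group="239.194.240.3", ifaddr="127.0.0.1"):
    """Join an IPv4 multicast group on one interface.

    While the returned socket stays open, the host's IP stack keeps the
    membership alive by answering IGMP general queries; closing it leaves
    the group, and the upstream IGMP entry then times out - the symptom
    seen on the first-hop router in this thread.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(ifaddr))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

membership = join_group()  # group stays joined until this socket is closed
membership.close()
```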

  • Multicast vrf

    Good day! I have been given a task to carry multicast traffic through MPLS (at least within the same VRF). I have three 3750 ME switches: sw1, sw2 and sw3. The multicast source host is connected to sw3 interface fa1/0/6 and the receiver host to sw1 interface fa1/0/5. Both interfaces are in VLAN 100 (the same VLAN number on both switches), and the VLAN interface is in VRF green. The switches are connected back to back, sw1-sw2-sw3, via gigabit interfaces dedicated to MPLS, like ce1/pe1 - p - pe2/ce2. In addition, sw1 and sw3 are route-reflector clients of sw2. To check multicast traffic I use the multicasttest utility (http://www.mikkle.dk/multicasttest/). The multicast group address for the test is 224.237.248.237, and the multicast traffic flows from host 192.168.1.3 to 192.168.2.2. There are also VLAN 100 interfaces on the switches in VRF green, created just to check connectivity.
    Configs:
    hostname sw1
    system mtu routing 1500
    ip subnet-zero
    ip routing
    no ip dhcp conflict logging
    ip dhcp excluded-address 192.168.2.1
    ip dhcp excluded-address 172.16.1.1
    ip dhcp pool green1
       network 192.168.2.0 255.255.255.0
       default-router 192.168.2.1
      default-router 172.16.1.1
    ip vrf green
    rd 100:100
    route-target export 100:100
    route-target import 100:100
    mdt default 232.1.1.1
    ip multicast-routing distributed
    ip multicast-routing vrf green distributed
    interface Loopback0
    ip address 10.1.1.1 255.255.255.255
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    interface Loopback100
    ip vrf forwarding green
    ip address 10.0.100.1 255.255.255.255
    ip pim sparse-dense-mode
    interface FastEthernet1/0/5
    switchport access vlan 100
    interface GigabitEthernet1/1/2
    no switchport
    ip address 10.0.1.2 255.255.255.0
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    speed auto 1000
    mpls ip
    interface Vlan100
    ip vrf forwarding green
    ip address 192.168.2.1 255.255.255.0
    ip pim sparse-dense-mode
    router ospf 1
    log-adjacency-changes
    router bgp 65001
    no synchronization
    bgp log-neighbor-changes
    neighbor 10.1.1.2 remote-as 65001
    neighbor 10.1.1.2 update-source Loopback0
    no auto-summary
    address-family ipv4 mdt
      neighbor 10.1.1.2 activate
      neighbor 10.1.1.2 send-community extended
    exit-address-family
    address-family vpnv4
      neighbor 10.1.1.2 activate
      neighbor 10.1.1.2 send-community extended
    exit-address-family
    address-family ipv4 vrf green
      no synchronization
      network 10.0.100.1 mask 255.255.255.255
      network 192.168.2.0
    exit-address-family
    ip classless
    hostname sw2
    system mtu routing 1500
    ip subnet-zero
    ip routing
    ip vrf green
    rd 100:100
    route-target export 100:100
    route-target import 100:100
    mdt default 232.1.1.1
    ip multicast-routing distributed
    ip multicast-routing vrf green distributed
    vtp mode transparent
    interface Loopback0
    ip address 10.1.1.2 255.255.255.255
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    interface GigabitEthernet1/1/1
    no switchport
    ip address 10.0.2.1 255.255.255.0
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    speed auto 1000
    mpls ip
    interface GigabitEthernet1/1/2
    no switchport
    ip address 10.0.1.1 255.255.255.0
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    speed auto 1000
    mpls ip
    router ospf 1
    log-adjacency-changes
    router bgp 65001
    no synchronization
    bgp log-neighbor-changes
    neighbor 10.1.1.1 remote-as 65001
    neighbor 10.1.1.1 update-source Loopback0
    neighbor 10.1.1.1 route-reflector-client
    neighbor 10.1.1.3 remote-as 65001
    neighbor 10.1.1.3 update-source Loopback0
    neighbor 10.1.1.3 route-reflector-client
    no auto-summary
    address-family ipv4 mdt
      neighbor 10.1.1.1 activate
      neighbor 10.1.1.1 send-community extended
      neighbor 10.1.1.3 activate
      neighbor 10.1.1.3 send-community extended
    exit-address-family
    address-family vpnv4
      neighbor 10.1.1.1 activate
      neighbor 10.1.1.1 send-community extended
      neighbor 10.1.1.1 route-reflector-client
      neighbor 10.1.1.3 activate
      neighbor 10.1.1.3 send-community extended
      neighbor 10.1.1.3 route-reflector-client
    exit-address-family
    address-family ipv4 vrf green
      no synchronization
    exit-address-family
    ip classless
    hostname sw3
    system mtu routing 1500
    ip subnet-zero
    ip routing
    no ip dhcp conflict logging
    ip dhcp excluded-address 192.168.1.1
    ip dhcp pool green2
       network 192.168.1.0 255.255.255.0
       default-router 192.168.1.1
    ip vrf green
    rd 100:100
    route-target export 100:100
    route-target import 100:100
    mdt default 232.1.1.1
    ip multicast-routing distributed
    ip multicast-routing vrf green distributed
    vtp mode transparent
    interface Loopback0
    ip address 10.1.1.3 255.255.255.255
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    interface Loopback100
    ip vrf forwarding green
    ip address 10.0.100.3 255.255.255.255
    ip pim sparse-dense-mode
    interface FastEthernet1/0/6
    switchport access vlan 100
    interface GigabitEthernet1/1/1
    no switchport
    ip address 10.0.2.2 255.255.255.0
    ip pim sparse-dense-mode
    ip ospf 1 area 0
    speed auto 1000
    mpls ip
    interface Vlan100
    ip vrf forwarding green
    ip address 192.168.1.1 255.255.255.0
    ip pim sparse-dense-mode
    router ospf 1
    log-adjacency-changes
    router bgp 65001
    no synchronization
    bgp log-neighbor-changes
    neighbor 10.1.1.2 remote-as 65001
    neighbor 10.1.1.2 update-source Loopback0
    no auto-summary
    address-family ipv4 mdt
      neighbor 10.1.1.2 activate
      neighbor 10.1.1.2 send-community extended
    exit-address-family
    address-family vpnv4
      neighbor 10.1.1.2 activate
      neighbor 10.1.1.2 send-community extended
    exit-address-family
    address-family ipv4 vrf green
      no synchronization
      network 10.0.100.3 mask 255.255.255.255
      network 192.168.1.0
    exit-address-family
    ip classless
    Pings succeed everywhere (both between the switches and between the hosts):
    sw1#ping vrf green 192.168.2.2
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.2.2, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
    sw1#ping vrf green 192.168.1.3
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.1.3, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
    sw1#ping vrf green 224.237.248.237
    Type escape sequence to abort.
    Sending 1, 100-byte ICMP Echos to 224.237.248.237, timeout is 2 seconds:
    Reply to request 0 from 192.168.2.1, 1 ms
    Reply to request 0 from 10.0.100.1, 1 ms
    sw3#ping vrf green 192.168.1.3
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.1.3, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
    sw3#ping vrf green 192.168.2.2
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.2.2, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
    sw3#ping vrf green 224.237.248.237
    Type escape sequence to abort.
    Sending 1, 100-byte ICMP Echos to 224.237.248.237, timeout is 2 seconds:
    Reply to request 0 from 192.168.1.1, 1 ms
    Reply to request 0 from 10.0.100.3, 1 ms
    I can see PIM neighbors in the global table, but cannot see them in VRF green. I think the problem is here.
    sw1#sh ip pim neighbor
    PIM Neighbor Table
    Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
          P - Proxy Capable, S - State Refresh Capable
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    10.0.1.1          GigabitEthernet1/1/2     20:25:44/00:01:43 v2    1 / S P
    sw2#sh ip pim neighbor
    PIM Neighbor Table
    Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
          P - Proxy Capable, S - State Refresh Capable
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    10.0.2.2          GigabitEthernet1/1/1     20:25:57/00:01:22 v2    1 / DR S P
    10.0.1.2          GigabitEthernet1/1/2     20:25:58/00:01:19 v2    1 / DR S P
    sw3#sh ip pim neighbor
    PIM Neighbor Table
    Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
          P - Proxy Capable, S - State Refresh Capable
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    10.0.2.1          GigabitEthernet1/1/1     20:26:13/00:01:35 v2    1 / S P
    sw1#sh ip pim vrf green neighbor
    PIM Neighbor Table
    Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
          P - Proxy Capable, S - State Refresh Capable
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    sw1#
    sw3#sh ip pim vrf green neighbor
    PIM Neighbor Table
    Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
          P - Proxy Capable, S - State Refresh Capable
    Neighbor          Interface                Uptime/Expires    Ver   DR
    Address                                                            Prio/Mode
    sw3#
    mroute in vrf:
    sw1#sh ip mroute vrf green 224.237.248.237
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
           V - RD & Vector, v - Vector
    Outgoing interface flags: H - Hardware switched, A - Assert winner
    Timers: Uptime/Expires
    Interface state: Interface, Next-Hop or VCD, State/Mode
    (*, 224.237.248.237), 02:50:33/00:02:56, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Vlan100, Forward/Sparse-Dense, 02:50:33/00:00:00
    sw3#sh ip mroute vrf green 224.237.248.237
    IP Multicast Routing Table
    Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
           L - Local, P - Pruned, R - RP-bit set, F - Register flag,
           T - SPT-bit set, J - Join SPT, M - MSDP created entry,
           X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
           U - URD, I - Received Source Specific Host Report,
           Z - Multicast Tunnel, z - MDT-data group sender,
           Y - Joined MDT-data group, y - Sending to MDT-data group
           V - RD & Vector, v - Vector
    Outgoing interface flags: H - Hardware switched, A - Assert winner
    Timers: Uptime/Expires
    Interface state: Interface, Next-Hop or VCD, State/Mode
    (*, 224.237.248.237), 02:48:36/00:02:25, RP 0.0.0.0, flags: DCL
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        Vlan100, Forward/Sparse-Dense, 02:48:36/00:00:00
    sw1#mstat
    VRF name: green
    Source address or name: 192.168.2.1
    Destination address or name: 192.168.1.3
    Group address or name: 224.237.248.237
    Multicast request TTL [64]:
    Response address for mtrace:
    Type escape sequence to abort.
    Mtrace from 192.168.2.1 to 192.168.1.3 via group 224.237.248.237 in VRF green
    From source (?) to destination (?)
    Waiting to accumulate statistics....* * *
    Timeout on first trace.
    sw3#mstat
    VRF name: green
    Source address or name: 192.168.1.1
    Destination address or name: 192.168.1.3
    Group address or name: 224.237.248.237
    Multicast request TTL [64]:
    Response address for mtrace:
    Type escape sequence to abort.
    Mtrace from 192.168.1.1 to 192.168.1.3 via group 224.237.248.237 in VRF green
    From source (?) to destination (?)
    Waiting to accumulate statistics......
    Results after 10 seconds:
      Source        Response Dest   Packet Statistics For     Only For Traffic
    192.168.1.1     192.168.1.1     All Multicast Traffic     From 192.168.1.1
         |       __/  rtt 0    ms   Lost/Sent = Pct  Rate     To 224.237.248.237
         v      /     hop 0    ms   ---------------------     --------------------
    192.168.1.1     ?
         |      \__   ttl   0
         v         \  hop 0    ms        0         0 pps           0    0 pps
    192.168.1.3     192.168.1.1
      Receiver      Query Source
    I hope I have shown all the configs, outputs and diagrams needed to make the picture clear; I can provide other outputs on demand. Thanks in advance.

    Hi Evgeny
    Unfortunately, the multicast VPN feature is not supported on the 3750 ME platform even though the commands are present. This is also noted in Cisco Feature Navigator. There are no plans to implement it on this platform.
    Thanks
    Mayuresh

  • GI+ASM+DBMS+DB cloning

    Hi.
    I have 1-node RAC 11.2.0.3 deployed on OL5.11 with test DB in ASM (3 x OCR_VOTE disks, 2 x DATA disks, 2 x FRA disks).
    The aim is to create a new cluster using the current cluster as a master, and to obtain the DB by cloning the ASM disks.
    As per http://docs.oracle.com/cd/E11882_01/rac.112/e41959/clonecluster.htm#CWADD92122 and "HOW TO CLONE AN 11.2.0.3 GRID INFRASTRUCTURE HOME AND CLUSTERWARE (DOC ID 1413846.1)", it is necessary to:
    prepare node
    deploy GI
    run clon.pl
    run config.sh
    Steps 1 and 2 in my case were replaced with:
    creating a master node as per the docs (deleting the cluster configuration and other files directly, rather than deleting them on a copy)
    cloning the disks and the virtual machine
    changing the relevant names and IPs
    I was successful up to "run config.sh". But running config.sh I got:
    [FATAL] [INS-40401] The Installer has detected a configured Oracle Clusterware home on the system.
    Any ideas? comments?
    Thank you in advance for your valuable time.

    Problem solved.
    Environment:
    VirtualBox.
    Linux 5.11
    1-node cluster with ASM (11.2.0.3)
    DBMS (11.2.0.3) installed
    1 RAC DB created.
    Disks in ASM:
    sharable 3 x ocrvote
    sharable 2 x data
    sharable 2 x fra
    1. Create pfile for DB.
    2. Stop all RAC services, disable crs.
    3. Delete files as per the official docs.
    4. Clean OS files: logs, tmp, etc.
    5. Detach GI home
    6. Detach DB home
    Now we have master node.
    1. Clone VM (or disks).
    Now we have first node for new cluster.
    1. Prepare table of conversion: IPs, names.
    2. Register IPs, names in DNS
    3. Start cloned host.
    4. Change IPs, names in OS.
    5. Delete and create ssh, establish ssh equivalence for grid and oracle users.
    6. Redefine ASM disks assignments.
    7. Run clone.pl as GI user.
    8. Run root.sh as root.
    9. unlock GI.
    10. Move or delete ocr.loc.
    11. Run config.sh.
    12. Check cluster configuration.
    13. Check ASM.
    14. Change the SCAN name to the new one.
    15. Resolve multicast issues (if necessary).
    GI is ready.
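    The clone.pl step above might look like the following sketch (all paths, home names, and locations are hypothetical; the exact parameter list for a GI home is given in Doc ID 1413846.1):

    ```
    # Run as the grid user from the cloned GI home (hypothetical paths/names)
    perl /u01/app/11.2.0/grid/clone/bin/clone.pl \
      ORACLE_BASE=/u01/app/grid \
      ORACLE_HOME=/u01/app/11.2.0/grid \
      ORACLE_HOME_NAME=OraGIHome1 \
      INVENTORY_LOCATION=/u01/app/oraInventory
    ```

    clone.pl re-registers the copied home with the central inventory; config.sh then performs the actual cluster configuration.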
    To prepare the RAC DBMS:
    1. Run clone.pl.
    2. Run root.sh.
    3. Modify DB registration.
    4. Add instance.
    5. Modify tnsnames.ora with the new SCAN name.
    6. Modify the pfile created earlier with the new SCAN name.
    7. Start the DB from the pfile.
    8. Modify remote_listener. Create spfile.
    9. Start DB.
    10. If necessary, it is possible to change the DB name, the DBID, and the disk group names.
    Welcome to the new cluster and the new RAC DB.
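    For steps 3 and 4 above (modifying the DB registration and adding the instance), a minimal sketch using 11.2 srvctl, assuming a hypothetical DB name ORCL, instance ORCL1, node rac1, and Oracle home path:

    ```
    # Hypothetical names and paths; run as the oracle user
    srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/dbhome_1
    srvctl add instance -d ORCL -i ORCL1 -n rac1
    srvctl start database -d ORCL
    ```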

  • Load balance multicast stream

    Hi, I have the same stream coming from two different directions. The two routers at the multicast server side are using HSRP. My question is: can I load balance the stream? The method in use is SSM.

    PIM (dense or sparse mode) will not load balance multicast packets, because prune behavior prevents duplicate packets. However, GRE tunnels can be used to "load balance" multicast traffic. There is also the global command "ip multicast multipath", which allows load balancing, but it will only balance traffic if multiple sources exist for the same group(s), since the RPF path is selected per source:
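    A minimal IOS config sketch of that global command alongside an SSM setup (the interface name is hypothetical):

    ```
    ! Enable per-source load splitting across equal-cost RPF paths
    ip multicast-routing
    ip multicast multipath
    !
    ! SSM for the default 232.0.0.0/8 range
    ip pim ssm default
    !
    interface GigabitEthernet0/1
     ip pim sparse-mode
    ```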
