Nexus 1000v and fabric extenders

Would it be possible in the near future to connect fabric extenders like the N2K-C2232TM to the Nexus 1000v virtual switch?
Regards,
Vice

Hi vlacmanov, 
It would depend on your current hardware setup. Please feel free to send me an email at [email protected] so we can discuss this further. Hope to hear from you soon!

Similar Messages

  • VN-Tag with Nexus 1000v and Blades

    Hi folks,
A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag.  Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000.  I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP C-Class blades?  My guess is NO, since Cisco just kicked HP to the curb.  But if that's the case, what are my options?  Pass-through switches?  (ugh!)
    Previous thread:
    https://supportforums.cisco.com/message/469303#469303

    wondering the same...

  • Nexus 1000V and strange ping behavior

    Hi ,
I am using a Nexus 1000v and an FI 6248 with a Nexus 5K in a redundant architecture, and I have some strange behavior with VMs.
I am using port-profiles without any problems, but in one case I have this issue.
I have 2 VMs assigned to the same port profile.
When the 2 VMs are on the same ESX host I can ping (from a VM) the gateway and the other VM. But when I move one of the VMs to another ESX host (same chassis or not):
From both, I can ping the gateway and a remote IP, but the VMs are unreachable between them,
and a remote PC is able to ping both VMs.
I checked the MAC table: from the N5K it's OK, from the FI 6248 it's OK, but on the N1K I am unable to see the MAC address of either VM.
Here is what I tried (I cleared the MAC table at each step):
    Assigned the VM to another vmnic: it works.
    On UCS, moved it to another vmnic: it works.
    On UCS, changed the QoS policy: it works.
    Reassigned it, and the old behavior came back.
    Checked all trunk links: they are OK.
So I don't understand why I have this strange behavior, and how can I troubleshoot it more deeply?
I would like to avoid this if possible, but the next step will be to create a new vmnic, assign the same policy, and then delete the old vmnic and recreate it.
    Regards

    From what you mentioned here's my thoughts.
    When the two VMs are on the same host, they can reach each other.  This is because they're locally switching in the VEM so this doesn't tell us much other than the VEM is working as expected.
    When you move one of the VMs to a different UCS ESX host, the path changes.    Let's assume you've moved one VM to a different host, within the UCS system.
    UCS-Blade1(Host-A) - VM1
    UCS-Blade2(Host-B) - VM2
There are two path options from VM1 -> VM2:
    VM1 -> Blade1 Uplink -> Fabric Interconnect A -> Blade 2 Uplink -> VM2
    or
    VM1-> Blade1 Uplink -> Fabric Interconnect A -> Upstream Switch -> Fabric Interconnect B -> Blade 2 Uplink -> VM2
For the two options I've seen many instances where the FIRST option works fine but the second doesn't.  Why?  As you can see, option 1 has a path from Host A to FI-A and back down to Host B.  In this path there's no northbound switching outside of UCS.  This would require both VMs to be pinned to host uplinks going to the same Fabric Interconnect.
In the second option the path goes from Host-A up to FI-A, then northbound to the upstream switch, then eventually back down to FI-B and then Host-B. When this path is taken, if the two VMs can't reach each other then you have some problem with your upstream switches.  If both VMs reside in the same subnet, it's a Layer 2 problem.  If they're in different subnets, then it's a Layer 2 or Layer 3 problem somewhere north of UCS.
So knowing this - why did manual pinning on the N1K fix your problem?  Pinning forces a VM to a particular uplink.  What likely happened in your case is that you pinned both VMs to host uplinks that go to the same UCS Fabric Interconnect (avoiding having to be switched northbound).  Your original problem still exists, so you're not out of the woods yet.
Ask yourself: why are just these two VMs affected?  Are they possibly the only VMs using a particular VLAN or subnet?
An easy way to verify the pinning is to use the command below, where "x" is the module # for the host the VMs are running on.
module vem x execute vemcmd show port-old
I explain the command further in another post here -> https://supportforums.cisco.com/message/3717261#3717261.  In your case you'll be looking for the VM1 and VM2 LTLs and finding out which SubGroup ID they use, then which SG_ID belongs to which VMNIC.
I bet you'll find the manual pinning "that works" takes the path from each host to the same FI. If this is the case, look northbound for your L2 problem.
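    If manual pinning does remain as your workaround, for reference it's configured per vethernet port-profile; a minimal sketch (the profile name and sub-group ID here are only examples, not from your setup):
    port-profile type vethernet VM-Data
      pinning id 0
    Pinning both VM profiles to sub-groups that map to uplinks on the same FI reproduces the "working" path, but again, it only masks the northbound L2 issue.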
    Regards,
    Robert

  • Nexus 1000v and vcenter domain admin account

I changed the domain admin account on our domain that the vCenter services run as, and now it's using a different service account. I am wondering if I need to update anything on the Nexus 1000v switch side between the 1000v and vCenter.

    Hi Dan,
You are on the right track. However, you can perform some of these functions "online".
    First you want to ensure that you are running at a minimum, Nexus 1000v SV1(4a) as ESXi 5.0 only began support on this release. With SV1(4a), it provides support for both ESXi 5.0 and ESX/i 4.1.
    Then you can follow the procedure documented here:
    Upgrading from VMware Release 4.0/4.1 to VMware Release 5.0.0
This document walks you through upgrading your ESX infrastructure to VMware Release 5.0.0 when Cisco Nexus 1000V is installed. It must be completed in the following order:
    1. Upgrade the VSMs and VEMs to Release 4.2(1)SV1(4a).
    2. Upgrade the VMware vCenter Server to VMware Release 5.0.0.
    3. Upgrade the VMware Update Manager to VMware Release 5.0.0.
    4. Upgrade your ESX hosts to VMware Release 5.0.0 with a custom ESXi image that includes the VEM bits.
    Upgrading the ESX/ESXi hosts consists of the following procedures:
    –Upgrading the vCenter Server
    –Upgrading the vCenter Update Manager
    –Augmenting the Customized ISO
    –Upgrading the ESXi Hosts
There is also a 3-part video highlighting the procedure to perform the last two steps above (customizing the ISO and upgrading the ESXi hosts):
    Video: Upgrading the VEM to VMware ESXi Release 5.0.0
    Hope that helps you with your upgrade.
    Thanks,
    Michael

  • Nexus 1000V and sub-groups

    Hi,
I have a question about the number of subgroups in the Nexus 1000V. I can use sub-groups with an ID between 0-31.
Does this mean that I can have only 32 subgroups across all my port-channels?  Or can I use the same sub-group ID for interfaces that are in different port-channels?
    Thanks
    Hendrik

You can use the same sub-group ID for interfaces that are in different port-channels. Sub-groups are local to a port-channel. Hope that helps.
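    As an illustration (a sketch with made-up interface numbers), manual sub-group assignment on members of two different port-channels can reuse the same ID without conflict:
    interface Ethernet3/2
      sub-group-id 0
    interface Ethernet3/3
      sub-group-id 1
    interface Ethernet4/2
      sub-group-id 0
    Here Ethernet3/2 and Ethernet4/2 both use sub-group-id 0, which is fine because they belong to different port-channels.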

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
I'm curious if anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
Support currently exists for the 1000v with SCVMM using the Cisco 1000v software for MS Hyper-V.   There is no such support for VMware ESX.
I'm curious as to what others with VMware, the Nexus 1000v (or equivalent) and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
The steps you have above are correct; you will need steps 1 - 4 to get it working correctly.  Normally people will create a separate VLAN for their NLB interfaces/subnet, to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions
1) I've seen multiple customers run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS.  It's enabled by default and not a configurable option.  There's no need to change anything within UCS in regards to MS NLB with the procedure above.  FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release 2.1.
This is the correct method until we have the option of configuring static multicast MAC entries on
the Nexus 1000v.  If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR. 
This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • VM-FEX and Nexus 1000v relation

    Hi
I am new to the virtualization world and I need to know what the relation is between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.
    Regards

    Ahmed,
    Sorry for taking this long to get back to you.
The Nexus 1000v is a virtualized switch, and as such any traffic entering or leaving a VM must first pass through the virtualization layer. This adds a small delay that for some applications (VMs) can be significant enough to be a problem.
With VM-FEX you gain the option to bypass the virtualization layer, for example with "Pass-Through" mode, where the vmnics are directly assigned to and managed by the OS. This minimizes the delay and makes the VMs look as if they were directly attached; it also offloads CPU work, optimizing the host's and VMs' performance.
    The need for one or the other will be defined as always by the needs your organization/business has.
    Benefits of VM-FEX (from cisco.com):
    Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
    Improved network security: Contains VLAN proliferation
    Optimized network utilization: Reduces broadcast domains
    Enhanced application performance: Offloads virtual  machine switching from host CPU to parent switch application-specific  integrated circuits (ASICs)
    Benefits of Nexus 1000v here on another post from Rob Burns:
    https://supportforums.cisco.com/thread/2087541 
    https://communities.vmware.com/thread/316542?tstart=0
    I hope that helps 
    -Kenny

  • Weird syslog format messages with Nexus 1000v

    I'm  trying out the Nexus 1000v, and have the VEM configured to write logs to my  syslog server. The thing is, the messages are in a weird format that my  log management tools cannot parse. Here is an example:
    <189>: 2012 Oct 21 15:22:40 UTC: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on unknown_session
I found the documentation rather amusing, where it states "The syslog client functionality is RFC-5424 compliant" - doesn't look like they've even read the RFC! This is closer to the format of the older (but more often found in the wild) RFC 3164... though not compliant with that either :/
    Anyway,  I guess the main issue here is that the hostname of the 1000v is not  being added to the logs (it is set in my config). Any ideas how I can  fix this?
    Thanks!

    Hi,
     Do you have vCenter installed on a Win2012 server? The installation will not continue until you have vCenter installed.
    Hardik

  • Nexus 1000V private-vlan issue

    Hello
    I need to transmit both the private-vlans (as promiscous trunk) and regular vlans on the trunk port between the Nexus 1000V and the physical switch. Do you know how to properly configure the uplink port to accomplish that ?
    Thank you in advance
    Lucas


  • Firewall ports for Nexus 1000v

    hi all,
There is a firewall between the Nexus 1000v and vCenter and the ESXi 4.1 hosts.
Could you please advise which TCP/UDP ports need to be opened for communication among the Nexus 1000v, vCenter, and the ESX hosts?
    Thank you very much!
    Best Regards,

    David,
    Between your VSM & VC you'll need TCP ports 80 & 443 open
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/troubleshooting/configuration/guide/n1000v_trouble_5modules.html
Between your VEM & VSM no ports need to be opened - this communication is Layer 2.
If you're using Layer 3 mode, then ensure you have UDP 4785 open.
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3/system_management/configuration/guide/n1000v_system_3domain.pdf
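    As a sketch of the resulting firewall policy (IOS-style ACL entries; the addresses are placeholders you would substitute with your own VSM, vCenter, and VEM IPs):
    permit tcp host VSM-MGMT-IP host VCENTER-IP eq 80
    permit tcp host VSM-MGMT-IP host VCENTER-IP eq 443
    permit udp host VEM-VMK-IP host VSM-IP eq 4785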
    Regards,
    Robert

  • Nexus 1000v port-channels questions

    Hi,
    I’m running vCenter 4.1 and Nexus 1000v and about 30 ESX Hosts.
    I’m using one system uplink port profile for all 30 ESX Host; On each of the ESX host I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
    The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
    port-profile type ethernet uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
    switchport trunk native vlan 500
    mtu 1500
    channel-group auto mode on sub-group cdp
    no shutdown
    system vlan 988-989
    description System-Uplink
    state enabled
    And the port channel on the Catalyst 3750 are configured like the following:
    interface Port-channel11
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    end
    interface GigabitEthernet1/0/18
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
    interface GigabitEthernet1/0/1
    description ESX-10(Virtual Machine)
    switchport trunk encapsulation dot1q
    switchport trunk native vlan 500
    switchport trunk allowed vlan 800,802,900,988-991
    switchport mode trunk
    switchport nonegotiate
    channel-group 11 mode on
    spanning-tree portfast trunk
    spanning-tree guard root
    end
Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750s should not be configured in a port-channel like above, but should be configured as an individual trunk.
First question: Is the above statement correct - are my uplinks configured wrong?  Should they be configured as individual trunks instead of a port-channel?
Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port-profile with the MAC pinning configuration and then move one ESX host (with no VMs on it) at a time to that new system uplink port-profile? This way, I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
    Thanks.

    Hello,
    From what I understood, you have the following setup:
         - Each ESX host has 4 NICS
         - 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
         - all 4 vmnics on the ESX host use the same Ethernet port-profile
              - this has 'channel-group auto mode on sub-group cdp'
         - The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP. With CDP loss, the port-channels would go down.
'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can be just regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it was supposed to be removed from 4.2(1)SV1(2.1) but I still see the command available (ignore 4.2(1)SV1(4) next to it) - I'll follow up on this internally:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
    For migrating, the best option would be as you suggested. Create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile and can remove the upstream port-channel config as well.
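    Based on the profile you posted, the new mac-pinning version would look like this (only the channel-group line changes; the profile name is just an example):
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 988-989
      description System-Uplink-MacPin
      state enabled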
    Thanks,
    Shankar

  • Nexus 1000v: Control VLAN must be same VLAN as ESX hosts?

    Hello,
    I'm trying to install nexus 1000v and came across the below prerequisite.
    The below release notes for Nexus 1000v states
    VMware and Host Prerequisites
    The VSM VM control interface must be on the same Layer 2 VLAN as the ESX 4.0 host that it manages. If you configure Layer 3, then you do not have this restriction. In each case however, the two VSMs must run in the same IP subnet.
    What I'm trying to do is to create 2 VLANs - one for management and the other for control & Data (as per latest deployment guide, we can put control & data in the same vlan).
    However, I wanted to have all ESX host management same VLAN as the VSM management as well as the vCenter Management. Essentially, creating a management network.
However, from the above "VMware and Host Prerequisites", does this mean I cannot do this?
Do I need to have the ESX host management in the same VLAN as the control VLAN?
That would mean that my ESX hosts reside in a different VLAN than my management subnet?
    Thanks...

The control VLAN is a totally separate VLAN from your System Console. The VLAN just needs to be available to the ESX host through the upstream physical switch; then make sure the VLAN is passed on the uplink port-profile that you assign the ESX host to.
You only need an interface on the ESX host if you decide to use L3 control. In that instance you would create or use an existing VMK interface on the ESX host.
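    If you do decide on L3 control, the VSM side is only a couple of lines; a sketch (the domain ID and the choice of mgmt0 vs control0 depend on your design):
    svs-domain
      domain id 100
      svs mode L3 interface mgmt0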

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
I am confused about how the integration between the N1K and UCS Manager works:
First question:
If two VMs on different ESXi hosts and different VEMs but in the same VLAN would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi host (where the VMs are). I also created VLAN 100 (the same as in the N1K).
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. WHY????
Third question: How does VLAN tagging work on the Fabric Interconnect and in the N1K?
I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
If two VMs on different ESXi hosts and different VEMs but in the same VLAN would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
- Yes.  Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network.  These would be your 'type ethernet' port-profiles.  The upstream network needs to bridge the VLAN between the two physical NICs.
Second question: With the configuration above, if I include only VLAN 100 in the vNIC profile (not as native VLAN), the two VMs cannot ping each other. But if I include only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. WHY????
- The N1K port profiles are switchport access, making the traffic untagged.  This corresponds to the native VLAN in UCS.  If there is no native VLAN in the UCS configuration, the upstream network is not bridging the VLAN.
Third question: How does VLAN tagging work on the Fabric Interconnect and in the N1K?
- All ports on UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively, or untagged.  In the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'.  For your Ethernet profiles, you will want 'switchport mode trunk'.  Use an unused VLAN as the native VLAN.  All production VLANs will be passed from the N1K to UCS as tagged VLANs.
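    Putting the above together, a sketch of the two N1K profiles (the VLAN numbers are examples; 999 is assumed to be an unused VLAN serving only as the trunk's native VLAN):
    port-profile type vethernet vm-vlan-100
      switchport mode access
      switchport access vlan 100
    port-profile type ethernet uplink-trunk
      switchport mode trunk
      switchport trunk native vlan 999
      switchport trunk allowed vlan 100,999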
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk

  • Firewall between Nexus 1000V VSM and vCenter

    Hi,
    Customer has multiple security zones in environment, and VMware vCenter is located in a Management Security Zone. VSMs in security zones have dedicated management interface facing Management Security Zone with firewall in between. What ports do we need to open for the communication between VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentioned TCP/80 and TCP/443. Are these outbound from VSM to vCenter? Is there any requirements from vCenter to VSM? What's the best practice for VSM management interface configuration in multiple security zones environment? Thanks.

    Avi -
    You need the connection between vCenter and the VSM anytime you want to add or make any changes to the existing port-profiles.  This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
One problem when vCenter is down is what you pointed out - configuration changes cannot be pushed.
    The VEM/VSM relationship is independent of the VSM/vCenter connection.  There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
    Jen

  • Fabric with two Nexus-5548 and a brocade switch does not get fabric updates

We have a fabric containing two Nexus 5548s and a Brocade 5000 switch in interop mode 2. When I make changes to the zoning, the first Nexus (the fabric principal) and the Brocade switch see the zone changes. The second Nexus switch does not see them. There are no error messages, but the change just can't be seen.  What can I do to find out what is going wrong?

Ouch, deprecated is not the word I wanted to read.
    We are using 5.1(3)N1(1a) on nexus-rz1-a
    and 6.0(2)N1(2) on nexus-rz2-a.
    The fabric can be seen :
    nexus-rz2-a# show fcs ie vsan 10
    IE List for VSAN: 10
    IE-WWN                   IE     Mgmt-Id  Mgmt-Addr (Switch-name)
    10:00:00:05:1e:90:57:27  S(Rem) 0xfffc01 10.88.133.110 (bc-san1)
    20:0a:00:2a:6a:72:ba:01  S(Loc) 0xfffc1c 10.88.133.105 (nexus-rz2-a)
    20:0a:54:7f:ee:7f:dc:01  S(Adj) 0xfffc0b 10.88.133.100 (nexus-rz1-a)
    [Total 3 IEs in Fabric]
    nexus-rz1-a# show fcs ie vsan 10
    IE List for VSAN: 10
    IE-WWN                   IE     Mgmt-Id  Mgmt-Addr (Switch-name)
    10:00:00:05:1e:90:57:27  S(Adj) 0xfffc01 10.88.133.110 (bc-san1)
    20:0a:00:2a:6a:72:ba:01  S(Adj) 0xfffc1c 10.88.133.105 (nexus-rz2-a)
    20:0a:54:7f:ee:7f:dc:01  S(Loc) 0xfffc0b 10.88.133.100 (nexus-rz1-a)
    [Total 3 IEs in Fabric]
    I try to distribute the zoneset this way:
    zoneset distribute vsan 10
    Zoneset distribution initiated. check zone status
    nexus-rz1-a# show zone status
    VSAN: 10 default-zone: deny distribute: full Interop: 2
        mode: basic merge-control: allow
        session: none
        hard-zoning: enabled broadcast: disabled
    Default zone:
        qos: none broadcast: disabled ronly: unsupported
    Full Zoning Database :
        DB size: 6291 bytes
        Zonesets:1  Zones:62 Aliases: 44
    Active Zoning Database :
        DB size: 10243 bytes
        Name: FABRIC1  Zonesets:1  Zones:60
    Status: Zoneset distribution completed at 08:06:00 UTC Dec  3 2013
    nexus-rz2-a# show zone status
    VSAN: 1 default-zone: deny distribute: active only Interop: default
        mode: basic merge-control: allow
        session: none
        hard-zoning: enabled broadcast: disabled
    Default zone:
        qos: none broadcast: disabled ronly: unsupported
    Full Zoning Database :
        DB size: 4 bytes
        Zonesets:0  Zones:0 Aliases: 0
    Active Zoning Database :
        Database Not Available
    Status:
    VSAN: 10 default-zone: deny distribute: full Interop: 2
        mode: basic merge-control: allow
        session: none
        hard-zoning: enabled broadcast: disabled
    Default zone:
        qos: none broadcast: disabled ronly: unsupported
    Full Zoning Database :
        DB size: 6291 bytes
        Zonesets:1  Zones:62 Aliases: 44
    Active Zoning Database :
        DB size: 10243 bytes
        Name: FABRIC1  Zonesets:1  Zones:60
    Status: Activation completed at 13:03:42 UTC Dec  2 2013
