vWLC and Nexus 1000V

Hi Experts!
Has anybody tried to install the vWLC on ESX with the Nexus 1000V as the switch?
All the deployment guides are based on the standard VMware vSwitch, and I cannot find any information on the following questions:
1. Is the vWLC compatible with the Nexus 1000V?
2. What configuration needs to be done on the Nexus 1000V for the vWLC to work properly?

Hi Dave,
You can access the URL below for the Nexus 1000V 4.0(4)SV1(3b) docs:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_3_b/roadmap/guide/n1000v_roadmap.html
And for the Nexus 5000:
http://www.cisco.com/en/US/products/ps9670/tsd_products_support_series_home.html
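For the 1000V side of your vWLC questions, a hedged sketch of a trunked vEthernet port-profile for the vWLC data interface, assuming the standard-vSwitch guidance of a multi-VLAN data port carries over (the profile name and VLAN list are placeholders):
port-profile type vethernet vWLC-DATA
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  no shutdown
  state enabled
The management interface can stay on a normal access port-profile. Note that some vSwitch settings called out in the vWLC deployment guides (promiscuous mode, for example) have no direct equivalent on the 1000V, so it is worth confirming support with TAC before deploying.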
BR,
John Meng

Similar Messages

  • VM-FEX and Nexus 1000v relation

    Hi
    I am new to the virtualization world and I need to know the relation between the Cisco Nexus 1000v and Cisco VM-FEX, and when to use VM-FEX versus the Nexus 1000v.
    Regards

    Ahmed,
    Sorry for taking this long to get back to you.
    The Nexus 1000v is a virtualized switch, so any traffic entering or leaving a VM must first pass through the virtualization layer. This adds a small amount of delay, which for some applications (VMs) can be enough to be a real problem.
    With VM-FEX you gain the option to bypass the virtualization layer, for example with "Pass-Through" mode, where the vNICs are assigned to and managed directly by the OS. This minimizes the delay and makes the VMs look as if they were directly attached; it also offloads switching work from the host CPU, optimizing the host's and VMs' performance.
    The need for one or the other will be defined as always by the needs your organization/business has.
    Benefits of VM-FEX (from cisco.com):
    Simplified operations: Eliminates the need for a separate, virtual networking infrastructure
    Improved network security: Contains VLAN proliferation
    Optimized network utilization: Reduces broadcast domains
    Enhanced application performance: Offloads virtual machine switching from host CPU to parent switch application-specific integrated circuits (ASICs)
    Benefits of Nexus 1000v here on another post from Rob Burns:
    https://supportforums.cisco.com/thread/2087541 
    https://communities.vmware.com/thread/316542?tstart=0
    I hope that helps 
    -Kenny

  • Port-security and Nexus 1000v

    Is there really any true need for port-security on Nexus 1000v for vethernet ports? Can a VM be assigned a previously used vethernet port that would trigger a port-security action?

    If you want to prevent admins or malicious users from being able to change the MAC address of a VM, then port-security is a useful feature, especially in VDI environments where users might have full admin control of the VM and can change the MAC of the vNIC.
    Now about veth ports. A veth gets assigned to a VM and stays with that VM. A veth is only released when either the NIC on the VM is deleted, or the NIC is assigned to another port-profile on the N1KV or a port-group on a vSwitch or VMware DVS. When the veth is released it does not retain any of the prior information. It's freed up and added to a pool of available veths. When a veth is needed for a VM, in either the same port-profile or a different port-profile, a free veth is grabbed and initialized. It does not retain any of the previous settings.
    So assigning a VM to a previously used veth port should not trigger a violation. The MAC should get learned and traffic should be able to flow.
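    For reference, a hedged sketch of enabling port-security on a vEthernet port-profile (the profile name, VLAN, and violation mode are placeholders; check the Nexus 1000V security configuration guide for the exact options in your release):
    port-profile type vethernet VDI-Desktops
      vmware port-group
      switchport mode access
      switchport access vlan 100
      switchport port-security
      switchport port-security violation protect
      no shutdown
      state enabled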

  • How to change Nexus 1010 and Nexus 1000v IP address

    Hi Experts,
    We run two VSMs and one NAM on the Nexus 1010. The Nexus 1010 version is 4.2.1.SP1.4, and the Nexus 1000v version is 4.0.4.SV1.3c. Now we must change the management IP address to another one. Where can I find the SOP or a config sample? And is there anything I need to keep in mind?

    If it's only the mgmt0 IP address that you are changing, then you can just enter the new address under the mgmt0 interface. It will automatically sync with the VC.
    I guess you are trying to change the IP address of the VC plus the management VLAN as well. One way of doing this is:
    - From the Nexus 1000v, disconnect the connection to the VC (svs connection -> no connect)
    - Change the VC IP address and connect to it (svs connection -> remote ip address)
    - Change the Nexus 1000v mgmt0 address
    - Change the mgmt VLAN on the N1010
    - Change the mgmt address of the N1010
    - Reconnect the Nexus 1000v back to the VC (svs connection -> connect)
    Correspondingly, change the VLAN configuration on the upstream switch plus the connection to the VC as well.
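    Putting the VSM side of those steps together on the CLI, a hedged sketch (the connection name "VC" and all addresses are placeholders for your own values):
    n1000v# configure terminal
    n1000v(config)# svs connection VC
    n1000v(config-svs-conn)# no connect
    n1000v(config-svs-conn)# remote ip address 198.51.100.50
    n1000v(config-svs-conn)# exit
    n1000v(config)# interface mgmt0
    n1000v(config-if)# ip address 198.51.100.11/24
    n1000v(config-if)# exit
    n1000v(config)# svs connection VC
    n1000v(config-svs-conn)# connect
    The Nexus 1010 management VLAN and address changes in between are done from the 1010's own CLI and are not shown here.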
    Thanks,
    Shankar

  • Nexus 1000v port-channels questions

    Hi,
    I’m running vCenter 4.1 and Nexus 1000v and about 30 ESX Hosts.
    I’m using one system uplink port profile for all 30 ESX Hosts; on each of the ESX hosts I have 2 NICs going to a Catalyst 3750 switch stack (Switch A), and another 2 NICs going to another Catalyst 3750 switch stack (Switch B).
    The Nexus is configured with the “sub-group CDP” command on the system uplink port profile like the following:
    port-profile type ethernet uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      channel-group auto mode on sub-group cdp
      no shutdown
      system vlan 988-989
      description System-Uplink
      state enabled
    And the port-channels on the Catalyst 3750 are configured like the following:
    interface Port-channel11
     description ESX-10(Virtual Machine)
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 500
     switchport trunk allowed vlan 800,802,900,988-991
     switchport mode trunk
     switchport nonegotiate
     spanning-tree portfast trunk
    end
    interface GigabitEthernet1/0/18
     description ESX-10(Virtual Machine)
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 500
     switchport trunk allowed vlan 800,802,900,988-991
     switchport mode trunk
     switchport nonegotiate
     channel-group 11 mode on
     spanning-tree portfast trunk
     spanning-tree guard root
    end
    interface GigabitEthernet1/0/1
     description ESX-10(Virtual Machine)
     switchport trunk encapsulation dot1q
     switchport trunk native vlan 500
     switchport trunk allowed vlan 800,802,900,988-991
     switchport mode trunk
     switchport nonegotiate
     channel-group 11 mode on
     spanning-tree portfast trunk
     spanning-tree guard root
    end
    Now Cisco is telling me that I should be using MAC pinning when trunking to two different stacks, and that each interface on the 3750 should not be configured in a port-channel like above, but should be configured as an individual trunk.
    First question: Is the above statement correct; are my uplinks configured wrong? Should they be configured as individual trunks instead of a port-channel?
    Second question: If I need to add the MAC pinning configuration to my system uplink port-profile, can I create a new system uplink port profile with the MAC pinning configuration and then move one ESX host (with no VMs on it) at a time to that new system uplink port profile? This way, I could migrate one ESX host at a time without outages to my VMs. Or is there an easier way to move 30 ESX hosts to a new system uplink profile with the MAC pinning configuration?
    Thanks.

    Hello,
    From what I understood, you have the following setup:
         - Each ESX host has 4 NICS
         - 2 of them go to a 3750 stack and the other 2 go to a different 3750 stack
         - all 4 vmnics on the ESX host use the same Ethernet port-profile
              - this has 'channel-group auto mode on sub-group cdp'
         - The 2 interfaces on each 3750 stack are in a port-channel (just 'mode on')
    If yes, then this sort of setup is correct. The only problem with it is the dependence on CDP: with CDP loss, the port-channels would go down.
    'mac-pinning' is the recommended option for this sort of setup. You don't have to bundle the interfaces on the 3750 for this; they can just be regular trunk ports. If all your ports are on the same stack, then you can look at LACP. The CDP option will not be supported in future releases. In fact, it is supposed to be removed from 4.2(1)SV1(2.1), but I still see the command available (ignore 4.2(1)SV1(4) next to it) - I'll follow up on this internally:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_2_1_1/interface/configuration/guide/b_Cisco_Nexus_1000V_Interface_Configuration_Guide_Release_4_2_1_SV_2_1_1_chapter_01.html
    For migrating, the best option would be as you suggested. Create a new port-profile with mac-pinning and move one host at a time. You can migrate VMs off the host before you change the port-profile and can remove the upstream port-channel config as well.
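    A hedged sketch of what that new mac-pinning uplink profile could look like, reusing the VLANs from your existing profile (verify the system VLANs and MTU for your environment before cutting hosts over):
    port-profile type ethernet System-Uplink-MacPin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1,800,802,900,988-991,996-997,999
      switchport trunk native vlan 500
      mtu 1500
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 988-989
      description System-Uplink-MacPinning
      state enabled
    With mac-pinning the 3750 ports stay plain trunks, so the 'channel-group 11 mode on' lines come off the upstream interfaces as part of the same migration.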
    Thanks,
    Shankar

  • Nexus 1000v, VMWare ESX and Microsoft SC VMM

    Hi,
    I'm curious if anybody has worked up any solutions for managing network infrastructure for VMware ESX hosts/VMs with the Nexus 1000v and Microsoft's System Center Virtual Machine Manager.
    There currently exists support for the 1000v and SCVMM using the Cisco 1000v software for MS Hyper-V. There is no such support for VMware ESX.
    I'm curious as to what others with VMware, Nexus 1000v (or equivalent) and SCVMM have done to work around this issue.
    Trying to get some ideas.
    Thanks

    Aaron,
    The steps you have above are correct; you will need steps 1 - 4 to get it working correctly. Normally people will create a separate VLAN for their NLB interfaces/subnet to prevent unnecessary flooding of mcast frames within the network.
    To answer your questions
    1) I've seen multiple customer run this configuration
    2) The steps you have are correct
    3) You can't enable/disable IGMP snooping on UCS.  It's enabled by default and not a configurable option.  There's no need to change anything within UCS in regards to MS NLB with the procedure above.  FYI - the ability to disable/enable IGMP snooping on UCS is slated for an upcoming release 2.1.
    This is the correct method until the time we have the option of configuring static multicast MAC entries on
    the Nexus 1000v. If this is a feature you'd like, please open a TAC case and request that bug CSCtb93725 be linked to your SR.
    This will give more "push" to our development team to prioritize this request.
    Hopefully some other customers can share their experience.
    Regards,
    Robert

  • Firewall between Nexus 1000V VSM and vCenter

    Hi,
    Customer has multiple security zones in environment, and VMware vCenter is located in a Management Security Zone. VSMs in security zones have dedicated management interface facing Management Security Zone with firewall in between. What ports do we need to open for the communication between VSMs and vCenter? The Nexus 1000V troubleshooting guide only mentioned TCP/80 and TCP/443. Are these outbound from VSM to vCenter? Is there any requirements from vCenter to VSM? What's the best practice for VSM management interface configuration in multiple security zones environment? Thanks.

    Avi -
    You need the connection between vCenter and the VSM anytime you want to add or make any changes to the existing port-profiles.  This is how the port-profiles become available to the virtual machines that reside on your ESX hosts.
    One problem when the vCenter is down is what you pointed out - configuration changes cannot be pushed.
    The VEM/VSM relationship is independent of the VSM/vCenter connection.  There are separate VLANs or L3 interfaces that are used to pass information and heartbeats between the VSM and its VEMs.
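    For the firewall itself, the connections are initiated from the VSM's mgmt0 interface toward vCenter on TCP/443 (and TCP/80), so the permits are outbound from the VSM; as far as I know nothing needs to be initiated from vCenter toward the VSM for this connection. A hedged ASA-style sketch with placeholder addresses (verify the port list against the current troubleshooting guide for your release):
    access-list VSM-TO-VC extended permit tcp host 192.0.2.10 host 198.51.100.20 eq https
    access-list VSM-TO-VC extended permit tcp host 192.0.2.10 host 198.51.100.20 eq www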
    Jen

  • VN-Tag with Nexus 1000v and Blades

    Hi folks,
    A while ago there was a discussion on this forum regarding the use of Catalyst 3020/3120 blade switches in conjunction with VN-Tag. Specifically, you can't do VN-Tag with that Catalyst blade switch sitting in between the Nexus 1000V and the Nexus 5000. I know there's a blade switch for the IBM blade servers, but will there be a similar version for the HP C-class blades? My guess is NO, since Cisco just kicked HP to the curb. But if that's the case, what are my options? Pass-through switches? (ugh!)
    Previous thread:
    https://supportforums.cisco.com/message/469303#469303

    wondering the same...

  • Nexus 1000v and vcenter domain admin account

    I changed the domain admin account that the vCenter services run as, and it is now using a different service account. I am wondering if I need to update anything on the Nexus 1000v switch side, between the 1000v and vCenter.

    Hi Dan,
    You are on the right track. However, you can perform some of these functions "online".
    First you want to ensure that you are running, at a minimum, Nexus 1000v SV1(4a), as ESXi 5.0 support only began with that release. SV1(4a) provides support for both ESXi 5.0 and ESX/ESXi 4.1.
    Then you can follow the procedure documented here:
    Upgrading from VMware Release 4.0/4.1 to VMware Release 5.0.0
    This document walks you through upgrading your ESX infrastructure to VMware Release 5.0.0 when Cisco Nexus 1000V is installed. It is required to be completed in the following order:
    1. Upgrade the VSMs and VEMs to Release 4.2(1)SV1(4a).
    2. Upgrade the VMware vCenter Server to VMware Release 5.0.0.
    3. Upgrade the VMware Update Manager to VMware Release 5.0.0.
    4. Upgrade your ESX hosts to VMware Release 5.0.0 with a custom ESXi image that includes the VEM bits.
    Upgrading the ESX/ESXi hosts consists of the following procedures:
    –Upgrading the vCenter Server
    –Upgrading the vCenter Update Manager
    –Augmenting the Customized ISO
    –Upgrading the ESXi Hosts
    There is also a 3-part video highlighting the procedure to perform the last two steps above (customizing the ISO and upgrading the ESXi hosts).
    Video: Upgrading the VEM to VMware ESXi Release 5.0.0
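    Once a host has been upgraded, a quick way to confirm the VEM came back up cleanly (a hedged example). On the upgraded ESXi host:
    ~ # vem status -v
    And on the VSM, the host should re-appear as a module:
    n1000v# show module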
    Hope that helps you with your upgrade.
    Thanks,
    Michael

  • Vmware Tools for Nexus 1000v, VNMC and VSG

    Hi everyone, a customer is asking me how to install the VMware Tools in the virtual machines of the N1Kv, VSG and VNMC.
    Does someone know the procedure, or whether that is possible at all?

    @Robert
    Wanted to know/understand which hardware version would be compatible with the Nexus 1000V? Is there any dependency on the hardware version?
    Regards,
    Amit Vyas

  • VSM and Cisco nexus 1000v

    Hi,
    We are planning to install the Cisco Nexus 1000v in our environment. Before we install it, we want to explore the Cisco Nexus 1000v a little bit.
    • I know there are 2 elements to the Cisco 1k, the VEM and the VSM. Is the VSM required? Can we configure the VEM individually?
    • How does the Nexus 1k integrate with vCenter? Can we do all Nexus 1000v configuration from vCenter without going to the VEM or VSM?
    • In terms of alarming and reporting, do we need to get SNMP traps and gets from each individual VEM, or can we use the VSM to do that? Or can we get Cisco Nexus 1000v alarming and reporting from VMware vCenter?
    • Apart from using the Nexus 1010, what's the recommended hosting location for the VSM (same host as a VEM, a different VM, or a different physical server)?
    Foyez Ahammed

    Hi Foyez,
    Here is a brief overview of the Nexus1000v, and I'll answer some of your questions within it:
    The Nexus1000v is a virtual distributed switch (software based) from Cisco which integrates with the vSphere environment to provide uniform networking across your VMware environment for the hosts as well as the VMs. There are two components to the N1K infrastructure: 1) VSM 2) VEM.
    VSM - The Virtual Supervisor Module is the one which controls the entire N1K setup and is where the configuration is done for the VEM modules, interfaces, security, monitoring etc. The VSM is the one which interacts with the VC.
    VEM - The Virtual Ethernet Modules are simply the modules, or virtual linecards, which provide the connectivity options or virtual ports for the VMs and other virtual interfaces. Each ESX host today can have only one VEM. These VEMs receive their configuration/programming from the VSM.
    If you are aware of any other switching products from Cisco like the Cat 6k switches, the N1K behaves the same way but in a software/virtual environment, where the VSMs are the equivalent of the SUPs and the VEMs are similar to the line cards. The control and packet VLANs in the N1K provide the same kind of AIPC and inband connectivity as the 6k backplane would for the communication between the modules and the SUP (the VSM in this case).
    *The N1K configuration is done only from the VSM and is visible in the VC. However, the port-profiles created on the VSM are pushed from the VSM to the VC and have to be assigned to the virtual/physical ports from the VC.
    *You can run the VSM either on the Nexus1010 as a Virtual Service Blade (VSB) or as a normal VM on any ESX/ESXi server. Running the VSM and the VEM on the same server is fully supported.
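    For example, a minimal vEthernet port-profile defined on the VSM (a hedged sketch; the name and VLAN are placeholders) shows up in the VC as a port group that you then assign to a VM's vNIC:
    port-profile type vethernet WebServers
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled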
    You can refer the following deployment guide for some more details: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html
    Hope this answers your queries!
    ./Abhinav

  • Nexus 1000V, 4K and 5K

    I'm looking over the deployment guide for 1000Vs, and am not clear on the design. If I have a Nexus 4k connecting to a Nexus 5k, how does the Nexus 1000V fit? What I'm seeing is that typically a vPC is built between the Nexus 1k and a clustered upstream switch, such as Nexus 5ks, or VSS with 6500s. However, if I already have a vPC between a Nexus 4k and a pair of 5ks, what effect does adding 1ks to the configuration have? Or is the idea to move the vPC back to the 1000Vs instead of between the 4k and 5ks? Or perhaps is a 1000V better suited when you have blades with pass-through modules, where each blade has its own NIC, or when there are blade switches (non-Nexus 4k) in the chassis?
    thank you,
    Bill

    Hi Bill,
    Mainly there are two options.
    The first option is to use the N1K with clustered upstream switches, as you mentioned, vPC or VSS. In this case all you need from the N1K/ESXi host is a normal port-channel, multihoming the port-channel links to both of these switches (this is the recommended solution if applicable).
    Option two is to use non-clustered switches, like in your case the two 4K switches, as the upstream switches with the N1K. In this case you can use vPC host mode, where the N1K in newer releases uses mac-pinning to choose the uplink sub-group within the port-channel.
    See below:
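    A hedged sketch contrasting the two uplink options (VLAN ranges are placeholders; the channel-group line is the part that really differs):
    ! Option 1: clustered upstream switches (vPC/VSS) - one LACP port-channel spanning both switches
    port-profile type ethernet uplink-lacp
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      channel-group auto mode active
      no shutdown
      state enabled
    ! Option 2: non-clustered upstream switches - vPC host mode with mac-pinning, plain trunks upstream
    port-profile type ethernet uplink-macpin
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 100-110
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled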

  • Nexus 1000v UCS Manager and Cisco UCS M81KR

    Hello everyone
    I am confused about how the integration between the N1K and UCS Manager works.
    First question:
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    I created an Ethernet uplink port-profile on the N1K in switchport mode access (100), and I created a vEthernet port-profile for the VMs in switchport mode access (100) as well. In the Fabric Interconnect I created a vNIC profile for the physical NICs of the ESXi hosts (where the VMs are). I also created VLAN 100 (the same as in the N1K).
    Second question: With the configuration above, if I include in the vNIC profile only VLAN 100 (not as the native VLAN), the two VMs cannot ping each other. Instead, if I include in the vNIC profile only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    Third question: How does VLAN tagging work on the Fabric Interconnect and also in the N1K?
    I tried to read different documents, but I did not understand.
    Thanks                 

    This document may help...
    Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
    If two VMs on different ESXi hosts and different VEMs, but in the same VLAN, would like to talk to each other, the data flow between them is managed by the upstream switch (in this case the UCS Fabric Interconnect), isn't it?
    - Yes. Each ESX host with the VEM will have one or more dedicated NICs for the VEM to communicate with the upstream network. These would be your 'type ethernet' port-profiles. The upstream network would need to bridge the VLAN between the two physical NICs.
    Second question: With the configuration above, if I include in the vNIC profile only VLAN 100 (not as the native VLAN), the two VMs cannot ping each other. Instead, if I include in the vNIC profile only the default VLAN (I think it is VLAN 1) as the native VLAN, everything works fine. Why?
    - The N1K port profiles are switchport access, making them untagged. This would be the native VLAN in UCS. If there is no native VLAN in the UCS configuration, we do not have the upstream network bridging the VLAN.
    Third question: How does VLAN tagging work on the Fabric Interconnect and also in the N1K?
    - All ports on the UCS are effectively trunks, and you can define which VLANs are allowed on the trunk as well as which VLAN is passed natively, or untagged. In the N1K, you will want to leave your vEthernet port-profiles as 'switchport mode access'. For your Ethernet profiles, you will want them to be 'switchport mode trunk'. Use an unused VLAN as the native VLAN. All production VLANs will be passed from the N1K to UCS as tagged VLANs.
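    A hedged sketch of the N1K uplink side of that recommendation (VLAN 999 is a placeholder for an otherwise-unused native VLAN; the vEthernet profile stays 'switchport mode access' on VLAN 100, and the UCS vNIC then needs VLAN 100 added as a tagged VLAN rather than as the native):
    port-profile type ethernet ucs-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk native vlan 999
      switchport trunk allowed vlan 100,999
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled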
    Thank You,
    Dan Laden
    PDI Helpdesk
    http://www.cisco.com/go/pdihelpdesk

  • Install and Configure Nexus 1000V

    Hi all !! Hope everyone is well !!
    I just purchased 2 Nexus 1000Vs. Could someone give me some guidance on how to go about installing/configuring a Nexus 1000V switch? This is my first time working with Nexus series switches. Thanks in advance!!!
    D.

    Hi,
    Here are some of the documents and video links that will help you to proceed futher with Nexus 1000v installation:
    http://www.cisco.com/en/US/products/ps9902/prod_installation_guides_list.html
    http://www.google.co.in/search?q=nexus+1000v+installation&hl=en&client=firefox-a&hs=Hnt&rls=org.mozilla:en-US:official&prmd=v&source=univ&tbs=vid:1&tbo=u&ei=WtloTI6hAc-rcbb98I8F&sa=X&oi=video_result_group&ct=title&resnum=4&ved=0CCwQqwQwAw

  • Nexus 1000V and strange ping behavior

    Hi ,
    I am using a Nexus 1000v and a FI 6248 with a Nexus 5K in a redundant architecture, and I have a strange behavior with VMs.
    I am using  port-profiles without any problems but in one case I have this issue
    I have 2 VMs assigned to the same port profile
    When the 2 VMs are on the same ESX host, I can ping (from a VM) the gateway and the other VM. Now when I move one of the VMs to another ESX host (same chassis or not):
    From both, I can ping the gateway and a remote IP, but the VMs are unreachable from each other.
    And a remote PC is able to ping both VMs.
    I checked the MAC table: from the N5k it's OK, from the FI 6348 it's OK, but from the N1K I am unable to see the MAC address of either VM.
    What I tried (I performed a clear of the MAC table at each step):
        Assigned it to another vmnic: it works.
        On UCS I moved it to another vmnic: it works.
        On UCS I changed the QoS policy: it works.
        I reassigned it, and I had the old behavior.
        I checked all trunk links: they are OK.
    So I don't understand why I have this strange behavior, and how can I troubleshoot it more deeply?
    If possible I would like to avoid this, but the next step will be to create a new vmnic, assign the same policy, and afterwards delete the vmnic and recreate the old one.
    Regards

    From what you mentioned here's my thoughts.
    When the two VMs are on the same host, they can reach each other.  This is because they're locally switching in the VEM so this doesn't tell us much other than the VEM is working as expected.
    When you move one of the VMs to a different UCS ESX host, the path changes.    Let's assume you've moved one VM to a different host, within the UCS system.
    UCS-Blade1(Host-A) - VM1
    UCS-Blade2(Host-B) - VM2
    There are two path options from VM1 -> VM2:
    VM1 -> Blade1 Uplink -> Fabric Interconnect A -> Blade 2 Uplink -> VM2
    or
    VM1-> Blade1 Uplink -> Fabric Interconnect A -> Upstream Switch -> Fabric Interconnect B -> Blade 2 Uplink -> VM2
    For the two options, I've seen many instances where the FIRST option works fine, but the second doesn't. Why? Well, as you can see, option 1 has a path from Host A to FI-A and back down to Host B. In this path there's no northbound switching outside of UCS. This would require both VMs to be pinned to host uplinks going to the same Fabric Interconnect.
    In the second option if the path involves going from Host-A up to FI-A, then northbound to the upstream switch, then back down eventually to FI-B  and then Host-B. When this path is taken, if the two VMs can't reach each other then you have some problem with your upstream switches.  If both VMs reside in the same subnet, it's a Layer2 problem.  If they're in different subnets, then it's a Layer 2 or 3 problem somewhere north of UCS.
    So knowing this - why did manual pinning on the N1K fix your problem?  What pinning does is forces a VM to a particular uplink.  What likely happened in your case is you pinned both VMs to Host Uplinks that both go to the same UCS Fabric Interconnect (avoiding having to be switched northbound).  Your original problem still exists, so you're not clear out of the woods yet.
    Ask yourself: why are just these two VMs affected? Are they possibly the only VMs using a particular VLAN or subnet?
    An easy test to verify the pinning is to use the command below. "x" is the module # for the host the VMs are running on.
    module vem x execute vemcmd show port-old
    I explain the command further in another post here -> https://supportforums.cisco.com/message/3717261#3717261. In your case you'll be looking for the VM1 and VM2 LTLs and finding out which SubGroup ID they use, then which SG_ID belongs to which VMNIC.
    I bet you'll find that the manual pinning "that works" takes the path from each host to the same FI. If this is the case, look northbound for your L2 problem.
    Regards,
    Robert
