NIC teaming/bonding for interconnection

Hi to all,
We are planning a two-node Oracle RAC implementation on Windows Server and have some doubts about the interconnect. Because redundancy is not a priority, and because we are limited by budget, a Gigabit switch will be used for both the interconnect and the public network.
Can we gain bandwidth (load balancing) if we use NIC bonding on one switch? Which is best:
- to buy servers with two single-port 1Gb NICs, bond those NICs for the interconnect, and use one single-port NIC for the public network, or
- one dual-port NIC (both ports bonded) for the interconnect and one single-port NIC for the public network?

Nikoline wrote:
We are planning for a two node Oracle RAC implementation on Windows Server and we have some doubts about interconnect. Because redundancy is not as important, and because we are limited by money Gigabit switch will be used for interconnection and for public network.
Keep in mind that the h/w architecture used directly determines how robust RAC is, how well it performs, and how well it meets redundancy, high availability and scalability requirements. The wrong h/w choices can and will hurt all of these factors.
Can we get in bandwidth (load balancing) if we use NIC Bonding on one switch.
No. Bonding usually does not address load balancing; you need to check exactly what the driver stack used for bonding supports. The primary reason for bonding is redundancy, not load balancing/load sharing.
Which is best?
- To buy servers with two single 1Gb port NIC’s and bond this NIC’s for interconnection and one single port NIC for public network or
- One dual port NIC (bond this NIC’s) for interconnects and one single NIC for public network?
A server with dual 1Gb NICs will have them bonded into a single logical bonded NIC, and that is what must then be used: the logical bonded NIC, not the individual NICs.
This means that two 1Gb ports provide you with a single bonded port, and you need two ports for RAC: public and interconnect.
So you either need four physical ports in total to create two bonded ports, or three ports in total for one bonded port and one unbonded port.
And what about your cluster's shared storage?
This, together with the Interconnect, determines the robustness and performance of RAC. 1Gb Interconnect is already pretty much a failure in providing a proper Interconnect infrastructure for RAC.
Keep in mind that the Interconnect is used to share memory across cluster nodes (cache fusion). The purpose is to speed up I/O - making it faster for a cluster node to get data blocks from another node's cache instead of having to hit spinning rust to read that data.
Old style fibre channel technology for shared storage is dual 2Gb fibre channels. Which will be faster than your 1Gb Interconnect. How does it make sense to use an Interconnect that is slower (less bandwidth and more latency) than the actual I/O fabric layer?
Would you configure h/w for a Windows server (be that a database, web or mail server) where the logical I/O from the server's memory buffer cache for local disks is slower than actually reading that data off local disk?
Then why do this for RAC?
Last comment - over 90% of the 500 fastest and biggest clusters on this planet run Linux. 1% runs Windows. Windows is usually a poor choice when it comes to clustering. Even Oracle does not provide their Exadata Database Machine product (fastest RAC cluster in the world) on Windows - only on Linux and Solaris (and the latter only because Oracle now owns Solaris too).

Similar Messages

  • NIC Teaming/Bonding

    Do anyone has a doc or link for NIC teaming/bonding ?
    Thanks in advance.
    Regards,
    Chandu

    Hi Chandu,
    NIC teaming is the process of grouping several physical NICs into one single logical NIC; it can be used for network fault tolerance and transmit load balancing. The process of grouping NICs is called teaming.
    It has two purposes:
    Fault Tolerance: By teaming more than one physical NIC into a logical NIC, availability is maximized. Even if one NIC fails, the network connection does not cease and continues to operate on the other NICs.
    Load Balancing: Balancing the network traffic load on a server can enhance the functionality of the server and the network.
    example: http://www.cisco.com/en/US/tech/tk389/tk213/technologies_configuration_example09186a008089a821.shtml
    Hope it helps.
    Regards
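    On Linux, the two purposes above correspond to different bonding modes (active-backup for pure fault tolerance, load-sharing modes for balancing). A minimal sketch using the iproute2 tools, assuming example interface names and an example address:

```shell
# Minimal sketch (run as root): an active-backup bond for fault tolerance.
# eth0/eth1 and the address are example names, not from the original post.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up
```

    Switching `mode active-backup` to a load-sharing mode such as `balance-rr` or `802.3ad` covers the load-balancing purpose, subject to switch support.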

  • NIC Teaming/Bonding with multiple switches.

    Today I started setting up in my lab a network with redundant Cisco 2970 switches. The hosts are two HP DL145 servers, each with two Broadcom GbE NICs. The systems are running Linux 2.6.15.1 and I have enabled bonding in the kernel as a module.
    Each NIC is plugged into a different switch, and the switches are NOT connected together using ISL or any other method; for all intents and purposes they operate as single switches. Each switch is in turn connected to a switch on which my gateway is located.
    I have the configuration working using mode 0 (balance-rr). Using tcpdump on the underlying slave interfaces, I can see that packets are in fact being sent out over both interfaces, and that odd sequence numbers are sent on if0 and even numbers are sent on if1. If I unplug one of the interfaces, the bonding driver marks it as down, and stops sending data on that interface. When plugged back in, data transmission is resumed. Furthermore, when using the bonding driver's arp monitoring of the default gateway, unplugging a switch from the upstream switch causes the interface connected to that switch to fail.
    My question is if this is a "supported" configuration and/or if there is a better way to make this work.
    Furthermore, my next test is to add a PIX 515UR to the mix, and figure out how to connect IT to both switches.
    I cannot find any information about how to bond or team interfaces on the PIX, can it do something like this?

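    For reference, the balance-rr plus ARP-monitoring setup described above can be sketched with iproute2 as follows; the interface names and gateway address are placeholders, not the poster's actual values:

```shell
# Sketch: mode 0 (balance-rr) with ARP monitoring of the gateway, so a
# failed path through either (unlinked) switch is detected end-to-end.
ip link add bond0 type bond mode balance-rr \
    arp_interval 1000 arp_ip_target 192.168.1.1
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up
```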

  • NIC Teaming/Bonding Support by Oracle or NOT

    Hi:
    I have customers using Red Hat Linux ES3 Update 6 with NIC bonding/teaming on the Oracle 10g RAC private/heartbeat LAN. I would like to hear from you all, or from Oracle Support, whether such a configuration is supported by Oracle or not.
    Thanks.
    Bennie


  • NIC teaming in OVM

    Hi folks,
    Does Oracle VM support NIC teaming (bonding more than one NIC into one logical NIC)? Does it require any driver or patch for this?
    I have a SUN server x4710 which has 2 NIC cards.
    Moreover, if I have VMs (guests) with IPs on different VLANs (e.g. 10.22.70.x and 202.49.214.x) residing on the same VM server, what would be the best practice?
    Hopefully I have made the points clear.
    Thanks in advance.

    user10310678 wrote:
    Does Oracle VM support NIC Teaming(Bonding of more than 1 NICs in to 1 Logical NIC)? Does it require any driver or patch for the same.
    Yes, it does support bonding and no, it doesn't require any additional drivers or patches. Bonding is built into the kernel.
    Moreover If I ve VMs(guests) with IPs of different VLANs(Eg. 10.22.70.x and 202.49.214.x) residing on same VM server what would be the best practice?
    I usually create a bridge per VLAN. That way, I can create a virtual interface to a guest that is already on a particular VLAN and the guest doesn't have to worry about VLANs. Also, it means you can control VLAN assignments outside the guest OS. See this wiki page for more info:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-bondedand+trunked+network+interfaces

  • 2012 R2 NIC Teaming and networking

    I have a 4 port NIC, all connected to the same network using DHCP, when I team two cards, they no longer have an IP address. Is that by design?
    Where can I find more information about the virtual networking how to?
    TIA

    Yes, it is by design.
    If individual NICs are configured with IP settings and you then create a NIC team, the individual NICs lose their IP settings, and you can no longer configure IP settings on the individual NICs that are part of the team. Instead, you assign the IP address to the NIC team itself.
    To assign an IP address to a NIC team, go to Control Panel\Network and Internet\Network Connections.
    To assign VLANs and manage/update team interfaces: in Server Manager, select the local server and then, in the local server properties section, click the NIC Teaming "Enabled" link.
    This will open the NIC Teaming window, where you can manage the NIC teams.
    For more information on NIC teams please refer to:
    http://blogs.technet.com/b/keithmayer/archive/2012/11/20/vlan-tricks-with-nic-teaming-in-windows-server-2012.aspx
    Kind Regards Tim (Canberra)

  • 802.3ad (mode=4) bonding for RAC interconnects

    Is anyone using 802.3ad (mode=4) bonding for their RAC interconnects? We have five Dell R710 RAC nodes and we're trying to use the four onboard Broadcom NetXtreme II NICs in an 802.3ad bond with src-dst-mac load balancing. Since we have the hardware to pull this off, we thought we'd give it a try and achieve some extra bandwidth for the interconnect rather than deploying the traditional active/standby interconnect using just two of the NICs. Has anyone tried this config, and what was the outcome? Thanks.

    I don't, but maybe these documents might help:
    http://www.iop.org/EJ/article/1742-6596/119/4/042015/jpconf8_119_042015.pdf?request-id=bcddc94d-7727-4a8a-8201-4d1b837a1eac
    http://www.oracleracsig.org/pls/apex/Z?p_url=RAC_SIG.download_my_file?p_file=1002938&p_id=1002938&p_cat=documents&p_user=nobody&p_company=994323795175833
    http://www.oracle.com/technology/global/cn/events/download/ccb/10g_rac_bp_en.pdf
    Edited by: Hub on Nov 18, 2009 10:10 AM
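    As a point of comparison for the setup discussed above, here is a hedged sketch of an 802.3ad (mode=4) bond across four NICs; the matching LACP port-channel must also be configured on the switch side, and all interface names are examples:

```shell
# Sketch: LACP (802.3ad) bond with layer2 (MAC-based) transmit hashing,
# which pairs with src-dst-mac load balancing on the switch side.
ip link add bond0 type bond mode 802.3ad miimon 100 \
    xmit_hash_policy layer2
for nic in eth0 eth1 eth2 eth3; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done
ip link set bond0 up
```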

  • Are these viable designs for NIC teaming on UCS C-Series?

    Is this a viable design on ESXi 5.1 on UCS C240 with 2 Quad port nic adapters?
    Option A) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Active configuration through alternate and redundant hardware paths to the network.
    Option B) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Standby configuration through alternate and redundant hardware paths to the network.
    Thanks.

    No. It really comes down to what Active/Active means and the type of upstream switches. For ESXi NIC teaming, Active/Active load balancing provides the opportunity to have all network links active for different guest devices. Teaming can be configured in a few different methods. The default is by virtual port ID, where each guest machine gets assigned to an active port and also a backup port. Traffic for that host is only sent on one link at a time.
    For example, let's assume 2 Ethernet links and 4 guests on the ESX host. Link 1 to Switch 1 would be active for Guests 1 and 2, and Link 2 to Switch 2 would be backup for Guests 1 and 2. However, Link 2 to Switch 2 would be active for Guests 3 and 4, and Link 1 to Switch 1 would be backup for Guests 3 and 4.
    The following provides details on the configuration of NIC teaming with VMWare:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
    There are also possibilities of configuring LACP in some situations, but there are special hardware considerations on the switch side as well as the host side.
    Also keep in mind that the vSwitch does not indiscriminately forward broadcast/multicast/unknown unicast out all ports.  It has a strict set of rules that prevents it from looping.  It is not a traditional L2 forwarder so loops are not a consideration in an active/active environment. 
    This document further explains VMWare Virtual Networking Concepts.
    http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
    Steve McQuerry
    UCS - Technical Marketing

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two node 2012 R2 Hyper-V Cluster and will be using the Replica feature to make copies of some of the hosted VM's to an off-site, standalone Hyper-V server.
    We have planned to use two physical NIC's in an LBFO Team on the Cluster Nodes to use for the Replica traffic but wanted to confirm that this is supported before we continue?
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response, presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent/dynamic/active-active), on top of which we provision vNICs for the parent partition (host OS), as well as guest VMs. 
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking!
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi, thanks for reading. I may well have my terminology incorrect here, so I will try to explain as best I can; apologies from the start. It's a bit of both Hyper-V and Server 2012 R2.
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed, called "HostSwitchTeam", and from that team I have made several virtual network adapters, for example:
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode Weight -NetAdapterName "HostSwitchTeam" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the HostSwitchTeam for selection. Once installed, it shows the Microsoft Network Multiplexor Driver as the only option.
    Is this correct or how does one use the Vswitch made above and incorporate into the Hyper-V so a weight can be put against it.
    Still trying to get my head around Vswitches,VMNetworkadapters etc so somewhat confused as to the way forward at this time so I may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards

    Hi P.J.E,
    >>I have teams so a bit confused as to the adapter bindings and if the teams need to be added or just the vEthernet Nics?
    Nic 1, 2: HostVMSwitchTeam
    Nic 3, 4, 5: HostMgmtSwitchTeam
    >>The adapter binding settings are:
    HostMgmtSwitchTeam: V-Curric, Nic 3, Nic 4, Nic 5, V-Livemigration
    HostVMSwitch: Nic 1, Nic 2, V-iSCSI, V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs. You can think of them as two physical NICs (do not use Nic 1, 2, 3, 4, 5 any more; there are just the two NICs "HostMgmtSwitchTeam" and "HostVMSwitch").
    V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just vNICs of the host (you can change their names and then check whether the virtual switch name changes).
    Best Regards
    Elton Ji

  • Hyper-V Nic Teaming (reserve a nic for host OS)

    Whilst setting up NIC teaming on my host (Server 2012 R2), the OS recommends leaving one NIC for host management (access). Is this best practice? It seems like a waste of a NIC, as the host would hardly ever be accessed after initial setup.
    I have 4 NICs in total. What is the best practice in this situation?

    Depending on whether it is a single, stand-alone host or you are building a cluster, you need several networks on your Hyper-V host, with at least one connection for the host to do management.
    So, in the case of a single node with local disks, you would create a team with the 4 NICs and create a Hyper-V switch with the option checked for creating the management OS adapter, which is a so-called vNIC on that vSwitch, and configure that vNIC with the needed IP settings etc.
    If you plan a cluster and also iSCSI/SMB for storage access, take a look here:
    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
    You will find a few possible ways of teaming, the switch settings, and all the steps needed for a fully converged setup via PowerShell.
    If you share more information on your setup, we can give more details.

  • NIC Teaming for Hyper-V server

    I have installed Windows Server 2012 R2 on a server. The server has two network adapters and I have given static IP addresses to both NICs: LAN1: 192.168.0.100 and LAN2: 192.168.0.101. After enabling NIC teaming, the server added one more adapter called "Network Adapter Multiplexor", after which the above-mentioned IP addresses stopped responding to ping or any other requests. I then gave the IP address 192.168.0.102 to the Multiplexor and it started working.
    So my question is: do I need to give IP addresses to LAN1 and LAN2, or can I just create the team and give an IP address to the Multiplexor?
    Also, if I install Hyper-V on it, will it give me failover for this machine?
    Akshay Pate

    Hello Akshay,
    In brief, after creating the teaming adapter (Multiplexor) you'll use its address for all future networking purposes.
    Regarding the lack of ping, I had the same "issue"; it seems to be blocked by the Microsoft code itself, and I still couldn't find how to allow it.
    When digging into W2k12 (& R2) NIC teaming, these two pages were very explanatory and useful:
    Geek of All Trades: The Availability Answer (by Greg Shields)
    Windows Server 2012 Hyper-V 3.0 network virtualization (if you need more technical detail)
    Hope it helps!

  • Windows 7/8.0/8.1 NIC teaming issue

    Hello,
    I'm having an issue with Teaming network adapters in all recent Windows client OSs.
    I'm using Intel Pro dual-port or Broadcom NetXtreme II Gigabit adapters with the appropriate drivers/applications from the vendors.
    I am able to set up teaming, and failover works flawlessly, but the connection will not use the entire advertised bandwidth of 2 Gbps; basically it will use either one port or the other.
    I'm doing the testing with the iperf tool and am communicating with a unix based server.
    I have the following setup:
    Dell R210 II server with 2 Broadcom NetXtreme II adapters and a dual-port Intel Pro adapter, running CentOS 6.5 with bonding configured and working while communicating with other Unix-based systems.
    Zyxel GS2200-48 switch - Link Aggregation configured and working
    Dell R210 II with Windows 8.1 with Broadcom NetExtreme II cards or Intel Pro dualport cards.
    For the Windows machine I have also tried Windows 7 and Windows 8, also non server type hardware with identical results.
    So, why am I not getting > 1 Gbps throughput on the created team? Load balancing is activated, the team adapter says the connection type is 2 Gbps, and the same setup with 2 Unix machines works flawlessly.
    Am I to understand that Link Aggregation (802.3ad) under a Microsoft OS does not support load balancing if the connection is only towards one IP?
    To make it clear, I need a client version of Windows OS to communicate with a Unix-based OS at higher than 1 Gbps bandwidth (as close to 2 Gbps as possible), without the use of 10 Gbps network adapters.
    Thanks in advance,
    Endre

    As v-yamliu has mentioned, NIC teaming through the operating system is only available in Windows Server 2012 and Windows Server 2012 R2. For Windows client versions, or for previous versions of Windows Server, you will need to create the team via the network driver. For Broadcom this is accomplished using the Broadcom Advanced Server Program (BASP) as documented here, and for Intel via Advanced Network Services as documented here.
    If you have configured the team via the drivers, you may need to ensure the driver is properly installed and updated. You may also want to ensure that the adapters are configured for aggregation (802.3ad/802.1ax/LACP), rather than fault tolerance or load balancing, and that the teaming configuration on the switch matches and is compatible with the server configuration. Also ensure that all of the links are connecting at full duplex, as this is a requirement.
    Brandon
    Windows Outreach Team- IT Pro
    The Springboard Series on TechNet

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, no Nexus 1000V, using VMware Distributed Switch):
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pro of a port channel would be both bandwidth and redundancy. vPC would be prefered.
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "route by port ID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
    Cheers,
    David Jarzynka

  • Relationship between coherence and NIC teaming

    Hi,
    We are using Tangosol Coherence for clustering purposes in our product, webMethods Integration Server.
    When our server starts up, it tries to join the cluster.
    Our scenario is this:
    We have 2 servers running on 2 separate boxes, A & B.
    They are on the same network segment.
    The multicast test is working properly.
    The issue is that only one of the nodes (the one started first) becomes part of the cluster and the other one remains disabled.
    We found out that NIC teaming was disabled on the boxes.
    When we enabled NIC teaming with smart load balancing, both nodes were able to join the cluster.
    My specific question is,
    Is there any relationship between Tangosol coherence and NIC teaming? If yes, what's the relationship.
    Regards,
    Ritwik Bhattacharyya

    I did some tinkering a while back trying to get 4 Gb/s bonded EtherChannels going on Linux boxes, but I had issues with out-of-order and missing packets:
    4Gb/s bonded ethernet test results - finally...
    But to answer your question there is no reason that you would need NIC teaming on in order to make Coherence work. It sounds like something is not configured correctly with your NIC or switch. Maybe try connecting the machines with a crossover cable instead of a switch just to eliminate the switch as a possible problem. It sounds like maybe you're just using the wrong ethernet port on a server or something.
    -Andrew
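    As a quick check along the lines Andrew suggests, the bonding driver exposes its state (mode, slave link status, failure counters) in procfs; a sketch, assuming a bond named bond0 and an example interface name:

```shell
# Sketch: inspect the bond's mode and each slave's link state.
cat /proc/net/bonding/bond0
# Per-interface link status (example interface name):
ethtool eth0 | grep 'Link detected'
```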
