7.10 installation server, load balancing and multiple installation servers

Hi
In the 7.10 GUI installation server there is no Load Balancing option anymore, and there also seems to be no option to easily clone the installation server from within the nwsapsetupadmin.exe program.
If you have MANY users at one location and need more than one installation server to accommodate the frontend installations, do you have to do this "manually" now, i.e. by grouping users and having the logon script direct the different user groups to different SAPGUI installation servers?
What about creating multiple installation servers easily: is it possible to simply copy the installation server directory (c:\sapinst, for example) to another file server, share the directory and configure the DS and IS services (if needed)?
Will configured packages, "on end install" scripts and the like be copied to the new installation server as well?
I need to easily create installation servers in 15 countries, which is the reason for my question...
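For the "manual" grouping approach, here is a minimal sketch (in Python, purely for illustration; a real logon script would be a batch or VBScript file) of the logic such a script could implement. The group names and UNC share paths are made up:
=====
# Illustrative sketch only: map user groups to regional SAPGUI
# installation server shares. Group names and paths are hypothetical.
INSTALL_SERVERS = {
    "SAP_FRONTEND_DE": r"\\desrv01\sapinst",
    "SAP_FRONTEND_FR": r"\\frsrv01\sapinst",
    "SAP_FRONTEND_SE": r"\\sesrv01\sapinst",
}
DEFAULT_SERVER = r"\\hqsrv01\sapinst"

def pick_install_server(user_groups):
    """Return the installation share for the first matching group."""
    for group in user_groups:
        if group in INSTALL_SERVERS:
            return INSTALL_SERVERS[group]
    return DEFAULT_SERVER

# e.g. groups as read from the directory service at logon time
print(pick_install_server(["Domain Users", "SAP_FRONTEND_FR"]))
=====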

Hi Kim Sonny,
hope you're doing fine
Regarding your question:
The "Load Balancing" was more a fail-over service and wasn't intended to use for several locations like in your case.
So the easiest way to do this is to setup one installation server and copy the files to the other servers. On the new Installation servers you only have to setup the service again and that's it.
Cheers,
Martin
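A rough sketch of the copy step Martin describes, in Python; the share paths are placeholders, and the DS/IS services on the target server still have to be re-created with nwsapsetupadmin.exe as noted above. Since the configured packages and "on end install" scripts live under the installation server directory tree, they should come along with the copy:
=====
# Illustrative sketch only: mirror an existing installation server
# directory to a new file server. Paths are hypothetical.
import shutil

SOURCE = r"\\hqsrv01\sapinst"      # existing installation server share
TARGET = r"\\frsrv01\d$\sapinst"   # directory on the new server (share it afterwards)

shutil.copytree(SOURCE, TARGET, dirs_exist_ok=True)
print("Copy done; now share the target directory and re-create the "
      "DS/IS services on the new server.")
=====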

Similar Messages

  • Server Load-balancing and ACL router decision

    Hello,
    My 2 server farm distribution switches are running in "hybrid" mode, with CAT OS on the switch and IOS on the MSFC.
    My server team is asking to block traffic to a specific server that is load balanced using Cisco's CSM load-balancer which is also installed in the chassis.
    The question that I have is this.
    Does anyone know in what order the MSFC will inspect and apply the ACL and when will the CSM make the load balancing decision?
    The reason I need to know this is that the CSM is set up in bridged mode, where traffic to the server comes into the MSFC with a destination IP of a VIP which resides on the CSM. Subsequently, the CSM forwards the traffic to one of the real servers in the load-balanced server farm after it makes its load-balancing decision. Which occurs first?
    Does anyone have any info on what occurs first and in what order?
    Is there a link to Cisco's website that explains this process?
    Thanks in advance for your help.
    Tony

    Tony,
    It sounds as if your setup is like this:
    Client VLAN----MSFC----VLAN A----CSM----Server VLAN
    With VLAN A and Server VLAN being the same IP subnet.
    In this case all client traffic reaching the VIPs on the CSM first traverses the MSFC. So, if you want to block traffic to a specific VIP or Server IP you can do that on the MSFC's Interface for Client VLAN. You could configure an access list that filters inbound traffic on that VLAN interface.
    Make sense?
    -Brad

  • Load balancing across multiple application servers not working with JCo RFC

    We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
    Our system comprises the following:
    Central Instance (2 X server nodes)
    Database Instance
    2 X Dialog Instances (with 2 X server nodes each)
    The 1st application server that starts is usually the one that is used for inbound messaging.
    We have looked at the sap gateway configuration and have tried various options without much luck:
    i.e.: local gateways vs. one central gateway, load balancing type by changing parameter gw/reg_lb_level, see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
    Here are our release levels:
    Component   Release      Level   Support Package
    SAP_ABA     700          0012    SAPKA70012
    SAP_BASIS   700          0012    SAPKB70012
    PI_BASIS    2005_1_700   0012    SAPKIPYJ7C
    ST-PI       2005_1_700   0005    SAPKITLQI5
    SAP_BW      700          0013    SAPKW70013
    ST-A/PI     01J_BCO700   0000    -
    Any help would be greatly appreciated.
    Many thanks

    Tim
    Did you follow the guide here:
    How to Scale Up SAP Exchange Infrastructure 3.0  
    Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
    We followed this guide for XI 3.0 and PI 7.0 and it worked successfully!

  • Load Balancing and WLS primary server offset

    I've got a load balancer in front of my WLS cluster, and I'm trying to set up load balancing based on WLS clustering. What I need to know to do this is the offset within the cookie that's responsible for determining which machine within the cluster to direct to.
    Any idea how I can get this information?
    thanks,
    cfraser

    Chris Fraser wrote:
    > The proxy/plug-in solution sounds pretty cool, but I've got a high-speed Alteon load balancer already set up. I would prefer to use that as the load balancer to the WL cluster rather than pay to bring another WLS online to do pretty much what the load balancer that I already own can do. I know that going this route means that we're probably not going to be able to do things like failover to the secondary when the primary dies, but we will be able to load balance and also have the ability to dynamically add/delete servers from the list of available servers as they are brought up/down.
    In-memory session replication doesn't work without our plug-ins. I will have to do a little bit of investigation to figure out whether the other persistence mechanisms would work without our plug-ins, if you are interested in them. I have to remind you, though, that the other types of persistence mechanisms we support are slower compared to in-memory session replication.
    > Are there any plans to work with an Alteon or a Foundry to have their load balancers act as the front end to a WLS cluster?
    Currently none. We are taking steps to make the plug-ins and the cluster more robust; we currently don't have any plans to work with other 3rd-party vendors.
    > For us it would be ideal, because we wouldn't have to support another piece of software, we would just have to support the hardware-based Alteon, which can handle thousands of transactions per second.
    > I understand that the primary and secondary server information is available in the session ID, I'm just not quite sure how to extract it.
    This information is saved in the cookie. But I wouldn't count on that, as we have plans to change this. I cannot give you more details.
    > Is there a particular offset within the session ID where it can always be found?
    I don't quite get what you mean here.
    Hope this helps.
    - Prasad
    > thanks for the help,
    > cfraser
    > ----------
    > Christopher A. Fraser
    > Director, Technology
    > macroplay.com, Inc.
    > [email protected]
    >
    > Viresh Garg wrote:
    >
    > > You should be using
    > > -- NES + NSAPI plug-in
    > > -- IIS + ISAPI plug-in
    > > -- WebLogic Server acting as proxy
    > > -- Apache + Apache plug-in (only in Denali)
    > >
    > > front-ending your WebLogic cluster.
    > >
    > > These proxies/plug-ins are smart enough to do a lot of things like:
    > >
    > > -- Load balancing in the WebLogic cluster
    > > -- Adding/deleting servers dynamically in the cluster when the servers join/leave the WebLogic cluster
    > > -- Failover to the secondary when the primary dies.
    > >
    > > As far as the information about the primary and secondary is concerned, it is available in the session ID.
    > >
    > > -- Viresh Garg

  • App.server load balancing for SAP System with 1 PS

    Hi,
    In SAP CPS 7.0 (Build M26.12) I have a SAP system with Central Instance + 10 App.servers, but all instances are managed by 1 ProcessServer.
    After activating the "App.server load balancing" setting in SAP system definition the application servers are becoming visible in CPS with their load factors (number of BGD wp's on app.servers) and load numbers (number of active jobs on app.servers).
    This is so far fine, but the additional functionality is not working as I would expect, I have issues with 2 functionalities:
    1. Based on the documentation, after also activating the XAL connection CPS should submit the job on the app server with the best performance (based on XAL monitoring data), filling the TARGET_SERVER parameter.
    This functionality is not working for me at all.
    2. A useful piece of functionality after activating the "App.server load balancing" setting is that the ProcessServer goes into "Overloaded" status when all BGD wp's of the SAP system are occupied, thus preventing new jobs from being submitted during the overload situation. But I had an issue with this functionality as well: after the SAP system recovered from the overload situation, CPS still remained in Overloaded status (so no new jobs were submitted).
    As a workaround I increased the threshold values for the loads on all app servers for this SAP system, which was fine for several days, but after a while I believe this was the reason for unexpected performance issues in CPS, so I have deactivated the "App.server load balancing" setting altogether for this ProcessServer.
    I would appreciate your feedbacks with this functionality.
    Thanks and Regards,
    Ernest Liczki

    Hi Preetish,
    This connect string option is to load-balance RFC connections. These are balanced upon login; once you are connected to a particular application server (AS) you stay on that server until you reconnect.
    Since CPS uses multiple RFC connections, this will result in the connections being distributed over the available AS resources, which is fine as long as they are generally evenly loaded. If you have certain AS hosts that are continuously more loaded than the rest, then you probably don't want the CPS RFC connections to end up on those servers.
    The original question is about load balancing of batch jobs over the available AS resources, and this is done independently of the RFC connection load balancing. Even if all CPS RFC connections are pinned to the DB/CI host, you can still load-balance jobs over the available SAP AS hosts, either by using SAP's built-in balancing, or the CPS algorithm by activating the checkbox as indicated in the first entry in this thread.
    Finally, to reply to Ernest's question: I believe there are some fixes on the app load balancing in the latest release, M26.17 should be available on the SWDC now.
    Regards,
    Anton.
    Edited by: Anton Goselink on May 29, 2009 9:06 PM

  • Load balancing across multiple machines

    I am looking for assistance in configuring Tuxedo to perform load balancing across
    multiple machines. I have successfully performed load balancing for a service
    across different servers hosted on one machine but not to another server that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

    Hello, Christina.
    Load balancing with multiple machines is a little bit different than on the same machine. One of the important resources in this kind of application is network bandwidth, so Tuxedo tries to keep the traffic among the machines as low as possible. So it only balances the load (calls services on another machine) when all the servers are busy on the machine where the call is made.
    I mean, if you have workstation clients attached only to one machine, then Tuxedo will call services on this machine until all servers are busy.
    If you want load balancing, try to put one WSL on each machine, and the corresponding configuration in your WSC (with the | to make Tuxedo randomly choose one or the other), or spread your native clients among all the machines.
    And so, be careful with the routing!
    Ramón Gordillo
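    A toy illustration (in Python) of that last suggestion: several WSL listener addresses with the client picking one at random, which is what the "|" separator asks Tuxedo to do. The addresses here are made up:
    =====
    # Illustrative sketch only: random choice between WSL listener addresses,
    # mimicking a "|"-separated address list for workstation clients.
    import random

    LISTENERS = "//machine1:4000|//machine2:4000"

    def pick_listener(addr_string):
        """Pick one listener address at random from a '|'-separated list."""
        return random.choice(addr_string.split("|"))

    print(pick_listener(LISTENERS))
    =====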
    "Christina" <[email protected]> wrote:
    >
    I am looking for assistance in configuring Tuxedo to perform load balancing
    across
    multiple machines. I have successfully performed load balancing for a
    service
    across different servers hosted on one machine but not to another server
    that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

  • How do I load balance TFTP between two servers and a client on the same subnet?

    Hi,
    I have trawled through several documents and tried umpteen different configs, all to no avail. I have a PXE boot client trying to access a boot file via TFTP from a couple of TFTP servers on the same VLAN/subnet. For HA purposes I want to load balance the two TFTP servers.
    Config is currently;
    =====
    probe icmp ICMP_PROBE
      description icmp probe for default gateway tracking
      interval 5
      passdetect interval 15
    rserver host server1
      description Server1
      ip address 10.0.0.1
      inservice
    rserver host server2
      description Server 2
      ip address 10.0.0.2
      inservice
    serverfarm host serverfarm_01
      description servers used
      probe ICMP_PROBE
      rserver server1
        inservice
      rserver server2
        inservice
    class-map match-all L4_VIP_TFTP
      10 match virtual-address 10.0.0.10 udp eq 69
    policy-map type loadbalance first-match L7_TFTP
      class class-default
        serverfarm serverfarm_01
    policy-map multi-match L4_LB_VIP_POLICY
      class L4_VIP_TFTP
        loadbalance vip inservice
        loadbalance policy L7_TFTP
        loadbalance vip icmp-reply active
    nat dynamic 1 vlan 200
    interface vlan 200
      ip address 10.0.0.250 255.255.255.0
      nat-pool 1 10.0.0.241 10.0.0.243 netmask 255.255.255.255 pat
      service-policy input L4_LB_VIP_POLICY
      no shutdown
    ip route 0.0.0.0 0.0.0.0 10.0.0.254
    =====
    I have read the documentation by Ivan Kovacevic amongst many others, but as my clients and servers are on the same subnet, the config doesn't work.
    Can anybody point me in the right direction, please? The devices are ACE 4710 running A3(2.3).
    Thanks

    Try using the following configuration:
    Note: Please make sure to also configure a UDP probe for UDP port 69, in case the application is down.
    You need to configure a management policy on the interface when using a UDP probe.
    That is because, when port 69 on the server is unreachable, the server will send an ICMP unreachable.
    ACE will consider a UDP probe as "failed" only when it sees the ICMP unreachable.
    Without a management policy-map, the ICMP unreachable message will be dropped.
    Also, add an ICMP probe to the rserver, because a UDP probe will not be enough when the physical interface is down.
    That is because UDP is a connectionless protocol. To consider a UDP probe successful, ACE needs to see NO answer from the server in response to the probe.
    The ACE will not see any answer from the server when the interface is down and will thus consider the probe "successful".
    With an ICMP probe attached to the rserver, you also test the reachability of the server and not only the UDP port.
    Here is the configuration (of course, you can change the names of the objects to the names you are using if you want):
    access-list ALL line 10 extended permit ip any any
    probe udp TFTP
      port 69
      interval 5
      passdetect interval 15
    probe icmp ICMP_PROBE
      interval 5
      passdetect interval 15
    rserver host TFTP_1
      ip address 10.0.0.1
      probe TFTP
      probe ICMP_PROBE
      inservice
    rserver host TFTP_2
      ip address 10.0.0.2
      probe TFTP
      probe ICMP_PROBE
      inservice
    serverfarm host TFTP-SFARM
      rserver TFTP_1
        inservice
      rserver TFTP_2
        inservice
    sticky ip-netmask 255.255.255.255 address source TFTP-STICKY
      timeout 10
      replicate sticky
      serverfarm TFTP-SFARM
    class-map type management match-any MANAGE
      2 match protocol icmp any
    class-map match-all NAT
      2 match virtual-address 0.0.0.0 0.0.0.0 udp any
    class-map match-all TFTP
      2 match virtual-address 10.0.0.10 udp eq 69
    policy-map type management first-match MANAGE
      class MANAGE
        permit
    policy-map type loadbalance first-match ROUTE
      class class-default
        forward
    policy-map type loadbalance first-match TFTP-POL
      class class-default
        sticky-serverfarm TFTP-STICKY
    policy-map multi-match TFTP-MULTI
      class TFTP
        loadbalance vip inservice
        loadbalance policy TFTP-POL
        nat dynamic 1 vlan 212
      class NAT
        loadbalance vip inservice
        loadbalance policy ROUTE
        nat dynamic 2 vlan 212
    interface vlan 212
      ip address 10.0.0.250 255.255.255.0
      no normalization
      access-group input ALL
      nat-pool 1 10.0.0.241 10.0.0.243 netmask 255.255.255.0 pat
      nat-pool 2 10.0.0.10 10.0.0.10 netmask 255.255.255.0 pat
      service-policy input TFTP-MULTI
      service-policy input MANAGE
      no shutdown
    Let me know how it goes.
    Good luck!

  • VPN load balancing and ASA !!!

    Hi netpros,
    I have a couple of questions about this and hope you might be able to assist me.
    1.- Are VPN load balancing and failover (Active/Active) mutually exclusive? I mean they can't be used at the same time, correct?
    2.- How does the ASA handle the return traffic from the internal LAN towards the remote client? The cluster only requires ONE public virtual IP address, which works for incoming packets, but what about the return traffic, which only has knowledge of the DHCP scope's default gateway IP address? How does the returned packet get redirected from the default gateway IP address to the respective ASA internal IP address?
    3.- VPN load balancing only applies to remote clients using Easy VPN technology (Easy VPN software client, hardware client, PIX using Easy VPN client, etc.) and does not work with static LAN-to-LAN tunnels, correct?
    Your comments are much appreciated

    Hi Gilbert ..
    1.- Thanks I wanted to make sure.
    2.- I know that; my question is in regard to the return packets. For example, if I have the IP schema below:
    ASA1: Public 20.20.20.20
    Private 192.168.1.1
    ASA2: Public 20.20.20.21
    Private 192.168.1.2
    Cluster virtual IP: 20.20.20.10
    Default gateway for segment 192.168.1.0 is 192.168.1.1
    Let's say that a VPN client tries to connect and the cluster instructs the client to connect to ASA2 at 20.20.20.21. The packets reach the internal server at 192.168.1.100. The internal server then sends the return packets back to the client by forwarding them to its default gateway, which is 192.168.1.1 (ASA1). Here is my question: how does the cluster handle this, given that the return packets are supposed to be directed to ASA2 at 192.168.1.2?
    3.- Any idea about this one?
    Cheers,

  • Load balancing and RFC metadata repository in receiver RFC communication channel

    Hi,
    I want to know the purpose of load balancing and of the RFC metadata repository in the receiver RFC communication channel.
    Can you send me any examples of this load balancing?
    Waiting for your response.
    Regards,
    Seeta Ram

    Hi Seeta Ram,
    Load distribution is handled by the message server (there is one message server in an SAP system). When a user logs on, the message server assigns him or her to the application server that currently has the smallest load.
    So you can see that we use load balancing for better performance: the work is distributed across different processes to balance the workload in the SAP system.
    For more information refer to this link
    http://help.sap.com/saphelp_nw04/helpdata/en/28/75153a1a5b4c2de10000000a114084/content.htm
    Regards
    Sumit Bhutani
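    A toy sketch (in Python) of the "smallest load" idea described above; this only illustrates least-load selection, not the message server's actual algorithm, and the server names and load figures are made up:
    =====
    # Illustration only: assign a logon to the application server that
    # currently reports the smallest load.
    servers = {
        "appsrv01": 37,   # hypothetical load figures
        "appsrv02": 12,
        "appsrv03": 58,
    }

    def assign_logon(server_loads):
        """Return the name of the server with the lowest reported load."""
        return min(server_loads, key=server_loads.get)

    print(assign_logon(servers))   # -> appsrv02
    =====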

  • Advantages of using a web server between a load balancer and application servers

    I am building out a new weblogic domain.
    I am wondering which one of these configuration to go with:
    1. Load balancer > weblogic servers
    2. Load balancer > web server > weblogic servers
    Could someone tell me what the specific advantages are of having web servers between a load balancer and application servers (besides caching static content and acting as a proxy)?
    Thanks in advance
    Srini

    Other than hosting the static content, nothing much really.   We have our load balancer go straight to WL for applications without static content and route to web server if there is static content.   Easy enough to do it both ways, best of both worlds.

  • Web Proxy Server Load Balancing

    I deployed Sun Jave Web Proxy Server 4.0 as a Reverse Proxy. I would also like to use it as a load balancer. As per the instructions, I configured the obj.conf file as shown below
    Route fn="set-origin-server" server="https://xx.xx.xx.xx" server="https:yy.yy.yy.yy" sticky-cookie="JSESSIONID" sticky-param="jsessionid" route-hdr="Proxy-jroute" route-cookie="JROUTE" rewrite-host="true" rewrite-location="true" rewrite-content-location="true"
    But, it is not doing load balancing. It always sends to the first server (xx.xx.xx.xx). I guess that is because I used mapping as follows:
    NameTrans fn="reverse-map" from="https:xx.xx.xx.xx" to="https://server.net" rewrite-location="true" rewrite-content-location="true"
    NameTrans fn="redirect" from="http://server" url="https://xx.xx.xx.xx"
    NameTrans fn="map" from="https://server" to="https://xx.xx.xx.xx" rewrite-host="true" name="pa-server-farm1" NameTrans fn="map" from="/" to="https://xx.xx.xx.xx" rewrite-host="true" name="pa-server-farm1"PathCheck fn="url-check"ObjectType fn="block-ip"
    ObjectType fn="cache-enable" cache-auth="1" cache-https="1" query-maxlen="0" min-size="0" Service fn="proxy-retrieve"
    I don't understand how routing and mapping work togother. Any help in this regard is appreciated.

    Motor,
    the following is from the Web Proxy Server Administration Guide; please check the last paragraph for the explanation. Anyhow, the problem is simple: I am using the Proxy Server as a reverse proxy, and at the same time I would like to use two origin servers (for load balancing) instead of one. How do I make both the load balancing and reverse proxy functions work together?
    Thanks
    To Create Regular or Reverse Mapping
    Access the Server Manager, and click the URLs tab.
    Click the Create Mapping link.
    The Create Mapping page is displayed.
    In the page that appears, provide the source prefix and source destination for the regular mapping,
    for example,
    Source prefix: http://proxy.site.com
    Source destination: http://http.site.com/
    Click OK.
    Return to the page and create the reverse mapping, for example,
    Reverse mapping:
    Source prefix: http://http.site.com/
    Source destination: http://proxy.site.com/
    To make the change, click OK.
    Once you click the OK button, the proxy server adds one or more additional mappings. To see the mappings, click the View/Edit Mappings link. Additional mappings would be in the following format:
    from: /
    to: http://http.site.com/
    These additional automatic mappings are for users who connect to the reverse proxy as a normal server. The first mapping is to catch users connecting to the reverse proxy as a regular proxy. The "/" mapping is added only if the user doesn't change the contents of the Map Source Prefix text box provided automatically by the Administration GUI. Depending on the setup, usually the second mapping is the only one required, but the extra mapping does not cause problems in the proxy.

  • For a true load balancing and high-availability OHS, OPMN, and mod_oc4j

    I have read this link about enabling clustering on the OC4J 9.0.4 standalone app server:
    http://www.oracle.com/technology/docs/tech/java/oc4j/htdocs/getstart.htm#1015479
    To test the clustering, start up the load balancer by executing "java -jar loadbalancer.jar".
    C:\OC4J_EXTENDED\j2ee\home>java -jar loadbalancer.jar
    In a future release of Oracle Application Server, loadbalancer.jar will be
    desupported. Because of this, we strongly suggest that you discontinue your use
    of loadbalancer.jar in this release. Under high loads, loadbalancer.jar may not
    function properly. For a true load balancing and high-availability solution,
    please move to use OHS, OPMN, and mod_OC4J. For more information, please see
    http://otn.oracle.com/products/ias/ohs/content.html
    Balancer initialized...
    What load balancer should I use for web clustering?
    <frontend host="balancer-host" port="balancer-port" />
    balancer-host=localhost
    balancer-port=80
    For all nodes I specified the same host and port in http-web-site.xml. Is that correct?
    I completed all the steps and ran http://localhost:6666/session/SessionServlet; I hit it 3 times.
    Then in a different browser I opened http://localhost:7777/session/SessionServlet; instead of continuing at 4, the counter starts from 1 again.

    Can I use this loadbalancer.jar or not?
    How do I use mod_oc4j with a standalone app server?

  • Load balancing and High Availability topology

    Our Forms 6i client-server application currently runs on Citrix farm of 20 Windows 2000 boxes (IBM Blade Servers 2 CPU and 2 Gig Memory).
    Application supports 2000 users.
    We are moving to AS 10g R2 and Forms 10g, and the goal is to use the same hardware, 20 Windows boxes (or fewer), for intranet web deployment.
    What will be our best choices for application Load balancing and High Availability?
    Hardware load balancer, Web Cache, mod-oc4j? Combinations?
    Any suggestions, best practices, your experience?

    Gerd, I understand that you are running 10g web forms through the browser, but using Citrix for deployment. This means that in addition to the Application Server and Forms runtime sessions, there will be a separate browser session opened for each user. What is the advantage of this configuration?
    Michael, we are aware that Citrix is not supported by Oracle as a deployment platform. That only means that prior to contacting Oracle Support we have to reproduce the problem in a standard environment. It has never been a problem to reproduce the problem :) We were using Citrix as a deployment platform for Forms 6i client/server for 4 years, but now we are forced to upgrade to 10g.
    We are familiar with various Load balancing options available. The question is which option is the most "workable" in our case.

  • 2 ISP load balancing and redundancy

    Hello!!
    Our small company has about 40 branches spread across the city. The branches are connected by optical fibre supplied by our ISP, so at the ISP our branches are located in one VLAN. From every branch we created a VPN tunnel to our server room in the central office. The central office is like a center point: if the optical line to the central office fails, there would be no VPN tunnels and no network to any of the branches. Moreover, all the traffic goes through the central office.
    Now we have decided to lay one more optical line to our central office, which will increase bandwidth and redundancy.
    Private network topology: there are no default gateways and IP addresses. For example, at the first branch I can plug a computer directly into the media converter and at the second branch plug another computer into the media converter. After that these two computers are on one network, and I can assign any IP addresses to them.
    What I have: our firewall does enough work already and I don't want to overload it, but we have some free ports in our new Cisco 3750. The question is how to do load balancing and redundancy. Can it do load balancing according to traffic? And how do I load balance incoming traffic? For example, a connection is established from a branch's router; how will this router choose through which line to make the connection? By the way, at all branches we use noisy Cisco 3700 series routers.

    Sorry for bumping a 1-year-old thread.
    We talked to our network provider. They said: "These two cables come from two different places, so there is no way to use EtherChannel. You must use an active-standby solution."
    Relying on STP, we just put the two cables into the 3750 stack. But with default STP settings the connection was very unstable, with many packet losses and disconnections. So we found an easy solution with "flex links", making one interface a backup of the other. Only now have I recognized that this is not a failover solution: if the network beyond the media converter goes down, the link from the media converter to the switch will still be up.
    What could I do to make our L2 WAN redundant? Are there any additional STP settings?

  • Discussion on load-balance and load-sharing

    Hi, I found an article which discusses the difference between load balancing and load sharing. I think the explanation is pretty good; please see below. But I still have a question: how do we decide which of the two to use in a production environment? Thank you.
    "In short, load balancing tries to distribute traffic evenly over multiple paths, whereas load sharing intends to do it (for the lack of a better term) equally. True load balancing is difficult to achieve. For example, let's say there were two links (100 mbps and 300 mbps) and a router needed to send out 600 mbps of traffic. Load balancing would distribute the traffic evenly, sending 300 mbps on each link. On the contrary, load sharing would divide the traffic equally based on the available resources, sending 200 mbps on the slower link and 400 mbps on the faster one."

    That's not how Cisco uses the terms, and generically they are often used almost interchangeably.
    Cisco uses load balancing as the catch-all for how a single L3 device routes across multiple paths to the same destination. Equal metrics or equal actual load distribution are not required. Most often, load balancing is discussed with ECMP, but unequal-path load balancing also exists in Cisco's proprietary IGPs, such as EIGRP.
    Cisco uses load sharing when multiple paths are used but a single L3 device doesn't normally route across multiple paths, or when multiple L3 devices are involved. Cisco load-sharing discussions usually revolve around BGP.
    Generically, I would say load balancing has more of a dynamic aspect to it, i.e. something is actively trying to balance traffic across multiple paths, while load sharing might mean multiple paths are utilized but not actively or dynamically balanced.
    I'm not sure what your question about a production environment is asking.
