Load Balancing - SMLG Question

I have a question about the load distribution when using logon groups.
I have created a logon group in SMLG and it is working to distribute users to different servers.
It appears that it distributes the load based on a "quality" rating.
What factors make up the quality rating? Or, if it is not the quality rating, what is used to measure the load on the servers?

Hi Drew,
The quality field displays a numeric value derived from the results of an RFC query.
Read:
http://help.sap.com/saphelp_nw2004s/helpdata/en/28/1c623c44696069e10000000a11405a/content.htm
Regards
Juan
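As a rough illustration only (the exact factors and weighting behind the SMLG quality value are SAP-internal and are retrieved via that RFC query), a logon-group dispatcher could rank instances by a score derived from average dialog response time and the number of logged-on users, and send new logons to the best-scoring instance:
# Illustration only: a simplified "quality"-style ranking. It assumes the score
# is driven by average dialog response time and logged-on users; the real SMLG
# value is calculated by SAP and refreshed periodically via RFC.

def quality(avg_response_time_ms: float, logged_on_users: int,
            rt_weight: float = 1.0, user_weight: float = 5.0) -> float:
    """Lower load -> higher quality. The weights here are arbitrary."""
    load = rt_weight * avg_response_time_ms + user_weight * logged_on_users
    return 1000.0 / (1.0 + load)

instances = {
    "app1": quality(avg_response_time_ms=300, logged_on_users=40),
    "app2": quality(avg_response_time_ms=900, logged_on_users=120),
}
best = max(instances, key=instances.get)
print(best, instances)  # new logons would go to the instance with the best quality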

Similar Messages

  • Portal Landscape - With 2 CSM (load balance) related question

    Hi,
    We currently have a portal landscape (Dev, QA with 2 app servers, PRD with 4 app servers). Load balancing on the production portal is done by a CSM (load balancer), which also performs SSL offloading, and requests land on one of the application servers. When we log in to the portal, it authenticates against LDAP (OID). We also have some links that go to the backend R/3, BW, etc. (we use SAP load balancing via an SMLG logon group).
    Now, due to another special project, this is what we are planning:
    1. Adding a couple more application servers to the production portal, or setting up a separate second portal landscape
    2. Adding a couple more application servers to the R/3 production system (load balancing can be done with a dedicated logon group for them)
    Questions are:
    1. When we land on the current production portal page and click an iView link for the special project, it should go only to those dedicated portal app servers (we plan to do this through another CSM) and from there to the backend R/3. In this scenario, how does authentication (or the SSO ticket) work when the request goes from one CSM to another? Will it ask for login again, or will there be any issue with the SSO ticket?
    2. If we decide to go for a second portal landscape, and in the same scenario we log in to the current production portal page and click an iView link for the special project that goes to the other production portal, what happens to the authentication performed on the first portal and its SSO ticket?
    3. Suppose we access the second production portal directly through its website, and a user then tries to log in to the first portal with the same ID. How will the portal handle this in terms of security (SSO), and how will the backend R/3 behave when the same ID arrives via SSO?
    If anyone can think of any other issues apart from SSO or encryption that I need to be aware of, kindly let me know.
    Thanks,
    Murali.

    I am not sure what CSM is, but I would expect it only does SSL offloading and acts as a sort of "reverse proxy" in front of the cluster.
    >1. When we land on the current production portal page and click an iView link for the special project, it should go only to those dedicated portal app servers (we plan to do this through another CSM) and from there to the backend R/3. In this scenario, how does authentication (or the SSO ticket) work when the request goes from one CSM to another? Will it ask for login again, or will there be any issue with the SSO ticket?
    This depends on the host names you use for the two CSM clusters. If they share the same subdomain, there should be no problem, as the SAP Logon Ticket (MYSAPSSO2) cookie is issued for the subdomain of the portal.
    If they do not share the same subdomain, the second CSM cluster will receive the request without the MYSAPSSO2 cookie and will therefore trigger reauthentication.
    >2. If we decide to go for a second portal landscape, and in the same scenario we log in to the current production portal page and click an iView link for the special project that goes to the other production portal, what happens to the authentication performed on the first portal and its SSO ticket?
    It will fail, as the MYSAPSSO2 cookie from the first portal is not recognized by the second. However, you can easily set up the second portal to trust the first and log the user on based on its credentials.
    >3. Suppose we access the second production portal directly through its website, and a user then tries to log in to the first portal with the same ID. How will the portal handle this in terms of security (SSO), and how will the backend R/3 behave when the same ID arrives via SSO?
    I assume both portals will be set up against the same LDAP/UME source, so the logon will be allowed. The backend systems should trust both the first and the second portal (transaction STRUSTSSO2).
    I think your architecture choice comes down to whether the new project has special requirements regarding the portal version. If it does, it would be sensible to separate it into its own portal (and you can always integrate the two through portal federation if you are on a relatively recent version).
    Regards
    Dagfinn
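    A minimal sketch of the cookie-scoping point above, assuming standard browser domain-matching rules (the domain names are made up for the example): a MYSAPSSO2 ticket issued for one subdomain is sent along to other hosts under that subdomain, but not to hosts under a different one, which is what triggers reauthentication.
    # Illustration only: simplified cookie domain matching.
    def cookie_sent_to(cookie_domain: str, request_host: str) -> bool:
        """A cookie set for '.corp.example.com' is sent to any host under that suffix."""
        cookie_domain = cookie_domain.lstrip(".").lower()
        request_host = request_host.lower()
        return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

    MYSAPSSO2_DOMAIN = ".corp.example.com"  # domain the first portal issued the ticket for
    print(cookie_sent_to(MYSAPSSO2_DOMAIN, "portal2.corp.example.com"))   # True  -> SSO works
    print(cookie_sent_to(MYSAPSSO2_DOMAIN, "portal2.other.example.net"))  # False -> reauthentication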

  • Load Balancing simple question

    Hi,
    I'm using a CSS 11501 to load balance some web servers using source-IP stickiness.
    If one source IP is directed to a certain web server,
    how much time has to pass before that same source IP can be directed to another web server?
    Thank you in advance!

    By default, entries in the sticky table do not time out. The table works on a first-in, first-out basis. The size of the table depends on the amount of memory in the CSS (SCM 144 MB --> 32k, SCM 288 MB --> 128k).
    You can change the default timeout behaviour using the 'sticky-inact-timeout' command.
    ~Zach
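    For illustration, a rough sketch of how a source-IP sticky table with a fixed size (FIFO eviction) and an optional inactivity timeout behaves; this is a simplified model, not the CSS implementation, and the sizes and timeouts are made up:
    import time
    from collections import OrderedDict

    class StickyTable:
        """Source-IP stickiness: fixed-size FIFO table, optional inactivity
        timeout (0 = entries never time out, the default behaviour)."""

        def __init__(self, servers, max_entries=4, inact_timeout=0):
            self.servers = servers
            self.max_entries = max_entries
            self.inact_timeout = inact_timeout
            self.table = OrderedDict()   # src_ip -> (server, last_seen)
            self.rr = 0

        def pick(self, src_ip, now=None):
            now = time.time() if now is None else now
            entry = self.table.get(src_ip)
            if entry:
                server, last_seen = entry
                if self.inact_timeout == 0 or now - last_seen < self.inact_timeout:
                    self.table[src_ip] = (server, now)   # refresh and stay sticky
                    return server
                del self.table[src_ip]                   # timed out, re-balance
            server = self.servers[self.rr % len(self.servers)]
            self.rr += 1
            if len(self.table) >= self.max_entries:
                self.table.popitem(last=False)           # FIFO eviction when full
            self.table[src_ip] = (server, now)
            return server

    lb = StickyTable(["web1", "web2"], inact_timeout=300)
    print(lb.pick("10.0.0.5"))   # first hit: balanced onto a server
    print(lb.pick("10.0.0.5"))   # same source IP sticks to the same server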

  • ASA Load-Balancing intriguing question

    I have a setup where the inside interfaces may be in the same private subnet, but the outside interfaces are most likely in different public subnets.
    For example: inside on both ASAs, 192.168.1.1 and 192.168.1.2 /24, with the public sides connected to two different ISPs.
    My guess is that I would probably lose master failover for load balancing if that ASA goes down, but nevertheless I would still like users to connect to the same public IP, have the master hand out the FQDN of the other ASA, and balance their AnyConnect entry into the network across both ASAs. Does it work this way?
    I mean, does this VPN load-balancing feature talk only across the inside network, or does it need the outside interfaces to be in the same subnet? Is it a trick of the mask on the interface?
    If not, is there a way around that? For example, if I use a bogus outside interface and tunnel it somehow to the outside interface of the other ASA, will the FQDN redirection still work, so that the client connects to the other "real" public IP?

    You can't route based on source IP with a firewall; only a router can do that, using PBR.
    You could create two static routes, each pointing to a different router with a different metric,
    but in that case the topology becomes active/standby, which is not good in your case.
    However, you can use subinterfaces on your ASA: put each subinterface in a different subnet with a different security level,
    and let each subinterface use a different HSRP instance.
    There is another way as well:
    if you don't use VPN on your ASA, you can achieve this with multiple contexts.
    With multiple contexts you separate your firewall virtually,
    so if you have two VLANs in your inside network (two different subnets),
    each subnet will use a different virtual firewall.
    You divide the internal interface into two subinterfaces,
    and you can either use one outside interface shared between the contexts or also split it into two subinterfaces,
    and allocate those interfaces to each context.
    You then deal with each context as a separate firewall,
    and you can use a different HSRP instance in each context.
    But with multiple contexts you cannot use VPN on the firewall.
    ***** Use the following method *****
    The other approach, which I also suggest you try, is the transparent firewall.
    In this case your firewall operates in L2 mode,
    so you can use the routers' HSRP IPs as if there were no firewall in the path,
    which I think is also helpful in your case.
    In transparent mode the default gateway for your clients will be the HSRP VIP, because the firewall will not have any IPs except for management,
    and the users will be in the same IP subnet as the gateway, in your case the HSRP VIP.
    You can still control network security through the firewall normally.
    Try this approach and let me know.
    See the following link for the configuration:
    http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_configuration_example09186a008089f467.shtml

  • HTTPS with load balancing

    Hi guys,
    We have a portal system with instance 08, so we typically connect to the portal using port 50800 for HTTP, and 50801 for HTTPS.
    We have just created a second server node for this portal (in the config tool).
    When we connect to 50800, does this automatically load balance the user to the better server? From some reading on these forums, it seems that load balancing will only occur if we connect using port 8109 (where 09 is the instance number of our portal's SCS).
    When connecting to port 8109, we are redirected to port 50800, as I'd expect.
    Question 1 - do we need to use 8109 for load balancing, or can we still use 50800?
    Question 2 - If we need to use 8109, which is an HTTP port, how can we achieve load balancing with HTTPS? Is there a different port we need to use to get HTTPS with load balancing?
    Question 3 - Is the creation of a second server node the best way to accommodate additional users and load on the portal system, or is there a better way to do things?
    Thanks,
    Michael.

    Better late than never.
    The load balancing you describe through the message server has its limitations. It redirects you to one of the dialog instance hosts, which means that any bookmarks will always point directly to a dialog instance, and that instance may be down at that moment.
    Accessing a dialog instance directly on port 50800 will, to some extent, load balance across the Java server nodes on that host, but not across the other hosts.
    The general recommendation is to set up an external load balancer; SAP Web Dispatcher is a good match if the load is not very high. The Web Dispatcher then binds the cluster address and acts as a proxy towards the portal's dialog instances, so the user only ever sees one address. This also works for HTTPS.
    Regards
    Dagfinn
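    To illustrate the single-entry-point pattern (this is only a toy sketch of the idea, not SAP Web Dispatcher; the host names and ports are placeholders, and TLS termination is omitted), clients always use one address and the proxy forwards each request round-robin to one of the dialog instances:
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Backend addresses are examples only.
    BACKENDS = itertools.cycle(["http://portalhost1:50800", "http://portalhost2:50800"])

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)   # pick the next dialog instance in rotation
            try:
                with urllib.request.urlopen(backend + self.path, timeout=5) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except OSError:
                self.send_error(502, "backend unavailable")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()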

  • Service Request - Load balancing weight & TM labour schedule

    What is the load balancing weight? How does it work? (I think it must relate to selecting resources when assigning the task, but I don't understand how it works, and I can't find it in the help menu.)
    How is the T&M labour schedule managed?
    Regards

    Hi,
    1. I had some time and set up your config. Load balancing seems to work fine for me. I tried it with 4 different clients.
    What I did to set it up:
    # Failover RG for shared address
    scrgadm -a -g proftp-srg -h mars,saturn
    scrgadm -a -S -j hafast_r -g proftp-srg -l hafast
    # Scalable RG for proftpd
    scrgadm -a -g proftp-rg -y Maximum_primaries=2 -y Desired_primaries=2 -h mars,saturn -y RG_dependencies=proftp-srg
    scrgadm -a -j proftp_r -g proftp-rg -t SUNW.gds -y Network_resources_used=hafast_r -y Port_list=4443/tcp -x Start_command=/usr/local/sbin/proftpd -x Probe_command=/bin/true -y Scalable=true -y Load_balancing_policy=Lb_sticky_wild
    I assume that the key is the Lb_sticky_wild setting. As proftpd forks new processes for every connection, and even does reconnects on new ports during operation, this seems to be essential. Please try it.
    2. Configuring FTP as a scalable service was not considered an excellent idea. The reason is that you now have the network traffic load balanced, so the question is: how do you configure shared storage? Do you have one (1) underlying global filesystem? That could be a bottleneck. Check it out.
    Regards
    Hartmut
    PS: BTW I used proftp 1.3.0

  • Load Balancing question

    My company is in the process of building a small-scale network architecture strictly for testing purposes. We have a DMZ area that contains 2 load balancers and 1 web server. The web server is a SunFire 280 and has two gig-E NICs. They want to cable one NIC to one load balancer and one NIC to the other. Since this is only one box, we have to put the NICs on separate subnets. The question is: can I configure the load balancers in a failover or active/active arrangement with one load balancer on one VLAN and the other load balancer on a separate VLAN?

    I was not able to understand why you want to give the two NICs IPs from different subnets.
    There is no requirement like that. If you have your own requirement, can you explain it to me?
    Ashman

  • Load Balancing Directory Servers with Access Manager - Simple questions

    Hi.
    We are in the process of configuring 2 Access Manager instances (servers) accessing the same logical LDAP repository (physically comprising two Directory Servers working together, with Multi-Master Replication configured and tested). For doing this, we are following guide number 819-6258.
    The guide uses BigIP load balancer for load balancing the directory servers. However, we intend to use Directory Proxy Server. Since we faced some (unresolved) issues last time that we used DPS, there are some simple questions that I would be very grateful to have answers to:
    1. The guide, in section 3.2.10 (To configure Access Manager 1 with the Directory Server load balancer), talks about making changes at 4 places, and replacing the existing entry (hostname and port) with the load balancer's hostname and port (assuming that the load balancer has already been configured). It says that changes need not be made on Access Manager 2 since the LDAPs are in replication, and hence changes will be replicated at all places. However, the guide also states that changes have to be made in two files, namely AMConfig.properties, and the serverconfig.xml file. But these changes will not be reflected on Access Manager 2, since these files are local on each machine.
    Question 1. Do changes have to be made in AMConfig.properties and serverconfig.xml files on the other machine hosting Access Manager 2?
    Question 2: What is the purpose of putting these values here? Specifically, what is achieved by specifying the Directory server host and port in AMConfig.properties, as well as in serverconfig.xml?
    Question 3. In the HTTP console, there is the option of specifying multiple primary LDAP servers, as well as multiple secondary LDAP servers. What is the purpose of these? Are secondary servers attempted when none of the list in the primary list are accessible? Also, if there are multiple entries in the primary server list, are they accessed in a round robin fashion (hereby providing rudimentary load balancing), or are other servers accessed only when the one mentioned first is not reachable etc.?
    2. Since I do not have a load balancer set up yet, I tried the following deviation from the above, which, as far as I can tell, should have worked. Viewed in the HTTP console, LDAP / Membership / MSISDN and Policy configuration all pointed to the DS on host 1. When I changed all these to point to the directory server on host 2 (and made AMConfig.properties and serverconfig.xml on host 1 point to the DS of host 2 as well), things should have worked fine, but apparently Access Manager 1 could not be started. Error from the web server:
    [14/Aug/2006:04:30:36] info (13937): WEB0100: Loading web module in virtual server [https-machine_1_FQDN] at [search]
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: Exception in thread "EventService" java.lang.ExceptionInInitializerError
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at com.iplanet.services.ldap.event.EventServicePolling.run(EventServicePolling.java:132)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at java.lang.Thread.run(Thread.java:595)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: Caused by: java.lang.InterruptedException
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at com.sun.identity.sm.ServiceManager.<clinit>(ServiceManager.java:74)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: ... 2 more
    In effect, AM on host 1 did not start. On rolling back the changes, things worked as before.
    Will be really grateful for any help / insight / experience on dealing with the above.
    Thanks!

    Update to the above, in case anyone is reading:
    We built a similar setup on Windows, and it worked. Here is a detailed account of what was done:
    1. Host 1: Start installer, install automatically, chose Directory server, Directory Administration server, Directory Proxy server, Web server, Access Manager.
    All installed, and worked fine. (AMConfig.properties, serverconfig.xml, and the info in LDAP service, all pointed to HOST1:389)
    2. Host 2: Start installer, install automatically, chose Directory server, Directory Administration server, Directory Proxy server, Web server, Access Manager.
    All installed, and worked fine. (AMConfig.properties, serverconfig.xml, and the info in LDAP service, all pointed to HOST2:389)
    3. Host 1: Started replication. Set to Master
    4. Host 2: Started replication. Set to Master
    5. Host 1: Setup replication agreement to Host 2
    6. Host 2: Setup replication agreement to Host 1
    7. Initiated the remote replica from Host 1 ----> Host 2
    Note that since the default installation uses abc.....xyz as the encryption key, setting this to the same value was not an issue.
    9. Started webserver for Host 1 and logged into AM as amadmin.
    10. Added Host 2 FQDN in DNS Aliases / Realms
    11. Added http://HOST2_FQDN:80 in the Platform server (instance) list.
    12. Started Host 2 webserver. Logged in AM on Host 2, things worked fine.
    At this stage, note the following:
    a) Host 1:
    AMConfig.properties file has
    com.iplanet.am.directory.host=host1_FQDN
    and
    com.iplanet.am.directory.port=389
    serverconfig.xml has:
    <Server name="Server1" host="host1_FQDN" port="389" type="SIMPLE" />
    b) Host 2:
    AMConfig.properties file has
    com.iplanet.am.directory.host=host2_FQDN
    and
    com.iplanet.am.directory.port=389
    serverconfig.xml has:
    <Server name="Server1" host="host2_FQDN" port="389" type="SIMPLE" />
    c) If one logs into AM, and checks LDAP servers for LDAP / Policy Configuration / Membership etc services, they all contain Host2_FQDN:389 (which makes sense, since replica 2 was initialized from 1)
    Returning to the configuration:
    13. On Host 1, log in to the Admin server console of the Directory server. Navigate to the DPS, and configure the following:
    a) Network Group
    b) LDAP servers
    c) Load Balancing
    d) Change Group
    e) Action on-bind
    f) Allow all actions (permit modification / deletion etc.).
    g) any other configuration required - I am willing to give detailed steps if someone needs them to help me / themselves! :)
    So now, we have DPS configured and running on Host1:489, and distributing load to DS1 and DS2 on a 50:50 basis.
    14. Now, log into AM on Host 1, and instead of Host1_fqdn:389 (for the DS), specify Host1_fqdn:489 (for the DPS) in the following places:
    LDAP Authentication
    MSISDN server
    Membership Service
    Policy configuration
    Verified that this propagated to the Policy Configuration service and the LDAP authentication service that are already registered with the default organization.
    15. Log out of AM. Following the documentation, modify directory.host and directory.port in AMConfig.properties to point to Host 1_FQDN and 489 respectively. Make this change in AMConfig.properties of both Host 1 as well as 2.
    16. Edit serverconfig.xml on both hosts and, instead of each pointing to its local directory server, point both to host1_FQDN:489.
    17. When you start the web server, it will refuse to start and will spew errors such as:
    [https-host1_FQDN]: Sun ONE Web Server 6.1SP5 B06/23/2005 17:36
    [https-host1_FQDN]: info: CORE3016: daemon is running as super-user
    [https-host1_FQDN]: info: CORE5076: Using [Java HotSpot(TM) Server VM, Version 1.5.0_04] from [Sun Microsystems Inc.]
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amserver]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [ampassword]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amcommon]
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amconsole]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [search]
    [https-host1_FQDN]: warning: CORE3283: stderr: netscape.ldap.LDAPException: error result (32); matchedDN = dc=sun,dc=com; No such object (DN changed)
    [https-host1_FQDN]: warning: CORE3283: stderr: Got LDAPServiceException code=-1
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getConnection(DSConfigMgr.java:357)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewFailoverConnection(DSConfigMgr.java:314)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewConnection(DSConfigMgr.java:253)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewProxyConnection(DSConfigMgr.java:184)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewProxyConnection(DSConfigMgr.java:194)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.initLdapPool(DataLayer.java:1248)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.<init>(DataLayer.java:190)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.getInstance(DataLayer.java:215)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.getInstance(DataLayer.java:246)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ldap.SMSLdapObject.initialize(SMSLdapObject.java:156)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ldap.SMSLdapObject.<init>(SMSLdapObject.java:124)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.Class.newInstance0(Class.java:350)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.Class.newInstance(Class.java:303)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.SMSEntry.<init>(SMSEntry.java:216)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ServiceSchemaManager.<init>(ServiceSchemaManager.java:67)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.am.util.AMClientDetector.getServiceSchemaManager(AMClientDetector.java:219)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.am.util.AMClientDetector.<init>(AMClientDetector.java:94)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.mobile.filter.AMLController.init(AMLController.java:85)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:322)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:120)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3271)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3747)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]: failure: WebModule[amserver]: WEB2783: Servlet /amserver threw load() exception
    [https-host1_FQDN]: javax.servlet.ServletException: WEB2778: Servlet.init() for servlet LoginLogoutMapping threw exception
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:949)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:813)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3478)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3760)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]: ----- Root Cause -----
    [https-host1_FQDN]: java.lang.NullPointerException
    [https-host1_FQDN]: at com.sun.identity.authentication.UI.LoginLogoutMapping.init(LoginLogoutMapping.java:71)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:921)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:813)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3478)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3760)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]:
    [https-host1_FQDN]: info: HTTP3072: [LS ls1] http://host1_FQDN:58080 [i]ready to accept requests
    [https-host1_FQDN]: startup: server started successfully
    Success!
    The server https-host1_FQDN has started up.
    The server, in fact, didn't start up (nothing was even listening on 58080).
    However, if AMConfig.properties is left as it originally was, and only the serverconfig.xml files are changed as mentioned above, the web servers start fine and things work all okay. (Alright, except for some glitches when viewed in /amconsole; if /amserver/console is accessed, all is good. Can this mean that all is still not well? I am not sure.)
    So far so good. Now comes the sad part. When the same is done on Solaris 9, things don't work. You continue to get the above error, or the following error, and the web server will refuse to start:
    Differences in Solaris and Windows are as follows:
    1. Windows hosts have 1 IP and hostname. Solaris hosts have 3 IPs and hostnames (for DS, DPS, and webserver).
    No other difference from an architectural perspective.
    Any help / insight on why the above is not working (and on why the documentation seems so sketchy / inconsistent / incorrect) would be appreciated.
    Thanks a bunch!

  • Question about Load Balance SFTP service by using CSS1150X

    Has anyone come across load balancing an SFTP service using the CSS 1150X? Typically, to configure the CSS 1150X to load balance an FTP service, the configuration is as follows:
    content ftp_rule
    vip address 192.168.3.6
    protocol tcp
    port 21
    application ftp-control
    add service serv1
    add service serv2
    add service serv3
    active
    group ftp_group
    vip address 192.168.3.6
    add service serv1
    add service serv2
    add service serv3
    active
    However, based on my own understanding, I would configure my CSS 1150X as follows to load balance an SFTP service:
    content sftp_rule
    vip address 192.168.3.6
    protocol tcp
    port 22 //Change 21 to 22
    application ftp-control
    add service serv1
    add service serv2
    add service serv3
    active
    group sftp_group
    vip address 192.168.3.6
    add service serv1
    add service serv2
    add service serv3
    active
    My question is: is "application ftp-control", as used in the "ftp_rule" content rule, still applicable to SFTP or not?

    I believe "application ftp-control" should not be used for SFTP; SFTP runs over a single SSH connection and has no separate FTP data channel.
    Keeping the ftp-control setting might cause the session to get dropped when no data channel is created, and cause issues with long-lived connections.
    Hope it helps!

  • CSS load balancing questions

    I hope that someone can help with 2 simple (I think) CSS questions.
    1. When configured properly for load balancing, should the CSS round-robin between servers, or will it continue to use only one server until triggered by some event or parameter?
    2. If 1 of 2 load-balanced servers fails, how does load balancing proceed? Will it continue to try to load balance between the servers, or will it give up on the failed server until some event or timeout occurs?
    Thanks in advance,
    Eliot

    Hi Eliot,
    The CSS can be configured to perform load balancing in a variety of different ways: least connections, round robin, ACA, etc. Each new connection through the CSS will be round-robined over the servers in your server group.
    If a server fails, the CSS will know it has failed through the use of keepalives (based on TCP connection, ICMP, etc.) and will no longer send requests to that server. Traffic associated with a previous connection to the failed server will be sent to one of the surviving servers. It is then up to the behavior of the application whether the user experiences any disruption.
    Hope this helps
    Brett
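    As a rough sketch of that behaviour (a simplified model, not the CSS itself; the keepalive here is just a TCP connect and the host names are placeholders), a round-robin pool can skip servers that currently fail their keepalive check:
    import socket

    class RoundRobinPool:
        """Round-robin over servers that pass a keepalive check (TCP connect)."""

        def __init__(self, servers):
            self.servers = list(servers)     # [(host, port), ...]
            self.index = 0

        def is_alive(self, host, port, timeout=1.0):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def next_server(self):
            # Try each server once, starting from the current rotation position;
            # a dead server is skipped until it passes the keepalive again.
            for _ in range(len(self.servers)):
                server = self.servers[self.index]
                self.index = (self.index + 1) % len(self.servers)
                if self.is_alive(*server):
                    return server
            raise RuntimeError("no servers available")

    pool = RoundRobinPool([("server-a.example", 80), ("server-b.example", 80)])
    # print(pool.next_server())   # returns the next live server, skipping failed ones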

  • Question about Load balancing with IISPROXY

              Hi,
              We are running WLS 5.1.0 SP5 on NT 4.0 SP6. We are not using clustering.
              We are able to round robin between multiple instances of the WLS successfully.
              Question: If one of the instances of WLS goes down, is there any way to configure
              the plugin to take it out of the loop automatically (without using clustering)?
              Thanks,
              Anil.
              

    That is not the syntax. The syntax is just this:
              MaxSkips=<value>
              e.g. MaxSkips=25
              The doc says:
              5:10:1000 for min:default:max
              By which we mean that the default value is 10, the maximum is 1000 and the minimum is 5. I guess the
              docs are confusing about the syntax here. We will correct them.
              --Vinod.
              Anil Kommareddi wrote:
              > Vinod,
              > I could not find any documentation on the MaxSkips parameter except in the Service Pack
              > docs. The syntax is MaxSkips=min:default:max.
              >
              > how do the min and max parameters work?
              >
              > Vinod Mehra wrote:
              >
              > > Even if the servers in the WebLogicCluster list are non-clustered, you WILL be
              > > able to do load balancing. But the problem is that if the servers go down, the plugin
              > > will not remove them. But it is not that bad. If a connection attempt fails, the
              > > server is marked as bad and will be skipped for the next MaxSkips (default=10)
              > > cycles of load balancing. The MaxSkips parameter is configurable for IISProxy
              > > (SP4 onwards, I think).
              > >
              > > -Vinod.
              > >
              > > Prasad Peddada wrote:
              > >
              > > > I believe there won't be any load balancing unless you use servers in a cluster. As
              > > > an alternative you can use hardware load balancers directly in a situation like this.
              > > >
              > > > Anil Kommareddi wrote:
              > > >
              > > > > Hi,
              > > > >
              > > > > We are running WLS 5.1.0 SP5 on NT 4.0 SP6. We are not using clustering.
              > > > > We are able to round robin between multiple instances of the WLS successfully.
              > > > >
              > > > > Question: If one of the instances of WLS goes down, is there any way to configure
              > > > > the plugin to take it out of the loop automatically (without using clustering)?
              > > > >
              > > > > Thanks,
              > > > > Anil.
              > > >
              > > > --
              > > > Cheers
              > > >
              > > > - Prasad
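    A small sketch of the MaxSkips behaviour described above, assuming a failed connection attempt marks the server bad so it is skipped for the next N balancing cycles (this illustrates the described plugin behaviour, not the real IISProxy code; server names are placeholders):
    class MaxSkipsBalancer:
        """Round-robin where a failed server is skipped for `max_skips` cycles."""

        def __init__(self, servers, max_skips=10):
            self.servers = list(servers)
            self.max_skips = max_skips
            self.skips_left = {s: 0 for s in servers}   # remaining skip cycles per server
            self.index = 0

        def mark_failed(self, server):
            self.skips_left[server] = self.max_skips

        def next_server(self):
            for _ in range(len(self.servers)):
                server = self.servers[self.index]
                self.index = (self.index + 1) % len(self.servers)
                if self.skips_left[server] > 0:
                    self.skips_left[server] -= 1        # burn one skip cycle and move on
                    continue
                return server
            raise RuntimeError("all servers are currently skipped")

    lb = MaxSkipsBalancer(["wls1:7001", "wls2:7001"], max_skips=3)
    lb.mark_failed("wls2:7001")
    print([lb.next_server() for _ in range(5)])   # wls2 comes back once its skip cycles are used up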
              

  • Load Balancing / CF Edition Question

    Hi,
    I know very little about load balancing, so please forgive the beginner question.
    If I don't plan on using ColdFusion's ClusterCATS load balancing software solution for multiple web servers, but instead am going to use a hardware load balancing solution, can I get away with purchasing the ColdFusion Standard Edition for each server, or would I have to purchase the ColdFusion Enterprise Edition for each server to make things work?
    Multiple purchases of Enterprise cost so much, I just want to save money if possible.
    Thanks in advance,
    Joe

    Joe_Krako wrote:
    > I know very little about load balancing, so please forgive the beginner question.
    > If I don't plan on using ColdFusion's ClusterCATS load balancing software solution for multiple web servers, but instead am going to use a hardware load balancing solution, can I get away with purchasing the ColdFusion Standard Edition for each server?
    Yes. You will have to configure your load balancer to use sticky sessions if you want to use session variables, and you will not have some of the scalability features of CF Enterprise, but it will work.
    I don't see any requirement to buy Enterprise edition licenses in the EULA, but check that for yourself:
    http://www.adobe.com/products/eula/server/
    Jochem
    Jochem van Dieten
    Adobe Community Expert for ColdFusion

  • Load Balancing with BigIP / SSL question

    I have an oddball question. We're load balancing ColdFusion MX7 across 3 servers using a BigIP load balancing server. We decided to go the hardware approach and it has been great, except for one small configuration issue.
    We use a mix of SSL and non-SSL pages. Prior to the switch from a single server to a load-balanced setup, I used a script that would determine whether a page that was supposed to be SSL had the variable CGI.HTTPS turned on or off. If it was off, the page would redirect back to itself with SSL turned on.
    The problem is that we followed BigIP's instructions to secure the load balancing hardware instead of the three servers running behind it. So the traffic goes to the load balancer on port 441, but the calls from the load balancer to the individual servers use port 80. So even if a page is called as HTTPS://..., the ColdFusion server says that CGI.HTTPS is "off", since the traffic arrives on port 80.
    This isn't much of a problem; our SSL pages are linked as HTTPS://, and the only problem would arise if someone were to type in the URL and call it as HTTP rather than HTTPS.
    My question is this: does anyone know of a way that I can detect that a page should be HTTPS and is not, without changing our configuration and putting SSL certificates on each individual server?

    Hey,
    Well, the load balancing with the BigIP device is really very amazing. I think what I liked most was swapping out servers when their lease was up: through the BigIP manager I just stopped all traffic to a server, shut it down, plugged in the new one and turned traffic back on. It was really very easy.
    The SSL stuff still gives me a headache to think about, but I should mention I no longer work where I was, plus now I'm all .NET C#, but that's a different story.
    I think if I were going to do this all again I would not have secured the BigIP unit. It was nice to buy one SSL cert for all the servers I attached rather than one per server, but getting the SSL sites to work properly was a headache.
    We also used Windows file replication, where now I would go with something like a pair of Dell MD1000s mirrored for storage and just have tons of RAM and CPU on the front-end units. Depends what you want to spend, I guess. I think the BigIP unit we bought was around 20 grand; I think they are cheaper now, though.
    Hope I helped.

  • Question on how does load balancing work on Firewall Services Module (FWSM)

    Hi everyone,
    I have a question about the algorithm of load balancing on Firewall Services Module (FWSM).
    I understand that the FWSM supports up to three equal cost routes on the same interface for load balancing.
    Please see the simple topology below: two L3 switches running MHSRP on the outside connect to the FWSM, which faces the inside network.
    I am going to configure the following default routes on the FWSM, pointing to each MHSRP VIP (192.168.13.29 and 192.168.13.30), for load balancing.
    route outside_1 0.0.0.0 0.0.0.0 192.168.13.29 1
    route outside_1 0.0.0.0 0.0.0.0 192.168.13.30 1      
    However, I don't know how load balancing works on the FWSM.
    On the FWSM, does load balancing work
    per destination?
    per source?
    per packet?
    or
    by some other criteria?
    Your information would be greatly appreciated.
    Best Regards,

    Configuring "tunnel default gateway' on the concentrator allowed traffic to flow as desired through the FWSM.
    FWSM is not capable of performing policy based routing, the additional static routes for the VPN load balancing caused half of the packets to be lost. As a result, it appears that the VPN concentrators will not be able to load balance.

  • Question Cluster/Load balancing

    Question about iplanet load balancing/Cluster:
    The following discussion is based on the iAS C++ engine (kcs).
    We have four web servers and two iAS servers:
    Web1, Web2, Web3, Web4
    iAS1, iAS2
    All machines run Solaris 8, web server is iWS4.1 SP6,
    Application server is iAS6.0 SP2, and both iAS boxes have
    same hardware configuration.
    1. What's the best load balancing method for this structure?
    Per Server Response Time(Web Connector Driven)
    Per Component Response Time(Web Connector Driven)
    Round Robin(Web Connector Driven)
    User Defined Criteria(iAS Driven)
    2. What criteria does the kxs engine use to choose the kcs
    engine to send requests to if we set Web Connector Driven
    load balancing?
    3. If we set iAS driven load balancing, what criteria does
    the web connector use to choose the kxs?
    4. We hit a problem when running a load test for an AppLogic
    in this cluster: one iAS's average CPU usage reached almost
    100%, but the other one's was just 70%.
    Thanks.
    Heng

    see answers inline
    hcao wrote:
    Question about iplanet load balancing/Cluster:
    Following discussion are based on iAS C++ engine(kcs).
    We have four web servers and two iAS servers:
    Web1, Web2, Web3, Web4
    iAS1, iAS2
    All machines run Solaris 8, web server is iWS4.1 SP6,
    Application server is iAS6.0 SP2, and both iAS boxes have
    same hardware configuration.
    1. What's the best load balancing method for this structure?
    Per Server Response Time(Web Connector Driven)
    Per Component Response Time(Web Connector Driven)
    Round Robin(Web Connector Driven)
    User Defined Criteria(iAS Driven)
    it depends on the characteristics and behaviour of your application
    >
    2. What's the criteria for the kxs engine to choose the kcs
    engine to sent request if we set Web Connector Driven
    load balancing?
    kxs always does round robin to the kjs or kcs engines. The web connector
    selects the kxs to send to.
    >
    3. If we set iAS driven load balancing, what's the criteria
    for the web connector used to choose kxs?
    as specified by your criteria in the iAS driven section.
    The iAS instance will send the web connector its current list of preferences for iAS
    instances, derived from those criteria. This information
    is dynamic and updated constantly.
    >
    4. We got a problem when run load testing for an AppLogic
    in this cluster, one iAS CPU average usage got almost
    100%, but the other one is just 70%.
    We used Per Server Response Time load balancing method.
    again, this can be a valid result depending on the way your AppLogics
    are written. Are they CPU-bound, I/O-bound or DB-bound? Since individual
    components execute differently, and you specified using the average of
    those results to determine load balancing, this can be a valid result
    caused by differences in the execution times of your AppLogics.
    >
    Thanks.
    Heng
    regards
    Han-Dat
    Consulting Project Engineer
    iPlanet Professional Services - ANZ
    iPlanet e-commerce Solutions
    - A Sun|Netscape Alliance
    Sun Microsystems Australia Pty Ltd
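    As a rough illustration of the "Per Server Response Time" idea (a simplified model, not the actual iAS web connector algorithm; the numbers and server names are made up), servers with lower average response times can be given proportionally more of the new requests:
    import random

    class ResponseTimeBalancer:
        """Pick servers with probability inversely proportional to their
        average observed response time (illustration only)."""

        def __init__(self, servers):
            self.avg_rt = {s: 1.0 for s in servers}   # seconds; start out equal

        def record(self, server, response_time, alpha=0.5):
            # Exponential moving average of observed response times.
            self.avg_rt[server] = (1 - alpha) * self.avg_rt[server] + alpha * response_time

        def pick(self):
            weights = {s: 1.0 / rt for s, rt in self.avg_rt.items()}
            total = sum(weights.values())
            r = random.uniform(0, total)
            for server, w in weights.items():
                r -= w
                if r <= 0:
                    return server
            return server   # floating-point fallback

    lb = ResponseTimeBalancer(["iAS1", "iAS2"])
    for _ in range(5):
        lb.record("iAS1", 0.8)   # iAS1 responds slowly...
        lb.record("iAS2", 0.2)   # ...iAS2 responds quickly, so it should get more traffic
    print(sum(1 for _ in range(1000) if lb.pick() == "iAS2"))   # roughly 750-800 of 1000 picks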
