ASA Load-Balancing intriguing question

I have a setup where the inside interfaces may be in the same private subnet, but the outside interfaces are most likely in different public subnets.
For example: the inside interfaces on both ASAs are 192.168.1.1 and 192.168.1.2 /24, while the outside interfaces are connected to two different ISPs.
My guess is that I would probably lose failover of the load-balancing master if that ASA goes down, but I would still be interested in having users connect to the same public IP, with the master handing out the FQDN of the other ASA, so that AnyConnect sessions are balanced between both ASAs. Does it work this way?
In other words, does the VPN load-balancing feature talk only across the inside network, or does it require the same subnet on the outside as well? Is it a trick of the mask on the interface?
If not, is there a way around it? For example, if I use a bogus outside interface and tunnel it somehow to the real outside interface on the other ASA, will the FQDN redirect still be offered, so that the client connects to the other "real" public IP?
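Just to be clear, by load-balancing I mean the standard VPN cluster setup, roughly like this sketch on each member (virtual address and key are placeholders; as far as I understand, the documented setup assumes both members share the outside and inside subnets):
vpn load-balancing
 interface lbpublic outside
 interface lbprivate inside
 cluster ip address 203.0.113.10
 cluster key s3cr3tkey
 cluster encryption
 priority 10
 participate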

You can't route based on source IP with a firewall; that is only possible on a router, using PBR.
You can create two static routes, each pointing to a different router with a different metric,
but in that case the topology behaves like active/standby, which is not good in your case.
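A minimal sketch of that static-route approach (interface names and next-hop addresses are placeholders; without SLA tracking the higher-metric route is only used if the primary route disappears, e.g. the interface goes down):
route outside 0.0.0.0 0.0.0.0 203.0.113.1 1
route outside 0.0.0.0 0.0.0.0 198.51.100.1 254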
Alternatively, you can use subinterfaces on your ASA: put each subinterface in a different subnet with a different security level,
and let each subinterface use a different HSRP instance.
There is another way, too.
If you don't use VPN on your ASA, you can achieve it by using multiple contexts.
In multiple-context mode you separate your firewall virtually,
so if you have two VLANs in your inside network (two different subnets),
each subnet will use a different firewall, virtually.
You divide the internal interface into two subinterfaces,
and you can use one outside interface shared between the contexts, or also split it into two subinterfaces,
and allocate those interfaces to each context.
That way you deal with each context as a separate firewall,
and you can use a different HSRP instance with each context.
But with multiple contexts you cannot use VPN on the firewall.
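A rough multiple-context sketch (interface and context names are just examples; each context then gets its own routing and policy):
mode multiple
!
context ctxA
 allocate-interface GigabitEthernet0/1.10 insideA
 allocate-interface GigabitEthernet0/0 outside
 config-url disk0:/ctxA.cfg
!
context ctxB
 allocate-interface GigabitEthernet0/1.20 insideB
 allocate-interface GigabitEthernet0/0 outside
 config-url disk0:/ctxB.cfg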
***** or use the following method *****
The other way, which I also suggest you try, is the transparent firewall.
In that case your firewall operates in L2 mode,
so you can use the routers' HSRP IPs as if there were no firewall in the path,
which I think is also helpful in your case.
In transparent mode the default gateway for your clients will be the HSRP IP, because the firewall will not have any IPs except for management.
The users will also be in the same IP subnet as the gateway, in your case the HSRP VIP,
and you can still control network security through the firewall normally.
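A minimal sketch of that idea (all addresses are placeholders; on pre-8.4 code the management address is set globally as shown, while newer releases use a bridge group/BVI instead):
firewall transparent
ip address 192.168.10.5 255.255.255.0
!
! On each upstream router a normal HSRP group provides the clients' gateway:
interface GigabitEthernet0/0
 ip address 192.168.10.2 255.255.255.0
 standby 1 ip 192.168.10.1
 standby 1 priority 110
 standby 1 preempt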
Try this and let me know.
see the following link for configuration
http://www.cisco.com/en/US/products/hw/vpndevc/ps2030/products_configuration_example09186a008089f467.shtml
Please rate if helpful.

Similar Messages

  • ASA load balance per destination

    Hello,
    I have an ASA with version 8.4.4.1 connected via a switch to two routers and I have two default routes on the ASA pointing to the two routers.
    The question is : Does the ASA load balance the traffic onto the two routers per-destination or per-packet?
    On routers, CEF load balances per destination by default. So what about the ASA, regardless of the hardware?
    Regards,
    George

    No. Everything will go to lowest cost route. Matthew

  • ASA LOAD BALANCE

    Hello,
    Does anyone know what device to use to do ASA load balancing?
    Thanks

    No.
    I have 2 physical ASA 5520s. We are thinking of creating 2 contexts on each ASA and configuring the following:
    asa1
    context A - active
    context B - passive
    asa2
    context A - passive
    context B - active
    That way we will have 2 ASAs with different IP addresses on the outside.
    We want the traffic coming in to our web servers to be load balanced between these 2 ASAs.
    Thanks
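    Roughly, the failover-group arrangement we have in mind would look something like this sketch (only the cross-active part is shown; interface, monitoring, and context allocation config omitted):
    failover group 1
     primary
     preempt
    failover group 2
     secondary
     preempt
    !
    context A
     join-failover-group 1
    context B
     join-failover-group 2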

  • Portal Landscape - With 2 CSM (load balance) related question

    Hi,
    We currently have a portal landscape (Dev; QA - 2 app servers; PRD - 4 app servers). Load balancing on the production portal is done by a CSM (load balancer), which does SSL offloading for security encryption and lands the user on one of the application servers. When we try to log in to the portal, it authenticates against LDAP (OID). We also have some links which take us to the backend R/3, BW etc. (we use SAP load balancing with an SMLG logon group).
    Now, due to another special project, the following is what we are planning:
    1. Adding a couple more application servers for the production portal, or having a separate second portal landscape altogether
    2. Adding a couple more application servers for the R/3 production server (load balancing can be done with a special logon group for that)
    The questions are:
    1. When we land on the current production portal page and click an iView link for the special project, it should go only to those special portal app servers (planning to do this through another CSM) and from there to the backend R/3. In this scenario, how does the authentication (or SSO ticket) happen when it goes from one CSM to another CSM? Will it ask for login again, or will any issue happen with the SSO ticket?
    2. If we decide to go for a second portal landscape: in the same scenario, when we log in to the current prod portal page and click an iView link for the special project, it should go to that other production portal. In that case, what will happen to the login authentication done through the first portal and the SSO ticket?
    3. Suppose we go to the second production portal directly through a website, and the user tries to log in with the same ID to the first portal. How will the portal deal with this in terms of security (SSO), and how will the backend R/3 behave when the same ID comes in as part of SSO?
    If anyone can think of any other issues apart from SSO or encryption-related things which I need to be aware of, kindly let me know.
    Thanks,
    Murali.

    I am not sure what CSM is, but I would expect it only does ssl offloading and a sort of "reverse proxy" against the cluster.
    >1. When we land on the current production portal page and click an iView link for the special project, it should go only to those special portal app servers (planning to do this through another CSM) and from there to the backend R/3. In this scenario, how does the authentication (or SSO ticket) happen when it goes from one CSM to another CSM? Will it ask for login again, or will any issue happen with the SSO ticket?
    This depends on the host name you use for the two CSM clusters. If they have the same subdomain, there should be no problem as the SAP Logon Ticket (MYSAPSSO2) cookie is issued to the sub domain of the portal.
    If they do not have the same subdomain, the second CSM cluster will receive the request without the MYSAPSSO2 cookie, and will therefore trigger reauthentication.
    >2. If we decide to go for a second portal landscape: in the same scenario, when we log in to the current prod portal page and click an iView link for the special project, it should go to that other production portal. In that case, what will happen to the login authentication done through the first portal and the SSO ticket?
    It will fail, as the MYSAPSSO2 cookie from the first portal is not recognized in the second. However, you can easily set it up so that the second portal trusts the first and does a logon based on its credentials.
    >3. Suppose we go to the second production portal directly through a website, and the user tries to log in with the same ID to the first portal. How will the portal deal with this in terms of security (SSO), and how will the backend R/3 behave when the same ID comes in as part of SSO?
    I assume both portals will be set up against the same LDAP/UME source. Therefore it will allow the logon. The backend systems should trust both the first and the second portal (transaction STRUSTSSO2).
    I think your architecture choice comes down to if the new project has special considerations with regards to versioning of portal. If it does, it would be sensible to separate it into a separate portal (and you can always integrate them with the first portal through portal federation if you have a relatively new version).
    Regards
    Dagfinn

  • Load Balancing simple question

    Hi,
    I'm using a CSS 11501 to load balance some web servers using source-IP stickiness.
    If one source IP is directed to a certain web server,
    how much time has to pass before this same source IP can be directed to another web server?
    Thank you in advance!

    By default, entries in the sticky table do not time out. The table works on a first-in, first-out basis. The size of the table depends on the amount of memory in the CSS (SCM 144 MB --> 32k entries, SCM 288 MB --> 128k entries).
    You can change the default timeout value using the 'sticky-inact-timeout' command.
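    For example, a content rule using source-IP stickiness with a 30-minute inactivity timeout could look roughly like this (names and addresses are placeholders):
    content web_rule
    vip address 192.0.2.10
    protocol tcp
    port 80
    advanced-balance sticky-srcip
    sticky-inact-timeout 30
    add service web1
    add service web2
    active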
    ~Zach

  • Load Balancing - SMLG Question

    I have a question about the load distribution when using logon groups.
    I have created a logon group in SMLG and it is working to distribute users to different servers.
    It appears that it distributes the load based on a “quality” rating.
    What factors make up the quality rating? Or, if it is not the quality rating, what is used to measure the load on the servers?

    Hi Drew,
    The quality field displays a numeric value derived from the results of an RFC query.
    Read:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/28/1c623c44696069e10000000a11405a/content.htm
    Regards
    Juan
    Please reward with points if helpful

  • HTTPS with load balancing

    Hi guys,
    We have a portal system with instance 08, so we typically connect to the portal using port 50800 for HTTP, and 50801 for HTTPS.
    We have just created a second server node for this portal (in the config tool).
    When we connect to 50800, does this automatically load balance the user to the better server? From some reading on these forums, it seems that load balancing will only occur if we connect using port 8109 (where 09 is the instance number of the SCS of our portal).
    When connecting to port 8109, we are redirected to port 50800, as I'd expect.
    Question 1 - do we need to use 8109 for load balancing, or can we still use 50800?
    Question 2 - If we need to use 8109, which is a HTTP port, how can we achieve load balancing with HTTPS. Is there a different port we need to use to have HTTPS with load balancing?
    Question 3 - Is the creation of a second server node the best way to accommodate additional users and load on the portal system, or is there a better way to do things?
    Thanks,
    Michael.

    Better late than never.
    The load balancing you describe through the message server has its limitations: it redirects you to one of the dialog-server hosts, which means that any bookmarks will always point directly to a dialog server, which may be down at that moment.
    Accessing a dialog server directly on port 50800 will, in a sense, load balance across the Java server nodes on that host, but not across other servers.
    The general recommendation is to set up an external load balancer, and SAP Web Dispatcher is a good match if the load is not very high. The Web Dispatcher binds the cluster address and acts as a proxy towards the dialog servers of the portal, so the user only ever sees one address. This also works for HTTPS.
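    As a rough sketch (all values are placeholders, and HTTPS termination on the Web Dispatcher additionally needs its own SSL/PSE setup), a minimal Web Dispatcher profile pointing at the portal's message server could look something like this:
    # assumption: the portal's SCS message server listens on HTTP port 8109
    SAPSYSTEMNAME = WD1
    SAPSYSTEM = 90
    rdisp/mshost = portalhost
    ms/http_port = 8109
    icm/server_port_0 = PROT=HTTP, PORT=80
    icm/server_port_1 = PROT=HTTPS, PORT=443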
    Regards
    Dagfinn

  • Service Request - Load balancing weight & TM labour schedule

    What is load balancing weight? How does it work? (I think it must be related to selecting resources for assigning the task, but how does it work? I am not getting anywhere with the help menu on this.)
    And how is the T&M labour schedule managed?
    Regards

    Hi,
    1. I had some time and set up your config. Load balancing seems to work fine for me. I tried it with 4 different clients.
    What I did to set it up:
    # Failover RG for shared address
    scrgadm -a -g proftp-srg -h mars,saturn
    scrgadm -a -S -j hafast_r -g proftp-srg -l hafast
    # Scalable RG for proftpd
    scrgadm -a -g proftp-rg -y Maximum_primaries=2 -y Desired_primaries=2 -h mars,saturn -y RG_dependencies=proftp-srg
    scrgadm -a -j proftp_r -g proftp-rg -t SUNW.gds -y Network_resources_used=hafast_r -y Port_list=4443/tcp -x Start_command=/usr/local/sbin/proftpd -x Probe_command=/bin/true -y Scalable=true -y Load_balancing_policy=Lb_sticky_wild
    I assume that the key is the Lb_sticky_wild setting. As proftpd forks new processes for every connection, and even does reconnects during operation on new ports, this seems to be essential. Please try it.
    2. It was not considered an excellent idea to configure FTP as a scalable service. The reason is that you now have the network traffic load balanced. The question is: how do you configure the shared storage? Do you have one (1) underlying global filesystem? That could be a bottleneck. Check it out.
    Regards
    Hartmut
    PS: BTW I used proftp 1.3.0

  • VPN load balancing and ASA !!!

    Hi netpros,
    I have a couple of questions about this and hope you might be able to assist me.
    1.- Are VPN load balancing and failover (Active/Active) mutually exclusive ..? I mean they can't be used at the same time correct ..?
    2.- How does the ASA handle the return traffic from the internal LAN towards the remote client? The cluster only requires ONE public virtual IP address, which works for incoming packets, but what about the return traffic, which only knows the DHCP scope's default gateway IP address? How does the returned packet get redirected from the default gateway IP address to the respective ASA's internal IP address?
    3.- VPN load balancing only applies to remote clients using Easy VPN technology (Easy VPN software client, hardware client, PIX acting as an Easy VPN client, etc.) and does not work with static LAN-to-LAN tunnels, correct?
    Your comments are much appreciated

    Hi Gilbert ..
    1.- Thanks I wanted to make sure.
    2.- I know that. My question is in regard to the return packets. For example, if I have the IP schema below:
    ASA1: Public 20.20.20.20
    Private 192.168.1.1
    ASA2: Public 20.20.20.21
    Private 192.168.1.2
    Cluster virtual IP: 20.20.20.10
    Default gateway for segment 192.168.1.0 is 192.168.1.1
    Let's say a VPN client tries to connect and the cluster instructs it to connect to ASA2 at 20.20.20.21. The packets reach the internal server at 192.168.1.100. The internal server then sends the return packets back to the client by forwarding them to its default gateway, which is 192.168.1.1 (ASA1). Here is my question: how does the cluster handle this, given that the return packets are supposed to be directed to ASA2, 192.168.1.2?
    3.- Any idea about this one ..?
    Cheers,

  • ASA 5520 VPN load balancing with Active/Standby failover on 2 devices only...

    This topic has been beaten to death, but I did not see a real answer. Here is the configuration:
    1) 2 x ASA 5520, running 8.2
    2) Both ASA are in same outside and inside interface broadcast domains – common Ethernet on interfaces
    3) Both ASA are running single context but are active/standby failovers of each other. There are no more ASA’s in the equation. Just these 2. NOTE: this is not a Active/Active failover configuration. This is simply a 1-context active/standby configuration.
    4) I want to share VPN load among two devices and retain active/standby failover functionality. Can I use VPN load balancing feature?
    This sounds trivial, but I cannot find a clear answer (without testing this); and many people are confusing the issue. Here are some examples of confusion. These do not apply to my scenario.
    Active/Active failover is understood to mean only two ASAs running multiple contexts: context 1 is active on ASA1, context 2 is active on ASA2, and they share failover information. Active/Active does not mean two independently configured ASA devices which do not share failover communication but do VPN load balancing. It is clear that this latter scenario will work and that both ASAs are active, but it does not fit the Active/Active configuration definition. Some people are calling VPN load balancing on two unique ASAs “active/active”, but it is not.
    The other confusing thing I have seen is that the VPN config guide for VPN load balancing mentions configuring separate IP address pools on the VPN devices, so that clients on ASA1 do not have IP address overlap with clients on ASA2. When you configure an IP address pool on the active ASA1, it gets replicated to the standby ASA2. In other words, you cannot have two unique IP address pools on an ASA Active/Standby pair. I guess I could draw addresses from an external DHCP server and then do some kind of routing. Perhaps that would work?
    In any case, any experts out there that can answer question? TIA!

    Wow, some good info posted here (both questions and some answers). I'm in a similar situation with a couple of vpn load-balanced pairs... my goal was to get active-standby failover up and running in each pair- then I ran into this thread and saw the first post about the unique IP addr pools (and obviously we can't have unique pools in an active-standby failover rig where the complete config is replicated). So it would seem that these two features are indeed mutually exclusive. Real nice initial post to call this out.
    Now I'm wondering if the ASA could actually handle a single addr pool in an active-standby fo rig- *if* the code supported the exchange of addr pool status between the fo members (so they each would know what addrs have been farmed out from this single pool)? Can I get some feedback from folks on this? If this is viable, then I suppose we could submit a feature request to Cisco... not that this would necessarily be supported anytime soon, but it might be worth a try. And I'm also assuming we might need a vip on the inside int as well (not just on the outside), to properly flip the traffic on both sides if the failover occurs (note we're not currently doing this).
    Finally, if a member fails in a std load-balanced vpn pair (w/o fo disabled), the remaining member must take over traffic hitting the vip addr (full time)... can someone tell me how this works? And when this pair is working normally (with both members up), do the two systems coordinate who owns the vip at any time to load-balance the traffic? Is this basically how their load-balancing scheme works?
    Anyway, pretty cool thread... would really appreciate it if folks could give some feedback on some of the above.
    Thanks much,
    Mike

  • Multi-Geography Load-Balancing on ASA

    Hi all,
    I have two buildings in two different locations, each with a Cisco ASA 5520 providing VPN access. I want to configure load-balancing between them, but I can't have both outside interfaces in the same subnet. Reading the document http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00805fda25.shtml, it is stated that "All devices in the virtual cluster must be on the same outside and inside IP subnets", but I also have some slides from a Cisco Networkers training (26-29 Jan 2009) titled "Deploying Remote Access with SSL VPNs", written by Nadhem J. AlFardan, which say that it is possible to configure two ASAs, one in London and one in New York, in a cluster to provide load balancing between them.
    Has anyone configured load-balancing between two ASAs that do not share the same outside and inside subnets? Is there any documentation addressing this issue?
    Thanks

    Good question ... I have the same issue ... Can someone please help us? I also looked into load balancing using the ASA cluster feature, but unfortunately realized that "All devices in the virtual cluster must be on the same outside and inside IP subnets", and now I have to come up with another solution ...

  • Load Balancing question

    My company is in the process of building a small-scale network architecture strictly for testing purposes. We have a DMZ area that contains 2 load balancers and 1 web server. The web server is a SunFire 280 and has two gig-E NICs. They want to cable one NIC to one load balancer and one NIC to the other. Since this is only one box, we have to put the NICs on separate subnets. The question is: can I configure the load balancers in a failover or active/active arrangement with one load balancer on one VLAN and the other load balancer on a separate VLAN?

    I was not able to understand why you want to give IPs to the two NICs from different subnets.
    There is no requirement like that. If you have your own requirement, can you explain it to me?
    Ashman

  • Load Balancing Directory Servers with Access Manager - Simple questions

    Hi.
    We are in the process of configuring 2 Access Manager instances (servers) accessing the same logical LDAP repository (physically comprising two Directory Servers working together, with Multi-Master Replication configured and tested). For doing this, we are following guide number 819-6258.
    The guide uses BigIP load balancer for load balancing the directory servers. However, we intend to use Directory Proxy Server. Since we faced some (unresolved) issues last time that we used DPS, there are some simple questions that I would be very grateful to have answers to:
    1. The guide, in section 3.2.10 (To configure Access Manager 1 with the Directory Server load balancer), talks about making changes at 4 places, and replacing the existing entry (hostname and port) with the load balancer's hostname and port (assuming that the load balancer has already been configured). It says that changes need not be made on Access Manager 2 since the LDAPs are in replication, and hence changes will be replicated at all places. However, the guide also states that changes have to be made in two files, namely AMConfig.properties, and the serverconfig.xml file. But these changes will not be reflected on Access Manager 2, since these files are local on each machine.
    Question 1. Do changes have to be made in AMConfig.properties and serverconfig.xml files on the other machine hosting Access Manager 2?
    Question 2: What is the purpose of putting these values here? Specifically, what is achieved by specifying the Directory server host and port in AMConfig.properties, as well as in serverconfig.xml?
    Question 3. In the HTTP console, there is the option of specifying multiple primary LDAP servers, as well as multiple secondary LDAP servers. What is the purpose of these? Are secondary servers attempted when none of the list in the primary list are accessible? Also, if there are multiple entries in the primary server list, are they accessed in a round robin fashion (hereby providing rudimentary load balancing), or are other servers accessed only when the one mentioned first is not reachable etc.?
    2. Since I do not have a load balancer set up yet, I tried the following deviation from the above, which, according to me, should have worked. Viewed in the HTTP console, LDAP / Membership / MSISDN and Policy configuration all pointed to the DS on host 1. When I changed all these to point to the directory server on host 2 (and made AMConfig.properties and serverconfig.xml on host 1 point to the DS of host 2 as well), things should have worked fine, but apparently Access Manager 1 could not be started. Error from the web server:
    [14/Aug/2006:04:30:36] info (13937): WEB0100: Loading web module in virtual server [https-machine_1_FQDN] at [search]
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: Exception in thread "EventService" java.lang.ExceptionInInitializerError
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at com.iplanet.services.ldap.event.EventServicePolling.run(EventServicePolling.java:132)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at java.lang.Thread.run(Thread.java:595)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: Caused by: java.lang.InterruptedException
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: at com.sun.identity.sm.ServiceManager.<clinit>(ServiceManager.java:74)
    [14/Aug/2006:04:31:48] warning (13937): CORE3283: stderr: ... 2 more
    In effect, AM on 1 did not start. On rolling back the changes, things again worked like previously.
    Will be really grateful for any help / insight / experience on dealing with the above.
    Thanks!

    Update to the above, in case anyone is reading:
    We setup a similar setup in Windows, and it worked. Here is a detailed account of what was done:
    1. Host 1: Start installer, install automatically, chose Directory server, Directory Administration server, Directory Proxy server, Web server, Access Manager.
    All installed, and worked fine. (AMConfig.properties, serverconfig.xml, and the info in LDAP service, all pointed to HOST1:389)
    2. Host 2: Start installer, install automatically, chose Directory server, Directory Administration server, Directory Proxy server, Web server, Access Manager.
    All installed, and worked fine. (AMConfig.properties, serverconfig.xml, and the info in LDAP service, all pointed to HOST2:389)
    3. Host 1: Started replication. Set to Master
    4. Host 2: Started replication. Set to Master
    5. Host 1: Setup replication agreement to Host 2
    6. Host 2: Setup replication agreement to Host 1
    7. Initiated the remote replica from Host 1 ----> Host 2
    Note that since the default installation uses abc.....xyz as the encryption key, setting this to be the same was not an issue.
    9. Started webserver for Host 1 and logged into AM as amadmin.
    10. Added Host 2 FQDN in DNS Aliases / Realms
    11. Added http://HOST2_FQDN:80 in the Platform server (instance) list.
    12. Started Host 2 webserver. Logged in AM on Host 2, things worked fine.
    At this stage, note the following:
    a) Host 1:
    AMConfig.properties file has
    com.iplanet.am.directory.host=host1_FQDN
    and
    com.iplanet.am.directory.port=389
    serverconfig.xml has:
    <Server name="Server1" host="host1_FQDN" port="389" type="SIMPLE" />
    b) Host 2:
    AMConfig.properties file has
    com.iplanet.am.directory.host=host2_FQDN
    and
    com.iplanet.am.directory.port=389
    serverconfig.xml has:
    <Server name="Server1" host="host2_FQDN" port="389" type="SIMPLE" />
    c) If one logs into AM, and checks LDAP servers for LDAP / Policy Configuration / Membership etc services, they all contain Host2_FQDN:389 (which makes sense, since replica 2 was initialized from 1)
    Returning to the configurations:
    13. On Host 1, log into the Admin Server console of the Directory Server. Navigate to the DPS, and configure the following:
    a) Network Group
    b) LDAP servers
    c) Load Balancing
    d) Change Group
    e) Action on-bind
    f) Allow all actions (permit modification / deletion etc.).
    g) any other configurations required - I am willing to give detailed steps if someone needs them to help me / themselves! :)
    So now, we have DPS configured and running on Host1:489, and distributing load to DS1 and DS2 on a 50:50 basis.
    14. Now, log into AM on Host 1, and instead of Host1_fqdn:389 (for DS) in the following places, specify Host1_fqdn:489 (for the DPS)--
    LDAP Authentication
    MSISDN server
    Membership Service
    Policy configuration.
    Verified that this propagated to the Policy Configuration service and the LDAP authentication service that are already registered with the default organization.
    15. Log out of AM. Following the documentation, modify directory.host and directory.port in AMConfig.properties to point to Host 1_FQDN and 489 respectively. Make this change in AMConfig.properties of both Host 1 as well as 2.
    16. Edit serverconfig.xml on both hosts, and instead of they pointing to their local directory servers, point both to host1_FQDN:489
    17. When you start the web server, it will refuse to start and will spew errors such as:
    [https-host1_FQDN]: Sun ONE Web Server 6.1SP5 B06/23/2005 17:36
    [https-host1_FQDN]: info: CORE3016: daemon is running as super-user
    [https-host1_FQDN]: info: CORE5076: Using [Java HotSpot(TM) Server VM, Version 1.5.0_04] from [Sun Microsystems Inc.]
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amserver]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [ampassword]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amcommon]
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [amconsole]
    [https-host1_FQDN]: warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    [https-host1_FQDN]: info: WEB0100: Loading web module in virtual server [https-host1_FQDN] at [search]
    [https-host1_FQDN]: warning: CORE3283: stderr: netscape.ldap.LDAPException: error result (32); matchedDN = dc=sun,dc=com; No such object (DN changed)
    [https-host1_FQDN]: warning: CORE3283: stderr: Got LDAPServiceException code=-1
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getConnection(DSConfigMgr.java:357)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewFailoverConnection(DSConfigMgr.java:314)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewConnection(DSConfigMgr.java:253)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewProxyConnection(DSConfigMgr.java:184)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.services.ldap.DSConfigMgr.getNewProxyConnection(DSConfigMgr.java:194)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.initLdapPool(DataLayer.java:1248)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.<init>(DataLayer.java:190)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.getInstance(DataLayer.java:215)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ums.DataLayer.getInstance(DataLayer.java:246)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ldap.SMSLdapObject.initialize(SMSLdapObject.java:156)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ldap.SMSLdapObject.<init>(SMSLdapObject.java:124)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    [https-host1_FQDN]: warning: CORE3283: stderr: at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.Class.newInstance0(Class.java:350)
    [https-host1_FQDN]: warning: CORE3283: stderr: at java.lang.Class.newInstance(Class.java:303)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.SMSEntry.<init>(SMSEntry.java:216)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.identity.sm.ServiceSchemaManager.<init>(ServiceSchemaManager.java:67)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.am.util.AMClientDetector.getServiceSchemaManager(AMClientDetector.java:219)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.am.util.AMClientDetector.<init>(AMClientDetector.java:94)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.sun.mobile.filter.AMLController.init(AMLController.java:85)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:262)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:322)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:120)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3271)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3747)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: warning: CORE3283: stderr: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: warning: CORE3283: stderr: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]: failure: WebModule[amserver]: WEB2783: Servlet /amserver threw load() exception
    [https-host1_FQDN]: javax.servlet.ServletException: WEB2778: Servlet.init() for servlet LoginLogoutMapping threw exception
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:949)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:813)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3478)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3760)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]: ----- Root Cause -----
    [https-host1_FQDN]: java.lang.NullPointerException
    [https-host1_FQDN]: at com.sun.identity.authentication.UI.LoginLogoutMapping.init(LoginLogoutMapping.java:71)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:921)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:813)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:3478)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardContext.start(StandardContext.java:3760)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebModule.start(WebModule.java:251)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardHost.start(StandardHost.java:652)
    [https-host1_FQDN]: at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1133)
    [https-host1_FQDN]: at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:355)
    [https-host1_FQDN]: at org.apache.catalina.startup.Embedded.start(Embedded.java:995)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.start(WebContainer.java:431)
    [https-host1_FQDN]: at com.iplanet.ias.web.WebContainer.startInstance(WebContainer.java:500)
    [https-host1_FQDN]: at com.iplanet.ias.server.J2EERunner.confPostInit(J2EERunner.java:161)
    [https-host1_FQDN]:
    [https-host1_FQDN]: info: HTTP3072: [LS ls1] http://host1_FQDN:58080 [i]ready to accept requests
    [https-host1_FQDN]: startup: server started successfully
    Success!
    The server https-host1_FQDN has started up.
    The server, in fact, didn't start up (nothing was even listening on 58080).
    However, if AMConfig.properties is left as it originally was, and only serverconfig.xml files were changed as mentioned above, web servers started fine, and things worked all okay. (Alright, except for some glitches when viewed in /amconsole. If /amserver/console is accessed, all is good. Can this mean that all is still not well? I am not sure).
    So far so good. Now comes the sad part. When the same is done on Solaris 9, things don't work. You continue to get the above error, OR the following error, and the web server will refuse to start:
    Differences in Solaris and Windows are as follows:
    1. Windows hosts have 1 IP and hostname. Solaris hosts have 3 IPs and hostnames (for DS, DPS, and webserver).
    No other difference from an architectural perspective.
    Any help / insight on why the above is not working (and why the hell does the documentation seem so sketchy / insecure / incorrect).
    Thanks a bunch!

  • Question about Load Balance SFTP service by using CSS1150X

    Has anyone come across load balancing an SFTP service using a CSS1150X? Typically, when configuring the CSS1150X to load balance an FTP service, the configuration is as follows:
    content ftp_rule
    vip address 192.168.3.6
    protocol tcp
    port 21
    application ftp-control
    add service serv1
    add service serv2
    add service serv3
    active
    group ftp_group
    vip address 192.168.3.6
    add service serv1
    add service serv2
    add service serv3
    active
    However, based on my personal understanding and knowledge, I would configure my CSS1150X as follows to load balance the SFTP service:
    content sftp_rule
    vip address 192.168.3.6
    protocol tcp
    port 22 //Change 21 to 22
    application ftp-control
    add service serv1
    add service serv2
    add service serv3
    active
    group sftp_group
    vip address 192.168.3.6
    add service serv1
    add service serv2
    add service serv3
    active
    My question is: is the "application ftp-control" line from the content rule "ftp_rule" still applicable to SFTP or not?

    I believe application ftp-control should not be used for SFTP.
    With it applied, the session might get dropped when no FTP data channel is created, causing issues with long-lived connections.
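    In other words, a sketch of the SFTP rule without the FTP-specific keyword would simply be your rule minus that one line:
    content sftp_rule
    vip address 192.168.3.6
    protocol tcp
    port 22
    add service serv1
    add service serv2
    add service serv3
    active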
    Hope it helps!!

  • CSS load balancing questions

    I hope that someone can help with 2 simple (I think) CSS questions.
    1. When configured properly for load balancing, should the CSS round-robin between servers, or will it continue to use only one server until triggered by some event or parameter?
    2. If 1 of 2 load-balanced servers fails, how does load balancing proceed? Will it continue to try to load balance between the servers, or will it give up on the failed server until some event or timeout occurs?
    Thanks in advance,
    Eliot

    Hi Eliot,
    The CSS can be configured to perform load balancing in a variety of different ways: least connections, round robin, ACA, etc. With round robin, each new connection through the CSS is distributed in turn across the servers in your server group.
    If a server fails, the CSS will know it has failed through the use of keepalives (based on TCP connection, ICMP, etc.) and will no longer send requests to that server. Traffic associated with a previous connection to the failed server will be sent to one of the surviving servers. It is then up to the behavior of the application whether the user experiences any disruption.
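    A minimal sketch with placeholder names and addresses, showing a TCP keepalive on a service and a round-robin content rule:
    service web1
    ip address 10.1.1.10
    keepalive type tcp
    keepalive port 80
    active
    content web_rule
    vip address 192.0.2.20
    protocol tcp
    port 80
    balance roundrobin
    add service web1
    add service web2
    active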
    Hope this helps
    Brett
