Siebel Testing. Cookie handling, cookie load balancing and authentication.

We are delivering a PoC at a large Siebel customer and have run into some issues.
Environment information is:
- Siebel testing, version 8.
- NTLM Windows authentication.
     When we create a Siebel load script, recording works fine. Replaying the script after adding the authentication function does not work.
     When we create a Web load script with the same structure and business process, it works after adding the authentication function.
     Looking at the difference between the Web and Siebel scripts, the only difference is a couple of cookies that the Web script handles and the Siebel script does not. These two cookies are set during the NTLM handshake (two requests answered with HTTP 401 and a final one with HTTP 200): on the first handshake response the application asks the browser to store a couple of cookies, which the browser (and OLT in a Web load script) then sends on the following requests. The Siebel load script does not add or handle these cookies.
     Accordingly, the script works with the Web module but does not work with the Siebel module.
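To illustrate what the Web module (and a browser) gives you for free, here is a minimal sketch in plain Java 11+. This is not the OLT Siebel module API, the URL is hypothetical, and the NTLM negotiation itself is not shown; the point is only that a CookieManager attached to the client stores any Set-Cookie returned on the 401 handshake responses and replays it on later requests, which is what the load balancer needs to keep a virtual user pinned to one node.

```java
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HandshakeCookieDemo {
    public static void main(String[] args) throws Exception {
        // Cookie store shared across all requests, like a browser's cookie jar.
        CookieManager cookies = new CookieManager(null, CookiePolicy.ACCEPT_ALL);

        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(cookies)   // replay captured Set-Cookie values automatically
                .build();

        // Hypothetical load-balanced Siebel URL; NTLM authentication is omitted here.
        URI uri = URI.create("http://siebel.example.com/callcenter_enu/start.swe");

        // First request: typically answered with 401 plus the load-balancer Set-Cookie headers.
        HttpResponse<Void> first = client.send(
                HttpRequest.newBuilder(uri).build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println("First status: " + first.statusCode());
        System.out.println("Cookies captured: " + cookies.getCookieStore().getCookies());

        // Second request: the captured cookies are sent back, so the balancer
        // keeps routing this virtual user to the same node.
        HttpResponse<Void> second = client.send(
                HttpRequest.newBuilder(uri).build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println("Second status: " + second.statusCode());
    }
}
```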
Right now we only have two options to make Siebel work:
- Change the DNS address to point the application to a single node instead of the load-balancing service. This way the cookies are not needed and the script does not fail.
- Add the cookies by hand. This way the cookie content is hard-coded, so it is useless for load-balancing purposes, which leads us back to the previous bullet: load testing with the Siebel module will not work load balanced.
I attached a Web script (works fine with no tweaking) and a Siebel script executed both with the normal configuration (does not work) and with the workaround (avoiding the load-balancing cookies; it works).
Many thanks for the help,
Iván.
IM_Siebel_Second_Test.zip
Siebel web test.zip
Edited by: user9982485 on 03-Aug-2010 09:18

Álex,
Thank you very much; I will also call you about the other issue. Thanks for the kind help!
IMHO, it works or not depending on the cookies added for load balancing. The Siebel module does not add these cookies, while the Web module does. If you delete the cookies from the Web script, it stops working, so I guess the cookies are what makes or breaks the script.
I will send you the scripts, so you can have a look.
Thanks,
Iván.

Similar Messages

  • Load Balancing and WLS primary server offset

    I've got a load balancer in front of my WLS cluster, and I'm trying to
              set up load balancing based on WLS clustering. What I need to know to
              do this is the offset within the cookie that's responsible for
              determining which machine within the cluster to direct to.
              Any idea how I can get this information?
              thanks,
              cfraser
              

    Chris Fraser wrote:
              > The proxy/plug-in solution sounds pretty cool, but I've got a high speed
              > Alteon Load Balancer already set up. I would prefer to use that as the load
              > balancer to the WL cluster rather than pay to bring another WLS online to do
              > pretty much what the load balancer, that I already own, can do. I know that
              > going this route means that we're probably not going to be able to do things
              > like failover to the secondary when the primary dies, but we will be able to
              > load balance and also have the ability to dynamically add/delete servers
              > from the list of available servers as they are brought up/down.
              In-memory session replication doesn't work without our plugins. I will have to
              do a little bit of investigation to figure out whether the other persistence mechanisms
              would work without our plugins, if you are interested in them. I have to remind
              you, though, that the other types of persistence mechanisms we support are slower
              than in-memory session replication.
              > Are there any plans to work with an Alteon or a Foundry to have their Load
              > Balancers act as the front end to a WLS cluster?
              Currently none. We are taking steps to make the plugins and the cluster more
              robust; we currently don't have any plans to work with other 3rd-party vendors.
              > For us it would be ideal, because we wouldn't have to support another piece of
              > software, we would just
              > have to support the hardware based Alteon, which can handle thousands of
              > transactions per second.
              > I understand that the primary and secondary server information is available
              > in the sessionID, I'm just not quite sure how to extract it.
              This information is saved in the cookie. But I wouldn't count on that, as we
              have plans to change this. I cannot give you more details.
              > Is there a particular offset within the session ID where it can always be
              > found?
              I don't quite get what you mean here.
              Hope this helps.
              - Prasad
              > thanks for the help,
              > cfraser
              > ----------
              > C h r i s t o p h e r A . F r a s e r
              > Director, Technology
              > macroplay.com, Inc.
              > [email protected]
              >
              > Viresh Garg wrote:
              >
              > > You should be using
              > > -- NES +NSAPI Plugin
              > > -- IIS + ISAPI Plugin
              > > -- WEblogic server acting as proxy
              > > -- Apache +Apache Plugin ( only in Denali)
              > >
              > > front-ending your Weblogic cluster
              > >
              > > These proxies/plug-ins are smart to do a lot of things like:
              > >
              > > -- Load balancing in weblogic cluster
              > > -- Adding/deleting servers dynamically in cluster when the servers
              > > join/leave Weblogic cluster
              > > -- failover to secondary when primary dies.
              > >
              > > As far as the information about primary and secondary is concerned it is
              > > available in session ID.
              > >
              > > --Viresh Garg
              > >
              > > Chris Fraser wrote:
              > >
              > > > I've got a load balancer in front of my WLS cluster, and I'm trying to
              > > > set up load balancing based on WLS clustering. What I need to know to
              > > > do this is the offset within the cookie that's responsible for
              > > > determining which machine within the cluster to direct to.
              > > >
              > > > Any idea how I can get this information?
              > > >
              > > > thanks,
              > > > cfraser
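    For what it's worth, a hedged sketch of how that cookie can be pulled apart: in many WebLogic releases the session cookie value is a '!'-delimited string of the form sessionId!primaryServerHash!secondaryServerHash, but as Prasad says above the layout is not a public contract and may change, so treat this as illustrative parsing only (the sample value below is made up).

```java
// Illustrative only: the '!'-delimited layout is an assumption about some
// WebLogic releases, not a documented contract.
public final class WlsSessionCookie {

    public record ServerHint(String sessionId, String primary, String secondary) {}

    public static ServerHint parse(String cookieValue) {
        String[] parts = cookieValue.split("!");
        String session = parts.length > 0 ? parts[0] : "";
        String primary = parts.length > 1 ? parts[1] : null;
        String secondary = parts.length > 2 ? parts[2] : null;
        return new ServerHint(session, primary, secondary);
    }

    public static void main(String[] args) {
        // Hypothetical cookie value for demonstration.
        ServerHint hint = parse("AxCK3K2session!-123456789!987654321");
        System.out.println("primary hash   = " + hint.primary());
        System.out.println("secondary hash = " + hint.secondary());
    }
}
```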
              

  • VPN load balancing and ASA !!!

    Hi netpros,
    I have a couple of questions about this and hope you might be able to assist me.
    1.- Are VPN load balancing and failover (Active/Active) mutually exclusive? I mean they can't be used at the same time, correct?
    2.- How does the ASA handle the return traffic from the internal LAN towards the remote client? The cluster only requires ONE public virtual IP address, which works for incoming packets, but what about the return traffic, which only knows the DHCP scope's default gateway IP address? How does the returned packet get redirected from the default gateway IP address to the respective ASA internal IP address?
    3.- VPN load balancing only applies to remote clients using Easy VPN technology (Easy VPN client, hardware client, PIX using Easy VPN client, etc.) and does not work with static LAN-to-LAN tunnels, correct?
    Your comments are much appreciated

    Hi Gilbert ..
    1.- Thanks I wanted to make sure.
    2.- I know that; my question is about the return packets. For example, if I have the IP schema below:
    ASA1: Public 20.20.20.20
    Private 192.168.1.1
    ASA2: Public 20.20.20.21
    Private 192.168.1.2
    Cluster virtual IP: 20.20.20.10
    Default gateway for segment 192.168.1.0 is 192.168.1.1
    Let's say that a VPN client tries to connect and the cluster instructs the client to connect to ASA2 (20.20.20.21). The packets reach the internal server at 192.168.1.100. The internal server then sends the return packets back to the client by forwarding them to its default gateway, which is 192.168.1.1 (ASA1). Here is my question: how does the cluster handle this, given that the return packets are supposed to be directed to ASA2 (192.168.1.2)?
    3.- Any idea about this one ..?
    Cheers,

  • Load balancing and RFC metadata repository in receiver RFC communication channel

    Hi,
    I want to know the purpose of load balancing and the RFC metadata repository in the receiver RFC communication channel.
    Can you send me any examples of this load balancing?
    Waiting for your response.
    Bye.
    Regards,
    Seeta Ram.

    Hi Seeta Ram,
    Load distribution is handled by the message server (there is one message server in an SAP system). When a user logs on, the message server assigns him or her to the application server that currently has the smallest load.
    So load balancing is used for better performance: the work is distributed across the application servers to balance the load in the SAP system.
    For more information refer to this link
    http://help.sap.com/saphelp_nw04/helpdata/en/28/75153a1a5b4c2de10000000a114084/content.htm
    Regards
    Sumit Bhutani
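    As a purely conceptual sketch (not any SAP API), the logon-group decision Sumit describes boils down to picking the application server with the smallest current load; the host names and load scores below are made up.

```java
import java.util.Comparator;
import java.util.List;

// Conceptual sketch only -- this is NOT the SAP message server, just the idea
// described above: on logon, pick the application server with the smallest load.
public class LogonGroupBalancer {

    record AppServer(String host, double loadScore) {}   // lower score = less loaded

    static AppServer pickLeastLoaded(List<AppServer> group) {
        return group.stream()
                .min(Comparator.comparingDouble(AppServer::loadScore))
                .orElseThrow(() -> new IllegalStateException("logon group is empty"));
    }

    public static void main(String[] args) {
        List<AppServer> group = List.of(
                new AppServer("appsrv01", 0.72),
                new AppServer("appsrv02", 0.35),   // least loaded -> gets the next logon
                new AppServer("appsrv03", 0.90));
        System.out.println("Logon dispatched to " + pickLeastLoaded(group).host());
    }
}
```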

  • For a true load balancing and high-availability OHS, OPMN, and mod_oc4j

    I have read this link about enabling clustering on an OC4J 9.0.4 standalone app server:
    http://www.oracle.com/technology/docs/tech/java/oc4j/htdocs/getstart.htm#1015479
    To test the clustering, start up the load balancer by executing "java -jar loadbalancer.jar".
    C:\OC4J_EXTENDED\j2ee\home>java -jar loadbalancer.jar
    In a future release of Oracle Application Server, loadbalancer.jar will be
    desupported. Because of this, we strongly suggest that you discontinue your use
    of loadbalancer.jar in this release. Under high loads, loadbalancer.jar may not
    function properly. For a true load balancing and high-availability solution,
    please move to use OHS, OPMN, and mod_OC4J. For more information, please see
    http://otn.oracle.com/products/ias/ohs/content.html
    Balancer initialized...
    What load balancer should I use for web clustering?
    <frontend host="balancer-host" port="balancer-port" />
    balancer-host=localhost
    balancer-port=80
    For all nodes I specified the same host and port in http-web-site.xml. Is that correct?
    I completed all the steps and ran http://localhost:6666/session/SessionServlet.
    I hit it 3 times.
    Then, in a different browser, I opened http://localhost:7777/session/SessionServlet;
    instead of continuing at 4, it starts again from 1.

    Can I use this loadbalancer.jar or not?
    How do I set up mod_oc4j with a standalone app server?
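    For reference, a minimal counter servlet along the lines of the SessionServlet test described above (hypothetical class name, javax.servlet API): if session state is really shared or replicated across the nodes, hitting the second node continues the count; if it restarts at 1, as reported, the nodes are not sharing sessions.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Minimal session counter of the kind used in the test above. If session state
// is replicated across the cluster, hitting another node continues the count;
// if it restarts at 1 (as described), the nodes are not sharing sessions.
public class SessionCounterServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession(true);
        Integer count = (Integer) session.getAttribute("count");
        count = (count == null) ? 1 : count + 1;
        session.setAttribute("count", count);
        resp.setContentType("text/plain");
        resp.getWriter().println("Hit number " + count + " for session " + session.getId());
    }
}
```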

  • Load balancing and failover in Embedded LDAP in weblogic

    How to handle load balancing and failover in Embedded LDAP in weblogic server?

    You should consider posting this to the Weblogic and/or LDAP support forums. This forum is meant for Sun Web Server questions.
    Thanks
    Manish

  • Network Load Balancing and failover for AFP Sharing

    Dear all,
    Somebody kindly taught me to use round-robin DNS to perform network load balancing; that works, but not the failover.
    I have 4 Xserves and want to do load balancing and failover at the same time.
    I have read the IP failover document and set it up successfully, but does anyone know whether it is possible to do IP failover for more than 2 servers?
    For example, 4 servers serving AFP at the same time, and maybe 1 extra server doing IP failover for those 4 servers.
    As far as I know, IP failover requires FireWire for heartbeat detection, but one Xserve only has 2 FireWire ports. Can I set up IP failover using only an Ethernet port and an IP address? Is it possible to detect a failed server and fail over to any server once the failure has been detected?
    I believe a load balancer may be the best solution, but its cost is too high.
    Thanks in advance!
    Karllee

    Well, you have 2 options here.
    Software load balancing:
    A request comes in for foo.com -> the WS7u2 instance hosting foo.com is configured to run as a reverse proxy. This server sends any incoming request to one of the four back-end Web Server 7 instances handling your traffic.
    Hardware load balancing (this you need to invest in):
    A request comes to the hardware load balancer that answers for foo.com -> it sends the requests to the four WS7 servers hosting your application.
    You could try out how software load balancing works for you before you invest in hardware load balancing.
    Here are more instructions on configuring WS7 + reverse proxy (software load-balancing configuration):
    - install ws7 on foo.com
    - create a new configuration (choose port 80, disable java
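    As a rough sketch of the two ideas above (hypothetical host names, plain Java, not a Web Server 7 configuration): round-robin DNS hands clients the full set of A records behind one name, while a software reverse proxy simply rotates over its backend pool.

```java
import java.net.InetAddress;
import java.util.concurrent.atomic.AtomicInteger;

// Two illustrative pieces (hypothetical host names):
//  1. round-robin DNS as seen from a client: one name resolves to several A records;
//  2. the trivial rotation a software reverse proxy performs over its backends.
public class RoundRobinDemo {

    private static final String[] BACKENDS = {
            "afp1.example.com", "afp2.example.com", "afp3.example.com", "afp4.example.com"};
    private static final AtomicInteger NEXT = new AtomicInteger();

    // Pick the next backend in strict rotation, wrapping around at the end.
    static String nextBackend() {
        return BACKENDS[Math.floorMod(NEXT.getAndIncrement(), BACKENDS.length)];
    }

    public static void main(String[] args) throws Exception {
        // 1. What round-robin DNS gives a client: all addresses behind one name.
        for (InetAddress addr : InetAddress.getAllByName("afp.example.com")) {
            System.out.println("A record: " + addr.getHostAddress());
        }
        // 2. What a reverse proxy does internally: rotate over the backend pool.
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + nextBackend());
        }
    }
}
```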

  • LRT224 Load Balancing and Link Failover

    Hi, I am new to this forum. I have recently set up the LRT224 with two different ISPs. I am having problems configuring Load Balance and Link Failover.
    When I have Load Balance selected, only one ISP (WAN1) is active; the other (WAN2, ISP modem) remains inactive. Why is Load Balance only engaging one ISP?
    When I have Link Failover selected, even with attempts and seconds configured to one second, it doesn't switch over to WAN2 when WAN1 loses packets.
    I am not tech savvy, but any help will be greatly appreciated so that I can get both ISPs active with Load Balance, or at least have Link Failover work almost instantly. Thanks.

    Hi @BSue2015,
    If both WAN1 and WAN2 are already getting IP addresses from your ISPs, then we can say that Load Balance is working. To check it further, do a speed test at http://www.speedtest.net. Dual WAN connections double the number of available full-speed connections thanks to the load balancing, so throughput should stay at its maximum even if you have several users on the network.

  • Geo load balancing and scaling

    I am looking at the options for geographic load balancing and  thinking of setting up 2 or 3 regions initially.
    Is there any data available for each of the regions in terms of latency for different countries/locations? I want to pick a number of key sites that give me the best coverage for the areas where I am seeing the most traffic in Google Analytics.
    So, as examples, I want data to answer questions such as:
    Would Texas give me significant gains in both North and South America? Would any areas not benefit?
    Considering Australia, India and that part of the world generally, would East/South Asia be best?
    How does that fare for Japan?
    etc. etc.
    Secondly I want to auto scale. How does auto scaling work with the geographic load balancing?
    What would be great is if I could scale all the way up or down (to nothing) based on a schedule in each region. So a fixed European instance , and scaling up on other continents during fixed time schedules. e.g. when India wakes up spin up a service in Asia.

    Useful, but I'm not sure that answers my question fully.
    Firstly, it is for storage, but I guess it would be a useful proxy measure.
    Am I right in thinking that, based on the below, it will be testing from one of the endpoints below rather than from my current location?
    It would be useful if the latency test could be switched to be run from any hosting location.
    "AzureSpeed.com is running in both Azure East Asia and West US data centers, with following 2 website endpoints
    azurespeedeastasia.azurewebsites.net ( Deployment in East Asia )
    azurespeedwestus.azurewebsites.net ( Deployment in West US )"
    Could I assume that, because Microsoft has opted for the above, my best coverage globally would be to use East Asia and West US?
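    If it helps, a small client-side probe in the spirit of azurespeed.com can be run from any location you care about. It assumes the two endpoints quoted above answer plain HTTPS GETs, and the timing includes connection and TLS setup, so it is only a rough comparison.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

// Rough latency probe run from wherever you execute it (your office, or a VM in
// a candidate region). Endpoints are the two quoted in the post; HTTPS is assumed.
public class RegionLatencyProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        List<String> endpoints = List.of(
                "https://azurespeedeastasia.azurewebsites.net",
                "https://azurespeedwestus.azurewebsites.net");
        for (String url : endpoints) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(url)).build();
            long start = System.nanoTime();
            HttpResponse<Void> resp = client.send(req, HttpResponse.BodyHandlers.discarding());
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%s -> HTTP %d in %d ms%n", url, resp.statusCode(), ms);
        }
    }
}
```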

  • Dual WLAN links with load balancing and failover

    Hello,
    I am in a scenario where I need two WLAN links between two buildings. The distance is 100-150 meters and the minimum bandwidth required for both links together is 300 Mbit/s. The thing is that both links should load balance between them, and if one of them goes down, the remaining one should act as failover.
    I have been looking at the Cisco Aironet 1550 Series, though I have no idea what is needed to get load balancing and failover to work, so I am looking here for suggestions on what equipment is needed.
    Something like this:
                  ---------------WLAN Link 150-300Mbit/s-----------
    Building                    Load balancing and fail over               Building
                  ---------------WLAN Link 150-300Mbit/s-----------
    Thanks in advance!

    Several points.
    When an AP is doing 300 Mbps, that's NOT the real throughput you get. It's the data rate at which traffic is sent.
    All in all, if your AP/client are doing a 300 Mbps association, you will see at most about 150 Mbps on a file transfer.
    From there, I'm not even sure that 11n supports dual spatial streams over such long distances (you can't have multipath in open air), so AFAIK the 1550 only does a 150 Mbps association rate (= dual channel with one spatial stream). That means about 75 Mbps of real speed.
    I couldn't test a 1550 yet, so don't take my word as an official statement, but that's what I'm thinking.
    The wireless links will always both be up, and they can be on different channels.
    That will then mean that it will be "as if" the remote switch were connected directly to the central switch (where the WLC is connected), as the WLC tunnels traffic all the way. So you could do a spanning-tree config on this one, I guess, to block the port on the remote switch.
    Regards,
    Nicolas

  • Advantages of using a web server between a load balancer and application servers

    I am building out a new weblogic domain.
    I am wondering which one of these configuration to go with:
    1. Load balancer > weblogic servers
    2. Load balancer > web server > weblogic servers
    Could someone tell me what the specific advantages are of having web servers between a load balancer and the application servers (besides caching static content and acting as a proxy)?
    Thanks in advance
    Srini

    Other than hosting the static content, nothing much really.   We have our load balancer go straight to WL for applications without static content and route to web server if there is static content.   Easy enough to do it both ways, best of both worlds.

  • Load balancing and High Availability topology

    Our Forms 6i client-server application currently runs on a Citrix farm of 20 Windows 2000 boxes (IBM blade servers, 2 CPUs and 2 GB of memory).
    The application supports 2000 users.
    We are moving to AS 10g R2 and Forms 10g, and the goal is to use the same hardware, 20 Windows boxes (or fewer), for intranet web deployment.
    What would be our best choices for application load balancing and high availability?
    Hardware load balancer, Web Cache, mod-oc4j? Combinations?
    Any suggestions, best practices, your experience?

    Gerd, I understand that you are running 10g web forms through the browser, but using Citrix for deployment. This means that, in addition to the Application Server and Forms runtime sessions, a separate browser session will be opened for each user. What is the advantage of this configuration?
    Michael, we are aware that Citrix is not supported by Oracle as a deployment platform. That only means that prior to contacting Oracle Support we have to reproduce the problem in a standard environment. It has never been a problem to reproduce the problem :) We have been using Citrix as a deployment platform for Forms 6i client/server for 4 years, but now we are forced to upgrade to 10g.
    We are familiar with the various load-balancing options available. The question is which option is the most "workable" in our case.

  • 2 ISP load balancing and redundancy

    Hello!!
    Our small company has about 40 branches spread across the city. The branches are connected by optic fibre supplied by our ISP, so at the ISP our branches sit in one VLAN. From every branch we created a VPN tunnel to our server room in the central office. The central office is the central point: if the optic line to the central office fails, there are no VPN tunnels and no network for any branch. Moreover, all the traffic goes through the central office.
    Now we have decided to lay one more optic line to our central office, which will increase bandwidth and redundancy.
    Private network topology: there are no default gateways and no IP addresses. For example, at the first branch I can plug a computer directly into the media converter and at the second branch plug another computer into its media converter; after that the two computers are in one network, and I can assign any IP addresses to them.
    What I have: our firewall does enough work already and I don't want to overload it, but we have some free ports in our new Cisco 3750. The question is how to do load balancing and redundancy. Can it do load balancing according to traffic? And how do we load balance incoming traffic? For example, when a connection is established from a branch's router, how will that router choose which line to use? By the way, at all branches we use noisy Cisco 3700 series routers.

    Sorry for bumping a 1-year-old thread.
    We talked to our network provider. They said "these two cables come from two different places, so there is no way to use EtherChannel. You must use an active-standby solution."
    Relying on STP, we just put the two cables into the 3750 stack. But with default STP settings the connection was very unstable, with many packet losses and disconnections. So we found an easy solution with "flex links", making one interface the backup of the other. And only now have I realized that this is not a failover solution: if the network beyond the media converter goes down, the link from the media converter to the switch will still be up.
    What could I do to make our L2 WAN redundant? Are there any additional STP settings?

  • Load balancing and Failover

    Hello,
    We are wondering how load-balancing and failover of tpcall() work with
    WTC:
    The scenario:
    We have one WLS Domain and two Tuxedo Domains. The Tuxedo Domains offer
    the same set of services.
    In the bdmconfig.xml, we specify connection_policy as 'ON_STARTUP' for
    both Remote Tuxedo Domains. We also Import (T_DM_IMPORT) the same
    Tuxedo Service from both Tuxedo Domains.
    Questions:
    1. Is there any load-balancing of the tpcall between the two Domains? If
    so, is it round-robin? If round-robin, what determines the order?
    2. If it is ONLY Failover, what determines the order of the tpcall? And,
    is the Failover automatic? Or do we need to code for retry on failure?
    3. ON_DEMAND vs ON_STARTUP: Does ON_DEMAND drop the connection to the
    remote domain upon tpterm? And does ON_STARTUP use a pool of
    TuxedoConnection objects?
    4. Are there any configuration parameters for
    'max_number-of_connections? What determines how many simultaneous
    connections can be made?
    Thanks,
    Suresh Mohan.

    Hi Suresh,
    The following are my answers to your questions.
    Suresh Mohan wrote:
    Hello,
    We are wondering how load-balancing and failover of tpcall() work with
    WTC:
    The scenario:
    We have one WLS Domain and two Tuxedo Domains. The Tuxedo Domains offer
    the same set of services.
    In the bdmconfig.xml, we specify connection_policy as 'ON_STARTUP' for
    both Remote Tuxedo Domains. We also Import (T_DM_IMPORT) the same
    Tuxedo Service from both Tuxedo Domains.
    Questions:
    1. Is there any load-balancing of the tpcall between the two Domains? If
    so, is it round-robin? If round-robin, what determines the order?
    Yes, there is load balancing between the two remote Tuxedo TDomain gateways.
    The algorithm is random, not round-robin. Over time this should give equal
    opportunities to both remote TDomains.
    >
    2. If it is ONLY Failover, what determines the order of the tpcall? And,
    is the Failover automatic? Or do we need to code for retry on failure?
    The load balancing is always there. The failover is automatic. When a
    connection to a remote TDomain encounters a problem (e.g. network), the remote
    domain will be put on retry-open-connection (in ON_STARTUP) and the load
    balancing will not select it until the connection is re-established.
    However, the tpcall() that encountered the error will not be retried against a
    different destination. It is up to the application to decide whether it
    wants to resend. Any requests issued after the error will not select the
    failed remote TDomain.
    >
    3. ON_DEMAND vs ON_STARTUP: Does ON_DEMAND drop the connection to the
    remote domain upon tpterm? And does ON_STARTUP use a pool of
    TuxedoConnection objects?
    TPTERM() only terminates your application session to WTC. WTC still maintains
    a secured T-session to the remote Tuxedo TDomain. WTC does not use a pool of
    TuxedoConnection objects; the object stored in JNDI refers to WTC.
    >
    4. Are there any configuration parameters for
    'max_number-of_connections? What determines how many simultaneous
    connections can be made?
    No. As described in #3, there is no need for a connection pool in WTC. WTC
    uses the same session and virtual-circuit design concept as Tuxedo TDOMAIN; the
    logical pool is created/destroyed dynamically. That is the reason why you
    can have a lot of TPACALL()s outstanding at the same time. (The limitation is
    the available system resources.)
    >
    >
    Thanks,
    Suresh Mohan.
    Regards,
    Hong-Hsi :-)
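    Since WTC will not resend a failed tpcall() on its own, the retry has to live in application code. Below is a generic sketch of that idea in plain Java, not the WTC JATMI API: callService stands in for whatever wraps your tpcall, and it should only be retried if the request is idempotent.

```java
import java.util.function.Supplier;

// Generic sketch of the application-level retry described above: WTC will not
// resend a tpcall() that failed, so the caller decides whether to issue it again
// (the next attempt is load-balanced away from the failed remote TDomain).
// 'callService' is a placeholder for your own wrapper around tpcall().
public class RetryOnFailure {

    static <T> T callWithRetry(Supplier<T> callService, int maxAttempts) {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return callService.get();          // e.g. wraps tpcall(...) on a WTC connection
            } catch (RuntimeException e) {
                last = e;                          // only retry if the request is idempotent!
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        String reply = callWithRetry(() -> "OK from TOUPPER", 3);  // hypothetical service call
        System.out.println(reply);
    }
}
```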

  • Discussion on load-balance and load-sharing

    Hi, I found an article that discusses the difference between load balancing and load sharing. I think the explanation is pretty good; please see below. But I still have a question: how do we decide which of the two to use in a production environment? Thank you.
    "In short, load balancing tries to distribute traffic evenly over multiple paths, whereas, load sharing intends to do it (for the lack of a better term) equally.  True load balancing is difficult to achieve.  For example, let's say there were two links (100 mbps and 300 mpbs) and a router needed to send out 600 mbps of traffic.  Load balancing would distribute the traffic evenly, sending 300 mbps on each link.  On the contrary, load sharing would divide the traffic equally based on the available resources, sending 200 mbps on the slower link and 400 mbps on the faster one. "

    That's not how Cisco uses the terms, and generically they are often used almost interchangeably.
    Cisco uses load balancing as the catch-all for how a single L3 device routes across multiple paths to the same destination.  Equal metrics or equal actual load distribution are not required.  Most often, load balancing is discussed in the context of ECMP, but unequal-path load balancing is supported by Cisco's proprietary IGPs, such as EIGRP.
    Cisco uses load sharing when multiple paths are used but a single L3 device doesn't normally route across multiple paths, or when multiple L3 devices are involved.  Cisco load-sharing discussions usually revolve around BGP.
    Generically, I would say load balancing has more of a dynamic aspect to it, i.e. something is trying to actively balance traffic across multiple paths, while load sharing might mean multiple paths are utilized but not actively dynamically balanced.
    I'm unsure what your question is regarding a production environment.
