Alteon passive cookie inspection

I've got a WebLogic cluster (8.1 SP3) set up with an Alteon load balancing the connections on the front end. I followed Nortel's WebLogic doc to use passive cookie persistence, so that sessions stick to the same WebLogic server that set the original JSESSIONID cookie; however, it does not appear to be working. I am wondering if anyone else has tried the passive cookie inspection feature of the Alteon 2424 (or equivalent) with the WebLogic cookie and how you fared. The Nortel doc is a bit out of date, and I am hoping it's a simple misconfiguration.

Has anyone tried to load balance their web servers and application servers using the same physical hardware load balancer (Alteon 2424)?
Note: I can provide more detailed information or configuration settings if anyone thinks they know how this can be achieved.
### My Web Tier ###
I am currently using cookie insertion (with the default AlteonP cookie) to load balance the web tier, which consists of three physical web servers, all running iPlanet. The proxy plug-in is configured to point to a virtual IP that fronts the four real application servers. See below.
### My App Tier ###
The application tier consists of four physical application servers, all running WebLogic 8.1 SP4. I am currently using passive cookie inspection to load balance this layer.
### Problem that I am facing ###
Requests to the web tier (via the web VIP) are load balanced correctly.
Requests sent directly to the app tier (via the app VIP) are load balanced correctly.
Requests to the app tier (via the web VIP) are *not* load balanced properly. For example, the load balancer does not insert the AlteonP cookie every time, and it only routes requests to one app server.
### What I think the problem might be ###
The reason why all requests are going to only one app server could have something to do with the connection pooling between the proxy plug-in and the application servers, for example HTTP/1.0 vs. HTTP/1.1 (keep-alive) requests.
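To illustrate that theory, here is a minimal sketch (the VIP address is made up, and this is not the plug-in's actual code): with HTTP/1.1 keep-alive the client side reuses one TCP connection for many requests, so a load balancer that picks a server per connection will send every request to the same place.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch: repeated requests over one kept-alive connection land on one
    // server whenever the LB balances per TCP connection.
    public class KeepAliveDemo {
        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 5; i++) {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://10.0.0.100/ping").openConnection();
                // Keep-alive is the default; forcing "Connection: close" would
                // make the LB pick a server for every request instead:
                // conn.setRequestProperty("Connection", "close");
                System.out.println("HTTP " + conn.getResponseCode());
                conn.getInputStream().close(); // return the connection to the pool
            }
        }
    }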
I currently have no idea why the AlteonP cookie is not being inserted every time.
It would be interesting to know how other WebLogic gurus configure load balancing for their web/application tiers using an Alteon load balancer.

Similar Messages

  • URL-learn cookie stickiness method

    Hello
    In our network we are trying to configure SLB with stickiness based on the passive cookie method on the CSM-S module for the Cat6k.
    The server sets the JSESSIONLIST cookie in the "Set-Cookie" field of the HTTP header. Unfortunately, each time a client accesses the server, the server adds more data to the "Referer" field in the HTTP header, which is placed before the cookie field. Eventually, when the HTTP header grows bigger than 4000 bytes, which is the maximum max-parse-length value for the CSM-S module, the module is unable to correctly stick the session based on the cookie value sent by the client.
    When the server sets the Set-Cookie value in the HTTP header, it also sets a parameter called jsessionid in the URI that has the same value as the JSESSIONLIST cookie. Because of our problem with the long "Referer" field in subsequent client requests, we have tried to configure stickiness based on the URL-Learn method.
    The virtual server is using a sticky group configured as below:
    sticky 2 cookie JSESSIONLIST timeout 30
    cookie secondary jsessionid
    Unfortunately it does not work, and we are wondering why. The configuration guide does not contain much information about this kind of stickiness. We are wondering whether it is a problem for the CSM to stick a session based on the "secondary cookie" when, at the same time, the cookie field is also transmitted in the client requests. We are also wondering whether it is a problem for the load balancer that the jsessionid parameter in the URI follows ";" rather than "?" as in the example in the configuration guide.
    I am attaching an example HTTP GET request from the client (some values were hidden). This trace shows a request with a short "Referer" field, but subsequent packets contain a much bigger one.
    Thanks for any help in advance

    The CSM will look into the URL if it can't find the cookie in the header.
    However, if the header length is too big, the CSM will consider this an error and stop parsing.
    A solution for you is to increase the parse length with a variable:
    gdufour-cat6k-2#sho mod csm 3 var | i PAR
    MAX_PARSE_LEN_MULTIPLIER 1
    It will multiply whatever parse length you have configured.
    Now, you could also change the server's behavior with the Referer.
    Increasing the size of the parsed header will consume bandwidth and reduce performance of the LB and SSL offloader.
    Gilles.
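    For reference, the ";jsessionid=..." form discussed above is standard servlet URL rewriting, which is why the parameter follows ";" rather than "?". A minimal sketch of how a container emits it (the class and paths are hypothetical, and exact container behavior may vary):

        import java.io.IOException;
        import javax.servlet.http.*;

        public class RewriteDemo extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                req.getSession(true);                      // force a session id to exist
                String link = resp.encodeURL("/app/next"); // -> /app/next;jsessionid=ABC123
                resp.getWriter().println("<a href=\"" + link + "\">next</a>");
            }
        }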

  • Questions on replication and h/w load balancer

              Why does the h/w load balancer have to support passive cookies and inspect them to
              dispatch the request to the primary server first? If we have in-memory replication
              and the h/w load balancer just dispatches the HTTP request from the client to any
              of the WebLogic servers in the cluster, wouldn't this work?
              Is it to pin the session to the creator server, to minimize the chance of replication
              misses due to n/w issues, member server slow speed, buffer overwrites, etc.?
              -Shiraz
              

    Yes, and prior to 6.1 (?) if the request showed up at the wrong server it would fail.
              Peace,
              Cameron Purdy
              Tangosol Inc.
              Tangosol Coherence: Clustered Coherent Cache for J2EE
              Information at http://www.tangosol.com/
              "Shiraz Zaidi" <[email protected]> wrote in message
              news:3c15aa10$[email protected]..
              >
              > Why does h/w load balancer have to support passive cookies and inspect
              them to
              > dispatch the request to the primary server first? If we have in-memory
              replication
              > and if h/w loadbalancer just dispatches the http request from the client
              to any
              > of the weblogic servers in the cluster wouldnt this work?
              >
              > Is it to pin the session to the creator server to minimize the chance of
              replication
              > misses due to n/w issues, member server slow speed, buffer overwrite etc.
              >
              > -Shiraz
              

  • 3rd party distributed SW load balancing with In-Memory Replication

              Hi,
              Could someone please comment on the feasibility of the following setup?
              I've started testing replication with a software load balancing product. This
              product lets all nodes receive all packets and uses a kernel-level filter to let
              only one node at a time receive each packet. Since there is at least one heartbeat
              network between the nodes, there are several NICs in each node.
              At the moment it seems like it doesn't work:
              - I use the SessionServlet.
              - With a 2-node cluster, I first have the 2 nodes up and access it with a single
                client; the LB is configured to be sticky wrt. source IP address, so the same
                node gets all the traffic.
              - When I stop the node receiving the traffic, the other node takes over (I changed
                the colours of SessionServlet); however, the counter restarts at zero.
              From what I read of the in-memory replication documentation, I thought it might
              also work with a distributed software load balancing cluster. Any comments on the
              feasibility of this?
              Is there a way to debug replication (in WLS6SP1)? I don't see any replication
              messages in the logs, so I'm not even sure that it works at all. I do get a
              message about "Clustering Services starting" when I start the examples server
              on each node. Is there anything to look for in the console to make sure that
              things are working? The evaluation license for WLS6SP1 on NT seems to support
              In-Memory Replication and Cluster; however, I've also seen a Cluster-II somewhere:
              is that needed?
              Thanks for your attention!
              Regards, Frank Olsen
              

    We are considering Resonate as one of the software load balancers. We haven't certified
              them yet, and I have no idea how long it's going to take.
              As a base rule, if the SWLB can do the load balancing and maintain stickiness, that is fine
              with us, as long as it doesn't modify the cookie, or the URL if URL rewriting is enabled.
              Having said that, if you run into problems we won't be able to support you, since it is not
              certified.
              -- Prasad
              Frank Olsen wrote:
              > Prasad Peddada <[email protected]> wrote:
              > >Frank Olsen wrote:
              > >
              > >> Hi,
              > >>
              > > We don't support any 3rd party software load balancers.
              >
              > Does that mean that there are technical reasons why it won't work, or just that
              > you haven't tested it?
              >
              > > As I said before, I am thinking your configuration is incorrect if in-memory
              > > replication is not working. I would strongly suggest you look at the webapp
              > > deployment descriptor and then the config.xml file.
              >
              > OK.
              >
              > > Also, doing sticky based on source IP address is not good. You should do it
              > > based on passive cookie persistence or active cookie persistence (with cookie
              > > insert, a new one).
              > >
              >
              > I agree that various source-based sticky options (IP, port, network) are not the
              > best solution. In our current implementation we can't do this because the SW load
              > balancer is based on filtering IP packets at the driver level.
              >
              > Currently I'm more interested in understanding whether our SW load balancer
              > can work with your replication at all.
              >
              > What makes me think that it could work is that in WLS6.0 a session that fails
              > over to any cluster node can recover the replicated session.
              >
              > Can there be a problem with the cookies?
              > - are the P/S for replication put in the cookie by the node itself or by the proxy/HW
              > load balancer?
              >
              > >
              > >The options are -Dweblogic.debug.DebugReplication=true and
              > >-Dweblogic.debug.DebugReplicationDetails=true
              > >
              >
              > Great, thanks!
              >
              > Regards,
              > Frank Olsen
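              For anyone finding this later: those debug options are ordinary JVM system
              properties, so (assuming a standard startup; script names vary per install)
              they go on the server's java command line, for example:

                  java -Dweblogic.debug.DebugReplication=true \
                       -Dweblogic.debug.DebugReplicationDetails=true \
                       weblogic.Server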
              

  • LB configuration - Keep Alive.

    I need help configuring proper load balancing between the 2 building modules of the Portal server.
    SetUp:
    Server A - Portal 1,Access Manager 1 , Directory 1
    Server B - Portal 2, Access Manager 2, Directory 2
    Logical Architecture:
                               ,--> Server A
    User Request ----> LB ----<
                               `--> Server B
    I need to configure the LB in a way that it has keep-alive info for Servers A and B. I think there is a configuration required in some xml/property file so that if, let's say, Server A goes down, the LB will automatically direct the next request to Server B.
    Can somebody point me in the right direction?
    Thanks.

    Do you need the SSO session created on Server A to fail over to Server B, or is it OK that the user has to reauthenticate if Server A goes down?
    In any case, you can configure the LB for either so-called active cookie based persistence (the LB inserts its own cookie) or so-called passive cookie based persistence (the LB uses a predefined cookie). I guess this is what you mean by 'keep-alive info' - correct?
    Of course, this only works if no SSL is used or if SSL is terminated at the LB.
    If SSL is used, the only solution is client-IP based persistence.
    All of the above is only needed to prevent cross-talk between the AM instances.
    -Bernhard

  • How to configure Load Balancer in front of Web Logic Cluster

    hi all,
    I installed 2 WebLogic servers in a cluster and now I want to deploy a hardware load balancer in front of them. I want to know whether I require any configuration on the servers, or whether I can just deploy the hardware load balancer in front of the clustered servers with the round-robin technique.
    Regards,
    imran

    I think there are two important configurations when you use a hardware load balancer in front of a WebLogic cluster.
    1) Passive Cookie Persistence
    You need to configure the hardware load balancer so that it can identify the WebLogic session cookie and route requests to the primary server holding the HTTP session.
    2) External DNS
    If there is a firewall between the hardware load balancer and the WebLogic cluster and NAT (Network Address Translation) is used, then you need to configure the "External DNS" name for each WebLogic server in the cluster. You need to specify the hostname used by the load balancer in the "External DNS" setting.
    More details about this are available at:
    http://edocs.bea.com/wls/docs92/cluster/load_balancing.html#wp1026940
    http://e-docs.bea.com/wls/docs92/cluster/planning.html#wp1088950
    Hope this will help...
    Jayesh
    Yagna Sys
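    As a rough illustration of point 1: a WebLogic session cookie value takes the form sessionid!primary_jvmid_hash!secondary_jvmid_hash, and passive cookie persistence means the load balancer keys on the primary field. A minimal sketch of the parsing involved (the cookie value below is made up):

        // Hedged sketch: split a WebLogic-style session cookie into its parts.
        public class JSessionIdParser {
            public static void main(String[] args) {
                String jsessionid = "Axyz123!-1038789442!1122334455"; // hypothetical
                String[] parts = jsessionid.split("!");
                System.out.println("session id: " + parts[0]);
                System.out.println("primary   : " + parts[1]); // LB routes on this
                if (parts.length > 2) {
                    System.out.println("secondary : " + parts[2]);
                }
            }
        }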

  • How do you stop unauthorized cookies from appearing in Safari?

    Hi ,
    I'm using Safari 5.1.10 and system 10.6.8. I've gotten all the security downloads available, but I seem to be having issues with unauthorized cookies appearing. These appear even though I've not visited their websites, and have Safari set to accept cookies only from sites I've visited.
    After going to Preferences:Privacy: remove all website data: then remove all cookies,
    If I just wait a few minutes, I get 72 website cookies restored to my computer, without doing anything. These include cookies from google, alibaba, 2mdn.net, facebook, microsoft, oracle and many more. Some of these declare they are using local storage, others the cache, while others just declare themselves as cookies.
    These appear in spite of the fact that I have the preferences set to block cookies from third-party advertisers, have Extensions set to OFF, have Javascript enabled, and allow Java but deny all other plug-ins.
    If I uncheck the allow Java button in Preferences:Security, then 11 of these cookies sneak back in, but the others seem to be blocked. Those that come back include Alibaba, apple, google-analytics, "local documents on my computer", machine-seeker, wikipedia, and a few others.
    If I disable JavaScript in Preferences:Security, now I get only cookies from sites I've visited, as I'm supposed to, according to the settings in my Safari preferences.
    So it seems that some unscrupulous information collectors are collecting data even when the Safari settings should prohibit it. Unfortunately, some of the sites I visit (like Apple Support Communities) require that Javascript be enabled, so I don't know how to stop this.
    The problem is that I've found these unwarranted cookies appear to slow down my internet connection speeds by ~95% (try removing them and disabling Javascript to see what happens), in addition to being an invasion of my privacy. In addition, it really bothers me that some of these sites are storing local documents on my computer without permission.
    As I've said, I've already installed ALL the pertinent security updates. Does anybody have any idea how to stop this from happening? I presume this is also happening on my iPhone and iPad as well, but I haven't checked.
    I see that Google was sued in 2012 for doing just this same thing to Safari users, but they appear to be up to their old tricks, as well as many other companies.
    Thanks

    Hi,
    I've investigated this phenomenon of unauthorized cookies a bit more in the past few days and found that its causes and uses go very deep down the internet rabbit hole. While most browsers allow the user to delete cookies, or to block cookies from third parties, third parties may place cookies or "cookie equivalents" on your computer through a large variety of back doors. The most pernicious type of such cookie is euphemistically called a "zombie cookie" or a "supercookie".
    These may reside in a number of places, either on your own computer or remotely on the web. Deleting zombie cookies or supercookies is generally ineffective, because they are reinstalled in your browser, or worse, they just exchange information with your browser without leaving a trail of cookie crumbs the next time you get online. Some of these zombie cookies are not browser specific, so they can be accessed through all browsers on your computer.
    The reason that you may never have heard of supercookies, and the reason they are so hard to find and get rid of, is that their deployment is deliberately sneaky and designed to evade detection and deletion. This means that most people who think they have cleared their computers of tracking objects have likely not. The European Union has recently taken action to make illegal the emplacement of "non-essential" cookies on your computer, but the United States, being less concerned about your personal privacy, and more concerned about making it easy for companies (and the government) to eavesdrop, has not.
    The following is a list (probably incomplete) of the places where zombie cookies may be hiding on your computer:
    Standard HTTP cookies
    Storing cookies in and reading out web history
    Storing cookies in HTTP ETags
    Internet Explorer userData storage (starting with IE9, userData is no longer supported)
    HTML5 Session Storage
    HTML5 Local Storage
    HTML5 Global Storage
    HTML5 Database Storage via SQLite
    Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
    Local Shared Objects
    Silverlight Isolated Storage
    Cookie syncing scripts that function as a cache cookie and respawn the MUID cookie
    If a user is not able to remove the cookie from every one of these data stores, then the cookie will be recreated in all of these stores on the next visit to the site that uses that particular cookie, or in some cases on the next visit to the internet, even though you may have barred 3rd-party cookies from being placed in your browser. Every company has its own implementation of zombie cookies and most are kept proprietary, although an open-source implementation of zombie cookies, called Evercookie, is available and commonly used.
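    To make one of the items above concrete, here is a hedged sketch of ETag-based respawning (the servlet is purely illustrative, not taken from any real tracker): the server plants an identifier in the ETag response header, and the browser echoes it back in If-None-Match on later visits, even with cookies blocked. A real deployment would also answer 304 Not Modified to keep the cached copy alive.

        import java.io.IOException;
        import java.util.UUID;
        import javax.servlet.http.*;

        public class EtagTracker extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String id = req.getHeader("If-None-Match"); // returning visitor?
                if (id == null) {
                    id = UUID.randomUUID().toString();      // first visit: mint an id
                }
                resp.setHeader("ETag", id);                 // browser caches and replays it
                resp.getWriter().println("tracked as " + id);
            }
        }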
    One such common type of supercookie is the Local Shared Object (LSO), more commonly called a Flash cookie (due to its similarities with HTTP cookies): pieces of data that websites which use Adobe Flash may store on a user's computer. Local shared objects are used by all versions of Adobe Flash Player and version 6 and above of Macromedia's now-obsolete Flash Player.
    It is possible to see who is using Flash cookies on your computer (and remove them) by going to the Adobe website storage settings panel: http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager07.html. The Settings Manager figure that you see on this page is not an image; it is the actual Settings Manager for your computer. Click the tabs to see different panels, and click the options in the panels to change your Adobe Flash Player settings.
    So far, I have not been able to find a method of removing or inhibiting zombie cookies that use HTML5 local or global storage locations. Some browsers may provide such power, but Apple Safari apparently does not.
    For more information on supercookies see:
    https://www.bestvpn.com/blog/8177/super-cookies-flash-cookies/
    There are some ways to reduce your load of unwanted cookies and local storage  type cookies using  extensions such as AdBlock or Disconnect,  But I've tried some of these and it doesn't seem to  stop very many of them, even though the Disconnect extension is said to block over 2000 of these types of  cookies.
    For those who are trying to ride under the radar by using some of these extensions or software blockers, be aware that using them may actually make you more visible because of browser fingerprinting. Whenever you visit a website, your browser sends data to the server hosting that site. This data includes basic information, including the browser name, operating system, and exact version number of the browser. This information is known as a passive browser fingerprint because it is sent automatically. Websites can also easily run scripts that ask for additional information, such as a list of all installed fonts and plugins, supported data types (so-called MIME types), screen resolution, system colors and much more. Because this information has to be solicited from your browser, it is known as active fingerprinting. Taken altogether, the various fingerprint attributes can be almost instantly (it takes just a few milliseconds to run algorithms that compare millions of fingerprints) combined to create a unique fingerprint that can be used to very accurately identify an individual user, no matter whether cookies have been deleted or the IP address changed between website visits.
    For an article on browser fingerprinting, See : https://www.bestvpn.com/blog/8159/browsers-fingerprint-reduce/
    The bottom line is that if you use the internet, your browser history is being tracked by a myriad of companies and government agents, and it is likely not possible to stop this. For those who work in science, industry or government and are working on sensitive topics or novel product development that another company or government may find interesting, there appear to be many ways to recreate what you are working on by studying your browser history, or by installing worms to view exactly what you are writing or reading. It came as somewhat of a shock to me to see just how pervasive internet spying has become, and it's not just malicious or destructive agents who are doing so. Google didn't become a $350 billion company by simply bringing us nice toys to play with. The real value of the internet comes from the trade of secretly obtained personal information about you and me and everyone else, and its sale to all who will pay for it.

  • GMail is telling me to reset cookies, and I can't access tools... is this safe, and how do I access my tools to change it? Thanks much.

    I have a gmail account. When I try to pull up my mail page, it tells me there is a problem and to enable cookies, but the instructions are for Firefox 3.2. "Allowing third-party cookies":
    "Even if you already have cookies enabled, third-party cookies need to be enabled as well for some Google settings (for example, the SafeSearch lock) to work. The default setting in Safari is not to allow third-party cookies, but following the steps above sets Safari to allow them "
    Is this safe, and how can I get to it if it is safe? I cannot find the Tools menu at the top of the page, and when I do see it (when I go to Toolbar options), it does not open the Tools-Options area I need. I never had a problem until I tried to log on last night... it worked fine till then. Please help... thanks so much.

    Check the date and time in the clock on your computer: (double) click the clock icon on the Windows Taskbar.
    Check out why the site is untrusted and click "Technical Details" to expand this section. If the certificate is not trusted because no issuer chain was provided (sec_error_unknown_issuer), then see if you can install this intermediate certificate from another source.
    You can retrieve the certificate and check details like who issued certificates and expiration dates of certificates.
    *Click the link at the bottom of the error page: "I Understand the Risks"
    Let Firefox retrieve the certificate: "Add Exception" -> "Get Certificate".
    *Click the "View..." button and inspect the certificate and check who is the issuer of the certificate.
    You can see more Details like intermediate certificates that are used in the Details pane.
    If "I Understand the Risks" is missing then this page may be opened in an (i)frame and in that case try the right-click context menu and use "This Frame: Open Frame in New Tab".
    Note that some firewalls monitor (secure) connections and that programs like Sendori or FiddlerRoot can intercept connections and send their own certificate instead of the website's certificate.

  • How can I add a blocking cookie to keep by Blogger stats feature from counting my own page views? Blogger has a pop-up menu that promises to do this, but it has not worked.

    This is the message from the Blogger stats pop-up:
    "Don't track your own page views: You can tell Blogger not to include your own pageviews in its stats. To do this, Blogger must add a blocking cookie to your browser."
    I click on the "Don't track my pageviews" option and click "save." However, my own pageviews are still being tracked. Can I do this operation myself?
    Regards — Ed

    Hi ben_burrowes, the disallow future dialogs checkbox appears on normal sites, no add-on required, but apparently some sites have found a way around it.
    In a thread about a different inescapable website problem a user suggested gutting the page using the Inspector. I haven't tested it on the kind of page you're encountering, but here's how you would do it: right-click a blank area near the margin of the page, and choose Inspect Element (Q). This should open a panel with some element of the page's HTML code selected. You're looking for the <body> tag, scroll up if necessary, then right-click it and choose Delete Node. You also could delete the <head> node. There's a good chance this will cripple the script that's preventing you from leaving the page. Obviously not an approach you would want to have to use often, but you could practice it for potential emergencies.

  • Both the download and cookie exception dialogs are empty. I know there are entries there but I can't see them. This has been a problem since release 5.0 on Mac OSX.

    They're empty, no entries. When I download a file I can see the download progress but once the download is finished the entry disappears.
    The Cookie Exceptions list is always empty. But I know the list contains entries because I have Firefox configured to ask me about allowing cookies. So there should be many entries with sites either allowed, allowed for session, or denied.

    I had the same issue: Exception list empty, but exceptions still present.
    I've inspected my permissions.sqlite file and found an entry with permission set to -1. After changing this to 1, all exceptions were visible again.
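    For anyone who wants to script that fix, here is a hedged sketch using the sqlite-jdbc driver (the moz_hosts table and permission column are assumptions for this Firefox generation; back up the file and close Firefox first):

        import java.sql.*;

        public class FixPermissions {
            public static void main(String[] args) throws SQLException {
                // Path to the profile's permissions.sqlite is hypothetical.
                String db = "jdbc:sqlite:/path/to/profile/permissions.sqlite";
                try (Connection c = DriverManager.getConnection(db);
                     Statement s = c.createStatement()) {
                    int n = s.executeUpdate(
                        "UPDATE moz_hosts SET permission = 1 WHERE permission = -1");
                    System.out.println(n + " rows fixed");
                }
            }
        }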

  • I send a cookie to the server, but it doesn't come back.

    Hey all:
    I'm attempting to send a cookie from an application to the servlet, and then pull that cookie back out of the servlet's response. I want to do this whole process repeatedly while retaining the same cookie.
    Pulling the cookie out on the first request to the servlet works fine, because I do not send a cookie to the servlet and the servlet instantiates a new session. Next, the cookie is sent back to the servlet successfully, because it is printed out on the server side exactly as it appeared on the client side. However, after the servlet responds to that second request, and on subsequent requests, I can not pull the cookie out again: the method call connection.getHeaderField("Set-Cookie") returns null. I've even inspected the connection object, and it seems empty of any cookies whatsoever. Why doesn't the cookie come back on requests after the initial request? Is there anything I can do to make it come back?
    Thanks a ton in advance. Have great day!!!

    You must check that the cookie path set by Tomcat is the one expected by your browser; otherwise the browser won't send it back to the server. I had the same problem because Tomcat sets the cookie path to "/webAppName" and the browser expected it to be "/".
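    On the original question itself: a server only sends Set-Cookie when it creates the session, so getHeaderField("Set-Cookie") returning null on later responses is expected. The client has to capture the cookie once and replay it, roughly like this sketch (the URL is hypothetical):

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class CookieClient {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://localhost:8080/app/servlet");

                HttpURLConnection first = (HttpURLConnection) url.openConnection();
                String setCookie = first.getHeaderField("Set-Cookie"); // JSESSIONID=...; Path=/app
                String cookie = setCookie.split(";", 2)[0];            // keep just name=value
                first.getInputStream().close();

                HttpURLConnection second = (HttpURLConnection) url.openConnection();
                second.setRequestProperty("Cookie", cookie); // replay; no new Set-Cookie expected
                second.getInputStream().close();
                System.out.println("replayed: " + cookie);
            }
        }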

  • Firefox 18.0.1 no longer accepts cookies from wells fargo

    Firefox 18.0.1 does not accept cookies when I open the online site. If I go to Tools/Options and click OK, then the cookies are accepted and I can log on. I have "Keep until I close Firefox" checked. This problem does not occur on other sites, like AMEX for example.

    Clear the cache and the cookies from sites that cause problems.
    "Clear the Cache":
    *Tools > Options > Advanced > Network > Cached Web Content: "Clear Now"
    "Remove Cookies" from sites causing problems:
    *Tools > Options > Privacy > Cookies: "Show Cookies"
    *http://kb.mozillazine.org/Cookies
    *http://kb.mozillazine.org/Websites_report_cookies_are_disabled
    Start Firefox in Safe Mode to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
    *Do NOT click the Reset button on the Safe mode start window or otherwise make changes.
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes
    You can inspect and manage the permissions for all domains on the about:permissions page.
    *https://support.mozilla.org/kb/how-do-i-manage-website-permissions
    You can remove all stored data from a specific domain via "Forget About This Site" in the right-click context menu of a history entry (Show All History or the History sidebar) or via the about:permissions page.
    Using "Forget About This Site" will remove everything from that domain: bookmarks, cookies, passwords, cache, history, and exceptions. So be cautious, and if you have a password or other data from that domain that you do not want to lose, make a note of those passwords and bookmarks first.
    You can't recover from that "forget" unless you have a backup of the affected files.
    It doesn't have any lasting effect, so if you revisit such a 'forgotten' website then data from that website will be saved once again.

  • Every time I change my privacy settings they never stay, which is an issue because I can't use Gmail since my cookie settings never stay on.

    I only want to use Firefox, but for the longest time I've had to keep Chrome as a backup browser, because that is the only way I can view my Gmail account. I want to view it in Firefox, but a cookies issue won't let me. Every time I go into my privacy settings and change them to custom, the change never stays: I will change it, immediately go back to the privacy settings, and notice that my last changes haven't been saved. I reset Firefox to factory settings, and also started Firefox in safe mode to see if add-ons were causing an issue (they weren't, as add-ons are disabled in safe mode and Gmail still wouldn't let me in...). How can I make my privacy settings stay permanently on my custom settings, please? I would very much like to get rid of Chrome and use only Firefox.

    You can try to delete the cookies.sqlite and permissions.sqlite files in the Firefox profile folder.
    You can use this button to go to the currently used Firefox profile folder:
    *Help > Troubleshooting Information > Profile Directory: Show Folder (Linux: Open Directory; Mac: Show in Finder)
    *http://kb.mozillazine.org/Profile_folder_-_Firefox
    You can inspect and manage the permissions for the domain in the currently selected tab via these steps:
    *Click the "Site Identity Button" (globe/padlock) on the location bar
    *Click "More Information" to open "Tools > Page Info" with the Security tab selected
    *Go to the Permissions tab (Tools > Page Info > Permissions) to check the permissions for the domain in the currently selected tab
    Start Firefox in Safe Mode to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
    *Do NOT click the Reset button on the Safe Mode start window.
    *https://support.mozilla.org/kb/Safe+Mode
    *https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes

  • Passivation problem when jbo.doconnectionpooling=true

    Using JDev 10.1.3.4 with JSF/ADF BC. We frequently see the row currency error due to the fact that our ps_txn table 'fills up' very quickly. When the rows in ps_txn are deleted, manually or via the BC4J SQL script, the row currency problem goes away. We've scheduled the BC4J SQL script to run once daily, when there are no users on the system. However, this doesn't seem to be enough to keep the number of rows in ps_txn to a minimum.
    As I understand it after reading the documentation and various threads in this forum, that SQL script is really designed to clean out rows in ps_txn that are not cleaned up automatically by the BC4J mechanism, due to something like an unexpected app server shutdown.
    According to the documentation - "Under normal circumstances, the ADF state management facility provides automatic cleanup of the passivation snapshot records. When a passivation record is saved to the database on behalf of a session cookie, as described above, this passivation record gets a new, unique snapshot ID. The passivation record with the previous snapshot ID used by that same session cookie is
    deleted as part of the same transaction. In this way, assuming no server failures, there will only ever be a single passivation snapshot record per active end-user session."
    Our app module is configured using jbo.doconnectionpooling=true. For testing purposes, I've created a two-page application. The 'first' page simply has a button which navigates to a second page, in which a table of data is displayed via a read-only view object. I've found that by simply navigating back and forth between these two pages, a new row is written to ps_txn each time I navigate between the two pages in the same session. I'm positive that I'm the only user on the system during testing, so I know that these rows being added to ps_txn cannot be the result of another user using the system at the same time as me. I've found that after just several minutes of bouncing around in the application, as many as a hundred rows can be inserted into the ps_txn table. This is with just one user in the application. Obviously, with multiple users in the application at the same time, the ps_txn table is filling up way too fast, as it seems that the 'built in' ADF mechanism which is supposed to perform automatic cleanup isn't working properly. Therefore, we frequently encounter the row currency exception because of the number of rows in ps_txn.
    I mentioned that our app module is configured with the jbo.doconnectionpooling property set to 'true'. This is because our priority is to keep the number of connections to a minimum. However, for testing purposes, I set that property to false to see the behavior. With that property set to false, bouncing back and forth between the two pages mentioned above doesn't ever result in a row being written to the ps_txn table.
    Does anyone have any ideas as to why the ps_txn table is filling up so fast in the above scenario when jbo.doconnectionpooling=true? The automatic cleanup mechanism of the adf framework does not seem to be functioning properly. Thanks for any help on this.
    Edited by: user8881206 on May 6, 2010 6:41 AM

    I wanted to update this thread with some more findings. I still need help in figuring out why the passivation/activation mechanism is not deleting records from ps_txn in the same user session.
    I followed Didier's advice (Passivation table ps_txn not being cleaned up) and tested the activation/passivation in the Business Component Browser. This seemed to work fine, as I could see that a row was written/passivated to ps_txn when I selected Save Transaction State, and when I selected Restore Transaction State from the menu that row was deleted from ps_txn.
    I've also overridden the activateState and passivateState methods in the app module to see if they were invoked as I ran my application:
    protected void activateState(Element element) {
        System.out.println("activate state called");
        super.activateState(element);
    }
    protected void passivateState(Document document, Element element) {
        System.out.println("passivate state called");
        super.passivateState(document, element);
    }
    When I run the application, I can see that both these methods are being invoked, but the passivated row(s) are not being deleted from ps_txn for my user session. The passivation continues to write new rows to ps_txn for my session without deleting any of the other rows from the same session. This results in the table filling up way too fast and ultimately causing the row currency issue. Does anyone have any ideas about what's causing multiple rows to be written to ps_txn for the same user session? Thanks for any help.

  • ACE FTP inspect with port range

    Hi everyone,
    I have a problem with passive FTP and a fixed port range.
    I configured an FTP server with a fixed port range of 60000 - 60500 for the data channel.
    The ACE is configured with "inspect ftp" on the policy of the ftp-serverfarm.
    In a tcpdump on the server I can see that the server uses the port range in the response packet:
    (x,x,x,x,234,195) = 234*256 + 195 = 60099
    But on the client I can see that the port in the packet has been changed to another port. The ACE sits between the server and the client.
    On CCO I found a document "http://www.ciscosystems.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_7_/command/reference/policy.html#wp1006925" ->> Enables FTP inspection. The ACE inspects FTP packets, translates the address and the port that are embedded in the payload, and opens up a secondary channel for data.
    I don't understand why the ACE changes the port in the FTP payload.
    Is it possible to configure the same port range on the ACE for the connection to the client?
    Thanks
    René

    You don't need "inspect ftp" with a single server, because you can avoid it altogether.
    You can, for example, configure a loopback on the server with the VIP address and configure the serverfarm as transparent on the ACE.
    Then, for the data channel, since your range of ports is quite small, you can catch it with a class-map and simply forward it to the server.
    This way, the server will use the VIP address in all packets exchanged with the client (no need to NAT the payload), and when the client opens a data connection, the traffic is matched by the class-map and the connection can be forwarded to the server using the same transparent serverfarm.
    There is less chance of running into a compatibility issue.
    Better performance, too, since we can switch traffic without inspecting its content.
    Gilles.
