Portal Session Memory Consumption

Dear All,
I want to see the memory consumption of user sessions for Portal 7.0, i.e. if a Portal user opens a session, how much memory is consumed by him/her. How can I check this? Is there any default value associated with this?
Also, does the backend system's memory load get added to the portal's consumption, or does it count against that specific backend system?
Thanks in advance,
Vinayak

I'm seeing the exact same thing with our setup (it's essentially the same
as yours). The WLS 5.1 documentation indicates that Java objects that
aren't serializable aren't supported with in-memory replication. My
testing has indicated that the <web_context>._SERVLET_AUTHENTICATION_
session value (which is of class type
weblogic.servlet.security.ServletAuthentication) is not being
replicated. From what I can tell in the WLS 5.1 API Javadocs, this class
is a subclass of java.lang.Object (it doesn't mention Serializable) as of
SP9.
When <web_context>._SERVLET_AUTHENTICATION_ doesn't come up in the
SECONDARY cluster instance, the <web_context>.SERVICEMANAGER.LOGGED.IN
gets set to false.
I'm wondering if WLCS3.2 can only use file or JDBC for failover.
Either way, if you learn anything more about this, will you keep me
informed? I'd really appreciate it.
Hi,
We have clustered two instances of WLCS in our development environment, with
the properties file configured for "in memory replication" of session data. Both
instances come up properly and join the cluster properly. But the problem is
with the in-memory replication: it looks like the session data of the portal is
not getting replicated.
We tried the simplesession.jsp in this cluster and its session data is properly
replicated.
So the problem seems to be with the session data put in by the Portal
(and that is the reason why I am posting it here). Every time one of the
instances serving the request is removed, the "logged in" check fails. Is
there a known bug/patch for the session data serialization of WLCS? We are using
3.2 with Apache as the proxy.
Your help is very much appreciated.--
Greg
GREGORY K. CRIDER, Emerging Digital Concepts
Systems Integration/Enterprise Solutions/Web & Telephony Integration
(e-mail) gcrider@[NO_SPAM]EmergingDigital.com
(web) http://www.EmergingDigital.com
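
As a general note for anyone hitting the same failover symptom: the rule both posts circle around is that any object placed in the session must implement java.io.Serializable for WLS in-memory replication to copy it to the secondary instance. A minimal sketch of a replication-safe session attribute (class and attribute names are illustrative, not the portal's internals):

import java.io.Serializable;
import javax.servlet.http.HttpSession;

// Illustrative value object: because it implements Serializable, WLS
// in-memory replication can copy it to the secondary cluster instance.
// A non-serializable attribute (like the ServletAuthentication object
// discussed above) cannot be replicated and comes up missing on failover.
public class LoginState implements Serializable {
    private static final long serialVersionUID = 1L;
    private final boolean loggedIn;

    public LoginState(boolean loggedIn) { this.loggedIn = loggedIn; }
    public boolean isLoggedIn() { return loggedIn; }

    // Typical usage from a servlet or JSP:
    public static void remember(HttpSession session, boolean loggedIn) {
        session.setAttribute("loginState", new LoginState(loggedIn));
    }
}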

Similar Messages

  • Portal Session Timeout and Logon Ticket Timeout

    Hi All,
    Can anyone give me answers to the following:
    - If my Portal session times out, but my logon ticket is still valid, will I lose my session data?
    - Is there any way of determining the size of a user's session information in memory (or the size of all user sessions in memory)? I can see the number of sessions in the Monitoring service in Visual Admin, but not their individual or total size.
    I'm using EP7.
    Cheers,
    Steve

    Hi,
    the Logon Ticket is only used for SSO between the portal and the integrated system. Your session data is stored in the session. If the session times out or gets closed, the session data is lost.
    br,
    Tobias

  • Memory consumption when computer locked?

    I noticed something strange with my app. If I leave it open and switch to the login screen, its memory consumption rises up to a gigabyte. I discovered this in Activity Monitor when logging back in. But just after I logged back in, the process's working set size began slowly going back to normal; no leaks are reported, and the app works just fine.
    Other apps don't show this. Besides an ordinary Cocoa GUI, my app makes use of multithreading, sockets and webcam capture (Sequence Grabber).
    It looks like there's something specific to the fast user switching feature that I don't know about; maybe some buffer is filled without limit until there's a chance to display, or something.
    Does anyone have an idea what it could be?

    Another point that I wanted to mention...
    As I mentioned, we are looping with our application through a result set and "processing" each record. If we simply disconnect the sqlca object (the transaction object the PowerBuilder application uses to connect to the database) and then simply re-connect, say, every 100 records or so, the problem goes away. We simply disconnect, re-connect, and pick up at the point where we left off. This shows me the memory gets flushed every time the session is disconnected.
    This is the effect that I want... for the memory to be flushed every so many records, so it can continue looping through each record in the resultset as if it were doing the first one each time. I understand there may be a performance impact as it flushes the memory for each record (or every hundred or so), but I'm willing to sacrifice that to keep it from running out of memory altogether.
    I'd appreciate feedback on this point.
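
    The same pattern can be sketched in JDBC terms (purely illustrative; the original uses PowerBuilder's sqlca transaction object, and the URL, table, and column names here are assumptions): reconnect every N records and resume where you left off, so session-side memory is released on each disconnect.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BatchProcessor {
        private static final int RECONNECT_EVERY = 100; // flush session memory every ~100 records

        public static void main(String[] args) throws Exception {
            String url = "jdbc:example://host/db"; // hypothetical JDBC URL
            long lastId = 0;
            boolean more = true;
            while (more) {
                // Each pass opens a fresh session; closing it afterwards releases
                // the session-side memory, which is the effect described above.
                try (Connection con = DriverManager.getConnection(url, "user", "pw");
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT id, payload FROM records WHERE id > ? ORDER BY id")) {
                    ps.setLong(1, lastId);
                    try (ResultSet rs = ps.executeQuery()) {
                        int n = 0;
                        more = false;
                        while (rs.next()) {
                            process(rs.getString("payload")); // per-record work
                            lastId = rs.getLong("id");        // remember where we left off
                            if (++n >= RECONNECT_EVERY) {     // time to disconnect and re-connect
                                more = true;
                                break;
                            }
                        }
                    }
                }
            }
        }

        private static void process(String payload) { /* ... */ }
    }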

  • ITS 620 Template Cache - unlimited memory consumption

    Hi,
       We have patched our standalone ITS 620 to patch level 23 to resolve some issues we had with the display of ITS screens in the Mozilla browser. However, we are now encountering memory issues where the Agate process is consuming much more than is allocated via the threads and sessions values.
       With patch 23 we can now see the template cache value, and it consumes all the free physical memory on the machine. The "Cachesize" parameter is set to a much lower value than this.
       Can anyone tell me:
    1. Is this behaviour correct for this cache?
    2. How do we disable the cache?
    3. How do we restrict the size of the cache?
       Thanks.
    /regards,
    Conor.

    Hi Conor,
    ITS development is not aware of memory issues with PL 23.
    The memory the template cache allocates does not depend on the "Cachesize" parameter. This parameter is for RFC connections of flowlogic services.
    If after PL 23 you have Mozilla users you didn't have before, this would explain the increase in template cache memory consumption: for each browser and language, the amount of memory in the template cache increases.
    You could certainly disable the template cache, but this would hurt performance (switch the parameter StaticTemplates to 0 in ITS Admin - <Your_Instance> Configuration - Performance). On each template access, the template would then have to be parsed again. This switch is intended for a development environment only.
    The best you can do is to set up the ITS server on a 64-bit OS with 64-bit ITS 6.20 executables. On 64-bit you can forget about memory issues due to 32-bit address space limitations.
    If this is not possible, you have to reduce the memory by:
    - forwarding requests with specific languages to specific ITS instances
    - forwarding requests from browsers like Mozilla to specific ITS instances
    - checking peak values in ITS Admin Overview and tuning the parameters MaxSessions and MaxWorkthreads accordingly. Be careful!
    SAP note 720428 gives you advice about the most important ITS 6.20 parameters.
    Best regards,
    Klaus

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
    The intent is to verify that a modification of the code in this area does not introduce a regression (raise) of memory consumption.
    I would include it in the nightly build and monitor the evolution of the summary figure (a-ha, the "userAccount" test suite consumed 615Mb last night, compared to 500Mb the night before... What did we check in yesterday?)
    Running on Win32, the system-level info on memory consumed is known not to be accurate.
    Using perfmon is more accurate but seems overkill - plus it's difficult to automate, since you have to attach it to an existing process...
    I've looked at the hprof included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

    > However this requires manual code in my unit test classes themselves,
    > e.g. in my setUp/tearDown methods. I was expecting something more
    > orthogonal to the tests, that I could activate or not depending on the
    > purpose of the test.
    Some IDEs display memory usage and execution time for each test/group of tests.
    > If I don't have another option, OK, I'll wire my own pre/post memory
    > counting, maybe using AOP, and will activate memory measurement only
    > when needed.
    If you need to check the memory used, I would do this. You can do the same thing with AOP, but unless you are already using an AOP library, I doubt it is worth the additional effort.
    > Have you actually used your suggestion to automate memory consumption
    > measurement as part of daily builds?
    Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different. I have more tests which fail if the execution time is significantly different.
    Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
    > Plus, I did not understand your suggestion, can you elaborate?
    > - I first assumed you meant freeMemory(), which, as you suggest, is not
    > accurate, since it returns "an approximation of [available memory]"
    freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
    > - I re-read it and now assume you do mean totalMemory(), which
    > unfortunately will grow only when more memory than the initial heap
    > setting is needed.
    More memory is needed when more memory is used. Unless your test uses a significant amount of memory there is no way to measure it reliably, i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
    > - Eventually, I may need to include calls to System.gc(), but I seem to
    > remember it is best-effort only (endless discussion) and may not help
    > accuracy.
    If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
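
    Putting the pieces of this thread together, here is a minimal sketch of the wrapper approach (the class name, the stand-in test body, and the baseline figure are illustrative assumptions, not a canonical recipe):

    public class MemoryCheckExample {
        // Used heap = total heap minus free heap, as discussed above.
        private static long usedMemory() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        // Wrapper around the real test: measure before and after, then compare
        // against a baseline. GC + yield first is best-effort but helps marginally.
        public static void main(String[] args) {
            System.gc();
            Thread.yield();
            long before = usedMemory();

            byte[][] work = realTest(); // the memory-sensitive operation under test

            System.gc();
            Thread.yield();
            long consumed = usedMemory() - before;
            System.out.println("allocated " + work.length + " blocks, consumed ~"
                    + consumed / 1024 + " KB");
            if (consumed > 150L * 1024 * 1024) { // hypothetical baseline: fail on regression
                throw new AssertionError("memory regression: " + consumed + " bytes");
            }
        }

        private static byte[][] realTest() {
            // Stand-in for the real test body; the returned reference keeps the
            // allocation alive so the GC cannot collect it before we measure.
            return new byte[100][1024 * 1024];
        }
    }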

  • How to set a variable in portal session using web dynpro java.

    Hi,
    I have created a Web Dynpro application which is running inside the portal. I have created a role called "R1". Inside role R1 I have created 3 worksets W1, W2 and W3, and inside each workset I have some pages and iViews.
    My requirement is: when a user logs in to the portal and clicks on role R1 for the first time, a login page should come up (so that we can do revalidation), and only when he enters his password again on that login page should worksets W1, W2 and W3 become visible/accessible to him. After successful revalidation, if he clicks on role R1 again within that particular portal session, the login page should not come up again.
    For this, I thought I would set a variable in the portal session whenever the user successfully revalidates himself. If he clicks on role R1 again after successful revalidation, I will check in the doInit method of the Web Dynpro whether the variable is set (which I already set on successful revalidation); if it is set, I will do the navigation, else I will present the login page to the user.
    Can anyone tell me how to set a variable in the portal session using Web Dynpro Java?
    thanks
    Arush

    Hi,
    Try this:
    WDScopeUtil.put(WDScopeType.CLIENTSESSION_SCOPE, key, value)
    WDScopeUtil.get(WDScopeType.CLIENTSESSION_SCOPE, key)
    Ex:
    WDScopeUtil.put(WDScopeType.CLIENTSESSION_SCOPE,"Key1","Value1");
    String value1=WDScopeUtil.get(WDScopeType.CLIENTSESSION_SCOPE,"Key1").toString();
    /people/william.cui/blog/2007/02/12/sharing-session-context-between-parent-and-external-windows-running-on-same-host
    Regards,
    Charan

  • Can portal session cookies be used between two data centers

    OAS generates the following header and session information for my application. However, when I need to fail over from the originating OAS datacenter to my hot stand-by for maintenance or upgrades, the OAS in the other datacenter responds with a 503 web error. We are using Akamai's GTM to manage the liveness of the datacenters, so we would need the hot stand-by OAS portal in that datacenter to return a 302 (redirect) code instead. Is there some method that we can add to our portal application which would always return a 302 code?
    See header information collected through wfetch. The 503 error is caused by the hot stand-by data center not accepting or recognizing the cookie. Both OAS datacenters are IDENTICAL in Oracle levels, application levels, web servers, portals and OS patches.
    resolve hostname "170.107.183.32"
    WWWConnect::Connect("170.107.183.32","80")
    source port: 2182\r\n
    GET /portal/pls/portal/PORTAL.wwsec_app_priv.login?p_requested_url=%2Fportal%2Fpls%2Fportal%2FPORTAL.home&p_cancel_url=%2Fportal%2Fpls%2Fportal%2FPORTAL.home HTTP/1.1\r\n
    Accept: */*\r\n
    Accept-Language: en-us\r\n
    Accept-Encoding: gzip, deflate\r\n
    User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.0.3705; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)\r\n
    Host: www.thomson-pharma.com\r\n
    Connection: Keep-Alive\r\n
    Cookie: ORA_WX_SESSION="10.225.8.30:80-1#2"; portal=9.0.3+en-us+us+AMERICA+3D66674E7EED0801E04400144F41424E+BBAA98EEB32D58C086231A8D6CBE2E5D402D89B0E79D83A18C668BB0CA7417B4044DEA389C8B50DD37D9272A24B4753B22F29978861DE14503F8B9BEDC2014654B26A434CF074F4D8749B88610ADADF5084A90ADBF749E2A; DATACENTER=EAGAN\r\n
    \r\n
    HTTP/1.1 503 Service Unavailable\r\n
    Cache-Control: private\r\n
    Content-Type: text/html\r\n
    Set-Cookie: ORA_WX_SESSION="10.237.138.33:80-1#2"\r\n
    Set-Cookie: portal=; expires=Wednesday, 27-Dec-95 05:29:10 GMT; path=/\r\n
    Connection: Keep-Alive\r\n
    Keep-Alive: timeout=5, max=999\r\n
    Server: Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server OracleAS-Web-Cache-10g/10.1.2.0.2 (N;ecid=208440262161,0)\r\n
    Content-Length: 710\r\n
    Date: Fri, 26 Oct 2007 14:58:07 GMT\r\n
    \r\n
    Thanks -John

    Hi John,
    This question is probably more appropriate in one of the Portal forums, but perhaps you can take a look at the information in section C.5 Configuring the Portal Session Cookie in Appendix C of the Portal Configuration guide.
    Here is a link: http://download.oracle.com/docs/cd/B14099_19/portal.1014/b19305/cg_app_c.htm#sthref1907
    Regards,
    Peter

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014.
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB of RAM in my system and, regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
    To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk.  Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle with this (for now).
    The program retrieves from the database table using a where clause filtering only to a single value company code.  Kindly note that company code is not the only key in the table.  In order to help with the memory consumption, the original developer had employed retrieval by packages (also note that the total length of each record is 1803...).
    The problem encountered is as follows:
    - Using company code A, retrieving for 700k entries in packages of 277, the program ran without any issues.
    - However, using company code B, retrieving 1.8m entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump. This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
    About the only biggest difference between the two company codes is the number of corresponding records they have in the table.  I've checked if company code B had more values in its columns than company code A.  However, they're just the same.
    What I do not quite understand is why memory consumption changed just by changing the company code in the selection.  I thought that the memory consumed by both company codes should be the same... at least, in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records all at once.  However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
    with FAE (FOR ALL ENTRIES) the whole query, even for a single record in the itab, is executed, and all results for the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI, not by the database.
    If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota), and from there the packages are built and handed over to your application (into table lt_temp).
    See the recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as normal internal tables. The system table is always required if the addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
    What you should do:
    Calculate the size needed for your big company code B: the number of rows multiplied by the line length. That is the minimum amount you need for your user memory quota (quotas can be checked with ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE:
    SELECT * FROM <custom table>
      INTO TABLE lt_temp
      FOR ALL ENTRIES IN lt_bukrs
      WHERE bukrs = lt_bukrs-bukrs
      ORDER BY PRIMARY KEY.
    This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES. Since with FAE the result is buffered in the DBI anyway (and subtracted from your quota), you can do it right away and avoid holding portions twice (once in the DBI buffer and a portion of that again in the package in lt_temp).
    If the amount of memory is still too big, you have to either increase the quotas, select less data (additional WHERE conditions), or avoid using FAE in this case in order to not read all the data in one go.
    Hope this helps,
    Hermann
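
    To put numbers on this sizing advice using the figures from the question (record length 1803 bytes), the result set the DBI has to buffer works out to roughly:

    company code A:   700,000 rows x 1,803 bytes = ~1.3 GB
    company code B: 1,800,000 rows x 1,803 bytes = ~3.2 GB

    which would explain why company code B exceeds the user memory quota at the very first SELECT while company code A does not.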

  • Portal Session content in BSP

    Hello,
    My BSP inside an iView is located in the xRPM content. What I am now trying to achieve is to hand over the chosen project ID from the portal to the BSP iView.
    request->get_cookies provides me with the cookie "sap-appcontext", which I think contains the portal session ID. Since BSP has its own session ID, I somehow have to get at the session content of the portal, where, I hope, the chosen project ID is stored.
    Am I on the right track?
    I would appreciate any help or suggestions.
    Thanks in advance.
    Daniel

    <i>How can I register such an event inside xRPM-content without modifying the standard-source? Or how can I find out, if there's an event fired I could use?</i>
    I see no real easy way. I have never seen the source code of xRPM, and know no developers inhouse to ask directly. Effectively portal eventing is just JavaScript code. To see if xRPM supports any events, why not look at the outputted HTML. Look for anything to do with EPCM, or search for events.
    <i>It's hard to understand, why it's not possible to access session-information of the portal from a BSP started in the same page...</i>
    What session information exactly do you wish to know? From my understanding, you have a BSP running inside the portal, and inside this BSP application you wish to know something. I assume not exactly session IDs, but some other information from the portal. Maybe this helps us.
    At the low level of HTML it might be easier to understand. The portal renders the HTML page, and then starts the BSP inside an <iframe>. The BSP does not know anything about the surrounding environment. Of course you could use JavaScript to walk up the DOM (document.parent) and look at things inside the other frames. But keep in mind you are in the browser, and not on the portal server, not in the ABAP stack (where BSP is). So you can only look at rendered code. And this can (and will) change per SP.
    At the end of the day it is all plain HTML in your browser, and this are what sets your limitations.
    brian

  • Integration Builder Memory Consumption

    Hello,
    we are experiencing very high memory consumption in the Java IR designer (not the Directory), especially when loading normal graphical IDoc-to-EDI mappings, but also for normal IDoc-to-IDoc mappings. Examples (RAM on client side):
    - open a normal IDoc-to-IDoc mapping: + 40 MB
    - IDoc to EDI ORDERS D93A: + 70 MB
    - a second IDoc to EDI ORDERS D93A: + 70 MB
    - executing those mappings: no additional consumption
    - a third EDI to EDI ORDERS D93A: + 100 MB
    (all mappings in the same namespace)
    After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory, and then you have to terminate the application.
    Obviously the mapping editor is not very well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a large amount of dedicated functionality for every mapping.
    So we cannot really call that fun. Working is very slow.
    Do you have similar experiences ? Are there workarounds ? I know the JNLP mem setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
    And we are using only graphical mappings, no XSLT !
    We are on XI 3.0 SP 21
    CSY

    Hi,
    Apart from raising the tablespace, see:
    Note 425207 - SAP memory management, current parameter ranges
    You have to configure operation modes to change work processes dynamically using RZ03/RZ04.
    Please see the link below:
    http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
    You can contact your Basis administrator for the necessary action.

  • Check Process memory consumption and Kill it

    Hello
    I have just installed Orchestrator and have a problem that I think is perfect for Orchestrator to handle.
    I have a process that sometimes hangs, and the only way to spot it is that its memory consumption has stopped growing.
    The process is started every 15 minutes and scans a folder; if it finds a file, it reads the file into a system. You can see that it is working by the increasing memory consumption. If the read fails, the memory consumption stops. The process is still running and responding, but is hung.
    I'm thinking about doing a runbook that checks the memory consumption every 5 minutes and compares it with the previous value. If the last three values are the same, I will kill the process and start it again.
    My problem is that I have not found a way to check the memory consumption of a process.
    I have set up a small test, just to verify that I get the correct process, with the activities Monitor Process -> Get Process Status -> Append Line (process name).
    But how do I get the process's memory consumption?
    /Anders

    Now that I think about it a bit more, I don't think there will be an easy way to set up a monitor for your situation in SCOM. Not that it couldn't be done, just not easily. Getting back to SCORCH: what you are trying to do isn't an everyday kind of scenario, and I don't think there is a built-in activity for this.
    The hardest thing to overcome, whether you use SCORCH or SCOM, is likely going to be determining the error condition of three consecutive samples of the same memory usage. You'll need a way to track the samples. I can't think of a good way to do this without utilizing scripting.
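
    Since scripting seems unavoidable here, a minimal sketch of the sampling logic (written in Java purely for illustration; a runbook would more likely call a script activity, and the PID argument, 5-minute interval and restart step are assumptions). It reads the "Mem Usage" column from the Windows tasklist command and flags the process as hung after three identical consecutive samples:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class MemWatch {
        // Working set of one PID in KB, parsed from: "name","pid","sess","1","12,345 K"
        static long workingSetKb(int pid) throws Exception {
            Process p = new ProcessBuilder(
                    "tasklist", "/FI", "PID eq " + pid, "/FO", "CSV", "/NH").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine(); // single CSV row; no header because of /NH
                String mem = line.substring(line.lastIndexOf("\",\"") + 3);
                return Long.parseLong(mem.replaceAll("[^0-9]", ""));
            }
        }

        public static void main(String[] args) throws Exception {
            int pid = Integer.parseInt(args[0]);
            long a = -1, b = -2, c = -3; // last three samples, seeded unequal
            while (true) {
                a = b; b = c; c = workingSetKb(pid);
                if (a == b && b == c) {
                    // Three identical samples in a row: treat the process as hung.
                    System.out.println("PID " + pid + " looks hung; kill and restart it here.");
                    break;
                }
                Thread.sleep(5 * 60 * 1000L); // sample every 5 minutes, as proposed
            }
        }
    }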

  • High memory consumption in XSL transformations (XSLT)

    Hello colleagues!
    We have the problem of a very high memory consumption when transforming XML
    files with CALL TRANSFORMATION.
    Code example:
    CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
                SOURCE XML lx_clause_text
                RESULT XML lx_temp.
    lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML format) and can therefore not easily be split into several parts.
    Unfortunately this string can get very large (e.g. 50MB). The problem is that CALL TRANSFORMATION seems to allocate memory for the source and result xstrings but doesn't free it after the transformation.
    So in this example the transformation would allocate ~100MB of memory (50MB for the source, ~50MB for the result) and not free it. Multiply this by a couple of transformations and a good number of users, and you see we get in trouble.
    I found this note regarding the problem: 1081257
    But we couldn't figure out how this problem could be solved in our case. The note proposes to "use several short-running programs". What is meant by this? By the way, our application is built with Web Dynpro for ABAP.
    Thank you very much!
    With best regards,
    Mario Düssel

    Hi,
    q1: How come the RAM consumption increased to 99% on all three boxes?
    If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., thrice the memory it previously consumed for the cache.
    q2: Where is the cache data stored in the Coherence servers, and in which memory?
    The cache data is typically stored in the JVM's heap memory area.
    Have you reviewed the logs?
    Regards,
    Harv

  • What about session memory when using BEA Weblogic connection pooling?

    Hi,
    consider a web application allowing database connections via a BEA WebLogic 8.1 application server. The app server is pooling the Oracle connections. The Oracle database is running in dedicated server mode.
    How are the database requests from the web app served by the connection pool from BEA?
    1) Does one Oracle session serve more than one request simultaneously?
    2) Does BEA serialize the requests, which means that a session from the pool is always serving only one request at a time?
    If (1) is true, then what about the session memory of the Oracle sessions? I understand that things like package global variables are stored in this session-private memory. If (1) is true, the PL/SQL programmer has the same situation as when programming an Oracle database in "shared server" mode, that is, he should not use package global variables etc.
    Thankful for any ideas...
    Xenofon

    Xenofon Grigoriadis wrote:
    > Hi,
    > consider a web application, using BEA between client and an Oracle Database
    > (v9i). BEA is pooling the oracle connections. The oracle database is running
    > in dedicated server mode.
    > How are the database requests from the web app being served by the
    > connection pool from BEA?
    > 1) Does one oracle session serve more than one request simultaneously?
    No.
    > 2) Or does BEA serialize the requests, which means that a session from the
    > pool is always serving only one request at a time?
    > Reading "Configuring and Using WebLogic JDBC" from the weblogic8.1
    > documentation, I read:
    > "... Your application 'borrows' a connection from the pool, uses it, then
    > returns it to the pool by closing it...."
    > What do you mean by returning the connection by closing it? The server will
    > either return the connection to the pool or close it...
    When application code does typical JDBC work, it obtains a connection via a WebLogic DataSource, which reserves an unused pooled connection and passes it (transparently wrapped) to the application. The application uses it, and then closes it. WebLogic intercepts the close() call via the wrapper, and puts the DBMS connection back into the WebLogic pool.
    > The reason why I, as an Oracle programmer, ask this is because every session
    > (=connection) in Oracle has its own dedicated, private memory for things like
    > global PL/SQL variables. Now I want to figure out if you have to be careful
    > in programming your databases when one Oracle session (=connection) is
    > serving many weblogic requests.
    It is serving many requests, but always serially. Do note, however, that we also transparently cache/pool prepared and callable statements with the connection, so repeat uses of the connection will be able to get already-made statements when they call prepareStatement() and prepareCall(). These long-lived statements will each require a DBMS-side cursor.
    > Thankful for any ideas or practical experience...
    Joe
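
    To make the borrow/close cycle described above concrete, here is a minimal sketch of typical client code (the JNDI name and query are illustrative assumptions; try-with-resources is used for brevity even though it postdates the WebLogic 8.1 era):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PoolClient {
        public static void main(String[] args) throws Exception {
            // Look up the pool-backed DataSource (JNDI name is hypothetical).
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myOraclePool");
            // getConnection() "borrows" an unused pooled connection, transparently wrapped.
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT 1 FROM dual");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
            // close() is intercepted by the wrapper: the underlying DBMS connection
            // (and its cached prepared statements) goes back into the WebLogic pool.
        }
    }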

  • How can i view the variables of the session memory

    Hi experts,
    How can I view the variables in session memory? For example, I want to display the variables of the memory area whose ID is 'BULRN' in the ABAP debugger.
    In a program I can use IMPORT ... FROM MEMORY ID to access the session memory, but I don't know the names of the variables stored in my session's memory.

    It's not possible to view this in debug mode.
    SAP memory is a memory area to which all main sessions within a SAPgui have access. You can use SAP memory either to pass data from one program to another within a session, or to pass data from one session to another. Application programs that use SAP memory must do so using SPA/GPA parameters (also known as SET/GET parameters). These parameters can be set either for a particular user or for a particular program using the SET PARAMETER statement. Other ABAP programs can then retrieve the set parameters using the GET PARAMETER statement. The most frequent use of SPA/GPA parameters is to fill input fields on screens.
    SAP global memory retains field values throughout the session:
    SET PARAMETER ID 'MAT' FIELD v_matnr.
    GET PARAMETER ID 'MAT' FIELD v_matnr.
    The parameter IDs are stored in table TPARA.
    ABAP memory is a memory area that all ABAP programs within the same internal session can access using the EXPORT and IMPORT statements. Data within this area remains intact during a whole sequence of program calls. To pass data to a program which you are calling, the data needs to be placed in ABAP memory before the call is made. The internal session of the called program then replaces that of the calling program. The called program can then read from ABAP memory. If control is then returned to the program which made the initial call, the same process operates in reverse.
    ABAP memory is temporary and values are retained within the same LUW:
    EXPORT itab TO MEMORY ID 'TEST'.
    IMPORT itab FROM MEMORY ID 'TEST'.
    Here itab should be declared with the same type and length in both programs.
