Amount of Data in the session

Hi,
I would like to know the maximum amount of data that can be kept in a session. How does it impact performance?
Thanks,
Simi.

Hi,
Actually, I believe that if we are speaking about multiple users, each with their own HttpSession, then when two users access the same attribute name in their own sessions, the cache keys actually used will not be the same.
On the other hand, this is probably not what you really want; you may well want to share that data among sessions.
You should probably consider using either read-through caching, with the CacheLoader implementation doing the expensive data retrieval (if the data to be cached can be obtained outside of an HTTP container), or side caching, using Coherence locks or entry processors for concurrency control on the data retrieval operations for the same key (take care of retries in this case). A rough sketch of the read-through option follows below.
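As an illustration of the read-through option, here is a minimal CacheLoader sketch; the class name, the String key type and the loadProductFromDatabase helper are hypothetical stand-ins for your own data access code:

    import com.tangosol.net.cache.CacheLoader;

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    public class ProductCacheLoader implements CacheLoader {

        // Called by Coherence on a cache miss; the expensive retrieval runs
        // once per key, and later gets for the same key are served from the cache.
        public Object load(Object key) {
            return loadProductFromDatabase((String) key);
        }

        // Bulk variant used for getAll(); a per-key loop is enough for a sketch.
        public Map loadAll(Collection keys) {
            Map results = new HashMap();
            for (Object key : keys) {
                results.put(key, load(key));
            }
            return results;
        }

        // Hypothetical helper standing in for the real (expensive) retrieval.
        private Object loadProductFromDatabase(String key) {
            return null; // placeholder
        }
    }

The loader is then wired to the cache in the cache configuration (via a cachestore-scheme inside a read-write-backing-map-scheme), so a plain cache.get(key) triggers it on a miss.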
Best regards,
Robert

Similar Messages

  • How to reset the data when the session is lost?

    When I use response.sendRedirect("http://localhost:82/main.jsp")
    the session is lost, because the URL before the redirect uses SSL,
    "https://localhost:81/index.jsp", and they use different ports,
    so the session is lost. What should I do to prevent this? And how
    do I reset the data?

    Create a HashMap and store it as an attribute of the servlet context. In the first servlet, assign the user a key and store all session data in the HashMap under that key (use a Vector, a collection, or a user-defined class).
    In the response.sendRedirect call add the key to the url:
    response.sendRedirect(url + "?key=" + userKey);
    In the second servlet, get the key (request.getParameter("key")), retrieve the session data from the HashMap in the servlet context, and store it in an HttpSession.
    Remember to delete the entry from the HashMap so it doesn't become overly large.
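    A rough sketch of that hand-off, assuming plain servlets; the "sessionDataStore" attribute, the servlet class names and the "userData" session attribute are all made up:

        import java.io.IOException;
        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        import javax.servlet.ServletContext;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class FirstServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                ServletContext ctx = getServletContext();
                // Lazily create the shared map that survives the redirect.
                Map<String, Object> store = (Map<String, Object>) ctx.getAttribute("sessionDataStore");
                if (store == null) {
                    store = new ConcurrentHashMap<String, Object>();
                    ctx.setAttribute("sessionDataStore", store);
                }
                // Park the user's session data under a generated key...
                String userKey = UUID.randomUUID().toString();
                store.put(userKey, req.getSession().getAttribute("userData"));
                // ...and carry the key across the protocol/port change.
                resp.sendRedirect("http://localhost:82/main.jsp?key=" + userKey);
            }
        }

        class SecondServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
                ServletContext ctx = getServletContext();
                Map<String, Object> store = (Map<String, Object>) ctx.getAttribute("sessionDataStore");
                // Remove the entry so the map does not grow without bound.
                Object data = store.remove(req.getParameter("key"));
                req.getSession(true).setAttribute("userData", data);
            }
        }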

  • I want to get data in the session

    I have created :
    - login.jsp
    - login.java {get and set on the UIcomponents} duration=session
    I want to create another Java class that can access the components in the session.
    How can I do this?
    Thank you for your help.

    Hi,
    if I understood your question correctly, you can do something like this:
    FacesContext context = FacesContext.getCurrentInstance();
    context.getExternalContext().getSessionMap().get("[nameOfTheComponent]");
    The nameOfTheComponent is, for example, the name of a managed bean as you declared it in faces-config.xml.
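    As a slightly more complete sketch, you can wrap the lookup in a small helper; the bean name "login" in the usage line is hypothetical, and you would cast the result to your own managed-bean class:

        import javax.faces.context.FacesContext;

        public class SessionBeanLookup {
            // Looks up a session-scoped managed bean by the name declared in faces-config.xml.
            public static Object findSessionBean(String managedBeanName) {
                FacesContext context = FacesContext.getCurrentInstance();
                return context.getExternalContext()
                              .getSessionMap()
                              .get(managedBeanName);
            }
        }

    Usage would then be something like: Object loginBean = SessionBeanLookup.findSessionBean("login");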
    Hope this helps you.

  • Storing the data in the session

    Hi all,
    I want to know how to store some values (which I get from the back end - R/3) in the session, so that they are available throughout the user's session.
    If anyone has done this before, please help me.
    Regards,
    Narahari

    Hi Narahari,
    You can try out the following links. They may be helpful (see also the sketch below)...
    EP - problem when store data in session
    Storing data across calls
    Re: How set a SESSION variable?
    Re: http session that's driving me crazy
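    If a plain HttpServletRequest/HttpSession is available in your component, the generic servlet-API idea is roughly the following sketch (the "r3Values" attribute name is made up, and SAP EP also has its own session APIs, which the threads above discuss):

        import java.util.Map;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;

        public class R3SessionHelper {

            // Store values fetched from R/3 so they survive for the rest of the user's session.
            public static void storeValues(HttpServletRequest request, Map<String, String> r3Values) {
                HttpSession session = request.getSession(true);
                session.setAttribute("r3Values", r3Values);
            }

            // Read them back on a later request; returns null if nothing was stored.
            @SuppressWarnings("unchecked")
            public static Map<String, String> readValues(HttpServletRequest request) {
                HttpSession session = request.getSession(false);
                return session == null ? null : (Map<String, String>) session.getAttribute("r3Values");
            }
        }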
    Regards
    gEorgE

  • Pipelined function with large amounts of data

    We would like to use pipelined functions as the source of our select statements instead of tables. That way we can easily switch from our own tables to structures holding data from an external module, which we need for integration with other systems.
    We know these functions are used in situations such as data warehousing to apply multiple transformations to data, but what will the performance be like for real-time access?
    Does anyone have experience using pipelined functions with large amounts of data in interface systems?

    It looks like you have already determined that the datatable object will be the best way to do this. When you are creating the object, you must enter the absolute path to your spreadsheet file. Then, you have to create some type of connection (i.e. a pushbutton or timer) that will send a true to the import data member of the datatable object. After these two things have been done, you will be able to access the data using the A3 - K133 data members.
    Regards,
    Michael Shasteen
    Applications Engineering
    National Instruments
    www.ni.com/ask
    1-866-ASK-MY-NI

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items at a rate of 50 ms, another about 40 items with a 100 ms publishing rate.
    I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250 ms. But this data is not seen in my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can the DSS be unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Professional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into deep trouble.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Dealing with large amounts of data

    Hi
    I am new to using Flex and BlazeDS. I can see in the FAQ that binary data transfer from server to Flex app is more efficient. My question is: is there a way to build a Flex databound control (e.g. a datagrid) which binds to a SQL query, web service, or remoting on the server side and then displays an effectively unlimited amount of data as the user pages or scrolls? Or does the developer have to write code from scratch to deal with paging, or with an infinite scrollbar, by asking the server for chunks of data at a time?

    You have to write your own paginating grid. It's easy to do: just make a canvas, throw a grid and some buttons on it, and when the user clicks to the next page, make a request to the server and, once you have the result, set it as the new data model for the grid.
    I would discourage you from returning an unbounded amount of data to the user; provide search functionality plus pagination.
    Hope that helps.

  • How can I edit large amount of data using Acrobat X Pro

    Hello all,
    I need to edit a catalog that contains a large amount of data - mainly the product prices. Currently I can only export the document to an Excel file and then paste the new prices into the catalog using Acrobat X Pro one by one, which is extremely time-consuming. I am sure there's a better way to make this faster while keeping the data accurate. Thanks a lot in advance if anyone's able to help!

    Hi Chauhan,
    Yes I am able to edit text/images via the toolbox, but the thing is the catalog contains more than 20,000 price entries and all I can do is delete the original price info from the catalog and replace it with the revised data from Excel. Repeating this process over 20,000 times would be a waste of time and manpower... Not sure if I have made my situation clear enough? Please just ask away, I really hope to sort it out. Thanks!

  • Working with large amount of data

    Hi! I am writing a peer-to-peer video player and I need to operate on huge amounts of data. The downloaded data should be stored in memory for sharing with other peers. Some video files can be 2 GB and more, so keeping all this data in RAM is not the best solution I think =)
    Since the Flash Player does not have access to the file system, I cannot save this data in temporary files.
    Is there a solution to this problem?

    No ideas? very sad ((

  • Sorting large amounts of data with treemap

    Hello. I'm doing a project where I have to sort a large amount of data. Each record consists of a unique number and a location (a string).
    Something like this
    NUMBER .... CITY
    1000123 BOSTON
    1045333 HOUSTON
    5234222 PARIS
    2343345 PARIS
    6234332 SEATTLE
    I have to sort the data by location and then by unique number...
    I was using a TreeMap to do this: I used the location string as the key - since I wanted to sort the data by that field - but, because the location string is not unique, inserting into the TreeMap overwrites the entry with the same location string, keeping only the last one that was inserted.
    Is there any Collection that implements sorting in the way that I need?... or, if there isn't such a thing... is there any collection that supports duplicate keys???
    Thanks for your time!
    Regards
    Cesar

    ... or use a SortedSet for the list of numbers (as the associated value for the location key). Something like this:
    void addTuple(Map<String, SortedSet<Integer>> map, String location, Integer number) {
       SortedSet<Integer> numbers = map.get(location);
       if (numbers == null) {
          numbers = new TreeSet<Integer>();
          map.put(location, numbers);
       }
       numbers.add(number);
    }
    kind regards,
    Jos
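    A self-contained variant of that idea, just as a sketch (the class name is made up and the sample values are taken from the question): the TreeMap keeps the locations sorted and each TreeSet keeps the numbers for one location sorted.

        import java.util.Map;
        import java.util.SortedSet;
        import java.util.TreeMap;
        import java.util.TreeSet;

        public class LocationIndex {

            // location -> sorted set of unique numbers seen for that location
            private final Map<String, SortedSet<Integer>> byLocation =
                    new TreeMap<String, SortedSet<Integer>>();

            public void addTuple(String location, Integer number) {
                SortedSet<Integer> numbers = byLocation.get(location);
                if (numbers == null) {
                    numbers = new TreeSet<Integer>();
                    byLocation.put(location, numbers);
                }
                numbers.add(number);
            }

            public void printAll() {
                // Locations come out alphabetically; numbers ascending within each location.
                for (Map.Entry<String, SortedSet<Integer>> entry : byLocation.entrySet()) {
                    for (Integer number : entry.getValue()) {
                        System.out.println(number + " " + entry.getKey());
                    }
                }
            }

            public static void main(String[] args) {
                LocationIndex index = new LocationIndex();
                index.addTuple("BOSTON", 1000123);
                index.addTuple("HOUSTON", 1045333);
                index.addTuple("PARIS", 5234222);
                index.addTuple("PARIS", 2343345);
                index.addTuple("SEATTLE", 6234332);
                index.printAll();
            }
        }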

  • JSP and large amounts of data

    Hello fellow Java fans
    First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
    Right now my software development team is working on a web tool for a very important microchip manufacturing company. This tool handles big amounts of data; some of our online reports generate more than 100,000 rows which need to be displayed in a web client such as Internet Explorer.
    We make use of Infragistics, which is a set of controls for .NET. Infragistics allows me to load data fetched from a database on a control they call UltraWebGrid.
    Our problem comes up when we load large amounts of data into the UltraWebGrid; sometimes we have to load 100,000+ rows, and during this loading our IIS server's memory gets exhausted and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already established that the database server (SQL Server) is not the problem; our problem is the IIS web server.
    Our team is now considering migrating this web tool to Java and JSP. Can you all help me with some links, information, or past experiences you all have had with loading and displaying large amounts of data like the ones we handle on JSP? Help will be greatly appreciated.

    Who in the world actually looks at a 100,000 row report?
    Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea... I would write a program that, once a day, week, year, or whatever your time period is, produces the report (maybe as a PDF, but you could do it in HTML if you really must have it that way) and serves it as a static file that you link to from your app.
    Then the user will have to just wait while it downloads but the webserver or web applications server will not be bogged down trying to produce that monstrosity.

  • Querying large amounts of data

    Suppose I want to store an amount of data that is too large to keep in memory only (several GBs or TBs). The data needs to be highly available and fault tolerant, and a user would need to query the data based on some criteria. Coherence would fit these requirements perfectly except for the fact that you would need to keep all data in memory before being able to query it using the Filter API.
    As I see it, the only possible way to make this happen is to store all this data in a database via a write-behind scheme and query the DB instead of the cache. This would make the DB the single source of data instead of the cache. However, this would also seriously diminish the usefulness of the cache, as you would now need to make sure the DB is replicated and fault tolerant. You wouldn't even need the cache anymore, as its only purpose would be to pass the data to the DB via the write-behind scheme.
    Is there another way to query the data, not using a DB query but using the Coherence Filter API, while still storing the data somewhere outside the JVM's memory?
    Best regards
    Jan

    Hi Jan,
    three things come to my mind regarding your post:
    1. Write-behind is not usable if you want the DB to be the system of record. With write-behind the cache is the system of record and not the backing storage. Will the data change? If yes, how frequently? What system changes the data?
    2. To query a cache holding that much data with the Filter API, you practically have to have all your data indexed, as query response times would be too high if even a subset of your data had to be deserialized (see the sketch after this list). But indexes always have to reside on the heap. So the question is how complex your queries would be: the more complex the queries, the more indexes you are likely to need.
    3. You can query only a single cache with a Coherence filter. Would this be sufficient for your querying needs?
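    To illustrate point 2, here is a rough sketch of adding an index and then querying with a filter; the cache name "orders" and the getCity accessor on the cached objects are made up:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.extractor.ReflectionExtractor;
        import com.tangosol.util.filter.EqualsFilter;

        import java.util.Set;

        public class IndexedQueryExample {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("orders");

                // Build an index on getCity() so the filter below does not have to
                // deserialize every entry; the index itself lives on the heap of the
                // storage-enabled members.
                cache.addIndex(new ReflectionExtractor("getCity"), false, null);

                // A filter query always targets a single cache.
                Set entries = cache.entrySet(new EqualsFilter("getCity", "PARIS"));
                System.out.println("Matching entries: " + entries.size());
            }
        }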
    Best regards,
    Robert

  • ERROR MESSAGE WHEN RETRIEVING AND DISPLAYING LARGE AMOUNTS OF DATA

    Hello,
    I am querying my database (MySQL) and displaying my data in a
    DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is
    small. But when I have a large amount of data I get the following
    error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before an acknowledge was
    received'
    Note that my DataGrid is populated when I run the query on my
    server but does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using remote object services,
    using a component (ColdFusion as the destination).

  • How to reduce the run time of ABAP code (BADI) when it is reading huge data from BPC Cube and thus writing  back huge data to the cube.

    Hi All ,
    In case of reading a huge number of records from the BPC cube in BADI code, performing calculations, and writing a huge amount of data back into the cube, it takes a lot of time. If there is any suggestion for how to read the data from the cube or write data into the cube using some parallel processing methods, then please suggest.
    Regards,
    SHUBHAM

    Hi Gersh ,
    If we have a specific server, say 10.10.10.10 (abc.co.in), on which we are working, then under RZ12 we make the following entry:
    LOGON GROUP          INSTANCE
    parallel_generators        abc.co.in_10         (let's assume the instance number is 10)
    Now in SM59 under ABAP Connections , I am giving the following technical settings:
    TARGET HOST          abc.co.in
    IP address                  10.10.10.10
    Instance number          10
    Now if we have a scenario of load-balancing servers with the following server details (with all servers on different instance numbers):
    10.10.10.11
    10.10.10.13
    10.10.10.10
    10.10.10.15
    In this case, how can we make the RZ12 and SM59 settings such that we don't have to hardcode any IP address?
    If the request is redirected to 10.10.10.11 and not to 10.10.10.10, how should the settings be in that case?
    I have raised this question on the below thread :
    How to configure RZ12 and SM59 ABAP connection settings when we have to work with load balancing servers rather than a specific server.
    Regards,
    SHUBHAM

  • Exchange 2010 DAG Replication - too much data crossing the wire

    I’m replicating 3 Exchange databases from our production active Exchange 2010 server across the WAN to another passive Exchange 2010 server at our DR site. 
    The Exchange server at the DR site does not have any active databases, i.e. no users are hitting that server. 
    We are running Update Rollup 8 for Exchange Server 2010 SP1 on both these servers. 
    The two sites are connected via a 10Mb/s MPLS connection and all the databases are in sync and the replication is working fine. 
    I have setup a network sniffer at the primary site and see a sizable amount of data crossing the wire from the production Exchange server to the DR Exchange server. 
    When I query the production Exchange server using the Tracking Log explorer and only choose the EventID ‘RECEIVED’ that should show me the amount of data that has been committed to the database. 
    If I choose a date range that is the same exact range that I have used to capture the raw data with my sniffer the amount of data the sniffer shows crossing the wire is 10 fold compared to what the Tracking Log Explorer shows. 
    If I actually count up the data in the LOG files it is about 20% more than what is crossing the wire but that seems to be because the DAG is compressing the data. 
    If in a one hour timeframe there is 500MB of data crossing the wire to the DR Exchange server the tracking log explorer will show only 50MB. 
    I would like to know why the data crossing the wire far exceeds the amount of data that is truly being sent/received from the primary exchange server. 
    Perhaps I’m simply not getting a true view of the amount of data being committed to the exchange server using the Tracking Log Explorer. 
    Maybe there is a better way to report how much data is being committed to the exchange databases. 
    Any assistance would be appreciated…

    Two things.
    1 - You must update Exchange to SP3 and a recent RU. Willard has already provided the links, which point back to my blog if you want to see the lifecycle map for Exchange 2010. SP1 has been out of support since January 2013. Time to move on, please.
    2 - Looking at the tracking log is not sufficient.  I would not expect that to show everything.
    I want to know what traffic you see as excessive.  What ports are you seeing used here?
    My money is on content indexing.  CI will use additional traffic over and above log repl traffic.  Expect CI traffic to be roughly the same again.  You can test this by disabling CI on the database or stopping the services on the DR server. 
    To disable the CI for the database:
    Set-MailboxDatabase DBName -indexEnabled $False
    Or stop the Exchange search services on the DR box to leave production unaffected.
    Again - you need to update Exchange. You would be better off doing that now rather than when something breaks and Microsoft support cannot fully assist you since you are not up to date. I'll leave aside discussion of the security issues resolved in recent Exchange RUs.
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
