How to handle large heap requirement

Hi,
Our application requires a large amount of heap memory to load data into memory for further processing.
The application is load balanced, and we want to share the heap across all servers so that one server can use another server's heap.
Server1 and Server2 have 8 GB of RAM and Server3 has 16 GB of RAM.
If a request comes to Server1 and it needs more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
Is there any mechanism/product that allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?
Thanks,
Atul

user13640648 wrote:
Is there any mechanism/product that allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?

That isn't how you design it (based on your brief description).
For any transaction A you need a set of data X.
For another transaction B you need a set of data Y which might or might not overlap with X.
The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
One can preload the server with this data or do a load on demand.
Once in memory it is cached.
One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
JEE servers normally support this in a variety of forms, but one can custom-code it as well.
JEE servers can also replicate cached data across server instances. Custom code can do this too, but it is more complicated than the custom caching itself.
A load balanced system exists for performance and failover scenarios.
Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.
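A minimal sketch of the load-on-demand cache described above, assuming the hunks of data can be keyed by an ID. The Loader interface and the size cap are illustrative, not from the original post; JEE caches and products like Memcached provide richer, distributed versions of the same idea.

import java.util.LinkedHashMap;
import java.util.Map;

public class DataCache<K, V> {
    private static final int MAX_ENTRIES = 10_000; // unload threshold, tuned to the available heap

    // An access-ordered LinkedHashMap gives a simple least-recently-used unload strategy.
    private final Map<K, V> cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    };

    public interface Loader<K, V> { V load(K key); }

    private final Loader<K, V> loader;

    public DataCache(Loader<K, V> loader) { this.loader = loader; }

    public synchronized V get(K key) {
        return cache.computeIfAbsent(key, loader::load); // load on demand, then serve from memory
    }
}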

Similar Messages

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle a large result set from a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter: if I don't specify it, the query result contains only 100 records by default; if I do specify it, the result is limited to that number.
    Is there any way to get around this row count limit? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model, so that each request only deals with one time "slice" (e.g. an hour, a day, etc.) and provides a scrolling window.  This is commonly how large datasets are dealt with in applications.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
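    A rough JDBC sketch of that time-slice model, fetching one day's window per request (table and column names are made up for illustration):

    import java.sql.*;
    import java.time.LocalDate;

    public class TimeSlicePager {
        // Returns one day's worth of rows; the UI's next/previous controls shift the window.
        public ResultSet fetchSlice(Connection con, LocalDate day) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT ts, value FROM readings WHERE ts >= ? AND ts < ? ORDER BY ts");
            ps.setTimestamp(1, Timestamp.valueOf(day.atStartOfDay()));
            ps.setTimestamp(2, Timestamp.valueOf(day.plusDays(1).atStartOfDay()));
            return ps.executeQuery(); // caller is responsible for closing statement and connection
        }
    }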

  • How to handle large images?

    Hi,
    Does anyone know how to handle big JPEG images (1280*960) so that they can be presented in a MIDlet?
    The problem is that the images require so much memory that they can't be decoded into an Image object with the Image.createImage method. One solution would be to extract the thumbnail from the EXIF headers; unfortunately, images taken with a Nokia 6680, at least, don't contain a thumbnail in the EXIF headers.
    So the only solution seems to be to decode the byte representation of the image and resize it before creating an Image object.
    Does anybody know a library for this, or have tips on where to start?
    Br, Ilpo

    Hi,
    I think it is not possible. My application contains a file browser (which uses JSR-75). The user can use the browser to select an image either from phone memory or from the memory card. After the selection I would like to present the selected image so the user can be sure it is the right one. The selected image is then sent to the server side with some additional data for further processing (but that is another story).
    Now the problem is that, for example, with a Nokia 6680 the user can take images as big as 1280*960 and I can't present them any more because of the memory restrictions. With a 640*480 image there is no problem, because I can create an Image object and then use a simple algorithm to resize the image for presentation.
    Br, Ilpo
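    For the 640*480 case mentioned above, the "simple algorithm" can be nearest-neighbour sampling with MIDP 2.0's getRGB/createRGBImage. A minimal sketch; note it only works once the source Image has been decoded, so it does not solve the 1280*960 decoding problem:

    import javax.microedition.lcdui.Image;

    public class ImageScaler {
        // Nearest-neighbour downscale of an already-decoded Image.
        public static Image scale(Image src, int dstW, int dstH) {
            int srcW = src.getWidth();
            int srcH = src.getHeight();
            int[] srcPx = new int[srcW * srcH];
            src.getRGB(srcPx, 0, srcW, 0, 0, srcW, srcH); // copy source pixels as ARGB
            int[] dstPx = new int[dstW * dstH];
            for (int y = 0; y < dstH; y++) {
                int sy = y * srcH / dstH; // nearest source row
                for (int x = 0; x < dstW; x++) {
                    dstPx[y * dstW + x] = srcPx[sy * srcW + (x * srcW / dstW)];
                }
            }
            return Image.createRGBImage(dstPx, dstW, dstH, false);
        }
    }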

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the File adapter.
    The File adapter uses FCC for conversion.
    Recently our wave 2 products went live and the volume of messages on this interface suddenly increased, so the File adapter is not performing well: PI slows down or frequently disconnects from the file server, and as a result we get either duplicate records in the file or a wrongly formatted file.
    The file size is around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

    Check this Blog for Huge File Processing:
    Night Mare-Processing huge files in SAP XI
    However, you can take a look also to this Blog, about High Volume Messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    PI Performance Tuning Best Practice:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions?

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects that are no longer needed and
    allow them to be garbage collected.
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little with different GCs, notably the parallel GC.
    Regards
    Marius
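    For reference, an illustrative JVM invocation combining the options discussed above (the heap size and jar name are placeholders):

    java -Xmx8g -XX:-UseGCOverheadLimit -XX:+UseParallelGC -jar transform.jar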

  • How to handle large number (7200+) identical HorizontalRule's

    I have an application where performance is becoming an issue. I have 30 VBoxes, each containing identical HorizontalRules; the number of
    HorizontalRules is unknown until render time but can easily be up to 240 per VBox. With 30 VBoxes this results in 7200 HorizontalRules being added to the stage, which causes large memory consumption and poor rendering time on lower-specification machines. To speed this up I have tried using Sprites and graphics.draw to render the lines, but I think the time taken to create the objects is the real problem.
    Is there any way to create one HorizontalRule and add it to the stage multiple times? I know that Flex will remove the child if it is added again with different [x,y] coordinates, so my attempts to do that failed.
    Thanks for any suggestions.

    The VBoxes are typically about 20 pixels wide and each has a line about 4 or 5 pixels apart all the way down its length. That is why there are so many of the little blighters.
    You are probably right, but I had a problem with graphics.draw in that the lines had to be added as rawChildren. The component is resizable and I found it impossible to move the drawn lines: I could not get a good reference to them, even when I stored each line in a separate Array before adding it as a rawChild, so I could not even delete them reliably. I know that the rawChildren and children/elements lists keep things at different indexes, but I didn't manage to find a way to use that information.
    You have given me the idea that I shouldn't draw them all on the screen at once, though. With the full 7,200 lines, only about 30% are on the screen, so I will look into using an item renderer. Is that the correct way to do it?
    Does anyone know how to reliably access rawChildren so they can be moved? Maybe my app is just a little weird. I will try building a test case to see if I can add to and remove from the rawChild list freely in a simpler application.

  • How to handle large data while acquisition? BNC 2110

    I want to acquire data using a BNC-2110; I am writing software in VB 6. We will use 3 channels. We are supposed to scan about 10,000 points before AcquiredData is triggered. In all we will need to scan 10,000 * 1000 * 1000 points before the data is put into a binary file. Can anybody let me know how to handle this large number of points?

    Hello Vjuno,
    In order to acquire 10,000,000,000 points you are going to have to stream the data to your hard drive as you go.  To do this you'll need to write the data you read to a file on each loop iteration.  In general it is good practice to make your "samples to read" at least 10% of your sample rate in seconds to avoid overflowing buffers; however, depending on your computer you may be able to go faster.  I made an example program in LabVIEW and was able to read 10,000 points at a time from each of 3 analog inputs at 333 kS/s and write the values to file without overflowing a buffer.  However, even opening a web browser while the code was running was enough to delay the VI long enough for the buffer to overflow.
    You can use the DAQmx Configure Input Buffer call to increase the buffer size and account for spikes in CPU usage from other processes, and you should also monitor the "Available Samples Per Channel" property to make sure you aren't steadily gaining samples in your buffer.  Since you want to acquire 10 billion samples at 1MHz this acquisition will take several hours; if you're not able to keep the buffer empty then it will become apparent before the end of your acquisition.  By monitoring the samples in the buffer you can tell if you're pulling the samples out fast enough, if you find that this number is steadily increasing then you should either reduce the sample rate or increase the number of samples to read each time you call the DAQmx Read.
    In my example program I used a write to TDMS (binary) file and a PCI-6251.
    I hope this helps, and have a good night.
    Cheers,
    Brooks
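    The original code is VB6/LabVIEW and not shown, but the read-a-chunk, append-to-disk loop Brooks describes looks roughly like this, sketched in Java for illustration (readChunk() is a hypothetical stand-in for the DAQmx Read call):

    import java.io.*;

    public class StreamToDisk {
        private static final int CHUNK = 10_000;           // samples per read, as in the example above
        private static final long TOTAL = 10_000_000_000L; // 10 billion samples overall

        public static void main(String[] args) throws IOException {
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream("capture.bin")))) {
                double[] buf = new double[CHUNK];
                long written = 0;
                while (written < TOTAL) {
                    int n = readChunk(buf); // hardware read; returns samples actually acquired
                    for (int i = 0; i < n; i++) {
                        out.writeDouble(buf[i]); // write every loop iteration so the buffer stays empty
                    }
                    written += n;
                }
            }
        }

        private static int readChunk(double[] buf) {
            // the DAQmx/driver read would go here; the sketch just pretends the buffer filled
            return buf.length;
        }
    }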

  • How to handle large library, limited data plan

    I've been using Itunes Match for about six months now, and I'm having problems...
    I live rurally and so I use an AT&T hotspot with a 10 gig/month data plan for my phone, ipad, and desktop mac. My wireless connection speed is pretty good.
    I have about 15000 tracks in my Itunes library, most not purchased through Itunes.
    When I signed up for Match and went through the initial process, it took about 24 hours and used up about six gig, which caused a significant overage in my data plan. Consequently, I just use Match on my Mac and decided not to use it on my other devices because of the data limitations. My motivation to use it currently is as a back up for my music. I connect my phone to my mac manually to transfer some of my music to my phone.
    I figured that the data overuse problem was a one time deal if I didn't use my other devices.
    But recently, when I purchase a song from emusic or Amazon, the icloud processing image pops up after the download purchase is complete. Itunes will then start the process of sending data to Amazon, which was still going at 4 hours yesterday when I manually stopped it as I saw my data plan being used up. It seemed to restart the process of sending periodically and never got to the analyzing or returning data stages.
    Now my recently purchased music shows up with both the iCloud processing symbol and a faded iCloud download icon, one for each separate track. I can play the "processing" track, but iTunes won't allow me to add it to a playlist. Further, if I'm online with my hotspot, sometimes the regular iTunes icon is visible, but at other times the icon with the thunderbolt through it. I'm guessing this means it's streaming with the first icon and playing from my hard drive with the second.
    So a couple of questions, and sorry for the length, I'm a first-time support user
    1) Does it make sense to use Match with such a large library, and limited data plan?
    2) "Where" is the music I've purchased - on my computer or in the cloud?
    3) Should it take all day to send data to itunes, when the only updates to my library are songs I've deleted and a handful of new albums I purchased.
    4) Am I using up my data plan when the regular iCloud icon appears and I'm online? Is there a way to manually play from the hard drive to reduce use of my data plan? I turned off Match once and paid for it with a half-day sending and receiving session when I turned it back on.
    5) Will I lose music if I unsubscribe from Match?
    Thanks to anybody who has the inclination to respond to any of these questions!


  • How to handle large number of Threads !!?

    hello friends,
    I wrote a program that digs through all files and folders looking for a given filename. In this logic I create one thread per directory; each thread lists all the subfiles and subdirectories and matches the search string against the filenames, and when a subdirectory is found it creates a new thread, and so forth.
    The problem is that after running the program for a while it has created around 3000 threads and the system goes down.
    So what should the solution be? Please suggest a good way of scheduling the threads that gives better system performance.
    The logic code is given below:
    import java.io.File;

    // data and Filechooserpanel are the poster's own helper classes.
    class DiggtheFiles implements Runnable {
        private File currentdir = null;
        private String search = "";
        private File files[] = null;

        DiggtheFiles(File file, String search) {
            this.currentdir = file;
            this.search = search;
        }

        public void run() {
            files = currentdir.listFiles();
            if (files != null && files.length > 0) {
                synchronized (this) {
                    for (int i = 0; i < files.length; i++) {
                        if (files[i].isDirectory()) { // was files.isDirectory(), which does not compile
                            data.dircount++;
                            if (files[i].getName().contains(search)) data.filearry.add(files[i].getAbsolutePath());
                            new Thread(new DiggtheFiles(files[i], search)).start(); // one new thread per subdirectory
                        } else {
                            if (files[i].getName().contains(search)) data.filearry.add(files[i].getAbsolutePath());
                            data.filecount++;
                        }
                    }
                }
                Filechooserpanel.consolearea.setText("DIR:" + data.dircount
                        + " FILE:" + data.filecount + " Thread Completed:"
                        + data.threadcount);
                data.threadcount++;
            } else {
                data.threadcount++;
            }
        }
    }
    Remove the synchronized block and use a work list instead of doing new Thread(): do workList.add() and have a ThreadPoolExecutor take work items off the list.
    Or go non-parallel and just use a recursive call?
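    A minimal sketch of that pooled approach (class and method names are made up; a fixed pool of 8 workers replaces the ~3000 ad-hoc threads):

    import java.io.File;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PooledFileSearch {
        private final ExecutorService pool = Executors.newFixedThreadPool(8); // bounded worker count
        private final List<String> matches = new CopyOnWriteArrayList<String>();
        private final AtomicInteger pending = new AtomicInteger();

        public List<String> search(File root, String term) throws InterruptedException {
            submit(root, term);
            while (pending.get() > 0) Thread.sleep(50); // crude quiescence check, fine for a sketch
            pool.shutdown();
            return matches;
        }

        private void submit(File dir, String term) {
            pending.incrementAndGet();
            pool.execute(() -> {
                File[] files = dir.listFiles();
                if (files != null) {
                    for (File f : files) {
                        if (f.getName().contains(term)) matches.add(f.getAbsolutePath());
                        if (f.isDirectory()) submit(f, term); // re-queue instead of spawning a thread
                    }
                }
                pending.decrementAndGet();
            });
        }
    }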

  • How to handle large payloads in WCF services?

    Hi All,
    I have developed a WCF service which takes a list of employee IDs and returns employee details.
    The service works fine with 500 employee IDs, but when we receive a request with 5000 employee IDs the service takes a long time to complete.
    Inside the method I do the following: for each employee ID I call a stored procedure, get the employee object, and add it to the list of employees, looping through all the employee IDs to construct the full list.
    Could you please suggest the best/fastest way to retrieve the employee details for more than 5000 employee IDs?
    Can we use multithreading here? If possible, give me an idea.
    Thanks
    Amar

    You can try using a Table Value parameter to a Stored Procedure.
    Basically:
    Create a Stored Procedure that takes the list of CustomerID's as a Table Parameter.
    Generate the Schemas using the WCF Adapter Wizard.
    Map the request message of CustomerIDs to the SQL Request message.
    In the Stored Procedure, select the customers by JOINing to the Table Parameter.  In the SP, the Table Parameter behaves just like a physical table.
    Return the result set of all customers in whatever format you need.
    This way, SQL Server does the heavy data operation in one shot instead of many individual queries.
    Here are two articles that describe how to use Table-Valued Parameters:
    http://msdn.microsoft.com/en-us/library/dd787869.aspx
    http://connectedcircuits.wordpress.com/2011/08/17/using-sql2008-data-table-type-biztalk-to-insert-parent-child-rows/
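    If the service layer can issue its own SQL, the same one-round-trip idea can also be sketched in plain JDBC with a generated IN list. This is an alternative to the table-valued parameter described above, not the same mechanism, and SQL Server caps a statement at roughly 2100 parameters, so 5000 IDs would need chunking; table and column names are hypothetical:

    import java.sql.*;
    import java.util.List;
    import java.util.stream.Collectors;

    public class EmployeeBatchDao {
        // One query for the whole ID list instead of one stored-procedure call per ID.
        public ResultSet loadEmployees(Connection con, List<Integer> ids) throws SQLException {
            String placeholders = ids.stream().map(id -> "?").collect(Collectors.joining(","));
            PreparedStatement ps = con.prepareStatement(
                "SELECT emp_id, name, dept FROM employees WHERE emp_id IN (" + placeholders + ")");
            for (int i = 0; i < ids.size(); i++) {
                ps.setInt(i + 1, ids.get(i));
            }
            return ps.executeQuery(); // caller closes statement and connection
        }
    }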

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user using JSPs. The problem is that the result set is too big to display all the records in one go, so I want to show the results page by page, say 25 per page. If I fetch data from the database for every page there will be many database calls, which is not advisable. Alternatively, I can cache the data in a CachedRowSet to reduce database calls, but then I have to hold all the data in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable resultset (JDBC 2.0+).
    The logic goes like this, assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full SQL
    - scroll through only the rows in the current page (e.g. rows 90-120)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable resultset: only the rows that you scroll through are extracted from the database. I performed some simple testing with my data, and the scrollable resultset was about 10x faster.
    Good luck!
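    A minimal sketch of that paging logic, assuming a JDBC 2.0 driver that supports scrollable resultsets (the SQL and column names are illustrative):

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class PageFetcher {
        private static final int PAGE_SIZE = 30;

        public List<String[]> fetchPage(Connection con, int page) throws SQLException {
            List<String[]> rows = new ArrayList<String[]>();
            try (Statement st = con.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = st.executeQuery("SELECT id, name FROM orders ORDER BY id")) {
                // Jump straight to the first row of the requested page.
                if (rs.absolute((page - 1) * PAGE_SIZE + 1)) {
                    int copied = 0;
                    do {
                        rows.add(new String[] { rs.getString("id"), rs.getString("name") }); // copy to value objects
                    } while (++copied < PAGE_SIZE && rs.next());
                }
            } // resultset and statement closed here; the connection is managed by the caller
            return rows;
        }
    }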

  • How to handle large database

    Dear All
    My customer has 1200-1500 sales invoices every day and 200-300 purchases; the question is that the database size is getting too big.
    Can SAP Business One 8.8 cope, and how?
    Ashish Gupte

    Hi,
       SAP 8.8's data archiving functionality helps to reduce the database size.
    Please check the following thread
    https://websmp209.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700001143602007E
    Regards
    Jambulingam.P

  • How to Handle Large Dumps and Complex Calculations in OBIEE

    Hi,
    We are working on some reports in OBIEE and we are unable to optimize performance. Please consider the following scenarios:
    1) Detail Report
    This report is essentially a data dump: there are 5 dimensions and a fact table, and we are just selecting
    fields from the various dimensions and the fact. The problem is that there are over 5000 records, which I guess is making the report very slow.
    Another issue is that when we click on the page controls, they simply don't work: if we click on (>>) to see all records it takes a lot
    of time but shows only the first 25 records. Any idea what the problem is?
    2) Calculated Report
    We have one calculated report with 7 calculated columns. Because every column contains different measures, we have to
    use lots of filter expressions,
    e.g.: Filter (Measure1 USING {@VarDate} BETWEEN DIM1.FromDate and Dim1.Todate AND {@VarDate} BETWEEN DIM2.FromDate and Dim2.Todate AND <SOME OTHER FILTERS> )
    The granularity of the data is at employee level and there are around 1,388,726 records in the fact table.
    Is there any way to optimize the above, other than creating a summary table?
    Kindly guide us on the above scenarios.

    Hi,
    Thanks for your reply. While trying the things you suggested, I found that the actual physical SQL (which I can get from the "Manage Session" link) runs in about 4-5 seconds in Query Analyzer, but when I studied the SQL carefully it did not contain some filters which I have applied in the report.
    For example, I have a filter <Measure.Present = 1> in the report, but in the physical SQL I cannot see that filter. However, the results it gives on the dashboard are correct, i.e. the filter is applied.
    Any idea why this is happening?
    My guess is that because the above filter is not applied at the physical SQL level, the query fetches around 20,000 records into the server cache and then applies <Measure.Present = 1>; that might be why it is taking so long.
    Please give me your suggestions.
    Thanks!

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. Or rather, they are not that large really, only 10-20 MB.
    Our setup looks like this:
    We retrieve flat-file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously the logicalhost freezes and crashes in one of the conversion steps, without any error message reported in the logicalhost log. We can't relate the crashes to a specific jcd, and the memory consumption of the logicalhost process increases A LOT while handling the messages. After a restart of the server the messages in the queues are usually converted OK. Sometimes, however, we have seen messages seem to disappear. Scary stuff!
    I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. Neither is an option in our setup.
    We have manipulated the JVM memory settings without any improvement, and we have discussed the issue with Sun's support, but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without any error messages in the logs?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently?
    Strictly speaking, if you want to send the entire file content in a JMS message then I don't have an answer to this question. Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue containing only the file name and file location. In most places we never send file content in a JMS message.
    * Any ideas why the crashes occur without error messages in the logs?
    Whenever the JMS IQ Manager's memory usage gets high, logicalhosts stop processing. I would not say they are down; they stop processing, or processing can take a very long time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing messages when a logicalhost goes down. This is not always the case, but we faced a similar issue when the IQ Manager was flooded with a lot of messages.
    * Any other suggestions?
    If the file size is large, it is better to stream the file from the FTP location to a local directory and send only the file location in the JMS message.
    Hope it helps.
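    A minimal sketch of that file-pointer ("claim check") approach in plain JMS; the ConnectionFactory and Queue would normally come from the container's JNDI, and the names are illustrative:

    import javax.jms.*;

    public class ClaimCheckSender {
        // Sends only the archived file's location, never the 10-20 MB payload itself.
        public void send(ConnectionFactory factory, Queue queue, String archivedPath) throws JMSException {
            Connection con = factory.createConnection();
            try {
                Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT); // persistence guards against the message loss described above
                MapMessage msg = session.createMapMessage();
                msg.setString("fileLocation", archivedPath);
                producer.send(msg);
            } finally {
                con.close();
            }
        }
    }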

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying large/huge resultset of data on a crystal report.  The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my webapplication (webapp) under Jboss.
    User specifies the filter criteria to generate a report (date range etc) and submits the request to the webapp.  Webapp  queries the database, gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take too long to return the resultset.  Everything processes pretty quickly until I call processHttpRequest on the CRV; this method just hangs for a long time before displaying the report in the browser.
    CRV runs pretty fast when the resultset is smaller, but for large resultset it takes a long long time.
    I do have subreports and use Crystal Report formulas on the reports, some of which are used for grouping as well.  But I don't think subreports are the real culprit here, because I have other reports without any subreports, and they too get really slow when displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last) so I can handle the click events and fetch data accordingly.  I tried capturing events by registering the "addToolbarCommandEventListener" event handler of the CRV, but my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous/next/last page buttons and handling their click events.
    B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation.  So maybe the first time it takes 5 minutes to display the report, but once it's displayed the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out whether it has any features that would help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports servers (cache server/application server etc.) help in any way?  I read a little about the Crystal Page Viewer, Interactive Viewer, Part Viewer etc., but I'm not sure whether any of these will solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially the answer is: use smaller resultsets, or pull from the database directly instead of using resultsets.
