Best strategy to retrieve large amounts of data with EJB?

Hi all
I have an EJB 3 bean which is fronted by a web service. The EJB retrieves a large number of objects (TaskList) and returns them to the client.
@Stateless
public class TaskListService {   // enclosing session bean (class name assumed)
    @PersistenceContext
    private EntityManager em;

    public List<TaskList> findByRole(String role) {
        return em.createNamedQuery("findByRole")
                 .setParameter("Role", role)
                 .getResultList();
    }
}
The problem is that the query is rather slow, and in particular returning that many objects in the SOAP envelopes takes a lot of time.
So I'd like to retrieve only a "slot" of the data at a time. Is there any EJB3/Hibernate feature which could help?
The only solution I have found by myself is to turn the SLSB into an SFSB, add a parameter to the method, and return only a slot, not the whole TaskList.
@Stateful
public class TaskListService {
    @PersistenceContext
    private EntityManager em;
    private List<TaskList> tasklist;

    public List<TaskList> findByRole(String role, int slot) {
        if (tasklist == null) {   // query once, then serve slices from the cached list
            tasklist = em.createNamedQuery("findByRole")
                         .setParameter("Role", role)
                         .getResultList();
        }
        // Take a piece of the TaskList (page size of 50 is just an example)
        int from = slot * 50, to = Math.min(from + 50, tasklist.size());
        return tasklist.subList(from, to);
    }
}
What do you think? Any idea on how to improve this?
Thanks
Frank
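For what it's worth, JPA's Query API already supports paging directly via setFirstResult() and setMaxResults(), so the bean can stay stateless and fetch one slot per call. A minimal sketch, assuming the same "findByRole" named query (the class name and pageSize parameter are illustrative):

@Stateless
public class TaskListService {
    @PersistenceContext
    private EntityManager em;

    public List<TaskList> findByRole(String role, int page, int pageSize) {
        return em.createNamedQuery("findByRole")
                 .setParameter("Role", role)
                 .setFirstResult(page * pageSize)   // skip the earlier pages
                 .setMaxResults(pageSize)           // fetch one page only
                 .getResultList();
    }
}

This pushes the paging into the database query itself, so neither the full result list nor any conversational state has to be held on the server.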

Similar Messages

  • Best way to pass large amounts of data to subroutines?

    I'm writing a program with a large amount of data, around 900 variables.  What is the best way for me to pass parts of this data to different subroutines?  I have a main loop on a PXI RT Controller that is controlling hydraulic cylinders and the loop needs to be 1ms or better.  How on earth should I pass 900 variables through a loop and keep it at 1ms?  One large cluster??  Several smaller clusters??  Local Variables?  Global Variables??  Help me please!!!

My suggestion, similar to Altenbach and Matt above, is to use a Functional Global Variable (FGV) and use a 1D array of 900 values to store the data in the FGV. You can retrieve individual data items from the FGV by passing in the index of the desired variable and the FGV returns the value from the array. Instead of passing in an index you could also use a TypeDef'd Enum with all of your variables as elements of the Enum, which will allow you to place the Enum constant on the diagram and make selecting variables, as well as reading the diagram, simpler.
    My group is developing a LabVIEW component/example code with this functionality that we plan to publish on DevZone in a month or two.
The attached RTF file shows the core piece of this implementation. This VI of course is non-reentrant. The Init case could be changed to allocate the internal 1D array as part of this VI rather than passing it from another VI.
    Message Edited by Christian L on 01-31-2007 12:00 PM
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Attachments:
CVT_Double_MemBlock.rtf 309 KB
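LabVIEW diagrams can't be reproduced in text, but the FGV idea above has a rough Java analogue: one non-reentrant accessor guarding a single 900-element array, read and written by index. A sketch only; all names below are invented for illustration:

public final class FunctionalGlobal {
    // One slot per variable, like the 1D array in the FGV's shift register.
    private static final double[] DATA = new double[900];

    // 'synchronized' makes access one-caller-at-a-time, like a non-reentrant VI.
    public static synchronized double get(int index) {
        return DATA[index];
    }

    public static synchronized void set(int index, double value) {
        DATA[index] = value;
    }
}

An enum whose constants name the variables can replace the raw index, which is the TypeDef'd Enum trick mentioned above: callers select variables by name rather than by number.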

  • ERROR MESSAGE WHEN DOING SIMPLE QUERY TO RETRIEVE LARGE AMOUNT OF DATA

    Hello,
I am querying my database (MySQL) and displaying my data in a
DataGrid (note that I am using Flex 2.0).
It works fine when the amount of data populating the grid is
small. But when I have a large amount of data I get the following
error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
Note that my DataGrid is populated when I run the query on my
server but does not work on my client PCs.
Your help would be greatly appreciated here.
Awaiting a reply.
    Regards

    Hello,
I am using remote object services,
using a component (ColdFusion as the destination).

• Scrolling a large amount of data with an unscrolled HEADER. How to do it?

I am in much need of scrolling a large amount of data without scrolling the header, i.e. while scrolling, only the data will be scrolled, so that I can still see the header. I am using JSP in the Struts framework. Waiting for your help, friends. Thanks in advance.

    1) Install Google at your machine.
    2) Open it in your favourite web browser.
    3) Note the input field and the buttons.
    4) Enter smart keywords like "scrollable", "table", "fixed" and "header" in that input field.
    5) Press the first button. You'll get a list of links.
6) Follow the relevant links and read them thoroughly.

• Sorting large amounts of data with TreeMap

Hello. I'm doing a project where I have to sort a large amount of data. Each record consists of a unique number and a location (a string).
    Something like this
    NUMBER .... CITY
    1000123 BOSTON
    1045333 HOUSTON
    5234222 PARIS
    2343345 PARIS
    6234332 SEATTLE
I have to sort the data by location and then by unique number.
I was using a TreeMap to do this: I used the location string as the key, since I wanted to sort the data by that field. But because the location string is not unique, inserting into the TreeMap overwrites the entry with the same location string, saving only the last one that was inserted.
Is there any Collection that implements sorting in the way that I need?... or if there isn't such a thing... is there any collection that supports duplicate keys?
    Thanks for your time!
    Regards
    Cesar

... or use a SortedSet for the list of numbers (as the associated value for
the location key). Something like this:

void addTuple(String location, Integer number) {
   SortedSet<Integer> numbers = map.get(location);   // 'map' is a Map<String, SortedSet<Integer>> field
   if (numbers == null)
      map.put(location, numbers = new TreeSet<Integer>());
   numbers.add(number);   // SortedSet uses add(), not put()
}

kind regards,
    Jos
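A self-contained sketch of that approach using the sample data above (class and field names invented here): the TreeMap keeps the locations sorted, and each TreeSet keeps the numbers within a location sorted and unique.

import java.util.*;

public class CitySort {
    private static Map<String, SortedSet<Integer>> byCity =
            new TreeMap<String, SortedSet<Integer>>();

    static void addTuple(String location, Integer number) {
        SortedSet<Integer> numbers = byCity.get(location);
        if (numbers == null)
            byCity.put(location, numbers = new TreeSet<Integer>());
        numbers.add(number);
    }

    public static void main(String[] args) {
        addTuple("BOSTON", 1000123);
        addTuple("HOUSTON", 1045333);
        addTuple("PARIS", 5234222);
        addTuple("PARIS", 2343345);
        addTuple("SEATTLE", 6234332);
        // Prints cities alphabetically, numbers ascending within each city
        for (Map.Entry<String, SortedSet<Integer>> e : byCity.entrySet())
            System.out.println(e.getKey() + " " + e.getValue());
    }
}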

  • Problem retrieving large amount of data!

    Hi,
I'm currently working with a database application accessing a very large amount of data. One query could result in 500,000+ hits!
I would like to present this data in a JTable.
When the query is executed, I create a two-dimensional array and store the data in it. The problem is that I get an OutOfMemoryError when I reach the 150,000th row and add it to the array.
I've looked into the design pattern "Value List Handler" and it seems it could be of use. But I still need to populate an array with all the data, and then I get the error.
Is there some way I could query the database, populate a smaller array with part of the data, and use the "Value List Handler" pattern to access small portions of the complete result set?
Another problem is that the user wants the ability to sort ascending/descending by clicking column headers in the JTable. Then I need access to all the data in that table to sort it correctly. Could I re-query the database with an "ORDER BY <column> ASC" and use a modification of the Value List Handler pattern?
I'm a bit confused, please help!
    Kind regards, Andreas

The only chance you have: select only as many rows as you display on the screen. When the user hits "next page", retrieve the next rows.
You might be able to do this with a scrollable resultset, but with that you are left to the mercy of the JDBC driver concerning memory management.
So you need to think about a solution where you issue a new SELECT, narrowing down the result set depending on the first/last row displayed.
Search this forum for page-wise navigation; this question gets asked about 5 times a week (mostly in conjunction with web applications, but the logic behind it should be the same).
    Tailoring your TableModel might be tricky as well.
    Thomas
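A sketch of that page-wise approach in plain JDBC, keyed on the last row displayed (the table and column names here are invented): each "next page" issues a fresh SELECT, so only one page is ever held in memory.

import java.sql.*;
import java.util.*;

public class TaskPager {
    // Fetches the page of rows after 'lastId', ordered by id.
    // The WHERE clause narrows the result set using the last row
    // displayed, as the reply above suggests.
    static List<String> nextPage(Connection con, long lastId, int pageSize)
            throws SQLException {
        String sql = "SELECT id, name FROM task WHERE id > ? ORDER BY id";
        List<String> page = new ArrayList<String>();
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setLong(1, lastId);
            ps.setMaxRows(pageSize);            // stop after one page
            ResultSet rs = ps.executeQuery();
            while (rs.next())
                page.add(rs.getLong("id") + " " + rs.getString("name"));
        } finally {
            ps.close();
        }
        return page;
    }
}

A custom TableModel can then hold just the current page and call nextPage() when the user pages forward; sorting by a column becomes a re-query with a different ORDER BY, as Andreas guessed.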

  • Best way of handling large amounts of data movement

    Hi
    I like to know what is the best way to handle data in the following scenario
1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each, in the first month (months 1, 2, 3, 4, ..., 34, 35, 36).
2. We have to add the DELTA of month two to the 36-month baseline. But the application requirement is ONLY 36 months, even though the current size is then 37 months.
3. Similarly, in the 3rd month we will have 38 months, and in the 4th month we will have 39 months.
4. At the end of the 4th month, how can I delete the first three months of data from the claim files without affecting the performance of what is a 24x7 online system?
5. Is there a way to create partitions of 3 months each that can be deleted, e.g. delete partition number 1? If this is possible, then what kind of maintenance activity needs to be done after deleting a partition?
6. Is there any better way of doing the above scenario? What other options do I have?
7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards

  • Best way to store large amounts of data

    Greetings!
    I have some code that will parse through XML data one character at a time, determine if it's an opening or closing tag, what the tag name is, and what the value between the tags is. All of the results are saved in a 2D string array. Each parent result can have a variable number of child results associated with it and it is possible to have over 2,000 parent results.
    Currently, I initialize a new string that I will use to store the values at the beginning of the method.
String[][] initialXMLValues = new String[2000][45];
I have no idea how many results will actually be returned when the method is initially called, so I don't know what to do besides making initialXMLValues around the maximum size I expect to need.
As I parse through the XML, I look for a predefined tag that signifies the start of a result. Each tag/value pair that follows is stored in a single element of an ArrayList in the form "tagname,value". When I reach the closing parent tag, I convert the ArrayList to a String[], store the size of the array if it is bigger than the previous array (to track the maximum size of the child results), store it in initialXMLValues[i], then increment i.
When I'm all done parsing, I create a new array, String[][] XMLValues = new String[i][j], where i is equal to the total number of parent results (from the last paragraph) and j is equal to the maximum number of child results. I then use a nested for loop to copy all of the values from initialXMLValues into XMLValues. The whole point of this is to minimize the overall size of the returned string array and minimize the number of null-valued fields.
I know this is terribly inefficient, but I don't know a better way to do it. The problem is having to initialize the size of the array before I really know how many results I'm going to end up storing. Is there a better way to do this?

    So I'm starting to rewrite my code. I was shocked at how easy it was to implement the SAX parser, but it works great. Now I'm on to doing away with my nasty string arrays.
    Of course, the basic layout of the XML is like this:
    <result1>
    <element1>value</element1>
    <element2>value</element2>
    </result1>
    <result2>
    I thought about storing each element/value in a HashMap for each result. This works great for a single result. But what if I have 1000 results? Do I store 1000 HashMaps in an ArrayList (if that's even possible)? Is there a way to define an array of HashMaps?
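Storing one HashMap per result in an ArrayList is indeed possible, and it avoids the sizing problem entirely since both collections grow on demand. A minimal sketch (the tag names mirror the XML layout above):

import java.util.*;

public class XmlResults {
    public static void main(String[] args) {
        // One map of tag -> value per parsed result; the list grows as needed.
        List<Map<String, String>> results = new ArrayList<Map<String, String>>();

        Map<String, String> result = new HashMap<String, String>();
        result.put("element1", "value");
        result.put("element2", "value");
        results.add(result);            // repeat once per <resultN> element

        System.out.println(results.get(0).get("element1"));   // prints "value"
    }
}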

• What is the best way to migrate large amounts of data from a G3 to an Intel Mac?

I want to help my mom transfer her photos and other info from an older blueberry G3 iMac to a new Intel one. There appears to be no migration provision on the older Mac. Also, the FireWire connections are different. Somebody must have done this before.

    Hello
the cable above can be used to enable Target Disk Mode for data transfer
http://support.apple.com/kb/ht1661 for more info
To enable Target Disk Mode, just after the startup sound hold down the "T" key on the keyboard until you see the FireWire symbol on the screen (it looks like a screen saver), then plug the FireWire cable in between the 2 Macs
    HTH
    Pierre

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
I have enabled paging on the component. I use a CachedRowSet in the bean for the page to get the data. This works very well in my development environment, where at the moment I am testing on a small amount of data.
I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
I wonder if it has to do with the logic of paging. How do you specify which set of 20 records to extract with SQL?
    Thanks for your help!!
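One common way to fetch just the Nth set of 20 records is to put the paging in the SQL itself. A sketch using MySQL-style LIMIT/OFFSET via JDBC (the table and column names are invented here; Oracle and others need a ROWNUM or ROW_NUMBER() variant instead):

import java.sql.*;

public class PageQuery {
    // Prints one page of 20 rows; page 0 is the first page.
    static void printPage(Connection con, int page) throws SQLException {
        String sql = "SELECT id, name FROM item ORDER BY id LIMIT 20 OFFSET ?";
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setInt(1, page * 20);   // skip the earlier pages
            ResultSet rs = ps.executeQuery();
            while (rs.next())
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
        } finally {
            ps.close();
        }
    }
}

The ORDER BY matters: without a stable ordering, the database is free to return rows in a different order on each query, and pages can overlap or skip rows.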

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring Large Amounts of Data in and out of a Netweaver based application.
    We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stack, and will require integration with other SAP and 3rd Party Systems. It is a standalone product that doesn't share any form of data store with other systems.
We need to be able to support tens of millions of records of tabular data coming into and out of our system.
Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface into and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas, however we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

  • Deleting large amounts of data

    All,
I have several tables that have about 1 million plus rows of historical data that is no longer needed, and I am considering deleting the data. I have heard that deleting the data will actually slow down performance as it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice on best practices for deleting large amounts of data from tables.
    For everyones reference I am running Oracle 9.2.0.1.0 on Solaris 9. Thanks in advance for the advice.
    Thanks in advance!
    Ron

Another problem with DELETE is that it generates a vast amount of redo log (and archived log) information. The better way to get rid of the unneeded data would be to use the TRUNCATE command:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
The problem with TRUNCATE is that it removes all the data from the table. In order to save some data from the table you can do the following:
1. create <stage_table> as select * from <main_table> where <data you want to keep clause>
2. save the index, constraint and trigger definitions and the grants from the <main_table>
3. drop the <main_table>
4. rename <stage_table> to <main_table>
5. recreate the indexes, constraints and triggers.
Another method is to use partitioning to partition the data based on a key (you've mentioned "historical" - the key could be some date column). Then you can drop the historical data partitions when you need to.
As far as your question about recalculating the statistics - it will not release the storage allocated for the index. You'll need to execute ALTER INDEX <index_name> REBUILD:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
    Mike

  • Java NIO - reading large amount of data

    Hi,
I am having difficulties reading a large amount of data with a SocketChannel (using a direct-allocated buffer and a heap-allocated one). Files greater than 300KB are cut off even though I tried to write the data into a FileChannel.
    My Code:
ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(150000);
ByteBuffer buffer = ByteBuffer.allocate(6000000);
FOS out = new FOS("d:\\msgData.tmp");        // FOS wraps a FileOutputStream
FileChannel fc = out.getFOS().getChannel();  // FileChannel
int fileLength = (int) fc.size();
while (clientChannel.read(directBlockBuffer) > 0) {
    directBlockBuffer.flip();
    buffer.put(directBlockBuffer);
    directBlockBuffer.compact();
}
// close data file
buffer.flip();
fc.write(buffer);
fc.close();
out.close();
// end of code
    Any ideas?
    Thanks
    AST

I don't understand how the "write" result will help read the whole data.
Anyway, I changed the code so the SocketChannel reads in smaller chunks (~8KB) and the FileChannel writes on every read,
but the data stream is cut again (to ~5KB, no matter what size of file I send).
In the updated code, when I try to compare socketChannel.read to -1, I get an endless loop.
I'm basically trying to write a POP3/SMTP server program; this part of the code handles an attachment that is received by the SocketChannel in one unit (i.e. 1+ MB of data; the other SMTP commands/lines are no more than 27 chars and simple to handle).
Therefore I need to be ready to accept a large amount of data into the buffer and write it to the FileChannel. (In the POP3 thread I'm using MappedByteBuffer successfully.)
    Updated code:
ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(8192);
while (clientChannel.read(directBlockBuffer) > 0) {
     directBlockBuffer.flip();
     fc.write(directBlockBuffer);
     directBlockBuffer.clear();
}
I think, based on the API, my code is logical (and good for small files), but what about handling bigger files (up to 5MB)?
    Thanks,
    AST 
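For reference, a loop along these lines is the usual shape for draining a SocketChannel to a FileChannel. Note that read() returns -1 only at end-of-stream (the peer closed the connection), and on a non-blocking channel it returns 0 whenever no data happens to be available, which would explain both the truncation and the endless loop described above. A minimal sketch assuming a blocking channel (variable and method names invented here):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ChannelCopy {
    // Copies everything from the socket to the file until the peer closes.
    static void drain(SocketChannel clientChannel, String path) throws IOException {
        FileOutputStream out = new FileOutputStream(path);
        FileChannel fc = out.getChannel();
        ByteBuffer buf = ByteBuffer.allocateDirect(8192);
        try {
            while (clientChannel.read(buf) != -1) {   // -1 = end of stream
                buf.flip();
                while (buf.hasRemaining())            // write() may be partial
                    fc.write(buf);
                buf.clear();
            }
        } finally {
            out.close();
        }
    }
}

For SMTP, where the connection stays open after the message body, the loop would instead have to watch for the protocol's terminator (CRLF.CRLF) rather than end-of-stream.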

  • Advice needed on how to keep large amounts of data

    Hi guys,
I'm not sure what the best way is to make large amounts of data available to my Android app on the local device.
For example, records of food ingredients, in the hundreds?
I have read and successfully created .db files using this tutorial:
http://help.adobe.com/en_US/AIR/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html
However, to populate the database I use Flash? So this kind of defeats the purpose of it. There is no point in me shifting a massive array of data from Flash to a SQL database when I could access the data directly from the AS3 array.
So maybe I could create the .db with an external program? But then how would I include that .db in the APK file and deploy it to users' Android devices?
Or maybe I create an AS3 class with an XML object in it and use that as a means of data storage?
    Any advice would be appreciated

You can use any means you like to populate your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing some SQL from AS3 code, etc.
Once you have populated your db, deploy it with your project:
http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile/
    Cheers, - Jon -
