When is the batch reading feature coming?

When is a batch reading feature expected to be added?
TopLink has a batch reading feature. If you have a class A with a 1-to-n relation
to a class B, you can switch on batch reading for that relation.
The first time you need to navigate from A 1-to-n B, it loads the result of the
inner join between the tables for the whole extent of A and B (it would be
better to limit that) and returns the collection of B objects.
The next time you need to navigate from an A object 1-to-n B, the result set
(held in memory) is used to retrieve the collection of B objects.
I have heard that this was supposed to be a feature in KODO 3.0.

Correct. We hope to start seeding 3.0 beta releases some weeks after the 2.5
final is out.
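
For reference, a minimal sketch of the TopLink feature described above (the A
class, the "bs" attribute name, and the package names are assumptions; exact
names vary by TopLink version):

import java.util.List;
import oracle.toplink.queryframework.ReadAllQuery;
import oracle.toplink.sessions.Session;

public class BatchReadExample {
    // Reads all A objects and batch-reads their 1-to-n "bs" relation:
    // navigating from the first A to its B objects triggers one secondary
    // SELECT (joined back to the A query's criteria) that loads the B
    // objects for every A in the result; later navigations are served
    // from that in-memory result set.
    public static List readAllWithBs(Session session) {
        ReadAllQuery query = new ReadAllQuery(A.class);
        query.addBatchReadAttribute("bs");
        return (List) session.executeQuery(query);
    }
}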

Similar Messages

  • Batch Reading with Custom SQL

    Hello,
    Queries
    1. Is it possible to use Batch Reading in conjunction with custom stored procs/SQL?
    2. Is it possible to map an attribute to a SQL expression (like in Hibernate, where formula columns are mapped using the formula property)?
    Background
    1. We use TopLink 11g (11.1.1.0.1) (not EclipseLink) in our application and control the mappings using XML files (not annotations).
    2. We are migrating a legacy application, with most of its data retrieval logic present in stored procedures, to Java.
    3. I am effectively a newbie to TopLink.
    Scenario
    1. We have a deep class hierarchy, with ClassA having a one-to-many relation with ClassB, ClassB having a one-to-many relation with ClassC, and so on and so forth.
    2. For each of these classes, the data retrieval logic is present in stored procedures (coming from the legacy application) containing not-so-simple queries.
    3. There are also quite a few attributes that actually represent computed values (computed and returned from the stored procedure), and the logic for computing those values is not simple either.
    4. So to make things easy, we configured TopLink to use the stored procedures to retrieve data for objects of ClassA, ClassB and ClassC.
    5. But since the class hierarchy was deep, we ended up firing too many stored procedure calls to the database.
    6. We thought we could use the Batch Reading feature to help with this, but I have come across documentation that says it won't work if you override TopLink's queries with stored procedures.
    7. I wrote some sample code to determine this; for the hierarchy shown above it uses the specified custom procedure (I also tried replacing the stored procs with custom SQL, but the behavior is the same) for ClassA and ClassB, but for ClassC and below it resorts to its own generated SQL.
    8. This is a problem because the generated SQL contains the names of the computed columns, which are not present in the underlying source tables.
    Thanks
    Arvind

    Batch reading is not supported with custom SQL or stored procedures.
    Join fetching is supported, though, so you may wish to investigate that (you need to ensure you return the correct data from the stored procedure).
    James : http://www.eclipselink.org
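
    A rough sketch of the suggested direction (all names here are hypothetical, and the stored procedure must return the columns of both ClassA and ClassB for the join fetch to be satisfied):

    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.queryframework.StoredProcedureCall;

    public class JoinFetchWithProcSketch {
        public static ReadAllQuery buildQuery() {
            ReadAllQuery query = new ReadAllQuery(ClassA.class);
            // Join-fetch the ClassB collection so it is loaded from the same
            // result set as ClassA, rather than by one query per parent.
            query.addJoinedAttribute(
                query.getExpressionBuilder().anyOf("classBs"));
            // Back the query with the legacy stored procedure (name assumed).
            StoredProcedureCall call = new StoredProcedureCall();
            call.setProcedureName("READ_A_WITH_B");
            query.setCall(call);
            return query;
        }
    }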

  • Photomerge interactive is no longer available in 12. Interactive is the primary reason that I keep the product up-to-date. When will the feature be returned?

    Photomerge interactive is no longer available in 12.  I wasted 4 hours trying to find the option.  Then read in a forum that the feature had been discontinued.  Interactive is the primary reason that I keep the product up-to-date.  When will the feature be returned?

    No one knows if the interactive dialog will return, except Adobe, and Adobe is not saying.
    What operating system are you using?
    On Windows there is a way to get it back, but not if you're using a Mac.

  • Resume Reading feature

    Hi there,
    I'm trying to get a better understanding of the data streaming features that come with the (hot forum topic) Batch eWay in CAPS 513. Its User Guide has a page on the "Resume Reading" feature of the BatchLocalFile OTD. It allows reading part by part from a large file, and does this by remembering the break position in the file, so it knows where to start reading the next part.
    - How does this get persisted? I guess it's just in memory.
    - What about a system failure during the read? Is there a kind of rollback mechanism foreseen?
    - Collaboration rules define the break (identified by e.g. number of records or a delimiter character), not configuration. How?
    Does anyone have experience with this feature?
    Some code snippets would be very useful.
    Thanks for any help on this
    Kris

    Hi Kris,
    It works together with the record parsers (BatchRecord OTD).
    The state is written to a file on the domain side. As it is meant to work in scenarios like LocalFile-BatchRecord-BatchFTP or LocalFile-BatchRecord-JMS, the built-in features of pre- and post-transfer renaming give pretty good transactional control.
    The thing to make sure of is having synchronization enabled, so that execution is serialized. Also, when adding more logic on top of the BatchRecord features (if you, for example, do double splitting on the input data stream and send the parts off as JMS messages), it is suggested to set "Resume Reading" to false, as it is not going to work reliably in that situation.
    Hope this clarifies things a bit.
    Paul
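
    Not the Batch eWay API, but a minimal Java sketch of the mechanism described above (reading a large file part by part and persisting the break position so a restart resumes where it left off); all names here are hypothetical:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ResumableReader {
        // Reads the next chunk of 'data', starting at the offset persisted
        // in 'stateFile', and writes the new offset back only after a
        // successful read.
        static byte[] readNextChunk(Path data, Path stateFile, int chunkSize)
                throws IOException {
            long offset = Files.exists(stateFile)
                    ? Long.parseLong(Files.readString(stateFile).trim())
                    : 0L;
            try (RandomAccessFile raf = new RandomAccessFile(data.toFile(), "r")) {
                if (offset >= raf.length()) {
                    return new byte[0]; // nothing left to read
                }
                raf.seek(offset);
                byte[] buf = new byte[(int) Math.min(chunkSize, raf.length() - offset)];
                raf.readFully(buf);
                // Persist the break position; if we crash before this line,
                // the same chunk is simply read again on the next run.
                Files.writeString(stateFile, Long.toString(offset + buf.length));
                return buf;
            }
        }
    }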

  • How can I reprint a large # of files in adobe acrobat XI since there's no batch processing feature?

    How can I reprint a large # of files in adobe acrobat XI since there's no batch processing feature?

    One of the available commands in the Action Wizard is Print (under the More
    Tools sub-section). You create a new Action, add that command and then run
    it on your files to print them all.

  • Trigger Idoc when Batch is created or changed

    Hello,
    We have to transfer the Batch ID, status and batch characteristics through an IDoc to PI when a batch is created or updated. This can happen in three scenarios:
    When a batch is created or updated manually through Tcode MSC1N.
    When a batch is created or updated at the time of goods receipt (Tcode MIGO).
    When a batch is created or updated at process order creation, after release of the process order (Tcode COR1).
    For the first two scenarios we have found user exit 'EXIT_SAPLCLFM_002', from where we can fill the required IDoc data and trigger the IDoc to the PI system.
    For the third scenario we have found user exit 'EXIT_SAPLCOBT_001', where we can get the batch data but not the characteristics data that is created or updated.
    In both of the above cases the user exit is fired before the commit-to-database statement. Therefore my query is whether we can trigger the IDoc through Z function modules in these user exits in update mode?
    Can you provide any input on this situation, or can you suggest any alternative method or exits to achieve this functionality?

    BATMAS / BATMAS03 - for Tcode MSC1N.
    The only other information I have is that there is an SAP Note available on this:
    865778 - MIGO to post a goods receipt for a purchase order
    Thanks.

  • I was fooling around with the "reader" feature on my New iPad. I am now stuck with that and can't get out. How do I get out of this?

    I was fooling around with the "reader" feature on my New iPad. I am now stuck with that and can't get out. Even my four-digit code is read out in a machine voice and the iPad won't boot. How do I get out of this? Tried the red arrow slider, shut of the iPa

    James,
    I cannot get into Settings! It keeps reading out "Slide to Unlock" when I try to slide the arrow to open the four-digit entry boxes.

  • We still lack a mark as read feature

    Please could you reimplement a mark as read feature?
    We can't read all forums at once, but when we go into a single forum, all messages are marked as read.
    TIA
    Tullio

    Maybe there is something wrong with the file where rules are stored. You may try re-creating it and setting up the rule again:
    1. Quit Mail.
    2. In the Finder, go to ~/Library/Mail/.
    3. Locate MessageRules.plist and move it to the Desktop. If there is a file called MessageRules.plist.backup, move it to the Desktop too. You may also see MessageSorting.plist files there; this is where Mail 1.x stored the rules, and they are no longer used by Mail 2.x, so just move them to the Trash if you see them.
    4. Open Mail. As a result of removing the rules file, the junk filter will be disabled now. You may want to either tell Mail to go offline immediately after opening it, or shut down the Internet connection before opening Mail, to prevent it from downloading anything until the junk mail filter has been enabled again.
    5. Go to Mail > Preferences > Junk Mail, enable junk filtering, and configure it however you wish.
    6. Go online again if you went offline in step 4.
    If the problem persists after doing this, then you know the rules file itself has no bearing on it, and you may move the files on the Desktop back to the ~/Library/Mail/ folder, overwriting any files Mail may have created anew there (quit Mail first).
    Note: For those not familiar with the ~/ notation, it refers to the user's home folder.

  • Email mark all as read feature in OS5..?

    Very annoying to have to open each and every email to mark it as read. Thought for sure a "mark all as read" feature would be in OS5, but it doesn't appear so at first glance. Any answers?

    bbfc wrote:
    Go into Mail, tap Edit, then mark the messages you want, tap Mark on the bottom and tap Mark as Read.
    The same can be done to mark messages as unread.
    However, when you have around 200 unread messages (could be 500 as well), a "mark all as read" option would not be too much to ask.

  • Why is bridge cs4 skipping files when batch renaming images?

    When batch renaming in Bridge, I select all the files and then start with the first image to rename them. Over the last few months, it has been randomly skipping images. It will also skip the number that image should have gotten, so typically we go through manually and rename the file so it will line up properly.
    Just wondering why this is happening all of a sudden when it worked perfectly for years!

    Sorry Patrick, I did not see the previews before; I only saw the thread via email, and it doesn't show the samples you provided.
    After seeing those files, it definitely looks like corrupted images. I cannot imagine Bridge causing this. Rereading your post, it looks more like the result of a badly functioning write/read process on the external HD or a bad USB cable.
    Assuming you have a backup, could you test this on either another HD or a DVD? Just copy the files again to those devices and retry. That would prove whether the fault lies with the external HD.
    Could you report back on that?

    Thanks for your tips. I tried, BUT it didn't solve my problem.
    Have you seen how the pictures are distorted?
    They seem to have gotten some kind of strange, alien-looking adjustments.

  • Batch reading won't work in 1:M relationships

    Hi all,
    I have two CMP 2.0 beans which are mapped with a 1:M relationship. On the
    parent's child collection I set batch reading. When I issue
    parent.getChildren(), TopLink issues as many SELECT statements as the
    number of children associated, like
    SELECT * FROM CHILD WHERE CHILD.CHILD_ID = 'child_id';
    instead of
    SELECT * FROM CHILD WHERE CHILD.PARENT_ID = 'parent_id';
    Is there anything wrong in my mapping?

    Yes, sounds like something isn't mapped properly, and this could explain the issue you've posted in another thread too.
    - Don
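
    For comparison, a minimal sketch of how such a 1:M mapping would normally be amended in code so that the generated SQL selects by the foreign key (table, column and attribute names are assumptions based on the SQL above; exact package names vary by TopLink version):

    import oracle.toplink.descriptors.ClassDescriptor;
    import oracle.toplink.mappings.OneToManyMapping;

    public class ParentDescriptorAmendment {
        public static void amend(ClassDescriptor descriptor) {
            OneToManyMapping children = (OneToManyMapping)
                descriptor.getMappingForAttributeName("children");
            // Pairing CHILD.PARENT_ID with PARENT.PARENT_ID is what makes
            // TopLink generate SELECT * FROM CHILD WHERE CHILD.PARENT_ID = ?
            children.addTargetForeignKeyFieldName(
                "CHILD.PARENT_ID", "PARENT.PARENT_ID");
            // Batch reading then folds the per-parent SELECTs into one query.
            children.useBatchReading();
        }
    }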

  • Batch Reading Versus Query Joining

    What is the difference between using ReadAllQuery.addJoinedAttribute and ReadAllQuery.addBatchReadAttribute? Did this change between 9.0.4 and 10.1.3? Our 9.0.4 code, which used addJoinedAttribute, fails in 10.1.3 with a NullPointerException; it works when we change it to use batch reading (which I believe is the recommended approach in any case). Should we use batch reading for read-only joins in future?

    Batch reading and joined reading are both optimizations for loading related objects. Joined reading requires related objects to be joined and loaded within the initial SQL statement, while batch reading loads related objects using a secondary SQL statement (still joining back to the initial query's table to apply its criteria).
    Both can be used in most situations, and each has different performance characteristics, so I always tell people to measure which mechanism works best for their situation.
    Typically joining works well on 1:1/M:1 relationships and batch reading is best applied to collections, but your results will be affected by the database.
    If you had a joining scenario that worked in 9.0.4, I would have expected it to continue working in future releases given no other changes to your model. I would recommend opening a support request or posting information about the scenario that results in the NullPointerException.
    Doug
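
    A minimal side-by-side sketch of the two APIs discussed above (the Order class and attribute names are hypothetical):

    import oracle.toplink.queryframework.ReadAllQuery;

    public class JoinVersusBatchSketch {
        public static void build() {
            // Joined reading: the related object is loaded within the
            // initial SQL statement; typically a good fit for 1:1/M:1.
            ReadAllQuery joined = new ReadAllQuery(Order.class);
            joined.addJoinedAttribute("customer");

            // Batch reading: related objects are loaded by one secondary
            // SQL statement joined back to the initial query's criteria;
            // typically a good fit for collections. Measure both against
            // your own database.
            ReadAllQuery batched = new ReadAllQuery(Order.class);
            batched.addBatchReadAttribute("lineItems");
        }
    }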

  • Compressor Lag when Batch Exporting

    I'm trying to send five 1080p ProRes422 HQ sequences from FCP to Compressor (FCP Studio 3) for batch exporting, but it produces a HUGE lag every time I try to apply an export setting to a sequence in Compressor. We're talking 3-5 minutes of processing before I can move forward.
    I simply selected the five sequences in the FCP bin and clicked "Send to Compressor."
    Is managing five HD sequences truly too much for Compressor to handle, or am I doing something wrong in the way that I am batch exporting?
    By the way, I'm well aware of the batch export feature built into FCP, but I have to transcode videos several times a week into 5-10 different formats and I'm sick of having to manually tune the settings of each export in FCP. I want to be able to take advantage of Compressor's custom export setting profile management.

    I didn't realize that I could add more than one export preset to an item in Compressor. So all I needed to do was send the native sequence to Compressor and apply all five of the export profiles to that one sequence.
    Sending five individual sequences definitely killed it though.

  • How to enable Order release to rejected status when batch derivation fails during release

    Hi,
    I need to enable production order release to rejected status when batch derivation fails during order release.
    One way is to set "Batch entry required" for the sender material in the material master; with this, standard SAP will reject the release.
    Below is the scenario:
    Sender attribute for derivation: VFDAT from the batch of the sender material.
    Sender material: batch entry not required in the material master.
    All required master data is set up for derivation.
    During order release, since batch entry is not required, the system carries out the derivation without a sender batch and the derivation fails. The order still gets released.
    I am looking for advice on how to restrict release in the above situation.
    Appreciate your help.
    Thanks
    Aheesh

    Aheesh, order release, for instance, is based on what we see as events/system statuses which are allowed/forbidden etc. for a particular action to happen or be triggered.
    If you can figure out whether there exists a status related to batch derivation, it may give some further leads.

  • Batch reading and BLOB

    Hi,
    I've got three DB tables A, B and C, with A related to B and B to C.
    Through A, I'd like to read attributes from B and C in one or two selects, hence I use batch reading. The problem is that B has a BLOB attribute and I get a (cast) exception when accessing B. Everything goes right if I do not use batch reading.
    Another issue: is it not possible to use partial and batch reading in the same query? When I do, I cannot read the object set in the batch reading attribute. It would make a bit of sense if I set only partial attributes, but I cannot set a relation as a partial attribute, can I?
    Thanks.

    We used query.dontUseDistinct(): the performance is catastrophic (1:1 mapping with a 1:M back mapping) because TopLink duplicates the BLOB across several copies in JVM memory.
    It is worse than not using addBatchAttribute at all!
    Is there a patch which corrects this problem?
    Thank you
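
    For reference, a minimal sketch of the partial-plus-batch combination being asked about (class, attribute and relation names are assumptions):

    import oracle.toplink.queryframework.ReadAllQuery;

    public class PartialPlusBatchSketch {
        public static ReadAllQuery buildQuery() {
            ReadAllQuery query = new ReadAllQuery(A.class);
            // Read only selected columns of A (partial object reading);
            // TopLink requires partial-object queries not to maintain the cache.
            query.addPartialAttribute("name");
            query.dontMaintainCache();
            // Batch-read the related B objects in one secondary SELECT; as
            // reported above, this combination may not behave as expected.
            query.addBatchReadAttribute("bs");
            return query;
        }
    }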
