Best practice when doing large cascading updates

Hello all
I am looking for some help with tackling a fairly large cascading update.
I have an object tree that needs to be merged using JPA and TopLink.
Each update consists of 5-10,000 objects, with a decent depth as well.
Can anyone give me some pointers/hints towards a best practice for doing this? Looping through each object with JPA's merge takes minutes to complete, so I would rather not do that.
I have never actually used TopLink's own API before, so I am especially interested in whether TopLink has an effective way of handling this, preferably with a link to some related reading material.
Note that I have a somewhat duplicate question at the link below (noting this for good forum practice):
http://stackoverflow.com/questions/14235577/how-to-execute-a-cascading-jpa-toplink-batch-update
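For what it's worth, a common JPA-level pattern (not TopLink-specific) is to merge in chunks and periodically flush and clear the persistence context so the session never holds thousands of managed objects at once. A minimal sketch, assuming a resource-local EntityManager and a hypothetical list of root entities; the batch size and class names are illustrative:

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import java.util.List;

    public class BatchMerger {

        private static final int BATCH_SIZE = 500; // tune to match the JDBC batch size

        // Merges the object roots in chunks; flush() pushes the pending SQL and
        // clear() detaches the managed copies so memory stays flat.
        public static void mergeAll(EntityManagerFactory emf, List<?> roots) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                int count = 0;
                for (Object root : roots) {
                    em.merge(root); // cascades per the mapping's CascadeType.MERGE settings
                    if (++count % BATCH_SIZE == 0) {
                        em.flush();
                        em.clear();
                    }
                }
                em.getTransaction().commit();
            } finally {
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
                em.close();
            }
        }
    }

On its own this mostly bounds memory; to actually cut the runtime you usually also need the provider to batch the JDBC statements (TopLink/EclipseLink expose a persistence-unit property for JDBC batch writing - check the documentation for the exact property name in your version).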

Not certain what you think you can't do. Take a long clip and open it in the Viewer. Set In and Out points. Drop that into the Timeline. Now you can move along in the Viewer clip, set new Ins and Outs, and drop those into the Timeline. Clips in the Timeline are created from the Ins and Outs you set in the Viewer.
Is that what you want to do? If it is, I don't see where making copies of the clip would work for you.
Later, if you want to match up a clip in the Timeline to that master clip, just use Match Clip (find) in the Timeline to find where it correlates to your main clip.
You can have FCE automatically create subclips at camera cut points by using DV Start/Stop Detect, if that is what you're looking for.

Similar Messages

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on best practices for regularly deleting a large amount of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove, every day, all records that are older than 3 days.
    For on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting too many records at once, my web job code limits the number of deleted records to 5000 per iteration and the batch delete count to 1000 each time it calls the purge stored procedure:
    1. Get the total amount of old records (older than 3 days)
    2. Get the total iterations: iterations = (total count / 5000)
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      -- Collect the ids of up to @MaxCount records older than 3 days
      DECLARE @table TABLE ([RecordId] BIGINT PRIMARY KEY)
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
      -- Delete them @BatchCount rows at a time to keep locks short
      DECLARE @RowsDeleted INTEGER
      SET @RowsDeleted = 1
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01'
       DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
       SET @RowsDeleted = @@ROWCOUNT
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times the average (avg). E.g., if the average per hour is 10,000 and peak time is 5 times that, the peak is 50,000 per hour. This doesn't have to be precise.
    3. The desired maximum number of records to delete per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes per batch.
    4. Create a delete statement and a loop that, on iteration I (I = 1 to 1,000), deletes records with creation time < (today - 3 days) - (4,320 - 4.32 * I) minutes, so the cutoff starts near the oldest records and advances by 4.32 minutes per iteration until it reaches the 3-day boundary.
    In this way the number of records deleted in each batch is uneven and not known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch will be very fast.
    Frank
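    To make the time-slicing concrete, here is a rough sketch in Java/JDBC (illustrative only - the actual web job may well be written in C#; the table and column names follow the post, the connection string is a placeholder, and an index on [CreateTime] is assumed):

     import java.sql.Connection;
     import java.sql.DriverManager;
     import java.sql.PreparedStatement;
     import java.sql.Timestamp;
     import java.time.Duration;
     import java.time.Instant;

     public class PurgeJob {
         public static void main(String[] args) throws Exception {
             int iterations = 1000;                                   // step 2: 200 batches * peak factor 5
             Duration slice = Duration.ofMillis(259_200);             // step 3: ~4.32 minutes per batch
             Instant finalCutoff = Instant.now().minus(Duration.ofDays(3)); // never delete newer rows
             Instant start = finalCutoff.minus(slice.multipliedBy(iterations));

             try (Connection con = DriverManager.getConnection("<sql-azure-connection-string>");
                  PreparedStatement ps = con.prepareStatement(
                          "DELETE FROM [MyTable] WHERE [CreateTime] < ?")) {
                 for (int i = 1; i <= iterations; i++) {
                     // Advance the cutoff a few minutes at a time so each DELETE
                     // only touches a small slice of the backlog.
                     Instant cutoff = start.plus(slice.multipliedBy(i));
                     if (cutoff.isAfter(finalCutoff)) {
                         cutoff = finalCutoff;
                     }
                     ps.setTimestamp(1, Timestamp.from(cutoff));
                     ps.executeUpdate();
                 }
             }
         }
     }

    Each DELETE then covers a bounded time window, so it should finish quickly and release its locks instead of working through the whole backlog in one statement.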

  • Best practices when carry forward for audit adjustments

    Dear experts,
    I would like to know if someone can share his best practices when performing carry forward for audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for December period, XXXX.DEC and XXXX.AUD
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now assume that we are in October and, for whatever reason, an accountant runs a carry forward for February: he is going to impact the opening balance, because at this time (October) we already have the audit adjustments.
    How can we avoid such a thing? What are the best practices in this case?
    I guess it is something you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette

    Cookman and I have been arguing about this since the paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. Can't tell you how many times a client has said "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, and adding "wear and tear on the tape and the deck." And then there's the moment where you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan through material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline, you can learn the material just as thoroughly once it's been captured.

  • Best practices when making service requests

    Best practices when making service requests
    We've been working on moving our old services that were built with a different service request tool into RequestCenter, and we were wondering if anyone had any thoughts about standards or best practices for the new forms that they would be willing to share.  For example, one such standard might be that the customer/initiator information will always be displayed at the top of the request.
    Are there any other standardizations you could share that help lend consistency and provide improved readability for request forms?  Maybe someone has a design framework guide they would be willing to share?
    Thanks!
    Tim

    Thanks for the comments and the book suggestion.
    We've been placing the customer information at the top because we wanted the customer to review the information before submitting the form.  Our LDAP data is somewhat spotty and we want to make sure we have the right information when the form is submitted, but I can see the advantages of placing it at the bottom as well.  I'll have to think that over more.
    Does anyone find that certain fields work better than others?  For example, we've not had much

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per
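    For illustration only, the "startup class" idea could be as small as a holder that keeps the NamedCache reference alive for the life of the server rather than the life of a single EJB (the cache name and class below are made up, and whether GC of the handle is really what makes the data vanish is still an open question in this thread):

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;

     // Long-lived holder so the cache handle stays strongly reachable for the
     // lifetime of the app server instead of the lifetime of one EJB.
     public final class CacheHolder {

         private static NamedCache cache;

         private CacheHolder() {
         }

         public static synchronized NamedCache getCache() {
             if (cache == null) {
                 cache = CacheFactory.getCache("sample-cache"); // illustrative cache name
             }
             return cache;
         }
     }

    The EJB would then call CacheHolder.getCache() instead of obtaining (and implicitly owning) the cache handle itself.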

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial/error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is the best practice when modifying an SAP Standard Development Component (Java Web Dynpro)? I'm looking for the best method of modifying an SAP Standard DC so that my changes will be kept (or need low maintenance) after a new service package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
    'How to use Business Packages in Enterprise Portal 6.0' is available at this link.
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check it out for the best practices.
    Regards,
    Harini S

  • Books / links on best practices when writing on-line Help

    Hi everyone
    Not sure where to place this topic...
    I have not posted in here for ages...
    I am a RoboHelp user and I am looking for one or several books about best practices when writing on-line help. For example, what are the "rules" or "do's" and "don'ts" for CSS, topic linking, number of clicks, links within a topic, index building, etc.
    Just wondering if some people on this forum know about some good books where all of the rules or do's would be compiled?
    Thanks in advance for any input.
    Regards

    Keep It Simple, Stupid!
    That is, just because there are neat things like drop-down text, marquees, and such, doesn't mean you should use them.
    Stick to the basic HTML fonts and colors (use the w3schools web site for all things HTML and CSS).
    Instead of styles, create your lists by selecting Normal paragraphs and formatting with the Bullet and Number toolbar buttons.
    Keep your tables as simple as possible (try not to nest them and have all sorts of row and column spans, and try to avoid lists and figures, if you can). Also, break up very long tables into functional groupings with introductory headings.
    Use Peter Grainge's web site and Rick Stone's web site for all the best workarounds and diagnostics.
    Good luck,
    Leon

  • A must read best practices when starting out in Designer

    Hi,
    Here is a link to a blog by Vishal Gupta on best practices when developing XFA Forms.
    http://www.adobe.com/devnet/livecycle/articles/best-practices-xfa-forms.html
    Please go read it now; it is excellent :-)
    Niall

    I followed the two links below. I think it should be the same even though the links describe 2008 R2 migration steps.
    http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
    http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
    Hope this helps!

  • Memory consumption issues (when doing large batches of photos)

    I have a user who reports my plugin consumes memory until Lr/System is no longer operable, when doing large batches of photos.
    I have this type of problem too from time to time, but not always, and in the most recent case, *not* for the same operation my client is complaining about.
    This begs the question: is there a way to control how much memory is used, or to force it to be released, when running an operation on large batches of photos?
    Note: the operation is already concluding catalog transactions every 1000 photos (exits with-write function, and re-enters). My client reports Lr/System slowdown at about 4000 photos. He is running Lr3, Windows OS - system details not yet known.
    Rob

    Hey Rob
    Have you already tried John R. Ellis's idea of reducing the transaction size? I remember from another project that we had to limit the transaction size on a SQLite-based database due to memory problems.
    Maybe you are facing a different problem - I'm not sure how efficient Lua's garbage collection is; perhaps your code causes some kind of memory leak somewhere.
    Daniel

  • When does the new update come out?

    When does the new update for the OS come out on itunes?

    When does the new update for the OS come out on itunes?
    June 17, 2009.

  • Best practice when upstream does not provide version number

    I'm currently preparing a package for the game Dreamfall Chapters; unfortunately the developer does not (yet) provide a version number.
    What is the best practice for this? Just count the releases and set epoch when they finally provide one themselves?
    thanks in advance!

    If you are missing a version number, you could use the date. Prefix it with "r" in case you want to avoid epoch when upstream decides to start versioning.
    pkgver=r20140122
    If you have another release on the same day, increment another subversion
    pkgver=r20140122.1
    Counting the releases by hand is poor practice (you could miss one). Do it only if upstream provides the count.
    Edit: To tie the version stronger to the release, you can also add a part of the archive hash, e.g. +m###### with the first 6 characters of the md5sum.
    pkgver=r20140122+m23df1e
    Edit: r as prefix is better, thanks rumpelsepp.
    Last edited by progandy (2014-10-22 09:32:56)

  • DW best practices when needing to update data

    Hi all,
    I have a few general questions about data warehousing...
    We often need to update/process the data after we have imported it into the DW (with ETL tools). Since data in a DW is not supposed to be updated, I wonder what the correct way to do this is.
    The scenario is data coming from a few systems; we import it and transform it, then, when we want to run reports etc., we sometimes need to apply some changes to the data, for example adding a column with some result. Is the correct way to do that to add tables in the DW? Other systems use separate tables (inside or outside the DW) to transform the data... When is it worth creating separate tables outside the DW, and when should one create/add columns in the DW? What is allowed/best practice in a DW?
    Thanks,
    A.

    It is a view. That is what the error message is saying.
    Why not deal with facts instead of speculating? Look at the Oracle Data Dictionary and see what the object DPIT.DEDUCTIONS is.
    Use TOAD. Use SQL*Plus and select on ALL_OBJECTS. Use OEM. Etc.

  • Multiple room management -- best practice -- server side http api update?

    Hi Folks, 
    Some of the forum postings on multiple room management are over a year old now.  I have a student/tutor chat application which has been in the wild for 5 months now and appears to be working well.  There is a single tutor per room, multiple chats, and soon to be a whiteboard per student, which is shared with the tutor in a tabbed UI.
    It is now time to fill out the multiple tutor functionality, which I considered and researched when building, but did not come to any conclusions.   I'm leaning towards a server side implementation.  Is there an impending update to the http api?
    Here is what I understand to be the flow:
    1) server-side management of who is accessing the room
    2) load balancing and management of room access (one-time user and owner sessions) from the server side
    3) for my implementation, a tutor will need to log in to the room in order for it to be available
    4) any reconnection would in turn need to be managed by the server side, and is really a special case of room load balancing.
    My fear is that at some point I'm going to need access to the number of students in the room, or similar, and this is not available, so I'll need client-side functionality, which will need to update the server-side manager.
    As well, I'm concerned that delays in server-side access might create race conditions in a reconnect situation: a user attempts to reconnect, but the server-side manager thinks the user is already connected.
    Surely this simple room management has been built; does anyone have any wisdom they can impart?  Is there any best practice guidance or any samples?
    Thanks,
    Doug

    Hi Raff, Thanks a ton for the response.
    I wasn't clear on what I was calling load balancing.  What I mean by this is room assignment for student clients.  We have one tutor per room.  There are multiple students per room, but each is in their own one-on-one chat with the tutor.
    I'm very much struggling with where to do the room assignment / room management, on the server side or on the client side (if that is even possible).  In my testing it is taking upwards of 10 seconds minimum to get a list of rooms (4 virtually empty rooms) and to query the users in a single room (also a minimal number of users/nodes in the queried room).  If, after this point, I 'redirect' the student to the least full room, then the student incurs the cost of creating a new session and logging into the room.  As well, I intend to do a bit of XML parsing and other processing, so that 10 seconds is likely to grow.
    Would I see better performance trying to do this in the client?
    As far as the server side, at what point does a room go to 'not-active'?
    When I'm querying the roomList, I am considered one of the 'OWNER' users in the UserLists.  At what point can it be safe to assume that I have left the room? 
    Is there documentation on the meaning and lifecycle of the different status codes?  not-active,  not-running, and ok?  Are there others?
    How much staleness can I expect from the server-side queries?
    As far as feature set, the only thing that comes to mind is xpath and or wild card support for getNode() but i think this was mentioned in other posts.
    Regarding the reconnection issues, I am timing out the student after inactivity, and this is probably by and large the bulk of my reconnect use cases.  This, and any logout interaction from the student, presents a use case where I may want to reassign the returning student to the same room as before.  I can envision scenarios of a preferred tutor if available, etc.  In this case, I'll need to know the list of rooms.  In terms of reconnection failover, this is not an LCCS / FMS issue.
    Thanks again for responding.

  • Best practice when deleting from different tables simultaneously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting them as far as rowsets and datamodels are concerned. Are there "gotchas", like: do I delete the row in the foreign key table first?
    I am reading this thread: http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil
    is there a "best practice" method for

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application - you can automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user goes and explicitly deletes the children... it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master the deletes automatically cascade. I think this is something you typically declare when creating the FK constraint (delete cascade and update cascade rules).
    hth,
    v
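    For illustration, here is a minimal JDBC sketch of the first option - deleting the child rows before the parent inside one transaction (the table and column names are invented):

     import java.sql.Connection;
     import java.sql.PreparedStatement;
     import java.sql.SQLException;

     public class OrderDeleter {

         // Delete the rows holding the foreign key first, then the master row,
         // all in one transaction so the constraint is never violated.
         public static void deleteOrder(Connection con, long orderId) throws SQLException {
             boolean oldAutoCommit = con.getAutoCommit();
             con.setAutoCommit(false);
             try (PreparedStatement delChildren = con.prepareStatement(
                          "DELETE FROM order_items WHERE order_id = ?");
                  PreparedStatement delParent = con.prepareStatement(
                          "DELETE FROM orders WHERE order_id = ?")) {
                 delChildren.setLong(1, orderId);
                 delChildren.executeUpdate();
                 delParent.setLong(1, orderId);
                 delParent.executeUpdate();
                 con.commit();
             } catch (SQLException e) {
                 con.rollback();
                 throw e;
             } finally {
                 con.setAutoCommit(oldAutoCommit);
             }
         }
     }

    The database-level alternative mentioned above is to declare the constraint with ON DELETE CASCADE, in which case deleting the master row alone is enough.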

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension tables, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and I e.g. have a Period dimension pointing to all the different fact tables (different sources).
    Seems like the best solution to this is to have an alias of the fact/transaction table, so there are 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups in the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions?

    I'd recommend creating a view in the database. If it's an Oracle DB, materialised views would be a huge performance benefit; you just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic
