Deleting large numbers of objects in TopLink v10.1.3

Let's say I need to delete as many as 1,000 instances of a given TopLink-mapped class, and that some unknown subset of those 1,000 instances is held in TopLink's shared cache (identity maps). The goal of the exercise is to delete all 1,000 instances from the database and have those deletions reflected in the shared cache, i.e. the deleted instances are no longer held there. Ideally, the instances to be deleted that are not already in the shared cache would never be hydrated at all.
It appears one alternative is to issue a ReadAllQuery against the class in question with the appropriate selection criteria and call checkCacheOnly() on that ReadAllQuery before executing it. This avoids hydrating objects not already held in the shared cache. The set of objects returned can then be passed directly to a DeleteAllQuery and, in the same transaction, a raw SQL query can delete all the other instances. This alternative appears to allow a race condition, though: one of the objects to be deleted could be hydrated by another thread while the overall delete operation is executing.
Another alternative could be to simply issue a raw SQL or stored-procedure query to delete all instances from the database, then invalidate the class in TopLink's identity maps. This alternative isn't viable, though, because the cost of rewarming the cache for the unaffected instances has been deemed too high.
Is there a better solution in the 10.1.3 product for this type of problem?

What you have described are the basic options for 10.1.3. A future release will include support for a DeleteAllQuery that addresses your problem more directly.
To address this problem in 10.1.3.1, I would suggest taking the raw SQL approach and then using an in-memory query to identify the objects that were deleted, marking just those as expired/invalid in the cache instead of invalidating all instances.
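A minimal sketch of that approach, assuming the 10.1.3 query and cache APIs (SQLCall, checkCacheOnly, IdentityMapAccessor.invalidateObject) and a hypothetical mapped class Widget with a status attribute; the class, attribute, and SQL string are placeholders, not part of the question:

import java.util.Iterator;
import java.util.List;
import oracle.toplink.expressions.ExpressionBuilder;
import oracle.toplink.queryframework.ReadAllQuery;
import oracle.toplink.queryframework.SQLCall;
import oracle.toplink.sessions.Session;

public class BulkDeleteSketch {
    public static void purgeObsolete(Session session) {
        // 1. Delete the rows directly in the database; nothing is hydrated.
        session.executeNonSelectingCall(
            new SQLCall("DELETE FROM WIDGET WHERE STATUS = 'OBSOLETE'"));

        // 2. Find only the instances already held in the shared cache.
        ReadAllQuery query = new ReadAllQuery(Widget.class);
        ExpressionBuilder widget = query.getExpressionBuilder();
        query.setSelectionCriteria(widget.get("status").equal("OBSOLETE"));
        query.checkCacheOnly(); // in-memory only; never touches the database

        List cached = (List) session.executeQuery(query);

        // 3. Invalidate just those instances in the identity maps.
        for (Iterator it = cached.iterator(); it.hasNext();) {
            session.getIdentityMapAccessor().invalidateObject(it.next());
        }
    }
}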
Doug

Similar Messages

  • How do I delete large numbers of photos on iOS 7

    How do I delete large numbers of photos in iOS 7? I haven't got my MacBook with me.

    In the Photos app you can select all photos in a "Moment" at once by pressing "Select" to the right of the Moment's name, then pressing the Trash icon.

  • How to delete large numbers of photos from the Camera Roll on an iPhone 4S

    I want to delete large numbers of photos from the Camera Roll. Do I have to do it one at a time?

    If you do it on the phone, yes - one by one.
    If it is really many photos: connect the phone to your computer, open Image Capture, and import all your photos to a folder of your choice, ticking 'delete after import'. Import. Now your photos are gone from your phone and are inside the destination folder you selected. Delete them from there too, if you don't want the copies. Done.

  • Is there a way to delete large numbers of emails all at once?

    I receive large numbers of emails because of the discussion lists I'm on, but sometimes I need to delete most of them for lack of time. There can be over 500 in a couple of days. I'd like to be able to do one simple thing that deletes all of this old mail at once. Is there such a way?
    Scotty

    If your email account supports it, you can tap the Trash icon, tap Edit, and a Delete All option may let you delete everything at once. I have a Comcast mail account that allows me to do this; I also have two AOL accounts that do not support it.
    Try your mail account and see if the Delete All option comes up at the bottom of the window: Account Name > Trash > Edit, then look in the lower left corner of the window.

  • Deleting large numbers of photos

    I shot a high school football game last night with my D3. Using 9 fps, I quickly ended up with lots of pictures to sift through. Being very selective to print only the best, I am throwing away 90% of what I now have in Lightroom 2. I have a slow old computer and it takes the better part of a minute to delete one picture. I know how to do one picture at a time but there must be a better way. How can I mark/select large groups of pictures to delete so I don't have to wait for "loading" to finish on each individual picture?

    Hi, I mark the photos I want to delete by pressing "X", which flags them as Rejected, and then, once done, I press Ctrl+Backspace (on Windows; not sure whether it's the same on a Mac, or Command+Backspace or something) and they all get deleted. (E.g., the last shoot was our California trip, 800 photos: I quickly went through the batch and deleted once at the end.)
    Hope this helps,
    Kai

  • How do I delete large numbers of photos from library and hard drive without clicking on each one individually?

    For instance, by dragging across them like you would to select a block of type. Or any other way. I'm editing about 5,000 images.

    The short answer is: the same way you select multiple files on the computer system of your choice. On Windows, you click on the first image, then hold down the Shift key and click the last image; if they don't run consecutively, click the first, hold down the Ctrl key, and click the individual files. Once that is done, hit the Delete key and select whether to delete from just the catalog or also remove from disk.
    On a Mac it is the same, except for the modifier key used for selecting non-consecutive images, and there isn't really a Delete key on a Mac; IIRC you hold down the Shift or Command key and hit the Backspace key, which is labeled Delete.

  • I have a large Numbers spreadsheet and I need to delete a lot of blank rows. Can I do this all at once, or do they have to be deleted one at a time?

    I have a large Numbers spreadsheet that I inherited, and I need to delete a lot of rows. Can this be accomplished collectively, or does each row have to be deleted one by one?

    You can delete multiple rows at once.
    Hold down the Command key and select the rows (which do not have to be contiguous) by clicking the row numbers on the left, then choose Delete Selected Rows from a selected row's contextual menu.
    SG

  • Business Partner records with large numbers of addresses -- Move-in issue

    Friends,
    Our recent CCS implementation (ECC 6.0 EhP3 & CRM 2007) included the creation of some Business Partner records with large numbers of addresses. Most of these are associated with housing authorities, large developers, and large apartment-complex owners. Some of the Business Partners have over 1,000 address records, and one particular BP has over 6,000 addresses that were migrated from our legacy system. We are experiencing very long run times when executing move-ins and move-outs because the system reads the full volume of addresses attached to the Business Partner; in many cases, the system simply times out before it can execute the transaction. SAP's suggestion is that we run a BAPI to cleanse the addresses and also implement a BAdI to prevent the creation of excess addresses.
    Two questions surround the implementation of this code. First, will the BAPI that cleanses the addresses wipe out all address records except the standard address? That presents an issue: we need to ensure that the standard address on the BP record is the one we have identified as the proper mailing address. The second question is about the BAdI that prevents the creation of excess addresses. It looks like this BAdI will prevent the move-in address from updating the standard address on the BP record, which in the vast majority of cases is exactly what we would want.
    Does anyone have any experience with this situation of excess BP addresses? How did you handle the manipulation and cleansing of the data, and how do you maintain it going forward?
    Our solution is ECC 6.0 EhP3 with CRM 2007, at the latest patch level.
    Specifically, SAP suggested we apply/review these notes:
    Note 1249787 - Performance problem during move-in with huge addresses
    ** applied this; it did not help
    Note 861528 - Performance in move-in for partner w/ large no of addresses
    ** an older IS-U 4.7 note
    Directly from our SAP message: "Use the function module BAPI_BUPA_ADDRESS_REMOVE or run BAPI_ISUPARTNER_CHANGE to delete unnecessary business partner addresses. Use BAdI ISU_MOVEIN_CUSTOMIZE to avoid the creation of unnecessary business partner addresses (cf. note 706686) in the future for that business partner."
    Note 706686 - Move-in: Avoid unnecessary business partner addresses
    Does anyone have any suggestions? Have you used the above notes/FMs to resolve something like this?
    Thanks,
    Nick

    Nick:
    One thing to understand is that the BAdI and the BAPI are just the tools or mechanisms that will enable you to fix this situation. You or your development team will need to define the rules under which these tools are used. Let's take them one at a time.
    BAPI - the BAPI for business partner address maintenance. It would seem that you need to create a program that first reads the partners and the addresses assigned to them, and then compares those addresses to each other to find duplicates. The duplicates can then be removed, provided they are not used elsewhere in the system (i.e. on a contract account); see the sketch after the next paragraph.
    BADI - the BAdI for business partner address maintenance. Here you would need to identify the particular scenarios in which addresses should not be copied. I would expect that most move-ins would meet the criteria of adding the address and changing the standard address. But for some, i.e. landlords or housing complexes, you might not add an address because it already exists for the business partner, and you might not change the standard address because those accounts do not fall under that scenario. This will take some thinking and design to ensure that the address add/change functions are executed under the right circumstances.
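    To illustrate just the duplicate-detection rule above (not the BAPI call itself), here is a rough sketch, written in Java for concreteness; in a real system this would be an ABAP report feeding BAPI_BUPA_ADDRESS_REMOVE, and the Address fields shown are assumptions:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Locale;
    import java.util.Map;
    import java.util.Set;

    public class AddressDedup {
        record Address(String id, String street, String city, String postalCode) {}

        // Returns the duplicates that are safe to remove: any address whose
        // significant fields match an earlier one, unless it is referenced
        // elsewhere (e.g. by a contract account).
        static List<Address> removableDuplicates(List<Address> addresses,
                                                 Set<String> usedElsewhere) {
            Map<String, Address> firstSeen = new LinkedHashMap<>();
            List<Address> removable = new ArrayList<>();
            for (Address a : addresses) {
                String key = (a.street() + "|" + a.city() + "|" + a.postalCode())
                        .toUpperCase(Locale.ROOT);
                if (firstSeen.putIfAbsent(key, a) != null
                        && !usedElsewhere.contains(a.id())) {
                    removable.add(a);
                }
            }
            return removable;
        }
    }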
    regards,
    bill.

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts, in dozens of folders, using up nearly 3 GB of disk space.
    Things are starting to drag, particularly when it comes to opening folders.
    I suspect the main problem is the folders holding large numbers of mails; the slowest have maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mail?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a lot of time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes, say, to divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of Mail to a database where they are still searchable but don't weigh down Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index (Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format).
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.

  • Procedure to Delete Serial Numbers

    I understand that I have to follow the procedure below. In order to delete serial numbers I have to do the following steps:
    + delete/replace the serial numbers from the sales order
    + set the deletion flag in IQ02
    + archive the serial number history using archiving object PM_OBJLIST
    + archive the serial number using archiving object PM_EQUI
    Please tell me the transaction code or menu path for archiving the history and the serial number.

    Hi Friends,
    I recently had the same issue (Materials Management). I was not able to delete the serial number, but I was able to keep the business process working.
    Issue: we had issued materials with serial numbers to a vendor for subcontracting, but when doing the GR for the finished materials the system should not ask for serial number entry for the finished product, as we do not maintain serial numbers for finished products.
    We found that in the material master the serial number profile had been assigned wrongly by the users.
    Resolution:
    We cannot delete serial numbers once they have been created. Instead, create a new profile with only the MMSL (GR/GI) procedure and serial number usage 01.
    Switch to the newly created serial profile and post the goods; this resolved the issue.
    Hope it will be useful
    Thanks
    Nazer

  • How do I delete old Numbers databases?

    I've had to sort a subgroup from a large group, and now I can't delete the copy that I made of the subgroup. If I drag the files to the Trash, I can't empty the Trash.
    Any help would be appreciated.  Pat

    "I noticed you can't delete any Numbers database that you drag to trash."
    Hi Pat,
    I can, and just did. I emptied my Trash, which contained several Numbers documents, and the action was completed without incident.
    After doing so, I realized there was a good possibility that none of these documents had been dragged to the Trash (my usual practice has been to use Command-Delete to move items to the Trash). So I immediately opened an existing Numbers document, did a Save As to save it under a second name, then dragged the new file to the Trash and tried to empty the Trash.
    That action (or the previous one, moving the file to the Trash) produced an error message stating the action could not be performed because Numbers was using the file.
    I closed the file in Numbers and tried again. This time there was no message, and the file (which had been dragged to the Trash) was immediately deleted.
    This seems to be a Finder issue more than a Numbers one. I would suggest posting it as a question in the OS X community for the OS X version installed on your iMac G5.
    Include a detailed account of the actions you took and the response to each action by the computer.
    To get to the correct community, click Apple Support Communities at the top of this page, then scroll down to the Mac OS and System Software section and choose the OS X version link applicable to your computer. If you don't know which OS X version you are using, click the Apple menu at the left end of the menu bar and choose the first item, About This Mac. You'll see the version number on the splash screen that opens.
    Good luck!
    Regards,
    Barry

  • Using very long/large numbers

    Hi, I want to know how to "store" and use very large numbers.
    For example, say I had:
    double n = 1.23456789101112131415;
    or:
    double n = 123456789101112131415;
    I know that they are too big for a double, so how would I be able to store them; and, even more importantly, round them (preferably using Math.round) to 15 decimal places?
    i.e. tell it to do this:
    double n = 1.23456789101112131415;
    double number = Math.round(n * 1000000000000000.0) / 1000000000000000.0;
    System.out.println(number);

    As much as your "advice" helps, the java docs provide only methods for the BigDecimal/Integer objects. They don't show complete syntax, and don't contain examples. Fortunately, I have avoided the "35-years-old-and-still-living-in-my-mother's-basement-and-aren't-even-professional-programmers" path and have enough of a life that I try not to spend all day reading about Java syntax.

    I don't believe this. Morgalr gave you genuine help. You spent more time criticizing him than conducting a very simple Google search such as "BigInteger" + "example". I would suggest a little attitude adjustment if you still want people to help you next time.

  • Log of deleted Personnel numbers

    Hi All,
    Can someone help me find the log of deleted personnel numbers in the SAP system?
    Thanks,
    Nitu Kumari

    Choose transaction SLG1; the Evaluate Application Log screen appears.
    In the Object field, enter HRPU.
    In the Time Restriction group box, enter dates and times to determine the period you want to check.
    Choose Program → Execute.
    A list of payroll results deleted during the specified period is displayed. The list shows the deletion date and the administrator who deleted each payroll result.
    Select an entry from the list.
    Choose Goto → Display Messages.

  • DBIF_RSQL_SQL_ERROR while deleting large volumes of data

    SAP experts:
    I am experiencing an issue with a data deletion. I am trying to delete a large volume of HR data using two programs, RPUDELPN and RPUAUDDL. As soon as I start running the first program, it dies within a few minutes, giving me a dump. Below is the dump extract for your reference. I checked several notes and forum postings, which mention increasing the ROLL segments; I increased the PSAPROLL tablespace and made it AUTO, but this still didn't resolve my problem.
    I have been stuck here for the last 2 days. Please respond ASAP.
    Thanks
    Hamendra Patel
    ABAP dump extract:
    Runtime Error: DBIF_RSQL_SQL_ERROR
    Exception: CX_SY_OPEN_SQL_DB
    Occurred on 2006/10/15 at 00:06:28
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer termination messages, especially those beyond their normal deletion date.
    Error analysis:
    An exception occurred, which is dealt with in more detail below. The exception, which is assigned to the class 'CX_SY_OPEN_SQL_DB', was neither caught nor passed along using a RAISING clause in the procedure "DELETE_POSTINGS" "(FORM)". Since the caller of the procedure could not have expected this exception to occur, the running program was terminated. The reason for the exception is that this error arises when one of the following errors occurs:
    1. ORACLE storage space request failed (ORACLE error 1547). To execute this operation, the ORACLE instance needs more storage space for a segment. This storage is managed by ORACLE in logical units known as extents. The size of an extent depends on the segment definition and the current storage situation. One segment (possibly more) is stored in one tablespace.
    How to correct the error:
    The exception must either be prevented, caught within the procedure "DELETE_POSTINGS" "(FORM)", or declared in the procedure's RAISING clause. To prevent the exception, note the following:
    Database error text: "ORA-01562: failed to extend rollback segment number 3 # ORA-01628: max # extents (60) reached for rollback segment PRS_2"
    Depending on the error, proceed as follows:
    1. ORACLE storage space request failed (ORACLE error 1547). The only way to resolve this error is to ask the database administrator to allocate more memory space for the table.
    Information on where terminated:
    The termination occurred in the ABAP program "RPUDELPN" in "DELETE_POSTINGS". The main program was "RPUDELPN". The termination occurred in line 933 of the source code of the (Include) program "RPUDELPN" (when calling the editor 9330). Processing was terminated because the exception "CX_SY_OPEN_SQL_DB" occurred in the procedure "DELETE_POSTINGS" "(FORM)" but was neither handled locally nor declared in the RAISING clause of the procedure. The procedure is in the program "RPUDELPN"; its source code starts in line 923 of the (Include) program "RPUDELPN".

    Hi Hamendra,
    The important part of this error message is these lines:
    "ORA-01562: failed to extend rollback segment number 3
    ORA-01628: max # extents (60) reached for rollback segment PRS_2"
    They give us the following information:
    1. You are not using the highly recommended automatic undo management (AUM) of Oracle 9 and above. If the DB is not still running on Oracle 8, then: SWITCH TO AUM right now. This error is GONE afterwards.
    2. PSAPROLL is configured as a dictionary-managed tablespace, and the rollback segment is only allowed to have a maximum of 60 extents. This is fairly misconfigured, or better, unconfigured.
    If there's no way for you to use AUM, then, as a short-term resolution, you should think about:
    a) altering the MAXEXTENTS of the rollback segments to a higher value, say 500;
    b) using a single BIG rollback segment for your big deletion report, though this would probably imply more DB work.
    So keep it simple and effective: use AUM (there's a pretty good SAP note about it).
    KR Lars

  • What is the best practice for deleting large amounts of records?

    hi,
    I need your suggestions on best practices for regularly deleting large numbers of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database size from growing too fast, I need a way to remove all records older than 3 days, every day.
    For on-premise SQL Server I can use SQL Server Agent jobs but, since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete the old records.
    To prevent table locking when deleting too many records at once, my web job code limits each run of the delete stored procedure to 5,000 records, deleted in batches of 1,000:
    1. Get the total count of old records (older than 3 days).
    2. Get the total iterations: iterations = total count / 5000.
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
    CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
    AS
    BEGIN
     DECLARE @table TABLE (RecordId INT PRIMARY KEY)
     INSERT INTO @table
     SELECT TOP (@MaxCount) [RecordId] FROM [MyTable]
     WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     DECLARE @RowsDeleted INT
     SET @RowsDeleted = 1
     WHILE (@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01' -- brief pause so each batch yields to other work
      DELETE TOP (@BatchCount) FROM [MyTable]
      WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5,000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach.
    Assume:
    1. There is an index on 'createtime'.
    2. Peak-time inserts (avgN) are N times the average (avg); e.g., if the average per hour is 10,000 and peak time is 5 times that, peak is 50,000 per hour. This doesn't have to be precise.
    3. The desired maximum number of records deleted per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that, on iteration i (i from 1 to 1,000), deletes records with creation time < (today - 3 days) - (4,320 - 4.32 * i) minutes, so the cutoff advances by 4.32 minutes per batch until it reaches the 3-day boundary; a sketch follows below.
    In this way the number of records deleted in each batch is uneven and not known in advance, but it should mostly stay within 5,000; and even though you run a lot more batches, each batch will be very fast.
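    As a sketch of what that loop might look like in web-job code, here in Java over JDBC (the connection string, table name, and column name are assumptions carried over from the thread, not something specified in the answer):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class RollingPurge {
        private static final int ITERATIONS = 1000;            // step 2: 200 * 5
        private static final double STEP_MINUTES = 4.32;       // step 3: 4320 / 1000
        private static final int WINDOW_MINUTES = 3 * 24 * 60; // 3 days = 4,320 minutes

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(System.getenv("AZURE_SQL_URL"))) {
                String sql = "DELETE FROM [MyTable] WHERE [CreateTime] < "
                           + "DATEADD(MINUTE, ?, DATEADD(DAY, -3, GETDATE()))";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 1; i <= ITERATIONS; i++) {
                        // The cutoff starts ~3 days behind the purge boundary and
                        // advances by ~4.32 minutes per batch, so each DELETE only
                        // touches the rows inserted during one small time slice.
                        int offsetMinutes = (int) Math.round(i * STEP_MINUTES) - WINDOW_MINUTES;
                        ps.setInt(1, offsetMinutes);
                        ps.executeUpdate();
                    }
                }
            }
        }
    }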
    Frank
