Best strategy for committing inserts/updates in groups of 100/200

Hi,
I have a batch program written in Java and I want to improve performance when writing to the database.
For 6,000 rows of inserts, I'd like to group the inserts in batches of 100/200 inside a Unit of Work, then commit.
- I believe I should use the useBatchWriting method to group the inserts.
- In my class, I declare a field for my UnitOfWork. Before beginning each group of 200 iterations, I acquire the Unit of Work; after the 200 iterations, I commit and then release it. I then reacquire it before the next group of 200 iterations, and so on. I don't know whether this is a good strategy, or whether I should instead release the UnitOfWork only once all the iterations have finished.
Thanks for your advice
Laurent

I applied what you suggested yesterday, but I still have problems:
One UnitOfWork... acquire it before the first 100 inserts
Commit
Reacquire the UnitOfWork without releasing the uow object
Next 100 inserts
Commit, etc...
It doesn't work... Java freezes.
I'm working with TopLink 9.0.3.4 on JDK 1.3.
So I went back to my first version: acquire a Unit of Work for each row, insert, commit, release... but the performance is not good at all.
I'd really like to batch the inserts in groups of 100/200 rows.
Laurent
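For what it's worth, the commit-every-N pattern being discussed can be sketched as below. This is a hedged sketch, not verified TopLink code: the TopLink calls (session.acquireUnitOfWork(), uow.registerObject(), uow.commit(), plus useBatchWriting() on the login at setup) appear only as comments, and the class and method names here are illustrative. The key point is that a fresh UnitOfWork is acquired for each batch rather than reusing a committed one (TopLink offers commitAndResume() if you want to keep working in the same unit of work), which may explain the freeze described above.

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchCommitter {
    /**
     * Processes items in groups of batchSize, invoking commitAction once per group.
     * Returns the number of commits performed.
     */
    public static <T> int process(List<T> items, int batchSize, Consumer<List<T>> commitAction) {
        int commits = 0;
        for (int start = 0; start < items.size(); start += batchSize) {
            int end = Math.min(start + batchSize, items.size());
            List<T> batch = items.subList(start, end);
            // With TopLink, acquire a FRESH unit of work here for each batch:
            //   UnitOfWork uow = session.acquireUnitOfWork();
            //   for (T item : batch) uow.registerObject(item);
            commitAction.accept(batch); // TopLink: uow.commit(); do not reuse a committed UoW
            commits++;
        }
        return commits;
    }
}
```

With 6,000 rows and a batch size of 200, this performs 30 commits instead of 6,000.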

Similar Messages

  • Best strategy for data definition handling while using OCCI

    Hi, subject says all.
    Assuming OCCI is used for data manipulation (e.g. object navigation), what's the best strategy for handling data definitions in the same context?
    I mean primarily dynamic creation of tables and columns.
    Apparent choices are:
    - SQL from OCCI.
    - use OCI in parallel.
    Did I miss anything ? Thanks for any suggestion.

    Agreeing with Kappy that your "secondary" backups should be made with a different app. You can use Disk Utility as he details, or a "cloning" app such as CarbonCopyCloner or SuperDuper that can update the clone periodically, rather than erasing and re-copying from scratch.
    [CarbonCopyCloner|http://www.bombich.com] is donationware; [SuperDuper|http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html] has a free version, but you need the paid one (about $30) to do updates instead of full replacements, or scheduling.
    If you do decide to make duplicate TM backups, you can do that. Just tell TM when you want to change disks (thus it's a good idea to give them different names). But there's no reason to erase and do full backups; after the first full backup to each drive, Time Machine will back up only the changes made since the last backup *to that disk*. Each is completely independent.

  • Best strategy for variable aggregate custom component in dataTable

    Hey group, I've got a question.
    I'd like to write a custom component to display a series of editable Things in a datatable, but the structure of each Thing will vary depending on what type of Thing it is. So, some Things will display radio button groups (with each radio button selecting a small set of additional input elements, so we have a vertical array radio buttons and beside each radio button, a small number of additional input elements), some will display text-entry fields, and so on.
    I'm wondering what the best strategy for tackling this sort of thing is. I'm sort of thinking I'll need to do something like dynamically add to the component tree in my custom component's encodeBegin(), and purge the extra (sub-) components in encodeEnd().
    Decoding will be a bit of a challenge, maybe.
    Or do I simply instantiate (via constructor calls, not createComponent()) the components I want and explicitly call their encode*() and decode() methods, without adding them to the view tree?
    To add to the fun of all this, I'm only just learning Faces (having gone through the Dudney book, Mastering JSF, and writing some simpler custom components) and I don't have experience with anything other than plain vanilla JSP. (No EJB, no Struts, no Tapestry, no spiffy VisualDevStudioWysiwyg++ [bah humbug, I'm an emacs user]). I'm using JSP 2.0, JSF 1.1_01, JBoss 4.0.1 and JDK 1.4.2. No, I won't upgrade to 1.5 (yet).
    Any hints, pointers to good sample code? I've looked at some of the sample code that came with the RI and I've tried to navigate the JSF Blueprints stuff, but I haven't really found anything on aggregating components into a custom component. Did I miss something obvious?
    If this isn't a good question, please let me know how I can sharpen it up a bit.
    Thanks.
    John.

    Hi,
    We're doing something very similar. I had a look at the Tomahawk Date component, and it seems to dynamically create InputText components in encodeEnd(). However, it doesn't decode these directly (it only expects a single textual value). I expect you may have to check the request yourself in decode().
    Other ideas would be appreciated, though - I'm still new to JSF.

  • What is the best strategy to save a file in client computer

    I want to save some information in a file on the client computer. What is the best strategy for doing this? There are a few approaches I can think of, but none of them is good enough for me.
    1. I granted all-permissions, so I can actually write whatever I want. But in order to make the program run on all platforms/client computers, I can't make any assumptions about the file system of the client computer. So this is no good.
    2. I could write a file into the .javaws directory. But how can I get the file path for this directory? The JNLP API does not give us this path, and I can't think of a way to get it for every client computer (Windows, Mac, Unix).
    3. Write it as a muffin. Seems fine. But I often change server and path, and once I change the server, the client will lose the saved file, since a muffin is associated with a server and path.
    4. I could just open a file with no path. I think J2SE will treat this file platform-dependently; for example, on W2K the file will be put on the Desktop. This is bad.
    Any better idea?

    In the past I have used the Properties class to do things like this. Using load and store, you can read and write key=value pairs.
    I store the file in the user.home directory. You can use System.getProperty("user.home") to get this location.
    No guarantees, but I thought that this user.home property was good for any OS with a home directory concept. If that turns out not to be true, maybe the system property java.io.tmpdir would be more consistent across platforms. This, of course, would be subject to deletion by the OS/administrators.
    -Dave
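    A minimal sketch of the Properties approach Dave describes; the file name and key below are placeholder examples, and the directory is passed in so it can be user.home as suggested (or java.io.tmpdir as the fallback):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ClientPrefs {
    /** Saves key=value pairs to a properties file in the given directory. */
    public static void save(Path dir, String fileName, Properties props) throws IOException {
        try (OutputStream out = Files.newOutputStream(dir.resolve(fileName))) {
            props.store(out, "application settings");
        }
    }

    /** Loads them back; returns an empty Properties if the file does not exist. */
    public static Properties load(Path dir, String fileName) throws IOException {
        Properties props = new Properties();
        Path file = dir.resolve(fileName);
        if (Files.exists(file)) {
            try (InputStream in = Files.newInputStream(file)) {
                props.load(in);
            }
        }
        return props;
    }
}
```

    Pointing the directory at Paths.get(System.getProperty("user.home")) gives the per-user location mentioned above.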

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and have captured roughly 230 GB so far. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it is difficult to actually read comments across dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long-form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites: do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes on subject matter. This last time I based them on location, because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice-over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long-form stuff; that way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot, I pull it out of its bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right: if you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times producers have walked into my edit suite with a bunch of raw tape and told me they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Best strategy for migrating GL 6 DW CS4?

    First I gotta say that -- as a decade-long user of several Adobe products -- Adobe has really screwed over long-time users of Adobe GoLive. After a week of messing around with something that should be a slam dunk (considering the substantial amount of $ I've paid to Adobe over the years for multiple copies of GoLive 5, 6 and CS2), I can tell you Adobe's GL > DW migration utility ONLY works for sites created FROM SCRATCH in GL CS2 (they must have the web-content, web-data folder structure). This means that Adobe's migration utility only works for maybe 10% - 15% of the GoLive sites out there, and about 10% - 15% of Adobe's loyal GoLive customers, and it particularly screws over Adobe's longtime GoLive customers. Sweet! (Just like Adobe screwed over users of PhotoStyler -- which was a better image editor than Photoshop, BTW.) Obviously, I would walk away from Adobe and make sure I never paid them another cent, but the Republican-Democrat "free market economy" has made sure I don't have that option.
    So I've gotta make Dreamweaver work, which means I've gotta migrate several large sites (the biggest has 5,000 static pages and is about 2 GB in size) from GoLive 6 (or GoLive CS2 with the older folder structure) to Dreamweaver CS4.
    Which brings me to my question -- what's the best strategy for doing this? Adobe's migration utility is a bad joke, so here's the alternative, real-world plan I've come up with. I'd appreciate any suggestions or comments...
    1) create copies of all my GL components in the content folders for my GL site (not in the GoLive components folder)
    2) replace the content of all GoLive components (in the GoLive components folder) with unique character strings, so that instead of a header with images and text, my old header would look something like xxxyyyzzz9
    3) create a new folder called astoni in the root of my hard drive. Copy all my GoLive web site content (HTML, images, SWF, etc.) into astoni in exactly the structure it was in with GL
    4) create a new Dreamweaver site by defining astoni as the local location for my site, astoni\images as the location for images, etc.
    5) use Dreamweaver and open the newly defined astoni site. Then open each of the GoLive components I copied into a content-level folder in astoni, and drag each into the Dreamweaver Assets/Library pane, in order to create library items just like my old GoLive components
    6) use Dreamweaver to Search & Replace the unique text strings like xxxyyyzzz9 with the content of my new DW library items
    7) refresh site view to make all the links hook up...
    Thanks for your help. Hope this discussion helps others too...
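    As an aside, step 6 could also be scripted outside Dreamweaver. A hypothetical sketch (the token xxxyyyzzz9 is from the plan above; the folder layout and replacement markup are placeholders) that swaps a placeholder string across all .html files under a folder:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PlaceholderSwap {
    /**
     * Replaces every occurrence of token with replacement in all .html files
     * under root. Returns the number of files changed.
     */
    public static int replaceAll(Path root, String token, String replacement) throws IOException {
        List<Path> pages;
        try (Stream<Path> stream = Files.walk(root)) {
            pages = stream.filter(p -> p.toString().endsWith(".html")).collect(Collectors.toList());
        }
        int changed = 0;
        for (Path file : pages) {
            String text = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            if (text.contains(token)) {
                Files.write(file, text.replace(token, replacement).getBytes(StandardCharsets.UTF_8));
                changed++;
            }
        }
        return changed;
    }
}
```

    Note this is a plain-text replacement, not a DW library item, so it trades Dreamweaver's link management for speed; it would make sense on a copy of the site only.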

    Instead of ragging on people who are familiar with DW and Site Development, you should be gracious and accept the practical advice you've been given.  A "best strategy" would be to read some tutorials and learn how to work with HTML, CSS and server-side technologies. Without some basic code skills, you're going to find DW an uphill, if not impossible battle.
    Frankly, I don't have free time to hand-hold someone through the excruciating process of migrating a 5,000 page static site from GoLive  to DW. And I doubt anyone else in this forum has either.  We're not Adobe employees.  We don't get paid to participate here.  We are all product users JUST LIKE YOU.
    I'm sorry you're frustrated.  I'm also sorry for your clients. But the problem you have now isn't Adobe's fault. It's yours for not keeping up with server-side technologies or handing-off these huge static sites to more capable web developers.  I'm not saying you need to buy anyone's books, but they are good resources for people willing to learn new things.
    That said, maybe you should stick with GoLive.  The software doesn't have an expiration date on it and will continue working long into the future.  If you're happy using GL, keep using it to maintain your legacy sites. At the same time learn X/HTML, CSS & PHP or ASP.  Use DW CS4 for the creation of new projects.
    FREE Tutorial Links:
    HTML & CSS Tutorials - http://w3schools.com/
    From Tables to CSS Web Design Part 1 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt1.html
    From Tables to CSS Web Design Part 2 -
    http://www.adobe.com/devnet/dreamweaver/articles/table_to_css_pt2.html
    Taking a Fireworks (or Photoshop) comp to a CSS-based layout in DW -
    http://www.adobe.com/devnet/fireworks/articles/web_standards_layouts_pt1.html
    Creating your first website in DW CS4 -
    http://www.adobe.com/devnet/dreamweaver/articles/first_cs4_website_pt1.html
    Guidance on when to use DW Templates, Library Items and SSIs -
    http://www.adobe.com/devnet/dreamweaver/articles/ssi_lbi_template.html
    Best of luck,
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    www.alt-web.com/
    www.twitter.com/altweb
    www.alt-web.blogspot.com

  • Best strategy to update to LR4.1

    With the final release of LR4.1 out, I have some cleaning up to do, since I still have LR3.6 installed.
    What would be the best strategy if you had both LR3.6 and LR4.1 RC2 installed?
    1) Uninstall LR3.6
    2) Install LR4.1 and let it replace LR4.1 RC2 automatically.
    Really, I'd appreciate it if Adobe disclosed more openly what the installer will do and what your different options are for an update.
    Also, after upgrading from LR3.6 to LR4.0, I made the mistake of allowing the catalogue to keep both a copy for Process 2010 and a copy for Process 2012. So almost everything got duplicated, which in the end was more of a hassle than anything else. I have a very big catalogue and don't want to chase everything down one by one manually. Suppose I make sure to write the metadata to all files and then create a new catalogue from scratch: would the new catalogue still include two or more versions of these files, or would I get rid of the duplicated Process 2010 versions? Or is there an easy way to select all those Process 2010 versions and remove them from the catalogue?
    Thanks in advance for sharing your suggestions or advice.

    You can leave everything where it is now. Here is what the installer says on the 2nd screen:
    Please note that if Lightroom 4 is already installed on your system, this installer will update it in-place. If the application has been renamed/moved after installation, its new name will be retained. For more detailed information, please see http://go.adobe.com/kb/ts_cpsid_92810_en-us
    That means, if you had LR4.1 RC2 installed, it will be replaced by LR4.1 final but the name stays the same. That's a bit confusing, but I suspect it was done to avoid broken application shortcuts, etc. If it bothers you, you can rename the application afterwards to "Adobe Photoshop Lightroom 4".

  • Best strategy for importing organized images

    Hello,
    I have recently started using Aperture in my photography workflow and I need some advice from the experts on the best strategy to import the existing .JPGs from my hard disk into my Aperture library.
    Prior to using Aperture, my workflow was as follows:
    1.) I would import ALL the photos from my digital camera to a folder on my hard disk (one folder for each date of import). These, in effect, were my 'master' images, which I never edited or deleted.
    2.) Next I would make a copy of each new master folder, under a separate folder called 'picks', where I would go thru and delete the bad ones and occasionally make some minor edits or crops to the good ones. These 'picks' folders are what I used for my slideshows, photo galleries, etc.
    NOW, as I'm moving everything to Aperture, I'm attempting to find the best strategy to import everything, where I would maintain both a master copy of everything that came off my camera and the already organized 'albums' of 'versions' that represent my 'picks', while also reducing the redundancy of having duplicate copies of 60-70% of the photos in my library.
    Here is where I need some advice. In the case of new photos imported directly from the camera into Aperture I have managed to figure out how to import the master images and then create albums of versions for my good pics. However, I haven't found any way to import my existing masters and picks without importing two copies of each "good" pick.
    Is there any way I can get Aperture to recognize that the photos in my 'picks' folders are actually 'versions' of some of the original 'masters' so I don't need to import two master copies of each of my good pics? (I generally left the file names the same.)
    I have several thousand existing pics on my hard disk that I'm hoping to import into Aperture so I'm hoping there is an efficient way to do this!
    Many thanks for the help!

    Hi, I would also be interested in how others do this. I have been playing with Aperture for some time and I am still not sure whether to commit to it.
    I also keep my masters in a folder structure, with the year at the highest level and the day of shooting below that.
    I know it's not necessary to do this in Aperture, but I am not sure I understand why, or whether it's a bad thing to do.
    Would love to hear what others think and advise.
    Thanks

  • Value Mapping - Best Strategy

    Hi,
    There are several strategies for value mapping:
    1) Fixed values (e.g. constants in graphical mapping)
    2) Value mapping on the Java stack via VM tables
    3) SAP tables created in SE11, with the data fetched via RFC calls from the Java stack
    Surely the best strategy is 3), even though it is not the most efficient, purely because it gives easy visibility of your integration data, e.g. via SM30 or a custom transaction to view integration value mappings. If anyone has a different view on this, I would like to hear from them, or possibly suggest an alternative.

    Hi,
    the best strategy depends on your requirements:
    - Fixed values can be used if the values never change
    - Value mapping with groups in the IB Directory is usually the standard
    - If you want to use the authorizations of the ERP backend, you can go for value mapping replication (more effort).
    - You can also use included text files, lookups, and whatever else can store data...
    Regards,
    Udo

  • Best strategy for burning dual layer DVD

    I've got a project that won't fit on a single-layer DVD, so I've got it set up for dual layer and "best quality", which results in a 7.38 GB disk image. However, iDVD 8 warned me when burning the disk image that some utilities may not be able to burn a reliable dual-layer copy and to use iDVD instead. Does this include Disk Utility?
    I always use Disk Utility to make copies, but iDVD took almost 7 hours to burn the disk image, and I can't afford to wait that long for each copy. So what's the best strategy for burning dual-layer DVDs?

    Subsequent copies burn in far less time immediately after the first, provided you don't quit iDVD.
    If you know ahead of time that you'll need more than a few copies, I'd recommend burning from a disc image on the desktop and reducing the burn speed (4x or lower). I prefer to use Roxio Toast myself, but others have had success with Apple's Disk Utility as well.

  • Commitment item group not displayed as level node when generate report FM

    Hi all,
    We created a commitment item group or funds centre group through a tcode, i.e. FM_SETS_FIPEX1. We created the group with lower-level nodes, i.e. as a hierarchy.
    However, when we generate the standard report with the commitment item group or funds centre group, the figures show no hierarchy. Just flat: no hierarchies, no level nodes.
    How can we make it display as a commitment item/funds centre group hierarchy?
    Is this maintained in the drilldown report, or somewhere else?
    Dewi

    Dewi,
    For report painter reports you can create groups using transaction FMRP_CI_SET_HIER - Create Group Hierarchy from Master Data Hierarchy or create sets using GS01, with them you can view the reports with the master data hierarchy.
    Regards
    César

  • Best way to make managed servers start up on machine startup?

    I had thought that if I used NodeManager to start servers from the admin console, if I rebooted the computer, and my RC scripts brought up the NodeManager (and AdminServer), that the NodeManager would note that the managed server had been in a running state before, and that it should be automatically restarted.
    In my testing, it doesn't do that. I filed a support case to ask about this, and they concurred, saying that the NodeManager won't do that for me.
    So, is the best strategy for starting up managed servers at system startup to write a WLST script that runs after NodeManager startup and tells the NodeManager to start the managed servers? Or is there a way to make the NodeManager start the managed servers automatically?

    If it matters to anyone, I tried writing a WLST script taken directly from examples that connects to the NodeManager and starts up a managed server, but it didn't work (I don't have the text of the error anymore). However, a similar version that connects to the AdminServer worked perfectly fine.
    At this point, however, I'm not sure that I want to do this anyway. After a server restart, it's good to have the NodeManager and AdminServer automatically start, but I'm examining whether the ManagedServer startup should be automatic. I'll have to think about it.

  • Mail 6.2 Can I make emails NOT group same senders emails together? Keeping read

    Mail 6.2 has major issues for business users. Can we make Mail NOT group previous emails from the same sender together? (You know, where it lists all the previous ones underneath.) I have a number of customers who send us multiple emails per day. Because of the grouping, there are many times I can't tell what was read or not read. For instance, if I mark an email "Unread" to work on it later, that action just marked every email in that group "Unread". Now I have the task of going through the last 20 emails and checking what I already printed/read from that sender. It's constantly double work and confusion. In my office, four of us with iMacs are all having the exact same issue in Mail 6.2. So what gives? There must be a way to stop previous emails from being grouped under the most recent email from a sender. I am grateful for any assistance, as Google and Bing are not giving me anything useful.
    Thank you and best regards,
    JDM

    SOLVED, from another wonderful post! I am just sick at how easy it was. In the "View" menu there is an option called "Organize by Conversation". Simply uncheck this box and BAM, it's perfect: just how we need our email set up, as individual emails. Wonderful.

  • How do I make custom tab groups in Safari Yosemite?

    I know that it automatically groups tabs together from the same domain, but I read somewhere that I can make my own groups; however, I cannot find the option.
    Thanks

    Just go to "Favorites" on the Safari top menu, then "Show Favorites", and the first group you see (with a star icon) is what shows up in the Favorites view when you open a new tab. Just right-click and make a new folder. Give your folder a name and put all the links that you need in that folder, and when you open a new tab you will see them as a group.

  • I am moving from PC to Mac. My PC has two internal drives and I have a 3TB external. What is the best way to move the data from the internal drives to the Mac, and to make the external drive read/write without losing data?


    Paragon even has a non-destructive conversion utility if you do want to convert the drive.
    It's hard to imagine a 3TB drive that isn't NTFS. The Mac uses GPT as the default partition type, along with HFS+.
    www.paragon-software.com
    Some general Apple Help www.apple.com/support/
    Also,
    Mac OS X Help
    http://www.apple.com/support/macbasics/
    Isolating Issues in Mac OS
    http://support.apple.com/kb/TS1388
    https://www.apple.com/support/osx/
    https://www.apple.com/support/quickassist/
    http://www.apple.com/support/mac101/help/
    http://www.apple.com/support/mac101/tour/
    Get Help with your Product
    http://docs.info.apple.com/article.html?artnum=304725
    Apple Mac App Store
    https://discussions.apple.com/community/mac_app_store/using_mac_apple_store
    How to Buy Mac OS X Mountain Lion/Lion
    http://www.apple.com/osx/how-to-upgrade/
    TimeMachine 101
    https://support.apple.com/kb/HT1427
    http://www.apple.com/support/timemachine
    Mac OS X Community
    https://discussions.apple.com/community/mac_os
