Memory consumption issues (when doing large batches of photos)

I have a user who reports that my plugin consumes memory until Lr/the system is no longer operable when doing large batches of photos.
I have this type of problem too from time to time, but not always, and in the most recent case *not* for the same operation my client is complaining about.
Which raises the question: is there a way to limit how much memory is used, or to force it to be released, when running an operation on a large batch of photos?
Note: the operation already concludes catalog transactions every 1000 photos (it exits the with-write function and re-enters it). My client reports Lr/system slowdown at about 4000 photos. He is running Lr3 on Windows - system details not yet known.
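To be concrete, here is a stripped-down sketch of the batching pattern in question - not my actual plugin code, just the shape of it, assuming the usual Lr SDK calls behave as documented (processPhoto is a stand-in for the real per-photo work):

local LrApplication = import 'LrApplication'
local LrTasks = import 'LrTasks'

local BATCH_SIZE = 1000  -- photos per catalog write transaction

-- Placeholder for the real per-photo work (metadata updates, etc.).
local function processPhoto( photo )
end

LrTasks.startAsyncTask( function()
    local catalog = LrApplication.activeCatalog()
    local photos = catalog:getTargetPhotos()
    for first = 1, #photos, BATCH_SIZE do
        local last = math.min( first + BATCH_SIZE - 1, #photos )
        -- Each chunk gets its own with-write transaction, then we exit and re-enter.
        catalog:withWriteAccessDo( "Batch update", function()
            for i = first, last do
                processPhoto( photos[i] )
            end
        end )
        LrTasks.yield()  -- let Lightroom catch its breath between chunks
    end
end )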
Rob

Hey Rob
Have you already tried John R. Ellis's idea of reducing the transaction size? I remember another project where we had to limit the transaction size on a SQLite-based database because of memory problems.
Maybe you are facing a different problem - I'm not sure how efficient Lua's garbage collection is, and your code may be causing some kind of memory leak somewhere.
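Something along these lines is what I mean - a rough sketch only, with a smaller batch size, an explicit collectgarbage() between transactions, and a log of the Lua heap via collectgarbage('count'). Keep in mind that count only reflects Lua-side allocations, not whatever Lightroom or the SDK holds natively, so if the heap stays flat while Lr still balloons, the leak is probably outside Lua:

local LrApplication = import 'LrApplication'
local LrLogger = import 'LrLogger'
local LrTasks = import 'LrTasks'

local logger = LrLogger( 'batchMemory' )
logger:enable( 'logfile' )

local BATCH_SIZE = 200  -- try something well below 1000

LrTasks.startAsyncTask( function()
    local catalog = LrApplication.activeCatalog()
    local photos = catalog:getTargetPhotos()
    for first = 1, #photos, BATCH_SIZE do
        local last = math.min( first + BATCH_SIZE - 1, #photos )
        catalog:withWriteAccessDo( "Batch update", function()
            for i = first, last do
                -- per-photo work goes here
            end
        end )
        collectgarbage( 'collect' )  -- force a full Lua GC between transactions
        -- collectgarbage('count') reports the Lua heap in KB; it won't show
        -- memory held natively by Lightroom or the SDK.
        logger:trace( string.format( 'Lua heap after %d photos: %.0f KB',
                                     last, collectgarbage( 'count' ) ) )
        LrTasks.yield()
    end
end )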
Daniel

Similar Messages

  • When ordering a large batch of photos - "transferring" seemed slow. I hit cancel. Did I lose my order? When I try to "buy photos" again it says "connecting" but doesn't seem to connect...

    When I tried to order a large batch of photos in iPhoto, it seemed to get stuck halfway through. I hit "cancel". When I tried to "buy" photos again, it never seemed to connect. Did I lose my order? Is it half-uploaded somewhere and I need to "resume" the upload? Or do I need to start all over again with the order?

    You might want to download something called an undelete utility. These can find files and recover them even after they appear to have been lost. There are some undelete utility programs that are free. Look at tucows.com
    Also, it appears that you have POSSIBLY overwritten the actual files, which you moved manually, with I don't know what, which you moved in PSE's folder location view.
    Next, in the future ... NEVER (that's: don't even think about it) move your photos in Windows. NEVER. NEVER. Not ever. Not once. This always (that's ALWAYS, as in 100% of the time) causes problems and usually makes things difficult to fix.
    Lastly ... make regular backups! I must say this in the forums about seventy bazillion times a month. Make regular backups! Regular depends on how often you take photographs and edit them. Since my photographic activity usually happens on the weekends, I make backups every Sunday night. You may need a different schedule, depending on your activity level. MAKE REGULAR BACKUPS!

  • Hi, I finally got lucky and got past the "other" memory status issues when syncing. Now the sync is stuck on "waiting for changes to be applied" all night (iTunes, Yosemite, iPhone 6 Plus). Is this a bug? I have not had a single successful sync with Yosemite.

    Please help - this issue prevents music from syncing. I have set the sync setting to convert songs to 192kbps AAC to save space, and I figure that could slow the sync down, but it's hanging indefinitely (I've left it for 24 hours without it completing).
    The only music that appears on the phone is songs purchased from the iTunes Store; 95% of my collection is rips from my CDs.
    Thank you for any help!

    Yeah, it's pretty sad that in a $1000 phone they put such cheap memory that it causes issues. I have an iPhone 6+ 128GB - actually this is the second brand-new one they gave me, because they keep avoiding it and trying to act like that's not the issue, because they're too stupid to know how to check what memory is in each one before they give it to me, when I have told them multiple times my backup runs perfectly fine on two 5s phones and a 6 64GB, of course running MLC memory.
    I told them it's because, just like the write-ups say - I have 809 apps, but I have not even put all my music on the phone, so I'm still running 50 to 60 gigs free, and it crashes and reboots itself sometimes in multitasking. Even more often than that, in between multitasking I get a black screen for 1 to 2 seconds between apps, and upon startup the Apple logo screen will flicker and do weird things. One of the genius idiots asked me if I could just delete some of my apps - really, that's the answer to fix the phone, the customer needs to limit the number of apps they put on their phone? Why get a 128 gig if you can't put anything you want on it? I guess they need to start selling them saying you can only put a limited number of apps on them. It really is a **** joke. I have over $20,000 of Apple merchandise easily and always get the new phone every year, but I'm starting to reconsider my decisions in the future because this company is starting to go downhill. The reason they're not recalling the phones is because they're dealing with their top-of-the-line thousand-dollar phones in this case, and most people don't have over two or three hundred apps, so they're not having as many issues - so basically people have messed-up phones in their hands and just don't know it, and Apple is too **** greedy to recall them; they'd rather sell customers phones that only work half ***.
    Stuff like this will be the downfall of their company. All the people that stay with them because, as long as you don't jailbreak their stuff, it normally just works without issue will soon be jumping ship if stuff like this keeps up, because I can guarantee they've got me thinking now.
    Not that I wanted to, or should be expected to, but I even offered to let them open up my brand-new phone and just replace the memory with MLC memory, which is something I shouldn't have to do with a brand-new **** phone. They should just make one with the right memory and give me a new phone - actually, just don't sell trash ones in the first place to customers who buy the top-of-the-line phone from you; that would be a great option.
    What burns me up the most is that when you talk to AppleCare and the Genius Bar they try to act like the issue isn't there and they're unaware of it; they say that Apple hasn't come out with any information about it - yeah, that's because they don't want to recall the trash.
    Apple has become nothing better than your run-of-the-mill company; Android is becoming just as good as they are. It's sad to say this since I have been an Apple fan for so long, but there is no excuse for BS like this. They have been looking for options to fix my phone for a month now; this is not the way you should treat loyal customers. I would've been better off keeping my 5S and just switching to Android in the future.
    It's pretty sad that the company has no idea how to check which memory is in their **** phone, and even after I offered to let them just replace the memory by opening up my brand-new phone, they have no idea how to do that either. Yeah, Genius Bar, sure - I think they should change the name to Idiot Bar, and "Apple Don't Care" instead of AppleCare.

  • How do I resize large batches of photos to 640x480 and 97KB?

    Greetings, All,
    I need to learn  how to resize large batches of photos in my iPhoto library, and limit them to 640x480, and a file size of just 97KB.  Can't seem to figure out how to do it in iPhoto, and am thinking about giving Aperture a try, though the reviews of that app are very mixed, to say the least.  Any suggestions would be VERY much appreciated.  Thanks, very much, and God Bless!
    Every Good Wish,
    Doc

    First, the photos need to be a 4:3 ratio - this is the standard ratio for point-and-shoot cameras. Then select the photos in iPhoto and export using a custom size setting with a maximum dimension of 640 - you will have to experiment with the quality setting to ensure a file size of no greater than 97KB.
    LN

  • Finder issues when copying large amount of files to external drive

    When copying a large amount of data over FireWire 800, Finder gives me an error that a file is in use and locks the drive up. I have to force-eject. When I reopen the drive, there are a bunch of 0KB files sitting in the directory that did not get copied over. This happens on multiple drives. I've attached a screenshot of what things look like when I reopen the drive after forcing an eject. Sometimes I have to relaunch Finder to get back up and running correctly. I've repaired permissions, for what it's worth.
    10.6.8, by the way, 2.93GHz 12-core, 48GB of RAM, fully up to date. This has been happening for a long time; I'm just now trying to find a solution.

    Scott Oliphant wrote:
    Iomega, LaCie, 500GB, 1TB, etc. - it seems to be drive-independent. I've formatted and started over with several of the drives and get the same thing. If I copy the files over in smaller chunks (say, 70GB) as opposed to 600GB, the problem does not happen. It's like Finder is holding on to some of the info when it puts its "ghost" on the destination drive before the file is copied over, and keeping the file locked when it tries to write over it.
    This may be a stretch since I have no experience with iomega and no recent experience with LaCie drives, but the different results if transfers are large or small may be a tip-off.
    I ran into something similar with Seagate GoFlex drives and the problem was heat. Virtually none of these drives are ventilated properly (i.e., no fans and not much, if any, airflow) and with extended use, they get really hot and start to generate errors. Seagate's solution is to shut the drive down when not actually in use, which doesn't always play nice with Macs. Your drives may use a different technique for temperature control, or maybe none at all. Relatively small data transfers will allow the drives to recover; very large transfers won't, and to make things worse, as the drive heats up, the transfer rate will often slow down because of the errors. That can be seen if you leave Activity Monitor open and watch the transfer rate over time (a method which Seagate tech support said was worthless because Activity Monitor was unreliable and GoFlex drives had no heat problem).
    If that's what's wrong, there really isn't any solution except using the smaller chunks of data which you've found works.

  • Performance Issues when editing large PDFs

    We are using Acrobat 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7 OS). When editing PDFs that are 200+ pages, we are seeing pregnant pauses (that feel like lockups), slow open times, and slow-to-print issues.
    Are there any tips or tricks with regard to working with these large documents that would improve performance?

    You said "edit." If you are talking about actual editing, that should be done in the original and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
    If you are talking about simply using the PDF, a lot depends on the structure of the PDF. If it is full of graphics, it will be slow. You can improve performance by using the PDF Optimizer to reduce graphic resolution and such. You may very likely have a bloated PDF that is causing the problem, and optimizing the structure should help.
    Be sure to work on a copy.

  • Memory card corrupted when transferring large size ...

    I get my memory card corrupted when I transfer large size files to it (usually movies)...
    and it doesn't return to normal unless I format it.
    What can I do? I want to keep these videos on my memory card to watch them on my N73.
    Thanks in advance.

    Can anyone help?

  • Issue when doing mb1c

    Hi, I encountered the error below when doing MB1C for a 561 movement. It is weird: it works fine for a material that has exactly the same valuation as the one giving the error message - I mean the valuation class is the same, and I also checked OBYC... So is there any other field in the material master that could cause this error?
    Account 34210100 requires an assignment to a CO object

    Hi,
    When you do a GR with the 561 movement type, it updates the following accounts via these transaction keys:
    Inventory of Materials (BSX): Dr
    Initial Upload of Materials (GBB-BSA): Cr
    As you have the error now, check whether the G/L account 34210100 has been created as a cost element or not... i.e. that is why it is asking for a cost object. Or just try to assign another G/L account in the OBYC t-code for the GBB-BSA combination and then do the GR with the 561 movement type.
    Regards,
    Biju K

  • When syncing a large database of photos (107,000 photos) constituting nearly 35GB, iTunes takes forever on "importing photos" even if I select only 1 folder of 10 images. Windows 7, iPad Air, both latest versions. The photo library is on my local internal HD.

    I tried selecting a folder with fewer than 50 photos as my source; the "importing photos" phase still takes longer than what sounds normal for such a small number of photos, but the job is done nevertheless.
    But what is worth noting is that even if I run another pass of sync without touching the photos section (i.e. neither adding nor deleting any photo from the previous sync), it still takes the same amount of time on the "importing photos" phase.
    When I select the actual photo library folder as my source (the one containing the 107,000 photos), even selecting a subfolder in it that contains merely 10 photos produces the same dilemma.
    I tried Photoshop Elements and spent about 4 hours building my library there, hoping that if I selected it as the source in the photo tab in iTunes things would work out for the better; I was mistaken!
    At this point I am really out of options and need some serious help.
    A few extra diagnostic details:
    the above manifests even on other devices, like my iPhone 5.
    the computer used is not exactly top notch, but it works for my AutoCAD drawings, so I figure it should suffice for iTunes (Dell Pentium D 3.19GHz, 4GB RAM, 3TB HD, running Windows 7 (32-bit) Ultimate).
    the above persists when I sync wirelessly (so not a USB issue).
    Please, any help is appreciated; if you need further details, kindly let me know.

  • iPhoto crashes when trying to delete a large batch of photos

    I exported my photo library twice by mistake and now I need to delete around 90,000 photos. The problem is, iPhoto crashes when trying to delete the photographs, and I'd really like to avoid moving them over and deleting them in smaller batches; it would take me ages.
    Is there any way I can manually access the files in the iPhoto trash and delete them in the Mac trash?
    Thanks for the help.

    I exported my photo library twice by mistake
    Sorry, but this is not clear - exporting copies photos from the iPhoto library to your hard drive outside of the iPhoto library and does not change the contents of the iPhoto library in any way.
    If you mean imported (as opposed to exported), were you importing a previous iPhoto library into a new iPhoto library? If so, never do that - it does not work and creates massive duplication.
    Since we have no idea what you have done, these are just guesses, but if you did import an old library into a new one, simply quit iPhoto, drag the new (bad) one to the desktop, drag the old one to the Pictures folder (the default location), and launch iPhoto - it will open and convert the old library as needed and you are good to go.
    If none of these guesses are correct, then you will have to open the iPhoto trash in iPhoto (NEVER make any changes of any sort to the structure or content of the iPhoto library using the Finder - there are NO user-serviceable parts in there), select the photos and assign a keyword (like "trash"), then right-click and return them to the library. Then, using the keyword to find them, delete them in small batches (100 or so), emptying the iPhoto trash after each small batch.
    LN

  • Issue when doing Levels as last step

    After using the Healing Brush, Clone Tool, Sponge Tool, etc., I do a Levels adjustment layer and it results in the issues in the attached image. Is there a way to avoid this other than doing Levels at the very beginning? The only other way I found around this was to flatten the image prior to doing Levels.
    Thanks.

    I would guess from the image you posted that you cloned and healed onto a new empty layer, and when you added the Levels adjustment layer you clipped it to that layer, when it should be applied to all layers, not just the clone-and-heal layer.

  • Best practice when doing large cascading updates

    Hello all
    I am looking for some help with tackling a fairly large cascading update.
    I have an object tree that needs to be merged using JPA and Toplink.
    Each update consists of 5-10000 objects with a decent depth as well.
    Can anyone give me some pointers/hints towards a best practice for doing this? Looping through each object with JPA's merge takes minutes to complete, so I would rather not do that.
    I have never actually used TopLink's own API before, so I am especially interested in whether TopLink has an effective way of handling this, preferably with a link to some related reading material.
    Note that I have posted a somewhat duplicate question at the link below (noting this for good forum practice):
    http://stackoverflow.com/questions/14235577/how-to-execute-a-cascading-jpa-toplink-batch-update

    Not certain what you think you can't do. Take a long clip and open it in the Viewer. Set In and Out points. Drop that into the Timeline. Now you can move along in the Viewer clip and set new Ins and Outs and drop that into the Timeline. Clips in the Timeline are created from the Ins and Outs you set in the Viewer.
    Is that what you want to do? If it is, I don't see where making copies of the clip would work for you.
    Later, if you want to match up a clip in the Timeline to that master clip, just use Match Clip (find) in the Timeline to find where it correlates to your main clip.
    You can have FCE automatically create subclips at camera cut points by using DV Start/Stop Detect, if that is what you're looking for.

  • Issues when Downloading Large Datasets to Excel and CSV

    Hi,
    Hoping someone could lend a hand on the issues described below.
    I have a prompted dashboard that, depending upon the prompts selected, can return detail datasets. The intent of this dashboard is to AVOID giving end users Answers access, but still provide the ability to pull large amounts of detail data in an ad-hoc fashion. When large datasets are returned, end users will download the data to their local machines and use Excel for further analysis. I have tried two options:
    1) Download to CSV
    2) Download data to Excel
    For my test, I am using the dashboard prompts to return one year's (2009) worth of order data for North America, down to the day level of granularity. Yes, a lot of detail data... but this is what many "dataheads" at my organization are requesting (despite best efforts to evangelize the power of OBIEE to do the aggregation for them...). I expect this report to return somewhere around 200k rows...
    Here are the results:
    1) Download to CSV
    Filesize: 78MB
    Opening the downloaded file is fairly quick...
    126k rows are present in the CSV file... but the dataset abruptly ends in Q3 (August) 2009. The following error appears at the end of the incomplete dataset:
    Odbc driver returned an error (SQLFetchScroll).
    Error Codes: OPR4ONWY:U9IM8TAC
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
    [nQSError: 46073] Operation 'stat()' on file '/opt/apps/oracle/obiee/OracleBIData/tmp/nQS_31951_2986_15442940.TMP' failed with error: (75). (HY000)
    2) Download to Excel
    Filesize: 46MB
    Opening the Excel file is extremely painful... over 20 minutes to open the file... making Excel unusable during the opening process... definitely not acceptable for end users.
    When opened, the file contains only 65k rows... when there should be over 200k...
    Can you please help me understand the limitations of detail data output (downloading) from OBIEE...or provide workarounds for the circumstances above?
    Thanks so much in advance.
    Adam
    Edited by: AdamM on Feb 9, 2010 9:01 PM
    Edited by: AdamM on Feb 9, 2010 9:02 PM

    @chandrasekhar:
    Thanks for your response. I'll try with the export button, but I would also like to know how to create a button on the toolbar. By clicking on that button, a popup box should appear with two radio buttons asking whether to download the report in .xls or .csv format. I am looking for the subroutines for that.
    Thanks.
    Message was edited by:
            cinthia nazneen

  • UCCX Issue when doing VMWare Snapshots

    Hi all.
    I've been using UCCX 8.5 on VMware with local storage for about a year.  I use Veeam Backup to make backups of all my virtual servers.  One issue I've run across is that when Veeam takes a snapshot of the UCCX server, the agents are logged out of Cisco Agent Desktop and get an error.  I've managed to work around this by making backups run after our call center is closed (today, however, I accidentally ran a backup and irritated the call center managers when all their users got logged out).
    I've surmised from the various documents I've read that snapshots are not supported.  I'm wondering how others using CCX on VMWare do their backups and if there are any clever work-arounds to this issue.

    Thanks for the reply Chris.  I failed to mention that I am doing the DRS backups as well.
    However, restoring the entire VM in the event of hardware failure is by far the simplest method of quick recovery.

  • [BUG] Performance Issue When Editing Large Files

    Are other people experiencing this on the latest JDevelopers (11.1.1.4.0 and the version before it) ?
    People in the office have been experiencing extremely slow performance happening seemingly at random while editing Java files. We have been using JDev for almost 10 years and this has only become an issue for us recently. Typically we only use the Java editing and the database functionality in JDev.
    I have always felt that the issue was related to network traffic created by ClearCase and have never really paid attention, but a few days ago, after upgrading to the latest version of JDev, for the first time I started having slowdowns that are affecting my speed of work, and I decided to look into it.
    The main symptom is the editor hanging for an unknown reason in the middle of typing a line (even in a comment) or immediately after hitting the carriage return. All PCs in the office have 2Gig or more RAM and are well within recommended spec.
    I've been experimenting for a few days to try and determine what exactly is at the root of the slowness. Among the things I have tried:
    o cutting ClearCase out of the equation; not using it for the project filesystem; not connecting to it in JDev
    o Not using any features other than Java editing in JDev (no database connections)
    o never using split panes for side by side editing
    o downloading the light version of JDev
    o Increasing speed of all pop-ups/dynamic helpers to maximum in the options
    o disabling as many helpers and automations as possible in the options
    None of these have helped. Momentary freezes of 3-5 seconds while editing are common. My basic test case is simply to move the cursor from one line to another and to type a simple one-line comment that takes up most of the line. I get the freeze most usually right after typing the "//" to open the comment - it happens almost 100% of the time.
    I have however noticed a link to the file size/complexity.
    If I perform my tests on a small/medium-sized file of about 1000 lines (20-30 methods), performance is always excellent.
    If I perform my test on one of our larger files (10,000 lines, more than 100 methods), the freezes while editing almost always occur.
    It looks like there is some processor-intensive stuff going on (which cannot be turned off via the options panel) that is taking control of the code editor and not completing in a reasonable amount of time on large Java files. I have a suspicion that it's somehow related to the gutter on the right-hand side, which shows little red and yellow marks for run-time reports of compile errors and warnings; I haven't found any way to disable it so I can check.

    Just a small follow-up....
    It looks like the problem is happening on only a single Java file in our product! Unfortunately it happens to be the largest and most often updated file in the project.
    I'm still poking around to figure out why JDev is choking consistently on this one particular file and not on any of the others. The size/complexity is not much bigger than the next largest which can be edited without problems. The problem file is a little unusual in that it contains a large number of static functions and members.
    Nice little mystery to solve.
