LR1 - handling LARGE catalogs

Hi folks:
Slight problem here: I have a couple of large catalogs of sports photographs, the largest over 50K pics. I post the pictures to the website, and then parents and students e-mail lists of what they would like. It works great, except that the searching takes FOREVER. Everything is keyworded, dated, etc., so I give it the name of the opponent and it finds the game. I highlight the date and start entering the numbers of the pics as search terms; as I find each pic I add it to the Quick Collection, correct it if necessary, and finally export. It WORKS, it's just labor intensive and somewhat clunky, and the Lightroom search is SLOW.
I've started maintaining different catalogs for each sport, which helps, but what else can I do to speed it up? Is there a facility for feeding LR a list of pics and having it spit them all out at once?
Any ideas and input most welcome!
Tim Holmes
Fine Light Photography

It's worse doing the initial full-catalog search to find the specific game; once I'm inside the folder for that game the searches are faster, but the whole process is labor intensive: enter number, hit Enter, hit B (add to Quick Collection), double-click the search line, enter the next number, repeat. When I had to extract about 1000 files for one of the families, that took a while!
TIM
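
As far as I know there is no built-in Lightroom facility that takes a whole list of frame numbers at once, so one workaround is to do the final picking outside Lightroom: export (or locate) the game's folder once, then let a small script copy just the requested frames in a single pass. A rough sketch of that idea; the folder names, the "smith_family_order.txt" request file, and the assumption that each requested frame number appears in the exported filename are all invented for illustration:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Copy every photo whose filename contains one of the requested frame numbers
    // into a per-family folder, so the picking happens in one pass outside Lightroom.
    public class PickRequestedFrames {
        public static void main(String[] args) throws IOException {
            Path gameFolder = Paths.get("2013-10-05_vs_Central");   // hypothetical exported game folder
            Path requestFile = Paths.get("smith_family_order.txt"); // one frame number per line
            Path outFolder = Paths.get("smith_family_picks");
            Files.createDirectories(outFolder);

            // Read the requested numbers, ignoring blank lines and stray spaces.
            Set<String> wanted = Files.readAllLines(requestFile).stream()
                    .map(String::trim)
                    .filter(s -> !s.isEmpty())
                    .collect(Collectors.toSet());

            try (DirectoryStream<Path> photos = Files.newDirectoryStream(gameFolder, "*.jpg")) {
                for (Path photo : photos) {
                    String name = photo.getFileName().toString();
                    // Match if any requested number appears in the filename, e.g. "IMG_0412.jpg".
                    if (wanted.stream().anyMatch(name::contains)) {
                        Files.copy(photo, outFolder.resolve(name), StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }
    }

For a 1000-file order this is one pass over the folder instead of one search per frame number.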

Similar Messages

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, e.g. (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again to function properly.
    Please suggest a way to handle large volumes (file sizes up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in the target system, maybe you could split your source file into several messages by setting the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file, so first try setting the EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, I mean the JCo call error...
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on) and only then sent to the pipeline in the Integration Engine.
    Carlos
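
    For what it's worth, the "split into several messages" idea is just chunking the flat file before it is mapped. Outside of any PI configuration, the same idea looks roughly like the sketch below; the file names and the 10,000-line chunk size are invented for illustration, this is not adapter code:

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        // Split a large flat file into fixed-size chunks so each chunk can be
        // handled as a separate, smaller message.
        public class SplitFlatFile {
            public static void main(String[] args) throws IOException {
                Path input = Paths.get("orders_60mb.txt"); // hypothetical large flat file
                int linesPerChunk = 10000;                 // roughly "recordsets per message"

                try (BufferedReader reader = Files.newBufferedReader(input)) {
                    BufferedWriter writer = null;
                    String line;
                    int lineCount = 0;
                    int chunkIndex = 0;
                    while ((line = reader.readLine()) != null) {
                        if (lineCount % linesPerChunk == 0) {
                            if (writer != null) writer.close();
                            writer = Files.newBufferedWriter(
                                    Paths.get(String.format("orders_chunk_%03d.txt", chunkIndex++)));
                        }
                        writer.write(line);
                        writer.newLine();
                        lineCount++;
                    }
                    if (writer != null) writer.close();
                }
            }
        }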

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying large/huge resultset of data on a crystal report.  The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under Jboss.
    The user specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp. The webapp queries the database and gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take too long to return the resultset.  Everything processes pretty quickly until I call processHttpRequest of CRV.  This method just hangs for a long time before displaying the report in the browser.
    CRV runs pretty fast when the resultset is smaller, but for large resultsets it takes a long, long time.
    I do have subreports and use Crystal Reports formulas on the reports, some of which are used for grouping as well.  But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly.  I tried capturing events by registering the event handler "addToolbarCommandEventListener" of CRV, but my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous page, next page, and last page buttons and handling their click events.
    B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation.  So maybe the first time it'll take 5 minutes to display the report, but once it's displayed the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out if it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports Servers like cache server/application server etc help in any way?  I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc....but I'm not sure if any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially the answer is to use smaller resultsets, or to pull from the database directly instead of passing resultsets to the report.
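
    In case it helps, option A above (external pagination) boils down to asking the database for one page of rows at a time instead of handing the whole resultset to the viewer. A minimal JDBC sketch, assuming a database that understands LIMIT/OFFSET; the table, columns, and page size are placeholders, and wiring this to the CRV toolbar events is a separate problem:

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        // Fetch only the rows for the page currently being displayed,
        // instead of pulling the entire resultset into the report at once.
        public class PagedQuery {
            public static List<String> fetchPage(Connection conn, int page, int pageSize)
                    throws SQLException {
                String sql = "SELECT order_id, customer_name FROM orders "
                           + "ORDER BY order_id LIMIT ? OFFSET ?";
                List<String> rows = new ArrayList<>();
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, pageSize);
                    ps.setInt(2, page * pageSize);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rows.add(rs.getInt("order_id") + " | " + rs.getString("customer_name"));
                        }
                    }
                }
                return rows;
            }
        }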

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. They are not that large really, only 10-20 MB.
    Our setup looks like this:
    We retrieve flat file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously the logicalhost freezes and crashes in one of the conversion steps, without any error message in the logicalhost log. We can't relate the crashes to a specific jcd, and memory consumption increases A LOT for the logicalhost process while handling the messages. After a restart of the server the messages that are in the queues are usually converted OK. Sometimes, however, we have seen messages disappear. Scary stuff!
    I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks or streaming them. Neither is an option in our setup, however.
    We have manipulated the JVM memory settings without any improvements and we have discussed the issue with Sun's support but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without error messages in the logs or nothing?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently?
    Strictly speaking, if you want to send the entire file content in the JMS message, then I don't have an answer for this question. Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue which contains only the file name and file location. In most places we never send the file content in the JMS message.
    * Any ideas why the crashes occur without error messages in the logs?
    Whenever the JMS IQ Manager's memory use gets too high, logicalhosts stop processing. I will not say it is down; it stops processing, or processing might take a lot of time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing a message when the logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot of messages.
    * Any other suggestions?
    If the file is large, it is better to stream it from the FTP location to a local directory and send only the file location in the JMS message.
    Hope it helps.
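
    The pattern described above (archive the file locally, put only its name and location on the queue) is often called the claim-check pattern. A minimal sketch against the plain JMS 1.1 API, not JCAPS-specific code; the connection factory, queue name, file name, and path are assumptions:

        import javax.jms.*;

        // Send a small "claim check" message that points at the archived file,
        // instead of putting the multi-megabyte payload on the queue itself.
        public class ClaimCheckSender {
            public static void send(ConnectionFactory factory) throws JMSException {
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    Queue queue = session.createQueue("FILE_EVENTS");
                    MessageProducer producer = session.createProducer(queue);

                    // Only the file's name and location travel on the queue.
                    Message msg = session.createMessage();
                    msg.setStringProperty("fileName", "invoice_batch_20130114.txt");
                    msg.setStringProperty("fileLocation", "/archive/incoming/");
                    producer.send(msg);
                } finally {
                    connection.close();
                }
            }
        }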

  • Handling large messages with MQ JMS sender adapter

    Hi.
    I'm having trouble handling large messages with an MQ JMS sender adapter.
    The messages are around 35-40 MB.
    Are there any settings I can adjust to make the communication channel work?
    Error message is:
    A channel error occurred. The detailed error (if any) : JMS error:MQJMS2002: failed to get message from MQ queue, Linked error:MQJE001: Completion Code 2, Reason 2010, Error Code:MQJMS2002
    The communication channel works fine with small messages!
    I'm on SAP PI 7.11; the MQ driver is version 6.
    Best Regards...
    Peter

    The problem solved itself when the MQ server crashed and restarted.
    I did find a note that might have been useful:
    Note 1258335 - Tuning the JMS service for large messages or many consumers
    A relevant post as well: http://forums.sdn.sap.com/thread.jspa?threadID=1550399

  • Back-up doesn't progress. I have a large catalog...should I keep waiting or is something wrong?

    I am trying to do an incremental back-up. The Organizer has been calculating total media size for over 14 hours and is still at the initial 3%.  I have a large catalog. Should I keep waiting or is something wrong?

    I have mine set to not push or automatically check my email; it only checks my email when I actually use the Mail app. I also keep 3G turned off unless I'm using an app that needs to get online, like Safari, Mail or Pandora Radio. Just this week I wanted to see how long it would last with my normal usage from full to zero. It was in standby most of the time, but I played a few games, used Safari and Mail a few times (I use 3G at work, but at home it's turned off because I use my wifi connection), used it as an iPod a couple of times in the car, had a few phone calls and several text messages, and I think I played around with the GPS and map utility on the drive home from work one day. In the end it lasted almost 48 hours. I was plugged into my laptop for about 5-10 minutes to sync, so it was charging then, but overall I was quite happy with the battery.
    3G and automatic mail checking certainly drains the battery but I expected that when I got it. I wish there was a button on the home screen to turn 3G on and off instead of having to go into the control panel but it's not too inconvenient.
    Remember that Apple recommends that you fully cycle the battery once a month, fully charge it then let it drain to zero then fully charge it again.
    Check out these tips for maximizing battery life.
    http://www.apple.com/batteries/iphone.html
    Hope that helps.

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle the large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, by default only 100 records are returned in the query result. If I do specify it, the result is limited to that specific number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with a time "slice" at one request (e.g. an hour, day, etc...) and provide a scrolling window.  This is commonly how large datasets are also dealt with in applications.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
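
    Rick's time-slice suggestion can be expressed as a query that only ever covers one window at a time. A minimal JDBC sketch of that idea; the process_events table, its columns, and the one-hour window are invented for illustration:

        import java.sql.*;
        import java.time.LocalDateTime;
        import java.util.ArrayList;
        import java.util.List;

        // Fetch only the events inside one time "slice" (here a single hour),
        // so a request never has to carry the full million-row history.
        public class TimeSlicedQuery {
            public static List<String> fetchSlice(Connection conn, LocalDateTime sliceStart)
                    throws SQLException {
                String sql = "SELECT event_time, event_name FROM process_events "
                           + "WHERE event_time >= ? AND event_time < ? ORDER BY event_time";
                List<String> rows = new ArrayList<>();
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setTimestamp(1, Timestamp.valueOf(sliceStart));
                    ps.setTimestamp(2, Timestamp.valueOf(sliceStart.plusHours(1)));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            rows.add(rs.getTimestamp("event_time") + "  " + rs.getString("event_name"));
                        }
                    }
                }
                return rows;
            }
        }

    The user scrolls the window forward or backward an hour (or a day) at a time instead of paging through millions of rows.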

  • A Progress Bar for my large catalog

    I have a large catalog that I am putting on the web, and I want people who have a slow connection to know that the file is still loading. Is there a way to add a progress bar to my file in InDesign CS5, or must I bring it into Flash to obtain this feature?
    Thanks in advance

    How is your file itself going to show a progress bar if it isn't loaded yet?  Seems to me it would have to load before it could do anything like that.
    Have you tried experimenting with different browsers and OS's to see what happens?  With Firefox 4 on Windows XP here I don't get a progress bar but I do get a little spinning graphic when I try to load a large PDF like this one here:

  • Best way to handle large amount of text

    hello everyone
    My project involves handling a large amount of text (from conferences and reports).
    Most of them are in MS Word; I can turn them into RTF format.
    I don't want to use scrolling. I prefer turning pages (next, previous, last, contents), which means I need to break the text into chunks.
    Currently the process is awkward and slow.
    I know there must be lots of people working on similar projects.
    Could anyone tell me an easy way to handle the text: bring it into the cast and break it up?
    Any ideas would be appreciated.
    Thanks
    ahmed

    Hacking up a document with Lingo will probably lose the RTF formatting information.
    Here's a bit of code to find the physical position of a given line of on-screen text (counting returns is not accurate with word-wrapped lines). This strategy uses charPosToLoc to get the actual position for the text member's current width and font size:
        maxHeight = 780 -- arbitrary display height limit
        T = member("sourceText").text
        repeat with i = 1 to T.line.count
          endChar = T.line[1..i].char.count
          lineEndLocV = charPosToLoc(member "sourceText", endChar).locV
          if lineEndLocV > maxHeight then -- found the "1 too many" line
            -- extract the identified lines from "sourceText"
            singlePage = T.line[1..i - 1]
            -- put the remaining text back into the source text member,
            -- then perhaps repeat the parse with what is left
            member("sourceText").text = T.line[i..99999]
            exit repeat
          end if
        end repeat
    If you want to use one of the roundabout ways to display PDF in Director, there might be some batch PDF production tools that can create your pages in a pretty scalable PDF format.
    I think FlashPaper documents can be adapted to Director.

  • Handling large files in scope of WSRP portlets

    Hi there,
    just wanted to ask if there are any best practices in respect to handling large file upload/download when using WSRP portlets (apart from by-passing WebCenter altogether for these use cases, that is). We continue to get OutOfMemoryErrors and TimeoutExceptions as soon as the file being transferred becomes larger than a few hundred megabytes. The portlet is happily streaming the file as part of its javax.portlet.ResourceServingPortlet.serveResource(ResourceRequest, ResourceResponse) implementation, so the problem must somehow lie within WebCenter itself.
    Thanks in advance,
    Chris

    Hi Yash,
    Check these blogs for the structure you are mentioning:
    /people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
    /people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
    Regards,
    ---Satish
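
    For reference, the usual way to keep serveResource from buffering a whole file in memory is to copy it through a small fixed-size buffer. A rough sketch against the standard JSR-286 portlet API; the file path and content type are invented, and this alone does not address whatever limits WebCenter/WSRP impose on top:

        import java.io.*;
        import javax.portlet.*;

        // Stream a large file through serveResource with a small fixed buffer,
        // so the portlet never holds the whole payload in memory at once.
        public class FileStreamingPortlet extends GenericPortlet {
            @Override
            public void serveResource(ResourceRequest request, ResourceResponse response)
                    throws PortletException, IOException {
                File file = new File("/data/exports/report_archive.zip"); // hypothetical path
                response.setContentType("application/zip");

                byte[] buffer = new byte[8 * 1024];
                try (InputStream in = new BufferedInputStream(new FileInputStream(file));
                     OutputStream out = response.getPortletOutputStream()) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }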

  • Can express vi handle large data

    Hello,
    I'm facing a problem handling large data using Express VIs. The input to each Express VI is a large 2M-sample waveform, and I am using 4 such Express VIs, each with 2M samples, connected in parallel. Processing this data takes much more time in the Express VIs than in other general VIs or subVIs. Can anybody explain why the processing takes so long? My understanding is that displaying large data in LabVIEW is not efficient, and since the Express VIs have an internal display in the form of the configure dialog box, I suspect most of the processing time is spent plotting the data on the graph of the configure dialog box. If this is correct, is there any solution to overcome it?
    waiting for reply
    Thanks in advance

    Hi sayaf,
    I don't understand your reasoning for not using the "Open Front Panel" option to convert the Express VI to a standard VI. When converting the Express VI to a VI, you can save it with a new name and still use the Express VI in the same VI.
    By the way, have you heard about the NI LabVIEW Express VI Development Toolkit? That is the choice if you want to be able to create your own Express VIs.
    NB: Not all Express VIs can be edited with the toolkit - you should mainly use the toolkit to develop your own Express VIs.
    Have fun!
    - Philip Courtois, Thinkbot Solutions

  • How to handle large heap requirement

    Hi,
    Our application requires a large amount of heap memory to load data in memory for further processing.
    Application is load balanced and we want to share the heap across all servers so one server can use heap of other server.
    Server1 and Server2 have 8GB of RAM and Server3 has 16 GB of RAM.
    If a request comes to server1 and it requires more heap memory to load data, can server1 use server3's heap memory in this scenario?
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?
    Thanks,
    Atul

    user13640648 wrote:
    Hi,
    Our Application requires large amount of heap memory to load data in memory for further processing.
    Application is load balanced and we want to share the heap across all servers so one server can use heap of other server.
    Server1 and Server2 have 8GB of RAM and Server3 has 16 GB of RAM.
    If a request comes to server1 and it requires more heap memory to load data, can server1 use server3's heap memory in this scenario?
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement?
    That isn't how you design it (based on your brief description.)
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
    A load balanced system exists for performance and failover scenarios.
    Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.
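
    As a rough illustration of the load-on-demand caching described above, here is a minimal sketch in plain Java; the size limit and loader are placeholders, and a real deployment would lean on the JEE server's caching or something like Memcached rather than hand-rolled code:

        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.function.Function;

        // Load-on-demand cache with a simple eviction rule: once the cache is full,
        // the least recently used entry is unloaded to make room.
        public class LoadOnDemandCache<K, V> {
            private final int maxEntries;
            private final Function<K, V> loader;
            private final LinkedHashMap<K, V> cache;

            public LoadOnDemandCache(int maxEntries, Function<K, V> loader) {
                this.maxEntries = maxEntries;
                this.loader = loader;
                // accessOrder = true gives least-recently-used ordering.
                this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > LoadOnDemandCache.this.maxEntries;
                    }
                };
            }

            public synchronized V get(K key) {
                V value = cache.get(key);      // counts as an access for LRU ordering
                if (value == null) {
                    value = loader.apply(key); // load on demand
                    cache.put(key, value);
                }
                return value;
            }
        }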

  • Catalogs in Elements 6 ( spliting a large catalog )

    Is there a way to split a large catalog into smaller catalogs, for example a catalog that contains images from several years split into catalogs for each year.
    Creating a new catalog and importing your previously edited images means that you have to re-tag them all and stack them, and they will then not be in version sets.
    The new catalogs of images, when split from the main catalog, should retain the original tags and be identical to the main catalog. Images, when viewed in the browser, should show all edits and be in their version stacks.
    Is this possible, or am I expecting too much?

    I misspoke; in PSE 6, the command to optimize the catalog is File > Catalog > Optimize.
    > You would think, based upon first principles of how a database works, that compacting/repairing it would make it run faster. Pretty much every database works this way.
    I agree. But based solely on reports on these forums and at ElementsVillage.com, I get the impression that PSE 5 catalog bloat and fragmentation is much worse than in PSE 6. There have been many postings where people have improved PSE 5 performance and dramatically shrunk their catalog file by recovering, but I don't recall any such reports for PSE 6. (This is all from memory, so I may be wildly wrong.)
    I just optimized my PSE 6 catalog (12K photos) for the first time in at least 6 months. It shrunk from 41 MB to 40.5 MB.
    By the way, modern databases have facilities for automatic or background compaction that try to minimize the impact on applications. SQLite has such a mode, but I just checked and it looks like PSE 6 doesn't enable it.
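
    If you're curious what that compaction amounts to at the SQLite level, the sketch below checks the auto_vacuum mode and runs a manual VACUUM through the sqlite-jdbc driver (assumed to be on the classpath). Run it only against a copy of a catalog file, never the live one; the file name is made up:

        import java.sql.*;

        // Illustration only: what "optimize" roughly amounts to for a SQLite database.
        public class InspectSqliteCompaction {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite:catalog_copy.db");
                     Statement stmt = conn.createStatement()) {

                    // 0 = none, 1 = full (automatic), 2 = incremental
                    try (ResultSet rs = stmt.executeQuery("PRAGMA auto_vacuum")) {
                        rs.next();
                        System.out.println("auto_vacuum mode: " + rs.getInt(1));
                    }

                    // Manual compaction: rebuilds the file and reclaims free pages.
                    stmt.execute("VACUUM");
                }
            }
        }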

  • Searching for images with no keywords in large catalogs - LR5

    I have a catalog with several thousand images and 98% have keywords. Is there any way to do a global search for the images which I may have missed adding keywords to?

    Thanks Geoff, that was very helpful!
    Re: Searching for images with no keywords in large catalogs - LR5, created by Geoff the kiwi in Lightroom for Beginners:
    I think the easiest would be a Smart Collection like this:
    http://forums.adobe.com/servlet/JiveServlet/downloadImage/2-5757341-486659/450-273/ScreenShot2013-10-14at3.16.09+PM.png
    Or use the Filter Bar in Grid Mode and Select the same option:
    http://forums.adobe.com/servlet/JiveServlet/downloadImage/2-5757341-486660/450-162/ScreenShot2013-10-14at3.17.54+PM.png

  • Large Catalog with missing files

    I have a very large catalog (over 30K files) in PSE 8. I just purchased PSE 10 and will be upgrading it. As I was making a full backup prior to the upgrade, I realized there are many missing files (requiring reconnecting). Some are PRE files that I purposely deleted; others are TBD.
    Since my catalog is so large and there seem to be a few hundred missing, it makes sense to view only the missing files so I can delete those that need to be deleted, etc. I thought there was a way to view only the files that are missing (that need to be reconnected), but I am unable to figure it out. Any help would be greatly appreciated in finding a way to display just the missing files.
    Thanks in advance,
    Daniel

    When you click Reconnect, as soon as Organizer finds the first missing file you get the option to reconnect manually.
    Click the Browse button, and when the dialog opens you will get a full list of missing files in the left-hand pane. The top one is usually highlighted; you can scroll to the bottom of the list and shift-click to select them all, then delete them from the catalog. Nothing is deleted from disk.
