Quandary about LR Collections

I'm helping a friend who has a Collections problem.  He had an iMac that died last January, on which he was using LR with heavy reliance on Collections.  He got a new iMac and he had external backup disks of all his picture folders and apparently also his Documents and other important files.  There may have been a clone of the Mac HD, too.  He took those disks to an Apple store and the Geniuses did some sort of operation to get the files and folders on the new computer.  (He's new to computers and isn't sure what exactly they did.)
LR5 was installed on the new computer and at first started up just as it had on the old one. But at some point he found that some of his Collections were missing. (He had only been backing up the Catalog every 2-3 weeks.)
He’s beside himself with the loss of the Collections and has spent countless hours on the phone with various tech support people who have been unable to help him find them (which, of course, would be in an old backup Catalog). One of them even took over his computer remotely and couldn't find anything.  He asked if I would try, and a simple system-wide search for ".lrcat" turned up several old Catalog backups on one of his backup drives.  I copied the latest one (from a few months before the computer died) to his desktop and was able to open LR using it, and the missing collections were there!
So here is my quandary:  If I simply merge this old catalog with his current one, I assume the missing Collections will become available, but what about the fact that this Catalog is a lot older than the current one?  I'd assume older things won't overwrite newer ones.  Or should I merge in the other direction, from the newer to the older catalog?  Or does the merge operation decide that for me?  (Of course, I would copy his current Catalog to a safe location in case I need to restore it.)
I have only merged Catalogs once (from my laptop to my desktop machine, after a photo trip), so I’m treading carefully here. (And just now reaching for Martin Evening's book for review.)  Are there any pitfalls in merging these two Catalogs in this situation?
Thanks for any help!

Hi Diane,
Sorry for not being able to reply sooner; I hope this is still relevant. If, as I understand from your post, your friend has a catalog that contains all his photos (in the Folders section) but some or all of the collections are missing, I would do the following, taking advantage of the fact that in LR5 collections can be exported as a catalog: open the backup .lrcat that does have the collections, select all the missing collections, go to File > Export as Catalog, and when asked, give the new catalog an easily recognized name, like Coll Exp. Then launch his current catalog, go to File > Import from Another Catalog, and select Coll Exp.lrcat. This should add only the collections to the current catalog, with no risk of overwriting or other conflicts. Good luck.

Similar Messages

  • Doubt about Bulk Collect with LIMIT

    Hi
    I have a doubt about BULK COLLECT: when is the COMMIT actually done?
    I found this example at PSOUG:
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;

    DECLARE
      CURSOR s_cur IS
        SELECT *
        FROM servers;
      TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
      s_array fetch_array;
    BEGIN
      OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
        FORALL i IN 1..s_array.COUNT
          INSERT INTO servers2 VALUES s_array(i);
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;

    If my table servers has 3,000,000 records, when is the COMMIT done? Only once all the records are inserted? Could it overwhelm the redo log?
    Using Oracle 9.2.0.8.

    muttleychess wrote:
    If my table Servers have 3 000 000 records , when is done commit ?

    The commit point has nothing to do with how many rows you process. It is purely business driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
    SY.
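    For readers wanting to see the same principle outside PL/SQL, here is a rough JDBC sketch (my own illustration, not from the thread; the connection URL argument and the column names are hypothetical): inserts are flushed in chunks of 1000, mirroring the LIMIT 1000 fetch, but COMMIT is issued exactly once, when the business transaction is complete.

        import java.sql.*;

        public class CopyServers {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(args[0])) {
                    con.setAutoCommit(false);                // we choose the commit point
                    try (Statement src = con.createStatement();
                         ResultSet rs = src.executeQuery("SELECT name, ip FROM servers");
                         PreparedStatement ins = con.prepareStatement(
                                 "INSERT INTO servers2 (name, ip) VALUES (?, ?)")) {
                        int pending = 0;
                        while (rs.next()) {
                            ins.setString(1, rs.getString(1));
                            ins.setString(2, rs.getString(2));
                            ins.addBatch();
                            if (++pending == 1000) {         // flush in chunks...
                                ins.executeBatch();          // ...but do NOT commit here
                                pending = 0;
                            }
                        }
                        if (pending > 0) ins.executeBatch();
                        con.commit();                        // one commit: transaction complete
                    } catch (SQLException e) {
                        con.rollback();                      // don't leave half a transaction
                        throw e;
                    }
                }
            }
        }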

  • No information about site collection

    Hi All,
    I came across a strange issue where I cannot browse the root site collection, and I am also not able to see information about it, though I can browse the other site collections in the same DB.
    Any clue??

    Use PowerShell (Get-SPSite "http://webUrl/"). Does that return any information? If not, try creating one.
    Trevor Seward

  • Basic questions about permanent Collections

    I am just learning about Collections, discovering that a Collection preserves all of the work I've done in it. I am assuming, then, that the images in a collection are virtual copies that can be adjusted in the Develop Module to look different from the way they are displayed in the general catalog under the original imported folders. I also assume that a modified image cannot be locked to prevent further changes, say if another virtual copy were to be made.
    I gather that Collections need to be protected somehow, for instance if I have one collection with images adjusted to look good on the web, and another collection of the same images adjusted to look good on a certain printer.
    Maybe I will never need to adjust images for different applications, except outside of Lightroom for CMYK work in offset printing. Since the various settings for Slide Show, Web and Print can all be contained in one collection, it would probably be best to use the same images without changing them for each output.
    Can some of you who are more experienced add thoughts on how best to use Collections? My goal is to keep things simple.
    TIA,
    Ken

    There are not enough gold stars to go around but you all deserve them for the helpful answers.
    I see on pg 175 of the Lightroom Classroom In A Book (in which there are also an unusual number of typos) that "In the Quick Describe metadata set, the Metadata panel shows the Filename, Copy Name (if the image is a virtual copy..." This is more evidence that Lightroom and its metadata system are quite well developed, including the 'Copy Name' for a virtual copy. I have found already, in my short experience with Lightroom, that I want to work on a Virtual Copy when I'm not quite sure of the direction I want to take with a photo's adjustments. By retaining the Master 'as is' (with modest adjustments) I can always quickly go back to view the starting point (best image with modest adjustments) while I continue to work on a Virtual Copy for more experimental adjustments.
    On the one hand, my shooting is going on hold while I learn Lightroom, but I feel it is a worthwhile investment to become as familiar as possible with Lightroom's capabilities. It seems to add more to my desire to capture images more accurately, because I know I have so much to work with once I get the images back into the studio. Crisper images with optimal lighting will make me as happy as a pig in mud when I get them into Lightroom.

  • Does anyone know about Master Collection CS6 performance problems on OSX Mountain Lion?

    I have a 15" MBP (late 2011) and I'm having some poor performance problems even after a clear install, reindexing the system and fixing all permission errors in Disk Utility. I have some suspicions about compatibility problems with Adobe Master Collection CS6 cause those are the programms I always work with and in wich I have experienced a bad performance.
    Hardware test says everything is fine. Any sugestion?

    Guys, I suffered the same problem for 4 days. I am on the new MBP Retina Display + Mountain Lion, 16GB RAM, etc...
    I first thought it had to do with the Retina display, but no, that is not the case.
    SOLUTION (sounds odd but works):
    The only way I found to fix it is setting Photoshop to use the East Asian text engine instead of the Middle Eastern one (Prefs -> Type -> East Asian). Then restart Photoshop.

  • Info about Master Collection

    Hey there. Thanks for answering my questions about Creative Suite 5.5 Master Collection edition:
    1. I am buying 2 new computers to use in a home network. Can we both be working on 2 separate projects/websites on our separate computers at the same time?
    2. What are the system requirements for this edition?
    Again thanks for your answer.

    1. I am buying 2 new computers to use in a home network. Can we both be working on 2 separate projects/websites on our separate computers at the same time?
    If you buy 2 licenses - sure. For more details on legalities simply refer to the EULAs.
    2. What are the system requirements for this edition?
    http://www.adobe.com/products/creativesuite/mastercollection/tech-specs.html
    Mylenium

  • Still in a quandary about hardware specs. What do you think?

    I need to build a new system as editing machine for CS4. Everything is more or less clear as to what I need, there are only two issues remaining:
    1. How much RAM to install, and
    2. Disk setup.
    As always there is the balance between performance and cost. Since this is not an "unlimited means available" type of machine, you will not see many esoteric components, like SSD raids or an nVidia Quadro CX card. It is a dual E5472 Harpertown system (8 cores), running Vista 64 Business.
    1. How much RAM would be a good balance in price/performance? 8 or 16 GB? We are talking about Kingston Value Ram DDR2-6400 ECC FBDIMM, either 4 x 2 GB ( 420) or 4 x 4 GB ( 650), with the possibility to double memory in the future. If you do not make a habit of continuously switching between PP, AE, EN, PS etc. and are mainly using only 1 application at a time, I wonder if there is a noticeable performance gain to justify the additional cost. The choice is obvious if the total cost of the machine would not be stretching available means. So what do you think?
    2. Main storage is 8 x WD Caviar Black 1 TB disks in raid50 for media, renders, scratch etc. But what about the boot disk? Either a 300 GB WD Velociraptor ( 240), or 4 x WD Scorpio (7200 RPM) 160 GB in raid10 ( 250), using on-board ESB2 chipset.
    While we are at it, does it make sense to add 4 x WD Scorpio 250 GB in Raid5 (on an Areca ARC controller with free ports) just for the pagefile, projects and temporary exports? Final exports will be transferred over the network to a burning machine for DVDs. Backups of projects go to external USB disks and to a separate machine over VPN.
    The case has room for 8 hot swappable 3.5" disks and 8 hot swappable 2.5" laptop disks plus another disk and a DVD/BR burner internally. There are still 4 external eSATA ports available for future expansion. Foreseen video card is ATI 4870 512MB. Network is currently a 1 Gb connection, but can be enhanced to 2 Gb with 2 NIC's.
    Look forward to your reactions.

    Bill,
    The case is the Supermicro SC743TQ-865SQ, and the storage bay for the 2.5" disks is the Addonics AE4RCS25NSA (2.5" Disk Array 4SA).
    Thanks for your remark about the difficulty of troubleshooting boot arrays. That might be an advantage for the Velociraptor.
    Dag, no. I don't make the mistake of visiting Amsterdam unless it is absolutely necessary, like visiting the IBC, let alone visiting those bars. I came to realize that Vista has grown up a bit with SP1, and with full support for 64-bit it now makes a bit more sense than XP64. It does imply a couple of days tweaking Vista (= removing 'features') to make it work decently, but after that is accomplished I will make a slipstreamed DVD version for future installs, so that is a one-time investment. With recent enhancements in Nullsoft that is easily accomplished, including all intermediate patches and 'Patch Tuesday' versions.

  • BT email about failed collection of monthly accoun...

    I've just received an email "from" BT. Is it a scam? It feels like it is.
    PS I posted this five minutes ago - or so I thought. But I can't find it!
    FROM: [email protected]
    SUBJECT: BT Internet unable to process your recent payment of bill
    CONTENT: Dear Customer,
    This e-mail has been sent to you by BT Internet to inform you that we were unable to process your most recent payment of bill. This might be due to either of the following reasons:
    1. A recent change in your personal information. (eg: billing address, phone)
    2. Submitting incorrect information during bill payment process.
    Due to this, to ensure that your service is not interrupted, we request you to confirm and update your billing information today
    If you have already confirmed your billing information then please disregard this message as we are processing the changes you have made.
    Thanks for your co-operation.
    BT Billing Department
    Accounts Management As outlined in our User Agreement, BT® will periodically send you information about site changes and enhancements.

    It's a scam. The link needs editing out to prevent anyone clicking on it.
    There are some useful help pages here, for BT Broadband customers only, on my personal website.
    BT Broadband customers - help with broadband, WiFi, networking, e-mail and phones.

  • Question about smart collections

    I know how to create a smart collection by, first, sorting media by "type." But... how do I simply create a "new" smart collection, and then just "drag" what clips I want into said collection?
    Yes... I was able to make a new smart collection, but I can't figure out how to simply drag-and-drop the clips I want into the smart collection. I sense it is something simple, but I can't grasp it as of yet.

    You can't. A smart collection is, by definition, based on the criteria for the collection. Dragging a clip into that collection cannot give the clip the criteria for the collection. For instance, if you had a smart collection for audio clips, you could not drag a video file into it and miraculously make it an audio file.
    What you want is keyword collections.

  • Quandary about Samba media repository

    Hi folks,
    I have recently tried to make my den greener by turning devices off when not in use. I run an environment with Mac, Linux, and Windows. I managed to offload all my server functions either to my web host or to network devices I can turn off when not in use except for my media storage, which needs to be available from all three platforms.
    Currently I have the media storage on my Ubuntu Linux desktop system, shared using Samba. However, there are a number of albums that don't show up in iTunes because of accented characters. For instance, take Motorhead with the umlaut on the second "o". I could butcher the stored names to fit, but I'd rather not. When I look at these directories natively on my Linux box, they're there and I can play the music with no problem. However, when I go into Finder on the Mac, I see the "Motorhead" directory, but when I click on it, it disappears - weird. I have no difficulty seeing or playing any of these files from Windows or my Linux laptop, so this appears to be some kind of client-side problem. I have no problem with any of the other files or directories on this share from any platform.
    The plot thickens. For Christmas, I got a new 160 Gb iPod to replace my 30 Gb iPod that was just about full. As it happens, the 40 Gb drive I'm using for storage is just about full, too. So I'm almost immediately in the market for about 160 Gb of RAID 1 storage. I can see three choices:
    Ideally, it would be useful if I could have it be storage I can attach to any of my systems rather than having to fire up a server as well as the system I am using, so I'm thinking maybe USB2 or Firewire storage I can shlep around. Of course, it would need to have a file system that was readable from all three platforms, which poses a bit of a problem, I think. Also, using a removable drive with a laptop would presumably entail plugging it in when I want to use it, which might be awkward.
    Alternatively, I could use a network storage device that I could just flick on when I want to access music or movies. However, whatever I use would need to be readily accessible from all three platforms, which poses challenges. The things I'm seeing at Best Buy are geared toward Windows, using some kind of proprietary transparent local caching arrangement. Obviously, anything using SMB networking will still face the same challenges I've got today but at least it will just be there, where right now it is a bit of a fussy manual process logging in and getting the drive mounted on Ubuntu.
    Another option is to continue with the current approach and just RAID a pair of big SATA drives on my Linux box, which supports it. However, I would first need to make sure I could access all of the music, regardless of the characters used in file or directory names, even from the Mac. So I've got to figure out the filename problem before I can move and, since I've got less than 1 Gb left on the current drive, it is getting pretty urgent.
    Does anybody have any suggestions on resolving the current problem with names or on what technology to focus on for either of the other approaches?
    RalphM

    Adding more information after further research...
    It occurred to me that I hadn't gone quite as far as I might on my Linux laptop. I hadn't actually tried to mount the share as a filesystem. So I dug into that and found a couple of things that might be relevant.
    There are apparently two implementations of an SMB client mount - one, smbfs, hasn't been maintained for a while, while the newer one, cifs, is VFS-based. Playing around with it, I got very similar results to what I was seeing on the Mac when mounting with -t smbfs. However, I was able to play music from the Motorhead folder cleanly using cifs, provided I supplied the iocharset=utf8 option on the -t cifs mount. (I did not try to see if the smbfs client supported that option, though.)
    I don't know what the Mac is using under the covers for mounting SMB clients, smbfs or cifs. I am a little worried that because it is VFS-based, cifs may not be entirely portable to Darwin/FreeBSD, so I could be out of luck. On the other hand, if they are using CIFS, it could just be a matter of forcing the right option on the mount. Does anybody know? Does anybody know how Leopard is handling mounts under the covers, so I could look at the configuration files to figure it out?
    I'm really a little frustrated with the networking on this system. I've got NFS shares from the way I had things configured before that I can't figure out how to shut off. The tools I used for digging under the covers before seem to be gone and the arrangement of the file system has changed - I no longer find the familiar Unix folder names for half the stuff. When I try to specify WINS settings through the Network item in System Preferences, it won't persist the workgroup I give it, but it doesn't give me any other place to specify it either, so it comes up in WORKGROUP where everything else in the lab comes up under MACKSOFT. None of this is particularly relevant to the problem at hand, but networking seems to be the place I'm spending the most "fiddle time" on an otherwise well-behaved and self-contained system.
    Ralph

  • Help about garbage collection

    If I have code:
    String s1 = new String("a");
    String s2 = new String("b");
    for (int i=0; i<10; i++){
    s1 = s1 + s2;
    if (i%3 == 0)
    s2 = s2 + s2;
    If a mark-and-sweep garbage collector was activated after the loop had executed three times, what's its likely operation .
    Thank you so much for your help!!!

    are you sure? i'd guess it depends on how much optimising the compiler does?

    You seem to be right. Here is a quote from the JLS:

    15.18.1.2 Optimization of String Concatenation
    An implementation may choose to perform conversion and concatenation in one step to avoid creating and then discarding an intermediate String object. To increase the performance of repeated string concatenation, a Java compiler may use the StringBuffer class or a similar technique to reduce the number of intermediate String objects that are created by evaluation of an expression.
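    To see what that permission amounts to in practice, here is a sketch of the kind of rewriting a compiler is allowed to do for the loop above (the exact translation is implementation-dependent; modern javac uses StringBuilder rather than StringBuffer):

        // Possible compiler rewriting of s1 = s1 + s2: each '+' chain becomes
        // one builder, so fewer intermediate Strings are allocated, but every
        // assignment still produces a brand-new String object.
        String s1 = new String("a");
        String s2 = new String("b");
        for (int i = 0; i < 10; i++) {
            s1 = new StringBuilder().append(s1).append(s2).toString();
            if (i % 3 == 0)
                s2 = new StringBuilder().append(s2).append(s2).toString();
        }

    Either way, by the time the loop has run three times, every superseded String is unreachable, so a mark-and-sweep collector would mark the live objects (the current s1 and s2, plus whatever else is reachable) and sweep the discarded intermediates.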

  • Redesigning the Collections Framework

    Hi!
    I'm sort of an experienced Java programmer, in the sense that I program regularly in Java. However, I am not experienced enough to understand the small design specifics of the Collections Framework (or other parts of Java's standard library).
    There's been a number of minor things that bugged me about the framework, and all these minor things added up to a big annoyance, after which I've decided to (try to) design and implement my own framework.
    The thing is however, that since I don't understand many design specifics about the Collection Framework and the individual collection implementations, I risk coming up short with my own.
    (And if you are still reading this, then I thank you for your time, because already now I know that this entry is going to be long. : ) )
    Since I'm doing my Collection framework nearly from scratch, I don't have to worry too much about the issue of backwards compatibility (although I should consider making some parts similar to the collection framework as it is today, and provide a wrapper that implements the original collection interfaces).
    I also have certain options of optimizing several of the collections, but then again, there may be very specific design issues concerning performance and usability that the developers of the framework (or other more experienced Java progammers) knew about, that I don't know.
    So I'm going to share all of my thoughts here. I hope this will start an interesting discussion : )
    (I'm also not going to make a fuss about the source code of my progress. I will happily share it with anyone who is interested. It is probably even necessary in order for others to understand how I've intended to solve my annoyances (or understand what these annoyances were in the first place).)
    (I've read the "Java Collections API Design FAQ", btw).
    Below, I'm going to go through all of the things that I've thought about, and what I've decided to do.
    1.
    The Collections class is a class that consists only of static utility methods.
    Several of them return wrapper classes. However the majority of them work on collections implementing the List interface.
    So why weren't they built into the List interface (same goes for methods working only with the Collection interface, etc.)? Several of them could even be implemented more efficiently, for example calling rotate on a LinkedList.
    If the LinkedList is circular, using a sentry node connecting the head and tail, rotate is done simply by relocating the sentry node (rotating with one element would require one single operation). The Collections class makes several calls to the reverse method instead (because it lacks access to the internal workings of a LinkedList).
    If it were done this way, the Collections class would be much smaller, and contain mostly methods that return wrapped classes.
    After thinking about it a while, I think I can answer this question myself. The List interface would become rather bloated, and would force an implementation of methods that the user may not need.
    At any rate, I intend to try to do some sort of clean-up. Exactly how, is something I'm still thinking about. Maybe two versions of List interfaces (one "light", and the other advanced), or maybe making the internal fields package private and generally more accessible to other package members, so that methods in other classes can do some optimizations with the additional information.
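    To make the rotate example concrete, here is a minimal sketch of the sentry-node idea (all names are my own placeholders, not java.util internals):

        // A circular, doubly-linked list threaded through a sentinel ("sentry")
        // node. Rotating by k re-seats the sentinel k positions: O(k) pointer
        // surgery, with no calls to reverse.
        final class CircularList<E> {
            static final class Node<T> {
                T value;
                Node<T> prev, next;
            }

            private final Node<E> sentinel = new Node<>();

            CircularList() {
                sentinel.prev = sentinel;   // empty list: the sentinel
                sentinel.next = sentinel;   // points at itself
            }

            void addLast(E value) {
                Node<E> n = new Node<>();
                n.value = value;
                n.prev = sentinel.prev;
                n.next = sentinel;
                sentinel.prev.next = n;
                sentinel.prev = n;
            }

            // rotate(1) turns the old last element into the new first one.
            void rotate(int distance) {
                for (int i = 0; i < distance; i++) {
                    Node<E> tail = sentinel.prev;
                    // unlink the sentinel from the cycle...
                    sentinel.prev.next = sentinel.next;
                    sentinel.next.prev = sentinel.prev;
                    // ...and re-insert it just before the old tail
                    sentinel.prev = tail.prev;
                    sentinel.next = tail;
                    tail.prev.next = sentinel;
                    tail.prev = sentinel;
                }
            }
        }

    Compare that with Collections.rotate, which has no access to the links and falls back to reversals.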
    2.
    At one point, I realized that the PriorityQueue didn't support increase/decrease key operations. Of course, elements would need to know where in the backing data structure they were located, and this is implementation specific. However, I was rather disappointed that this wasn't supported somehow, so I figured out a way to support it anyway, and implemented it.
    Basically, I've wrapped the elements in a class that contains this info, and if the element wants to increase its key, it calls a method on the wrapping class it is contained in. It worked fine.
    It may cause some overhead, but at least I don't have to re-implement such a data structure and fiddle around so much with the element classes just because I want to try something with a PriorityQueue.
    I can do the same thing to implement a reusable BinomialHeap, FibonacciHeap, and other datastructures, that usually require that the elements contain some implementation-specific fields and methods.
    And this would all become part of the framework.
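    To show the shape of the wrapping trick, here is a stripped-down sketch (my own naming and API, not the real PriorityQueue): the heap hands out a handle that always knows its slot, so a key change can sift in O(log n) instead of the linear search that PriorityQueue.remove(Object) would need.

        import java.util.ArrayList;
        import java.util.Comparator;

        // Minimal indexed binary min-heap: each Handle tracks its position in
        // the backing array, so decreaseKey is a straight sift-up.
        final class IndexedHeap<E> {
            static final class Handle<T> {
                T element;
                int index;   // slot in 'heap'; kept in sync on every swap
                Handle(T element, int index) { this.element = element; this.index = index; }
            }

            private final ArrayList<Handle<E>> heap = new ArrayList<>();
            private final Comparator<? super E> cmp;

            IndexedHeap(Comparator<? super E> cmp) { this.cmp = cmp; }

            Handle<E> insert(E element) {
                Handle<E> h = new Handle<>(element, heap.size());
                heap.add(h);
                siftUp(h.index);
                return h;    // the caller keeps this for later key changes
            }

            // Call after making h.element smaller: float it toward the root.
            void decreaseKey(Handle<E> h) { siftUp(h.index); }

            private void siftUp(int i) {
                while (i > 0) {
                    int parent = (i - 1) / 2;
                    if (cmp.compare(heap.get(i).element, heap.get(parent).element) >= 0) break;
                    swap(i, parent);
                    i = parent;
                }
            }

            private void swap(int i, int j) {
                Handle<E> a = heap.get(i), b = heap.get(j);
                heap.set(i, b); heap.set(j, a);
                a.index = j;   // this bookkeeping is the whole point:
                b.index = i;   // handles always know where they live
            }
        }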
    3.
    This one is difficult to explain.
    It basically revolves around the first question in the "Java Collections API Design FAQ".
    It has always bothered me that the Collection interface contained methods, that "maybe" would be implemented.
    To me it didn't make sense. The Collection should only contain methods that are general for all Collections, such as the contains method. All methods that request, and manipulate the Collection, belonged in other interfaces.
    However, I also realized that the whole smart thing about the current Collection interface is that you can transfer elements from one Collection to another without needing to know what type of Collection you were transferring from/to.
    But I still felt it violated "something" out there, even if it was in the name of convenience. If this convenience was to be provided, it should be done by making a separate wrapper interface with the purpose of grouping the various Collection types together.
    If you don't know what I'm trying to say then you might have to see the interfaces I've made.
    And while I was at it, I also fiddled with the various method names.
    For example, add(int index, E element): I felt it should be insert(int index, E element). This type of minor thing caused a lot of confusion for me back then, so I cared enough to change it to something I thought more appropriate. But I have no idea how appropriate my approach may seem to others. : )
    4.
    I see an iterator as something that iterates through a collection, and nothing else.
    Therefore, it bothered me that the Iterator interface had an optional remove method.
    I myself have never needed it, so maybe I just don't know how to appreciate it. How much is it used? If it's heavily used, I guess I'm going to have to include it somehow.
    5.
    A LinkedList doesn't support random access. But true random access is when you access randomly relative to the first index.
    Iterating from the first to the last with a for statement isn't really random access, but it still causes bad performance in the current LinkedList implementation. One would have to use the ListIterator to achieve this.
    But even here, if you want a ListIterator that starts in the middle of the list, you still need to traverse the list to reach that point.
    So I've come up with a LinkedList that remembers the last element accessed via the basic methods get, set, remove etc., and can use it to access elements relative to it.
    Basically, there is now a special internal "ListIterator" that is used to access elements when the basic methods are used. This opens the way for several improvements (although that may depend on how you look at it).
    It introduces some overhead in the form of if-else statements, but otherwise, I'm hoping that it will generally outperform the current LinkedList class (when using lists with a large number of elements).
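    Here is a rough sketch of that caching idea (heavily simplified, no bounds checks, not real java.util code):

        // A doubly-linked list whose get() starts from the closest of the head,
        // the tail, or the node touched by the previous call, so a plain
        // for (int i = 0; i < n; i++) list.get(i) loop becomes O(n) overall.
        final class CursorList<E> {
            static final class Node<T> { T value; Node<T> prev, next; }

            private Node<E> head, tail, lastNode;
            private int size, lastIndex = -1;

            void addLast(E value) {
                Node<E> n = new Node<>();
                n.value = value;
                if (tail == null) { head = tail = n; }
                else { tail.next = n; n.prev = tail; tail = n; }
                size++;
            }

            E get(int i) {
                int fromHead = i, fromTail = size - 1 - i;
                int fromLast = lastIndex >= 0 ? Math.abs(i - lastIndex) : Integer.MAX_VALUE;
                Node<E> n;
                int at;
                // pick the cheapest starting point for the walk
                if (fromLast <= fromHead && fromLast <= fromTail) { n = lastNode; at = lastIndex; }
                else if (fromHead <= fromTail)                    { n = head;     at = 0; }
                else                                              { n = tail;     at = size - 1; }
                while (at < i) { n = n.next; at++; }
                while (at > i) { n = n.prev; at--; }
                lastNode = n; lastIndex = i;   // remember for the next call
                return n.value;
            }
        }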
    6.
    I've also played around with the ArrayList class.
    I've implemented it in a way that is something like a random-access Deque. This has made it possible to improve certain methods, like inserting an element/Collection at some index.
    Instead of always shifting subsequent elements to the right, elements can be shifted left as well. That means that inserting at index 0 only requires a single operation, instead of a number of operations proportional to the length of the list.
    Again, this introduces some overhead with if-else statements, but it is still better in many cases (again, the List must be large for this to pay off).
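    A sketch of the deque-style storage (placeholder names, only the front-insert path shown):

        // Elements live in a circular array; inserting at index 0 just moves
        // the head pointer one slot to the left instead of shifting everything.
        final class RingList<E> {
            private Object[] buf = new Object[16];
            private int head, size;

            void addFirst(E e) {            // O(1): the case ArrayList does worst
                if (size == buf.length) grow();
                head = (head - 1 + buf.length) % buf.length;
                buf[head] = e;
                size++;
            }

            @SuppressWarnings("unchecked")
            E get(int i) {                  // still O(1) random access
                return (E) buf[(head + i) % buf.length];
            }

            private void grow() {
                Object[] next = new Object[buf.length * 2];
                for (int i = 0; i < size; i++) next[i] = get(i);  // unrolls the ring
                buf = next;
                head = 0;
            }
        }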
    7.
    I'm also trying to do a hybrid between an ArrayList and a Linked list, hopefully allowing mostly constant insertion, but constant true random access as well. It requires more than twice the memory, since it is backed by both an ArrayList and a LinkedList.
    The overhead introduced, and the fact that worst-case random access is no better than that of a pure LinkedList (which occurs when elements are inserted at the same index many times and you then try to access those elements), may make this class infeasible.
    It was mostly the first three points that pushed my over the edge, and made me throw myself at this project.
    You're free to comment as much as you like.
    If no real discussion starts, that's OK.
    It's not like I'm not having fun with this thing on my own : )
    I've started from scratch several times because of design problems discovered too late, so if you request to see some of the source code, it is still in the works and I would have to scurry off and add a lot of java-comments as well, to explain code.
    Great. Thanks!

    This sort of reply has great value : )
    Some of them show me that I need to take some other things into consideration. Some of them however, aren't resolved yet, some because I'm probably misunderstanding some of your arguments.
    Here goes:
    1.
    a)
    Are you saying that they were made static, and therefore were implemented in a utility class? Isn't it the other way around? Suppose that I did put them into the List interface; that would mean they don't need to be static anymore, right?
    b)
    A bloated List interface is a problem. Many of the methods would, however, have a default not-always-efficient implementation in an abstract base class.
    Many of the list-algorithms dump sequential lists in an array, execute the algorithm, and dump the finished list back into a sequential list.
    I believe that there are several of them where one of the "dumps" can be avoided.
    And even if other algorithms would effectively just be moved around, it wouldn't necessarily be in the name of performance (some of them cannot really be done better), but in the name of consistency/convenience.
    Regarding convenience, I'm getting the message that some may think it more convenient to have these "extra" methods grouped in a utility class. That can be arranged.
    But when it comes to consistency in method names (which concerns usability as well), I feel it is something else entirely.
    For example, take the two methods fill and replaceAll in the Collections class. They both set specific elements (or all of them) to some value. So they're both related to the set method, but use method names that are very different. To me it makes sense to have a method called setAll(...) and overload it. And since the List interface has a set method, I would very much like to group all these related methods together.
    Can you follow my idea?
    And well, the Collections class would become smaller. If you ask me, it's rather bloated right now, and supports a huge mixed bag of related and unrelated utility methods. If we took this to the extreme, the Collections class and the Arrays class would be merged.
    No, right? That would be hell : )
    2.
    At a first glance, your route might do the trick. But there are several things here that aren't right:
    a)
    In order to delete an object, you need to know where it is. The only remove method supported by PriorityQueue actually does a linear search. Increase and decrease operations are supposed to be log(n). Doing a linear search would ruin that.
    You need a method something like removeAt(int i), where i would be the index in the backing array (assuming you're using an array). The element itself would need to know that int, meaning that it needs an internal int field, even though this field is only required due to the internal workings of PriorityQueue. Every time you want to insert some element, you need to add a field that really has nothing to do with that element from an object-oriented view.
    b)
    Even if you had such a remove method, using it to increase/decrease a key would use up to twice the operations necessary.
    Increasing a key, for example, only requires you to float the element up the heap. You don't need to remove it first, which would require an additional log(n) operations.
    3.
    I've read the link before, and I agree with it. But I feel that there are other ways to avoid an insane number of interfaces. I also think I know why I arrive at other design choices.
    The Collection interface as it is now is smart because it covers a wide range of collection types with add and remove methods. This is useful because you can exchange elements between collections without knowing the type of the collection.
    The drawback is of course that not all collections are, e.g., modifiable.
    What I think the problem is, is that the Collection interface is trying to be two things at once.
    On one side, it wants to be the base interface. But on the other side, it wants to cast a wide net over all the collection types.
    So what I've done is make a Collection interface that is in fact a true base interface, only supporting methods that all collection types have in common.
    Then I have a separate interface that tries to support methods for exchanging elements between collections of unknown type.
    There isn't necessarily any performance benefit (actually, it may even introduce some overhead), but it certainly is easier to grasp, for me at least, since it is more logically grouped.
    I know that I'm basically challenging the design choices of Java programmers who have much more experience than me. Hell, they probably have already considered and rejected what I'm considering now. In that case, I defend myself by mentioning that it isn't described as a possibility in the FAQ : )
    4.
    This point is actually related to point 3, because if I want the Collection interface to only support common methods, then I can't have an Iterator with a remove method.
    But okay... I need to support it somehow. No way around it.
    5. 6. & 7.
    The message I'm getting here is that if I implement these changes to LinkedList and ArrayList, then they aren't really LinkedList and ArrayList anymore.
    And finally, why do that, when I'm going to do a class that (hopefully) can simulate both anyway?
    I hadn't thought of the names as being the problem.
    My line of thought was, that okay, you have this arraylist that performs lousy insertion and removal, and so you avoid it.
    But occasionally, you need it (don't ask me how often this type of situation arises. Rarely?), and so you would appreciate it if it performed "ok". It would still be linear, but would often perform much better (in extreme cases it would be constant time).
    But these improvements would almost certainly change the way one would use LinkedList and ArrayList, and I guess that requires different names for them.
    Great input. That I wouldn't have thought of. Thanks.
    There is however some comments I should comment:
    "And what happens if something is suibsequently inserted or removed between that element and the one you want?"
    Then it would perform just like one would expect from a LinkedList. However if that index were closer to the last indexed position, it would be faster. As it is now, LinkedList only chooses either the first index or the last to start the traversal from.
    If you're working with a small number of elements, then this is definitely not worth it.
    "It sounds to me like this (the hybrid list) is what you really want and indeed all you really need."
    You may be right. I felt that since the hybrid list would use twice as much memory, it would not always be the best choice.
    I'm going to think about that one. Thanks.

  • Massive memory hemorrhage; heap usage goes from about 64mb to 1.3gb

    **[SOLVED]**
    Note: I posted this on stackoverflow as well, but a solution was not found.
    Here's the problem:
    [1] http://i.stack.imgur.com/sqqtS.png
    As you can see, the memory usage balloons out of control! I've had to add arguments to the JVM to increase the heapsize just to avoid out of memory errors while I figure out what's going on. Not good!
    ##Basic Application Summary (for context)
    This application is (eventually) going to be used for basic on screen CV and template matching type things for automation purposes. I want to achieve as high of a frame rate as possible for watching the screen, and handle all of the processing via a series of separate consumer threads.
    I quickly found out that the stock Robot class is really terrible speed wise, so I opened up the source, took out all of the duplicated effort and wasted overhead, and rebuilt it as my own class called FastRobot.
    ##The Class' Code:
        public class FastRobot {
            private Rectangle screenRect;
            private GraphicsDevice screen;
            private final Toolkit toolkit;
            private final Robot elRoboto;
            private final RobotPeer peer;
            private final Point gdloc;
            private final DirectColorModel screenCapCM;
            private final int[] bandmasks;

            public FastRobot() throws HeadlessException, AWTException {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
                toolkit = Toolkit.getDefaultToolkit();
                elRoboto = new Robot();
                peer = ((ComponentFactory) toolkit).createRobot(elRoboto, screen);
                gdloc = screen.getDefaultConfiguration().getBounds().getLocation();
                this.screenRect.translate(gdloc.x, gdloc.y);
                screenCapCM = new DirectColorModel(24,
                        /* red mask */    0x00FF0000,
                        /* green mask */  0x0000FF00,
                        /* blue mask */   0x000000FF);
                bandmasks = new int[3];
                bandmasks[0] = screenCapCM.getRedMask();
                bandmasks[1] = screenCapCM.getGreenMask();
                bandmasks[2] = screenCapCM.getBlueMask();
                Toolkit.getDefaultToolkit().sync();
            }

            public void autoResetGraphicsEnv() {
                this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
            }

            public void manuallySetGraphicsEnv(Rectangle screenRect, GraphicsDevice screen) {
                this.screenRect = screenRect;
                this.screen = screen;
            }

            public BufferedImage createBufferedScreenCapture(int[] pixels) throws HeadlessException, AWTException {
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                WritableRaster raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                return new BufferedImage(screenCapCM, raster, false, null);
            }

            public int[] createArrayScreenCapture() throws HeadlessException, AWTException {
                return peer.getRGBPixels(screenRect);
            }

            public WritableRaster createRasterScreenCapture(int[] pixels) throws HeadlessException, AWTException {
                pixels = peer.getRGBPixels(screenRect);
                DataBufferInt buffer = new DataBufferInt(pixels, pixels.length);
                WritableRaster raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                // SunWritableRaster.makeTrackable(buffer);
                return raster;
            }
        }

    In essence, all I've changed from the original is moving many of the allocations out of the method bodies and making them attributes of the class, so they're not re-created on every call. Doing this actually had a significant effect on frame rate. Even on my severely underpowered laptop, it went from ~4 fps with the stock Robot class to ~30 fps with my FastRobot class.
    ##First Test:
    When I started getting out-of-memory errors in my main program, I set up this very simple test to keep an eye on FastRobot. Note: this is the code which produced the heap profile above.
        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }

    ##Examined:
    It doesn't do this every time, which is really strange (and frustrating!). In fact, it rarely does it at all with the above code. However, the memory issue becomes easily reproducible if I have multiple for loops back to back.
    #Test 2
        public class TestFBot {
            public static void main(String[] args) {
                try {
                    FastRobot fbot = new FastRobot();
                    double startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1000; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 200; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < 1500; i++)
                        fbot.createArrayScreenCapture();
                    System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                } catch (AWTException e) {
                    e.printStackTrace();
                }
            }
        }

    ##Examined
    The out-of-control heap is now reproducible, I'd say, about 80% of the time. I've looked all through the profiler, and the thing of most note (I think) is that the garbage collector seemingly stops right as the fourth and final loop begins.
    The output from the above code gave the following times:
    Time taken: 24.282 //Loop1
    Time taken: 11.294 //Loop2
    Time taken: 7.1 //Loop3
    Time taken: 70.739 //Loop4
    Now, if you sum the first three loops, it adds up to 42.676, which suspiciously corresponds to the exact time that the garbage collector stops, and the memory spikes.
    [2] http://i.stack.imgur.com/fSTOs.png
    Now, this is my first rodeo with profiling, not to mention the first time I've ever even thought about garbage collection -- it was always something that just kind of worked magically in the background -- so, I'm unsure what, if anything, I've found out.
    ##Additional Profile Information
    [3] http://i.stack.imgur.com/ENocy.png
    Augusto suggested looking at the memory profile. There are 1500+ `int[]` that are listed as "unreachable, but not yet collected." These are surely the `int[]` arrays that `peer.getRGBPixels()` creates, but for some reason they're not being destroyed. This additional info, unfortunately, only adds to my confusion, as I'm not sure why the GC wouldn't be collecting them.
    ##Profile using small heap argument -Xmx256m:
    At irreputable's and Hot Licks' suggestion I set the max heap size to something significantly smaller. While this does prevent it from making the 1gb jump in memory usage, it still doesn't explain why the program is ballooning to its max heap size upon entering the 4th iteration.
    [4] http://i.stack.imgur.com/bR3NP.png
    As you can see, the exact issue still exists, it's just been made smaller. ;) The issue with this solution is that the program, for some reason, is still eating through all of the memory it can -- there is also a marked change in fps performance between the first iterations, which consume very little memory, and the final iteration, which consumes as much memory as it can.
    The question remains why is it ballooning at all?
    ##Results after hitting "Force Garbage Collection" button:
    At jtahlborn's suggestion, I hit the Force Garbage Collection button. It worked beautifully. It goes from 1gb of memory usage down to the baseline of 60mb or so.
    [5] http://i.stack.imgur.com/x4282.png
    So, this seems to be the cure. The question now is, how do I programmatically force the GC to do this?
    ##Results after adding local Peer to function's scope:
    At David Waters' suggestion, I modified the `createArrayScreenCapture()` function so that it holds a local `Peer` object.
    Unfortunately no change in the memory usage pattern.
    [6] http://i.stack.imgur.com/Ky5vb.png
    Still gets huge on the 3rd or 4th iteration.
    #Memory Pool Analysis:
    ###ScreenShots from the different memory pools
    ##All pools:
    [7] http://i.stack.imgur.com/nXXeo.png
    ##Eden Pool:
    [8] http://i.stack.imgur.com/R4ZHG.png
    ##Old Gen:
    [9] http://i.stack.imgur.com/gmfe2.png
    Just about all of the memory usage seems to fall in this pool.
    Note: PS Survivor Space had (apparently) 0 usage
    ##I'm left with several questions:
    (a) does the Garbage Profiler graph mean what I think it means? Or am I confusing correlation with causation? As I said, I'm in an unknown area with these issues.
    (b) If it is the garbage collector... what do I do about it..? Why is it stopping altogether, and then running at a reduced rate for the remainder of the program?
    (c) How do I fix this?
    Does anyone have any idea what's going on here?

    SO came through.
    Turns out this issue was directly related to the garbage collector. The default one, for whatever reason, would fall behind on its collections at certain points, and so memory would balloon out of control; once that memory was allocated, it became the new normal for the GC to operate at.
    Manually setting the GC to ConcurrentMarkSweep solved this issue completely. After numerous tests, I have been unable to reproduce the memory issue. The garbage collector does an excellent job of keeping on top of these minor collections.
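    For anyone who finds this thread later, the two levers discussed above look like this (a sketch, not the poster's exact setup; the flag is the HotSpot spelling current at the time, and System.gc() is only a request the JVM may ignore):

        // Launch with the concurrent collector instead of the default:
        //   java -XX:+UseConcMarkSweepGC TestFBot
        public class GcNudge {
            public static void main(String[] args) {
                int[][] garbage = new int[64][];
                for (int i = 0; i < garbage.length; i++) {
                    garbage[i] = new int[1 << 20];   // ~4 MB apiece, like screen grabs
                }
                garbage = null;                      // drop the only references
                System.gc();                         // *request* a full collection
                System.out.println("free after request: "
                        + Runtime.getRuntime().freeMemory());
            }
        }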

  • Multiple versus a single collection search with Verity

    I have a simple question about Verity search collections.
    We have been using Verity for a number of years, but we have never done any real performance testing of a single collection versus many. All documentation and articles argue for multiple small collections for better indexing performance. But what is the performance hit when searching 2 collections instead of 1 combined collection? How about 4 collections instead of 1?
    Thanks
    Don Vaillancourt

    We are searching 7 collections with some 100,000 documents. I have not noticed any performance issues compared to searching only one collection.

  • How do I save or export my collections?

    How do I backup, save, or export my collections?  I have about 20 collections.  I want to be able to restore them all if my computer crashes and I need to pull everything up on a new computer.  I may also want to have my collections accessible on a secondary computer.

    Actually, you should right-click on the collection and choose the command Export This Collection As Catalog (and check the box next to Include Negatives). This will create a new catalog; you can move it and the photos thus exported to the 2nd computer, and then in Lightroom on the 2nd computer use File > Import from Another Catalog.
