Duplicates,duplicates from many catalogs

A couple of years ago I started having a lot of problems with PSE & its catalogs.  This only got worse when I upgraded to newer versions.  As a precaution against losing data I saved all of the catalogs.  Things have settled down a little with the current version I'm using, PSE 11, although I can't get all of my tags for people, places & events to be displayed correctly.
Yesterday I ran a duplicate-file program on the drive that held all of the old catalogs & it reported as many as 4 copies of some pictures, which it says are identical.  There are supposed to be up to 15,000 duplicates.  My problem now is: can this be believed, & should I just delete the 'duplicates', or am I risking something that I can't recover?
Do the programs that look for duplicates all use the exact same criteria, so that they should all produce the same results?
I would assume that the pictures taken with digital cameras in more recent times would all still have EXIF data embedded in them.  Is this correct, or does PSE strip this from the actual pictures & just link the data to the catalog somehow?
As for pictures that I had scanned, I had added the date taken (where known) as well as tagging them with names, people & events.
The pictures were stored in a tree structure, with pictures located in folders labelled country, state, event, etc.  Much of this appears to have been lost, & I'm aware that many people now recommend that all pictures just be placed in one folder. Is this correct?
Can 2 copies of pictures be 'identical' & still have PSE treat them differently?
Could I just delete all catalogs, including the current one, & then let PSE do a search for pictures on my hard drives? What would I end up with?
Any help in this regard would be much appreciated.

Robert Jacobson has a script for removing duplicates here:
http://home.comcast.net/~teridon73/itunesscripts/
As for removing MPEG files, have you discovered the "Kind" column? If you go to View >> View Options, there is a check box to display it.
You could either sort on the kind field or create a smart playlist where Kind is the format you want to delete.
This will group files of the same type together, but you would have to delete them yourself.

Similar Messages

  • Clone from recovery catalog

    Hi,
    Every month I am doing a duplicate from prod to a test database. From the coming month I have to change my strategy and clone from the standby instead of production, so I have created a recovery catalog and registered my standby. My question: is it possible to do a duplicate from the recovery catalog to the test database? If possible, this is my connection:
    rman catalog rman/rman123@catdb auxiliary /
    Should I make any modifications in the recovery catalog? Please suggest how to do the cloning from the RC, if possible.
    My database is 10.2 on an AIX server.
    Regards
    Faruk

    You can use a recovery catalog when doing a cloning / DUPLICATE.
    Note, however, the 10.2 restrictions on the DUPLICATE Command : http://docs.oracle.com/cd/B19306_01/backup.102/b14194/rcmsynta028.htm#RCMRF126
    You must be connected to both the target database and auxiliary instance. The auxiliary instance must be started with the NOMOUNT option, and the target database must be mounted or open. The target database cannot be a standby database.
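    A minimal sketch of the invocation with a catalog (names are hypothetical; per the restriction above, the TARGET connection must be to the primary, not the standby):
    rman TARGET sys/password@prod CATALOG rman/rman123@catdb AUXILIARY /
    RUN {
      ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
      DUPLICATE TARGET DATABASE TO testdb;
    }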
    Hemant K Chitale

  • Importing from another catalog hangs up at "Checking for changed duplicate photos" in LR 4.3

    I went on a vacation, so I exported a subset of my main LR catalog on my PC so I could do some work on my laptop.  I added some new photos and also edited some older ones.
    Now that I am back home, when I try to Import from the smaller catalog into my main catalog on my PC, the Import from Catalog process gets immediately stuck on the step "Checking for changed duplicate photos."
    How can I work around this?
    Is this a new bug in 4.3?
    Thanks.
    John

    Yes, the catalogs were optimized, but...the master catalog on my desktop is
    HUGE.  I mean...really, really, HUGE.
    So, that probably explains the very long time it took?
    But...I was confused at first, and really did think that LR was 'stuck'
    while importing from the smaller catalog and "Checking for changed duplicate
    photos".
    Next time, I'll just be more patient.
    And...in the end, everything went as it should, and I now have all my edits
    and new photos incorporated into my HUGE catalog on my desktop.
    JJ&J

  • Best app to remove duplicates from iTunes 2014

    Hi All,
    I've been trying to research the best application to sort and remove duplicates from my iTunes library. I have over 7000 songs, and iTunes' built-in duplicate finder doesn't look at the track fingerprint, which is useful for those songs which are labelled "Track_1" etc.
    Has anyone reviewed any recent products? I was looking at TuneUp, but after reading so many negative comments, I've decided not to go down that path. I would prefer a program that did most of the work for me, due to the number of songs. Happy to pay for a good product...
    I do have MusicBrainz Picard, which has done a great job of tagging, but it doesn't remove duplicates.
    Thanks in advance :-)

    TuneUp is a great app.  When they moved from version 2 to version 3 is when it went to crap and all heck broke loose.  They shut their doors, but they have since reopened and gone back to developing version 2.  I use that version and I am pretty happy with it as an overall cleanup utility.  I also use MusicBrainz and a couple of other utilities, but in the end, if you have an enormous library (20k plus) then you are going to have a few slip through.  If I were you, I would probably go with TuneUp plus a thorough third-party duplicate finder.  dupeGuru's Music Edition seems to do a pretty good job.

  • Deleting Duplicates from a table

    It's a huge table with 52 fields and 30k rows. I need to delete the duplicates based on one of the fields. GROUP BY is taking a lot of time. Is there a quicker way to delete the duplicates using SQL?
    Thanks.

    How many duplicates have you got? Do you have even a vague idea? 1%? 20%? 90%?
    One way would be to add a unique constraint on the column in question. This will fail, of course, but you can use the EXCEPTIONS INTO clause to find all the ROWIDs which have duplicate values. You can then choose to delete those rows using a variant on the query already posted. You may need to run %ORACLE_HOME%\rdbms\admin\utlexcptn.sql to build the EXCEPTIONS table first.
    This may seem like some unnecessary work, but the most effective way of deleting duplicates from a table is to have relational integrity constraints in place which prevent you having duplicates in the first place. To paraphrase Tim Gorman, you can't get faster than zero work!
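    A sketch of that approach, assuming a table T with duplicates in column COL1 (all names hypothetical):
    -- build the EXCEPTIONS table first
    @?/rdbms/admin/utlexcptn.sql
    -- the constraint fails, but the offending ROWIDs are recorded
    ALTER TABLE t ADD CONSTRAINT t_col1_uk UNIQUE (col1)
      EXCEPTIONS INTO exceptions;
    -- delete all but one row from each duplicated group
    DELETE FROM t
    WHERE  rowid IN (SELECT row_id FROM exceptions)
    AND    rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY col1);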
    Cheers, APC

  • Duplicates in my catalog

    After fooling around with the folders, I hit Synchronize Folders and Lightroom added about 3,000 files to my catalog. Approximately half of them have no metadata and cannot be located (at least not immediately). The other half appear to be duplicates -- already in the catalog and having a second location in Windows. (I thought an original image could exist only once in Lightroom!) I identified those 3,000 files with a frame color so I can sort and identify them.
    So, is there a way to ask the program to "find all the duplicates and delete one of each from the catalog and from Windows"? I don't want to do it one-by-one and I'm afraid of just deleting the whole batch. If I did delete the entire 3,000, the number of photos in my catalog would be back to approximately where it was before I screwed it up.

    Okay, I gave up and started a whole new catalog. Rather than use the feature that watches folders and automatically adds photos, I'm carefully adding them one folder at a time with the "get photos" function. I've now done almost two years' worth and now I'm starting to see the SAME problem creep in. There's a group of photos now in the middle of my collection where each one shows up twice -- once with the real file name and once with the shortened version with the ~ in the name. Can anyone help? This is driving me nuts!

  • Union all-distinct and remove duplicates from nested table?

    Hi all,
    I need a select that will bulk collect some data in my nested table.
    I have two tables from which I need to select all the accounts only once (i.e. remove duplicates).
    Tried to search on the forum...but no luck.
    I have a table with one column:
    create table a1(account_no number);
    and a second table with 3 columns.
    create table a2 (account_no number, name number, descr varchar2(100)); -- DESC is a reserved word, so the column is named DESCR here
    I have a nested table like:
    table of a2%rowtype;
    Can I select from these two tables in one select and put just one row per account into my nested table?
    If I have in a2 rows like:
    1 'test' 'test2'
    2 'aaaa' 'aa'
    and in a1 a row like:
    1
    I want to put in my nested table just (1, null, null) and (2, 'aaaa', 'aa'), or (1, 'test', 'test2') and (2, 'aaaa', 'aa'); it does not matter which of those two rows I insert.
    Second question:
    If I use:
    BANNER
    Oracle9i Release 9.2.0.5.0 - Production
    PL/SQL Release 9.2.0.5.0 - Production
    CORE     9.2.0.6.0     Production
    TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
    NLSRTL Version 9.2.0.5.0 - Production
    SQL>
    what is the best solution to remove duplicates from a nested table like mine?
    I thought that I could build another nested table, loop over my first NT, and for each row check whether the same account appeared in previous lines.
    It would be like:
    for i in 1 .. nt_first.count loop
      for j in 1 .. i - 1 loop
        -- check if this line appeared in previous lines; if it did, do not move it into the second collection
      end loop;
    end loop;
    Is this the best option in Oracle 9i?

    I have a table with one column:
    create table a1(account_no number);
    and a second table with 3 columns:
    create table a2 (account_no number, name number, descr varchar2(100)); -- DESC is reserved, so DESCR
    All I need are the accounts; the rest is extra data that I can ignore in this step. But if it is available, it is ok to use it.
    Is using one select in this case much better than trying to remove duplicates by parsing some nested table with FOR loops many times?
    Thanks
    hi,
    try to use union. Union automatically removes duplicates between two or more tables.
    with t1 AS
           (select '3300000' account_no FROM DUAL UNION
            select '6500000' account_no FROM DUAL union
            select '6500000' account_no FROM DUAL union
            select '6500000' account_no FROM DUAL union
            select '6500000' account_no FROM DUAL)
    select * from t1;
    ACCOUNT_NO
    3300000
    6500000
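    For the original two-table question, a hedged sketch (using the a1/a2 tables above, with the column renamed DESCR): pad a1 with nulls so the two branches are union-compatible, then keep one row per account, preferring a2's data when it exists. BULK COLLECT the result into your nested table.
    SELECT account_no, MAX(name) AS name, MAX(descr) AS descr
    FROM (
          SELECT account_no, NULL AS name, NULL AS descr FROM a1
          UNION
          SELECT account_no, name, descr FROM a2
         )
    GROUP BY account_no;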

  • Eliminating duplicates from subquery...

    What is the best way to eliminate duplicates from a subquery:
    SELECT dept_no, dept_name
    FROM dept D
    WHERE EXISTS (
    SELECT 'X'
    FROM emp E
    WHERE E.dept_no = D.dept_no);
    OR
    SELECT dept_no, dept_name
    FROM dept D
    WHERE EXISTS ( SELECT 'X'
    FROM emp E
    WHERE E.dept_no = D.dept_no AND ROWNUM < 2);
    Thanks!

    >
    UPDATE TABLE1
    SET COL1 = (
    SELECT DISTINCT COL1
    FROM TABLE2, TABLE3
    WHERE TABLE2.ID = TABLE3.ID )
    You need to refine your example. At present you appear to be updating every row in table1 to the same value - but only if col1 (which could be from table2 or table3 - or might be accidental capture from table1) holds just one distinct value across the query; but it looks as if you're likely to get 'single-row subquery returns more than one row' as an error.
    I guess you're trying to do something LIKE:
    update t1
    set t1.col1 = (
      select t2.col2
      from t2
      where  t2.colx = t1.coly
      and  exists (
        select null
        from t3
        where t3.id = t2.id
      )
    );
    The most efficient access path depends on how many rows will have to be examined in each table, and how many times you will have to jump to another table to find related data - and if your query is roughly the shape of this one, the optimizer may be able to transform it in a variety of ways to find an efficient access path.
    As it stands, my example will be setting col1 to null whenever there is no match for coly in table t2 - and the optimizer would have to drive off t1 looking at every row to do this. Your requirement, and available predicates, indexes and constraints, may allow, or force, a completely different strategy.
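    A hedged variant of the same sketch that avoids that null-out behaviour, by restricting the update to rows that actually have a match (same hypothetical tables as above):
    update t1
    set t1.col1 = (
      select t2.col2
      from t2
      where t2.colx = t1.coly
      and exists (select null from t3 where t3.id = t2.id)
    )
    where exists (
      select null
      from t2
      where t2.colx = t1.coly
      and exists (select null from t3 where t3.id = t2.id)
    );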
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Insert data from table1 with PK to remove duplicates from table2 without PK

    There will probably be a simple solution to my issue although it is currently evading me. I have a table that contains 14 million rows. There are many duplicates within this table that I need to remove. I was hoping an old trick I used in MS Access would
    help: the solution was to create an identical table and add a PK on the field I wanted to remove the duplicates from, then just copy+paste append from one table to the other, and Access would just drop the rows that were duplicated. Very simple, yet it worked
    quickly and effectively. Is there a quick way to do this in MS SQL, as everything I've tried so far has failed? Any help would be greatly appreciated.
    Ian

    try below
    create table source_table (
    column1 int,
    column2 varchar(32)
    );
    create table destination_table (
    column1 int primary key,
    column2 varchar(32)
    );
    insert into source_table (column1, column2) values (1, 'VAL2');
    insert into source_table (column1, column2) values (2, 'VAL3');
    insert into source_table (column1, column2) values (3, 'VAL4');
    insert into source_table (column1, column2) values (4, 'VAL5');
    insert into source_table (column1, column2) values (4, 'VAL5');
    insert into source_table (column1, column2) values (4, 'VAL5');
    GO
    -- simple method
    WITH CTE AS (
    select
    ROW_NUMBER() OVER (partition by column1 ORDER BY column2) rno, column1, column2
    from source_table )
    insert into destination_table(column1, column2)
    select column1, column2 from CTE where rno =1
    -- inserting in while loop little chunks(500000), you can reduce if you want
    Declare @startid int =0, @totalrecords int = 14000000, @count int = 500000
    while(@startid <= @totalrecords)
    begin
    ;WITH CTE AS (
    select
    ROW_NUMBER() OVER (partition by column1 ORDER BY column2) rno, column1, column2
    from source_table where column1 between @startid and @startid+@count)
    insert into destination_table(column1, column2)
    select column1, column2 from CTE where rno =1
    Print convert(varchar , @startid+@count)+ ' processed' -- indication while running
    set @startid = @startid + @count + 1 -- +1 so consecutive chunks don't overlap (BETWEEN is inclusive)
    end
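    If you would rather delete in place than copy the keepers to a second table, a hedged sketch of the same ROW_NUMBER idea (T-SQL allows deleting through a CTE defined over a single table):
    ;WITH CTE AS (
    select
    ROW_NUMBER() OVER (partition by column1 ORDER BY column2) rno
    from source_table )
    delete from CTE where rno > 1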
    Thanks
    Saravana Kumar C

  • Deleting duplicate from list

    What is the easiest and fastest way to delete duplicates from a
    list?
    My dynamic list is pretty small. I need to loop through it and
    delete duplicates to give a unique item list.
    Don't we have a CF function for this yet?

    What do I think? Well - I think your code has a few problems:
    1) It's not very re-usable
    2) It's not very easy to read (variables are named poorly)
    3) It introduces unnecessary usage of complex data types
    (structs)
    4) Functionally, it does not preserve the sort order of the
    original list.
    To elaborate on #4 above....once you have your list values
    stored as "keys" in a struct, you lose the original order of the
    list. Keys in a struct are completely unsorted. A call to
    StructKeyList will return a list of keys, with no guarantee on the
    order of the keys returned. The order in which the keys are
    inserted makes no difference. Once all of the keys are defined in a
    struct, they are all essentially un-ordered indexes.
    If order is not important to you for this particular usage,
    then you shouldn't have anything to worry about. Functionally, your
    code WILL work. But my honest opinion is that there are many
    "better" ways to approach this (see the example UDF I attached
    previously).
    Please bear in mind I'm not trying to be mean here, just
    offering some constructive criticism.

  • RMAN duplicate from RAC with ASM to RAC with ASM

    I'm still reading and trying to figure out how to duplicate...and I'm finding that there are some extra considerations you have to work with when doing a RAC system.
    Does anyone have any good links to articles/docs that spell out what to do in this scenario?
    I'm wanting to duplicate from tape backups, using NO connectivity to the source/target....
    My tape backups do include the spfile and control files (autobackup) from the source.
    Thank you in advance,
    cayenne

    damorgan wrote:
    Possibly you are confusing an instance with a database.
    A RAC database is just a database. What is different is the instances, the clusterware, and the storage: for example, ASM.
    A backup is a backup is a backup.
    Can you be more specific about what you are trying to do, on what hardware, operating system, version, and edition.
    RMAN cannot create an instance, either RAC or stand-alone.
    Thank you for the reply.
    I have my source system, OS = RHEL5, running Oracle 11gR2, a 5-node RAC cluster.
    I'm doing RMAN backups to tape...hourly arch. logs, Daily incremental lvl=1 backups and weekly incremental lvl=0
    None of the tape backups have ever been tested for restoring...and I've never restored a database myself before, total noob here.
    I have a test area I've set up: a 2-node RAC cluster, running 11gR2 with OS = RHEL5.
    The tape is accessible from both systems.
    I am wanting to test the tape backups...and thought the RMAN DUPLICATE process would be the way to go.
    I am wanting to NOT connect to the source database...trying to simulate somewhat of a disaster recovery scenario. I'm only wanting to use tape backup, and the test area for the auxiliary instance.
    So far what I've seen I need to learn to do is something like:
    1. Create a password file for the new aux database to be duplicated to
    2. Establish Net connectivity
    3. Create an initial parameter file for the aux instance (a sketch follows this list)
         Due to a bug referenced in note 334899.1, add this (because of the RAC source):
         _no_recovery_through_resetlogs=TRUE
    4. Start the aux instance NOMOUNT
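    A minimal sketch of what that pfile might contain (all names and disk groups are hypothetical; with OMF on ASM, RMAN generates new file names itself, which also answers the later question about needing to document the old control file and redo locations):
    # init_AUX.ora -- hypothetical minimal pfile for the auxiliary instance
    db_name=PROD                        # OP plans to keep the source name; use the new name if you DUPLICATE ... TO a different one
    compatible=11.2.0                   # match the source setting
    db_create_file_dest=+DATA           # OMF: RMAN creates datafiles, control files and redo here
    db_recovery_file_dest=+FRA          # hypothetical recovery area disk group
    db_recovery_file_dest_size=200G
    cluster_database=false              # duplicate as a single instance first, convert to RAC afterwards
    _no_recovery_through_resetlogs=TRUE # the RAC workaround from step 3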
    Everything I'm reading though...is basically doing this from single instance to single instance...and what little info I've seen on doing it from RAC, indicates there are some differences. The "_no_recovery_through_resetlogs=TRUE" is one thing I found...but wondering what else.
    Also, so many of the examples I'm finding...are doing the duplication connecting to the target/source...and also doing backups to disk rather than tape...
    Right now, I'm at #1...trying to figure what to put into an init pfile...I'm seeing DB_name, which will be the same as the one I'm cloning from.
    I'm not sure what else....
    I'm wondering if this is necessary..since on the tape backups from the source...I backup the SPFILE...can that not be used somehow in this?
    For the init file, examples show that I need to put in entries for control files and redo logs....if the source system was down and gone, how would I know where these were on the old system? This isn't documented anywhere....is there a way to do this if you didn't know?
    If not..guess I need to go through all systems and document the layouts of where everything is located.
    Also...most examples I'm finding, in addition to being single instance backup and restore/duplication...they are all using regular file systems....not much to go on with using ASM.
    Anyway, I'm trying to learn here...and am having trouble finding examples to go from that match my setup.
    Thank you in advance,
    cayenne

  • How to delete duplicates from iTunes but not hard drive

    I'm running iTunes 10 and have my music stored on both my NAS drive and backup drive.  Each track appears twice in iTunes: once as lossless for streaming and once as compressed AAC files for syncing to iPods and iPhones.
    For some reason, when my PC is first turned on, the LinkStation cannot be 'seen' by iTunes, and therefore if I try to play a track, 'the original file cannot be found'.  If I use the 'add folder to library' option in the 'file' dropdown menu, the tracks are located again.
    Unfortunately, the last time I did this it resulted in 4 copies of each song in iTunes: 2 lossless and 2 AAC.
    I have 2 questions:
    Firstly, is there an easy way to delete the duplicates from my iTunes library so that I am left with only one lossless and one AAC copy, without deleting the files from my NAS and backup drive?
    Is there a way I can just store my music as lossless, but convert it easily to a more compressed format for syncing to iPhone and iPod?
    Thanks!

    You can try my new DeDupe script. It should be able to get rid of the redundant copies while keeping one copy of each track in each format. See this thread for background.
    I've not used the feature, but iTunes can downsample to 128k AAC on the fly as it syncs your devices. Of course you might find the process is too slow for your needs or find that 128k is too much compression.
    tt2

  • How do you move  many "Folders" from one Catalog to "one" other Catalog?

    When I first started using LR, I did not know anything about catalogs.
    After 5000 photos, everything is in the Lightroom Catalog.Ircat (by the way, what does Ir stand for?)
    Some photos are done in the office (work related for PowerPoint). Many others are done at home.
    Now that I have learned to export a folder (containing birthday photos) to a new catalog on my external USB drive and delete the folder from the laptop HD, I have done the same with another folder (containing party photos), to yet another new catalog on my external USB drive.
    My question is: How do I set up one Home.Ircat on the USB HD and move all my home-related folders to the external drive, leaving all my work-related folders as they are on the laptop? Then, when I want to see my home photos, I won't have to deal with relaunching new catalogs every time I want to see something. Do you understand what I am trying to accomplish? (Exactly; my boss won't buy me a bigger HD for the laptop, sucks!)
    I am sure this is very simple. Please help.
    Eddie

    Thank you. Wish they had said Lrcat!
    I can export folders into a new LR catalog, but if I export 10 folders, I end up with 10 catalogs, and each catalog contains only one (1) single folder.
    What I like to do is to create a new catalog, such as Home.lrcat and move the folders over from Lightroom Catalog.lrcat so now one single Home.lrcat can contain 10 folders of home photos.
    How do I do that?
    Please help.

  • HT2905 I have just followed all the instructions in support to remove duplicates from my library but now most of my music is gone except for my recent purchases. How come and how do I fix it?


    Final Cut is a separate, higher end video editor.  The pro version of iMovie.
    Give iPhoto a look for creating the slideshow.  It's easy to assemble the photos in an album in iPhoto, put them in the order you want and then make a slideshow of them.  You can select from various themes and transitions between slides and add music from your iTunes library.
    When you have the slideshow as you want it, use the Export button at the bottom of the iPhoto window and export with Size = Medium or Large.
    Save the resulting Quicktime movie file in your Movies folder.
    Next, open iDVD, choose your theme and drag the QT movie file into the menu window being careful to avoid any drop zones.
    Then follow this workflow to help assure the best quality video DVD:
    Once you have the project as you want it save it as a disk image via the File ➙ Save as Disk Image  menu option. This will separate the encoding process from the burn process. 
    To check the encoding mount the disk image, launch DVD Player and play it.  If it plays OK with DVD Player the encoding is good.
    Then burn to disk with Disk Utility or Toast at the slowest speed available (2x-4x) to assure the best burn quality.  Always use top quality media:  Verbatim, Maxell or Taiyo Yuden DVD-R are the most recommended in these forums.
    The reason I suggest iPhoto is that I find it much easier to use than iMovie (except for the older iMovie 6 HD version).  Personal preferences showing here.

  • Delete Duplicates from internal table with object references

    Hi
    How can I delete duplicates from an internal table in ABAP OO based on the value of one of the attributes?
    I have created a method, with the following code:
      LOOP AT me->business_document_lines INTO l_add_line.
        CREATE OBJECT ot_line_owner
          EXPORTING
            i_user   = l_add_line->add_line_data-line_owner
            i_busdoc = me->business_document_id.
        APPEND ot_line_owner TO e_line_owners.
      ENDLOOP.
    e_line_owners is defined as a table containing only object references.
    One of the attributes of the object in the table is called USER, and I would like to do a "DELETE ADJACENT DUPLICATES FROM e_line_owners" based on that attribute.
    How can I do this?
    Regards,
    Morten Nielsen

    Hello Morten
    Assuming that the instance attribute is public, you could try to use the following coding:
      SORT e_line_owners BY table_line->user.
      DELETE ADJACENT DUPLICATES FROM e_line_owners
        COMPARING table_line->user.
    However, I am not really sure (I cannot test it myself) whether TABLE_LINE can be used together with SORT and DELETE.
    Alternative solution:
      DATA:
         ld_idx    TYPE sy-tabix,
         ls_line   LIKE LINE OF e_line_owners.  " work area was missing in the original
      LOOP AT e_line_owners INTO ls_line.
        ld_idx = syst-tabix + 1.
        " delete every later line whose USER matches the current line's USER
        LOOP AT e_line_owners TRANSPORTING NO FIELDS FROM ld_idx
                       WHERE ( table_line->user = ls_line->user ).
          DELETE e_line_owners INDEX syst-tabix.
        ENDLOOP.
      ENDLOOP.
    Regards
      Uwe

Maybe you are looking for

  • How can I sync selected books to my ipad via the itunes interface?

    I have a lot of books and pdfs all neatly tagged in my itunes library, and I would like to be able to sync selected books to my ipad from the itunes 'books' library page. I cannot for the life of me find a way to do this! I realise you can choose to

  • [Help] Using smart mailbox: the "Subject" and "From" switching back

    Hi, I got a problem about setting up a smart mailbox. I set the rule of --> Subject, Does not contain, xxxx (some words) and click OK to close the smart mailbox setting window. However, when I reopen the smart mailbox setting (use "Edit Smart Mailbox

  • Safari 5.1 does not open local html files

    Safari 5.1 does not open local html files (with .htm or .html extensions). It fails on files that previous versions of Safari opened. Firefox continues to open them. Any suggestions?

  • Restart in AFAB

    Hi, because our last posting run went into error, when I try to do a repeat run I get an error saying "you have to do a Restart". I just want to know: if I do a Restart, does it post my depreciation again with new documents created?

  • ITunes Down Load

    Checking for purchases gives the error "Unable to check for purchases. The network connection timed out." The problem started after 197 downloads became available.