Tip for dealing with duplicates

OK, we all know it's a big pain in the *** when you get duplicate songs and need to get rid of them, especially when you have a LARGE collection like I do.
iTunes clearly has some big shortcomings when it comes to detecting and working with duplicates. A lot of the time it'll find stuff that isn't actually a duplicate.
I have some songs that are named the same even though they're not the same song. (Remixes, or legitimate duplicates from a "best of..." album or something.)
So I can't just "show duplicates" and delete every other entry indiscriminately. I need a way to go through the list at my leisure (possibly in multiple sittings over several days) and review everything before I delete it.
Here's a method that makes things easier.
1. Make sure that all your songs have a star rating of at least 1 star. (I default all new songs to 3 stars when importing to iTunes and then raise or lower the stars from my iPod when I listen to them.) If you haven't kept track of your song ratings yet, give them all a 3-star rating now. The star rating will be used later to select the files to delete.
2. Select your music folder and then select "View/Show Duplicates"
3. From the resulting list, select all songs (Ctrl-A) and then click "File/New Playlist from Selection". Name your new playlist "Suspect Dupes" or something.
4. Configure your playlist view like this:
* Cover Flow on and "View Artwork" on (Selected Item). (You don't want to delete the version with the cover art and keep a version with no cover.)
* Make sure you have the following columns visible: Sample rate, Bit rate, Time, Size, Disc #, Track #, Artist, Album, and My Rating. You'll need these to compare the songs.
We now have a playlist which we can refer back to at any time.
Now... If you were to delete files here in this playlist view, all you'd be doing is deleting them from the playlist, not from your library, which is what we actually want to do.
So...
5. Go back to your library view (making sure that you've clicked the "Show All" button at the bottom of the window, so you're not just seeing the duplicates any more).
6. Select all the songs in your library (Ctrl-A), then right-click on the songs and select "Uncheck Selection", which will uncheck every song you have.
7. Go back to your "Suspect Dupes" playlist. Everything will be unchecked now. Anything you check while viewing this "Suspect Dupes" playlist will also be checked when you go back to viewing your main library.
8. Here's the tedious part. Go through your "Suspect Dupes" playlist, double-clicking and comparing each potential duplicate. When you find a genuine duplicate that you want to get rid of, click the box to check the song. Your selections will stay checked even if you close iTunes and come back to it another day! (I've been working on my collection for a week now.)
9. When you've finished checking all the songs you want to get rid of, go back to the top of the "Suspect Dupes" playlist and scroll through it, Ctrl-clicking each checked song to highlight it.
10. Once they're all highlighted, select "File/New Playlist From Selection" and name the resulting playlist "Delete Me" or something. Highlight all the songs in the "Delete Me" playlist, then right-click on the songs and select "My Rating/None".
(We do this so that we have a way to recognize these songs in the main library list, which is where we can actually delete the files to the Recycle Bin.)
11. Go back to your library view and sort the list by rating. Highlight all the songs with a rating of None, then right-click to delete them.
12. They're gone, and you're done!
13. As a preventative measure, you could consolidate your library (Advanced/Consolidate Library) and then give all your MP3s a tag in the "Comments" field of something unique, like say... "Permanent_File". That way, if iTunes should somehow re-add files from somewhere else on your computer and create more duplicates, you have a way of easily sorting out which are the good files and which are the ones you need to delete (anything without your "Permanent_File" tag).
Apple really needs to refine the duplicate detection and make it easier to get rid of them.
* They should make iTunes check ALL tag info in addition to the song's title when compiling its list of duplicates.
* There should be a "Select all checked files" option. (So as to avoid the whole star rating thing mentioned above.)
* There should also be an option to remove the files from your library when deleting songs from a playlist view. (e.g. a prompt: "Delete from this Playlist or Delete from your Library/iPod?")
Custom PC Windows XP Pro Got myself a refund on Vista. Getting a Mac!

My pet peeve is that when you have a studio recording and a live recording of the same song by the same artist, iTunes considers them to be duplicates.
Yeah, that's exactly what I'm saying.
The best way to bring your suggestions to Apple's attention is by submitting the form on this Support page.
I took your suggestion and referred them to this page. Maybe they'll listen. (Heck, I managed to convince the Google Earth team to move the on-screen controls from the bottom center to the upper right corner to accommodate dual-monitor users.)
Custom PC Windows XP Pro Got myself a refund on Vista. Getting a Mac!

Similar Messages

  • Tips for dealing with large channel count on cRio

    Hello, I have a very simple application that takes an analog input from a thermistor using an AI module (9205) and, based on the value of the input, sends out a true/false signal using a digital out module (9477). Each cRIO chassis will have close to 128 channels, with the code being exactly the same for each channel.
    I wonder if anyone has any tips for how I can do this so that I don't have to copy and paste each section of code 128 times. Obviously this would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class, but being new to graphical programming I can't think of a good way to do it. I looked for a way to dynamically select a channel but can't seem to find anything; if I could select the channel dynamically, I'm guessing I could create a subVI and do it that way. Any tips or help would be greatly appreciated.

    There isn't a way to dynamically choose a channel at runtime. In order for the VI to compile successfully, the compiler must be able to statically determine which channel is being read or written in the I/O Node at compile time. However, that doesn't mean you can't write a reusable subVI. If you right-click on the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screen shot should illustrate the basics of what this might look like. If you right-click the I/O control/constant and select "Configure I/O Type...", you can configure the interface the I/O Item must support in order for it to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
    I should also mention that if each channel being read from the 9205 is contained in a separate subVI or I/O Node, you will also incur some execution time overhead due to the scanning nature of the module.  You mentioned you are reading temperature signals so the additional execution time may not be that important to you.  If it is, you may want to look at the IO Sample Method.  You can find more information and examples on how to use this method in the LV help.  Using the IO Sample Method does allow you to dynamically choose a channel at runtime and is generally more efficient for high channel counts.  However, it's also a lot more complicated to use than the I/O Node.
    You also mentioned concerns about the size of arrays and the performance implications of using a single for loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to perform as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on the result. If you're using fixed-point data types, that's 128 x 26 bits for just the I/O data from the 9205. While this will yield the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using the same circuit, and then output a digital value. This strategy will use the least amount of logic on the FPGA but will also take the longest to execute. Of course, there are all sorts of options you could create in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum you should shoot for. Anyway, hopefully this will give you some ideas on where to get started. (A rough text-code sketch of the write-once idea follows below the attachment.)
    Attachments:
    IO Constant.JPG ‏31 KB
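    For readers coming from text-based languages, here is a rough Java sketch of the "write the logic once, apply it per channel" idea discussed above. LabVIEW FPGA is graphical, so this is only an analogy (closest in spirit to the serial IO Sample Method strategy, not the fully parallel circuit); all names and the threshold value are hypothetical.
      public class ChannelLogicSketch {
          static final int CHANNEL_COUNT = 128; // one chassis, per the question

          // The per-channel rule, written once instead of copied 128 times.
          static boolean thresholdExceeded(double analogIn, double threshold) {
              return analogIn > threshold;
          }

          public static void main(String[] args) {
              double[] analogIn = new double[CHANNEL_COUNT];     // stand-in for 9205 reads
              boolean[] digitalOut = new boolean[CHANNEL_COUNT]; // stand-in for 9477 writes
              double threshold = 1.25; // hypothetical trip point, in volts

              // Serial strategy: visit each channel in turn with the same logic.
              for (int ch = 0; ch < CHANNEL_COUNT; ch++) {
                  digitalOut[ch] = thresholdExceeded(analogIn[ch], threshold);
              }
          }
      }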

  • What is the best way to deal with duplicate photos

    I am using a new Retina 27" iMac with 16 GB RAM and OS X 10.10.1.
    Aperture 3.6
    What is the best way to deal with duplicates that get into Aperture Vaults?
    I have used Gemini and it finds duplicates, but I have no way of telling if the originals are still there.
    I don't want to go through 15,000 photos to try to find the duplicates.
    Thanks, Charlie

    You mean - one image in a vault, one in a library?  Or duplicates in the same library?
    Photo Sweeper can scan several libraries or folders at the same time and display the duplicates side by side to let you pick which to keep.  You can define rules to mark photos for automatic deletion as well.
    http://overmacs.com/photosweeper.html

  • JK Adobe TV - Top 5 Tips for Working with Vectors in CC

    Julieanne Kost has just blogged about her Adobe TV video on working with shapes and paths in Photoshop CC. It's actually not that new to Adobe TV, and has already had a lot of views, but we get a lot of questions here on the subject with CC, and there are some nice little tips in it. I certainly learned a couple of things. :-)
    http://blogs.adobe.com/jkost/2014/01/top-5-tips-for-working-with-vectors-in-photoshop-cc.html
    http://tv.adobe.com/watch/the-complete-picture-with-julieanne-kost/top-5-tips-for-working-with-vectors-in-photoshop-cc/

    My apologies, but I really had no interest in a member's "answer", especially one that is so unhelpful. Assuming that you were responding to me (we are the only two commenters at this point), I would not be inclined to read the Creative Cloud offers, since this is something I am not interested in. I bought the product the first day of offer; I did not rent it... just like I have in all the years past.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and to implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set MRP to "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned as well as unplanned maintenance will not get delayed.
    3. By doing GI (goods issue) against a reservation, the quantity can be tracked against the order and the equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for dealing with ResultSets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as ResultSets, without involving third-party products such as Hibernate. I've been told NOT to use ResultSets throughout my applications, since they hold resources and are expensive. I'm wondering which collection type is best to convert ResultSets into. The apps I'm building are web-based, using JSPs as the presentation layer, plus beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    i.e. there is a many-to-many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
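    To make the original question concrete: a common pattern is to copy each row of the ResultSet into a plain domain object and return a List, so that the ResultSet and its connection are closed before the data ever reaches the JSP. A minimal sketch, assuming a hypothetical users table and User class (try-with-resources needs Java 7+; older JDKs would use try/finally):
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.List;
      import javax.sql.DataSource;

      public class UserDao {
          private final DataSource dataSource;

          public UserDao(DataSource dataSource) { this.dataSource = dataSource; }

          // Copies rows out of the ResultSet into detached objects, so no
          // JDBC resources are held while the JSP renders.
          public List<User> findAll() throws SQLException {
              String sql = "SELECT id, name FROM users"; // hypothetical table
              try (Connection con = dataSource.getConnection();
                   PreparedStatement ps = con.prepareStatement(sql);
                   ResultSet rs = ps.executeQuery()) {
                  List<User> users = new ArrayList<>();
                  while (rs.next()) {
                      users.add(new User(rs.getLong("id"), rs.getString("name")));
                  }
                  return users;
              }
          }

          // Hypothetical immutable domain object, for illustration only.
          public static class User {
              private final long id;
              private final String name;
              public User(long id, String name) { this.id = id; this.name = name; }
              public long getId() { return id; }
              public String getName() { return name; }
          }
      }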

  • What's the best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and have presently captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it's difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long-form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location, because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot, I pull it out of its bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Tips for working with USB floppy drives

    I was just wondering if anyone had any tips on working with USB floppy drives in Archlinux.
    One thing I've noticed is that trying to use mkfs.msdos on my USB floppy drive directly doesn't seem to result in a usable disk (it's mountable, but reports that I don't have enough space available to copy files), but if I use the command on an image file the same size as a floppy disk and then use dd to copy the blank image to my USB drive I get a disk that's perfectly usable.  Anybody know what the deal is with that or how I might be able to successfully format a disk in fewer steps (other than keeping the blank images around and just using dd to format the floppies, haha)?
    Also, I'd like to hear any other tips or tricks any of you have for working with USB floppy drives.  They just seem so much more usable under Windows as opposed to Linux, and that disappoints me a little.

    jmetal88 wrote: Okay, thanks guys.  I haven't figured out the issue yet, but you're helping me narrow it down.  I decided to run a df -h command after each try of formatting the disk this time.  I tried the low-level ufiformat command followed by a mkfs.msdos on the device, using a 720k disk, and df -h showed 700k free on the file system.  It's just not making the file system large enough (I believe the file I was trying to copy over was 710k or so).  When I run mkfs.msdos on a 720k disk image and then dd the image over to the drive, df -h shows 713k free after mounting the drive.  Any idea why mkfs.msdos isn't filling out the full 720k on the disk itself, but is on the image?
    I would assume this happens because (as far as I can tell from skimming through the source over the course of a few minutes) mkfs.fat only applies floppy size heuristics to fd and lo devices -- your USB floppy drive is treated as a hard drive, and that leads to less-than-ideal default parameters. Run
    fsck.fat -v
    on the more spacious filesystem to figure out what options need to be set when you attempt formatting an actual disk. Or just make things easier by keeping newly-formatted blank disk images on hand so that dd and (maybe) ufiformat are all you need to worry about.

  • Best strategy for dealing with a mixed library and missing photos

    Looking at the package contents of my iPhoto Library, it appears that I have a "mixed" library (i.e. part managed and part referenced). I'm not sure how this happened--I originally created this library about three years ago when I first bought my Mac with iLife '06, then later upgraded to iLife '08 which I'm currently still using. Maybe the default setting was to have a referenced library in iPhoto '06? Don't know. I don't recall ever changing this setting.
    Anyway, I was starting to manually recover some missing photos but it just occurred to me that iPhoto is simply updating the aliases to point to the photos I'm telling it to use, and not actually copying the photos into the library (I'm guessing that only happens when you import a photo). This is not what I want. I want my library to be completely managed. I understand that, as my current Preferences settings dictate, any new photos I import now will be copied into the iPhoto Library. But for the existing photos that I am able to recover, I don't want to have to keep the originals outside of iPhoto.
    I read in another topic about an application called AliasHerder and I'm thinking of trying that. The problem is, I no longer have original copies of all of the missing photos. I can recreate the folder structure for some of them, but not all. I'm not sure what this will do to my iPhoto Library (I have emailed the vendor for clarification). I'm wondering if I'll still end up with a mixed library containing aliases that point to a non-existent file. Perhaps someone who has some experience with the tool could enlighten me.
    Based on suggestions given to me in previous posts I made, I tried both rebuilding my iPhoto Library and using the iPhoto Library Manager tool. Neither of these produced the results I had hoped for. Am I better off just starting off from scratch and creating a whole new library? And if so, what's the best way to get all of the existing photos from my current library to the new one? Do I export them out and then import them into the new library? Is there any way to salvage events, albums, etc. from my existing library or will I have to recreate these in the new library?
    Thanks in advance!

    First check iPhoto's Advanced preference pane to make sure you're running a "managed" library.
    Since you don't have the "source" photos available to relink to, your best bet, IMO, would be to continue using your current library and delete the missing photos from it whenever you come across them in day-to-day use. If you click on the thumbnail of a missing photo, drag it to the iPhoto trash and empty it, that will delete it from the library. This way you will retain all of your organizational efforts, i.e. albums, books, keywords, etc.
    When you rebuilt with iPhoto Library Manager, what did the resulting library contain? If it contained your photos without the missing ones, you could use it. You would retain your albums, keywords, faces, places and other metadata. You will lose any keepsakes, i.e. books, slideshows, cards, etc.
    Or, you could start over as follows:
    Creating a new library while preserving the Events from the original library.
    1 - Move the existing library folder to the desktop
    2 - Open the library package like this.
    3 - Launch iPhoto and, when asked, select the option to create a new library.
    4 - Drag the Originals folder from the iPhoto Library on the desktop into the open iPhoto window.
    You will end up with all your photos (no missing ones) in the same events as in the original library, but there will be no albums, metadata or keepsakes. It's for this reason that I made the original suggestion above of continuing with your current library.
    OT
    TIP: For insurance against the iPhoto database corruption that many users have experienced, I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier versions) database file and keeping it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean back up after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That ensures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup, and it's good insurance.
    I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. There are versions that are compatible with iPhoto 5, 6, 7 and 8 libraries and Tiger and Leopard. Just put the application in the Dock and click on it whenever you want to backup the dB file. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
    NOTE: The new rebuild option in iPhoto 09 (v. 8.0.2), "Rebuild the iPhoto Library Database from automatic backup", makes this tip obsolete.

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in Reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks were worked out. I don't know why you would be losing formatting information.
    There is a technique that I have used where you use dynamic subforms to show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is the more likely it is to fail in a future version of Reader.

  • Optimal read/write performance for data with duplicate keys

    Hi,
    I am constructing a database that will store data with duplicate keys.
    For each key (a String) there will be multiple data objects, there is no upper limit to the number of data objects, but let's say there could be a million.
    Data objects have a time-stamp (Long) field and a message (String) field.
    At the moment I write these data objects into the database in chronological order, as I receive them, for any given key.
    When I retrieve data for a key, and iterate across the duplicates for any given primary key using a cursor they are fetched in ascending chronological order.
    What I would like to do is start fetching these records in reverse order, say just the last 10 records that were written to the database for a given key, and was wondering if anyone had some suggestions on the optimal way to do this.
    I have considered writing data out in the order that I want to retrieve it, by supplying the database with a custom duplicate comparator. If I were to do this, then the query above would return the latest data first, and I would be able to iterate over the most recent inserts quickly. But is there a performance penalty paid on writing to the database if I do this?
    I have also considered using the time-stamp field as the unique primary key for the primary database instead of the String, and creating a secondary database for the String; this would allow me to index into the data using a cursor join, but I'm not certain it would be any more performant, at least not on writing to the database, since it would result in a very flat B-tree.
    Is there a fundamental choice that I will have to make between write versus read performance? Any suggestions on tackling this much appreciated.
    Many Thanks,
    Joel

    Hi Joel,
    Using a duplicate comparator will slow down Btree access (writes and reads) to some degree because the comparator is called a lot during searching. But whether this is a problem depends on whether your app is CPU bound and how much CPU time your comparator uses. If you can avoid de-serializing the object in the comparator, that will help. For example, if you keep the timestamp at the beginning of the data and only read the one long timestamp field in your comparator, that should be pretty fast.
    Another approach is to store the negation of the timestamp so that records are sorted naturally in reverse timestamp order.
    Another approach is to read backwards using a cursor. This takes a couple of steps:
    1) Find the last duplicate for the primary key you're interested in:
      cursor.getSearchKey(keyOfInterest, ...)
      status = cursor.getNextNoDup(...)
      if (status == SUCCESS) {
          // Found the next primary key, now back up one record.
          status = cursor.getPrev(...)
      } else {
          // This is the last primary key, find the last record.
          status = cursor.getLast(...)
      }
    2) Scan backwards over the duplicates:
      while (status == SUCCESS) {
          // Process one record
          // Move backwards
          status = cursor.getPrev(...)
      }
    Finally, another approach is to use a two-part primary key: {string,timestamp}. Duplicates are not configured because every key is unique. I mention this because using duplicates in JE has more overhead than using a unique primary key. You can combine this with either of the above approaches -- using a comparator, negating the timestamp, or scanning backwards.
    --mark
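    For what it's worth, here is a minimal sketch of the cheap comparator Mark describes, assuming each record's data begins with an 8-byte, big-endian, non-negative timestamp (e.g. from System.currentTimeMillis()). It compares only those leading bytes, with the order reversed so duplicates sort newest-first; check the JE docs for your version for how to register a duplicate comparator on the DatabaseConfig.
      import java.io.Serializable;
      import java.util.Comparator;

      // Sorts duplicate records newest-first by comparing only the leading
      // 8 bytes of each record, assumed to hold a big-endian, non-negative
      // timestamp. Nothing is deserialized, keeping comparisons cheap.
      public class ReverseTimestampComparator
              implements Comparator<byte[]>, Serializable {
          @Override
          public int compare(byte[] a, byte[] b) {
              // Assumes both records are at least 8 bytes long.
              for (int i = 0; i < 8; i++) {
                  int x = a[i] & 0xFF; // unsigned byte value
                  int y = b[i] & 0xFF;
                  if (x != y) {
                      return y - x; // reversed: larger (newer) timestamp first
                  }
              }
              return 0;
          }
      }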

  • iTunes - Two Libraries by Mistake on the Same iMac, How to Amalgamate and Deal with Duplicates?

    I'm using iTunes 9.2.1 on a PPC iMac G5 running OS X 10.4.11.
    I have previously had lots of problems with a damaged iTunes library: https://discussions.apple.com/thread/2009146?threadID=2009146&stqc=true
    As a result of this (and I'm not sure how), all music I have added since I got it working again has been stored in a folder called "iTunes Music" (Macintosh HD:Users:Username:Desktop:iTunes Music) which appeared on the Desktop. I also have various iTunes Library files (plus .itdb and .xml files)  on the Desktop (I didn't put them there, that's just what appeared).
    All music added prior to this is in Macintosh HD:Users:Username:Music:iTunes:iTunes Music.
    Everything works in iTunes (as you would expect - it just points to the relevant folder).
    I would like to amalgamate the two iTunes Music folders and file all the music in the correct place (wherever that may be) - is there a simple way of doing this?
    Another problem I have is duplicated music - I imported into iTunes lots (probably 20+ GB) of duplicate albums at a higher bitrate than I previously had and deleted the existing lower-bitrate versions from iTunes. Unfortunately, I didn't realise that, because of the two iTunes music folders, the older versions are still stored in the older folder, even though iTunes isn't using them.
    Is there a simple way to identify and remove the older lower-bitrate music from the machine without searching every single album in Spotlight?
    I've spent an awfully long time correcting and completing track names, information and artwork in iTunes so I really don't want to lose any of that in the process.
    Thanks in advance ...

    I totally feel your pain. I too have a similar setup, a G4 running Tiger, wherein I ripped my vinyl at a higher bitrate than my CD copies. In one of these threads I have a script that was written for me that would rummage across the home network, pull down all the lower-bitrate copies and move them, but I had a hard drive snafu, ran out of space and never got around to it.
    I DO think, however, that you can display the bitrate of the files in the list view options within iTunes, sort the files that way, and move the duplicate files you don't want to the Trash, from within iTunes. Let me go fire up the living room frankenbox to check..
    Yup.. If you use the View Options you can sort through the master list in iTunes by bitrate..
    http://imageupper.com/i/?S0300010100011D1344782818223713
    Takes a bit to sort out the tracks but it's workable..
    As to joining the libraries once you've got them cleaned up, you need to drop all the song files into a single folder and let iTunes sort them into the default location. There is a submenu here:
    http://imageupper.com/i/?S0500010010011A1344783379227569
    http://imageupper.com/i/?S0500010010021A1344783379227569
    Good luck!
    Deb.

  • Tips to deal with the appalling wifi on iPhone 5 since 6.1 update?

    Hi guys, I'm aware there are hundreds of threads on this issue and various different ways of addressing it. I just wanted the most up-to-date tips and fixes, and to share my symptoms.
    Ever since I updated to 6.1, my home wifi connection has been horrific, and it's really starting to get annoying when you pay so much for your phone and your internet. I have no problems with 3G, and I even connect to wifi networks at work which are a good speed. But as previously stated, it can take ages to load pages, if they load at all, when I'm connected to what in theory should be my most reliable network. It's extremely frustrating.
    Like I say, if people could give me a checklist of things to do or check to remedy the problem, it would be greatly appreciated.
    Regards,
    Gary

    Restore the phone using iTunes. If that doesn't fix it, google "iPhone DFU mode" and restore it in DFU mode. Set it up as a new device and test it before attempting to restore your backup.

  • Dealing with duplicates

    Hello guys
    I am trying to populate a table with millions of rows. Some of the rows are duplicates of rows already in the table; I want to compare the rows and, where a duplicate is found, skip it and not enter it into the table, then carry on with the next row, and so on.
    Any hints will be appreciated
    Thank you

    This is the forum for SQL Developer (not for general SQL/PL/SQL questions). Try asking your question in the SQL and PL/SQL forum.
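    That said, the usual answer is to let the database do the duplicate check rather than comparing rows in application code. A minimal JDBC sketch, assuming Oracle and a hypothetical target(id, val) table where id defines a duplicate; MERGE inserts a row only when no matching id exists:
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;

      public class DeduplicatingLoader {
          // Hypothetical table: target(id PRIMARY KEY, val).
          private static final String MERGE_SQL =
              "MERGE INTO target t "
              + "USING (SELECT ? AS id, ? AS val FROM dual) src "
              + "ON (t.id = src.id) "
              + "WHEN NOT MATCHED THEN INSERT (id, val) VALUES (src.id, src.val)";

          // Inserts the row only if no row with the same id already exists.
          public static void insertIfAbsent(Connection con, long id, String val)
                  throws SQLException {
              try (PreparedStatement ps = con.prepareStatement(MERGE_SQL)) {
                  ps.setLong(1, id);
                  ps.setString(2, val);
                  ps.executeUpdate();
              }
          }
      }
    For millions of rows you would batch these calls (addBatch/executeBatch) or, better, load everything into a staging table and run a single set-based MERGE or INSERT ... SELECT with a NOT EXISTS check.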

  • A tip for those with a fully depleted batt

    I just thought I'd post my experience; maybe it will help someone. I have a Zen V that I left sitting for a while and the battery was completely dead. As you've probably seen in other postings and in the knowledge base, the Zens have trouble recovering from this state. The symptoms for my Zen were: 1) plug in USB; 2) blue LEDs behind the play button would light briefly; 3) the "Low Battery" screen would flash just long enough to see it; 4) the screen would go black, LEDs off, no charging icon. I left it connected like this for about 8 hours and it never accepted a charge.
    I believe this is primarily due to the fact that when you plug your Zen into a USB port, the first thing it tries to do is dock, which uses what little power might be left in the battery. Contrary to the information in the knowledge base, my Zen would not take a charge regardless of how long I left it connected like this. I guess it gets stuck in a weird state and never makes it to the "accept charge" state.
    So, how to solve this? The key is to connect it to a charging device that does not try to perform the dock operation. You have several options here. If you have a wall/travel charger, that would probably be easiest. What I did was use another (Linux) PC that did not have the Zen driver on it. As soon as I plugged it in, it began charging, and after about an hour it had enough charge to dock with my normal Windows PC. If you don't have either of those two options, I suspect that just booting your PC to DOS/the command line would probably work, because I doubt that the Zen driver would get loaded (I haven't tried this). Basically, any way you can get your PC to power the USB ports without loading the Zen driver (uninstall, maybe?) should do the trick.
    I hope that all made sense and that it helps someone out there. I was about ready to send mine in for repair, but this saved me the trouble.

    Excellent tip, shoeless! I had the same problem with my Zen Vision:M 60 GB. I solved it by leaving it plugged into the USB port for a long time. Maybe my laptop went into standby mode so the player didn't try to dock, but it was enough to charge the Zen a bit, and after that it charged fully.


    We have a scenario to provide row level security to some of the transaction tables like HR_EMPLOYEE which has a foreign key column DEPT_ID to HR_DEPARTMENTS table. This table may grow up to about 5 million records. There could be regular SELECT opera