Dealing with 6,000 duplicates

Is there any way to get rid of 6,000 duplicate songs on an external hard drive? The iTunes duplicates view shows over 12,000 entries, so do I have to delete every other song by hand? If I delete everything in the window, I understand I will lose both the original and the duplicate. In other words, can I tell it to show only the duplicates so I can erase them all at once? This is taking me hours upon hours.

Here is another option I figured out while dealing with my own 15,000 duplicates. The file names of all my duplicates ended in "song title 1.mp3", "song title 1.m4p", "song title 2.mp3", "song title 2.m4p", and so on, depending on how many duplicates of each file there were and on the audio format. I used Spotlight to search my music folder by name, with the additional constraint "ends with", and searched for all files ending in "1.mp3", "1.m4p", etc.
This identified all of my duplicates, which I then dragged to the Trash. After removing the files from the music folder, I went back into iTunes and ran the "Super Remove Dead Tracks" script from Doug's AppleScripts website:
http://dougscripts.com/itunes/scripts/ss.php?sp=removedeadsuper
This script removed the now-dead entries for all the files iTunes could no longer find after their removal from my music folder/library.
That solved the problem without any manual selection of tracks in iTunes or in my music folder.

Similar Messages

  • What is the best way to deal with duplicate photos

    I am using a new Retina 27" iMac with 16 GB RAM, OS X 10.10.1,
    and Aperture 3.6.
    What is the best way to deal with duplicates that end up in Aperture vaults?
    I have used Gemini and it finds duplicates, but I have no way of telling whether the originals are still there.
    I don't want to go through 15,000 photos trying to find the duplicates.
    Thanks, Charlie

    You mean - one image in a vault, one in a library?  Or duplicates in the same library?
    Photo Sweeper can scan several libraries or folders at the same time and display the duplicates side by side to let you pick which to keep.  You can define rules to mark photos for automatic deletion as well.
    http://overmacs.com/photosweeper.html

  • I have a MBP 15" with iPhoto 9. When right-clicking the icon in the Dock, 3 iPhoto libraries come up: one empty, one with 35,000 photos, and one with 30,000 photos. How can I delete the empty one and merge the other two without getting duplicates?

    You can delete the empty one by dragging it to the Trash (test everything before emptying the Trash).
    Merging libraries requires iPhoto Library Manager - http://www.fatcatsoftware.com/iplm/ - it is currently the only solution for merging libraries.
    LN

  • Loaded iPhoto 9.2 and my 7,200 photos grew to 113,000 files with screenshots, duplicates, and 5-10 copies of faces, and Apple has no way to eliminate this. How can I ever retrieve the original 7,200 from this mess?

    This can happen if the HD icon is inadvertently dragged to the iPhoto Window.
    Easiest solution: restore from your back up.
    Regards
    TD

  • My computer died and I had to buy another one. Now I can't sync my iPod classic. What is the deal with this? I added songs about three months ago and it was fine. Now it won't sync; it says it can't sync because there is a duplicate file.

    There are server problems right now with iMessage affecting some users.  See http://www.apple.com/support/icloud/systemstatus/.

  • My MacBook Pro battery had accumulated more than 1,000 charges and stopped functioning unexpectedly. I went and got the battery replaced. I just saw that the SMC Firmware 1.6 update deals with this. Is it possible to get my money back?

    The firmware update corrects an error that may occur; however, the techs would have checked the condition of the battery prior to installing a new one. If the battery was questionable, the firmware update was really not too important.
    You can check the battery condition by going to the Apple menu (left side of the menu bar) > About This Mac > More Info > System Report > Hardware > Power and seeing what it says about Cycle Count, Condition, and Capacity. A Condition of anything other than Normal means the battery needs to be checked and may need to be replaced.
    A cycle count of 1,000 charge cycles is the typical life of a lithium-ion battery; it is the point at which the capacity drops to 80% of the as-built capacity.

  • What is the best way to set up iTunes to deal with a massive media library?

    I have a really big media library...probably 7k-10k digital photos, close to 1,000 ripped CDs, and over 600 ripped DVDs, all stored on 1.5TB external hard disks in a 4-drive enclosure.
    Due to a recent technical glitch, I find myself installing a new drive and moving the current library. My question is, what is the best way to set up iTunes' settings to deal with a media library that is both external to the Mac on which it runs, and spans multiple physical hard disk units?
    I generally like the idea of iTunes organizing my library for me (i.e. keeping everything in Media Type>Artist>Album folders via the "Automatically Add to iTunes" folder), but I'm not sure how that's going to work spanning multiple drives.
    I just figure since I'm basically starting over with my iTunes installation, maybe there's a better way to do it.
    Thanks in advance for any advice you can offer.

    You can set them up as a software RAID using Disk Utility.
    It'll simply be one large disk for all intents and purposes.

  • I'm trying to transfer music from iTunes on an old PC that uses an external storage device to a new PC (Windows 7) that will use that same external storage device.  I am also dealing with new iTunes 11.  How do I do this??  What folder does iTunes use?

    I'm trying to transfer music from iTunes on an old PC (Windows Vista Home Basic) that uses an external storage device to store the files to a new PC (Windows 7 Starter) that will use that same external storage device. I am also dealing with the new iTunes 11. How can I accomplish this successfully? What folder does iTunes use to store the data in? I've tried several things. Home Sharing caused duplicates, but not all songs or apps transferred. It is a large library! I've also tried setting the path in the Advanced tab of iTunes preferences on the new computer, with the external drive connected under the same path as when it is connected to the old computer. This was the best solution so far, but a few artists and some apps are still missing. Any suggestions?

    Here are typical layouts for the iTunes folders (shown as a diagram in the original post):
    With iTunes 11 you might also have a Home Videos folder inside iTunes Media.
    In the simplest cases you copy the entire iTunes folder from <User's Music> on the source computer to <User's Music> on the target machine, install iTunes, and it "just works"™.
    If the media folder (inside the red outline in the diagram) has been split out to a separate location, then you can copy the library folder (outside the red outline) as before and connect the drive holding the media so that it has exactly the same path as before. If the drive appears as D: on one system and E: on the other, the library won't be able to find the media.
    The crucial file is iTunes Library.itl - this contains a record of the tracks that have been added to the library, ratings, play counts, playlists, etc.
    See also: Make a split library portable.
    tt2

  • How do I deal with a file (for example .xml)? What format should the directory path be?

    I'd like to operate on a file on disk and want to use a relative directory.
    How do I handle the file's directory? What format should the directory path be in?

    Hi Kamlesh,
    Thanks for your response.
    Actually, in the "Process External Bank Statement" window, I see that there are a few entries that are from the previous year and have not been reconciled. I have never worked practically on BRS and hence I am scared to make any changes in the client's database without being confident about what I am doing. I need to reconcile one of their bank accounts for the month of April '08. I have copies of the statements for the months ending 31st Mar 08 and 30th Apr 08. The closing balances are as below:
    31/03/08 - 2300000.00
    30/04/08 - 3100000.00
    Now my opening balance for the bank account for April '08 in SAP is 2300000.00 Dr.
    When I go to the External Bank Reconciliation - Selection Criteria screen (Manual Reconciliation), here are the details that I enter:
    Last Balance: INR -7,000,000.00000 (grayed out by the system)
    Ending Balance: INR -3,100,000.00000 (entered by me)
    End Date: 30/04/08 (entered by me)
    The "Reconciliation Bank Statement" screen opens up and shows the following balances:
    Cleared Book Balance: INR -7,000,000.00000
    Statement Ending Balance: INR -3,100,000.00000
    Difference: INR 3,800,000.00000
    As per the bank statement, I have found all the transactions listed there for the month of Apr '08, but I also found that open transactions from before April '08 are still lying in the "Process External Bank Statement" window.
    Could you please help me with what needs to be done, or could you also get me some links where I can find documents on processing External Bank Reconciliations?
    That will be of great help to me. I need the steps, what needs to be done first and then next, so that I can arrive at the correct closing balance for the month of April '08.
    Thanks in Advance....
    Regards,
    Kaushal

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data into the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650-million-row table.  Perhaps
    some indexes targeted at improving that process would be a good first step, as sketched below.
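    As a minimal sketch of that first step: if the load process looks up existing rows by some client reference held in one of the varchar columns, a narrow nonclustered index on just that column lets the lookup avoid scanning the whole heap. The table and column names below (dbo.ClientData, ClientRef) are placeholders I made up, not the real schema, and this assumes the column is short enough to be an index key.

        -- Sketch only: dbo.ClientData and ClientRef are hypothetical names standing in
        -- for the real staging table and the varchar column the load process searches on.
        CREATE NONCLUSTERED INDEX IX_ClientData_ClientRef
            ON dbo.ClientData (ClientRef)
            WITH (SORT_IN_TEMPDB = ON);  -- do the index sort work in tempdb rather than the data file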
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that do or do not conform for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations. A sketch of both steps follows.
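    A hedged sketch of both steps (SQL Server 2012+ for TRY_CONVERT; dbo.ClientData, AmountText, PostDateText and ClientRef are invented placeholder names, and decimal(18,2)/date are only example target types):

        -- 1. Profile each candidate column: count the non-NULL values that would NOT
        --    survive conversion to the guessed target type.
        SELECT  COUNT(*) AS total_rows,
                SUM(CASE WHEN AmountText IS NOT NULL
                          AND TRY_CONVERT(decimal(18,2), AmountText) IS NULL
                         THEN 1 ELSE 0 END) AS bad_amounts,
                SUM(CASE WHEN PostDateText IS NOT NULL
                          AND TRY_CONVERT(date, PostDateText) IS NULL
                         THEN 1 ELSE 0 END) AS bad_dates
        FROM    dbo.ClientData;

        -- 2. Build a strongly typed copy with SELECT INTO, leaving problem columns as
        --    varchar for now, then swap the tables.
        SELECT  TRY_CONVERT(decimal(18,2), AmountText) AS Amount,
                TRY_CONVERT(date, PostDateText)        AS PostDate,
                ClientRef                              -- still varchar; clean up in a later pass
        INTO    dbo.ClientData_typed
        FROM    dbo.ClientData;

        -- DROP TABLE dbo.ClientData;
        -- EXEC sp_rename 'dbo.ClientData_typed', 'ClientData';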
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Short dump with TPM_TRD1 000

    hi  everyone,
    when I run the transaction code TBB1, I come across an ABAP short dump error.
    My system environment is:
    PC
    Windows 2003
    Oracle 10.2
    an independent system without SLD control
    The error info is as follows:
    Note 837202 - TPM_MIGRATION:tpm18: Short dump with TPM_TRD1 000
    Summary
    Symptom
    You migrate from Release CFM or Enterprise 1.10 to ERP 2.0 or higher. There are already parallel valuation areas in use. In tpm_migration_cat, you specify a key date. In the test system, the following short dump occurs for the parallel valuation areas:
    Error message short text:
    Internal error: SLD
    Message class: "TPM_TRD1"
    Number: 000
    Variable 1: "SLD"
    Other terms
    CL_DISTRIBUTOR_SLD, TPM_TRD1000
    Solution
    In the production system, you have to carry out tpm18 before the technical upgrade for all business transactions that are before the key date from tpm_migration_cat.
    I do not understand what to do with the solution,
    e.g. what is "tpm18"?
    I really need your help.
    Thank you very much
    Peter

    Hi Peter,
    I guess you found note 837202 when you were looking for a solution for the TBB1 dump you encountered.
    This note deals with a special situation when executing tpm18 (which is the transaction for fixing and posting derived flows) in a test system after migration, and it tells you to run tpm18 in production before the technical upgrade.
    I am not sure if this really applies to your situation, maybe you should rather check if note 821854 and the solution therein can be helpful.
    But without more information (e.g. knowing more about your release, your 'migration history' and the part of the coding where the dump is raised), it is quite difficult to give a diagnosis.
    Maybe you should think of sending a customer message to SAP support via the SAP marketplace if you cannot get rid of this problem on your own.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and define MRP as "Reorder Point Planning". By doing this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing goods issue against reservations, quantities can be tracked against the order and the equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Graphs and parent-child with loops and duplicates

    There is a parent-child relation in the table t(prnt, chld) which allows duplicates (A->B, A->B), opposite paths (A->B, B->A), and complicated loops. Is there a way to identify the rows that form any separate "connection network" and assign a "name" to them of any kind (letter, number, whatever)? I tried to use a recursive WITH clause to identify and group the rows belonging to one graph, but with no luck. Any help would be appreciated.
    thank you

    Frank, I posted inputs for all graphs (multiple inserts) and some allowable outputs for one graph. For all cases (i.e. graphs) the rule is the same:
    1. identify all nodes belonging to a graph
    2. "name" that graph (min, max or whatever you like)
    3. print the output in the form (node_belonging_to_a_graph, name_of_the_graph) for all identified graphs
    And as you said, I am somewhat flexible. I don't want to constrain the problem by insisting on min or max, because it's not important how you name a graph; but a way that is natural and fits the requirements is to use the nodes' own values.
    You ask me if (1,1),(2,1),(3,1) is also OK as an output for the sample graph (1,2)+(2,3). Yes it is. It is one of the outputs I posted, but with an additional row for the node that was chosen as the graph's name. As you can guess, it doesn't matter which node you choose, and the extra row naming a node with its own value is not as important as the information that all the other nodes are named with that value, but it is 100% acceptable. If you changed the naming convention and started to use letters instead of node values, then yes, it would be a must to have the output in the form (1,a),(2,a),(3,a).
    You also ask me about the result for 90x data inserted as 5 rows: (901,902)..(906,904) and present sample result:
    901 902
    905 902
    906 902
    And the answer is no, it is not a good result. It misses the information about nodes 904 and 903, which belong to this graph too. A correct result could be:
    901 902
    905 902
    906 902
    903 902
    904 902
    or any other "combination" which presents 5 nodes with the name of the sixth (in this case a 6-node graph). Just one has to be picked; it doesn't matter which one. The "vertical" order is also irrelevant.
    As you can see there is a lot of room for acceptable results. I don't want to constrain it, because that can influence performance, which is important when dealing with graph structures in relational databases (RDBMSs are not predestined to cope easily with that sort of information). It can also influence the chosen algorithm, and I'd like to pick the fastest one that gives an acceptable result.
    Two numbers x and y are in the same group (graph) if (and only if) at least one of the following is true:
    (1) they appear on the same row together (it doesn't matter which number is in which of the 2 columns), or
    --(2) x appears on the same row with a third number, z, and z is in the same group as y--
    (2) there are other edges (entries) in the table that form a "path" from x to y. And because the direction of the path is not important for the problem (i.e. the parent-child table structure can be forgotten for a moment), "path" means there exists a connection between x and y, aka "you can walk from x to y".
    The output consists of 2 columns: id (which is unique in the result set) and grp (which identifies the group) *[correct]*
    The id column will always be one of the numbers in the group *[correct]*
    It doesn't matter what the grp column is, or even what data type, as long as it distinguishes between the different groups. *[correct, but as you noted, using one picked number from the graph is the preferable way]*
    If there are N distinct numbers in the group, I need N rows of output for that group, with id showing all those distinct numbers. *[correct, but if you choose the naming convention of naming a graph with the value of one of its nodes, you can omit the node that is named for itself (though it doesn't hurt if such a row appears in the result)]*
    You ask me if the graph is directed. No, it's not. Your example of (x,y) and (y,x) is great, and it can be concluded from my first post where I say that "opposite paths" (A->B, B->A) exist. What matters is the connection between the nodes. The parent-child table somehow suggests that direction is important, but for this problem it is not.
    One of the motivations for my post is to find out what other people think without biasing them with my approach. I don't want to skew anybody's mind toward my solution, which works but is not efficient. I don't mind showing it, but I kindly ask you to think about the problem before I post it. Diversity of approaches helps to distill the best one.
    As I said, I did it using sys_connect_by_path. If that doesn't look to you like a viable approach, then it is likely that I am not using it efficiently. Please understand, I will post it if you ask me one more time, but if you can live for a while without my inefficient solution and suggest something with the WITH clause, I would appreciate it.
    There is no exact result I expect. There are many results which are correct and acceptable. They all must follow the rules described at the beginning.
    Thank you
    Edited by: 943276 on Jun 28, 2012 1:32 AM
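    For reference, here is a minimal sketch of the recursive WITH approach being asked about, assuming Oracle 11.2 or later (recursive subquery factoring) and the table t(prnt, chld) from the first post. It names each group with the smallest node value in it; on large or dense graphs the number of intermediate rows can explode, so treat it as an illustration of the technique rather than a tuned solution.

        WITH edges (x, y) AS (
            -- treat the graph as undirected and drop duplicate edges
            SELECT prnt, chld FROM t
            UNION
            SELECT chld, prnt FROM t
        ),
        reach (node, reached) AS (
            -- every node reaches itself
            SELECT DISTINCT x, x FROM edges
            UNION ALL
            -- ...and everything one more edge away from what it already reaches
            SELECT r.node, e.y
            FROM   reach r
                   JOIN edges e ON e.x = r.reached
        )
        CYCLE reached SET is_cycle TO 'Y' DEFAULT 'N'   -- stop a walk when it revisits a node
        SELECT node AS id, MIN(reached) AS grp          -- name each group by its smallest member
        FROM   reach
        GROUP  BY node
        ORDER  BY grp, id;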

  • Best way to deal with photos from the start? (Bridge CS5 and iPhoto)

    Hi all,
    I'm just learning the ropes of the whole CS5 suite, and was starting to add keywords etc. in Bridge when I realized I couldn't access the files in my iPhoto ('09) library. I've read a few threads about this issue, but haven't really found a consensus on how to deal with it.
    I figure I'm in a good position to get things right from the start, since I have no tags or ratings etc. in my iPhoto setup (and have only added keywords to the few pics that weren't in my iPhoto library), and I'm wondering what you think is the best way to sort photos given that I plan on using them in web design etc. and with all the parts of my CS5 suite.
    If getting the photos out of iPhoto is best, what's the best way to do that? I've read suggestions to right-click on the library and click on "Show Contents" - or something like that - but that does not come up as an option when I right-click my iPhoto Library in Bridge. Thanks so much for your help!
    Please let me know if there is any info I could give you that would be helpful in order to figure this one out.
    Best,
    Mara

    I must be far too lazy to be sure.  In iPhoto if I want to move an image all I
    do is click and drag from iPhoto to the desktop.
    Here comes the trouble with using iPhoto and mixing applications. You have
    to be sure of what you do in iPhoto, because you can easily grab the wrong version
    instead of the original.
    As Tai Lao pointed out, the difference between Bridge and iPhoto, in short,
    is an open, visible folder structure for Bridge and a hidden, invisible library for
    iPhoto.
    There may be perfectly valid reasons to use iPhoto, but mixing the two
    workflows without thinking or knowing what you are doing is bound to get you
    in trouble sooner or later with wrong versions of a file.
    And clicking and dragging from iPhoto makes that mistake easy in those versions.
    1 - I used to create incredible folder/sub-folder arrangements depending upon
    operating system file management to find files.
    You can create whatever structure you want, including a well-thought-out
    naming convention that suits your needs; if you keep doing this in Bridge
    you can easily open and close folders using the Folders panel. In iPhoto you
    have a lot of events and albums, but none as clear as in Bridge (at least
    that is my opinion...).
    Also, I would prefer a good naming convention with fewer folders and
    subfolders over bad file naming and many well-named folders. Using
    the metadata options for description and keywords lets you easily find
    the files you want, and when you need to retrieve an original from an
    external disk or DVD, having fewer folders makes it quicker to find.
    Since depending upon metadata keywords or star ratings I now prefer to have
    all images (originals and amended) in one single folder as Bridge (or iPhoto
    for that matter) can only see the images in the folder it is looking at.
    Wrong; using the Find option (Cmd+F) you can tell Bridge to look in main
    folders or the entire disk, include the option to look in subfolders, and you are
    in business (the first time Bridge builds its cache it also needs to index all the
    metadata, which takes some time, but only the first time).
    2 - potential feature request for Bridge?
    To have an option where Bridge can see and display images in subfolder
    arrangements from the folder it is looking at.
    That has already been there for a few versions.
    Select a folder with subfolders and then, from the View menu, select 'show
    items for subfolders'.
    It is still not a perfect way of functioning, but it gets better with every new
    version; in CS5 the speed of gathering the files is very much improved.
    However, here iPhoto is the winner because it keeps the files in its cache (or
    something like that). Bridge is not at its best in this behavior, but it is
    usable.
    This is a bit like a restriction of OS X "All images" to all images in the
    selected folder and all subfolders of the selected folder (if that makes any
    sense at all)
    Many users have tons of files on their system; why would you use that option
    to view them all if you are perfectly capable of finding the files or file types
    you want in most applications? (You can also find them in OS X: if you want
    psd files, do a search for ".psd" and you will see all the psd files on your
    system.)
    3 - iPhoto can't really handle large files.  With one event consisting of
    4,500+ images weighing in at 884 MB it is zippy.  Were those to be originals
    at a high resolution iPhoto would probably be tediously slow and require a
    folder/sub-folder arrangement which, to me, seems to undermine the importance
    of managing stuff using keywords.
    iPhoto is not designed to be a professional application, so you can't
    expect professional behavior from it.
    In Bridge you have no limit on the number of files on your system, only a limit on the
    cache, which is somewhere around 500,000, I believe. But even Bridge is not really
    designed for handling that number of files in a pleasant way.
    I also don't know of an application that can do so at normal retail prices;
    most users with that number of files have custom-made or customized
    applications that cost many thousands of dollars. Bridge is in fact a free gift
    when buying a Suite application from Adobe.
    The file size for viewing in Bridge is by default set to a maximum of 1 GB, but
    that is only for building previews; you can still see thumbnails and small previews of
    bigger files. In the Bridge preferences you can set this limit to
    whatever you want; it only grows the preview cache file when using many
    large files.
    In summary: Bridge is great for CS5 partnering on metadata, original images or
    large filesizes. 
    I couldn't agree more!
    iPhoto is great on sharing images across devices and better
    still if those files are low res with small filesizes?
    I couldn't disagree more...
    You can do all of this using Bridge, either with actions you have created yourself or with
    scripts like the Image Processor.
    Do some research on Adobe TV or view other video tutorials on the web; there
    are many free and good tutorials. You will learn that you can do all you
    want in Bridge (and a lot more...).

  • Is there a way to delete duplicate photos in iPhoto? I have approx 10,000 duplicates?

    Is there a way to delete duplicate photos in iPhoto without having to delete them individually?

    The real question is how this happened - iPhoto is very good about avoiding duplicates, and since you have a very large number it must be the result of a user error.
    Why do you think you have 10,000 duplicates? How many photos do you have in total? Just making a guess, I suspect that at some point you moved to a new system and imported your old library into the new library. Hint - NEVER import an iPhoto library into another iPhoto library; it does not work and creates a massive mess. If you did import an iPhoto library into your current library, the best solution (assuming the old library is still available) is to drag the bad library to the desktop (if you have the space to continue; it is best to delete it later, after everything is done and tested), connect the two Macs together (network, FireWire target mode, etc.) or use an external hard drive formatted Mac OS Extended (Journaled), and drag the iPhoto library intact, as a single entity, from the old Mac to the Pictures folder of the new Mac. Launch iPhoto on the new Mac and it will open the library and convert it as needed, and you will be ready to move forward.
    Even if you eliminate the "duplicates" - in the suggested case they are not duplicates but different versions, and at best you lose all the edits and non-destructive editing done and are starting over with the original photos - not really good.
    LN
