Potential Disaster avoidable?

Using my Graphite iMac (running OS X 10.2.8), I moved my iTunes library to an external hard disk to save space. This worked very well for two years.
Recently the external hard disk refused to mount. Result: I can't access iTunes.
iTunes launches from the iMac, but naturally the songs can't be found because the EHD is not mounting.
My complete library is visible, and I have many of the songs backed up on other computers, but not all of them. I have half of them backed up and working on another account on the same iMac as well.
Short of sending in the EHD for data recovery, is there a relatively easy way I can replace the lost library by using songs from my other computers and that one account? One computer, a MacBook, has a different, newer version of iTunes installed.
Thanks in advance!

Have you tried to repair the ext HD - or is it toast?
You can get another ext HD and transfer tunes from the other hardware that has some of your tunes. The files that keep track of your tunes and playlists etc. are named 'iTunes Library' and 'iTunes Library.xml'. They are usually located in HD > Users > [your user account] > Music > iTunes.
Once the tunes are all in one place, you should be able to rebuild your library by adding the tunes back into the library.
MJ

Similar Messages

  • Screen turning off randomly, cannot turn back on.  potential disaster for live music i'm doing!

    The screen goes black; the laptop seems not to have turned off, but nothing will turn the screen back on. It seems fine after a reboot, but the problem recurs at some random, inconvenient point in the future. The problem is annoyingly inconsistent - it occurs at random (no discoverable software or hardware conjunctions), sometimes around once weekly, sometimes less frequently. I have already had it in for a Mac repair; they said they fixed it, something to do with graphics card firmware. Not so. Same issue. I'm currently in touch with support but stuck in frustratingly slow red-tape processes. VERY curious to find out if ANYONE has experienced this issue. It seems very unlikely that I am the only MacBook Pro user on earth this happens to... I perform live music with this laptop as part of my set-up and am actively refusing shows at present until this is resolved!

    Thank you very much for your reply.
    Can you please clarify exactly what you mean by "replaced my HDD to SSD" - do you mean changing your hard drive to the solid-state kind? How long ago did you make the change, and how long have you now gone without the issue? Was it also happening to you irregularly (sometimes it wouldn't happen for a month or two, and then more frequently)? Many thanks for any info.

  • Avoid the Droid Bionic and if you are a potential customer, avoid Verizon

    Back on November 16th, just 15 days ago, I purchased the Droid Bionic phone.  Twice in the past two days the screen has stopped working, but I know the phone is still on because I can hear the notification alerts as well as see the blinking lights for a text message and such and can hear the phone ring when I have had a call.  I have had to pop out the battery and reinsert it to get the phone working. 
    One would think that a phone bought brand new that is already experiencing problems, and that someone paid a lot of money for, would be replaced. Well, not with Verizon. I talked to someone at the Verizon store where I purchased it and was told that they would not replace it but would only go through warranty service, in which I would receive what Verizon likes to call a "certified 'like' new" phone. Now, to anyone with common sense, this is what normal people call a used phone. But telling the manager that it is wrong to replace a brand new phone with a used one completely confused her and made her argue that it wasn't used.
    I recently went through the hassle of one of these used, oh excuse me, "certified 'like' new" phones. About a month before my phone contract expired, my previous phone, the LG enV Touch, went bad. Now I am someone who prides myself on taking care of my electronic equipment. The person at the Verizon store who helped me with the warranty process on it was impressed that my enV Touch still looked new even after 2 years. The replacement "certified 'like' new" phone I received had a scratched screen, gunk around the 2 bottom buttons on the front, and a filthy keyboard. It hadn't even been cleaned, and it took me quite a while to clean the gunk from the buttons. I said nothing, which was a mistake on my part, because I knew I would be getting a new phone in about a month's time.
    It is just absolutely ridiculous, and bad customer service, that after just 15 days someone who spent a fairly sizable sum of money has to exchange a new phone for a used one. I have a family share plan with Verizon with 2 lines coming up for renewal in a couple of weeks. Verizon will not be keeping my business on those lines, and I am sure that one of the other companies will be glad to get it.
    And to Verizon: perhaps you should rethink how you treat customers. I have been with Verizon ever since they moved to Huntsville when they bought out GTE Wireless. I can understand a 14-day exchange policy being strictly enforced when there is no issue with a product, but when a product shows itself to be defective on the 15th day, it's very poor customer relations to tell the customer they have to take a used product half a month after buying something new, through no fault of their own.
    I hope this helps anyone who may be considering the Droid Bionic and Verizon.

    That's always been standard practice for Verizon. Six years ago, my LG Chocolate went - like-new replacement. Then my EnV3 - like new. It's the manufacturer who should be more responsible. I was really hoping I wouldn't have the same issues with my new Bionic! Good luck. Hey, if you have a problem, I would go to a Verizon-owned store and check out your replacement before you accept it. Gunk is gross.

  • Ask the Expert: Single-Site and Multisite FlexPod Infrastructure

    With Haseeb Niazi and Chris O'Brien 
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Single-Site and Multisite FlexPod Infrastructure with experts Haseeb Niazi and Chris O'Brien.
    This is a continuation of the live webcast.
    FlexPod is a predesigned and prevalidated base data center configuration built on Cisco Unified Computing System, Cisco Nexus data center switches, NetApp FAS storage components, and a number of software infrastructure options supporting a range of IT initiatives. FlexPod is the result of deep technology collaboration between Cisco and NetApp, leading to the creation of an integrated, tested, and validated data center platform that has been thoroughly documented in a best practices design guide. In many cases, the availability of Cisco Validated Design guides has reduced the time to deployment of mission-critical applications by 30 percent.
    The FlexPod portfolio includes a number of validated design options that can be deployed in a single site to support both physical and virtual workloads or across metro sites for supporting high availability and disaster avoidance. This session covers various design options available to customers and partners, including the latest MetroCluster FlexPod design to support a VMware Metro Storage Cluster (vMSC) configuration.
    Haseeb Niazi is a technical marketing engineer in the Data Center Group specializing in security and data center technologies. His areas of expertise also include VPN and security, the Cisco Nexus product line, and FlexPod. Prior to joining the Data Center Group, he worked as a technical leader in the Solution Development Unit and as a solutions architect in Advanced Services. Haseeb holds a master of science degree in computer engineering from the University of Southern California. He’s CCIE certified (number 7848) and has 14 years of industry experience.   
    Chris O'Brien is a technical marketing manager with Cisco’s Computing Systems Product Group.  He is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 20 years.
    Remember to use the rating system to let Haseeb and Chris know if you have received an adequate response. 
    Because of the volume expected during this event, Haseeb and Chris might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, subcommunity Unified Computing shortly after the event. This event lasts through September 27, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
    Webcast related links:
    Single-Site and Multisite FlexPod Infrastructure - Slides from live webcast
    Single-Site and Multisite FlexPod Infrastructure: FAQ from live webcast
    Single-Site and Multisite FlexPod Infrastructure - Video from live webcast

    I would suggest you read this white paper which details the pros and cons of direct connect storage. 
    http://www.cisco.com/en/US/partner/prod/collateral/ps10265/ps10276/whitepaper_c11-702584.html This paper captures all the major design points for the Ethernet and FC protocols.
    I would only add that in FlexPod we are trying to create a highly available and "flexible" solution; Nexus switching helps us deliver on both with vPC and unified ports.
    NPV equates to end-host mode, which allows the system to present all of the servers as N ports to the external fabric. In this mode, the vHBAs are pinned to the egress interfaces of the fabric interconnects. This pinning removes the potential for loops in the SAN fabric. Host-based multipathing of the vHBAs accounts for potential uplink failures. NPV (end-host) mode simplifies the attachment of UCS to the SAN fabric, and that is why it is in NPV mode by default.
    So for your last question, I will have to put my Product Manager hat on, so bear with me. First off, there is no drawback to enabling the NPIV feature (none that I am aware of); the Nexus 5000 platform simply offers you a choice to design and support multiple FC initiators (N ports) per F port via NPIV. This allows for the integration of the FI end-host mode described above. I imagine that, being a unified access layer switch, the Nexus team enabled standard Fibre Channel switching capability and features first. The implementation of NPIV is a customer choice based on their specific access layer requirements.
    /Chris

  • Calling PL/SQL Procedure In Another Schema Gives Unexpected Result

    I have a SQL Script that does this:
    conn pnr/<password for user pnr>;
    set serveroutput on;
    exec vms.disable_all_fk_constraints;
    SELECT owner, constraint_name, status FROM user_constraints WHERE constraint_type = 'R';
    and the disable_all_fk_constraints procedure that is owned by user 'vms' is defined as:
    create or replace procedure disable_all_fk_constraints is
      v_sql VARCHAR2(4000);
    begin
      dbms_output.put_line('Disabling all referential integrity constraints.');
      for rec in (SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R') loop
        dbms_output.put_line('Disabling constraint ' || rec.constraint_name || ' from ' || rec.table_name || '.');
        v_sql := 'ALTER TABLE ' || rec.table_name || ' DISABLE CONSTRAINT ' || rec.constraint_name;
        execute immediate v_sql;
      end loop;
    end;
    /
    When I run the SQL script, the call to vms.disable_all_fk_constraints disables the FK constraints in the 'vms' schema, whereas I wanted it to disable the FK constraints in the 'pnr' schema (the invoker of the procedure). I know that I could make this work by copying the disable_all_fk_constraints procedure to the 'pnr' schema and calling it as "exec disable_all_fk_constraints;" from within the SQL script, but I want to avoid having to duplicate the PL/SQL procedure in each schema that uses it.
    What can I do?
    Thank you

    You have two issues to solve.
    First you need to write a packaged procedure that works with INVOKER rights. The default is DEFINER rights.
    The difference is exactly what you need. Usually a package runs with the rights of the schema where it is defined (= definer rights), in your case schema VMS, whereas you need the privileges of the user that calls the package (PNR).
    => Check out the documentation for INVOKER rights
    The second problem is that the view "user_constraints" will not give the results you expect when called from inside a procedure in another schema. An alternative could be to use the view DBA_CONSTRAINTS with a filter on the owner (where owner = 'PNR'). Not sure if there are other working possibilities. You could also create a fixed list of the constraint names you want to disable, instead of building the list dynamically.
    And another potential disaster could be creeping up on you: if you run this thing, then from that moment on you don't have any referential integrity anymore. You can't be sure that you can re-create the FKs after this action. This is EXTREMELY DANGEROUS. I would never ever do this in any kind of production or test database, and I would be very careful doing it on a development database.
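    For illustration only, here is a minimal, untested sketch of what an invoker-rights version of the VMS procedure could look like, following the DBA_CONSTRAINTS suggestion above (shown as a standalone procedure rather than a package for brevity; it assumes the calling user, e.g. PNR, has been granted SELECT on DBA_CONSTRAINTS):
    create or replace procedure disable_all_fk_constraints
      authid current_user   -- invoker rights: runs with the caller's privileges and schema
    is
      v_sql VARCHAR2(4000);
    begin
      dbms_output.put_line('Disabling all referential integrity constraints.');
      -- DBA_CONSTRAINTS filtered on the calling schema, as suggested above;
      -- the caller needs SELECT privilege on DBA_CONSTRAINTS for this to work
      for rec in (SELECT owner, table_name, constraint_name
                    FROM dba_constraints
                   WHERE constraint_type = 'R'
                     AND owner = sys_context('userenv', 'current_schema')) loop
        v_sql := 'ALTER TABLE ' || rec.owner || '.' || rec.table_name ||
                 ' DISABLE CONSTRAINT ' || rec.constraint_name;
        execute immediate v_sql;
      end loop;
    end;
    /
    Keep in mind the warning above: disabling all FK constraints is dangerous regardless of which schema executes the code.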

  • No shared network access when booted from Firewire

    I can't seem to access any volumes on my network when my MacMini is booted from an ext. firewire drive. I've tried to do this from two separate bootable FW drives and I can't access my NAS or any of the other Macs on my network. This is an issue that I just noticed. I have worked with Macs and PCs across networks for a while, but I can't figure this one out for some reason.
    I'm running MacMini 1.66 GHz Intel Core Duo and I'm on 10.5.8. Any insight is appreciated. Thanks!

    UPDATED/SOLVED---I finally figured this problem out. **Note: This whole process began with the desire to expand my original MacMini's 75GB HD space and to avoid potential disaster. I have recently been going through a spate of HD failures and I really wanted to avoid such a calamity with my main system volume. So the goal was to, in effect, replace the MacMini's drive without having to open it up by simply buying a new 1TB drive and dropping it into a FW enclosure with a huge fan. I had hoped that a simple BounceBack recover would do the trick but I quickly found out that it was not to be.
    The problem existed on backup copies that were created by my old version (5.1.4) of BounceBack (that I never updated). For whatever reason, that version excludes necessary components for networking when making bootable volumes. Therefore all my external FW drives that were bootable were not truly complete/useable volumes. Here's how I came to that conclusion and how I ultimately solved the problem. 
    1. I ended up having to do a reinstall of Leopard. I did this by cloning Leopard install disc to an image file and then restoring that image file to the FW drive. The real purpose of cloning the install disc was to immediately test the veracity of my theory that the problem lay within my BounceBack bootable volumes and not with the FW drives themselves. Sure enough, I found everything to be working perfectly from the newly minted, clean install Leopard FW volume. So then...
    2. ...I cloned the MacMini's original internal HD. It was the only volume with my most recent system that did NOT have the aforementioned network access problems, and it was the volume from which all of the other BounceBack volumes were created.
    3. I restored this volume to the FW drive and was rewarded with a brand new, expanded and fully working system volume with all my applications and settings intact.
    To be fair, I feel compelled to mention that this is not necessarily an indictment of BounceBack. Again, the copy I was using is several years old, and I simply never paid for an update, thinking that the copy I was using was sufficient. I'm willing to bet that the makers of BounceBack solved this problem long ago in one of the many updates that have been released since I first purchased the software. That being said, I'm now using Time Machine for backups of my system volume while utilizing BounceBack for data on other volumes that don't need to be bootable.

  • Spending Limit urgently needed,

    My client's website at an Azure competitor was hacked (PHP code injection vulnerability) and burned through $400 of bandwidth charges within a few hours. Luckily this was during business hours, and they quickly caught it and stopped it, but now they're looking for advice. Had it happened over a holiday or vacation time, it would have cost tens of thousands of dollars. The cloud provider refused to waive the charges, but gave a 50% discount.
    Does Azure provide a mechanism for placing an overall corporate-level spending limit on the account to avoid this kind of potential disaster? If not, what's the best practice to mitigate the risk? An attack on an unattended server can quickly bankrupt a small business.

    Hi,
    When you create a Microsoft Azure subscription through a member offer -- the MSDN benefit or the Microsoft Partner Network Cloud Essentials program, for example -- a spending limit of 0 is turned on automatically. The spending limit helps ensure you don't get charged when the credits on your subscription run out. The spending limit feature isn't available for other subscription types such as the Free Trial offer, pay-as-you-go subscriptions, and commitment plans.
    However, if you are the Account Administrator for a Windows Azure subscription, you can set up email alerts when a subscription reaches a spending threshold you choose. Alerts are currently not available for subscriptions associated with a commitment plan. Refer to http://msdn.microsoft.com/en-us/library/dn479772.aspx for more details about how to set up Windows Azure billing alerts.
    Best Regards,
    Jambor

  • Portal Profile Server Recovery Question

    Hi All,
    I am currently implementing/rolling out to production 2x portal gateways and 2x portal servers (with 1 master profile LDAP on one of the portal servers). I am currently using iPlanet Portal version iPS3.0sp3a.
    What I would like to know is: how are other clients/companies configuring a 2x gateway with 2x portal server design?
    What do other clients/companies do when the one master profile LDAP or master profile server dies? The reason I ask is that the master profile LDAP is a single point of failure.
    Question: are other companies creating a 2nd profile LDAP that is dormant on the 2nd portal server, so that if the master profile server dies, you can quickly restore/configure the 2nd portal server to be the new master profile server?
    I am just curious what other companies are doing to mitigate a potential disaster with the master profile LDAP.
    Note: I know that iPSv3.0sp4 is due out soon, but I will be moving to production real soon and I need to have some sort of a disaster plan in place.
    Any suggestions and/or comments are most appreciated.
    Thanks in advance,
    Chris Wilt
    TransCanada

    sp4 is already available for download.
    The general high availability setup would be to have a Sun cluster with the LDAP server and set up replication, so that if one server fails you have the other.
    An alternative option is to have replication and put iDAR in front of the LDAP servers.
    With the availability of sp4 you can have multiple profile servers, which avoids a single point of failure on the profile server.

  • Audio connectors - suggestions

    I find the almost constant plugging and unplugging that I do of that audio/mini plug connection to be a potential disaster in the making so I want to have my main audio output be from a more physically and sonically stable connection that I won't constantly be messing with.
    There used to be a PCI card that connected to the audio in the laptop and had RCA pushpin connections, but I don't know what brand is best. Does anyone have a suggestion or preference?

    I wrote:
    "Perhaps you need an external mixer or patchbay? "
    You could have responded - "Oh I didn't think of that, that might be a good solution" or "No, I don't want any additional gear" or "No those things would be overkill for my needs" or "What's a patchbay?" etc.
    Virtually all audio interfaces use standard connections, so wanting something different is a tall order.
    The conventional solution for avoiding the need to continually repatch, or when you have too much gear with too few inputs, is to use an external mixer to route signals in, or a patchbay, which are specifically designed to bring connections to an easily repatchable panel without wearing out your gear's connections.
    Seemed like a sensible suggestion to me, but of course your mileage may vary... shrugs

  • How to properly partition separate Root, Applications, and Users volumes?

    Wondering if there's an official Apple recommended method to set Applications and Users directories on different physical Volumes (be it a separate partition or separate physical drive).
    The bottom of this Apple document alludes to this by stating, "Divide hard disk space into partitions and then, for example, keep your applications in one section and your documents in another."
    http://docs.info.apple.com/article.html?path=Mac/10.5/en/8769.html
    I'd like to do this, but there seem to be issues with symbolic linking in some of the solutions out there. Not sure if Apple has a sanctioned method that won't be incompatible with some applications and/or future upgrades.
    Thanks in advance for anybody's insight.

    ArborWoody,
    The concept of using separate volumes and/or drives to store system, HOME folders, applications, and even swap files separately has been debated, tested, then debated some more time and time again for many years. The findings are that, without doubt and in all realistic cases, jumping through convoluted hoops doesn't get you anything. Nothing at all.
    Given the lack of any real benefit, plus the complexity of making the attempt, to do so is folly. The best advice we could give you would be to forget all about doing this, or in fact doing anything but keeping things just as they are in the default configuration of OS X.
    Notice, however, that one thing I mentioned above is "HOME folders." I specifically did not mention "user data." While one's HOME folder typically contains user data, the two things are not the same. In very many cases, it is in fact beneficial to store certain types of user data on a second volume or drive. What type of data, and where it would be best to store that data, would be determined entirely by one's workflow environment and the uses to which that data will be put.
    It could be that a striped RAID array is the solution that would work best for you. Or, you may instead be best served by using a mirrored RAID array. In some cases, external storage is best. In others, not so much.
    One of the important issues you mention is facilitation of backup and recovery. Absolutely! The answer is Time Machine. Or, perhaps Time Machine and the maintenance of a "clone" (not something I do, but popular nonetheless). But what does Time Machine do for us? Simple. It is designed to allow us to quickly and easily recover from potential disaster, up to and including the loss of a startup drive or even an entire machine (this second possibility assumes that the TM backup is stored externally or on a safe drive, of course).
    What can we do to enable Time Machine to do its job most effectively? This, too, is pretty simple. Just keep our startup volume as "slim" as we can. Not by moving HOME folders or applications, but by common-sense file management, tailored to the specific usage to which this machine is dedicated. Perhaps more importantly, and more pertinent to this thread, moving one's applications or HOME folders off the startup volume actually interferes with Time Machine's ability to do its job. For an example of the effects of this interference, just take a look at this fun thread.
    Scott

  • Execute the expression in select statement

    CREATE TABLE TEST1
    (
      OFFICE_PRODUCTS     NUMBER,
      OFFICE_ELECTRONICS  NUMBER
    );
    Insert into TEST1 (OFFICE_PRODUCTS, OFFICE_ELECTRONICS) Values (1, 0);
    COMMIT;
    CREATE TABLE TEST2
    (
      EXPORT_FIELD_NAME         VARCHAR2(100 BYTE),
      EXPORT_COLUMN_EXPRESSION  VARCHAR2(100 BYTE)
    );
    Insert into TEST2
       (EXPORT_FIELD_NAME, EXPORT_COLUMN_EXPRESSION)
    Values ('A1', 'least(OFFICE_PRODUCTS, OFFICE_ELECTRONICS)');
    COMMIT;
    I want the stored expression to be executed as part of a select statement. How can I do that?
    I tried the following, but it's not working:
    select (select EXPORT_COLUMN_EXPRESSION from test2 where EXPORT_FIELD_NAME='A1') FROM TEST1;

    Your problems are many...
    a) It's very poor design to be storing expressions, SQL statements, or any 'executable' style code as data in the database.
    b) What you're storing is a string of characters. Oracle isn't going to miraculously know that it is an expression that has to be evaluated, so why should it decide to treat it as such?
    c) This poor design can lead to security issues, especially around SQL injection.
    d) To actually perform what you want would require you to build a dynamic SQL statement and then execute it using EXECUTE IMMEDIATE or DBMS_SQL (or, for a 3rd-party client, a ref cursor); a minimal sketch follows below. But there are numerous issues around dynamic SQL, aside from SQL injection: you are producing code that is not validated at compile time, which can lead to bugs that show up only at run time and sometimes only under certain conditions; the code is harder to maintain; the code can end up avoiding the use of bind variables, impacting resources and performance on the database; and the final query can be difficult to work out just from reading the code, making further development or debugging a pain in the posterior. Essentially, dynamic SQL is considered very poor design and is 99.9% of the time used for the wrong reasons.
    So, why are you trying to do this? What is the business requirement you are trying to solve?
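    For completeness, and with all of the caveats above, a rough, untested sketch of the dynamic SQL approach mentioned in point d) might look like the anonymous block below (the table and column names come from your post; everything else is illustrative only):
    DECLARE
      v_expr   test2.export_column_expression%TYPE;
      v_result NUMBER;
    BEGIN
      -- Fetch the stored expression text, e.g. 'least(OFFICE_PRODUCTS, OFFICE_ELECTRONICS)'
      SELECT export_column_expression
        INTO v_expr
        FROM test2
       WHERE export_field_name = 'A1';
      -- Build and run the query dynamically: no compile-time validation, and the
      -- SQL injection risk described above applies to whatever is stored in TEST2
      EXECUTE IMMEDIATE 'SELECT ' || v_expr || ' FROM test1' INTO v_result;
      dbms_output.put_line('A1 = ' || v_result);
    END;
    /
    Note that the INTO clause only works here because TEST1 currently holds a single row; this is exactly the kind of code the points above warn against, so the business requirement is still the real question.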

  • Conforming and Indexing Errors, Media Pending, Audio won't play in timeline

    I'm working on a desktop PC which is running Windows 7 Professional 64-bit and Adobe Premiere Pro (version CS5.5). It's currently utilizing a second-gen 3.4GHz i7 2600 processor, 16GB of 1600MHz RAM, a 64GB solid-state drive, and an ASUS P8Z68-V Intel Z68 motherboard with onboard audio (Realtek ALC892 chipset) and onboard video. My problem is this:
    The conforming and indexing of all of my imported media never seems to finish regardless of how many times I reopen the project file and wait for it. On the lower right-hand portion of the screen, next to the conforming/indexing progress bar, is a little red "X". When clicked, it pops up with a list of errors that read: "An unexpected error occurred while performing a conform action on the following file...". As a result, my audio channels have no waveform and during playback there are no audible tones or levels. On some video clips there's just text that reads "Media Pending". This only appears to happen with project files that I saved on external hard drives, and I suspect it has something to do with the Media Cache Files folder and how Premiere Pro locates these conform/index files. I've also encountered this problem in CS3 and CS4.
    I have a few questions:
    1) How do I avoid error messages in regards to indexing and conforming
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    I'm fine with having to wait for a project to conform and index, but it never seems to complete itself! Any help regarding this matter would be greatly appreciated.

    Harm filled in pretty much all the salient details, but I'll do another pass here.
    1) How do I avoid error messages in regards to indexing and conforming
    Two parts here.  One, conforming only happens for certain media files, ie the ones where performance is critical and we can't depend on extracting the audio fast enough for realtime playback.  That's basically anything in an .mpeg wrapper, or AVCHD material.  So if you edit XDCAM HD/EX or P2, or RED, or even AVIs or QT, those formats don't require audio conforming.
    If you're stuck editing AVCHD or MPEG2, then it needs to conform.  But, that being said, you shouldn't be getting errors in the first place. I think it's related to your external drives.  More below...
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    Nope, you have a progress status bar indicating which file it's working on.  If there's an error, it shows up in the events panel.
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    No.
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
    While some people like having the check box for keeping the conform files beside the media, I hate it.  Yes, it means that if you move the project to a different system and reopen it, you can potentially avoid recreating CFA files, but I find the drive littering not worth it.  I much prefer setting the Media prefs to point to a specific media drive.  Usually a RAID, if available.  Definitely not an external drive that you disconnect and walk away with.  If you don't have a permanent RAID on your system, then preferably a dedicated internal drive for media (think along the same lines as your Photoshop 'scratch disk').  Failing that, leave it on your C: drive, although with a 64GB SSD, you probably don't have much room for transient temporaries.
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
    Like Harm said.  Totally dependent on the media container and the speed of your drive I/O.  The conforming is iterating through the entire file and pulling audio data, so it's not CPU intensive, it's all I/O.
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
    If you're trying to play/scrub while conforming, it's going to be pokey.  Especially if you're trying to access the file that's actively being conformed.  As I just said, we're hitting the files for all the audio.  The I/O is being saturated already, so unless you have a stellar RAID, you don't have much headroom.
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    You should be good to go.  Sounds like there's something else at play here.
    Okay, back to what I think is wrong:  you don't mention what kind of external drives you're using.  You're making a bad assumption that blowing away conformed files & doing a reconform is buggy - I doubt it, as that's the same process that happened when you initially brought in the files.  I've blown away my media cache folder multiple times and have never seen failures on reconform.  So it's got to be one of two things:  either a read error from the source when attempting to pull the audio, or a write error to the destination.  Now I don't know where you currently are pointing the media cache directory, or what your source drive is, so I can only speculate.
    My suggestion is to do some elimination.   Copy one of the files that failed on you to your C drive, & target your media cache directory also to C:.  Pick a new project, import your copied file, confirm that it conforms correctly & behaves.   Then, try to use the same clip from your external drive, keeping the media cache to C:.  If that's still good, then try targeting another (local/internal) drive as your media cache target; close/restart, then import the clip from C:, and then import the clip from your external drive.  This troubleshooting should give us something.
    PS, if you're trying to edit from external USB drives, good luck.  I find it a major PITA that I avoid as much as possible.  Firewire isn't much better.  I know some people do it successfully, but I think it's a road fraught with peril.  These devices are generally not designed for heavy duty I/O and a flaky connection or drive is nothing but pain.
    Cheers

  • How to download upgrades: try to find them on a software pirate's site...

    ...well, not really, but it seems that's my only option.
    My G5, after a crash, decided that it won't boot anymore from the built-in drive. The only way to fix that was to totally reformat the drive and reinstall everything from backup. You can guess what that means: get the data from the backup, install all the software from the original disks, and apply all the updates again.
    Now welcome to Apple's paranoia: You can't download the software updates for Final Cut Studio more than once, or so it seems. Obviously, I had downloaded them before, and when I try now, with my very own, personal, paid-for, etc. license string, I only get this joke of an error message:
    "You appear to have already downloaded the Final Cut Studio update."
    Yeah, well, really? Of course I have downloaded it before! But until Apple decides to ship computers that are 100% fail safe, I also reserve the right to download software that I paid for as often as I need to reinstall the junk!
    However, as things stand now, I can't download the updates anymore because Apple's paranoid that someone, somehow, might stitch together a working version of the software from update downloads w/o paying for a license.
    Wake up, and smell the coffee: if someone wants to pirate Apple's software, they will certainly have better channels than Apple's software download.
    Of course, what Apple does with this policy of restricted downloads is to force a legitimate user like me to try to get the updates somewhere else, which likely means some sort of gray/black channels. Pretty ********, in my book...

    And the SOFTWARE UPDATE System Pref doesn't just snag it either, does it?
    Hmmm...I just did it fine...and I have already done this before. So you input your serial number and it said you can't do it?
    Exactly. Just tried it again, to check if maybe Apple's servers have some sort of time-out where if you try it N times within time T then it blocks it.
    But even though it's days since the last attempt, it's not working.
    Software Update isn't a solution.
    a) I need backups of the installer packages, because I can't rely on being connected to the internet when travelling and I take along all important software to recover from a potential disaster
    b) Software Update isn't always reliable, particularly if you moved applications to organize your Applications folder somewhat.
    I'm already wasting weeks on a boot issue which I think I may have gotten a handle on now: it seems that at least the original PMG5DP2GHz can't boot from (some?) volumes with more than 2^22 files on them.
    Ronald

  • How do you resize the Picture Gallery Widget? (below 279 pt)

    The Picture Gallery widget works very nicely on a book, but it's way too big for my book design. I want a smaller image in the corner of the book. Something you tap on to open the picture gallery.
    I can get rid of the title, the caption and the background and shrink the widget a bit, but then you go to the "metrics" section in the Inspector, and the widget will not go below a certain width or height (a potential disaster for the overall design of your book).
    Bookry's widgets will not cut it either. What is the option then? Do I need to hand-code my own picture gallery widget? Good Lord..

    The arrows don't show up on the iPad, they are only to aid navigation within iBA. You can use boxes in the background to frame your galleries (as the background is off). I believe it's also possible to change the colour of the dots beneath the gallery too, although I forget how to do that right now.
    About Apple's rigid UX – I agree that it can be frustrating. Apple intended iBA to be used by everyone, and the sort of design options and customisability we might want may confuse the less experienced user. Apple also wanted the UI for every book to be the same – from the point of view of the new user, once you've used one iBA ebook (or iBook, I'm not sure what Apple wants us to call it), you can use them all. Otherwise end users would have to get used to a slightly different UI on each book, which kind of works against the idea that ideally a UI should be intuitive enough to be invisible. I can see the point in that. I have created a 'How to use' section in every iBA file I've made. It gets old quickly.
    Oh, and (you probably know this, but…) you can choose your own custom icon for audio. It's limited, as it's only a placeholder image that allows you to play an audio file; there's no ability to pause or stop the audio once you choose the 'view as thumbnail' option. Something I'd like to see changed.
    Here's a link to Apple feedback page for iBA. Feel free to hammer it with your suggestions.
    https://www.apple.com/feedback/ibooks-author.html

  • Phantom hairline showing up in shape, transfers to After Effects, doesn't move on art board

    Hi,
    First time posting here so hope I'm following protocol. We're having an issue at my office with Illustrator (5.1) and After Effects. This just started happening this week.
    For some reason, when a file is created in Illustrator and you zoom in extremely close on a shape, a thinner-than-hairline line appears on it. This has shown up as both a vertical and a horizontal line. When you zoom in or out, the line disappears, but it also appears in After Effects. It doesn't seem to be part of the shape or a fragmented shape, because if you move the shape, the line stays stationary.
    Anyone seen anything like this? The video editor and I have NO idea and we've been trying to figure it out for a few days now. Also, the issue isn't in just one file; it has happened in two separate project files now.
    Thanks!
    Christi

    This is an issue with the current PDF libraries used by Adobe apps. For some reason content gets tiled, even if it doesn't actually require flattening. The same issue occurs when rasterizing the files in PS. Anyway, with regards to AE, the sane advice would be to save back to CS3 no matter what, because there are bugs in its handling of newer AI formats... This potentially also avoids your issue...
    Mylenium
