Could not read boot block (input/output error) PowerBook G4

Hello
My friend gave me a PowerBook G4 and a copy of OS X Lion. I would like to install it, but the internal hard drive is not available in the "Select Destination" step of the installation process. I tried to repair the disk, which is labelled disk0s1, but shortly after the repair is underway I receive this message:
/dev/disk0s1 could not read boot block (input/output error) Error: The underlying task reported failure on exit. 1 non HFS volume checked, 1 volume could not be repaired because of an error.
Within Disk Utility, disk0s1 is greyed out; I can try to repair it, but the above error occurs. Do you know what the problem is and how I can fix it?

Since it sounds like you are going to install the system fresh, you should erase the drive. Boot from the installer disc and select the language, but do not start the installer. From the menu bar you should have either an Applications or an Installer menu; from one of those menus you can start Disk Utility.
In the drives and volumes pane, select the internal drive's hardware listing (the top-level entry for the disk, not the volume beneath it).
This will allow you to partition or erase the disk. Click the Erase tab, select Mac OS Extended (Journaled) as the format, and then click the "Erase" button. You can name the volume that will be created whatever you want, for instance "Macintosh HD".
Once the erase has completed, check the information for the hard drive at the bottom of the window. Be sure that the partition scheme is Apple Partition Map. If for some reason it is something else, such as GUID, you won't be able to install Mac OS X on it: click the Partition tab, click the "Options..." button, select Apple Partition Map, and then click the "Partition" button.
Once this is done, you can quit Disk Utility and you should be able to install Tiger on that hard drive.
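If you are comfortable with the command line, most installer discs also offer Terminal in that same menu, and roughly the same erase-and-repartition can be done with diskutil. This is only a sketch: the device identifier disk0 is an assumption (check it with diskutil list first), and the exact partitionDisk syntax may differ on older installers' versions of diskutil.
diskutil list
# one Apple Partition Map partition, journaled HFS+, named "Macintosh HD", using the whole disk
diskutil partitionDisk disk0 1 APM JHFS+ "Macintosh HD" 100%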

Similar Messages

  • Error 105, Could not read full block (2048 bytes) from checkpoint file ~/dirchk/sdfsdj.cpe

    Hi experts,
    I am getting the error below in GoldenGate because a mount point filled up. I released the space, but the same error persists for all GG processes, and I can see the *.cpe / *.cpr files have become 0 bytes. So I deleted and re-added the extract and replicat, and while adding the replicat I used ADD REPLICAT with a checkpoint table; because of that, multiple entries for the same replicat appeared in the checkpoint table. My checkpoint table details are also present in ./GLOBALS. My doubt now is: if I add the replicat and mention the checkpoint table name, will a duplicate entry be created, and what is the workaround for this?
    MANAGER RUNNING
    Invalid checkpoint for EXTRACT  qqqq   (error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXX)
    Invalid checkpoint for EXTRACT  qqq (error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXXX)
    Invalid checkpoint for REPLICAT qqq  (error 105, Could not read full block (2048 bytes) from checkpoint file XXXXXXXXXXX)

    Hi Kariyath
    Increase the page size of your Windows machine. Check the recommended page size, and remember that the recommended page size should be treated as the lower limit. If you have any issue, feel free to ask. Your problem will be solved.
    Award suitable points
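    For reference, a sketch of the delete/re-add sequence in GGSCI (the process, trail, and checkpoint-table names below are placeholders, not the ones from this environment). Doing DBLOGIN before DELETE REPLICAT lets GGSCI remove the old row from the checkpoint table, which is what normally prevents the duplicate entries described above:
    $ ./ggsci
    GGSCI> DBLOGIN USERID gg_admin, PASSWORD ********
    GGSCI> INFO ALL
    GGSCI> DELETE REPLICAT rep1
    GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/aa, CHECKPOINTTABLE gg_admin.ggs_checkpoint
    GGSCI> START REPLICAT rep1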

  • FCS "ERROR: could not read block 74012 of relation 1663/16385/16576: Input/output error."

    I would really appreciate some advice re: Final Cut Server search errors.
    FCS has been running well with our Linux-based EditShare RAID for 4 years, notwithstanding the recent Mac OS Java security issues.
    Our FCS machine is running Mac OS X 10.6.8 and Final Cut Server 1.5.2 with the latest
    OS 10.6.x updates; ancient, I know.
    FCS is still searchable for most of our offliner, audio, gfx and onliner machines running 10.7+, but in some cases, searching FCS assets results in "ERROR: could not read block 74012 of relation 1663/16385/16576: Input/output error."
    Initially assuming the OS and/or data drives on the FCS machine were failing, I ran Disk Utility's verify and repair, checked Time Machine, and cloned the database drive. I'll clone the OS drive tomorrow, but after checking the forums tonight and seeing similar error messages I'm not so sure it's a degraded drive issue.
    Thanks in advance, any ideas appreciated!
    cheers,
    aaron,
    Aaron Mooney,
    Post Production Supervisor, epn.tv
    Electric Playground Daily, Reviews On The Run Daily, Greedy Docs.

    I would really appreciate some advice re: recent FCS search errors.
    We're having similar issues to C.P.CGN's 2-year-old post; it's only developed for us in the last few weeks.
    Our FCS machine is running Mac OS X 10.6.8 and Final Cut Server 1.5.2 with the latest
    OS 10.6.x updates.
    FCS is still usable for 6 of 8 offliners, but on some machines, searching assets presents "ERROR: could not read block 74012 of relation 1663/16385/16576: Input/output error."
    Assuming the OS and/or data drives on the FCS machine were failing, I cloned the database drive today and will clone the OS drive tomorrow night, but after searching the forums and seeing similar error messages I'm not so sure.
    FCS has been running fine for last 4 years, minus the recent Java security issues.
    Thanks in advance, any ideas appreciated!
    cheers,
    Aaron Mooney,
    Post Production Supervisor.
    Electric Playground Daily, Reviews On The Run Daily, Greedy Docs.
    epn.tv
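    For what it's worth, that "could not read block ... of relation" text comes from the PostgreSQL database that Final Cut Server uses, so the corruption may be in the database files rather than the whole drive. A rough way to scan for bad pages is to run a full dump, which forces every table block to be read; the host, port, user and database name below are guesses, so check them against your FCS installation's PostgreSQL settings:
    pg_dump -h 127.0.0.1 -p 5432 -U postgres fcsvr > /dev/null
    # if the dump aborts with the same "could not read block" error, the damage is inside
    # the database itself, and restoring the catalog from a known-good backup is the safer fix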

  • "ERROR: Could not read block 64439 of relation 1663/16385/16658: Result too large"

    Hi,
    I've already archived a lot of assets in my Final Cut Server, but for the past week a message has appeared when I click on an asset and choose "Archive". The pop-up says: "ERROR: Could not read block 64439 of relation 1663/16385/16658: Result too large"
    Does anyone know what the problem is and/or have any suggestions to solve it? I can't archive anymore since the first appearance of this message.
    What happened before?
    -> I archived some assets via FCS and then transferred the original media to offline storage. That system worked fine for the last few months and my normal server stays quite small in storage use. But now, after I added some more new productions and let FCS generate the assets, it doesn't work anymore...
    It's not about the file size - I tried even the smallest file I found in some productions.
    It's not a particular production - I tried some different productions.
    It's not about the storage - there's a lot of storage left on my server.
    So, if someone knows how to get this server back on the road - let me know.
    THNX!
    Chris


  • I have received this error message when trying to send a comment to a blog : ERROR: Could not read CAPTCHA cookie. Make sure you have cookies enabled and not blocking in your web browser settings. Or another plugin is conflicting. See plugin FAQ., can any

    I have received this error message when trying to send a comment to a blog : ERROR: Could not read CAPTCHA cookie. Make sure you have cookies enabled and not blocking in your web browser settings. Or another plugin is conflicting. See plugin FAQ., can any...
    Only found one question similar and NO ANSWER to it.

    Same problem here... nothing in my settings has changed since the upgrade, and I've checked my cookie settings and they allow 3rd-party cookies. What's up, Mozilla? I've been faithful to you for years and you let me down now?

  • FATAL: Could not read from the boot medium! system halted.

    Hi,
    I'm running Windows 7 64-bit on my PC and installed VirtualBox 4.1.16.
    I downloaded Oracle VM Server 3 (V29653-01.iso), created a new virtual machine and attached the ISO file, but I am unable to boot from the ISO.
    I am getting: FATAL: Could not read from the boot medium! System halted.
    Under VM Settings > Storage I have:
    IDE Controller
    v29653-01.iso
    Attributes: CD/DVD Drive: IDE Primary Master
    Live CD/DVD: unchecked
    Under the General tab I have: Operating System: Linux, Version: Oracle (64-bit)
    Edited by: user9182826 on Aug 4, 2012 5:34 PM
    Edited by: user9182826 on Aug 4, 2012 6:12 PM

    Not sure if you are running into the same issue..
    OVM 3.0.3 - New VM loses filesystem after OS install
    For OEL5 VMs on 3.0.3 you have to add 'xen_emul_unplug=never' to your kernel boot line.
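    In case the attachment made in the GUI didn't take, here is a sketch of attaching the ISO and setting the boot order from the host command line. The VM name "OVM3", the controller name "IDE", and the ISO path are placeholders; match them to what VirtualBox shows for your machine:
    VBoxManage storageattach "OVM3" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium "C:\path\to\V29653-01.iso"
    VBoxManage modifyvm "OVM3" --boot1 dvd --boot2 disk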

  • [solved] NFS: "ls: reading directory .: Input/output error" after time

    Hello,
    I got the following configuration:
    NFS-server: Always on, connected via LAN.
    NFS-clients: 2 computers, connected via LAN and WLAN. On standby: auto umount NFS-share via system-sleep-hook with systemd.
    This works flawlessly for some hours or even days. But after some time, the following happens:
    1. mount nfs-share: no error message
    2a. trying to access that folder via a file manager, e.g. Thunar: the folder seems to be empty (it isn't on the server!), but the correct free size of the NFS share on the server is displayed in the status bar.
    2b. trying to access that folder via commandline:
    $ ls
    ls: reading directory .: Input/output error
    The very same worked just a few hours before, I haven't changed anything meanwhile. Apart from standby (--> auto umount NFS-share) and resume (manual mount NFS-share) on the clients which worked before, too.
    Any idea what's going wrong?
    additional info:
    server:
    /etc/exports
    /srv/nfs/myshare 192.168.2.0/24(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
    (yes, the mapping is needed and right.)
    clients:
    /etc/fstab
    servername:/srv/nfs/myshare /home/carl/nfs nfs4 noauto,soft,user,_netdev,timeo=14,rsize=8192,wsize=8192 0 0
    /usr/lib/systemd/system-sleep/umount-nfs
    #!/bin/sh
    case $1 in
    pre)
    umount -f /home/carl/nfs
    esac
    Last edited by Carl Karl (2014-09-21 17:07:47)
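    Not an answer to the I/O error itself, but since the share is remounted manually after resume anyway, the same sleep hook can handle that too: systemd calls these scripts with "pre" before suspend and "post" after wake. A sketch, reusing the path from the hook above:
    #!/bin/sh
    case "$1" in
        pre)
            umount -f /home/carl/nfs
            ;;
        post)
            mount /home/carl/nfs
            ;;
    esac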

    Indeed, there could be something interesting in dmesg on the server:
    [ 4.190117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
    [ 4.190596] NFSD: starting 90-second grace period (net ffffffff818a9540)
    [ 5.445237] [drm] Enabling RC6 states: RC6 on, RC6p on, RC6pp off
    [ 5.895519] r8169 0000:04:00.0 enp4s0: link up
    [ 5.895529] IPv6: ADDRCONF(NETDEV_CHANGE): enp4s0: link becomes ready
    [72346.265850] usb 3-1: USB disconnect, device number 2
    [72346.670926] usb 3-1: new SuperSpeed USB device number 3 using xhci_hcd
    [72346.684585] usb-storage 3-1:1.0: USB Mass Storage device detected
    [72346.684716] scsi5 : usb-storage 3-1:1.0
    [72347.686518] scsi 5:0:0:0: Direct-Access WD Elements 1048 1022 PQ: 0 ANSI: 6
    [72347.687606] sd 5:0:0:0: [sdc] 976769024 512-byte logical blocks: (500 GB/465 GiB)
    [72347.690295] sd 5:0:0:0: [sdc] Write Protect is off
    [72347.690301] sd 5:0:0:0: [sdc] Mode Sense: 47 00 10 08
    [72347.692981] sd 5:0:0:0: [sdc] No Caching mode page found
    [72347.693025] sd 5:0:0:0: [sdc] Assuming drive cache: write through
    [72348.135398] sdc: sdc1
    [72348.141458] sd 5:0:0:0: [sdc] Attached SCSI disk
    [72348.455737] EXT4-fs (sdc1): recovery complete
    [72348.455919] EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: (null)
    [74276.020331] EXT4-fs error (device sda1): ext4_find_entry:1310: inode #14024711: comm nfsd: reading directory lblock 0
    [74276.021107] EXT4-fs error (device sda1): ext4_find_entry:1310: inode #14024711: comm nfsd: reading directory lblock 0
    (last one repeats several times...)
    Indeed, the NFS share resides on an external WD hard drive connected via USB, /dev/sdc1.
    As suggested in the NFS wiki article, the NFS-share is a bind-mount:
    /etc/fstab on server:
    # fake-root for NFS
    /magnet/owncloud/data/carl/files /srv/nfs/myshare none bind,noauto,x-systemd.automount
    while /srv is on /dev/sda1.
    The external hard drive has an auto-standby (NOT hdparm...) which isn't causing trouble with other services.
    So it seems to be a problem connected to this bind-mount-to-external-USB setup...
    I'll try to find out whether there is a way to avoid that.
    Thanks so far!
    Last edited by Carl Karl (2014-09-16 14:30:02)
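    A couple of quick checks to run on the server the next time a client sees the empty directory, to tell whether the export or the underlying disk dropped out (just a sketch):
    dmesg | tail -n 20        # any new EXT4 or USB errors since the last good access?
    exportfs -v               # is the bind-mounted share still listed as exported?
    exportfs -ra              # re-export after the underlying filesystem is back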

  • Error: Could not read Header Mapping in Receiver Agreement

    Dear SAP experts,
    The newly installed SAP G7A PI 7.1 (acceptance box) has already been released to the customer.
    We are doing an initial test (BAT phase), where our end-to-end scenario is: EP Portal application ---> G7A PI 7.1 --> Trading Partner via the AS2 protocol.
    G7A PI is connected to the Trading Partner via the AS2 protocol.
    Our EP Portal application is inactive at the moment, which is why we used the 'manual XI submitter' to send a message from G7A PI itself to the Trading Partner.
    The message was successful in the Integration Engine of G7A PI, and the output message was produced.
    As the message reached the Adapter Engine layer of G7A PI, we encountered a specific error:
    Delivering the message to the application using connection AS2_http://seeburger.com/xi failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: javax.resource.ResourceException: Fatal exception: com.seeburger.xi.connector.queue.TaskBuildException: Could not create CPAObjectMapper: InvocationTargetException caused by Could not read Header Mapping in Receiver Agreement: while trying to invoke the method com.sap.aii.af.service.cpa.Party.getParty() of an object returned from com.sap.aii.af.service.cpa.NormalizationManager.getXIParty(java.lang.String, java.lang.String, java.lang.String), Could not create CPAObjectMapper: InvocationTargetException caused by Could not read Header Mapping in Receiver Agreement: while trying to invoke the method com.sap.aii.af.service.cpa.Party.getParty() of an object returned from com.sap.aii.af.service.cpa.NormalizationManager.getXIParty(java.lang.String, java.lang.String, java.lang.String)
    Basically, "Could not create CPAObjectMapper - InvocationTargetException caused by Could not read Header Mapping in Receiver Agreement"
    I am thinking one possible cause is that we used the manual XI submitter in G7A PI instead of utilizing the actual sending system (the EP Portal application).
    Another possible cause is that the AS2 adapter in G7A is not yet stable, thus causing the issue.
    We had already made a successful connection in GDD PI (development box) during our SIT phase.
    Objects in G7A are a mirror of GDD PI.
    Kindly advise.
    Thanks!
    Gerberto

    Hi,
    It seems that the cause of the error was the unstable state of the PI box, since it was newly installed and there were patches that had not yet been applied.
    Thanks for the support!
    Gerberto

  • Help, could not read properties after setting the RMISecurityManager.

    Hi,
    I am new to RMI and I could not read a properties file when running a class from a jar file with the java -jar option.
    I start the class as follows:
    java -jar darwin_enhanced.jar -Djava.rmi.server.codebase=file:///home/wing/Darwin/Project/release.server/DarwinServer.jar -Djava.security.policy=darwin.server.policy
    I extracted the following code snippet from the main class:
    /* get security manager */
    if (System.getSecurityManager() == null) {
        System.setSecurityManager(new RMISecurityManager());
    }
    /* get properties */
    Properties properties = new Properties();
    try {
        System.out.println("before happy");
        // getResourceAsStream() returns null when the resource cannot be read, and
        // Properties.load(null) then throws the NullPointerException shown below
        properties.load(getClass().getResourceAsStream("/darwinServer.properties"));
        System.out.println("after happy");
    } catch (Exception e) {
        e.printStackTrace();
    }
    If System.setSecurityManager(new RMISecurityManager()) is called before the properties.load call, the program prints
    'before happy' and dumps
    java.lang.NullPointerException
    at java.io.Reader.<init>(Reader.java:61)
    at java.io.InputStreamReader.<init>(InputStreamReader.java:80)
    at java.util.Properties.load(Properties.java:189)
    at com.darwin.server.DarwinServer.startBind(DarwinServer.java:61)
    at com.darwin.server.DarwinServer.main(DarwinServer.java:48)
    If I reverse the order and call the properties.load method before setting the RMISecurityManager,
    /* get properties */
    Properties properties = new Properties();
    try {
        System.out.println("before happy");
        properties.load(getClass().getResourceAsStream("/darwinServer.properties"));
        System.out.println("after happy");
    } catch (Exception e) {
        e.printStackTrace();
    }
    /* get security manager */
    if (System.getSecurityManager() == null) {
        System.setSecurityManager(new RMISecurityManager());
    }
    it works fine and prints out
    'before happy'
    'after happy'
    Thanks,
    Wing
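    One detail worth checking in the failing command above (a guess, not a confirmed diagnosis): with "java -jar", everything after the jar name is passed to main() as a program argument, so -D options placed there never reach the JVM and the security policy is never installed. Moving them in front of -jar, as in the later working test, keeps the policy in effect:
    java -Djava.security.policy=darwin.server.policy \
         -Djava.rmi.server.codebase=file:///home/wing/Darwin/Project/release.server/DarwinServer.jar \
         -jar darwin_enhanced.jar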

    Hi,
    Thanks for your ideas.
    My primary objective is to let the server read property files. I found that I have to set the RMISecurityManager; otherwise, the rebind throws
    java.rmi.UnmarshalException.
    I have changed the testing program according to your inputs.
    void startTest() {
        System.out.println("CodeSource=" + this.getClass().getProtectionDomain().getCodeSource());
        System.out.println("Permission=" + this.getClass().getProtectionDomain().getPermissions());
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new RMISecurityManager());
        }
        Properties properties = new Properties();
        try {
            System.out.println("CodeSource=" + this.getClass().getProtectionDomain().getCodeSource());
            System.out.println("Permission=" + this.getClass().getProtectionDomain().getPermissions());
            properties.load(RMIClassLoader.getClassLoader("file:/").getResourceAsStream("sample.properties"));
            System.out.println("size of properties = " + properties.size());
            properties.load(getClass().getResourceAsStream("/sample.properties"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    I run it as a jar with the following command, with no exception:
    java -Djava.security.policy=java.all.policy -jar simBug.jar
    Output
    CodeSource=(file:/home/wing/try/java/bug.sim/RMISecurityManager/simBug.jar <no certificates>)
    Permission=java.security.Permissions@8d107f (
    (java.lang.RuntimePermission exitVM)
    (java.io.FilePermission /home/wing/try/java/bug.sim/RMISecurityManager/simBug.jar read)
    CodeSource=(file:/home/wing/try/java/bug.sim/RMISecurityManager/simBug.jar <no certificates>)
    Permission=java.security.Permissions@8d107f (
    (java.lang.RuntimePermission exitVM)
    (java.io.FilePermission /home/wing/try/java/bug.sim/RMISecurityManager/simBug.jar read)
    size of properties = 1
    Thanks,
    Wing

  • HP Data Protector: Could not gather changed blocks on disk

    Running HP Data Protector 7, backing up about 30 VMs, without issue until recently. We had an HP switch hang on us while backups were running. Everything was stopped, so we took the time to reboot the switch and all servers, including the VMs and the Data Protector server. Physical machine backups run without issue, and 5 of the 30 VMs run without issue. The other 25 VMs get this error: Could not gather changed blocks on disk (SCSI disk ID). We've tried resetting changed block tracking on the VMs, but that does not help. We have tried both incremental and full backups. Any ideas?

    This is the consumer forum, you need to post on the enterprise forum http://h30499.www3.hp.com/

  • Adobe Media Encoder for CS4 error Could not read from the source

    Hello,
    I get an error when I try to export from Premiere CS4. It doesn't matter how I export. Seems like an easy fix, but I can't figure it out. Any help is appreciated:
    - Source File: C:\DOCUME~1\ARTWHI~1\LOCALS~1\Temp\extra and b roll.prproj
    - Output File: E:\Living Accused Movie Transfers\video\Cindy at table.avi
    - Preset Used: NTSC DV
    - Video:
    - Audio:
    - Bitrate:
    - Encoding Time: 16:10:34
    1/21/2009 9:50:25 PM : Encoding Failed
    Could not read from the source. Please check if it has moved or been
    deleted.
    Thank you
    Art

    When you attempt to encode media with Adobe Media Encoder CS4 on Windows, the following error message appears in the text file (AMEEncodingErrorLog.txt) that opens when you click the error icon: "Encoding Failed. Could not read from the source. Please check if it has moved or been deleted."
    This can happen if you removed an earlier version of Adobe Premiere Pro or Adobe Creative Suite from the same computer.
    Do one or both of the following solutions:
    Solution 1: Create a shortcut to the Premiere Pro executable file, rename the shortcut to Premiere, and move the shortcut to C:\Program Files\Common Files\Adobe\dynamiclink.
    Close all Adobe applications.
    In Windows Explorer, navigate to C:\Program Files\Adobe\Adobe Premiere Pro CS4. (If you installed Premiere Pro CS4 in a location other than the default of C:\Program Files\Adobe, then navigate to your custom installation location.)
    Right-click on Adobe Premiere Pro.exe (which might appear without the .exe extension) and choose Create Shortcut.
    Rename the newly created shortcut to just Premiere.
    Important: The name of the shortcut must be exactly Premiere with no other characters.
    Open a second Windows Explorer window, and navigate to C:\Program Files\Common Files\Adobe\dynamiclink.
    Move the Premiere shortcut that you created into the dynamiclink folder.
    Solution 2: Remove and reinstall all Premiere Pro CS4 components or all Adobe Creative Suite 4 components.
    Do one of the following:
    Windows XP: Choose Start > Control Panel > Add or Remove Programs.
    Windows Vista: Choose Start > Control Panel > Programs and Features.
    In the list of installed programs, select Adobe Premiere Pro CS4, Adobe Creative Suite 4 Production Premium, or Adobe Creative Suite 4 Master Collection.
    Click Change/Remove (Windows XP) or Uninstall (Windows Vista).
    Follow the on-screen instructions to remove all components of Premiere Pro CS4 (including Adobe Encore CS4 and Adobe OnLocation CS4) or to remove all components of your edition of Adobe Creative Suite 4.
    Re-install your Adobe software.

  • Adobe Media Encoder Could not write XMP data in output file.

    Hi there, I have searched for this on the forums;
    if there is a thread on it, please link :)
    This is my encoding log from Adobe Media Encoder
    (see below).
    I'm doing a project from MTS files in Premiere Pro.
    I have checked the box so that no XMP data needs to be added, to the left of the Queue button in the export window.
    Should I delete all the XMP in Bridge or something like that to resolve this?
    Regards
    Jonas Dwight
    Encoding Log:
    - Source File: /Users/Blasuk/Library/Caches/TemporaryItems/Mogel_oe_2014_nyeste.prproj
    - Output File: /Users/Blasuk/Desktop/Video_Projekts/Fremkaldte/Moegeloe2014-hq.mp4
    - Preset Used: Custom
    - Video: 1920x1080 (1,0), 25 fps, Upper, 01:18:52:10
    - Audio: AAC, 320 kbps, 48 kHz, 5.1
    - Bitrate: VBR, 2 pass, Target 15.00 Mbps, Max 41.30 Mbps
    - Encoding Time: 11:05:14
    01/05/2015 07:35:01 PM : File Encoded with warning
    File importer detected an inconsistency in the file structure of Moegeloe2014-hq.mp4.  Reading and writing this file's metadata (XMP) has been disabled.
    Adobe Media Encoder
    Could not write XMP data in output file.

    Any help please?! Thank you!

  • Premiere Pro CS4 Export fails with Media Encoder CS4 "Encoding Failed"," Could Not Read From Source.

    There has never been a history of Adobe CS3 installation on the brand new Mac Pro I am using prior to installing Master Collection CS4, despite other posts claiming that the chief cause of this issue is a previous CS3, beta, or trial installation of previous versions. I simply had a straightforward installation of CS4 and have had trouble since day 1.
    I have uninstalled and reinstalled Master Collection CS4 three times, two times with the clean script for CS4. I tried changing the Adobe cache folder locations to alternative drives. I ran a Shift-start many times to correct any inconsistencies and repaired permissions using Disk Utility repeatedly. All updates have been installed and I am still getting the same error when exporting my timeline to Media Encoder CS4, be it a large high-definition project or a small one consisting of a few stills.
    No matter what presets I choose, the job appears as one of the items in the render queue of Media Encoder CS4 and sits there with the status WAITING until I hit Start Queue. When I do, it still says WAITING under Status, and at the bottom corner of the window it reads: Loading "project name.prproj". After an average of about 6 minutes, a yellow caution sign appears. No render whatsoever takes place. When I click on the yellow caution sign, it shows a log of all errors in connection with each job, including the ones that have gone before it. Here is one:
    Start of the Error Paste
    - Source File:/users/myname/Library/Caches/TemporaryItems/test.prproj
    - Output File:/ users/my name/Documetns/Adobe/Premiere Pro/4.0/Sequence 01.mov
    - Preset used :NTSC DV
    - Video:
    - Audio:
    - Bitrate:
    - Encoding Time: 00:33:01
    Sat Mar 21 17:12:25 2009 : Encoding failed
    Could not read from the source. Please check if it has moved or been deleted.
    END OF ERROR Paste
    Please any gurus out there know how to troubleshoot this damn thing on Mac, I really appreciate it. I have not yet encountered any posts of Mac owners with no previous CS3 installation having this issue. I am in the middle of 3 projects with tight deadlines and need your urgent help. THANK YOU!

    Hello There,
    I resolved the issue through a simple process of elimination. So simple, in fact, that I wonder how I missed it. Since there had been no history of a prior CS3 installation on this machine, and every conceivable trick was applied to resolve the problem (refer to my original post earlier in the thread), ending in frustration, the problem had to reside in a factor or series of factors that were only present on this machine. Of the two remaining causes that differed compared to other machines, I had to look into the possibility of any hardware and/or software conflicts. After all, this very same installation works perfectly fine on my colleague's Apple MacBook Pro and several Windows machines.
    It turns out that I had installed Pro Tools on this machine, which comes with all kinds of optional plug-ins in the so-called Ignition Pack. One of these optional plug-ins is called iZotope (the three varieties include Spectron, Trash and another one that escapes my mind). They were all lite-version licenses that would automatically pop up on their own when you started a program where they had been residing. One such program where an association existed was indeed Premiere. An annoying dialogue box directing you to buy the full version when you started Premiere would refuse to go away. Sometimes it would even start on its own, culminating in a permanent dialogue box.
    The cause of the problem was right before my eyes and I had overlooked it. Once uninstalled using IZOTOPE clean uninstaller on step 4, the problem was resolved. Projects are now loading very fast into the AME and encoding begins in earnest. I have never had any issues ever since.
    Long story short, if you have any plug-ins, especially ones that load upon the start of any program, ones that reside inside Premiere, or ones that start on their own independently, they could be one of the highly probable causes of this error. I have seen other similar posts complaining of plug-in folders inside Premiere that needed to be moved to resolve the issue, for instance ones that needed to be moved to the AE folders. If you are a plug-in admirer, I would suggest looking into them seriously.
    On the side note, many combination and causes may lead to the same error. I am not claiming that this is the solution for everyone.
    It must also be noted that I applied the trick of creating a shortcut of Premiere (named exactly as such) and dropped it into the Adobe Dynamic Link folder (even though this was suggested for Windows), and changed the Adobe cache locations to an alternative folder on a separate drive. I haven't been able to establish conclusively whether the latter steps are also essential in fixing the error on Mac, but since those steps were taken before uninstalling the iZotope plug-ins and I still had issues, uninstalling those plug-ins (or finding a way to move them to a place where they don't interfere, if you absolutely need to keep them) is probably the one essential step that solved the problem.
    Hope this helps.

  • "Could not read from source" exporting on Mac with CS4

    I'm using Premiere CS4 on a MacBook Pro, trying to export a film and getting the  "could not read from source" message.  I did not have CS3, and I'm not on a pc, so the Windows KB articles and posts don't apply.  I've read all the posts I can find on the forums but none of these solutions is helping to date.
    Are there any troubleshooting steps someone can recommend to figure out what I need to do here?
    Here's the trace, any assistance is MUCH appreciated!
    - Source File: /Users/johnchmaj/Library/Caches/TemporaryItems/circle _6.prproj
    - Output File: /Users/johnchmaj/Documents/Personal + Projects/Living Universe/Snoqualmie Valley/Valley Breezes Premiere/1circle:introitus/Valley Breezes 6-41.m2v
    - Preset Used: NTSC High Quality
    - Video:
    - Audio:
    - Bitrate:
    - Encoding Time: 30:44:29
    Wed Aug 19 16:52:56 2009 : Encoding Failed
    Could not read from the source. Please check if it has moved or been deleted.

    I have been having the same issue on my desktop and laptop, although I have Windows XP Professional. I originally had CS3 and then installed CS4. The problem was fixed by uninstalling everything Adobe (even Reader), using wincs3clean, and reinstalling the CS4 products. The first time it didn't work on my laptop, so I did it a second time and it worked. I don't have Creative Suite, but this still works. Here is the link to wincs3clean:
    http://kb2.adobe.com/cps/401/kb401574.html

  • Encore CS4 error "could not read from the source file"

    - Source File: C:\Users\Anil\AppData\Local\Temp\2013 11 08 c Big Budha_9.prproj
    - Output File: F:\Adobe Encoded Blu-ray\For Disk #013\Disk#013-B\2013 11 08 c Big Budha.m2v
    - Preset Used: Full HD 1080i [MPEG] 30 MBPS
    - Video:
    - Audio:
    - Bitrate:
    - Encoding Time: 01:49:53
    25-Jan-14 2:50:32 PM : Encoding Failed
    Could not read from the source. Please check if it has moved or been deleted.
    I have tried creating a NEW Premiere Pro project and still get the same error again... can someone help me figure out what is wrong?

    Hi Neil
    I have been using file names with spaces and it has been working fine. As stated in the reply to Jones, an uninstall and reinstall of Premiere Pro CS4 helped to get rid of this error.
    Also, I did not use Dynamic Link, but was trying to create an MPEG file using Media Encoder and then import it into Encore.
    Your suggestion of H.264 seems good and I will try it.
    x264 Pro is for CS5 and above, so it seems I cannot use it and have to manage with Adobe Media Encoder's defaults. I hope I will not have any major technical issues.
    Please advise on the points below:
    1) I have 4 hours of Full HD footage and made 1080i MPEG-2 files at 30 Mbps to fit on 2 separate Blu-ray Discs.
    2) Now I plan to put it all on one single Blu-ray Disc by reducing the MPEG-2 average data rate to 15 Mbps. Will that work, or is there a time limitation for a Blu-ray Disc? (See the rough arithmetic sketch after this post.)
    3) Will it be OK to create H.264 files at 30 Mbps [as you mentioned, H.264 reduces file size to half of MPEG-2] and still be able to fit 4 hours of footage on one BD?
    4) Even though I specify Min 10, Target 25, Max 30 data rates for MPEG file creation, it creates the file at a data rate of 30 Mbps. Can you explain why this happens, as I thought my target rate should be 25 Mbps and not the maximum rate specified?
    Thanks ...
    PS...
    My source file is .mov 1920x1080i 29.97 fps from a Canon 6D. First I tried to create a progressive file in the Encoder, but when importing into Encore I realised that at 29.97 fps it allows only interlaced files, so I had to transcode back to interlaced and am sticking with the interlaced file.
    The Canon 6D also allows me to shoot at 24 fps [motion movie], which Encore can handle as a progressive file for authoring. What are the pros and cons of shooting at 24 fps, if you can advise?
    Message was edited by: AnilHVarma
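    On question 2 above, a rough capacity check (assumptions: 4 hours of footage, 15 Mbit/s average video rate, audio and muxing overhead ignored, decimal gigabytes):
    # 4 h * 3600 s/h * 15 Mbit/s = 216000 Mbit; 216000 Mbit / 8000 Mbit-per-GB = 27 GB
    echo $(( 4 * 3600 * 15 / 8000 ))    # prints 27
    # ~27 GB will not fit on a 25 GB single-layer disc; roughly 13 Mbit/s average
    # (or a 50 GB dual-layer disc) would be needed for 4 hours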
