Resurrecting split sound files

This question has been answered in various forms, but I need a little additional information.
It has been asked previously how to restore files that have been split into data and resource forks. I have been able to restore Word documents and older QT Moov files by simply changing the extension on the data file.
Since I couldn't get AppleTalk to communicate over Ethernet between OS 9 and OS X 10.3, I copied the files off my old PowerMac with a Zip drive, copied them to a PC, and loaded them onto a thumb drive to transfer to the iBook. I know, I know, two steps more than I needed.
I have been able to re-combine data and resource forks for .wav, .snd, and .au files by copying both forks to a thumb drive and then copying the merged file back to my hard drive.
But once I copy them back, I can't find a sound format that will play: .au and .snd files won't play through QuickTime, and .wav files won't play through iTunes. Any idea how to resurrect these files?
I've tried to follow the RsyncX procedures with no luck. If there's a step-by-step procedure out there for bringing sound files back together and getting them to play, a posted link would be greatly appreciated.
Thanks!

Thank you!
I was able to use Audacity's "Import raw data" to bring them back to life.
A couple are indeed corrupted; not much I can do about those.
I tried opening the .snd and .wav files directly in Audacity, and the program automatically suggested the "Import Raw Data" command. It took a couple of tries for each file to get the correct sampling rate, but the process really isn't that difficult (a rough scripted sketch of the same idea follows below).
The help is much appreciated!
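
For anyone who would rather script the same trick than click through Audacity, here is a minimal Python sketch of what "Import Raw Data" effectively does: wrap the headerless PCM from the data fork in a WAV header. The file names, sample rate, sample width, and channel count below are assumptions you would adjust by trial and error, exactly as in Audacity.

    import wave

    # Read the recovered data fork (headerless PCM). The file name is a placeholder.
    with open("voice01", "rb") as f:
        pcm = f.read()

    # Wrap it in a WAV container. These format values are guesses; try other
    # rates/widths if the result plays too fast, too slow, or as noise.
    with wave.open("voice01.wav", "wb") as out:
        out.setnchannels(1)        # assume mono
        out.setsampwidth(2)        # assume 16-bit samples
        out.setframerate(22050)    # assume 22.05 kHz; 11025 and 44100 are also common
        out.writeframes(pcm)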

Similar Messages

  • Split large sound files

    Is there an easy way to split large sound files?
    Thank you

    You can use an audio editor such as this one: Audacity
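    If you would rather script it than do it by hand, here is a rough sketch in plain Python (no extra packages) that chops a long WAV into fixed-length pieces; the chunk length and file names are arbitrary choices for illustration.

        import wave

        CHUNK_SECONDS = 600  # ten-minute pieces; pick whatever length suits you

        with wave.open("long_recording.wav", "rb") as src:
            params = src.getparams()
            frames_per_chunk = CHUNK_SECONDS * src.getframerate()
            part = 1
            while True:
                frames = src.readframes(frames_per_chunk)
                if not frames:
                    break
                with wave.open(f"part_{part:02d}.wav", "wb") as dst:
                    dst.setparams(params)    # copy rate, channels, sample width
                    dst.writeframes(frames)  # header length is fixed up on close
                part += 1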

  • Sound file and custom play.

    Hello, I have a sound file and want to play parts of this file, not the whole sound file every time, in my application.
    E.g. I want to play from 1:23 to 1:29. Can I do this in an iPhone app, and how?
    Thanks in advance!

    Yes, I know that way, but I want to have one big sound file and keep some pointers in an array with the seconds I want. Depending on user actions the whole sound file is usable, but different user actions will play different parts of it. Is there a way of doing that without splitting the file into pieces? (A rough sketch of this bookkeeping follows after this post.)
    A similar example can be found in YouTube comments when someone writes a time point, e.g. 3:25 of the video: it turns into a link, and when it's pressed you are automatically taken to the 3:25 mark of the video. I tried to give you a related usage; I don't know if that makes clear what I mean...
    thanks anyway!
    Message was edited by: ZaaBI_AlonSo
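    Setting the iPhone specifics aside, the bookkeeping described above (one big file plus a table of second markers) is easy to prototype. This is not iPhone code, just a rough Python illustration with made-up file names, reading only the wanted slice out of the single big file; on the device the same seconds-to-frames arithmetic would drive a seek-to-start / stop-at-end approach on whatever player is used.

        import wave

        segments = {"intro": (83, 89)}   # 1:23 to 1:29, expressed in seconds

        def read_segment(name, src="big_sound.wav"):
            start_s, end_s = segments[name]
            with wave.open(src, "rb") as w:
                rate = w.getframerate()
                w.setpos(start_s * rate)                       # jump to the start frame
                return w.readframes((end_s - start_s) * rate)  # frames for that slice only

        clip = read_segment("intro")   # hand these frames to whatever plays audio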

  • Splitting multichannel files (5 or more channels) to mono

    So why doesn't Logic allow me to import and split out multichannel AIFFs? I can create them with Max/MSP or Sound Studio, a freeware app, but then I can't edit them, since Logic only works with stereo or mono. I am also having a hard time finding a utility that does this for me offline. Any suggestions? I do notice that Nuendo can handle and split multichannel files.

    One program that does do this is
    Sample Manager
    http://www.audiofile-engineering.com/sample_manager.php
    This is a handy app. Having multiple channels interleaved is so handy, and it's been a part of the AIFF format for years. But why doesn't any app take advantage of this, or at least support this aspect of the format? Anyway, the trial period is 15 days for this program, enough to convert my files. If I like it I may buy it; I think it is around $70. If anyone knows of a freeware app that does the same thing, please tell.
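    If a scripted route is acceptable, splitting an interleaved file into mono files is also only a few lines of Python with the third-party soundfile package (my suggestion, not something mentioned above); file names are placeholders.

        import soundfile as sf

        # assumes an interleaved multichannel file; data has shape (frames, channels)
        data, rate = sf.read("surround.aiff")
        for ch in range(data.shape[1]):
            # write each channel out as its own mono file
            sf.write(f"surround_ch{ch + 1}.wav", data[:, ch], rate)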

  • How do I bounce two mono split stereo files to one mp3?

    Might sound like a stupid question, but I have several problems:
    I bounced a WaveBurner mix of my favorite songs as a split stereo file and inserted the two files into two Logic mono audio tracks.
    First problem: do I have to pan the left audio track left and the right one right, or do I keep them centered?
    Second problem: the two tracks are louder than the original track (no matter if they are panned or centered). Shouldn't the original have the same volume at my output channel?
    Third problem: is it possible to bounce more than 2 GB with Logic 8? The mix is very long, and when I want to bounce it, Logic tells me that it is too big. Can I bounce larger files with Logic 9 under OS 10.6?

    Felix Bartelt wrote:
    Last question remaining: can I bounce larger files under Logic 9 and OS 10.7, or does L9 also have the 2 GB restriction?
    No and no. The file size restriction is not Logic's restriction; it is a restriction of the file type itself. To be able to bounce longer and/or bigger files, use .caf.
    From the manual:
    AIFF: The AIFF file format cannot handle audio file recordings larger than 2 GB:
    For 16-bit, 44.1 kHz stereo files, this equals a recording time of about 3 hours and 15 minutes.
    For 24-bit, 96 kHz, 5.1 surround files, this equals a recording time of about 20 minutes.
    WAVE (BWF): The WAVE file format cannot handle audio file recordings larger than 4 GB:
    For 16-bit, 44.1 kHz stereo files, this equals a recording time of about 6 hours and 30 minutes.
    For 24-bit, 96 kHz, 5.1 surround files, this equals a recording time of about 40 minutes.
    CAF: If the size of your recording exceeds the above limits, choose the CAF (Apple Core Audio Format) file format, which can handle the following recording times:
    About 13 hours and 30 minutes at 44.1 kHz
    About 6 hours at 96 kHz
    About 3 hours at 192 kHz
    The bit depth and channel format (mono, stereo, or surround) do not affect the maximum recording size of CAF files.
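    The quoted times follow directly from the arithmetic: bytes per second = sample rate x bytes per sample x channels, and the format's size limit divided by that gives the maximum length. A quick Python check (using a binary 2 GiB / 4 GiB limit; the manual's rounder figures suggest it may count decimal gigabytes instead):

        def max_minutes(limit_bytes, rate_hz, bytes_per_sample, channels):
            bytes_per_second = rate_hz * bytes_per_sample * channels
            return limit_bytes / bytes_per_second / 60

        print(max_minutes(2**31, 44_100, 2, 2))   # 16-bit 44.1 kHz stereo: ~203 min (~3 h 20 m)
        print(max_minutes(2**31, 96_000, 3, 6))   # 24-bit 96 kHz 5.1: ~21 min
        print(max_minutes(2**32, 44_100, 2, 2))   # WAVE's 4 GB limit, stereo: ~406 min (~6 h 45 m)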

  • Play button for sound files in the browser..?

    I wonder if there's a setting (or something) in Finder so it displays a play button for sound files the same way as it does for video files?
    -- peer

    App Producks wrote:
    That sounds broken; sad no one replied to you two years ago.
    Actually, it sounds as if it was working correctly. Mute only mutes notifications (phone rings, texts, email, etc.). What is less explicable is why you felt the need to resurrect this thread.

  • How can I transfer a sound file from my "Voice Memos" app on my iPhone to my iPad?

    How can I transfer a sound file from my "Voice Memos" app on my iPhone to my iPad?

    In iTunes, with your iPhone connected, click on the iPhone device, select the Apps tab, scroll down to the File Sharing section, and pick an app that plays QuickTimes ("Files Connect" works in this example, but I'm open to suggestions of better apps for playing transferred QuickTimes), then drag your QuickTime from the Finder to the Documents pane for the File Sharing app. Once the file transfer is complete, go to the app on your iPhone to play the QuickTime.

  • How do I get a handle on embedded sound files?

    The Sound class information says to use the SoundMixer class to handle embedded sound files.  I have two sound files embedded, which I have set up in two separate layers, starting at frame one in the main timeline.  I needed to do this so I could see the wave files and coordinate text with the waves.  I do not want to load these files into the .swf file at runtime using URLRequest.  How do I get a handle on those as they exist, to make each controllable by separate volume and mute controls for each sound?
    This will be a challenging question, because "it can't be done" doesn't work for me.  I managed to create a way to use an external class file to control the main timeline, the ROOT timeline, without having to create a sub-movie to root.  I can use my component to call play(); as though it were code in a frame.  But it isn't; it's in an external class file.  I passed root to the class file and told the class file to treat it as a MovieClip - that put the handle on it.  I tried a similar way with root as a Sound, but that isn't detailed enough - I need to get a handle on the frame that contains the embedded sound file.  I embedded it, attached it, using the properties view for the frame.
    I've attached, or whatever you want to call it, these sound files to a frame, and this frame is (or should be) attached to the layer I've created.  So, under the assumption that ROOT has everything attached to it in some manner (it is, after all, the foundation for the COM), and since the stage of the root contains the visual components, ROOT has to have the layer objects attached to it, which should have the frame objects attached to the layers.  I have two layers that each have a .wav file attached to frame one.  Somehow Flash keeps track of that - I want to know how Flash does it so I can read what Flash reads.
    If this seems redundant, it probably is.  I want to paint the best picture I know how so I can get detailed feedback.  Please, if you have questions ask them so we can clarify, and get this resolved!  Thanks for reading!

    lol.
    OK, it can be done. Keep working on it.

  • How can I use a sound file on one slide, which fades to another sound file on the next?

    I am trying to make a slide show that begins with a certain sound file, which I then hope to fade out to the second slide, which uses a different sound file. I've followed Apple's online tutorials, but what happens is I get two sound files playing simultaneously. Also, I'd like to add sound files to different individual slides throughout the presentation, which I want to play independently of the entire slide show's soundtrack. I have programmed the slides to change automatically. Is there any way I can have multiple sound files on multiple slides all playing independently, without playing over the top of the previous sound file? I imagine this is a simple task, so my apologies if this is the case. Thanks in advance for any help!
    kind regards,

    Getting sound to work right in Keynote requires a basic understanding that:
    1. Sound played during slide transitions (from one slide to the next) must be placed in the Document Inspector > Audio > Soundtrack Well - there can only be one audio file and it plays across all slides
    2. Additional sound files can be placed on individual slides as sound objects - each object can have build-in and build-out assignments, but sound files need at least a Build In > Start Audio (done in the Build Inspector). Each object, whether a shape, image, sound, or movie, can have discrete start timing, such that the start of one sound object can be delayed X seconds after the previous event. This can be used to avoid overlapping starts or playback. If you know the length of each sound file object, you can adjust the start to occur after the end of the previous sound using the Build Inspector. Click the More Options button to display the event build-order list, then click an event to reveal how the build starts and add any delay to the start there.
    3. Keynote does not have a Fade Feature - any fade-in or fade-out (sound up/sound down) of audio files must be done to the sound file using third party applications and then brought into Keynote. If you'd like to join the crowd wishing Apple might add this feature in the future, you can provide feedback to Apple here:
    http://www.apple.com/feedback/
    4. If you need an audio file to play across a few slides and another audio file to play across another few slides, break the presentation into separate documents, place the audio in the soundtrack well for each document, and then add a hyperlink object to connect to the next Keynote document in the series. Note: all files must be "opened" in the background and ready to play, or else the next file will attempt to start but stall just when you don't need it to.
    Hope this helps.

  • How do I add a sound file to my iPad app?

    I know absolutely nothing about working with sound files in Cocoa, so it would be great if I could get some advice on the following:
    -What type of file should I be using? (I used QuickTime to record the sound clip, and that made it a .mov - should I rerecord it with something else?)
    -Once I have the file ready to go, how do I get it into my xib file so the user can press a button and the app will play the sound?
    Thank you!

    You need to know how to add a framework and resources to your project, and how to wire up a button action in IB.
    I use .wav files, but only because they're compatible and common for my needs. File names are case-sensitive with iOS, so be sure they match your code.
    http://www.edumobile.org/iphone/ipad-development/playing-audio-file-in-ipad/

  • How do I "join" an mp4 sound file and a 'photo that I can upload  to YouTube?

    Thanks.

    Drag the photo to your project timeline. Now drag the sound file directly onto the photo thumbnails - a green bar will appear under the thumbnails. You can drag this bar back and forth to position it correctly.
    Double-click on the photo to open the Inspector and change the duration to the length of your sound file. Finally, drag the end of the sound file to the end of the photo thumbnails, so that the full sound file plays in the project. Change the photo duration again if necessary.
    If you click on the icon to the left of the Project thumbnail slider, the sound file waveforms will appear within the green bar. You can drag fade markers inwards at either end of the waveforms to create a fade in or fade out effect.
    John
    Message was edited by: John Cogdell
    NOTE: I'm assuming that you are using iMovie '11, although similar procedures apply to iMovie '08 and '09 (with the main exception being waveforms).

  • How do I convert a Simple Sound file to a .WAV file?

    Since I have a Mac Mini (w/Intel CPU) I don't have a built-in microphone.
    I do, however, have an OS 9.1 Wall Street PowerBook with a built-in mike.
    I'd like to send voicemail to friends who use Windows XP.
    I used Simple Sound to record a message on the PowerBook and e-mailed it to myself. Now what? iMovie HD will apparently (according to Pogue) convert the sound portion of a movie to .WAV, but when I open a New Project in iMovie HD my sound file is ghosted.
    Is there an app in Tiger that will convert the sound (voice) file? Or perhaps some freeware app out there?

    Hi, gwgoldb.
    USB microphones are very inexpensive. I'd suggest getting one of those and simply doing the recording on your new Mac Mini.
    For recording:
    • You could use the previously-suggested freeware Audacity to record and save the recording as .wav. Note Audacity isn't a Universal Binary at this time, so it would run under Rosetta.
    • Use QuickTime Pro for recording and converting (export) the recording to .wav. Some people balk at paying US$29.99 for a QuickTime Pro key, but I find it does a nice job in basic recording, converting audio/video, and basic A/V editing, all with the simplicity of the QuickTime interface.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
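    If you'd rather script the conversion, and assuming you can first get the recording into a format libsndfile understands (an AIFF export from QuickTime, for instance), the conversion itself is tiny in Python with the third-party soundfile package; file names are made up.

        import soundfile as sf

        data, rate = sf.read("voicemail.aiff")   # any libsndfile-readable input
        sf.write("voicemail.wav", data, rate)    # format inferred from the .wav extension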

  • Is there a way to change the default icon for a sound file in KN2?

    I need to include dozens of sound files on each slide that will trigger as text builds. I can handle all the transitions fine, but I cannot figure out how to change the standard icon for the sound file.
    I would prefer to have the text be the icon, or, if that is not possible, to have a smaller icon for the file. As I build the page I am ending up with a huge mess, with icons covering text; also, there is not an intuitive way to tell which sound file is which without clicking/playing each file.
    powerbook g4 Mac OS X (10.4.4)

    Did you try:
    OracleBI/web/app/res/s_.../popbin/
    EX:
    line.pcxml Look for <SeriesDefinition ...
    We modified the line width and a few other things. You'll have to modify all the pcxml files for all the chart types you want to customize.

  • How do I transfer a sound file via TCP?

    I have a .wav file that I'm trying to transfer via TCP.  Using LV 8.5, I modified the "Sound File to Output.vi" example to send the data across a TCP connection to a client VI.  But I've encountered numerous errors along the way trying to convert the sound data, which is a 1-D array of waveform (double).  I've attached my server and client VIs, but essentially, what I tried to do is break down the array into the y and dt components, send those over the TCP connection, rebuild the waveform client-side, and then play the waveform.  Is there something I'm missing?  Do I need the timestamp information as well, and to send that over too?  Please let me know how this can be accomplished.  Thanks!
    Attachments:
    Streaming Music - Server.vi 97 KB
    Streaming Music - Client.vi 65 KB

    One thing to clarify: while the Sound Output Write does not need the dt information, the dt information would be required when you use the Configure VI in order to set up the rate.  However, you only need to send that parameter once. Thus, it would seem to me that you want to change your client code so that it checks whether it receives a special command or something to indicate the start of a new song that may have been sampled at a different rate. If that is the case, then you can reconfigure the sound output. Otherwise, all that you're listening for is just song data that you pass along to the Sound Output Write.
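    The attached VIs are LabVIEW, so they can't be reproduced in text here, but the idea in the reply (send the configuration once, then stream only sample data) looks roughly like this language-neutral Python sketch; the host, port, and file names are invented for illustration. Run serve() in one process and receive() in another.

        import socket, struct, wave

        def serve(path="song.wav", host="127.0.0.1", port=5005):
            with wave.open(path, "rb") as w, socket.create_server((host, port)) as srv:
                conn, _ = srv.accept()
                with conn:
                    # one-time configuration: rate, channels, bytes per sample (the "dt" part)
                    conn.sendall(struct.pack("!III", w.getframerate(),
                                             w.getnchannels(), w.getsampwidth()))
                    while True:
                        frames = w.readframes(4096)   # after that, just stream frame data
                        if not frames:
                            break
                        conn.sendall(frames)

        def receive(host="127.0.0.1", port=5005, out="copy.wav"):
            with socket.create_connection((host, port)) as conn, conn.makefile("rb") as stream:
                rate, channels, width = struct.unpack("!III", stream.read(12))
                with wave.open(out, "wb") as w:
                    w.setframerate(rate)      # configure once from the received header
                    w.setnchannels(channels)
                    w.setsampwidth(width)
                    while True:
                        chunk = stream.read(65536)
                        if not chunk:
                            break
                        w.writeframes(chunk)  # here the data is written to disk; a real
                                              # client would hand it to the sound output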

  • Problem, trying to play many sound files

    hello,
    I did not find a specific answer to my question in most of the archives and so I am posting here.
    My problem is that I have to play a sound when the mouse moves over an object and another sound when the same object is clicked.
    There are many such items on the screen, so the expectation is that when the mouse-entered event from the mouse listener is fired, the sound must play.
    But what really happens is that when the mouse moves over the first object, the sound plays all right, yet no sound is played when the mouse click happens.
    I have properly loaded the sound files and I am sure that the code is right.
    In fact, the major thing that has frustrated me is that if I move the mouse onto an object (an icon) the sound is properly played, and if I wait for a long time and then move the mouse onto another icon, the other sound is played as well. But if I move the mouse quickly from one object to another, no sound is played for the second object. And if I click the mouse on an icon after a long halt, even the click sound plays.
    What could be the problem? Why is it that sounds only play when the actions are done after long halts?
    Is it an issue of Java performance? Should I try to do the sound playing in different threads for each icon's mouse events?
    I also tried to stop the clip in the mouse-exit method to make sure that there is no sound left playing.
    This game depends a lot on mouse movements with sounds, so I have to get the sounds playing at the same time the mouse moves over the objects.
    I use AudioClip to load my sounds, with the method in the Applet class.
    I really don't know about the latest Java media API and would like to know if that is what I must use for my task, or if AudioClip is OK for me.
    thanks
    Krishnakant.

    I don't know exactly, but I'm not sure whether applets support playing several files at one time.
    Have you tried to do that with javax.media or with javax.sound.sampled?
    Or could you post the code?
    R. Hollenstein
