Audio Video Transmitting and Receiving

Can anybody help me with the following task?
I am doing a networking project where I need to send video files from
one system and play them on another system using RTP, without storing them.
I have seen some JMF example code for transmitting and receiving video, but the
problem is that the audio and video streams of a video file are not both properly
received or played back on the receiving system, i.e. for some files only the audio comes through, and for
some only the video.
I need some code so that both streams can be played together.
Can anybody suggest how I can implement this properly?
It's urgent...

Genius-java wrote:
It's urgent...
In that case, it's probably too late to reply anyway. So sorry. Better luck next time!
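For the record, the stock JMF approach is the AVTransmit2/AVReceive2 pair from the Sun solutions page, which puts audio and video on two separate RTP sessions and starts a Player for each stream that arrives, so both play together. Below is a rough sketch of the receiving half; the port numbers and the loopback sender address are placeholder assumptions, not values from the original examples:

import java.net.InetAddress;
import javax.media.Manager;
import javax.media.Player;
import javax.media.protocol.DataSource;
import javax.media.rtp.RTPManager;
import javax.media.rtp.ReceiveStreamListener;
import javax.media.rtp.SessionAddress;
import javax.media.rtp.event.NewReceiveStreamEvent;
import javax.media.rtp.event.ReceiveStreamEvent;

// One RTPManager per session: video on one port pair, audio on another.
// A Player is created for every stream that shows up, so the audio and
// video tracks of the same file end up playing at the same time.
public class DualReceiver implements ReceiveStreamListener {

    public static void main(String[] args) throws Exception {
        DualReceiver r = new DualReceiver();
        r.listen(42050); // video session (assumed port)
        r.listen(42052); // audio session (assumed port)
        Thread.currentThread().join(); // keep the receiver alive
    }

    void listen(int port) throws Exception {
        RTPManager mgr = RTPManager.newInstance();
        mgr.addReceiveStreamListener(this);
        mgr.initialize(new SessionAddress(InetAddress.getLocalHost(), port));
        // Placeholder target; in AVReceive2 this is the sender's address.
        mgr.addTarget(new SessionAddress(InetAddress.getByName("127.0.0.1"), port));
    }

    public void update(ReceiveStreamEvent ev) {
        if (ev instanceof NewReceiveStreamEvent) {
            try {
                DataSource ds = ((NewReceiveStreamEvent) ev)
                        .getReceiveStream().getDataSource();
                Player p = Manager.createRealizedPlayer(ds);
                p.start(); // one player per incoming track
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}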

Similar Messages

  • How to send files like audio, video, images and text via RMI

    Hi everyone,
    As I am working on a project of my own, creating a chat program, I've thought of making it capable of doing everything that MSN or Yahoo Messenger can do. So far I've only been able to send messages and some small icons as emoticons. My next step will be making my program able to send other files, such as audio, video, images and text, to the person on the other machine with whom I'm chatting. As I don't have any idea how to start, I'd like anyone who thinks he/she can help to give me the basic logic used to do so. I would very much appreciate it. I've used vectors to store the text messages, which are visible to all users of the chat program, enabling them to see the various messages.
    thank you...
    Jay

    Hi,
    Now I'm stuck because the code doesn't seem to work well: large files, around 40 MB or more, can't be sent. I have constructed the code, just a rough sketch, as follows:
    ** In the server implementation class I've used FileInputStream to read the contents of the file that is passed as an argument to the method.
    ** Similarly, on the client side I've used RandomAccessFile to save the received array of bytes.
    public void sendFile(File f) throws Exception {
        ChatServer cs = (ChatServer) Naming.lookup("rmi://localhost/ChatServer");
        // In the server implementation the contents of the file are read and
        // saved in a byte array; later a method is invoked by the client to
        // get that array.
        cs.readsAndStoreTheFileInTheServer(f);
        // When a client receives this word, a JComponent with Accept and
        // Cancel buttons is constructed, from which the other clients can
        // save or cancel the sent file.
        cs.message("-Accept-");
    }
    For small files this code works well, but for files of more than 40 MB it is useless. I wonder if there's any other alternative.
    regards,
    Jay
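    One common fix, sketched below, is to push the file in fixed-size chunks instead of one giant byte array, so neither JVM ever holds the whole 40 MB at once. The FileReceiver interface and its method names here are made up for illustration; only the chunking pattern matters:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface: the receiving side appends each chunk
    // to a RandomAccessFile (or similar) as it arrives.
    interface FileReceiver extends Remote {
        void beginFile(String name, long totalLength) throws RemoteException;
        void writeChunk(byte[] chunk, int length) throws RemoteException;
        void endFile() throws RemoteException;
    }

    class ChunkedSender {
        private static final int CHUNK_SIZE = 64 * 1024; // 64 KB per RMI call

        // Reads the file locally and pushes it chunk by chunk, so neither
        // side ever materializes the full file in a single array.
        static void sendFile(File f, FileReceiver receiver) throws IOException {
            receiver.beginFile(f.getName(), f.length());
            try (FileInputStream in = new FileInputStream(f)) {
                byte[] buf = new byte[CHUNK_SIZE];
                int n;
                while ((n = in.read(buf)) > 0) {
                    receiver.writeChunk(buf, n);
                }
            }
            receiver.endFile();
        }
    }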

  • Problem with the examples of Transmitting and Receiving Custom RTP Payloads

    I have tried the examples from this page:
    http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/CustomPayload.html
    Transmitting and Receiving Custom RTP Payloads
    I can run the examples fine.
    But I want to transmit the sound using my own format, so I want to change the files PcmPacketizer.java
    and PcmDepacketizer.java.
    I think the sound data is in the byte[] inData, i.e. byte[] inData = (byte[]) inBuf.getData();
    so I transform the data with my own function, which means inData ends up with a different length.
    Then I transmit the data with the packet header:
    public synchronized int process(Buffer inBuf, Buffer outBuf) {
        int inLength = inBuf.getLength();
        // my change: run the input through my own function, so inData
        // no longer has the same length as inLength
        byte[] inData = enbase((byte[]) inBuf.getData());
        byte[] outData = (byte[]) outBuf.getData();

        if (outData == null || outData.length < PACKET_SIZE) {
            outData = new byte[PACKET_SIZE];
            outBuf.setData(outData);
        }

        // Generate the packet header.
        int rate = (int) inFormat.getSampleRate();
        int size = (int) inFormat.getSampleSizeInBits();
        int channels = (int) inFormat.getChannels();
        outData[0] = 0; // filler
        outData[1] = (byte) ((rate >> 16) & 0xff);
        outData[2] = (byte) ((rate >> 8) & 0xff);
        outData[3] = (byte) (rate & 0xff);
        outData[4] = (byte) inFormat.getSampleSizeInBits();
        outData[5] = (byte) inFormat.getChannels();
        outData[6] = (byte) inFormat.getEndian();
        outData[7] = (byte) inFormat.getSigned();

        int frameSize = inFormat.getSampleSizeInBits() * inFormat.getChannels();

        // Recompute the output format if the input format has changed.
        // The crucial info is the frame rate and size. These are used
        // to compute the actual rate at which the data is being sent.
        if (rate != (int) outFormat.getFrameRate() ||
                frameSize != outFormat.getFrameSizeInBits()) {
            outFormat = new AudioFormat(CUSTOM_PCM,
                    AudioFormat.NOT_SPECIFIED, // rate
                    AudioFormat.NOT_SPECIFIED, // size
                    AudioFormat.NOT_SPECIFIED, // channel
                    AudioFormat.NOT_SPECIFIED, // endian
                    AudioFormat.NOT_SPECIFIED, // signed
                    size * channels,           // frame size
                    rate,                      // frame rate
                    null);
        }

        if (inLength + historyLength >= DATA_SIZE) {
            // Enough data for one packet.
            int copyFromHistory = Math.min(historyLength, DATA_SIZE);
            System.arraycopy(history, 0, outData, HDR_SIZE, copyFromHistory);
            int remainingBytes = DATA_SIZE - copyFromHistory;
            System.arraycopy(inData, inBuf.getOffset(),
                    outData, copyFromHistory + HDR_SIZE, remainingBytes);
            historyLength -= copyFromHistory;
            inBuf.setOffset(inBuf.getOffset() + remainingBytes);
            inBuf.setLength(inLength - remainingBytes);
            outBuf.setFormat(outFormat);
            outBuf.setLength(PACKET_SIZE);
            outBuf.setOffset(0);
            return INPUT_BUFFER_NOT_CONSUMED;
        }

        if (inBuf.isEOM()) { // last packet
            System.arraycopy(history, 0, outData, HDR_SIZE, historyLength);
            System.arraycopy(inData, inBuf.getOffset(),
                    outData, historyLength + HDR_SIZE, inLength);
            outBuf.setFormat(outFormat);
            outBuf.setLength(inLength + historyLength + HDR_SIZE);
            outBuf.setOffset(0);
            historyLength = 0;
            return BUFFER_PROCESSED_OK;
        }

        // Not enough data for one packet. Save the remainder for next time.
        System.arraycopy(inData, inBuf.getOffset(), history, historyLength, inLength);
        historyLength += inLength;
        return OUTPUT_BUFFER_NOT_FILLED;
    }
    Since I changed the data with my own function enbase(), I should decode it in PcmDepacketizer.java,
    but in PcmDepacketizer.java the example is so simple that I don't know where to find and change the data.
    There are only a few lines:
    Object outData = outBuf.getData();
    outBuf.setData(inBuf.getData());
    inBuf.setData(outData);
    outBuf.setLength(inBuf.getLength() - HDR_SIZE);
    outBuf.setOffset(inBuf.getOffset() + HDR_SIZE);
    System.out.println("the outBuf length is " + inBuf.getLength());
    I have written a function, public static byte[] debase(byte[] str),
    but I don't know where I can call it.
    Please tell me what I should do, or where my thinking has gone wrong.

    The function in PcmPacketizer.java is:
    public static byte[] enbase(byte[] b) {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        // Process the input three bytes at a time, emitting four output
        // values per group; -1 marks "no byte" at the end of the input.
        for (int i = 0; i < (b.length + 2) / 3; i++) {
            short[] s = new short[3];
            short[] t = new short[4];
            for (int j = 0; j < 3; j++) {
                if ((i * 3 + j) < b.length)
                    s[j] = (short) (b[i * 3 + j] & 0xFF);
                else
                    s[j] = -1;
            }
            t[0] = (short) (s[0] >> 2);
            if (s[1] == -1)
                t[1] = (short) ((s[0] & 0x3) << 4);
            else
                t[1] = (short) (((s[0] & 0x3) << 4) + (s[1] >> 4));
            if (s[1] == -1) {
                t[2] = t[3] = 64; // padding value
            } else if (s[2] == -1) {
                t[2] = (short) ((s[1] & 0xF) << 2);
                t[3] = 64; // padding value
            } else {
                t[2] = (short) (((s[1] & 0xF) << 2) + (s[2] >> 6));
                t[3] = (short) (s[2] & 0x3F);
            }
            for (int j = 0; j < 4; j++)
                os.write(t[j]);
        }
        return os.toByteArray();
    }
    It works just like the Base64 algorithm, with 64 as the padding value.
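    For what it's worth, here is a rough sketch of how the depacketizer's process() could call debase(). This is only a sketch, under the assumption that debase() is the exact inverse of enbase(); note that the buffer-swap trick from the original example stops working once encoding changes the payload length, so the payload has to be copied out and decoded explicitly:

    public synchronized int process(Buffer inBuf, Buffer outBuf) {
        byte[] inData = (byte[]) inBuf.getData();

        // Strip the 8-byte header, then decode the remaining payload.
        int payloadLength = inBuf.getLength() - HDR_SIZE;
        byte[] encoded = new byte[payloadLength];
        System.arraycopy(inData, inBuf.getOffset() + HDR_SIZE,
                encoded, 0, payloadLength);
        byte[] decoded = debase(encoded); // assumed inverse of enbase()

        outBuf.setData(decoded);
        outBuf.setOffset(0);
        outBuf.setLength(decoded.length);
        return BUFFER_PROCESSED_OK;
    }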

  • Confirm on - Scratch-resistant glass, oleophobic coating, Adobe Flash HTML, Radio Stereo FM, Java MIDP emulator, Scratch-resistant glass back panel, Audio/video player and editor, TV Out, Document editor (Word, Excel, PowerPoint, PDF)

    Kindly confirm the following features of the iPhone 4S and advise me accordingly.
    Confirm: scratch-resistant glass, oleophobic coating, Adobe Flash HTML, stereo FM radio, Java MIDP emulator, scratch-resistant glass back panel, audio/video player and editor, TV out, document editor (Word, Excel, PowerPoint, PDF).

    You can easily compare any Nokia devices using the web sites, here is a comparison of the N8 from the Nokia UK site:
    http://www.nokia.co.uk/gb-en/products/compare/?action=productcompareaction&site=64060&products=23301...
    And similar from Nokia Developer:
    http://www.developer.nokia.com/Devices/Device_specifications/Comparison.xhtml?dev=Lumia_800,N8-00
    Some information may be incomplete at present, since some device details were kept secret until the final moment of the launch, so the pages were prepared without all of the data.
    Nokia Maps is available for Windows Phone at launch.
    Multi-Touch(TM) is the registered trademark of another company, however Windows Phone does feature the common touch features such as swiping, pinch-zooming and so on.

  • 2010 iMac 2.93 i7 27" or 2011 2.8 i7 21.5" - which will suit better long-term? I do a fair amount of audio, video editing and Photoshop, but I have a limited budget. What's going to give the best bang for the buck for the next 3-5 yrs?

    I do a fair amount of audio, video editing and Photoshop, but I have a limited budget. What's going to give the best bang for the buck for the next 3-5 years? My current machine is a 13" MacBook unibody 2.4 Core 2 Duo with 4 GB of RAM, so it's time to move forward with more power and screen real estate!

    Hello, Jeff
    I could never edit on a 13" screen. I'm currently using a 17" MBP i7 (Early 2011) as a fast replacement for my aged 20" Intel iMac.
    The two systems are not that far apart on specs, and you will find that processing HD video relies heavily on the read/write speed of your storage. Myself, I'd be eyeballing the 2011 for the Thunderbolt port, so HD video export/compression doesn't take forever! Currently, processing a finished HD project for DVD uses at most 20% of my total CPU capacity; the FW800 drive is the big bottleneck! (I know I need at least a RAID to see a real speed boost.)
    How good your eyes are and your usage style will dictate whether the difference in screen size makes a difference to you.

  • Transmitting and Receiving an audio file in a web server

    Hello all,
    How do I transmit an audio file from a web server as requested by a web-based thin client (JSP)?
    Rgds,
    Seetesh

    If I were to use the Transmitter and Receiver programs:
    Transmitter
    http://java.sun.com/products/java-media/jmf/2.1.1/solutions/RTPTransmit.html
    Receiver
    http://java.sun.com/products/java-media/jmf/2.1.1/solutions/AVReceive.html
    Can the same be written as either a servlet or a JSP so that it can be executed on a web server?
    Is there any workaround in the code to make it run?
    Rgds,
    Seetesh
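    If a plain HTTP download is acceptable (RTP is UDP-based and doesn't really fit the servlet request/response model), a minimal servlet along these lines can stream the audio file. The hard-coded file path below is a placeholder, only the streaming pattern matters:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Streams a WAV file to the client over HTTP. A real servlet would
    // derive the path from the request instead of hard-coding it.
    public class AudioServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("audio/x-wav");
            try (FileInputStream in = new FileInputStream("/data/audio/sample.wav");
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
        }
    }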

  • Audio video capture and stream

    Hi, I need some source code for my application.
    I've got a task from my school to build an application that can capture and stream an audio/video file.
    If you can help me, please send me the code (the simpler the better, as I'm a beginner).
    BTW, I use JMF 2.1.1e.
    Thanks in advance.
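    A very small sketch of the capture half (assuming JMF can see a video capture device on your machine); streaming what you capture would then follow the RTP examples mentioned elsewhere in this thread:

    import java.awt.Frame;
    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;
    import javax.media.Manager;
    import javax.media.Player;
    import javax.media.format.VideoFormat;

    // Finds the first video capture device JMF knows about and renders it.
    public class CaptureDemo {
        public static void main(String[] args) throws Exception {
            Vector devices = CaptureDeviceManager.getDeviceList(new VideoFormat(null));
            if (devices.isEmpty()) {
                System.err.println("No video capture device found");
                return;
            }
            CaptureDeviceInfo info = (CaptureDeviceInfo) devices.get(0);
            Player player = Manager.createRealizedPlayer(info.getLocator());

            Frame f = new Frame("Capture");
            f.add(player.getVisualComponent());
            f.pack();
            f.setVisible(true);
            player.start();
        }
    }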

    Hello, does anybody know how to read a .wmv file using JMF?

  • Creating Video Broadcaster and Receiver

    Hello,
    I am trying to create virtual classroom software. Could anyone help explain how to create a video broadcaster and its receiver using the Java Media Framework?
    Thanks
    Saurabh

    T.B.M wrote:
    Could anyone help explain how to create a video broadcaster and its receiver using the Java Media Framework?
    See this link: [http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/]
    Especially see (all) the examples under the "RTP Streaming" heading.
    Best of luck!

    Thanks to you and Sun Microsystems' forum. I have completed my project with your helpful link and the other links provided in the forum.
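    For anyone landing here later, a condensed sketch of the transmit side in the style of those RTP Streaming examples (the file URL, multicast address, and port below are placeholders, and real code should use a ControllerListener instead of polling getState()):

    import javax.media.DataSink;
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Processor;
    import javax.media.control.TrackControl;
    import javax.media.format.VideoFormat;
    import javax.media.protocol.ContentDescriptor;
    import javax.media.protocol.DataSource;

    // Reads a movie file, converts its video track to JPEG/RTP, and sends
    // the result to an RTP session for an AVReceive-style receiver.
    public class TinyBroadcaster {
        public static void main(String[] args) throws Exception {
            Processor p = Manager.createProcessor(new MediaLocator("file:/movie.mov"));
            p.configure();
            while (p.getState() < Processor.Configured) Thread.sleep(50);

            // Output raw RTP, with the video track as JPEG/RTP.
            p.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW_RTP));
            for (TrackControl tc : p.getTrackControls()) {
                if (tc.getFormat() instanceof VideoFormat)
                    tc.setFormat(new VideoFormat(VideoFormat.JPEG_RTP));
                else
                    tc.setEnabled(false);
            }

            p.realize();
            while (p.getState() < Processor.Realized) Thread.sleep(50);

            // Hand the processor's output to an RTP data sink.
            DataSource output = p.getDataOutput();
            DataSink sink = Manager.createDataSink(output,
                    new MediaLocator("rtp://224.1.1.1:22224/video"));
            sink.open();
            sink.start();
            p.start();
        }
    }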

  • Audio Video Hardware and QuickTime Pro

    I recently purchased an "Eyeball" camera/mic device from Blue. I was told it is excellent. Now I need to know whether QuickTime Pro will complete the circuit, so to speak.
    The physical connection is from the unit to a USB port. What drivers do I need (yes, it's Windows Vista)? Will it work with QuickTime Pro 7?
    Thanks to all.

    The Windows version of QuickTime Pro can't record video (audio only, and only from a few types of cameras).
    The Mac version of the Pro upgrade can record audio and video from most FireWire-connected cameras and the built-in USB iSight camera found on most desktop models.

  • Audio video capture and combination Nintendo64

    Hi,
    I was hoping that someone could help me with mixing a video and an audio signal for live playback.
    I have an N64 with its video output hooked up to my TV capture card and the audio connected to the line-in on my sound card.
    The computer I'm using has a fresh install of Arch with Xfce. As far as sound goes, I currently just have ALSA installed, and everything is working fine.
    Right now I can see the video from my Nintendo with the "QT V4L2 test utility", which is all good, but I can't make the video full screen or any bigger. I know I can use Cheese to view the video in full screen, but it does not handle any audio.
    So, what program can I use? How can I play the captured audio live? Is there a program (something like Cheese, or not) that can handle the capture and playback of both audio and video? The latter is preferred.
    Any input would be good; some ideas could help me along.
    Thanks, guys.

    OK, cool, VLC seems like a good option. I can capture the video just fine because I know the video device name is /dev/video0, but I don't know what the audio device name is.
    I have been looking online for a way to find this out, but I have not yet found a way to tell what the audio device name is for my sound card (I have the audio input connected to a PCI sound card).
    I just found this, which helped a bit:
    http://forum.videolan.org/viewtopic.php?f=32&t=81166
    It turns out that the sound card I want to use is alsa://plughw:0,0.
    So I go to Show More Options -> Play Another Media Synchronously, enter alsa://plughw:0,0, and all goes well.

  • Creating a presentation with audio, video, narrative, and slides

    What is the best solution for creating a presentation that shows a slide show with background music but also allows a speaker to show some Keynote slides in between? I'm giving a report on a recent mission trip abroad and would like to show pictures of the different regions we visited, but also make comments and show slides along the way, possibly including some video clips. I heard iDVD is good for slide shows. Any comments?

    Actually, it's the other way around: iDVD is primarily for making DVDs. You can make a Keynote presentation and export it to iDVD for display on a DVD player.
    Keynote has a range of transitions that determine how you move from one slide to the next. You should download the free trial if you haven't done so already. If you are putting together a presentation, you should use Keynote, as it is designed primarily for presentations.
    M.

  • Use FMS & Actionscript to Stream Audio/Video and control (powerpoint) slides

    Hi there,
    My company provides webcasting services to numerous clients, and we currently use Windows Media Encoder as our base software for these projects.
    However, I'm interested in switching to Flash Media Server, but I need to figure out one issue and how to implement it the same way we currently do with Windows Media Encoder.
    We need a way to stream out a live audio/video feed AND a way to control a presenter's POWERPOINT slides, having them advance on our command while streaming live.
    Basically, the way we do it now is this: we stream audio/video from WME and also use the encoder's SCRIPTING functions to send a URL from the encoder to the page in which the viewer is watching the live embedded stream. On that page is an embedded/internal frame which we can target (using its id/name) so that the URL gets loaded there, and this advances the slides. (More precisely, it pulls up a pre-created HTML page of the next slide in the presenter's PowerPoint, which is actually a screenshot or exported image of that slide embedded within a specific HTML file and hosted on the same server as the embedded player/HTML file.)
    Would there be a way to send out this same type of script using FMS? I've been playing around with it and noticed that the interface itself does not have a built-in scripting function like Windows Media Encoder does. However, I did read somewhere that it supports ActionScript, but I have no clue how to implement this feature.
    I am just curious how I might go about doing this. Is there a way to use an (Action)Script with FMS and have it send out a URL to a specific frameset embedded on the same page as the Flash Media Encoder playback?
    Hopefully this makes sense and someone can point me in the right direction. Switching to FMS from Windows Media Encoder is the way I would love to go; I just need to figure out how this could be done.
    Would a better method be converting the PowerPoint slides to a SWF or FLV file, embedding that file within the same page as the FMS playback, and somehow using a script to "advance" the frames/slides/scenes within that file?
    I also heard something about Flash "shared objects" and how they could be a way of controlling a SWF file remotely.
    Could anyone give me some advice on a method I could use to achieve this type of implementation?
    Any help is GREATLY appreciated! Thanks!

    FMS (the interactive version, FMIS) has complete scripting support.
    What you want to do can be done with FMS, and there are a lot of ways you can approach it. Converting to .swf can be helpful, as it means you have a single resource to load rather than a bunch of individual slide images (the Flash Player can load .swf, .jpg, .gif and .png). Rather than embedding, I'd opt to make it an external resource and load it at runtime; that keeps the player application reusable.
    To get an idea of how to use server-side ActionScript, see the getting-started portion of the FMS 3.5 dev docs:
    http://help.adobe.com/en_US/FlashMediaServer/3.5_Deving/
    Then see the SSAS docs for the following classes in particular:
    The Client class, for Client.call()
    The Stream class, for Stream.send()
    The SharedObject class
    The Application class, for BroadcastMsg()
    The docs give a general overview of the usage of each. The best choice really depends on your deployment architecture and scalability requirements.

  • IChat audio/video btwn DSL-DSL and Cable-DSL

    Let me draw this out to illustrate the nonsensical situation.
    party A: has cable modem with an Airport (DHCP) (running Tiger)
    party B: has DSL with Airport (DHCP) (Tiger)
    party C: has DSL with Airport (static and redistributed as DHCP) (Tiger)
    A and B can chat/audio/video
    A and C can also do all of this.
    B and C: doesn't work at all except for text. We both get those pesky messages about "user is not responding", etc. The only thing I can think of is that iSight/iChat doesn't work between people who both have a DSL connection. We also tried using the same PowerBook (normally located at party C) at the location of party A, and there it works with party B. In other words, the PowerBook isn't the problem. Both party B and party C can establish a connection with a public "test buddy" (appleeu3test01, appleeu3test02, appleeu3test03).
    I am asking any genius out there: what the heck is going on? If it's true that DSL-to-DSL connections cannot communicate with each other, what is the point of having this overpriced webcam?
    thanks in advance......

    Paul, you understand what I'm talking about. Check out what worked for me: http://discussions.apple.com/thread.jspa?threadID=307761&tstart=0
    Hope you will find some peace

  • Easy Setup and Audio Video Settings (FCP7)

    Hello,
    I am a newbie editor in FCP 7; in fact, I'm a newbie at post-production altogether. I've been working and editing for less than a year (Premiere Pro).
    Now I am using FCP 7 (and FCPX), but my question is: in FCP 7, why do I need Audio/Video Settings and Easy Setup if
    Final Cut has open-format sequences?
    Thanks

    Well, strictly speaking, on FCP 7 and earlier (since this particular forum is for the older FCP, not FCPX), yes, you do. You need to choose the Easy Setup that matches the footage you are using. If you have multiple formats, try to convert the footage to a single format. Don't believe the Apple marketing about "open format sequences."
    There are some exceptions. For example, you can have ProRes footage and a ProRes sequence, and also edit HDV or XDCAM footage in that; in fact, it's best to edit HDV and XDCAM as ProRes. Don't set up an XDCAM sequence and edit ProRes into it: XDCAM is inferior to ProRes.
    Also, don't edit many formats natively, meaning H.264, MP4, AVCHD; there are many formats that FCP just doesn't work with properly. FCPX will even ask to "optimize" the media, meaning convert it to ProRes. FCP 7 is a different beast than Adobe Premiere Pro: PPro will allow you to edit formats natively, since it's designed to do that; FCP 7 is not, and that's a pretty important difference to know. FCP 7 wants media to be in FCP editing codecs, such as ProRes, before you start.

  • Flash 8 Video Encoder, audio/video discrepancy from mpeg

    Two questions...
    First, and more urgently: I've been given an original 720-by-something MPEG video, and I'm trying to bring it over to Flash. I use the Flash 8 Video Encoder, which does a phenomenal job of keeping video quality while vastly reducing file size, but in this case (unlike some others in the past) I'm getting a bizarre discrepancy between the speed of the audio and the speed of the video. I can't find a way to force the MPEG (or the Flash encoder software) to TELL me what the frame rate of the original MPEG is, and even when I attempt to force it to 24 or 25 during conversion to FLV, it doesn't seem to bother matching it up with the audio. When I take the result and export it from Flash, at 24 the video goes too fast for the audio, but at 23 it goes too slow.
    And this SEEMS related to the second question: I took about 15 solid minutes of video of a particularly nice sunset in progression, with the hope of compressing it down to about 30-60 seconds for effect. But it seemed like, no matter what frame rate I tried to compress it to with the encoder, it just drew each frame out to a set length, and even if I tried to export the Flash really fast, it remained at the epically slow frame rate.
    Is there a relation? What am I doing wrong? Is there a way to detect the ACTUAL frame rate of the original MPEG? Is there a way to force the audio and video to remain synced? The problem seems to be that if I go with the default "Same as source", I'm left to guess what that might be when exporting from Flash, as nothing ever actually tells me what that source frame rate was! Is there some natural audio compression business inside MPEG video that works fine for MPEG but that the Flash encoder can't handle well? I've tried about a billion different frame rate, audio data rate, quality and export settings, and have yet to get a sync.
    Thanks in advance!

    I should have been more clear.
    I changed the file name from .vob to .mpg and then tried to add the video file to Flash 8 Video Encoder, and received the following error message:
    "You won't be able to encode this file. Your computer doesn't have the necessary codec or the file is corrupt."
    Currently, I use ffmpegX to convert the .vob to .dv, and then I import the .dv into Flash 8 Video Encoder and convert it to an .flv.
    I can use ffmpegX to go straight to an .flv, but the file size is larger because the compression is different (not On2 VP6).
    I would like to skip the middle step because my files are very large and it takes extra time.
