Can't voice chat

Hi - I've scoured this discussion forum as well as been on the phone with 1) AppleCare, 2) Road Runner (my ISP), and 3) Linksys (my modem maker), all of whom have offered no solutions. I cannot voice chat with my writing partner, and yet I can iChat my husband at his work, I can iChat a friend at her apartment, and I could iChat the AppleCare support person the other night. Today I went to a friend's house and tried iChatting my writing partner from there, and it worked fine. She has tried iChatting other people and it works fine too. This is such a mystery. On one hand I think it's my modem, because the second I leave my house I can iChat her. But on the other hand, why would my modem allow me to iChat some people and not others? I have a Linksys WCG200 modem. I made sure all firewalls were disabled on the setup page. This is really crippling us, since we depend on a reliable method to co-write in different locations while remaining hands-free. I just upgraded to Leopard a few weeks ago, and voice iChatting stopped working completely when I did that. Before that, it never worked well, but at least it would work (when I say it didn't work well, I mean that in a given two-hour session, iChat would quit about 5-7 times). Below I've pasted the error messages from our last few attempts. Thank you:
2008-02-25 15:25:35 -0800: No data has been received for the last 10 seconds.
Audio channel info: local machine using 192.168.0.10:16402, expecting remote machine to send to 76.168.60.181:16402
Video channel info: local machine using 160.158.78.60:34693, expecting remote machine to send to 0.0.32.0:41118
2008-02-25 15:27:42 -0800: No data has been received for the last 10 seconds.
Audio channel info: local machine using 192.168.0.10:16402, expecting remote machine to send to 76.168.60.181:16402
Video channel info: local machine using 160.158.78.60:34693, expecting remote machine to send to 0.0.32.0:41118
2008-02-25 15:32:35 -0800: No data has been received for the last 10 seconds.
Audio channel info: local machine using 192.168.0.10:16402, expecting remote machine to send to 76.168.60.181:16402
Video channel info: local machine using 160.158.78.60:34693, expecting remote machine to send to 0.0.32.0:41118
2008-02-25 15:42:36 -0800: No data has been received for the last 10 seconds.
Audio channel info: local machine using 192.168.0.10:16402, expecting remote machine to send to 76.168.60.181:16402
Video channel info: local machine using 160.158.78.60:34693, expecting remote machine to send to 0.0.32.0:41118

Hi,
How on earth is your LAN set up?
2008-02-25 15:25:35 -0800: No data has been received for the last 10 seconds.
Audio channel info: local machine using 192.168.0.10:16402, expecting remote machine to send to 76.168.60.181:16402
Video channel info: local machine using 160.158.78.60:34693, expecting remote machine to send to 0.0.32.0:41118
The audio part looks OK (if that is your LAN IP and the public IP you have), as the port is the required one (or at least the first one it tries).
After that, the info shows more public IPs that in some cases don't get used.
(The thing's a mess.) The ports here get changed and are not consistent across the connection it is trying to describe.
This looks like a NAT issue.
How are you connected to the Internet?
How many devices?
How many are doing DHCP?
What ports are open?
Would you happen to be running Parallels when this happens?
7:09 PM Saturday; March 1, 2008
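
Not something asked for above, but a quick way to see the NAT situation the log hints at is to list the machine's own addresses and compare them with the public address in the iChat log: 192.168.x.x locally versus 76.168.x.x in the log means the modem/router is translating in between. A small illustrative sketch in Java (any tool that lists interfaces would do):

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class ShowLocalAddresses {
    // Print every address bound to this machine. If they are all private
    // (10.x, 172.16-31.x, 192.168.x) while the iChat log shows a public IP,
    // the traffic is being NATed by the modem/router in between.
    public static void main(String[] args) throws Exception {
        Enumeration<NetworkInterface> ifaces = NetworkInterface.getNetworkInterfaces();
        while (ifaces.hasMoreElements()) {
            NetworkInterface iface = ifaces.nextElement();
            Enumeration<InetAddress> addrs = iface.getInetAddresses();
            while (addrs.hasMoreElements()) {
                System.out.println(iface.getName() + ": "
                        + addrs.nextElement().getHostAddress());
            }
        }
    }
}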

Similar Messages

  • Voice chat with MSN or YAHOO Messenger on Nokia 77...

    Hi,
    Does anyone know if it is possible to use the voice chat feature in IM programs like MSN and Yahoo on the Nokia 770?
    I appreciate your help.

    As I understand it, you can only voice chat on Google Talk. Pidgin might be able to do voice clips on Windows for MSN at some point. Clicking out to a VoIP service might be your best bet. HTH.

  • Voice chat with psi -- Google chat

    Hello all,
    I have been trying to find a Linux IM app that can use voice chat with Google Talk. As far as I can tell, psi is supposed to support this.
    I tried version 0.11 from pacman, which connects and everything, but there does not seem to be any voice chat.
    I used the AUR package and compiled the latest SVN version, 0.12rc3, and it works OK, but I still can't find anything within the program to do with voice chatting.
    I have been all over the psi wiki etc., and it all seems to suggest that voice chat is included and is working with Google chat, but I sure can't find it. Some info dates back to late 2005 with the libjingle branch, but I believe libjingle is now in the main program.
    Can someone tell me if voice chat is included in psi, and if so, maybe give a hint on what might be going wrong for me?
    Cheers,
    Wittfella

    I have been banging my head against this and am slowly getting somewhere now...
    Psi does come with Jingle, but it's not enabled by default, because it's still deemed 'experimental'.
    To get the psi-svn AUR package to compile with voice chat support you have to do a number of things:
    1.  Use the 'configure-jingle' script with the '--enable-jingle' option instead of the regular 'configure' script.
    ./configure-jingle --prefix=/usr --enable-jingle
    You can either modify the PKGBUILD or do it all manually.
    2.  You have to have a specific version of ortp installed otherwise it will not compile.  This happens to be ortp-0.7.1 but our pacman version is up to 0.14.0 or so.  Luckily you can find the sources for ortp-0.7.1 and it compiles and installs fine.
    3.  Now the biggest problem. When you try to compile you will get gazillions of errors. It turns out they are quite simple, and are caused by an incompatibility with gcc-4.3. Most of the errors are solved by opening the offending header file and inserting:
    #include <cstring>
    So after all that, the good news is I have managed to create a working copy of psi with voice support. However:
    psi <--> google talk connects but there is no audio from either end
    psi <--> psi voice chat seems to work fine
    If anyone is interested I created packages for psi and ortp.  Let me know if you have any more success with google talk.
    psi-jingle-1166-1-i686.pkg.tar.gz
    ortp-0.7.1-1-i686.pkg.tar.gz

  • Yahoo messenger voice chat stopped working (after archive & install)

    I have an iMac5 Intel Core 2 Duo 2.16 GHz. I (actually my wife) was using Yahoo Messenger v3.0 beta 4 (build 156957) for all chat and voice chat. About 2 months ago voice chat stopped working. When I send an invite, it keeps ringing (but the receiver never sees any invite). When the other person sends me an invite, I don't get anything.
    I have downloaded Gizmo, but it also keeps "connecting...".
    About 2 months ago, I had to archive and install to solve a Java update issue which was causing weird problems. Thanks to BDAqua's advice I managed to resolve that. I have J2SE 1.4.2 and 5.0; everything else is working fine.
    What can cause voice chat to stop working? Is there some voice chat plugin in the Library that needs to be there, which has disappeared after my "archive and install"? Please advise. Thanks, Fahd

    Hi again Fahdali,
    Have you tried reinstalling it since then?
    http://www.apple.com/downloads/macosx/email_chat/yahoomessengerformac.html
    Or...
    http://wiki.answers.com/Q/CaniChat_be_used_with_yahoomessenger

  • Can't seem to enable voice chat anymore

    Hello folks,
    I just upgraded to an Audigy, but now when my friend and I try to initiate voice chat through X-Fire and MSN we can't hear each other at all, where we had no problems before...
    I've got a P4 3.6 GHz with a GB of RAM, and I've disabled my onboard sound card in the BIOS.
    I've also tested my mic and speakers through the Audio tab in Control Panel in XP.
    Any ideas?

    actionthom,
    In the Creative Mixer, in the REC panel, make sure you have Microphone selected. Move the slider to your volume preference. Click on the red "+" at the top and select 20 dB boost. If you want your own speakers to play what you say into the mic, then in the Source panel, un-mute Microphone. Adjust that slider to your preference. It's usually a good idea to mute any other sources you don't want.
    Also, in Control Panel, Sounds and Multimedia (or something similar for your system), Audio tab, make sure your Audigy is selected as the preferred device for Playback, Recording, etc. If you have a "Use only preferred devices" box, check it.

  • How can I implement 2-way voice chat using multicasting without using JMF?

    I have implemented 2-way voice chat using multicasting, but there is a problem: when the server multicasts the voice UDP packets, all the clients can hear it properly, but when the clients speak, their voices interrupt each other and no one comes through clearly.
    Can anyone help me in solving this problem, with some sample code? Please.

    To implement what you want, you'd have to create one audio stream per participant, and then use an audio mixer to play all of them simultaneously. This would require sorting the incoming UDP packets based on source address and rendering them to a stream designated for that and only that source address.
    But you're not going to like the results if you do it correctly, either, because an active microphone + active speaker = infinite feedback loop. In this case, it might not be "fast" enough to drive your signal past saturation, but you will definitely get a horrible echo effect.
    What you're doing currently (or at least how it sounds to me) is taking pieces of a bunch of separate audio streams and treating them like they're one audio stream. They aren't, and you can't treat them as if they are. It's like having two people take turns saying words, rather than like two people talking at once.
    Do Can you you understand see why my this point? is a bad plan? (That's two sentences interleaved word by word, which is exactly what your code is doing to the audio.)
    And that's not even the biggest problem...
    If you have 3 people talking simultaneously, and you're interleaving their packets rather than mixing their data, you're actually spending 3 times as much time playing as you should be. As such, as your program continues to run, you're going to get farther and farther behind where you should be. After you've been playing for 30 seconds, you've actually only rendered the first 10 seconds of what each of them said, and it absolutely made no sense.
    So instead of hearing 3 people talking at once, you've heard a weird interleaved version of 3 people talking at 1/3 their actual rate.
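
    Not from the original reply, but to make the mixing step concrete: a minimal sketch that sums 16-bit signed, big-endian PCM buffers (one per participant, already separated by source address) and clips the result, instead of interleaving whole packets. The class and method names here are made up for illustration.

    import java.util.List;

    public class PcmMixer {
        // Mix several 16-bit signed, big-endian PCM buffers into one frame by
        // summing the samples and clipping, instead of interleaving packets.
        public static byte[] mixFrames(List<byte[]> frames, int length) {
            byte[] mixed = new byte[length];
            for (int i = 0; i + 1 < length; i += 2) {
                int sum = 0;
                for (byte[] frame : frames) {
                    if (i + 1 < frame.length) {
                        // reassemble one 16-bit sample from two bytes
                        sum += (short) ((frame[i] << 8) | (frame[i + 1] & 0xff));
                    }
                }
                // clip to the 16-bit range so loud overlaps don't wrap around
                int clipped = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
                mixed[i] = (byte) (clipped >> 8);
                mixed[i + 1] = (byte) clipped;
            }
            return mixed;
        }
    }

    Each mixed frame is then written once to the single playback line, so the playback clock advances in real time no matter how many people are talking.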

  • Hi all. I want to make audio chat with my brother. I use iChat on my MBP. He uses Gtalk on his Windows PC. The problem is that the audio icon is gray. Is there any way? Can iChat make voice chat with Gtalk on Windows?

    Hi all. I want to make audio chat with my brother. I use iChat on my MBP. He uses Gtalk on his Windows PC. The problem is that the audio icon is gray, so I cannot start an audio chat. Is there any way? Can iChat make voice chat with Gtalk on Windows or not? I am confused.

    Hi,
    You need to satisfy several things along the way.
    Account type
    You need to have a Jabber or Google Mail account yourself.
    If you have a Google Mail account, then it needs the "Talk" option enabled in your Google account settings (a Facebook account with "Chat" enabled is also a valid Jabber ID to use in iChat).
    Connection method
    This is about which apps can connect to which apps for video or audio to work.
    iChat video, when using a Jabber account, is iChat to iChat only.
    iChat connects using a process called SIP (Session Initiation Protocol).
    Jabber-based apps, if they have an A/V module, connect using a protocol called Jingle.
    Needless to say, Google does its own version of this, which is written into the web browser plugin that allows video chats (there are PC and Mac versions) and into the PC app called GoogleTalk.
    However, it is also not that compatible with other Jabber apps.
    See Jingle here.
    This claims, now, that Google had a hand in writing the protocol.
    Also, in the "Clients Supporting Jingle" list, many more versions of Google products are mentioned with extended info (it suggests that someone at Google has been editing the page).
    Can iChat make voice chat with Gtalk on Windows or not?
    To be clear, the specific answer to this is no.
    We are talking about different apps (or web browser plugins) using different Internet protocols that are not compatible.
    The simplest thing is to get the web browser plugin (however, there can be issues with the Flash plugin).
    At one time Google used Flash to do video chats, and it would seem their plugin is very close to this part of Flash.
    See the Known Issues and the "Google Talk Plugin Disables iSight On Mac" item.
    Options
    Use a web browser yourself with the plugin.
    Use a Flash-based site such as MeBeam (Flash has to be enabled in your browser and the instructions are simple on the page).
    Try to find a Jingle-capable Mac Jabber client (app) (I have not found one).
    9:42 PM      Monday; July 23, 2012
    Please, if posting Logs, do not post any Log info after the line "Binary Images for iChat"
      iMac 2.5Ghz 5i 2011 (Lion 10.7.2)
     G4/1GhzDual MDD (Leopard 10.5.8)
     MacBookPro 2Gb (Snow Leopard 10.6.8)
     Mac OS X (10.6.8),
    "Limit the Logs to the Bits above Binary Images."  No, Seriously

  • How can I run Java on the iMac to start the voice chat?

    I have an iMac and I have downloaded Java.
    I want to open voice chat websites,
    e.g. www.63oon.com

    Hello,
    Not sure, but could it be this?
    Java for OS X 2012-006: How to re-enable the Apple-provided Java SE 6 applet plug-in and Web Start functionality
    http://support.apple.com/kb/HT5559

  • Voice chat program

    After a few days' work, I have completed the main voice chat program. I know there are many people like me who want this, so I am posting the program; I will complete the GUI work later. There is a problem in the program: if the speaker does not speak for a while, the listening client will hear noise. Can someone help me?
    package com.longshine.voice;

    import java.io.*;
    import java.net.*;
    import java.util.*;

    public class Server {
        // The ServerSocket we'll use for accepting new connections
        private ServerSocket ss;
        // A mapping from sockets to DataOutputStreams. This will
        // help us avoid having to create a DataOutputStream each time
        // we want to write to a stream.
        private Hashtable outputStreams = new Hashtable();
        final int bufSize = 16384;
        // current speak_man
        private int speakman = -1;

        // Constructor and while-accept loop all in one.
        public Server( int port ) throws IOException {
            // All we have to do is listen
            listen( port );
        }

        private void listen( int port ) throws IOException {
            // Create the ServerSocket
            ss = new ServerSocket( port );
            // Tell the world we're ready to go
            System.out.println( "Listening on " + ss );
            // Keep accepting connections forever
            while (true) {
                // Grab the next incoming connection
                Socket s = ss.accept();
                // Tell the world we've got it
                System.out.println( "Connection from " + s );
                // Create a DataOutputStream for writing data to the other side
                DataOutputStream dout = new DataOutputStream( s.getOutputStream() );
                // Save this stream so we don't need to make it again
                outputStreams.put( s, dout );
                // Create a new thread for this connection, and then forget about it
                new VoiceServer( this, s );
            }
        }

        // Get an enumeration of all the OutputStreams, one for each client
        // connected to us
        Enumeration getOutputStreams() {
            return outputStreams.elements();
        }

        // Send a message to all clients (utility routine)
        void sendToAll( byte[] voice, Socket socket ) {
            // We synchronize on this because another thread might be
            // calling removeConnection() and this would screw us up
            // as we tried to walk through the list
            synchronized( outputStreams ) {
                // For each client ...
                for (Enumeration e = outputStreams.keys(); e.hasMoreElements(); ) {
                    // ... get the socket ...
                    Socket tmp = (Socket) e.nextElement();
                    if (!tmp.equals( socket )) {
                        try {
                            DataOutputStream dout = new DataOutputStream( tmp.getOutputStream() );
                            // ... and send the message
                            dout.write( voice, 0, 44096 );
                            dout.flush();
                        } catch( IOException ie ) {
                            System.out.println( ie );
                        }
                    }
                }
            }
        }

        // Remove a socket, and its corresponding output stream, from our
        // list. This is usually called by a connection thread that has
        // discovered that the connection to the client is dead.
        void removeConnection( Socket s ) {
            // Synchronize so we don't mess up sendToAll() while it walks
            // down the list of all output streams
            synchronized( outputStreams ) {
                // Tell the world
                System.out.println( "Removing connection to " + s );
                // Remove it from our hashtable/list
                outputStreams.remove( s );
                // Make sure it's closed
                try {
                    s.close();
                } catch( IOException ie ) {
                    System.out.println( "Error closing " + s );
                    ie.printStackTrace();
                }
            }
        }

        // Main routine
        // Usage: java Server <port>
        static public void main( String args[] ) throws Exception {
            // Get the port # from the command line
            int port = Integer.parseInt( args[0] );
            // Create a Server object, which will automatically begin
            // accepting connections.
            new Server( port );
        }
    }
    package com.longshine.voice;

    import java.io.*;
    import java.net.*;

    public class VoiceServer extends Thread {
        // The Server that spawned us
        private Server server;
        final int bufSize = 16384;
        // The Socket connected to our client
        private Socket socket;

        // Constructor.
        public VoiceServer( Server server, Socket socket ) {
            // Save the parameters
            this.server = server;
            this.socket = socket;
            // Start up the thread
            start();
        }

        // This runs in a separate thread when start() is called in the
        // constructor.
        public void run() {
            try {
                // Create a DataInputStream for communication; the client
                // is using a DataOutputStream to write to us
                DataInputStream din = new DataInputStream( socket.getInputStream() );
                byte[] voice = new byte[44096];
                // Over and over, forever ...
                while (true) {
                    // ... read the next message ...
                    // int bytes = din.read( voice, 0, 44096 );
                    int bytes = din.read( voice );
                    // ... and have the server send it to all clients
                    server.sendToAll( voice, socket );
                }
            } catch( EOFException ie ) {
                // This doesn't need an error message
            } catch( IOException ie ) {
                // This does; tell the world!
                ie.printStackTrace();
            } finally {
                // The connection is closed for one reason or another,
                // so have the server deal with it
                server.removeConnection( socket );
            }
        }
    }
    package com.longshine.voice;

    import java.io.*;
    import java.net.*;

    public class Client {
        private String host = "";
        private String port = "";
        private Socket socket;
        private DataOutputStream dout;
        private DataInputStream din;
        private Capture capture = null;
        private Play play = null;

        public Client( String host, String port ) {
            this.host = host;
            this.port = port;
        }

        public void init() {
            try {
                socket = new Socket( host, Integer.parseInt( port ) );
                din = new DataInputStream( socket.getInputStream() );
                dout = new DataOutputStream( socket.getOutputStream() );
                capture = new Capture( dout );
                play = new Play( din );
            } catch( Exception e ) {
                e.printStackTrace();
            }
        }

        public static void main( String[] args ) {
            Client client = new Client( "172.18.220.176", "5678" );
            client.init();
            if (args[0].equalsIgnoreCase( "say" ))
                client.capture.start();
            if (args[0].equalsIgnoreCase( "hear" ))
                client.play.start();
        }
    }
    package com.longshine.voice;

    import java.io.*;
    import java.net.*;
    import java.text.*;
    import java.util.Vector;
    import java.util.Enumeration;
    import javax.sound.sampled.*;

    public class Play implements Runnable {
        SourceDataLine line;
        Thread thread;
        String errStr = null;
        DataInputStream in = null;
        AudioInputStream audioInputStream;
        final int bufSize = 16384;
        int duration = 0;

        public Play( DataInputStream in ) {
            this.in = in;
        }

        public void start() {
            errStr = null;
            thread = new Thread( this );
            thread.setName( "Playback" );
            thread.start();
        }

        public void stop() {
            thread = null;
        }

        private void shutDown( String message ) {
            if ((errStr = message) != null) {
                System.err.println( errStr );
            }
            if (thread != null) {
                thread = null;
            }
        }

        public void createAudioInputStream() {
            if (in != null) {
                try {
                    errStr = null;
                    java.io.BufferedInputStream oin = new java.io.BufferedInputStream( in );
                    audioInputStream = AudioSystem.getAudioInputStream( oin );
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

        public void run() {
            // reload the file if loaded by file
            if (in != null) {
                // createAudioInputStream();
                // make sure we have something to play
                // if (audioInputStream == null) {
                //     shutDown( "No loaded audio to play back" );
                //     return;
                // }
                // reset to the beginning of the stream
                // try {
                //     audioInputStream.reset();
                // } catch (Exception e) {
                //     shutDown( "Unable to reset the stream\n" + e );
                //     return;
                // }
            }
            // get an AudioInputStream of the desired format for playback
            AudioFormat format = getFormat();
            audioInputStream = new AudioInputStream( in, format, AudioSystem.NOT_SPECIFIED );
            AudioInputStream playbackInputStream = AudioSystem.getAudioInputStream( format, audioInputStream );
            if (playbackInputStream == null) {
                shutDown( "Unable to convert stream of format " + audioInputStream + " to format " + format );
                return;
            }
            // define the required attributes for our line,
            // and make sure a compatible line is supported.
            DataLine.Info info = new DataLine.Info( SourceDataLine.class, format );
            if (!AudioSystem.isLineSupported( info )) {
                shutDown( "Line matching " + info + " not supported." );
                return;
            }
            // get and open the source data line for playback.
            try {
                line = (SourceDataLine) AudioSystem.getLine( info );
                line.open( format, bufSize );
            } catch (LineUnavailableException ex) {
                shutDown( "Unable to open the line: " + ex );
                return;
            }
            // play back the captured audio data
            int frameSizeInBytes = format.getFrameSize();
            int bufferLengthInFrames = line.getBufferSize() / 8;
            int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
            byte[] data = new byte[bufferLengthInBytes];
            int numBytesRead = 0;
            // start the source data line
            line.start();
            while (thread != null) {
                try {
                    if ((numBytesRead = playbackInputStream.read( data )) == -1) {
                        break;
                    }
                    int numBytesRemaining = numBytesRead;
                    while (numBytesRemaining > 0) {
                        numBytesRemaining -= line.write( data, 0, numBytesRemaining );
                    }
                } catch (Exception e) {
                    shutDown( "Error during playback: " + e );
                    break;
                }
            }
            // we reached the end of the stream. let the data play out, then
            // stop and close the line.
            if (thread != null) {
                line.drain();
            }
            line.stop();
            line.close();
            line = null;
            shutDown( null );
        }

        public AudioFormat getFormat() {
            AudioFormat.Encoding encoding = AudioFormat.Encoding.ULAW;
            String encString = "linear";
            float rate = Float.valueOf( "44100" ).floatValue();
            int sampleSize = 16;
            String signedString = "signed";
            boolean bigEndian = true;
            int channels = 2;
            if (encString.equals( "linear" )) {
                if (signedString.equals( "signed" )) {
                    encoding = AudioFormat.Encoding.PCM_SIGNED;
                } else {
                    encoding = AudioFormat.Encoding.PCM_UNSIGNED;
                }
            } else if (encString.equals( "alaw" )) {
                encoding = AudioFormat.Encoding.ALAW;
            }
            return new AudioFormat( encoding, rate, sampleSize,
                    channels, (sampleSize / 8) * channels, rate, bigEndian );
        }
    } // End class Play
    package com.longshine.voice;

    import java.io.*;
    import java.net.*;
    import java.text.*;
    import java.util.Vector;
    import java.util.Enumeration;
    import javax.sound.sampled.*;

    public class Capture implements Runnable {
        TargetDataLine line;
        Thread thread;
        String errStr = null;
        DataOutputStream out = null;
        AudioInputStream audioInputStream;
        final int bufSize = 16384;
        int duration = 0;

        public Capture( DataOutputStream out ) {
            this.out = out;
        }

        public void start() {
            errStr = null;
            thread = new Thread( this );
            thread.setName( "Capture" );
            thread.start();
        }

        public void stop() {
            thread = null;
        }

        private void shutDown( String message ) {
            if ((errStr = message) != null) {
                System.out.println( errStr );
            }
            if (thread != null) {
                thread = null;
            }
        }

        public AudioFormat getFormat() {
            AudioFormat.Encoding encoding = AudioFormat.Encoding.ULAW;
            String encString = "linear";
            float rate = Float.valueOf( "44100" ).floatValue();
            int sampleSize = 16;
            String signedString = "signed";
            boolean bigEndian = true;
            int channels = 2;
            if (encString.equals( "linear" )) {
                if (signedString.equals( "signed" )) {
                    encoding = AudioFormat.Encoding.PCM_SIGNED;
                } else {
                    encoding = AudioFormat.Encoding.PCM_UNSIGNED;
                }
            } else if (encString.equals( "alaw" )) {
                encoding = AudioFormat.Encoding.ALAW;
            }
            return new AudioFormat( encoding, rate, sampleSize,
                    channels, (sampleSize / 8) * channels, rate, bigEndian );
        }

        public void run() {
            duration = 0;
            audioInputStream = null;
            // define the required attributes for our line,
            // and make sure a compatible line is supported.
            AudioFormat format = getFormat();
            DataLine.Info info = new DataLine.Info( TargetDataLine.class, format );
            if (!AudioSystem.isLineSupported( info )) {
                shutDown( "Line matching " + info + " not supported." );
                return;
            }
            // get and open the target data line for capture.
            try {
                line = (TargetDataLine) AudioSystem.getLine( info );
                line.open( format, line.getBufferSize() );
            } catch (LineUnavailableException ex) {
                shutDown( "Unable to open the line: " + ex );
                return;
            } catch (SecurityException ex) {
                shutDown( ex.toString() );
                return;
            } catch (Exception ex) {
                shutDown( ex.toString() );
                return;
            }
            // capture the audio data and send it to the output stream
            int frameSizeInBytes = format.getFrameSize();
            int bufferLengthInFrames = line.getBufferSize() / 8;
            int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
            byte[] data = new byte[bufferLengthInBytes];
            int numBytesRead;
            line.start();
            try {
                while (thread != null) {
                    if ((numBytesRead = line.read( data, 0, bufferLengthInBytes )) == -1) {
                        break;
                    }
                    if (data.length > 0) {
                        out.write( data, 0, numBytesRead );
                        out.flush();
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            // we reached the end of the stream. stop and close the line.
            line.stop();
            line.close();
            line = null;
            // stop and close the output stream
            try {
                out.flush();
                out.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    } // End class Capture
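
    Not part of the original post, but one likely source of the noise, offered as a suggestion: VoiceServer.run() reads into a 44096-byte buffer and ignores how many bytes din.read() actually returned, while Server.sendToAll() always writes the full 44096 bytes, so stale buffer contents get rebroadcast whenever a read comes up short. A minimal sketch of passing the real length through (the extra length parameter is my addition):

    // In VoiceServer.run(), forward only the bytes actually read:
    //     int bytes = din.read( voice );
    //     if (bytes > 0) server.sendToAll( voice, bytes, socket );

    // In Server, write exactly that many bytes:
    void sendToAll( byte[] voice, int length, Socket socket ) {
        synchronized( outputStreams ) {
            for (Enumeration e = outputStreams.keys(); e.hasMoreElements(); ) {
                Socket tmp = (Socket) e.nextElement();
                if (!tmp.equals( socket )) {
                    try {
                        // reuse the stream already cached in the hashtable
                        DataOutputStream dout = (DataOutputStream) outputStreams.get( tmp );
                        dout.write( voice, 0, length );   // not a fixed 44096
                        dout.flush();
                    } catch( IOException ie ) {
                        System.out.println( ie );
                    }
                }
            }
        }
    }

    Skipping frames whose samples are all near zero (simple silence suppression) would also cut the background hiss heard when nobody is speaking.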


  • 3GS Voice Chat Application

    Hey, I'm looking for a specific voice chat application. I would like an app that would allow multiple users to join a chat room and talk via voice with each other on the iPhone. I would like this app to also allow users on a PC to join this voice chat with iPhone users. I would also like to make the chat room private so only certain users can join it. Is there an existing application that accomplishes this? I have searched around the internet and haven't found anything yet. Thanks.

    Stebalien wrote: Cool! However, the key exchange system looks a little unwieldy; personally, I would give everyone a permanent "identity" key (preferably allowing the user to use a GPG key) and then use the socialist millionaire's protocol (SMP) to exchange these keys. Once the keys have been exchanged, they can be used to negotiate shared session keys. This way, once you talk to someone once, you don't need to keep manually sharing a key with them. The OTR library (libotr) does this very well, but I don't know how usable it would be for this project (it's intended for layering encryption over existing IM protocols).
    Thanks for your interest; I'll refer this to the main developer. Feel free to come to #seren on irc.freenode.net to talk directly with him and the community!

  • Problem with audio format in a voice chatting applet

    Hi,
    I've just finished making a voice chatting project, but the voice is sometimes sent with a lot of noise and I can't hear the user on the other side.
    I made it as an applet opened from the web and the connection is successfully established; I use CloudGarden in my project.
    The audio format I use is PCM 44100 Hz, 16-bit stereo on the client
    and 11025 Hz, 16-bit stereo on the server. I had to use this format because the others did not work for me (they throw exceptions).
    Thanks for your concern.

    Hello,
    Could you please send me the source code? I'm also working on a similar project. My ID is [email protected]
    Thanks
    PN
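
    Not from the thread, but given the 44100 Hz client / 11025 Hz server mismatch described above, one option is to ask Java Sound to convert the rate explicitly rather than opening mismatched formats on each end. A rough sketch (the method is illustrative; whether a conversion path exists depends on the installed service providers, and getAudioInputStream throws IllegalArgumentException when it doesn't):

    import javax.sound.sampled.*;

    public class RateConvert {
        // Convert a 44.1 kHz, 16-bit stereo PCM stream to 11.025 kHz so both
        // ends of the applet connection agree on the format.
        public static AudioInputStream downsample(AudioInputStream source) {
            AudioFormat target = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    11025f,   // sample rate the server side expects
                    16,       // sample size in bits
                    2,        // channels (stereo)
                    4,        // frame size: 2 bytes * 2 channels
                    11025f,   // frame rate
                    false);   // little-endian
            return AudioSystem.getAudioInputStream(target, source);
        }
    }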

  • Sound, voice chat, yahoo messenger and VPC 6

    Okay, perhaps off topic... perhaps not... How do I get sound to work in Yahoo Messenger v7 with voice chat via VPC 6 on an AL 15 PowerBook?
    And does the AL 15 have a built-in microphone?

    Does the microphone work while running VPC?
    I don't know, but I think you can easily check. While you're running VPC, go into System Preferences > Sound > Input. (You can access System Preferences from the Apple menu in the upper left-hand corner of your screen. I'm assuming that you can do this while running VPC. If I'm wrong on this, please stop reading this worthless post. ;-))
    Look at the meter that reads "input level," just above the slider that reads "input volume." If there's any significant ambient noise where you are, or if you speak at a decent volume out loud (it doesn't even have to be near the mic, which is located beneath the left speaker grille), you should see the "input level" fluctuate if the mic is working properly. (The default setting for "input volume" is midway on the slider. If it's all the way to the left, the mic will not pick up any sounds.)

  • USB webcam microphone not working with Starcraft 2 voice chat

    USB webcam microphone not working with Starcraft 2 voice chat. Windows 7, X-Fi Fatal1ty Pro.
    I have a webcam with a built-in microphone (PS3 Eye). It is connected by USB. The drivers are the CL-Eye drivers, a third-party driver set, since Sony does not directly support the Eye for PC.
    In the Windows audio device management, the microphone appears as normal and is set as the default device. I can use the Windows sound recorder to record myself speaking just fine. However, if I use any other applications it doesn't seem to work at all. I can select the device as the default recording device in Starcraft 2 as well as TeamSpeak 3, but no sound is detected. The USB microphone does not show up as a selectable device in the recording options of my Creative Control Panel. Disabling the default Creative microphone does nothing.
    I think the problem is that Windows can directly detect the microphone, so software that uses the Windows audio device manager works fine, but anything that talks directly to my sound card software does not properly detect the microphone. Basically, I can select the device as a default device, but it does not work.
    Any thoughts on this or am I SOL and have to find myself a shiny new headset?

    I had the same problem. You need to check the device index of your USB mic.
    In my case, I'm on a Mac OS X 10.6 system. I had four devices cataloged, with the last one being the USB mic.
    Then set it like so in your AS code: audioPub.microphoneManager.micIndex = 3;
    Remember that the indexing is 0-based.
    The sound framework makes the assumption that the index is 0.
    Hope that helps.
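
    The reply above is ActionScript, but the underlying advice, enumerate the capture devices and pick one by index instead of trusting the default, looks like this in Java Sound (purely illustrative; it won't change what the Creative control panel exposes):

    import javax.sound.sampled.*;

    public class ListCaptureDevices {
        // Print every mixer that offers a TargetDataLine (i.e. a capture
        // device), with its index, so a specific mic can be chosen explicitly.
        public static void main(String[] args) {
            Mixer.Info[] mixers = AudioSystem.getMixerInfo();
            Line.Info capture = new Line.Info(TargetDataLine.class);
            for (int i = 0; i < mixers.length; i++) {
                if (AudioSystem.getMixer(mixers[i]).isLineSupported(capture)) {
                    System.out.println(i + ": " + mixers[i].getName()
                            + " - " + mixers[i].getDescription());
                }
            }
        }
    }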

  • During voice chat, voice is not received from the Creative sound card, but outgoing voice works correctly

    Hi!
    I have recently faced a problem with my Creative sound card. When I play a sound, no sound comes from the sound card; no error message appears, but the monitor screen displays the sound animations in Windows Media Player. I have also tested my speakers and connections and found no problem with them. I also changed the Creative sound card software's settings to default, but all in vain. Besides, when I unchecked the check box ("Only Show Digital Sound") in the Creative sound card software, the analog music is received, but it is only music without any human voice. The same problem occurs during voice chat.
    Please help me in this regard.
    Thanks in advance.

    There is no confusion (anymore); I'm just trying to get the word out that nobody wants to support these OEM cards. Not you and not Dell. Does Dell even have the rights to redistribute the apps to work with their versions of your cards? Card works, drivers work, no apps... see my dilemma? Just a heads up to anyone else with this problem. I've looked to no avail for an updated set of apps for this OEM card. Really kind of annoyed by this. I mean, why can't Creative support their own card? I know Dell bought it, but it is Creative, and you made money from it, so support it. Seems pretty cut and dried to me. Disgruntled Axe

  • Whiteboard-Voice Chat archiving

    Hi,
    I am developing an application which uses 2 applets in a single JSP page: one as a whiteboard where we can draw, and another as a combined voice chat and text chat application.
    I need to save both the drawing and the chat content in the database, then retrieve and show them to the user with the drawing and the voice chat content in sync.
    How can this be done? Please advise me.
    Thanks
    Ram

    Hi,
    Creator 2.0 does not have support for creating applets. You could use free tools like NetBeans (http://www.netbeans.org) to create the applet and then import it into Creator.
    RK.
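
    The reply doesn't address the syncing part of the question, so as a suggestion rather than something from the thread: archive every whiteboard stroke and every chat or audio chunk as a timestamped record in one table, then replay them ordered by timestamp so drawing and voice stay in sync. A minimal sketch of such a record (names are placeholders):

    import java.io.Serializable;

    // One archived event: a whiteboard stroke, a text message, or an audio
    // chunk, stamped with the time it happened so all three replay in sync.
    public class SessionEvent implements Serializable {
        public enum Kind { STROKE, TEXT, AUDIO }

        public final long timestampMillis;  // when the event occurred
        public final Kind kind;             // which applet produced it
        public final byte[] payload;        // serialized stroke, text bytes, or audio frame

        public SessionEvent(long timestampMillis, Kind kind, byte[] payload) {
            this.timestampMillis = timestampMillis;
            this.kind = kind;
            this.payload = payload;
        }
    }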
