Voice chat program

After a few days' work, I have completed the main voice chat program. I know there are many people like me who want this, so I am posting the program here. I will complete the GUI work later. There is still one problem: if the speaker does not speak for a while, the listening client hears noise. Can someone help me?
package com.longshine.voice;

import java.io.*;
import java.net.*;
import java.util.*;

public class Server {

    // The ServerSocket we'll use for accepting new connections
    private ServerSocket ss;

    // A mapping from sockets to DataOutputStreams. This will
    // help us avoid having to create a DataOutputStream each time
    // we want to write to a stream.
    private Hashtable outputStreams = new Hashtable();

    final int bufSize = 16384;

    // current speaker
    private int speakman = -1;

    // Constructor and while-accept loop all in one.
    public Server( int port ) throws IOException {
        // All we have to do is listen
        listen( port );
    }

    private void listen( int port ) throws IOException {
        // Create the ServerSocket
        ss = new ServerSocket( port );

        // Tell the world we're ready to go
        System.out.println( "Listening on " + ss );

        // Keep accepting connections forever
        while (true) {
            // Grab the next incoming connection
            Socket s = ss.accept();

            // Tell the world we've got it
            System.out.println( "Connection from " + s );

            // Create a DataOutputStream for writing data to the other side
            DataOutputStream dout = new DataOutputStream( s.getOutputStream() );

            // Save this stream so we don't need to make it again
            outputStreams.put( s, dout );

            // Create a new thread for this connection, and then forget about it
            new VoiceServer( this, s );
        }
    }

    // Get an enumeration of all the OutputStreams, one for each client
    // connected to us
    Enumeration getOutputStreams() {
        return outputStreams.elements();
    }

    // Send a block of audio to every client except the sender (utility routine)
    void sendToAll( byte[] voice, Socket socket ) {
        // We synchronize on this because another thread might be
        // calling removeConnection() and this would screw us up
        // as we tried to walk through the list
        synchronized( outputStreams ) {
            // For each client ...
            for (Enumeration e = outputStreams.keys(); e.hasMoreElements(); ) {
                // ... get its socket and skip the sender ...
                Socket tmp = (Socket) e.nextElement();
                if (!tmp.equals( socket )) {
                    try {
                        // ... reuse the DataOutputStream cached at accept time ...
                        DataOutputStream dout = (DataOutputStream) outputStreams.get( tmp );
                        // ... and send the audio block. Note: this always writes the
                        // full 44096-byte buffer, even when the sender delivered
                        // fewer bytes (see the note after the code).
                        dout.write( voice, 0, 44096 );
                        dout.flush();
                    } catch( IOException ie ) {
                        System.out.println( ie );
                    }
                }
            }
        }
    }

    // Remove a socket, and its corresponding output stream, from our
    // list. This is usually called by a connection thread that has
    // discovered that the connection to the client is dead.
    void removeConnection( Socket s ) {
        // Synchronize so we don't mess up sendToAll() while it walks
        // down the list of all output streams
        synchronized( outputStreams ) {
            // Tell the world
            System.out.println( "Removing connection to " + s );

            // Remove it from our hashtable/list
            outputStreams.remove( s );

            // Make sure it's closed
            try {
                s.close();
            } catch( IOException ie ) {
                System.out.println( "Error closing " + s );
                ie.printStackTrace();
            }
        }
    }

    // Main routine
    // Usage: java Server <port>
    static public void main( String args[] ) throws Exception {
        // Get the port # from the command line
        int port = Integer.parseInt( args[0] );

        // Create a Server object, which will automatically begin
        // accepting connections.
        new Server( port );
    }
}
package com.longshine.voice;

import java.io.*;
import java.net.*;

public class VoiceServer extends Thread {

    // The Server that spawned us
    private Server server;

    final int bufSize = 16384;

    // The Socket connected to our client
    private Socket socket;

    // Constructor.
    public VoiceServer( Server server, Socket socket ) {
        // Save the parameters
        this.server = server;
        this.socket = socket;

        // Start up the thread
        start();
    }

    // This runs in a separate thread when start() is called in the
    // constructor.
    public void run() {
        try {
            // Create a DataInputStream for communication; the client
            // is using a DataOutputStream to write to us
            DataInputStream din = new DataInputStream( socket.getInputStream() );
            byte[] voice = new byte[44096];

            // Over and over, forever ...
            while (true) {
                // ... read the next block of audio (this may fill only part
                // of the buffer, or return -1 when the client disconnects) ...
                int bytes = din.read( voice );
                if (bytes == -1) {
                    break;
                }
                // ... and have the server send it to all other clients
                server.sendToAll( voice, socket );
            }
        } catch( EOFException ie ) {
            // This doesn't need an error message
        } catch( IOException ie ) {
            // This does; tell the world!
            ie.printStackTrace();
        } finally {
            // The connection is closed for one reason or another,
            // so have the server deal with it
            server.removeConnection( socket );
        }
    }
}
package com.longshine.voice;

import java.io.*;
import java.net.*;

public class Client {

    private String host = "";
    private String port = "";
    private Socket socket;
    private DataOutputStream dout;
    private DataInputStream din;
    private Capture capture = null;
    private Play play = null;

    public Client( String host, String port ) {
        this.host = host;
        this.port = port;
    }

    public void init() {
        try {
            // Connect to the server and wrap the socket's streams
            socket = new Socket( host, Integer.parseInt( port ) );
            din = new DataInputStream( socket.getInputStream() );
            dout = new DataOutputStream( socket.getOutputStream() );
            capture = new Capture( dout );
            play = new Play( din );
        } catch( Exception e ) {
            e.printStackTrace();
        }
    }

    // Usage: java Client say|hear
    public static void main( String[] args ) {
        Client client = new Client( "172.18.220.176", "5678" );
        client.init();
        if (args[0].equalsIgnoreCase( "say" )) {
            client.capture.start();
        }
        if (args[0].equalsIgnoreCase( "hear" )) {
            client.play.start();
        }
    }
}
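
For anyone who wants to try it: assuming the classes above are compiled under com.longshine.voice, and the host hard-coded in Client.main() is changed to your server's address, the server is started with "java com.longshine.voice.Server 5678" (5678 being the port hard-coded in Client.main()), and each client is started with "java com.longshine.voice.Client say" to speak or "java com.longshine.voice.Client hear" to listen.
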
package com.longshine.voice;

import java.io.*;
import javax.sound.sampled.*;

public class Play implements Runnable {

    SourceDataLine line;
    Thread thread;
    String errStr = null;
    DataInputStream in = null;
    AudioInputStream audioInputStream;
    final int bufSize = 16384;
    int duration = 0;

    public Play( DataInputStream in ) {
        this.in = in;
    }

    public void start() {
        errStr = null;
        thread = new Thread( this );
        thread.setName( "Playback" );
        thread.start();
    }

    public void stop() {
        thread = null;
    }

    private void shutDown( String message ) {
        if ((errStr = message) != null) {
            System.err.println( errStr );
        }
        if (thread != null) {
            thread = null;
        }
    }

    // Unused at the moment; kept from the original file-playback code.
    public void createAudioInputStream() {
        if (in != null) {
            try {
                errStr = null;
                BufferedInputStream oin = new BufferedInputStream( in );
                audioInputStream = AudioSystem.getAudioInputStream( oin );
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }

    public void run() {
        // nothing to play back from
        if (in == null) {
            return;
        }

        // get an AudioInputStream of the desired format for playback
        AudioFormat format = getFormat();
        audioInputStream = new AudioInputStream( in, format, AudioSystem.NOT_SPECIFIED );
        AudioInputStream playbackInputStream =
                AudioSystem.getAudioInputStream( format, audioInputStream );
        if (playbackInputStream == null) {
            shutDown( "Unable to convert stream of format " + audioInputStream +
                      " to format " + format );
            return;
        }

        // define the required attributes for our line,
        // and make sure a compatible line is supported.
        DataLine.Info info = new DataLine.Info( SourceDataLine.class, format );
        if (!AudioSystem.isLineSupported( info )) {
            shutDown( "Line matching " + info + " not supported." );
            return;
        }

        // get and open the source data line for playback.
        try {
            line = (SourceDataLine) AudioSystem.getLine( info );
            line.open( format, bufSize );
        } catch (LineUnavailableException ex) {
            shutDown( "Unable to open the line: " + ex );
            return;
        }

        // play back the received audio data
        int frameSizeInBytes = format.getFrameSize();
        int bufferLengthInFrames = line.getBufferSize() / 8;
        int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
        byte[] data = new byte[bufferLengthInBytes];
        int numBytesRead = 0;

        // start the source data line
        line.start();

        while (thread != null) {
            try {
                if ((numBytesRead = playbackInputStream.read( data )) == -1) {
                    break;
                }
                int numBytesRemaining = numBytesRead;
                while (numBytesRemaining > 0) {
                    numBytesRemaining -= line.write( data, 0, numBytesRemaining );
                }
            } catch (Exception e) {
                shutDown( "Error during playback: " + e );
                break;
            }
        }

        // we reached the end of the stream. let the data play out, then
        // stop and close the line.
        if (thread != null) {
            line.drain();
        }
        line.stop();
        line.close();
        line = null;
        shutDown( null );
    }

    public AudioFormat getFormat() {
        AudioFormat.Encoding encoding = AudioFormat.Encoding.ULAW;
        String encString = "linear";
        float rate = 44100.0f;
        int sampleSize = 16;
        String signedString = "signed";
        boolean bigEndian = true;
        int channels = 2;
        if (encString.equals( "linear" )) {
            if (signedString.equals( "signed" )) {
                encoding = AudioFormat.Encoding.PCM_SIGNED;
            } else {
                encoding = AudioFormat.Encoding.PCM_UNSIGNED;
            }
        } else if (encString.equals( "alaw" )) {
            encoding = AudioFormat.Encoding.ALAW;
        }
        // 44.1 kHz, 16-bit, signed, big-endian, stereo PCM
        return new AudioFormat( encoding, rate, sampleSize,
                channels, (sampleSize / 8) * channels, rate, bigEndian );
    }
} // End class Play
package com.longshine.voice;

import java.io.*;
import javax.sound.sampled.*;

public class Capture implements Runnable {

    TargetDataLine line;
    Thread thread;
    String errStr = null;
    DataOutputStream out = null;
    AudioInputStream audioInputStream;
    final int bufSize = 16384;
    int duration = 0;

    public Capture( DataOutputStream out ) {
        this.out = out;
    }

    public void start() {
        errStr = null;
        thread = new Thread( this );
        thread.setName( "Capture" );
        thread.start();
    }

    public void stop() {
        thread = null;
    }

    private void shutDown( String message ) {
        if ((errStr = message) != null) {
            System.out.println( errStr );
        }
        if (thread != null) {
            thread = null;
        }
    }

    public AudioFormat getFormat() {
        AudioFormat.Encoding encoding = AudioFormat.Encoding.ULAW;
        String encString = "linear";
        float rate = 44100.0f;
        int sampleSize = 16;
        String signedString = "signed";
        boolean bigEndian = true;
        int channels = 2;
        if (encString.equals( "linear" )) {
            if (signedString.equals( "signed" )) {
                encoding = AudioFormat.Encoding.PCM_SIGNED;
            } else {
                encoding = AudioFormat.Encoding.PCM_UNSIGNED;
            }
        } else if (encString.equals( "alaw" )) {
            encoding = AudioFormat.Encoding.ALAW;
        }
        // must match the format used by Play on the receiving side
        return new AudioFormat( encoding, rate, sampleSize,
                channels, (sampleSize / 8) * channels, rate, bigEndian );
    }

    public void run() {
        duration = 0;
        audioInputStream = null;

        // define the required attributes for our line,
        // and make sure a compatible line is supported.
        AudioFormat format = getFormat();
        DataLine.Info info = new DataLine.Info( TargetDataLine.class, format );
        if (!AudioSystem.isLineSupported( info )) {
            shutDown( "Line matching " + info + " not supported." );
            return;
        }

        // get and open the target data line for capture.
        try {
            line = (TargetDataLine) AudioSystem.getLine( info );
            line.open( format, line.getBufferSize() );
        } catch (LineUnavailableException ex) {
            shutDown( "Unable to open the line: " + ex );
            return;
        } catch (SecurityException ex) {
            shutDown( ex.toString() );
            return;
        } catch (Exception ex) {
            shutDown( ex.toString() );
            return;
        }

        // capture the audio data and push it to the server
        int frameSizeInBytes = format.getFrameSize();
        int bufferLengthInFrames = line.getBufferSize() / 8;
        int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
        byte[] data = new byte[bufferLengthInBytes];
        int numBytesRead;

        line.start();

        try {
            while (thread != null) {
                if ((numBytesRead = line.read( data, 0, bufferLengthInBytes )) == -1) {
                    break;
                }
                if (numBytesRead > 0) {
                    out.write( data, 0, numBytesRead );
                    out.flush();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        // we reached the end of the stream. stop and close the line.
        line.stop();
        line.close();
        line = null;

        // stop and close the output stream
        try {
            out.flush();
            out.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
} // End class Capture
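
A note on the noise problem mentioned above: VoiceServer reads into a 44096-byte buffer, but din.read() may return far fewer bytes than that, while Server.sendToAll() still forwards all 44096 bytes. The tail of the buffer, which still holds samples from earlier reads, is forwarded too, which would explain the noise heard during silence. A minimal sketch of one possible fix is to pass the actual byte count through; the extra length parameter below is my addition, not part of the posted code.

// In VoiceServer.run(): forward only the bytes that were actually read.
int bytes = din.read( voice );
if (bytes > 0) {
    server.sendToAll( voice, bytes, socket );
}

// In Server: accept the length and write exactly that many bytes.
void sendToAll( byte[] voice, int length, Socket socket ) {
    synchronized( outputStreams ) {
        for (Enumeration e = outputStreams.keys(); e.hasMoreElements(); ) {
            Socket tmp = (Socket) e.nextElement();
            if (!tmp.equals( socket )) {
                try {
                    DataOutputStream dout = (DataOutputStream) outputStreams.get( tmp );
                    dout.write( voice, 0, length );
                    dout.flush();
                } catch( IOException ie ) {
                    System.out.println( ie );
                }
            }
        }
    }
}

Even with that change, a TCP stream buffers and delays audio; many voice chat programs move the audio path to UDP for that reason.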

Similar Messages

  • Video/ Voice chat programs for Mac Mini

    Hello all Mac Mini Users,
    I just got my new mini and I'm looking to get it set up for voice/video chat. Any suggestions on which program is the best and also has the best compatibility with PCs? Any suggestions for webcams? Logitech vs. Apple's iSight?
    thanks

    Have you looked at iChat? Which IM applications do you need compatibility with? I believe iChat is compatible with AIM, ICQ and Jabber-compliant services.
    I like my iSight, and it has a Firewire interface, which is faster than USB 2.0, especially for sustained data transmission like video. But the new built-in iSights on the iMacs run on the USB bus...

  • Free Voice Chat Code

    Every once in a while, somebody posts a message looking for open source voice chat code, so I am posting this link in hopes that whoever does a search in the future will find it. This isn't my code; it's something I came across a while back while looking for the same thing. It is an applet (there is also a non-applet version). It works fine, has a few bells and whistles, and the person who wrote it will answer questions if you can catch them on AIM, and also posts answers in the comment section below the code download.
    This is the direct link:
    http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=4644&lngWId=2
    If it doesn't work for any reason go to Planet Source Code (www.planet-source-code.com), and type in "voice chat" in their search bar, with "java & javascript" selected in the language selection menu.
    Have fun.

    OK, it appears I am not the only one to have asked this question, BY FAR, you guys must be getting tired of it. Sorry about that. I now searched the forum as I should have done before. I found:
    http://java.sun.com/products/java-media/jmf/2.1.1/samples/samplecode.html
    Is JMStudio what I should look at to learn it? Or is there a more basic voice chat program out there. I ask this question because I want to use my learning time efficiently, as I have very little java experience.

  • Have anyone got code voice chat?

    I need your help for my project. I need voice chat code using JMF. Please send it to my mail [email protected]
    Thanks a lot


  • Error running cirrus voice chat application

    hello experts,
    1) . As per the guidelines... i signed up on cirrus to get the developer key and rtmfp netconnection key.... i received the following:-
    Your (codename) Cirrus developer key is:
    6620faa05e8785b2ea3616a2-.......
    To connect to the Cirrus service, open an RTMFP NetConnection to:
    rtmfp://p2p.rtmfp.net/6620faa05e8785b2ea3616a2-.....
    2). Inside the code(mxml) i made following changes
            // rtmfp server address (Adobe Cirrus or FMS)
                [Bindable] private var connectUrl:String = "rtmfp://p2p.rtmfp.net";
                // developer key, please insert your developer key here
                private const DeveloperKey:String = "6620faa05e8785b2ea3616a2-7ffe01f89c02";
                // please insert your web service URL here for exchanging peer ID
                private const WebServiceUrl:String = "rtmfp://p2p.rtmfp.net/6620faa05e8785b2ea3616a2-7ffe01f89c02/";
    3). When I run the application I am receiving this error:
    ScriptDebug: Connecting to rtmfp://p2p.rtmfp.net
    ScriptDebug: NetConnection event: NetConnection.Connect.Success
    ScriptDebug: Connected, my ID: 78e69f93deef48850a2aa296362b0aebc78caa83423233226d2cf7a97e886b91
    ScriptDebug: ID event: idManagerError
    ScriptDebug: Error description: HTTP error: (mx.messaging.messages::ErrorMessage)#0
      body = ""
      clientId = "DirectHTTPChannel0"


  • Voice chat in ut2k4 won't work [not solved yet]

    I have already tried the following resources:
    http://www.linux-gamers.net/modules/wfs … ticleid=34
    http://www.thehaus.net/Tips/UT/ut2004icculusfaq.shtml
    my ~/.openalrc:
    Contains user settings for OpenAL
    ; Which backend? Tried in order of appearance.
    ;(define devices '(native alsa sdl esd arts null))
    (define devices '(alsa sdl esd))
    ; Four speaker surround with ALSA.
    (define speaker-num 2)
    ;(define alsa-out-device "surround40:0,0")
    (define alsa-out-device "hw:0,0")
    ; For alsa-in support. Mainly for using voice chat.
    (define alsa-in-device "hw:1,0")
    ;hw:1,0 because i use my second sound card to record on many programs (ie audacity) bc emu10k1 only records the sound out not mic.
    ; Some drivers do not support select.
    ;(define native-use-select ;t)
    symptoms:
    Sound works great, clear and not laggy at all. I have all the voice chat keys mapped and all the voice chat options enabled. Every time I press a key that chats, either team chat or public chat, the sound gets sort of quiet (like it's supposed to, to avoid feedback). The problem is nobody hears me at all, not even a "turn up your volume, dude!" or "what was that?!"
    hardware
    amd 3200 64-bit winchester
    nvidia 6800 256
    2gb ocz 2-3-2-5
    250 gb sata hd linux drive
    160 gb windows drive
    water cooling #complete unecessary i know : )
    audigy gamer # works with emu10k1 fine no recording though
    chaintech zenith VE nforce 4 with 7 channel audio using  snd_intel8x0 module since nvsound only does oss

    thanks for the previous replies!
    [alsamixer v1.0.9a capture view -- Card: Audigy 1 or 2 [Unknown]; Chip: SB Audigy; Item: PCM; capture levels around 86-96 on the PCM, Synth, Line, CD, Mic, IEC958 O, Aux and Analog M channels]
    This is what I see with "alsamixer -V capture". I turned on mic capture and increased its volume, but the same symptoms persist in any recording app set to use the Audigy Gamer: no matter where I plug in the mic, the app only records the outgoing sound from the sound servers and apps, never the mic.
    BTW: on every distro, my KMix has never shown anything in the input tab (no record sources) when I click the mixer in the KDE tray.

  • Voice chat not working.

    Hello,
    I was just trying the code given in the JMF documentation for voice chat. When I ran the code below, it doesn't throw any errors, but when I speak into my microphone I am unable to hear anything. What can be the problem?
    package sendmicrophonedata;
    import java.io.IOException;
    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;
    import javax.media.DataSink;
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.NoProcessorException;
    import javax.media.NotRealizedError;
    import javax.media.Processor;
    import javax.media.control.FormatControl;
    import javax.media.control.TrackControl;
    import javax.media.format.AudioFormat;
    import javax.media.protocol.ContentDescriptor;
    import javax.media.protocol.DataSource;
    public class Main {
        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            // First find a capture device that will capture linear audio
            // data at 8bit 8Khz
            AudioFormat format = new AudioFormat(AudioFormat.LINEAR, 8000, 8, 1);
            Vector devices = CaptureDeviceManager.getDeviceList(format);
            CaptureDeviceInfo di = null;
            if (devices.size() > 0) {
                di = (CaptureDeviceInfo) devices.elementAt(0);
            } else {
                // exit if we could not find the relevant capture device.
                System.exit(-1);
            }
            // Create a processor for this capture device & exit if we
            // cannot create it
            Processor processor = null;
            try {
                processor = Manager.createProcessor(di.getLocator());
            } catch (IOException e) {
                System.exit(-1);
            } catch (NoProcessorException e) {
                System.exit(-1);
            }
            // configure the processor
            processor.configure();
            // block until it has been configured
            processor.setContentDescriptor(new ContentDescriptor(ContentDescriptor.RAW));
            TrackControl track[] = processor.getTrackControls();
            boolean encodingOk = false;
            // Go through the tracks and try to program one of them to
            // output gsm data.
            for (int i = 0; i < track.length; i++) {
                if (!encodingOk && track[i] instanceof FormatControl) {
                    if (((FormatControl) track[i]).setFormat(
                            new AudioFormat(AudioFormat.GSM_RTP, 8000, 8, 1)) == null) {
                        track[i].setEnabled(false);
                    } else {
                        encodingOk = true;
                    }
                } else {
                    // we could not set this track to gsm, so disable it
                    track[i].setEnabled(false);
                }
            }
            // At this point, we have determined whether we can send out
            // gsm data or not.
            // realize the processor
            if (encodingOk) {
                processor.realize();
                // block until realized.
                // get the output datasource of the processor and exit
                // if we fail
                DataSource ds = null;
                try {
                    ds = processor.getDataOutput();
                } catch (NotRealizedError e) {
                    System.exit(-1);
                }
                // hand this datasource to manager for creating an RTP
                // datasink; our RTP datasink will multicast the audio
                try {
                    String url = "rtp://192.168.1.255:8888/audio";
                    MediaLocator m = new MediaLocator(url);
                    DataSink d = Manager.createDataSink(ds, m);
                    d.open();
                    d.start();
                } catch (Exception e) {
                    System.exit(-1);
                }
            }
        }
    }

    LaalaPanchal wrote:
    Hello,
    I was just trying the code given in the JMF documentation for voice chat. When I ran the code below, it doesn't throw any errors, but when I speak into my microphone I am unable to hear anything. What can be the problem?
    Why exactly are you expecting to hear anything? You don't have anything in your code to render sound, so why are you expecting it to?
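    For reference, a minimal sketch of a receiving side under JMF (my own illustration, assuming the same rtp://192.168.1.255:8888/audio session the transmitter above multicasts to; error handling omitted):
    import javax.media.Manager;
    import javax.media.MediaLocator;
    import javax.media.Player;
    public class Receive {
        public static void main(String[] args) throws Exception {
            // Create a Player directly on the RTP session; once packets arrive,
            // JMF realizes the Player and renders the audio.
            Player player = Manager.createPlayer(
                    new MediaLocator("rtp://192.168.1.255:8888/audio"));
            player.start();
            // keep the JVM alive while audio is playing
            Thread.sleep(Long.MAX_VALUE);
        }
    }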

  • Voice chat for LAN Working :) ,  but For Internet Voice Chat Not Working

    Hi all, I have developed voice chat for LAN, but when I try to connect over the internet it doesn't work. Please let me know why this is so.
    I am behind a firewall, and the firewall doesn't allow me to connect to the server or transmit RTP data to the other user. Why is that? Please reply as soon as possible!

    That's exactly what I am talking about. Your IP address is private, meaning it is not routable outside your organization. That means you cannot receive anything from outside, because technically this private address does not exist outside your network.
    However, you might be able to send audio to someone outside who has a public IP address (PLEASE read up on private and public IPv4 addresses and NAT in order to understand what your problem is; it's just one wiki page that you can dig up on Google).
    As it stands, you simply cannot use your chat from inside your organization to chat with outside people. It will not work. You need public IP addresses on both ends (i.e. you and the remote party), plus firewall rules that allow UDP traffic on the ports your program uses.
    If you want to overcome the private addressing problem manually, I don't know how to do it, but what I know for sure is that it would be painful. The application I made uses a VoIP server (a SIP server and an RTP proxy server). These servers do private address (NAT) handling for my audio/video streams.
    You have an option, though. Go to your network admin and pick his/her brain. Ask for a public IP address (they might have some in reserve). Do the same thing at the remote end.

  • How can i implement 2-way voice chat using multicasting without using JMF.

    I have implemented 2-way voice chat using multicasting, but there is a problem: when the server multicasts the voice UDP packets, all the clients can hear it properly, but when the clients speak, their voices interrupt each other and none of them is clear.
    Can anyone help me solve this problem with some sample code? Please.

    To implement what you want, you'd have to create one audio stream per participant, and then use an audio mixer to play all of them simultaneously. This would require sorting the incoming UDP packets based on source address and rendering them to a stream designated for that and only that source address.
    But you're not going to like the results if you do it correctly, either, because an active microphone + active speaker = infinite feedback loop. In this case, it might not be "fast" enough to drive your signal past saturation, but you will definitely get a horrible echo effect.
    What you're doing currently (or at least how it sounds to me) is taking pieces of a bunch of separate audio streams and treating them like they're one audio stream. They aren't, and you can't treat them as if they are. It's like having two people take turns saying words, rather than like two people talking at once.
    Do Can you you understand see why my this point? is a bad plan?
    And that's not even the biggest problem...
    If you have 3 people talking simultaneously, and you're interleaving their packets rather than mixing their data, you're actually spending 3 times as much time playing as you should be. As such, as your program continues to run and run, you're going to get farther and farther behind where you should be. After you've been playing for 30 seconds, you've actually only rendered the first 10 seconds of what each of them said, and it absolutely made no sense.
    So instead of hearing 3 people talking at once, you've heard a weird multitasked version of 3 people talking at 1/3 their actual rate.
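    A minimal sketch of the mixing step described above (my own illustration, assuming every participant sends 16-bit signed big-endian PCM at the same sample rate and each buffer is at least 'length' bytes long):
    import java.util.List;
    class PcmMixer {
        // Mix several 16-bit signed big-endian PCM buffers by summing the
        // corresponding samples and clipping to the 16-bit range.
        static byte[] mix(List<byte[]> buffers, int length) {
            byte[] out = new byte[length];
            for (int i = 0; i < length; i += 2) {
                int sum = 0;
                for (byte[] b : buffers) {
                    sum += (short) ((b[i] << 8) | (b[i + 1] & 0xff)); // big-endian sample
                }
                if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;     // clip
                if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
                out[i] = (byte) (sum >> 8);
                out[i + 1] = (byte) sum;
            }
            return out;
        }
    }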

  • Voice chat using Sound API?

    Hi there,
    I'm a university student, trying to find a way to create an application in java with voice chat feature. It will actually be a call center simulator app consisting of two parts, a server and a client part:
    The multithreaded server manages the clients, and the JPA database;
    The client's job is to login to the server, and then he'll get a randomly selected person's data and phone number by the server - in this case, IP address is the phone number, and the person is another available client. Then he can call that person, ask the questions regarding for example, a bank card (if it is a bank's call center), jot down the answers to a form, and then hang up; then the client will send said person's data to the server, who updates it in the database.
    For the voice chat part, I want to use the Java Sound API, and I've found this example:
    [Voice Chat Using Java|http://javasolution.blogspot.com/2007/04/voice-chat-using-java.html]
    But although this example works, there's a delay of 1.5 - 2 seconds in the audio.
    My questions are:
    - Is it a good idea to implement a voice chat feature using the Java Sound API, something similar to the example above on the link?
    - Is it possible to somehow fix that delay? if yes, then how?
    Also, how would you imagine/structure a program like that?
    Let me remind you that it's only a simulation, and we're not talking about actual phone-to-net calls or something like that; only LAN and internet calls, so clients should need no more than a microphone and a speaker to work with it, and also, the server shouldn't need extra hardware,too.
    Thank you in advance for your replies,
    Ben Dash
    university student

    bendash wrote:
    My questions are:
    - Is it a good idea to implement a voice chat feature using the Java Sound API, something similar to the example above on the link?
    If you need to do the voice chat in Java for whatever reason, then yes...
    - Is it possible to somehow fix that delay? if yes, then how?
    From what I saw from glancing at that code, it's transporting the audio streams via a TCP stream. TCP streams retransmit data when it's lost, which is actually something that you don't want in a real-time application. If you lose some packets with audio data, and those are retransmitted, your receiver gets progressively more and more behind (because the audio stream has to wait for the missing bytes to arrive before it can continue).
    For a simple application like yours, your best bet would be to divide the audio stream into buffer-sized chunks (1024 bytes might be a good buffer size), and transmit those in UDP packets. Make sure to add a packet number to the UDP packets. On the other end, just write some code so that out of order packets are ignored (so if packet 4 arrives after packet 6, for instance, you drop that packet rather than playing it after 6... so you'd get 123(silence)567...
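    A minimal sketch of the UDP approach described above (my own illustration; the 1024-byte chunk size, the 4-byte sequence header and the helper names are assumptions, not from the thread):
    import java.net.*;
    import java.nio.ByteBuffer;
    class UdpAudio {
        // Sender side: wrap each captured chunk with a 4-byte sequence number.
        static void sendChunk(DatagramSocket socket, InetAddress addr, int port,
                              byte[] chunk, int seq) throws Exception {
            ByteBuffer packet = ByteBuffer.allocate(4 + chunk.length);
            packet.putInt(seq).put(chunk);
            socket.send(new DatagramPacket(packet.array(), packet.capacity(), addr, port));
        }
        // Receiver side: returns the new "last played" sequence number; a packet
        // that arrives out of order (seq <= lastSeq) is simply dropped.
        static int receiveChunk(DatagramSocket socket, javax.sound.sampled.SourceDataLine line,
                                int lastSeq) throws Exception {
            byte[] buf = new byte[4 + 1024];
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            socket.receive(p);
            int seq = ByteBuffer.wrap(buf).getInt();
            if (seq > lastSeq) {
                line.write(buf, 4, p.getLength() - 4);  // play only the audio payload
                return seq;
            }
            return lastSeq;  // late packet: skip it rather than playing it out of order
        }
    }
    A real implementation would also pace playback and probably keep a small jitter buffer, but this shows the drop-late-packets idea.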

  • Problem in voice chat applet

    hi all,
    I am developing a voice chat applet using Java (JDK 1.4, JMF). If I run my programs for transmitting and receiving voice from one machine to another (without the applet), they work fine.
    When I send voice from my server machine and receive it on another machine via the application, it works fine.
    But if I use my applet from some other machine (other than the server), it creates problems and shows some errors.
    Can you suggest some solutions?
    Thanks in advance.
    Abhishek

    hi,
    I am also developing a voice chat application. Can you send your source code to me? Maybe I can help you solve your problems.
    my email is [email protected]

  • Voice chat with MSN or YAHOO Messenger on Nokia 77...

    Hi,
    Does anyone know if it is possible to use the voice chat feature in IM programs like MSN and Yahoo on the Nokia 770?
    I appreciate your help.

    AIUI you can only voice-chat on Google Talk. Pidgin might be able to do voice clips on Windows for MSN sometime. Clicking out to VoIP might be your best bet. HTH.

  • Voice chat with psi -- Google chat

    Hello all,
    I have been trying to find a linux IM app that can use voice chat with Google Talk.  As far as I can tell, psi is supposed to support this.
    I tried version 0.11 from pacman, which connects and everything but there does not seem to be any voice chat.
    I used the AUR package and compiled the latest svn version, 0.12rc3 and it works ok, but I still can't find anything within the program to do with voice chatting.
    I have been all over the psi wiki etc., and it all seems to suggest that voice chat is included and is working with google chat, but I sure can't find it.  Some info dates back to late 2005 with the libjingle branch, but I believe libjingle is now in the main program.
    Can someone tell me if voice chat is included in psi, and if so maybe a hint on what might be going wrong for me.
    Cheers,
    Wittfella

    I have been banging my head against this and am slowly getting somewhere now...
    Psi does come with Jingle, but it's not enabled by default, because it's still deemed 'experimental'.
    To get the psi-svn AUR package to compile with voice chat support you have to do a number of things:
    1.  Use the 'configure-jingle' with '--enable-jingle' option instead of the regular 'configure' script.
    ./configure-jingle --prefix=/usr --enable-jingle
    You can either modify the PKGBUILD or do it all manually.
    2.  You have to have a specific version of ortp installed, otherwise it will not compile.  This happens to be ortp-0.7.1, but our pacman version is up to 0.14.0 or so.  Luckily you can find the sources for ortp-0.7.1 and it compiles and installs fine.
    3.  Now the biggest problem.  When you try to compile you will get gazillions of errors.  It turns out they are quite simple, and are caused by an incompatibility with gcc-4.3.  Most of the errors are solved by opening the offending header file and inserting:
    #include <cstring>
    So after all that, the good news is I have managed to create a working copy of psi with voice support, however:-
    psi <--> google talk connects but there is no audio from either end
    psi <--> psi voice chat seems to work fine
    If anyone is interested I created packages for psi and ortp.  Let me know if you have any more success with google talk.
    psi-jingle-1166-1-i686.pkg.tar.gz
    ortp-0.7.1-1-i686.pkg.tar.gz

  • Voice Chat for Starcraft 2?

    Anyone else having difficulties with their sound on a 2011 MBP and in-game voice chat in StarCraft 2?
    If so, any solutions?
    Thanks.


  • IChat Problem. A/V,Voice Chat,Screen Sharing.(Please Help)

    I have owned a MacBook for a year but have never tried to use iChat.
    Now a friend of mine bought a MacBook last week, and I asked him to create an iChat account.
    He and I are both using Google Talk accounts. At first, chatting over IM worked fine.
    After that I tried A/V chat, Screen Sharing and Voice chat. Every time he accepts the invitation it takes a long time to process, and then a window comes up asking me to send an error report to Apple. I have tried A/V chat, Screen Sharing and Voice chat many times and it still ends with the same result. Please help me. BTW, I have checked all the system and program sharing settings and everything is good. Please help.

    Hi,
    Welcome to the    Discussions
    A year old MacBook could be running OS 10.5.x (Leopard) and iChat 4.x.x
    Or it could have come with Snow Leopard (10.6.x) and iChat 5.x.x if updated.
    Your specs say that you are running Snow Leopard (10.6.4).
    Can you go to the System Preferences > Security and if the Firewall is On can you go to the Advanced tab ?
    IS Allow Signed Apps in use ?
    If so has iChat been Added to the List ?
    Also Stealth needs to be Off.
    Have you allowed the ports iChat uses through your Routing device ?
    Most domestic devices only have the first 1024 ports open and iChat uses ports above this figure.
    If your device has it, Enable UPnP to open the ports.
    Nothing in System Preferences > Sharing needs to be On (Selected) for iChat to do Screen Sharing.
    9:01 PM Saturday; July 17, 2010
    Please, if posting Logs, do not post any Log info after the line "Binary Images for iChat"
