Stream Sound Output Over Network?
This is a somewhat crazy and off-the-wall question, but here's the scenario:
I have three Windows boxes and a Linux box. The problem is that I have three sets of speakers, and they are very difficult to move. This leaves the Linux box without speakers, but I would still like to be able to play sound on it.
Is there some way I can take the master out and stream it over the network, with some kind of client on the Windows boxes to play the stream?
One would think that some sort of streaming server would support something like this, but I've googled it for a while and have found nothing. I basically just want to be able to play something on remote speakers, but I haven't found a solution yet.
Anyone have an idea on how to do this?
Thanks in advance for any help you can offer.
NAS seems as if it would do the job. I'll have to try it tomorrow.
The story is that I have a Klipsch 2.1 THX certified surround set, but they're connected to an HP compy with mobo problems (the mobo somehow corrupts the hard drive; the problem has persisted across 4 formats and reinstalls and 2 different hard drives). It's a PITA to move the speakers, and the wiring is sensitive and tends to crackle and lose the left speaker.
If I can stream audio there using a lightweight client, it will be great, as that compy takes forever to load anything. Literally forever. Four minutes to load Firefox! WOOHOO!
Similar Messages
-
Sound Output to network sound?
How do you set your sound options so that it will default to a network (Airport Express) sound device?
Todd
Hi
Can't be sure but System Preferences>Sound>Output?
In Applications/Utilities/Audio MIDI Setup there's a field called Default Output in the top-right corner, maybe that?
Steve
Oops, wrong on both counts it seems
Message was edited by: SteveLamb0 -
[SOLVED] How to stream audio output over wifi?
Hello,
I have the following problem. I would like to listen to music from my laptop on my home Onkyo AV receiver (amplifier). Both the laptop and the receiver are connected to the same network, the laptop via wifi, the receiver via LAN. The receiver is capable of playing internet radio streams in various formats, e.g. ogg or mp3.
My idea is to produce an ogg stream on the laptop and listen to that stream through the amplifier. I searched for a solution, and everything I found had some disadvantage: MPD + MPC can play only from a database of local files, not from Jamendo or other network services; DLNA can play only local files; Icecast + Ices can play only from a predefined playlist.
My idea is to have something independent: something that takes the audio data normally played on the soundcard and feeds it to a network stream, something that works with any player, audio or video, or even with desktop notifications.
Does anybody here know how to do it?
Thanks!
Last edited by KejPi (2012-03-14 20:18:28)
Thank you all for your hints. In the end I settled on an ALSA-based solution that simply works for all applications, runs smoothly in the background, and is there when I need it; in other words, it doesn't bother me when I don't need it. I use an ALSA loopback with ffmpeg and ffserver for streaming in mp3 format. This is my final solution:
You need the following:
alsa
ffmpeg
if you use KDE4, the phonon-gstreamer backend, as in my case phonon-vlc didn't work (the sound was heavily distorted)
Procedure:
Load the loopback module (2 loopback substreams are enough):
modprobe snd-aloop pcm_substreams=2
For a permanent solution follow the Arch wiki: https://wiki.archlinux.org/index.php/Modprobe
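For reference, on a systemd-based system like Arch the permanent setup from that wiki page usually amounts to two small files; the filenames below are illustrative, so check the wiki for the exact paths on your setup:

```
# /etc/modules-load.d/snd-aloop.conf: load the loopback module at boot
snd-aloop

# /etc/modprobe.d/snd-aloop.conf: module options applied whenever it loads
options snd-aloop pcm_substreams=2
```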
Create an ALSA configuration file and store it either in ~/.asoundrc (just for your user) or in /etc/asound.conf (system-wide):
pcm.!default {
    type asym
    playback.pcm "LoopAndReal"
    capture.pcm "hw:0,0"
    hint {
        show on
        description "Default with loopback"
    }
}

# "type plug" is mandatory to convert the sample type
pcm.LoopAndReal {
    type plug
    slave.pcm mdev
    route_policy "duplicate"
    hint {
        show on
        description "LoopAndReal"
    }
}

pcm.mdev {
    type multi
    slaves.a.pcm pcm.MixReal
    slaves.a.channels 2
    slaves.b.pcm pcm.MixLoopback
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

pcm.MixReal {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        #rate 48000
        #rate 44100
        #periods 128
        #period_time 0
        #period_size 1024  # must be power of 2
        #buffer_size 8192
    }
}

pcm.MixLoopback {
    type dmix
    ipc_key 1025
    slave {
        pcm "hw:Loopback,0,0"
        #rate 48000
        #rate 44100
        #periods 128
        #period_time 0
        #period_size 1024  # must be power of 2
        #buffer_size 8192
    }
}
You can play with sample rates and buffer sizes if you have any problems. This configuration works on my system.
Prepare the ffserver configuration and store it either in the default location /etc/ffserver.conf (system-wide) or anywhere in your home directory:
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
Port 8090
# Address on which the server is bound. Only useful if you have
# several network interfaces.
BindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
# MaxClients maximum limit.
MaxHTTPConnections 2000
# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 1000
# This is the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 1000
# Access log file (uses standard Apache log file format)
# '-' is the standard output.
CustomLog -
# Suppress that if you want to launch ffserver as a daemon.
NoDaemon
# Definition of the live feeds. Each live feed contains one video
# and/or audio sequence coming from an ffmpeg encoder or another
# ffserver. This sequence may be encoded simultaneously with several
# codecs at several resolutions.
<Feed feed1.ffm>
# You must use 'ffmpeg' to send a live feed to ffserver. In this
# example, you can type:
# ffmpeg http://localhost:8090/feed1.ffm
# ffserver can also do time shifting. It means that it can stream any
# previously recorded live stream. The request should contain:
# "http://xxxx?date=[YYYY-MM-DDT][[HH:]MM:]SS[.m...]". You must specify
# a path where the feed is stored on disk. You also specify the
# maximum size of the feed, where zero means unlimited. Default:
# File=/tmp/feed_name.ffm FileMaxSize=5M
File /tmp/feed1.ffm
FileMaxSize 200K
# You could specify
# ReadOnlyFile /saved/specialvideo.ffm
# This marks the file as readonly and it will not be deleted or updated.
# Specify launch in order to start ffmpeg automatically.
# First ffmpeg must be defined with an appropriate path if needed,
# after that options can follow, but avoid adding the http:// field
#Launch ffmpeg
# Only allow connections from localhost to the feed.
#ACL allow 127.0.0.1
</Feed>
# Now you can define each stream which will be generated from the
# original audio and video stream. Each format has a filename (here
# 'test1.mpg'). FFServer will send this stream when answering a
# request containing this filename.
# MP3 audio
<Stream stream.mp3>
Feed feed1.ffm
Format mp2
AudioCodec libmp3lame
AudioBitRate 320
AudioChannels 2
AudioSampleRate 44100
NoVideo
</Stream>
# Ogg Vorbis audio
#<Stream test.ogg>
#Feed feed1.ffm
#Format ogg
#AudioCodec libvorbis
#Title "Stream title"
#AudioBitRate 64
#AudioChannels 2
#AudioSampleRate 44100
#NoVideo
#</Stream>
# Special streams
# Server status
<Stream stat.html>
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.1.0 192.168.1.255
#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
</Stream>
# Redirect index.html to the appropriate site
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
This sets up ffserver for streaming in MP3 format, stereo, 320 kbps. Unfortunately I haven't succeeded with OGG Vorbis streaming.
Now you have all the configuration you need; when you want to stream, the following two commands do it:
ffserver -f ffserver.conf
ffmpeg -f alsa -ac 2 -i hw:Loopback,1,0 http://localhost:8090/feed1.ffm
You can test it, for example, with mplayer:
mplayer http://YourLinuxBox:8090/stream.mp3
And that's it. Sound is played by the normal sound card and sent to the stream simultaneously. If you don't want to hear sound from the computer, you can mute your soundcard. The advantage is that you can listen to music on the computer normally, with or without streaming, in both cases without any reconfiguration. To start streaming, just run ffserver and ffmpeg.
Advantages:
+ very simple solution without any special sound server
+ no special SW required (in my case I had already installed everything I needed)
+ streaming on request by two simple commands
+ normal soundcard function
+ streaming in MP3 format that is supported by many home AV receivers
Disadvantages:
- phonon-vlc backend not compatible (also VLC does not work)
- OGG streaming does not work
- some latency (~ 5 sec)
- all sounds are sent to the stream, including various desktop notifications (in KDE this could be managed by phonon) -
iTunes TV sound output over Airport? ... Not!
My present setup of 7.5 plays audio from Music Library, Podcasts, and Radio streams over an Airport network to my stereo.
TV show audio, however, only plays through the built in speakers. Why? Can I fix it?
Ken
It seems that the ability to do that is disabled, most likely because of the latency inherent in AirTunes. There would be a noticeable difference between what you saw on the screen and what you heard.
-
ATV Streaming MP4 HD over network constantly buffering and freezing help
Hi, so the problem is that the bulk of my movie collection is in h264 MP4, and I don't sync movies to the Apple TV; it's mainly a streaming tool for my TV. The problem is that it buffers the movie every 20 minutes, so it pauses and resumes a few minutes later. The strange thing is that my network is gigabit, so it shouldn't have this issue. Another strange thing is that it doesn't do this right away: when I first turned it on this morning it played through an entire movie without hitching, but later it started re-buffering. I would like to believe it's overheating, but I have no idea. Apple Geniuses have been less than helpful; I brought it in and they say nothing is wrong with it. But the issue persists. Any ideas on what the issue is? Thanks for any help.
I've just had the same thing start happening on my network. iTunes library is about 1.7 TB on a software RAID connected via USB and managed by a MacBook CoreDuo with 1 GB RAM. Over the last couple of weeks, I've been noticing increasing cases of buffering, getting up to about a 40 second buffer every five minutes yesterday while watching a 45 minute show. I'd written it off to network issues (even though the ATV is ethernet into a hub with the MacBook), but yesterday my daughter was watching a movie on my other ATV (connected by 802.11G) and complained that her movie was pausing at exactly the same time as mine. Leads me to believe it's my Mac, not the ATVs or network... Going to try upping the MacBook's RAM to 2GB this week to see if it fixes it, but is it possible that my library is just too big to manage?
-
Streaming .wav file over network
Hi ,
I am trying to stream a .wav file over the network using datagram sockets and packets. When it gets to the other end it is just noise; you cannot make out what the music is.
Any ideas would be most appreciated!
code below
byte[] data;
int bytesAvailable;
String strFilename = "test.wav";
File soundFile = new File(strFilename);
AudioInputStream audioInputStream = null;
try {
    audioInputStream = AudioSystem.getAudioInputStream(soundFile);
} catch (Exception e) {
    e.printStackTrace();
    System.exit(1);
}
while (thread != null) {
    int nBytesRead = 0;
    byte[] abData = new byte[EXTERNAL_BUFFER_SIZE];
    while (nBytesRead != -1) {
        try {
            nBytesRead = audioInputStream.read(abData, 0, abData.length);
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (nBytesRead >= 0) {
            try {
                connection.send(new DatagramPacket(abData, 1000));
            } catch (Exception exec) {
                System.out.println("Record: Exception sending packet:\n" + exec.getMessage());
            }
        }
        try {
            thread.sleep(50);
        } catch (InterruptedException exec) {}
    }
}
Streaming? Looks like you're dumping the whole file into a datagram packet. How are you receiving it on the other end?
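One concrete problem visible in the code, beyond what the reply raises: every packet is built with a fixed length of 1000 bytes, regardless of how many valid bytes the last read() actually returned (and the receiver also needs to know the AudioFormat to reconstruct the PCM). A minimal, hedged sketch of length-correct packet construction; EXTERNAL_BUFFER_SIZE is carried over from the post, and the socket itself is omitted so the snippet stays self-contained:

```java
import java.net.DatagramPacket;
import java.util.Arrays;

public class PacketChunks {
    static final int EXTERNAL_BUFFER_SIZE = 4096;

    // Build a packet carrying exactly nBytesRead valid bytes, copying them
    // so the next read() cannot overwrite the payload before it is sent.
    public static DatagramPacket packetFor(byte[] abData, int nBytesRead) {
        byte[] payload = Arrays.copyOf(abData, nBytesRead);
        return new DatagramPacket(payload, nBytesRead);
    }

    public static void main(String[] args) {
        byte[] abData = new byte[EXTERNAL_BUFFER_SIZE];
        int nBytesRead = 1234;                 // e.g. a short final read
        DatagramPacket p = packetFor(abData, nBytesRead);
        System.out.println(p.getLength());     // prints 1234, not a fixed 1000
    }
}
```

On the receiving side, the packet's getLength() (not the buffer size) tells you how many bytes to feed to the audio line.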
-
Streaming audio file over the network w/ JMF. How to know when the file ends
Hi
I am streaming an audio file over the network using JMF. I want to be able to know when the file ends so I can close the streaming session.
Can someone please help?
Thanks
If you put a ControllerListener on the Processor that's associated with generating the RTP stream, it'll generate an "EndOfMedia" event when the end of the file is reached.
-
Multiple sound output streams?
Hi list. I've got a question about sound output. Specifically, when I do voice chat with things like iChat or Ventrilo, I want that to use my USB headphones for audio output, even though I'm also listening to iTunes on my normal analog output.
Is it possible to use multiple output streams like this? And if not, why not?
Okay, never mind. I found it as a per-application preference. Silly me.
-
Help! Saving an image to stream and recreating it on client over network
Hi,
I have an application that uses JDK 1.1.8. I am trying to capture the UI screens of this application over the network to a client (another Java app running on a PC). The client uses JDK 1.3.0. As an AWT Image is not serializable, I wrote code that converts the UI screens to int[] and persists them to a client socket with ObjectOutputStream.writeObject, reads the data on the client side using the ObjectInputStream.readObject() API, converts the int[] back to an Image, and then saves the image as a JPEG file using the JPEG encoder codec of JDK 1.3.0.
The image comes out black and white even though the UI screens are in color. I have the code below. I am sure the JPEG encoder part is not doing that; I am missing something when recreating the image. It could be the color model or the way I create the image on the client side. I am testing this code on a Win XP box with both server and client running on the same machine. In the real scenario, the UI runs on an embedded system with pSOS and pretty limited flash space. My code is below.
I appreciate any help or pointers.
Thanks
Puri
public static String getImageDataHeader(Image img, String sImageName)
{
    final String HEADER = "{0} {1}x{2} {3}";
    String params[] = {sImageName,
                       String.valueOf(img.getWidth(null)),
                       String.valueOf(img.getHeight(null)),
                       System.getProperty("os.name")};
    return MessageFormat.format(HEADER, params);
}

public static int[] convertImageToIntArray(Image img)
{
    if (img == null)
        return null;
    int imgResult[] = null;
    try
    {
        int nImgWidth = img.getWidth(null);
        int nImgHeight = img.getHeight(null);
        if (nImgWidth < 0 || nImgHeight < 0)
        {
            Trace.traceError("Image is not ready");
            return null;
        }
        Trace.traceInfo("Image size: " + nImgWidth + "x" + nImgHeight);
        imgResult = new int[nImgWidth*nImgHeight];
        PixelGrabber grabber = new PixelGrabber(img, 0, 0, nImgWidth, nImgHeight, imgResult, 0, nImgWidth);
        grabber.grabPixels();
        ColorModel model = grabber.getColorModel();
        if (null != model)
        {
            Trace.traceInfo("Color model is " + model);
            int nRMask, nGMask, nBMask, nAMask;
            nRMask = model.getRed(0xFFFFFFFF);
            nGMask = model.getGreen(0xFFFFFFFF);
            nBMask = model.getBlue(0xFFFFFFFF);
            nAMask = model.getAlpha(0xFFFFFFFF);
            Trace.traceInfo("The Red mask: " + Integer.toHexString(nRMask) + ", Green mask: " +
                            Integer.toHexString(nGMask) + ", Blue mask: " +
                            Integer.toHexString(nBMask) + ", Alpha mask: " +
                            Integer.toHexString(nAMask));
        }
        if ((grabber.getStatus() & ImageObserver.ABORT) != 0)
        {
            Trace.traceError("Unable to grab pixels from the image");
            imgResult = null;
        }
    }
    catch(Throwable error)
    {
        error.printStackTrace();
    }
    return imgResult;
}

public static Image convertIntArrayToImage(Component comp, int imgData[], int nWidth, int nHeight)
{
    if (imgData == null || imgData.length <= 0 || nWidth <= 0 || nHeight <= 0)
        return null;
    //ColorModel cm = new DirectColorModel(32, 0xFF0000, 0xFF00, 0xFF, 0xFF000000);
    ColorModel cm = ColorModel.getRGBdefault();
    MemoryImageSource imgSource = new MemoryImageSource(nWidth, nHeight, cm, imgData, 0, nWidth);
    //MemoryImageSource imgSource = new MemoryImageSource(nWidth, nHeight, imgData, 0, nWidth);
    Image imgDummy = Toolkit.getDefaultToolkit().createImage(imgSource);
    Image imgResult = comp.createImage(nWidth, nHeight);
    Graphics gc = imgResult.getGraphics();
    if (null != gc)
    {
        gc.drawImage(imgDummy, 0, 0, nWidth, nHeight, null);
        gc.dispose();
        gc = null;
    }
    return imgResult;
}

public static boolean saveImageToStream(OutputStream out, Image img, String sImageName)
{
    boolean bResult = true;
    try
    {
        ObjectOutputStream objOut = new ObjectOutputStream(out);
        int imageData[] = convertImageToIntArray(img);
        if (null != imageData)
        {
            // Now that our image is ready, write it to server
            String sHeader = getImageDataHeader(img, sImageName);
            objOut.writeObject(sHeader);
            objOut.writeObject(imageData);
            imageData = null;
        }
        else
            bResult = false;
        objOut.flush();
    }
    catch(IOException error)
    {
        error.printStackTrace();
        bResult = false;
    }
    return bResult;
}

public static Image readImageFromStream(InputStream in, Component comp, StringBuffer sbImageName)
{
    Image imgResult = null;
    try
    {
        ObjectInputStream objIn = new ObjectInputStream(in);
        Object objData;
        objData = objIn.readObject();
        String sImageName, sSource;
        int nWidth, nHeight;
        if (objData instanceof String)
        {
            String sData = (String) objData;
            int nIndex = sData.indexOf(' ');
            sImageName = sData.substring(0, nIndex);
            sData = sData.substring(nIndex+1);
            nIndex = sData.indexOf('x');
            nWidth = Integer.parseInt(sData.substring(0, nIndex));
            sData = sData.substring(nIndex+1);
            nIndex = sData.indexOf(' ');
            nHeight = Integer.parseInt(sData.substring(0, nIndex));
            sSource = sData.substring(nIndex+1);
            Trace.traceInfo("Name: " + sImageName + ", Width: " + nWidth + ", Height: " + nHeight + ", Source: " + sSource);
            objData = objIn.readObject();
            if (objData instanceof int[])
            {
                int imgData[] = (int[]) objData;
                imgResult = convertIntArrayToImage(comp, imgData, nWidth, nHeight);
                sbImageName.setLength(0);
                sbImageName.append(sImageName);
            }
        }
    }
    catch(Exception error)
    {
        error.printStackTrace();
    }
    return imgResult;
}
While testing more, I found that the client side generates color UI screens if I use the JDK 1.3 JVM to run the server (i.e. the side that generates the image) without changing a single line of code. But if I use the JDK 1.1.8 JVM for the server, the client side generates black and white (aka gray-toned) versions of the UI screens. So I added code to save the int array I got from PixelGrabber to a text file, 8 ints per line in hex format, and generated these files on the server side with JVM 1.1.8 and JVM 1.3. What I found is that the 1.1.8 PixelGrabber sets the R, G, B components to the same value, whereas the 1.3 version sets them to different values, thus producing colored UI screens. I don't know why.
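The gray result is consistent with the grabbed pixel data itself: under the default ARGB color model, a pixel whose red, green, and blue bytes are equal renders as gray no matter how it is encoded afterwards. A small self-contained sketch of that packing (the sample values are made up for illustration):

```java
import java.awt.image.ColorModel;

public class PixelDemo {
    // Pack ARGB bytes into the int layout ColorModel.getRGBdefault() expects.
    public static int argb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        ColorModel cm = ColorModel.getRGBdefault();
        int colored = argb(0xFF, 0x12, 0x80, 0xFE); // distinct R/G/B: a color
        int gray    = argb(0xFF, 0x55, 0x55, 0x55); // equal R/G/B: gray
        System.out.println(cm.getRed(colored));   // prints 18 (0x12)
        System.out.println(cm.getGreen(colored)); // prints 128 (0x80)
        System.out.println(cm.getRed(gray) == cm.getBlue(gray)); // prints true
    }
}
```

Dumping a few pixels this way on both JVMs, as the poster did, is exactly the right diagnostic: if R, G, and B come back equal, the grab itself already lost the color.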
-
Can I prevent the headphones from taking over the sound output?
When I plug my Apple EarPods into the iMac, all alternative sound in- and outputs are overruled. I'm using Skype, so I'd like my EarPods to be plugged in all the time, but if I do, the incoming call sound, as well as any other sound, is only audible through the EarPods. Isn't there a way to tell the computer to use both the headphones and the internal speakers for sound output, so I can choose to use the EarPods just for Skype calls?
That's what I feared. It didn't solve my problem, but I can stop searching now... thanks.
-
Hi, I have a ThinkPad R400 and I also have an Arch Linux i686 home server connected via S/PDIF to my home sound system. When I used Debian Lenny, I had a really old PulseAudio server, GUI, everything; it worked, though. It worked only if I was logged in via the X server (graphically), locally or remotely. So I installed Arch (thinking that using the same OS on both could ease things... I was so naive) and I cannot get the Pulse sound server running. First of all, I cannot start pavucontrol over an ssh tunnel; it has to be from within a logged-in session. My /etc/hosts.allow:
ALL:ALL
I would like to have pulseaudio started as a daemon, without having to start X at all. I am unable to start it, though. It just says FAILED and gives no reason at all. Please tell me what I should do, which how-to I should follow, or... Thank you
Okay, server side /etc/pulse/daemon.conf
daemonize = yes
fail = no
allow-module-loading = no
allow-exit = no
use-pid-file = no
system-instance = yes
enable-shm = no
shm-size-bytes = 64 # setting this 0 will use the system-default, usually 64 MiB
; lock-memory = no
; cpu-limit = no
; high-priority = yes
; nice-level = -11
; realtime-scheduling = yes
; realtime-priority = 5
exit-idle-time = -20
; scache-idle-time = 20
; dl-search-path = (depends on architecture)
; load-default-script-file = yes
; default-script-file =
; log-target = auto
; log-level = notice
; log-meta = no
; log-time = no
; log-backtrace = 0
; resample-method = speex-float-3
; enable-remixing = yes
; enable-lfe-remixing = no
; flat-volumes = yes
; rlimit-fsize = -1
; rlimit-data = -1
; rlimit-stack = -1
; rlimit-core = -1
; rlimit-as = -1
; rlimit-rss = -1
; rlimit-nproc = -1
; rlimit-nofile = 256
; rlimit-memlock = -1
; rlimit-locks = -1
; rlimit-sigpending = -1
; rlimit-msgqueue = -1
; rlimit-nice = 31
; rlimit-rtprio = 9
; rlimit-rttime = 1000000
; default-sample-format = s16le
; default-sample-rate = 44100
; default-sample-channels = 2
; default-channel-map = front-left,front-right
; default-fragments = 4
; default-fragment-size-msec = 25
server side /etc/pulse/system.pa
### Automatically load driver modules depending on the hardware available
.ifexists module-udev-detect.so
load-module module-udev-detect
.else
### Alternatively use the static hardware detection module (for systems that
### lack HAL support)
load-module module-detect
.endif
### Load several protocols
.ifexists module-esound-protocol-unix.so
load-module module-esound-protocol-unix
.endif
load-module module-native-protocol-unix
### Automatically restore the volume of streams and devices
load-module module-stream-restore
load-module module-device-restore
### Automatically restore the default sink/source when changed by the user during runtime
load-module module-default-device-restore
### Automatically move streams to the default sink if the sink they are
### connected to dies, similar for sources
load-module module-rescue-streams
### Make sure we always have a sink around, even if it is a null sink.
load-module module-always-sink
### Automatically suspend sinks/sources that become idle for too long
load-module module-suspend-on-idle
### Enable positioned event sounds
load-module module-position-event-sounds
### Automatically restore volumes
load-module module-volume-restore table="/var/pulse/volume-restore.table"
load-module module-native-protocol-tcp auth-anonymous=1
#load-module module-native-protocol-tcp
server side /etc/pulse/default.pa
.nofail
### Load something into the sample cache
#load-sample-lazy x11-bell /usr/share/sounds/gtk-events/activate.wav
#load-sample-lazy pulse-hotplug /usr/share/sounds/startup3.wav
#load-sample-lazy pulse-coldplug /usr/share/sounds/startup3.wav
#load-sample-lazy pulse-access /usr/share/sounds/generic.wav
.fail
### Automatically restore the volume of streams and devices
load-module module-device-restore
load-module module-stream-restore
load-module module-card-restore
### Automatically augment property information from .desktop files
### stored in /usr/share/application
load-module module-augment-properties
### Load audio drivers statically (it's probably better to not load
### these drivers manually, but instead use module-hal-detect --
### see below -- for doing this automatically)
#load-module module-alsa-sink
#load-module module-alsa-source device=hw:1,0
#load-module module-oss device="/dev/dsp" sink_name=output source_name=input
#load-module module-oss-mmap device="/dev/dsp" sink_name=output source_name=input
#load-module module-null-sink
#load-module module-pipe-sink
### Automatically load driver modules depending on the hardware available
.ifexists module-udev-detect.so
load-module module-udev-detect
.else
### Alternatively use the static hardware detection module (for systems that
### lack udev support)
load-module module-detect
.endif
### Automatically load driver modules for Bluetooth hardware
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif
### Load several protocols
.ifexists module-esound-protocol-unix.so
load-module module-esound-protocol-unix
.endif
load-module module-native-protocol-unix
### Network access (may be configured with paprefs, so leave this commented
### here if you plan to use paprefs)
#load-module module-esound-protocol-tcp
load-module module-native-protocol-tcp
load-module module-zeroconf-publish
### Load the RTP receiver module (also configured via paprefs, see above)
load-module module-rtp-recv
### Load the RTP sender module (also configured via paprefs, see above)
#load-module module-null-sink sink_name=rtp format=s16be channels=2 rate=44100 description="RTP Multicast Sink"
#load-module module-rtp-send source=rtp.monitor
### Load additional modules from GConf settings. This can be configured with the paprefs tool.
### Please keep in mind that the modules configured by paprefs might conflict with manually
### loaded modules.
.ifexists module-gconf.so
.nofail
load-module module-gconf
.fail
.endif
### Automatically restore the default sink/source when changed by the user during runtime
load-module module-default-device-restore
### Automatically move streams to the default sink if the sink they are
### connected to dies, similar for sources
load-module module-rescue-streams
### Make sure we always have a sink around, even if it is a null sink.
load-module module-always-sink
### Honour intended role device property
load-module module-intended-roles
### Automatically suspend sinks/sources that become idle for too long
load-module module-suspend-on-idle
### If autoexit on idle is enabled we want to make sure we only quit
### when no local session needs us anymore.
load-module module-console-kit
### Enable positioned event sounds
load-module module-position-event-sounds
### Cork music streams when a phone stream is active
load-module module-cork-music-on-phone
# X11 modules should not be started from default.pa so that one daemon
# can be shared by multiple sessions.
### Load X11 bell module
#load-module module-x11-bell sample=bell-windowing-system
### Register ourselves in the X11 session manager
#load-module module-x11-xsmp
### Publish connection data in the X11 root window
#.ifexists module-x11-publish.so
#.nofail
#load-module module-x11-publish
#.fail
#.endif
### Make some devices default
#set-default-sink output
#set-default-source input
Please tell me what else I should post. Also, is it necessary to have the cookie synced over the network (nfs, sshfs etc.) or is it OK if I just copy it? -
Sound output to Airport Express horrible except through iTunes Airplay
I've had an Airport Express for 2 or so months now, and I use it for both wifi routing as well as its Airplay functionality, wired into my stereo. I've used it with Airplay directly from iTunes as well as setting the system sound output on my Mac to go over to the Airport Express.
As of today, only the Airplay over iTunes works without issue. When I go to the system sound preferences and select the Airport Express, I hear a very loud pop, and all output is muffled and highly distorted. If iTunes is playing, that sounds bad too. However, with the system sound prefs set to internal speakers and ONLY iTunes set to Airplay, it sounds fine. This is not good at all for me, as I often send audio from Youtube and other streaming sites to Airport Express to play on my stereo.
How can I get sound output to Airport Express from the sound preferences to play nice again?
PS, I have the identical issue with Apple TV, though I don't use it regularly for Airplay. System output to Apple TV is terrible, iTunes Airplay to it is fine, and the audio from content on Apple TV is fine.
Is there any way to output any sound from a computer to Airport Express, not just iTunes?
Yes, check out Rogue Amoeba's Airfoil. -
Clicking noise when streaming sound to SourceDataLine
Hello,
I'm trying to stream sound to the PC's speakers using SourceDataLine, but I have trouble getting rid of a clicking noise.
It seems to occur when sourceDataLine.available() equals sourceDataLine.getBufferSize(). Is there a way to avoid this?
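Some arithmetic behind that observation (values taken from the posted code below; this is an illustration, not a fix): at 8000 Hz, 16-bit mono, the line drains 16000 bytes per second, each 200 ms write supplies 3200 bytes, and the line is opened with a 16000-byte buffer, i.e. one second of audio. available() == getBufferSize() therefore means the buffer has completely drained, so the line has underrun, and the resulting discontinuity in the waveform is audible as a click:

```java
public class BufferMath {
    public static void main(String[] args) {
        int sampleRate = 8000;  // samples per second (from the post)
        int frameSize  = 2;     // bytes per frame: 16-bit mono PCM
        int intervalMs = 200;   // write interval used in the play loop
        int bufferSize = 16000; // bytes requested in sourceDataLine.open()

        int bytesPerSec = sampleRate * frameSize;          // drain rate
        int chunkBytes  = bytesPerSec * intervalMs / 1000; // bytes per write
        int bufferMs    = 1000 * bufferSize / bytesPerSec; // buffered audio

        System.out.println(bytesPerSec); // prints 16000
        System.out.println(chunkBytes);  // prints 3200
        System.out.println(bufferMs);    // prints 1000
    }
}
```

The usual way to avoid the click is to keep the buffer from ever fully draining, e.g. by writing the next chunk well before a full second of slack has elapsed rather than sleeping a fixed interval.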
I am dealing with audio of indeterminate length. Here's the test code:
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineEvent;
import javax.sound.sampled.LineListener;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class PlaybackLoop implements LineListener {

    SourceDataLine sourceDataLine;
    AudioFormat format;

    public PlaybackLoop() throws LineUnavailableException {
        AudioFormat.Encoding
            encoding = AudioFormat.Encoding.PCM_SIGNED;
        int sampleRate = 8000; // samples per sec
        int sampleSizeInBits = 16; // bits per sample
        int channels = 1;
        int frameSize = 2; // bytes per frame
        int frameRate = 8000; // frames per sec
        // size of 1 sample * # of channels = size of 1 frame
        boolean bigEndian = true;
        format = new AudioFormat(
                encoding,
                sampleRate,
                sampleSizeInBits,
                channels,
                frameSize,
                frameRate,
                bigEndian);
        // PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, big-endian
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        sourceDataLine = (SourceDataLine) AudioSystem.getLine(info);
    }

    public synchronized void play() throws LineUnavailableException,
            InterruptedException {
        int bytesPerSec = (int) format.getFrameRate() * format.getFrameSize();
        int intervalMs = 200;
        int loop = 50;
        int bytesSize = bytesPerSec * intervalMs / 1000;
        byte[] bytes = new byte[bytesSize];
        // creates a high pitched sound
        for (int i = 0; i < bytesSize / 2; i++) {
            if (i % 2 == 0) {
                writeFrame(bytes, i, 0x05dc);
            } else {
                writeFrame(bytes, i, 0x059c);
            }
        }
        sourceDataLine.open(format, 16000);
        sourceDataLine.addLineListener(this);
        int bufferSize = sourceDataLine.getBufferSize();
        System.out.println(format.toString());
        System.out.println(bufferSize + " bytes of line buffer.");
        long nextTime = System.currentTimeMillis() + intervalMs;
        sourceDataLine.start();
        for (int i = 0; i < loop; i++) {
            int available = sourceDataLine.available();
            if (available == bufferSize) {
                // clicking noise occurs here
                System.out.println("*");
            }
            int w = sourceDataLine.write(bytes, 0, bytesSize);
            long currentTime = System.currentTimeMillis();
            if (w != bytesSize) {
                System.out.println("Not all written.");
                // TODO
            }
            // time adjustment, to prevent accumulated delay.
            long delta = (nextTime - currentTime);
            long wait = intervalMs + delta;
            if (0 < wait) {
                this.wait(wait);
                nextTime += intervalMs;
            } else {
                nextTime = currentTime + intervalMs;
            }
        }
        System.out.println();
        System.out.println("End play()");
    }

    public static void main(String[] args) throws LineUnavailableException,
            InterruptedException {
        new PlaybackLoop().play();
    }

    public static void writeFrame(byte[] bytes, int halfwordOffset, int value) {
        writeFrame(bytes, 0, halfwordOffset, value);
    }

    public static void writeFrame(byte[] bytes, int byteOffset,
            int halfwordOffset, int value) {
        byteOffset += 2 * halfwordOffset;
        bytes[byteOffset++] = (byte) (value >> 8);
        bytes[byteOffset++] = (byte) (value >> 0);
    }

    public void update(LineEvent event) {
        System.out.println();
        System.out.print("Update:");
        System.out.println(event.getType().toString());
    }
}
I have modified the code so that it shows how the audio goes out of sync:
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineEvent;
import javax.sound.sampled.LineListener;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class PlaybackLoop implements LineListener {

    SourceDataLine sourceDataLine;
    AudioFormat format;

    public PlaybackLoop() throws LineUnavailableException {
        AudioFormat.Encoding encoding = AudioFormat.Encoding.PCM_SIGNED;
        int sampleRate = 8000;      // samples per sec
        int sampleSizeInBits = 16;  // bits per sample
        int channels = 1;
        int frameSize = 2;          // bytes per frame
        int frameRate = 8000;       // frames per sec
        // size of 1 sample * # of channels = size of 1 frame
        boolean bigEndian = true;
        format = new AudioFormat(
                encoding,
                sampleRate,
                sampleSizeInBits,
                channels,
                frameSize,
                frameRate,
                bigEndian);
        // PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, big-endian
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        sourceDataLine = (SourceDataLine) AudioSystem.getLine(info);
    }

    public synchronized void play() throws LineUnavailableException,
            InterruptedException {
        int bytesPerSec = (int) format.getFrameRate() * format.getFrameSize();
        int intervalMs = 200;
        int loop = 400;
        int bytesSize = bytesPerSec * intervalMs / 1000;
        byte[] bytes = new byte[bytesSize];
        // creates a high pitched sound
        for (int i = 0; i < bytesSize / 2; i++) {
            if (i % 2 == 0) {
                writeFrame(bytes, i, 0x05dc);
            } else {
                writeFrame(bytes, i, 0x059c);
            }
        }
        sourceDataLine.open(format, 16000);
        sourceDataLine.addLineListener(this);
        int bufferSize = sourceDataLine.getBufferSize();
        System.out.println(format.toString());
        System.out.println(bufferSize + " bytes of line buffer.");
        long nextTime = System.currentTimeMillis() + intervalMs;
        sourceDataLine.start();
        for (int i = 0; i < loop; i++) {
            int available = sourceDataLine.available();
            if (available == bufferSize) {
                // clicking noise occurs here
                System.out.println("*");
            }
            int w = sourceDataLine.write(bytes, 0, bytesSize);
            if (w != bytesSize) {
                System.out.println("Not all written.");
                // TODO
            }
            // printing time drift
            if (i % 100 == 0) {
                long currentTime = System.currentTimeMillis();
                // this number increases
                System.out.println(nextTime - currentTime);
            }
            nextTime += intervalMs;
        }
        System.out.println();
        System.out.println("End play()");
    }

    public static void main(String[] args) throws LineUnavailableException,
            InterruptedException {
        new PlaybackLoop().play();
    }

    public static void writeFrame(byte[] bytes, int halfwordOffset, int value) {
        writeFrame(bytes, 0, halfwordOffset, value);
    }

    public static void writeFrame(byte[] bytes, int byteOffset,
            int halfwordOffset, int value) {
        byteOffset += 2 * halfwordOffset;
        bytes[byteOffset++] = (byte) (value >> 8);
        bytes[byteOffset++] = (byte) (value >> 0);
    }

    public void update(LineEvent event) {
        System.out.println();
        System.out.print("Update:");
        System.out.println(event.getType().toString());
    }
}
The output looks like this:
PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, big-endian
16000 bytes of line buffer.
Update:Start
200
1200
1450
1700
End play()
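The growing numbers can be reproduced without a sound card at all. The sketch below is my own illustration, not part of the original post: it assumes a hypothetical device that drains each 200 ms chunk in 195 ms, and applies the same `nextTime += intervalMs` bookkeeping as the loop above, so the printed gap grows by about 500 ms per 100 chunks.

```java
public class DriftDemo {
    public static void main(String[] args) {
        int intervalMs = 200;  // scheduled chunk length, as in PlaybackLoop
        int consumeMs = 195;   // assumption: device drains each chunk slightly faster
        long currentTime = 0;  // simulated wall clock, in ms
        long nextTime = currentTime + intervalMs;
        for (int i = 0; i < 400; i++) {
            // a blocking write would return once the chunk is consumed
            currentTime += consumeMs;
            if (i % 100 == 0) {
                // same bookkeeping as the loop above: the gap keeps growing
                System.out.println(nextTime - currentTime);
            }
            nextTime += intervalMs;
        }
        // prints 5, 505, 1005, 1505
    }
}
```

Whenever the device consumes audio faster than the wall-clock schedule assumes, this arithmetic produces exactly the rising numbers shown above; letting the blocking `write()` call pace the loop, rather than scheduling with `wait()`, would avoid the accumulated drift.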
The printed number keeps rising. (Maybe I'm not sending enough data to the audio stream?)
The printed number would be stable if playback were in sync, but it continues to rise, which means the line is playing audio faster than it is being supplied. When the audio source is a network stream (instead of the test tone in this example), this results in a buffer underrun, which causes the clicking sound. -
Apple TV status light dead & no video/sound output
After our move I hooked my entertainment center up and the Apple TV is dead. The status light won't illuminate even after plugging it in, and the unit does not respond to the remote. There is no video or sound output. I did a "restore" in iTunes (it recognized the unit, so I know it "exists"), but it is still unresponsive. I am working with known-good cables and hardware. The only sign of "life" in the Apple TV is a red light emanating from the back of the unit around the optical audio port. There are no network lights (if there are any).
Thanks for your help,
Ben
Philly - I think you are confusing the Digital AV Adapter with something else. It is for use only with TVs and other video displays that have HDMI ports - not those that don't. Per Apple's info on this device: "Watch slideshows and movies on the big screen in up to 720p by connecting your iPad, iPhone 4, or iPod touch (4th generation) to an HDTV or HDMI-compatible display."
http://store.apple.com/us/product/MC953ZM/A
http://www.apple.com/ipad/features/mirroring.html
As I said before, I've not tried this adapter, so I don't know if its ability to mirror the iPad's screen is somehow blocked by the HBO Go app. As Demo indicated, it seems the app may be blocked this way as well. Personally, I don't need to do this, as I have HBO and Cinemax On Demand through my cable supplier on all televisions in my home. I might try it when I visit relatives who have Internet access but not HBO. If so, I'll report back on my experience.
In regard to why Apple TV does not support HBO Go, some additional research seems to suggest HBO may not like the level of content security provided by Airplay. Interestingly, while Apple TV can't stream it to a large screen, HBO Go is now available for use on televisions through the new Roku box -- provided one has an HBO subscription through a cable or satellite provider. There is still no way to get HBO other than through an authorized cable or satellite service. -
Can I put ALL the sound output from my iMac through Airport?
Just got Airport Express, and managed to install it very quickly on my existing Belkin network. Just don't forget the $ before your WEP key.
Does anyone know if it's possible to put ALL the sound output from my iMac through to Airport, rather than just iTunes? So internet radio, streaming videos, etc.?
Thanks matey - that's just what I was after.
Not sure if that will also let me put my iMac's system sounds through my HiFi - probably not - but then who in their right mind could possibly want to...?
(Be nice if you could though)