Audio data to be acquired at 192 kHz

hi ...
Well, I have made a VI to acquire audio data and then write it to a file for further processing. The problem is that when I send a stream of audio data at up to 48 kHz, the data is captured continuously, but if I try to send a stream at 64 kHz or 192 kHz, the data is acquired only for a short time. I want to acquire a 192 kHz stream continuously. I'm using LabVIEW 6.2 and a DIO-32HS card.
I want to acquire data at 192 kHz for at least 6 minutes continuously.
Is it possible?
Please help.
Thanks and regards,
Vivek Modgil

Look at the producer/consumer template for multithreading.
Essentially, what happens if you use a simple model like
Initialize()
DaqLoop
   Read()
   Process()
   Display()
   Save()
LOOP
is that if the processing and/or saving takes too long, the next iteration of Read will have caused a buffer overrun in the DAQ task.  This is more evident at faster acquisition rates.  Making the buffer larger only delays the onset of this event.
A better model is (threads are just parallel loops):
Thread 1 (Producer): {
Initialize DAQ()
DaqLoop
   Read DAQ data()  // all data available
   Send data to a queue
} Loop (until done)
Kill queue()
Thread 2 (Consumer): {
DataLoop
  Wait for data at queue  // put in some timeout, e.g. 100 ms
  If new data -> Process(); Display(); Save();
} Loop (while queue is valid)
This is just a starting point.  Look at the producer/consumer model.
This ensures that the DAQ loop will (should) never overrun, and the consumer loop will attempt to keep up with the data.  Saving data to file is slow, so do this post-acquisition if possible (i.e. build the array in memory).
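The two loops above can also be sketched in Java as a hedged illustration of the same pattern (the original is a LabVIEW VI, so the block sizes and names here are invented): a bounded BlockingQueue plays the role of the queue, and a sentinel element plays the role of Kill queue(), so the acquisition loop only reads and enqueues while the slow Process/Display/Save work happens on the other thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    static final double[] POISON = new double[0]; // sentinel standing in for Kill queue()

    // Runs one producer and one consumer; returns how many blocks the consumer handled.
    static int run(int blocks) {
        BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(64);
        final int[] consumed = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < blocks; i++) {
                    queue.put(new double[512]); // stand-in for Read DAQ data()
                }
                queue.put(POISON);              // tell the consumer we are done
            } catch (InterruptedException ignored) { }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    double[] data = queue.take(); // wait for data at queue
                    if (data == POISON) break;
                    consumed[0]++;                // Process(); Display(); Save(); go here
                }
            } catch (InterruptedException ignored) { }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed[0];
    }

    public static void main(String[] args) {
        System.out.println(run(100)); // prints 100: every produced block was consumed
    }
}
```

The bounded queue gives the same back-pressure behavior as a fixed-size LabVIEW queue: if the consumer falls behind, the producer blocks at put() instead of silently dropping data.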
Paul 
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
LabVIEW 4.0 - 2013, RT, Vision, FPGA

Similar Messages

  • Acquire/manipulate audio data from internet stream

    I want to acquire the audio data from an audio stream (i.e. Pandora, YouTube, etc.) from the internet.  If I can get this data and be able to play it/graph it/etc. in LabVIEW, I will be pleased.  However, I'm not sure if this is even possible, not knowing the details behind how packets are built and transmitted over the internet. 
    The closest I've come is using the acquire_sound.vi and taking the data from the sound card; albeit, it's very noisy. 
    For fun though, I'd like to do something directly from the audio stream itself (perhaps building a dll?) to gain a better understanding and experience. 
    Any advice is greatly appreciated!

    Dnor - 
    As far as I am aware there is a specific data format for audio streaming online, and it's something you would likely need to know at a low level to communicate with or copy from.  Although it wouldn't be real-time, you can run a Windows audio recording program like TotalRecorder, which should save and convert the audio output to .wav.  Then run a LabVIEW VI to read the wave file, which you could graph.  There is a KnowledgeBase on setting up the read from a .wav file here if you'd like to go that route:  http://digital.ni.com/public.nsf/allkb/1BE60DBBFA22ACC186256DFB000B8464.  
    Regards,
    Ben N.
    Applications Engineering
    ni.com/support

  • Accidentally disconnected iPod without ejecting; now it only shows recent purchases. I can see 40 GB of my original library but it's listed as Other, not Audio. Is this retrievable audio data or is it lost? I have no backup; what can I do to restore my iPod?

    I accidentally disconnected my iPod from my PC. When I plugged it back in I only had limited access to my library. I had over 40 GB of audio data; now when I go to the summary page of my iPod it shows up as Other, and only the audio that I have recently purchased from iTunes is accessible. Is the 40 GB retrievable, and can someone please tell me step by step what I need to do? My collection took over a year to put together and there is no backup file.

    "Other" is the measure of used space on the iPod not taken up by Audio, Video & Photos. This includes the iPod's library and artwork plus any files you may have copied to your iPod in disk mode. The overhead for the library & artwork data is typically 1-2% of the size of the media, e.g. for 1 GB of Audio & Video expect to have around 15 MB of "Other". This information is needed for the iPod's operation and cannot be removed.
    If you have significantly larger amounts of "Other", not related to files you've intentionally placed on the iPod, then these are probably disconnected copies of your media files or iPod libraries left over from failed sync operations. The only way to recover the space is to do a full restore.
    If you have copies of all your media in your iTunes library this isn't a problem, but if you've been manually managing the content then you might need to try to recover the files from it first. See this post by forum regular Zevoneer on transferring files from the iPod to your computer. Some of the tools rely on the iPod having a healthy library however the manual method mentioned towards the end of the post should work regardless.
    tt2

  • "some audio data is no longer available"

    How can I resolve this file problem?
    I spent a couple of hours recording a narration as a STAP file that was about 30 minutes long. After the recording (the second action after "Empty File") I added several actions to delete unwanted portions (errors, repeats, etc.). To finalize the project I added a couple of actions (noise gate and limiter).
    Everything appeared and played normally... but when I opened the file this morning I noticed that there was no audio in the project... scrolling up revealed that the "Recording" was unchecked... I checked it and got the following error message:
    The action "Recording" has been disabled because some audio data is no longer available. You can replace this action with a similar action.
    Replace? Does that mean repeating the work entirely?
    Well, the file appears to be 591.8 MB in size... so it should have the audio in it.
    Is there a way to recover the original audio from this file?
    BTW, I should have exported it before closing it up... Unfortunately, not trusting your software is what I used to do with my PCs...
    :o(

    Never found the cause and did not check the package... so I decided to close the question. The event has not repeated itself as I have upgraded several times since.

  • Compressing and uncompresing chunks of audio data

    Hello,
    I am doing a project in Java. It has a server and a client. The aim of the server is to read an audio file (a .WAV file, actually) chunk by chunk, compress it to a certain quality (using both lossy and lossless techniques) as specified by the client, and transmit it; the client will decompress it and play it out. I am using two ports: one for control signals, another for the actual audio data transmission. So far, my application initiates the GUI, looks up servers, connects to one using the two ports, gets the audio file list, and plays the clip.
    Playing is done by reading the wav file using FileInputStream, 128*1024 bytes at a time, and writing it to the socket output stream. On the client side this stream is used to create the audio format object and play the clip. Obviously, this is not the way it is to be done, correct? What's the best way of encoding that 128*1024-byte chunk so that it can be re-created on the client?
    The code that I've got together looks something like this:
    STREAMER:
    OutputStream dataOut = dataSocket.getOutputStream();
    FileInputStream fis = new FileInputStream(new File(fileName));
    byte[] b = new byte[1024];
    try {
        int n;
        while ((n = fis.read(b)) != -1) {
            dataOut.write(b, 0, n);
        }
    } catch (IOException e) { }
    CLIENT:
    InputStream dataIn = dataSocket.getInputStream();
    try {
        play(dataIn);
    } catch (IOException e) { }
    void play(InputStream is) {
        // here an AudioInputStream object is created using getAudioInputStream(is)
        // an AudioFormat object is created using audioInputStream.getFormat();
        // then a default audioFormat is created, changing the encoding to PCM_SIGNED and the bit depth to 16
        // then using this default format, we play the audio:
        // we construct an AudioInputStream using this default format, then play using this audioInputStream.
        DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, audioFormat);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(dataLineInfo);
        line.open(audioFormat);
        line.start();
        byte[] abData = new byte[128 * 1024];
        int nBytesRead = 0;
        while (nBytesRead != -1) {
            try {
                nBytesRead = audioInputStream.read(abData, 0, abData.length);
            } catch (IOException e) { }
            if (nBytesRead >= 0) {
                line.write(abData, 0, nBytesRead);
            }
        }
    }

    Thanks captfoss, yes, the thing I am developing looks like RTP; I'll go through them.
    Now I am looking for some Java code that will convert raw audio bytes to an MP3-encoded byte stream. The only things around are from the LAME project, written entirely in C, which is not a very attractive alternative. I would rather have a clean and simple Java library that encodes chunks of raw audio to compressed byte streams. Please help me find such a library.
    thanks in advance,
    diemond

  • Problem with playing audio data using Real Player

    I've seen the instructions for playing video data (.rm) using Real Player and it works. Then I tried to play audio data (.wav, .dat, .mp3) with Real Player, but it didn't work.
    Here is some of the code I've written:
    ------------------------------------------- BEGIN PL/SQL CODES---------------------------------------
    CREATE TABLE SONGS
    ( Item_ID number not null PRIMARY KEY,
    Audio ordsys.ordaudio );
    create or replace procedure load_audio(id integer, filename in varchar2) as
    obj ORDSYS.ORDAudio;
    ctx RAW(4000) := NULL;
    begin
    INSERT INTO SONGS VALUES(ID,
         ORDSYS.ORDAudio.init());
    SELECT audio into obj from Songs
    where item_id = id FOR UPDATE;
    obj.setSource('FILE','AUDIODIR', filename);     
    Obj.setDescription('A movie trailer');
    Obj.setMimeType('audio/x-pn-realaudio');
    Obj.setFormat('Real Media File Format');
    obj.import(ctx);
    UPDATE Songs
         SET audio=obj WHERE item_id=id;
    COMMIT;
    END;
    /
    show errors;
    truncate table songs;
    exec load_audio(1,'aud1.wav');
    exec load_audio(2,'aud2.mp3');
    exec load_audio(3,'testaud.dat');
    -- just for comparison, i put a video file (.rm)
    exec load_audio(4,'autorace.rm');
    commit;
    show errors;
    create or replace procedure get_audio(
         audio_id in varchar2,
         mimetype out varchar2,
         data out blob) as
         tempBLOB BLOB;
         s varchar2(200);
    begin
    -- Deliver audio and mimetype
    select t.audio.getcontent(), t.audio.getmimetype()
    into tempBLOB, s
    from songs t where t.item_id = audio_id;
    data := tempBLOB;
    mimetype := s;
    end;
    /
    show errors;
    ---------------------------------------- END of PL/SQL CODES-------------------------------------
    -----------------------------------MOUNTPOINT at FILESYSTEM rmsever.cfg-------------------
    <List Name="pn-oracle-audio">
    <Var Database="oracle"/>
    <Var HeaderCacheSize="2048"/>
    <Var LobFetchSize="32768"/>
    <Var MaxCachedConnections="1"/>
    <Var MountPoint="/dbaudio/"/>
    <Var Password="ZGF2aWQ="/>
    <Var ShortName="pn-oracle"/>
    <Var SQL="get_audio"/>
    <Var Username="skripsi"/>
    </List>
    ----------------------------------------End Of MOUNTPOINT----------------------------------
    then in the real player, i tried some urls :
    http://david:88/ramgen/dbaudio/1 --> it didn't work; it said invalid path
    http://david:88/ramgen/dbaudio/2 --> it didn't work; it said invalid path
    http://david:88/ramgen/dbaudio/3 --> it didn't work; it said invalid path
    http://david:88/ramgen/dbaudio/4 --> it worked
    Did I put in a wrong URL?
    Is the http://..../ramgen/... path just for .ram files?
    Can anyone show me the way to play audio data using Real Player?
    Thanks in advance.

    Yes, RealServer expects an exact mimetype.
    To make matters worse, it expects audio/x-pn-realaudio (an audio mimetype) for video!?!?!
    It seems the code inside maps multiple file extensions to some mime type. Only one is allowed.
    I have used the Unix strings command in the plugins directory to find the exact mime type RealServer is expecting. The file format plugins have this information in them.
    Larry

  • How do I send raw video and audio data to FMLE?

    Hi,
    I have raw video and audio data on CPU memory.
    How do I send them to FMLE?
    Is there sample code or SDK?
    Thanks for your answer.

    Hi, Burzuc.
    Raw video and audio data are from a video capture board like Blackmagic Design's DeckLink or other's.
    I want to stream them after some processing by my application.
    Regards.

  • Video (or Video + Audio) Data Rate

    Wondered if anyone here might know of a software utility that can scan QuickTime-compatible files and either display the video (or video + audio) data-rate excursion or, as an alternative, graph the instantaneous variation in the data rate over time.

    Apple + I (Show Info) in QT Player will show you the framerate and bitrate while playing.
    Thanks for your suggestion, Rick. However, I must point out that the data rate so displayed is simply the total average for the entire clip and not the instantaneous (i.e., constantly changing) variance with time. Preliminary observations tend to indicate the single-pass H.264 algorithm begins roughly 10% under the requested data rate and quickly settles very near the target. The multipass algorithm, on the other hand, appears to begin in the vicinity of 300% above the targeted data rate and monotonically decreases throughout the remainder of the movie clip. So much for a quick qualitative analysis based on observations during the actual encoding process.
    What I now wish to do is actually perform a bit of quantitative analysis and, based on those results, correlate actual data rates with the ability of a given H.264 multipass file to sync to an iPod. Basically, I am trying to determine whether or not a user data-rate input is used as a comparator during the initial phase of multipass H.264 coding and, if so, how it is implemented (i.e., how are the delta values handled -- linearly, exponentially, etc.). In addition, I wish to compare QT v7.0.3 with v7.0.4 clips to determine why the latter are less iPod compatible. Too, there remains a question as to whether or not any data-rate information is now embedded in the clips themselves, since various workarounds tried (i.e., clipping file lengths) have proven unsuccessful.

  • Audio Data Rate digitizing DVCPRO50 over firewire

    I'm having this weird issue when I digitize DVCPRO50 footage into one of my Dual 2 GHz G5s. For some reason, once the clips are in (I'm bringing in entire tapes), the audio data rate is at a strange number (like 47722 Hz) rather than the standard 48 kHz. And that number is not consistent -- it ranges from 37782 to 47991 Hz. Has anyone ever seen this? It seems to be something with this machine or version of FCP, because I can digitize on another G4 or G5 running 4.5 and it comes in fine. But then I have to transfer it via a FireWire drive, which is a big hassle. The specs are: Dual 2 GHz G5, Panasonic DVCPRO50 SD93 FireWire deck, internal 1 TB RAID, 2.5 GB RAM.
    Please help if you have any ideas.
    Thanks.
    Aaron

    It it maybe a situation where the source uses Drop Frame timecode and the FCP capture setting doesn't? Or vice versa?
    I haven't done the math yet to check, but your sample rate numbers could reflect errors resulting from FCP trying to compute NDF sample rates while looking at incoming DF timecode. Or vice versa.
    The different sample rates would then be the result of different length clips, which would have more or less DF "adjustments" in them, and so would produce different apparent sample rates if view as NDF clips.
    Like I said it's just a guess, but check out to see that your DF or NDF timecode settings are the same for the source as for the capture.
    Maybe I'm right. Or maybe I'm talking through my hat. It's happened before...
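    For what it's worth, the DF/NDF guess can be sanity-checked with quick arithmetic (this calculation is mine, not from the thread): scaling 48 kHz by the NTSC 29.97/30 frame-rate ratio gives a rate just under 48 kHz, in the same neighborhood as the numbers reported above.

```java
public class SampleRateCheck {
    // 48 kHz scaled by the NTSC 29.97/30 (i.e. 1000/1001) frame-rate ratio.
    static double adjusted() {
        double ntscFps = 30000.0 / 1001.0; // 29.97002997... fps
        return 48000.0 * ntscFps / 30.0;   // = 48000 * 1000/1001
    }

    public static void main(String[] args) {
        System.out.println(adjusted()); // about 47952 Hz, just under 48 kHz
    }
}
```

    That single ratio doesn't explain the full 37782-47991 spread, which is consistent with the reply's point that clips of different lengths would accumulate different amounts of drop-frame "adjustment".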

  • FCE crashes after validating audio data

    I have just started using FCE and I'm having a problem with it crashing. I am capturing from a DV tape that was recorded in 12-bit audio. I got the audio sync warning message a few times and, after reading the manual, realized that I needed to change the NTSC setting in Easy Setup to NTSC 32 kHz for 12-bit audio, which I did.
    I used the Capture Now button and pressed the Esc key to stop the capture. If the clip I capture is several minutes long I get a small window titled "Analyzing DV Audio" and the text in the window says "Validating Audio Data". This window remains in view for a minute or more and then FCE crashes. I have tried this several times and get the same result. I tried repairing permissions and rebooting and still it crashes. Any suggestions would be appreciated.
    Bob Switzer
    G4 Dual 867 Mac OS X (10.4.7)

    Thanks for your suggestion. I trashed all the prefs and tried capturing the same footage again and experienced another crash. I then moved to another section of the tape and tried again and it worked without crashing. I also captured footage from another tape without a problem. I suspect that particular section of tape is somehow corrupted and was causing the crash. I was able to take this section of tape into iMovie without a problem. I guess I'll bring it into FCE from iMovie.
    Thanks again for your help.
    G4 Dual 867 Mac OS X (10.4.7)

  • Loading 0.5 seconds of audio data into a numerical array

    Hi everyone,
    I'm new to Java programming, and I'm trying something a little complicated. I wish to use Java to listen to a microphone or line-in port and wait for the input signal to reach positive saturation.
    When the signal has saturated, I would then like to record the next 0.5 seconds of audio data samples into a numerical array.
    Please may somebody explain how I can achieve this, if it is possible.
    After that, I know what I want to do with the numerical array and how to do it.
    Thank you to everyone who can help.
    Edited by: Barcode_Master on May 1, 2009 1:45 PM

    A simple google with "java audio input" as the arguments will result in host of information for you... this one, the very first one, is possibly one you should pay particular attention to:
    [Java Audio Input|http://www.developer.com/java/other/article.php/2191351]
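    A rough sketch of that approach with javax.sound.sampled (the format, buffer sizes, and saturation threshold here are my own assumptions, not a tested recipe): open a TargetDataLine, poll until a sample is near positive full scale, then read the next 0.5 seconds into a numerical array.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class SaturationCapture {
    // Convert 16-bit little-endian PCM bytes into signed samples.
    static short[] toSamples(byte[] buf, int len) {
        short[] out = new short[len / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (short) ((buf[2 * i] & 0xFF) | (buf[2 * i + 1] << 8));
        }
        return out;
    }

    // Wait for a near-full-scale positive sample, then record the next 0.5 s.
    static short[] captureAfterSaturation() throws LineUnavailableException {
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[4096];
        boolean saturated = false;
        while (!saturated) {
            int n = line.read(buf, 0, buf.length); // blocks until data arrives
            for (short s : toSamples(buf, n)) {
                if (s >= Short.MAX_VALUE - 1) { saturated = true; break; }
            }
        }

        // 0.5 s of mono 16-bit audio = sampleRate * 0.5 samples * 2 bytes each
        byte[] half = new byte[(int) (fmt.getSampleRate() * 0.5) * 2];
        int off = 0;
        while (off < half.length) {
            off += line.read(half, off, half.length - off);
        }
        line.close();
        return toSamples(half, half.length);
    }
}
```

    The returned short[] is the numerical array to process further; note that a real signal may never hit exact full scale, so in practice you would likely test against a threshold somewhat below Short.MAX_VALUE.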

  • How to find out captured audio data is stored completely

    We are using livepkgr to capture audio with AMS Standard. Since server-side scripting is not available with AMS Standard, we are using a PHP script to manipulate captured audio. Everything works well in testing, but we are wondering about the following.
    When we close an audio capture session with:
       nsPublish.attachAudio(null);
       nsPublish.close();
    can we assume that all the audio data has been written to the stream file? What if the server is busy with many simultaneous capture sessions? Is there a way to verify that a stream file is complete, closed, and ready for processing?
    Thank you very much!

    Hi,
    If you are guided to a structure, double click on the data element and go to the structure.
    Check the where-used for this data element. You will get a list of them which will give you an idea of where to find the data and in which transparent table.
    Regards
    Subramanian

  • How to decode audio data from dvi/rtp to PCM linear

    Hi all,
    I am getting audio data as DVI/RTP 4-bit mono as an input stream, but I am not able to create a player for it. Can anyone send me a decoding function so that I can convert this data to 16-bit linear PCM format?

    S+S, ahh an academic.
    This is actually a forest-trees issue here.
    What you're doing is appending the files, which would result in a size of 2S. What you want is something of size S. To do this you would need a normalized merge. I say normalized merge because that's going to give you the best result while still avoiding out-of-range values.
    This is easy enough to do by hand if both streams S1 and S2 are of the same frequency. If they are not the same frequency, the result will be the size of the larger file when converted to whatever uniform frequency.
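    As a small illustration of the normalized-merge idea (my sketch, assuming two equal-frequency 16-bit sample streams): averaging the overlapping samples yields a result of size S that stays in range, where plain addition could clip.

```java
public class NormalizedMerge {
    // Merge two same-frequency sample streams by averaging the overlap;
    // the tail of the longer stream passes through unchanged.
    static short[] merge(short[] s1, short[] s2) {
        short[] longer = s1.length >= s2.length ? s1 : s2;
        short[] out = new short[longer.length];
        int overlap = Math.min(s1.length, s2.length);
        for (int i = 0; i < overlap; i++) {
            out[i] = (short) ((s1[i] + s2[i]) / 2); // average cannot leave 16-bit range
        }
        for (int i = overlap; i < out.length; i++) {
            out[i] = longer[i];
        }
        return out;
    }

    public static void main(String[] args) {
        short[] merged = merge(new short[]{100, 200}, new short[]{300, 400, 500});
        System.out.println(java.util.Arrays.toString(merged)); // prints [200, 300, 500]
    }
}
```

    Straight averaging halves the perceived level of each source; a production mixer would scale or soft-clip instead, but the principle is the same.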

  • 44 kHz of FPGA audio data..best way to implement?

    I have a program that will be taking 4 channels of audio data using FPGA at 44 kHz. The user may only care about 1 channel, or they may care about all 4. This data could be taken for anywhere between 30 and 120 seconds. Worst case, from my calculations, 44 kHz * 120 seconds * 4 channels is 21.12 million points. All of the following is going to be happening on my RT target, not on the FPGA. My first thought is to initialize 4 arrays of 5.28 million points (again, this is worst case), one array for each channel. Then if they take less than 120 seconds of data I can just index the portion of the array I want. This way I can use Replace Array Subset and just keep overwriting the buffer. Does this sound like a reasonable solution, or is using an array of this size going to rail the cRIO processor, and will I have to use some other method for managing large data sets?
    CLA, LabVIEW Versions 2010-2013

    Hi,
    Allocating this much memory on an RT target might cause problems. A better way would be to take the data and transfer it to a Windows machine and process/store the data there. You can do the transfer using shared variables, network streams (in LV 2010 only), tcp/ip, etc.  
    Paul B.
    Motion Control R&D
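    The preallocate-and-overwrite idea translates to any language; here is a minimal Java sketch (my illustration of the LabVIEW Initialize Array / Replace Array Subset approach, not cRIO code): allocate the worst-case buffer once, write each acquired block in place, and index only the filled portion afterwards.

```java
import java.util.Arrays;

public class ChannelBuffer {
    final double[][] buf;  // one preallocated worst-case buffer per channel
    final int[] written;   // how many samples each channel has stored

    ChannelBuffer(int channels, int capacityPerChannel) {
        buf = new double[channels][capacityPerChannel];
        written = new int[channels];
    }

    // Overwrite in place -- the analogue of Replace Array Subset, no reallocation.
    void append(int channel, double[] block) {
        System.arraycopy(block, 0, buf[channel], written[channel], block.length);
        written[channel] += block.length;
    }

    // After acquisition, take just the portion that was actually filled.
    double[] data(int channel) {
        return Arrays.copyOf(buf[channel], written[channel]);
    }

    public static void main(String[] args) {
        // Worst case from the post: 4 channels * 44 kHz * 120 s = 21.12 M samples total
        ChannelBuffer cb = new ChannelBuffer(4, 44_000 * 120);
        cb.append(0, new double[]{1.0, 2.0, 3.0});
        System.out.println(cb.data(0).length); // prints 3
    }
}
```

    At 8 bytes per double the worst case is roughly 169 MB, which illustrates the reply's concern: an allocation of that size may be fine on a desktop but can starve a small RT target, so streaming the data off the controller is often the safer design.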

  • Send audio data over UDP

    Hello again,
    Sorry for asking a lot of questions, but I'm developing an application in JMF and I don't know very well how it works...
    I'm trying to receive RTP data from the network, then uncompress it to RAW format (but it has to be coded in some codec like G.711, G.729 or GSM) and save it into a buffer. For this I'm following the DataSourceReader example. I have added some things, like stripping off the RTP header and saving the audio data into a ByteBuffer (instead of using the printInfo() method I'm using a saveIntoByteBuffer() method).
    Another class is sending the data (saved in the ByteBuffer) over the network (plain UDP, not RTP over UDP).
    And another one is receiving this data and creating a DataSource to play the audio data received.
    I don't know exactly how to do this last part. Do I need to create a DataSource like the examples (LiveStream and DataSource from [http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/LiveData.html|http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/LiveData.html]), or can I do it like JpegImagesToMovie from [http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/JpegImagesToMovie.html|http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/JpegImagesToMovie.html]?
    Sorry if it's an easy question, but I don't understand why the 'same' thing (a custom DataSource) is implemented differently.
    Thanks

    "Sorry if it's an easy question, but I don't understand why the 'same' thing (a custom DataSource) is implemented differently."
    They aren't the same thing, actually. DataSource is the parent class, but there are 2 different kinds of DataSources.
    A PushBufferDataSource will "push" the data out when it's available to be read. Whenever it decides it has data ready to be read, it informs whatever is reading from it that data is available.
    A PullBufferDataSource will not do that. Whenever it has data ready to be read, it doesn't do anything to inform what's reading from it.
    The next obvious question is, why does it matter?
    PullBufferDataSources are good for situations where the data is always present. For instance, if you're playing a file from your hard drive, it's better to just let your Player object fetch data when it's needed. There's no need for a "data available" event because the data is always available...
    PushBufferDataSources are good for situations where data is being generated / received from an outside source. You can't read from it until the data comes in, so rather than blocking and waiting for the read, it'll tell your reader class when to come back for the data.
    Hope that helps!
    P.S. For your needs, you'll want to be using a PushBufferDataSource, so the Live example code.
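    The distinction is easy to mimic outside JMF (these two interfaces are my own toy sketch, not the javax.media classes): a pull source is polled by the reader whenever it wants data, while a push source invokes a callback when data arrives.

```java
import java.util.function.Consumer;

public class PushVsPull {
    // Pull: the reader decides when to ask; data is assumed to be always ready.
    interface PullSource { byte[] read(); }

    // Push: the source decides when data is ready and notifies the reader.
    interface PushSource { void setListener(Consumer<byte[]> onData); }

    static String demo() {
        // A file-like pull source: reading always succeeds immediately.
        PullSource file = () -> new byte[]{1, 2, 3};

        // A live-feed-like push source: it calls the listener when a chunk arrives.
        StringBuilder log = new StringBuilder();
        PushSource live = onData -> onData.accept(new byte[]{4, 5});
        live.setListener(chunk -> log.append("got ").append(chunk.length));

        return file.read().length + "/" + log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 3/got 2
    }
}
```

    The same trade-off appears in any I/O design: polling is simpler when data is always there, callbacks avoid blocking when it is not.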
