What is the stream-to-disk rate of the 8176 controller?

I want to know the maximum stream-to-disk rate of the 8176 controller, and how it is calculated.
Suresh Thangappan

I would try using a program such as WinBench (search Google for "winbench") to estimate the throughput. I ran the program on an 8176 we have here and achieved a throughput of about 15 MB/s. I have attached a screenshot of the results.
Keep in mind that this is not a guarantee that you will achieve these rates when streaming to disk. There are many other factors that come into play, such as other PCI devices sharing bandwidth, other running programs, the amount of contiguous free space on your hard drive, etc.
Hope this helps,
-Adam
Attachments:
8176_disk_benchmark.JPG ‏98 KB

Similar Messages

  • What is the stream-to-disk performance of a PCI-5112? Has anyone benchmarked it?

    I need to stream 30 MS/s to a hard drive for 2 minutes. Has anyone done that? What platform did you use, and what were the results?
    Thanks, fred

    Fred,
    The fastest we have been able to stream to disk with the PCI-5112 is 25 MS/s, on a machine that can stream raw data to disk at 130+ MS/s. One thing to keep in mind is that you can't actually get 30 MS/s as your sample rate on the 5112. The valid sample rates are 100/n MS/s, where n is an integer. Therefore, if you request 30 MS/s as your sampling rate, the actual sample rate will be coerced up to 100/3 ≈ 33.3 MS/s.
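    The coercion rule described above can be sketched in a few lines of Python. This is a hypothetical illustration of the 100/n MS/s rule only; the authoritative behavior is whatever the NI-SCOPE driver actually does:

```python
def coerce_sample_rate(requested_ms, base_ms=100.0):
    """Coerce a requested rate (in MS/s) to the nearest valid rate of the
    form base_ms/n (n a positive integer) that is >= the request."""
    # Largest n such that base_ms/n is still at least the requested rate
    n = max(1, int(base_ms // requested_ms))
    return base_ms / n

print(coerce_sample_rate(30.0))  # 33.33... MS/s, i.e. 100/3
print(coerce_sample_rate(25.0))  # 25.0 MS/s (100/4 is already a valid rate)
```

    Requests above 100 MS/s simply clamp to the board's 100 MS/s maximum in this sketch.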

  • Why is it that I can't do a continuous streaming to disk with a 5102 scope card (PCI) when I can do it with a DAQ Card of much lower specs (my requirement is for small sampling rates only)?

    I am told that the PCI-5102 does not support continuous streaming of data to the hard disk. My application requires only very low sampling rates. If I can do this with a low-spec DAQ card in LabVIEW, why can't I do it with this card?

    Hello,
    The PCI-5102 is a high-speed digitizer with a slightly different architecture than the DAQ cards, and it was not built with the ability to stream data to the PC. However, if you are sampling at low rates you can still acquire up to 16 million samples, using DMA to transfer data from the 5102's onboard memory to PC memory. You will not be able to save the data to disk until the acquisition is complete, though.
    Another option would be to purchase either a DAQ card or a PCI-5112. Both boards can continuously stream data to the host PC, and you should not run into any PCI bus limitations if you are streaming to disk at relatively low rates.

  • Thoughts on Stream-to-Disk Application and Memory Fragmentation

    I've been working on a LabVIEW 8.2 app on Windows NT that performs high-speed streaming to disk of data acquired by PXI modules.  I'm running a PXI-8186 controller with 1 GB of RAM and a Seagate 5400.2 120 GB HD.  My current implementation creates a separate DAQmx task for each DAQ module in the 8-slot chassis.  I initially tried to provide semaphore-protected Write to Binary File access to a single log file to record the data from each module, but I had problems with this once I reached the upper sampling rates of my 6120s, which is 1 MS/s, 16-bit, 4 channels per board.  At the higher sampling rates, I was not able to 'start off' the file streaming without the DAQmx input buffers reaching their limit.  I think this might have to do with the larger initial memory allocations that are required.  I have the distinct impression that an initial request for a bunch of large memory blocks causes a large initial delay, which doesn't work well with a real-time streaming app.
    In an effort to see if I could improve performance, I tried replacing my reentrant file writing VI with a reentrant VI that flattened each module's data record to string and added it to a named queue.  In a parallel loop on the main VI, I am extracting the elements from that queue and writing the flattened strings to the binary file.  This approach seems to give me better throughput than doing the semaphore-controlled write from each module's data acq task, which makes sense, because each task is able to get back to acquiring the data more quickly.
    I am able to achieve a streaming rate of about 25 MB/s, running three 6120s at 1 MS/s and two 4472s at 1 kS/s.  I have the program set up so I can run multiple data collections in sequence, i.e. acquire for 5 minutes, stop, restart, acquire for 5 minutes, etc.  This keeps the file sizes to a reasonable limit.  When I run in this mode, I can perform a couple of runs, but at some point the memory usage shown in Task Manager starts running away.  I have monitored the memory use of the VIs in the profiler, and do not see any of my VIs increasing their memory requirements.  What I am seeing is that the number of elements in the queue starts creeping up, which is probably what eventually causes failure.
    Because this works for multiple iterations before the memory starts to increase, I am left with only theories as to why it happens, and am looking for suggestions for improvement.
    Here are my theories:
    1) As the streaming process continues, the disk writes are occurring on the inner portion of the disk, resulting in less throughput. If this is what is happening, there is no solution other than a HW upgrade.  But how can I tell if this is the reason?
    2) As the program continues to run, lots of memory is being allocated/reallocated/deallocated.  The streaming queue, for instance, is shrinking and growing.  Perhaps memory is being fragmented too much, and it's taking longer to handle the large block sizes.  My block size is 1 second of data, which can be up to a 1M x 4 x 16-bit array from each 6120's DAQmx task.  I tried adding a Request Deallocation VI when each DAQmx VI finishes, and this seemed to help between successive collections.  Before I added the VI, Task Manager would show about 7 MB more memory usage than after the previous data collection.  Now it is running about the same each time (until it starts blowing up).  To complicate matters, each flattened string can be a different size, because I am able to acquire data from each DAQ board at a different rate, so I'm not sure preallocating the queue would even matter.
    3) There is a memory leak in part of the system that I cannot monitor (such as DAQmx).  I would think this would manifest itself from the very first collection, though.
    4) There is some threading/threadlocking relationship that changes over time.
    Does anyone have any other theories, or comments about one of the above theories?  If memory fragmentation appears to be the culprit, how can I collect the garbage in a predictable way?

    It sounds like the write is not keeping up with the read, as you suspect.  Your queues can grow in an unbounded fashion, which will eventually fail.  The root cause is that your disk is not keeping up.  At 24MBytes/sec, you may be pushing the hardware performance line.  However, you are not far off, so there are some things you can do to help.
    Fastest disk performance is achieved when the chunks you write to disk are 65,000 bytes.  This may require you to add some double-buffering code.  Note that fastest performance may also mean a 300 kB chunk size from your data acquisition devices.  You will need to optimize and double-buffer as necessary.
    Defragment your disk's free space before running.  Unfortunately, the native Windows disk defragmenter only defragments the files, leaving the free space scattered all over the disk.  Norton's disk utilities do a good job of defragmenting the free space as well.  There are probably other utilities that also do a good job of this.
    Put a monitor on your queues to check their size, and raise an alarm if they get too big.  Use the Get Queue Status primitive to get this information; it can tell you how the queues grow over time.
    Do you really need to flatten to string?  Unless your data acquisition types differ, use the native data array as the queue element.  You can also use multiple queues for multiple data types.  Flatten To String causes an extra memory copy and costs processing time.
    You can use a single-element queue as a semaphore.  The semaphore VIs are implemented with an old technology that causes a switch to the UI thread every time they are invoked, which makes them somewhat slow.  A single-element queue does not have this problem.  Only use this if you need to go back to a semaphore model.
    Good luck.  Let us know if we can help more.
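    The chunked, double-buffered write pattern recommended above can be sketched in Python. This is a simplified, single-threaded illustration under the reply's assumptions (65,000-byte chunks; in LabVIEW the staging buffer would be the consumer loop's queue, not a bytearray):

```python
import os
import tempfile

CHUNK_SIZE = 65_000  # chunk size suggested above for fastest disk writes

def stream_records_to_disk(records, path, chunk_size=CHUNK_SIZE):
    """Accumulate variable-size acquisition records in a staging buffer
    and flush them to disk in fixed-size chunks; returns bytes written."""
    buf = bytearray()
    written = 0
    with open(path, "wb") as f:
        for rec in records:
            buf.extend(rec)
            while len(buf) >= chunk_size:   # flush only full chunks
                f.write(buf[:chunk_size])
                written += chunk_size
                del buf[:chunk_size]
        if buf:                              # flush the final partial chunk
            f.write(buf)
            written += len(buf)
    return written

# Usage: ten 300,000-byte records, roughly a 6120-style 1-second block each
records = (b"\x00" * 300_000 for _ in range(10))
path = os.path.join(tempfile.mkdtemp(), "stream.bin")
print(stream_records_to_disk(records, path))  # 3000000
print(os.path.getsize(path))                  # 3000000
```

    The point of the staging buffer is that the producer can hand off records of any size while the disk always sees uniform, near-optimal write sizes.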

  • What is the max frame rate in depthFrame

    In v1.8, I remember the frame rate could be up to 30 fps for the depth frame. What is the max frame rate of depthFrame in v2.0, and how can I set it? Thanks!

    Depth (and IR, body, face and HDface) frame rates are 30 fps.
    Color is 30 fps, unless low-light mode is detected, in which case it automatically switches to 15 fps.
    If you're using the multi-stream source, it will stream at the lowest frame rate of the chosen frame sources.
    When you're using separate sources, they each stream at their own fps rate.
    Frame rates cannot be set by the user.
    Brekel

  • What does convert higher bit rates songs to 128 kbps mean

    what does convert higher bit rate songs to 128 kbps mean?

    pattyfromsyracuse wrote:
    what does convert higher bit rate songs to 128 kbps mean?
    Some people, myself included, import their music from original audio CDs. When doing so, I choose to import using the Apple Lossless format so as to keep the full original quality of the audio CDs (this is, after all, why it is called Apple Lossless: you are not losing any quality).
    The drawback of doing this is that the music files take up a lot more disk space than a lossy format like MP3 or AAC. These days that is not a problem on a computer, as hard disk space is now cheap and plentiful, but it can be a problem on a flash-memory-based music player like the iPod Shuffle, iPod Nano, iPod Touch, or iPhone, as these have far less capacity.
    Therefore, Apple now lets you turn on this option to automatically convert from Apple Lossless to AAC when syncing to one of these music players. As a result, the music on your computer is still kept in the full lossless high-quality format, but it is converted and transferred as a much smaller, slightly lower-quality copy for use on these players. Since with an iPod you are not listening in a quiet room with high-quality speakers, you will not really notice the difference in quality.

  • What type of hard disk should I use if I want to use it on mac and windows?

    Hey Forum,
    I am using Windows XP on my MacBook (Snow Leopard). I came across some dealers who say that there are hard disks for Mac only and for both Mac and Windows. I want to buy a hard disk that I can use on both Mac and Windows XP, so what type of hard disk should I get? Must I partition it into 2?
    Or is there any hard disk on the market that is compatible with both Mac and Windows XP without screwing up the format (NTFS / Mac OS X Journaled)? Please look into this matter and help me with it.
    All of your replies and suggestions are much appreciated.
    Thank you.
    Ala.

    Run, don't walk, from that dealer! And never look back.
    Once in a very long while Apple will have customized firmware on drives, and it is possible to find SCSI/SAS drives, or drives destined to be used with high-end storage controllers.
    But that is the exception that proves the rule.
    SATA is SATA. Though... there are now SATA III drives that don't work in XP, or that need a jumper, and Seagate and some other drives have shipped firmware that caused trouble, and Apple has had to issue firmware updates to improve compatibility...

  • In what folder on hard disk are messages saved aft...

    In what folder on the hard disk are messages saved after sync? I want to back them up because I am going to reinstall Windows...

    Sorry, I found it:
    E:\Documents and Settings\%username%\Local Settings\Application Data\Nokia\Nokia Data Store\DataBase

  • FAQ: Installing Elements 10, or What do all these disks do?

    When you purchase Photoshop and/or Premiere Elements 10 as a boxed product, you get a fair number of disks. The main reason for this is that a boxed product purchase of Elements is dual platform, that is, it's for both Windows and Macintosh.
    When dealing with a Photoshop and Premiere Elements combo purchase, you get 5 disks. Here is what they all do:
    Disk 1 = Installs the 32-bit version of Premiere Elements 10 and Photoshop Elements 10 on Windows
    Disk 2 = Installs the 64-bit version of Premiere Elements 10 and the (32-bit) Photoshop Elements 10 on 64-bit Windows 7 (skip Disk 1 if this is your OS)
    Disk 3 = Installs Premiere Elements 10 and Photoshop Elements 10 on Mac
    Disk 4 = Installs Premiere Elements Content (DVD themes, menus, titles, etc.) for Windows
    Disk 5 = Installs Premiere Elements Content (DVD themes, menus, titles, etc.) for Mac
    When working with Photoshop Elements only, you have 3 disks. Here is what they do:
    Disk 1 = Installs the 32-bit version of Photoshop Elements 10 on Windows. Also includes a trial installer for Premiere Elements 10
    Disk 2 = Installs a trial 64-bit version of Premiere Elements 10 for 64-bit versions of Windows 7
    Disk 3 = Installs Photoshop Elements 10 on Mac. Also includes a trial installer for Premiere Elements 10
    And finally, when using Premiere Elements only, you have 5 disks. Here is what they do:
    Disk 1 = Installs the 32-bit version of Premiere Elements 10 on Windows. Also includes a trial installer for Photoshop Elements 10
    Disk 2 = Installs the 64-bit version of Premiere Elements 10 for 64-bit versions of Windows 7
    Disk 3 = Installs Premiere Elements 10 on Mac. Also includes a trial installer for Photoshop Elements 10
    Disk 4 = Installs Premiere Elements Content (DVD themes, menus, titles, etc.) for Windows
    Disk 5 = Installs Premiere Elements Content (DVD themes, menus, titles, etc.) for Mac

    Is it rolling back the installation? You can follow the solution at this link: http://kb2.adobe.com/cps/894/cpsid_89403.html
    Try saving the file to your C drive. If that doesn't work, install it as a trial first and enter your serial number once you launch it.
    Hope this helps!

  • Streaming multi bit rate and single bit rate

    I'm trying to simplify my setup.  Sometimes I need to stream multi-bitrate and sometimes single-bitrate.  This is because the internet connections I'm sending from sometimes have poor upstream bandwidth, and multi-bitrate is too much to send, so we resort to single bitrate.  As I understand it, my encoder and server settings are different for each of these.  I created a single-bitrate and a multi-bitrate profile for Adobe Media Encoder; that was simple enough.  My question/concern is about the server settings.  I would like to have one server setting for multi-bitrate and one for single bitrate.  My goal is to avoid making changes on the server; I simply want to load the desired Adobe Media Encoder profile.  So, do I need to create a dedicated event for single bitrate?  Below is the syntax I send to my server from Adobe Live Encoder.  Instead of using 'liveevent' for my event, would I just give it a different name?  Does this also apply to my .m3u8 files?  I'm confused about how to name different events/streams...
    livestream%i?adbe-live-event=liveevent&adbe-record-mode=record

    Hi,
    When you use a setting like livestream%i?adbe-live-event=liveevent, the encoder expects multi-bitrate streams (the %i is replaced by a number, so the streams published will be livestream1, livestream2, and livestream3). You cannot use the same setting in the encoder for a single stream. Instead, you'll have to use livestream?adbe-live-event=<event_name>. You could create a new event under the same application.
    Hope this helps. Let me know if you have any other queries.
    Thanks,
    Apurva
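    The naming scheme Apurva describes can be sketched with a small helper. This is hypothetical Python just to show how the %i placeholder expands; the real substitution is performed by the encoder itself:

```python
def stream_names(template, count=3):
    """Expand an encoder stream-name template: with '%i' the encoder
    publishes one stream per bitrate; without it, a single stream."""
    if "%i" in template:
        return [template.replace("%i", str(i)) for i in range(1, count + 1)]
    return [template]

print(stream_names("livestream%i"))  # ['livestream1', 'livestream2', 'livestream3']
print(stream_names("livestream"))    # ['livestream']
```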

  • Help! data manipulation for high speed streaming to disk from multiple boards and multiple channels

    I am using LabVIEW 7.1 and have been trying to capture data from 12 channels simultaneously, sampled at 2 MS/s each, and stream to disk for up to a minute or more.  The hardware I am using is 2 x PXI-6133 S Series boards with an MXI-4 link to a Pentium D 2.8 GHz machine with 2 GB of RAM.  I have 2 SATA drives set up in a RAID 0 configuration, which should give me a hard-disk write speed faster than or equal to the MXI-4 transfer speed.
    I started off by using the example code "multi device sync - analog input - cont acquisition", which enabled me to sync the two boards and sample at the required speed.
    To stream the data to disk, I first merged the data from each board together to save it to one file.  I tried using the storage VIs, but I end up with a DAQmx read error (trying to read data that is no longer available).  I have played around with the read data size to the point that I either get an insufficient-memory error or the "trying to read data that is no longer available" error.  I have also tried using the File I/O blocks with some success, and have found that I am able to stream to disk only if I configure the DAQmx Read block to output the data in "raw 1D I16" format and wire it into the file-write block.  In doing this, I have noticed that with multiple channels on one DAQmx read task, I get all the channels in one 1D array rather than a 2D array organized by channel.  This makes it messy to read afterwards, and I don't want to write another VI to separate the channels, due to the high chance of getting the data mixed up if I change the number of channels on a board.
    Is there a cleaner way of streaming this data to disk while keeping the channel data separated?  And/or is there a better way to capture and handle the data I need?
    I have attached the VI that I have gotten to work consistently, streaming to disk using the raw 1D I16 format.
    Thanks in advance to anyone who can help.
    Attachments:
    multidevicesync_analoginput_streamtodisk.vi ‏197 KB

    Hi,
    I can suggest the following:
    Refer to the example VI called "High Speed Data Logger.vi", in conjunction with "High Speed Data Logger Reader.vi", in the LabVIEW examples. Although the logger might be in Traditional DAQ format, it can quite easily be converted to DAQmx format to store data in binary (I32) format. I have used this for many of my applications and have found that the retrieved data does not have any "mess-ups".
    Why not keep a separate file for each card? That way, you do not load your application with an extra process; you only have to acquire and save. After saving in binary format, you can retrieve it offline, convert it to ASCII format, and merge the data files from the various cards into one consolidated ASCII data file.
    hope this helps
    Regards
    Dev
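    For the channel-separation problem in the original post, deinterleaving a raw 1D I16 buffer back into per-channel arrays is straightforward once the channel count is known. A Python sketch of the idea, assuming the samples are interleaved channel-by-channel (ch0, ch1, ..., chN-1, ch0, ...):

```python
from array import array

def deinterleave(raw_i16, n_channels):
    """Split a 1-D interleaved I16 buffer into one array per channel."""
    if len(raw_i16) % n_channels != 0:
        raise ValueError("buffer length is not a multiple of the channel count")
    # Strided slicing picks out every n_channels-th sample per channel
    return [raw_i16[c::n_channels] for c in range(n_channels)]

# Usage: 2 samples from each of 3 channels, interleaved
raw = array('h', [10, 20, 30, 11, 21, 31])
channels = deinterleave(raw, 3)
print([list(ch) for ch in channels])  # [[10, 11], [20, 21], [30, 31]]
```

    Keeping the channel count as an explicit parameter (read from the task configuration rather than hard-coded) avoids the mix-up risk the poster mentions when the number of channels per board changes.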

  • How to check replication stream queue or rate of streaming

    Hi folks,
    Is there a way of checking the replication stream queue, and the rate at which it is sending data from one database to the other? Is there a log file I could look at to see that streaming is happening?
    Currently I check that streaming is happening by checking the sizes of both databases, to verify that they are changing. However, this is not accurate, since some of the data changes are updates rather than additions/deletions.
    When I do 'repadmin -status', the queue sizes are 0. Is this a known Oracle issue?:
    QUEUE SIZES
    Capture Queue Size: 0
    Apply Queue Size: 0
    Appreciate any suggestions you have.
    Thank you.

    You can query the V$ views to get statistics on the rate of streaming:
    V$STREAMS_CAPTURE -> rate of enqueue
    V$STREAMS_APPLY_READER -> rate of dequeue

  • What is "gc cr disk read" event?

    hi,
    Do you know what the "gc cr disk read" event means, exactly? I could not find any useful information in the Oracle documentation, on Metalink, or on the web.
    I suspect that it is related to parallel disk reads on RAC nodes. I mean, for a query, assume that all RAC nodes do db file scattered/sequential reads in parallel. If the main node where the SQL runs has finished its db file scattered/sequential reads but the other node(s) have not finished yet, then the "gc cr disk read" event occurs on the main node.
    However, I could not find any information to prove or disprove this assumption.
    Regards,
    erkan

    By the way, there is an explanation here:
    Re: What means "gc cr disk read" event?
    but Jonathan may not be right, because I see this event in my RAC cluster, which is a 2-node RAC cluster...

  • Ex stream to disk analog trigger

    Dear NI engineering,
    I'm using the example program called "Stream to Disk" with a PCI-5112. I was wondering: can I use an analog trigger in place of the software one (as I have done in the VI I have attached here)?
    This is because I'm trying to synchronize a PCI-5412 and a PCI-5112 (using two different VIs).
    Moreover, I also saw the example from your website called "3784.vi" (I have attached it here), but when I start it, it does not do anything. How come?
    Thanks a lot for your help,
    kind regards,
    Gianna
    Attachments:
    niScope EX Stream to DiskTEST01_analog_trigger.vi ‏49 KB
    3784_DOES NOT WORK.vi ‏27 KB

    Hi ,
    I think this forum thread is the same issue as SR 7354392 in National Instruments' support mailbox, so I will close this thread, and you can follow up with me in that SR.
    Regards,
    Hossein

  • What's a scratch disk?

    I'm a little confused by all the talk of scratch disks and external drives. Basically, I just want to know what exactly a scratch disk is, why it is better to put it on an external hard drive, and any other basic information about this topic. Thank you in advance.

    I bet you've been itching to ask that question!
    It doesn't have to be an external drive if you have a tower, like Tom said. It will just be another drive inside your computer.
    You can use your computer's normal Hard Drive perfectly well as long as it has plenty of free space, but for maximum efficiency with really large projects, an extra drive is better.
    However, don't feel pressured into buying one immediately. You can always fit everything on the one drive and see how it goes.
    Make sure your MacBook has at least 20% of the HD free when you are editing and you should be OK.
    Message was edited by: Ian R. Brown
