Read multiple DAQ data packets in one WHILE iteration

Hello,
I am looking for a solution to the following problem:
I am using a PXI-6132 to acquire an analogue waveform. The DAQ is triggered by the falling edge of another analogue waveform with the same theoretical frequency. What I would like to do is acquire, for example, 30 samples after the trigger and store this data in the on-board memory; then wait for the next falling edge (my trigger) to acquire another 30-sample batch; and, after let's say 5 acquisition events, transfer this data from the on-board memory to my host computer's memory, to be read by a classical "Read function" in one iteration of my while loop. I tried a lot of solutions (for example, setting the on-board memory buffer size to 150 before entering the while loop), but the read function still reads only 30 samples in one iteration. Is it possible to find a solution with the classical create channel, timing function, triggering function and then a read function in a while loop, or do I need to use a more complicated structure?
Thanks in advance for your help
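For what it's worth, the batching being described amounts to asking the read function for 5 × 30 = 150 samples per loop iteration instead of 30, while the hardware keeps appending a 30-sample batch on every trigger. The sketch below is plain Java standing in for the driver's buffer, purely to illustrate that arithmetic; it is not NI-DAQmx API code, and the class and method names are made up for the illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BatchedReadDemo {
    static final int SAMPLES_PER_TRIGGER = 30;  // samples acquired per falling edge
    static final int TRIGGERS_PER_READ = 5;     // acquisition events per loop iteration

    // Stand-in for the device's on-board / driver buffer.
    private final Deque<Double> buffer = new ArrayDeque<>();

    // Each hardware trigger appends one 30-sample batch to the buffer.
    void onTrigger(double[] samples) {
        for (double s : samples) buffer.add(s);
    }

    // One while-loop iteration: consume 5 x 30 = 150 samples in a single read,
    // instead of reading 30 at a time.
    double[] readBatch() {
        int want = SAMPLES_PER_TRIGGER * TRIGGERS_PER_READ;
        double[] out = new double[want];
        for (int i = 0; i < want; i++) out[i] = buffer.remove();
        return out;
    }

    public static void main(String[] args) {
        BatchedReadDemo daq = new BatchedReadDemo();
        for (int t = 0; t < TRIGGERS_PER_READ; t++) {
            daq.onTrigger(new double[SAMPLES_PER_TRIGGER]);
        }
        System.out.println(daq.readBatch().length);  // prints 150
    }
}
```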

Please post questions only once. This is a double post - see here

Similar Messages

  • Read MULTIPLE idocs data with all segments to my internal table in a single shot

    Dear SAP Folks!
    I have a question. I know that to read a SINGLE IDoc's data with all segments we have the FM IDOC_READ_COMPLETELY, but my requirement is to read MULTIPLE IDocs' data with all segments into my internal table in a single shot, without calling this FM in a loop, since that is a performance issue; at a time I may want to read 1000 IDocs' data!
    Could anyone please let me know whether there is any way to do this, keeping performance in mind, or any other FM?
    Best Regards,
    Srini

    Hi,
    I know the IDoc numbers, and I can write a SELECT query against DB table EDID4 to get all the segment data into my internal table. But I am looking for an FM to do this, because I have a very large number of IDocs and each IDoc has many segments (I am thinking from a performance point of view!). The FM IDOC_READ_COMPLETELY accepts only ONE IDoc number as an import parameter at a time. Is there any other FM, or another way, apart from a SELECT query on the EDID4 table?
    Best Regards,
    Srini

  • How to use multiple Spry Data Sets in one page

    I'm using two Spry data sets in one page. When I add the first Spry data set to my page everything runs OK; when I add the second Spry data set to the page, the first data set stops working. Does anyone know what the problem is?
    This is how I have my data sets listed.
    var ds1 = new Spry.Data.HTMLDataSet("/accounts/tower/list.php", "list");
    var ds2 = new Spry.Data.HTMLDataSet("/accounts/tower/numvisits.php", "chart");
    Thanks, let me know if you need more information.

    Good News!
    There is nothing wrong with what you have shown.
    Bad news!
    The problem could be in that part that you have not shown.
    Gramps

  • TCP/IP DataInputStream reads not all data send in one time?

    I have a C program that sends 4104 bytes of data to a java client.
    In my java client I have a buffer of 4104 bytes that reads the data using a DataInputStream. Now my problem is that the read function returns with only 1460 bytes instead of 4104 bytes.
    I only have this problem on a new type of PC we are using.
    Now I was wondering: in a TCP/IP session, doesn't every send result in one recv (read)?
    Does this differ for various platforms? (is that buffer size adjustable)
    Thanks!

    Alright, then I do have a problem.
    "If your application level doesn't define a block then you have problems."
    So I have to keep reading on the socket until I get a timeout? I saw that DataInputStream has a method available(), which should return the number of bytes that can be read without blocking. This hardly seems to work?
    I just want to read one TCP packet without knowing the size or the protocol of the packet, and without having to wait for some timeout. I don't want to reassemble/defragment all my packets. This question seems very reasonable to me, but perhaps I'm missing something?
    "There's no guarantee in TCP that a block size N written in one application will be delivered as a single block to the receiver."
    Is there a place I can read that specification?
    Usually people need to understand an API in order to be able to use it. TCP sockets present a continuous-stream paradigm: the data stream starts when the connection is established and ends when the connection is closed. Now reread your own question: you want to read a block of data from such a stream without knowing the size of the block. That just makes no sense. It is like trying to read the first X bytes from a file stream without actually knowing what X is. The number of bytes available immediately on a socket read is subject to all kinds of random factors on the network and has nothing to do with the amount of data the server is actually trying to send as a "block". That is, a single write() on the server does not necessarily correspond to a single read() on the client.
    A common solution to this problem is to have an implicit block size (that is, it is fixed and known a priori), or to prefix your payload with the block size. An example of the latter would be to send the block size as a block header (say, an int in network byte order) so that the receiving end can read the size first and then perform a loop that continuously reads from the socket until it has read exactly the required amount.
    In other words, TCP sockets just provide you with a continuous-stream paradigm. You need to impose some kind of application protocol on top of it to build client-server dialogs. This is what's done behind the scenes by CORBA, RMI, HTTP, etc.
    If you need to read up on the Berkeley socket API specifically, Richard Stevens has written some excellent books.
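    The length-prefix scheme described above can be sketched in Java: DataOutputStream.writeInt emits the 4-byte length in network byte order, and DataInputStream.readFully does the "keep reading until you have exactly N bytes" loop for you. A ByteArrayInputStream stands in for the socket's InputStream so the sketch is self-contained; over a real connection you would wrap socket.getInputStream() the same way.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class LengthPrefixedDemo {
    // Sender side: prefix the payload with its length as a 4-byte int in
    // network byte order (which is what DataOutputStream.writeInt emits).
    static byte[] frame(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
        return bos.toByteArray();
    }

    // Receiver side: read the length first, then readFully loops internally
    // until exactly that many bytes have arrived, however TCP fragmented them.
    static byte[] readFrame(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int len = in.readInt();
        // (In production, sanity-check len before allocating.)
        byte[] payload = new byte[len];
        in.readFully(payload);
        return payload;
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = new byte[4104];  // same size as in the question
        byte[] received = readFrame(new ByteArrayInputStream(frame(msg)));
        System.out.println(received.length);  // prints 4104
    }
}
```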

  • Multiple Crystal Data Consumers in One Xcelsius File

    We are trying to create a mashup of multiple different types of dashboards in one Xcelsius file. We've been able to do this successfully with multiple worksheets in our Excel file. However, we want all of our dashboards to pull in live data. The only way we've been able to get this to work with single dashboards is by using the Crystal Data Consumer data connection; none of the other connections work for us. The problem we are having is that it looks like it only lets us use one Data Consumer data connection. Does anyone have any ideas on how I can go about this?
    The data source by the way is on a SQL Server 2005 database. We've been able to pull the data we need into our Crystal Report and then convert that data into dashboard form using the Crystal Data Consumer. We just need to do the same thing but do it with multiple dashboards.

    Hi,
    Try using Live Office; it will allow you to bring in multiple Crystal Reports, and you can have multiple connections in Xcelsius.

  • How to select more than one data packet?

    Hi,
    I have uploaded data using 3 different data packets. However, each of these packets contains some errors.
    Using Monitor > PSA Maintenance, I want to display the error records for all three data packets in one screen. These erroneous records will be sent to our users for rectification. However, I can only select and display erroneous records packet by packet, so this has to be done three times instead of once.
    Can you advise how to display all erroneous data records in one screen?

    Hello Fulham FC,
    How are you?
    I feel it is possible to select 2 or more requests at once. Provide:
    No. of Records: count the total number of records in the packages you select.
    From Record: 2147483647
    Then try CONTINUE. In our system it's throwing a dump!!!
    I believe, dude, that in PSA Maintenance there is a SELECT ALL button, and even in the data package view it allows selecting two or more. There should be some way to do this.
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Read multiple files and write data on a single  file in java

    Hello,
    I am facing a difficulty: I want to read multiple files in Java and write their data to a single file, i.e., write data into one file from multiple files in Java.
    Please help me out.
    Naveed.

    The algorithm should be something like (a minimal Java sketch using java.io and java.nio.file):
    // Append the bytes of every input file to one output file.
    try (OutputStream uniqueFile = Files.newOutputStream(Paths.get("merged.dat"))) {
        for (File f : manyFilesToRead) {
            Files.copy(f.toPath(), uniqueFile);
        }
    }

  • One task control multiple DAQ modules

    Is it possible to have one task control 2 or 3 DAQ modules? I do not seem to have any luck in doing so.
    Also, for a digital output NI device that sources voltage: if I hook up 10 V to it and the module has 10 channels, how on earth are the 10 channels each able to source 10 V when I only supply one 10 V power supply?
    Does it just send extremely small amounts of current per channel, versus a decent amount of current per channel if I had only one output on?

    Hey all,
    This is a crosspost, I posted somewhere else after posting this post, just because I did not know whether I posted in the right place, here is the link to the other in case anyone else comes across this problem: http://forums.ni.com/t5/Multifunction-DAQ/one-task-control-multiple-DAQ-modules/td-p/3170766 Sorry about that.
    Anyways, I have 
    NI 9401, 
    NI 9403,
    NI 9474,
    NI 9375,
    I am using a 
    NI cDAQ-9184 chassis
    I believe these are all compactDAQ.
    I would like to combine the 9401 and 9403 into 1 task, and the 9474 and 9375 into 1 task.
    Also, I am having issues with one of the modules, the 9375. When I wire False data to the DAQmx Write in LabVIEW, it seems that the 9375 is still sourcing some voltage. I cannot remember the difference in voltage between wiring True data and False, but I believe they were somewhat close. Has anyone come across a problem like this?
     

  • Best way to merge multiple iPhoto libraries from several external hard drives into one, while deleting duplicates?

    Problem: My wife and I both have MacBook Pros (MBPs).  We take a lot of pictures and import them into iPhoto.  When the storage capacity of our MBPs gets full, I have been moving our iPhoto libraries onto external hard drives, of which there are now several (3 or 4).  The mistake I now realize we have been making is that once the iPhoto libraries were copied onto the external hard drives, we were only deleting about half of the photos in each iPhoto library remaining on our MBPs (because we wanted to keep some of the important ones at hand).  Once the storage capacities of our MBPs got full again, I would repeat the whole process, again and again.  In essence, I now have several large iPhoto libraries (each about 80 GB), each with multiple duplicate photos, divided among several external hard drives.  And I am running out of space on my hard drive again.  So what is the best way to:
    a)  merge all of these iPhoto libraries into just one, while simultaneously being able to delete all the duplicates?  (or would you not recommend this?)
    b)  prevent this from happening again?
    Thanks.  BTW I am using OS X Mountain Lion 10.8.5 and iPhoto 8.1.2 (2009)

    If you have Aperture 3.3 or later and iPhoto 9.3 or later, you can merge libraries with Aperture.
    Otherwise, the only way to merge libraries is with the paid ($30) version of iPhoto Library Manager.
    The Library Manager app can be set to avoid duplicates.

  • I formatted my external hard drive as Mac OS X Extended (Journaled) and put my data back; after a while my MacBook cannot read it. Please help

    I formatted my external hard drive and changed it to Mac OS X Extended (Journaled). Then I put all my data back. After a while, when I insert the hard drive, my MacBook cannot read it. Please help.

    SanandaDutta wrote: "Tried on a different Mac. The same problem exists."
    If that is the case, it's extremely unlikely you have a bad USB cable on the Seagate; rather, as I mentioned earlier, a bad SATA bridge card.
    If the Seagate 1 TB USB external won't open on either Mac and you've verified the same (try a different USB cable if you have one, though the cable is nearly never the cause), then to get the data off that HD (unless it is dead, which is also extremely unlikely) you would need to extract the HD from its case and insert it into either a HD dock or a USB HD enclosure.
    8 out of 10 seemingly dead, inoperable 1-3+ year old external HDs are perfectly fine; rather, the cheap 50-cent SATA bridge card fries and dies (alas).
    Reply back if you need help extracting it.

  • Need inputs how to extract data to BI in multiple Data packets from ECC

    HI Experts,
    I would like to know how I can get the data fetched in multiple data packets; as of now, all the data is coming into BI in a single data packet.
    I want to get this data in multiple data packets.
    I have checked the InfoPackage settings in BI; they are similar to any other InfoPackage that fetches in multiple data packets.
    Is there any possibility to restrict the data packet size within ECC?

    Hussein,
    Thank you for the helpful information.
    That document gave me a lot of information about the ANSI extract for BENEFITS.
    Thanks
    Kumar.

  • Space allocation on 11g R2 on multiple data files in one tablespace

    hello
    If the following is explained in the Oracle 11g R2 documentation, please send a pointer; I can't find it myself right now.
    my question is about space allocation (during inserts and during table data load) in one table space containing multiple data files.
    suppose i have Oracle 11g R2 database and I am using OMF and Oracle ASM on Oracle Linux 64-bit.
    I have one ASM disk group called ASMDATA with 50 ASM disks in it.
    I have one tablespace called APPL_DATA with 50 data files in it, each file 20 GB (equal sizes), to contain one 1 TB table called MY_FACT_TABLE.
    During Import Data Pump or during application doing SQL Inserts how will Oracle allocate space for the table?
    Will it fill up one data file completely and then start allocating from second file and so on, sequentially moving from file to file?
    And when all files are full, which file will it try to autoextend (if they all allow autoextend) ?
    Or will Oracle use some sort of proportional fill, like MS SQL Server does, i.e., allocate one extent from data file 1, the next extent from data file 2, ..., and then wrap around again? In other words, will it keep all files equally allocated as much as possible, so that at any point in time they have approximately the same amount of data in them (assuming the same initial sizes)?
    Or some other way?
    thanks.
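    The two possibilities raised above (fill one file completely before spilling to the next, versus a round-robin/proportional fill) can be contrasted with a toy simulation. This is only an illustration of the two policies as described in the question, not Oracle's actual allocation algorithm; the class and method names are made up for the illustration.

```java
import java.util.Arrays;

public class ExtentFillDemo {
    // Sequential fill: put every extent in the current file until it is full,
    // then spill to the next file. Returns extents allocated per file.
    static int[] sequentialFill(int files, int capacityPerFile, int extents) {
        int[] used = new int[files];
        for (int e = 0, f = 0; e < extents; e++) {
            if (used[f] == capacityPerFile) f++;  // current file full: spill
            used[f]++;
        }
        return used;
    }

    // Round-robin ("proportional") fill: one extent per file in turn,
    // keeping all files roughly equally allocated.
    static int[] roundRobinFill(int files, int extents) {
        int[] used = new int[files];
        for (int e = 0; e < extents; e++) used[e % files]++;
        return used;
    }

    public static void main(String[] args) {
        // 4 files, capacity 10 extents each, 10 extents allocated.
        System.out.println(Arrays.toString(sequentialFill(4, 10, 10)));  // [10, 0, 0, 0]
        System.out.println(Arrays.toString(roundRobinFill(4, 10)));      // [3, 3, 2, 2]
    }
}
```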

    On 10.2.0.4, regular data files, autoallocate, 8K blocks, I've noticed some unexpected things. I have an old, probably obsolete habit of making my data files 2G fixed, except for the last, which I make 200M autoextend with a max of 2G. So what I see happening in normal operations is that the other files fill up in a round-robin fashion, then the last file starts to grow. So it is obvious to me at that time to extend the file to 2G, make it noautoextend, and add another file. My schemata tend to be in the 50G range, with 1 or 2 thousand tables. When I impdp, I notice it sorts them by size, importing the largest first. I never paid too much attention to the smaller tables, since LMT algorithms seem good enough to simply not worry about it.
    I just looked (with dbconsole's tablespace map) at a much smaller schema I imported not long ago, where the biggest table was 20M in 36 extents, the second was 8M in 23 extents, and so on, totalling around 200M. I had made 2 data files, the first 2G and the second 200M autoextend. Looking at the impdp log, I see it isn't real strong about sorting by size, especially under 5M. So where did the 20M table it imported first end up? At the end of the autoextend file, with lots of free space below a few tables there. The 2G file seems to have a couple thousand blocks used, then 8K blocks free, 5K blocks used, 56K blocks free, 19K blocks used, 148K free (with a few tables scattered in the middle of there), 4K blocks used, the rest free. Looking at a similar 8G schema, it looks like the largest tables got spread across the middle of the files, then the second largest next to them, and so forth, which is more what I expected.
    I'm still not going to worry about it. Data distribution within the tables is something that might be important; where the blocks are on disk, not so much. I think that's why the docs are kind of ambiguous about the algorithm: it can change, and it isn't all that important, unless you run into bugs.

  • Best Practice - One mapping reading multiple source files

    I want to develop a solution for one single mapping reading multiple similar source files that are stored in different directories on my OWB server. I want to be able to determine at runtime of my mapping from which location to load the source file.
    Example:
    Mapping: Load_test_data
    source file 1: c:\input\loc1\test.dat
    source file 2: c:\input\loc2\test.dat
    When I run the mapping, I would like to use an input parameter specifying the location, loc1 or loc2. I would also like to use this input parameter in my mapping to populate one column in my target table with its value. This design would make it possible to dynamically load source files from different directories and also to see, after loading, where the data came from.
    Questions:
    - Is there a way to create such a design?
    - If not, what alternative would be appropriate?
    Thanks in advance for the feedback

    Thanks for the feedback. Unfortunately I do not use workflow together with my OWB.
    I have now indeed specified the file name and file location in the configuration of my mapping. However, I am not able to change them when executing the mapping: the data file name and file location are empty and greyed out when I execute my mapping; it always takes the values I specified in the mapping's configuration.
    What I would like to do is specify the location at runtime when I execute my mapping, but I don't know if this is possible. In addition, I'd also like to use the data file location as an input for one of the columns I populate in my target table.
    Then in the end I would be able to use one mapping and read multiple sources files from different locations and also be able to check in the end where the data was loaded from.
    Hope you can give me some more feedback on how to set this up in OWB.
    Many Thanks!

  • HT1296 How to transfer data between 2 iPhones while not having the old one

    How can I transfer or sync all the data from my previous iPhone to the new one while not having the first (it's not working due to a high-voltage problem in the area), and of course without losing anything?

    Correct, you do not need your old iPhone. All you need is your old iPhone's backup. If you had an old model iPhone and wanted to replace it with a newer one, restoring it from a backup of your older one would be the process you would use.
    Make sure before you start that your new iPhone is either really new or completely erased. If you did not just buy it new, do Settings > General > Reset > Erase all content and settings before restoring it from your old iPhone's backup.

  • Design Studio 1.3 : one large generic Data Source or multiple smaller Data Sources

    Dear all,
    In DS 1.3, is it still a best practice to have one large generic Data Source? Or is having multiple smaller Data Sources a better solution?
    Minimizing the number of Data Sources remains a golden rule, but which is the better approach:
    One large generic Data Source, using setDataSource,
    or
    Multiple smaller Data Sources, using Load in Script and Background Processing?
    Many thanks for sharing your ideas,
    Hans

    It depends on your application, how much data it is pulling in, and how you want to present that to the user/consumer of the application.
    At TechEd Las Vegas last year, SAP showed 9 dashboards (3 per row) with background processing for each row.
