During slow scan rates, are events captured between scans?

We are using 32 thermocouples to measure heated cabinet performance and are using the Continuous Acquisition to Spreadsheet VI. I am saving the data to a text file while plotting the data to a chart in LabVIEW 6.1. I only wanted to read temperature every 60 seconds (0.01667 scans/sec), but the scan rate control rounds off to 2 decimal places, so at a 0.02 scan rate we read data every 50 seconds. I set the buffer size to 1 and the number of scans to write at a time to 1. Does this mean I have a lag of one scan in the buffer, or am I seeing the latest scan written to the chart? Also, temperature events between writes to the screen sometimes show up on the chart, sometimes not. It seems to depend on how long the event takes place or how close it is to the next write. I would appreciate any insight on how the software handles the events between writes. Thanks.

Willard,
You can actually change the digits of precision in LabVIEW to display more than 2 decimal places. If you right-click on the scan rate control and select 'Format and Precision' you can alter the digits of precision to include 5 decimal places.
By setting the buffer size to 1 you are setting aside memory to hold only 1 scan at a time. By setting the number of scans to write at a time to 1, every time the AI Read VI is called within the while loop it will only read 1 scan from the buffer. So the hardware is set up to take 1 scan every 50-60 seconds and place that scan in the buffer, and every time the AI Read VI is called in software it reads 1 scan from the buffer. If the AI Read VI has already read the 1 scan in the buffer, it will wait until the buffer gets the next scan from the hardware (50-60 seconds) and then plot it on the chart. So, with the setup you have, you will not see any lag of temperature data: once the hardware scans, the software will almost immediately read that scan. I hope this helps.
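To picture the second part of your question, here is a rough sketch in Java (not LabVIEW and not the DAQ driver - purely an illustration of a 1-element buffer fed at the scan rate). The key point is that the hardware only samples at the scan instants: any temperature event that starts and ends between two scans is never digitized, so it can only appear on the chart if a scan instant happens to fall inside it.
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ScanBufferSketch {
        static final long SCAN_PERIOD_MS = 50_000;            // ~0.02 scans/s, i.e. one scan every 50 s

        public static void main(String[] args) throws InterruptedException {
            // Buffer size 1: holds exactly one scan, like the setting described above.
            BlockingQueue<double[]> buffer = new ArrayBlockingQueue<>(1);

            Thread hardware = new Thread(() -> {
                try {
                    while (true) {
                        double[] scan = new double[32];       // one reading per thermocouple
                        buffer.put(scan);                     // deposit the scan (blocks until it is read)
                        Thread.sleep(SCAN_PERIOD_MS);         // nothing at all is sampled in between
                    }
                } catch (InterruptedException ignored) { }
            });
            hardware.setDaemon(true);
            hardware.start();

            while (true) {
                // Like AI Read with 1 scan to read: waits until the next scan arrives,
                // then it can be written to file and plotted immediately.
                double[] scan = buffer.take();
                System.out.println("plot scan, channel 0 = " + scan[0]);
            }
        }
    }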
Regards,
Todd D.
Applications Engineer
National Instruments

Similar Messages

  • Percentage Overhead Rates are not captured with Transport request

    Hi,
    I have defined Percentage Overhead rates in KZZ2 and assigned them to a Costing Sheet. But while saving the transport request into the QAS client, it is throwing the below error. Kindly help me in solving this issue.
    Individual entries cannot be put into the change request
    Message no. SV141
    Diagnosis
    For technical reasons, the entries cannot be fully specified in the change request. There are two possible reasons for this:
    1. The key of an entry is longer than 120 characters. All entries whose keys match up to character 119 are then copied into the change request, rather than an individual entry.
    2. The key of an entry contains fields of special data types, for example, packed numbers. The key can only be specified in the change request up to the first such field from the left.
    All entries whose keys match up to character 238 are copied into the change request, rather than a single entry.
    System response
    The selected entry is copied into the change request correctly, but other entries may be copied as well.
    Please advise me.
    Thanks
    Sunitha

    Hi Sunita,
    I am also facing the same problem while transporting the overhead rates to quality and production.
    Can you give the detailed steps of how you resolved your issue?
    I tried the steps mentioned below but they are not working:
    1) Go to SA38 or SE38
    2) Run the program RKAZUTR1
    3) You need to specify the number of your request with the costing sheet changes
    Thanks in advance
    Sayed

  • Slow Transfer Rate Between iTunes and iPhone 3GS

    Hi,
    My iTunes recently failed to recognise my iPhone 3GS and suggested a restore followed by a backup.
    I followed the steps and after a short while it started to transfer from the iTunes library back onto my iPhone. However, the rate of song transfer appears to be very slow: 3,000 songs in 10 hours, and there are still 1,800 to go.
    Is there anything I can check to speed the process up? I don't remember it being this slow when I first got the iPhone.
    Thanks

    I'm having the same problem with very slow transfer and it's even more mystifying.
    I got a 3GS a while ago and was always very impressed by its transfer rates (a 300 MB video file syncs in about 30 seconds). My partner also got a 3GS last week, but I noticed when setting it up that it was very slow (did some tests - the same 300 MB file takes about 6 minutes to transfer). Since I was syncing to the same iTunes library, on the same computer, using the same cables, I assumed it was something wrong with the new phone. Tried a restore a couple of times but no change, so I took it to the local genius bar, who checked it, couldn't find anything wrong with it and swapped it out.
    The only thing they suggested that I hadn't tried was reinstalling iTunes - it never occurred to me as my original phone still syncs at normal speed. So I've now reinstalled iTunes (having deleted the new phone's backups), plugged in the new phone and asked it to sync using my original (fast phone) backup, but it still runs at the slow transfer rate even though this is a new phone! Tried a restore again and still no change.
    This is very frustrating. My original phone still syncs very fast. Getting the phone swapped means that it is unlikely to be a hardware fault with the new phone, but it seems no matter what I do (delete the new phone's backups, sync to my normal phone's backup, install as a new phone, reinstall iTunes) it still transfers at about 1/12 the speed of my original.
    I'm completely stumped and would welcome any suggestions as to what I could try next.

  • Final Cut Pro slow frame rate

    So whenever I import P2 files into an event I seem to have a slower frame rate during playback. The footage is good because it plays normally in the import media window, but it will play at a slower frame rate once imported. The video format is 720p at 24 frames.

    Where do you keep your projects and events? What are your system specs?
    Russ

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amounts of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables are inserted into every few seconds, with hundreds to thousands of rows each time.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and
    how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what the role of 'Journalizing CDC' is.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI; Logminer/Streams or GoldenGate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table.
    You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table etc. You will need to tweak the JKM to then change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield on the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM set-up (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and apply the changes to your target tables in the right order; otherwise you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for 'changed data only' to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM / IKM set-up).
    IKM Incremental Update can be used with or without JKMs; it's for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target.
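    To make that a bit more concrete, here is a rough JDBC sketch of the kind of SQL involved. All of the names (SRC_ORDERS, TGT_ORDERS, J$_ORDERS, I$_ORDERS, the connection details) are invented for illustration, and in real life the JKM and IKM generate these statements for you - but it shows the shape of the journalizing-view join and of the MINUS / NOT EXISTS style load described above:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class OdiStyleSketch {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "odi_work", "password");
                     Statement st = con.createStatement()) {

                    // 1. A journalizing-view style join: the journal table holds the PK
                    //    (plus change metadata) and is joined back to the source table
                    //    to pick up the non-PK columns.
                    st.executeUpdate(
                        "CREATE OR REPLACE VIEW JV$_ORDERS AS " +
                        "SELECT J.JRN_FLAG, J.JRN_DATE, S.* " +
                        "FROM J$_ORDERS J JOIN SRC_ORDERS S ON S.ORDER_ID = J.ORDER_ID");

                    // 2. Incremental-update style load of the integration table: keep only
                    //    the rows that differ from the target (the MINUS mentioned above).
                    st.executeUpdate(
                        "INSERT INTO I$_ORDERS (ORDER_ID, STATUS, AMOUNT) " +
                        "SELECT ORDER_ID, STATUS, AMOUNT FROM SRC_ORDERS " +
                        "MINUS " +
                        "SELECT ORDER_ID, STATUS, AMOUNT FROM TGT_ORDERS");

                    // 3. Apply the changes: update the target rows that already exist,
                    //    then insert the ones that do not.
                    st.executeUpdate(
                        "UPDATE TGT_ORDERS T SET (STATUS, AMOUNT) = " +
                        "(SELECT I.STATUS, I.AMOUNT FROM I$_ORDERS I WHERE I.ORDER_ID = T.ORDER_ID) " +
                        "WHERE EXISTS (SELECT 1 FROM I$_ORDERS I WHERE I.ORDER_ID = T.ORDER_ID)");
                    st.executeUpdate(
                        "INSERT INTO TGT_ORDERS (ORDER_ID, STATUS, AMOUNT) " +
                        "SELECT I.ORDER_ID, I.STATUS, I.AMOUNT FROM I$_ORDERS I " +
                        "WHERE NOT EXISTS (SELECT 1 FROM TGT_ORDERS T WHERE T.ORDER_ID = I.ORDER_ID)");
                }
            }
        }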
    user604062 wrote:
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how I can use them in my scenario.
    Hopefully I have explained it above. It's the type of thing you really need to play around with, and thoroughly review the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html)
    In general, what are the relations between 'Journalizing' and 'IKM'?
    A JKM simply presents (only) changed data to ODI. It removes the need for you to decide 'how' to get the updates, and removes the need for costly scans on the source table (full source-to-target table comparisons, scanning for updates based on a last-update date, etc.)
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand what is the role of 'Journalizing CDC'?
    It's the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies and a few to choose from for Oracle (Triggers (Synchronous), Streams / Logminer (Asynchronous), GoldenGate etc.)
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the J$ source, and the IKM step changes with CDC as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have a PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table etc.)
    What should I do if I can't put a PK (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael
    A lot to take in. As I advised, I would recommend you get a little test area set up and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • What are events in an interactive list?

    What are events in an interactive list? Is there a difference between an interactive list and an interactive report?
    Best regards,
    Ryan

    Hi
    Events in Interactive Report
    TOP-OF-PAGE DURING LINE-SELECTION
    AT USER-COMMAND.
    AT LINE-SELECTION
    AT PF-FUNCTION KEY
    A report's output is called a list.
    An interactive report's output is nothing but an interactive list.
    Regards
    Anji

  • Slow transfer rate of data to remote hard drive on airport extreme base station

    Hi there
    I'm currently trying to transfer RAW picture files to my Seagate GoFlex hard drive through my wireless network. I have an AirPort Extreme base station that the hard drive is connected to through a USB cable (USB 3 I think, could be 2.0). It is taking over an hour to copy across 16 GB worth of data. Is this transfer time to be expected? Can it be made quicker by altering or checking the settings in the AirPort Extreme base station?

    Things will likely go at least twice as fast if you connect your Mac to one of the LAN <-> ports on the AirPort Extreme.
    With a wired Ethernet connection, you could normally expect between 25-30 GB of data to be transferred per hour.  On average, wireless is typically 2-3 times slower than a direct Ethernet connection.
    And, as you might imagine, the chances of an error occurring during a wireless backup are always higher due to possible interference.
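    As a rough back-of-the-envelope check using those figures (an estimate only - real throughput varies with interference, drive speed and file sizes):
        public class TransferEstimate {
            public static void main(String[] args) {
                double copySizeGb = 16.0;                          // the copy described above
                double wiredGbPerHour = 27.5;                      // midpoint of the 25-30 GB/hour figure
                double wirelessGbPerHour = wiredGbPerHour / 2.5;   // wireless roughly 2-3x slower
                System.out.printf("Roughly %.1f hours over wireless, vs about %.1f over wired Ethernet%n",
                        copySizeGb / wirelessGbPerHour, copySizeGb / wiredGbPerHour);
            }
        }
    So a bit over an hour for 16 GB over wireless is about what you would expect.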

  • Painfully slow download rate - 0.13 Mbps

    Hello all, I've experienced a very slow download rate of late, 0.13 Mbps. In the summer my connection was fine, around 5-6 Mbps, but gradually the speed is grinding to a standstill. Most pages don't load these days, and those that do usually drop over time.
    The BTW speed test results are as follows:
    Download speed (Mbps): 0.13
    Upload Speed (Mbps): 0.33
    Ping Latency (ms): 61.25
    The further diagnostics are as follows:
    Download speed achieved during the test was - 0.13 Mbps
    For your connection, the acceptable range of speeds is 0.05 Mbps-0.25 Mbps.
    Additional Information:
    Your DSL Connection Rate :5.41 Mbps(DOWN-STREAM), 0.45 Mbps(UP-STREAM)
    IP Profile for your line is - 0.14 Mbps
    I have done the quiet line test, 17070, and there is no noise.
    Help would be gratefully received. 

    Welcome to the BT community forum, where customers help customers and only BT employees are the forum mods.
    In order for the forum members to help, please can you post the ADSL stats from your router? You may need to select 'show detail' to get all the stats (if you have a Hub, enter 192.168.1.254 in your browser and navigate to ADSL; if it is a HH4/5, go to troubleshooting then logs and look for the two lines together from when the hub last connected to the internet - they will show your connection speed and noise margin; if it is a Netgear, enter 192.168.0.1).
    The problem is that although you have a 5 Mb connection speed, your IP profile is rock bottom at 0.14 Mb, probably due to your router dropping the connection, or manual resets trying to get the speed back, or a combination of both. Your profile will return to normal automatically provided you can maintain the connection 24/7 for 3-5 days with no resets.
    Are you connected directly via a filter to the NTE5 master or test socket, or to somewhere else? Is the master the only phone socket in your home?
    Someone may then be able to offer help/assistance/suggestions to your problem

  • Ibase related information/event capture

    Hi friends, this is regarding a business scenario which needs to be mapped into SAP CRM:
    Client manufactures engines and these engines are installed in various vehicles. We plan to use installed base for the engines and its various components.
    But the requirement is that whatever happens to the engine during its lifecycle needs to be captured and reported.
    For example, an engine in its lifetime may be installed/uninstalled in various vehicles, its components may have suffered breakdowns, a service person assigned, repaired, replaced, etc. In other words, there may be many events associated with an engine.
    My question is how to map these events in SAP CRM or is it that this can be done only through ABAP enhancement?

    Hi,
    I got information on IBase. In my requirement the product has many components and their sub-components. We planned to use IBase for maintaining the product details with the customer.
    Now I was going through the forums, and I found many threads where people were facing problems displaying the complex structure of an IBase in the ICWC.
    What are the best practices to use with IBase so that such problems can be reduced?
    Thanks in advance
    Abhinav

  • Slow send rate with 1K Bytes Message

    I have looked all over the docs and I can't figure this out.
    This is my scenario:
    - Weblogic 10.3.1
    - I have a producer that's sending 1K bytes Messages in a loop
    - Messages are non-persistent
    - I have prefetch and async enabled on the ConnectionFactory I'm using
    - I have quotas disabled
    The issue I'm having is:
    - I can't get more than 24 messages / second on the producer.
    Funny thing is, if I set the message size to 479, the message rates are good. If I set it to 480 I get the 24 messages / second.
    So far this sound like a bug.
    Any help here would be appreciated...
    Follows some code:
    The test I'm doing is pretty simple:
    // Imports needed: javax.jms.* and javax.naming.InitialContext
    // Establishing a consumer:
    InitialContext ctx = new InitialContext(env);
    ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF");
    Connection connRec = cf.createConnection("admin", "my-password");
    connRec.start();
    Session sessRec = connRec.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Destination queue = sessRec.createQueue("./ttt!MyLocalQueue");
    MessageConsumer consRec = sessRec.createConsumer(queue);
    consRec.setMessageListener(new MessageListener() {
        @Override
        public void onMessage(Message paramMessage) {
            // do nothing - just drain the queue
        }
    });

    // The producer part:
    Connection conn = cf.createConnection("admin", "my-password");
    Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer prod = sess.createProducer(queue);
    prod.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
    long timeStart = System.currentTimeMillis();
    for (int iloop = 0; iloop < 10; iloop++) {
        System.out.println("i = " + iloop);
        int nmsg = 100;
        int warmup = 20;
        for (int i = 0; i < nmsg + warmup; i++) {
            if (warmup == i) {
                timeStart = System.currentTimeMillis();   // restart the clock once warmed up
            }
            BytesMessage msg = sess.createBytesMessage();
            msg.writeBytes(new byte[1024]);               // 1 KB payload
            prod.send(msg);
        }
        long timeEnd = System.currentTimeMillis();
        double diff = (timeEnd - timeStart);
        double timeProd = nmsg * 1000 / diff;
        System.out.println("produced " + nmsg + " msgs in " + diff
                + " ms with msgs/sec=" + timeProd);
    }
    conn.close();

    24 messages/second is extremely slow.
    Some thoughts:
    * You should see send rates of a few thousand per second if there's no other load on the system, and 10,000s of messages per second with one-ways (provided you can drain the messages fast enough).
    * To ensure senders don't far outpace consumers, you need some sort of flow control.
    * To reach full performance with JRockit, you may need to send in a tight loop for approx 3 to 5 minutes, and with Sun, you may need to send for at least 30 seconds (JRockit takes longer to warm up).
    * Most users are far more interested in receives per second than sends per second. Send throughput usually doesn't correlate with consumption rates unless there's some sort of flow control mechanism in the application or the tuning...
    * The System.currentTimeMillis() call on most operating systems and JVMs actually has a 10 millisecond granularity. Since you should likely be able to send a few thousand messages a second in most cases, this granularity is far too coarse if you're just sending a few messages between clock checks. You should send at least a few hundred messages between each clock check. In my benchmarking code I tend to use an algorithm like this:
       loop
         perform 100 operations
         if 30 seconds has not passed yet, then continue
         report throughput
         continue
    * See [ the JMS Performance and Tuning Check List | http://download.oracle.com/docs/cd/E14571_01/web.1111/e13814/jmstuning.htm#PERFM294 ] for tuning advice.
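    In Java, that measurement pattern looks roughly like this (a sketch only - 'sess' and 'prod' are assumed to be the Session and MessageProducer from your test code above, and it runs until interrupted):
        import javax.jms.BytesMessage;
        import javax.jms.MessageProducer;
        import javax.jms.Session;

        public class ThroughputLoopSketch {
            static void measure(Session sess, MessageProducer prod) throws Exception {
                final long windowMs = 30_000;                 // report once per 30 second window
                long sent = 0;
                long start = System.currentTimeMillis();
                while (true) {
                    for (int i = 0; i < 100; i++) {           // 100 operations per clock check
                        BytesMessage msg = sess.createBytesMessage();
                        msg.writeBytes(new byte[1024]);
                        prod.send(msg);
                        sent++;
                    }
                    long elapsed = System.currentTimeMillis() - start;
                    if (elapsed < windowMs) {
                        continue;                             // 30 seconds has not passed yet
                    }
                    System.out.println("sent " + sent + " msgs in " + elapsed
                            + " ms = " + (sent * 1000.0 / elapsed) + " msgs/sec");
                    sent = 0;                                 // start the next window
                    start = System.currentTimeMillis();
                }
            }
        }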
    Can you repost your test code with the "{code}" tags before and after (place each on its own line, and strip off the quotes)? It's a bit hard to read otherwise.
    Regards,
    Tom

  • Slow frame rate with Prosilica SmartCam

    I have a Prosilica AVT GT1600C here and a very slow frame rate...
    Here is my .icd information.
    I tried to balance ExposureTimeAbs against AcquisitionFrameRateAbs, but it doesn't help.
    The .icd file is attached (renamed to .txt), installed version: Vision 2013f2
    Can any of the vision cracks spot some optimization issues?
    thx in advance
    Attachments:
    Allied Vision Technologies GT1600C.txt ‏12 KB

    Ah well, yes...    the variables/options are:
    lighting - this is the most commonly overlooked, and often the easiest to fix;  use/buy more/better lighting!
    lensing - be sure you select a good quality lens (a good lens is more expensive but passes light better), and that the aperture is open as far as possible (unless you have depth of field issues)
    exposure duration - as long as possible relative to the frame rate desired, or short enough to prevent blurring if motion is involved (see the quick check after this list)
    use of gain - this should be a last resort as "noise" increases with gain... but slight use of gain can be helpful under marginal lighting conditions
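    One quick sanity check on the exposure / frame rate balance - an approximation only, since it assumes exposure and sensor readout do not overlap:
        public class FrameRateCheck {
            public static void main(String[] args) {
                double exposureTimeUs = 40_000;               // example: ExposureTimeAbs of 40 ms
                double maxFps = 1_000_000.0 / exposureTimeUs; // the frame period can't be shorter than the exposure
                System.out.printf("A %.0f us exposure caps the frame rate at about %.1f fps%n",
                        exposureTimeUs, maxFps);
            }
        }
    If AcquisitionFrameRateAbs is set higher than the exposure allows, the camera simply can't deliver it, which is why more light (and therefore a shorter exposure) is first on the list above.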
    Regards,
    Scott

  • Extremely slow transfer rate from iOmega eGo to iMac

    I am using a brand new 27" iMac and an iOmega eGo 2TB hard drive (http://www.pcmag.com/article2/0,2817,2374133,00.asp). The drive has its own power source and is connected to my computer via USB.
    I'm trying to transfer files from the hard drive to my new iMac and the transfer rate is painfully slow. They are files from my old MacBook, which I sold last week.
    When the disc is not mounted, it seems to function normally - its blue indicator light is consistently on and it sounds right. However, when I mount the disc and begin using it - in this instance to transfer files - it goes extremely slow. The blue light will mostly remain off, then intermittently flash on, and on screen a small amount of data will move.
    Through Disk Utility I was able to successfully verify and repair the disc, but it's still behaving this way. Can anyone help? SOS!


  • Slow scanning with Photosmart 7520

    I am scanning a bunch of old photos for a collage. Often you hear the scanner quickly scan the photo. Then it gets hung up for a long time, showing a partial of the photo and a box that says "scanning" with a green progress bar. Sometimes it hangs up for 1-2 minutes; some pictures take only a few seconds. They are all about the same size photos and I am scanning at 200 dpi. Why does it take so much longer on most of the photos?

    Hello @rmayes,
    Welcome to the HP Support Forums!
    I understand that you're experiencing a slow scanning issue when scanning from your HP Photosmart 7520 e-All-in-One Printer to your Windows 7 computer. I would like to assist you today with resolving this issue. Thank you for including that you are scanning your photos on 200 dpi. Rather than a printer, driver, or software issue I do believe that we're dealing with a connectivity issue or a firewall issue. When you print only 1-way communication is used. The computer sends the print command to the printer and the printer will then print the job. However, when you scan 2-way communication is required as the computer and printer send the scanned data back and forth until the full image is on the computer for you to work with. Because of this slow scanning issue I believe that something is interfering with the 2-way communication.
    In order to provide you with accurate support I will need to clarify how your HP Photosmart is connected to your computer (USB, Wireless)?
    In the meantime, I am going to have you run MSConfig on your computer. This will allow us to temporarily disable all the background programs running on your computer. Once the background programs are temporarily disabled I will have you test scanning again to verify whether the speeds are faster. If the speed is faster with the background programs disabled, then that will tell us that there is definitely a program on your computer causing this issue. Please follow the steps below.
    Click on your Start menu
    In the 'search programs and files' box right above Start, type msconfig and hit enter on your keyboard
    When the System Configuration Window opens select 'Selective Startup' and uncheck Load Startup Items
    Click the Services tab. Check the box at the bottom for Hide All Microsoft Services
    Select Disable All
    Select Apply and OK
    Restart your computer when prompted
    Once your computer comes back on please test scanning again.
    Please respond to this post with the result of running the MSConfig. Should the issue persist be sure to let me know how your printer is connected to your computer. I look forward to hearing from you!
    X-23
    I work on behalf of HP
    Please click "Accept as Solution" if you feel my post solved your issue, it will help others find the solution.
    Click the "Kudos, Thumbs Up" on the right to say "Thanks" for helping!

  • Apple TV slow data rate

    Hi guys, I have a problem with one of my Apple TVs. I currently have 2 Apple TVs and one is having a slow data rate. I have an Apple AirPort Time Capsule 2TB (latest gen) with most devices running on a wireless connection, and when I check the data rate (from the base station) one Apple TV is showing 65 Mbps and the other is showing between 6-13 Mbps. The signal reception on both Apple TVs is excellent. I have tried unplugging, restarting and restoring the Apple TV with no luck. The model number on this Apple TV is A1427. Any other ideas?

    Hi arustandi,
    Thanks for the question. If I understand correctly, one of the Apple TVs has a slow connection. I see you have already done a bit of troubleshooting. I would recommend that you read this article; it may be able to help with the slow connection.
    Wi-Fi and Bluetooth: Potential sources of wireless interference - Apple Support
    Thanks for using Apple Support Communities.
    Have a great day,
    Mario

  • When I sign out of iMessage, and then sign back in, all the messages I received during my time away are not there instantly. Help?

    When I sign out of iMessage, and then sign back in, all the messages I received during my time away are not there instantly after I sign back in. Help?

    Hi,
    Mac, iPhone, iPad, iPod Touch?
    Do they appear eventually?
    The iMessages are "pushed" to the devices, which means there can be small delays between the server recognising that the device went offline and that it has come back.
