Synchronous 4xAI, 3xAO over RTSI

Hello!
My task is to control a piece of hardware with 3 AO channels and to read back the produced data on 4 (at minimum 1) AI channels simultaneously. I have two PCI cards: a PCI NI 6711 for one of the AO channels and a PCI NI 6110 for the rest. Both are connected over RTSI. I trigger the output with the start trigger of the input task, and the sample rate is the same. When I run my VI with a predefined number of samples and compare the written and the read samples, they are not equal (the readout is too small!). The hardware input buffer of the PCI NI 6110 is only 8192 samples, and I need to read out 262144 (512x512) samples per second. I suspect I'm in trouble with the readout speed and the buffer size?!
The frequencies for the output signals should be 256 Hz, 2 Hz and 1 Hz (for the one on the NI 6711). Is there a way to manage this? If it is only possible to use one input channel under these conditions, that's also OK... but it must be possible, because another program we use runs on the SAME hardware.
OK, so far I hope you can help me... if you need more information about my VIs, please tell me.
So far thanks,
Michael
Attachments:
2 - Main.JPG ‏250 KB

Hello Michael,
your code is a bit complex, but it seems to me that there is no timing declared for the Z.piezo output task. You might want to check on that first...
Secondly, the onboard FIFO of the NI 6110 is only used to buffer the DMA transfer to your computer's RAM, where a much larger buffer holds the measured values until they are requested by and copied to the application (here: LabVIEW). So the FIFO size itself should not be an issue for your application. Just configure the AI task for 262144 finite samples and you should be fine.
I do see some other problems:
a) you cannot acquire at exactly 262144 S/s on the PCI-6110 without an external clock generator. The closest internal sample rate that you can set is 263157.9 S/s (= 20 MHz base clock / 76). As this is slightly faster than the rate you expected, input and output signals will not be coherent if your output is really clocked at 1 S/s, 2 S/s or 256 S/s. The easiest solution for this would be to share the sample clock between AI and AO, which would result in an AO sample rate of e.g. 513.98 S/s instead of 512 S/s.
b) reading your approx. 260 kS/s in blocks of 512 samples will certainly cause some CPU load, as the loop will have to iterate in less than 2 ms. With file I/O in this loop, I cannot guarantee that this will work. A possible solution would be to use a task configured for finite samples and a FOR loop that reads these 262144 samples into a pre-initialized array (replacing its values block by block as you iterate through the loop), saving them to disk afterwards. 260k samples equal 2 megabytes of RAM per AI channel (when using an array of DBL), which is quite small compared to today's computer memory sizes.
My assumption is that you are losing samples (which would also cause error code -200279) because your while loop iterates too slowly. With the implementation mentioned above, you should be able to solve this issue.
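For what it's worth, here is a minimal sketch of that finite-acquisition approach in the NI-DAQmx C API. The device name "Dev1", the channel list and the input range are assumptions, not taken from your VI; it only illustrates the structure described above (preallocated buffer, block-wise reads, file I/O after the loop).

/* Sketch: finite acquisition of 262144 samples per channel on 4 AI channels,
   read in blocks of 512 into a pre-allocated buffer; file I/O happens afterwards.
   "Dev1" and the +/-10 V range are assumptions. Error checking omitted. */
#include <stdlib.h>
#include <NIDAQmx.h>

#define BLOCK     512
#define BLOCKS    512                  /* 512 x 512 = 262144 samples per channel */
#define NUM_CHANS 4

int main(void)
{
    TaskHandle ai = 0;
    int32      read = 0;
    float64   *data = malloc(sizeof(float64) * BLOCK * BLOCKS * NUM_CHANS);

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0:3", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* 262144 S/s requested; the driver coerces this to the nearest achievable
       rate (263157.9 S/s on the 6110, as noted above). */
    DAQmxCfgSampClkTiming(ai, "", 262144.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, BLOCK * BLOCKS);
    DAQmxStartTask(ai);

    /* Read block by block into the pre-allocated array; no file I/O in this loop. */
    for (int i = 0; i < BLOCKS; i++) {
        DAQmxReadAnalogF64(ai, BLOCK, 10.0, DAQmx_Val_GroupByScanNumber,
                           data + (size_t)i * BLOCK * NUM_CHANS,
                           BLOCK * NUM_CHANS, &read, NULL);
    }
    DAQmxClearTask(ai);

    /* ... write "data" to disk here, after the acquisition has finished ... */
    free(data);
    return 0;
}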
Best regards,
Sebastian

Similar Messages

  • Counter not counting encoder over RTSI

    Hi all,
    I have a PCI-7358 (motion card) and a PCI-6602 (counter timer) both connected together with a RTSI cable. I have then configured an axis in MAX to route the encoder over RTSI-0 (Phase-A) & RTSI-1 (Phase-B). Then I added and configured the RTSI cable in MAX and added the PCI-6602 as its device (see MAX-1.jpg). Then with the appropriate motor / axis spinning I run the PCI-6602 test panel and configure as Edge counting and RTSI0 (see MAX-2.jpg), then I expect to see the counter increment – but it does not. Could it be something to do with mapping on the counter card? Am I missing something? Any ideas?
    -Martin
    Message Edited by Martin.D on 06-05-2007 08:22 PM
    Certified LabVIEW Architect
    Attachments:
    MAX-1.JPG ‏161 KB
    MAX-2.JPG ‏209 KB

    Hi Martin,
    We were finally able to get all the equipment needed to replicate this issue.  My colleague in the states saw the exact behavior that you described about not being able to route Axis 7 and 8 to RTSI in MAX. This has been documented and will be investigated in due course.
    In the meantime, if you do need to eventually route Axis 7 and 8's signals to RTSI, you can do this programmatically in LabVIEW using the Select Signal VI.  When we use this VI to route Axis 7 and 8's Phase A signal to RTSI, it works correctly, and we see the PCI-6602's test panels increment when counting the corresponding RTSI line.
    Thanks,
    Kirtesh Mistry
    National Instruments UK & Ireland

  • How do I read DAQ at every state change on a quad encoder over RTSI

    I have successfully routed either A or B encoder phase to the DAQ card over RTSI, but this gives you 1/4 of your encoder counts to the DAQ. Is there a way to trigger the DAQ clock with both the rising and falling edges, A and B signals so you get a DAQ reading with every encoder count?

    mikema111,
    From your explanation I am assuming that you are using an X4 encoder. Unfortunately there isn't a way to combine both Phase A and Phase B (rising and falling edges) into a DAQ scan clock without external circuitry.
    However, one possibility would be to use a sort of XOR circuit to merge the two phases into one signal and then pass that signal into one of your analog input channels. You could then set up that analog input channel for windowed analog input triggering. As the TTL pulse rises through the window, an AI Start Trigger pulse is generated, and another pulse is generated as it passes back down through the window. Pass the AI Start Trigger pulse into a counter set up for retriggerable single pulse generation and you will have your DAQ scan clock on the output pin of the counter.
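    A rough sketch of the retriggerable-pulse part of that idea with the DAQmx C API is below; the device name, counter, the "/Dev1/ai/StartTrigger" terminal and the pulse widths are assumptions, and the analog window trigger on the AI task would be configured separately.
    /* Sketch: the counter emits one short pulse each time the AI Start Trigger
       fires, which can then serve as a scan clock. Error checking omitted. */
    #include <NIDAQmx.h>

    int configure_retriggerable_pulse(TaskHandle *coTask)
    {
        DAQmxCreateTask("", coTask);
        /* One pulse per trigger: 1 us low, 1 us high, no initial delay. */
        DAQmxCreateCOPulseChanTime(*coTask, "Dev1/ctr0", "", DAQmx_Val_Seconds,
                                   DAQmx_Val_Low, 0.0, 1e-6, 1e-6);
        /* Fire on each AI Start Trigger edge and re-arm for the next one. */
        DAQmxCfgDigEdgeStartTrig(*coTask, "/Dev1/ai/StartTrigger", DAQmx_Val_Rising);
        DAQmxSetStartTrigRetriggerable(*coTask, 1);
        return DAQmxStartTask(*coTask);
    }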
    If you are just interested in counting the pulses in both Phase A and Phase B you can configure one counter to count on the rising edge and the other on the falling edge as described in the following Knowledge Base:
    http://digital.ni.com/public.nsf/websearch/15170E05F0F4B65C86256E2400812CD9?OpenDocument
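    If the edge counting alone is what you need, a sketch of that two-counter variant with the DAQmx C API might look like the following; "Dev1", ctr0/ctr1 and the PFI0 source terminal are assumptions for illustration only.
    /* Sketch: one counter counts rising edges, the other falling edges, both
       listening to the same encoder phase on PFI0. Error checking omitted. */
    #include <NIDAQmx.h>

    int setup_edge_counters(TaskHandle *rising, TaskHandle *falling)
    {
        DAQmxCreateTask("", rising);
        DAQmxCreateCICountEdgesChan(*rising, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_CountUp);
        DAQmxSetCICountEdgesTerm(*rising, "Dev1/ctr0", "/Dev1/PFI0");

        DAQmxCreateTask("", falling);
        DAQmxCreateCICountEdgesChan(*falling, "Dev1/ctr1", "", DAQmx_Val_Falling,
                                    0, DAQmx_Val_CountUp);
        DAQmxSetCICountEdgesTerm(*falling, "Dev1/ctr1", "/Dev1/PFI0");

        DAQmxStartTask(*rising);
        return DAQmxStartTask(*falling);
        /* Read each with DAQmxReadCounterScalarU32() and add the two counts. */
    }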
    I also recommend reading the following document that discusses several different options when using a quadrature encoder with an E Series board (DAQ-STC).
    http://zone.ni.com/devzone/devzoneweb.nsf/Opendoc?openagent&36BD71244BB26FC886256869005E541B
    Ames
    Applications Engineering
    National Instruments

  • Synchronous data exchange over JCaps without TCP/IP or WebService...

    Hi all,
    the subject may sound like a little crazy request, but that is what we actually need.
    Just to explain: we have a SAP R/3 system running (v. 4.72) which is not able to call Web Services and is also not able to open a TCP/IP-connection to a foreign host to exchange data.
    But what we need is a synchronous data exchange as, after pressing a button in SAP, we should query some database tables of another sub-system with JCaps and send back the received information to SAP.
    Do you have any ideas how this synchronous request from SAP to JCaps can be fulfilled with JCaps (our version is 5.1.3)?!
    We thought about using an HTTP server on the JCaps side, where SAP just sends an HTTP request to the specified address; we could then use the data received from this call to get data from the sub-system and send it back to SAP over an RFC or something similar - that is the easier part (sending data back to SAP). The harder part, in my opinion, is to create a possibility for SAP to call JCaps immediately - i.e. synchronously, not asynchronously, which we have already implemented via a file export...
    So, is it possible to use HTTP-server from JCaps for our needs?! Or is there another, easier possibility?!
    Any help highly appreciated...
    Regards
    Bernhard Böhm

    Hi Chris,
    thanks for the input - we also have a similar thing running, also using our BW-Server (SAP ERP 6.0) as the "web service engine"....
    But now, we want a solution without another server (like the BW in the upper case) involved!
    So, we thought about using HTTP-server on the JCaps-side which should be invoked by a simple HTTP-request from SAP (also possible in 4.72).
    Now I tried to set up a simple HTTP server project in JCaps 5.1.3 and it is driving me crazy right now...
    I just cannot get it to work - all I want is a simple JCD that just prints a line in the log file when started. The JCD has just a "processRequest" method from the HTTPS-Server eWay. In the connectivity map I set up the connection to the HTTP server with the servlet-url-name property:
    http://localhost:18001/dpListenHTTP_servlet_HttpServerServlet (like described in the userGuide).
    But when trying to build the project I get this error:
    com.stc.codegen.framework.model.CodeGenException: code generation error at = HTTP_Listen_cmListenHTTP_jcListenHTTP1 - HTTP Server e*Way Code GeneratorProblem creating war: C:\temp\dpListenHTTPprj_WS_serTestHTTP\12217262314811\WEB-INF\classes\..\dpListenHTTP_servlet_http:\localhost:18001\dpListenHTTP_servlet_HttpServerServlet.war (The filename, directory name, or volume label syntax is incorrect) (and the archive is probably corrupt but I could not delete it)
         at com.stc.codegen.frameworkImpl.model.CodeGenFrameworkImpl.process(CodeGenFrameworkImpl.java:1569)
         at com.stc.codegen.frameworkImpl.model.DeploymentVisitorImpl.process(DeploymentVisitorImpl.java:405)
         at com.stc.codegen.frameworkImpl.model.DeploymentVisitorImpl.process(DeploymentVisitorImpl.java:308)
         at com.stc.codegen.frameworkImpl.model.DeploymentVisitorImpl.traverseDeployment(DeploymentVisitorImpl.java:268)
         at com.stc.codegen.driver.module.DeploymentBuildAction.loadCodeGen(DeploymentBuildAction.java:923)
         at com.stc.codegen.driver.module.DeploymentBuildAction.access$1000(DeploymentBuildAction.java:174)
         at com.stc.codegen.driver.module.DeploymentBuildAction$1.run(DeploymentBuildAction.java:599)
         at org.openide.util.Task.run(Task.java:136)
         at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:599)
    Caused by: Problem creating war: C:\temp\dpListenHTTPprj_WS_serTestHTTP\12217262314811\WEB-INF\classes\..\dpListenHTTP_servlet_http:\localhost:18001\dpListenHTTP_servlet_HttpServerServlet.war (The filename, directory name, or volume label syntax is incorrect) (and the archive is probably corrupt but I could not delete it)
         at org.apache.tools.ant.taskdefs.Zip.executeMain(Zip.java:509)
         at org.apache.tools.ant.taskdefs.Zip.execute(Zip.java:302)
         at com.stc.codegen.frameworkImpl.model.util.AntTasksWrapperImpl.war(AntTasksWrapperImpl.java:404)
         at com.stc.connector.codegen.httpserveradapter.HSEWCodelet.generateFiles(HSEWCodelet.java:608)
         at com.stc.codegen.frameworkImpl.model.CodeGenFrameworkImpl.processCodelets(CodeGenFrameworkImpl.java:640)
         at com.stc.codegen.frameworkImpl.model.CodeGenFrameworkImpl.process(CodeGenFrameworkImpl.java:1546)
         ... 8 more
    Caused by: java.io.FileNotFoundException: C:\temp\dpListenHTTPprj_WS_serTestHTTP\12217262314811\WEB-INF\classes\..\dpListenHTTP_servlet_http:\localhost:18001\dpListenHTTP_servlet_HttpServerServlet.war (The filename, directory name, or volume label syntax is incorrect)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at org.apache.tools.zip.ZipOutputStream.<init>(ZipOutputStream.java:252)
         at org.apache.tools.ant.taskdefs.Zip.executeMain(Zip.java:407)
         ... 13 more
    Does anyone have any idea how to set up an HTTP server project?!
    Thanks and regards
    Bernhard Böhm

  • PXI-6682 read IEEE-1588 timestamp from 7953R over RTSI bus

    Hi,
    I am relatively new to LabVIEW programming, although I have two years of hard experience using the LabVIEW FPGA tools.
    So, I have a PXI-1033 chassis, and I have plugged a PXI-6682 IEEE-1588 card into slot 2 and a PXI-7953R card into slot 4 (random selection for slot 4). What I am trying to do is read the GPS timestamp from the 6682 card via the RTSI lines directly into the 7953R FPGA card. Unfortunately, I have no idea where to start and what to read, and all the examples (keywords: RTSI, IEEE-1588) that I find are for how to read the IEEE-1588 timestamp inside the host operating system, and nothing tells me how to do it directly from the FPGA. My goal is to build a machine that timestamps network packets that are being read by the FPGA hosted inside the PXI-7953R card.
    Can anybody point me in the right direction? I basically want to learn more about RTSI, where the PXI-6682 outputs its IEEE-1588 timestamp, and how data is transferred over the RTSI bus inside a PXI chassis.
    Thanks,
    John

    Thanks for the response Alejandro,
    I have a 7953R FlexRIO board with the Mimas Prevas Dual Gigabit Adapter Module (http://www.prevas.com/ethernet_simulator.html) plugged in.  Ethernet packets enter the Mimas Dual Gigabit Adapter and then go directly to the FPGA as raw Ethernet frames.
    From what you are telling me, it seems like I cannot have a timestamp go from the PXI-6682 to the 7953R via the RTSI lines and then have it appended to the end of the Ethernet frame before it is retransmitted out the other port of the Dual Gigabit Adapter (with proper recalculation of the 32-bit CRC being done inside the FPGA, of course).
    I will do some more reading of the manuals and will then call NI Support.
    Thanks again!

  • Can PCI-6601 pulse signal over RTSI every Nth encoder count?

    Hi All,
    I have a PCI-6601 counter/timer connected to a quadrature angular encoder.   The 6601 is also connected to a PCI-1422 frame grabber with a RTSI cable.
    I want to be able to trigger the frame grabber by sending a pulse over the RTSI cable every N encoder counts (or X degrees).
    How would I go about do this using C++?
    Thanks in advance,
    Brad 

    Hi Brad,
    There are a few resources I think you may find helpful. First, there's the DevZone article "Generate and Output Pulse Every Nth Count an Encoder Traverses". Generally, the way this works is:
    If you want to output a pulse every 4 counts, you take the total count range of the counter (2^32) and subtract 4 from it. This is the initial count to set, so that after 4 counts the counter reaches Terminal Count and the Counter Output Event pulse is fired. You can then export the Counter Output Event to a PFI line and use this line as the Z index terminal. If you set the Z index value equal to the initial count, the counter will always reset to 4 ticks below Terminal Count and will output a pulse on every 4th tick. The only drawback to this method is that it requires X1 decoding, and that the counter has to be dedicated to sending out the Counter Output Event (if you want to actually count the encoder and keep track of position, another counter will have to be used).
    As for specifically doing this in C++, I would recommend referencing the DAQmx C Reference Help (Start»All Programs»National Instruments»NI-DAQ»Text Based Code Support»DAQmx C Reference Help).  Hope that helps; for more assistance on the frame grabber portion of your question, I would reference the post you put in the vision forum.  Have a great day!
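    As a very rough starting point in C, a sketch with the DAQmx C API is below. "Dev1", ctr0, PFI8 and RTSI0 are placeholders, the Z-index reload described above still has to be configured on top of this, and I have not verified that exporting the Counter Output Event from a count-edges task behaves this way on the 660x, so treat it purely as an illustration of the calls involved.
    /* Sketch: count encoder edges starting N below terminal count and export the
       terminal-count (Counter Output) event onto RTSI0 for the frame grabber. */
    #include <NIDAQmx.h>

    int setup_nth_count_pulse(TaskHandle *ci)
    {
        const uInt32 N = 4;                      /* pulse every N encoder counts */

        DAQmxCreateTask("", ci);
        /* 32-bit counter: start N below terminal count (0xFFFFFFFF). */
        DAQmxCreateCICountEdgesChan(*ci, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0xFFFFFFFFu - N, DAQmx_Val_CountUp);
        /* Encoder phase A assumed to be wired to PFI8. */
        DAQmxSetCICountEdgesTerm(*ci, "Dev1/ctr0", "/Dev1/PFI8");
        /* Route the terminal-count pulse to the frame grabber over RTSI0. */
        DAQmxExportSignal(*ci, DAQmx_Val_CounterOutputEvent, "/Dev1/RTSI0");
        return DAQmxStartTask(*ci);
    }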
    aNItaB
    Applications Engineer
    National Instruments
    Digital Multimeters

  • DAQ integrated with 7344-Flexmotion over RTSI

    With some great advice from this forum, I have finally managed to get position based daq on the 6023E happening, using one of the encoder phases to clock acquisition.
    How would I synchronize motion and daq in the time domain?
    In addition, how would I synchronize the read position.flx and read A/D.flx functions with daq on the 6023E, in separate cases (position- and time-based daq)?
    Thanks
    Chris

    Start with one of the Count Digital Events examples. Insert a "DAQmx Channel" property node after the "DAQmx Create Channel" VI and before the "DAQmx Start Task" VI. Select the "Counter Input->Count Edges->Input Terminal" property. Right-click on the property and choose "Create->Constant." Select the desired RTSI terminal in the DAQmx Terminal constant.
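    In text-based code, the rough equivalent of that property node in the DAQmx C API would be something like the sketch below ("Dev1", ctr0 and RTSI0 are placeholders for your own device and routing):
    /* Sketch: point a count-edges channel at a RTSI line instead of its default
       PFI input terminal. Error checking omitted. */
    #include <NIDAQmx.h>

    int count_edges_from_rtsi(TaskHandle *task)
    {
        DAQmxCreateTask("", task);
        DAQmxCreateCICountEdgesChan(*task, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_CountUp);
        /* Equivalent of the "Counter Input->Count Edges->Input Terminal" property. */
        DAQmxSetCICountEdgesTerm(*task, "Dev1/ctr0", "/Dev1/RTSI0");
        return DAQmxStartTask(*task);
    }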
    Geoffrey Schmit
    Fermi National Accelerator Laborary

  • Synchronous inetd coredumping over multiple zones

    We're experiencing rather strange behaviour on several of our zoned systems (some M5k's, 5220's, T2000's),
    in that some Solaris daemons (mostly the inetd) are repeatedly coredumping, in all zones at the same time.
    Aug 2 13:29:01 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18080] core dumped: /var/core/core_zone1_inetd_0_0_1217676540_18080
    Aug 2 13:29:02 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18094] core dumped: /var/core/core_zone2_inetd_0_0_1217676541_18094
    Aug 2 13:29:02 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18247] core dumped: /var/core/core_zone1_inetd_0_0_1217676542_18247
    Aug 2 13:29:03 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18261] core dumped: /var/core/core_zone2_inetd_0_0_1217676543_18261
    Aug 2 13:29:04 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18275] core dumped: /var/core/core_zone2_inetd_0_0_1217676543_18275
    Aug 2 13:29:05 so02 genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[18285] core dumped: /var/core/core_zone1_inetd_0_0_1217676544_18285
    These always seem to come in bursts of three crash rounds, then it's running fine for a few days, and suddenly another burst of coredumps - on average twice a week.
    The machines are not visible to the internet, and as it also happens on weekends, I think I can rule out a malicious user trying to exploit some flaw in inetd, causing it to crash.
    Anyone got a hint what might cause this, or how to trace the issue back?

    Smells like a bug in inetd that is being tickled by a port scan. If someone in your organization is doing scanning, it could hit all your zone IPs within a short period of time.
    If you can't find who's responsible via other methods (asking around), I'd set up a zone that isn't doing anything (so it has almost no network traffic). Then I'd run snoop to watch traffic for that hostname just before you might expect it again. Then go back and correlate the traffic after you find inetd down again. By having the zone otherwise idle, the snoop traffic should be light enough that you can run it for hours with little impact.
    Darren

  • Synchronizing peers in transfer over Socket

    Hi again!
    I'm relatively new to Java, so there are many issues I still have to figure out ;-)
    Here are two of them:
    1) How can peers be synchronized in transfer over Socket? For example:
    This is the receiving peer:
    DataInputStream dsin = new DataInputStream(socket.getInputStream());
    String s = dsin.readUTF();
    This is the sending peer:
    DataOutputStream dsout = new DataOutputStream(socket.getOutputStream());
    dsout.writeUTF("something...");
    dsout.close();
    At the receiving peer, in some cases an EOFException or SocketException (Socket closed) is raised. I suspect this happens either because the receiver tries to read input before it is fully transferred, or because the sender has already managed to close the socket's OutputStream.
    So how can such a transfer be synchronized?
    2) What if both peers close their socket's OutputStream? Does this result in closing the connection and releasing all its resources, such as port?
    Thank you everyone!

    "this is what I learnt: DO NOT close the Socket on any endpoint until you make sure that the other endpoint managed to call socket.getInputStream()."
    Sorry, try again! This 'rule' is quite unnecessary.
    "According to the documentation, getInputStream() may return a stream whose buffer was discarded by network software if a broken connection was detected."
    That's a complete misreading and misquoting of the documentation. A reset can happen to the connection at any time; it can cause loss of data from the socket receive buffer; and it has nothing to do with the operation of getting the input stream. Getting the input stream of a socket has nothing to do with the network or the state of the connection; it just constructs a Java object. So this rule is quite pointless.
    "Even more, it seems to be good practice not to close the Socket until you are sure that the other endpoint will not try to read from the channel."
    You've got that back to front as well. Think about it some more and you will see that it involves an infinite regression.
    The only rules you need are these:
    (a) don't close the socket until you're sure the other end has finished writing. You have to close the socket while the other end is still reading, so that it will get an EOS indication, so it will know you've gone away, so it will stop writing and close its end of the socket.
    (b) closing the socket or either of its streams closes the other two items. You should always close the topmost output stream of a socket and nothing else.

  • Synchronous messages through Soap Adapter

    Hi XI Gurus,
    In my scenario I am sending a synchronous SOAP message over the SOAP adapter. The message flow is like:
    3rd Party Application --> XI --> SAP R/3.
    My message does get processed in SAP R/3 and I do get a response in SXMB_MONI as well as in Message Monitoring in RWB.
    The return message for message f1bdf1d0-cec5-11de-a9c0-0050569626f6(OUTBOUND) was successfully passed to the waiting "call" thread.
    2009-11-11 05:27:26     Information     The message status was set to DLVD.
    2009-11-11 05:27:26     Information     SOAP: response message entering the adapter (call)
    2009-11-11 05:27:26     Information     SOAP: response message leaving the adapter
    But the response from XI still does not reach the 3rd party application.
    On digging through the logs I found the following message. Could this be an issue in XI that is blocking the response message from reaching the 3rd party application? (I tried posting the message through XML Spy and I do get the response in XML Spy.)
    (Note : My XI Installation has central adapter engine)
    #1.5 #0050569626F6004200000D0B00000B5000F66675B4B8DEA5#1257401122698#com.sap.engine.services.servlets_jsp.client.RequestInfoServer#sap.com/com.sap.aii.adapter.soap.app#com.sap.engine.services.servlets_jsp.client.RequestInfoServer#LOVEIN#25369##sapnw03_NPI_6935550#Guest#2fd59d51c9d111debb440050569626f6#HTTP Worker [4]##0#0#Warning##Plain###Cannot send an HTTP error response [500 "Application error occurred during the request procession." (details: &quot;The WebApplicationException log ID is [0050569626F6004200000D0800000B5000F66675B4B8DEA5].&quot;)]. The error is: com.sap.engine.services.servlets_jsp.server.exceptions.WebIOException: *The stream is closed.*
    +     at com.sap.engine.services.servlets_jsp.server.runtime.client.ServletOutputStreamImpl.ensureOpen(ServletOutputStreamImpl.java:354)+
    Any help in this regard is appreciated .
    Thanks
    Lovein

    Thanks for your reply Stephan / Abhishek.
    To add some more info on the issue: it occurs when I am using PI 7.1.
    I do not face the issue if I use XI 3.0; the end-to-end flow works fine and the response message reaches the 3rd party application.
    The only difference between the two versions is that in XI 3.0 we have been able to disable the SOAP adapter authentication (by changing the web.xml file), whereas in PI 7.1 we have not been able to disable that authentication, and userid/password info has to be provided when sending the SOAP message from the 3rd party application.
    (If security is not disabled, it gives a 401 message as described in the blog "SOAP Sender ADAPTER 401 No Authorisation".)
    On another note, do you guys know a way to disable this authentication in PI 7.1?
    Thanks ,
    Lovein

  • Using RTSI in Ansi C

    I currently have a system with a PCI-6071E and a PCI-MIO-16E-4.
    I am using double-buffered data acquisition, where I first configure the cards and then read analog data from each card in a loop.
    If I use each card individually, i.e. only one card is acquiring data, I have no problem.
    If I call the analog reads sequentially, the first card acquires data but the second card gives error -10609, transferInProgError.
    I have read that RTSI must be used to synchronize the two cards, but all the examples I have found are in LabVIEW.
    How do I synchronize two E Series cards using a double buffer in C?

    From your question, it sounds like you are running into 2 possible problems. If you are trying to synchronize 2 E-Series boards by sharing timing and triggering signals over the RTSI bus, you'll need to use the DAQ function Select_Signal. One board will need to be the "master" and the other the "slave". The order of programming should be:
    1) Configure the master and call Select_Signal to route the scan clock and/or the AI Start Trigger over RTSI.
    2) Configure the slave's acquisition and call Select_Signal to specify that the slave's scan clock and/or start trigger come off the RTSI bus lines specified in Step 1.
    3) Start the slave.
    4) Start the master.
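    A rough sketch of steps 1 and 2 with the Traditional NI-DAQ C API is below; the ND_* constant names are quoted from memory of nidaq.h/nidaqcns.h and the device numbers (1 = master, 2 = slave) are assumptions, so check them against the Function Reference before use.
    /* Sketch: master drives its scan clock and AI start trigger onto RTSI lines,
       slave picks them up from there. Error checking omitted. */
    #include "nidaq.h"
    #include "nidaqcns.h"

    void route_timing_over_rtsi(void)
    {
        /* 1) Master: export scan clock to RTSI 0 and start trigger to RTSI 1. */
        Select_Signal(1, ND_RTSI_0, ND_IN_SCAN_START, ND_LOW_TO_HIGH);
        Select_Signal(1, ND_RTSI_1, ND_IN_START_TRIGGER, ND_LOW_TO_HIGH);

        /* 2) Slave: take scan clock and start trigger from those RTSI lines. */
        Select_Signal(2, ND_IN_SCAN_START, ND_RTSI_0, ND_LOW_TO_HIGH);
        Select_Signal(2, ND_IN_START_TRIGGER, ND_RTSI_1, ND_LOW_TO_HIGH);

        /* 3) Start the slave's acquisition, then 4) start the master's. */
    }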
    If you are simply trying to perform 2 analog input operations simultaneously, you'll need to make sure that your DAQ operations are asynchronous. By asynchronous, I mean that you do not use function calls that retain control of the nidaq32.dll until the operation completes. For example, you'll need to call DAQ_DB_HalfReady before calling DAQ_DB_Transfer to prevent the double-buffer transfer function from waiting until the data is available before retrieving and returning it. If you let the function wait, then other DAQ functions will not be able to execute.
    You'll also want to make sure that you are not programming the same device twice without calling DAQ_Clear in between operations. That is a common cause of the -10609 error.
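    And a sketch of the non-blocking double-buffer pattern mentioned above, again with Traditional NI-DAQ; the buffer handling and calling structure are assumptions:
    /* Sketch: poll DAQ_DB_HalfReady so DAQ_DB_Transfer never blocks, allowing
       both devices to be serviced from the same loop. Error checking omitted. */
    #include "nidaq.h"

    void service_device(i16 device, i16 *halfBuffer, int *acquisitionDone)
    {
        i16 halfReady = 0, daqStopped = 0;
        u32 ptsTransferred = 0;

        /* Only transfer when a half buffer is ready; otherwise return at once
           so the other device's service call is not starved. */
        DAQ_DB_HalfReady(device, &halfReady, &daqStopped);
        if (halfReady) {
            DAQ_DB_Transfer(device, halfBuffer, &ptsTransferred, &daqStopped);
            /* ... process or store ptsTransferred samples here ... */
        }
        if (daqStopped)
            *acquisitionDone = 1;
    }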

  • Check RTSI functionality?

    Being new to LabVIEW and data acquisition, I've modified a simple RTSI example VI for 2 E-Series boards. I cannot see any difference in VI execution whether I wire the RTSI triggering to the second device (AI start) or not. How can I be certain that device 2 is really triggered by RTSI0?
    Current setup is as follows:
    1) AI config board #1
    2) Route signal to as
    3) AI config board #2
    4) AI start board #2, trigger type Digital A, Analog channel&Level:
    5) AI start board #1
    As mentioned, I see no difference whether step 4) gets the triggering or not. So, how can I verify RTSI functionality and confirm signal reception over RTSI?

    The example I downloaded did not terminate the RTSI connections by routing the RTSI line used to after VI completion (Another example had both master and slave board waiting for a digital trigger leading to time outs, leading to cross-wired example confusion). Apparently RTSI connections remain open until the boards are reset, feeding the module a trigger on all RTSI lines I had tried to call today (and yesterday...).
    Now I can determine RTSI trigger reception per channel by verifying a time out in . If a time out occurs, no trigger has been received.
    I'm sure it's written somewhere, but it should be clearer in the RTSI help file that lines must be closed upon VI completion!

  • Size limit for synchronizing Photoshop CC

    When I try to synchronize photoshop settings using creative cloud I get the error cannot connect to creative cloud. The total size of the synchronizing data is over 100 MB (approx 110 MB) and most data are brushes. However when I remove all added brushes synchronizing does work without any problems.
    Is there a limit for the data file size when I try to synchronize my photoshop CC presets ?
    Thanks, John

    Edit > Preferences > Experimental Features > Scale UI 200% (Windows Only)   in Photoshop CC 2014
    Click OK, Restart.

  • Oracle B2B synchronous response

    I was wondering if B2B supports synchronous communication (ebMS over HTTP). If yes, how to accomplish this. Where can I find a sample application for this.

    Hi Manohar,
    For Sync support, you can refer to below blog entries :
    http://blogs.oracle.com/oracleb2bgurus/2010/04/sync_support_-series1.html
    http://blogs.oracle.com/oracleb2bgurus/2010/05/as11_oracle_b2b_sync_support_-.html
    You can also refer in Userguide topic "5.5.2 Using Transport Sync Callback" :
    http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10229/b2b_tps.htm#BABGAJDE
    Rgds,
    Nitesh Jain

  • Route scan clock to high speed capture

    Hi, I want to have a continuous acquisition and sample two encoders and my E Series channel into an array at about 100 scans per second. I will be routing the board clock, assumed to be the general-purpose clock, over RTSI to do a high-speed capture from two encoders. Absolute positions and AI must be synchronized. Can I use the internal clock from the E Series, and what is it called? How do I get a periodic sample from AI to be stored with each high-speed capture?

    Matt,
    To synchronize your analog input and encoder measurements, you will need to route your analog input scan clock over RTSI. In LabVIEW, you will use Route Signal.vi with AI scan start as the signal source input and your chosen RTSI line as the signal name input. This RTSI line can then be used to latch your encoder readings into a buffer. Thus, the data in your analog input and encoder buffers will be synchronized.
    Good luck with your application.
    Spencer S.
