-50405 Error at DAQmx Read AI Sample

Anyone seen this error before?  The Possible Reasons are:
No transfer is in progress because the transfer was aborted by the client. The operation could not be completed as specified.
I am using the DAQmx Read (Analog 2D DBL NChan NSamp) instance and this error is being thrown.  Please let me know if you have any thoughts.
Cheers!
CLA, CLED, CTD, CPI, LabVIEW Champion
Platinum Alliance Partner
Senior Engineer
Using LV 2013, 2012
Don't forget Kudos for Good Answers, and Mark a solution if your problem is solved.

As a follow-up for anyone who has a similar problem: as best as I can tell, this error had to do with my laptop running out of virtual memory. Windows would pop up and tell me my virtual memory was low, and the next time I ran the program I would get the error listed above. I managed to fix it by rebooting, which is what points me toward the virtual memory being the issue. I am still open to suggestions if anyone has them.
Cheers!
CLA, CLED, CTD, CPI, LabVIEW Champion
Platinum Alliance Partner
Senior Engineer
Using LV 2013, 2012
Don't forget Kudos for Good Answers, and Mark a solution if your problem is solved.

Similar Messages

  • Error -200429 DAQmx read (counter u32 1ch 1samp).vi -append- Edge count in for loop

    I am receiving error -200429 from DAQmx Read (Counter U32 1Chan 1Samp).vi <append>. This is occurring on the second iteration of a for loop.
    I have a sequence inside the for loop that uses four DAQ Assistant VIs for edge counting at various times in the sequence. All four count fine the first time through, but at the beginning of the second iteration the first edge count receives this error and does not count.
    Any ideas?

    The code you have supplied is similar to the converted subVI, but this code gives an empty-task error on the DAQmx Create Channel VI.
    Notice on the DAQmx Create Task VI that nothing is wired to the "task to copy" or the "global virtual channels" connectors; I believe this is why the task is empty when it is sent to the DAQmx Create Channel VI.
    The code we are currently using is a modification of code used before a recent upgrade. It may be better in your/NI's eyes to rewrite this as a state machine, but time constraints and resources do not allow that at this time. All other parts of the current program work except for the counter on the second iteration, and we would rather not reinvent the wheel at this point.
    If the task being cleared were the issue, then the other three counts should also fail to execute. The issue appears to be related to the "First Call" behavior.
    I have attached a zip file of the complete program as it is now written to give you a better idea of the whole picture.
    Attachments:
    Test Machine current code.zip 657 KB
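
    For readers who hit the same -200429 symptom from text-based code: the usual cause is that the counter task gets created and cleared (or auto-cleared by a DAQ Assistant) inside the loop. Below is a minimal NI-DAQmx C-API sketch of the "configure once, read many times, clear once" pattern; the counter name Dev1/ctr0 and the four-iteration loop are placeholders, not taken from the attached VI, and error handling is trimmed for brevity.

    /* Create and start the edge-count task ONCE, read it on every iteration,
       and clear it only after all iterations are done. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        uInt32     count = 0;
        int        i;

        DAQmxCreateTask("", &task);
        DAQmxCreateCICountEdgesChan(task, "Dev1/ctr0", "", DAQmx_Val_Rising,
                                    0, DAQmx_Val_CountUp);
        DAQmxStartTask(task);

        for (i = 0; i < 4; i++) {              /* each "sequence step" just reads */
            DAQmxReadCounterScalarU32(task, 10.0, &count, NULL);
            printf("iteration %d: %u edges\n", i, count);
        }

        DAQmxStopTask(task);
        DAQmxClearTask(task);                  /* clear only at the very end */
        return 0;
    }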

  • Why do I receive Error -200077 Daqmx Read when scaling should be within range

    I have an application using a NI-9215 and custom scales.  The task and scales are all setup programmatically.  I am receiving an error that the AI.Min value is out of range but when I check my scaling it seems like it should be fine.  I have attached an image with the error message and also the scaling that I am using (slope and offset) for each channel.  I have calculated the allowable min and max values for each of 3 channels based on the scaling and min/max range of the 9215 (+/-10V).  For all of the channels I am trying to set the AI.Min = 0 and AI.Max = 13 but I receive the error message even though these values are within the calculated ranges.  Is there something else I am missing?
    Thanks
    Dan
    Attachments:
    Error -200077.png 188 KB

    You should say how this was solved.  That way other people can learn from your experience.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
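
    Since the resolution was never posted, here is the arithmetic the original post describes, as a small C sketch. The slope and offset values are placeholders (the real ones are in the attached image); the point is only that AI.Min and AI.Max, expressed in scaled units, must lie inside slope * (±10 V) + offset for the NI 9215's ±10 V range.

    #include <stdio.h>

    int main(void)
    {
        double slope = 1.3, offset = 0.0;          /* placeholder custom scale   */
        double dev_min = -10.0, dev_max = 10.0;    /* device range in volts      */
        double ai_min = 0.0, ai_max = 13.0;        /* requested scaled limits    */

        double lo = slope * dev_min + offset;      /* scaled range endpoints     */
        double hi = slope * dev_max + offset;
        if (lo > hi) { double t = lo; lo = hi; hi = t; }  /* negative slope case */

        printf("scaled device range: [%g, %g]\n", lo, hi);
        printf("AI.Min/AI.Max %s inside the scaled range\n",
               (ai_min >= lo && ai_max <= hi) ? "are" : "are NOT");
        return 0;
    }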

  • Error -200279 at DAQmx read

    Hi, I am fairly new to DAQ in LabVIEW.
    I have code which accepts data from two devices: a PCI-4351 connected to a TBX-68T, and a PCI-6111 S Series card connected to an SCB-68 block. I have uploaded the code.
    When the code is running and I press the stop button on the front panel, I get error -200279, which has something to do with the DAQmx Read. I have looked into some other reports of this error on this forum, and accordingly I have tried increasing the rate while keeping the number of samples the same, but this has done nothing and the same error appears.
    Any suggestions would be greatly appreciated!
    thanks
    Notay
    Attachments:
    Copy of NI 435x Thermocouple (LEE).vi 124 KB

    Well, the main problem is that you have two different mechanisms controlling the speed of your while loop. First, in the upper right-hand side of the loop there is a Wait Until Next ms Multiple (100 ms). The second timing method is the read of 5 samples from a 1000 Hz stream, which only allows for a 20 ms wait time.
    After a while your buffer will be full (by the way, you don't need to specify the number of samples per channel) and the AI Read will generate an error.
    Make sure you end the while loop as soon as the error is generated; you don't check the error inside the while loop, which is a bad thing.
    To avoid this you have to decide how to time your while loop.
    I would set the number of samples to read to -1 (be aware that you then get roughly 100 samples from the AI Read).
    Also replace the 'Wait Until Next ms Multiple' with a 'Wait (ms)'.
    And check for errors inside the while loop.
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!
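
    For anyone finding this thread from the text-based API: the same "pace the loop yourself and read everything available" advice looks roughly like the sketch below. The device name, 1 kHz rate and Windows Sleep() pacing are placeholders (this is not the poster's 435x/6111 setup), and error handling is trimmed.

    /* Continuous acquisition; read ALL available samples each iteration
       (numSampsPerChan = DAQmx_Val_Auto, i.e. -1) so the buffer never fills. */
    #include <stdio.h>
    #include <windows.h>        /* Sleep(); the poster is on Windows */
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64    data[10000];
        int32      read = 0;
        int        i;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        for (i = 0; i < 50; i++) {
            Sleep(100);                               /* one timing source only  */
            if (DAQmxFailed(DAQmxReadAnalogF64(task, DAQmx_Val_Auto, 1.0,
                    DAQmx_Val_GroupByChannel, data, 10000, &read, NULL)))
                break;                                /* check errors every loop */
            printf("got %d samples\n", (int)read);    /* ~100 at 1 kHz / 100 ms  */
        }

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }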

  • DAQmx multichannel read error. DAQmx believes there are multiple channels in my task.

    Hello,
    I receive the appended error message when attempting to use DAQmx Read in my VI.  LabVIEW is telling me that I am attempting to read a single channel when my task is configured for multiple channels.  However, as you can see in the task configuration pane, that is not the case!  If I use the DAQmx Read instance that returns a multiple-channel waveform, the VI executes without error, but I do not want a 1D waveform array.  Any tips?
    Regards,
    Steve
    Attachments:
    multi_chan_errr.JPG 42 KB
    channel_Vin.JPG 108 KB

    This is exactly why I ALWAYS hide my DAQ tasks inside a "Resource Module".
    As I use the term, a "Resource Module" is a special-case Action Engine that exposes only the subset of functionality that a specific project needs for a given resource external to LabVIEW.  It also provides a single point of entry to access that resource by keeping private data private.  Had you used that technique here, you would have had one VI to look at, and there likely would have been no "reconfigure to add channel" method available to cause trouble.
    Jeff

  • Error code -200088, accessing DAQmx read function in fncB from fnc A

    Hi,
    I am using an NI 6229 device and C code.
    My C code is as follows:
    #define DAQmxErrChk(functionCall) { if( DAQmxFailed(error=(functionCall)) ) { goto Error; } }
    int main()
      TaskHandle aiTaskHandle;
    // and the variables whatever used in the API's declaration below done here and initialised
      DAQmxStartTask (aiTaskHandle);
      DAQmxErrChk (DAQmxCreateTask("AI Task",&aiTaskHandle));
      DAQmxErrChk (DAQmxCreateAIVoltageChan(aiTaskHandle,AIChannelList,AINameList,DAQmx_Val_RSE,0.0,10.0,DAQmx_Val_Volts,NULL)); 
      DAQmxCfgSampClkTiming(aiTaskHandle,"OnboardClock",AIRate,DAQmx_Val_Rising,DAQmx_Val_ContSamps,AISamplesPerChannelAcq);
      while(1)
                fncB( );
    fncB(  )
                DAQmxErrChk (DAQmxReadAnalogF64  
                  (AITaskHandle,DAQmx_Val_Auto,timeOut,DAQmx_Val_GroupByScanNumber,AIReadArray,AIarraySizeInSamps,&AISampsPerChanRead,NULL));
    This code builds without errors and runs. But the problem is with calling fncB( ), where the DAQmx read on aiTaskHandle executes.
    The DAQ read works fine the first time, but on the second call of fncB it gives the error "task specified is invalid or does not exist", with error return code -200088.
    Why the specified task becomes unknown on the second call is not at all clear.
    The NI error codes document does not list this error code at all.
    I have tried declaring the aiTaskHandle variable global; then the very first call of fncB gives the error mentioned above. In my application I can't use the DAQmx read in the same function where I declared the task handle, so I have to call it from the other function.
    Can somebody help with calling the DAQmx read from another function, and explain why this task becomes invalid on the second call?
    Thanks,
    vishnu

    I found where my error comes from.
    I configure my task from another function and call this callback to start the acquisition (partly taken from the NI examples).
    I get the error on the Start_AI_Clk(taskAIClk); call.
    int CVICALLBACK AI_TrigStartCallback(int panel, int control, int event, void *callbackData, int eventData1, int eventData2)
        int32       error=0;
         char        Chaine[500],Chaine1[500],ChaineFormateur[500];
        int32       numRead;
        float64     *data=NULL,*dataMoy=NULL;
        int         i,j,Checked;
        double        LocValeurLue,TempsDebut,TempsTotal,TempsEnCours;
        long int     NbMesTotale = 0;
        FILE         *TempFile;
        if( event==EVENT_COMMIT ) {
            if( (data=malloc(NbMes*NbChanAIClk*sizeof(float64)))==NULL ) {
                MessagePopup("Error","Not enough memory");
                goto Error;
            TempsDebut = Timer();
            TempsTotal = TempsDebut;
            TempFile = fopen("TempoResult.txt","w");    
            Start_AI_Clk(taskAIClk);
            ProcessDrawEvents();
            gRunningTrig = 1;
            while( gRunningTrig )
                DAQmxErrChk
    (Read_AI_Clk(taskAIClk,NbMes,data,NbMes*NbChanAIClk,&numRead));
                ProcessSystemEvents();
                 /*data treatment*/
            TempsTotal = Timer() - TempsTotal;
            fprintf(TempFile,"Temps total:%.3f - Nb Mes totales : %d",TempsTotal,NbMesTotale);
            fclose(TempFile);
    Error:
        if( DAQmxFailed(error) )
            TraitErreurCarteDAQmx("Lecture AI horloge externe",error);
        if( taskAIClk!=0 ) {
            Stop_AI_Clk(taskAIClk);
        if( data )
            free(data);
        return 0;
    What happens is that when I click on my start button, my function is executed once before an EVENT_COMMIT arrives, so it jumps directly to the Error part; then, as the task handle is not null, it stops the task:
     if( taskAIClk!=0 ) {
            Stop_AI_Clk(taskAIClk);
    Then it executes the if( event==EVENT_COMMIT ) part, and as the task has already been stopped, it gives the -200088 error code.
    To correct this, I changed the error handling like this:
    if( DAQmxFailed(error) )
        TraitErreurCarteDAQmx("Lecture AI horloge externe",error);
        Stop_AI_Clk(taskAIClk);
    Yop!
    DanY
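
    For anyone tripping over the same -200088 in C: the task handle must be created (and started) before any read, and must not be stopped or cleared between calls. Below is a minimal sketch of the "handle at file scope, configure once, read from any function" pattern; the channel name, rate and buffer sizes are placeholders rather than either poster's actual code.

    #include <stdio.h>
    #include <NIDAQmx.h>

    static TaskHandle gAiTask = 0;       /* shared by setup() and read_block() */

    static int setup(void)               /* create, configure and start ONCE */
    {
        if (DAQmxFailed(DAQmxCreateTask("", &gAiTask)))                return -1;
        if (DAQmxFailed(DAQmxCreateAIVoltageChan(gAiTask, "Dev1/ai0", "",
                DAQmx_Val_RSE, 0.0, 10.0, DAQmx_Val_Volts, NULL)))     return -1;
        if (DAQmxFailed(DAQmxCfgSampClkTiming(gAiTask, "OnboardClock", 1000.0,
                DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000)))         return -1;
        return DAQmxFailed(DAQmxStartTask(gAiTask)) ? -1 : 0;
    }

    static int read_block(void)          /* called repeatedly; never clears the task */
    {
        float64 buf[1000];
        int32   got = 0;
        if (DAQmxFailed(DAQmxReadAnalogF64(gAiTask, DAQmx_Val_Auto, 10.0,
                DAQmx_Val_GroupByScanNumber, buf, 1000, &got, NULL)))  return -1;
        printf("read %d samples per channel\n", (int)got);
        return 0;
    }

    int main(void)
    {
        int i;
        if (setup() != 0) return 1;
        for (i = 0; i < 5; i++)
            if (read_block() != 0) break;   /* the handle stays valid between calls */
        DAQmxStopTask(gAiTask);
        DAQmxClearTask(gAiTask);            /* clear only once, at the very end */
        return 0;
    }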

  • Which is the best way to edit this program and make it read 1 sample from each channel?

    The original program was made with Traditional NI-DAQ. I have edited it to DAQmx as best I could. The program is already applying the voltages that are generated in the code (DAQmx Write.vi), but I'm having problems acquiring voltages: it's giving me rare readings (DAQmx Read.vi). I don't know if I have to make a DAQmx Start Task.vi for each channel in the program, or if I can make it work with a single one. Note that I have not made many significant changes, because this program is already running in another lab and they gave the program to us so we wouldn't have so many problems; but instead of the BNC-2090 they got the BNC-2090A, which uses DAQmx instead of Traditional. So, can anyone help?
    Attachments:
    2 Lock-In, 2 V Amp, Vd Amp - 090702(MTP).vi 100 KB
    2 Lock-In, 2 V Amp, Vd Amp - 090702(MTP)new.vi 107 KB

    A BNC-2090 is just a connector block.  It has no effect on whether you need to use DAQmx or Traditional DAQ.  That is determined by the DAQ card you are connecting the terminal block to.
    You might be referring to the document Differences Between the BNC-2090 and BNC-2090A Connector Blocks, but that just describes the change in the labels of the terminal block to accurately reflect the newer DAQ cards.
    What problems are you having with the new VI you just posted?  Are you getting an error message?  I don't know what "rare readings" means.
    You really should look at some DAQmx examples in the Example Finder.  Some of the problems you are having are because your DAQ blocks are all sort of disconnected.  Generally, you should be connecting the purple task wire from your create task function, through the start, read, or write, and on to the clear task.  Many of your DAQ functions are just sitting out there on little islands right now.  You should also be connecting up your error wires.
    With DAQmx, you should be combining all of your analog channels in a single task.  It should look something like Dev0/AI0..AI7.  Then use an N channel 1 sample DAQmx Read to get an array of the readings, which you can then use Index Array to break apart (a text-based sketch of the same pattern follows below).
    Other things you should do are to replace the stacked sequence structures with flat sequence structures, and turn on AutoGrow for some of your structures such as the loops.  In the end, you might find you can eliminate some sequence structures.
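
    The "one task, many channels, one read" idea translated to the DAQmx C API, for reference. The channel string Dev1/ai0:7 and the channel count are placeholders, not the poster's hardware map, and error handling is omitted for brevity.

    /* Put ALL analog channels in ONE task, start it once, then do a single
       N-channel 1-sample read and index the result per channel. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    #define NCHANS 8

    int main(void)
    {
        TaskHandle task = 0;
        float64    sample[NCHANS];          /* one value per channel */
        int32      read = 0;
        int        ch;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0:7", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxStartTask(task);               /* one start for the whole task */

        DAQmxReadAnalogF64(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                           sample, NCHANS, &read, NULL);
        for (ch = 0; ch < NCHANS; ch++)
            printf("ai%d = %f V\n", ch, sample[ch]);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }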

  • DAQmx: Numerical indicators go blank intermittently using DAQmx Read

    Hello everyone,
    I was wondering if anyone else has ever fought this issue and won.  I have a loop (I will attempt to attach an image of the block diagram) which performs a DAQmx Read operation.  The loop employs a 200 millisecond wait.  The loop scales the data, sends all of the values to little indicators in an array, picks out a few critical values to display on duplicate large indicators, and optionally logs the data by streaming it to disk.  The task will usually (99.9% of the time) be run at 100Hz.  So, I was figuring I could do a "Read All", since at 200 ms of wait time due to the millisecond multiple wait function, I should always have around 20 points to read in the buffer the next time around.  This would allow me to have very current readings in my displays.  I could do a partial read, but then I figure I would have a lag between the displayed values and the reality of what's going on in the test.  Having current readings, as current as can be, is highly desirable for this application.
    I figured everything was good with what I was doing, especially when I tested it with a simulated device (I know a simulated device behaves much differently than a real device, especially at start up).  Everything seemed to work well.  When I go to the real hardware, though, I intermittently get blank indicators.  Actually, it is more like intermittent numbers in my indicators.  Most of the time, the DAQmx Read operation returns a null, empty array (no data available in the buffer).  I must be missing something, because I figured that a 200 ms wait would allow another 20 samples to be collected.
    If someone could please just ease my conscience and let me know that I haven't done something very fundamentally wrong in this code, even if you couldn't help me with a solution, I would very much appreciate it.  I feel that this code should work, keeping data in the indicators at all times, and don't know why it doesn't work.  If you could offer me the solution, even if it is to point out that I did something very wrong, I would much appreciate it.
    I am just writing this code, and still have some icons to make, so the sub VI's still have the default icons.  Sorry about that.  Basically all those do is get array subsets, or scale data, or write data to the data files.  If you need to ask questions about the code, I can understand.  I am not the greatest yet at writing self documenting code yet.  And if you need to know what that event structure does, it watches the two boolean button controls to determine what state in the state machine to go to next, and the time out case of the event structure highlights a data value if it goes out of bounds.
    Sorry if I am too wordy.  I have been accused of that before.  I just notice a lot of "Help!  DAQ doesn't work.  How do I fix?" posts, and I don't see how I or anyone else could help that person at all.
    I am using LV 8.2, with DAQmx 8.3.1, on Windows XP, with 1 GB of RAM and a fairly healthy dual-core processor.  I have a workaround where I throttle back the read operation, only reading 55% of the available samples as reported by a DAQmx Read property node (the highest percentage I found that prevents the indicators from going blank).  This introduces a small lag between the real world and the data on display, however.  Also, it seems like a processor-dependent solution: I would have to tweak this percentage for every machine I run this on.
    If I have left anything important out, please let me know, and I will do my best to clarify.  Thanks to anyone who reads this, and a big thank you to anyone who takes the time to reply.  Again, I would be really happy with a "That code looks good to me, and your thinking is correct", if that indeed is the case.
    Thanks.
    Attachments:
    Log Monitor Block.JPG 292 KB

    Yeah, as you and Erik said, just specifying (Sample Rate / 5) as the '# to read' could do the trick.  Then you can ditch the 'Wait Until Next ms Multiple' function.  I don't think I'd recommend the -1 = infinite timeout though; I'm pretty leery of stuff that can lead to an infinite wait or an infinite loop, and even a 1-second timeout should easily be way more than enough.  Note, however, that this method depends on all your processing code executing in under 200 msec on average.  Otherwise, your reads will fall behind and you'll eventually get a DAQ buffer overflow error.  My earlier suggestion to first query for the # of available samples and then read the MAX of (# available, SampleRate/5) will prevent cumulative fall-behind effects.
    There are too many unknowns to speculate with any confidence on exactly why the software timing method didn't work.  I can point out some additional things that bear greater scrutiny, though.
    1. You've got some sort of function dealing with file streaming that takes a path input and produces a path output.  This probably means that every loop iteration you're using the path to open a file, write data, and close the file again.  This actually may consume more than 200 msec, at least some of the time.  Because this runs in parallel with the msec wait function, something like the following could be happening:
    A. Iteration 1 proceeds as expected.  The DAQ read collects 20 samples, the file write consumes only 100 msec, and the wait msec function ends after 160 msec in order to end on a 200 msec multiple.  The wait function took longest, so your whole loop ends on a 200 msec multiple.
    B. On iteration 2, the wait msec function ends after 200 msec on the next 200 msec multiple.  The DAQ read collects another 20 samples (because it's been pretty much exactly 200 msec since the previous loop iteration started) right away.  However, Windows was busy messing with the file cache so this time your file write consumes 375 msec.  The file function took longest so your whole loop ends 175 msec into the next 200 msec multiple.
    C. On iteration 3, the wait msec function ends after 25 msec on the next 200 msec multiple.  The DAQ read collects 37 samples that have come in since the prior call 375 msec ago.  The file function consumes only 50 msec this time, ending the loop 25 msec into the next 200 msec multiple.
    D. On iteration 4, the wait msec function ends after 175 msec at the next 200 msec multiple.  The DAQ read collects the 5 samples that have come in since the prior call 50 msec ago.  Uh oh, not enough samples...
    One possible fix is to open the file outside the loop and leave it open until after the loop is done.  Opening and closing files has quite a lot of overhead.  Inside the loop, you'd be passing around the file refnum.  Doing this one thing alone might also be a way to fix your timing problem.  Along similar lines, you could write the data to a queue and then do your file writes in an independent loop that reads the data out of the queue.  You can search here for "producer consumer" for more info.
    -Kevin P.
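
    A C-API sketch of the "query what's available, never read less than rate/5" idea from the reply above, for anyone working in text-based code. The device name, 100 Hz rate, 200 ms pacing and buffer size are placeholders; file logging and error reporting are omitted, and Sleep() assumes Windows.

    #include <stdio.h>
    #include <windows.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64    rate = 100.0;
        float64    data[5000];
        uInt32     avail = 0;
        int32      want, got = 0;
        int        i;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", rate, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(task);

        for (i = 0; i < 25; i++) {
            Sleep(200);
            DAQmxGetReadAvailSampPerChan(task, &avail);
            /* Read at least rate/5 (~20 samples here), more if we fell behind,
               so the backlog can never grow without bound. */
            want = (int32)avail;
            if (want < (int32)(rate / 5)) want = (int32)(rate / 5);
            if (want > 5000) want = 5000;             /* stay within the buffer */
            if (DAQmxFailed(DAQmxReadAnalogF64(task, want, 1.0,
                    DAQmx_Val_GroupByChannel, data, 5000, &got, NULL)))
                break;
            printf("available %u, read %d\n", (unsigned)avail, (int)got);
        }

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }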

  • Error while Deploying Performance Management samples in XI R3.0

    I get an error when deploying the PM samples. I have set the PM repository pointing to the AFDEMO database.
    I have used the Java under the BO directory as well as the one under Program Files\Java (1.6.03).
    I run runPublishUtil.bat and get the error below.
    Can anyone help me publish the PM demos?
    Regards
    Ishaq

    Error when running runPublishUtil.bat:
    C:\Program Files\Business Objects\Performance Management 12.0\setup>set JARLOC=..\..\common\4.0\java\lib\
    usage : runPublishUtil.bat [dbusername] [dbuserpassword]
    C:\Program Files\Business Objects\Performance Management 12.0\setup>java -classpath publishUtil.jar;..\..\common\4.0\java\lib\cecore.jar;..\..\common\4.0\java\lib\logging.jar;..\..\common\4.0\java\lib\celib.jar;..\..\common\4.0\java\lib\ceplugins.jar;..\..\common\4.0\java\lib\cesession.jar;..\..\common\4.0\java\lib\corbaidl.jar;..\..\common\4.0\java\lib\ebus405.jar;..\..\common\4.0\java\lib\external\xercesImpl.jar;..\..\common\4.0\java\lib\external\xml-apis.jar;..\..\common\4.0\java\lib\rascore.jar;..\..\common\4.0\java\lib\serialization.jar;..\..\common\4.0\java\lib\cereports.jar com.businessobjects.util.PublishUtil ".\PublishUtil.properties" sa forms45
    caught SDKException
    com.crystaldecisions.sdk.exception.SDKServerException: Enterprise authentication
    could not log you on. Please make sure your logon information is correct. (FWB
    00008)
    cause:com.crystaldecisions.enterprise.ocaframework.idl.OCA.oca_abuse: IDL:img.seagatesoftware.com/OCA/oca_abuse:3.2
    detail:Enterprise authentication could not log you on. Please make sure your logon information is correct. (FWB 00008)
    The server supplied the following details: OCA_Abuse exception 10498 at [.\secpluginent.cpp : 826]  42040 {}
            ...Invalid password
            at com.crystaldecisions.sdk.exception.SDKServerException.map(SDKServerException.java:107)
            at com.crystaldecisions.sdk.exception.SDKException.map(SDKException.java:193)
            at com.crystaldecisions.sdk.occa.security.internal.LogonService.doUserLogon(LogonService.java:701)
            at com.crystaldecisions.sdk.occa.security.internal.LogonService.userLogon(LogonService.java:295)
            at com.crystaldecisions.sdk.occa.security.internal.SecurityMgr.userLogon(SecurityMgr.java:162)
            at com.crystaldecisions.sdk.framework.internal.SessionMgr.logon(SessionMgr.java:422)
            at com.businessobjects.util.PublishUtil.logon(Unknown Source)
            at com.businessobjects.util.PublishUtil.main(Unknown Source)
    Caused by: com.crystaldecisions.enterprise.ocaframework.idl.OCA.oca_abuse: IDL:img.seagatesoftware.com/OCA/oca_abuse:3.2
            at com.crystaldecisions.enterprise.ocaframework.idl.OCA.oca_abuseHelper.read(oca_abuseHelper.java:106)
            at com.crystaldecisions.enterprise.ocaframework.idl.OCA.OCAs._LogonEx4Stub.UserLogonEx4(_LogonEx4Stub.java:80)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at com.crystaldecisions.enterprise.ocaframework.ManagedService.invoke(ManagedService.java:420)
            at com.crystaldecisions.sdk.occa.security.internal._LogonEx4Proxy.UserLogonEx4(_LogonEx4Proxy.java:222)
            at com.crystaldecisions.sdk.occa.security.internal.LogonService.doLogon(LogonService.java:347)
            at com.crystaldecisions.sdk.occa.security.internal.LogonService.doUserLogon(LogonService.java:676)
            ... 5 more
    Exception in thread "main" java.lang.NullPointerException
            at com.businessobjects.util.PublishUtil.waitForValidInputFRS(Unknown Source)
            at com.businessobjects.util.PublishUtil.main(Unknown Source)

  • Error encountered while reading TIFF image, Image may be damaged of incompatible. Resave the image w

    I am currently running CS3, windows XP, service pack 2, with recent update, 5.03 installed today.
    I just opened a 500 page document not in sections where 90% of the document is a placed PDF.
    The PDFS were placed using a sample Script that came with Indesign, that instructs the PDF to automatically flow each page after another.
    When scrolling through quickly in the Pages palette, I get the following error:
    "Error encountered while reading TIFF image. Image may be damaged or incompatible. Resave the image with different settings and try again."
    I click OK.
    Then I get the following error.....
    " Could not complete request because of database error. The File "ABC.indd" is damaged (Error Code: 3).
    Click OK....
    Then I get the following error....
    Adobe Indesign is shutting down. A serious error was detected. Please restart Indesign to recover work in any unsaved Indesign documents.
    Then I get the error.......
    InDesign.exe has encountered a problem and needs to close. We are sorry for any inconvenience.
    And I have two buttons to click....
    Debug or Close.....
    If I click Debug, it closes Indesign.
    Within this window there is also a window to gather further information....I click it and it tells me...
    "Indesign.exe....Error Signature AppName: indesign.exe AppVer: 5.0.3.662
    ModName: public.dll ModVer: 5.0.3.662 Offset: 0002e19a"
    To view technical information about the error report, click here....
    "Then it creates an error report and tells me where the report is located along with a scrollable window of 0xc0000005 and heap of zeros."
    The report contains a whole heap of CHECKSUM ERRORS.
    CAN ANYONE PLEASE HELP??
    I had these errors before updating to 5.03 and the patch hasn't rectified anything!!

    Open the .inx file in CS3 (that's what you have, right?) and save as a new .indd.
    I'd also be tempted to open the tiff in Photoshop and do a save as to re-write it.
    Let us know if it helps.
    Storing files on the network leaves you more open to the risk of file damage during transfer and save operations.
    Peter

  • Error occured while reading identity data: failed to decrypt safe contents

    Hello,
    We are trying to access a Tibco JMS server through SSL using a JNDI lookup. We are getting the following error while executing a sample Java file.
    Java Version -
    java version "1.4.2_05"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_05-b04)
    Java HotSpot(TM) Client VM (build 1.4.2_05-b04, mixed mode)
    Please let me know if any of you have faced similar issues.
    Thanks in advance.
    Following are the error messages.
    javax.jms.JMSSecurityException: Error occured while reading identity data: failed to de
    crypt safe contents entryCOM.rsa.jsafe.SunJSSE_cs: Could not perform unpadding: invalid
    pad byte. at com.tibco.tibjms.TibjmsSSL._identityFromStore(TibjmsSSL.java:2699)
    at com.tibco.tibjms.TibjmsSSL.createIdentity(TibjmsSSL.java:2604)
    at com.tibco.tibjms.TibjmsxLinkSSL._initSSL(TibjmsxLinkSSL.java:291)
    at com.tibco.tibjms.TibjmsxLinkSSL.connect(TibjmsxLinkSSL.java:338)
    at com.tibco.tibjms.TibjmsConnection._create(TibjmsConnection.java:611)
    at com.tibco.tibjms.TibjmsConnection.<init>(TibjmsConnection.java:1772)
    at com.tibco.tibjms.TibjmsTopicConnection.<init>(TibjmsTopicConnection.java:37)
    at com.tibco.tibjms.TibjmsxCFImpl._createImpl(TibjmsxCFImpl.java:139)
    at com.tibco.tibjms.TibjmsxCFImpl._createConnection(TibjmsxCFImpl.java:201)
    at com.tibco.tibjms.TibjmsTopicConnectionFactory.createTopicConnection(TibjmsTo
    picConnectionFactory.java:84)
    at tibjmsSSLJNDI.<init>(tibjmsSSLJNDI.java:202)
    at tibjmsSSLJNDI.main(tibjmsSSLJNDI.java:252)
    ##### Linked Exception:
    com.tibco.security.AXSecurityException: failed to decrypt safe contents entryCOM.rsa.js
    afe.SunJSSE_cs: Could not perform unpadding: invalid pad byte.
    at com.tibco.security.impl.j2se.IdentityImpl.init(IdentityImpl.java:70)
    at com.tibco.security.IdentityFactory.createIdentity(IdentityFactory.java:61)
    at com.tibco.tibjms.TibjmsSSL._identityFromStore(TibjmsSSL.java:2680)
    at com.tibco.tibjms.TibjmsSSL.createIdentity(TibjmsSSL.java:2604)
    at com.tibco.tibjms.TibjmsxLinkSSL._initSSL(TibjmsxLinkSSL.java:291)
    at com.tibco.tibjms.TibjmsxLinkSSL.connect(TibjmsxLinkSSL.java:338)
    at com.tibco.tibjms.TibjmsConnection._create(TibjmsConnection.java:611)
    at com.tibco.tibjms.TibjmsConnection.<init>(TibjmsConnection.java:1772)
    at com.tibco.tibjms.TibjmsTopicConnection.<init>(TibjmsTopicConnection.java:37)
    at com.tibco.tibjms.TibjmsxCFImpl._createImpl(TibjmsxCFImpl.java:139)
    at com.tibco.tibjms.TibjmsxCFImpl._createConnection(TibjmsxCFImpl.java:201)
    at com.tibco.tibjms.TibjmsTopicConnectionFactory.createTopicConnection(TibjmsTo
    picConnectionFactory.java:84)
    at tibjmsSSLJNDI.<init>(tibjmsSSLJNDI.java:202)
    at tibjmsSSLJNDI.main(tibjmsSSLJNDI.java:252)
    Subexception stack trace follows:
    java.io.IOException: failed to decrypt safe contents entryCOM.rsa.jsafe.SunJSSE_cs: Cou
    ld not perform unpadding: invalid pad byte.
    at com.sun.net.ssl.internal.ssl.PKCS12KeyStore.engineLoad(Unknown Source)
    at java.security.KeyStore.load(Unknown Source) at com.tibco.security.impl.j2se.IdentityImpl.init(IdentityImpl.java:66)
    at com.tibco.security.IdentityFactory.createIdentity(IdentityFactory.java:61)
    at com.tibco.tibjms.TibjmsSSL._identityFromStore(TibjmsSSL.java:2680)
    at com.tibco.tibjms.TibjmsSSL.createIdentity(TibjmsSSL.java:2604)
    at com.tibco.tibjms.TibjmsxLinkSSL._initSSL(TibjmsxLinkSSL.java:291)
    at com.tibco.tibjms.TibjmsxLinkSSL.connect(TibjmsxLinkSSL.java:338)
    at com.tibco.tibjms.TibjmsConnection._create(TibjmsConnection.java:611)
    at com.tibco.tibjms.TibjmsConnection.<init>(TibjmsConnection.java:1772)
    at com.tibco.tibjms.TibjmsTopicConnection.<init>(TibjmsTopicConnection.java:37)
    at com.tibco.tibjms.TibjmsxCFImpl._createImpl(TibjmsxCFImpl.java:139)
    at com.tibco.tibjms.TibjmsxCFImpl._createConnection(TibjmsxCFImpl.java:201)
    at com.tibco.tibjms.TibjmsTopicConnectionFactory.createTopicConnection(TibjmsTo
    picConnectionFactory.java:84)
    at tibjmsSSLJNDI.<init>(tibjmsSSLJNDI.java:202)
    at tibjmsSSLJNDI.main(tibjmsSSLJNDI.java:252)
    Caused by: COM.rsa.jsafe.SunJSSE_cs: Could not perform unpadding: invalid pad byte.
    at COM.rsa.jsafe.SunJSSE_al.a(Unknown Source)
    at COM.rsa.jsafe.SunJSSE_ag.a(Unknown Source)
    at com.sun.net.ssl.internal.ssl.PKCS12KeyStore.a(Unknown Source)
    ... 16 more
    Subexception stack trace follows:
    java.io.IOException: failed to decrypt safe contents entryCOM.rsa.jsafe.SunJSSE_cs: Cou
    ld not perform unpadding: invalid pad byte.
    at com.sun.net.ssl.internal.ssl.PKCS12KeyStore.engineLoad(Unknown Source)
    at java.security.KeyStore.load(Unknown Source)
    at com.tibco.security.impl.j2se.IdentityImpl.init(IdentityImpl.java:66)
    at com.tibco.security.IdentityFactory.createIdentity(IdentityFactory.java:61)
    at com.tibco.tibjms.TibjmsSSL._identityFromStore(TibjmsSSL.java:2680)
    at com.tibco.tibjms.TibjmsSSL.createIdentity(TibjmsSSL.java:2604)
    at com.tibco.tibjms.TibjmsxLinkSSL._initSSL(TibjmsxLinkSSL.java:291)
    at com.tibco.tibjms.TibjmsxLinkSSL.connect(TibjmsxLinkSSL.java:338)
    at com.tibco.tibjms.TibjmsConnection._create(TibjmsConnection.java:611)
    at com.tibco.tibjms.TibjmsConnection.<init>(TibjmsConnection.java:1772)
    at com.tibco.tibjms.TibjmsTopicConnection.<init>(TibjmsTopicConnection.java:37)
    at com.tibco.tibjms.TibjmsxCFImpl._createImpl(TibjmsxCFImpl.java:139)
    at com.tibco.tibjms.TibjmsxCFImpl._createConnection(TibjmsxCFImpl.java:201)
    at com.tibco.tibjms.TibjmsTopicConnectionFactory.createTopicConnection(TibjmsTo
    picConnectionFactory.java:84)
    at tibjmsSSLJNDI.<init>(tibjmsSSLJNDI.java:202)
    at tibjmsSSLJNDI.main(tibjmsSSLJNDI.java:252)
    Caused by: COM.rsa.jsafe.SunJSSE_cs: Could not perform unpadding: invalid pad byte.
    at COM.rsa.jsafe.SunJSSE_al.a(Unknown Source)
    at COM.rsa.jsafe.SunJSSE_ag.a(Unknown Source)
    at com.sun.net.ssl.internal.ssl.PKCS12KeyStore.a(Unknown Source)
    ... 16 more

    For the benefit of others:
    The issue is resolved.
    When we set the certificate password inside our application, we were encrypting it within our system.
    When we sent it to Tibco, we did not decrypt it first.
    So the encrypted password was sent as-is; that was the issue :(
    Thanks,
    Reflex.

  • How does the DAQmx read.vi work in producer/consumer mode

    Dear all,
    I have one question: how does the DAQmx Read.vi work in producer/consumer mode?
    I mean, if I set the acquisition sample quantity to 5000 (see the enclosed picture), how does the DAQmx Read.vi acquire the samples?
    5000 samples at a time?
    And how does the Write VI work? Also 5000 samples at a time?
    Look forward to your reply.
    Thank you.
    Attachments:
    producer consumer mode.png 28 KB

    It will read 5000 samples per channel.
    The Write Measurement File VI just writes whatever you give it.  If you send it 5000 data points, it will write the 5000 data points.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
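
    A tiny C-API illustration of the same point: asking for 5000 samples per channel means the read buffer must hold 5000 * numChannels values, and the read blocks until that many samples per channel have arrived. The two channels ("Dev1/ai0:1"), the 1 kHz rate and the 5000-sample count are placeholders.

    #include <stdio.h>
    #include <NIDAQmx.h>

    #define NCHANS 2
    #define NSAMPS 5000

    int main(void)
    {
        TaskHandle task = 0;
        static float64 data[NCHANS * NSAMPS];   /* 5000 points PER channel */
        int32 read = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0:1", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_ContSamps, NSAMPS);
        DAQmxStartTask(task);

        /* Blocks until 5000 samples per channel have been acquired. */
        DAQmxReadAnalogF64(task, NSAMPS, 10.0, DAQmx_Val_GroupByChannel,
                           data, NCHANS * NSAMPS, &read, NULL);
        printf("read %d samples per channel\n", (int)read);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }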

  • Why is the DAQmx Read VI so slow?

    Hello everybody.
    I'm using LabVIEW 8.5.1 and NI-DAQmx 8.6, and I'm wondering what the DAQmx Read VI is actually doing.
    What I want to measure is a 1 kHz signal. To do this, I'm using a VI similar to the attached one (a pulsed 1 kHz signal serves as trigger and as sample clock).
    The time the DAQmx Read VI needs to execute is about 250 ms longer than the acquisition time you would expect (e.g. instead of 100 ms to measure 100 samples at 1 kHz, the time is usually around 350 ms; for 1000 samples it is ~1250 ms). The time does not depend on the number of channels in the task.
    So my question is what actually happens when the DAQmx Read VI is called. What I would like it to do is: wait for the next trigger signal, then acquire the specified number of samples, then read the samples from the buffer and return them. Is there any way to force this behaviour?
    Thanks a lot for your answers, I highly appreciate any help!
    Attachments:
    daqmxAcquisitionTime.PNG 20 KB

    Rene,
    For your application and hardware, you should be using DAQmx Control Task.vi to commit your task before your while loop.  Inside the loop, you would then start/read/stop the task.  DAQmx uses a state machine to control task configuration and run time.  As written, when you call DAQmx Read, DAQmx will see that you have a task which has never been configured.  As such, it will look at all the settings you have made on the task, verify their correctness, reserve all necessary resources, write the configuration to hardware, and then start your task.  Once the specified data has been read, it will unwind this state machine to put the task back in an unconfigured state.  So every time you call Read, DAQmx is going through all of its state transitions.  If you were to commit your task before the loop, DAQmx would not re-verify your settings or need to re-program the hardware each time through your loop.  That said, there will still be some additional time on each iteration of the loop where you need to stop and restart your task, and during this time you could miss a trigger.
    If it is truly not acceptable for you to miss a trigger, you might consider moving to a design in which you continuously read data and then use software triggering to keep track of the relevant sections of data.  One other alternative would be to look at the X Series line of DAQ devices, as these devices support retriggering in hardware (i.e., they can retrigger without you needing to stop and restart your task).
    Hope that helps,
    Dan
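
    For readers using the text-based API, the same "commit before the loop" advice looks roughly like the sketch below: DAQmxTaskControl with DAQmx_Val_Task_Commit does the verify/reserve/program work once, so each start/read/stop inside the loop is much cheaper. The channel, rate, sample count and loop count are placeholders and the trigger configuration from the original VI is omitted.

    #include <stdio.h>
    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        float64    data[100];
        int32      read = 0;
        int        i;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                              DAQmx_Val_FiniteSamps, 100);

        /* Move the task to the Committed state ONCE, outside the loop. */
        DAQmxTaskControl(task, DAQmx_Val_Task_Commit);

        for (i = 0; i < 10; i++) {
            DAQmxStartTask(task);                       /* cheap: task is committed */
            DAQmxReadAnalogF64(task, 100, 10.0, DAQmx_Val_GroupByChannel,
                               data, 100, &read, NULL);
            DAQmxStopTask(task);                        /* falls back to Committed  */
            printf("iteration %d: %d samples\n", i, (int)read);
        }

        DAQmxClearTask(task);
        return 0;
    }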

  • How to run two DAQmx Read (Counter in + Digital in) simultaneously?

    Hello to all, I have the following issue:
    I want to acquire a digital (UART-like) bus signal. For this purpose I use a DAQ card (PXI-6070E). I need precise information about the time length of the 0 and 1 states, so I use a counter. The counter is configured for CI Semi Period (continuous samples), so on each edge change I get a new measured time value. It works perfectly, but I never know which state (0 or 1) the current value actually belongs to. Of course they alternate with each edge, but I need to know it exactly. So I should add a digital input which watches the current state. Unfortunately I cannot synchronise the two DAQmx reads.
    Finally I should get two values: the time value (counter) and its state (digital in), to work with them. How can I do this?
    I use LabVIEW 7.0

    [SRQ 211371]
    This is not a trivial task, but here is one way to do what you explained with a PXI-6070E and some external circuitry:
    1. Use a buffered semi-period measurement to measure the width of your pulses (Meas Buffered Semi-Period-Continuous.vi).
    2. Use a buffered continuous analog data acquisition with an external scan clock (e.g. Cont Acq&Graph Voltage-Ext Clk.vi). Maybe you should use a triggered example to start the acquisition at a defined time.
    Now you could connect your signal to the ScanClock input (PFI7) of the 6070E, but then you would acquire a sample only at either the rising or the falling edge. As you need a sample at both edges, you need some external circuitry that generates a short positive TTL pulse at each edge (rising and falling).
    Sorry that I can't provide a solution for this circuitry, but I'm pretty sure something like this should be available as a low-cost IC.
    You may want to allow your signal some settling time after the scan clock pulse has occurred. In this case, simply acquire data on two AI channels. The first channel is only a dummy channel; connect your signal to the second channel. The delay can be adjusted with the interchannel delay.
    If anybody knows a good solution for the external circuitry or if anybody has a better approach please post it here.
    Best regards,
    Jochen Klier
    NI-Germany

  • Compare two DAQmx Read modes

    For a continuous DAQmx acquisition, generally a Start VI is put before a loop structure to tell the DAQ board to begin continuously reading data into the buffer, and within the loop a DAQmx Read VI is used to continuously retrieve data from the buffer.
    My question is: for the DAQmx Read VI, is there any performance difference between 'NChannel NSamples' and 'NChannel 1Sample'?  My understanding is that the former reads all the data available in the buffer, so maybe we can use a larger value for the 'wait until *** ms' function; for the latter, since only one point is read per loop, we need to use a smaller wait value to keep up with the board's acquisition speed, which means the loop structure needs to execute much faster.
    For my application, reading only one point in each loop is easier for treatment, but I am concerned whether more frequent loop execution may decrease performance. Any suggestions?
    Thanks.
    -Dejun

    I don't agree with you, Dennis, but maybe I am wrong.
    Even though I use '1 Sample' within a loop, there is still a hardware-timing VI before the loop, so the timestamp of each data point should still be determined by the hardware timing; this is my understanding. The difference between '1 Sample' and 'N Samples' is just how many points are read from the buffer in each loop execution. As for the timestamps of the data in the buffer, they are already determined by the hardware timing and should therefore be independent of how they are read in the loop structure (1 Sample or N Samples).
