LabVIEW error 10899 when testing a virtual channel

I am setting up a virtual analog voltage output channel in Measurement & Automation Explorer (MAX). I am able to get through that process, and the channel shows up under "Traditional NI-DAQ Virtual Channels" on the left. When I right-click on the channel and then go to "Test", I get the message "Error -10899 occurred at AO group config".
Sometimes when I click Test, MAX crashes and a stray LabVIEW window pops up on the screen. Then I have to open MAX again.

Hello bberma01,
I don't like to give this type of answer, but in this case I think the best thing to do is to uninstall and reinstall your driver. While you are at it: DAQmx is the newest driver and is very user friendly, and I would suggest using it for your data acquisition development.
If you need to download the driver, you can find it Here
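If you do move to DAQmx, you no longer need a Traditional NI-DAQ virtual channel; you create the channel in code (or as a DAQmx task in MAX). As a rough sketch using the nidaqmx Python wrapper rather than the LabVIEW API (the device/channel name "Dev1/ao0" and the 2.5 V value are placeholders for your own hardware):

    import nidaqmx

    with nidaqmx.Task() as task:
        # Create an analog output voltage channel with an explicit range.
        # "Dev1/ao0" is a placeholder; use the device name shown in MAX.
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
        # Write a single static voltage (2.5 V here) to the output.
        task.write(2.5)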

Similar Messages

  • Why does LabVIEW forget the connections to the virtual channels in MAX?

    I create a bunch of virtual channels in MAX - some with double assignment to a single AI port.
    Context: a digital output controls a MUX that switches the inputs into the AI ports of my USB-6361 (BNC) DAQmx device, thereby letting me double the number of AI ports I can access (switching every 5 ms).
    I can run the code without any trouble. I have now completed my hardware setup and am testing the integration with the code. This is where things become a little unstuck.
    Problem: some ports, over time, start displaying erroneous data. It is not until I delete the virtual channel and re-create it in MAX that everything settles and works. This 'bad' data is seen in both the MAX test panels tab and the virtual channels tab.
    What is going on - should I bite the bullet and program all port assignments (channels) within my code?
    Am I mistakenly killing a link between MAX and DAQmx?

    natashw already told you that it's expected behaviour when you leave the analog input open. There are two aspects to this:
    1) The analog input amplifier is a high-impedance operational amplifier. Its small stray capacitance is still large enough that it gets charged through the very small leakage currents in the amplifier input stage, but the high input impedance doesn't allow that capacitance to discharge quickly enough to stay at a defined voltage. So an open input usually floats very slowly towards one of the power supply rails, depending on which transistor side has slightly higher leakage.
    2) You only have one analog-to-digital converter. To get multiple channels there is a multiplexer that connects the different input channels to this single ADC. When the multiplexer switches from a connected signal to an unconnected one, the stray capacitance at the ADC amplifier input has been charged to a certain voltage by the connected channel. With only the internal impedance of the amplifier available to discharge this capacitance after the multiplexer switches to the unconnected input, the ADC simply sees whatever voltage the stray capacitance has been charged to.
    Having a low-impedance amplifier input would solve that problem but would create many more problems that significantly affect measurement accuracy.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
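    If you do decide to program the port assignments in code rather than relying on MAX virtual channels, here is a minimal sketch using the nidaqmx Python API (the channel names, terminal configuration and timing below are placeholder assumptions; the LabVIEW equivalent would use the DAQmx Create Virtual Channel and DAQmx Timing VIs):

        import nidaqmx
        from nidaqmx.constants import TerminalConfiguration

        with nidaqmx.Task() as task:
            # "Dev1/ai0:3" is a placeholder for the USB-6361 inputs fed by the MUX.
            task.ai_channels.add_ai_voltage_chan(
                "Dev1/ai0:3",
                terminal_config=TerminalConfiguration.RSE,
                min_val=-10.0, max_val=10.0)
            # Sample timing is a placeholder; match it to the 5 ms MUX switching.
            task.timing.cfg_samp_clk_timing(rate=1000.0, samps_per_chan=100)
            data = task.read(number_of_samples_per_channel=100)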

  • Error "CONVT_NO_NUMBER" When testing Nisto e-orders for Dow integration,

    Hi..
    When testing Nisto e-orders for Dow integration, a runtime error is generated when the IDOC processes to produce an order.
    ST22 details:-
    Runtime Errors CONVT_NO_NUMBER
    Exception CX_SY_CONVERSION_NO_NUMBER
    Date and Time 11/21/2008 13:58:03
    Short text
    Unable to interpret "IC5001" as a number.
    What happened?
    Error in the ABAP Application Program
    The current ABAP program "SAPLXVED" had to be terminated because
    it has come across a statement that unfortunately cannot be executed.
    Error analysis
    An exception occurred that is explained in detail below.
    The exception, which is assigned to class 'CX_SY_CONVERSION_NO_NUMBER', was not
    caught in procedure "EXIT_SAPLVEDA_011" "FUNCTION", nor was it propagated by a RAISING clause. Since the caller of the procedure could not have anticipated that the exception could occur, the program is terminated.
    The reason for the exception is:
    The program attempted to interpret the value "IC5001" as a number, but since the value contravenes the rules for correct number format, this was not possible.
    How can I correct this error? Please help me.
    Thanks,

    Hi Sanjith,
    The error message is very clear: a value like "IC5001" cannot be interpreted as a number.
    This may happen because, while the data is being passed, the "IC" values are treated as numbers and processed further, perhaps in arithmetic operations, or because they are passed to a NUMERIC type field.
    Please check once.
    regards
    Ramchander Rao.K
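    As an illustration of that check, the defensive pattern looks like the sketch below (written in Python purely for illustration rather than ABAP; in ABAP you would wrap the conversion in TRY ... CATCH cx_sy_conversion_no_number; the variable names here are made up):

        # Hypothetical sketch: guard the conversion instead of letting the
        # program terminate on a non-numeric value such as "IC5001".
        raw_value = "IC5001"   # stands in for the field coming from the IDoc segment
        try:
            quantity = float(raw_value)
        except ValueError:
            quantity = 0.0
            print(f"'{raw_value}' is not numeric; check the field mapping in the user exit")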

  • Error exception when testing Adapter

    Hi all
    1. I am doing an XML-to-File scenario and I am getting an error when I test it with my input.dat XML file. FYI, when I do XML-to-XML in the same scenario, the flag is OK and everything looks great! I also ran the Message Mapping test and it is OK there, so I am 99% sure the mapping should be fine.
    2. I also wonder how many substructures the File Adapter can handle at the receiver part.
    At the receiver I have:
    Recordset
      Header
      Record
         RecordInfo1
           f1
           f2
           f3
         RecordInfo2
           f5
           f6
           f7
         RecordInfo3
           f8
           f9
           f10
      Trailer
    RecordInfos are substructures of Record - can this be handled by FCC in the File Adapter?
    Here is the error:
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Request Message Mapping
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>Application</SAP:Category>
      <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
      <SAP:P1>com/sap/xi/tf/_MM_MAPPING_</SAP:P1>
      <SAP:P2>com.sap.aii.utilxi.misc.api.BaseRuntimeException</SAP:P2>
      <SAP:P3>Fatal Error: com.sap.engine.lib.xml.parser.Parser~</SAP:P3>
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>During the application mapping com/sap/xi/tf/_MM_MAPPING_ a com.sap.aii.utilxi.misc.api.BaseRuntimeException was thrown: Fatal Error: com.sap.engine.lib.xml.parser.Parser~</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Appreciate any clues asap !
    Thanks
    J

    Hi Jon,
    You can test your entire configuration in ID.
    Go to Tools --> Test Configuration.
    Put your actual payload data there and test it.
    Hope this helps.
    Kumar.S

  • Error message when testing Submit by Email button

    I used Adobe LiveCycle for the first time to create a form. I added both Submit by Email and Print buttons. When I go to Adobe Professional to test emailing my form, I get a security error message saying the document is trying to connect to my email address's domain. I then have the option to Allow or Block. When I click Allow, nothing happens and I don't get the email in my inbox.
    Please assist, as I need to send out this form and want users to be able to send it without an issue.
    Thanks.

    You have multiple options for sending the whole PDF.
    1) Set the submit button to submit the whole PDF by using a mailto: [email protected] address in the submit button's action URL.
    2) Set the submit button to fire a JavaScript that submits the whole PDF. See Adobe's JavaScript reference guide for instructions.
    Note: Methods #1 and #2 require Adobe Acrobat or extended Reader rights, because Adobe Reader users are restricted to submitting just the data formats, such as FDF, XML, XFDF, XDP, and HTML. Beware that extending Reader rights to a PDF form using Adobe Acrobat places EULA restrictions on the end user, such as no more than 500 end-user submissions for each form.
    My recommendations:
    3) Set the submit button to point to the URL of a server-side script, such as ASP.net. Then set the format to a data format such as FDF, XML, XFDF, or XDP; use ASP.net and iTextSharp (free) to merge the data with a blank PDF form; then attach the submission to an outbound SMTP mail message and send it without third-party email software such as MS Outlook.
    4) A combination of #1 and #3, where you enable usage rights on the PDF using Adobe Acrobat and send to a server-side script, which bypasses client-side email software.
    Check out the following website for online examples that submit to a server side script:
    www.pdfemail.net/examples/
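    For recommendation #3, the server-side piece can be any web stack. Below is a rough sketch (in Python with Flask and smtplib rather than ASP.net; the route, addresses and mail host are placeholder assumptions) that receives the posted form data and forwards it by SMTP:

        from email.message import EmailMessage
        import smtplib
        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/submit", methods=["POST"])
        def submit():
            # Package whatever data format the form posted (FDF/XML/XFDF/XDP).
            msg = EmailMessage()
            msg["Subject"] = "PDF form submission"
            msg["From"] = "[email protected]"   # placeholder sender
            msg["To"] = "[email protected]"    # placeholder recipient
            msg.add_attachment(request.get_data(), maintype="application",
                               subtype="octet-stream", filename="submission.xml")
            with smtplib.SMTP("mail.example.com") as smtp:   # placeholder mail host
                smtp.send_message(msg)
            return "OK"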

  • Error RZ783 when "Test" in Transaction SMLG

    Hello everyone,
    I have successfully installed the "SAP Web Application Server 6.20 Test Suite".
    When I click the "Test" button in transaction SMLG (tab strip "Attributes"), I get an error:
    Internal error (SIMULATION:INACTIVE_SERVER)
    Message no. RZ783
    Logon group, instance and IP address are filled in correctly (I hope so).
    Does anyone know the reason for this error?
    Best regards,
    Erich Spannbauer.

    I am facing the same type of problem. I referred to the notes you mentioned. On my side,
    there are no entries under cyclical system programs.
    The note says:
    On every application server some ABAP/4 background programs are run at an interval of 5 minutes (rdisp/autoabaptime = 300).
    Where should I change this entry?
    In my system there are no entries like:
    Program name  |    |    Task
    RSRZLST0      30s 120s    Operation mode control
    RSRZLLG0      10s 120s    Login load distribution
    RSALSUP1      15s 120s    Operating system alerts
    RSALSUP2      15s 120s    Buffer alerts
    RSALSUP3      15s 120s    Spool alerts
    RSALSUP5      30s 240s    Database alerts (only on central appl.server)
    Do I need to add them manually, or are they auto-generated?
    The basic problem I face, and the reason I need to get into the Basis details, is this:
    In the SMLG transaction, when I double-click on the logon group, go to attributes and click Test,
    it gives "Internal error (SIMULATION:INACTIVE_SERVER)".
    How do I activate the server? I have no idea about the Basis side. Please help.
    Regards,
    Mehul
    PS: Points will be rewarded for helpful answers.

  • ERROR 500 when testing webservice

    Hi,
    I work in a WebLogic 10.3.3 environment on AIX. I've deployed a web service which calls a data source. I ran the first deployment test to view the WSDL, but I get error 500.
    Moreover, I saw the errors below in the log file. Could you help me understand this problem and solve it?
    Thanks
    <Error> <HTTP> <BEA-101017> <[ServletContext@591143740[app:checkProvisionService module:checkProvisionService.war path:/checkProvisionService spec-version:2.5]] Root cause of ServletException.
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.ws.server.endpoint.adapter.GenericMarshallingMethodEndpointAdapter#0' defined in ServletContext resource [WEB-INF/spring-ws-servlet.xml]: Cannot resolve reference to bean 'marshaller' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'marshaller' defined in ServletContext resource [WEB-INF/spring-ws-servlet.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: com.sun.xml.bind.DatatypeConverterImpl (initialization failure)
         at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:275)
         at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:104)
         at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:495)
         at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:162)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:925)
         Truncated. see log file for complete stacktrace
    Caused By: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'marshaller' defined in ServletContext resource [WEB-INF/spring-ws-servlet.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: com.sun.xml.bind.DatatypeConverterImpl (initialization failure)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1338)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
         at java.security.AccessController.doPrivileged(AccessController.java:224)
         at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
         Truncated. see log file for complete stacktrace
    Caused By: java.lang.NoClassDefFoundError: com.sun.xml.bind.DatatypeConverterImpl (initialization failure)
         at java.lang.J9VMInternals.initialize(J9VMInternals.java:140)
         at com.sun.xml.bind.v2.runtime.JAXBContextImpl$3.run(JAXBContextImpl.java:287)
         at com.sun.xml.bind.v2.runtime.JAXBContextImpl$3.run(JAXBContextImpl.java:286)
         at java.security.AccessController.doPrivileged(AccessController.java:202)
         at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:285)
         Truncated. see log file for complete stacktrace
    >

    I have decompiled the offending class and there are many static initializers; one of them fails and causes the NoClassDefFoundError.
    Additional info is necessary.
    If you use JRockit you could enable exception printing: http://blogs.oracle.com/hirt/2008/08/simple_exception_profiling_wit.html

  • Error 10403 in virtual channel test panel, but the test panel for the device works fine

    I am using a simple 10x probe attached to the NI 5112 PCI card. When the device is tested under the Devices and Interfaces part of MAX, I get the scope input, but when I test the virtual channel for the scope, I get error 10403. How can I resolve this?

    The reason you get that error is that virtual channels are not supported with our high-speed digitizers. Only devices that can be programmed directly with NI-DAQ support virtual channels. The correct way to address a high-speed digitizer is by using DAQ::N, where N is the device number in MAX.

  • Errors in Creating Virtual Channel

    I am creating an IVI switch driver. The driver passes the IVI Specific Driver Suite 2.2 and IVI Switch SFP testing. When I try to create the virtual channel, the device is found (the IVI logical name and other fields are created) but the channels displayed on the Channels/Exclusion tab are empty. What calls are made when creating the virtual channel? I want to create a CVI or TestStand program to simulate the sequence that Switch Executive performs.
    When validating a configuration, are all possible channels used in the SetPath loop? As an example, say three channels are defined (front, rear and base) but only front->base, rear->base, base->front and base->rear are valid configurations. Will the validation try a front->rear configuration? Will a disconnect call be placed in between each SetPath call?
    Thanks.
    Randy

    Randy,
    Although you cannot step through the NISE engine, you can step through your driver's code and debug its behavior. If you have problems with importing the IVI logical name into a Switch Executive virtual device, then what you need to do is select nimax.exe as the process to debug and run it with the debug version of your driver. If you are using CVI, you can set the external process under Run>>Select External Process. If you're using MSVC, you can choose "open project", browse to nimax.exe, and then add symbols for your driver DLL in the project settings.
    The entry points to monitor (for your particular problem) are the ones that deal with getting the number of channels and getting the individual channel names and their attributes. Make sure that your driver returns the correct number of channels and that it also returns all these channel names correctly. If you see no error in Switch Executive (i.e. it just says OK after you import your IVI logical name), then my guess is that your driver returns 0 for the number of channels.
    Please be aware that Switch Executive exercises much more than any of the IVI clients you mentioned. Yet it does not go "through the back door" for anything; it (obviously) uses public entry points in the driver to perform what it needs to do.
    So, you can start by setting up a debug project in your favourite environment and checking the behavior of the following functions in your driver:
    getting the IVISWTCH_ATTR_CHANNEL_COUNT attribute
    IviSwtch_GetChannelName (with your prefix, of course)
    and see if those work. My guess is that something is not working correctly between these two functions.
    Let me know
    -Serge
    Srdan Zirojevic

  • Thermocouple reading has an offset when I choose a large range on a virtual channel?

    I am using an SCXI-1102 with a 1303 terminal block. I create a virtual channel for a type J thermocouple using the built-in CJC. I set the range of the measurement to 0 to 500 °C. At ambient temperature (22 °C) I read 10 °C. If I change the range to 0 to 100 °C, the reading is correct. How can I fix this problem?

    Well, you already tried the obvious. The less obvious might include NI-DAQ driver problems. It has happened before.
    On an SCXI-1126 frequency module, back in the 1998-2000 time frame, the underlying code within NI-DAQ had a problem: when you picked a channel scaling that was EXACTLY one of the board's ranges, such as 1k, 2k, 4k, 8k, etc., the board read back frequencies correctly. However, when you used a virtual channel to set up scaling to something like, say, 0-2200 Hz, the algorithm to span that range worked incorrectly. It showed up as a drop-off followed by a peak rather than a gradual ramp up in readings as the frequency increased. I eventually created a program that plotted the problem and showed it to our local rep, who got it fixed back at NI.
    The point is, the problem didn't show up in NI's production testing because they always used exact ranges in tests, while virtual channels allowed more flexibility. Somehow you need to be able to get NI to duplicate the behavior.
    - You could try using other ranges in your scaling that still have your necessary ranges as a subset.
    - You could try using DAQmx if you are using Traditional DAQ, or vice versa.
    - You could send someone at NI your NICONFIG.DAQ file and have them try it there, with whatever version of LV and DAQ you are using.
    Another thing that could have happened is that you may have a group of boards with a bad lot of chips. Check to see whether all the boards you are swapping during troubleshooting have the same lot numbers on the chips. If possible, try to find at least one board that is much older or much newer when you are swapping. We resolved a four-year problem when we finally realized 56 solid-state relays on 14 different SCXI-1321 front-end modules all had exactly the same thermal problems and were all from the same manufacturer lot. Swapping with a different module two years older confirmed what we had been missing for years!
    Tim Jones

  • JDeveloper error when testing a web service

    Hi,
    I keep getting the error below when testing a web service. I have uninstalled and reinstalled JDeveloper but am still getting this error. Any help would be gratefully appreciated.
    [Waiting for the domain to finish building...]
    [10:58:59 AM] Creating Integrated Weblogic domain...
    [10:59:54 AM] Extending Integrated Weblogic domain...
    [11:00:39 AM] Integrated Weblogic domain processing completed successfully.
    *** Using port 7101 ***
    C:\Users\claire\AppData\Roaming\JDeveloper\system11.1.1.4.37.59.23\DefaultDomain\bin\startWebLogic.cmd
    [waiting for the server to complete its initialization...]
    \Java\jre6\lib\ext\QTJava.zip was unexpected at this time.
    Process exited.

    http://forums.oracle.com/forums/search.jspa?objID=f83&q=QTJava.zip

  • DAQmx Create Virtual Channel timeout

    Why does it take almost a minute and a half to get an error message from the DAQmx Create Virtual Channel VI?
    I need to find a way to have my executable complain immediately if the underlying drivers are not installed.
    For example, I have LabVIEW at home and supposedly have the drivers installed too, but no hardware connected.
    Sometimes I run a VI that uses DAQ; however, when it encounters the Create Virtual Channel VI it appears to hang for a minute and a half before finally giving an error.
    Anyone else been there, done that?
    Thanks

    Thanks for replying D-cubed.
    Actually, I'm calling Create Virtual Channel in a state machine to set up the data acquisition.
    When using it on a machine that has a DAQ device installed, it's a piece of cake. No sweat.
    However, when I am at home I have LabVIEW but possibly either no DAQ drivers or no DAQ hardware.
    I believe it is the second case, since I can place a task on the block diagram, so I will assume the drivers are installed correctly.
    There is just no DAQ hardware in the system.
    When the Create Virtual Channel VI is called, the PC appears to hang because that VI is executing internally (looking for a device, I presume).
    The timeout takes 1.5 minutes, and if I were not expecting it, the PC would appear to be hung up.
    Finally some 'MIG error' message appears, saying to contact NI, and so on.
    What I am trying to do is have my program report immediately, instead of 1.5 minutes later, that there is no DAQ device in the system.
    Thanks again
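    One way to get that fail-fast behaviour is to enumerate the installed devices before creating any channels and bail out if the list is empty. A minimal sketch using the nidaqmx Python API (in LabVIEW you could similarly query the DAQmx System property node for device names before entering the acquisition state):

        import nidaqmx.system

        # Fail fast: check for installed DAQ devices before calling
        # Create Virtual Channel, instead of waiting for its long timeout.
        local_system = nidaqmx.system.System.local()
        if not list(local_system.devices):
            raise RuntimeError("No DAQ devices found - skipping acquisition setup")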

  • Is CJC compensation automatically applied to Virtual Channels created with MAX?

    I have an SCXI-1000 chassis, an SCXI-1102 module and a TC-2095 terminal block. I am configuring virtual channels with the names TC(n), where n is the channel number. When I configure the virtual channel, I select the 'built-in' CJC. The question I have is:
    When I reference the virtual channel from my DAQ application, is the data I receive already cold-junction-compensated, or do I have to read the CJC voltage as a separate channel, and apply the correction factor manually on my block diagram (like we had to do in the old days)?
    I gather that the data that I read, which is obviously scaled to engineering units, is, indeed, cold-junction compensated, but I would like to be certain that that is the case. Thanks.

    Wes,
    You are right. That is the way it works. If, when creating a virtual channel in MAX, you specify the CJC compensation to be the built-in CJC, the data you get is already cold-junction compensated.
    Hope this helps.
    Filipe
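    In DAQmx terms, the equivalent is selecting the built-in CJC source when creating the thermocouple channel, which gives you compensated, scaled readings directly. A minimal sketch using the nidaqmx Python API (the SCXI channel name is a placeholder):

        import nidaqmx
        from nidaqmx.constants import ThermocoupleType, CJCSource, TemperatureUnits

        with nidaqmx.Task() as task:
            # "SC1Mod1/ai0" is a placeholder for the SCXI-1102 channel.
            task.ai_channels.add_ai_thrmcpl_chan(
                "SC1Mod1/ai0",
                units=TemperatureUnits.DEG_C,
                thermocouple_type=ThermocoupleType.J,
                cjc_source=CJCSource.BUILT_IN)
            reading = task.read()  # already cold-junction compensated and scaled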

  • Is there any way to read the unscaled voltage from a scaled virtual channel?

    I am using DAQmx to read voltages from an SCXI chassis. Programmatically in LabVIEW I created a task with virtual channels and custom scales. Later in the program, I wish to view the scaled voltage and the raw voltage at the same time. Is there a property node which will allow me to do so?

    Hi Fergusonhd -- yes, there is a way to do this: look in DAQmx -> DAQmx Advanced -> Scale -> Scale Property Node.
    Drop it in and wire in your scale (I did it by creating a constant for active_scale). Select the various attributes depending on the type of scale you created and manipulate your signals based on that information. For example, if you created a linear scale, you can get the slope and intercept from this property node and use them to do the reverse calculation:
    if the original scale was Y = aX + b, then
    X = (Y - b)/a
    where b is the Y intercept and a is the slope.
    Hope this helps
    VNIU
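    The reverse calculation itself is just one line; a small sketch (Python for illustration, with made-up example numbers):

        def unscale(scaled_value, slope, y_intercept):
            """Recover the raw (pre-scale) voltage from a linearly scaled reading:
            if Y = a*X + b, then X = (Y - b) / a."""
            return (scaled_value - y_intercept) / slope

        # Example: with a scale of Y = 100*X + 2, a scaled reading of 52.0
        # corresponds to a raw voltage of 0.5 V.
        raw_voltage = unscale(52.0, slope=100.0, y_intercept=2.0)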

  • How to force TestStand to ignore LabVIEW errors

    Hi,
    Usually, when there is an error in a LabVIEW code module, TestStand stops and gives a popup message.
    Is there any way to force TestStand to ignore LabVIEW errors and continue testing?
    Thanks

    You can also work more globally - go to the Configure menu item and select Station Options.
    At the bottom of the Execution tab, there's a section for what to do on run-time errors: Show dialog, Cleanup, Ignore or Abort.
    Whilst not specific to steps configured to use the LabVIEW adapter (assuming you're mixing programming languages), that at least gets you continuing on to the next step without the dialog.
    If you need to assess if it's specifically LabVIEW modules causing the error, then you need to use the Station, Process Model or Sequence File callback for On run-time error, and the step gets passed as a parameter. That would allow you to work out if the step was LabVIEW specifically and then make a decision as to what to do next.
    Hope that helps.
