ArrayToImage VB 16 bit data.

I have IMAQ 3.3 (no Vision license). I can collect (via framegrabber) and display 8-bit data and save it via the ImageToArray function. Then I can read this data back and display it at a later time using the ArrayToImage function.
However, after saving 16-bit data, I cannot read it back as 16-bit data. The ArrayToImage function always truncates it to 8-bit data! How can I get it to read 16-bit data? The array it is reading from holds 16-bit data.

CWIMAQ 3.3
CWIMAQ 3.3
The following reads data from a file, assuming it is 8-bit data. I assumed that if I changed ImageArray and B1 to 16-bit integers it would read 16-bit data. Instead, it reads the 16-bit data and then truncates it to 8 bits as it puts the data into CWIMAQ1.
I only use this for debug purposes. We usually collect and run the data live, which works fine. However, I can save the collected data and "play it back" to check my software processing. I need to put it into CWIMAQ1 since this is the structure it goes into when running "live". When I try to modify the CWIMAQ1 parameters to indicate a 16-bit image I get something like "vision license required". I can read the image in as 8-bit data and do the 8-to-16-bit conversion myself (my fallback, which works fine). I just wanted to know if this is indeed a licensing issue or if there is something I am missing.
Private Function GetNewImages(ByVal ImageFile As String)
    Dim k As Long
    Dim l As Byte
    Dim ImageArray(4096, 2050) As Byte
    Dim B1 As Byte

    CWIMAQ1.Images.RemoveAll
    CWIMAQ1.Images.Add (mlNumActiveFrames)
    Open ImageFile For Binary As #97
    For k = 1 To mlNumActiveFrames
        ' Discard the 20 bytes preceding each frame
        For l = 1 To 20
            Get #97, , B1
        Next l
        Get #97, , ImageArray
        If EOF(97) Then
            MsgBox "Error reading " & ImageFile, , " "
            Close #97
            GetNewImages = False
            Exit Function
        End If
        CWIMAQ1.Images.Item(k).ArrayToImage ImageArray
    Next k
    Close #97
    GetNewImages = True
End Function
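The 8-to-16-bit conversion fallback mentioned above amounts to combining consecutive byte pairs into 16-bit words. A minimal sketch of the idea (shown in Java here purely for illustration, since the principle is language-independent; the little-endian byte order is an assumption and depends on how the framegrabber writes the file):

```java
public class BytePairs {
    // Combine consecutive byte pairs into unsigned 16-bit values.
    // Little-endian order is assumed here; swap lo/hi if the
    // framegrabber writes big-endian data.
    static int[] toUnsigned16(byte[] raw) {
        int[] out = new int[raw.length / 2];
        for (int i = 0; i < out.length; i++) {
            int lo = raw[2 * i] & 0xFF;      // mask to undo sign extension
            int hi = raw[2 * i + 1] & 0xFF;
            out[i] = (hi << 8) | lo;         // 0..65535
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] raw = { (byte) 0x34, (byte) 0x12, (byte) 0xFF, (byte) 0xFF };
        int[] words = toUnsigned16(raw);
        System.out.println(words[0]); // 0x1234 = 4660
        System.out.println(words[1]); // 65535
    }
}
```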

Similar Messages

  • Data Federator XI 3.0 using DB2 VARCHAR FOR BIT DATA Column?

    We have a column in a DB2 database that is defined as VARCHAR(16) FOR 
    BIT DATA.
    We are using the suggested IBM JDBC driver, db2jcc.jar, against a DB2 
    OS/390 8.1.5 version database.
    The Datasource column displays a data type of NULL, indicating the DF 
    does not understand or cannot handle this IBM data type.
    We have two issues.
    First, target tables are not able to return any columns, regardless of 
    whether we exclude the columns typed as NULL as mentioned above. We see 
    the 'Wait' animation for a very long time when we use the 'Target table 
    test tool' option. Selecting to display the count only returns zero.
    We are able to fetch and view non-NULL column data when using the 
    'Query tool' under the Datasource pane.
    I also get the same result when using the 'My Query Tool' in Server 
    Administrator; a selection against the sources returns data while 
    selecting from a target table returns no data. Also, a 'select 
    count(*)' returns zero.
    The second issue is in mapping a relationship between two DB2 tables 
    where the join is between two columns of the above mentioned type 
    (NULL).
    The error we get back when we use "Show Errors" is "The types 
    'NULL' (in 'S1.PLANNEDGOALID') and 'NULL' (in 'S2.PLANNDEDGOALID') are 
    not compatible.". When reviewing the relationship, a dashed red line 
    appears instead of a solid grey line between the two tables in the 
    "Table relationships and pre-filters" section of our mapping pane.
    The following query returns an error via the Server Administrator 
    Query Tool; "Types 'NULL' and 'NULL' are not compatible for operator 
    '=' (Error code : 10248)".
    select count(*)
    from
    (select s1.CASEID, s2.PLANNEDGOALID, s2.NAME, s2.PLANNEDGRPSTTYCD
      from "/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" AS s1
    ,"/DF_CMS_ODS/sources/CMFSREPT/CMSPROD.PLANNEDGOAL" s2
              where s1.PLANNEDGOALID = s2.PLANNEDGOALID)
    Here are the properties settings in the Resource Connector Settings 
    for jdbc.db2.zSeries we are using.
    capabilities: isjdbc=true;orderBy=false
    driverLocation: drivers/db2jcc_license_cisuz.jar;drivers/db2jcc.jar
    jdbcClass: com.ibm.db2.jcc.DB2Driver
    sourceType: db2
    supportsCatalog: no
    urlTemplate: jdbc:db2://<hostname>[:<port>]/<databasename>
    Here are the Connection parameters as defined for the datasource in DF 
    Designer.
    Defined resource: jdbc.db2.zSeries
    Jdbc connection URL: jdbc:db2://DB2D03:50000/CMFSREPT
    Authentication: Use a specific database logon for all Data Federator 
    users.
    User Name: x
    Password: hidden
    Login domain: -- Choose a defined login domain --
    Supports Schema: checked
    Schema: is empty
    Prefix table names with schema name: checked
    Supports catalog: unchecked
    Prefix table names with the database name: unchecked
    Table types: TABLE and VIEW
    So, here are the two questions we need answered:
    Is this a limitation of Data Federator?
    Is there a workaround short of changing the datatype in the database?

    Hi Darren,
    The VARCHAR() FOR BIT DATA type is a binary data type, and Data Federator does not support binaries. But if, in your case, it makes sense to map this column to a VARCHAR data type, you can configure the DB2 connector to view this column as a VARCHAR.
    Your column can be mapped explicitly to a data type of your choice using a property: castColumnType.
    This property can be set by updating the resource you selected when you registered your DB2 data source.
    If the resource is "jdbc.db2", then:
    1. Launch Data Federator Administrator
    2. Click on "Administration" tab
    3. Click on "Connector Settings"
    4. Select the right resource: "jdbc.db2"
    5. Click "Add a property"
    6. Select "castColumnType"
    7. Set its value to: VARCHAR() FOR BIT DATA=VARCHAR
    8. Click on Ok
    You should see this column as a VARCHAR.
    Regards,
    Mokrane
    PS: For the target table issue, we have forwarded your mail to the Data Federator Designer team.

  • How to extract 64 bit data from imaq image using IMAQ Extract VI

    I have LV 8.5.1, Vision 8.5 and need to extract 64 bit data from a 64 bit image and I get the "invalid image" error while using the IMAQ Extract VI.  What version of Vision do I need to allow me to do this? 
    Currently, the workaround I have is:
    1) convert the image to 32-bit
    2) use the ROI tools to get the rectangle data I need
    3) then go back to the original image and convert the image to a 64-bit array
    4) take the rectangle data to extract the data needed out of the 64-bit array data.
    Clunky, but it works. I would think that the IMAQ Extract tool should allow me to extract the 64-bit data but it doesn't... it forces me to 32-bit.
    Suggestions?

    steve05ram360 wrote:
    awesome, that does work. 
    The attached DLL is slightly corrected and should also be OK "in place" when Dst is not connected, like the original IMAQ function. Hopefully it works properly now. By the way, all IMAQ types are supported, not only U64.
    Andrey.
    Attachments:
    ADVExtractDLL.zip ‏9 KB

  • How to store bit data in VARCHAR(4000) field?

    Hi.
    Please help!
    We are porting some C/C++ software with embedded SQLs from NT/DB2 to Linux/Oracle.
    On NT/DB2 we have a table to store file data in VARCHAR(4000) blocks.
    The table looks like this:
    CREATE TABLE FileData (filetime timestamp not null, idx int not null, datablock varchar(4000) FOR BIT DATA not null, primary key (filetime, idx));
    As you can see, DB2 has an appropriate field modifier - "FOR BIT DATA" - which makes DB2 store the data as-is, not converting characters.
    I need to know - is it possible to do the same in Oracle as in DB2?
    Does Oracle have some kind of field modifier like "FOR BIT DATA" in DB2?
    If not, how can I do the same in Oracle?
    The current problems are:
    1) when the application imports a file with some national chars, Oracle stores "?" in the database in place of the national chars.
    2) another piece of the problem - if the file is more than 4000 bytes long, it reports the ORA-01461 error (it is trying to expand some chars to UTF8 chars that do not fit in a single char, so finally the data does not fit the field size).
    So, it seems that it cannot process national chars at all. :-\
    For details please see enclosed [C code|http://dmitry-bond.spaces.live.com/blog/cns!D4095215C101CECE!1606.entry] , there is example how data written to a table.
    In other places we also need to read data from the FileData table and store it back to a file (other filename, other location).
    Here is a summary of the field-datatype variants I have tried for the "datablock" field:
    1) VARCHAR2, RAW, LONG RAW, BLOB - do not work! All report the same error - ORA-01461.
    2) CLOB, LONG - both work fine, but(!) both still return "?" instead of national chars on data reading.
    Hint: how I tried these field types - I just dropped the "FileData" table, created it using a different type for the "datablock" field, and ran the same application to test it.
    I think I need to explain the problem - we do not provide direct access to the Oracle database, we use some middleware. The middleware is C/C++ software which also has an interface for dynamic SQL execution. So, exactly this middleware (which typically runs on the same server as Oracle) receives the "?" instead of national chars! But we need it to return all data AS-IS(!) without any changes!!! That is why I asked - does Oracle have any option to store byte data as-is?
    The BIG QUESTION - HOW CAN WE DO THIS?!
    Another thing I need to explain - it is ok to use Oracle-specific SQL ONLY IF THERE IS REALLY NO WAY TO ACHIEVE THIS WITH STANDARD SQL! Please.
    So, please look on a C code (by link I have posted above) and tell - if it is possible to make working in Oracle the VARCHAR approach we using at the moment?
    If not - please describe what options do we have for Oracle?
    Regards,
    Dmitry.
    PS. it is Oracle 11gR2 on CentOS 5.4, all stuff installed with default settings, so Oracle db encoding is "AL32UTF8".
    The C/C++ application is built as an ANSI/ASCII application (non-Unicode), so sizeof(char)=1.
    The target Oracle db (I mean the one which will be used on the customer site) is Oracle 10g. So, the solution needs to work on Oracle 10g.

    P. Forstmann wrote:
    There is some contradiction in your requirements:
    - if you want to store data as is without any translation use RAW or BLOB
    - if you want to store national character data try to use NVARCHAR2 or NCLOB.
    Seems you did not understand the problem. Ok, I'll try to explain. Please look at the code sample I provided in the original question
    (I just added the expanded data structures there; sorry, I forgot to publish them when posting the original question):
    EXEC SQL BEGIN DECLARE SECTION;
      struct {
        char timestamp[27];
        char station[17];
        char filename[33];
        char task[17];
        char orderno[17];
        long filelen;
      } gFilehead;
      struct {
        char timestamp[27];
        long idx;
        struct {
          short len;
          char arr[4001];
        } datablock;
      } gFiledata;
    EXEC SQL END DECLARE SECTION;
    #define DATABLOCKSIZE 4000
    #ifdef __ORACLE
      #define VARCHAR_VAL(vch) vch.arr
    #elif __DB2
    #endif
    short dbWriteFile( char *databytes, long datalen )
    {
      short nRc;
      long movecount;
      long offset = 0;
      gFilehead.filelen = gFilehead.filelen + datalen;
      while ((datalen + gFiledata.datablock.len) >= DATABLOCKSIZE)
      {
        movecount = DATABLOCKSIZE - gFiledata.datablock.len;
        memcpy(&VARCHAR_VAL(gFiledata.datablock)[gFiledata.datablock.len], databytes, movecount);
        gFiledata.datablock.len = (short)(gFiledata.datablock.len + movecount);
        exec sql insert into filedata (recvtime, idx, datablock)
          values(
            :gFiledata.recvtime type as timestamp,
            :gFiledata.idx,
            :gFiledata.datablock /* <--- ORA-01461 appears here */
          );
        nRc = sqlcode;
        switch (nRc)
        {
        case SQLERR_OK: break;
        default:
          LogError(ERR_INSERT, "filedata", IntToStr(nRc), LOG_END);
          exit(EXIT_FAILURE);
        }
        offset = offset + movecount;
        datalen = datalen - movecount;
        gFiledata.idx = gFiledata.idx + 1;
        memset(&gFiledata.datablock, 0, sizeof(gFiledata.datablock));
        databytes = databytes + movecount;
        gFiledata.datablock.len = 0;
      }
      if (datalen + gFiledata.datablock.len)
      {
        memcpy(&VARCHAR_VAL(gFiledata.datablock)[gFiledata.datablock.len], databytes, datalen);
        gFiledata.datablock.len = (short)(gFiledata.datablock.len + datalen);
      }
      return 0;
    }
    So, the thing we need is to put some data into the "datablock" field of the following structure:
      struct {
        char timestamp[27];
        long idx;
        struct {
          short len;
          char arr[4001];
        } datablock;
      } gFiledata;
    Then insert it into a database table using static SQL like this:
        exec sql insert into filedata (recvtime, idx, datablock)
          values(
            :gFiledata.recvtime type as timestamp,
            :gFiledata.idx,
            :gFiledata.datablock /* <--- ORA-01461 appears here */
          );
    And then we expect to read exactly the same data back!
    The problems are:
    1) Oracle decides to convert the data we are inserting (why? and how do we disable the conversion?!)
    2) even if it inserts the data (the CLOB and LONG field datatypes work fine when inserting with such a static SQL + such a host variable), the data then becomes unreadable! (why?! how do we make it readable?!)
    P. Forstmann wrote:
    ORA-01461 could mean that you have a wrong data type for the bind variable in your client code:
    It was not me who decided that the host variable is a "LONG datatype value" - Oracle made that decision instead of me. And that is the problem!
    It looks like Oracle reacts to any char code >= 0x80.
    So, assume if I run the code:
    // if Tab1 was created as "CREATE TABLE Tab1 (value VARCHAR(5))" then
    char szData[10] = "\x41\x81\x82\x83\x84";
    EXEC SQL INSERT INTO Tab1 (value) VALUES (:szData);
    Oracle will report the ORA-01461 error!
    EXACTLY THIS IS THE PROBLEM I HAVE DESCRIBED IN MY ORIGINAL QUESTION.
    So, the problem is - why does Oracle make such a decision instead of me?! How can we make Oracle insert data into a table AS-IS?
    What other type of host variable should we use to make Oracle treat the data as binary?
    void*? unsigned char? Could you please provide any examples?
    Ok, you did recommend "use the RAW datatype". But the RAW datatype is limited to 2000 bytes only - we need 4000! So it does not meet our needs at all.
    You also mentioned "use BLOB" - but testing shows that Oracle reports the same ORA-01461 error on inserting data into a BLOB field from such a host variable! (see the code I posted)
    What else can we do?
    Change the type of the host variables? BUT HOW?!
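The "?" substitution described above is what happens whenever the bytes pass through a character-set conversion; keeping the data in byte form avoids it, which is the property a RAW/BLOB column (or DB2's FOR BIT DATA) is meant to preserve. A small illustration of the difference (Java is used here only because it shows the conversion concisely; the charsets are just examples, not the poster's actual configuration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class AsIsBytes {
    public static void main(String[] args) {
        byte[] data = { 0x41, (byte) 0x81, (byte) 0x82, (byte) 0x83, (byte) 0x84 };

        // Route 1: treat the bytes as text and re-encode into a charset
        // that cannot represent them - anything above 0x7F becomes '?' (63).
        String asText = new String(data, StandardCharsets.ISO_8859_1);
        byte[] converted = asText.getBytes(StandardCharsets.US_ASCII);
        System.out.println(Arrays.toString(converted)); // [65, 63, 63, 63, 63]

        // Route 2: keep the data as raw bytes - it round-trips unchanged.
        byte[] copy = Arrays.copyOf(data, data.length);
        System.out.println(Arrays.equals(data, copy)); // true
    }
}
```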

  • How to read char() for bit data DB2's type in Oracle?

    Hello,
    I am developing an application (from JDeveloper) to operate with two databases. On one hand there is Oracle, and on the other DB2 (AS400).
    I am trying to read a DB2 field with the "char() for bit data" type from Oracle, but I can't read it.
    I have trying:
    rset.getObject(1) -->[B@1a786c3
    rset.getBinaryStream(1) --> java.io.ByteArrayInputStream@1a786c3
    rset.getAsciiStream(1) --> java.io.ByteArrayInputStream@2bb514
    rset.getCharacterStream(1) -->java.io.StringReader@1a786c3
    Do you have any solution to see the value of this type of field?
    Thank you and regards

    I have to synchronize unidirectionally from the Oracle database to DB2. And I'd like to save the information in the DB2 record prior to the update operation.
    And here is where the problem arises for me: when I try to read from Java with the connection established on DB2, I am unable to interpret the information. While there are no problems consuming the information from Oracle, some DB2 field types are not common with Oracle, such as char() for bit data. From what I could find, the equivalent in Oracle would be raw(), but is it possible to read this type of information from Java? And this is my doubt: is any kind of cast, or a special view, necessary to retrieve this information?

  • Is there no 'bit' data type in Java?

    I want to write a private protocol in Java, but there is no 'bit' data type, so it is difficult to exchange data frames with another application written in C++.
    Can any friend give me a hand?
    Thanks.
    :)

    In Java there are no bit fields.
    Also, you cannot directly map an external data layout to an internal representation; that is, there is no "struct" which you could read directly from a file.
    You can read your ints or bytes and do some computations with them.
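Those computations boil down to shifting and masking, which give the same effect as bit fields. A minimal sketch of pulling individual bits out of a received byte and packing them back (method names are my own):

```java
public class BitOps {
    // Extract bit 'pos' (0 = least significant) from a byte.
    static int bitAt(byte b, int pos) {
        return (b >> pos) & 1;
    }

    // Pack eight bits (given most-significant first) back into a byte.
    static byte pack(int[] bits) {
        int value = 0;
        for (int bit : bits) {
            value = (value << 1) | (bit & 1);
        }
        return (byte) value;
    }

    public static void main(String[] args) {
        byte frame = (byte) 0b1010_0011;
        System.out.println(bitAt(frame, 0)); // 1
        System.out.println(bitAt(frame, 2)); // 0
        System.out.println(pack(new int[]{1, 0, 1, 0, 0, 0, 1, 1}) == frame); // true
    }
}
```

The same masks work on both sides of the wire, so a C++ peer reading the frame byte-by-byte sees exactly the layout the Java side wrote.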

  • J2sdk1.4.2_12 does not support the 64 bit data model

    Hi guys,
    I need support for installing NW2004R1 for Windows x64.
    In the step "define parameters" I need to set the JDK directory. I set C:\j2sdk1.4.2_12, and when I click the Next button I receive the message "The JDK installed in directory C:\j2sdk1.4.2_12 does not support the 64 bit data model. Choose a different JDK."
    I searched for documents but found nothing.
    Can anybody help me???
    thanks
    Henrique Mattos.

    Hi,
    You can get more information related to your issue from
      service.sap.com
    https://sdlc2e.sun.com/ECom/EComActionServlet;jsessionid=606D9ABC1A37F8F6D7517A9B77ACA38B
    Regards
    Agasthuri Doss

  • The JDK installed in directory /usr/java14 does not support the 64 bit data

    I'm installing the Java Add-in for ECC 6.0 SR1 and suddenly, during the installation of the module for the creation of the db schema, I receive the error:
    "The JDK installed in directory /usr/java14 does not support the 64 bit data model"
    I'm using JSDK 1.4_2 SR3 for AIX, and until now this JSDK was the right JSDK for the installation.
    What does it mean? Do I have to download the latest JSDK for AIX?
    The only thing changed at OS level before this error on the JSDK is the installation of the latest operating system patch, but I would be very surprised if there were a relation with the error I'm receiving now.
    Regards

    I have the same problem. For me, Java worked with AIX 5.2 but failed with AIX 5.3.
    bash-2.05b# which java
    /usr/java14/.private142/bin/java
    bash-2.05b# java -fullversion
    java full version "J2RE 1.4.2 IBM AIX build ca142-20060421 (SR5)"
    bash-2.05b# uname -a
    AIX vcsaix101 2 5 00C888AC4C00
    bash-2.05b# which java
    /usr/java14/.private142/bin/java
    bash-2.05b# java -fullversion
    java full version "J2RE 1.4.2 IBM AIX build ca142-20060421 (SR5)"
    bash-2.05b# uname -a
    AIX vcsaix11 3 5 00C7690E4C00
    Any idea?
    Regards,
    Venkat

  • How to specify 32-bit data model in JNLP file?

    My customer has both 32-bit and 64-bit JREs installed. Our application loads native libraries that require them to run in a 32-bit process. The customer reports that the libraries are not loading, because when they download the JNLP file, the browser invokes the 64-bit JRE. The JNLP spec provides a lot of control, but it doesn't seem to provide control over the choice between 32-bit and 64-bit data models.
    I did see in the documentation that java-vm-args would accept "-d32", but subsequently someone reported that this caused errors to occur. In bug 6907802, that problem was "fixed" by changing the documentation to reflect that "-d32" was only supported on Solaris.
    So is there no way to specify in the JNLP that this application requires a 32-bit JRE? If not, is there a plan to introduce this functionality in the future?

    mosquitobytes wrote:
    ..So is there no way to specify in the JNLP that this application requires a 32-bit JRE? If not, is there a plan to introduce this functionality in the future?I'd imagine your best chance of a reply is to put a comment on the bug report (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6907802).

  • Acquire single point 12 bit data @ 200Khz using PXIe 6535 DIO RT

    I want to acquire single point 12 bit data @ 200kHz using a PXIe 6535 DIO, PXIe 1072 chassis and 8820 controller in RT. The problem is I am unable to acquire data as triggered input. Loop execution time takes ~10us (measured using RT tick count), so it misses samples. Am I missing something? What are the proper ways to acquire digital data in RT?
    Also I am wondering whether I can use the SMB connector of the 8820 controller as my acquisition trigger input pulse. I am completely new to RT. Any help will be appreciated.
    Thank you.

    Hi jtele1,
    To make sure that the data gets written in the correct order, I would recommend monitoring the Time Out of the write. If a timeout occurs you could stop writing all FIFOs and then start again once the timeouts are gone. Another option is to look at your host side and determine if you can read larger chunks of data at a time and allow the host side to deal with processing the data. An additional option would be to look into high throughput streaming for FlexRIO. In this setup you would write the data directly to disk on your host side and then process the data at a later time. I have linked an example below; this example was giving me trouble, so please let me know if you have trouble loading it. Depending on your situation these may not all be acceptable options, but you will need to ensure that you are not filling any of your FIFOs. Lastly, from what I can tell you are using a Windows OS as your host, and in that situation you have no way of controlling when your LabVIEW application gets processor time. If you were to switch to a Real-Time controller you would be able to control when certain tasks run and add priority to tasks. Please let me know if you have further questions.
    High Throughput Streaming
    https://decibel.ni.com/content/docs/DOC-26619
    Patrick H | National Instruments | Software Engineer

  • Send 102 bit data through the LPT port?

    I have to send 102 bits to my interface board using the LPT port, through one of the data lines, and be able to read data from the input line. I don't know how to realize such a task. Thanks

    Let me describe the program I have to write:
    I have to send 102 bits serially, using one of the data lines of the LPT port, to a device, and be able to read back data sent from a device register through one of the input port pins, for instance pin 10 of a DB25, and synchronize the transmission by the PC clock (for write and read). In fact I am using 5 output control signals (D0...D4) from the LPT port - RST, TXD, CLK, CE, TEST - and one input data line, RXD (pin 10).
    So the program should work in normal mode (write data, read data) and test mode (using the write procedure). Since I am quite new to LabVIEW, I am a little lost and I need some support or an example that shows a way to send more than 32 bits via one data line (in my case 102 bits = 8 bit COMMAND (MSB) + 88 bit SPI DATA + 6 bit CRC (LSB)) and to read them back, place them in the register, and be able either to monitor or modify them.
    I know that there are plenty of examples, but if I can gain time by being helped it would be great.
    Can you please advise?

    Hi Beni,
    find attached a SPI.vi - slightly changed from one of my typical SPIs (LabVIEW 7.1). With some simple changes it will fit your application.
    The only thing you need to do is prepare the bits for the input array.
    If you need some more comments, find one of my email addresses in the documentation of this vi.
    Regards
    Werner
    Attachments:
    102bit_SPI.zip ‏90 KB
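The 8 + 88 + 6 bit layout described in the question can be assembled into a single bit sequence before it is clocked out. A sketch of the idea (Java is used here only for illustration; the poster's actual implementation would be a LabVIEW VI, and MSB-first ordering within each field is an assumption):

```java
public class FrameBits {
    // Assemble a 102-bit frame: 8-bit command, 88-bit SPI data,
    // 6-bit CRC, as described in the question.
    // Index 0 is the first bit shifted out; MSB-first is assumed.
    static boolean[] buildFrame(int command, boolean[] spiData, int crc) {
        if (spiData.length != 88) throw new IllegalArgumentException("need 88 data bits");
        boolean[] frame = new boolean[102];
        for (int i = 0; i < 8; i++) {                // command, MSB first
            frame[i] = ((command >> (7 - i)) & 1) == 1;
        }
        System.arraycopy(spiData, 0, frame, 8, 88);  // payload
        for (int i = 0; i < 6; i++) {                // CRC, MSB first
            frame[96 + i] = ((crc >> (5 - i)) & 1) == 1;
        }
        return frame;
    }

    public static void main(String[] args) {
        boolean[] frame = buildFrame(0xA5, new boolean[88], 0b101010);
        System.out.println(frame.length); // 102
        System.out.println(frame[0]);     // true (0xA5 = 1010 0101)
        System.out.println(frame[96]);    // true (CRC MSB)
    }
}
```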

  • Servlet and 64-bit data model

    I have a program which has to be executed with the 64-bit data model in order to execute properly.
    (for example ... the program is ABC.java)
    java -d64 ABC
    If I am going to create an instance of the class ABC and execute its methods within a servlet program, will there be any problem ?
    Is there any configuration required for the Tomcat in order to access servlet like this ?

    I have a program which has to be executed with the 64-bit data model in order to execute properly.
    (for example ... the program is ABC.java)
    java -d64 ABC
    If I am going to create an instance of the class ABC and execute its methods within a servlet program, will there be any problem?
    I was thinking you meant you'd be calling a servlet from a 64-bit program, and in that case I'd think no prob. But if you're talking the other way around, you've got issues. Firstly, you cannot call another app from your servlet unless you're doing RMI or something. I think you should clarify EXACTLY what you want to do and we can evaluate better.

  • Conversion from dynamic to double bit data

    This might be very simple but I've got confused. I have current data and average data as in the code. I want to build an array using this data, save it on my hard disk, and open it in Word with headers (current and average).
    Problems:
    I input data from both sources into Build Array as shown, and wire the output of Build Array to the Write To Spreadsheet File vi. I get an error: source is dynamic and sink is double. This is the hardest part of my life, figuring out which conversion palette to use.
    When I run the vi (let's assume the correct conversion palette is wired and there's no conversion error), a dialog box pops up for the file path, which keeps coming up.
    Ideally I would like it to pop up on screen when I press run, so that I may browse to a location to save the file; once the path is specified, the vi runs continuously and data is written to the spreadsheet while the vi is running.
    Many thanks for reading and help
    Cheers 
    Solved!
    Go to Solution.
    Attachments:
    creating 2d array and saving data in spreadsheet.vi ‏109 KB

    Last time I didn't even try to think about what you were doing.  Now that I look at it I am once again reminded why I don't use express VIs and dynamic data types, but I digress.  I am now guessing that you would like to average a signal and have two curves, the current set and a weighted average.
    There may or may not be a way to do this with Express VIs and Dynamic Data types.  To be honest, if there is I wouldn't use it.  What I would do as a start is immediately following each Express VI insert a Dynamic Data to DBL conversion and then Index Array to get the first row.  Now you are working with 1D arrays.
    Attachments:
    creating 2d array and saving data in spreadsheet_MOD2.vi ‏149 KB

  • How can I transfer more than 64 bit data from host to target?

    Hi all, I'm currently using a PCIe-7851R FPGA card to drive my device. There are 64 lines to be controlled. So what I did is generate the commands on the host PC and then transfer them to the target via a DMA FIFO. The data type of the FIFO is U64, i.e. each element controls the 64 DIO lines. But the question becomes complex when I transfer a 66-line command. I tried to create 2 FIFOs, but I can hardly keep the 2 FIFOs synchronized.
    I think I might be able to create 2 U64 arrays, one containing the original 64-line commands, and the other containing the 2-line info (a waste). And then I interleave them on the host and decimate them on the target. There should be enough cycles to do this. But I don't think this is a good solution. Is there a better method? Thank you.
    LabVIEW 2009, Windows XP, PCIe-7851R
    Regards,
    Bo
    My blog Let's LabVIEW.
    Solved!
    Go to Solution.

    Using the techniques highlighted in this tutorial:
    http://zone.ni.com/devzone/cda/tut/p/id/4534
    You could use code like this:
    Mark B
    ===If this fixes your problem, mark as solution!===
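The interleave-on-host / decimate-on-target idea the poster considered can be sketched as follows (Java is used here for illustration only; the real implementation would be LabVIEW host and FPGA VIs, and the sample values are made up):

```java
import java.util.Arrays;

public class FifoInterleave {
    // Host side: merge two U64 streams into one FIFO stream
    // by alternating elements.
    static long[] interleave(long[] a, long[] b) {
        long[] out = new long[a.length + b.length];
        for (int i = 0; i < a.length; i++) {
            out[2 * i] = a[i];
            out[2 * i + 1] = b[i];
        }
        return out;
    }

    // Target side: split the stream back into the two original arrays.
    static long[][] decimate(long[] mixed) {
        long[] a = new long[mixed.length / 2];
        long[] b = new long[mixed.length / 2];
        for (int i = 0; i < a.length; i++) {
            a[i] = mixed[2 * i];
            b[i] = mixed[2 * i + 1];
        }
        return new long[][] { a, b };
    }

    public static void main(String[] args) {
        long[] cmds = { 1, 2, 3 };   // 64-line commands
        long[] extra = { 9, 9, 9 };  // the 2 extra lines, padded into U64
        long[] mixed = interleave(cmds, extra);
        System.out.println(Arrays.toString(mixed)); // [1, 9, 2, 9, 3, 9]
        System.out.println(Arrays.equals(decimate(mixed)[0], cmds)); // true
    }
}
```

A single FIFO carrying the interleaved stream sidesteps the synchronization problem of two FIFOs, at the cost of halving the effective throughput per channel.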

  • 10-bit data corruption

    When I place 10-bit files into Shake I get some kind of corruption in the files on the right side of the image. If I capture 8-bit files I do not see such corruption. The corruption takes the form of random pixels of noise appearing on the last vertical line on right side of the image.
    Has anyone seen this? I'm running Shake 4.1 with Quicktime 7.3 on 10.4.7 on my Intel dual core mac.

    Hi Eugene,
    I experience this every day. How annoying!
    My workaround is to click the checkbox for Force 8bit on the FileIn node, then put a Bytes node after it to go to 16bit/float, do all your comping, and then at the end of the tree put a Bytes node back to 8bit, to output to the Uncompressed 10bit codec without it rendering/looking like it's on acid.
    Have you found any solution yet?
    Cheers
    Adam
