Fail to convert between UTF8 and UCS2: failUTF8Conv

We need to store potentially any UTF8 character in this database,
especially $ & ( 4 8 < = > from WE8ISO8859P1 and from WE8ISO8859P15.
So we installed a UTF8 instance and set NLS_LANG to UTF8.
We have to select/update from both a Java client and SQL*Plus (or similar tools).
When we insert with the Java client the data is unreadable from SQL*Plus, and
when we insert from SQL*Plus we get 'Fail to convert between UTF8 and UCS2: failUTF8Conv'.
Here is the statement executed in SQL*Plus:
update CPW_TEST set lb_comportement='$ & ( 4 8 < = > ' WHERE ID_TEST=14805;
Here is the statement executed from the Java client:
update CPW_TEST set lb_comportement='$ & ( 4 8 < = > ' WHERE ID_TEST=14804;
And here is the result in the database:
SELECT id_test,LB_COMPORTEMENT FROM CPW_TEST WHERE ID_TEST=14804 or ID_TEST=14805
ID_TEST LB_COMPORTEMENT
14804 B$ B& B( B4 B8 B< B= B> B
14805 $ & ( 4 8 < = >
2 rows selected
And the dump:
SELECT id_test,dump(LB_COMPORTEMENT) FROM CPW_TEST WHERE ID_TEST=14804 or ID_TEST=14805
ID_TEST DUMP(LB_COMPORTEMENT)
14804 Typ=1 Len=26: 194,164,32,194,166,32,194,168,32,194,180,32,194,184,32,194,188,32,194,189,32,194,190,32,194,128
14805 Typ=1 Len=17: 164,32,166,32,168,32,180,32,184,32,188,32,189,32,190,32,128
2 rows selected
I'm not sure, but it seems that SQL*Plus uses true UTF8 (variable-length codes) while the Java client uses UCS-2 (2 bytes per character).
How can I solve my problem?
Our configuration:
Java client (both thin and OCI JDBC driver 8.1.7), SQL*Plus client and Oracle 8.1.7.0.0 database on the same computer (W2000 or NT4).
Thank you for your attention.
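A quick way to see what bytes actually ended up in the column is to read them back over JDBC without any character-set conversion. This is only a minimal sketch written against a modern JDK for brevity: the connection URL, user and password are placeholders, and the table and IDs are the ones from the statements above.

import java.nio.charset.StandardCharsets;
import java.sql.*;

public class CheckStoredBytes {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your instance.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT id_test, lb_comportement FROM CPW_TEST " +
                 "WHERE ID_TEST IN (14804, 14805)")) {
            while (rs.next()) {
                // getBytes returns the raw column bytes without the UTF8 -> UCS-2
                // conversion that makes getString fail on invalid UTF8 data.
                byte[] raw = rs.getBytes(2);
                System.out.println(rs.getInt(1) + ": " + raw.length + " bytes, "
                        + "decoded as ISO-8859-1 = "
                        + new String(raw, StandardCharsets.ISO_8859_1));
            }
        }
    }
}

If the bytes printed here match the single-byte codes in the DUMP output (164, 166, ...), the SQL*Plus session stored raw ISO-8859-1 bytes unconverted, which is consistent with NLS_LANG telling Oracle that the client already sends UTF8.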

Hi Luis, thanks for your suggestions. You're right that the problem was in JServ and its JVM.
There was a conflict between different versions of Java. While iFS was using JRE 1.3.1, JServ was configured to use JRE 1.1.8. As soon as I corrected the link to the right Java, the problem disappeared.
Radek

Similar Messages

  • Java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv

    Hi all,
    I am writing a servlet that connects to Oracle 8.0.6 through JDBC for JDK 1.2 on NT 4.0
    (English version) and it works fine.
    But when the servlet is deployed to a Solaris box with Oracle 8.0.5 (not a typo, the Oracle on
    NT is 8.0.6 and the Oracle on Solaris is 8.0.5) and JDBC for JDK 1.2 (of course, for Solaris),
    the servlet fails with the exception:
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    (I am using JRun 3.0 as the application and web server on both NT and Solaris.)
    (The databases on both the NT and Solaris platforms use the UTF8 charset.)
    My servlet looks like this (dbConn is a Connection object shown to be connected to Oracle
    earlier in the same method):
    String strSQL = "SELECT * FROM test";
    try {
        Statement stmt = dbConn.createStatement();
        ResultSet rs = stmt.executeQuery(strSQL);   // executeQuery returns a ResultSet
        while (rs.next()) {
            out.println("id = " + rs.getInt("id"));
            System.out.println("id written");
            out.println("name = " + rs.getString("name")); // <-- the exception is thrown here
            System.out.println("name written");
        }
    } catch (java.sql.SQLException e) {
        System.out.println("SQL Exception");
        System.out.println(e);
    }
    The definition of the "test" table is:
    create table test(
    id number(10,0),
    name varchar2(30));
    There are about 10 rows in the table "test", and all rows contain ONLY Chinese
    characters in the "name" field.
    When I view the system log, the string "id written" is shown EXACTLY ONCE and then there
    is:
    SQL Exception
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    That means the result set is fetched back from the database correctly. The problem arises only
    during the getString("name") call.
    Again, this problem only happens when the servlet runs on the Solaris platform.
    At first I expected some strange characters on the web page rather than
    an exception. I know that I should use getBytes to convert between different encodings, but
    that's another story.
    One more piece of information: when all the rows contain only ASCII characters in their "name"
    field, the servlet works perfectly even on Solaris.
    If anyone knows why and how to tackle the problem, please let me know. Feel free to
    send email to me at [email protected]
    Many thanks,
    Ben

    Hi all,
    For the problem I previously posted, I found that Oracle has had such a bug filed before (in Oracle 7.3.2 or thereabouts) and it is classified as NOT A BUG.
    Further research led me to an Oracle document stating that the error message
    "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv"
    is a JDBC driver error message with error number ORA-17037.
    I'm still wondering why this behaviour happens only on the Solaris platform. The servlet on the NT machine I am using (which runs Oracle 8.0.6 and JDBC for JDK 1.2) works just fine. I also suspect that this may be some sort of mistake in the JDBC driver.
    Nevertheless, I have found a way to work around the problem of not being able to get non-English strings from Oracle on Solaris, and I would like to share it with you all here.
    Before I go on: I found that there are many people out there on the web who encounter the same problem (some of whom said they had been working on it for a month). As a result, if you find this workaround helps you, please tell those who have the same problem but don't know how to tackle it. Thanks very much.
    Here's how I worked it out. It's kinda simple, but it does work:
    Instead of using:
    String abc = rs.getString("SomeColumnContainsNonEnglishCharacters");
    I used this:
    String abc = new String(rs.getBytes("SomeColumnContainsNonEnglishCharacters"));
    This will give you a string in YOUR DEFAULT CHARSET (or ENCODING) from your system.
    If you want to convert the string read to some other encoding, say Big5, you can do it like this:
    String abc = new String(rs.getBytes("SomeColumnContainsNonEnglishCharacters"), "BIG5");
    Again, it's simple, but it works.
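    A slightly more defensive version of the same workaround, as a sketch only (the column name is the placeholder used above, and the null check is an addition, since getBytes returns null for a NULL column):

    // Hedged helper around the getBytes workaround above; charsetName is whatever
    // encoding the raw database bytes are actually in (e.g. "BIG5").
    static String readAsCharset(java.sql.ResultSet rs, String column, String charsetName)
            throws java.sql.SQLException, java.io.UnsupportedEncodingException {
        byte[] raw = rs.getBytes(column);
        return (raw == null) ? null : new String(raw, charsetName);
    }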
    Finally, if anyone knows why this conversion failure happens, please kindly let me know by leaving a word at [email protected]
    Again, thanks to those of you who tried to help me out.
    Creambun

  • SQL Exception: Fail to convert between UTF8 and UCS2: failUTF8Conv

    Hi,
    I am trying to use the DBMS_OBFUSCATION_TOOLKIT to encrypt/decrypt some strings through JDBC, but I am getting the following exception:
    SQL Exception: Fail to convert between UTF8 and UCS2: failUTF8Conv
    The input and output parameters for the encryption/decryption functions are VarChar2.
    Our database is using UTF8.
    I will be glad if someone can help me on this.
    Thanks,
    Cenk

    Susi,
    This is just a wild guess, but is your Java client on a different computer from your Oracle 9.2 server, with different locales (or encodings) on the two computers? If so, then I guess you need to change the locales (or encodings) so that they match.
    Have you tried printing out the "System" properties for the two JVMs (Oracle's and your client's)? Something simple like this:
    System.getProperties().list(System.out);
    Good Luck,
    Avi.
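    A runnable version of that one-liner, as a sketch only (the extra properties printed are simply the ones most likely to differ between the two machines):

    public class ShowJvmEncoding {
        public static void main(String[] args) {
            // Dump everything, then call out the properties that drive character conversion.
            System.getProperties().list(System.out);
            System.out.println("file.encoding = " + System.getProperty("file.encoding"));
            System.out.println("user.language = " + System.getProperty("user.language"));
            System.out.println("user.region   = " + System.getProperty("user.region", "n/a"));
        }
    }

    Run it once in the client JVM and once in the server-side JVM and compare the output.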

  • JBO-27022, Fail to convert between UTF8 and UCS2: failUTF8Conv

    Steps to reproduce:
    On a database with UTF8 as NLS_CHARACTERSET:
    create table test (s1 varchar2(10), s2 varchar2(32));
    DECLARE
      l_data      varchar2(4000);
      return_data varchar2(4000);
      skey        varchar2(255);
      p_str       varchar2(10);
    BEGIN
      p_str := 'Test';
      skey  := chr(20) || chr(115) || chr(110) || chr(122) || chr(37) || chr(37) || chr(94) || chr(48);
      l_data := rpad(p_str, (trunc(length(p_str)/8)+1)*8, chr(0));
      dbms_obfuscation_toolkit.DESEncrypt(input_string => l_data, key_string => skey,
                                          encrypted_string => return_data);
      insert into test values ('1', return_data);
    END;
    commit;
    Now create a new Workspace and a new "Business Components Package"
    with only table test selected.
    In the tester we get an oracle.jbo.AttributeLoadException:
    (oracle.jbo.AttributeLoadException) JBO-27022: Failed to load value at index 2
    with java object of type java.lang.String due to java.sql.SQLException.
    ----- LEVEL 1: DETAIL 0 -----
    (java.sql.SQLException) Fail to convert between UTF8 and UCS2: failUTF8Conv

    This is still happening in jdev9031.
    And it is very easy to reproduce!
    How can we avoid this error?
    Thanks,
    Bert.
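    For plain JDBC access, the getBytes workaround described earlier in this thread sidesteps the conversion for this reproduction as well; a minimal sketch, assuming the test table created above and placeholder connection details (it does not change what BC4J itself does):

    import java.sql.*;

    public class ReadEncryptedColumn {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT s1, s2 FROM test")) {
                while (rs.next()) {
                    // getString("s2") can raise the failUTF8Conv error because the DES
                    // output is arbitrary bytes, not valid UTF8; getBytes skips the
                    // character-set conversion entirely.
                    byte[] encrypted = rs.getBytes("s2");
                    System.out.println(rs.getString("s1") + ": " + encrypted.length + " bytes");
                }
            }
        }
    }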

  • Help me,Fail to convert between UTF8 and UCS2: failUTF8Conv

    Using Sun App Server 7.0 + Studio 4 + Oracle9i to develop CMP entity beans, when I input Chinese in some fields I encounter the following errors:
    java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:180)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:222)
    at oracle.jdbc.dbaccess.DBError.check_error(DBError.java:916)
    at oracle.jdbc.dbaccess.DBConversion.failUTF8Conv(DBConversion.java:1958)
    at oracle.jdbc.dbaccess.DBConversion.utf8BytesToJavaChars(DBConversion.java:1797)
    at oracle.jdbc.dbaccess.DBConversion.charBytesToJavaChars(DBConversion.java:828)
    at oracle.jdbc.dbaccess.DBConversion.CHARBytesToJavaChars(DBConversion.java:783)
    at oracle.jdbc.ttc7.TTCItem.getChars(TTCItem.java:231)
    at oracle.jdbc.dbaccess.DBDataSetImpl.getCharsItem(DBDataSetImpl.java:1094)
    at oracle.jdbc.driver.OracleStatement.getCharsInternal(OracleStatement.java:2947)
    at oracle.jdbc.driver.OracleStatement.getStringValue(OracleStatement.java:3103)
    at oracle.jdbc.driver.OracleStatement.getObjectValue(OracleStatement.java:5089)
    at oracle.jdbc.driver.OracleStatement.getObjectValue(OracleStatement.java:4964)
    at oracle.jdbc.driver.OracleResultSetImpl.getObject(OracleResultSetImpl.java:404)
    at com.sun.jdo.spi.persistence.support.sqlstore.ResultDesc.getConvertedObject(ResultDesc.java:399)
    at com.sun.jdo.spi.persistence.support.sqlstore.ResultDesc.setFields(ResultDesc.java:746)
    at com.sun.jdo.spi.persistence.support.sqlstore.ResultDesc.getResult(ResultDesc.java:635)
    at com.sun.jdo.spi.persistence.support.sqlstore.SQLStoreManager.executeQuery(SQLStoreManager.java:648)
    at com.sun.jdo.spi.persistence.support.sqlstore.SQLStoreManager.retrieve(SQLStoreManager.java:500)
    at com.sun.jdo.spi.persistence.support.sqlstore.SQLStateManager.reload(SQLStateManager.java:1197)
    at com.sun.jdo.spi.persistence.support.sqlstore.SQLStateManager.loadForRead(SQLStateManager.java:3797)
    at com.sun.jdo.spi.persistence.support.sqlstore.impl.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:604)
    at com.sun.jdo.spi.persistence.support.sqlstore.impl.PersistenceManagerWrapper.getObjectById(PersistenceManagerWrapper.java:247)
    at com.tops.gdgpc.EntityBean.SpeBaseTable.SpeBaseTableBean_1769729755_ConcreteImpl.jdoGetInstance(SpeBaseTableBean_1769729755_ConcreteImpl.java:2479)
    at com.tops.gdgpc.EntityBean.SpeBaseTable.SpeBaseTableBean_1769729755_ConcreteImpl.ejbLoad(SpeBaseTableBean_1769729755_ConcreteImpl.java:2267)
    at com.sun.ejb.containers.EntityContainer.callEJBLoad(EntityContainer.java:2372)
    at com.sun.ejb.containers.EntityContainer.afterBegin(EntityContainer.java:1362)
    at com.sun.ejb.containers.BaseContainer.startNewTx(BaseContainer.java:1405)
    at com.sun.ejb.containers.BaseContainer.preInvokeTx(BaseContainer.java:1313)
    at com.sun.ejb.containers.BaseContainer.preInvoke(BaseContainer.java:462)
    at com.tops.gdgpc.EntityBean.SpeBaseTable.SpeBaseTableBean_1769729755_ConcreteImpl_EJBObjectImpl.getSpeBaseTable(SpeBaseTableBean_1769729755_ConcreteImpl_EJBObjectImpl.java:24)
    at com.tops.gdgpc.EntityBean.SpeBaseTable._SpeBaseTable_Stub.getSpeBaseTable(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at com.sun.forte4j.j2ee.ejbtest.webtest.InvocableMethod$MethodIM.invoke(InvocableMethod.java:233)
    at com.sun.forte4j.j2ee.ejbtest.webtest.EjbInvoker.getInvocationResults(EjbInvoker.java:98)
    at com.sun.forte4j.j2ee.ejbtest.webtest.DispatchHelper.getForward(DispatchHelper.java:191)
    at jasper.dispatch_jsp._jspService(_dispatch_jsp.java:127)
    at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:107)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at com.iplanet.ias.web.jsp.JspServlet$JspServletWrapper.service(JspServlet.java:552)
    at com.iplanet.ias.web.jsp.JspServlet.serviceJspFile(JspServlet.java:368)
    at com.iplanet.ias.web.jsp.JspServlet.service(JspServlet.java:287)
    at javax.servlet.http.HttpServle
    I think it is caused by the fields not being long enough. UTF8 needs 3 bytes for one Chinese character, but in our database there are only 2 bytes per Chinese character. How can I resolve this puzzle? Please help me!

    I've got some more tips from our experts:
    1. One thing you can try is to make sure that the Oracle database is set to UTF-8. If this is the case, then you need to make sure that the client encoding is UTF-8 as well (see the sketch after this list for one way to check).
    2. A colleague at Oracle points out that oracle.jdbc.* is a package that Oracle
    provides, not Sun. He says you might try asking this question on an Oracle
    forum. He writes:
    We have an Oracle Technology Network (OTN) discussion forum, which would be a good place to
    start with this question:
    Oracle Technology Network (OTN) > Technologies > Java > SQLJ/JDBC
    Try the SQLJ/JDBC forum first.
    -- markus.
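    A sketch of that first check over JDBC (connection details are placeholders; NLS_DATABASE_PARAMETERS is the standard view that holds the database character set):

    import java.sql.*;

    public class ShowNlsCharset {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT parameter, value FROM nls_database_parameters " +
                     "WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
                // The client side of the conversion is driven by the JVM's default encoding.
                System.out.println("client file.encoding = " + System.getProperty("file.encoding"));
            }
        }
    }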

  • Failed to convert between UTF8 and UCS2: failUTF8Conv

    This error appears when trying to install the database cache. Does this have something to do with the classes111.zip or classes12.zip files?

    I have the same problem on XP. I installed the whole Oracle9i product on my desktop and after a day or two the message began appearing on my machine. I couldn't use any of the tools. I tried uninstalling according to Oracle's instructions and after installing again had the same problem.
    I logged a TAR and they advised me to reinstall, but following Oracle's instructions for removing the software first.
    Will keep you posted.

  • Fail to convert between UTF8 and UCS2

    : java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
    (BC4J throws that exception)
    I got that error message when trying to show the table's contents in a UIX page.
    I use varchar2(40). The error occurs when I enter special characters like õ, ü, Ü and so on.
    This affects only the UTF encoded database.
    We use: 9.0.2.5, JDeveloper 9.0.3.3, UIX, iAS 9.4.2(?) (I don't know exactly).
    As far as I know, the problem is that when there are 40 letters in the column and it contains a "special character", it can't be converted, because the space required to store the values is much larger.
    If I use nvarchar, I don't think it fixes this problem.
    And as far as I know, all SQL constants must then use the where column=N'constant value' format.
    Q:
    How can I set up the UTF database and BC4J to run correctly?
    I mean... what type to use, and what column size to set.
    Thanks in advance,
    Viktor
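    One commonly used answer to the sizing part of this question is to declare the column with character-length semantics, so the limit counts characters rather than bytes; a sketch only, with a hypothetical table and column name since the post does not give them:

    import java.sql.*;

    public class UseCharSemantics {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // Hypothetical table/column: VARCHAR2(40 CHAR) means 40 characters, so a
                // multi-byte UTF8 character no longer eats into a 40-byte budget.
                stmt.executeUpdate(
                    "ALTER TABLE my_table MODIFY (my_column VARCHAR2(40 CHAR))");
            }
        }
    }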

    I had the same problem using Oracle 9i. The problem lay within the Oracle JDBC driver itself! --;
    If you're using the JDBC driver shipped with Oracle 9i, stop using it!
    You should download the JDBC driver for Oracle 10g from the Oracle site and use that driver instead.
    After I changed the driver, I no longer have any problem getting Korean characters from the database.
    Hope this solves your problem too.

  • Fail to Convert between UTF8 and UCS2: fail UTF Conversion

    I'm using the JReport software, a product which generates Java reports based on a JDBC data source.
    During installation the JReport installer located the Microsoft Java VM installed on my PC and I accepted this option as the working JVM for JReport.
    This worked very well. I installed the JDBC driver for Oracle 8.1.5, which also worked fine at design time (I connected to Oracle and could see my tables and other information in my DB).
    But when I switched to the run-time view (when real data is going to load), the following exception message appeared:
    "Fail to Convert between UTF8 and UCS2: fail UTF Conversion".
    If there is anybody who understands my problem, any kind of help will be appreciated.
    It would be helpful to receive any suggestions on this matter.
    Any relevant help is welcome.
    Thanks in advance.

    I had the same problem using Oracle 9i. The problem lay within the Oracle JDBC driver itself! --;
    If you're using the JDBC driver shipped with Oracle 9i, stop using it!
    You should download the JDBC driver for Oracle 10g from the Oracle site and use that driver instead.
    After I changed the driver, I no longer have any problem getting Korean characters from the database.
    Hope this solves your problem too.

  • Fail to convert between UTF8 and UCS2: failUTFConversion

    Hi,
    I use Oracle 8.1.5 with NLS = 'CL8MSWIN1251' (Russian),
    JDeveloper 3.0 and JDBC.
    If I write "select substr(str, 1, 10) from table"
    I get no exception, but instead of 10 characters I get only 5.
    If I write "select substr(str, 1, 11) from table"
    I get an exception.
    If I write "select convert(str, 'CL8MSWIN1251') from table"
    I have no problem at all.
    What is the problem? How can I get
    "select str from table" to work correctly?
    Igor Leonov

    Hi!
    It's an Oracle 7.3 server problem - wrong UTF conversion. Try to use Oracle 8.

  • JDeveloper3.1.1.2 problem:convert from UTF8 to UCS2 failed

    Oracle team:
    I am testing JDeveloper 3.1.1.2 and it has a problem at runtime: conversion from UTF8 to UCS2 failed, AttributeLoadException (our language is Chinese).
    I found that oracle\jbo\server\QueryCollection.class in dacf.zip may have a problem. I used the same-named class from JDeveloper 3.1 to replace it in JDeveloper 3.1.1.2, and the above problem disappeared,
    but because that class is not suited to JDeveloper 3.1.1.2, other problems appeared.
    So I hope you can work this problem out so that
    it runs correctly.

    I searched this forum and the SQLJ/JDBC forum, and found a few occurrences of this problem. Among the things people suggested:
    * Changing JDBC drivers (experience varied as to which one fixed the problem; see the version-check sketch at the end of this post)
    * Adding nls_charset1x.zip to your CLASSPATH
    * Ensuring you're using the same character set on the client and server
    I suggest you take a look at the following discussion threads: http://technet.oracle.com:89/ubb/Forum8/HTML/001810.html http://technet.oracle.com:89/ubb/Forum8/HTML/000065.html http://technet.oracle.com:89/ubb/Forum2/HTML/000820.html
    Blaise
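    When trying the driver-swap suggestion, it helps to confirm which driver the application is actually loading; a sketch using standard DatabaseMetaData calls (connection details are placeholders):

    import java.sql.*;

    public class ShowDriverVersion {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password")) {
                DatabaseMetaData md = conn.getMetaData();
                // Confirms which JDBC driver was actually picked up from the CLASSPATH.
                System.out.println("Driver:   " + md.getDriverName() + " " + md.getDriverVersion());
                System.out.println("Database: " + md.getDatabaseProductVersion());
            }
        }
    }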

  • Cannot convert between unicode and non-unicode string datatypes

    My source has 3 fields:
    ItemCode nvarchar(50)
    DivisionCode nvarchar(50)
    Salesplan (float)
    My destination is:
    ItemCode nvarchar(50)
    DivisionCode nvarchar(50)
    Salesplan (float)
    But I am still getting this error:
    Column ItemCode cannot convert between unicode and non-unicode string datatypes.
    As I am new to SSIS, please show me step by step.
    Thanks in advance.

    Hi Subu,
    There is an information gap here: what is your source? Are there any transformations in between?
    If it's a SQL Server source and destination and the data types are as you have mentioned, I don't think you should be getting such errors. To be sure, check the advanced properties of your source and check the metadata of your source columns.
    Just check a simple OLE DB source such as:
    SELECT TOP 1 ItemCode = cast('111' as nvarchar(50)), DivisionCode = cast('222' AS nvarchar(50)), Salesplan = cast(3.3 As float) FROM sys.sysobjects
    and the destination as you mentioned... it should work.
    Somewhere in your package the source column metadata is not right, and you need to convert it or fix the source.
    Hope that helps
    -- Kunal

  • Reference Library for Converting Between LabVIEW and XML Data (GXML)

    Please provide feedback, comments and questions on the Reference Library for Converting Between LabVIEW and XML Data (GXML) in this thread.
    The latest version of the NI GXML Library is available in VIPM on the NI LabVIEW Tools Network repository.

    Francesco, thank you for the feedback. With this component it was my intention to make a more "terse" version of the LabVIEW Flatten to XML VI that was also supported on RT and that gave the user more flexibility regarding the structure of the parsing type definition.
    I think you are right that the XML parser is not compliant with section 2.11 of the XML spec. The parser does specifically look for a #D#A, and this appears to be an oversight on my part. Please confirm for me: the specification is saying that the XML parser should be able to recognize three possibilities as an "end of line" character: #D#A, #D, or #A. Am I reading this right?
    There are more efficient (and in some cases much more efficient) ways of sharing data between LabVIEW and LabVIEW: some examples are flattened binary strings and the datalog binary format. XML is slower than these options, but the upside is that it is human readable. Furthermore, XML is inherently hierarchical, which is convenient for complex data structures like clusters of arrays of clusters, etc. If you don't care about human readability, then you are correct, XML doesn't make as much sense.
    I will return to the GXML source code and try to fix this in the near future, but I would hope that instead of creating yet another custom VI from scratch you could reuse what I have provided for you. I included enough documentation in the source code so that users could make some modifications themselves.
    The target application for this reference library was LabVIEW to LabVIEW communication. As such, I documented the schema in the Dev Zone document from a LabVIEW perspective. It includes all the supported datatypes and all the supported data structures (clusters, arrays, multidimensional arrays, clusters of multidimensional arrays, etc.). I do see some value in making a more conventional XML spec, but the time investment required didn't really line up with my intended use case.
    Were there any other downsides to GXML that I have missed?
    Best Regards,
    Jeff Tipps, Systems Engineer - Sound and Vibration
    Message Edited by Jeff T. on 04-21-2010 10:09 AM

  • SSIS Package : While Extracting Sharepoint Lookup column, getting error 'Cannnot convert between unicode and non-unicode string data types'

    Hello,
    I am working on a project where I need to extract SharePoint list data and import it into a SQL Server table. I have a few lookup columns in the list.
    Steps in my Data Flow :
    Sharepoint List Source
    Derived Column
    its formula : SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1))
    Data Conversion
    OLE DB Destination
    But I am getting the error about not being able to convert between unicode and non-unicode string data types.
    I am not sure what I am missing here.
    In Data Conversion, what should be the data type for the lookup column?
    Please suggest.
    Thank you,
    Mittal.

    You have a data conversion transformation. Now, in the destination, are you assigning the results of the derived column transformation or the data conversion transformation? To avoid this error you need to use the data conversion output.
    You can eliminate the need for the data conversion with the following in the derived column (creating a new column):
    (DT_STR,100,1252)(SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1)))
    The 100 is the length and 1252 is the code page (I almost always use 1252) for interpreting the string.
    Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com

  • Column "A" cannot convert between unicode and non-unicode string data types

    I am following the SSIS overview video:
    https://secure.cbtnuggets.com/it-training-videos/series/microsoft-sql-server-2008-business-development/6143?autostart=true
    I have a flat file whose contents I want to import into a SQL database.
    I created a Data Flow task, a source file and an OLE DB destination.
    I am getting the following error:
    "column "A" cannot convert between unicode and non-unicode string data types"
    In the origin file the data type comes through as string [DT_STR] and in the destination object it comes through as Unicode string [DT_WSTR].
    I used a data conversion object in between, but it doesn't work very well.
    Please help; what should I do?

    I see this often.
    Right Click on FlatFileSource --> Show Advanced Editor --> 'Input and Output Properties' tab --> Expand 'Flat File Source Output' --> Expand 'Output Columns' --> Select your field and set the datatype to DT_WSTR.
    Let me know if you still have issues.
    Thank You,
    Jay

  • Column cannot convert between unicode and non-unicode string data types

    I am converting SSIS jobs from SQL Server 2005 running on a Windows 2003 server to 2008 R2 running on a Windows 2008 server. I have a data flow with an OLE DB Source which selects from an Oracle view. This of course worked fine in
    2005. This OLE DB Source will not even read the data from Oracle without the error "Column "UWI" cannot convert between unicode and non-unicode". The select is:
    SELECT SOME_VIEW.UWI AS UWI,
                 CAST(SOME_VIEW.OIL_NET AS NUMERIC(9,8)) AS OIL_NET
    FROM SOME_SCHEMA.SOME_VIEW
    WHERE OIL_NET IS NOT NULL AND UWI IS NOT NULL
    ORDER BY UWI
    When I do "Show Advanced Editor" on this component, in the Input and Output Properties, the OLE DB External Column shows as DT_STR length 40 for the UWI column, and under the Output Columns I see UWI as the same DT_STR.
    How can I get past this? I have tried doing a cast... cast(SOME_VIEW.UWI AS VARCHAR(40)) AS UWI and this gives the same error. The column in Oracle is a varchar2(40).
    Any help is greatly appreciated. Thanks.

    Please check the data type for UWI using the advanced editor for the OLE DB Source, under
    external columns and output columns. Are the data types the same?
    If not, try changing the data type (under output columns) to match the data type shown under
    external columns.
    Nitesh Rai - Please mark the post as answered if it answers your question
