UTF16

Hi,
Does anybody know of a JDBC type 1 or type 4 driver for MS Access that supports UTF16?

All do. Just use NCHAR/NVARCHAR on the server.
Alin.

Similar Messages

  • Error in beginning a session when using UTF16 mode in OCIEnvNlsCreate

    I wrote this code:
    OCIEnv* envhp;
    OCIError* errhp;
    OCIServer* srvhp;
    OCISvcCtx* svchp;
    OCISession* usrhp;
    envhp = (OCIEnv *) 0;
    errhp = (OCIError *) 0;
    srvhp = (OCIServer *) 0;
    svchp = (OCISvcCtx *) 0;
    usrhp = (OCISession *) 0;
    ub4 mode = OCI_DEFAULT;
    char* dblink = /* DBLINK */;
    sword status;
    /* create a UTF-16 environment: charset and ncharset both OCI_UTF16ID */
    OCIEnvNlsCreate(&envhp, mode, (dvoid *)0,
                    (dvoid *(*)(dvoid *, size_t))0,
                    (dvoid *(*)(dvoid *, dvoid *, size_t))0,
                    (void (*)(dvoid *, dvoid *))0,
                    (size_t)0, (dvoid **)0,
                    (ub2)OCI_UTF16ID, (ub2)OCI_UTF16ID);
    OCIHandleAlloc((dvoid*)envhp, (dvoid**)&errhp, OCI_HTYPE_ERROR, (size_t)0, (dvoid**)0);
    OCIHandleAlloc((dvoid*)envhp, (dvoid**)&srvhp, OCI_HTYPE_SERVER, (size_t)0, (dvoid**)0);
    status = OCIServerAttach(srvhp, errhp, (text*)dblink, (sb4)strlen(dblink), OCI_DEFAULT);
    if (status != OCI_SUCCESS)
        return 0;
    char* username = /* username */;
    char* password = /* password */;
    OCIHandleAlloc((dvoid*)envhp, (dvoid**)&svchp, OCI_HTYPE_SVCCTX, (size_t)0, (dvoid**)0);
    OCIAttrSet((dvoid*)svchp, OCI_HTYPE_SVCCTX, (dvoid*)srvhp, (ub4)0, OCI_ATTR_SERVER, errhp);
    OCIHandleAlloc((dvoid*)envhp, (dvoid**)&usrhp, OCI_HTYPE_SESSION, (size_t)0, (dvoid**)0);
    OCIAttrSet((dvoid*)usrhp, OCI_HTYPE_SESSION, (dvoid*)username, (ub4)strlen(username), OCI_ATTR_USERNAME, errhp);
    OCIAttrSet((dvoid*)usrhp, OCI_HTYPE_SESSION, (dvoid*)password, (ub4)strlen(password), OCI_ATTR_PASSWORD, errhp);
    status = OCISessionBegin(svchp, errhp, usrhp, OCI_CRED_RDBMS, OCI_DEFAULT);
    When I use OCI_UTF16ID in OCIEnvNlsCreate it gives me an error in OCISessionBegin, but when I use 871 (UTF8) instead it works properly. Why does this happen? How can I work with OCI_UTF16ID?

    Re: OCIObjectSetAttr() and UTF16 environment might be of interest, even though it's a little different. Still relates to UTF16 though. --DD
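    For what it's worth, the usual culprit (an assumption worth checking against the OCI globalization docs): in a UTF-16 environment, every text attribute, including the username and password passed to OCIAttrSet(), must itself be UTF-16 encoded with its length given in bytes; the plain char* / strlen() pair above is ANSI. A minimal sketch, with placeholder credentials:
    /* Sketch only: logging in under OCI_UTF16ID. Assumption: all text
     * attributes must be UTF-16 encoded, with lengths in bytes. */
    #include <oci.h>

    /* UTF-16 literals, native byte order assumed; in production you would
     * convert with the OCI NLS conversion routines or use utext buffers. */
    static const ub2 user[] = { 's','c','o','t','t', 0 };   /* placeholder */
    static const ub2 pass[] = { 't','i','g','e','r', 0 };   /* placeholder */

    static ub4 utf16_bytes(const ub2 *s)    /* byte length, excluding NUL */
    {
        ub4 n = 0;
        while (s[n]) n++;
        return n * (ub4)sizeof(ub2);
    }

    /* ... after allocating usrhp as in the original code ... */
    void set_credentials(OCISession *usrhp, OCIError *errhp)
    {
        OCIAttrSet((dvoid*)usrhp, OCI_HTYPE_SESSION, (dvoid*)user,
                   utf16_bytes(user), OCI_ATTR_USERNAME, errhp);
        OCIAttrSet((dvoid*)usrhp, OCI_HTYPE_SESSION, (dvoid*)pass,
                   utf16_bytes(pass), OCI_ATTR_PASSWORD, errhp);
    }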

  • OCIObjectSetAttr() and UTF16 environment

    Hi,
    I'm currently implementing Oracle Named Types support (SQL_NTY) in the OCILIB library and I'm facing a weird problem!
    To manipulate object attributes, I'm using OCIObjectSetAttr() and OCIObjectGetAttr().
    Everything's fine when the environment handle is created in normal ANSI mode.
    But when it's created with the OCI_UTF16 flag to set up a Unicode environment, trouble starts!
    In a UTF16 env, calls to OCIObjectSetAttr() and OCIObjectGetAttr() do not return any error, but any call to OCIObjectSetAttr() will smash up all the handles previously retrieved with OCIObjectGetAttr().
    For example, for an object that has 2 fields (let's say an int and a timestamp), if I retrieve the timestamp handle and then set the int value:
    * in ANSI, no problem
    * in Unicode, the timestamp handle is scrambled and any use of it with a function that takes a timestamp handle then crashes!
    If the object is created without having its attributes modified, it is inserted fine through a SQL statement.
    The problem seems to be OCIObjectSetAttr(). All handles retrieved before the OCIObjectSetAttr() are OK, and when OCIObjectSetAttr() returns for other fields, these handles are scrambled!
    I checked clients 9, 10 and 11 and I got the same result: OK in ANSI and handles smashed up in Unicode!
    Any ideas? Posting some code is quite difficult because it uses internal library calls. But if anybody wants to review it, I'll send it straight away!
    I checked all the strings involved in the relevant portions of code but everything looks regular...
    There are hardly any resources on OCIObjectSetAttr() and I couldn't find anything about its use in a UTF16 env.
    Thanks in advance for reading these lines...
    Vincent.

    Hi Jonah,
    I can't recompile a full OCILIB package on Linux right now, but I made a test project with raw OCI code and it gives the same result: values are smashed up!
    (BTW: how do you highlight C syntax in posts?)
    Here is the test code. First, create the test type:
    SQL> create type type_test as object (v1 int, v2 date);
    Here is the raw OCI test code. With GCC on Linux, add -fshort-wchar to compile it:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "oci.h"
    #include "orid.h"

    #define OCI_TEST_UNICODE

    #ifdef OCI_TEST_UNICODE
      #include "wchar.h"
      #define ENV_MODE OCI_UTF16
      #define CHARSIZE sizeof(wchar_t)
      #define TS(x) L ## x
      #define tchar wchar_t
    #else
      #define ENV_MODE OCI_DEFAULT
      #define CHARSIZE sizeof(char)
      #define TS(x) x
      #define tchar char
    #endif

    int tslen(tchar *s)
    {
        int n = 0;
        while (*s++) n++;
        return n;
    }

    #define tsize(x) (tslen(x) * sizeof(tchar))

    int main(int argc, char **argv)
    {
        /* OCI handles */
        OCIEnv *p_env    = NULL;
        OCIError *p_err  = NULL;
        OCISvcCtx *p_svc = NULL;
        /* OCI types */
        OCIType *dto     = NULL;
        OCINumber *p_num = NULL;
        void *obj        = NULL;
        /* misc */
        OCIInd ind = 0;
        int value = 1;
        int rc = OCI_SUCCESS;
        /* OCI value placeholders */
        OCIDate *date;
        OCINumber num;
        /* attribute info */
        tchar* attr1_name[1] = {TS("V1")};
        ub4 attr1_len[1] = {2*CHARSIZE};
        tchar* attr2_name[1] = {TS("V2")};
        ub4 attr2_len[1] = {2*CHARSIZE};
        tchar *user = TS("winrest");
        tchar *pwd  = TS("fsi");
        tchar *db   = TS("maison");
        tchar *type = TS("TYPE_TEST");

        /* Initialize OCI */
        rc = OCIEnvCreate((OCIEnv **) &p_env,
                          ENV_MODE | OCI_THREADED | OCI_OBJECT,
                          NULL, NULL, NULL, NULL, 0, NULL);
        /* Initialize handles */
        rc = OCIHandleAlloc((dvoid *) p_env, (dvoid **) &p_err, OCI_HTYPE_ERROR,
                            (size_t) 0, (dvoid **) 0);
        rc = OCIHandleAlloc((dvoid *) p_env, (dvoid **) &p_svc, OCI_HTYPE_SVCCTX,
                            (size_t) 0, (dvoid **) 0);
        /* Connect to database server */
        rc = OCILogon(p_env, p_err, &p_svc,
                      (const OraText *) user, tsize(user),
                      (const OraText *) pwd, tsize(pwd),
                      (const OraText *) db, tsize(db));
        if (rc)
        {
            printf("connection error !");
            return 0;
        }
        printf("connected...\n");
        /* get type info */
        rc = OCITypeByName(p_env, p_err, p_svc, (CONST text *) NULL, (ub4) 0,
                           (CONST text *) type, tsize(type), (CONST text *) NULL, (ub4) 0,
                           OCI_DURATION_SESSION, OCI_TYPEGET_ALL, &dto);
        /* create object */
        rc = OCIObjectNew(p_env, p_err, p_svc, SQLT_NTY, dto, NULL, OCI_DURATION_SESSION,
                          TRUE, (dvoid **) &obj);
        /* get date attribute handle */
        rc = OCIObjectGetAttr(p_env, p_err, obj, NULL, dto,
                              (CONST text**) attr2_name, attr2_len,
                              1, NULL, 0, &ind, NULL, (void **) &date, NULL);
        /* set date value */
        OCIDateSysDate(p_err, date);
        /* print date value */
        printf("date before setting int val : %02i/%02i/%04i\n", date->OCIDateDD,
               date->OCIDateMM, date->OCIDateYYYY);
        /* set int value */
        rc = OCINumberFromInt(p_err, &value, sizeof(int), OCI_NUMBER_SIGNED, &num);
        ind = 0;
        rc = OCIObjectSetAttr(p_env, p_err, obj, NULL, dto,
                              (CONST text**) attr1_name, attr1_len,
                              1, NULL, 0, ind, NULL, &num);
        /* get it back for checking */
        value = 0;
        rc = OCIObjectGetAttr(p_env, p_err, obj, NULL, dto,
                              (CONST text**) attr1_name, attr1_len,
                              1, NULL, 0, &ind, NULL, (void **) &p_num, NULL);
        rc = OCINumberToInt(p_err, p_num, sizeof(int), OCI_NUMBER_SIGNED, &value);
        /* print int value */
        printf("int value : %i\n", value);
        /* print date value again */
        printf("date after setting int val : %02i/%02i/%04i\n", date->OCIDateDD,
               date->OCIDateMM, date->OCIDateYYYY);
        /* free object */
        rc = OCIObjectFree(p_env, p_err, obj, 0);
        /* Disconnect */
        rc = OCILogoff(p_svc, p_err);
        /* Free handles */
        rc = OCIHandleFree((dvoid *) p_svc, OCI_HTYPE_SVCCTX);
        rc = OCIHandleFree((dvoid *) p_err, OCI_HTYPE_ERROR);
        rc = OCIHandleFree((dvoid *) p_env, OCI_HTYPE_ENV);
        return 1;
    }
    If OCI_TEST_UNICODE is not defined, I get this (working):
    date before setting int val : 03/02/2008
    int value : 1
    date after setting int val : 03/02/2008
    And if it's defined (smashed values):
    date before setting int val : 03/02/2008
    int value : 1
    date after setting int val : 204/02/-16126
    Message was edited by:
    Vicenzo : modified declaration of CHARSIZE
    Message was edited by:
    Vicenzo : Fixed code to compile on Linux

  • (Oracle 8 and 9) UTF16 char stream and OCIStmtPrepare

    Hi,
    I am using Oracle 8.1.7 on Windows (and should soon use Oracle 9.2) and Oracle 10g on Mac OS X.
    I have UTF16 strings in buffers containing SQL commands (like CREATE TABLE, INSERT, ...). Some of the strings contain some really weird characters (such as the euro currency sign).
    I want to call OCIStmtPrepare then OCIStmtExecute, like I usually do with basic ASCII C strings.
    My concern is that I cannot figure out what is expected in the CONST text * argument of OCIStmtPrepare; I mean, how should I encode the SQL commands to have it accept them?
    Does anybody know what encoding we are supposed to provide with Oracle 8.1.7 and Oracle 9.2? Is there a parameter somewhere?
    Thanks for your help.
    cd

    Hi,
    Thanks, but this is not of great help.
    I am not talking about type casting (I also discovered that text is actually an unsigned char type); my concern is about encoding.
    Within an unsigned char string, you can encode the same string differently: take the e-acute character, it won't be encoded the same way in UTF8 as in ISO-8859-1, etc...
    Just take a look at the "Character encoding" menu of your web browser, if it is a modern one like Firefox, and you'll see where my concern is.
    Thanks for answering,
    cd
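    For what it's worth, a sketch of the UTF-16 route on the 9.2 side (OCIEnvNlsCreate appeared in 9.2; on 8.1.7 the statement text is generally expected in the client character set given by NLS_LANG). The utext type and OCI_UTF16ID come from the OCI headers; the literal handling below is illustrative only:
    #include <oci.h>

    /* Sketch: preparing a UTF-16 statement in a UTF-16 environment.
     * Assumes envhp was created with
     * OCIEnvNlsCreate(..., OCI_UTF16ID, OCI_UTF16ID), so lengths are in bytes. */
    sword prepare_utf16(OCIStmt *stmtp, OCIError *errhp)
    {
        /* utext is OCI's UTF-16 code unit type */
        static const utext sql[] = { 'S','E','L','E','C','T',' ','1',' ',
                                     'F','R','O','M',' ','D','U','A','L', 0 };
        ub4 len = 0;
        while (sql[len]) len++;          /* count code units */
        len *= (ub4)sizeof(utext);       /* byte semantics with OCIEnvNlsCreate */

        return OCIStmtPrepare(stmtp, errhp, (const OraText *)sql, len,
                              OCI_NTV_SYNTAX, OCI_DEFAULT);
    }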

  • Problem loading Cyrillic characters with UTF16

    I am trying to use SQL*Loader to load data into a table that consists of American and Cyrillic columns. The Cyrillic columns are of type NVARCHAR2(300).
    The database NLS params are as follows: NLS_LANGUAGE = AMERICAN, NLS_TERRITORY = AMERICA, NLS_CHARACTERSET = WE8ISO8859P1 and NLS_NCHAR_CHARACTERSET = AL16UTF16. The SQL*Loader control file contains the lines:
    CHARACTERSET UTF16
    BYTEORDER BIG
    The data loads but I get question marks in the NVARCHAR2 columns. I tried setting NLS_LANG to AMERICAN_AMERICA.WE8ISO8859P1 and it still loads the same way. Do I need to change the NLS_CHARACTERSET to UTF16 as well? What else could I be missing?

    Please check the link below; it addresses the same problem as yours:
    http://www.experts-exchange.com/Database/Oracle/10.x/Q_23665329.html

  • Adobe air and utf16 databases

    Can I create a UTF8 SQLite database with Adobe AIR?

    I have noticed that...
    I have a UTF8 database created from a MySQL dump and a UTF16 database created from AIR.
    I want to attach the one to the other.
    As I understand it, there is no way to create a UTF8 database through AIR.
    Is there any tool or program to convert my UTF8 database to UTF16?
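    One workable approach, sketched against the SQLite C API rather than AIR itself: an SQLite file's text encoding is fixed at creation time, so converting means creating a fresh database with the target encoding and copying the data across. File and table names below are placeholders:
    #include <stdio.h>
    #include <sqlite3.h>

    /* Sketch: create a new database with the desired encoding, then copy
     * data over. PRAGMA encoding must run before anything is written. */
    int convert(const char *src_path, const char *dst_path)
    {
        sqlite3 *db;
        char *err = NULL;
        if (sqlite3_open(dst_path, &db) != SQLITE_OK)
            return 1;

        /* fix the new file's encoding first ('UTF-8' to go the other way) */
        sqlite3_exec(db, "PRAGMA encoding = 'UTF-16le';", NULL, NULL, &err);

        /* attach the source and copy; 'mytable' is a placeholder */
        char sql[512];
        snprintf(sql, sizeof sql,
                 "ATTACH DATABASE '%s' AS src;"
                 "CREATE TABLE mytable AS SELECT * FROM src.mytable;"
                 "DETACH DATABASE src;", src_path);
        sqlite3_exec(db, sql, NULL, NULL, &err);

        if (err) { fprintf(stderr, "%s\n", err); sqlite3_free(err); }
        sqlite3_close(db);
        return 0;
    }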

  • Extracting text (UTF8 and UTF16) from the SWF file format

    Hello.
    Just wondering if this newsgroup is the right one to talk about the 'open' SWF file format. I'm looking for a utility to pull out the text strings inside of a SWF file (both UTF8 and UTF16). The file format is open-sourced, so I guess that I could write something, but I remember there being some tools here when Adobe first open-sourced the format, but I'm not able to find the tools anymore.
    Any help?
    - Steve Webb

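    Failing a ready-made tool, here is a minimal sketch of walking the tag headers of an uncompressed ('FWS') SWF file per the published spec; text lives in the bodies of DefineText (tag 11), DefineText2 (33) and DefineEditText (37), which this skeleton only locates, not decodes:
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned u16le(const unsigned char *p) { return p[0] | (p[1] << 8); }

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        unsigned char hdr[8], buf[4];
        if (!f || fread(hdr, 1, 8, f) != 8) return 1;
        if (hdr[0] != 'F') { fprintf(stderr, "compressed SWF: inflate first\n"); return 1; }

        /* skip the variable-size frame RECT: 5 bits of nbits, then 4 fields */
        int c = fgetc(f);
        int nbits = c >> 3;
        int rectbits = 5 + 4 * nbits;
        fseek(f, (rectbits + 7) / 8 - 1, SEEK_CUR);
        fseek(f, 4, SEEK_CUR);          /* frame rate (u16) + frame count (u16) */

        /* tag headers: u16 = (code << 6) | length; length 0x3F => u32 follows */
        while (fread(buf, 1, 2, f) == 2) {
            unsigned v = u16le(buf), code = v >> 6;
            long len = v & 0x3F;
            if (len == 0x3F) {
                if (fread(buf, 1, 4, f) != 4) break;
                len = buf[0] | (buf[1] << 8) | ((long)buf[2] << 16) | ((long)buf[3] << 24);
            }
            if (code == 11 || code == 33 || code == 37)
                printf("text-bearing tag %u, %ld bytes\n", code, len);
            if (code == 0) break;       /* End tag */
            fseek(f, len, SEEK_CUR);
        }
        fclose(f);
        return 0;
    }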

  • Does NVARCHAR2 store UTF16 in a UTF8 database?

    Our database was created with AL32UTF8 as the character set. The national character set is AL16UTF16. Does this mean that all of our VARCHAR2s are stored as UTF8, but if we have a data type of NVARCHAR2 it will be stored as UTF16?
    What I really mean is: would an NVARCHAR2 allow the storage of a UTF16 multi-byte character that a VARCHAR2 wouldn't allow?

    If the national character set is AL16UTF16, yes, all data in NVARCHAR2 fields is encoded using the UTF-16 encoding. If the database character set is AL32UTF8, then all data in VARCHAR2 columns is encoded using UTF-8.
    Assuming your database character set is AL32UTF8 (things change slightly if the database character set is UTF8), any character that can be encoded in UTF-8 can be encoded in UTF-16. UTF-8 and UTF-16 are just different ways of encoding Unicode code points. The major difference between the two encodings is how many bytes are required to encode a particular character. With UTF-8, a single character requires between 1 and 4 bytes, with UTF-16, a single character requires either 2 or 4 bytes. If you have a lot of English data, the UTF-8 encoding can be a significant space saver (1 byte in UTF-8 vs. 2 bytes in UTF-16), if you have Japanese data, the UTF-16 encoding can be a significant space saver (3 bytes in UTF-8 vs. 2 bytes in UTF-16).
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
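    To make the byte counts concrete, a small standalone sketch (standard Unicode encoding rules, nothing Oracle-specific):
    #include <stdio.h>

    /* bytes needed to encode one Unicode code point */
    static int utf8_len(unsigned cp)  { return cp < 0x80 ? 1 : cp < 0x800 ? 2
                                             : cp < 0x10000 ? 3 : 4; }
    static int utf16_len(unsigned cp) { return cp < 0x10000 ? 2 : 4; }

    int main(void)
    {
        unsigned pts[] = { 0x41, 0xE9, 0x65E5, 0x1F600 };  /* A, é, 日, emoji */
        for (int i = 0; i < 4; i++)
            printf("U+%04X: %d bytes in UTF-8, %d in UTF-16\n",
                   pts[i], utf8_len(pts[i]), utf16_len(pts[i]));
        return 0;
    }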

  • UTF16 values in the trace

    I'm trying to trace application code that uses UTF16 bindings, with trace level 4 or 12. All string values show up like " " and I cannot see the actual values. At the same time, I can see the numeric values.
    What can I do to be able to see these values?
    I really need it for application troubleshooting purposes.
    Thanks a lot for the help.
    mj

    What's the OS?
    To be able to see the bind values in Unicode, your OS must support displaying Unicode characters.

  • Convert from utf16 to utf8 ?? er?

    Dear list,
    I have recently seen a sample that converts a UTF16 string to UTF8. I am a little bit confused: I thought UTF16 was a superset of UTF8. Could someone please explain why this is sometimes necessary?
    regards
    Ben

    > How can UTF16 be a superset of UTF8? I thought this relationship was
    > similar to ASCII and UTF8/UTF16, where for example the space character
    > has a value of 32 in ASCII and in Unicode (UTF8 and UTF16)... This
    > being the case, there is not much need for a UTF8 to UTF16 conversion
    > program.
    I didn't say it was a superset. It is a different way of representing the same thing.
    > You say that UTF16 is ALWAYS 2 bytes, and UTF8 is usually 8 bits but
    > is variable when necessary. Is UTF16 not a variable byte character set?
    No.
    > According to this, the names UTF8 and UTF16 are somewhat misleading,
    > as they are NOT always 8 or 16 bits.
    And "java" is neither an island nor a beverage. The name does not convey the entirety of the subject.
    > For some characters the first byte (or 2) is an 'escape' byte, which
    > means that more bytes are needed.
    What do you mean by first byte (or 2)? Escape byte?
    When something sees a given specific byte, it knows that a certain number of bytes are needed after it to fully represent the character.
    > I am still not convinced
    Convinced? If you do not find my explanation satisfactory then you might try writing some code that converts to UTF16 and UTF8 using String.getBytes(String).
    You might also try to find the character set definitions.
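    For the record, the conversion itself is mechanical. A sketch in C of UTF-16 to UTF-8 (the standard algorithm, not tied to any library; note that UTF-16 does use two 16-bit code units, a surrogate pair, for code points above U+FFFF, so a converter must handle that case):
    #include <stdint.h>
    #include <stddef.h>

    /* Convert UTF-16 (native byte order) to UTF-8. Returns bytes written.
     * 'out' must be large enough (worst case 4 bytes per code point). */
    size_t utf16_to_utf8(const uint16_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; i++) {
            uint32_t cp = in[i];
            /* combine a surrogate pair into one code point */
            if (cp >= 0xD800 && cp <= 0xDBFF && i + 1 < n &&
                in[i+1] >= 0xDC00 && in[i+1] <= 0xDFFF) {
                cp = 0x10000 + ((cp - 0xD800) << 10) + (in[++i] - 0xDC00);
            }
            if (cp < 0x80) {                       /* 1 byte  */
                out[o++] = (uint8_t)cp;
            } else if (cp < 0x800) {               /* 2 bytes */
                out[o++] = (uint8_t)(0xC0 | (cp >> 6));
                out[o++] = (uint8_t)(0x80 | (cp & 0x3F));
            } else if (cp < 0x10000) {             /* 3 bytes */
                out[o++] = (uint8_t)(0xE0 | (cp >> 12));
                out[o++] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
                out[o++] = (uint8_t)(0x80 | (cp & 0x3F));
            } else {                               /* 4 bytes */
                out[o++] = (uint8_t)(0xF0 | (cp >> 18));
                out[o++] = (uint8_t)(0x80 | ((cp >> 12) & 0x3F));
                out[o++] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
                out[o++] = (uint8_t)(0x80 | (cp & 0x3F));
            }
        }
        return o;
    }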

  • Issues while opening a UTF16 encoded .csv file in Excel using C#

    When I try to open a UTF8 encoded .csv file in Excel through my C# code, doing something like this:
    m_excel.Workbooks.OpenText(newPath, Comma: true);
    it works fine. Can anyone tell me what to do if I have to open a UTF16 encoded .csv file similarly? All options I tried either fail or print incorrect Unicode characters.
    So somehow we have to specify the encoding format of the file to be opened (i.e. the .csv) while opening it using Excel functions, which I am unable to figure out.
    Please help me. I am badly stuck here.
    Thanks in advance.

    Hi Jb9952,
    There is an Origin parameter in the Workbooks.OpenText method; you need to specify this parameter. You could try using xlWindows (Windows ANSI).
    To get the specific file-origin value, you can get the detailed code through the Record Macro feature in Excel.
    Regards
    Starain

  • How to realize external UTF16 / AL16UTF16 xml file into DB XMLTYPE?

    Hi people,
    Happy New Year to all who observe it! :)
    DB version 9.2.0.4.
    Platforms : Windows 2003 Server, Linux Red Hat 9.0.
    We wish to unify our XML processing encoding, and due to the use of .Net app servers it would (we suspect) increase our efficiency if we can accept XML as UCS2 / UTF16. (We don't necessarily need to do DB processing as UTF16, but I imagine it MAY speed up our indexing etc. performance if we could.)
    Currently our XML Schema refers to all the standard xsd types, which seem to get automatically realized as VARCHAR2s and CLOBs.
    Database and user session NLS_LANG are both AMERICAN_AMERICA.UTF8. (Production is likely to be ENGLISH_AUSTRALIA.UTF8.)
    Can anyone tell me the definitive way to realize an external UTF16 XML file to an XMLType under the above scenario?
    My current issue is that every time I attempt to load an external UTF16 file with XDB.XDB_UTILITIES.getXML* or getFileContent* [charset=>'AL16UTF16'], I get validation errors akin to:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00210: expected '<' instead of '
    Have I stuffed up? Should we be explicitly coercing all XML schema *CHAR? types to NVARCHARs / NCLOBs (using xdb:SQLType annotations)?
    (This doesn't seem like the big issue though, because we can't even load the UTF16 XML into a sessional XMLType instance rather than a schema-mapped table form.)
    Does the parser automatically deal with the prefixed byte order mark? Is there any quick method of slicing it off if the parser doesn't understand it? (i.e. How do I efficiently manipulate the CLOBs to remove the BOM? DBMS_LOB.SUBSTR has proved not to be a good solution due to typecasting. Does it have to be DBMS_LOB.COPY to a temporary LOB or some such?)
    Any suggestions or commiserations from anyone would be grand...
    Thanks,
    Lachlan

    Lachlan,
    One thing to try is setting the session values to a different NLS_LANG,
    i.e.
    NLS_LANG=American_America.AL32UTF8
    or
    NLS_LANG=American_America.AL16UTF16
    What is the character set of the database you're loading into? Run select * from nls_database_parameters; and look for NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET.
    HTH.
    M

  • How to import data via SQL*Loader with character set UTF16 little endian?

    Hello,
    I'm importing data from a text file into one of my tables, which contains a BLOB column.
    I've specified the following in my control file.
    -----Control file-------
    LOAD DATA
    CHARACTERSET UTF16
    BYTEORDER LITTLE
    INFILE './DataFiles/Data.txt'
    BADFILE './Logs/Data.bad'
    INTO TABLE temp_blob truncate
    FIELDS TERMINATED BY "     "
    TRAILING NULLCOLS
    (GROUP_BLOB,CODE)
    Problem:
    SQL*Loader always imports the data as big endian. Is there any method available by which we can convert these data to little endian?
    Thanks

    A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.

  • How to verify if the RFC/BAPIs on an ECC6 IDES are Unicode (UTF16) enabled

    Hello,
    I am not a SAP expert, however I have some knowledge of the SAP RFCSDK. Let me give some background about what I am trying to do, which should make my question clearer.
    1. I have a program written in "C" which uses the RFCSDK to communicate with a SAP server for integration. This program works fine with the non-Unicode RFCSDK 6.40.
    2. For one of my needs, I have to add UTF16 support to this code. So I am using the Unicode version of RFCSDK 6.40 with the SAPwithUNICODE flag set, and using librfc32u.dll (Build: Tue Apr 08 09:22:11 2008, File version: 6405, 5, 228, 5755, Product version: 6405.5.228).
    3. I am using code page 4103 (UTF16LE) in RfcOpenEx while communicating with the SAP server.
    4. I am able to log on to the SAP server (version ECC6) successfully and am also able to execute RFC_GET_FUNCTION_INTERFACE successfully to get interface info for the RFC. I am using the function "RfcGetStructureInfoAsTable" from the RFCSDK to get structure info, which also seems to be OK.
    After gathering the above info, when I try to execute a BAPI I get an "RFC_SYS_EXCEPTION" error.
    5. I am trying to execute "BAPI_EMPLOYEE_GETLIST" and I have already checked all the import parameters (including field values in the RFC_PARAMETER structure) and they all seem to be correct. On execution I get the following errors in the trace file (.trc):
    T:3480 Error in program 'nleiact': ======> "SELECT ... LIKE ..." with leading, but no closing inverted comma.
    T:3480 Error in program 'nleiact': <* RfcReceive [1] : returns 3:RFC_SYS_EXCEPTION
    T:3480 Error in program 'nleiact': <* RfcCallReceive [1] : returns 3:RFC_SYS_EXCEPTION
    I was wondering whether the RFC/BAPIs on the SAP server not being Unicode (UTF16) enabled might be one cause of this failure.
    Could somebody point me to a method to verify whether the RFC/BAPIs on this SAP server are Unicode enabled or not?
    Also (I know it is against forum guidelines to put multiple questions in one post), could somebody point out if I am doing something wrong or missing something in the procedure of using the Unicode RFCSDK as described above?
    Thanks
    Prasanna Joshi
    Edited by: Prasanna Joshi on Feb 4, 2009 4:15 PM

    To find out the system code page, you may want to first look at the function module SCP_CODEPAGE_BY_EXTERNAL_NAME and also check the other function modules under function group SCPA.
    PS: These FMs are not RFC-enabled; you may need to write a wrapper on top of them to call them remotely.

  • How to convert from UNICODE (UTF16) to UTF8 and vice-versa in JAVA.

    Hi
    I want to insert a string in UTF16 format into the database. How do I convert from UTF16 to UTF8 and vice versa in Java? What type must the database field be? Do I need any special setup for the database (Oracle 8i)?
    thanks

    I'm not sure if this is the correct topic, but we are having problems accessing our Japanese data stored in UTF-8 in our Oracle database using the JDBC thin driver. The data is submitted and extracted correctly using ODBC drivers, but inspection of the same data retrieved from the GetString() call using the JDBC thin driver shows frequent occurrences of bytes like "FF", which are not expected in either UTF8 or UCS2. My understanding is that accessing UTF8 in Java should involve NO NLS translation, since we are simply going from one Unicode encoding to another.
    We are using Oracle version 8.0.4.
    Can you tell me what we are doing wrong?

  • What is the difference between OCIEnvCreate and OCIEnvNlsCreate with utf16

    Because I found an OCI sample, cdemouni.c, which uses OCIEnvCreate with mode set to OCI_UTF16; but OCIEnvNlsCreate can also set charset and ncharset to OCI_UTF16ID. What is the difference between OCIEnvCreate and OCIEnvNlsCreate with UTF16?

    First, OCIEnvNlsCreate() is the recommended way of switching to UTF-16 mode.
    Second, OCIEnvNlsCreate() uses new semantics for bind and define buffer lengths. With the new semantics, all lengths are in bytes. With the old semantics, UTF-16 string lengths are in code points, while other character sets use bytes.
    You can get the new length semantics with OCIEnvCreate() as well, by adding OCI_NEW_LENGTH_SEMANTICS to the 'mode' flags.
    -- Sergiusz
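    To make that concrete, a minimal sketch of the two calls side by side (no error handling; see the OCI headers for the exact prototypes):
    #include <oci.h>

    /* Sketch only: the two ways to get a UTF-16 environment. */
    void make_utf16_envs(void)
    {
        OCIEnv *env1 = NULL, *env2 = NULL;

        /* Recommended: explicit charset/ncharset ids; lengths are in bytes */
        OCIEnvNlsCreate(&env1, OCI_DEFAULT, NULL, NULL, NULL, NULL,
                        0, NULL, OCI_UTF16ID, OCI_UTF16ID);

        /* Older style (as in cdemouni.c): mode flag; lengths are in code
         * points unless OCI_NEW_LENGTH_SEMANTICS is added, as noted above */
        OCIEnvCreate(&env2, OCI_UTF16 | OCI_NEW_LENGTH_SEMANTICS,
                     NULL, NULL, NULL, NULL, 0, NULL);
    }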
