Direct Execution of query having Unicode Characters

Hi All,
In my application I am firing a SELECT query containing Unicode characters in the WHERE clause, under a condition LIKE '%%',
against an Oracle 10g DB from an interface written in VC6.0...
The application's functionality works fine for ANSI characters, and the SELECT results come back properly.
But for Unicode characters, the VC side says 'No Data Found'.
I know roughly where the problem is in my code, but I have not found a solution for the issue...
Below is my code snippet, with comments on what I understand and what I want to understand...
DBPROCESS structure used in the functions:
typedef struct
{
     HENV          hEnv;
     HDBC          hDbc;
     HSTMT         hStmt;
     char          CmdBuff[8192];
     char          RpcParamName[255];
     SQLINTEGER    SpRetVal;
     SQLINTEGER    ColIndPtr[255];
     SQLINTEGER    ParamIndPtr[255];
     SQLPOINTER    pOutputParam;
     SQLUSMALLINT  CurrentParamNo;
     SQLUSMALLINT  OutputParamNo;
     SQLUSMALLINT  InputParamCtr;
     SQLINTEGER    BatchStmtNo;
     SQLINTEGER    CmdBuffLen;
     short         CurrentStmtType;
     SQLRETURN     LastStmtRetcode;
     SQLCHAR       SqlState[10];
     int           ShowDebug;
     SQLCHAR       *ParameterValuePtr;
     int           ColumnSize;
     DBTYPE        DatabaseType;
     DRVTYPE       OdbcDriverType;
     BLOCKBIND     *ptrBlockBind;
} DBPROCESS;
BOOL CDynamicPickList::GetResultSet(DBPROCESS *pDBProc, bstrt& pQuery, short pNumOdbcBindParams, COdbcBindParameter pOdbcBindParams[], CQueryResultSet& pQueryResultSet)
{
     int            lRetVal,
                    lNumRows;
     bstrt          lResultSet;
     wchar_t        lColName[256];
     SQLUINTEGER    lColSize;
     SQLSMALLINT    lColNameLen,
                    lColDataType,
                    lColNullable,
                    lColDecDigits,
                    lNumResultCols;
     wchar_t        lResultRow[32][256];

     OdbcCmdW(pDBProc, (wchar_t *)pQuery); // The query is perfectly fine up to this point; all the Unicode characters are preserved...
     if ( OdbcSqlExec(pDBProc) != SUCCEED )
     {
          LogAppError(L"Error In Executing Query %s", (wchar_t *)pQuery);
          return FALSE;
     }
     // ... (rest of the function as in the original post)
}
Function OdbcCmdW:
// From this point on I have no idea what exactly happens to the Unicode characters...
// I have tried printing the query that gets stored in CmdBuff... it shows junk for the Unicode characters...
// CmdBuff is a char-type variable and hence is bound to show junk for Unicode data
// I have also tried printing the hexadecimal of the query... I am not getting the proper output... but as far as I understand, the hexadecimal value is correct and preserved
// After this function executes, the call goes to OdbcSqlExec, where the actual execution of the query against the DB takes place
SQLRETURN OdbcCmdW( DBPROCESS *p_ptr_dbproc, WCHAR *p_sql_command )
{
     char *p_sql_commandMBCS;
     int l_ret_val;
     int l_size = wcslen(p_sql_command);
     int l_org_length,
         l_newcmd_length;

     p_sql_commandMBCS = (char *)calloc(sizeof(char) * MAX_CMD_BUFF, 1);
     l_ret_val = WideCharToMultiByte(
                    CP_UTF8,
                    NULL,                     // performance and mapping flags
                    p_sql_command,            // wide-character string
                    -1,                       // number of chars in string
                    (LPSTR)p_sql_commandMBCS, // buffer for new string
                    MAX_CMD_BUFF,             // size of buffer
                    NULL,                     // default for unmappable chars
                    NULL );                   // set when default char used

     l_org_length = p_ptr_dbproc->CmdBuffLen;
     l_newcmd_length = strlen(p_sql_commandMBCS);
     p_ptr_dbproc->CmdBuff[l_org_length] = '\0';
     if( l_org_length )
          l_org_length++;
     if( (l_org_length + l_newcmd_length) >= MAX_CMD_BUFF )
     {
          // ... (overflow handling elided in the original post)
     }
     if( l_org_length == 0 )
          OdbcReuseStmtHandle( p_ptr_dbproc );
     else
     {
          strcat(p_ptr_dbproc->CmdBuff, " ");
          l_org_length += 2;
     }
     strcat(p_ptr_dbproc->CmdBuff, p_sql_commandMBCS);
     p_ptr_dbproc->CmdBuffLen = l_org_length + l_newcmd_length;

     if (p_sql_commandMBCS != NULL)
          free(p_sql_commandMBCS);
     return( SUCCEED );
}
Function OdbcSqlExec:
// SQLExecDirect requires data of unsigned char type, thus the above process should be valid...
// But I am not able to see what the exact problem is...
SQLRETURN OdbcSqlExec( DBPROCESS *p_ptr_dbproc )
{
     SQLRETURN l_ret_val;
     SQLINTEGER l_db_error_code = 0;
     int i, l_occur = 1;
     char *token_list[50][2] =
     {    /*"to_date(","convert(datetime,",
          "'yyyy-mm-dd hh24:mi:ss'","1",*/
          "nvl","isnull",
          "to_number(","convert(int,",
          /*"to_char(","convert(char,",*/
          /*"'yyyymmdd'","112",
          "'hh24miss'","108",*/
          "sysdate","getdate()",
          "format_date","dbo.format_date",
          "format_amount","dbo.format_amount",
          "to_char","dbo.to_char",
          "to_date","dbo.to_date",
          "unique","distinct",
          "\0","\0"};
     char *l_qry_lwr;

     l_qry_lwr = (char *)calloc(sizeof(char) * (MAX_CMD_BUFF), 1);
     l_ret_val = SQLExecDirect( p_ptr_dbproc->hStmt,
                    (SQLCHAR *)p_ptr_dbproc->CmdBuff,
                    SQL_NTS );
     switch( l_ret_val )
     {
     case SQL_SUCCESS :
     case SQL_NO_DATA :
          ClearCmdBuff( p_ptr_dbproc );
          p_ptr_dbproc->LastStmtRetcode = l_ret_val;
          if (l_qry_lwr != NULL)
               free(l_qry_lwr);
          return( SUCCEED );
     case SQL_NEED_DATA :
     case SQL_ERROR :
     case SQL_SUCCESS_WITH_INFO :
     case SQL_STILL_EXECUTING :
     case SQL_INVALID_HANDLE :
          // ... (error handling elided in the original post)
     }
I do not see much of an issue in the code... the process flow seems quite valid...
But now I am not sure whether:
1) storing the string in CmdBuff is creating the issue,
2) SQLExecDirect is creating the issue (and some other function should be used here), or
3) the ODBC driver is creating the issue and some client setting needs to be done (though I have tried various permutations and combinations)...
Any kind of help would be appreciated,
Thanks & Regards,
Pratik
Edited by: prats on Feb 27, 2009 12:57 PM

Hey Sergiusz,
You were bang on target...
Though it took some time for me to resolve the issue...
To use SQLExecDirectW I need my query as SQLWCHAR *, while in my case it is stored as char *...
So I converted the incoming query using MultiByteToWideChar with code page CP_UTF8 and
then passed it on to SQLExecDirectW...
That solved my problem.
Thanks,
Pratik...
Edited by: prats on Mar 3, 2009 2:41 PM
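For anyone landing here with the same problem, the conversion direction is the key point: SQLExecDirectW expects UTF-16 (SQLWCHAR *), while CmdBuff above holds UTF-8 in a plain char *. On Windows the widening step is MultiByteToWideChar(CP_UTF8, ...); the sketch below is a rough, portable stand-in for that call in plain C, just to make explicit what the conversion does. The function name utf8_to_utf16 and the minimal error handling are my own illustration, not part of the original code.

```c
#include <stdint.h>
#include <stddef.h>

/* Rough, portable stand-in for MultiByteToWideChar(CP_UTF8, ...):
 * decode a UTF-8 string into UTF-16 code units (what SQLWCHAR holds
 * on Windows).  Returns the number of UTF-16 units written (excluding
 * the terminator), or -1 on malformed input or overflow. */
static int utf8_to_utf16(const char *in, uint16_t *out, size_t out_cap)
{
    const unsigned char *s = (const unsigned char *)in;
    size_t n = 0;

    while (*s) {
        uint32_t cp;
        int extra, i;

        if      (*s < 0x80)           { cp = *s;         extra = 0; }
        else if ((*s & 0xE0) == 0xC0) { cp = *s & 0x1F;  extra = 1; }
        else if ((*s & 0xF0) == 0xE0) { cp = *s & 0x0F;  extra = 2; }
        else if ((*s & 0xF8) == 0xF0) { cp = *s & 0x07;  extra = 3; }
        else return -1;                        /* invalid lead byte */
        s++;
        for (i = 0; i < extra; i++, s++) {
            if ((*s & 0xC0) != 0x80) return -1;  /* bad continuation */
            cp = (cp << 6) | (*s & 0x3F);
        }
        if (cp >= 0x10000) {                   /* encode as surrogate pair */
            if (n + 2 >= out_cap) return -1;
            cp -= 0x10000;
            out[n++] = (uint16_t)(0xD800 | (cp >> 10));
            out[n++] = (uint16_t)(0xDC00 | (cp & 0x3FF));
        } else {
            if (n + 1 >= out_cap) return -1;
            out[n++] = (uint16_t)cp;
        }
    }
    out[n] = 0;                                /* SQL_NTS-style terminator */
    return (int)n;
}
```

With a buffer produced this way, the execute call becomes roughly SQLExecDirectW(p_ptr_dbproc->hStmt, (SQLWCHAR *)wbuf, SQL_NTS) instead of the narrow SQLExecDirect, which is what resolved the thread.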

Similar Messages

  • Should I use escape even when I can enter Unicode characters directly

    Unicode escape sequence - what is its purpose?
    Should I use it even when I can enter Unicode characters directly using a text editor?

    And how does it fit into the i18n concept of sending out text files with text strings for translation?
    How would one translate the escapes?

  • Query regarding Handling Unicode characters in XML

    All,
    My application reads a flat file as a series of bytes, and I
    create an XML document out of the data. The data contains Unicode characters.
    I use XSLT to create the XML file. While creating it I don't face any issues,
    but later, if I try to parse the constructed XML file, I get a SAX parsing exception
    (Caused by: org.xml.sax.SAXParseException: Character reference "<not visible clearly in browser>" is an invalid XML character.)
    Can someone advise on how to tackle this?
    regards,
    D
    Edited by: user9165249 on 07-Jan-2011 08:10

    How to tackle it? Don't allow your transformation to produce characters which are invalid in XML. The XML Recommendation specifies what characters are allowed and what characters aren't, in section 2.2: http://www.w3.org/TR/REC-xml/#charsets. The invalid characters can't come from the XML which you are transforming so they must be coming from code in your transformation.
    And if you can't tell what the invalid characters are by using your browser, then send the result of the transformation to a file and use a hex editor to examine it.
    By the way, this isn't a question about Unicode characters in XML, since all characters in Java are Unicode and XML is defined in terms of Unicode. So saying that your data contains Unicode characters is a tautology. It couldn't do anything else. If your personal definition of Unicode is "weird stuff that I don't understand" then do yourself a favour, take a couple of days out and learn what Unicode is.
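To make the section 2.2 reference concrete: the Char production reduces to a handful of range checks on each decoded code point. A minimal sketch in C (the helper name is mine; shown in C to match the main thread, but the check is language-agnostic):

```c
#include <stdint.h>

/* XML 1.0, section 2.2, production [2] Char:
 * Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] */
static int is_valid_xml_char(uint32_t cp)
{
    return cp == 0x09 || cp == 0x0A || cp == 0x0D
        || (cp >= 0x20    && cp <= 0xD7FF)
        || (cp >= 0xE000  && cp <= 0xFFFD)
        || (cp >= 0x10000 && cp <= 0x10FFFF);
}
```

Anything failing this check (NUL bytes, most C0 control characters, lone surrogates) is the kind of character that triggers the "invalid XML character" parse error described above.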

  • How do I get unicode characters out of an oracle.xdb.XMLType in Java?

    The subject says it all. Something that should be simple and error free. Here's the code...
    String xml = new String("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<x>\u2026</x>\n");
    XMLType xmlType = new XMLType(conn, xml);
    conn is an oci8 connection.
    How do I get the original string back out of xmlType? I've tried xmlType.getClobVal() and xmlType.getString() but these change my \u2026 to 191 (question mark). I've tried xmlType.getBlobVal(CharacterSet.UNICODE_2_CHARSET).getBytes() (and substituted CharacterSet.UNICODE_2_CHARSET with a number of different CharacterSet values), but while the unicode characters are encoded correctly the blob returned has two bytes cut off the end for every unicode character contained in the original string.
    I just need one method that actually works.
    I'm using Oracle release 11.1.0.7.0. I'd mention NLS_LANG and file.encoding, but I'm setting the PrintStream I'm using for output explicitly to UTF-8 so these shouldn't, I think, have any bearing on the question.
    Thanks for your time.
    Stryder, aka Ralph

    I created an analogous test case and executed it with DB 11.1.0.7 (Linux x86), and it seems to work fine.
    Please refer to the execution procedure below:
    * I used AL32UTF8 database.
    1. Create simple test case by executing the following SQL script from SQL*Plus:
    connect / as sysdba
    create user testxml identified by testxml;
    grant connect, resource to testxml;
    connect testxml/testxml
    create table testtab (xml xmltype) ;
    insert into testtab values (xmltype('<?xml version="1.0" encoding="UTF-8"?>'||chr(10)||'<x>'||unistr('\2026')||'</x>'||chr(10)));
    -- chr(10) is a linefeed code.
    commit;
    2. Create QueryXMLType.java as follows:
    import java.sql.*;
    import oracle.sql.*;
    import oracle.jdbc.*;
    import oracle.xdb.XMLType;
    import java.util.*;
    public class QueryXMLType
    {
         public static void main(String[] args) throws Exception, SQLException
         {
              DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
              OracleConnection conn = (OracleConnection) DriverManager.getConnection("jdbc:oracle:oci8:@localhost:1521:orcl", "testxml", "testxml");
              OraclePreparedStatement stmt = (OraclePreparedStatement)conn.prepareStatement("select xml from testtab");
              ResultSet rs = stmt.executeQuery();
              OracleResultSet orset = (OracleResultSet) rs;
              while (rs.next())
              {
                   XMLType xml = XMLType.createXML(orset.getOPAQUE(1));
                   System.out.println(xml.getStringVal());
              }
              rs.close();
              stmt.close();
         }
    }
    3. Compile QueryXMLType.java and execute QueryXMLType.class as follows:
    export PATH=$ORACLE_HOME/jdk/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/jlib/orai18n.jar:$ORACLE_HOME/rdbms/jlib/xdb.jar:$ORACLE_HOME/lib/xmlparserv2.jar
    javac QueryXMLType.java
    java QueryXMLType
    -> Then you will see U+2026 character (horizontal ellipsis) is properly output.
    My Java code came from "Oracle XML DB Developer's Guide 11g Release 1 (11.1) Part Number B28369-04" with some modification of:
    - Example 14-1 XMLType Java: Using JDBC to Query an XMLType Table
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb11jav.htm#i1033914
    and
    - Example 18-23 Using XQuery with JDBC
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb_xquery.htm#CBAEEJDE

  • Unicode characters appear as      (boxes) in AIR for IOS

    I am developing an application for iPhone and it has an input Text box. (Placed on stage from flash CS6). In the Desktop emulator everything works fine. But, on the actual phone, everything I type appears as       . But surprisingly, when I am inputting the text, (i.e. the textbox is in edit mode) the characters are displayed correctly. But as soon as I leave the box it appears as       . Is there someway I can display the text the same way it is displayed during the edit mode?
    I am trying to input Devanagari देवनागरी characters.
    P.S. I did some research on this and found that if I specify the font for the textbox as "DevanagariSangamMN" then there is some improvement.
    I say improvement because although the boxes are replaced by actual characters, they aren't correctly formatted.
    For e.g. during edit mode the text appears as this: कार्यकर्ता
    But as soon as I leave the textbox it appears as this: कार्यकर्ता (I typed this here by adding special Unicode ZWNJ characters so that the characters won't join)
    Anyway, I don't like the idea of having to specify font names. What if some user would like to input Chinese? How would I know what Chinese font to use?
    Isn't there some way to let iOS handle things (just like how it handles things when I am inputting text)?
    Thanks.

    Thanks to everyone who replied.
    The conclusive answer is that there are only 2 ways to display H.264 video in AIR for iOS
    (more info here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#play%28%29)
    1. Progressive download
    2. HLS format (slight caveat: in my tests at least, OSMF 1.6.1 doesn't handle this, but if you use the NetStream directly with StageVideo enabled it works)
    The updated matrix is:
    FMS 4.5 H.264 Streaming test matrix
                                RTMP   HDS   HLS   HTTP Progressive Download
    AIR for Android             Yes    Yes   No    Yes
    AIR on Windows (Desktop)    Yes    Yes   No    Yes
    AIR on iOS                  No     No    Yes   Yes
    Safari Browser on iOS       No     No    Yes   No

  • Oracle Discoverer Desktop Report output showing unicode characters

    Hi,
    The Oracle Discoverer Desktop 4i report output is showing the Unicode characters below:
    kara¿ah L¿MAK HOLD¿NG A.¿
    We ran the same query in SQL and the data showed correctly.
    Please let me know whether any language settings / NLS settings need to be set.
    Thanks in advance.

    Hi
    Let me give you some background. In the Windows registy, every Oracle Home has a setting called NLS_LANG. This is the variable that controls, among other things, the numeric characters and the language used. The variable is made up of 3 parts. These are:
    language_territory.characterset
    Notice how there is an underscore character between the first two variables and a period between the last two. This is very important and must not be changed.
    So, for example, most American settings look like this: AMERICAN_AMERICA.WE8MSWIN1252
    The second variable, the territory, controls the default date, monetary, and numeric formats and must correspond to the name of a country. So if I wanted to use the Greek settings for numeric formatting, editing the NLS_LANG for Discoverer Desktop to this setting will do the trick:
    AMERICAN_GREECE.WE8MSWIN1252
    Can you please check your settings? Here's a workflow:
    a) Open up your registry by running Regedit
    b) Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE
    c) Look for the Oracle Home corresponding to where Discoverer Desktop is installed. It's probably called KEY_BIToolsHome_1
    d) Clicking on the Oracle Home will display all of the variables
    e) Take a look at the variable called NLS_LANG - if it is correct, exit the registry
    f) If it is not correct, please right-click on it and from the pop-up select Modify
    g) Change the variable to the right setting
    h) Click the OK button to save your change
    i) Exit the registry
    Best wishes
    Michael

  • Scanning files for non-unicode characters.

    Question: I have a web application that allows users to take data, enter it into a webapp, and generate an xml file on the servers filesystem containing the entered data. The code to this application cannot be altered (outside vendor). I have a second webapp, written by yours truly, that has to parse through these xml files to build a dataset used elsewhere.
    Unfortunately I'm having a serious problem. Many of the web application's users are apparently cutting and pasting their information from other sources (frequently MS Word) and in the process are embedding non-Unicode characters in the XML files. When my application attempts to open these files (using DocumentBuilder), I get a SAXParseException "Document root element is missing".
    I'm sure others have run into this sort of thing, so I'm trying to figure out the best way to tackle this problem. Obviously I'm going to have to start pre-scanning the files for invalid characters, but finding an efficient method for doing so has proven to be a challenge. I can load the file into a String array and search it character by character, but that is both extremely slow (we're talking thousands of long XML files) and would require that I predefine the invalid characters (so anything new would slip through).
    I'm hoping there's a faster, easier way to do this that I'm just not familiar with or have found elsewhere.

    "require that I predefine the invalid characters" - This isn't hard to do, and it isn't subject to change. The XML recommendation tells you here exactly what characters are valid in XML documents.
    However if your problems extend to the sort of case where users paste code including the "&" character into a text node without escaping it properly, or they drop in MS Word "smart quotes" in the incorrect encoding, then I think you'll just have to face up to the fact that allowing naive users to generate uncontrolled wannabe-XML documents is not really a viable idea.

  • Illustrator CS6 is not displaying unicode characters properly.

    I need to access certain unicode characters (namely the x/8 fractions). I have the appropriate font installed, and the characters display in every other program on the machine, but they are not displaying correctly in Illustrator. If I type the characters in using alt codes, the 3/8 character displays as "\", the 1/8 character displays as "[", etc. Is there some setting I need to change to enable support for Unicode characters?
    *UPDATE*
    If I type the character in another program, copy it, and paste it into Illustrator, it shows up as a box with an X through it. If I highlight the box and change the font to one that supports the character, then the character does display correctly. This is, however, quite inconvenient. Is there a way to type the character directly into Illustrator?
    Message was edited by: NovakDamien

    I use the following method...
    Mike

  • How to insert unicode characters in oracle

    Hi... I want to add special Unicode characters to an Oracle database... can anyone guide me how to do this?
    I know we have the NVARCHAR2 datatype, which supports multilingual languages... but I am unable to insert the values from the SQL prompt... can anyone guide me on how to insert the values?
    Also, please tell me whether any special care has to be taken if we are accessing it through .NET?

    The output of
    select * from nls_database_parameters where parameter like '%SET';
    is:
    PARAMETER                  VALUE
    NLS_CHARACTERSET           WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    When I query: select testmsg, dump(testmsg,1016) from test;
    I get:
    TESTMSG DUMP(TESTMSG,1016)
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    dsdas Typ=1 Len=10 CharacterSet=AL16UTF16: 0,64,0,73,0,64,0,61,0,73
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    What I basically want is to store some special characters like éµΩΦЛήαδӨװΘ§³¼αγ in my Oracle database, but I am unable to do that...
    Edited by: [email protected] on Jun 28, 2010 10:19 PM
    Edited by: [email protected] on Jun 28, 2010 10:54 PM

  • How to use Unicode characters with TestStand?

    I'm trying to implement the use of Greek characters such as mu and omega for units. I enabled multi-byte support in the station options and attempted to paste some characters in. I was able to paste the mu character (μ) and import it from Excel with PropertyLoader. However, I have not had any luck with omega (Ω). I found the HTML codes for these characters on a web page (http://www.hclrss.demon.co.uk/unicode/) so I could use those codes for HTML reports, but that won't work for database logging, nor does it display the characters correctly for the operator interface. The operator interface is not a major problem, but the database must have the correct characters for customer reports. Anyone know how to do this?
    Does database logging support Unicode for stored procedure calls to Oracle?

    Hello Mark -
    At this time TestStand has no unicode support. The multi-byte support that we do offer is based on the Windows architecture that handles Asian language fonts. It really isn't meant to provide a bridge for unicode values in TestStand. Certainly, your Operator Interface environment will have its own support level for unicode, i.e. at this time neither LabWindows/CVI version 6.0 nor LabVIEW 6.1 officially support unicode characters. This is why you will see that the units defined in the TestStand enumerators are all text-based values.
    I have run a quick test here, probably similar to what you were doing on your end, and I am uncertain if you will get the database behavior you want from TestStand. The database logging steps and API all use basic char-style strings to communicate with the driver. Even though you are reading in a good value from Excel, TestStand is interpreting the character as the nearest ASCII equivalent, i.e. "Ω" will be stored and sent to the database as "O". If you have a stored procedure in Oracle that is calling on some TestStand variable or property string as an input, then it is doubtful you will get correct transmission of the values to the database. If your stored procedure could call into a spreadsheet directly, you would probably have better luck.
    Regards,
    Elaine R.
    National Instruments
    http://www.ni.com/ask

  • Writing Unicode characters to scripting parameters on Windows

    I am trying to read/write a file path that supports Unicode characters to/from scripting parameters (PIDescriptorParameters) with an Export plug-in. This works fine on OS X by using AliasHandle together with the "typeAlias" resource type in the "aete" section of the plugin resource file.
    On Windows I am having trouble to make Photoshop correctly display paths with Unicode characters. I have tried:
    - Writing null-terminated char* (Windows-1252) in a "typePath" parameter -- this works but obviously does not support Unicode.
    - Writing null-terminated wchar* (UTF-16) in a "typePath" parameter -- this causes the saved path in the Action palette to be truncated to the first character, caused by null bytes in UTF-16. It appears PS does not understand UTF-16 in this case?
    - Creating an alias record with sPSAlias->WinNewAliasFromWidePath and storing in a "typePath" or "typeAlias" parameter -- this causes the Action palette to show "txtu&", which does not make sense to me at all.
    The question is: what is the correct scripting parameter resource type (typePath, typeAlias, ... ?) for file paths on Windows, and how do I write to it in such way that Photoshop will correctly display Unicode characters in the Actions palette?

    Hi
    Skip the first (4 or 6 characters) and you'll get the Unicode value.
    regards
    Bartek
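On the truncation the original poster saw with the wchar_t* (UTF-16) parameter: for ASCII characters every second byte of UTF-16 is 0x00, so any consumer that treats the buffer as a narrow, null-terminated C string stops after one character, which matches the "truncated to the first character" symptom. A small C sketch (the helper name is mine; char16_t is used to get 16-bit units portably, and the byte layout assumes a little-endian machine):

```c
#include <string.h>
#include <uchar.h>   /* char16_t (C11) */

/* Length a narrow-string consumer would see when handed a UTF-16
 * buffer: it stops at the first 0x00 byte.  For ASCII text on a
 * little-endian machine that is after a single character. */
static size_t narrow_view_len(const char16_t *wide)
{
    return strlen((const char *)wide);
}
```

On little-endian hardware this returns 1 for any ASCII-leading UTF-16 string, consistent with the truncation described above; a parameter type the host treats as genuinely wide (or a UTF-8 encoding of the path) avoids the embedded null bytes.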

  • GlyphID reverse lookup to get Unicode characters

    In my plug-in I have a GlyphID extracted from an IPMFont* but the glyph does not have a unicode value because it is a ligature, a combination of many unicode characters. Is there a way I can query the IPMFont* object to find out what unicode characters need to be used to convert to this ligature. In a TrueType font this information would be held in the 'GSUB' (Glyph SUBstitution) table.
    a simple example of this would be:
    'f' + 'f' + 'i' = 'ffi'
    '1' + '/' + '2' = '½'
    So the glyphID I have would be the 'ffi' or the '½' glyph and I need to find out the 3 unicode characters which, when used in combination, would cause that glyph to be used.
    This is then extended to Arabic and Hindi fonts where the Ligatures are highly important in drawing the script correctly.
    Using Utils<IGlyphUtils>->GlyphToCharacter (font, glyph, &userAreaChar) does not work as the glyph has no Unicode character representation so the function just returns 0.
    Likewise Utils<IGlyphUtils>->GetUnicodeForGlyphID (font, glyph) gives the same result

    IGlyphUtils.h might be useful but I have yet to discover a routine that gives me the information I need.
    Do I have to use glyphUtils->GetOTFAttribute and iterate through it to find which combination of unicode characters result in a particular glyphID? And what parameters should I use for GetOTFAttribute to get the ligature table?

  • Alogrithm for converting Unicode characters to EBCDIC

    I would like to know if there is any algorithm for converting Unicode Characters to EBCDIC.
    Awaiting your replys
    Thanks in advance,
    Ravi

    "I would like to know if there is any algorithm for converting Unicode characters to EBCDIC."
    Isn't EBCDIC a 7-bit code like ASCII? Unicode is 16-bit. This means there is no way Unicode can be mapped onto EBCDIC without loss of information.
    No. That is like saying that since UTF-8 is 8-bit based it can't be mapped to UTF-16. But it does map.
    EBCDIC either directly supports, or has versions which support, multibyte character sets. A multibyte character set can encode any fixed-size character set. The basic idea is the same as the way UTF-8 works.
    Multibyte character sets have the added benefit that most of the data in the world is from the ASCII character set, and the encodings always support that using only 8 bits. Thus the memory savings over UTF-16 (or UTF-32) are significant.
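To make the single-byte case concrete: conversion to a given EBCDIC code page is essentially a 256-entry lookup per code point, with a substitution byte for anything the code page cannot represent. Below is a deliberately partial C sketch for code page 037 (only digits, uppercase letters, and space are filled in; the function name is mine, and a real converter, e.g. iconv with an IBM037 target, would carry the full table):

```c
#include <stdint.h>

#define EBCDIC_SUB 0x3F   /* code page 037 substitution character */

/* Map a Unicode code point to EBCDIC code page 037.  Only a small,
 * illustrative slice of the table is filled in; everything else maps
 * to the substitution byte -- exactly the lossy case discussed above. */
static uint8_t to_ebcdic_037(uint32_t cp)
{
    if (cp >= '0' && cp <= '9') return (uint8_t)(0xF0 + (cp - '0'));
    if (cp >= 'A' && cp <= 'I') return (uint8_t)(0xC1 + (cp - 'A'));
    if (cp >= 'J' && cp <= 'R') return (uint8_t)(0xD1 + (cp - 'J'));
    if (cp >= 'S' && cp <= 'Z') return (uint8_t)(0xE2 + (cp - 'S'));
    if (cp == ' ')              return 0x40;
    return EBCDIC_SUB;          /* unmappable in a single-byte page */
}
```

The unmappable branch is where the loss of information happens: the full Unicode repertoire cannot round-trip through a single-byte EBCDIC page, while the multibyte EBCDIC variants mentioned above avoid that.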

  • Does QuickTime 7.7.x not support movie files whose full path includes Unicode characters like Chinese or Japanese?

    Hi guys.
    I use the QT SDK to write a sample that loads movie files.
    When I install QuickTime 7.7.1 and call the QT SDK's interface (the OpenMovieFile method) to open a file, it doesn't work; I get error -36. But when I use QT 7.6.9, it works well.
    So how can I use the QT SDK (with QT 7.7.1 installed; the OS is Windows XP) to open a movie file whose full path includes wide Unicode characters (like D:\Media動画)?
    Much appreciated.

    For Bex Query Application string 7.x Version Error

  • Fuzzy unicode characters since 4.1 upgrade

    I've been getting fuzzy unicode characters ever since I upgraded to 4.1. 4.0.2 was fine.
    This happens both in apps and on home screen. I've also tried multiple unicode apps to see if this is limited to one app or if it can be reproduced. Default fonts look crisp as before. Everything is sharp while typing in text boxes (e.g. google search box; folder names in wiggle mode or in iTunes). Also, text and email characters are sharp. But when published on the web or when I exit the wiggle mode, they look fuzzy and out of place.
    Any idea what's going on? I have received four confirmations from friends with factory-unlocked iPhone 4s who can reproduce this problem. All English UK keyboards.
    I have done a full reset ("set up new iPhone") and still can reproduce the problem even with factory default settings, no apps synced.
    Took me nearly three hours to reorganise everything again and I am now back to where I started!
    Screen shots:
    http://i.imgur.com/oXiWN.jpg
    http://imgur.com/GGR8c.jpg
    http://imgur.com/VDodg.png
    Can anyone else reproduce this? Paste any random dingbat, greek, chinese, etc. characters into a folder name either via a unicode app (e.g. Unicode Free) or directly via iTunes and see if it's fuzzy or not.

    Anyone?

Maybe you are looking for

  • SLES9 Oracle 10.1.0.3 RAC VMware RELEASED on OTN

    Oracle, Novell and VMware are happy to announce the availablity of a SLES9 VM right in time for Linux World Boston: http://www.oracle.com/technology/tech/linux/vmware/index.html Enjoy this evaluation kit and post any feedback in this forum. See you a

  • G$ FW800 stock optical drive not seen in System Profile - is it daid?

    I added a USB PCI card (2.0) to my FW and shifted some files (Migration Assistant) to my FW by Firewire from an eMac and then I wanted to run Diskwarrior to sort it all out. My CDW/DVDR wouldn't open, either with the eject key or through Toast. Not r

  • Wlthint3client.jar with WebLogic 10.0

    Can I use wlthint3client.jar instead of wlfullclient.jar with WebLogic 10.0 (and Tomcat 6.0)? I tried to use it and it seems to work. However as far as I know wlthint3client.jar was released as a part of WebLogic 10.3, so I'd like to make sure that t

  • Save for Web – Image Size Preview doesn't update

    If I use percentage to change a JPG output size the preview either lags (regardless the size/weight of the image) or simply ignores the new value, forcing me to click on the Width/Height field to complete the action. It would be nice to kill the bug!

  • Mac OS X Leopard on a third party laptop ?

    i have an HP laptop intel core 2 duo 2.20 2 GB ram 150 GB hard drive intel graphics card can i remove windows xp and install leopard on it ? Message was edited by: PandaJin