Unicode Characters Turn To Garbage Depending On Length of Preceding Text
Hey,
I wrote a script that creates a bunch of text frames, fills some text and styles it.
The problem is, sometimes, unicode characters come out as garbage: e.g. "3M™ Blenderm™" turns to "3Mâ„¢ Blendermâ„¢".
I was playing around with four text frames to see what causes it. If I add a line of text in the second frame, all subsequent Unicode characters turn to garbage, but only if that line of text is longer than 6 characters.
If I add a ™ character to the first line of the first text frame, then the problem fixes itself.
Has anyone encountered something like this?
Let me know if you need more info (my whole script is rather large...)
Hey,
Thanks for the idea!
I think it has something to do with the way InDesign tries to read my data file (or script).
I placed a "™" character inside a comment right at the top of the file, and everything works.
I would play around and try to find a saner solution, but the deadline for my project is way too close.
Thanks!
Similar Messages
-
GUI_DOWNLOAD unicode characters turn into ##
Hello,
We have a Unicode-enabled system and I have some Unicode dummy data in a field. The content is 知道,我看.
In the program the data stays like that until the file is downloaded. Then the characters are ########.
My program downloads like this:
CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
  EXPORTING
    FILENAME = IM_PC_FILE
  CHANGING
    DATA_TAB = im_text_file.
Any idea what is missing here?
thanks a lot
Koen Van Loocke
Do you open the downloaded file with Notepad? Anyway, you have to use the WRITE_BOM parameter of the method (the value should be 'X').
Here you can find some more information regarding SAP and Unicode:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0928b44-b811-2a10-7599-cc4bb6585c46
https://websmp107.sap-ag.de/~sapidb/012003146900000190272007E/ConversionErrors.htm (this is OSS, so will require your OSS userid and password)
Unicode File Handling in ABAP -
What table column size is needed to accommodate Unicode characters
Hi guys,
I have encountered something which I don't understand, and I hope the gurus here will shed some light on it for me.
I am running a non-Unicode database and I decided to port the data over to a Unicode database.
So
1) I export the schema out --> data.dmp
2) then I create the Unicode database + create a user
3) then I import the schema into the database
During the imp I can see that character conversion will take place.
While importing the data into the Unicode database I encountered an error saying a column size is too small,
so I went to check the row that has the column value that is too large to fit in the table.
I realised it has some [][][][] data... so I went to the live non-Unicode database and found the row. Indeed it has some [][][][] rubbish data, which makes me feel that someone has inserted a language other than English into the database.
But regardless,
I went to modify the column size to a larger size, and now the row can be accommodated. However, the data is still [][][].
q1) Why so? Since my database is now Unicode, during the import this column data [][][] should have been converted to Unicode already, but I still have problems seeing what language it is.
q2) Why can the [][][] data fit into the table column size on the non-Unicode database, but on the Unicode database the same table column size needs to be increased?
q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same size as the table column size.
E.g Name VARCHAR2(5);
value - 'Peter'
Now, if converting to Unicode, characters will take 2 bytes instead of 1; isn't 'Peter' going to take up 10 bytes (2 bytes per character)?
Why is it that I can still accommodate the data in the table column?
q4) Now, with the Unicode database up, I will be supporting characters from different languages around the world. How big should I set my column size? The longest a name can get? Or?
Thanks guys!
/// does Oracle automatically "look" at each individual character in a word and determine how many bytes it should take?
Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means the keyboard driver knows that when a key with a letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another encoding. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another encoding. In the case of translation from WE8MSWIN1252 to AL32UTF8, Oracle will know that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For Chinese characters, for example when converting them from ZHS16GBK, Oracle knows the resulting sequence as well, and this sequence is usually 3 bytes.
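As a quick illustration of the 225 -> 195,161 example above, a minimal Java sketch (assuming a modern JDK; not part of the original posts):
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class AccentBytes {
    public static void main(String[] args) {
        // "á" is code 225 in WE8MSWIN1252; encoded as UTF-8 (AL32UTF8) it becomes the two bytes 195, 161.
        byte[] utf8 = "\u00E1".getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.toString(utf8)); // prints [-61, -95], i.e. 195 and 161 as unsigned bytes
    }
}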
This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string, and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses information it has about the structure of the character set. Multibyte character sets are of two type: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for Unicode UTF-16BE encoding. It supports this character set for NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes per each character code. To find the next code, 2 is simply added to the string pointer.
All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions in the first byte before the first 0-bit tells how many bytes a character has. 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean 3 bytes, 4 bits mean four bytes. 1 bit (e.g. the bit sequence 10) starts each second, third or fourth byte of a code.
In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. A byte below 128 means a one-byte code; bytes above 128 begin a two- or three-byte sequence, depending on the range.
There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences but Oracle supports them only in limited number of places.
/// e.g. I have a word with 4 characters. The 3rd character will be a Chinese character... the rest are ASCII characters
/// will Oracle use 4 bytes per character regardless of whether it is ASCII (English) or Chinese?
No.
/// or will it use 1 byte per English character and then 3 bytes for the Chinese character? e.g. total - 6 bytes taken
It will use 6 bytes.
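To make the arithmetic concrete, a small Java sketch (the Chinese character U+4E2D is just an example; assumes a modern JDK):
import java.nio.charset.StandardCharsets;

public class Utf8Length {
    public static void main(String[] args) {
        String word = "abc\u4E2D"; // three ASCII letters plus one Chinese character
        int bytes = word.getBytes(StandardCharsets.UTF_8).length;
        System.out.println(bytes); // prints 6: 1 byte each for a, b, c and 3 bytes for the Chinese character
    }
}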
Thnx,
Sergiusz -
Insert Unicode Characters Into Oracle 8.1.5
Hello,
First off, here are the specs:
Oracle 8.1.5
JDK 1.2.1
Oracle8i 8.1.6.2.0 JDBC Drivers for use with JDK 1.2.x for Solaris
I'm running into a problem with inserting Unicode characters into Oracle via the JDBC driver. As you can see above, I am using the Oracle 8.1.6.2.0 JDBC driver because it is the first driver that supports JDK 1.2.x. So I think I should be okay.
I can retrieve data with special characters from Oracle by calling the getBytes() method on the ResultSet, with all special characters intact. I am using getBytes because calling getString() would throw the following exception: "java.sql.SQLException(): Fail to convert between UTF8 and UCS2: failUTF8Conv". However, the value that I just retrieved, or any other data with special characters (Unicode) that I try to insert into Oracle, does not get converted properly.
What appears to be happening is that data with special characters (Unicode) is not being treated as a single double-byte character, but rather as two single-byte characters. Thus, Rückschlagventil becomes RC<ckschlagventil once it is inserted. (Hopefully, my example will be rendered properly.)
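For reference, a minimal sketch of the getBytes() work-around described above; the table and column names here are made up for illustration:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class RawUtf8Read {
    // Decode the raw column bytes explicitly instead of relying on the driver's
    // UTF8 <-> UCS2 conversion (hypothetical table "parts", column "description").
    static void printDescriptions(Connection conn) throws Exception {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT description FROM parts");
        while (rs.next()) {
            byte[] raw = rs.getBytes(1);             // bytes exactly as stored in the UTF8 column
            String value = new String(raw, "UTF-8"); // explicit decode
            System.out.println(value);
        }
        rs.close();
        stmt.close();
    }
}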
According to all documentation that I have found, the JDBC driver should not have any problem with converting UCS2 Java Strings to Oracle's UTF8 character set.
I have set Oracle's NLS_NCHAR_CHARACTERSET to UTF8. I am also setting the environment variable NLS_LANG to AMERICAN_AMERICA.UTF8. Perhaps there is some other environment setting in which I am missing?
Any help would be appreciated,
Christian
Import has a lot of options, so it depends on what you want to do.
C:\> imp help=y
will show you all possible options. An example of full import :
C:\> imp <username>/<password>@<TNS alias> file=<DMP file> full=y log=<LOG file>
Message was edited by:
Paul M.
...and there is always the documentation: http://download-uk.oracle.com/docs/cd/F49540_01/DOC/index.htm -
Unicode Characters in Label/JLabels
Hi All,
Does anyone know how and when Unicode escape sequences within a String get transformed into the characters they represent? I ask because I'm getting conflicting behaviour depending on whether the String is hard-coded or read from a file at runtime.
For instance, the following code works fine and produces a label on the GUI containing the infinity character:
String name = "100 to \u221E";
JLabel label = new JLabel(name);
However, if <name> is read from an XML file, the label produced shows "100 to \u221E" verbatim.
Has anyone else seen this effect?
Thanks in advance for any advice,
Andy Chamberlain
Thanks for that. If I understand correctly, is it therefore the case that by the time the JLabel constructor gets called, the String object ("name", in this case) already has any Unicode characters encoded within it?
Exactly. The compiled .class file already has the unicode characters in it; JLabel has nothing to do with it.
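If the file really does contain the literal six characters \u221E, they have to be decoded by hand at runtime; below is a small sketch of a hypothetical helper (not part of Swing or dom4j):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnicodeEscapes {
    // Turn literal backslash-u escape sequences (e.g. the six characters \\u221E)
    // read at runtime into the characters they name.
    static String decode(String s) {
        Matcher m = Pattern.compile("\\\\u([0-9a-fA-F]{4})").matcher(s);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("100 to \\u221E")); // prints "100 to ∞"
    }
}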
If so, then when debugging, any such characters must get decoded again back to ASCII when the value of "name" is inspected within the debugger environment (JDeveloper in this case).
Depends on the unicode-awareness of JDeveloper; I don't know anything about it.
And the finger would certainly then point to when the String was created by the XML parser (I'm using org.dom4j.io.SAXReader). I'll investigate this further.
If you have a text editor that can save a file in UTF-8, you could try saving the xml with the infinity symbol as plain text and specify the encoding of the file with <?xml encoding='UTF-8'?>... Or does your parser accept the &#some-decimal-number; way? -
Hello !
Can anyone please tell me how to avoid the " ? " when trying to display Unicode characters like '\u25A0', '\u25B0' etc. from the Geometric Shapes chart?
Perhaps there is some way to extend the support for as many coding schemes as needed in order to accommodate the desired characters.
My computer shows support for UTF-16, UTF-16BE, UTF-16LE, and UTF-8 but does not show the proper characters for many of the charts I have downloaded for use from www.unicode.org.
Thanks for helping.
Nadeem.
Depends on what you mean by "display" and "show".
If you are talking about a GUI then all you have to do is to use a font that can render those characters. If you are talking about the system console then you probably can't, as you have no control over the font used there.
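For the GUI case, a small sketch; "Segoe UI Symbol" is just an assumed example of an installed font that covers the characters in question:
import java.awt.Font;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class SymbolLabel {
    public static void main(String[] args) {
        String text = "\u25A0 \u25B0";
        Font font = new Font("Segoe UI Symbol", Font.PLAIN, 24); // assumed to be installed
        // canDisplayUpTo returns -1 when the font can render every character in the string
        System.out.println("font can render all: " + (font.canDisplayUpTo(text) == -1));
        JLabel label = new JLabel(text);
        label.setFont(font);
        JFrame frame = new JFrame("Geometric symbols");
        frame.getContentPane().add(label);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}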
My guess is that you are talking about the system console. Normally when a font can't render a character it displays a rectangular box instead. But when you convert the character to bytes using an encoding that doesn't understand that character, it gets converted to ? instead. That's what is happening to you. -
KeyPressed and unicode characters
Can anyone explain to me why no keyPressed / keyReleased events are generated, but only keyTyped, when I press Unicode characters such as:
� � � � � � � � � �
The "Alt Gr" key doesn't generated any kind of event too.
You can experiment with it using the KeyEventDemo.java source in the Java tutorial.
I'm using an Italian keyboard on Java 1.5 / Linux FC2.
Thanks in advance
If you're still looking for info, this may help.
(From the KeyEvent JavaDoc, http://java.sun.com/j2se/1.4.2/docs/api/java/awt/event/KeyEvent.html)
"Key typed" events are higher-level and generally do not depend on the platform or keyboard layout. They are generated when a Unicode character is entered, and are the preferred way to find out about character input. In the simplest case, a key typed event is produced by a single key press (e.g., 'a'). Often, however, characters are produced by series of key presses (e.g., 'shift' + 'a'), and the mapping from key pressed events to key typed events may be many-to-one or many-to-many. Key releases are not usually necessary to generate a key typed event, but there are some cases where the key typed event is not generated until a key is released (e.g., entering ASCII sequences via the Alt-Numpad method in Windows). No key typed events are generated for keys that don't generate Unicode characters (e.g., action keys, modifier keys, etc.). The getKeyChar method always returns a valid Unicode character or CHAR_UNDEFINED. For key pressed and key released events, the getKeyCode method returns the event's keyCode. For key typed events, the getKeyCode method always returns VK_UNDEFINED.
"Key pressed" and "key released" events are lower-level and depend on the platform and keyboard layout. They are generated whenever a key is pressed or released, and are the only way to find out about keys that don't generate character input (e.g., action keys, modifier keys, etc.). The key being pressed or released is indicated by the getKeyCode method, which returns a virtual key code. -
Oracle Discoverer Desktop Report output showing unicode characters
Hi,
The report output in Oracle Discoverer Desktop 4i is showing Unicode characters like this:
kara¿ah L¿MAK HOLD¿NG A.¿
We ran the same query in SQL and at that time the data showed correctly.
Please let me know if there are any language settings / NLS settings that need to be set.
Thanks in advance.
Hi
Let me give you some background. In the Windows registry, every Oracle Home has a setting called NLS_LANG. This is the variable that controls, among other things, the numeric characters and the language used. The variable is made up of 3 parts. These are:
language_territory.characterset
Notice how there is an underscore character between the first two variables and a period between the last two. This is very important and must not be changed.
So, for example, most American settings look like this: AMERICAN_AMERICA.WE8MSWIN1252
The second variable, the territory, controls the default date, monetary, and numeric formats and must correspond to the name of a country. So if I wanted to use the Greek settings for numeric formatting, editing the NLS_LANG for Discoverer Desktop to this setting will do the trick:
AMERICAN_GREECE.WE8MSWIN1252
Can you please check your settings? Here's a workflow:
a) Open up your registry by running Regedit
b) Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE
c) Look for the Oracle Home corresponding to where Discoverer Desktop is installed. It's probably called KEY_BIToolsHome_1
d) Clicking on the Oracle Home will display all of the variables
e) Take a look at the variable called NLS_LANG - if it is correct Exit the registry
f) If it's not correct, please right-click on it and from the pop-up select Modify
g) Change the variable to the right setting
h) Click the OK button to save your change
i) Exit the registry
Best wishes
Michael -
CRVS2010 Beta - Cannot export report to PDF with unicode characters
My report has some Unicode data (Chinese); it can be previewed properly in the Windows Forms report viewer. However, if I export the report document to a PDF file, the Unicode characters in the exported file are all displayed as a square.
In the Crystal Reports 2008 R2 version, it can export the Chinese characters to PDF when I select a Chinese font in the report. But the VS2010 beta cannot export the Chinese characters even when a Chinese font is selected.
Barry, what is the specific font you are using?
The below is a reformatted response from Program Management:
Using a non-Chinese font with Unicode (Chinese) characters, the issue is reproducible when the Arial font is used in the Unicode characters field. After changing the font of the Unicode field to SimSun (a Chinese font named 宋体 in the report), the problem is solved in both Cortez and CR.
Ludek -
How do I get unicode characters out of an oracle.xdb.XMLType in Java?
The subject says it all. Something that should be simple and error free. Here's the code...
String xml = new String("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<x>\u2026</x>\n");
XMLType xmlType = new XMLType(conn, xml);
conn is an oci8 connection.
How do I get the original string back out of xmlType? I've tried xmlType.getClobVal() and xmlType.getString() but these change my \u2026 to 191 (question mark). I've tried xmlType.getBlobVal(CharacterSet.UNICODE_2_CHARSET).getBytes() (and substituted CharacterSet.UNICODE_2_CHARSET with a number of different CharacterSet values), but while the unicode characters are encoded correctly the blob returned has two bytes cut off the end for every unicode character contained in the original string.
I just need one method that actually works.
I'm using Oracle release 11.1.0.7.0. I'd mention NLS_LANG and file.encoding, but I'm setting the PrintStream I'm using for output explicitly to UTF-8 so these shouldn't, I think, have any bearing on the question.
Thanks for your time.
Stryder, aka Ralph
I created an analogous test case and executed it with DB 11.1.0.7 (Linux x86), which seems to work fine.
Please refer to the execution procedure below:
* I used AL32UTF8 database.
1. Create simple test case by executing the following SQL script from SQL*Plus:
connect / as sysdba
create user testxml identified by testxml;
grant connect, resource to testxml;
connect testxml/testxml
create table testtab (xml xmltype) ;
insert into testtab values (xmltype('<?xml version="1.0" encoding="UTF-8"?>'||chr(10)||'<x>'||unistr('\2026')||'</x>'||chr(10)));
-- chr(10) is a linefeed code.
commit;
2. Create QueryXMLType.java as follows:
import java.sql.*;
import oracle.sql.*;
import oracle.jdbc.*;
import oracle.xdb.XMLType;
import java.util.*;
public class QueryXMLType
{
  public static void main(String[] args) throws Exception, SQLException
  {
    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
    OracleConnection conn = (OracleConnection) DriverManager.getConnection("jdbc:oracle:oci8:@localhost:1521:orcl", "testxml", "testxml");
    OraclePreparedStatement stmt = (OraclePreparedStatement) conn.prepareStatement("select xml from testtab");
    ResultSet rs = stmt.executeQuery();
    OracleResultSet orset = (OracleResultSet) rs;
    while (rs.next())
    {
      XMLType xml = XMLType.createXML(orset.getOPAQUE(1));
      System.out.println(xml.getStringVal());
    }
    rs.close();
    stmt.close();
  }
}
3. Compile QueryXMLType.java and execute QueryXMLType.class as follows:
export PATH=$ORACLE_HOME/jdk/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export CLASSPATH=.:$ORACLE_HOME/jdbc/lib/ojdbc5.jar:$ORACLE_HOME/jlib/orai18n.jar:$ORACLE_HOME/rdbms/jlib/xdb.jar:$ORACLE_HOME/lib/xmlparserv2.jar
javac QueryXMLType.java
java QueryXMLType
-> Then you will see that the U+2026 character (horizontal ellipsis) is properly output.
My Java code came from "Oracle XML DB Developer's Guide 11g Release 1 (11.1) Part Number B28369-04" with some modification of:
- Example 14-1 XMLType Java: Using JDBC to Query an XMLType Table
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb11jav.htm#i1033914
and
- Example 18-23 Using XQuery with JDBC
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/xdb_xquery.htm#CBAEEJDE -
Scanning files for non-unicode characters.
Question: I have a web application that allows users to take data, enter it into a webapp, and generate an XML file on the server's filesystem containing the entered data. The code of this application cannot be altered (outside vendor). I have a second webapp, written by yours truly, that has to parse through these XML files to build a dataset used elsewhere.
Unfortunately I'm having a serious problem. Many of the web application's users are apparently cutting and pasting their information from other sources (frequently MS Word) and in the process are embedding non-Unicode characters in the XML files. When my application attempts to open these files (using DocumentBuilder), I get a SAXParseException "Document root element is missing".
I'm sure others have run into this sort of thing, so I'm trying to figure out the best way to tackle this problem. Obviously I'm going to have to start pre-scanning the files for invalid characters, but finding an efficient method for doing so has proven to be a challenge. I can load the file into a String array and search it character by character, but that is both extremely slow (we're talking thousands of LONG XML files), and it would require that I predefine the invalid characters (so anything new would slip through).
I'm hoping there's a faster, easier way to do this that I'm just not familiar with or haven't found elsewhere.
"require that I predefine the invalid characters" - This isn't hard to do and it isn't subject to change. The XML recommendation tells you here exactly what characters are valid in XML documents.
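For example, a rough sketch of such a pre-scan against the XML 1.0 Char production (valid code points are #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD and #x10000-#x10FFFF):
public class XmlCharCheck {
    // True if the code point is allowed by the XML 1.0 Char production.
    static boolean isValidXmlChar(int cp) {
        return cp == 0x9 || cp == 0xA || cp == 0xD
                || (cp >= 0x20 && cp <= 0xD7FF)
                || (cp >= 0xE000 && cp <= 0xFFFD)
                || (cp >= 0x10000 && cp <= 0x10FFFF);
    }

    // Returns the index of the first invalid character in s, or -1 if the text is clean.
    static int firstInvalid(String s) {
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            if (!isValidXmlChar(cp)) {
                return i;
            }
            i += Character.charCount(cp);
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstInvalid("ok text"));          // -1
        System.out.println(firstInvalid("bad \u0008 text"));  // 4
    }
}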
However if your problems extend to the sort of case where users paste code including the "&" character into a text node without escaping it properly, or they drop in MS Word "smart quotes" in the incorrect encoding, then I think you'll just have to face up to the fact that allowing naive users to generate uncontrolled wannabe-XML documents is not really a viable idea. -
The JSP WYSIWYG Editor can't display most Unicode characters
Eclipse has supported display of Unicode characters very well since version 3. However, NitroX can't display most of them. Besides characters from other non-Western European languages, NitroX can't even display characters that it's supposed to support; at least, that's what I think. I mean, when we type the & character, we get the whole list of character entity references, amongst which we find ∧ ∇ ∨ →, but these are not displayed correctly. And many more are in this case.
Is this a feature or a bug? By "feature", I mean that we can't get them in the free version.
I have exactly the same problem. I support web pages for 25 European countries. I've not seen NitroX support any Unicode characters. Until M7 answers this question or fixes the editor, you can use the Eclipse editor to see and edit the text.
-
Is there a list of Unicode characters that can be used in Acrobat bookmarks?
I can add Greek characters to Acrobat bookmarks using hexadecimal strings. For example, to print a lower case gamma symbol I use <FEFF03B3>. FEFF is the required Unicode flag and 03B3 is the Unicode code for gamma. This works fine. However, there are no Unicode entries for Greek characters in the PDF 32000-1:2008 PDF Specification manual. Table D.2 - PDFDocEncoding Character Set on page 656 lists Unicode characters, and these also work when added to bookmarks, but no Greek codes are in this table. Since I can successfully use Greek Unicode characters in the range of 0x0391 - 0x03CE, and these characters are not listed in the PDF manual, I am assuming there are additional Unicode characters that will work in bookmarks. Therefore, I am looking for a complete list of Unicode characters that can be used in Acrobat's bookmarks. Does such a list exist?
Thank you for the response.
I'm sorry to hear there is no list available. I'm building the Acrobat bookmarks automatically. The input data contains entity codings (for example, a lower case Greek gamma is coded as &#947;) and I was hoping to be able to just pass these through with an automatic conversion to a hexadecimal string (for example <FEFF03B3>). If I had a list of valid Unicode characters that will display in an Acrobat bookmark, I could validate each entity before the conversion and catch the ones that won't display correctly. I know these types of characters are out there because I have already come across them. For example, a superscript 5 (0x2075) displays fine in MS Word but shows as a white box in a bookmark. Now I'll need to proof the output PDFs and look for white boxes in the bookmarks so that I can build my list of Unicode characters that do not work in Acrobat bookmarks.
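For what it's worth, the decimal-entity-to-bookmark-string conversion itself is a one-liner; a small sketch (BMP code points only):
public class BookmarkHex {
    // Convert a decimal character reference value to the <FEFFxxxx> form used in bookmarks.
    // Note: this only handles BMP code points (U+0000..U+FFFF).
    static String toBookmarkHex(int codePoint) {
        return String.format("<FEFF%04X>", codePoint);
    }

    public static void main(String[] args) {
        System.out.println(toBookmarkHex(947));    // <FEFF03B3>  (Greek small letter gamma)
        System.out.println(toBookmarkHex(0x2075)); // <FEFF2075>  (superscript five)
    }
}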
Again, thanks for your help. -
Special Unicode characters in RSS XML
Hi,
I'm using an adapted version of Husnu Sensoy's solution (http://husnusensoy.wordpress.com/2007/11/17/o-rss-11010-on-sourceforgenet/ - thanks, Husnu) to consume RSS feeds in an Apex app.
It works a treat, except in cases where the source feeds contain special Unicode characters such as [right double quotation mark - 0x92 0x2019] (thank you, http://www.nytimes.com/services/xml/rss/nyt/GlobalBusiness.xml)
These cases fail with
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character 8217 (U+2019) Error at line 19
Any ideas on how to translate these characters, or replace them with something innocuous (UNISTR?), so that the XML transformation succeeds?
Many thanks,
jd
The relevant code snippet is:
procedure get_rss
( p_address in httpuritype
, p_rss out t_rss
)
is
l_sqlerrm varchar2(4000); -- declaration needed by the exception handlers below
function oracle_transformation
return xmltype is
l_result xmltype;
begin
select xslt
into l_result
from rsstransform
where rsstransform = 0;
return l_result;
exception
when no_data_found then
raise_application_error(-20000, 'Transformation XML not found');
when others then
l_sqlerrm := sqlerrm;
insert into errorlog...
end oracle_transformation;
begin
xmltype.transform(p_address.getXML()
,oracle_transformation
).toobject(p_rss);
exception
when others then
l_sqlerrm := sqlerrm;
insert into errorlog....
end get_rss;
My environment:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
Environment:
Oracle 10g R2 x86 10.2.0.4 on RHEL4U8 x86.
db NLS_CHARACTERSET WE8ISO8859P1
After following this note:
Changing US7ASCII or WE8ISO8859P1 to WE8MSWIN1252 [ID 555823.1]
the nls_charset was changed:
Database character set WE8ISO8859P1
FROMCHAR WE8ISO8859P1
TOCHAR WE8MSWIN1252
And the error:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character 8217 (U+2019)
was no longer generated.
A Unicode database charset was not required in this case.
hth.
Paul -
Direct Execution of query having Unicode Characters
Hi All,
In my application I am firing a SELECT query having Unicode characters in the WHERE clause, under a condition like '%%',
to an Oracle 10g DB from an interface written in VC6.0...
The application functionality is working fine for ANSI characters and it gets the result of the SELECT properly.
But in the case of Unicode characters, in VC it says 'No Data Found'.
I know where the exact problem is in my code, but I am not getting the exact solution for resolving my issue...
Herewith I am adding my code snippet, with comments on what I understand and what I want to understand...
The DBPROCESS structure used in the functions:
typedef struct
{
    HENV hEnv;
    HDBC hDbc;
    HSTMT hStmt;
    char CmdBuff[8192];
    char RpcParamName[255];
    SQLINTEGER SpRetVal;
    SQLINTEGER ColIndPtr[255];
    SQLINTEGER ParamIndPtr[255];
    SQLPOINTER pOutputParam;
    SQLUSMALLINT CurrentParamNo;
    SQLUSMALLINT OutputParamNo;
    SQLUSMALLINT InputParamCtr;
    SQLINTEGER BatchStmtNo;
    SQLINTEGER CmdBuffLen;
    short CurrentStmtType;
    SQLRETURN LastStmtRetcode;
    SQLCHAR SqlState[10];
    int ShowDebug;
    SQLCHAR* ParameterValuePtr;
    int ColumnSize;
    DBTYPE DatabaseType;
    DRVTYPE OdbcDriverType;
    BLOCKBIND *ptrBlockBind;
} DBPROCESS;
BOOL CDynamicPickList::GetResultSet(DBPROCESS *pDBProc, bstrt& pQuery, short pNumOdbcBindParams, COdbcBindParameter pOdbcBindParams[], CQueryResultSet& pQueryResultSet)
{
    int lRetVal,
        lNumRows;
    bstrt lResultSet;
    wchar_t lColName[256];
    SQLUINTEGER lColSize;
    SQLSMALLINT lColNameLen,
                lColDataType,
                lColNullable,
                lColDecDigits,
                lNumResultCols;
    wchar_t lResultRow[32][256];
    OdbcCmdW(pDBProc, (wchar_t *)pQuery); // Query is perfectly fine till this point; all the Unicode characters are preserved...
    if ( OdbcSqlExec(pDBProc) != SUCCEED )
    {
        LogAppError(L"Error In Executing Query %s", (wchar_t *)pQuery);
        return FALSE;
    }
Function OdbcCmdW:
// From this point on I have no idea what exactly is happening to the Unicode characters...
// Actually I have tried printing the query that gets stored in CmdBuff... it shows junk for Unicode characters...
// CmdBuff is a char type variable and hence must be showing junk for Unicode data
// I have also tried printing the hexadecimal of the query... I am not getting the proper output... But as far as I understand, the hexadecimal value is perfect & preserved
// After the execution of this function the call goes to OdbcSqlExec where the actual execution of the query takes place on the DB
SQLRETURN OdbcCmdW( DBPROCESS *p_ptr_dbproc, WCHAR *p_sql_command )
{
    char *p_sql_commandMBCS;
    int l_ret_val;
    int l_size = wcslen(p_sql_command);
    int l_org_length,
        l_newcmd_length;
    p_sql_commandMBCS = (char *)calloc(sizeof(char) * MAX_CMD_BUFF, 1);
    l_ret_val = WideCharToMultiByte(
        CP_UTF8,
        NULL,                      // performance and mapping flags
        p_sql_command,             // wide-character string
        -1,                        // number of chars in string
        (LPSTR)p_sql_commandMBCS,  // buffer for new string
        MAX_CMD_BUFF,              // size of buffer
        NULL,                      // default for unmappable chars
        NULL                       // set when default char used
    );
    l_org_length = p_ptr_dbproc->CmdBuffLen;
    l_newcmd_length = strlen(p_sql_commandMBCS);
    p_ptr_dbproc->CmdBuff[l_org_length] = '\0';
    if( l_org_length )
        l_org_length++;
    if( (l_org_length + l_newcmd_length) >= MAX_CMD_BUFF )
        if( l_org_length == 0 )
            OdbcReuseStmtHandle( p_ptr_dbproc );
        else
            strcat(p_ptr_dbproc->CmdBuff, " ");
    l_org_length += 2;
    strcat(p_ptr_dbproc->CmdBuff, p_sql_commandMBCS);
    p_ptr_dbproc->CmdBuffLen = l_org_length + l_newcmd_length;
    if (p_sql_commandMBCS != NULL)
        free(p_sql_commandMBCS);
    return( SUCCEED );
}
Function OdbcSqlExec:
// SQLExecDirect requires data of unsigned char type. Thus the above process is valid...
// But I am not getting what the exact problem is...
SQLRETURN OdbcSqlExec( DBPROCESS *p_ptr_dbproc )
{
    SQLRETURN l_ret_val;
    SQLINTEGER l_db_error_code = 0;
    int i, l_occur = 1;
    char *token_list[50][2] =
    { /*"to_date(","convert(datetime,",
      "'yyyy-mm-dd hh24:mi:ss'","1",*/
      "nvl","isnull" ,
      "to_number(","convert(int,",
      /*"to_char(","convert(char,",*/
      /*"'yyyymmdd'","112",
      "'hh24miss'","108",*/
      "sysdate", "getdate()",
      "format_date", "dbo.format_date",
      "format_amount", "dbo.format_amount",
      "to_char","dbo.to_char",
      "to_date", "dbo.to_date",
      "unique","distinct",
      "\0","\0"};
    char *l_qry_lwr;
    l_qry_lwr = (char *)calloc(sizeof(char) * (MAX_CMD_BUFF), 1);
    l_ret_val = SQLExecDirect( p_ptr_dbproc->hStmt,
                               (SQLCHAR *)p_ptr_dbproc->CmdBuff,
                               SQL_NTS );
    switch( l_ret_val )
    {
    case SQL_SUCCESS :
    case SQL_NO_DATA :
        ClearCmdBuff( p_ptr_dbproc );
        p_ptr_dbproc->LastStmtRetcode = l_ret_val;
        if (l_qry_lwr != NULL)
            free(l_qry_lwr);
        return( SUCCEED );
    case SQL_NEED_DATA :
    case SQL_ERROR :
    case SQL_SUCCESS_WITH_INFO :
    case SQL_STILL_EXECUTING :
    case SQL_INVALID_HANDLE :
I do not see much of an issue in the code... The process flow is quite valid...
But now I am not getting whether:
1) storing the string in CmdBuff is creating the issue,
2) SQLExecDirect is creating an issue (and some other function could be used here), or
3) the ODBC driver is creating an issue and needs some client setting to be done (though I have tried some permutations and combinations)...
Any kind of help would be appreciated,
Thanks & Regards,
Pratik
Edited by: prats on Feb 27, 2009 12:57 PM
Hey Sergiusz,
You were bang on target...
Though it took some time for me to resolve the issue...
To use SQLExecDirectW I need my query as SQLWCHAR *, which is stored in a char * in my case...
So I converted the incoming query using a MultiByteToWideChar conversion with the code page CP_UTF8 and
then passed it on to SQLExecDirectW...
It solved my problem
Thanks,
Pratik...
Edited by: prats on Mar 3, 2009 2:41 PM