Unicode in Oracle 8i and Oracle XE

I’m developing a program that reads the database objects and data from an Oracle 8.1.7 database and stores everything as XML files; the XML files are then read on a different computer, the same database is built on Oracle XE, and the data is imported into it.
Everything has worked fine so far, except for an issue with some columns of type VARCHAR2: when importing the data I get this error (ORA-12899: value too large for column … etc.).
The data in these columns is stored as Unicode in both Oracle 8i and XE, so I don’t know why some of it no longer fits in the new XE database.
Does anyone have a clue why the same data requires more bytes in Oracle XE than it did in Oracle 8i?
Note: I'm not using any Oracle tools to import or export; I'm using a program that I developed.

This could be related to NLS_LENGTH_SEMANTICS, similar to the problem described in NLS_LENGTH_SEMANTICS parameter...
It seems that Oracle 8i did not have the NLS_LENGTH_SEMANTICS parameter and most probably used byte semantics only. To my mind, Oracle 8i had quite limited support for Unicode, so the behaviour may well have changed between versions.
Gints Plivna
http://www.gplivna.eu
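The byte-versus-character distinction behind this can be shown in plain Java (a minimal sketch; the column size is hypothetical). If the 8i database used a single-byte character set and XE uses a UTF-8 one, the same string needs more bytes even though it has the same number of characters, so a VARCHAR2 column sized in bytes can overflow:

```java
import java.nio.charset.StandardCharsets;

public class ByteSemanticsDemo {
    public static void main(String[] args) {
        String value = "Müller"; // 6 characters
        int chars = value.length();
        int utf8Bytes = value.getBytes(StandardCharsets.UTF_8).length;
        // In a single-byte charset this value needs 6 bytes; in UTF-8 it needs 7,
        // because 'ü' encodes as two bytes. A hypothetical VARCHAR2(6 BYTE) column
        // that held this value in the old database would raise ORA-12899 in the new one.
        System.out.println(chars + " chars, " + utf8Bytes + " UTF-8 bytes");
    }
}
```

Declaring the XE columns with character semantics, e.g. VARCHAR2(6 CHAR), or setting NLS_LENGTH_SEMANTICS=CHAR before creating them, is the usual way around ORA-12899 in this situation.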

Similar Messages

  • XSU Problem:Error-- Cannot map Unicode to Oracle character

    Hi, I am using XSU to get the result set from the database (Oracle 9.2.0.6.0) as XML. When I query data from some columns and get the XML string, I get the error "Cannot map Unicode to Oracle character". The database charset is US7ASCII.
    One column stores Chinese text in this US7ASCII database. When the XML result string contains
    that column, there is "oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character". But if the XML result string does not contain that column, it runs fine.
    The program uses these libraries: ojdbc14.jar, xmlparserv2.jar, xdb.jar, nls_charset12.jar, xsu12.jar.
    The program code:
    import java.sql.Connection;
    import java.sql.SQLException;
    import oracle.xml.sql.query.OracleXMLQuery;
    import org.springframework.jdbc.datasource.DriverManagerDataSource; // assumed package
    public class OracleXmlParse {
        public static void main(String[] args) {
            try {
                DriverManagerDataSource dataSource = new DriverManagerDataSource("oracle.jdbc.driver.OracleDriver",
                        "jdbc:oracle:thin:@168.1.1.136:1521:imis", "ims", "ims");
                Connection conn = dataSource.getConnection();
                String selectSQL = "select AREA_CODE,AREA_NAME,REGION_CODE,AREA_NAME_CN from CDM_AREA";
                OracleXMLQuery query = new OracleXMLQuery(conn, selectSQL);
                query.setEncoding("UTF-8");
                String str = query.getXMLString();
                System.out.println(str);
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
    Exception:
    Exception in thread "main" oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character.
         at oracle.xml.sql.core.OracleXMLConvert.getXML(OracleXMLConvert.java:1015)
         at oracle.xml.sql.query.OracleXMLQuery.getXMLString(OracleXMLQuery.java:267)
         at oracle.xml.sql.query.OracleXMLQuery.getXMLString(OracleXMLQuery.java:221)
         at oracle.xml.sql.query.OracleXMLQuery.getXMLString(OracleXMLQuery.java:198)
         at procedure.OracleXmlParse.main(OracleXmlParse.java:34)
    The column that stores Chinese is AREA_NAME_CN. When selectSQL is "select AREA_CODE,AREA_NAME,REGION_CODE from CDM_AREA", the program works fine.
    Please help.
    Message was edited by:
    user542404

    So, What is the solution ? Is there something I can do in my code ? My program gives the exception and stops. I am not even interested to fetch the data, which are giving this error.
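    The underlying problem is most likely that US7ASCII simply has no code points for Chinese, so the Chinese bytes in that column are not valid US7ASCII data and XSU cannot map them back to database characters. A minimal Java sketch of that lossy mapping (this illustrates the charset limitation, not the XSU internals):

```java
import java.nio.charset.StandardCharsets;

public class CharsetMismatchDemo {
    public static void main(String[] args) {
        // US7ASCII (Java's US-ASCII) has no mapping for Chinese characters,
        // so encoding is lossy: unmappable characters become '?'.
        String chinese = "\u4E2D\u6587"; // "Chinese" in Chinese
        byte[] ascii = chinese.getBytes(StandardCharsets.US_ASCII);
        String roundTrip = new String(ascii, StandardCharsets.US_ASCII);
        System.out.println(roundTrip); // prints "??"
    }
}
```

    The usual fixes are migrating the database to a Unicode character set (e.g. AL32UTF8) or moving the affected column to NVARCHAR2; keeping Chinese data in a US7ASCII column is not supported, so there is little the client code alone can do.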

  • XSQL exception displaying funky chars:Cannot map Unicode to Oracle character

    Hi.
    I'm using XSQL Servlet to serve XML from a 9.2 database. The varchar columns I'm trying to display have non-ASCII characters in them (Spanish enye, curly quotes, etc.). The database's character encoding is WE8ISO8859P1, which handles these characters fine. Running a simple "select * from..." query, I get this error:
    oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character
    which seems odd considering it ought to be mapping an Oracle character to a Unicode character, not the other way around. Additionally, what's the problem? Unicode supports a large superset of WE8ISO8859P1.
    Any idea how I can get XSQL Servlet to play nice with these funky characters?
    Thanks,
    Andrew

    Update: still stuck...
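    One plausible cause worth checking (an assumption, not confirmed in this thread): curly quotes are not ISO-8859-1 characters at all; they only exist in windows-1252's 0x80–0x9F range. Data inserted as windows-1252 bytes into a WE8ISO8859P1 database decodes to control characters that cannot be mapped cleanly. A small Java check:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CurlyQuoteDemo {
    public static void main(String[] args) {
        String curly = "\u2019"; // RIGHT SINGLE QUOTATION MARK
        // ISO-8859-1 cannot encode U+2019; the encoder falls back to '?'.
        byte[] latin1 = curly.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println((int) latin1[0]); // 63, i.e. '?'
        // windows-1252 does have it, at byte 0x92.
        String decoded = new String(new byte[]{(byte) 0x92}, Charset.forName("windows-1252"));
        System.out.println(decoded.equals(curly)); // true
    }
}
```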

  • Single Code Page Unicode conversion when source and target are the same

    Hi everyone.  We have recently upgraded our non-unicode 4.7 to ECC6 non-unicode.  Now we have begun researching the next step to convert to Unicode.  I know we could have done the Combined Upgrade and Unicode Conversion but I wanted to do thorough testing after the upgrade to eliminate too many variables if problems arose.
    So, my question is...has anyone done the Unicode conversion when source and target servers are the same?  Most documentation I've seen recommends a system copy to another box first.  I did find one forum thread that gave a high level approach to doing the conversion with just one server and listed this:
    Step 1:- Run Tcode SPUMG to scan
    Step 2:- Export Database
    Step 3:- Drop Database
    Step 4:- Create New Database
    Step 5:- Import database
    Step 6:- Change to the Unicode kernel
    What are your thoughts?  Is it supported?  Am I wasting my time and should I just bite the bullet and buy yet another system?  Thanks in advance.
    -Anthony

    Theoretically that's possible, yes. However...
    > Step 5:- Import database
    > Step 6:- Change to the Unicode kernel
    the import must be done using the Unicode kernel.
    So basically you can
    - export the old system
    - uninstall system + database
    - start as if you were installing "from scratch" on a new box
    Markus

  • How to send Unicode to Oracle

    Hello,
    We have an intranet application that has been live for some time now. Our new requirement is to be able to store and view Japanese characters in some of the fields.
    We have Oracle 9.2.0.5 with the NLS charset UTF8. Our application server is WebLogic 6.1 SP6 on Solaris 5.8, 64-bit.
    We use both the thin driver (classes12.jar) and the WebLogic 6.1-provided OCI driver, in different places of our application. The database column intended to store Japanese characters is just a VARCHAR2, not NCHAR.
    I'm copying some Japanese characters from various websites and trying to input them into my application through the browser. We are able to store Japanese characters, but they occupy up to 8 bytes per character, whereas a UTF8 database should take at most 3 bytes per Japanese character. My requirement is to store up to 1300 Japanese characters in a VARCHAR2(4000) field. Currently it only stores about 500 Japanese characters; beyond that it throws "java.sql.SQLException: Inserted value too large for column".
    I have already consulted Oracle, and the reply I got is that it is not an Oracle problem: the application needs to be changed so that it feeds actual UTF8 codepoints to Oracle. On their suggestion I used the iSQLPlus interface and was able to store 1300 Japanese characters in the database by issuing an update statement. But the same data, when viewed from iSQLPlus, shows characters as follows: "& # 31258 ; & # 21481 ; & # 27470 ; & # 34507 ;" (I have added blanks so that your browser does not render it as Japanese). Hence they concluded that our application is NOT sending UTF8 codepoints but an HTML-based representation using "& #" followed by the Unicode number of the character. For Oracle this is not one character but a string of 8 characters, and hence it occupies 8 bytes. Please help me on how to feed UTF8 codepoints to Oracle.
    I have already tried putting NLS_LANG=AMERICAN_AMERICA.UTF8 in my WebLogic startup script. I also tried putting
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> in my JSPs. (This actually makes the problem even worse: it displays junk instead of Japanese when retrieving.)
    Are there any other special settings that I need to be able to store Unicode characters? Please help.
    Thanks
    Varma
    [email protected]
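    If data has already been stored as numeric character references like "& # 31258 ;", it can be repaired by decoding the references back into real characters before re-inserting them. A sketch of such a helper (the class and method names are mine, not from this thread):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NcrDecoder {
    private static final Pattern NCR = Pattern.compile("&#(\\d+);");

    // Replaces decimal numeric character references with the characters they name.
    static String decode(String s) {
        Matcher m = NCR.matcher(s);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            int codePoint = Integer.parseInt(m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(new String(Character.toChars(codePoint))));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // 31258 is U+7A1A: decoded, it is one character (3 bytes in UTF-8)
        // instead of the 8 ASCII characters of the reference itself.
        System.out.println(decode("&#31258;").length()); // 1
    }
}
```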

    Thanks, the following steps did the trick:
    1. I put the following info in weblogic.xml
    <charset-params>
    <input-charset>
    <resource-path>/*</resource-path>
    <java-charset-name>UTF8</java-charset-name>
    </input-charset>
    </charset-params>
    2. I added the weblogic.codeset=UTF8 for the WebLogic Oracle OCI connection pool
    3. I added the nls_charset12.zip in the CLASSPATH for the Oracle thin driver that I'm using.
    4. Added the <%@ page contentType="text/html; charset=utf-8"%> (Somehow the <jsp-param><encoding> tag does not work in weblogic.xml)

  • Inserting unicode into oracle

    Hi every one ,
    I need to know how to insert Unicode data (data in another language) into Oracle using Java. Can anyone give me a hint or a reference to a page?
    I would be thankful....!
    Regards,
    Ibrahim...!
    Message was edited by:
    Ibrahim_coder

    Firstly, I'm assuming that your database is running in a character set that supports Unicode text. If it's not, forget it: you ain't gonna put a square peg in a round hole. Check your NLS settings to confirm what character set your DB is running in.
    Fortunately for you, Java also natively supports Unicode, which means that you shouldn't have any problems, provided that you use Unicode character sets throughout your app and don't use any Reader classes that assume a binary or Latin charset.
    Where is the data that you want to enter coming from? Keyboard? File? Web? If it is the web, you will also need to ensure that the web side of your app is set to a Unicode encoding.
    Once you have checked that everything runs in Unicode mode, you should be able to write Unicode data straight to the database without any other worries. If you need to enter Unicode characters manually, you can do it using the Unicode escape notation, i.e. "\u0000".
    If you come across any other specific problems, post them here and we can try to resolve them - answering vague generic questions such as this isn't easy.
    If you want to be very sure about the data you are putting into the DB, you could always use RAW or BLOB datatypes in preference to VARCHAR2 and CLOB datatypes; however, you would then have to do all the charset conversion yourself.
    HTH
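    Following on from the escape-character tip above, a minimal sketch: Java string literals written with \u escapes are ordinary Unicode strings, ready to bind as parameters. The table and column names below are hypothetical, and the JDBC part is left commented since it needs a live connection:

```java
import java.nio.charset.StandardCharsets;

public class UnicodeInsertDemo {
    public static void main(String[] args) {
        // An Arabic greeting written with unicode escapes:
        String arabic = "\u0645\u0631\u062D\u0628\u0627";
        System.out.println(arabic.length());                                // 5 chars
        System.out.println(arabic.getBytes(StandardCharsets.UTF_8).length); // 10 bytes

        // With a live connection you would bind it as a parameter, e.g.:
        // PreparedStatement ps = conn.prepareStatement(
        //         "INSERT INTO greetings (text) VALUES (?)");
        // ps.setString(1, arabic);
        // ps.executeUpdate();
    }
}
```

    Binding via setString, rather than concatenating the text into the SQL string, lets the driver handle the charset conversion.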

  • How to store Unicode in Oracle database ?

    I have a J2EE application and currently we are storing the Unicode data in &#xxx; format.
    The browser can display this properly but the Java Swing application cannot, so I need to convert this to \uxxxx format.
    Is it advisable to store the data in the DB in the \uxxxx format or the &#xxx; format? Advantages? Disadvantages?
    The database is setup for UTF-8.
    Thanks

    I'm not really sure what you are asking; the conversion from whatever character set to Unicode should happen the same way whether you think of the code point in decimal or hex. What is the insert statement or method you are using to get the data inserted into the database?
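    Both formats are just ASCII spellings of the same code point; the usual recommendation (my suggestion, not from this thread) is to store the actual character - the database is UTF-8, so it can hold it - and produce &#xxx; or \uxxxx only in whichever presentation layer needs it. A small illustration:

```java
public class NotationDemo {
    public static void main(String[] args) {
        int codePoint = 0xE9;                                   // é
        String htmlForm = "&#" + codePoint + ";";               // "&#233;"  (what the browser parses)
        String javaForm = String.format("\\u%04X", codePoint);  // "\u00E9" (what Swing-side code parses)
        char real = (char) codePoint;                           // the actual character
        // Storing the real character means one well-defined encoding step on
        // output, instead of two incompatible text representations in the DB.
        System.out.println(htmlForm + " " + javaForm + " " + real);
    }
}
```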

  • How to convert from UNICODE (UTF16) to UTF8 and vice-versa in JAVA.

    Hi
    I want to insert a string in UTF16 format into the database. How do I convert from UTF16 to UTF8 and vice versa in Java? What type must the database field be? Do I need any special setup for the database (Oracle 8i)?
    thanks
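    To answer the conversion part directly: Java strings are UTF-16 internally, so "converting" is just encoding to or decoding from bytes in the desired charset. A minimal sketch:

```java
import java.nio.charset.StandardCharsets;

public class Utf16Utf8Demo {
    public static void main(String[] args) {
        String text = "caf\u00E9";                                // UTF-16 inside the JVM
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);      // encode to UTF-8
        byte[] utf16 = text.getBytes(StandardCharsets.UTF_16BE);  // explicit UTF-16 bytes
        String back = new String(utf8, StandardCharsets.UTF_8);   // decode UTF-8 back
        System.out.println(back.equals(text));                    // true
        System.out.println(utf8.length + " vs " + utf16.length);  // 5 vs 8
    }
}
```

    On the database side, a VARCHAR2 column in a UTF8-charset database (or an NVARCHAR2 column) is appropriate; with the 8i-era thin driver you may also need the NLS charset classes (nls_charset12.jar) on the classpath.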

    I'm not sure if this is the correct topic, but we are having problems accessing our Japanese data stored in UTF-8 in our Oracle database using the JDBC thin driver. The data is submitted and extracted correctly using ODBC drivers, but inspection of the same data retrieved from the GetString() call using the JDBC thin driver shows frequent occurrences of bytes like "FF", which are not expected in either UTF8 or UCS2. My understanding is that accessing UTF8 in Java should involve NO NLS translation, since we are simply going from one Unicode encoding to another.
    We are using Oracle version 8.0.4.
    Can you tell me what we are doing wrong?

  • Unicode export:Table-splitting and package splitting

    Hi SAP experts,
    I know there are lot of forums related to this topic, but I have some new questions and hence posting a new thread.
    We are in the process of doing a Unicode conversion in our landscape (CRM 7.0 system based on NW 7.01), running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, so we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export-import.
    However, we have some doubts whether this table-splitting has actually worked in our scenario, as the export executed for nearly 28 hours.
    The steps followed by us :
    1.) Doing the export preparation using SAPINST
    2.) Doing table-splitting preparation, by creating a table input file having entries in the format <tablename>%<No. of splits>. Also, we have used the latest R3ta file and the dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) using SAPINST.
    3.) Starting with the export using SAPINST.
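    For reference, the table input file from step 2 is plain text with one <tablename>%<No. of splits> entry per line; using the table and split count discussed in this thread it would look like:

```
PRCD_CLUST%36
```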
    some observations and questions:
    1.) After completion of tablesplitting preparation, there were .WHR files that were generated for each of the tables in DATA directory of export location. However, how many .WHR files should be created and on what basis are they created?
    2.) I will take an example of a table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. We noticed that the first 28 .WHR files for this table had lots of records, but the last, 29th .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to complete, and lots of packages (around 56, of 1 GB each) were generated for this 29th split. Also, there was only one R3load running for this 29th split, generating packages one by one.
    3.) Our question here is: is there any rule of thumb for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified on the input screen when we use table-splitting?
    4.) Also, what exactly is the difference between table-splitting and package-splitting? Are they both effective together?
    If you have any questions and or need any clarifications and further inputs, please let me know.
    It would be great, if we could get any insights on this whole procedure, as we know a lot of things are taken care by SAPINST itself in the background, but we just want to be certain that we have done the right thing and this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all please ignore my very first response ... i have accidentally posted a response to some other thread...sorry for that 
    Now coming you your questions...
    > 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed. As I observed that, the packages also have entries of the tables that we decided to split. So, does package splitting or table-splitting override the other, and only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is...
    When I do the package split I choose packageLimit 1000 and also split out the tables (which I selected for table split) into separate packages (one package per table). I do it because that helps me track those tables.
    Once the above is done I follow it up with R3ta and the where-splitter for those tables.
    That is followed by the manual migration monitor to do the export/import. As mentioned in the previous reply, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
    > 2.) If you are well versed with table splitting procedure, could you describe maybe in brief the exact procedure?
    Well, I would say run R3ta (it will create multiple select queries), followed by the where-splitter (which will just split each of the selects into multiple WHR files).
    Best would be to go through some documentation on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
    > 3.) Also, I have mentioned about the version of R3ta and library file in my original post. Is this likely to be an issue?Also, is there a thumb rule to decide on the no.of splits for a table.
    The rule is to use executables of the kernel version supported by your system version. I am not well versed with 7.01 and 7.2 support; to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
    >1.) After completion of tablesplitting preparation, there were .WHR files that were generated for each of the tables in DATA directory of export location. However, how many .WHR files should be created and on what basis are they created?
    If you ask for 10 splits, you will get 10 splits, or in some cases 11; the reason might be the field it is using to split the table (the where clause). But I am not 100% sure about it.
    > 2) I will take an example of a table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. The first 28 .WHR files had lots of records, but the last, 29th .WHR file had only 1 record; the first 28 splits were created quite fast, but the 29th one took several hours and generated around 56 packages of 1 GB each, with only one R3load running for it.
    Not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". For example, suppose my table is split using company code and I have 10 company codes: even if I ask for 20 splits I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records. If you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety net which allows you to prepare the splits even before the downtime has started. These two WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
    > 3) Our question here is: is there any rule of thumb for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified on the input screen when we use table-splitting?
    I am not aware of any rule of thumb. For a first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any of the tables overshoots the window, you can then try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting those tables).
    Regards,
    Neel
    Edited by: Neelabha Banerjee on Nov 30, 2011 11:12 PM

  • Unicode normalisation form C and Apple Safari

    Technically, the World Wide Web Consortium specifies normalisation form C. This suggests that Apple Safari 3.2.1 should not establish equivalences (aliases, synonyms) to other coded characters if a coded character has no specified equivalences as per normalisation form C. Nonetheless, Apple Safari 3.2.1 does seem to establish equivalences.
    Anyone have any thoughts?
    /hh
    Reference:
    http://unicode.org/faq/normalization.html#6

    According to Asmus Freytag, the character LATIN SMALL LETTER LONG S was encoded because it involves a semantic distinction, as opposed to a stylistic distinction, in that writing system.
    It does have a compatibility decomposition to "s", however.
    If the World Wide Web Consortium specifies Form C, if no equivalences are established for LATIN SMALL LETTER LONG S under Form C, and if HTML is opened from disk in a browser, then LATIN SMALL LETTER S should not be successful as a synonym in a search string, it seems. Thus, Faust should not successfully retrieve Fauſt.
    I guess that would be correct under String Identity Matching as defined here?
    http://www.w3.org/TR/charmod-norm/#sec-IdentityMatching
    It seems true that Edit > Find uses a less restrictive matching system, but it's not clear to me that doing that is contrary to the standards in some way.
    It's not just Safari but all apps where you can find ſ by looking for s, right?
    and the Related Characters pane in the Character Palette should not show that LATIN SMALL LETTER S and LATIN SMALL LETTER LONG S are unconditional synonyms. Rather, they are conditional synonyms.
    I'm somewhat mystified as to exactly what the Related Characters pane is supposed to show, other than characters that look similar. I wonder how Apple chooses them? In any case I would hope the compatibility decomposition of a character would appear there.
    It seems to me that search services can compete by seeming to be more successful, and to seem more successful search services can establish equivalences which are broader than the equivalences that an author is entitled to expect based on specifications and standards.
    I think the w3c standards about this are mainly related to the form a text should have on the web, rather than what results search services should return (unless the services perhaps specify they are doing a "string identity match").
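    The long-s example can be checked directly with java.text.Normalizer: Form C establishes no equivalence between ſ and s, while the compatibility forms do fold it, which is presumably the looser matching that Edit > Find applies:

```java
import java.text.Normalizer;

public class LongSDemo {
    public static void main(String[] args) {
        String longS = "\u017F"; // LATIN SMALL LETTER LONG S
        // Canonical composition (Form C) establishes no equivalence with 's'...
        System.out.println(Normalizer.normalize(longS, Normalizer.Form.NFC));  // ſ
        // ...but the compatibility decomposition does fold it to 's'.
        System.out.println(Normalizer.normalize(longS, Normalizer.Form.NFKC)); // s
    }
}
```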

  • Rxvt-unicode 9.16-1 and click to launch url (urlLauncher)

    After upgrading to rxvt-unicode 9.16-1 I found out I can't click to open a url in my browser.
    It turns out that
    URxvt.urlLauncher: firefox
    doesn't do anything any more, because according to http://cvs.schmorp.de/rxvt-unicode/Changes the new version introduced an
    INCOMPATIBLE CHANGE: renamed urlLauncher resource to url-launcher.
    Sidenote: https://www.archlinux.org/packages/comm … t-unicode/ says that gtk2-perl is an optional dependency to use the urxvt-tabbed, but I've been using urxvt-tabbed w/o it (even after the update).

    I found this with urxvt-perls; I had to change url-select.launcher to get it working again...
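    With 9.16 the resource line in ~/.Xresources (or ~/.Xdefaults) needs the new name, e.g.:

```
! old name, ignored by rxvt-unicode >= 9.16:
! URxvt.urlLauncher: firefox
! new name:
URxvt.url-launcher: firefox
```

    Reload with `xrdb -merge ~/.Xresources` and restart urxvt.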

  • To get unicode errors for each and every program

    hi,
         I have written a program to retrieve all the custom programs and user exits. Now I want to pass all these programs to the UCCHECK transaction to get the errors of each program separately, by calling it programmatically.
         Is there a function module for the UCCHECK transaction, or any other option? Please tell me.
             Thanks & regards,
                 Sekhar.

    UCCHECK is an executable report (RSUNISCAN_FINAL). Please use SUBMIT on program RSUNISCAN_FINAL, with the selection-screen field SO_OBJ_N-LOW populated with the list of programs identified.
    Syntax
    SUBMIT report1 USING SELECTION-SCREEN '1100'
                   WITH SELECTION-TABLE rspar_tab
                   WITH selcrit2 BETWEEN 'H' AND 'K'
                   WITH selcrit2 IN range_tab
                   AND RETURN.
    Regards
    Ravikumar

  • Multiple Charsets with Oracle 9i and ODBC

    I am using the latest Oracle ODBC driver to connect my C++ COM application running under IIS 5.0 to a multilingual Oracle 9i database running on a Win2K server.
    Currently, all characters are displayed correctly except for a few special accented characters in the Czech alphabet. The erratic Czech characters are displayed as upside down question marks.
    These characters display correctly when I connect to Oracle with JDBC, but not with ODBC. These characters also display correctly when I query them out of a MSSQL 2000 database using ODBC or OLEDB. NOTE: These characters are also displayed as upside down question marks when I query Oracle from SQL Plus directly on the server.
    I have read the FAQ's on multilingual applications and Oracle/ODBC, but I don't see what I am missing. Force W_CHAR Support is set to true in my ODBC configuration settings.
    I've spent many hours trying to work out a solution to this problem with no luck. My customer would like to fix this problem ASAP, and would also like to be certain that we can go ahead with data using other character sets - Han, Cyrillic, Greek, etc.
    Another question: is it possible to set up an Oracle 9i database to not support Unicode/UTF8, and if our database is configured in such a manner, should we expect problems with the data coming out of it when connecting via ODBC?

    Also, I realize now that I got some terminology mixed up in my first post. Obviously "Unicode" encompasses both UTF8 and UTF16 encoding, but I understand that one usually means UTF16 when one says Unicode.
    My client is so far unwilling to set this database up to use UTF16 encoding, so I am trying to find another solution that allows me to display special characters from an Oracle database using UTF8 encoding. As I mentioned before, JDBC corrects the erratic data, and I have come across information about a rather expensive third party ODBC driver that supposedly corrects the data, as well. I would like to know if it's possible for me to correct the data without converting the DB to use UTF16.

  • Oracle & PHP/Apache not working togather for unicode (working individually)

    hi, I have Oracle XE and Zend Core for Oracle installed.
    When I insert data through Application Express or Oracle SQL Developer it's perfect: Unicode can be inserted and viewed. But when I try to view the same data through a PHP script, it displays ??? signs; only English characters show properly.
    It seems to be a Unicode environment issue in the PHP/Apache-to-Oracle communication, as PHP pages display Unicode correctly when not taking data from Oracle. I need help.
    I have gone through globalizing_oracle_php_applications.pdf and have enabled mbstring; the PHP charset is utf8 and I also used 'AL32UTF8' in the connection string.
    Also, could someone provide a small sample PHP script to insert or view Unicode data?
    thanks

    > root the php module seems to be not working as the displayed index.php was just a text page. I tried with apache2 as well but the result was the same.
    I don't believe the Apache included with Solaris 10 includes PHP support; you will probably have to re-compile it. See the README file in /etc/apache for the actual config of the distributed version.

  • JDeveloper Error ! oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode

    Hi All,
    I have 2 identical table structures with different data in Oracle.
    I am using the following xsql and XSLT files to produce XML from these tables (I have to run the xsql file twice, changing the table name).
    When I run the xsql file with Table1, it works fine, produced the xml file on the browser.
    But when I run the xsql file with Table2, it gives following error message:
    The XML page cannot be displayed
    Cannot view XML input using style sheet. Please correct the error and then click the Refresh button, or try again later.
    Invalid at the top level of the document. Error processing resource 'http://192.10.1.14:8988/Workspace_ONIX-ONIX2-context-root/untitled1.xsql'. Line 1, Position 1
    oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character.
    ^
    These two are my xsql and xslt files:
    - - - - xsql file - - - -
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <?xml-stylesheet type="text/xsl" href="TT14.xsl"?>
    <xsql:query connection="Connection1" id-attribute="" tag-case="lower"
    rowset-element="LIST" row-element="DEPA"
    xmlns:xsql="urn:oracle-xsql">
    SELECT * from TT26
    </xsql:query>
    TT14.xsl file
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method ="xml" indent= "yes" encoding="UTF-8"/>
    <!--DOCTYPE ONIXmessage SYSTEM "http://www.editeur.org/onix/2.1/reference/onix-international.dtd"-->
    <xsl:template match ="list">
    <BBMessage>
    <xsl:for-each select="depa">
    <Product>
    <RecordReference>
    <xsl:value-of select="wai"/>
    </RecordReference>
    <NotificationType>
    <xsl:value-of select="wantype"/>
    </NotificationType>
    </Product>
    </xsl:for-each>
    </BBMessage>
    </xsl:template>
    </xsl:stylesheet>
    All comments are highly welcomed...
    Thanks

    Hi Deepak
    Thanks for the post, but I am afraid that's not the issue with the error.
    I changed both encoding to "UTF-8" still i get the problem.
    I tried even without the XSLT sheet, still I have the problem..
    - - - - xsql file ---
    <?xml version = '1.0' ?>
    <!--
    | Uncomment the following processing instruction and replace
    | the stylesheet name to transform output of your XSQL Page using XSLT
    <?xml-stylesheet type="text/xsl" href="YourStylesheet.xsl" ?>
    -->
    <page xmlns:xsql="urn:oracle-xsql" connection="Connection1">
    <xsql:query max-rows="-1" null-indicator="no" tag-case="lower">
    select * from Table2
    </xsql:query>
    </page>
    - - - - Result ----
    <?xml version="1.0" ?>
    <!--
    | Uncomment the following processing instruction and replace
    | the stylesheet name to transform output of your XSQL Page using XSLT
    <?xml-stylesheet type="text/xsl" href="YourStylesheet.xsl" ?>
    -->
    <page>
    <error>oracle.xml.sql.OracleXMLSQLException: Cannot map Unicode to Oracle character.</error>
    </page>
    Any Comment ???
    Thanks
