Spanish Character Sets Garbled

We have a client trying to use Contribute CS4 to edit some pages with Spanish content. The characters show up correctly when viewing in Contribute, but after publishing they are all garbled.
The reading I've done online consists mainly of people complaining that Contribute is horrible for multilingual content, but I haven't seen any solutions or workarounds.
Anyone got a suggestion?
Thanks!
Erik

Hello,
the national character set is for column types like NVARCHAR2, not for the normal VARCHAR2 data types, so if your dump file contains such column types you will also need to set it. The database character set is for the normal column types like VARCHAR2. Using Unicode is best practice if you use multiple languages, but keep in mind that a multibyte character set can be a problem during the import, because VARCHAR2(10) means 10 bytes and not 10 characters, so errors such as 'identifier is too long' or 'value too large for column' can occur during import.
You can create the database that way.
Check this documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28298/ch2charset.htm
You can use a character set like WE8MSWIN1252, which also covers Spanish (as far as I know) and is a superset of US7ASCII.
regards
Peter

Similar Messages

  • Build new database through scripts that must understand Spanish character sets

    Hello Gurus,
    I need some simple advice, a good chance for some quick points for you.
    I have never built a database to understand any character set other than American English. I now have to build a database that will be used with Spanish characters, keyboards, etc. But I will be using English for the 11g software install. I only wish to be able to show Spanish characters in the data for customer names.
    I will be creating the database with scripts I have made to make the standard template for database files, control files, etc.
    Then I will be importing from a dump I have done that was made with American English character sets.
    System is 11g (11.2.0.3.0) on Linux Enterprise Server 5.8.
    I was thinking to use the AL32UTF8 character set, but I am unsure where to use it.
    My original test did not show Spanish characters for customer names, like the tilde or the eñe (pardon my spelling). But in this case I did not make the exception for Spanish; I only used the standard American English build (no changes in the init.ora file or initial database build script).
    How can I adjust my parameter file for the initial creation of the database template so that it understands the Spanish character set and I can still import my dump file without error?
    EXAMPLE of a build script:
    CREATE DATABASE mynewdb
    USER SYS IDENTIFIED BY sys_password
    USER SYSTEM IDENTIFIED BY system_password
    LOGFILE GROUP 1 ('/u01/app/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
    GROUP 2 ('/u01/app/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
    GROUP 3 ('/u01/app/oracle/oradata/mynewdb/redo03.log') SIZE 100M
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXLOGHISTORY 1
    MAXDATAFILES 100
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    If I change NATIONAL CHARACTER SET AL16UTF16 to AL32UTF8, will that be enough to show Spanish characters?
    Sorry for the long-winded question; any advice will be great.
    Thankfully,
    Shawn

    Hello,
    the national character set is for column types like NVARCHAR2, not for the normal VARCHAR2 data types, so if your dump file contains such column types you will also need to set it. The database character set is for the normal column types like VARCHAR2. Using Unicode is best practice if you use multiple languages, but keep in mind that a multibyte character set can be a problem during the import, because VARCHAR2(10) means 10 bytes and not 10 characters, so errors such as 'identifier is too long' or 'value too large for column' can occur during import.
    You can create the database that way.
    Check this documentation:
    http://docs.oracle.com/cd/B28359_01/server.111/b28298/ch2charset.htm
    You can use a character set like WE8MSWIN1252, which also covers Spanish (as far as I know) and is a superset of US7ASCII.
    regards
    Peter
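    To make the answer above concrete, here is a sketch of the build script from the question with only the character set choice changed. AL32UTF8 is used as one illustration (per the answer, WE8MSWIN1252 would also cover Spanish); everything else is copied unchanged:
    CREATE DATABASE mynewdb
    USER SYS IDENTIFIED BY sys_password
    USER SYSTEM IDENTIFIED BY system_password
    LOGFILE GROUP 1 ('/u01/app/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
    GROUP 2 ('/u01/app/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
    GROUP 3 ('/u01/app/oracle/oradata/mynewdb/redo03.log') SIZE 100M
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXLOGHISTORY 1
    MAXDATAFILES 100
    -- database character set changed from US7ASCII; this is what VARCHAR2 data uses
    CHARACTER SET AL32UTF8
    -- unchanged: the national character set only affects NCHAR/NVARCHAR2 columns
    NATIONAL CHARACTER SET AL16UTF16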

  • ISO8859 Spanish Character set Problems

    Hi,
    Some time ago we bought a web content server (OPMKT, JSP-based); we installed the embedded WebLogic 6.1 SP2 and are using Oracle 8.1.7.
    We are using the codeset we8iso8859p15 for Oracle and, for WebLogic, the codeset ISO8859_15_FDIS (-Dfile.encoding).
    But the special Spanish characters (e.g. accents or the euro currency symbol) are not right when viewed in the browser.
    Can anyone help me with this? Do we need to upgrade WL 6.1 SP2 to SP4 or SP5?
    Thanks in advance,
    Santos

    Yes, of course. I changed the WL startup scripts to set NLS_LANG=ES.ISO8859_15, and it is the same (it is not working).
    "Jo Willems" <[email protected]> wrote:
    Did you specify NLS_LANG env variable in your startup script for weblogic?
    j.
    "Santos" <[email protected]> wrote in message
    news:[email protected]..
    Hi,
    At the time, we bought a web content server (OPMKT, based JSP), we installed the
    embedded Weblogic 6.1 Sp2 and using Oracle 8.1.7.
    We are using the codeset we8iso8859p15 for Oracle and for Weblogic the codeset:
    ISO8859_15_FDIS (-Dfile.encoding)
    But the special Spanish characters (i.e. accents or euro currency) using the browser
    are not right.
    Can anyone help me with this??? Do we need to upgrade the WL6.1 SP2 to SP4 or SP5?
    Thanks in advance,
    Santos

  • How to set Spanish, ESN CP1252, Oracle character set WE8MSWIN1252

    select * from NLS_DATABASE_PARAMETERS;
    The database character set is:
    NLS_CHARACTERSET               AL32UTF8
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    select * from NLS_INSTANCE_PARAMETERS;
    PARAMETER                         VALUE
    NLS_LANGUAGE                      SPANISH
    NLS_TERRITORY                     SPAIN
    How do I set Spanish, ESN CP1252, Oracle character set WE8MSWIN1252?
    Please suggest.
    Thanks in advance

    Is it really ok to convert AL32UTF8 to WE8MSWIN1252?
    NLS_CHARACTERSET AL32UTF8
    to
    NLS_CHARACTERSET WE8MSWIN1252
    please suggest

  • Metadata character set translation

    Hi, does anyone have an idea how to have Aperture metadata info exported in the CP 1252 encoding (Windows Western Europe)? (The standard is Mac encoding.)
    All my accents in French, German, Spanish and Dutch words come out garbled when I upload my photos to my photo agencies, or to my web-based Photoshelter archive.
    I've tried a lot. So far my only solution is to export my files and open them in FotoStation, which has a conversion function. This is a lot of work, though, since adding and changing keywords in that program is extremely user-unfriendly.
    Any advice welcome!
    Ronald

    We bought a SAP R3 system that uses a database with the WE8DEC character set but saves character data in CL8ISO8859P5 for the Russian language.
    I understand your configuration/requirement. The supported way for you to do this is to change your database character set to match the encoding of the data in the database. If you are sure the data are all Russian data in the CL8ISO8859P5 encoding, that is the way to go (the Character Set Scanner utility shipped in 8.1.7 can assist you in verifying the data inside your db).
    Since your Oracle db supports SAP R3, it is better if you contact SAP on whether they support changing the db character set and the steps involved.

  • Spanish characters getting garbled while executing command using Java code

    Hi,
    I am trying to execute a command using Java code. The output of the command contains Spanish characters, and a few of these characters get garbled after the command execution.
    // needs: import java.io.BufferedReader; import java.io.InputStreamReader;
    Runtime r = Runtime.getRuntime();
    Process p = null;
    String pgm = "ipconfig /all";
    String val;
    try {
        p = r.exec(pgm);
        BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
        while ((val = br.readLine()) != null) {
            System.out.println(val);
        }
    } catch (Exception e) {
        return null;
    }
    I tried running the code with -Duser.language=es -Duser.region=ES -Dfile.encoding=Cp850, but this did not help. I can see the output properly in the command prompt, but if I redirect the output to a text file it gets garbled.
    Please let me know how to solve this issue.

    884543 wrote:
    Hi,
    I am trying to execute a command using Java code. The output of the command contains Spanish characters, and a few of these characters get garbled after the command execution.
    // needs: import java.io.BufferedReader; import java.io.InputStreamReader;
    Runtime r = Runtime.getRuntime();
    Process p = null;
    String pgm = "ipconfig /all";
    String val;
    try {
        p = r.exec(pgm);
        BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
        while ((val = br.readLine()) != null) {
            System.out.println(val);
        }
    } catch (Exception e) {
        return null;
    }
    I tried running the code with -Duser.language=es -Duser.region=ES -Dfile.encoding=Cp850, but this did not help. I can see the output properly in the command prompt, but if I redirect the output to a text file it gets garbled.
    Please let me know how to solve this issue.
    Set the character set (UTF-8, for example) on your InputStreamReader. For more details on usage refer to the Java API:
    http://download.oracle.com/javase/6/docs/api/java/io/InputStreamReader.html

  • Invalid Characters shown in UTF-8 character set

    There is an XMLP report whose template output character set is ISO-8859-1. The character set ISO-8859-1 is required for this report as per the Spanish authorities. When the report is run, the output gets generated in the output directory of the application server. This output file doesn't contain any invalid characters.
    But when the output is opened from the SRS window, which opens it in a browser, invalid characters are shown for characters like Ñ, É etc.
    Investigation done:
    Found that the output generated on the server has ISO encoding and hence doesn't contain any invalid characters, whereas the output generated from the SRS window is in UTF encoding; so it seems the invalid characters are displayed when conversion takes place from ISO to UTF-8 format.
    Created the eText output using the data XML and template with the BI Publisher tool; the output is in ISO encoding. So if I go and change the encoding to UTF-8 by opening it in Explorer or Notepad++, invalid characters are shown for Ñ, É etc.
    Is there any limitation that output from the SRS window will be shown only in UTF-8 encoding? If not, then please suggest.
    Thanks,
    Saket
    Edited by: 868054 on Aug 2, 2012 3:05 AM

    Hi Srini,
    When the customer views output from the SRS window, it contains invalid characters because it is in the UTF-8 character set. The customer is on Oracle OnDemand, so they cannot take the output generated on the server; every time, they have to raise a request to Oracle for the output file. So the concern here is: why doesn't the output from the SRS window show valid characters?
    The reason could be the conversion from ISO format to UTF-8. How can this be resolved? Can the SRS window not generate output in ISO format?
    A quick reply will be appreciated as customer is chasing for an update.
    Thanks,
    Saket
    Edited by: 868054 on Aug 7, 2012 11:08 PM

  • Character Set questions on setup

    I am trying to determine the best setup recommendations for creating non-English Oracle 10g databases. I have not had much experience building databases for non-English locales, so this is getting a little overwhelming as I have been researching Oracle's Database Globalization Support Guide. Obviously it has a wealth of information, and I am trying to determine what applies to us at this point in time.
    Generally when someone buys our product they create a new Oracle instance for our app. I need to be able to recommend proper database settings/parameters for potential global customers who purchase our software to run on Oracle.
    Currently my biggest question is what to recommend for the database character set on db creation. Currently the DB character set we recommend (for standard U.S. installs on Windows) is the default WE8MSWIN1252 character set. Our application is non-Unicode. It has been recommended to me by an outside consultant that we "must" use UTF-8 for the DB and national character set settings, as opposed to WE8MSWIN1252 or WE8ISO8859P1. I should mention that our focus at this point in time is getting a solution for French, German, and Spanish. We are also more concerned about a single-language setup than multilanguage, although that is a definite future consideration.
    What impact can using UTF-8 as opposed to WE8MSWIN1252 or WE8ISO8859P1 have on a non-Unicode application? I hope I am explaining the situation well enough, as I am fairly new and still getting to know our application. I am kind of getting thrown into the i18n fire...
    Any input is greatly appreciated. Thanks.

    Your questions are certainly valid, but you have not given any details about your application: what it does, what technologies and access drivers are employed, and what client operating systems are supported. This determines how much effort is required to make the application Unicode-enabled and what risks come with each of the possible approaches.
    As long as your application can work with single-byte character sets only, is not expected to contain multibyte data, and supports Windows only, the Oracle character set corresponding to the relevant Windows ANSI code page is the correct choice. For English, French, German, Spanish, and other Western European languages, WE8MSWIN1252 is the right one.
    Processing of WE8MSWIN1252 is easier and somewhat faster than processing AL32UTF8 (i.e. UTF-8) data. One character corresponds to one byte, and this simplifies some aspects of text processing.
    On the other hand, the world becomes smaller and smaller in the Internet era. Companies that never did any business abroad start to talk to customers around the world because somebody found their website. Western European companies take advantage of the European Union enlargement and start doing business in new countries. Therefore, it is dangerous to assume that a company currently interested in a monolingual, single-byte solution will not want to migrate to a multilingual and multibyte solution in a few years.
    If you follow a few rules in database design and programming, you can run your single-byte application against an AL32UTF8 database, even if you do not get a multilingual system in this way. Such a configuration has the huge advantage of avoiding the complex and resource-consuming task of migrating the database character set to Unicode in the future, when your customer asks for multilingual support. Upgrading the binaries of your application to a Unicode-enabled version is usually fast; migrating the database character set is not.
    The main rules you should follow are:
    1) Use character length semantics to define column and PL/SQL variable lengths, i.e. say VARCHAR2(10 CHAR) instead of VARCHAR2(10 [BYTE]). If you do not want to modify all creation scripts to include the CHAR keyword, issue ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR at the beginning of each script. I recommend modifying the scripts (see the sketch after this list).
    2) Do not use VARCHAR2 columns longer than 1000 characters, CHAR columns longer than 500 characters, and PL/SQL VARCHAR2/CHAR variables longer than 8190 characters. This guarantees that in the future no AL32UTF8 string will exceed the hard limit of 4000/2000/32760 bytes. Use CLOB for longer text.
    3) Use SUBSTR/LENGTH/INSTR in place of SUBSTRB/LENGTHB/INSTRB. Use SUBSTRB/LENGTHB/INSTRB only when dealing with legacy stuff or Data Dictionary that still use byte length semantics.
    4) Define the client setting - mainly NLS_LANG - to correctly correspond to the character set processed by your application.
    5) Modify interfaces to other databases, if any, to cope with the character length semantics. You do not have to do much if the other databases follow the same rules.
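    A minimal sketch of rules 1 and 2 in SQL (the table and column names here are made up for illustration):
    -- Option A: switch the session default before running unmodified scripts
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    -- Option B (recommended above): be explicit in the scripts themselves
    CREATE TABLE customers (
      id    NUMBER PRIMARY KEY,
      name  VARCHAR2(100 CHAR),    -- 100 characters, up to 400 bytes in AL32UTF8
      note  VARCHAR2(1000 CHAR),   -- rule 2: keep VARCHAR2 at or below 1000 CHAR
      bio   CLOB                   -- rule 2: use CLOB for longer text
    );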
    The cost of running the database in Unicode is not high for most languages, though languages that do not use Latin script, such as Russian, Greek, or Japanese, need significantly more storage for their textual data (but only textual data in those languages; this is usually only a fraction of all data in the database). Processing is slower by a few percent compared to single-byte character sets (unless a lot of textual processing is performed in the database, in which case the percentage may be higher; a benchmark is recommended). These costs can usually be compensated for by adding some more computing power (GHz and disks). Unless your application needs a VLDB (very large database) and almost saturates the system, you should not notice a big difference.
    -- Sergiusz

  • Windows "Regional Options" locale - JCE for 8bit vs 16bit character sets

    I have a Java application that reads in an encrypted text file. The text file was encrypted using JCE 1.2.1 on a Windows system with the locale set to English (US). The encryption uses Sun's version of the DES encryption algorithm.
    The app reads in the encrypted text file, decrypts it, and processes its information.
    This works fine on Windows systems if the Regional Options control panel is set to a region that uses 8-bit character sets:
    - English
    - Italian
    - Spanish
    - French
    But if the locale is set to a 16-bit character set region, the text file cannot be read and parsed. Such regions include:
    - Russian
    - Greek
    - English (Hong Kong)
    At this point, I think I have two options, but I would love to hear about more:
    - Edit the encrypting/decrypting code (or the parsing code, which parses a comma-delimited file) so that the file that is encrypted and decrypted can handle either 8-bit or 16-bit character sets
    (I don't know how to do this)
    or
    - Programmatically change the locale of Windows machines to English (US) at application start-up and then change it back to the previous locale setting on application shut-down
    (I don't know how to do this either)
    I'd appreciate any help. I'm not sure if this is an internationalization issue or a JCE issue.
    Thanks in advance

    I found an answer to the problem I was having.
    The culprits were two special characters that the client was using in the encrypted text to distinguish between different fields and to mark carriage returns (� and �). The non-Latin-alphabet languages didn't know what to do with those characters, so they substituted their own characters, thus breaking the parsing logic, which was hard-coded to look for � and �.
    The problem was also related to the fact that the JCE works with byte[] arrays. FileInputStreams (which deal with byte[] arrays) seem to convert the special characters to new characters in non-Latin languages to match what was going on in the JCE logic.
    The easiest fix I could come up with was to include a new properties file to be read by a separate FileInputStream. This properties text file contained just two characters (��). When I loaded this new properties file via a FileInputStream, the two characters (��) in the properties file magically changed to match the currently active alphabet (or didn't change at all if the computer was using a Latin alphabet).
    By checking the new properties file to see what the characters had changed to (if they had), I was able to know what to use to parse the encrypted data. And as such, regardless of what language the computer is set to, the encrypted data is now parsed correctly, as I took out the hard-coding that looked specifically for the characters � and � and instead rewrote the code so it now uses the characters from the properties file (or whatever characters they change to) for parsing the content data.
    I hope others find this useful.

  • Windows "Regional Options" locale (8bit vs 16bit character sets)

    I have a Java application that reads in an Encrypted text file. The text file was Encrypted using JCE 1.2.1 and Encrypted on a Windows system with the locale set to English(US). The Encryption uses Sun's version of the DES encryption algorithm.
    This app reads in the Encrypted text file and Decrypts it and processes it's information.
    This works fine on Windows systems if the Regional Options control panel is set to a region that uses 8bit character sets:
    - English
    - Italian
    - Spanish
    - French
    But, if the locale is set to 16bit character set regions, the text file cannot be read and parsed. Such regions include:
    - Russian
    - Greek
    - English (Hong Kong)
    At this point, I think I have two options, but I would love to hear about more:
    - Edit the Encrypting/Decrypting code (or the parsing code - parses through a comma deliminated file) so that the file that is Encrypted and Decrypted can handle either an 8bit or 16bit character sets
    (Don't know how to do this)
    or
    - Programatically change the locale of Windows machines to English(US) at application start-up and then change it back to the previous locale setting on application shut-down
    (Don't know how to do this either)
    I'd appreciate any help. I'm not sure if this is an International issue or an JCE issue.
    Thanks in advance

    I found an answer to the problem I was having.
    The culprit were two special characters that the client was using in the encrypted text to distinguish between different fields and to distinguish carriage returns (� and �). The non Latin alphabet languages didn't know what to do with those characters so they substituted there own characters, thus breaking the parsing logic which was hard coded to look for � and �.
    The problem also was related to the fact that the JCE works with byte[] arrays. FileInputStreams (which deal with byte[] arrays) seem to convert the special characters to new characters in non Lating languages to match what was going on in the JCE logic.
    The easiest fix I could come up with, was to include a new properties file to be read by a separate FileInputStream. This properties text file contained just two characters (��). When I loaded in this new properties file via a FileInputStream, the two characters (��) in the properties file magically change to match the currently active alphabet (or didn't change at all if the computer was using a Latin alphabet).
    By checking the new properties file to see what the characters had changed to (if they had), I was able to know what to use to parse the encrypted data. And as such, regardless of what language the computer was set to, the encrypted data is now parsed correctly, as I took out the hard coding that looked specifically for the characters � and � and instead rewrote the code so it now uses the characters from the properties file (or whatever characters they change to) for parsing the content data.
    I hope others find this useful.

  • Flat File Load Issue - Cannot convert character sets for one or more characters

    We recently upgraded our production BW system to SPS 17 (BW SP19), and we have issues loading flat files (containing Chinese, Japanese, and Korean characters) that worked fine before the upgrade. The Asian languages appear as invalid (garbled) characters, as we can see in the PSA. The Character Set Settings option was previously set to Default Setting and worked fine until the upgrade. We referred to note 1130965, and with code page 1100 the load went through (without the "Cannot convert character sets for one or more characters" error), but the Asian language characters still appear as invalid characters. We tried all the code pages suggested in the note, e.g. 4102, 4103 etc., on the InfoPackages, but it did not work. Note that the flat files are encoded in UTF-8.

    I checked the lowercase option for all InfoObjects.
    When I checked the PSA failure log, the number of records processed is "0 of 0"; my question is, without processing a single record, why is the system throwing this error message?
    When I load the same file from the local workstation, there is no error message.
    I am thinking that when I FTP the file from the AS/400 to the BI application server (Linux), some invalid characters are added, but how can we track those invalid characters?
    Gurus, please share your thoughts on this; I will assign full points.
    Thanks,

  • Transport tablespaces between databases with different character sets

    Hi everyone:
    I have two 10R2 databases on the same hp-ux 64bit server, 1st one with NLS_CHARACTERSET=US7ASCII, 2nd one with
    NLS_CHARACTERSET=AL32UTF8.
    NLS_NCHAR_CHARACTERSET on both databases is AL16UTF16.
    Can I transfer tablespaces from the 1st one to the 2nd? The data could be in English, French & Spanish.
    If not what are my options?
    Thanks in advance.

    First off, if you are storing French and Spanish data in database 1, where the character set is US7ASCII, you've got some serious problems. US7ASCII doesn't support non-English characters (accents, tildes, etc.). If you're storing data this way, you've introduced data corruption that you'd have to resolve before copying the data over to another machine.
    Second, technically, the source and target character sets have to be compatible. Since AL32UTF8 is a strict binary superset of US7ASCII, you could theoretically transport a US7ASCII tablespace to an AL32UTF8 database. In your case, though, since the data is not really US7ASCII, you'd end up with corruption.
    Any of the Oracle built-in replication options is going to require that you resolve the corruption issue. Assuming that you can figure out what character set the source database really is, you could potentially dump the data to flat files (taking care not to allow character set conversion to take place) and SQL*Loader them into the destination system by identifying the proper character set in your control file. That's obviously going to be a rather laborious process, though.
    Justin
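    A minimal sketch of the flat-file route described above, as a SQL*Loader control file. The character set, file name, table, and columns are assumptions for illustration; the point is the CHARACTERSET clause, which tells SQL*Loader the real encoding of the data so the conversion into the target database happens correctly:
    LOAD DATA
    -- assumed: the source data turns out to really be WE8ISO8859P15
    CHARACTERSET WE8ISO8859P15
    INFILE 'customers.dat'
    INTO TABLE customers
    FIELDS TERMINATED BY ','
    (id, name, city)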

  • Oracle database character set

    Hello all -
    Please help..................
    I will be exporting/importing 6 or 7 users/schemas and their data from one database to another database on Solaris. The users are already created.
    I am confused about NLS_LANG variable and database characterset.
    I have the following questions -
    1. What is the impact of the NLS_LANG variable setting of the user session while importing/exporting the data?
    2. Why do we need to set this NLS_LANG user session variable before export/import?
    3. If the NLS_LANG variable is not set (does not have any value), what would happen?
    4. If I have to set the NLS_LANG variable, what should I set it to?
    5. How can I see the character set of my database?
    6. Where can I get more info about the database character set, and what are the valid values for the database character set and the NLS_LANG variable?
    Any help would be really appreciated...
    Thanks a lot.....
    RAMA

    1. What is the impact of the NLS_LANG variable setting of the user session while importing/exporting the data?
    On export, the data will be converted from the database character set to the character set specified by NLS_LANG. On import, the database will assume that the data is in the character set specified by NLS_LANG and use that value to perform the conversion to the database character set if the two values do not match.
    2. Why do we need to set this NLS_LANG user session variable before export/import?
    If your database character set is the same as your OS's, you don't necessarily have to set NLS_LANG. For instance, if you have a US7ASCII db and your OS locale is set to AMERICAN_AMERICA.US7ASCII, there won't be any problems. The only time it's really important to set this is when the db and OS settings don't match.
    3. If the NLS_LANG variable is not set (does not have any value), what would happen?
    If your database character set doesn't match your OS, the data could be garbled because the db will incorrectly transcode the data on import/export.
    4. If I have to set the NLS_LANG variable, what should I set it to?
    It depends on what your database character set is set to (see below).
    5. How can I see the character set of my database?
    select * from nls_database_parameters and look for the value set for the NLS_CHARACTERSET parameter. Don't get confused by NLS_NCHAR_CHARACTERSET; that one is for NCHAR datatypes.
    So, for instance, if your NLS_CHARACTERSET value is set to UTF8, you would set NLS_LANG to .UTF8 (the dot is important because the full form is language_territory.characterset; the language and territory parts can be omitted, but the dot before the character set cannot). For example:
    setenv NLS_LANG .UTF8
    6. Where can I get more info about the database character set, and what are the valid values for the database character set and the NLS_LANG variable?
    It's all in the Oracle documentation.
    hope this helps.
    Tarisa.
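    Putting that together, a short sketch (the UTF8 value and shell syntax are just examples matching the answer above):
    -- In SQL*Plus, check the database character set:
    select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
    -- Suppose it returns UTF8. Before export/import, set NLS_LANG accordingly:
    --   csh:     setenv NLS_LANG AMERICAN_AMERICA.UTF8
    --   sh/bash: export NLS_LANG=AMERICAN_AMERICA.UTF8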

  • How to change the Character Set from AL32UTF8 to WE8DEC

    Hello!!
    I want to know how to change the character set in the database from AL32UTF8 to WE8DEC.
    I tried to use the command ALTER DATABASE CHARACTER SET, but I got an error because WE8DEC is not a superset of AL32UTF8.
    I need to import tables from a server that uses WE8DEC, so when I do the import into my server, which has AL32UTF8, I can't import the rows that include an Ñ.
    So I want to change the character set to WE8DEC. How can I do it?
    Do I also need to change the language configuration? The remote server has AMERICAN_AMERICA, and my server has MEXICAN SPANISH_MEXICO (both use text in Spanish).
    Thanks a lot!!

    When you export from the WE8DEC server, what did you use for the NLS_LANG character set? And when importing?
    The export was done on another computer because I can't do the export on the server (I have 10g, the remote server has 9i, and the export is not working). On my server, when I tried to do the import, the NLS_LANG value was MEXICAN SPANISH_MEXICO.WE8MSWIN1252.
    When I try to import to my database I got the error:
    import done in WE8DEC character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    export client uses WE8MSWIN1252 character set (possible charset conversion)
    . importing USRMCR06's objects into PRIMARIZACION
    . . importing table "CHG_FONDOS_MARZO_CD_MOR"
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "PRIMARIZACION"."CHG_FONDOS_MARZO_CD_MOR".
    "NOMBRE" (actual: 41, maximum: 40)
    Column 1 16623436
    Column 2
    Column 3 Pymes_1
    Column 4
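    The ORA-12899 above is the byte-expansion problem mentioned earlier on this page: a 40-character NOMBRE value can exceed 40 bytes once converted to AL32UTF8. One workaround is to pre-create the table with character length semantics and re-run the import with ignore=y so the existing table is reused. A minimal sketch; only the table name and NOMBRE come from the log above, the other columns are placeholders:
    CREATE TABLE CHG_FONDOS_MARZO_CD_MOR (
      COLUMN1 NUMBER,               -- placeholder for the real definition
      NOMBRE  VARCHAR2(40 CHAR),    -- 40 characters, up to 160 bytes in AL32UTF8
      COLUMN3 VARCHAR2(100 CHAR)    -- placeholder for the real definition
    );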

  • Character Set Problem with XSQL

    Hi!
    I am using xsql:insert-request of the XSQL servlet. On NT everything works fine, but on Solaris, with the same configuration (Oracle 8.1.7, XSQL Servlet 1.0.4.0) and the same XSL and XSQL files, I get the following error:
    Exception 'java.sql.SQLException: Non supported character set: oracle-character-set-46' encountered during processing ROW element 0. All prior XML row changes were rolled back in the XML document.
    I want to load data into an object-relational table.
    Has anyone of you had the same problem? Does anybody have a solution for it? Today I tried different things without any success.
    Thanks a lot,
    Harald

    hi manidhar,
    I am also facing the same problem. In my case the characters are getting garbled when passed to a JavaScript function.
    Did you find a solution?
    If yes, please post it.
    thanks in advance
