About character sets

Hello,
I am very confused about the difference between the ordinary (database) character set and the national character set.
I have studied the documentation but was unable to understand it. Can anyone help me with this
issue? I need a clarification in simple words that everyone can understand.
Thanks in advance.

Look at this
http://dba.ipbhost.com/index.php?showtopic=8194

Similar Messages

  • Problem about character set

    I read data from Excel using the Oracle COM automation feature in a PL/SQL program and insert it into tables, but every Chinese character from the Excel file came through as "?". What should I do?
    thanks.

    AMERICAN_AMERICA.US7ASCII: you cannot store Chinese characters in this character set. US7ASCII stores data in 7 bits, which does not support non-ASCII characters.
    You'll have to change the character set of your database to a Unicode-based character set to store the Chinese characters.
    HTH
    Naveen
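
    To confirm what character sets a database is actually using, here is a minimal JDBC sketch; the URL and credentials are placeholders, and it assumes the Oracle JDBC driver is on the classpath:

    import java.sql.*;

    public class CharsetCheck {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL and credentials; adjust for your environment.
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT parameter, value FROM nls_database_parameters "
                       + "WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET')")) {
                // US7ASCII here means CHAR/VARCHAR2 columns cannot hold Chinese text;
                // a Unicode character set such as AL32UTF8 can.
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }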

  • Foreign character sets in a database

    I am developing a site where people in many countries (mainly
    Scandinavian) can log on and update their pages. The information,
    (including passwords and usernames) is stored in an access
    database. I am using Access because I have no knowledge of PHP /
    SQL, and the site is unlikely to exceed the limitations of Access
    for a few years.
    Does Access change the format of text, or do I have to set up
    some special encoding? I find that if I type in the username and
    password using an English keyboard, when the original was set up
    using a foreign keyboard, I cannot log in. The reverse also applies.
    I am using CS3 in default mode - using UTF-8. Should I be using
    Western European encoding, or is Access the fly in the ointment?
    Or is it likely to be another problem?
    Howard Walker

    Hello,
    Specifying the character set for a WebLogic webservice (see:
    http://edocs.bea.com/wls/docs81/webserv/i18n.html#1069629) is one of the
    many enhancements made since the 6.1 release. If possible, the best
    solution for your webservice development would be to upgrade.
    Bruce
    "özkan Demir" wrote:
    >
    Hi all;
    I am developing RPC-style webservices on WebLogic Server 6.1 with Service Pack 2.
    I have a problem with character sets. I have to use the Turkish language, so the
    character set must be ISO-8859-9. But the SOAP messages are UTF-8 by default, so
    the Turkish characters become garbled.
    Is there a way I can send SOAP messages in the ISO-8859-9 character set, or how
    else do I solve the problem of Turkish characters between the client and the
    server applications with webservices? Please help!
    Thanks
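
    WebLogic 6.1 specifics aside, the underlying rule is that the declared encoding and the actual bytes on the wire must agree. A JDK-only sketch (no WebLogic API involved; it assumes your JRE ships the ISO-8859-9 charset, which standard builds do):

    import java.nio.charset.Charset;

    public class TurkishSoapBytes {
        public static void main(String[] args) {
            // The prolog's declared encoding and the byte encoding must match.
            String xml = "<?xml version=\"1.0\" encoding=\"ISO-8859-9\"?>"
                       + "<greeting>g\u00fcnayd\u0131n</greeting>"; // "günaydın"
            Charset latin5 = Charset.forName("ISO-8859-9");
            byte[] wire = xml.getBytes(latin5);     // one byte per character
            String back = new String(wire, latin5); // decodes losslessly
            System.out.println(xml.equals(back));   // true: ü and ı survive
        }
    }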

  • Character sets in RTF messages not working with RTFEditorKit

    I'm using RTFEditorKit.read() to get the text from an RTF document. The text is written in Russian and starts with the following:
    {\rtf1\ansi\ansicpg1252\deff0\deflang1033{\fonttbl{\f0\froman\fprq2\fcharset204{\*\fname Times New Roman;}Times New Roman CYR;}{\f1\fswiss\fprq2\fcharset0 Arial;}}
    {\colortbl ;\red0\green0\blue128;\red0\green0\blue0;}
    \viewkind4\uc1\pard\tx360\cf1\f0\fs20\'c1\'ee\'eb\'fc\'f8\'e8\'ed\'f1\'f2\'e2\'ee
    The reader sets the translation table based on the \ansi at the start of the document, which simply maps all bytes to themselves. The initial text, which should be 'Большинство', is then converted as Latin-1. The \fcharset204, which is CP1251, is completely ignored, and there's a lovely line in the RTFReader class:
    /* TODO: per-font font encodings ( \fcharset control word ) ? */
    Does anyone know how to extract non-Latin-1 text from RTF, or what other tools can be used to extract text from RTF?
    Thanks
    Antony
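
    Since RTFEditorKit ignores \fcharset, one workaround is to pull the \'xx escape bytes out of the RTF yourself and decode them with the codepage the font table names (fcharset204 is windows-1251). A rough sketch, assuming the escapes are contiguous as in the sample above:

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.Charset;
    import java.util.regex.*;

    public class RtfCyrillic {
        public static void main(String[] args) {
            String rtf = "\\'c1\\'ee\\'eb\\'fc\\'f8\\'e8\\'ed\\'f1\\'f2\\'e2\\'ee";
            // Collect consecutive \'xx escapes into raw bytes...
            Matcher m = Pattern.compile("\\\\'([0-9a-fA-F]{2})").matcher(rtf);
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            while (m.find()) {
                bytes.write(Integer.parseInt(m.group(1), 16));
            }
            // ...then decode with the codepage that \fcharset204 actually names.
            System.out.println(new String(bytes.toByteArray(),
                    Charset.forName("windows-1251"))); // prints: Большинство
        }
    }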

  • Switching Character Sets in JDeveloper

    Hi, we have developed a number of portlets on JDev 10.1.2 but used the default encoding of cp1252. When looking back at my course notes, I saw that the course instructor had recommended that we set the default project encoding to UTF-8. What are the implications of using the default encoding? We have coded our portlets to use resource bundles in order that we can make them multi-lingual at some point in the future. Is UTF-8 essential for this, and if so, should we take the hit now and re-encode (on trying it, I got errors about malformed input values, mostly hyphens and apostrophes), or will it be a painless process if we do it later on?
    Thanks for any feedback.
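
    On the "malformed input" errors: they usually mean the files are being read as UTF-8 while they still contain cp1252 bytes (curly quotes and hyphens are the classic culprits). A one-off converter sketch; the file names are placeholders:

    import java.io.IOException;
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class Cp1252ToUtf8 {
        public static void main(String[] args) throws IOException {
            // Read with the encoding the file is really in...
            Path src = Paths.get("MyPortlet.java"); // placeholder
            String text = new String(Files.readAllBytes(src),
                    Charset.forName("windows-1252"));
            // ...and write it back out as UTF-8.
            Files.write(Paths.get("MyPortlet.utf8.java"),
                    text.getBytes(StandardCharsets.UTF_8));
        }
    }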

  • Greek Character Set

    I am a little confused by the documentation about character sets.
    In chapter 9 it says that "The database is created using the Unicode (AL32UTF8) character set, which is suitable for global data in any language," but in chapter 10.3, Supported Character Sets, it says that "Table 3 lists the supported character sets in Oracle Database XE. The list is ordered alphabetically in each language group," and refers to a lot of character sets.
    If the database character set is only AL32UTF8, then why can the client use all the character sets described in Table 3 of chapter 10.3?

    Because the database character set is just what you use to store data internally. It means that every character that can be mapped to a UTF-8 character can be stored, and that is why the number of supported character sets is bigger. So your client can talk ISO-8859-15 (for example): every character in that character set can be mapped to a character in UTF-8, and as long as the database knows about this mapping, it will be supported.
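
    The mapping is easy to demonstrate outside the database: the same character has different byte representations in the client and database character sets, and conversion is lossless as long as the character exists in both. A small JDK sketch:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class ClientToDb {
        public static void main(String[] args) {
            Charset client = Charset.forName("ISO-8859-15"); // what the client "talks"
            String euro = "\u20ac"; // '€'
            byte[] clientBytes = euro.getBytes(client);             // one byte: 0xA4
            byte[] dbBytes = euro.getBytes(StandardCharsets.UTF_8); // three bytes: E2 82 AC
            // Same character, different storage; the database performs this mapping
            // on every round trip between the client charset and AL32UTF8.
            System.out.println(clientBytes.length + " byte -> " + dbBytes.length + " bytes");
        }
    }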

  • Character SET Change

    Hi,
    I am migrating a database from 9i to 10g. The 9i database is on Windows and 10g will be on Solaris. We have some encrypted data that has to go from the Windows character set WE8MSWIN1252 to AL32UTF8 (Solaris). Could anyone please let me know how I can go about this?
    Thanks!

    Is the encrypted data stored in VARCHAR2 columns? Or did you store it in RAW columns?
    One of the (many) reasons that I would strongly advocate RAW for encrypted data is that you don't have to worry about character set transforms.
    If the data is stored in VARCHAR2 columns, you would generally have to decrypt it in the source database, copy it over to the new database in the clear, and re-encrypt it in the destination database. Unless you happen to have chosen an encryption algorithm that guarantees the output to have the same representation in both Windows-1252 and UTF-8 character sets, which would seem exceptionally unlikely.
    Justin
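
    To illustrate the RAW point, a sketch of storing ciphertext via JDBC; the table, connection details, and ciphertext bytes are all hypothetical:

    import java.sql.*;

    public class StoreCiphertext {
        public static void main(String[] args) throws SQLException {
            // Hypothetical table: CREATE TABLE secrets (id NUMBER, payload RAW(2000));
            byte[] ciphertext = { 0x3a, (byte) 0x9f, 0x11, 0x42 }; // stand-in bytes
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // placeholder
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO secrets (id, payload) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBytes(2, ciphertext); // RAW: bytes in = bytes out, on any charset
                ps.executeUpdate();
            }
        }
    }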

  • ORA-12712 error while changing nls character set to AL32UTF8

    Hi,
    It is strongly recommended to use the database character set AL32UTF8 whenever a database is going to be used with the XML capabilities. The database character set in the installed DB is WE8MSWIN1252. To make use of XML DB features, I need to change it to AL32UTF8. But when I try doing this, I get ORA-12712: new character set must be a superset of old character set. Is there a way to solve this issue?
    Thanks in advance,
    Divya.

    Hi,
    a change from WE8MSWIN1252 to AL32UTF8 is not directly possible, because AL32UTF8 is not a binary superset of WE8MSWIN1252.
    There are 2 options:
    - use a full export and import
    - use ALTER DATABASE CHARACTER SET in a restricted way
    Which method you can choose depends on the characters in the database: if it is only ASCII, the second one can work; in other cases, the first one is needed.
    It is all described in the Support Note 260192.1, "Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)". Get it from the support/metalink site.
    You can also read the chapters about this issue in the Globalization Guide: [url http://download.oracle.com/docs/cd/E11882_01/server.112/e10729/ch11charsetmig.htm#g1011430]Change characterset.
    Herald ten Dam
    http://htendam.wordpress.com
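
    Whether the restricted ALTER route is even an option depends, as noted above, on the data being pure ASCII. Oracle's CSSCAN utility is the real tool for deciding that; purely as an illustration, a client-side sketch that scans one column (table, column, and connection details are placeholders):

    import java.sql.*;

    public class AsciiScan {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT ename FROM emp")) { // placeholder
                while (rs.next()) {
                    String v = rs.getString(1);
                    // Any code point >= 128 rules out the ASCII-only shortcut.
                    if (v != null && !v.chars().allMatch(c -> c < 128)) {
                        System.out.println("non-ASCII value found: " + v);
                    }
                }
            }
        }
    }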

  • Oracle 8.1.5 install on Linux Redhat 6.0: character set (and other) problem(s)

    I am trying to install Oracle 8i on Linux and it does not work: once the install is finished, I get a message saying "Character Set not found".
    I am running a French version of Linux (fr-latin1) and I am trying to install Oracle with French and English as languages.
    Another problem with this install: Oracle does not seem to recognize that I have 6.9 GB for it to install on, and says that I do not have enough space for the install...
    And at the end of the install, it takes ages (about 15 minutes) during which nothing seems to happen. On one machine I got out of this phase, but on the other I never saw it finish; it looks as if the computer crashed. Is that normal?
    I went through all the initialization phases and set the correct environment variables...
    thanks
    Solange

    I've been dealing with the same problems in the English version but could bypass this by doing the following:
    - Just ignore the disk space stuff.
    - Ignore the charset message, too.
    - When creating a database, choose custom and then select the WE8ISO8859P1 character set. It worked for Portuguese, so it must work for French also.
    - Everyone here recommended, and I do the same: leave the database creation for later, not during installation.
    Good Luck!

  • The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.

    We developed an SSIS package to pull data from an Oracle source to SQL Server 2012. We used an ADO.NET source to pull the records, but we get the error below after pulling some 40K records.
    [ADO NET Source [2]] Error: The ADO NET Source was unable to process the data. ORA-64203: Destination buffer too small to hold CLOB data after character set conversion.
    [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ADO NET Source returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Is there anything we can do to fix this?

    Hi,
    Tried both...
    * Having the schema type as Nvarchar(max) - getting the same error.
    * Instead of the ADO.NET source, used an OLE DB source with the driver "Oracle Provider for OLE DB" - getting the error below.
    [OLE DB Source [478]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
    [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on OLE DB Source returned error code 0xC0202009. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Additional info:
    * Here the source task is failing, not the conversion or destination task.
    Thanks,
    Loganathan A.

  • Oracle XE and character set

    Hello all,
    I installed Oracle XE on RHEL 4 Linux and I found that the database character set is AL32UTF8. Does anyone know why Oracle chose this character set? Maybe because of the NLS_LANG environment variable? Is it possible to change it to EE8ISO8859P2? Since the database is still empty, I can drop it and create a new database.
    Do you think it is possible to set some environment variables and do a new Oracle XE installation, including a database with the ISO character set?
    I want to have the EE8ISO8859P2 charset because I am doing exp/imp from another Oracle ISO database into Oracle XE, and it is much easier to do this without charset conversion.
    Any help will be appreciated.
    regards,
    Miha

    When you download XE, you have a choice: take the 'Western European' character set download, or the 'Unicode' download.
    No other choices.
    Join us over in the XE forum, where people have discussed this and found workarounds. Info about finding that forum is at Re: Oracle XE Installation failed.

  • Problem with Character Set in Oracle database 10g

    Hi,
    I tried to import one tablespace into a test server. The source server runs Oracle 8i and the target server Oracle Database 10g. The error I get is:
    Import: Release 10.2.0.1.0 - Production on Thu Aug 3 00:20:49 2006
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Username: sys as sysdba
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V08.01.07 via conventional path
    About to import transportable tablespace(s) metadata...
    import done in WE8DEC character set and AL16UTF16 NCHAR character set
    export server uses WE8DEC NCHAR character set (possible ncharset conversion)
    . importing SYS's objects into SYS
    . importing SYS's objects into SYS
    IMP-00017: following statement failed with ORACLE error 19736:
    "BEGIN sys.dbms_plugts.beginImport ('8.1.7.4.0',2,'2',NULL,'NULL',67051,25"
    "51,2); END;"
    IMP-00003: ORACLE error 19736 encountered
    ORA-19736: can not plug a tablespace into a database using a different national character set
    ORA-06512: at "SYS.DBMS_PLUGTS", line 2386
    ORA-06512: at "SYS.DBMS_PLUGTS", line 1946
    ORA-06512: at line 1
    IMP-00000: Import terminated unsuccessfully
    Please, can somebody help me resolve this? Has anybody seen this error before?

    The solution to this problem is described in MetaLink note #211920.1. But this note is published with LIMITED access as it involves using a hidden parameter.
    You can get access to the note through Oracle Support only.
    The problem itself is solved generically if the source database is at least 10.1.0.3 and the target database is 10.2.
    -- Sergiusz

  • What every developer should know about character encoding

    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this: most Americans can get by without having to take this into account, most of the time. That is because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127, the trouble starts.
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact, we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 values were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country: that broke the paradigm.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong, the file will be misread.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because, as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values as the start of a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those bytes together define the character. This goes up to 4-byte sequences in modern UTF-8 (the original design allowed up to 6). Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
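
    A quick way to see the variable width in action (JDK only; the characters are written as escapes on purpose):

    import java.nio.charset.StandardCharsets;

    public class Utf8Widths {
        public static void main(String[] args) {
            // A, ß, €, and an emoji: one character each, one to four bytes in UTF-8.
            String[] samples = { "A", "\u00df", "\u20ac", "\uD83D\uDE00" };
            for (String s : samples) {
                System.out.println(s + " -> "
                        + s.getBytes(StandardCharsets.UTF_8).length + " byte(s)");
            }
        }
    }
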
    But here is what everyone trips over: they have an HTML or XML file, it works fine, and they open it up in a text editor. Then, in their text editor, using the codepage for their region, they insert a character like ß and save the file. Of course it must be correct: their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
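
    That failure mode is easy to reproduce: the single cp1252 byte for ß is not a valid UTF-8 sequence on its own. A tiny sketch:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class MojibakeDemo {
        public static void main(String[] args) {
            // "ß" saved by an editor using the Windows Western codepage: one byte, 0xDF.
            byte[] saved = "\u00df".getBytes(Charset.forName("windows-1252"));
            // A reader that believes the file is UTF-8 sees an illegal sequence and
            // substitutes the replacement character U+FFFD.
            System.out.println(new String(saved, StandardCharsets.UTF_8)); // prints: �
        }
    }
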
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example: your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    Here's a key point about these text files – every program is still using an encoding. It may not be setting it in code, but by definition an encoding is being used.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
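
    Point 3 in JDK terms, a minimal sketch:

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class ExplicitEncoding {
        public static void main(String[] args) throws IOException {
            // Write with an explicit charset, never the platform default...
            try (Writer w = new OutputStreamWriter(
                    new FileOutputStream("notes.txt"), StandardCharsets.UTF_8)) {
                w.write("na\u00efve caf\u00e9\n"); // "naïve café"
            }
            // ...and read it back declaring the same charset.
            try (BufferedReader r = new BufferedReader(new InputStreamReader(
                    new FileInputStream("notes.txt"), StandardCharsets.UTF_8))) {
                System.out.println(r.readLine());
            }
        }
    }
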
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading and writing files correctly, but what about inside your code? This is where it's easy: Unicode. That's what those encoders in the Java and .NET runtimes are designed to do. You read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    Wrapping it up
    I think there are two key items to keep in mind here. First, make sure you are taking the encoding into account on text files. Second, this is actually all very easy and straightforward. People rarely screw up how to use an encoding; it's when they ignore the issue that they get into trouble.

    DavidThi808 wrote:
    This was originally posted (with better formatting) at Moderator edit: link removed/what-every-developer-should-know-about-character-encoding.html. I'm posting because lots of people trip over this.
    If you write code that touches a text file, you probably need this.
    Let's start off with two key items:
    1. Unicode does not solve this issue for us (yet).
    2. Every text file is encoded. There is no such thing as an unencoded file or a "general" encoding.
    And let's add a codicil to this: most Americans can get by without having to take this into account, most of the time. That is because the characters for the first 127 byte values in the vast majority of encoding schemes map to the same set of characters (more accurately called glyphs), and because we only use A-Z without any other characters, accents, etc., we're good to go. But the second you use those same assumptions in an HTML or XML file that has characters outside the first 127, the trouble starts.
    Pretty sure most Americans do not use character sets that only have a range of 0-127. I don't think I have ever used a desktop OS that did. I might have used some big iron boxes before that, but at that time I wasn't even aware that character sets existed.
    They might only use that range, but that is a different issue, especially since that range is exactly the same in UTF-8 anyway.
    >
    The computer industry started with disk space and memory at a premium. Anyone who suggested using 2 bytes for each character instead of one would have been laughed at. In fact, we're lucky that the byte worked best as 8 bits, or we might have had fewer than 256 values for each character. There were of course numerous character sets (or codepages) developed early on. But we ended up with most everyone using a standard set of codepages where the first 127 values were identical in all of them and the second half was unique to each set. There were sets for America/Western Europe, Central Europe, Russia, etc.
    And then for Asia, because 256 characters were not enough, some of the range 128 – 255 had what was called DBCS (double byte character sets). For each value of a first byte (in these higher ranges), the second byte then identified one of 256 characters. This gave a total of 128 * 256 additional characters. It was a hack, but it kept memory use to a minimum. Chinese, Japanese, and Korean each have their own DBCS codepage.
    And for a while this worked well. Operating systems, applications, etc. were mostly set to use a specified code page. But then the internet came along. A website in America using an XML file from Greece to display data to a user browsing in Russia, where each is entering data based on their country: that broke the paradigm.
    The above is only true for small-volume sets. If I am targeting a processing rate of 2000 txns/sec with a requirement to hold data active for seven years, then a column with a size of 8 bytes is significantly different from one with 16 bytes.
    Fast forward to today. The two file formats where we can explain this best, and where everyone trips over it, are HTML and XML. Every HTML and XML file can optionally have the character encoding set in its header metadata. If it's not set, then most programs assume it is UTF-8, but that is not a standard and not universally followed. If the encoding is not specified and the program reading the file guesses wrong, the file will be misread.
    The above is out of place. It would be best to address this as part of Point 1.
    Point 1 – Never treat specifying the encoding as optional when writing a file. Always write it to the file. Always. Even if you are willing to swear that the file will never have characters out of the range 1 – 127.
    Now let's look at UTF-8, because, as the standard, and because of the way it works, it gets people into a lot of trouble. UTF-8 was popular for two reasons. First, it matched the standard codepages for the first 127 characters, and so most existing HTML and XML would match it. Second, it was designed to use as few bytes as possible, which mattered a lot back when it was designed and many people were still using dial-up modems.
    UTF-8 borrowed from the DBCS designs of the Asian codepages. The first 128 values are all single-byte representations of characters. Then, for the next most common set, it uses a block in the second 128 values as the start of a double-byte sequence, giving us more characters. But wait, there's more. For the less common characters there's a first byte which leads to a series of second bytes; those then each lead to a third byte, and those bytes together define the character. This goes up to 4-byte sequences in modern UTF-8 (the original design allowed up to 6). Using this MBCS (multi-byte character set) approach you can write the equivalent of every Unicode character, and, assuming what you are writing is not a list of seldom-used Chinese characters, do it in fewer bytes.
    The first part of that paragraph is odd. The first 128 characters of Unicode, all of Unicode, are based on ASCII. The representational format of UTF-8 is required to implement Unicode, thus it must represent those characters. It uses the idiom supported by variable-width encodings to do that.
    But here is what everyone trips over: they have an HTML or XML file, it works fine, and they open it up in a text editor. Then, in their text editor, using the codepage for their region, they insert a character like ß and save the file. Of course it must be correct: their text editor shows it correctly. But feed it to any program that reads according to the declared encoding, and that byte is now the first byte of a 2-byte sequence. You either get a different character or, if the second byte is not a legal value for that first byte, an error.
    Not sure what you are saying here. If a file is supposed to be in one encoding and you insert invalid characters into it, then it is invalid. End of story. It has nothing to do with HTML/XML.
    Point 2 – Always create HTML and XML in a program that writes it out correctly using the encoding. If you must create it with a text editor, then view the final file in a browser.
    The browser still needs to support the encoding.
    Now, what about when the code you are writing will read or write a file? We are not talking binary/data files where you write it out in your own format, but files that are considered text files. Java, .NET, etc. all have character encoders. The purpose of these encoders is to translate between a sequence of bytes (the file) and the characters they represent. Let's take what is actually a very difficult example: your source code, be it C#, Java, etc. These are still by and large "plain old text files" with no encoding hints. So how do programs handle them? Many assume they use the local code page. Many others assume that all characters will be in the range 0 – 127 and will choke on anything else.
    I know Java files have a default encoding - the specification defines it. And I am certain C# does as well.
    Point 3 – Always set the encoding when you read and write text files. Not just for HTML & XML, but even for files like source code. It's fine if you set it to use the default codepage, but set the encoding.
    It is important to define it. Whether you set it is another matter.
    Point 4 – Use the most complete encoder possible. You can write your own XML as a text file encoded for UTF-8. But if you write it using an XML encoder, then it will include the encoding in the meta data and you can't get it wrong. (it also adds the endian preamble to the file.)
    Ok, you're reading and writing files correctly, but what about inside your code? This is where it's easy: Unicode. That's what those encoders in the Java and .NET runtimes are designed to do. You read in and get Unicode; you write Unicode and get an encoded file. That's why the char type is 16 bits and is a unique core type that is for characters. This you probably have right, because languages today don't give you much choice in the matter.
    Unicode character escapes are replaced prior to actual code compilation. Thus it is possible to create strings in Java with escaped Unicode characters that will fail to compile.
    Point 5 – (For developers on languages that have been around a while) – Always use Unicode internally. In C++ this is called wide chars (or something similar). Don't get clever to save a couple of bytes; memory is cheap and you have more important things to do.
    No. A developer should understand the problem domain represented by the requirements and the business, and create solutions that are appropriate to that. Thus there is absolutely no point for someone who is creating an inventory system for a standalone store to craft a solution that supports multiple languages.
    Another example: in high-volume systems, moving/storing bytes is relevant. As such, one must carefully consider each text element as to whether it is customer-consumable or internally consumable. Saving bytes in such cases will impact the total load of the system. In such systems, incremental savings impact operating costs, and speed is a marketing advantage.

  • Character set marker unknown error while importing data in 10g from 8i

    Hi All,
    I am trying to import the whole database, schema by schema, through the Oracle 10g Database Control page (which is browser based).
    But when I try to import the objects of a schema, I get the error
    "IMP-00037 : Character set marker unknown"
    Import terminated unsuccessfully
    Would anybody please tell me about the mentioned error?
    Kindly provide the solution.
    Mentioned below are the character sets available in both versions.
    Character sets in Oracle 8.1.7:
    Database Character Set :: WE8ISO8859P1
    National Character Set :: WE8ISO8859P1
    Character sets in Oracle 10.2.0.1.0:
    Database Character Set :: WE8ISO8859P1
    National Character Set :: AL16UTF16
    Regards
    Milin...

    Hi,
    As you have asked, I have given below the export command used to get the full database export file.
    I have copied this export file (dump) to another machine (server) where I have to import it to upgrade the database to 10g Release 2.
    SET CC=%DATE:~4,2%-%DATE:~7,2%-%DATE:~10,4%
    exp username/password@db_name
    file=D:\PAY_BKP\EXP_FULL_PAY_%CC%.DMP log=D:\PAY_BKP\EXP_FULL_PAY.log
    indexes=yes
    full=yes
    Kindly provide guidance.
    Regards
    Milin

  • Java Character set error while loading data using iSetup

    Hi,
    I am getting the following error while migrating setup data from one R12 (12.1.2) instance to another R12 (12.1.2) instance. Both databases have the same character set (AL32UTF8).
    We are getting this error while migrating any setup data.
    The actual error is:
    Downloading the extract from central instance
    Successfully copied the Extract
    Time taken to download Extract and write as zip file = 0 seconds
    Validating Primary Extract...
    Source Java Charset: AL32UTF8
    Target Java Charset: UTF-8
    Target Java Charset does not match with Source Java Charset
    java.lang.Exception: Target Java Charset does not match with Source Java Charset
         at oracle.apps.az.r12.common.cpserver.PreValidator.validate(PreValidator.java:191)
         at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:119)
         at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
         at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
    Error while loading apis
    java.lang.NullPointerException
         at oracle.apps.az.r12.loader.cpserver.APILoader.callAPIs(APILoader.java:158)
         at oracle.apps.az.r12.loader.cpserver.LoaderContextImpl.load(LoaderContextImpl.java:66)
         at oracle.apps.az.r12.loader.cpserver.LoaderCp.runProgram(LoaderCp.java:65)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
    Please help in identifying and resolving the issue
    Sachin

    The source and target DB character sets are the same.
    Output from the query:
    ------------- Source --------------
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    And target Instance
    -------------- Target----------------------
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    The error is about the source and target Java character sets.
    I will check the PreValidator XML from "How to use iSetup" and update the note.
    Thanks
    Sachin
