ORACLE invoices with a Japanese character set
We are having trouble printing ORACLE invoices with a Japanese character set. The printer we are using is a Dell W5300. Do I need to configure the printer, or is it something that needs to be configured in the software? Please help.
What is the "trouble"? Are you seeing the wrong output? It may not be the printer, but the software that is sending the output to the printer.
If you are using an Oracle client (SQL*Plus, Forms, Reports, etc.), ensure you set NLS_LANG to a character set that can actually hold Japanese, such as JAPANESE_JAPAN.JA16SJIS or JAPANESE_JAPAN.UTF8. (WE8MSWIN1252 is a Western European character set and cannot represent Japanese.)
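As a quick sanity check, the distinction between the character sets matters because a Western European set simply cannot encode Japanese. A minimal Java sketch of this (an illustration, not Oracle-specific, using the IANA equivalents windows-1252 for WE8MSWIN1252 and Shift_JIS for JA16SJIS):

```java
import java.nio.charset.Charset;

public class CharsetCheck {
    // Returns true if every character of text can be encoded in the named charset.
    public static boolean canEncode(String text, String charsetName) {
        return Charset.forName(charsetName).newEncoder().canEncode(text);
    }

    public static void main(String[] args) {
        String japanese = "\u8acb\u6c42\u66f8"; // 請求書 ("invoice")
        // windows-1252 (~ WE8MSWIN1252) contains no Japanese characters at all
        System.out.println(canEncode(japanese, "windows-1252")); // false
        // Shift_JIS (~ JA16SJIS) covers the standard JIS kanji
        System.out.println(canEncode(japanese, "Shift_JIS"));    // true
    }
}
```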
Similar Messages
-
Using Document Filters with the Japanese character sets
Not sure if this belongs here or on the Swing topic, but here goes:
I have been requested to restrict entry in a JTextField to English alphanumerics and full-width Katakana.
The East Asian language support also allows Hiragana and half-width Katakana.
I have tried to attach a DocumentFilter. The filter employs a validateString method which strips all non-Latin alphanumerics as well as anything in the Hiragana or half-width Katakana ranges. The code is pretty simple (most of the code below is dedicated to debugging):
import java.awt.EventQueue;
import java.awt.GridLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTextField;
import javax.swing.text.AbstractDocument;
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.Document;
import javax.swing.text.DocumentFilter;

public class KatakanaInputFilter extends DocumentFilter {
    private static int LOW_KATAKANA_RANGE = 0x30A0;
    private static int LOW_HALF_KATAKANA_RANGE = 0xFF66;
    private static int HIGH_HALF_KATAKANA_RANGE = 0xFFEE;
    private static int LOW_HIRAGANA_RANGE = 0x3041;
    private static int HIGH_HIRAGANA_RANGE = 0x3096;

    public KatakanaInputFilter() {
        super();
    }

    @Override
    public void replace(FilterBypass fb, int offset, int length, String text,
            AttributeSet attrs) throws BadLocationException {
        super.replace(fb, offset, length, validateString(text, offset), null);
    }

    @Override
    public void remove(FilterBypass fb, int offset, int length)
            throws BadLocationException {
        super.remove(fb, offset, length);
    }

    @Override
    public void insertString(FilterBypass fb, int offset, String string,
            AttributeSet attr) throws BadLocationException {
        // Debug: dump the code points of the incoming text
        String newString = "";
        for (int i = 0; i < string.length(); i++) {
            newString += String.format("[%x] ", string.codePointAt(i));
        }
        // Debug: dump the code points already in the document
        String oldString = "";
        int len = fb.getDocument().getLength();
        if (len > 0) {
            String fbText = fb.getDocument().getText(0, len);
            for (int i = 0; i < len; i++) {
                oldString += String.format("[%x] ", fbText.codePointAt(i));
            }
        }
        System.out.format("insertString %s into %s at location %d\n",
                newString, oldString, offset);

        super.insertString(fb, offset, validateString(string, offset), attr);

        // Debug: dump the document contents after the insert
        oldString = "";
        len = fb.getDocument().getLength();
        if (len > 0) {
            String fbText = fb.getDocument().getText(0, len);
            for (int i = 0; i < len; i++) {
                oldString += String.format("[%x] ", fbText.codePointAt(i));
            }
        }
        System.out.format("document changed to %s\n\n", oldString);
    }

    public String validateString(String text, int offset) {
        if (text == null) {
            return "";
        }
        String validText = "";
        for (int i = 0; i < text.length(); i++) {
            int unicodePoint = text.codePointAt(i);
            boolean acceptChar;
            if (unicodePoint < LOW_KATAKANA_RANGE) {
                // Below the Katakana block: accept only Latin alphanumerics
                acceptChar = !((unicodePoint < 0x30 || unicodePoint > 0x7a)
                        || (unicodePoint > 0x3a && unicodePoint < 0x41)
                        || (unicodePoint > 0x5a && unicodePoint < 0x61));
            } else {
                // Reject half-width Katakana and Hiragana; accept the rest
                acceptChar = !((unicodePoint >= LOW_HALF_KATAKANA_RANGE
                            && unicodePoint <= HIGH_HALF_KATAKANA_RANGE)
                        || (unicodePoint >= LOW_HIRAGANA_RANGE
                            && unicodePoint <= HIGH_HIRAGANA_RANGE));
            }
            if (acceptChar) {
                System.out.format("  Accepted code point = %x\n", unicodePoint);
                validText += text.charAt(i);
            } else {
                System.out.format("  Rejected code point = %x\n", unicodePoint);
            }
        }
        String newString = "";
        for (int i = 0; i < validText.length(); i++) {
            newString += String.format("[%x] ", validText.codePointAt(i));
        }
        System.out.format("ValidatedString = %s\n", newString);
        return validText;
    }

    public static void main(String[] args) {
        Runnable runner = new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Katakana Input Filter");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setLayout(new GridLayout(2, 2));
                frame.add(new JLabel("Text"));
                JTextField textFieldOne = new JTextField();
                Document textDocOne = textFieldOne.getDocument();
                DocumentFilter filterOne = new KatakanaInputFilter();
                ((AbstractDocument) textDocOne).setDocumentFilter(filterOne);
                frame.add(textFieldOne);
                frame.setSize(250, 90);
                frame.setVisible(true);
            }
        };
        EventQueue.invokeLater(runner);
    }
}
I run this code, use the language bar to switch to full-width Katakana, and type "y" followed by "u", which forms a valid Katakana character. I then used the language bar to switch to Hiragana and retyped the "y" followed by "u". When the code sees the Hiragana code point generated by this key combination, it rejects it. My debugging statements show that the document is properly updated. However, when I type the next character, I find that the previously rejected code point is being sent back to my insert method. It appears that the text somehow got cached in the composedTextContent of the JTextField.
Here is the output of the program when I follow the steps I just outlined:
insertString [ff59] into at location 0 <== typed y (Katakana)
Accepted code point = ff59
ValidatedString = [ff59]
document changed to [ff59]
insertString [30e6] into at location 0 <== typed u (Katakana)
Accepted code point = 30e6
ValidatedString = [30e6]
document changed to [30e6]
insertString [30e6] [ff59] into at location 0 <== typed y (Hiragana)
Accepted code point = 30e6
Accepted code point = ff59
ValidatedString = [30e6] [ff59]
document changed to [30e6] [ff59]
insertString [30e6] [3086] into at location 0 <== typed u (Hiragana)
Accepted code point = 30e6
Rejected code point = 3086
ValidatedString = [30e6]
document changed to [30e6]
insertString [30e6] [3086] [ff59] into at location 0 <== typed u (Hiragana)
Accepted code point = 30e6
Rejected code point = 3086
Accepted code point = ff59
ValidatedString = [30e6] [ff59]
document changed to [30e6] [ff59]
As far as I can tell, the data in the document looks fine. But the JTextField does not have the same data as the document. At this point it is not displaying the ff59 code point as a "y" (as it does when first entering the Hiragana character), but it has somehow combined it with another code point to form a complete Hiragana character.
Can anyone see what it is that I am doing wrong? Any help would be appreciated, as I am baffled at this point.
You have a method called "remove", but I don't see you calling it from anywhere in your program. When the validation fails, call remove to remove the bad character.
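As an aside on the range checks themselves: the hand-maintained hex bounds can be replaced with java.lang.Character.UnicodeBlock, which names the blocks directly. A sketch of the acceptance test alone (a hypothetical helper, not the poster's code), accepting ASCII alphanumerics and full-width Katakana while rejecting Hiragana and half-width Katakana:

```java
public class KatakanaCheck {
    // Accept ASCII letters/digits and full-width Katakana; reject everything else.
    public static boolean accept(int codePoint) {
        if (codePoint < 0x80) {
            return Character.isLetterOrDigit(codePoint);
        }
        Character.UnicodeBlock block = Character.UnicodeBlock.of(codePoint);
        // KATAKANA is the full-width block (U+30A0..U+30FF); half-width
        // Katakana lives in HALFWIDTH_AND_FULLWIDTH_FORMS and is rejected here.
        return block == Character.UnicodeBlock.KATAKANA;
    }

    public static void main(String[] args) {
        System.out.println(accept(0x30E6)); // ユ full-width Katakana -> true
        System.out.println(accept(0x3086)); // ゆ Hiragana -> false
        System.out.println(accept(0xFF73)); // ｳ half-width Katakana -> false
        System.out.println(accept('y'));    // ASCII letter -> true
    }
}
```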
V.V. -
Problem displaying japanese character set in shopping cart smartform
Hi All,
Whenever users enter text in a Japanese character set while creating a shopping cart in SRM, the smartform print output displays junk characters, even though the system is Unicode compatible. Has anyone had this problem?
Thanks.
Hi,
There may be a problem with the Unicode conversion.
See the following links:
Note 548016 - Conversion to Unicode
http://help.sap.com/saphelp_srm50/helpdata/en/9f/fdd13fa69a4921e10000000a1550b0/frameset.htm
Europe Languages work in Non- Unicode System
Re: Multiple Backends
Re: Language issue
Standard Code Pages in Non-Unicode System
Re: Upgrade from EBP 4.0 to SRM 5.0
http://help.sap.com/saphelp_srm50/helpdata/en/e9/c4cc9b03a422428603643ad3e8a5aa/content.htm
http://help.sap.com/saphelp_srm50/helpdata/en/11/395542785de64885c4e84023d93d93/content.htm
BR,
Disha.
Do reward points for useful answers. -
How to change Japanese character set
Below are the character sets in my DB:
NLS_CHARACTERSET=WE8ISO8859P1
NLS_NCHAR_CHARACTERSET=UTF8
Correct answer (if I use the English language, the result is correct):
==========
select product(',','AB_BC ,DE') from dual;
(AB_BC, DE,,,)
After altering the parameter at session level to get the Japanese character set, I am getting the wrong result:
==============
select product(',','A_BC ,DE') from dual;
(AB, BC , DE,,,,)
How do I change this at session level to get the Japanese character set?
user446367 wrote:
"Correct Answer (If I use english language the result is correct)"
What does "use english language" mean in this context?
"After altering the parameter at session level to get Japanese character set I am getting wrong result"
There is no such thing. Show us (copy/paste) the commands and the resulting output, please.
"select product(',','A_BC ,DE') from dual;"
As requested several times already in your other thread on the same subject, it would greatly help forum members to help you if you would post the PL/SQL of this function.
AFAIK, product() is not a built-in standard Oracle function.
"How to change at session leavel to get Japanese character set"
That is probably not what's needed, but anyway, here's one simple example:
export NLS_LANG=.JA16SJIS
sqlplus u/p@svc
sql> ... -
HOW can I enter text using Japanese character sets?
The "Text, Plates, Insets" section of the LOOKOUT(6.01) Help files states:
"Click the » button to the right of the Text field to expand the field for multiple line entries. You can enter text using international character sets such as Chinese, Korean, and Japanese."
Can someone please explain HOW to do this? Note, I have NO problem inputting Hiragana, Katakana, and Kanji into MS Word; the keyboard emulates the Japanese layout and characters (Romaji is the default), the IME works fine converting Romaji, and I can also select characters directly from the IME Pad. I have tried several different fonts with success and am currently using MS UI Gothic.ttf as the default. Again, everything is normal and working in a predictable manner within Word.
I cannot get these texts into Lookout. I can't cut/paste from HTML pages or from text editors, even though both display properly. Within Lookout, with JP selected as the language/keyboard, when trying to type directly into the text field, the IME CORRECTLY displays Hiragana until <enter> is pressed, at which point all text reverts to question marks (?? ???? ? ?????). If I use the IME Pad, it does pretty much the same. I managed to get the yen symbol to display, though, if that's relevant. As I said, the font selected (in text/plate font options) is MS UI Gothic with Japanese as the selected script. Oddly enough, at this point the "sample" window is showing me the exact Hiragana character I want displayed in Lookout, but it won't display. I've also tried staying in English and copying Unicode characters from the Windows Character Map. Same results (yen sign works, Hiragana WON'T).
Help me!
JW_Tech
JW_Tech,
Have you changed the regional setting to Japanese?
Doug M
Applications Engineer
National Instruments
For those unfamiliar with NBC's The Office, my icon is NOT a picture of me
Attachments:
language.JPG 50 KB -
Crystal XI R2 exporting issues with double-byte character sets
NOTE: I have also posted this in the Business Objects General section with no resolution, so I figured I would try this forum as well.
We are using Crystal Reports XI Release 2 (version 11.5.0.313).
We have an application that can be run using multiple cultures/languages, chosen at login time. We have discovered an issue when exporting a Crystal report from our application while using a double-byte character set (Korean, Japanese).
The original text when viewed through our application in the Crystal preview window looks correct:
性能 著概要
When exported to Microsoft Word, it also looks correct. However, when we export to PDF or even RPT, the characters are not being converted. The double-byte characters are rendered as boxes instead. It seems that the PDF and RPT exports are somehow not making use of the linked fonts Windows provides for double-byte character sets. This same behavior is exhibited when exporting a PDF from the Crystal report designer environment. We are using Tahoma, a TrueType font, in our report.
I did discover some new behavior that may or may not have any bearing on this issue. When a text field containing double-byte characters is just sitting on the report in the report designer, the box characters are displayed where the Korean characters should be. However, when I double click on the text field to edit the text, the Korean characters suddenly appear, replacing the boxes. And when I exit edit mode of the text field, the boxes are back. And they remain this way when exported, whether from inside the design environment or outside it.
Has anyone seen this behavior? Is SAP/Business Objects/Crystal aware of this? Is there a fix available? Any insights would be welcomed.
Thanks,
Jeff
Hi Jeff,
I searched on the forums and got the following information:
1) If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys. Once you have located the registry key, highlight the face name of the font you want to link to, and then from the Edit menu, click Modify. On a new line in the "Value data" field of the Edit Multi-String dialog box, enter "path and file to link to," "face name of the font to link".
2) "Fonts in general, especially TrueType and OpenType, are 'Unicode'."
Since you are using a TrueType font, it may be a Unicode type already. However, if Bud's suggestion works, then nothing better than that.
Also, could you please check the output from crystal designer with different version of pdf than the current one?
Meanwhile, I will look out for any additional/suitable information on this issue. -
UTF/Japanese character set and my application
Blankfellaws...
a simple query about the internationalization of an enterprise application..
I have a considerably large application running as 4 layers.. namely..
1) presentation layer - I have a servlet here
2) business layer - I have an EJB container here with EJBs
3) messaging layer - I have either Weblogic JMS here in which case it is an
application server or I will have MQSeries in which case it will be a
different machine all together
4) adapter layer - something like a connector layer with some specific or
rather customized modules which can talk to enterprise repositories
The Database has few messages in UTF format.. and they are Japanese
characters
My requirement: I need those messages to be picked up from the database by the business layer and passed on to the client screen (a web browser) through the presentation layer.
What are the various points to be noted to get this done?
Where all do I need to set the character set, and what is the ideal character set to support the maximum number of characters?
Is there anything specific to be done in my application code regarding this?
Is it just a matter of setting the character sets in the application servers / web servers / web browsers?
Please enlighten me on these areas, as I am working on something similar and trying to figure out what's wrong in my current application. When the data comes to the screen through my application, it looks corrupted, but the same message, when read through a simple servlet, displays without a problem. I am confused!
Thanks in advance
Manesh
Hello Manesh,
For the database I would recommend using UTF-8.
As for the character problems, could you elaborate on which version of WebLogic you are using and the nature of the problem?
If your problem is that of displaying the characters from the db and are
using JSP, you could try putting
<%@ page language="java" contentType="text/html; charset=UTF-8"%> on the
first line,
or if a servlet .... response.setContentType("text/html; charset=UTF-8");
Also to automatically select the correct charset by the browser, you will
have to include
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in the
jsp.
You could replace the "UTF-8" with other charsets you are using.
I hope this helps...
David.
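The corruption the original poster describes is typically a decode step using the wrong charset somewhere between the database and the browser. A small self-contained sketch of that failure mode (plain Java, no WebLogic involved):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "\u65e5\u672c\u8a9e"; // 日本語 ("Japanese language")
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);

        // Decoding UTF-8 bytes with the wrong charset produces garbled text
        String wrong = new String(utf8, StandardCharsets.ISO_8859_1);
        // Decoding with the charset actually used for encoding round-trips
        String right = new String(utf8, StandardCharsets.UTF_8);

        System.out.println(original.equals(wrong)); // false: corrupted
        System.out.println(original.equals(right)); // true: intact
    }
}
```

This is why declaring charset=UTF-8 consistently in the JSP page directive, the servlet response, and the meta tag matters: every decode along the path must agree with the encode.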
-
Problems with LPX-00245 (character set problem?)
Hi all,
I've got a problem with ORA-19202 and LPX-00245 (extra data after end of document) when querying my xmltype table. The table contains one large XML document. This XML document is valid; I've checked it against the corresponding XSD (using JDeveloper and also Notepad++, no validation errors).
I guess it has something to do with the encoding of the document. The original encoding is ISO-8859-1 (<?xml version="1.0" encoding="ISO-8859-1"?>). When I load the document into the database, it is automatically changed to UTF-8 (<?xml version="1.0" encoding="UTF-8"?>), maybe because the character set of my database is AL32UTF8.
I use the following statement to store my XML:
insert into my_table
values( my_seq_spp.nextval,
r_get_files.file_name,
xmltype(
bfilename(p_directory, r_get_files.file_name) -- p_directory is the name of an Oracle directory
, nls_charset_id('WE8ISO8859P1')
)
);
Nevertheless, the passed charset ID 31 is ignored. Also, if I use csid = 0, it doesn't work...
Any idea how to enforce using ISO-8859-1 instead of UTF-8 as the character set?
Best regards
Matthias
Hi Marco,
I don't think it has anything to do with encoding (client-side or not).
I'd be more inclined to say it's related to XML fragment manipulation.
@Matthias :
Does this work better :
select m.version
, sp.Betriebsstelle
, spa.Betriebsstellenfahrwege
from imp_spurplan t
, xmltable('/XmlIssDaten'
passing t.xml_document
COLUMNS
Version varchar2(6) path 'Version/Name'
, Spurplanbetriebsstellen xmltype path 'Spurplanbetriebsstellen'
) m
, xmltable('/Spurplanbetriebsstellen/Spurplanbetriebsstelle'
passing m.Spurplanbetriebsstellen
COLUMNS
Betriebsstellenfahrwege_xml xmltype path 'Betriebsstellenfahrwege'
, Betriebsstelle varchar2(6) path 'Betriebsstelle'
) sp
, xmltable('/Betriebsstellenfahrwege'
passing sp.Betriebsstellenfahrwege_xml
COLUMNS
Betriebsstellenfahrwege xmltype path '.'
) spa
where sp.Betriebsstelle = 'NWH' -
Import dump from a database with a different character set
My database has this character set:
select * from database_properties;
NLS_CHARACTERSET AL32UTF8 Character set
NLS_NCHAR_CHARACTERSET AL16UTF16 NCHAR Character set
I need to import a dump from a database with the WE8MSWIN1252 character set.
After the import I have seen that some characters in the tables are wrong:
I see this symbol " " instead of "à".
How can I solve the problem?
The nls_lang variable on my OS is: NLS_LANG=ITALIAN_ITALY.AL32UTF8
I work with Oracle 10.0.4 on Linux.
Message was edited by:
user613483
I have read this doc on Metalink: Note 227332.1.
I also tried to set NLS_LANG=ITALIAN_ITALY.WE8MSWIN1252
and then executed the import command.
But it didn't work.
Japanese Character Set - in Safari
I am considering purchasing a new Mac mini, and need to access my Hotmail account with the Japanese character set. My Windows XP system requires the Japanese Language Pack. How will the Mac handle this? In other words, will Safari support the viewing and editing of Japanese characters, or is there something I need to download? Thanks!
I found out that the Japanese Language Pack requires W7 Ultimate or Enterprise.
The Windows Japanese Language Pack is for turning the entire OS into Japanese. It has nothing to do with your ability to read Japanese or write Japanese while running your OS in English. I'm sure W7 comes with that installed by default, just like OS X does.
All browsers, Mac and Windows, automatically adjust to the character set provided in the code of the web page they are viewing, and also provide a way for the user to change it in the View > Text Encoding menu. I don't think you will have any trouble reading Japanese webmail with Safari, Firefox, or Opera on a Mac.
Oracle xml_dom.writeToClob ignore the character set
I'm trying to generate an XML file using Oracle. After generating the XML using dbms_xmldom, it is stored in a CLOB and later written to a table.
The problem is that the character set ('UTF-8') is not included in the header, even though it is specified using the setCharset() procedure.
Following is the oracle script,
declare
export_file_ CLOB ;
str_export_file_ xmldom.DOMDocument;
main_node xmldom.DOMNode;
root_node xmldom.DOMNode;
root_elmt xmldom.DOMElement;
begin
str_export_file_ := xmldom.newDOMDocument;
xmldom.setVersion(str_export_file_, '1.0');
xmldom.setCharset(str_export_file_ , 'UTF-8');
main_node := xmldom.makeNode(str_export_file_);
root_elmt := xmldom.createElement(str_export_file_,'TextTranslation');
xmldom.setAttribute( root_elmt, 'version' ,'1.0');
xmldom.setAttribute( root_elmt, 'language' ,'ja');
xmldom.setAttribute( root_elmt, 'module' ,'DEMOAND');
xmldom.setAttribute( root_elmt, 'type' ,'VC');
root_node := xmldom.appendChild(main_node, xmldom.makeNode(root_elmt));
export_file_ :=' ';
xmldom.writeToClob( str_export_file_,export_file_,'UTF-8');
dbms_output.put_line ( export_file_ );
end;
The output is ,
<?xml version="1.0"?>
<TextTranslation version="1.0" language="ja" module="DEMOAND" type="VC"/>
If anybody can suggest what I have done incorrectly, that would be great.
Thanks in advance.
The character set you specify via the setCharset() procedure is ignored unless you use writeToFile() later.
http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_xmldom.htm#CHDCGDDB
Usage Notes
"This is used for WRITETOFILE Procedures if not explicitly specified at that time."
You can also use something like this:
SQL> set serveroutput on
SQL>
SQL> declare
2
3 export_file clob;
4 prolog clob := '<?xml version="1.0" encoding="UTF-8"?>';
5
6 begin
7
8 select prolog || chr(10) ||
9 xmlserialize(document
10 xmlelement("TextTranslation"
11 , xmlattributes(
12 '1.0' as "version"
13 , 'ja' as "language"
14 , 'DEMOAND' as "module"
15 , 'VC' as "type"
16 )
17 )
18 indent
19 )
20 into export_file
21 from dual ;
22
23 dbms_output.put_line ( export_file );
24
25 end;
26 /
<?xml version="1.0" encoding="UTF-8"?>
<TextTranslation version="1.0" language="ja" module="DEMOAND" type="VC"/>
PL/SQL procedure successfully completed -
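For comparison, the behavior the poster expected is what Java's JAXP serializer does: an explicitly set ENCODING output property ends up in the XML declaration. A sketch (illustrative only, unrelated to dbms_xmldom itself):

```java
import java.io.ByteArrayOutputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlEncodingDemo {
    // Builds <TextTranslation language="ja"/> and serializes it as UTF-8 bytes.
    public static String serialize() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("TextTranslation");
            root.setAttribute("language", "ja");
            doc.appendChild(root);

            Transformer t = TransformerFactory.newInstance().newTransformer();
            // The encoding set here is written into the XML declaration
            t.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            t.transform(new DOMSource(doc), new StreamResult(out));
            return out.toString("UTF-8");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(serialize());
    }
}
```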
Why does Firefox disable some Japanese character sets?
Firefox 3.5.5 on Mac. This started yesterday. My Japanese input method (Kotoeri) disables all modes but Romaji. The only way to get the others (like Hiragana and Katakana) back is to restart Firefox. It seems that visiting certain websites may trigger it, but I don't know exactly. Why is this happening?
It started from one of these and their links:
http://www.yamatoamerica.com/
http://www.ocsworld.com/
http://www.dhl.com/
I tried to recreate the situation, but couldn't. I will report back if I find one that triggers it.
You can check for issues caused by plugins (plugins are not affected by Safe Mode):
*https://support.mozilla.org/kb/Troubleshooting+plugins
You can check for problems with current Flash plugin versions and try these:
*disable a possible RealPlayer Browser Record Plugin extension for Firefox and update the RealPlayer if installed
*disable protected mode in Flash 11.3 and later
*disable hardware acceleration in the Flash plugin
*http://kb.mozillazine.org/Flash#Troubleshooting -
Hi All,
My DB Version:Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
I have one procedure which updates a comment column (VARCHAR2(4000 BYTE)) of a table. When I pass a Japanese sentence to this procedure, the underlying table column gets updated with junk values like '??????', '??????????', '????????'.
My table structure is:
Table Name:test_landing_commit
Name Type Nullable Default Comments
EXIT_COMMENT VARCHAR2(4000) Y
CREATED_BY VARCHAR2(4000) Y
The procedure is:
CREATE OR REPLACE PROCEDURE TEST_PROC_NM(VAR1 IN VARCHAR2) IS
BEGIN
UPDATE TEST_LANDING_COMMIT
SET EXIT_COMMENT = VAR1
WHERE CREATED_BY = 'XXX';
END;
and NLS_CHARACTERSET is set to UTF8.
Please provide some advice to resolve this issue.
The database is not being updated with junk ... you have not globalized your system.
Go to http://tahiti.oracle.com and google and learn about globalization for your operating system and database version.
PS: By "Japanese" do you mean Kanji? Hiragana? Katakana? Romaji?
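For what it's worth, runs of '?' are the classic signature of unmappable-character replacement during a charset conversion, e.g. a client whose NLS_LANG declares a Western European character set sending Japanese text. The effect can be reproduced in isolation (a plain Java sketch, no Oracle involved):

```java
import java.nio.charset.StandardCharsets;

public class ReplacementDemo {
    public static void main(String[] args) {
        String japanese = "\u3053\u3093\u306b\u3061\u306f"; // こんにちは ("hello")
        // getBytes replaces characters the target charset cannot represent
        // with that charset's replacement byte, which is '?' for ISO-8859-1
        byte[] bytes = japanese.getBytes(StandardCharsets.ISO_8859_1);
        String stored = new String(bytes, StandardCharsets.ISO_8859_1);
        System.out.println(stored); // ?????
    }
}
```

Once the '?' bytes are stored, the original characters are gone; no later conversion can recover them, which is why the fix has to happen on the client configuration, not in the table.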
Oracle Character sets with PeopleSoft - AL32UTF8 vs. UTF8
We currently have PeopleSoft FInancials v8.8 with PeopleTools 8.45 running on Oracle 9.2.0.8 with the UTF8 character set.
We plan to upgrade to Oracle 10.2, and want to know if we can and should also convert the character set to AL32UTF8.
Any issues?
(A couple of years ago, we were told that AL32UTF8 was not yet supported in PeopleSoft.)
Right now, strangely, Oracle recommends no longer using UTF8, and PeopleSoft does not yet recommend AL32UTF8.
You can read solution ID #719906, but anyway, AL32UTF8 on PT8.4x should work fine.
Nicolas. -
How to set or change character set for Oracle 10 XE
Installing via RPM on Linux.
I need to have my database set to use UTF8 and WE8ISO8859P15 as the character set and national character set. (Think those are in the right order. If not, it's the opposite.)
If I do a standard "yum localinstall rpm-file-name," it installs Oracle. I then run the "/etc/init.d/oracle-xe configure" command to set my ports.
Every time I do this, I end up with AL32/AL16 character sets.
I finally hardcoded ISO-8859-15 as the Linux 'locale' character set and set this in the various bash profile config files. Now, I end up with WE8MSWIN1252 as the character set and AL16UTF16 as the national character set.
I've tried editing the createdb.sh script to hard code the character set types and then copied that file over the original while the RPM is still installing. I've tried editing the nls_lang.sh script to hard code the settings there and copied over the original shell script while the RPM is still installing.
Doesn't matter.
HOW can I do this? If I wait until after the RPM is installed and try running the createdb.sh file, then it ends up creating a database but not doing everything properly. I end up missing pfiles or spfiles. Various errors crop up.
If I try to change them from the sql command line, I am told that the new character set must be a superset of the old one. It fails.
I'm new to Oracle, so I'm treading uncharted waters. In short, I need community help. It's important to the app I'm running, and attempting to migrate from, to maintain these character sets.
Thanks.
I don't think you can change the Oracle XE character set. When downloading Oracle XE you must choose:
- either the Universal Edition, using AL32UTF8
- or the Western European Edition, using WE8MSWIN1252.
See http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABJACJJ
If you really need UTF8 instead of AL32UTF8, you need to use Oracle Standard Edition or Oracle Enterprise Edition:
these editions allow you to select the database character set at database creation time, which is not possible with Oracle XE.
Note that changing the environment variable NLS_LANG has nothing to do with changing the database character set:
http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABBGFIC