UTF8 fiasco
Hi,
I am having difficulties with displaying UTF8 characters from the output stream from my servlets and JSPs.
Currently running WLS6.1, NT4.0 Sp6a, Oracle 8.1.5 with NLS_LANG set to AMERICAN_AMERICA.UTF8.
The following junk characters
????%u2013%u2021??%u20AC??%u201C
when pasted into an HTML page and viewed with the UTF-8 encoding in the browser, display fine.
When assigning this to a String:
String language = "????%u2013%u2021??%u20AC??%u201C";
out.println(language);
it gives me junk. It looks like the servlet cuts the 16-bit characters down to 8 bits when it generates the output stream.
My default encoding is CP1252, which may account for why it doesn't handle the larger-value characters (> char(127)). Is there any way around this? I've tried InputStreamReader and getBytes() to work around it, but it won't work.
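To see why the default platform encoding mangles the output, here is a minimal sketch in plain Java (nothing WLS-specific; the string is just two sample characters above U+00FF):

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        // Two characters outside the 8-bit range: en dash (U+2013) and euro (U+20AC)
        String s = "\u2013\u20AC";

        // UTF-8 keeps all the information (3 bytes per character here)
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length);                                         // 6

        // ...and round-trips cleanly
        System.out.println(new String(utf8, StandardCharsets.UTF_8).equals(s));  // true

        // Squeezing the string through a 7-bit charset replaces both
        // characters with '?', which is exactly the junk seen in servlet output
        byte[] ascii = s.getBytes(StandardCharsets.US_ASCII);
        System.out.println(new String(ascii, StandardCharsets.US_ASCII));        // ??
    }
}
```

In a servlet the same effect comes from the charset of the response writer, which is why setting the content type and charset parameters matters.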
Hi Nohmenn,
Thanks to your reply (and the "Chinese text in JSPs" thread), I managed to get
the whole thing working! Now I can store and retrieve my UTF8 data from the database
AND view UTF raw characters properly on screen.
For the sake of other people who might be having the same difficulties, here's
how the folks at BEA told me to do it:
Environment:
WinNT Sp6a
Oracle 8.1.5 (UTF-8, NLS_LANG=AMERICAN_AMERICA.UTF8)
WLS 6.1
setEnv.cmd:
Add NLS_LANG=XXX, where XXX is your database character set.
This will enable the JDriver to decode/encode data to be written to the database.
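For example, with the database character set from this environment (an assumption that yours matches), the line added to setEnv.cmd would be:

```
set NLS_LANG=AMERICAN_AMERICA.UTF8
```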
web.xml:
Added the following
<context-param>
<param-name>weblogic.httpd.inputCharset./*</param-name>
<param-value>UTF-8</param-value>
</context-param>
This makes all input parameters get parsed and decoded as UTF-8, I believe.
weblogic.xml:
Added the following
<charset-params>
<input-charset>
<resource-path>*.jsp</resource-path>
<java-charset-name>UTF8</java-charset-name>
</input-charset>
</charset-params>
This saves explicitly specifying the content type in your JSPs, e.g. response.setContentType("text/html; charset=UTF8");
Added the following within the <jsp-descriptor> </jsp-descriptor> tags
<jsp-param>
<param-name>encoding</param-name>
<param-value>UTF8</param-value>
</jsp-param>
I guess this sets the base encoding for the JSPs?
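If you would rather set this per page than in weblogic.xml, the standard JSP page directive should be equivalent (an assumption on my part; I have only used the descriptor approach):

```jsp
<%@ page contentType="text/html; charset=UTF-8" %>
```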
And that did it for me. Thanks Again!
Similar Messages
-
Has anyone addressed the AUR haskell depends/makedepends fiasco?
I've been searching around here, the web, and flyspray, and haven't found anything yet. Has anyone addressed the depends/makedepends fiasco with the Haskell PKGBUILD generated by the arch-haskell script?
haskell-hdbc-sqlite3 > grep -i depends PKGBUILD
makedepends=('ghc' 'haskell-cabal' 'haskell-hdbc>=2.1.0' 'haskell-utf8-string' 'sqlite3')
depends=('ghc' 'haskell-cabal' 'haskell-hdbc>=2.1.0' 'haskell-utf8-string' 'sqlite3')
Do all 1000+ PKGBUILD have this problem? Surely someone's looking into this, right?
--EDIT--
There are incorrect dependencies. There are 31 packages that depend on haskell-time of which a newer version is in GHC. The licenses are in non-standard places. Etc.
*** I recommend that all packages created by arch-haskell get removed from the AUR. ***
skottish wrote:
Do all 1000+ PKGBUILD have this problem? Surely someone's looking into this, right?
--EDIT--
There are incorrect dependencies. There are 31 packages that depend on haskell-time of which a newer version is in GHC. The licenses are in non-standard places. Etc.
*** I recommend that all packages created by arch-haskell get removed from the AUR. ***
Hello skottish.
Thanks for the bug report. It would be better directed by reporting a bug to the Arch Haskell team, http://www.haskell.org/mailman/listinfo/arch-haskell
Just some background: the Arch Haskell team is a group of about 12 developers, including 2 Arch core developers, engaged in supporting the Haskell programming language on Arch, bringing the Haskell community to Arch, and advocating for Arch Linux. The effort is centered around the #archhaskell irc channel, the arch-haskell@ mailing list, archhaskell.wordpress.com and the wiki.
We're aware of the depends/makedepends issue. cabal2arch, the tool that packages Haskell software for Arch, used an incorrect understanding of the semantics of makedepends and depends. That has been fixed. Packages are being updated incrementally, and the situation as it stands causes no harm. The current semantics for Haskell packages wrt. dependencies are stated here: http://www.haskell.org/pipermail/arch-h … 00193.html
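To make the fixed semantics concrete, here is a hypothetical corrected split for the package quoted above (an illustration only, not taken from the updated PKGBUILD; makepkg already installs depends before building, so they need not be repeated in makedepends):

```bash
# Hypothetical corrected PKGBUILD fragment -- actual updated packages may differ.
# Run-time requirements:
depends=('ghc' 'haskell-hdbc>=2.1.0' 'haskell-utf8-string' 'sqlite3')
# Build-only requirements (depends are implicitly available at build time):
makedepends=('haskell-cabal')
```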
Regarding haskell-time, the reason packages are using an older version of haskell-time is that Arch Linux supports the Haskell Platform http://hackage.haskell.org/platform/ which specifies which versions of libraries to support on a distro. We follow that. It is a policy decision.
Regarding the licenses, they're placed where the package guide says. http://www.haskell.org/pipermail/arch-h … 00196.html
I'm not sure what you mean by "Etc." but if you have a bug report in the packaging system, please report it to the mailing list.
-
Creative Cloud evaporates - billing fiasco
This is both a plea for help and a rant:
I've been trying to subscribe to the Cloud for nearly a month now and my trial versions are about to run out, leaving me with numerous client projects in limbo. I still have CS6 (and 5 & 5.5, because I am a bit paranoid about legacy issues), but most of my new projects are shot in Sony's new 4K codec, which is only available in CC (not sure why they can't offer a new codec in CS6, but that's another topic).
For some reason, every time I order Creative Cloud I get this message: "We'll look into that and get back to you. You should get an email from us by next business day. If you don't hear back from us, you can check your order status on your account page or call us at +1 800-585-0774", and every time I get no answer and the order does not show up in my account. I've wasted nearly six hours with tech support (at my hourly billing rate I could have paid for the full Creative Suite for a year in that time) and am no closer to resolving this issue than when it started. I've placed the order on Chrome, Safari and Firefox in case it was a browser issue, but receive the same message each time.
Beyond the fact that in a few days my trial versions will expire and I'll have to explain to my clients that I can't finish their projects because I can't pay my subscription fees, I am deeply worried that even if I can magically fix this problem that billing issues like this will arise in the future. By now I have absolutely no faith left in Adobe's ability to provide Cloud services and am looking for any and all options outside of Creative Suite for high-end productions. I know for editing there are options (FCP and Avid) but for motion graphics AE seems the best from my experience. I'd hate to switch back after all the time and cost of moving to a predominately Adobe video workflow the past three years (not to mention all the third party plug-ins) but I need a platform that is reliable and this Cloud seems to evaporate far too easily.
Does anyone have any answers on how I can possibly get CC to work? If not, does anyone have suggestions on other platforms that are comparable?
Hi geophrian
Have you tried amazon.com?
Digital subscription through amazon
http://www.amazon.com/Adobe-Creative-Membership-Digital-Subscription/dp/B00CS74YQO/ref=sr_ tr_sr_1?ie=UTF8&qid=1375651497&sr=8-1&keywords=adobe+creative+cloud
3-month pre-paid CC 149.97$
http://www.amazon.com/Adobe-Creative-Membership-Pre-Paid-Product/dp/B007W76ZLW/ref=sr_1_6? ie=UTF8&qid=1375651497&sr=8-6&keywords=adobe+creative+cloud
I don't know if they'll reject you or not, but it's worth a try?
Peter
-
MS Access and Unicode (UTF8?)
Hi --
I've been able to insert Arabic data into an MS Access table programmatically,
using SQL and the \u notation. For example:
insert into MY_TABLE values ('\u0663'); // arabic character
Then, I can read this data out using ResultSet's getCharacterStream method. The data comes back out fine, and can be displayed in a simple JTextField as Arabic.
(This required opening the database connection with the charSet=UTF8 property in the call to DriverManager's getConnection method.)
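A side note on the \u notation used above: it is resolved by the Java compiler, not by Access or JDBC, so the string literal already contains the real Arabic character at run time. A quick standalone check:

```java
public class UnicodeEscapeDemo {
    public static void main(String[] args) {
        // The compiler turns the escape into a single char, U+0663
        // (ARABIC-INDIC DIGIT THREE), before the SQL ever reaches the driver.
        String sql = "insert into MY_TABLE values ('\u0663')";
        char c = sql.charAt(sql.indexOf('\'') + 1);
        System.out.println(Integer.toHexString(c));  // 663
    }
}
```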
My problem is that I have another Access table in which the data was entered manually -- having set the Control Panel Regional setting to Arabic, and using the MS Office Tool language Arabic. The data looks fine in the Access GUI (the Arabic characters show up as Arabic).
However, when I read the data using the same method in the first example, I get back question marks. I guess there's something different about the way the data was encoded? I read that Access stores all character data as Unicode, but I'm not sure if that implies a particular encoding (such as UTF8) or not.
Is there any way to figure out how the manually-entered data is encoded?
Or is there something else I'm doing wrong?
Thanks for any help.
-J
However, when I read the data using the same method in the first example, I get back question marks. I guess there's something different about the way the data was encoded? I read that Access stores all character data as Unicode, but I'm not sure if that implies a particular encoding (such as UTF8) or not.
Is there any way to figure out how the manually-entered data is encoded?
Please see the article here: http://office.microsoft.com/en-us/assistance/HP052604161033.aspx
It suggests that Access stores data in UTF-16 or UTF-8 depending on whether a "Unicode Compression" feature is selected. So, I'd say you should try retrieving data from the other db as UTF-16.
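To illustrate the point (a generic sketch, not Access-specific): the same character has different byte sequences under the two encodings, so decoding with the wrong one cannot round-trip:

```java
import java.nio.charset.StandardCharsets;

public class Utf8VsUtf16 {
    public static void main(String[] args) {
        String s = "\u0663";  // ARABIC-INDIC DIGIT THREE

        // UTF-8 uses a 2-byte sequence for this character (0xD9 0xA3)...
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length);      // 2

        // ...while UTF-16LE uses one 16-bit code unit (0x63 0x06)
        byte[] utf16 = s.getBytes(StandardCharsets.UTF_16LE);
        System.out.println(utf16.length);     // 2

        // Decoding the UTF-16LE bytes as UTF-8 does not give the string back
        String wrong = new String(utf16, StandardCharsets.UTF_8);
        System.out.println(s.equals(wrong));  // false
    }
}
```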
Regards,
John O'Conner
-
Two-way SSL: Private key is incorrectly read if the charset is set to UTF8
Looks like PEMInputStream and other related classes assumes the application charset
"iso81", but if the charset is something else, then "java.security.KeyManagementException"
is thrown.
We have everything set up, and two-way SSL works when the encoding is not set, but
breaks if the encoding is UTF8.
WLS 7.0
OS - HP-UX
Is there any other workaround (not setting UTF8 is not a solution, ours is a WW
app).
Thanks
I would suggest posting this to the security newsgroup.
-- Rob
Govinda Raj wrote:
Looks like PEMInputStream and other related classes assumes the application charset
"iso81", but if the charset is something else, then "java.security.KeyManagementException"
is thrown.
We have everything set up, and two-way SSL works when the encoding is not set, but
breaks if the encoding is UTF8.
WLS 7.0
OS - HP-UX
Is there any other workaround (not setting UTF8 is not a solution, ours is a WW
app).
Thanks
-
External table: How to load data from a fixed format UTF8 external file
Hi Experts,
I am trying to read data from a fixed-format UTF8 external file into an external table. The file has non-ASCII characters, and their presence causes the data to be positioned incorrectly in the external table.
The following is the content's of the file:
20100423094529000000I1 ABÄCDE 1 000004
20100423094529000000I2 OMS Crew 2 2 000004
20100423094529000000I3 OMS Crew 3 3 000004
20100423094529000000I4 OMS Crew 4 4 000004
20100423094529000000I5 OMS Crew 5 5 000004
20100423094529000000I6 OMS Crew 6 6 000004
20100423094529000000I7 Mobile Crew 7 7 000004
20100423094529000000I8 Mobile Crew 8 8 000004
The structure of the data is as follows:
Name Type Start End Length
UPDATE_DTTM CHAR 1 20 20
CHANGE_TYPE_CD CHAR 21 21 1
CREW_CD CHAR 22 37 16
CREW_DESCR CHAR 38 97 60
CREW_ID CHAR 98 113 16
UDF1_CD CHAR 114 143 30
UDF1_DESCR CHAR 144 203 60
UDF2_CD CHAR 204 233 30
DATA_SOURCE_IND CHAR 294 299 6
UDF2_DESCR CHAR 234 293 60
I create the external table as follows (the forum stripped some parentheses; restored here):
CREATE TABLE "D_CREW_EXT"
( "UPDATE_DTTM" CHAR(20 BYTE),
"CHANGE_TYPE_CD" CHAR(1 BYTE),
"CREW_CD" CHAR(16 BYTE),
"CREW_DESCR" CHAR(60 BYTE),
"CREW_ID" CHAR(16 BYTE),
"UDF1_CD" CHAR(30 BYTE),
"UDF1_DESCR" CHAR(60 BYTE),
"UDF2_CD" CHAR(30 BYTE),
"DATA_SOURCE_IND" CHAR(6 BYTE),
"UDF2_DESCR" CHAR(60 BYTE)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY "TMP"
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
CHARACTERSET UTF8
STRING SIZES ARE IN BYTES
NOBADFILE NODISCARDFILE NOLOGFILE
FIELDS NOTRIM
( "UPDATE_DTTM" POSITION (1:20) CHAR(20),
"CHANGE_TYPE_CD" POSITION (21:21) CHAR(1),
"CREW_CD" POSITION (22:37) CHAR(16),
"CREW_DESCR" POSITION (38:97) CHAR(60),
"CREW_ID" POSITION (98:113) CHAR(16),
"UDF1_CD" POSITION (114:143) CHAR(30),
"UDF1_DESCR" POSITION (144:203) CHAR(60),
"UDF2_CD" POSITION (204:233) CHAR(30),
"DATA_SOURCE_IND" POSITION (294:299) CHAR(6),
"UDF2_DESCR" POSITION (234:293) CHAR(60)
)
)
LOCATION ( 'D_CREW_EXT.DAT' )
)
REJECT LIMIT UNLIMITED;
Check the result in database:
select * from D_CREW_EXT;
I found the first row is incorrect. For each non-ASCII character, the fields to the right of it are off by one character, meaning the data is shifted one character to the right.
Then I tried the option STRING SIZES ARE IN CHARACTERS instead of STRING SIZES ARE IN BYTES, but it doesn't work either.
The database version is 11.1.0.6.
Edited by: yuan on May 21, 2010 2:43 AM
Hi,
I changed BYTE to CHAR in the CREATE TABLE part, but it still doesn't work. The result is the same. I think the problem is in the ACCESS PARAMETERS.
Any other suggestion?
-
How do I deal with the "new itunes library" fiasco
how do I deal with the "new itunes library" fiasco...?
Recovering your iTunes library from your iPod or iOS device: Apple Support Communities
-
ICal Update Fiasco: Publishing to web site
This latest update seems to be a total disaster!
First, the disappearing items from all calendars. After syncing them back from MobileMe, all my calendars now show up under a MobileMe directory instead of On My Mac. What about if I didn't have a MobileMe account? What if I cancelled my MobileMe account? Would I lose everything forever?
Now, the calendars that I have published before inside iframes at my web site will not update.
I have several, but previously I just had to specify "http://ical.me.com/cwhaley/Toronto_Live Music" as the SRC in my iframe and the latest version of the calendar appeared there. Now, no events have been updated since the latest update fiasco. Can this be fixed?
The previous Publish routine was so simple. Now it's a mess.
Help!
It's very frustrating! At least you seem to still have your On My Mac Calendars. Mine disappeared and I don't know how to get back to having my events on my computer. I have some duplicate calendars (for no reason) but they're all on MobileMe. I don't want to be a slave to that service.
Unfortunately, Apple employees don't contribute to these forums. It's all volunteers... folks who apparently know a little more than us.
Generally, I've been happy with the help I've been getting through Apple Discussions, but no one seems to understand what's going on with these latest iCal problems.
I'm used to help within a couple days. These iCal problems go back to mid-December.
I guess we'll just have to be patient, but I really depend on iCal to be stable and reliable.. both on my desktop and my various published calendars. -
How to load a flat file with utf8 format in odi as source file?
Hi All,
Does anybody know how we can load a flat file in UTF8 format into ODI as a source file? Please guide me.
Regards,
Sahar
Could you explain which problem you are facing?
Francesco
-
Loading "fixed length" text files in UTF8 with SQL*Loader
Hi!
We have a lot of files that we load with SQL*Loader into our database. All data files have fixed-length columns, so we use POSITION(pos1, pos2) in the ctl file. Until now the files were in WE8ISO8859P1 and everything was fine.
Now the source-system generating the files changes to unicode and the files are in UTF8!
The SQL-Loader docu says "The start and end arguments to the POSITION parameter are interpreted in bytes, even if character-length semantics are in use in a datafile....."
As I see this now, there is no way to say "column A starts at "CHARACTER Position pos1" and ends at "Character Position pos2".
I tested with
load data
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
replace ...
in the .ctl file, but when the first character with more than one byte encoding (for example ü ) is in the file, all positions of that record are mixed up.
Is there a way to load these files in UTF8 without changing the file-definition to a column-seperator?
Thanks for any hints - charly
I have not tested this but you should be able to achieve what you want by using LENGTH SEMANTICS CHARACTER and by specifying field lengths (e.g. CHAR(5)) instead of only their positions. You could still use the POSITION(*+n) syntax to skip any separator columns that contain only spaces or tabs.
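A sketch of that control file (untested, and the table name, field names and widths are made up for illustration). With no POSITION and no terminators, each CHAR(n) field starts where the previous one ended, and under character length semantics n counts characters, not bytes:

```
LOAD DATA
CHARACTERSET AL32UTF8
LENGTH SEMANTICS CHARACTER
REPLACE
INTO TABLE demo_fixed
( col_a CHAR(5)    -- characters 1-5
, col_b CHAR(10)   -- characters 6-15
, col_c CHAR(20)   -- characters 16-35
)
```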
If the above does not work, an alternative would be to convert all UTF8 files to UTF16 before loading so that they become fixed-width.
-- Sergiusz
-
Error while migrating character set using CSALTER from WE8MSWIN1252 to UTF8
Hi.
I tried to migrate my database. I did a backup and started the database in restricted mode.
when I run csalter.plb script I got:
SQL> @@csalter.plb
0 rows created.
Function created.
Function created.
Procedure created.
This script will update the content of the Oracle Data Dictionary.
Please ensure you have a full backup before initiating this procedure.
Would you like to proceed ?(Y/N)?Y
old 5: if (UPPER('&conf') <> 'Y') then
new 5: if (UPPER('Y') <> 'Y') then
Enter value for 1: UTF8
old 8: param1 := '&1';
new 8: param1 := 'UTF8';
Checking data validity...
TOCHAR is not superset of FROMCHAR
PL/SQL procedure successfully completed.
0 rows deleted.
Alter database character set....
Checking or Converting phrase did not finish successfully
No database (national) character set will be altered
PL/SQL procedure successfully completed.
SQL>
What should I type where I got "Enter value for 1:" ??
I have already done this with ANOTHER DATABASE with the same Oracle version (10.1.0.2.0) and the application is working fine, but now I am getting this problem.
How do I correct my mistake, and what should I do to do it successfully?
I recommend you upgrade at least to 10.1.0.5. 10.1.0.2 comes with the very first version of csalter.plb, which does not have the current implementation. From and to which character set are you trying to migrate?
-- Sergiusz
-
Character set migration error to UTF8 urgent
Hi
when we migrated from AR8ISO8859P6 to the UTF8 character set, we are facing one error: when I try to compile one package through Forms, I get the error "program unit pu not found".
When I run the source code of that procedure directly from the database using SQL*Plus, it runs without any problem. How can I migrate these forms from AR8ISO8859P6 to the UTF8 character set? We migrated from an Oracle 8.1.7 database with AR8ISO8859P6 to an Oracle 9.2 database with character set UTF8 (Windows 2000); export and import completed without any error.
I am using Oracle 11i; inside, it is calling Forms 6i and Reports 6i.
with regards
ramya
1) This is a server-side program. When connecting with Forms I am getting the error; when I run this program using direct SQL it works, but when compiling I get this error.
3) Yes, I am using 11i (11.5.10); inside, it is calling Forms 6i and Reports. Why is this giving a problem in Forms? Is there any setting to change in the Forms NLS_LANG?
with regards
Hi Ramya
What I understand from your question is that you are trying to compile a procedure from a Forms interface at the client side?
If yes, you should check the code in the form that is calling the compilation package.
Does it contain strings that might be affected by the character set change?
Tony G.
-
Multibyte character error in SqlLoader when utf8 file with chars like €Ää
hello,
posting from Germany; special characters like German umlauts and the euro sign are in a UTF8 text file, and SQL*Loader rejects the rows with "Multibyte character error".
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Database Characterset: WE8MSWIN1252
OS: SLES 11 x86_64
Testcase SqlDeveloper:
CREATE TABLE utf8file_to_we8mswin1252 (
ID NUMBER,
text VARCHAR2(40 CHAR)
);
(can't enter the euro symbol in this posting, it ends up as '€' (?))
SELECT ascii(euro symbol) FROM dual;
128
SELECT chr(128) from dual;
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (1, '0987654321098765432109876543210987654321');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (2, 'äüöäüöäüöäÄÖÜÄÖÜÄÖÜÄßßßßßßßßß߀€€€€€€€€€');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (3, 'äüöäüöäüöäÄÖÜÄÖÜÄÖÜÄäüöäüöäüöäÄÖÜÄÖÜÄÖÜÄ');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (4, 'ۧۧۧۧۧۧۧۧۧۧ1');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (5, 'äüöäüöäüöäÄÖÜÄÖÜÄÖÜÄäüöäüöäüöä');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (6, 'ßßßßßßßßß߀€€€€€€€€€1');
INSERT INTO utf8file_to_we8mswin1252 (ID, text) VALUES (7, 'ßßßßßßßßß߀€€€€€€€€€äüöäüöäüöäÄÖÜÄÖÜÄÖÜÄ');
commit;
Select shows the correct result; no character is wrong or missing!
put this in a UTF8 file without delimiter and enclosure like
10987654321098765432109876543210987654321
the SqlLoader controlfile:
LOAD DATA characterset UTF8
TRUNCATE
INTO TABLE utf8file_to_we8mswin1252
( ID CHAR(1)
, TEXT CHAR(40)
)
on a linux client machine, NOT the Oracle-Server
export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
sqlldr user/pwd@connectstring CONTROL=TEST01.ctl DATA=TEST01.dat LOG=TEST01.log
Record 6: Rejected - Error on table UTF8FILE_TO_WE8MSWIN1252, column TEXT.
Multibyte character error.
Record 7: Rejected - Error on table UTF8FILE_TO_WE8MSWIN1252, column TEXT.
Multibyte character error.
Select shows missing characters in rows 4 and 5; SqlLoader loads only the first 20 characters (seemingly at random),
and as shown above, rows 6 and 7 are never loaded.
Problem:
I can't load UTF8 flat files with SqlLoader when German umlauts and special characters like the euro symbol are included.
Any hint or help would be appreciated
Regards
Michael
## put this in a UTF8 file without delimiter and enclosure like
The basic question is how you put the characters into the file. Most probably, you produced a WE8MSWIN1252 file and not a UTF8 file. To confirm, a look at the binary codes in the file would be necessary. Use a hex-mode-capable editor. If the file is WE8MSWIN1252, and not UTF8, then the SQL*Loader control file should be:
LOAD DATA characterset WE8MSWIN1252
TRUNCATE
INTO TABLE utf8file_to_we8mswin1252
( ID CHAR(1)
, TEXT CHAR(40)
)
-- Sergiusz
-
Verizons lack of concern for its customers after Update Fiasco
I, like many of you have recently gone through the fiasco that is the last update. I was awakened at 2am by my phones repeated bootloop. After 4 days of trying to get this straight with Verizon Support they finally told me it was an HTC issue and their was nothing they could do until HTC fixed it. They then referred me to do a Google search for the various forums to see if anyone else had a solution.
What I finally had to do was a Factory Reset. Now the phone works almost as it did before. I say almost as now it seems to be faster when I turn it on or when I shut it down.
Now my biggest problem is that I want to be compensated for the lost time when my phone was basically inoperable. During those 4 days, my WiFi was not working and I couldn't make calls, nor could I use most of the apps without the phone freezing. I could text and use the Twitter app; that's about it. Not to mention the sleepless nights I had because my phone would start looping at 2am.
Like with any other business in the world, if you have a problem with the service or the product, they offer you a discount. If I made Verizon a cheeseburger at work and they were not satisfied with it, a manager would comp the meal and offer a replacement. It's like that in lots of industries. But Verizon seems to be different. First they start with the denial: nothing is wrong. Until they are swamped with calls about the problem. Then they pass the buck. Or it's an HTC problem. But what it really boils down to is that it becomes my problem. ME, A PAYING CUSTOMER!
Do the right thing, Verizon.
I got the low memory error. They told me to just get a new device. But they're the ones who pushed out an "update" that makes the device think it's got less memory than it actually has, which interferes with and ultimately eliminates the device's operations. As a result, the owner is forced to get a new device, and either to take the 2G plan or pay an extra $350 for the replacement device. (VZW will offer to provide a replacement Incredible, but the problem will reoccur because the issue is with the software, not the hardware.) It's an incredibly poor and, I believe, unlawful business practice. Small claims court seems sensible - a class action would be even better. I've sent complaints to the FCC, the FTC and the New York State Attorney General's office. Online complaints are easy to file and, in my view, well-founded.
-
Character sets - UTF8 or Chinese
Hi,
I am looking into enhancing the application I have built in Oracle to save/display data in Chinese & English. I have been looking into how to change the character set of a database to accept different languages, i.e. different characters.
From what I understand I can create a database to use a Chinese character set (apparently English ascii characters are also a part of any Chinese character set) or I can set the database to use a unicode multi-byte character set (UTF8) - which seems to be okay for all languages.
Has anyone had any experience of a) changing an existing standard 7-bit ASCII database into a database which can handle Chinese, and/or b) the differences/implications between using a Chinese and a Unicode character set?
I am using Oracle RDBMS 8.1.7 on SuSE Linux 7.2
Thanks in advance.
Dan
If the data is segmented so that character set 1 data is in one table and character set 2 data is in another, then you may have a chance to salvage the data with help from support. The idea would be to first export and import only your CL8MSWIN1251 data to UTF8. Be careful that your NLS_LANG is set to CL8MSWIN1251 for the export so that no conversion takes place. Confirm the import is successful and remove the CL8MSWIN1251 data from the database. Oracle support can now help you override the character set via ALTER DATABASE to, say, MSWIN1252. Now selectively export/import this data, again making sure NLS_LANG is set to MSWIN1252 for the export so that no conversion takes place. Confirm the import is successful and remove the MSWIN1252 data from the database. And then do the same steps for the 1250 data.