IMPDP SQLFILE : multibyte characters in constraint_name leads to ORA-00972

Hi,
I'm currently dealing with constraint names made of multibyte characters (for example: constraint_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem....)
The scenario is as follows:
0 - I'm supposed to do a "remap_schema": everything in the schema SCOTT should end up in a schema NEW_SCOTT.
1 - The SCOTT schema is exported via Data Pump.
2 - I do an impdp with SQLFILE in order to get all the DDL (tables, packages, synonyms, etc.).
3 - I run some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK).
4 - Once the modified sqlfile has been executed, I do an impdp with DATA_ONLY.
(The scenario was imagined from this thread : {message:id=10628419} )
I'm getting ORA-00972: identifier is too long errors at step 4, when executing the sqlfile.
I see that some of the DDL for constraint creation in the file (generated at step 2) is written as follows:

ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...

Obviously, the original constraint name with cedilla and tilde gets translated to something else, which ends up longer than 30 characters/bytes...
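To picture what I suspect is happening, here is a minimal sketch of such a double conversion (my assumption: the UTF-8 bytes of the name get re-interpreted in a single-byte character set, WE8ISO8859P1 in this illustration, and re-encoded to UTF-8, which inflates every accented character and blows the 30-byte identifier limit):

-- illustration only: treat the UTF-8 bytes of the name as ISO-8859-1
-- and re-encode them to AL32UTF8; every accented character grows again
SELECT LENGTHB('VALIDA_CONFIRMAÇÃO_PREÇO13') AS original_bytes,
       LENGTHB(CONVERT('VALIDA_CONFIRMAÇÃO_PREÇO13', 'AL32UTF8', 'WE8ISO8859P1')) AS double_converted_bytes
FROM dual;
-- the second value ends up well past the 30-byte limit for identifiers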
As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
In order to create a testcase for this thread, I tried to reproduce the issue on my sandbox database... but there I don't have the issue. :-(
The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
My sandbox database is a (non-RAC) 11.2.0.1 on RHEL4, also AL32UTF8.
The constraint_name is the same on both systems: I checked it byte by byte using DUMP() on the constraint_name.
Feel free to shed any light and/or ask for clarification if needed.
Thanks in advance to those who'll take the time to read all this.
I decided to include my testcase from my sandbox database, even if it does NOT reproduce the issue +(maybe I'm missing something obvious...)+
I use the following files.

- createTable.sql:
$ cat createTable.sql
drop table test purge;
create table test
(id integer,
val varchar2(30));
alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
from user_constraints where table_name='TEST';

- expdpTest.sh:
$ cat expdpTest.sh
expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test

- impdpTest.sh:
$ cat impdpTest.sh
impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test

This is the run:
[oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @createTable
Table dropped.
Table created.
Table altered.
CONSTRAINT_NAME                  LB       LC
DMP
VALIDA_CONFIRMAÇÃO_PREÇO13             29         26
Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95
,80,82,69,195,135,79,49,51
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "SCOTT"."TEST"                                  0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/scott_dir/testNonAscii.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
[oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
[oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
-- CONNECT SCOTT
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SCOTT"."TEST"
   (     "ID" NUMBER(*,0),
     "VAL" VARCHAR2(30 BYTE)
   ) SEGMENT CREATION DEFERRED
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
  TABLESPACE "MYTBSCOMP" ;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;I was expecting to have the cedilla and tilde characters displayed incorrectly....

Srini Chavali wrote:
If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?

Hi Srini,
On the real system, the schema move will be to and from different machines (but the same DB version).
I'm not doing the real move yet, just trying to validate a way to do it, but I guess it's important to say that the dump currently being used comes from the same database (the long story being that a column using an object datatype caused an error during the remap, so I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object datatype...).
So yes, the file will eventually have to move, but in the current test it doesn't.
Srini Chavali wrote:
Obviously something is different in production than in test - can you post the output of this command from both databases?
SQL> select * from NLS_DATABASE_PARAMETERS;
Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in impdp itself, as Data Pump is supposed to be NLS_LANG/charset-proof +(whereas traditional imp/exp was really sensitive on those points)+
The result on the Exadata where I have the issue:

PARAMETER                      VALUE
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               AL32UTF8
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              11.2.0.3.0

The result on my sandbox DB:

PARAMETER                      VALUE
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               AL32UTF8
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              11.2.0.1.0

------
Richard Harrison .  wrote:
Hi,
Did you set NLS_LANG also when you did the import?

Yes, that is one of the differences between the Exadata and my sandbox.
My sandbox environment has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, whereas the Exadata doesn't have the variable set.
I tried to add it, but it didn't change anything.
Richard Harrison .  wrote:
Also not sure why you are doing the sed part? Do you have hard-coded schema references inside some of the PL/SQL?

Yes, that is why I chose to sed (a query to spot those references is sketched below). The (ugly) code has:
- Procedures inside the same package that reference one another with the schema prepended
- Triggers with PL/SQL code referencing tables with the schema prepended
- Dynamic SQL that "builds" queries with the schema prepended
- Object types that do %ROWTYPE on tables with the schema prepended (that will be solved by dropping the columns based on those types, as they obviously are not needed...)
- A data model with objects whose names use non-ASCII characters
+(In France we call this a "gas factory" to convey what a mess it is: pipes everywhere, going who-knows-where...)+
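For reference, a minimal sketch of a query that can spot those hard-coded references (assuming the old schema name is SCOTT; trigger bodies and view texts would need separate checks):

-- stored PL/SQL in SCOTT that mentions "SCOTT." explicitly
SELECT name, type, line, text
FROM dba_source
WHERE owner = 'SCOTT'
AND UPPER(text) LIKE '%SCOTT.%'
ORDER BY name, type, line;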
The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to consolidate several existing databases onto the Exadata:
one schema for each country, hence the renaming of the schemas to include the country code.
I already have a workaround: rename the objects that have funky characters in their names before doing the export (sketched below).
But I was curious to understand why the SQLFILE messes up the constraint_name on one system when it doesn't on another...
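For the record, a minimal sketch of that workaround for constraints (assuming constraints are the only objects with funky names; the ASCII target name below is made up):

-- list constraints whose names contain non-ASCII characters
-- (ASCIISTR() escapes non-ASCII code points, so the two values differ)
SELECT owner, table_name, constraint_name
FROM dba_constraints
WHERE owner = 'SCOTT'
AND constraint_name <> ASCIISTR(constraint_name);

-- rename them to plain ASCII before running the export
ALTER TABLE scott.test RENAME CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" TO VALIDA_CONFIRMACAO_PRECO13;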

Similar Messages

  • Multibyte characters not displaying on report

    Hi there,
    I am having a problem displaying multibyte characters on my report (Oracle Reports 6i). These characters are needed for barcode encoding, e.g. chr(203), but when I run my report they are missing.
    Also, when I run the following SQL in Oracle 11g (the version of the DB the report is working against):
    select chr(203) from dual;
    I get the error:
    ORA-29275: partial multibyte character
    though it works fine on Oracle 8.
    Any help much appreciated.

    Everything depends on your NLS parameters. If I do this on Oracle 11g I get:
    select chr(203) from dual;
    C
    Ë

    For bar coding you should use a special bar code font, e.g.:
    http://www.idautomation.com/font-encoders/oracle-reports/
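
    A character-set-independent alternative, sketched here as a hint (the ORA-29275 above happens because CHR(203) yields a lone byte that is not a valid multibyte sequence in an AL32UTF8 database), is to name the Unicode code point instead:

    -- CHR(203) is the single byte 0xCB, an incomplete multibyte sequence in AL32UTF8;
    -- UNISTR names the code point and works regardless of the database character set
    SELECT UNISTR('\00CB') AS e_with_diaeresis FROM dual;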

  • Adobe AIR help; breadcrumb navigation doesn't work with multibyte characters?

    Hi there,
    I created an Adobe AIR application with RoboHelp 9 (using FrameMaker files,
    which are written in Japanese and English), only to find that the breadcrumb navigation at the top doesn't work.
    Is it by design that breadcrumb navigation doesn't support multibyte characters? Or is there any workaround?
    Many thanks for your kind support in advance,

    See my reply to your other post. You can also test this in the new project and raise it with Adobe Support at the same time as the other problem.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Adobe AIR help; question regarding search criteria with multibyte characters

    Hi,
    I created an Adobe AIR application with RoboHelp 9 (using FM 10 files as source, with text written in Japanese and English),
    and happened to find that the search function in the AIR application doesn't match keywords correctly.
    For example,
    1. If you type "文字" and "スタイル" with single byte space in search window, the result appears for both "文字" and "スタイル".
    2. If you type "文字" and "スタイル" with double byte space in search window, the result doesn't match for anything.
    3. If you type "文字スタイル" (in one word) in search window, the result doesn't match for anything.
    Same thing happens for the case "文字種" (literally, "文字"+"種", the meaning is almost the same).
    But if you type search words that are all in Katakana, the results seem to be fine.
    Is there any limitation on multibyte character support? Or is this behaviour by design?
    If so, how can I make the AIR application "hit" the correct words?
    Thank you very much for your kind help in advance!

    On this one your best course of action is to contact Adobe Support. They will likely require your project and there is one thing I would suggest you do first. Create a new project with just a few topics to prove the problem exists there as well. If it does it will be a simpler upload and you will know the problem is repeatable.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Handling multibyte characters

    Hi ,
    I have created a procedure which sends e-mail using UTL_SMTP.
    The procedure has a part in which we add the attachments to the e-mail.
    Now, the issue is that when I add an attachment which contains multibyte characters, these characters are replaced with '?'.
    Can anyone provide any guidance on this?

    First, you should not append 'charset="us-ascii"' in this line:
      UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: ' || IN_ATT_MIME_TYPE ||'charset="us-ascii"'||'; name="' || IN_ATT_FILE_NAME || '"' || UTL_TCP.CRLF);
    The default IN_ATT_MIME_TYPE has this clause already, hence you would have a duplicate. Moreover, you add it without the required preceding semicolon. Further, in the Content-Type, you should pass the original character set of the file, not "us-ascii". This character set must support characters included in the file.
    Second, the NCLOB is not written correctly either. UTL_ENCODE.BASE64_ENCODE expects a RAW value. If you give it an NVARCHAR2 value returned by DBMS_LOB.SUBSTR, then PL/SQL will implicitly apply HEXTORAW to the value. HEXTORAW fails if the NCLOB content is not a valid sequence of hex digits. Treating the content of the NCLOB as a string of hex digits is obviously not your goal. You should use UTL_I18N.STRING_TO_RAW to convert the NVARCHAR2 from DBMS_LOB.SUBSTR to the desired target encoding (the one specified in Content-Type) and cast it to RAW at the same time. UTF-8 (i.e. AL32UTF8) is usually the best choice for the target encoding. You should then apply UTL_RAW.CAST_TO_VARCHAR2 to change the RAW representation of the base64-encoded value to the VARCHAR2 expected by UTL_SMTP.WRITE_DATA.
    Of course, passing DBMS_LOB.SUBSTR result directly to UTL_ENCODE.BASE64_ENCODE would make sense for a BLOB attachment. However, even then the encoded result should be passed to UTL_RAW.CAST_TO_VARCHAR2, not UTL_RAW.CAST_TO_RAW.
    Third, if you use UTF-8 as Content-Type encoding, you may want to prepend three bytes (0xEF 0xBB 0xBF) to the NCLOB value before base64 encoding. This three-byte character is the UTF-8 Byte Order Mark. It helps some editors, such as Notepad, to recognize the file as encoded in UTF-8.
    Fourth, if the target encoding is UTF-8, l_step should be no more than 8191. This is to avoid intermediate values exceeding 32767 bytes.
    Fifth, the whole procedure will not work well on an EBCDIC platform. Contrary to what the documentation says, UTL_SMTP.WRITE_DATA does not seem to convert data to US7ASCII before sending (unless the package is ported separately by platform vendors). I guess this is not your worry, but I thought I would mention it, just in case.
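    For illustration, a minimal sketch of that chain for a single chunk, assuming a UTF-8 Content-Type (the variable names and sample content are made up):

    SET SERVEROUTPUT ON
    DECLARE
      l_nclob NCLOB := N'Résumé - multibyte test';  -- made-up attachment content
      l_chunk NVARCHAR2(8191);
      l_raw   RAW(32767);
      l_line  VARCHAR2(32767);
    BEGIN
      l_chunk := DBMS_LOB.SUBSTR(l_nclob, 8191, 1);
      -- convert to the target encoding (UTF-8) and cast to RAW in one step
      l_raw := UTL_I18N.STRING_TO_RAW(l_chunk, 'AL32UTF8');
      -- base64-encode the RAW, then cast the encoded bytes to VARCHAR2
      l_line := UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(l_raw));
      DBMS_OUTPUT.PUT_LINE(l_line);  -- in the mail procedure this would go to UTL_SMTP.WRITE_DATA
    END;
    /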
    Thanks,
    Sergiusz

  • JSPs, UTF-8 & multibyte characters

    In our project we have a situation where we must output some multibyte characters to a JSP page. The data is retrieved from an Oracle database using BEA ELink and XML (don't ask why). The XML data is UTF-8 encoded, and the data seems to be OK down to the JSP level, because I can output it to a file and it's properly UTF-8 encoded.
    But when I try to write the data to the final reply (using <%=dataObject.getData()%>) the results definitely are not UTF-8 encoded. On the client browser they show up as garbage, occupying more than twice the actual length of the data. The response headers and META tags are all set to UTF-8 encoding, and the browser is set to use UTF-8.
    The funny part is that the string seems to be encoded twice or something similar, as shown by the next example. This is the correct UTF-8 byte sequence for the first two characters (they are just generated data for debugging purposes):
    C3 89 C3 A5
    which translates to Unicode characters 00C9 and 00E5. But on the final page that is sent to the client, this sequence has been changed to:
    C3 83 E2 80 B0 C3 83 C2 A5
    which just doesn't make sense, since it shows up as five different garbage characters.
    Does anyone have any ideas what is causing the problem, and any suggestions? What are those extra characters in the final encoding?
    .Pete.

    It sounds like the Object.toString is coming back already encoded in UTF8, and thus the JSP writer encodes that UTF8 using UTF8 again, which is what you see. Try making the String value be:
    > ... characters 00C9 and 00E5.
    ... instead of:
    > C3 89 C3 A5
    Then it will be encoded correctly.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    << Tangosol Server: How Weblogic applications are customized >>
    << Download now from http://www.tangosol.com/download.jsp >>
              "Petteri Räisänen" <[email protected]> wrote in message
              news:[email protected]...
              >
              > In our project we have a situation where we must output some multibyte
              characters
              > to a JSP page. The data is retrieved from an Oracle database using BEA
              ELink and
              > XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to
              be ok
              > down to the JSP level, because I can output it to a file and it's properly
              UTF-8
              > encoded.
              >
              > But when I try to write the data to the final reply (using
              <%=dataObject.getData()%>
              > the results definitely are not UTF-8 encoded. On the client browser they
              show
              > up as garbage, occupying more than twice the actual length of the data.
              The response
              > headers and META-tags are all set to UTF-8 encoding, and the browser is
              set to
              > use UTF-8.
              >
              > The funny part is, that the string seems to be encoded twice or something
              similar
              > as is shown by the next example:
              >
              > This is the correct UTF-8 byte sequence for the first twice characters
              (they are
              > just generated data for debugging purposes):
              >
              > C3 89 C3 A5
              >
              > Which translates to Unicode characters 00C9 and 00E5.
              >
              > But on the final page that is sent to the client this sequence has been
              changed
              > to:
              >
              > C3 83 E2 80 B0 C3 83 C2 A5
              >
              > Which just doesn't make sense since it shows up as five different garbage
              characters.
              >
              >
              > Does anyone have any ideas what is causing the problem and any
              suggestions? What
              > are those extra characters in the final encoding?
              >
              > Pete.
              

  • Impdp / SQLfile and parallel mode ?

    Hi,
    On a 10g database we have a dump generated in parallel mode (parallel=8) with DUMPFILE=expdp_file%u.dat
    Files are expdp_file01.dat, expdp_file02.dat, ... , expdp_file08.dat
    I would like to retrieve all the grants in a sql file with this command :
    impdp dba@mydatabase dumpfile=expdp_full%u.dat LOGFILE=impdp_gudb_test.log DIRECTORY=EXP_DIR sqlfile=grant.sql include=grant schemas=DBA parallel=8
    Error occurs :
    ORA-39002: invalid operation
    ORA-39047: Jobs of type SQL_FILE cannot use multiple execution streams.
    Is there another way to retrieve all the grants from a multi-part dump?
    Thanks for help.
    Regards,

    Just leave the parallel=8 off. SQL file jobs can only run in serial mode. The export parallel does not have to match the import parallel, so just remove the parallel clause.
    Dean

  • Multibyte characters are not printing correctly.

    Hi all,
    When I read a multibyte character from an InputStream, I am getting a negative value. The code is something like this:
    int c=in.read();
    System.out.println((char)c);
    It is printing ? instead of the expected character. I know why it is printing the ? mark: when I read a multibyte character from the InputStream it returns a negative value, so when I cast it and try to print it, it prints ?. My question is why it returns a negative value when I read the multibyte character.
    Please help me.
    Thanks

    What kind of encoding are you using?
    A multibyte character is composed of two bytes,
    and the first bit of each byte is 1.
    As a result, you get a negative value when you cast the multibyte character to int.

  • Question marks in outgoing emails for multibyte characters

    I have an application that stores and displays Japanesse characters correctly on the screen. But when I try to email them, the characters come out as ?????.
    I am on version 4.02.
    I am setting the following items at the top:
    MIME-Version: 1.0
    Content-Type: text/html; charset=utf-8
    The body is started with a <html> tag.
    Any ideas?

    After installation BI Publisher 10.1.3.3.1 Base (standalone, OC4J) :
    Directory of F:\bip\jdk\lib\fonts
    13/10/2007 21:16 15 196 128R00.TTF
    13/10/2007 21:16 18 473 348 ALBANWTJ.ttf
    13/10/2007 21:16 18 777 132 ALBANWTK.ttf
    13/10/2007 21:16 18 676 084 ALBANWTS.ttf
    13/10/2007 21:16 18 788 600 ALBANWTT.ttf
    13/10/2007 21:16 276 384 ALBANYWT.ttf
    13/10/2007 21:16 12 860 B39R00.TTF
    13/10/2007 21:16 18 800 MICR____.TTF
    13/10/2007 21:16 6 580 UPCR00.TTF
    Directory of F:\bip\jdk\jre\lib\fonts
    01/08/2006 19:25 75 144 LucidaBrightDemiBold.ttf
    01/08/2006 19:25 75 124 LucidaBrightDemiItalic.ttf
    01/08/2006 19:25 80 856 LucidaBrightItalic.ttf
    01/08/2006 19:25 344 908 LucidaBrightRegular.ttf
    01/08/2006 19:25 317 896 LucidaSansDemiBold.ttf
    01/08/2006 19:25 698 236 LucidaSansRegular.ttf
    01/08/2006 19:25 234 068 LucidaTypewriterBold.ttf
    01/08/2006 19:25 242 700 LucidaTypewriterRegular.ttf
    Directory of F:\bip\jre\1.4.2\lib\fonts
    24/03/2004 19:12 75 144 LucidaBrightDemiBold.ttf
    24/03/2004 19:12 75 124 LucidaBrightDemiItalic.ttf
    24/03/2004 19:12 80 856 LucidaBrightItalic.ttf
    24/03/2004 19:12 344 908 LucidaBrightRegular.ttf
    24/03/2004 19:12 317 896 LucidaSansDemiBold.ttf
    24/03/2004 19:12 698 236 LucidaSansRegular.ttf
    24/03/2004 19:12 234 068 LucidaTypewriterBold.ttf
    24/03/2004 19:12 242 700 LucidaTypewriterRegular.ttf
    What is wrong?
    In Adobe Reader's Document Properties -> Fonts
    +Helvetica:
    Type: Type1
    Encoding: Ansi
    Actual Font: ArialMT
    Actual Font Type: TrueType
    I feel BIP uses the wrong encoding . . .

  • How to extract multibyte characters?

    Hi,
    I have used an Acrobat plug-in and could extract ASCII characters to a text file.
    I want to extract multibyte characters (Chinese, Korean).
    Could you help me if you know how to do it?
    Another question: how could I debug the plug-in (for example, step by step) in an IDE?

    Sorry, this file is a correct one.
    Attachments:
    Test.vi 7 KB

  • Multibyte characters(Chinese Data)

    Hi,
    We have a table in Oracle (10g) which stores Chinese data as well as English characters.
    Production is designed in such a way that we can't increase the size of the columns.
    Is there any way to handle multi-byte characters like Chinese within the same column length?
    Please help me out on this; my heartfelt thanks for any replies.
    You need some basic understanding of how language and territory are handled by the database; read this article:
    NLS_LANG FAQ
    http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
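
    Side note (not from the FAQ): whether the Chinese data fits also depends on whether the column length is declared in bytes or characters. A minimal sketch, assuming an AL32UTF8 database (3 bytes per Chinese character) and assuming switching the column to character semantics is an option:

    -- VARCHAR2(30 BYTE) holds at most 10 three-byte Chinese characters,
    -- VARCHAR2(30 CHAR) holds 30 of them
    CREATE TABLE demo_semantics (
      txt_byte VARCHAR2(30 BYTE),
      txt_char VARCHAR2(30 CHAR)
    );

    -- 11 Chinese characters = 33 bytes in AL32UTF8: too long for txt_byte, fits in txt_char
    INSERT INTO demo_semantics (txt_char) VALUES ('数据库字符集测试数据一');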

  • Entering MultiByte characters using OA Extension JDev forms

    Hello
    We have certain custom OA Extension JDeveloper forms running on Oracle Applications 11.5.10 / a 10g database. Recently we converted our DB to be UTF-8 compliant. While I can query Chinese characters through the custom JDev forms, when I try to enter Chinese characters using the custom forms, it stores them as junk. In the preferences I did set the Client Character Encoding to UTF-8, but that did not help. Any clues as to how to get this working would be appreciated.
    Thanks
    PHK

    When you say it stores junk, do you see the junk characters from the back end, or on the OA page on further retrieval of the same value?
    --Shiv

  • How can I parse multibyte characters in java?

    Is there any API available to do that?

    I would like to eliminate that character while saving the data to the database. It is giving me a problem with XML parsing later, e.g. a character typed from a Japanese script keyboard.

  • How substrb works with multibyte characters

    Suppose X is a 3-byte Korean character.
    What is returned by substrb(X,1,1)?
    I know X in hex is EA B8 B0,
    but I am getting substrb(X,1,1) = <blank space>, i.e. ASCII 32.

    I am not sure whether you can see the following character: '기'
    I found the following:
    select dump('기') from dual => Typ=96 Len=3: 234,184,176
    select dump(substrb('기',1,1)) from dual => Typ=1 Len=1: 32
    I am running this from Oracle SQL Developer on Windows.
    DB is our QA Debug Instance: HRQ115XG
    checked v$nls_parameters and found following:
    NLS_CHARACTERSET = UTF8
    NLS_LENGTH_SEMANTICS = BYTE
    NLS_LANGUAGE = AMERICAN
    Let me tell you my original problem. I have a valueset that accepts 30 bytes.
    It is always possible that substrb(...,1,30) splits a Korean character midway,
    resulting in an invalid character at the end as per UTF-8 encoding. As far as I know,
    the first byte of a UTF-8 character indicates how many bytes it has.
    If substrb() is returning spaces as I mentioned above then it is safe to use in the valueset. But I am not able to find any such behaviour documented.
    SELECT * FROM nls_session_parameters;
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
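
    To reproduce the check without depending on the client encoding, here is a minimal sketch that names the code point U+AE30 (기) with UNISTR; TO_CHAR converts it to the database character set, assumed here to be UTF8/AL32UTF8:

    -- 기 is U+AE30: 3 bytes (EA B8 B0) in a UTF8/AL32UTF8 database
    SELECT DUMP(TO_CHAR(UNISTR('\AE30')))                AS whole_char,
           DUMP(SUBSTRB(TO_CHAR(UNISTR('\AE30')), 1, 1)) AS first_byte,
           DUMP(SUBSTRB(TO_CHAR(UNISTR('\AE30')), 1, 3)) AS first_three_bytes
    FROM dual;

    If SUBSTRB pads the split character with blanks on your instance, first_byte should show Typ=1 Len=1: 32, matching your observation.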

  • Multibyte characters (arabic) in Uzbl.

    Hello, I'm experiencing a problem using Google-Translator with uzbl-browser. When I try to translate a text from Arabic into English it gives me
    I Tھz ¨ "Q † ŘŞ
    and what is interesting is that it doesn't show the original Arabic text in the pop-up window correctly; it gives:
    ط¨طھظ„ ظ†طھ
    instead of:
    بتل نت
    Does anyone have an idea?
    P.S. I have all the necessary arabic fonts installed (chromium translates the same pattern without a problem)

    Maybe connected: when opening a Cyrillic document in UTF-8 without a BOM, uzbl-browser doesn't display the letters correctly.
