Checking server character set

I think this is a simple question, but for some reason I am not able to find the result.
How can I check the server's default character set?
We have HP-UX (unix) servers, and most databases are set to AL32UTF8/AL16UTF16.
I thought it would be something simple like typing 'locale', but on the HP-UX box(es), it doesn't show any setting for LANG.
I also executed 'env' and looked at all settings, but nothing seems relevant.
Unless I explicitly set the NLS_LANG=AMERICAN_AMERICA.AL32UTF8, when I do an export, it shows:
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
Where is it getting the US7ASCII?
If I select * from nls_database_parameters; there is no setting like US7ASCII.
When I do a 'man -k character', I get numerous possible commands, but none of them appear to be something that displays the default character set.
But, as I noted above, when I run an export of our database (without setting the NLS_LANG), it shows me that the default character set is US7ASCII.
So, how do I show this, and do you also have any ideas on how to change it to UTF8?
Thanks.
removed references in this posting to my linux boxes...

The default O/S character set for Unix is 7-bit ASCII. This is an element of the so-called "C locale". This locale is used by all applications that do not declare themselves sensitive to the user locale (i.e. they do not call setlocale() at application startup) and by all applications that have no locale parameters (LANG or LC_xxx) set in their environment.
If you call 'locale' in a Unix session that has no locale environment set, the "C locale" should be reported.
"C locale" uses US locale formatting conventions and binary collation.
## "Export done in US7ASCII character set and AL16UTF16 NCHAR character set
## server uses AL32UTF8 character set (possible charset conversion)"
Here, US7ASCII is the default character set of an Oracle Client. It is not directly related to the default O/S character set. US7ASCII is always the default Oracle client character set on all non-EBCDIC platforms, used if NLS_LANG is not explicitly set.
-- Sergiusz
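
For the original question, here is a minimal sketch of the whole check on the HP-UX box, assuming a POSIX shell with sqlplus and exp on the PATH; scott/tiger is a placeholder login, substitute your own credentials:

# 1. With no LANG/LC_* variables exported, 'locale' reports the C locale (7-bit ASCII)
locale

# 2. Ask the database what it actually uses - this is the "server uses ..." value in the exp banner
sqlplus -s scott/tiger <<'EOF'
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
EOF

# 3. Make the client side match before exporting, so exp does not fall back to US7ASCII
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_LANG
exp scott/tiger file=scott.dmp owner=scott

Setting NLS_LANG only changes the client side; the database character set itself can only be changed with a proper migration (see the csscan/csalter sketch further down the page).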

Similar Messages

  • Checking database character set

    Hi all, is there any way or SQL statement to check the database character set (e.g. UTF8)?
    regards,
    Kong

    Hi,
    Is "select * from v$nls_parameters;" what you mean ?
    /Uffe
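
    A note on that: v$nls_parameters reflects the current session, which follows the client's NLS_LANG. If what you want is the character set the database itself was created with, a more targeted query (a small sketch) is:

    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');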

  • ORA-12741: Express Instant Client: unsupported server character set LV8PC8L

    How can I connect to the database???
    I've set up 10g XE on Trustix Linux 3.0.5
    Compiled oci8 and pdo_oci
    I'm trying to connect to a Windows box running an old Oracle 8.1 database
    Any ideas how I can read data from that database - WITHOUT MODIFYING IT?

    I'm not clear how you're connecting, but this error could mean that you don't have translations for Latvian Version IBM-PC Code Page 866 8-bit Latin/Cyrillic (LV8PC8LR). If that's the case, it masquerades as this error when the underlying error can't be rendered in the language. This is my best guess.
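
    A hedged way to check whether this is the Instant Client "Basic Lite" limitation: Basic Lite ships libociicus.so and carries only a small set of Western European/Unicode character sets, while the full "Basic" package ships libociei.so with the complete conversion tables (which is where a set like LV8PC8LR would come from). The path below is a placeholder for wherever your Instant Client lives:

    # Basic Lite -> libociicus.so (limited character sets); Basic -> libociei.so (full set)
    ls /usr/lib/oracle/*/client*/lib 2>/dev/null | grep -E 'libociei|libociicus'

    If only libociicus.so is there, swapping in the full Basic package (and pointing oci8/pdo_oci at it) would be the first thing I'd try.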

  • Finding the character set of the export dump file

    I have an export dump, but I don't have the log file for it. How do I check the character set it was exported in?

    878226 wrote:
    Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
    With the Partitioning option
    JServer Release 8.1.7.4.1 - Production
    Export file created by EXPORT:V08.01.07 via conventional path
    Warning: the objects were exported by IN, not by you
    import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
    export server uses UTF8 NCHAR character set (possible ncharset conversion)
    Hence, here UTF8 is the character set used? Is that right?
    Yes, that is correct.
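
    Without the export log, there are two hedged ways to recover this from the dump itself. The imp utility prints the same "export client used ... character set" banner when run with SHOW=Y (nothing is imported), and old support notes say that classic exp stores the character set ID in the 2nd and 3rd bytes of the dump header - treat that offset as folklore that may vary by version. Credentials and the ID value below are placeholders:

    # option 1: print the banner without importing anything
    imp scott/tiger file=expdat.dmp full=y show=y log=header.log

    # option 2: look at the first few bytes of the header; bytes 2-3 hold a hex character set ID
    od -An -tx1 -N4 expdat.dmp
    # translate the decimal value of that ID (31 here is just an example) in SQL*Plus
    sqlplus -s scott/tiger <<'EOF'
    SELECT NLS_CHARSET_NAME(31) AS dump_charset FROM dual;
    EOF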

  • Use 2 character sets in the same sapscript form (1100 & 4030)

    Is this possible, or do I need to make a new character set and then make the changes that I need?
    For example:
    The 'Field group' character is usually hex 1D. The problem is
    that hex 1D is not part of the printer character set 1100. So
    a lot of changes are necessary:
    - Copy your device type into the customer namespace (transaction SPAD,
    Utilities - For device types - Copy device type).
    - Check which character set your device type uses (transaction SPAD,
    device type).
    - Copy the character set into the customer namespace (SPAD - Character
    set; customer namespace: 9xxx).
    - Call SPAD - Device Type and put the new character set into all three
    fields under 'Printer character sets'.
    - Now go into your new character set (SPAD - character set, button 'Edit
    character set'). Unfortunately the character 'Field group' doesn't
    exist yet. So you should select a character which you don't need and
    which you can 'misuse' as 'Field group'. Let's say you don't need
    the character 'Thorn'. Then you
    have to add/change the entry of character 354,
    if you want to use the Thorn character for the 'Field group'.
    Here you must insert the sequence '1D' to get the 'Field group' character.
    Now if you write <354> in the SAPscript layout set in the old
    editor, or if you press 'Insert command' in the new editor
    and insert SAP character 354, a 'Field group' should be inserted
    at that place in the print data.

    The Function Module CLOSE_FORM has an optional TABLES parameter called OTFDATA that can be filled with the contents of the document instead of printing.
    To do this, you must set the field TDGETOTF = 'X' in the OPTIONS structure that can be supplied as an optional parameter to Function Module OPEN_FORM when the document is opened for output.
    This table contains the OTF format data that describes your SAPscript document. Normally the SAPscript composer would print this to an external printer, fax, or email, but once captured in your program you can do anything you like with this information.
    For instance, you can convert OTF data to a PDF file using SX_OBJECT_CONVERT_OTF_PDF. See the many threads on this topic in the ABAP Development forum if you are interested.
    I hope this hint is helpful to you. Good luck.

  • Outbound character sets

    Hi
    Our outbound external messages go out via a GW704 GWIA. The GWIA then passes them on to our email security gateway.
    The issue I'm having is that I've enabled a 'disclaimer' on our email security gateway that is stamped on all outbound messages. The HTML version of the disclaimer works perfectly, but the plain-text version is causing me grief.
    When I include French accented characters in the disclaimer, it doesn't get appended to the outbound messages.
    I spoke to the security gateway support team and they tell me that it won't work if our outbound character set is set to US-ASCII. It needs to be UTF8 (or similar) for these characters to work properly.
    I checked the character set that the GWIA is sending out with and it is indeed US-ASCII.
    Is there a way I can set the GWIA to use the UTF8 (or similar) character set so that I can include these dodgy foreign characters in our disclaimer? I've looked through the config file and settings and
    nothing is jumping out at me.
    If there is a way and I set it am I likely to run into any other issues with messages that were previously going out ok?
    Thanks

    On 6/17/2011 10:06 AM, MPlumb wrote:
    >
    > I spoke to the security gateway support team and they tell me that it
    > won't work if our outbound character set is set to US-ASCII. It needs to
    > be UTF8 (or similar) for these characters to work properly.
    >
    Outgoing mail encoding is controlled at the client; the GWIA setting is
    just a fallback.
    However, they are IMO:
    a) lying - US-ASCII is a 100% subset of UTF-8, so any US-ASCII-only
    message will ALWAYS be accepted as valid UTF-8. The reverse is NOT true -
    UTF-8 will get corrupted going through a US-ASCII-only gate. But that's not
    what they said.
    b) Pretty much NOT an excuse, because email security gateways (unless
    they are ineffective) have to be able to parse MIME in character sets
    of, oh, at least 15 varieties. So they are making a tacit admission that their
    product is substandard and unable to protect against malware. By
    definition, if you can parse text for malware and content (and to do so
    you must be able to read/write that character set), you should not have
    this problem.
    So either your description is incomplete, or you need to seriously
    consider leaving this provider ASAP.
    Disclaimer : I work for and have written email security software for
    GWAVA, which would be a competitor.

  • Import error due to character set difference?

    hi,
    hoping someone can explain the reason for the following import error.
    here's the situation,
    - export client used WE8MSWIN1252 charset (release v10.02.01)
    - import server used WE8ISO8859P1 charset (release v10.1.0.4)
    all tables appeared to be imported without error, except that the import terminated unsuccessfully due to Oracle error 2248
    "ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL =1 NLS-LENGTH_SEMANTICS = 'BYTE' PLSQL_CODE_TYPE = 'INTERPRETED' PLSQL_DEBUG = FALSE PLSQL_WARNING = 'DISABLE:ALL' PLSQL_CCFLAGS =''"
    ORA-02248: invalid option for ALTER SESSION
    Was the error due to the difference in release versions? Or a character set conversion to a non-superset?
    appreciate any help pls

    It is difficult to say what the current state of the database is, so it is recommended to re-export and re-import the object definitions. You may miss some other stuff like object types, constraints, etc. Table data should be OK, so the re-export should not include data.
    As far as character sets are concerned, because of the difference between the export client character set and the import server character set, you may have lost some characters in the code range 0x80-0x9f. This includes the Euro sign, the TM sign, "smart" quotes, en- and em-dash, etc. Use the Windows charmap.exe to see all the codes. If you had some of these codes in the source database, you have lost them on import (they got converted to the inverted question mark, code 0xbf).
    -- Sergiusz
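
    If you want to gauge how much data was affected, a rough sketch is to scan the imported tables for the replacement character. Table, column, and login names below are hypothetical placeholders, and note that a genuine 0xBF character in the data would also match:

    sqlplus -s scott/tiger <<'EOF'
    -- CHR(191) is 0xBF, the inverted question mark, in a WE8ISO8859P1 database
    SELECT COUNT(*) AS suspect_rows
      FROM my_table
     WHERE INSTR(my_text_column, CHR(191)) > 0;
    EOF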

  • Korean characters displayed as ??? (CHARACTER SET)

    Product: SQL*Plus
    Date written: 1996-07-02
    When Korean characters are displayed as ??? (CHARACTER SET)
    =============================================
    How to resolve Korean data being displayed as ??? when queried with Oracle tools (SQL*Plus, Forms 3.0, Forms 4.0, Reports 2.0, etc.).
    A database is created when a statement containing the SQL command CREATE DATABASE is executed, and one of the things that must be considered before running that statement is the database character set.
    The database character set must be specified when the database is created, and once it has been chosen it is not easy to change.
    Because all data, including the data dictionary, is stored and retrieved in the chosen character set, Korean data is displayed as ??? if a user accesses the database with a different character set.
    Also, in a distributed database environment or when upgrading, the database character sets must match, so users should know the character set of their database.
    < Checking and changing the current database character set >
    1. Check the database character set
    $ sqldba lmode=y
    SQLDBA> connect internal
    SQLDBA> select * from v$nls_parameters;
    PARAMETER VALUE
    NLS_CHARACTERSET KO16KSC5601 (or US7ASCII)
    (A)
    2. Check the NLS_LANG environment variable
    $ env
    NLS_LANG=American_America.US7ASCII
    (B)
    Korean data is handled correctly only when (A) and (B) above are identical; if they differ, Korean data is displayed as ??? when queried.
    3. How to make the character sets match
    * By changing the NLS_LANG environment variable
    <UNIX>
    If you use the Bourne shell or Korn shell, edit .profile:
    NLS_LANG=American_America.KO16KSC5601; export NLS_LANG
    If you use the C shell, edit .cshrc or .login:
    setenv NLS_LANG American_America.KO16KSC5601
    After editing, run env again to confirm the change.
    <WINDOWS 3.1>
    Edit C:\WINDOWS\ORACLE.INI:
    NLS_LANG=American_America.KO16KSC5601
    Restart Windows.
    <WINDOWS 95>
    On Windows 95, NLS_LANG is stored in the registry rather than in ORACLE.INI, so it must be changed with the Registry Editor.
    Open an MS-DOS window and run REGEDIT.EXE,
    or
    Start -> Run -> regedit -> HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE
    Edit NLS_LANG using the right mouse button.
    There is no need to reboot the PC after changing the registry.
    <WINDOWS NT>
    On Windows NT, as on Windows 95, change the value recorded in the
    registry, as follows:
    Run REGEDT32.EXE from a DOS window
    Select HKEY_LOCAL_MACHINE -> SOFTWARE -> ORACLE
    Edit NLS_LANG from the menu
    * On a client that accesses databases with different character sets, the
    following setting is convenient.
    For example, if the server character set is US7ASCII and the PC's NLS_LANG
    is set differently, such as American_America.KO16KSC5601,
    adding the following to each client's environment resolves the Korean-character problem:
    ORA_NLS_CHARACTERSET_CONVERSION=NO_CHARACTER_SET_CONVERSION

    Even if the two databases use different character sets,
    I don't think the databases themselves will have a problem.
    Try comparing the two platforms first.
    If the problem still occurs, then change it as you suggested.
    The note above is just for reference;
    the important thing is that some adjustment will probably be needed when you run imp/exp.

  • Change character set used to write a file in application server.

    Hello Experts,
    I want to know if we can change the character set used to create a file on the application server. (Is it possible to use a particular character set while creating a file on the application server?)
    I will be very grateful for any help.
    Thanks in advance.
    Sharath

    Hello Sharath,
    There is a CODE PAGE extension to the OPEN DATASET statement.
    Can you please elaborate which character set you want to write to the application server?
    BR,
    Suhas

  • Checklist for CHARACTER SET problems

    Product: ORACLE SERVER
    Date written: 1998-04-13
    When character set problems occur even though all environment variables are correct
    =============================================================
    If the ORA_NLS environment variable, in use since Oracle version 7.2, is set
    incorrectly, the following character set problems can occur.
    server version    environment variable
    ==============    ====================
    7.2               ORA_NLS
    7.3               ORA_NLS32
    8.0               ORA_NLS33
    In other words, in the following cases you should check that the ORA_NLS
    environment variable matches the database version as shown above.
    1) Before changing the character set from US7ASCII to KO16KSC5601,
    the query used to check whether the conversion is possible returns the following error:
    select convert('a','KO16KSC5601','US7ASCII') from dual;
    ORA-1482 "unsupported character set"
    2) props$ and v$nls_parameters show different character set
    values.
    3) Only the default values are returned from v$nls_valid_values
    (i.e. select count(*) from v$nls_valid_values; returns
    4, because ORA_NLS is not set).
    4) ORA-911 "invalid character" or
    ORA-904 "invalid column name" occurs when creating a table.
    5) Korean characters come out as ??? even though NLS_LANG is set correctly.
    $ env | grep ORA_NLS
    ORA_NLS=$ORACLE_HOME/ocommon/nls/admin/data
    $ cd $ORA_NLS
    ls -l boot
    -rw-r--r-- 1 rctest72 dba 13572 Dec 2 1995 lx0boot.d
    -rw-r--r-- 1 rctest72 dba 13572 Dec 2 1995 lx1boot.d
    These two files must exist in the ORA_NLS directory.
    (The sizes may differ depending on the version.)
    Solution
    ========
    If ORA_NLS (ORA_NLS32, ORA_NLS33) is not set, or does not match the RDBMS
    version, correct it and restart the instance; the related problems are then
    resolved.
    (On certain operating systems, such as Ticom, the OS itself may also need to be rebooted.)
    Once ORA_NLS is set correctly and the instance is restarted,
    KO16KSC5601 is listed in v$nls_valid_values, Korean table and
    column names become possible, and the dictionary also matches KO16KSC5601.
    The following CASE1 and CASE2 show the effect this environment variable
    has on the database.
    CASE1
    =====
    When the lx0boot.d and lx1boot.d files above can be accessed, querying the
    dictionary of the started instance shows the commonly used character
    sets such as KO16KSC5601 and US7ASCII.
    SQLDBA> select * from v$nls_valid_values;
    313 rows selected.
    SQLDBA> select * from v$nls_valid_values where value='KO16KSC5601';
    PARAMETER VALUE
    CHARACTERSET KO16KSC5601
    1 row selected.
    ; i.e. there are 313 rows in total, and KO16KSC5601 and US7ASCII are among them
    CASE2
    =====
    However, when the lx0boot.d and lx1boot.d files cannot be accessed, querying
    the dictionary of the started instance returns only the default values.
    SQLDBA> select * from v$nls_valid_values;
    PARAMETER VALUE
    LANGUAGE AMERICAN
    TERRITORY AMERICA
    CHARACTERSET US7ASCII
    SORT BINARY
    4 rows selected.
    ; i.e. when lx0boot.d and lx1boot.d cannot be accessed, only US7ASCII above
    is supported.

    Hi Amos,
    It should work, but you will need to use a Unicode font. Try setting your field fonts in CR to MS Arial Unicode and test again.
    Thank you
    Don

  • Server uses WE8ISO8859P15 character set (possible charset conversion)

    Hi,
    when I run exp (9i) I receive:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in WE8PC850 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P15 character set (possible charset conversion)
    What is the problem?
    Thank you.
    I exported just one table; how can I check whether it was exported?

    Dear user522961,
    You have either not defined, or have misdefined, the NLS_LANG environment variable before running the export command.
    Here is a little illustration;
    *$ echo $NLS_LANG*
    *AMERICAN_AMERICA.WE8ISO8859P9*
    $ exp system/password@opttest file=ogan.dmp owner=OGAN
    Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:10:47 2010
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    *Export done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set*
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user OGAN
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user OGAN
    About to export OGAN's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export OGAN's tables via Conventional Path ...
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    *$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P15*
    $ exp system/password@opttest file=ogan.dmp owner=OGAN
    Export: Release 10.2.0.4.0 - Production on Mon Jul 12 18:12:41 2010
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    *Export done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set*
    *server uses WE8ISO8859P9 character set (possible charset conversion)*
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user OGAN
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user OGAN
    About to export OGAN's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export OGAN's tables via Conventional Path ...
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully without warnings.
    Hope it helps,
    Ogan

  • Server uses AL32UTF8 character set

    Hello All,
    OS: Linux (SLES 8)
    Recently I applied the 10g Release 1 (10.1.0.5) patch set for AIX 64 -- Patch #
    4505133.
    My earlier version of the database was 10.1.0.3.0.
    I got the message that the patch was successfully installed.
    I can log in to the database.
    One of our weekly routines is to run our internal shell script interface
    programs.
    The very first step of this program is to create a .dmp (exp) file. We have been doing this
    for years.
    Now after applying this patch set,
    I am getting error in this program as below:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 -
    Production
    With the Partitioning, OLAP and Data Mining options
    Export done in US7ASCII character set and AL16UTF16 NCHAR character set
    server uses AL32UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user abc
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user abc
    About to export abc's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    EXP-00056: ORACLE error 932 encountered
    ORA-00932: inconsistent datatypes: expected BLOB, CLOB got CHAR
    EXP-00000: Export terminated unsuccessfully
    DN

    Pierre,
    I do not have any invalid objects.
    But I ran
    SQL> @?/rdbms/admin/catmetx.sql (per Metalink note 339938.1).
    Now I am getting the errors below:
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_COM_USER_PRIVILEGES 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_COLLECTION_DEFINITION 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_COLLECTION_DEF_REPOS 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_COLLECTION_REPOSITORY 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_ERRORS 0 rows exported
    . . exporting table QUEST_SL_EXPLAIN 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_EXPLAIN_PICK 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_QUERY_DEFINITIONS 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identifier
    . . exporting table QUEST_SL_QUERY_DEF_REPOSITORY 0 rows exported
    EXP-00056: ORACLE error 904 encountered
    ORA-00904: "SYS"."DBMS_EXPORT_EXTENSION"."FUNC_INDEX_DEFAULT": invalid identif
    Now I am running @catproc.sql.
    After that I will run the utlrp.sql script if there are any invalid objects.
    DN
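
    As a small sketch of those last two steps (assuming SYSDBA access on the database server), recompiling and then listing anything still invalid looks like:

    sqlplus / as sysdba <<'EOF'
    -- recompile invalid PL/SQL objects, then list whatever is still invalid
    @?/rdbms/admin/utlrp.sql
    SELECT owner, object_name, object_type
      FROM dba_objects
     WHERE status = 'INVALID'
     ORDER BY owner, object_name;
    EOF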

  • How do I check my OS character set?

    Hi, I am currently using WinXP Home.
    I can switch between input of Thai and English characters,
    but I want to set my NLS_LANG to the OS character set, so that proper conversion is done when I insert into my Unicode database.
    How do I check the character set of my OS? I can input English and Thai, and I need to input Thai into my database.
    Thanks.

    SELECT * FROM gv_$nls_parameters;
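
    A related sketch, for recent Oracle releases that expose it: the character set the client actually declared (via NLS_LANG) for the current session can be read from V$SESSION_CONNECT_INFO. Note this reports what Oracle was told, not the underlying Windows ANSI code page, and it requires access to the V$ view:

    SELECT DISTINCT client_charset
      FROM v$session_connect_info
     WHERE sid = SYS_CONTEXT('USERENV', 'SID');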

  • Error while running package ORA-06553: PLS-553 character set name is not recognized

    Hi all.
    I have a problem with a package; when I run it, it returns this error:
    ORA-06552: PL/SQL: Compilation unit analysis terminated
    ORA-06553: PLS-553: character set name is not recognized
    The full context of the problem is this:
    Previously I had a development database, which was then migrated to a new server. After that I started to receive the error, so I began to look for a solution.
    My first move was to compare the “old database” with the “new database”, so I checked the NLS parameters, and this was the result:
    select * from nls_database_parameters;
    Result from the old
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    Result from the new
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     US7ASCII
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_RDBMS_VERSION     10.2.0.1.0
    As the result was identical, I decided to look for more info in the log file of the new database. What I found was this:
    Database Characterset is US7ASCII
    Threshold validation cannot be done before catproc is loaded.
    Threshold validation cannot be done before catproc is loaded.
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Synchronizing connection with database character set information
    Refreshing type attributes with new character set information
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    Errors in file e:\oracle\product\10.2.0\admin\orcl\udump\orcl_ora_3132.trc:
    Regards

    Ohselotl wrote:
    As the result was identical, I decided to look for more info in the log file of the new database. What I found was this:
    Database Characterset is US7ASCII
    alter database character set INTERNAL_CONVERT WE8MSWIN1252
    Updating character set in controlfile to WE8MSWIN1252
    Completed: alter database character set INTERNAL_CONVERT WE8MSWIN1252
    This is an unsupported method to change the character set of a database - it has caused the corruption of your database beyond repair. Hopefully you have a backup you can recover from. Whoever did this did not know what they were doing.
    alter database character set US7ASCII
    ORA-12712 signalled during: alter database character set US7ASCII...
    The correct way to change the character set of a database is documented - http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1476
    HTH
    Srini
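
    For reference, a hedged outline of the supported 10g flow that the linked chapter describes (csscan followed by csalter). This assumes the CSMIG schema has already been created with csminst.sql, that the target character set shown here (WE8MSWIN1252, as in the log above) is only an example, and that csalter is run only after a completely clean scan on a restricted-mode instance - read the chapter before running anything:

    # scan the whole database for data that would not survive the conversion;
    # csscan prompts for a userid (e.g. sys ... as sysdba) if none is given
    csscan FULL=Y TOCHAR=WE8MSWIN1252 LOG=csscan CAPTURE=Y ARRAY=1000000 PROCESS=2

    # only after a clean scan:
    sqlplus / as sysdba <<'EOF'
    @?/rdbms/admin/csalter.plb
    EOF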

  • Changing Character set in SAP BODS Data Transport

    Hi Experts,
    I am facing issue in extracting data from SAP.
    Job details: I am using an ABAP data Flow which fetches the data from SAP and loads into Oracle table using Data Transport.
    Its giving me below error while executing my job:
    (12.2) 05-06-11 11:54:30 (W) (3884:2944) FIL-080102: |Data flow DF_SAP_EXTRACT_QMMA|Transform R3_QMMA_EXTRACT__AL_ReadFileMT_Process
                                                         End of file was found without reading a complete row for file <D:/DataService/SAP/Local/Z_R3_QMMA>. The expected number of
                                                         columns was <30> while the number of columns actually read was <10>. Please check the input file for errors or verify the
                                                         schema specification for the file format. The number of rows processed was <8870>.
    Reason: when I analyzed this, I found the cause is the presence of special characters in the data. While generating the data file in the SAP working directory (available on the SAP application server), the SAP code page is 1100, so both the file delimiter and the special characters are represented as #. Once the ABAP is executed and the data is read from the file, the # is treated as a delimiter, which throws the above error.
    I tried to replace the special characters in the ABAP data flow, but the ABAP data flow does not support the replace_substr function. I also tried changing the Code Page value to UTF-8 in the SAP datastore properties, but this did not work either.
    Please let me know what needs to be done to resolve this issue. Is there any way to change the character set while reading the generated data file in BODS, to convert code page 1100 to UTF-8?
    Thanks in advance.
    Regards,
    Sudheer.

    Unfortunately, I am no longer working on this particular project/problem. What I did discover though, is that /127 actually refers to character <control>+<backspace>. (http://en.wikipedia.org/wiki/Delete_character)
    In SAP this and any other unknown characters get converted to #.
    The conclusion I came to at the time, was that these characters made their way into the actual data and was causing the issue. In fact I think it is still causing the issue, since no one takes responsibility for changing the records, even after being told exactly which records need to be updated ;-)
    I think I did try to make the changes on the above mentioned file, but without success.
