Please advise on character set

Good day.
I have two databases (db1, db2) that are almost mirrors of each other. One table contains a value with the character ö. When I select this column from Toad there is no problem, but when I select it from any application or from SQL*Plus the letter comes back as ? or ÷.
Can anyone advise on this issue?
Many thanks,
AE

It looks like you posted your client NLS_LANG. That's useful info, but we still need to know what the database character set is. Can you post the results of this query for both databases?
SELECT *
  FROM v$nls_parameters
 WHERE parameter LIKE '%CHARACTERSET';
Can you also post the results of
SELECT dump( column_name, 1013 )
  FROM your_table
 WHERE conditions_that_return_special_character;
in each database?
Justin
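The ?/÷ symptom usually means a conversion layer between the database and the client cannot represent ö, so it substitutes a replacement character. A minimal Java sketch of that mechanism (the class name and byte value are illustrative; 0xF6 is ö in ISO-8859-1, the standard behind WE8ISO8859P1):

```java
import java.nio.charset.StandardCharsets;

public class ReplacementDemo {
    public static void main(String[] args) {
        byte[] raw = { (byte) 0xF6 };  // ö in ISO-8859-1 (WE8ISO8859P1)

        // Decoded with the matching charset, the byte is ö.
        String ok = new String(raw, StandardCharsets.ISO_8859_1);

        // Round-trip through US-ASCII (a 7-bit set, like US7ASCII):
        // ö is unrepresentable, so the encoder substitutes '?'.
        byte[] ascii = ok.getBytes(StandardCharsets.US_ASCII);
        String lost = new String(ascii, StandardCharsets.US_ASCII);

        System.out.println(ok + " -> " + lost);  // ö -> ?
    }
}
```

This is why the DUMP query above matters: it shows the bytes actually stored, before any client-side conversion gets a chance to mangle them.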

Similar Messages

  • Can I change character set WE8ISO8859P1 to AR8MSWIN1256 in Oracle 8i

    I tried to change the character set from WE8ISO8859P1 to AR8MSWIN1256 on an Oracle 8i database and got the following error for both the character set and the national character set:
    ORA-12712: new character set must be a superset of old character set
    My question: can I change it in place, or do I have to export and import into an Arabic-character-set database?


  • Change character set used to write a file in application server.

    Hello Experts,
    I want to know if we can change the character set used to create a file on the application server. Is it possible to use a particular character set while creating the file?
    I will be very grateful for any help.
    Thanks in advance.
    Sharath

    Hello Sharath,
    There is an extension CODE PAGE for the OPEN DATASET statement.
    Can you please elaborate on which character set you want to write to the application server?
    BR,
    Suhas

  • US7ASCII character set: please suggest

    Dear all,
    Does US7ASCII have support for Asian character sets?
    Oracle 9.2.0.7, Solaris 10
    Thanks in advance
    SL
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.7.0 - Production
    Export done in US7ASCII character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P1 character set (possible charset conversion)
    I know the answer: US7ASCII is 7-bit. Now my question is: how do I convert this to another (8-bit) character set? Please help.
    ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
    Will this be enough to change the existing character set? At the moment the database is empty, so will there be any issues later on?

    Which will be the new character set?
    If the new character set is a binary superset of the old character set, you can use the following procedure:
    http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96529/ch10.htm#1009904
    The following table lists all US7ASCII supersets:
    http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96529/appa.htm#974388
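Since every US7ASCII superset leaves the 7-bit range unchanged, an in-place change is safe only if the existing data really is pure ASCII. A hedged Java sketch (the method name is illustrative) of the per-value check a scan tool performs:

```java
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class AsciiCheck {
    // True when every character of s fits in 7-bit US-ASCII, i.e. the
    // value would be "changeless" when moving to a US7ASCII superset.
    static boolean isPureAscii(String s) {
        CharsetEncoder enc = StandardCharsets.US_ASCII.newEncoder();
        return enc.canEncode(s);
    }

    public static void main(String[] args) {
        System.out.println(isPureAscii("hello"));  // true
        System.out.println(isPureAscii("héllo"));  // false: é is not 7-bit
    }
}
```

Any value for which this returns false was stored under some other encoding despite the US7ASCII label, and would need conversion rather than a simple relabel.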

  • Please remove/clarify download for "Oracle8 8.0.6 Character set scanner"

    All Oracle documentation says that the first version of the Character Set Scanner was supplied with the 8.1.7.0 database software (see also Note:179843.1 - Versioning of the Character Set Scanner).
    The OTN download site provides a link (broken link) to csscan of version 1.0 for "Oracle8 8.0.6 Character set scanner for Solaris" at
    http://otn.oracle.com/software/tech/globalization/htdocs/utilsoft.html#10
    that confuses customers.
    As the download link is broken, it is impossible to check what executables this link really leads to.
    Please either document this exception somewhere that csscan is in fact available for 8.0.6 database, and why this is true ONLY on Solaris,
    or
    simply remove the link if created by mistake.
    Thank you,
    Svetlana Grove
    Technical Specialist

    The first version of CSSCAN was shipped with 8.1.7 of the database on all Oracle RDBMS platforms. Due to customer demands, backports were made available for 8.1.6 (NT and Solaris) and 8.0.6 (Solaris) also.
    We decided to make these pre-8.1.7 CSSCAN available for download on OTN as well.
    The download links are not broken. There is a policy in place for Oracle employees: they are restricted from downloading between 6am and 6pm US-PST on business days.

  • Character Set issues.  Please advise

    I have a client who uses a 10gR2 database that stores both English and French data. From time to time they send us a .dmp file, which we load into our database.
    Two questions.
    What would be the best character sets to use in this setup? I am assuming we would use
    NLS_CHARACTERSET = WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET = AL16UTF16
    Also, can someone confirm for me:
    NLS_CHARACTERSET = database character set?
    NLS_NCHAR_CHARACTERSET = national character set?

    So is it better to say that I should use AL32UTF8 instead of AL16UTF16?
    It's not an instead-of situation. AL32UTF8 is a valid setting for the database character set, which controls CHAR and VARCHAR2 columns. AL16UTF16 is a valid setting for the national character set, which controls NCHAR and NVARCHAR2 columns.
    Could you tell me the difference?
    The difference between the two encodings comes down to how many bytes are required to store a particular code point (character). AL32UTF8 is a variable-length character set, so one character requires between 1 and 3 bytes of storage (4 for the supplementary characters, but those are rather rare). AL16UTF16 is a fixed-width character set, so one character requires 2 bytes of storage (again 4 for the rare supplementary characters).
    Also, could you tell me the difference between WE8ISO8859P15 and WE8ISO8859P1?
    There's a Wikipedia article that discusses the differences and has links to the two code tables.
    Werner's point is an excellent one as well. I was assuming that we were talking about how to set up both sides of this proposed system. If the source system already exists, there are additional considerations, like ensuring that your target system supports a superset of the characters supported by the source system. Regardless, when doing imports and exports, as Werner points out, you need to ensure that NLS_LANG is set appropriately.
    Justin
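Justin's byte counts can be checked directly in Java, whose UTF-8 and UTF-16 encoders follow the same encoding rules as AL32UTF8 and AL16UTF16 (a sketch; the sample characters are illustrative):

```java
import java.nio.charset.StandardCharsets;

public class WidthDemo {
    public static void main(String[] args) {
        // AL32UTF8-style variable width: 1, 2, or 3 bytes per BMP character.
        System.out.println("a".getBytes(StandardCharsets.UTF_8).length);  // 1
        System.out.println("ö".getBytes(StandardCharsets.UTF_8).length);  // 2
        System.out.println("€".getBytes(StandardCharsets.UTF_8).length);  // 3

        // AL16UTF16-style fixed width: 2 bytes for every BMP character.
        System.out.println("a".getBytes(StandardCharsets.UTF_16BE).length);  // 2
        System.out.println("ö".getBytes(StandardCharsets.UTF_16BE).length);  // 2

        // Supplementary character (outside the BMP): 4 bytes in both.
        String clef = new String(Character.toChars(0x1D11E));  // U+1D11E
        System.out.println(clef.getBytes(StandardCharsets.UTF_8).length);     // 4
        System.out.println(clef.getBytes(StandardCharsets.UTF_16BE).length);  // 4
    }
}
```

The practical consequence: with AL32UTF8, VARCHAR2 columns declared with BYTE length semantics may hold fewer characters than expected for accented or non-Latin text.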

  • Arabic Character set conversion-help needed

    We have our main database running on 10g (Solaris) and are planning to move it to RAC 11g.
    One of our old Oracle databases (8.0.5, Solaris), which was not used until recently, needs to be upgraded to 10g Release 2.
    I know direct upgrade is supported from 8.0.6/8.1.7/9i -> 10g.
    Current DB: 8.0.5 (character set: AR8ISO8859P6)
    Target DB: 10g Release 2 (character set: AR8MSWIN1256)
    I am thinking of going the following way using exp/imp:
    8.0.5 (AR8ISO8859P6) -> 8.1.7 (AR8ISO8859P6) -> 10g (AR8MSWIN1256)
    or
    8.0.5 (AR8ISO8859P6) -> 8.1.7 (AR8MSWIN1256) -> 10g (AR8MSWIN1256)
    Please advise.
    Thanks

    (1) At source db 8.0.5 (solaris)
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-YY
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AR8ISO8859P6
    NLS_SORT BINARY
    NLS_NCHAR_CHARACTERSET AR8ISO8859P6
    $set NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
    $exp sys/dba file=full251109.dmp full=y
    (2):>> At target db 10g R2 (solaris)
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RRRR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AR8ISO8859P6
    NLS_SORT BINARY
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    $export NLS_LANG=AMERICAN_AMERICA.AR8ISO8859P6
    $imp testdba/testdba file=full251105.dmp fromuser=PROFINAL touser=PROFINAL
    $csscan testdba/testdba FULL=Y FROMCHAR=AR8ISO8859P6 TOCHAR=AR8ISO8859P6 LOG=P6check CAPTURE=Y ARRAY=100000 PROCESS=2
    There is EXCEPTIONAL DATA in the .err file.
    Clients accessing the 8.0.5 database use character set AR8ISO8859P6, which is the SAME as the 8.0.5 database.
    -CSSCAN result->>[Database Scan Parameters]
    Parameter Value
    CSSCAN Version v2.1
    Instance Name MIG1
    Database Version 10.2.0.1.0
    Scan type Full database
    Scan CHAR data? YES
    Database character set AR8ISO8859P6
    FROMCHAR AR8ISO8859P6
    TOCHAR AR8ISO8859P6
    Scan NCHAR data? NO
    Array fetch buffer size 100000
    Number of processes 2
    Capture convertible data? YES
    [Scan Summary]
    Some character type data in the data dictionary are not convertible to the new character set. Some character type application data are not convertible to the new character set.
    [Data Dictionary Conversion Summary]
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 2,235,403 0 0 1,492
    CHAR 1,097 0 0 0
    LONG 155,188 0 0 6
    CLOB 24,643 0 0 0
    VARRAY 21,352 0 0 0
    Total 2,437,683 0 0 1,498
    Total in percentage 99.939% 0.000% 0.000% 0.061%
    The data dictionary can not be safely migrated using the CSALTER script
    [Application Data Conversion Summary]
    Datatype Changeless Convertible Truncation Lossy
    VARCHAR2 16,986,594 0 0 1,240,383
    CHAR 164,114 0 0 0
    LONG 7 0 0 0
    CLOB 1 0 0 0
    VARRAY 1,436 0 0 0
    Total in percentage 93.256% 0.000% 0.000% 6.744%
    [Distribution of Convertible, Truncated and Lossy Data by Table]
    USER.TABLE Convertible Truncation Lossy
    PROFINAL.BASE_MASTER_DATAS 0 0 362,003
    PROFINAL.CODE_ALLOW 0 0 53
    PROFINAL.CODE_ALLOWANCE_TYPES 0 0 1
    PROFINAL.CODE_BONUS_TYPES 0 0 2
    PROFINAL.CODE_BRANCHES 0 0 2
    PROFINAL.CODE_CERTIFICATES 0 0 94
    Kindly help.
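The large "Lossy" counts mean the stored bytes are not valid AR8ISO8859P6 at all (commonly they were inserted under a pass-through NLS_LANG from a client using a different Windows code page). A hedged Java sketch (class and method names are illustrative) of the per-value test behind that classification: decode strictly under the declared charset and report failures instead of silently substituting. For portability the example uses UTF-8 and ISO-8859-1; the real check would use the decoder for the database character set.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

public class LossyCheck {
    // True when raw decodes cleanly under the declared charset;
    // false means the value would be counted as "lossy" by a scan.
    static boolean decodesCleanly(byte[] raw, String charsetName) {
        try {
            Charset.forName(charsetName).newDecoder()
                   .onMalformedInput(CodingErrorAction.REPORT)
                   .onUnmappableCharacter(CodingErrorAction.REPORT)
                   .decode(ByteBuffer.wrap(raw));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A lone 0xF6 byte (ö in ISO-8859-1) is malformed as UTF-8...
        System.out.println(decodesCleanly(new byte[]{(byte) 0xF6}, "UTF-8"));      // false
        // ...but valid in the single-byte charset it came from.
        System.out.println(decodesCleanly(new byte[]{(byte) 0xF6}, "ISO-8859-1")); // true
    }
}
```

If the data turns out to be mislabeled rather than genuinely AR8ISO8859P6, the usual remedy is to export with an NLS_LANG matching what the clients actually stored, rather than what the database claims.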

  • Getting ORA-01429 error while changing character set

    When changing the character set from WE8DEC to AL32UTF8, I get an ORA-01429 error:
    SQL> ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8 ;
    ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01429: Index-Organized Table: no data segment to store overflow row-pieces

    Chockalingam wrote:
    I am using above steps as per oracle doc only.
    http://docs.oracle.com/cd/B10500_01/server.920/a96529/ch10.htm
    No, you are not.
    - You are not using the correct version doc vs. Oracle server version. Try to find the same suggestion in the relevant doc.
    - The doc you reference specifically says "... it can be used only under special circumstances. The ALTER DATABASE CHARACTER SET statement does not perform any data conversion, so it can be used *if and only if the new character set is a strict superset of the current character set*." (emphasis is mine)
    You do not have a strict superset.
    - Also the special clauses you have used are not documented - for a reason.
    Please edit your posts above to remove the ill-advised steps (clauses that are for internal use only); they do not belong on a forum.

  • Please advise on how to keep the font in SAP

    Hi,
    Please advise how to keep the font in SAP as Arial Monospaced instead of it automatically changing to Chinese characters.
    At the same time, please advise which forum I should use, as this is related to Basis and I could not find a Basis forum.
    Thanks.
    Best Regards,
    Joo

    Hi,
    When I go to the SAP Menu page, below is the menu list:
    ==>Office
    ==>Cross-Application Components
    ==>Logistics
    ==>Accounting
    ==>Human REsources
    ==>Information Systems
    ==>Tools
    Over here there is no "Settings" entry. Please advise.
    Thanks.

  • Inbound MQ with extended character sets

    Hi
    We are trying to send to PI data containing Swedish characters in both xml and non xml payloads.
    The message is placed on an MQ queue (version 6.0.2.3) with a JMS header that has a ccsid of 1208 specified.
    The PI adapter is specified as JMS | WebsphereMQ (non-JMS) | JMS Compliant and the payload module has
    AF/Modules/MessageTransformerBean | Plain2XML | Transform.ContentType | text/xml;charset=utf-8
    The received characters are not displaying correctly, which is a theme in several threads from the past, but I've been unable to determine the solution.
    I am more familiar with the MQ side, so please excuse my bias. I already send extended character sets to other applications using JMS over MQ, and we've tried using the same values on the MQ side, to no avail.
    In MQ we set the MQ header to the queue manager default, but there is a JMS-specific additional header preceding the payload that specifies that the payload is UTF-8.
    From my perspective I can't see that PI is reading the JMS header at all (in fact, if I remove it, it has no effect), but we want it there in order to set some extended metadata properties.
    When I look at the data on my queue as it leaves MQ, it looks correct both as text and in hex.
    How do I get PI to recognise the JMS properties I've specified (it's known as an mqrfh header in MQ)?
    Any advice, guidance, documentation to a PI novice would be most welcome.
    Tim

    Thank you for the replies, Sarvesh and Stefan.
    I had read your previous replies on this subject, but was still stuck.
    The delay in replying is because we were waiting for a reply from the SAP support team.
    They have now acknowledged that there may be a fault in the MessageTransformBean.
    It's still only a "may", but at the moment all your other suggestions have been tried without success.
    I'll update again when I get further information.
    Tim

  • JSF and Character Sets (UTF-8)

    Hi all,
    This question might have been asked before, but I'm going to ask it anyway because I'm completely puzzled by how this works in JSF.
    Let's begin with the basics: I have an application running on an OC4J servlet container, using JSF 1.1 (MyFaces). The problem with this setup is that the character encodings I want the server/client to use do not come across correctly. I'm trying to enforce UTF-8 throughout the application, but after the response is rendered to my client, I've magically been reverted to ISO-8859-1, the main character set for the Netherlands. However, I'm building the application to support proper internationalization, which means I NEED to use UTF-8.
    I've executed the following steps to reach this goal:
    - All JSP files contain page directives, noting the character set:
    <%@ page pageEncoding="UTF-8" contentType="text/html; charset=UTF-8" %>
    I've checked the generated source that comes from the JSPs; it looks as expected.
    - I've created a servlet filter to set the character set directly on the request and response objects:
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
            // Set the characterencoding for the request and response streams.
            req.setCharacterEncoding("UTF-8");
            res.setContentType("text/html; charset=UTF-8");       
            // Complete (continue) the processing chain.
            chain.doFilter(req, res); 
        }
    I've debugged the code, and this works fine, except for where JSF comes in. If I use the above setup without going through JSF, my pages come back as UTF-8. When I go through JSF, my pages come back as ISO-8859-1. I'm baffled as to what is causing this. On several forums, writing a filter was proposed as the solution, but this doesn't do it for me.
    It looks like somewhere internally in JSF the character set is changed to ISO. I've been through the sources, and I've found several pieces of code that support that theory. I've seen portions of code where the character set for the response is set to that of the request. Which in my case coming from a dutch system, will be ISO.
    How can this be prevented? Can anyone give some good insight on the inner workings of JSF with regards to character sets in specific? Could this be a servlet container problem?
    Many thanks in advance for your assistance,
    Jarno

    Jarno,
    I've been investigating JSF and character encodings a bit this weekend. And I have to say it's more than a little confusing. But I may have a little insight as to what's going on here.
    I have a post here:
    http://forum.java.sun.com/thread.jspa?threadID=725929&tstart=45
    where I have a number of open questions regarding JSF 1.2's intended handling of character encodings. Please feel free to comment, as you're clearly struggling with some of the same questions I have.
    In MyFaces JSF 1.1 and JSF-RI 1.2 the handling appears to be dependent on the raw Content-Type header. Looking at the MyFaces implementation here -
    http://svn.apache.org/repos/asf/myfaces/legacy/tags/JSF_1_1_started/src/myfaces/org/apache/myfaces/application/jsp/JspViewHandlerImpl.java
    (which I'm not sure is the correct code, but it's the best I've found) it looks like the raw header Content-Type header is being parsed in handleCharacterEncoding. The resulting value (if not null) is used to set the request character encoding.
    The JSF-RI 1.2 code is similar - calculateCharacterEncoding(FacesContext) in ViewHandler appears to parse the raw header, as opposed to using the CharacterEncoding getter on ServletRequest. This is understandable, as this code should be able to handle PortletRequests as well as ServletRequests. And PortletRequests don't have set/getCharacterEncoding methods.
    My first thought is that calling setCharacterEncoding on the request in the filter may not update the raw Content-Type header. (I haven't checked whether this is the case.) If it doesn't, then the raw header may be getting reparsed and the request encoding reset in the ViewHandler. I'd suggest that you check the state of the Content-Type header before and after your call to req.setCharacterEncoding("UTF-8"). If the header charset value is unset or unchanged after this call, you may want to update it manually in your Filter.
    If that doesn't work, I'd suggest writing a simple ViewHandler which prints out the request's character encoding and the value of the Content-Type header to your logs before and after the calls to the underlying ViewHandler for each major method (i.e. renderView, etc.)
    Not sure if that's helpful, but it's my best advice based on the understanding I've reached to date. And I definitely agree - documentation on this point appears to be lacking. Good luck
    Regards,
    Peter

  • Conversion of a byte array in UTF-8 to the GSM character set

    I want to convert a byte array in UTF-8 to the GSM character set. Please advise how I can do that.

    String s = new String(byteArrayInUTF8, "utf-8");
    This will convert your byte array to a Java UNICODE UTF-16 encoded String, on the assumption that the byte array represents characters encoded as UTF-8.
    I don't understand what GSM characters are so someone else will have to help you with the second part.
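For the second part: "GSM" here most likely means the GSM 03.38 default 7-bit alphabet used for SMS, which the JDK has no built-in charset for, so the mapping must be done by hand. A sketch under that assumption (the table below is a tiny illustrative subset only; a real converter needs the full GSM 03.38 table plus the 0x1B escape extension for characters like the euro sign):

```java
import java.util.HashMap;
import java.util.Map;

public class GsmEncoder {
    // Partial GSM 03.38 default-alphabet table (illustrative subset).
    private static final Map<Character, Byte> GSM = new HashMap<>();
    static {
        // Letters, digits and space share their ASCII values in GSM 03.38.
        for (char c = 'A'; c <= 'Z'; c++) GSM.put(c, (byte) c);
        for (char c = 'a'; c <= 'z'; c++) GSM.put(c, (byte) c);
        for (char c = '0'; c <= '9'; c++) GSM.put(c, (byte) c);
        GSM.put(' ', (byte) 0x20);
        GSM.put('@', (byte) 0x00);   // @ is code 0 in GSM, not 0x40 as in ASCII
        GSM.put('ö', (byte) 0x7C);
        GSM.put('ü', (byte) 0x7E);
    }

    static byte[] toGsm(String s) {
        byte[] out = new byte[s.length()];
        for (int i = 0; i < s.length(); i++) {
            Byte b = GSM.get(s.charAt(i));
            if (b == null)
                throw new IllegalArgumentException("no GSM mapping: " + s.charAt(i));
            out[i] = b;
        }
        return out;
    }

    public static void main(String[] args) {
        // First decode the UTF-8 bytes to a String, then map to GSM septets.
        byte[] utf8 = {(byte) 0xC3, (byte) 0xB6};          // "ö" in UTF-8
        String s = new String(utf8, java.nio.charset.StandardCharsets.UTF_8);
        System.out.printf("0x%02X%n", toGsm(s)[0]);        // 0x7C
    }
}
```

Note this produces one byte per character; packing those septets 8-into-7 for the SMS wire format is a separate step.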

  • Convert custom character set of nonunicode system to standard character set

    hi all,
    I'm a BI consultant and have the following problem:
    We want to load attribute data (products, customers, ...) from R/3 into BI via standard extractors.
    R/3 is a non-Unicode system, and some products (or customers) use a custom character set, Z2.
    When we extract this data to BI (a Unicode system) we get a tRFC error saying it can't convert certain values. SAP says that it can't convert character set Z2 to the standard character set 4102.
    Converting the R/3 system to a Unicode system would be the solution, but unfortunately that is not an option...
    Please advise or give guidelines, because this problem is really a showstopper for our project.
    Thanks!
    Joke

    hi joke,
    I am providing some links; please check them. I think they will be helpful for you.
    www.sap-press.de/katalog/buecher/htmlleseproben/gp/htmlprobID-149
    www.sap-press.de/download/dateien/1240/sappress_unicode_in_sap_systems.pdf
    http://209.85.207.104/search?q=cache:y__bQrnsix0J:https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f5130af2-0a01-0010-c092-a6d44eadd153convertcustomcharactersetofnonunicodesystemtostandardcharacterset%2B+abap&hl=en&ct=clnk&cd=7&gl=us
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/efd1dd90-0201-0010-e1b0-8437c998cd79
    www.redbooks.ibm.com/redpapers/pdfs/redp4189.pdf
    thanks
    karthik

  • Message uses a character set that is not supported by the internet service

    Does anyone have any advice on how to fix this problem?
    E-mails sent from my iPhone 3G periodically arrive in an unreadable form at the recipient. The body of the e-mail has been replaced with the message "This message uses a character set that is not supported by the internet service...". The problem e-mails also include an attachment containing an unformatted text file with the original message surrounded by what appears to be lots of formatting data displayed as gibberish.
    This occurs sometimes, but not always, even with the same recipients. I am sending e-mail through a Gmail account configured on the iPhone using IMAP. I have tried both of the available formatting options for mail in the Gmail account, but neither fixes the problem.
    I have also upgraded to 2.0.1 and restored a few times, without effect.

    Hi,
    I have a somewhat similar problem with special characters (German umlauts ä, ö, ü, ...).
    I create a file with Java that has special characters in it. If I open this file I can see the special characters. But if I attach the file and send it using the following code, the receiver cannot see the umlaut characters; they get replaced by _ or ?
    MimeBodyPart mbp2 = new MimeBodyPart();
    FileDataSource fds = new FileDataSource(fileName);
    mbp2.setDataHandler(new DataHandler(fds));
    mbp2.setFileName(output.getName());
    Multipart mp = new MimeMultipart();
    mp.addBodyPart(mbp2);
    msg.setContent(mp);
    Transport.send(msg);
    From your message it looks like you are able to send the mail attachment correctly (preserving special characters).
    Can you tell me what might be wrong in my code?
    I appreciate your efforts in advance.
    Prasad
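One common cause of Prasad's symptom is that the attachment file is written with the platform default encoding and later read or transmitted under a different one. Always naming the charset explicitly removes that ambiguity. A minimal sketch (the file name and sample text are illustrative; requires Java 11+ for Files.writeString/readString):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitCharsetFile {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("umlauts", ".txt");

        // Name the charset whenever text leaves the JVM; relying on the
        // platform default is what turns ö into _ or ? on the other side.
        Files.writeString(tmp, "Grüße: ä ö ü", StandardCharsets.UTF_8);

        String back = Files.readString(tmp, StandardCharsets.UTF_8);
        System.out.println(back);  // Grüße: ä ö ü

        Files.delete(tmp);
    }
}
```

The same principle applies on the mail side: the MIME part's declared charset has to match the bytes actually in the file, or the receiver will decode them incorrectly.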

  • Free characteristics functionality is not working, please advise

    Hi,
    I have a problem with a report on the MultiProvider ZCOPA_M01.
    When I open the report for this MultiProvider, two default characteristics are displayed in the query. After removing the drilldown for those two characteristics, if I go to the free characteristics, drill down on one of them (for example, Geographical Type) and place a filter on Geographical Region (also a free characteristic), I find no values in the filter screen.
    But when I remove the drilldown on Geographical Type again, drill down on Geographical Region instead, and set a filter on Geographical Region, then I can see all the values in the filter screen.
    Can anyone please advise me on how to get values in the filter screen for this report on the MultiProvider ZCOPA_M01?
    Thanks in advance; good answers will be rewarded with points.

    Hi,
    Go to the definition of the master data object. In its Attributes tab, there is a navigation on/off column, where you can switch the attribute on or off as a navigational attribute. After converting it into a navigational attribute, you can see the type column change from "DIS" to "NAV".
    Hope this helps.
    Regards,
    Yogesh.
