Foreign Character Issue

Hi,
In our application we have a feature in which a user can post a SQL statement and have its output FTPed to the user's server. We are now facing the following issue with one user's data:
When I query through SQL*Plus, the data looks like:
HYPO–UND VEREINSBANK
but to the user it looks like HYPO–UND VEREINSBANK on his server. We use nls_language = american in our database.
Please suggest what changes should be made, either on my side or on theirs, to correct the data on their server.
Thanks.

I am not overly knowledgeable in this area, but I would think that you should check the NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET at the client.
If the right parameters are set at the OS level, then the client should translate the data correctly.
HTH -- Mark D Powell --
PS - It might help if you could identify the type of client in use and the tool being used to view the data.
Edited by: Mark D Powell on Jun 24, 2010 8:46 AM
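The symptom described (each non-ASCII character expanding into a short run of odd Latin punctuation) is the classic signature of UTF-8 bytes being interpreted as a single-byte character set on the receiving side. A minimal Python sketch of the mechanism -- the en dash and the Windows-1252 assumption here are illustrative, not taken from the poster's actual configuration:

```python
# The database client writes the en dash (U+2013) as three UTF-8 bytes
text = "HYPO\u2013UND VEREINSBANK"   # what SQL*Plus shows
raw = text.encode("utf-8")

# A receiver that assumes Windows-1252 sees three separate characters
garbled = raw.decode("windows-1252")
print(garbled)  # the en dash now shows as the three characters 'â', '€', '“'
```

If this matches, the usual fix is to align the NLS_LANG of the extracting client with the character set the receiving system actually expects, rather than to change the stored data.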

Similar Messages

  • Foreign Character Issue in Messages sent to MQ Series Queue

    Hi Experts,
    We have an IDOC to JMS scenario. We receive the IDOC and convert it into a single-line flat file using a Java mapping, then send the flat file message to an MQ Series 6.0 queue using a JMS adapter.
    We are facing issues when the message contains foreign characters such as Chinese, German, and Greek.
    The input/output payload looks fine in SXMB_MONI and in the JMS channel payload, but when we open the messages in the MQ Series queue, the foreign characters appear as "?????".
    PFB some details on the Java Code and other Config Settings.
    1. We read the messages from the input stream as UTF-8, do the transformation, and write the output as UTF-8.
    We use a writer class to write the output after conversion (shown below):
    Writer osw = new OutputStreamWriter(osOutput, "UTF-8");
    2. MQ Series is on a UNIX box and the CCSID of the queue manager is 819.
    We have set the CCSID of the JMS channel to 819.
    Can you please clarify whether any transformation happens before the message is placed in the queue, and how we can handle this from the channel or in the Java code.
    Some sample characters are ΕΣΤΙΑΤΟΡΙΑ-ΤΑΒΕΡΝΕΣ-ΚΑΦ and ΚΟΝΙΟΡΔΟΥ ΧΡΥΣΑ, and there are some Chinese characters too.

    Hi All,
    Thank you all. My problem is not resolved yet.
    I have one more piece of useful info.
    I set the transport protocol to TCP/IP and kept the JMS compliance as WebSphere MQ (Non-JMS) in the JMS channel Parameters tab. In the Additional Parameters tab I set JMS.TextMessage.charset to ISO-8859-1. With this setting, the characters look perfectly fine when I browse and view the messages in the MQ Series queue. Messages are sent in MQSTR format, where the message has no header values.
    Initially I had set the JMS compliance field to "JMS Compliant" with the additional parameter set to ISO-8859-1, in which case the message is sent in MQRFH2 format containing header details. I also configured dynamic header values using AF_Modules for this header. Special characters didn't work.
    Now, in MQSTR, because the header values are missing, the messages are not being identified and routed properly by the end system from MQ Series.
    I need to find the missing link: what happens when a message is sent using the WebSphere MQ API (MQSTR) versus the Java API (MQRFH2)? I need the MQRFH2 format with the message body sent exactly as it is in MQSTR. Is there any parameter I need to set?
    Can anyone help me in this regard?
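    For reference, CCSID 819 is ISO-8859-1 (Latin-1), which has no code points for Greek or Chinese characters, so any lossy conversion into it must substitute a replacement character -- typically '?'. A small Python sketch of that effect, using one of the Greek sample values quoted above:

    ```python
    # ISO-8859-1 (CCSID 819) cannot represent Greek letters,
    # so a lossy conversion replaces each one with '?'
    sample = "ΚΟΝΙΟΡΔΟΥ ΧΡΥΣΑ"
    converted = sample.encode("iso-8859-1", errors="replace")
    print(converted.decode("iso-8859-1"))  # ????????? ?????
    ```

    If the end system must receive these characters intact, the queue side generally needs a Unicode CCSID (for example 1208 for UTF-8) rather than 819 -- treat that value as a suggestion to verify against the MQ documentation.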

  • Why can't I import clips with foreign character file names?

    What the **** man?
    I've got a series of clips with foreign-character file names that FCP will not let me access via the Import->File method. Some of the offending character sets include Chinese, Arabic, and a couple of less well known Unicode scripts like Cherokee and Inuktitut.
    What's interesting is that I can easily drag the clips onto the timeline manually via the Finder, but not via File->Import. Also, XML passed to FCP that includes foreign-character file names is ignored. Seeing as FCP is an Apple product, I would have expected proper Unicode support 4 years ago.
    Obviously I don't need a workaround per se... but I'd very much like to know why this is the way it is. We're developers doing development work for FCP, so it's important for us.

    We're developers doing development work for FCP so it's important for us.
    Then you have a more direct path to the FCP engineers than this forum. Send them feedback directly. This place is user-to-user - no Apple employees are officially here.
    good luck.
    x

  • Foreign character set problem

    Hi Sergiusz, it looks like you are a character set guru. Maybe you would know how to solve my problem? I've got a working application under Oracle XE, Apache, and the embedded listener. I would like to switch to the new APEX Listener with Tomcat, but there is a problem with a foreign character set. Existing pages with such characters are displayed correctly, but if I type them into an input field they do not show up correctly on the next page. This happens without even saving the information in the database: I type text into one field which is the source of an item on another page. These are the character settings in my database.
    SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    SQL> select value from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';
    VALUE
    AL16UTF16
    Thanks,
    Art

    Hi Sergiusz, thank you for your help. After setting URIEncoding to UTF-8 and doing some further research, I was able to fix my problem. Here is the entire solution in case someone else needs it.
    1.) Change $CATALINA_HOME/conf/server.xml and add URIEncoding="UTF-8":
    <Connector port="8090" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" URIEncoding="UTF-8"/>
    2.) Copy $CATALINA_HOME/webapps/examples/WEB-INF/classes/filters/SetCharacterEncodingFilter.class => $CATALINA_HOME/webapps/apex/WEB-INF/classes/filters
    3.) Add the following into the $CATALINA_HOME/webapps/apex/WEB-INF/web.xml file after the last </servlet-mapping> tag.
    <filter>
      <filter-name>Set Character Encoding</filter-name>
      <filter-class>filters.SetCharacterEncodingFilter</filter-class>
      <init-param>
        <param-name>encoding</param-name>
        <param-value>UTF8</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>Set Character Encoding</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
    Thanks,
    Art
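    The URIEncoding setting matters because, by default, older Tomcat versions decode percent-encoded request parameters as ISO-8859-1 rather than UTF-8. A Python illustration of the difference (the parameter value is a made-up example):

    ```python
    from urllib.parse import unquote

    # 'é' arrives percent-encoded as its two UTF-8 bytes
    param = "name=Andr%C3%A9"

    latin1 = unquote(param, encoding="iso-8859-1")  # what the old default produced
    utf8 = unquote(param, encoding="utf-8")         # with URIEncoding="UTF-8"
    print(latin1)  # name=AndrÃ©  (each UTF-8 byte read as its own Latin-1 character)
    print(utf8)    # name=André
    ```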

  • Character issue in Excel

    Hi All,
    For the special character issue...
    In Report Builder -> XML Prolog Value: <?xml version="1.0" encoding="iso-8859-1"?>
    With that, the issue vanished, which was fine.
    But now my issue is with some characters: the description is printing like CM&U OWI Rußdorf (1) SDL instead of CM&U OWI Rußdorf (1) SDL, while some descriptions print properly. :(
    Please let me know if I missed anything.
    Thanks
    HTH
    Edited by: user633508 on Oct 8, 2010 4:52 PM
    Edited by: user633508 on Oct 8, 2010 5:10 PM

    Hi Kavipriya,
    Thanks for the quick reply.
    This issue occurs after generating the output to Excel.
    I am not getting any error, but in place of the special character it shows a square box symbol; the rest is OK, and the remaining special characters work fine.
    The following are examples from my Excel output:
    CM&U Enertrag Randow-Höhe (8) SDL
    CM&U Prützke 1-4 SDL I & II
    CM&U Kulder Hof Tönisvorst (4) SDL phase I+II
    The output generates perfectly, but the client reports that bad characters appear in place of Ö, Ü, ß, etc.
    Please let me know how we can correct this.
    Thanks..
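    When 'ß' or 'Ö' reaches the client as a pair of odd Latin characters, the usual cause is UTF-8 bytes that were decoded once as a single-byte set (Latin-1/Windows-1252) somewhere between the report and Excel. If that mis-decode happened exactly once, it is mechanically reversible; a Python sketch under that assumption (which may not match every step of this particular report chain):

    ```python
    correct = "CM&U OWI Rußdorf (1) SDL"

    # Simulate the corruption: UTF-8 bytes wrongly decoded as Windows-1252
    garbled = correct.encode("utf-8").decode("cp1252")

    # Reverse it: re-encode with the wrong charset, decode with the right one
    repaired = garbled.encode("cp1252").decode("utf-8")
    print(repaired == correct)  # True
    ```

    If the round trip does not restore the text, the corruption happened more than once or involved a lossy conversion, and the data has to be fixed upstream instead.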

  • Special character issue while loading data from SAP HR through VDS

    Hello,
    We have a special character issue while loading data from SAP HR to IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
    French accented characters (é, à, è, ù) are loaded correctly, but Turkish special ones (like Ş, İ, ł) are transformed into "#" in IdM.
    The question is: does someone know of any special setting in the VDS or in IdM to solve this issue with special characters?
    Our SAP HR version is ECC 6.0 (ABA/BASIS 7.0 SP21, SAP_HR 6.0 SP54) and we are using VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch 1 on Oracle 10.2.
    Thanks

    We are importing directly into the HR staging area, using the transactions/programs "HRLDAP_MAP", "LDAP" and "/RPLDAP_EXTRACT"; then we have a job which extracts data from the staging area to a CSV file.
    So before the import, the characters appear correctly in SAP HR, but by the time the data comes through the VDS to IdM's temporary table, they become "#".
    Yes, our data is coming from a Unicode system.
    So, could it be a Java parameter to change or add in the VDS?
    Regards.

  • Foreign character set issue

    Hi
    This might sound a bit silly, but stay with me.
    There's a database that decidedly supports UTF-8. I checked using this query:
    select * from nls_database_parameters where parameter like '%CHARACTERSET'
    And got this result
    NLS_CHARACTERSET     UTF8
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    It's Oracle 10g.
    In a particular table, some text is stored in multiple languages. There are seven languages (English, Mandarin, Japanese, German...). Every language has 2-3 rows to itself. There's a column where I have to get rid of the trailing few characters, the number of which depends on the content of the string.
    But I cannot see any of the Eastern languages in TOAD. The column is in VARCHAR2.
    My problem is twofold.
    1. What functions do I use to ensure that only the last few bytes are truncated, and there's no data loss (which many websites gravely warn of when dealing with foreign language data) ?
    2. How can I see this foreign language text in TOAD/SQLPlus?
    (Yes, I'm kind of new to the whole multiple-language-game. Please let me know if I've left out any important detail!)

    Do you have metalink access?
    If so, please see the notes below, there's a lot of good information in them:
    158577.1 - NLS_LANG explained
    260893.1 - Unicode Character Sets in the Database
    788156.1 - UTF8 implications
    With any character set situation there are at least two and a bit sides to the equation.
    First is whether you are storing the correct data.
    You're best off using the DUMP function to inspect the stored data, e.g.
    SELECT DUMP(<column_name>) FROM <table_name> WHERE ...
    This function may help you with your truncation of the last few bytes -- not sure why you need to do this?
    The "second and a bit" part is having the correct client settings -- NLS_LANG -- and using a client which supports the characters required.
    SQL*Plus has its limitations here. Toad I don't know well enough, but it should support full UTF8 characters.
    SQL*Developer and iSQL*Plus should both support full UTF8 -- I tend to use the former, particularly for UTF8.
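    On the truncation question: the safe rule is to cut by characters, never by bytes, because a UTF-8 character can occupy 1 to 4 bytes and a byte-level cut can split one in half. A Python sketch of the distinction (the sample string is illustrative):

    ```python
    s = "Grüße"        # 5 characters, but 7 bytes in UTF-8

    # Character-based truncation is always safe
    print(s[:-2])      # Grü

    # Byte-based truncation can land inside a multibyte character
    raw = s.encode("utf-8")
    try:
        raw[:-2].decode("utf-8")
    except UnicodeDecodeError:
        print("byte cut split a character in half")
    ```

    The same distinction exists in Oracle: SUBSTR and LENGTH operate on characters, while SUBSTRB and LENGTHB operate on bytes, so sticking to the character-based variants avoids the data loss those websites warn about.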

  • XML Data from PL/SQL procedure -- special character issue (eBS)

    Hi All,
    I am developing a report, where the XML data is created by a PL/SQL procedure. I chose this approach over a data template, since there is a lot of processing involved in producing the actual data, and it cannot be done by select-statements alone. The report is run as a regular concurrent request.
    So far so good. There is a problem though. Every time the data contains special characters (ä, ö, umlauts), the concurrent request is completed with a warning, the log confirms that "One or more post-processing actions failed.", also there is no output file. The XML structure is valid as such. The report runs smoothly and produces the output when the XML data does not contain special characters.
    I am producing the XML lines by using the standard FND_FILE.PUT_LINE procedure: Fnd_File.put_line(Fnd_File.output, '<?xml version="1.0" encoding="UTF-8"?>'); This seems like a character encoding issue, but declaring the UTF-8 encoding in this way does not solve the problem.
    Any ideas what can be done to fix this? I have been searching Metalink but cannot find an answer. Many thanks for your kind help.
    Best Regards, Matilda

    Hi Rajesh,
    One idea I have, is that it might be possible to modify the PL/SQL code into a "before report" type trigger, attached to a data template. The code would process the data and write the lines into a temporary table, from which the data template could retrieve them in a simple select-query. However, it would be neat to be able to solve this without adding an extra layer of processing, since BI Publisher supposedly supports PL/SQL procedures as the data source.
    The data in this case is all correct, special characters are an intrinsic feature of the Finnish language. :)
    Best Regards, Matilda

  • Multibyte Character Issue

    Hi,
    We just migrated our technology platform on one of the six servers yesterday on production d/b as follows :
    OAS Server - Linux Patch Applied (2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:19 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux)
    D/B Server - From Oracle 10.2.0.2.0 to Oracle 10.2.0.4.0
    Batch Server - From OpenVMS to Red Hat Linux
    Post-migration, we are facing an issue with Oracle Forms 10g: text items which contain one or more multibyte characters display only # characters in the entire text box instead of what is stored in the database. In other words, the information in the database is stored correctly; before the migration everything was working perfectly fine.
    One other thing which I would like to tell here is that we compile forms using the following environment settings:
    NLS_LENGTH_SEMANTICS=CHAR
    NLS_LANG=AMERICAN_AMERICA.UTF8
    Please provide an urgent remedy for this issue.

    hi,
    how about this.
    SQL> with your_table as ( -- "sample data"
      2       select '1ξΣ' your_column from dual union all
      3       select 'ξΣξΣξΣξΣξΣ' your_column from dual union all
      4       select 'ξΣξΣξΣξΣξΣξΣξΣξΣξΣξΣ' your_column from dual union all
      5       select '1ξΣa' your_column from dual
      6  ) -- end "sample data"
      7  select your_column,
      8         substr(your_column || rpad(' ', 10 - length(your_column), 'x'),1,10) padded_column
      9    from your_table
    10  /
    YOUR_COLUMN                              PADDED_COLUMN
    1ξΣ                                      1ξΣ xxxxxx
    ξΣξΣξΣξΣξΣ                               ξΣξΣξΣξΣξΣ
    ξΣξΣξΣξΣξΣξΣξΣξΣξΣξΣ                     ξΣξΣξΣξΣξΣ
    1ξΣa                                     1ξΣa xxxxx

  • Foreign character sets in a database

    I am developing a site where people in many countries (mainly
    Scandinavian) can log on and update their pages. The information,
    (including passwords and usernames) is stored in an access
    database. I am using Access because I have no knowledge of PHP /
    SQL, and the site is unlikely to exceed the limitations of Access
    for a few years.
    Does Access change the format of the text, or do I have to set up some special encoding? I find that if I type in the username and password using an English keyboard, when the original was set up using a foreign keyboard, I cannot log in. The reverse also applies.
    I am using CS3 in default mode - using UTF-8. Should I be using Western European encoding, or is Access the fly in the ointment? Or is it likely to be another problem?
    Howard Walker

    Hello,
    Specifying the character set for a WebLogic webservice (see:
    http://edocs.bea.com/wls/docs81/webserv/i18n.html#1069629) is one of the
    many enhancements made since the 6.1 release. If possible, the best
    solution for your webservice development would be to upgrade.
    Bruce
    "özkan Demir" wrote:
    >
    Hi all;
    I am developing RPC-style webservices on WebLogic Server 6.1 with Service Pack 2.
    I have a problem with character sets. I have to support the Turkish language, so the character set must be ISO-8859-9. But the SOAP messages are UTF-8 by default, so the Turkish characters come through garbled.
    Is there a way I can send SOAP messages in the ISO-8859-9 character set, or how else can I solve the problem of Turkish characters between the client and server applications with webservices? Please help!
    Thanks

  • Special character issue with Adobe Reader 11.03 in hyperlink path

    Adobe Reader changes a [ character in a hyperlink path to a % sign, thus changing the path and destroying the link, when the document is opened in version 11.03. This does not happen with 11.01. This is a serious problem for our customers with a critical linked document, because the customers are generally not tech savvy and categorically refuse workarounds. They will not adjust their Adobe version for fear of security issues. Can this be corrected in a future version of Adobe Reader?

    This is the Bridge forum; try posting in the Acrobat forum: http://forums.adobe.com/community/acrobat
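    For background, '[' is one of the characters that URI syntax allows to be percent-encoded, and its escaped form begins with '%' -- which is likely what the poster is seeing in the rewritten link. A quick Python illustration of the normal encoding round trip:

    ```python
    from urllib.parse import quote, unquote

    link = "docs/file[1].pdf"
    encoded = quote(link)      # '[' and ']' become %5B and %5D
    print(encoded)             # docs/file%5B1%5D.pdf
    print(unquote(encoded))    # docs/file[1].pdf
    ```

    A viewer that percent-encodes the path without later decoding it (or that decodes it twice) would break links exactly as described.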

  • Special character issue with Adobe Reader 11.03 Hyperlink path

    Adobe Reader changes a [ character in a hyperlink path to a % sign, thus changing the path and destroying the link, when the document is opened in version 11.03. This does not happen with 11.01. I have also seen this with Acrobat X, but I am not sure with which update. This is a serious problem for our customers with a critical linked document, because the customers are generally not tech savvy and categorically refuse workarounds. They will not adjust their Adobe version for fear of security issues. Can this be corrected in a future version of Adobe Reader?

    This is the Bridge forum; try posting in the Acrobat forum: http://forums.adobe.com/community/acrobat

  • Special Character Issue in Country Name Côte d'Ivoire, encoding ISO-8859-1

    soa suite 10.1.3.4
    I am reading an XML file using the file adapter. The file has encoding ISO-8859-1 and the country name comes in as <country_name>Côte d'Ivoire</country_name>.
    When I look at the BPEL instance, this character has been converted into the jumbled form "C\234te d'Ivoire", and the target endpoints receive the country name in jumbled format only. I can't change the incoming file format. Is there a way I can read it properly and keep the special character intact during the SOA process?
    Any ideas how I can make progress on this issue?

    Any idea on this ? My soa suite server is on Solaris and the folder where I am reading the file from is on Windows.

  • Special Character Issue in outbound file(Extra length)

    I have a big issue when generating an output file.
    I have data like party_name=00CONSTRUÇÕES JOSÉ VIEIRA, LDA.
    The data is for a client in Spain, so they need those special characters.
    In the cursor query I used the following CONVERT function: CONVERT(TRIM(hp.party_name),'WE8ISO8859P1')
    and append to the file:
    v_output_file := UTL_FILE.fopen(v_path, v_filename, 'A');
    p_record := SUBSTR(RPAD(NVL(TRIM(l_sales_order_details_rec(i).party_name), ' '), 30), 1, 30); -- I applied all these functions trying to avoid the issue
    and then:
    UTL_FILE.put_line(v_output_file, p_record);
    When writing to the file, it inserts 2 extra spaces (only if there is a character like Ç and/or Õ, not for É), so the length becomes 32 when it should be 30, which then shifts the subsequent values of the record.
    Can anyone please suggest ideas? This is very critical for me. Thanks in advance.

    Please post the details of the application release, database version and OS.
    What is your database character set?
    > I have a big issue when generating output file.
    How do you generate the output file?
    > I have data like party_name=00CONSTRUÇÕES JOSÉ VIEIRA, LDA. The data is for Spain client and so the need those special characters.
    What is the NLS_LANG setting on the client side?
    > So in the cursor query I used following convert function like CONVERT(TRIM(hp.party_name),'WE8ISO8859P1') ... and then UTL_FILE.put_line (v_output_file, p_record)
    How do you run this code?
    > When inserting into file it inserts 2 spaces extra ... And so effecting the subsequent values of that record.
    Please provide more details about the issue and answer the above questions.
    Thanks,
    Hussein
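    One plausible explanation for the two extra spaces is padding computed in bytes rather than characters: Ç, Õ, and É each occupy two bytes in a multibyte character set, so a pad to 30 bytes and a pad to 30 characters differ by exactly the number of such characters in the value. A Python sketch of the mismatch, using the poster's sample name (mapping this onto Oracle's RPAD and NLS_LENGTH_SEMANTICS behavior is an assumption to verify, not a diagnosis):

    ```python
    name = "00CONSTRUÇÕES JOSÉ VIEIRA"   # 25 characters, 28 UTF-8 bytes

    # Padding to 30 characters gives a true fixed-width field
    by_chars = name.ljust(30)
    print(len(by_chars))                  # 30

    # Padding to 30 *bytes* leaves a field only 27 characters wide,
    # because Ç, Õ and É each take 2 bytes
    by_bytes = name.encode("utf-8").ljust(30)
    print(len(by_bytes.decode("utf-8")))  # 27
    ```

    Comparing LENGTH vs LENGTHB on the column in the database would show whether a byte/character mismatch like this is what is actually happening.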

  • Weblogic Foreign Server issue

    Hi,
    We were trying to integrate Tibco EMS with WLS 10.3 (using a Foreign JMS Server). We are using Spring as our app container (the one which basically creates/manages connections, sessions, etc.).
    I am able to listen to the topics in both durable and non-durable mode by hardcoding the username/password in the Spring config directly.
    We didn't want this hardcoding of credentials, so we moved to using the Foreign JMS Provider implementation from WebLogic. To access the connection factories I now use resource-ref in web.xml and have wired Spring to use JNDI lookups via these resource-refs to look up the topic and connection factories. This works fine for non-durable subscriptions.
    Issue: for durable subscriptions using Foreign JMS Providers (as described above), I get this error:
    java.lang.ClassCastException: com.tibco.tibjms.TibjmsXASession cannot be cast to javax.jms.TopicSession
         at weblogic.deployment.jms.PooledSession.createDurableSubscriber(PooledSession.java:348)
         at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.createConsumer(AbstractPollingMessageListenerContainer.java:469)
         at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.createListenerConsumer(AbstractPollingMessageListenerContainer.java:221)
         at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:305)
         at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:261)
         at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1002)
         at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:994)
         at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:896)
         at java.lang.Thread.run(Thread.java:619)
    The issue isn't specific to XA connection factories either: when I tried using a non-XA connection factory I got a similar error, but the cast exception was between TopicSessionImpl and TopicSession.
    My spring config is -
    <jms:listener-container container-type="default"
                                       connection-factory="jmsTopicConnectionFactory"
                                       cache="none"
                                       acknowledge="auto"
                                       destination-resolver="gadgetDestinationResolver" destination-type="durableTopic" client-id="testClientId" >
    <jms:listener destination="jms/GL.GADGET.IN" ref="publishThumbnailListener" subscription="localhost7001durable" />
    </jms:listener-container>
    Not sure if I need to set anything other than client-id and durableSubscriptionName while subscribing.
    Any idea what might be wrong, or if there's anything else I need to set up?
    Thanks
    Pdk

    The stack trace seems to indicate that the TIBCO client library you are using supports JMS 1.1 only, although the JMS 1.1 spec requires JMS providers to support both the 1.0.2 and 1.1 APIs. You can check whether there is a client library from TIBCO that supports JMS 1.0.2 as well.
    Meanwhile, the WebLogic JMS pooling code is supposed to support both JMS 1.0.2 and 1.1. It looks like it does not support 1.1 for creating a durable subscriber on a javax.jms.Session. You can contact Oracle support for a fix.
