Handling European Characters correctly?

I am running Oracle Parallel Server 8.1.7 on Compaq Tru64 UNIX. The client is on Windows 2000.
When I type European characters (like "v d e V D E") into a text field as a value using DBA Studio, the string is converted into "o a ? O A ?".
It looks like 8-bit characters are being converted to 7-bit ASCII characters.
How do I correct this problem?
The same problem shows up in my own software, which uses the MFC layers.
Thanks in advance.
Neeraj

hi Mufiza,
What do you mean by "the characters are not displayed"? Is the entire field not showing up, or are just certain characters not displayed properly?
If certain characters are not displayed properly, is this happening in a web viewer or in the Crystal Reports designer?
The first thing you should do is ensure that you are using a Unicode font such as Arial Unicode MS for your fields and formulas on the report. Also make sure it is a font that most of your end users would have installed; Arial Unicode MS should be very safe.
cheers,
jamie

Similar Messages

  • How to handle Spanish characters correctly

    I have some Spanish characters in an RTF file, such as ó; when viewed from a browser, they show up as garbage.
    Where do I set the correct encoding to fix this?
    Thanks

    I also had problems with special characters.
    See Re: How can I set language in PDF
    I solved it for mine. Maybe you have the same issue?

  • Oracle Receiver JDBC Adapter - Handling Unicode Characters

    We have an IDOC to JDBC scenario.
    The IDoc is sending data like 10/14u2019/P7; after the 4 there is a special character coming from SAP (u2019, i.e. a right single quotation mark, not a plain single quote).
    The mapping goes through OK, and the data is saved in the Oracle database as 10/14/P7 with an &#x19; character reference.
    I came across the following solution in the forums and an SAP Note.
    I am not sure how to modify the Oracle JDBC URL to handle Unicode characters properly.
    Or is there any other approach we can follow to achieve this?
    Any input is really appreciated.
    Q: I am inserting Unicode data into a database table or selecting Unicode data from a table. However, the data inserted into or retrieved from the table appears garbled. Why doesn't the JDBC Adapter handle Unicode correctly?
    A: While the JDBC Adapter is Unicode-aware, many JDBC drivers and/or database management systems aren't by default and need a codepage or Unicode-awareness to be configured explicitly. For the respective JDBC drivers, this codepage setting is often configured via the driver URL. For details, refer to the documentation of your JDBC driver or database management system.
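    For what it is worth, here is a rough illustration of what such a driver-specific setting can look like. The URL parameter characterEncoding below is MySQL Connector/J syntax used purely as an example, and dbhost, mydb, user and password are placeholders; the Oracle driver and other drivers configure character conversion differently, so check your driver's documentation before copying anything.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class EncodingUrlExample {
        public static void main(String[] args) throws Exception {
            // Example only: MySQL Connector/J accepts an encoding parameter in the URL.
            // Other drivers (including Oracle's) handle character conversion differently.
            String url = "jdbc:mysql://dbhost:3306/mydb?characterEncoding=UTF-8";
            try (Connection con = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected with an explicit character encoding");
            }
        }
    }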

    Hi Simona,
    1. To start the Visual Admin, execute the "go" file:
    On Windows: run \usr\sap\<SAPSID>\JC<xx>\j2ee\admin\go.bat
    On UNIX: run /usr/sap/<SAPSID>/JC<xx>/j2ee/admin/go
    2. Supply the credentials to log in to the Visual Admin.
    3. Under the "Cluster" tab, select the server node.
    4. You will find the Log Viewer under "Services".
    Since you are new, I recommend you get help from your BASIS team.
    Hope it helps !
    Hi Alwin,
    Just a quick clarification.
    I used the URL you mentioned when we were on SP5. After that we upgraded to SP9.
    From SP9, if you try to use the URL http://XISERVER:50000/AdapterFramework, it automatically redirects to a new web page with a link to the URL I have mentioned.
    Regards,
    Sridhar

  • Time-dependent Vendor Master & Handling Special Characters

    Hi,
    I need to extract a time-dependent vendor master.
    1. The data source for 0VENDOR does not have fields to hold the valid date range.
    2. Does the master data in R/3 for vendors hold the valid date range?
    3. The text for 0VENDOR is time-dependent, but how do I map the valid from and valid to fields?
    Handling special characters:
    We are trying to extract data from a legacy system via DB Connect. The item text field contains special characters. Of course, in BW customizing we can specify all the special characters to allow, but the special character we observed is a 'square' symbol, i.e. the 'new line' character in Oracle. We are updating this data to an ODS object. The error log shows green lights for the number of records transferred and updated, but when the load into the ODS object is activated, it pops up the error message 'could not recognize special character'.
    Please help me get these two issues resolved.
    Thanks in advance.
    Regards,
    Sudhakar.

    Hi Everyone,
    Thanks for the inputs on the special characters issue...
    Finally resolved it with the piece of code below in the start routine:
    * Start routine: replace every character that is not in ALLOWED_CHAR
    * (for example the 'square' new-line character coming from Oracle) with '-'.
    DATA: FLAG,
          LEN TYPE I VALUE 1,
          ALLOWED_CHAR(95) VALUE
    '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ`~!@#$%^&*()-_=+ ' &
    'abcdefghijklmnopqrstuvwxyz:;<>,.?/|\{}[]"'''.
    CONSTANTS: C_CHAR VALUE '-'.
      LOOP AT DATA_PACKAGE WHERE NOT /BIC/ZI_DESC IS INITIAL.
        DO.
    *     CN sets SY-FDPOS to the offset of the first character
    *     that is not contained in ALLOWED_CHAR.
          IF DATA_PACKAGE-/BIC/ZI_DESC CN ALLOWED_CHAR.
            REPLACE SECTION OFFSET SY-FDPOS LENGTH LEN OF
                    DATA_PACKAGE-/BIC/ZI_DESC WITH C_CHAR.
            FLAG = SPACE.
          ELSE.
    *       No disallowed characters left - leave the DO loop.
            FLAG = 'X'.
          ENDIF.
          IF FLAG = 'X'.
            EXIT.
          ENDIF.
        ENDDO.
        MODIFY DATA_PACKAGE.
      ENDLOOP.
    * If ABORT is not equal to zero, the update process will be cancelled.
      ABORT = 0.
    I have seen the link sent by 'Eugene Khusainov' today. I thought I'd post my piece of code here; it may help others...
    Regards,
    Sudhakar.

  • Handling special characters in XML

    Hi,
    I am using the Oracle 10g XMLType datatype to store XML files. Before storing, I parse the XML document using the Java Xerces parser. If it parses successfully, I perform some business rule execution based on the parsed XML file, and up to this stage there are no problems. But when the XML file contains special characters, such as text copy-pasted from an MS Word document into the XML tags, the Xerces parser parses those characters without any exception, yet while inserting the XML document the Oracle database throws an exception saying it is unable to handle special characters. How can I avoid or silence such exceptions with specific settings for the XMLType datatype in the 10g DB?
    Please advise!
    Arvind Patil - IN

    Monica--
    In XI 2.0, we've noticed a number of issues processing special characters, primarily caused by the version of JCO that we're running.  It sounds like SAP has spent some time in the past few months focusing on these errors, so make sure you're on the most recent patchlevels of all your middleware components, including any of the middleware libraries that BC uses. In XI, we had to update the 3 files that make up the RFC library and JCO library.  SDM couldn't update the libraries for us -- we had to manually move the files to the right place.
    Escaped XML characters like "&amp;" "&#34;" "&quot;" were fixed as of JCO 2.0.10 (the current patchlevel on AIX/UNIX), the special character "&apos;" is fixed in the next release, JCO 2.0.11, due out in a few weeks (hotfixes are available).  I don't know the equivalent versions on other platforms.  By default, XI 2.0 appears to have shipped with JCO 2.0.5.  I would expect many XI 3.0 users to also be affected.
    This may or may not apply to BC, because I don't know what BC uses to talk to SAP under the covers.
    --Dan King
    Capgemini

  • Eastern European characters in PDF report

    Hi.
    I have successfully integrated Apache FOP into APEX as per the http://www.oracle.com/technology/products/database/application_express/html/configure_printing.html document.
    The only problem (for now) is displaying Eastern European characters in the PDF file. Instead of the expected characters I get the hash (#) character. I have read some documentation on Apache FOP font embedding, but I just don't know where to start, or whether I am even on the right track.
    Any solution or hint would be very much appreciated.
    Thanks in advance.

    I have the same problem.
    My apex configuration:
    Home>Administration>About Application Express
    NLS_CHARACTERSET:     EE8ISO8859P2
    DAD CHARACTERSET:     UTF-8
    Home>Application Builder>Application 107>Shared Components>Report Layouts>Edit Report Layout
    Report Layout Type:      Generic Columns (XSL-FO)
    Page Template:
    <?xml version = '1.0' encoding = 'utf-8'?>
    Why do I get a '#' character in the PDF in place of every Polish special character?
    Please help me with this problem.
    M.

  • To Handle Special Characters (Guide™) in MATMAS IDOC fields

    We need to handle special characters like Guide™, as an attached superscript, in a MATMAS02/05 IDOC field. The field name is TDLINE in the E1MTXLM segment.
    As a trial run, when these special characters are pasted into the TDLINE field, it throws the error "the input field contains prohibited characters".
    Please let me know if there is any workaround for this.

    Hi,
    Go through these links; I hope they'll help you solve your problem.
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CAGTFADMLO/CAGTFADMLO.pdf
    http://www.erphome.net/wdb/upload/forum14_f_2908.doc
    thanks
    mrutyun^

  • Handle special characters in the attribute name

    Hi,
    I am generating different view elements in a WD application dynamically. How do I dynamically handle attribute names that contain special characters, i.e. characters other than '-/ABCDEFGHIJKLMNOPQRSTUVWXYZ_0123456789'?
    Thank you, in advance.
    Trupti

    Going with the obvious response: don't use them?
    If you're using dynamic code, there is no reason (other than debug support) to give your created elements any meaningful name.
    Just generate a GUID for each new element and use that, as sketched below.
    If you need to be able to later search for and update the element, a simple lookup table of GUID to reference string should work reasonably well.
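    A minimal Java sketch of that idea (this is not Web Dynpro API code, just the naming and lookup scheme; the class and method names are made up for illustration):
    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    public class DynamicElementNames {
        // Lookup table: generated ID -> the human-readable reference string.
        private final Map<String, String> idToReference = new HashMap<>();

        // Creates a name built only from letters and digits (all from the allowed
        // character set above) and remembers which logical field it stands for.
        public String newElementName(String referenceString) {
            String id = "A" + UUID.randomUUID().toString().replace("-", "").toUpperCase();
            idToReference.put(id, referenceString);
            return id;
        }

        public String lookupReference(String id) {
            return idToReference.get(id);
        }
    }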
    Cheers,
    Chris

  • How to make "SQLPLUS" show me the Brazilian accented characters correctly?

    Hi,
    I have an Oracle9i instance with this configuration.
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 9.2.0.1.0
    I have an xHarbour DOS client program that reads, shows on the screen, and writes "perfectly fine" text with Brazilian accents stored in a CLOB field.
    Here you can see a sample of the text with Brazilian accents in my DOS application: http://www.screencast.com/t/U5PXwCEo8
    In order for my xHarbour DOS client program to work with these Brazilian accented characters, I must set:
    SET NLS_LANG=PORTUGUESE_BRAZIL.WE8MSWIN1252
    With that, all is OK from my xHarbour DOS client program.
    My problem is that when I query this data manually using a SELECT from any DOS/Windows client program like SQL*Plus, SQL Developer, Toad, etc., I get bad characters instead of the correct Brazilian accented characters.
    Here you can see the same text in Toad: http://www.screencast.com/t/A1tal2Rtg
    (you will see bad characters instead of the correct Brazilian accented characters).
    Below is the result of querying this field using a SELECT from SQL*Plus.
    Certifico que por decis„o proferida no processo n§ @@@@@@@@@@@@@@, 
    foi reconhecida a n„o incidˆncia do ITBI na transa‡„o do(s) im¢vel(is) 
    abaixo caracterizado(s), com base no art.156, @ 2§, I, da Constitui‡„o 
    Federal de 1988 e no art.6§, II, da Lei Municipal n§ 1.364 de 19/12/1988. 
    (again, bad characters instead of the correct Brazilian accented characters).
    How do I make SQL*Plus, Toad (and other Windows or DOS client programs) show me the Brazilian accented characters correctly?
    Thanks in advance,
    Luigggye
    Edited by: 880676 on Jul 20, 2012 9:08 PM

    This is a duplicate thread. See the answers at Re: How to change the NLS_NCHAR_CHARACTERSET from WE8ISO8859P1 to AL16UTF16 ?

  • Eastern European characters in XML and parsing

    Hi all,
    I have a problem parsing XML that contains special Eastern European characters from Java code.
    Whenever there is an EE character within the tags, it reports:
    WARNING (12847): CORE3283: stderr: org.xml.sax.SAXParseException: The element type "CodeMeaning" must be terminated by the matching end-tag "</CodeMeaning>".
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:134)
    Obviously the special character is inside the CodeMeaning XML tag. The XML is well formed and syntactically OK, and without special characters it parses fine. Does anyone have a suggestion on how to make the parser accept the special characters in the XML?

    Could you send some sample data for a test? What is the encoding of your XML doc?
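    One common cause, offered here only as a guess since the parsing code is not shown: the XML bytes are decoded with the wrong charset before they reach the parser, which can corrupt multi-byte characters and produce exactly this kind of "unterminated element" error. Letting the parser read the raw byte stream, so it can honour the encoding declared in the XML prolog, usually avoids that (the file name below is a placeholder):
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import java.io.FileInputStream;

    public class ParseWithDeclaredEncoding {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            // Pass the raw byte stream; the parser then uses the encoding from the
            // <?xml ... encoding="..."?> declaration instead of a platform default.
            try (FileInputStream in = new FileInputStream("codes.xml")) {
                Document doc = builder.parse(in);
                System.out.println(doc.getDocumentElement().getNodeName());
            }
        }
    }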

  • Can actionscript handle special characters/han or chinese characters?

    Hi,
    I am having an issue with a Flash movie I created: it can't handle Chinese characters. Is there some way I can handle this through code, or does a font or language pack need to be installed?
    Thanks so much for the help.

    Hi,
    I already embedded the fonts, and I changed the encoding of my XML to GB2312.
    I placed Chinese characters in the node, but it did not render any Chinese characters; instead, the movie is not rendered properly.
    Thanks.

  • Is there any other way to handle special characters other than using CDATA?

    I am using the Xerces parser to parse data containing special characters, which also include other ASCII (control) characters. I tried using a CDATA section, but the problem persists.
    It would be really helpful if anyone can help me in solving this problem.
    Error encountered :
    org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0xf) was found in the CDATA section.
    The XML I use also contains junk characters. Have a look at the following:
    <?xml version='1.0' encoding='UTF-8' ?>
    <IMAGE_RESPONSE xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:noNamespaceSchemaLocation='ImageResponse.xsd'>
    <IMG_TYPE>PNG</IMG_TYPE>
    <IMG_WIDTH>650</IMG_WIDTH>
    <IMG_HEIGHT>250</IMG_HEIGHT>
    <IMAGE_DATA>
    <IMGKEY>20020827:00000000:100000000010:02:</IMGKEY>
    <IMAGE_INFO>This is image info</IMAGE_INFO>
    <IMG_SOURCE>DCE_CIMS</IMG_SOURCE>
    <FRONT_IMG_FBW><![CDATA[�����&�J�Z�R��]]></FRONT_IMG_FBW>
    <FBW_ERROR>B</FBW_ERROR>
    <FRONT_IMG_FGS>C</FRONT_IMG_FGS>
    <FGS_ERROR>D</FGS_ERROR>
    <BACK_IMG_BBW>E</BACK_IMG_BBW>
    <BBW_ERROR>D</BBW_ERROR>
    <BACK_IMG_BGS>A</BACK_IMG_BGS>
    <BGS_ERROR>Unable to retrieve Back Gray-Scale image</BGS_ERROR>
    </IMAGE_DATA>
    </IMAGE_RESPONSE>

    java.net.URLEncoder.encode( text )
    I've found this to be a pretty easy way to handle invalid characters...
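    A tiny illustration of that suggestion (the sample string is made up): percent-encode the raw data so that only URL-safe ASCII ends up inside the XML element, and decode it again on the receiving side.
    import java.net.URLDecoder;
    import java.net.URLEncoder;

    public class EncodeForXml {
        public static void main(String[] args) throws Exception {
            String raw = "binary-ish payload \u000f with control characters";
            // Percent-encode so only URL-safe ASCII characters remain.
            String safe = URLEncoder.encode(raw, "UTF-8");
            // The receiver decodes it back to the original characters.
            String roundTripped = URLDecoder.decode(safe, "UTF-8");
            System.out.println(safe);
            System.out.println(raw.equals(roundTripped)); // true
        }
    }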

  • Handle special characters in JSP

    Hi all,
    I am getting a problem when I copy some text from a Word file and paste it into my form's textarea. If the selected text contains characters like " or ', e.g. (Hello “Everybody”.), then after submitting it is replaced with a ? mark. How do I handle this?
    Thanks

    You need to set the correct character encoding (charset) and use the same one throughout the whole process.
    This is an excellent read: [http://www.joelonsoftware.com/articles/Unicode.html].
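    A minimal sketch of keeping the encoding consistent on the server side, assuming a servlet container and UTF-8 (the class name is arbitrary; the JSP page encoding, the HTML content type and the database connection must use the same charset as well):
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import java.io.IOException;

    // Forces one charset for both the submitted form data and the generated page,
    // so characters pasted from Word survive the round trip instead of becoming '?'.
    public class Utf8Filter implements Filter {
        public void init(FilterConfig config) throws ServletException { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            request.setCharacterEncoding("UTF-8");
            response.setCharacterEncoding("UTF-8");
            chain.doFilter(request, response);
        }

        public void destroy() { }
    }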

  • Handle special characters in xml

    Hi,
    Our end users tend to copy description text from Word documents into the PDF form and submit it.
    If that text contains any special characters, they are carried into the extracted XML. In the next step, when I try to assign a task to a user with the template and this XML, the managers cannot open the form and an error is shown. When I assign XML without special characters, it runs fine.
    Please advise on how to handle this.
    My expectation is that the user should be prompted in the form when he pastes any special characters, or that they should be auto-corrected to null values. If that is not possible, at least we should be able to filter the XML and eliminate the special characters before the form goes to the next stage.
    Appreciate your help.
    Thanks,
    Krishna

    In the first instance, I would have followed this approach:
    http://www.dvteclipse.com/documentation/svlinter/How_to_use_special_characters_in_XML.3F.html
    so I would have parsed the submitted text in a Validate event and changed any special chars to UTF-8 numeric references.
    However, I found this:
    http://blog.mark-mclaren.info/2007/02/invalid-xml-characters-when-valid-utf8_5873.html
    which seems to state that not all UTF-8 characters are possible in XML.
    In fact, those allowed are listed here:
    http://www.w3.org/TR/2000/REC-xml-20001006#NT-Char
    so I would still use a Validate event script, but based on the XML spec's character range, exactly as Mark McLaren did in Java; a sketch of that filter follows below.
    This will permit keeping those special chars that are allowed. Your managers will thank you.
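    Here is a minimal Java sketch of that character-range filter. It is an illustration written against the W3C Char production linked above, not the code from the blog post:
    public class XmlCharFilter {
        // Keeps only characters allowed by the XML 1.0 "Char" production:
        // #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
        public static String stripInvalidXmlChars(String in) {
            StringBuilder out = new StringBuilder(in.length());
            for (int i = 0; i < in.length(); ) {
                int cp = in.codePointAt(i);
                boolean allowed = cp == 0x9 || cp == 0xA || cp == 0xD
                        || (cp >= 0x20 && cp <= 0xD7FF)
                        || (cp >= 0xE000 && cp <= 0xFFFD)
                        || (cp >= 0x10000 && cp <= 0x10FFFF);
                if (allowed) {
                    out.appendCodePoint(cp);
                }
                i += Character.charCount(cp);
            }
            return out.toString();
        }

        public static void main(String[] args) {
            // The 0x19 control character (see the JDBC thread above) is simply dropped.
            System.out.println(stripInvalidXmlChars("10/14\u0019/P7")); // prints 10/14/P7
        }
    }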
    Hope it helps.

  • RTFEditorKit: Incorrect handling of characters with caron (bug?)

    Hi,
    some characters with carons are handled incorrectly when converting RTF to plain text,
    e.g. ř, č and ě are converted to ø, è and ì.
    š is handled correctly.
    I'm using this sample code:
    import org.apache.commons.io.FileUtils;
    import javax.swing.text.DefaultStyledDocument;
    import javax.swing.text.rtf.RTFEditorKit;
    import java.io.File;
    import java.io.FileInputStream;

    public class TestRtf {
        public static void main(String[] args) throws Exception {
            FileInputStream input = new FileInputStream("c:/temp/test.rtf");
            DefaultStyledDocument sd = new DefaultStyledDocument();
            RTFEditorKit kit = new RTFEditorKit();
            kit.read(input, sd, 0);
            String content = sd.getText(0, sd.getLength());
            input.close();
            // Print the hex code of each extracted character.
            for (int i = 0; i < content.length(); i++) {
                System.out.println(Integer.toHexString(content.charAt(i)));
            }
            FileUtils.writeStringToFile(new File("c:/temp/test.txt"), content, "UTF-8");
        }
    }
    I've tested with jdk1.5 and 1.6.
    Thanks for you help!
    Matthiax
    Edited by: matthiax on Aug 10, 2009 3:40 AM

    I'm viewing the result with a UTF-8 compliant editor (Notepad++).
    Besides, I've checked the character codes (see the example code).
    The result code for ř was 00F8, but it should be 0159.
    Remark:
    Lius (a Lucene extension that indexes RTF documents, among others) does it the same way:
    DefaultStyledDocument sd = new DefaultStyledDocument();
    RTFEditorKit kit = new RTFEditorKit();
    kit.read(getStreamToIndex(), sd, 0);
    content = sd.getText(0, sd.getLength());
