Encoding Problem: non-Unicode Data to Unicode format of XI

Hi SDN,
I have a JDBC sender to SAP BW scenario. The database is MS SQL Server.
The code page of the database is CP1CIAS.
Description: SQL Server Sort Order 52 on Code Page 1252 for non-Unicode data.
Some fields with values like ZAK&#x0;ADY TWORZYW SZTUCZNYCH are failing in XI mapping with the error
Fatal Error: com.sap.engine.lib.xml.parser.Parser~
XMLParser: #0 not allowed in Character data sections
in the trace.
Please advise how I can get past these code page errors. Would installing this code page on the XI server help?

There is no such global setting; I suspect this is because your source is non-Unicode while XI is Unicode. The only other thing to try would be this:
Arthur My Blog
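
The #0 is a NUL byte, which XML 1.0 forbids, so the parse fails before any mapping logic runs. If you can, clean it at the source, for example with something like REPLACE(col, CHAR(0), '') in the sender adapter's SELECT statement (hedged: behavior with CHAR(0) can depend on the driver and collation). If you ever need the same filter on the Java side (e.g. in an adapter module or Java mapping), a minimal sketch follows; the method name is illustrative, not a standard XI API:

    // Illustrative helper: drop code points that XML 1.0 forbids in character
    // data (everything below 0x20 except tab, LF and CR).
    public static String stripInvalidXmlChars(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c == '\t' || c == '\n' || c == '\r' || c >= 0x20) {
                out.append(c);
            }
        }
        return out.toString();
    }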

Similar Messages

  • Importing non-unicode data into unicode 10gR2 database

    Hi:
I will have to import non-Unicode data into a Unicode 10gR2 database. The systems the data is coming from are the following: CODA, Timberline, COMMS, CMS, LIMS. These are all SQL-enabled RDBMS systems. We are talking about pretty big amounts of data (a couple hundred GB combined).
Did anybody go through a similar exercise?
I know I'll have to set NLS_LENGTH_SEMANTICS to CHAR.
    What other recommendations could you guys give?
    TIA,
    Greg

    I think "nls_length_semantics" isn't mandatory at this point, and you must extract a little quantity of information from every source and do some probes injecting them into the Oracle10g database.

  • Extraction of Unicode data from Unicode R/3 to NON Unicode BW

    Hello experts!
We want to extract data from a Unicode source system into a non-Unicode BW. Now we have problems with Japanese signs (kanji). Our plan was to check against RSKC and delete the Unicode texts this way. This is OK for our people as a workaround.
But we don't even get our data into the PSA; we get conversion errors there.
Is there a way to handle this? The only solution we know of at the moment is to delete potentially problematic values in the extractor customer exit. But a BW-side solution would be better.
Switching the BW to Unicode is not an option at the moment.
    Best regards,
    Peter

    Hello Siggi!
Thank you for this hint. I had seen this document before but was not sure how it could help. I will look at it again tomorrow and talk to our SAP Basis team.
My performance issue is still open. I tried different things, but nothing brought a real change for the better, so I have now opened an SAP message. Let's see what it brings.
Oh, we produce no monitor entries, by the way.
What is a little bit crazy is that in SM50 it always reads on DIM6, while we have a DIM7 which has about 40 percent of the data compared to the 15 percent now on DIM6. Very bad design, but that also depends on the kind of data in this case.
    Best regards,
    Peter

• Problem in displaying data in Excel format

    Hello,
This is Dasaradh. Here I am having a problem while generating an Excel report. I am generating the Excel file using the following code:
    response.setContentType("application/vnd.ms-excel");
    response.setHeader("Content-Disposition", "attachment; filename=myfile.xls");
It generates the Excel file, but I have a small problem: I would like to display numbers (times) as four digits. If I display 0021 it shows 21, which is wrong. Here I am using rs.getString("time") --> the value is '0021'.
But while displaying rs.getString("time") in Excel format it displays 21.
Can anyone please help me with this issue?
    Thanking You,
    Dasaradh.

    what does this have to do with EJBs?
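
    Setting the EJB quip aside: the leading zeros disappear because Excel parses '0021' as the number 21. A hedged workaround, given that this kind of .xls response is really an HTML table that Excel parses: emit the cell with the mso-number-format style so Excel keeps it as text. A sketch, where rs and the servlet wiring are assumed from the original post:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.http.HttpServletResponse;

    public class ExcelTextCells {
        static void writeTimes(HttpServletResponse response, ResultSet rs)
                throws IOException, SQLException {
            response.setContentType("application/vnd.ms-excel");
            response.setHeader("Content-Disposition", "attachment; filename=myfile.xls");
            PrintWriter out = response.getWriter();
            out.println("<table>");
            while (rs.next()) {
                // mso-number-format:'\@' marks the cell as Text,
                // so the leading zeros of '0021' are preserved
                out.println("<tr><td style=\"mso-number-format:'\\@'\">"
                            + rs.getString("time") + "</td></tr>");
            }
            out.println("</table>");
        }
    }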

  • Date and Amount format in BDC

    Hi friends,
I am facing a problem with date and currency formats while uploading data through BDC.
Please tell me how to handle problems related to date and amount formats.
Thanks in advance

    Dear Punit,
This is a common problem when carrying out BDC.
For example, suppose while recording you encounter the two fields MSEG-DMBTR (currency) and MSEG-MENGE (quantity).
Go to SE11 --> MSEG --> search for DMBTR --> double-click on the data element, i.e. DMBTR --> double-click on the domain WERTZ --> under the block Output Characteristics you can see: Output Length = 16.
While declaring the TYPES structure, make it CHAR(16).
Similarly for MENGE --> data element MENGE_D --> domain MENGE13 --> Output Length = 17.
Make it CHAR(17).
For date fields ---> CHAR(10).
Consider this technique the rule of thumb when doing BDC.
    Regards,
    Abir

  • SAP XI 2, JDBC Inbound Adapter. Non Unicode data problem.

    Hi All,
As far as I understand, the data from the inbound JDBC adapter in XI 2 should be in Unicode format. Is it possible to accept non-Unicode data using this adapter? If the answer is yes, then how do I specify the correct data encoding?
    Best Regards.
    Victor.

    Hi,
Yes, technically XMB is aware of the encoding and not the adapter, but when you read data from a file you can tell the adapter that the file is in a specific encoding, e.g. file.encoding = "ISO-8859-5".
This value is used by the adapter to create the XML file which is passed to XMB. By default the passed XML is UTF-8 encoded.
So I need a similar setting for the JDBC adapter, and my question is whether I can do that.
    Best Regards.
    Victor.
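
    I am not aware of a documented encoding setting on the XI 2 JDBC adapter analogous to file.encoding. A hedged workaround at the Java level is to fetch the column as raw bytes and decode them explicitly with the database code page; whether getBytes returns undecoded code-page bytes depends on the driver, and the URL, table and column below are assumptions:

    import java.sql.*;

    public class ReadCp1252Column {
        public static void main(String[] args) throws Exception {
            // assumed connection details; adjust to the real MS SQL Server source
            try (Connection c = DriverManager.getConnection(
                     "jdbc:sqlserver://host:1433;databaseName=mydb", "user", "pw");
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery("SELECT name FROM customers")) {
                while (rs.next()) {
                    byte[] raw = rs.getBytes(1);   // column bytes before driver decoding
                    // decode explicitly with the database code page (CP1252 here)
                    String name = (raw == null) ? null : new String(raw, "Cp1252");
                    System.out.println(name);
                }
            }
        }
    }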

  • Unicode data in non-utf8 oracle 8.1.7

    Hi,
I have to migrate Unicode data from a UTF-8 Oracle 9.0.2 database to a non-UTF8 Oracle 8.1.7. The tables are small, and I am reading and writing into the database using Java code. The columns which contained the Unicode data have been made NCHAR in Oracle 8.1.7.
When I try to insert the data, I get the error:
java.sql.SQLException: ORA-12704: character set mismatch
Can I have Unicode data stored in NCHAR columns in a non-UTF8 database?
Is there any documentation available on this?
    Thanks,
    Shipra

    Check out the Oracle Unicode Database Support paper on OTN - http://technet.oracle.com/tech/globalization/content.html
Basically, NCHAR prior to Oracle9i cannot be Unicode. If you need to store Unicode data in 8.1.7, you need to use UTF8 as the database character set.
    Nat

• Export Data in TSV format (International / Unicode characters)

    Hi OAF Gurus,
Question: how do I export data in TSV format?
I am working in JDev 10g, OA Framework 12.0.6.
I have a page which has an OA Table populated with data; I need to export this data in TSV (tab-separated values) format.
I know that we can add an "Export" button, and then the data is exported in CSV (comma-separated values) format.
The question is how to save in TSV format instead.
Reason for using TSV format: our data contains Unicode characters (Asian characters), which CSV files will not support. (FYI: Oracle Forms exports in TSV format.)
Thanks in advance,
Chaitanya

1) Use class CL_ABAP_CONV_OUT_CE,
2) or open the file on the application server:
  OPEN DATASET file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE p_code.
(for the code page, look in table TCP00!)
grx
A.
    Edited by: Andreas Mann on Nov 8, 2010 11:23 AM
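
    The reply above is the ABAP-side approach; for the OAF/servlet side of the question, a minimal sketch of streaming tab-separated Unicode output that Excel can open, using the same x-UTF-16LE-BOM trick that appears in the Excel-export thread further down this page. All class and method names are illustrative:

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import javax.servlet.http.HttpServletResponse;

    public class TsvExport {
        static void export(HttpServletResponse response, String[][] rows) throws IOException {
            response.setContentType("application/vnd.ms-excel");
            response.setHeader("Content-Disposition", "attachment; filename=export.tsv");
            // x-UTF-16LE-BOM writes the byte order mark Excel needs to detect Unicode
            Writer w = new OutputStreamWriter(response.getOutputStream(), "x-UTF-16LE-BOM");
            for (String[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    if (i > 0) w.write('\t');     // tabs, not commas
                    w.write(row[i]);
                }
                w.write("\r\n");
            }
            w.flush();
        }
    }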

  • Question around UTL_FILE and writing unicode data to a file.

    Database version : 11.2.0.3.0
    NLS_CHARACTERSET : AL32UTF8
    OS : Red Hat Enterprise Linux Server release 6.3 (Santiago)
I did not work with multiple language characters and manipulating them. So, the basic idea is to write UTF-8 data as a Unicode file using UTL_FILE. This is a fairly empty database and does not have any rows in at least the tables I am working on. First I inserted a row with English characters in the columns.
I used utl_file.fopen_nchar to open the file and utl_file.put_line_nchar to write it to the file on the Linux box. When I open the file I still see English characters (say "02CANMLKR001").
Now, I updated the row with some columns having Chinese characters and ran the same script. It wrote the file. Now when I "vi" the file, I see "02CANè¹æ001" (some Unicode symbols in place of the Chinese characters, plus the regular English).
When I FTP the file to Windows and open it using Notepad/Notepad++ it still shows the Chinese characters. Using TextPad, it shows ? in place of the Chinese characters, and the file properties say that the file is of type UNIX/UTF-8.
My question: "Is my code working and writing the file in Unicode? If not, what are the required changes?" -- I know the question is a little vague, but any questions/suggestions towards answering it would really help.
    sample code:
DECLARE
   l_file_handle   UTL_FILE.file_type;
   l_file_name     VARCHAR2 (50) := 'test.dat';
   l_rec           NVARCHAR2 (250);   -- NVARCHAR2 to match put_line_nchar
BEGIN
   -- open in national character set mode for Unicode output
   l_file_handle := UTL_FILE.fopen_nchar ('OUTPUT_DIR', l_file_name, 'W');
   SELECT col1 || col2 || col3 INTO l_rec FROM table_name;
   UTL_FILE.put_line_nchar (l_file_handle, l_rec);
   UTL_FILE.fclose (l_file_handle);
END;

    Regardless of what you think of my reply I'm trying to help you.
    I think you need to reread my reply because I can't find ANY relation at all between what I said and what you responded with.
    I wish things are the way you mentioned and followed text books.
    Nothing in my reply is related to 'text books' or some 'academic' approach to development. Strictly based on real-world experience of 25+ years.
    Unfortunately lot of real world projects kick off without complete information.
No disagreement here - but totally irrelevant to anything I said.
Till we get the complete information, it's better to work on the idea rather than wasting project hours. I don't think it can work that way. All we do is lay the ground preparation and toy around with multiple options for the actual coding even when we do not have the exact requirements.
No disagreement here - but totally irrelevant to anything I said.
    And I think it's a good practice rather than waiting for complete information and pushing others.
    You can't just wait. But you also can't just go ahead on your own. You have to IMMEDIATELY 'push others' as soon as you discover any issues affecting your team's (or your) ability to meet the requirements. As I said above:
    Your problems are likely:
    1. lack of adequate requirements as to what the vendor really requires in terms of format and content
    2. lack of appropriate sample data - either you don't have the skill set to create it yourself or you haven't gotten any from someone else.
    3. lack of knowledge of the character sets involved to be able to create/conduct the proper tests
    If you discover something missing with the requirements (what character sets need to be used, what file format to use, whether BOMs are required in the file, etc) you simply MUST bring that to your manager's attention as soon as you suspect it might be an issue.
    It is your manager's job, not yours, to make sure you have the tools needed to do the job. One of those tools is the proper requirements. If there is ANYTHING wrong, or if you even THINK something is wrong with those requirements it is YOUR responsibility to notify your manager ASAP.
    Send them an email, leave a yellow-sticky on their desk but notify them. Nothing in what I just said says, or implies, that you should then just sit back and WAIT until that issue is resolved.
If you know you will need sample data you MUST make sure the project plan includes SOME means to obtain sample data within the timeline needed by your project. As I repeated above, if you don't have the skill set to create it yourself someone else will need to do it.
    I did not work with multiple language characters and manipulating them.
    Does your manager know that? If the project requires 'work with multiple language characters and manipulating them' someone on the project needs to have experience doing that. If your manager knows you don't have that experience but wants you to proceed anyway and/or won't provide any other resource that does have that experience that is ok. But that is the manager's responsibility and that needs to be documented. At a minimum you need to advise your manager (I prefer to do it with an email) along the following lines:
    Hey - manager person - As you know I have little or no experience to 'work with multiple language characters and manipulating them' and those skills are needed to properly implement and test that the requirements are met. Please let me know if such a resource can be made available.
And I'm serious about that. Sometimes you have to make your manager do their job. That means you ALWAYS need to keep them advised of ANY issue that might affect the project. Once your manager is made aware of an issue it is then THEIR responsibility to deal with it. They may choose to ignore it, pretend they never heard about it or actually deal with it. But you will always be able to show that you notified them about it.
    Now, I updated the row with some columns having Chinese characters and ran the same script.
    Great - as long as you actually know Chinese that is; and how to work with Chinese characters in the context of a database character set, querying, creating files, etc.
    If you don't know Chinese or haven't actually worked with Chinese characters in that context then the project still needs a resource that does know it.
    You can't just try to bluff your way through something like character sets and code conversions. You either know what a BOM (byte order mark) is or you don't. You have either learned when BOMs are needed or you haven't.
    That said, we are in process of getting the information and sample data that we require.
    Good!
    Now make sure you have notified your manager of any 'holes' in the requirements and keep them up to date with any other issues that arise.
    NONE of the above suggests, or implies, that you should just sit back and wait until that is done. But any advice offered on the forums about specifics of your issue (such as whether you need to even worry about BOMs) is premature until the vendor or the requirements actually document the precise character set and file format needed.
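
    Returning to the technical question itself: the vi output is consistent with a valid UTF-8 file being displayed on a non-UTF-8 terminal, and Notepad++ showing the Chinese characters points the same way. As a hedged, editor-independent check (the path is an assumption), you can read the file back and decode it as UTF-8:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class VerifyUtf8 {
        public static void main(String[] args) throws IOException {
            // assumed path to the file written by UTL_FILE
            byte[] raw = Files.readAllBytes(Paths.get("/output_dir/test.dat"));
            // if this prints the Chinese characters correctly in a UTF-8 terminal,
            // the file itself is valid UTF-8 and vi's display was the problem
            String text = new String(raw, StandardCharsets.UTF_8);
            System.out.println(text);
        }
    }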

  • Export unicode data to Excel from Servlet

    Hi,
In my application we export some Unicode data to Microsoft Excel 2003 from a servlet.
For that I am using the following code:
    response.setHeader( "Content-Disposition", "attachment; filename=results.xls" );
    response.setContentType( "text/xls" );
    theHeader.append("\u30ec\u30dd\u30fc\u30c8 \u30bf\u30a4\u30c8\u30eb");
The above Unicode data (Japanese characters) is not displayed properly in Excel (it shows as "?"); other non-Unicode content displays properly.
Can anyone advise me?

I was having problems writing a Unicode (UTF-8 or UTF-16) CSV Excel file, and found a solution in the end; I'll post it here in case anyone else is looking.
You need to use the funky encoding x-UTF-16LE-BOM.
An example of how to do this:
Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(outputFile, true), "x-UTF-16LE-BOM"));
Then instead of using commas (,) use tabs (\t).
Then in my web.xml I have:
<mime-mapping>
<extension>csv</extension>
<mime-type>application/msexcel</mime-type>
</mime-mapping>
This site is a good reference:
http://members.chello.at/robert.graf/CSV/
Hope this helps someone! =] I am using Excel 2007; it may or may not work with earlier versions.

  • Problems with string encoding - need the text content in char* format.

The problem is non-ASCII characters, which come out as some sort of Unicode I need to decipher.
    Here's what I got:
    A text frame object with the TextString "Agnartjørna"
    I get the text content of this object into an ai::UnicodeString the following way:
AIErr
VMGetTextOfTextArt( AIArtHandle textArt, ai::UnicodeString &ucStr)
{
    ASUnicode *textBuffer = NULL;
    AITRY {
        TextFrameRef ateTextRef;
        AIX( sAITextFrame->GetATETextFrame( textArt, &ateTextRef));
        ATE::ITextFrame ateText( ateTextRef);
        ATE::ITextRange ateRange = ateText.GetTextRange( true);
        ASInt32 textLen = ateRange.GetSize();
        AIX( sSPBlocks->AllocateBlock( (textLen+2) * sizeof( ASUnicode), nil, (void**) &textBuffer));
        ateRange.GetContents( textBuffer, (ASInt32) textLen+1);
        /* trim off trailing newlines */
        if ((textBuffer[textLen] == '\n') || (textBuffer[textLen] == '\r'))
            textBuffer[textLen] = 0;
        ucStr.clear();
        ucStr.append( ai::UnicodeString( textBuffer, textLen));
        sSPBlocks->FreeBlock( textBuffer);
        textBuffer = NULL;
        AIRETURN;
    } AICATCH {
        if (textBuffer) sSPBlocks->FreeBlock( textBuffer);
        AIPROPAGATE;
    }
}
Now, the next step is to convert it into a form that I can use to call regexp.
Basically, I want to detect the ending "tjørna" (meaning small lake) on a map label, and apply the standard abbreviation "tj^a" (with "a" superscripted).
So the problem is to obtain the regexp pattern and the text content in the same encoding. And since the regexp library is old char*-based, I would like to convert the text content into a plain old char*.
    Hence the following code:
static AIErr
VMAbbreviateTextArt( AIArtHandle textArt,
                         vmTextAbbrevEffectParams *params)
{
    AITRY {
        /* first obtain the text contents of the textArt */
        ai::UnicodeString ucText;
        const int kTextLen = 256;
        char textContent[kTextLen];
        AIX( VMGetTextOfTextArt( textArt, ucText));
        ucText.as_Roman( textContent, kTextLen);
        /* ... excerpt continues ... */
But textContent now has the value "Agnartj\xbfnna" (according to Xcode),
which will not match the pattern "tj([øe][rn])na\\" (with the backslash matching the end of the string).
Are there any other ways to convert the textContent to a plain char* string?

Thank you very much, your method will work fine. With the "UTF-8" parameter the byte[].length is double, because every valid byte is preceded by a -62, but I will just filter the valid bytes into a new array.
Thanks again,
Stefan

Actually, what you need to do is find the character encoding that your device expects, and then you can encode your strings in Arabic.
That's the way Java does things; Strings and char values are always in Unicode (see www.unicode.org), which means \u0600 to \u06FF for Arabic, and Java uses a specified character encoding when translating these to and from a byte stream.
Each national character encoding has a name. Most of them are identical to ASCII for 0-127 and code their national characters in 128-255.
Find the encoding name for your display and, odds are, the JRE has it in the library.
BTW, the character encoding ISO-8859-1 simply maps Unicode characters 0-255 onto bytes.
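
    To make the named-encodings point concrete, a small hedged example; ISO-8859-6 is the JRE's standard name for the 8-bit Arabic encoding:

    public class ArabicRoundTrip {
        public static void main(String[] args) throws Exception {
            String s = "\u0645\u0631\u062d\u0628\u0627";  // Arabic "marhaba"
            byte[] iso = s.getBytes("ISO-8859-6");        // one byte per character
            byte[] utf8 = s.getBytes("UTF-8");            // two bytes per Arabic character
            System.out.println(iso.length + " vs " + utf8.length);        // 5 vs 10
            System.out.println(new String(iso, "ISO-8859-6").equals(s));  // true
        }
    }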

  • LSMW: Codepage conversion error with a Unicode data file

    Hi all,
I am currently developing an LSMW upload program which has to use a Unicode data file. The underlying/target system is NOT a Unicode system. The data file also contains non-Latin-2 characters.
    In the step "Specify Files", I have specified my Unicode data file and specified the codepage type "4110 - Unicode UTF-8".
    In the step "Read Data", then I get the runtime error "CONVT_CODEPAGE", exception "CX_SY_CONVERSION_CODEPAGE".
I would expect all characters that cannot be converted to be transformed automatically to "#", but the conversion program aborts. The character transformation to "#" would be fine.
I am really wondering why I am able to specify the Unicode codepage type at first, but then the file cannot be converted correctly.
What am I doing wrong, and what can I do to avoid the error?
    Thanks a lot in advance for helping me out...
    Regards,
    Klaus

    Hello,
You need to convert the file to UTF-8 format. In Notepad you can choose this option in the Save As dialog.
    Regards,
    Oscar.
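
    If Notepad is impractical (for example, very large files), a minimal sketch of the same conversion in Java; the file names and the windows-1252 source encoding are assumptions:

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class ToUtf8 {
        public static void main(String[] args) throws IOException {
            // assumed: input.txt is Windows-1252; adjust to the file's real encoding
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(new FileInputStream("input.txt"), "windows-1252"));
                 Writer out = new BufferedWriter(
                     new OutputStreamWriter(new FileOutputStream("output.txt"),
                                            StandardCharsets.UTF_8))) {
                char[] buf = new char[8192];
                int n;
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            }
        }
    }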

  • Peoplesoft convert Oracle non-unicode database to unicode database

I am following Doc 1437384.1 to convert a PeopleSoft database from a non-Unicode database to a Unicode database.
    I use the following export statement (as user PS)
    SET NO TRACE;
    SET OUTPUT output_file.dat;
    SET NO DATA;
    EXPORT *;
    And the following import statement (as user sysadm)
    SET NO TRACE;
    SET NO DATA;
    SET INPUT output_file;
    SET LOG log_file;
    SET UNICODE ON;
    SET STATISTICS OFF;
    SET ENABLED_DATATYPE 9.0;
    IMPORT *;
Before I do the datapump import, I am comparing the objects. On the source database:
SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;

OBJECT_TYPE   COUNT(*)
INDEX            33797
LOB               2775
TABLE            28829
TRIGGER              9
VIEW             21208
    on oracsc63 (targetdb):
SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;

OBJECT_TYPE   COUNT(*)
INDEX            23748
LOB               2170
TABLE            19727
I don't have the same number of objects. When I do the import, this means that around 10000 tables will not have the UTF-8 format.
Any ideas how I can solve this? Who has experience with these PeopleSoft conversions?

    Hello Jacques,
please check SAP Note #808505 (Secondary connection to Oracle DB w/ different character set).
    Regards
    Stefan
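
    To see exactly which SYSADM tables were skipped, one option is to diff the table lists of the two databases; a minimal JDBC sketch, where the connection URLs and credentials are placeholders:

    import java.sql.*;
    import java.util.*;

    public class DiffSysadmTables {
        static Set<String> tables(String url, String user, String pw) throws SQLException {
            Set<String> names = new TreeSet<>();
            try (Connection c = DriverManager.getConnection(url, user, pw);
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT table_name FROM all_tables WHERE owner = 'SYSADM'");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) names.add(rs.getString(1));
            }
            return names;
        }

        public static void main(String[] args) throws SQLException {
            // assumed connection details -- replace with the real source/target
            Set<String> source = tables("jdbc:oracle:thin:@srchost:1521/SRCDB", "system", "pw");
            Set<String> target = tables("jdbc:oracle:thin:@oracsc63:1521/TGTDB", "system", "pw");
            source.removeAll(target);           // tables present only in the source
            source.forEach(System.out::println);
        }
    }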

• SQL*Loader error while loading Arabic data into a Unicode database

    Hi,
I was trying to load Arabic data using SQL*Loader, and the data file is in .CSV format. But I am facing a "value too large for column" error while loading, and some rows are not loaded due to this error. My target database character set is:
Character set: AL32UTF8
National character set: AL16UTF16
DB version: 10g Release 2
OS: CentOS 5.0 / Red Hat Linux 5.0
I have specified the character sets AR8MSWIN1256/AR8ISO8859P6/AL32UTF8/UTF8 separately in the SQL*Loader control file, but I get the same error in all cases.
I have also created the table with CHAR semantics and specified "LENGTH SEMANTICS CHAR" in the SQL*Loader control file, but again the same error occurs.
I have also changed the NLS_LANG setting.
What stuns me is that the data I am going to load using SQL*Loader resides in the same database itself. But when I generate a CSV of that data and try to load it into the same database and the same table structure using SQL*Loader, I get this "value too large for column" error.
What is the problem, basically? Is the data file problematic, since I generate the CSV programmatically, or is there a problem in my approach to loading Unicode data?
    Please help...

    Here's what we know from what you've posted:
    1. You may be running on an unsupported operating system ... likely not the issue but who knows.
    2. You are using some patch level of 10gR2 of the Oracle database but we don't know which one.
    3. You've had some kind of error but we have no idea which error or the error message displayed with it.
    4. You are loading data into a table but we do not have any DDL so we do not know the data types.
    Perhaps you could provide a bit more information.
    Perhaps a lot more. <g>
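
    Since "value too large for column" with AL32UTF8 is usually byte-versus-character expansion (Arabic characters take two bytes in UTF-8), a quick probe of the CSV can locate the offending rows. A hedged sketch; the file name, the comma delimiter, and the 20-byte limit are assumptions:

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class FindWideRows {
        public static void main(String[] args) throws IOException {
            int limitBytes = 20;                 // assumed column byte limit, e.g. VARCHAR2(20)
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                     new FileInputStream("data.csv"), StandardCharsets.UTF_8))) {
                String line;
                int rowNum = 0;
                while ((line = in.readLine()) != null) {
                    rowNum++;
                    String[] fields = line.split(",", -1);   // naive CSV split, no quoting
                    for (int i = 0; i < fields.length; i++) {
                        int bytes = fields[i].getBytes(StandardCharsets.UTF_8).length;
                        if (bytes > limitBytes)              // flag byte-expanded fields
                            System.out.printf("row %d field %d: %d chars, %d bytes%n",
                                              rowNum, i + 1, fields[i].length(), bytes);
                    }
                }
            }
        }
    }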
