Migrating Multi-Byte Characters

When migrating from Access 2000, all multi-byte characters are converted into single-byte characters. The database is running UTF8.
Has anyone done this before?
Thanks

#1 should return you the encoded string.
#2 should decode the string and return the correct characters.
If it doesn't, it's probably because the string was improperly
encoded.
#3 should cause #1 to do the same as #2, but you have to set
the property before JavaMail classes are loaded.
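
As a rough illustration of the pattern described above (the numbered steps come from a JavaMail discussion and are not quoted here), a minimal sketch assuming #1/#2 map to MimeUtility.encodeText/decodeText and #3 to one of the mail.mime.* system properties; the specific property names are assumptions:
<code>
import javax.mail.internet.MimeUtility;

public class MimeHeaderDemo {
    public static void main(String[] args) throws Exception {
        // Per the note above, set these before any JavaMail class is first used;
        // the property names here are assumptions, not taken from the original thread.
        System.setProperty("mail.mime.charset", "UTF-8");
        System.setProperty("mail.mime.decodetext.strict", "false");

        String original = "日本語のテキスト";
        String encoded = MimeUtility.encodeText(original);  // "#1": RFC 2047 encoded-word form
        String decoded = MimeUtility.decodeText(encoded);   // "#2": back to the original characters

        System.out.println(encoded);
        System.out.println(original.equals(decoded));       // false would indicate improper encoding
    }
}
</code>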

Similar Messages

  • Multi-byte characters are garbled in SQL Server Business Intelligence Development Studio (Visual Studio) 2008

    Hi,
    I'm revising an existing report that was developed by my predecessor. It works fine in the production environment, but when I open the .rdl file with SQL Server Business Intelligence Development Studio (Visual Studio) 2008 on my client
    PC, all the multi-byte characters are garbled. When I open it with BIDS (the same version) on the server, everything displays correctly.
    The font for the controls (labels) is Tahoma, which originally covers only Latin characters, but multi-byte characters are supposed to be displayed in MSGOTHIC via Font Link, as they are displayed correctly on the server.
    Could anyone advise me how to solve this issue? I know I can fix it by changing the fonts from Tahoma to MSGOTHIC for all the controls, but I don't want to do that.
    Environment:
    My PC:Windows7 64bit /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Server:Windows Server 2003 R2 /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Garbled characters sample:
    FontLink - SystemLink
    Please let me know if you need any more information. I would appreciate your advice!

    Hi nino_miya,
    According to your description, characters are garbled when you display the report on the client side.
    In your scenario, please check whether the Language setting is the same as on the production server. Also check whether the Tahoma data in the registry on the client PC is the same as on the server. If those two settings are the same, please manually specify the font of each
    control as MSGOTHIC on the client PC.
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • JDBC 2.0 API and Multi-Byte Characters

    I use the JDBC 2.0 API with the thin Driver816 for JDK 1.2.x.
    It works well with English characters,
    but I get wrong results with multi-byte characters.
    Does anyone know the reason?
    Thanks in advance.

    I have the same problem!!!!!!!!!!!

  • Store Multi Byte Characters in WE8ISO8859P1 Database without Migration

    Hi - I am looking for a solution where I can store multi-byte characters in a WE8ISO8859P1 database.
    Below are the DB NLS_PARAMETERS
    NLS_CHARACTERSET = WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET = AL32UTF8
    NLS_LENGTH_SEMANTICS = BYTE
    Size of DB = 2 TB.
    DB Version = 11.2.0.4
    Currently there is a need to store Chinese characters in the NAME and ADDRESS columns only. Below is the description of the columns.
    Column Name          Data Type
    GIVEN_NAME_ONE       VARCHAR2(120 BYTE)
    GIVEN_NAME_TWO       VARCHAR2(120 BYTE)
    LAST_NAME            VARCHAR2(120 BYTE)
    ADDR_LINE_ONE        VARCHAR2(100 BYTE)
    ADDR_LINE_TWO        VARCHAR2(100 BYTE)
    ADDR_LINE_THREE      VARCHAR2(100 BYTE)
    What are my options here, without migrating the WE8ISO8859P1 DB to AL32UTF8?
    1. Can I increase the size of the columns, i.e. make them n x 4, e.g. NAME becomes 480 bytes and ADDRESS 400 bytes? What are the pros and cons?
    2. Convert the existing columns from VARCHAR2 to NVARCHAR2 with the same size, i.e. NVARCHAR2(120 BYTE)?
    3. Add an extension table with new NVARCHAR2 columns, e.g. NAME as NVARCHAR2(120 CHAR) and ADDRESS as NVARCHAR2(100 CHAR)?
    4. The database has CLOBs, BLOBs, LONGs etc. with varied data. Is it a good idea to migrate to AL32UTF8 with minimal downtime?
    Please suggest the best alternatives. Thanks.
    Thanks
    Jitesh

    Hi Jitesh,
    NLS_NCHAR_CHARACTERSET can only be AL16UTF16 or UTF8, so most likely your DB has UTF8.
    You can definitely insert Unicode characters into N-type columns. The size of an N-type column will depend on the characters you plan to store in it.
    If you use N-types, make sure you use the N'...' syntax when coding, so that literals are marked as being in the national character set by prepending the letter N.
    Although you can use them, N-types are not very well supported in third-party client/programming environments; you may need to adapt a lot of code to use N-types properly, and there are some limitations.
    While using N-types for a few columns may at first seem like a good idea to avoid converting the whole database, in many cases the end conclusion is that changing the NLS_CHARACTERSET is simply the easiest and fastest way to support more languages in an Oracle database.
    So it depends on how much of your data will be Unicode and stored in N-type columns.
    If you have access to My Oracle Support, you can check Note 276914.1, The National Character Set (NLS_NCHAR_CHARACTERSET) in Oracle 9i, 10g, 11g and 12c, for more details.
    With respect to your downtime, the actual conversion (CSALTER, or DMU if you use it) shouldn't take too much time, provided you have run CSSCAN on your DB and made sure you have taken care of all your truncation, convertible and lossy data (if any).
    It would be best to run CSSCAN initially to gauge how much convertible/lossy/truncation data you need to take care of.
    $ CSSCAN FROMCHAR=WE8ISO8859P1 TOCHAR=AL32UTF8 LOG=P1TOAl32UTF8 ARRAY=1000000 PROCESS=2 CAPTURE=Y FULL=Y
    Regards,
    Suntrupth
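
    For options 2 and 3, a minimal JDBC sketch of writing into an NVARCHAR2 column; the connect string, table and column names below are placeholders, and setNString needs a JDBC 4.0 driver (ojdbc6 or later). From SQL*Plus, the equivalent would be the N'...' literal syntax mentioned above.
    <code>
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class NVarchar2Insert {
        public static void main(String[] args) throws Exception {
            // Placeholder connect string and credentials.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "app_pwd")) {
                // CUSTOMER_EXT / GIVEN_NAME_ONE_N are hypothetical names for the
                // extension table and NVARCHAR2 column described in option 3.
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO customer_ext (cust_id, given_name_one_n) VALUES (?, ?)");
                ps.setInt(1, 1001);
                ps.setNString(2, "王小明");   // sent as national character set data
                ps.executeUpdate();
                con.commit();
            }
        }
    }
    </code>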

  • Urgent: comparing multi-byte characters to a single byte character!

    Let's say I have two strings that have the same contents but use different encodings. How do I compare them?
    String a = "GOLD";
    String b = "G O L D ";
    The method a.equals(b) doesn't seem to work.

    try this:
    // Compare a and b character by character, skipping the spaces in b.
    String a = "GOLD";
    String b = "G O L D ";
    boolean bEqual = true;
    int j = 0;
    for (int i = 0; i < a.length(); i++) {
        while (j < b.length() && b.charAt(j) == ' ')   // skip padding characters in b
            j++;
        if (j >= b.length() || a.charAt(i) != b.charAt(j)) {
            bEqual = false;
            break;
        }
        j++;   // advance past the character just matched
    }
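
    If the second string actually contains full-width (multi-byte) letter forms rather than literal spaces, another option is to normalize both sides before comparing. A minimal sketch, assuming the difference is a Unicode compatibility form; the full-width example value is made up:
    <code>
    import java.text.Normalizer;

    public class NormalizeCompare {
        public static void main(String[] args) {
            String a = "GOLD";
            String b = "ＧＯＬＤ";   // full-width forms, shown here as an assumed example
            // NFKC folds compatibility characters (e.g. full-width Latin letters)
            // to their canonical ASCII equivalents before comparison.
            String na = Normalizer.normalize(a, Normalizer.Form.NFKC);
            String nb = Normalizer.normalize(b, Normalizer.Form.NFKC);
            System.out.println(na.equals(nb));   // true
        }
    }
    </code>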

  • DEFECT: (Serious!) Truncates display of data in multi-byte environment

    I have an oracle 10g database set up with the following nls parameters:
    NLS_CALENDAR      GREGORIAN
    NLS_CHARACTERSET      AL32UTF8
    NLS_COMP      LINGUISTIC
    NLS_CURRENCY      $
    NLS_DATE_FORMAT      DD-MON-YYYY
    NLS_DATE_LANGUAGE      AMERICAN
    NLS_DUAL_CURRENCY      $
    NLS_ISO_CURRENCY      AMERICA
    NLS_LANGUAGE      AMERICAN
    NLS_LENGTH_SEMANTICS      CHAR
    NLS_NCHAR_CHARACTERSET      UTF8
    NLS_NCHAR_CONV_EXCP      TRUE
    NLS_NUMERIC_CHARACTERS      .,
    NLS_RDBMS_VERSION      10.2.0.3.0
    NLS_SORT BINARY
    NLS_TERRITORY      AMERICA
    NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
    NLS_TIMESTAMP_TZ_FORMAT      DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_TIME_FORMAT      HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT      HH.MI.SSXFF AM TZR
    I am querying a view in sqlserver 2000 via an odbc database link.
    When I query a 26 character wide column in the view in sql developer, it will only return up to 13 characters of the data.
    When I query the exact same view in the exact same SQL Server database from the exact same Oracle database using the exact same ODBC database link using SQL Navigator, I get the full 26 characters' worth of data.
    It also works just fine from the sql command line tool from 10g express.
    Apparently, sql developer is confused about how to handle multi-byte data. If you ask it the length of the data in the column, it will tell you 26, but it will only show you 13.
    I have found a VERY PAINFUL workaround: do a cast(column_name as varchar2(26)) when I query it. But I've got hundreds of views and queries...

    In all other respects, the settings I have appear to be working correctly.
    I can enter multi-byte characters into the sql worksheet to create a package, save it, and re-open the package with the multi-byte characters still visible.
    I'm using a fallback directory for my jdk with the correct font installed, so I can see and edit multi-byte data in the data grids.
    In this case, I noticed the problem on a column that only contains the standard ascii letters and digits.
    Environment->Encoding = UTF-16
    All the fonts are set to a font that properly displays western and ge'ez characters. The font has been in use for years, and is working correctly in all other circumstances.
    The Database->NLS Parameters tab under sql developer preferences shows:
    language: American
    territory : American
    sort: binary
    comp: binary
    length: char (I've also tried byte)
    If there are other settings that you think might be relevant, please let me know.
    I've done some more testing. I created an oracle table with a single column and did an insert into ... select from statement across the database link. The correct, full-length data appeared in the oracle table.
    So it's not a matter of whether the data is being returned or not; it is. It is simply not being displayed correctly. It appears that SQL Developer is making some unwarranted decisions about the data coming across the database link when it decides to display it, because SQL*Plus and SQL Navigator have no such issues.
    This is really a very serious problem, because if I cannot trust the data the tool shows me, I cannot trust the tool.
    It is also an invitation to make an error based upon the erroneous data display.

  • Problem displaying Japanese/multi-byte characters on WebLogic Server 9.1

    Hi experts
    We are running WebLogic 9.1 on a Linux box [RHEL v4] and trying to display Japanese characters embedded in some of the HTML files, but the Japanese characters are converted into question marks [?]. The HTML files that contain Japanese characters are stored properly in the file system and retain the Japanese characters as they should.
    I changed the character setting in the HTML header to shift_jis, but no luck. Then I added the encoding scheme for shift_jis in the jsp-descriptor and charset-params sections in weblogic.xml, but also no luck.
    I am wondering how I can properly display multi-byte characters/Japanese on weblogic server without setting up internationalization tools.
    I will appreciate for your advice.
    Thanks,
    yasushi

    This was fixed by removing everything except the following files from the original (8.1) domain directory:
    1. config.xml
    2. SerializedSystemIni.dat
    3. *.ldift
    4. applications directory
    Is this a bug in the upgrade tool? Or did I miss a part of the documentation?
    Thanks
    --sony

  • CUSTOM Service - multi Byte character issue

    Hi Experts,
    I wrote a custom service. What it does is read some data from the database and then generate a CSV report. The code is working fine, but if we have multi-byte characters in the data, those characters are not shown properly in the report. Given below is my service code:
    byte[] bytes = CustomServiceHelper.getReport(this.m_binder, providerName);
    DataStreamWrapper wrapper = new DataStreamWrapper();
    wrapper.m_dataEncoding = "UTF-8";
    wrapper.m_dataType = "application/vnd.ms-excel;charset=UTF-8";
    wrapper.m_clientFileName = "Report.csv";
    wrapper.initWithInputStream(new ByteArrayInputStream(bytes), bytes.length);
    this.m_service.getHttpImplementor().sendStreamResponse(m_binder, wrapper);
    NOTE - This code works fine on my local UCM (Windows) for multi-byte characters. But when I install this service on our DEV and Staging servers (Solaris), the multi-byte character issue occurs.
    Thanks in Advance..!!
    Edited by: user4884609 on May 17, 2011 4:12 PM

    Please Help
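
    Since the same code behaves differently on Windows and Solaris, a likely suspect is the JVM default encoding used when the report text is turned into bytes inside getReport. A minimal sketch of forcing UTF-8 at that step, assuming the report is first built as a String by a hypothetical buildCsv() helper (the real getReport source is not shown in the post):
    <code>
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    // Hypothetical variant of the byte-producing step: encode the CSV text explicitly
    // as UTF-8 instead of relying on the platform default encoding, which differs
    // between the Windows and Solaris servers.
    String csv = buildCsv();                               // hypothetical helper returning the report text
    byte[] bytes = csv.getBytes(StandardCharsets.UTF_8);   // explicit charset (Java 7+); on older JVMs use csv.getBytes("UTF-8")

    DataStreamWrapper wrapper = new DataStreamWrapper();
    wrapper.m_dataEncoding = "UTF-8";
    wrapper.m_dataType = "application/vnd.ms-excel;charset=UTF-8";
    wrapper.m_clientFileName = "Report.csv";
    wrapper.initWithInputStream(new ByteArrayInputStream(bytes), bytes.length);
    this.m_service.getHttpImplementor().sendStreamResponse(m_binder, wrapper);
    </code>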

  • How best to send double byte characters as http params

    Hi all
    I have a web app that accepts text that can be in many languages.
    I build up an HTTP string and send the text as parameters to another web server. Hence, whatever text I receive, I need to be able to represent it on an HTTP query string.
    The parameters are sent as urlencoded UTF8. They are decoded by the second webserver back into unicode and saved to the db.
    Occasionally I find a character that I am unable to convert to a UTF-8 string and send as a parameter (usually an SJIS character). When this occurs, the character is encoded as '3F' - a question mark.
    What is the best way to send double byte characters as http parameters so they always are sent faithfully and not as question marks? Is my only option to use UTF16?
    example code
    <code>
    public class UTF8Test {
        public static void main(String[] args) {
            encodeString("\u7740", "%E7%9D%80"); // encoded UTF8 string contains question mark (3F)
            encodeString("\u65E5", "%E6%97%A5"); // this other japanese character converts fine
        }

        private static void encodeString(String unicode, String expectedResult) {
            try {
                // Note: no charset argument in the String constructors below --
                // this is the round-trip problem discussed in the reply.
                String utf8 = new String(unicode.getBytes("UTF8"));
                String utf16 = new String(unicode.getBytes("UTF16"));
                String encoded = java.net.URLEncoder.encode(utf8);
                String encoded2 = java.net.URLEncoder.encode(utf16);
                System.out.println();
                System.out.println("encoded string is:" + encoded);
                System.out.println("expected encoding result was:" + expectedResult);
                System.out.println();
                System.out.println("encoded string16 is:" + encoded2);
                System.out.println();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    </code>
    Any help would be greatly appreciated. I have been struggling with this for quite some time and I can hear the deadline approaching all too quickly
    Thanks
    Matt

    Hi Matt,
    one last visit to the round trip issue:
    in the Sun example, note that UTF8 encoding is used in the method that produces the byte array as well as in the method that creates the second string. This is equivalent to calling:
    String roundTrip = new String(original.getBytes("UTF8"), "UTF8"); // sun example
    Whereas, in your code you were calling:
    String utf8 = new String(unicode.getBytes("UTF8")); // Matt's code
    The difference is crucial. When you call the String constructor without a second (encoding) argument, the default encoding (usually Cp1252) is used. Therefore your code is equivalent to:
    String utf8 = new String(unicode.getBytes("UTF8"), "Cp1252"); // Matt's code
    i.e. you are encoding with one transformation format and decoding back with a different transformation format, so in general you won't get your original string back.
    Regarding safely sending multi-byte characters across the Internet, I'm not completely sure what the situation is because I don't do it myself. (When our program is run as an applet, the only interaction it has with the web server is to download various files). I've seen lots of people on this forum describing problems sending multi-byte characters and I can't tell whether the problem is with the software or with the programming. Two possible methods come to mind (of course you need to find out what your third party software is doing):
    1) use the DataOutput/InputStreams writeUTF/readUTF methods
    2) use the InputStreamReader/OutputStreamWriter pair with UTF8 encoding
    See this thread:
    http://forum.java.sun.com/thread.jsp?forum=16&thread=168630
    You should stick to UTF8. It is designed so that the bytes generated by encoding non-ASCII characters can be safely transmitted across the Internet. Bytes generated by UTF16 can be just about anything.
    Here's what I suggest:
    I am running a version of the Sun tutorial that has a program running on a server to which I can send a string and the program sends back the string reversed.
    http://java.sun.com/docs/books/tutorial/networking/urls/readingWriting.html
    I haven't tried sending multi-byte characters but I will do so and test whether there are any transmission problems. (Assuming that the Sun cgi program itself correctly handles characters).
    More later,
    regards,
    Joe
    P.S.
    I thought one of the reasons for the existence of UTF8 was to
    represent things like multi-byte characters in an ASCII format? Not exactly. UTF8 encodes ASCII characters into single bytes with the same byte values as ASCII encoding. This means that a document consisting entirely of ASCII characters is the same whether it was encoded as UTF8 or ASCII, and can consequently be read in any ASCII document reader (e.g. Notepad).
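
    Putting the reply together, a minimal corrected sketch of the question's encodeString, which URL-encodes the Unicode string directly as UTF-8 instead of round-tripping it through new String(...); the expected %-sequences are the ones from the original post:
    <code>
    import java.net.URLEncoder;

    public class UTF8TestFixed {
        public static void main(String[] args) throws Exception {
            encodeString("\u7740", "%E7%9D%80");
            encodeString("\u65E5", "%E6%97%A5");
        }

        private static void encodeString(String unicode, String expectedResult) throws Exception {
            // Encode the Unicode string as URL-encoded UTF-8 in one step; no need to
            // round-trip through a new String(bytes) constructor at all.
            String encoded = URLEncoder.encode(unicode, "UTF-8");
            System.out.println("encoded string is: " + encoded);
            System.out.println("expected result was: " + expectedResult);
        }
    }
    </code>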

  • Reparse=yes for multi-byte charset

    I am trying to use "include-xsql" with "reparse=yes", but my multi-byte characters become "???". Can anyone give me some hints?
    Thanks in advance!

    Would you show your XSQL? Did you attach any stylesheet?

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle Japanese characters with an Oracle database?
    I have a Java application which retrieves some values from the database; makes some changes to these [ex: change value of status column, add comments to Varchar2 column, etc] and then performs an UPDATE back to the database.
    Everything works fine for English, but NOT for Japanese, which uses multi-byte/Unicode characters. The Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF16 encoding. So there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1- Oracle (Database) side or #2- at the OS (Linux) side?
    I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database, and tried a test insert from SQL*Plus. But SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition.
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the values BEFORE & AFTER Update are the same!
    The problem is that sometimes, the Japanese data in VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?
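
    Since SQL*Plus on an English Windows client converts characters according to NLS_LANG and the console code page, one way to look at the stored bytes without any client-side conversion is to run the same DUMP query over JDBC. A minimal sketch; the connect string is a placeholder, and table_name, col_name and tar_sid are the placeholders already used above:
    <code>
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DumpCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app_user", "app_pwd");   // placeholder
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT tar_sid, DUMP(col_name, 1016) FROM table_name " +
                     "WHERE tar_sid IN ('6997593.880')")) {
                while (rs.next()) {
                    // DUMP shows the stored bytes, independent of any client character
                    // set conversion, so garbling can be pinned to write or read time.
                    System.out.println(rs.getString(1) + " : " + rs.getString(2));
                }
            }
        }
    }
    </code>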

  • Faster way to migrate from Single byte to Multi byte

    Hello,
    We are in the process of migrating from a 9i single-byte DB to a 10g multi-byte DB. The size of our DB is roughly 125 GB. We have fixed everything in the source database (9i) in terms of seamlessly migrating from a single-byte to a multi-byte DB. The only issue is the migration window - currently we are doing an export/import since there is a character set migration involved, and it's taking about 20+ hrs to do the import in 10g. Management wants to cut this down to less than 10 hours, if that's possible. I know the duration of the import depends on many factors like the system/OS configuration, SAN, etc., but I wanted to know what, in theory, is considered the fastest method of migrating a database from single-byte to multi-byte.
    Has anybody here gone through this before?
    Thanks,
    Shaji

    If the percentage of user tables containing some convertible data (I am assuming you will not have any truncation or lossy data) is low, you can export only those tables, truncate them, and rescan the database. This should report no convertible data, except some CLOBs in the Data Dictionary. Such a database can be migrated to AL32UTF8 using csalter.plb. After the migration, you import only the previously exported subset of tables.
    Note, for this process to work, no convertible VARCHAR2, nor CHAR, nor LONG data can be present in the Data Dictionary.
    The process should be refined by dropping and recreating indexes on the exported tables, as recreating an index is faster than updating it during import. You should also disable triggers so that they do not interfere with the migration (for example, they should not update any "last_updated" timestamp columns).
    If the number and size of affected tables is low compared to the overall size of the database, the time saved may be significant.
    There may also be tables that require an even more sophisticated approach. Let's say you have a multi-gigabyte table that stores pictures or documents in a BLOB column. The table also has a single text column that keeps some non-ASCII descriptions of the stored entities. Exporting/truncating/importing such a table may still be very expensive. A possible optimization is to offload the description column to an auxiliary table (together with ROWIDs), update the original column to NULL, export the auxiliary table, drop it, rescan the database, migrate with csalter.plb, re-import the auxiliary table, and restore the original column. If pictures alone occupy, for example, 30% of the whole database, such an approach should yield significant time savings.
    -- Sergiusz

  • Handling Tab Delimited File generation in non-unicode for multi byte lang

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems generating the tab-delimited file for multi-byte languages like Russian, Japanese, Chinese, etc.
    I am actually using DATA: d_tab TYPE x VALUE '09', but it doesn't work for multi-byte. I can't see the tab-delimited file at the application server path.
    Any thoughts about how to proceed on this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer, but do you reckon cl_abap_char_utilities will be a workaround for DATA: d_tab TYPE x VALUE '09'?
    > Pavan.
    On a non-Unicode system the TYPE x variant works, but not on a Unicode system; there you must use the class. On the other hand, you can use the class on a non-Unicode system as well, and your character variable will always be correct (one byte or two bytes depending on which kind of system your report is running on).
    What you are planning to do is put a file with a larger set of possible characters into a system that supports a smaller set of characters. That cannot work.
    What you can do is build up a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages - but not a text file in Unicode. You have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP does not support multi-code-page systems anymore, and multi-code-page systems will result in much more work when converting the system to Unicode.
    Even non-Unicode systems will not be maintained by SAP in the near future.
    What you encounter here are the problems that Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that can't be solved.

  • Multi-byte character encoding issue in HTTP adapter

    Hi Guys,
    I am facing a problem with multi-byte character conversion.
    Problem:
    I am posting data from SAP CRM to a third-party system using XI as middleware. I am using the HTTP adapter to communicate from XI to the third-party system.
    I have set the XML encoding to UTF-8 in the XI payload manipulation block.
    I am trying to post Chinese characters from SAP CRM to the third-party system, but junk characters arrive at the third-party system. My assumption is that it is double encoding.
    Can you please guide me how to proceed further.
    Please let me know if you need more info.
    Regards,
    Srini

    Srinivas,
    Can you go through the url:
    UTF-8 encoding problem in HTTP adapter
    ---Satish

  • Problem printing simplified Chinese on PM4i printer using multi-byte data

    I am new to printing data in Simplified Chinese and have been trying for some time to get it to work, but it is not working. I would appreciate any help.
    This is what I have:
    1. Chinese data stored as multi-byte data in an Oracle 10g DB. It is in one of the attribute fields on the mtl_system_items table. The data field it is stored in is defined as varchar2(240). I have to extract that data and print it out as Simplified Chinese characters on 3x4 label stock on a PM4i printer, which is set up to use IPL as the default language.
    2. Purchased the Simplified Chinese font kit (compact flash card) and plugged it into the compact flash port on the back of the printer. The Simplified Chinese font is assigned.
    3. Created a simple program to build the label file to send to the printer to print the Chinese glyphs. I expected 3 glyphs to print, but it only prints 1 Chinese glyph, and that one is not correct.
    a. Data shown in Chinese
    传感器
    b. Data in hex format => E4BCA0E6849FE599A8
    c. Data in utf8 => ä¼ æ„Ÿå™¨
    d. Simple Oracle PL/SQL program code to extract data from Oracle and create the format file for printing
    CREATE OR REPLACE PROCEDURE china_test_label1 is
    hold_length number;
    v_hold_armpart varchar2(240):= null;
    v_hold_line varchar2(500);
    v_file_name varchar2(100) := 'chlabel1.txt';
    v_file_line1 varchar2(100) := '<STX><ESC>C<ETX>';
    v_file_line2 varchar2(100) := '<STX><ESC>P<ETX>';
    v_file_line3 varchar2(100) := '<STX>E4;F4;<ETX>';
    v_file_line4 varchar2(100) := '<STX>H00;o0200,0200;c60;k32;d0,30;<ETX>';
    v_file_line5 varchar2(100) := '<STX>L1;o102,102;f0;l575;w5;<ETX>';
    v_file_line6 varchar2(100) := '<STX>R<ETX>';
    v_file_line7 varchar2(100) := '<STX><ESC>E4<CAN><ETX>';
    v_file_line8 varchar2(100) := '<STX><ETB><ETX>';
    v_file_line varchar2(500);
    v_file_handle UTL_FILE.file_type;
    v_submit_status number;
    v_out_path_name varchar2(50);
    v_export_path_name varchar2(50);
    -- Program Starts Here
    BEGIN
    fnd_file.put_line(fnd_file.log, '------- Starting Label job -------');
    SELECT description
    INTO v_out_path_name
    FROM fnd_lookup_values
    WHERE lookup_type = 'ARM_DATA_FILE_OUT_PATH'
    AND lookup_code = '$FLMARM_TOP';
    v_file_handle := utl_file.fopen(v_out_path_name, v_file_name, 'W');
    v_file_line := v_file_line1;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line2;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line3;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line4;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line5;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line6;
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line7;
    utl_file.put_line(v_file_handle, v_file_line);
    BEGIN
    select attribute13
    INTO v_hold_armpart
    FROM apps.mtl_system_items
    WHERE segment1 = '20928536'
    AND organization_id = 282;
    EXCEPTION
    WHEN others THEN
    v_hold_armpart := 'nothing';
    END;
    v_file_line := '<STX>'||v_hold_armpart||'<CR><ETX>';
    utl_file.put_line(v_file_handle, v_file_line);
    v_file_line := v_file_line8;
    utl_file.put_line(v_file_handle, v_file_line);
    utl_file.fclose(v_file_handle);
    fnd_file.put_line(fnd_file.log, '-------------------------------------------');
    fnd_file.put_line(fnd_file.log, '-- end of job ');
    fnd_file.put_line(fnd_file.log, '-------------------------------------------');
    END china_test_label1;
    show errors;
    e. I run lpr -P printer filename to print the file. Here are the file contents:
    <STX><ESC>C<ETX>
    <STX><ESC>P<ETX>
    <STX>E4;F4;<ETX>
    <STX>H00;o0200,0200;c60;k32;d0,30;<ETX>
    <STX>L1;o102,102;f0;l575;w5;<ETX>
    <STX>R<ETX>
    <STX><ESC>E4<CAN><ETX>
    <STX>ä¼ æ„Ÿå™¨<CR><ETX>
    <STX><ETB><ETX>
    I think the issue here may be with formatting the multi-byte data into a format that can be printed using the c60 font. Any
    coding examples would be greatly appreciated.

    Hi,
    Welcome to the forum.
    However, this is not the right forum for your question; it is only for SAP Business One users. Please search the forums first to find the proper one.
    However, this issue may not be related to SAP at all; searching the web would be better.
    Thanks,
    Gordon
