Character conversion from Unicode to WE8ISO8859P1

Hi,
I have a text field in which I also need to enter Unicode characters. When I enter the Unicode character '€' in the text field and click the "Save" button, it throws the error "java.sql.SQLException: Cannot map Unicode to Oracle character". The Oracle database character set is "WE8ISO8859P1". Do I need to convert the Unicode character to WE8ISO8859P1 explicitly in the Controller? If so, please mention the method through which I can achieve this. Or is there a profile option or setup change through which I can avoid this error? Can anyone please suggest how to proceed?
Thanks,
Sreeja

Please identify your database character set:
SQL> select * from NLS_DATABASE_PARAMETERS;
If the NLS_CHARACTERSET is WE8ISO8859P1, it is not capable of storing the Euro symbol (please do a Google search to find various references).
To store the Euro symbol, you will most likely need to change the database character set to UTF8 - please see the MOS Docs mentioned in this thread for details - Adding Greek & German language to R12
HTH
Srini
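
As an illustration of the underlying problem, the failure can be reproduced with a few lines of plain JDBC against a WE8ISO8859P1 database. The table, column and connection details below are placeholders; this is a sketch of the symptom, not a fix.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class EuroInsertTest {
    public static void main(String[] args) {
        // Placeholder connection details - adjust for your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO my_text_table (text_val) VALUES (?)")) {
            ps.setString(1, "\u20AC"); // the Euro sign
            ps.executeUpdate();        // against WE8ISO8859P1 this either raises a
                                       // conversion error or stores a replacement
                                       // character, depending on the driver used
        } catch (SQLException e) {
            System.out.println("Insert failed: " + e.getMessage());
        }
    }
}

Until the database character set can represent U+20AC, no client-side conversion in the Controller will make the value storable, which is why the advice above is to migrate the character set rather than convert the data.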

Similar Messages

  • Language Conversion from Unicode 8 to Character Set

    Hi,
    I am creating a file programmatically containing Vendor Master data (FTP interface).
    The vendor name and vendor address are maintained in the local language (Taiwanese) in the SAP system; these characters are in the Unicode (UTF-8) character set.
    The UTF-8 data should be converted to BIG5 for Taiwanese, and then this information should be sent in the file.
    How can I perform this conversion and change the character set of the values I am retrieving from table LFA1 to character set BIG5?
    Is it possible to do this conversion in SAP? Does SAP allow this?
    /Mike

    Hi Manik,
    I also have a similar requirement, as I need to convert Unicode Chinese characters to GB2312-encoded Chinese characters. I already posted in the forums but didn't get the required solution.
    Can you please provide the solution you implemented and confirm whether it can be used to solve the above problem?
    Hoping for your good reply.
    Regards,
    Prakash
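
    For what it is worth, the recoding step both posts are asking about (Unicode text into BIG5 or GB2312 bytes) can be illustrated outside SAP. The ABAP side would normally use SAP's own conversion classes; the Java sketch below only shows what the conversion itself does.

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class Big5Transcode {
        public static void main(String[] args) {
            // Bytes as they would arrive in UTF-8 (built from a literal for the example).
            byte[] utf8Bytes = "台北".getBytes(StandardCharsets.UTF_8);

            // Decode UTF-8 into a Unicode string, then encode it as BIG5 for the outgoing file.
            String text = new String(utf8Bytes, StandardCharsets.UTF_8);
            byte[] big5Bytes = text.getBytes(Charset.forName("Big5"));

            System.out.println("UTF-8 bytes: " + utf8Bytes.length
                    + ", Big5 bytes: " + big5Bytes.length);
        }
    }

    The same pattern applies to GB2312 by passing "GB2312" to Charset.forName; note that characters with no mapping in the target set are silently replaced, so check the input (or use an explicit CharsetEncoder) if that matters.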

  • Conversion from Unicode to Font

    Greetings everyone,
    Does anyone have any idea how I can convert a Devanagari Unicode (UTF-8) document into a normal ASCII text document (with Devanagari fonts) to edit with Word and use with non-Unicode software?
    Thanks a lot in advance,
    Vibhu
    Message was edited by: bullet350

    It has been a while since I was trying to find a page layout program that supported Devanagari. About six months ago I found iCalumus. It didn't support editing or input, but if you had a text file you could paste it in place.
    I contacted the developer and he said they would be adding input/editing support. I just downloaded the latest version and it seems that they have done so!
    I haven't done any extensive testing to see if all the conjuncts work properly. Quick testing seems to suggest it is a bit buggy, but fairly good – at least on screen. I haven't tried printing anything at all.

  • Character conversion and NLS_LANG

    Hi,
    The Oracle doc says that character encoding conversion for Java programs using the OCI driver is dependent on NLS_LANG. But the description of this was a bit confusing. As per the doc
    "The JDBC OCI driver transfers the data from the server to the client in the character set of the database. Depending on the value of the NLS_LANG environment variable, the driver handles character set conversions in one of two ways.
    1)If the value of NLS_LANG is not specified, or if it is set to the US7ASCII or WE8ISO8859P1 character set, then the JDBC OCI driver uses Java to convert the character set from US7ASCII or WE8ISO8859P1 directly to UCS-2.
    2)If the value of NLS_LANG is set to a non-US7ASCII or non-WE8ISO8859P1 character set, then the driver changes the value of the NLS_LANG parameter on the client to UTF-8. This happens automatically and does not require any user-intervention. OCI uses the value of NLS_LANG to convert the data from the database character set to UTF-8; the JDBC driver then converts the UTF-8 data to UCS-2. "
    Now referring to case 1, assume the database character set is multibyte. Does this mean that the OCI C libraries first convert this to US7ASCII or WE8ISO8859P1 and then the Java driver does the conversion from US7ASCII or WE8ISO8859P1 directly to UCS-2? If that is the case, wouldn't information get lost during the first conversion?
    Thanks,
    Tom.

    "Now refering to case1, assume the database character set is
    multibyte.Does this mean that the OCI C libraries first convert
    this to US7ASCII or WE8ISO8859P1 and then the Java driver
    does the conversion from US7ASCII or WE8ISO8859P1 directly to
    UCS-2. If that is the case, wouldnt information get lost during
    the first conversion. "
    Yes, this is true. For a multibyte database character set, caution
    must be taken that the client application NLS_LANG is not
    US7ASCII or WE8ISO8859P1, or data loss can occur. An effort will
    be made to remove the NLS_LANG dependency in a future release
    because the current solution is imperfect.
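
    A quick way to see which case applies is to check the database and national character sets directly. A minimal JDBC sketch (the connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class NlsCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT parameter, value FROM nls_database_parameters "
                   + "WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')")) {
                // Prints e.g. NLS_CHARACTERSET = WE8ISO8859P1, which tells you whether
                // the database side is single-byte or multibyte.
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }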

  • Character conversion problems when calling FM via RFC from Unicode ECC 6.0?

    Hi all,
    I faced a Cyrillic character conversion problem while calling an RFC function from R/3 ECC 6.0 (initialized as a Unicode system - code page 4103). My target system is R/3 4.6C with default code page 1500.
    The parameter I used in my FM interface in the target system is of type CHAR10 (single-byte, obviously).
    I have defined the RFC connection (SM59) as an ABAP connection, and the client/logon language/user/password are supplied.
    The problem I faced is that Cyrillic symbols are transferred as '#' in the target system ('#' is set as the default symbol in the RFC destination definition in case a character conversion error is met).
    Checking conversions between code page 4103 and target code page 1500 in my source system using the tools of transaction I18N shows no errors - meaning the conversion passes OK. It seems the default character conversion executed by the source system within the scope of the RFC destination definition is doing something wrong.
    Further, I played with the MDMP & Unicode settings within the RFC destination definition with no successful result - perhaps due to the lack of documentation on how to set and manage these parameters.
    The question is: does anyone have experience with conversion between Unicode and non-Unicode systems via RFC call (non-English target obligatory!), or can anyone share valuable information regarding this issue - what should be managed in the RFC destination in order to get character conversion working? Is it acceptable to use any character parameter in the target function module interface at all?
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev
    Senior SAP ABAP Consultant

    hey,
    I had a similar experience. I was interfacing between 4.6 (RFC), PI and ECC 6.0 (ABAP proxy). When data was passed from ECC to 4.6, RFC received it incorrectly. So I had to send trimmed strings from ECC and receive them as strings in RFC (especially for CURR and QUAN fields). Also, the receiver communication channel in PI (between PI and RFC) had to be set as non-Unicode. This helped a bit, but I am still getting two issues: truncation of values and some additional digits! The above changes did resolve the unwanted-character problems like "<" and "#". You can find a related post in my id. Hope this info helps.

  • Unicode character conversion

    Hello,
    From an external system we receive XML messages in UTF-8. The data are transferred from XI to SAP WAS by the RFC adapter. The communication language is set to 'CS' (Czech). The data are saved into the database with no conversion to the target code page (1401). The receiving system is not Unicode compatible.
    Unicode character references (e.g. &#x159;) are written into the database instead of the single characters of the Czech alphabet.
    Is there any way to force XI to perform a character conversion?
    Thanks for any feedback.
    Marian Morzol
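
    As background, the conversion XI would have to perform here is a plain recoding from Unicode into the target single-byte code page. A small Java sketch of that step, assuming SAP code page 1401 corresponds to ISO 8859-2, with unmappable characters reported rather than silently replaced:

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;
    import java.nio.charset.CodingErrorAction;

    public class CzechEncode {
        public static void main(String[] args) throws Exception {
            // ř (U+0159) is the character that showed up as &#x159; in the database.
            String text = "Dvořák";

            CharsetEncoder encoder = Charset.forName("ISO-8859-2").newEncoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);

            ByteBuffer encoded = encoder.encode(CharBuffer.wrap(text));
            System.out.println("Encoded to " + encoded.remaining() + " single-byte characters");
        }
    }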


  • How to input unicode character set from oralce form 9i

    Hi,
    Can anyone show me how to input Unicode characters from Forms 9i? I have designed a form and run it, but when I input Unicode characters in a TEXT ITEM on the form (the FONT_NAME of this TEXT ITEM is Times New Roman, Arial, ...), they are displayed incorrectly and not stored in the database.
    Thank you !

    Thank Duncan R Mills !
    My NLS settings in the database are as follows:
    SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    PARAMETER VALUE
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_RDBMS_VERSION 8.1.7.0.0
    18 rows selected.
    Even though I set the FONT_NAME exactly, I cannot input Unicode characters in Oracle Forms; they are displayed incorrectly.

  • Approach to converting database character set from Western European to Unicode

    Hi All,
    EBS:12.2.4 upgraded
    O/S: Red Hat Linux
    I am looking for the below information. If anyone could help provide would be great!
    INFORMATION NEEDED: Approach to converting database character set from Western European to Unicode for source systems with large data exceptions
    DETAIL: We are looking to convert the Oracle EBS database character set from Western European to Unicode to support Kanji characters. Our scan results show
    both "lossy (approx. 110K)" and "truncation (approx. 26K)" exceptions in the database, which need to be fixed before the database is converted to Unicode.
    Oracle Support has suggested fixing all open and closed transactions in the source Production instance using forms and scripts.
    We are looking for information and creative approaches from anyone who has performed a similar exercise without having to manipulate data in the source instance.
    Any help in this regard would be greatly appreciated!
    Thanks for your time!
    Regards,

    There are two aspects here:
    1. Why do you have such a large number of lossy characters? Is this data coming from some very old eBS release, i.e. from before the days of the Java applet interface to Oracle Forms? Have you analyzed the nature of this lossy data?
    2. There is no easy way around truncation issues as you cannot modify eBS metadata (make columns wider). You must shorten or remove the data manually through the documented eBS interfaces. eBS does not support direct manipulation of data in the database due to complex consistency rules enforced by the application itself (e.g. forms).
    Thanks,
    Sergiusz

  • Risk involved converting Oracle character set to Unicode (AL32UTF8 or UTF8)

    Hi All -
    I am a PL/SQL developer and quite new to database administration, with very little knowledge base on this.
    Currently I am working on a project where we have a requirement to store data in multiple languages in the database.
    After my findings via Google I am clear that our database character set needs to be changed to Unicode (AL32UTF8 or UTF8). Before moving forward I would like to know what risks are involved in doing this.
    A few questions:
    Would this change take a long time and involve a lot of effort?
    Can we revert once this change is done, with no data loss?
    Will there be any changes required when writing SQL on tables having multi-language data?
    As of now the requirement to store data in multiple languages is very specific to some tables only, not the whole DB; are there any other options for storing data in different languages (Spanish, Japanese, Chinese, Italian, German, and French) in just one specific table?
    Thanks...
    Edited by: user633761 on Jun 7, 2009 9:15 PM

    > Will there be any changes required when writing SQL on tables having multi-language data?
    If you move from a single-byte character set to a multibyte character set, you should take into account that one character may use 1, 2, 3 or 4 bytes of storage: http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch2charset.htm#i1006683
    This may impact SQL or PL/SQL code that works with character string lengths.
    Note also that using exp/imp to change the database character set is not so simple; see the following message:
    Re: charset conversion from WE8ISO8859P1 (8.1.7.0) to AL32UTF8(9.0.1.1)
    > As of now the requirement to store data in multiple languages is very specific to some tables only, not the whole DB; are there any other options for storing data in different languages (Spanish, Japanese, Chinese, Italian, German, and French) in just one specific table?
    Using NCHAR character types is another possibility:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1493
    Edited by: P. Forstmann on Jun 8, 2009 9:10 AM
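
    The length impact mentioned above is easy to see in a short sketch. The Java example below is only illustrative; Oracle's LENGTH vs. LENGTHB functions show the same distinction on the database side.

    import java.nio.charset.StandardCharsets;

    public class ByteExpansion {
        public static void main(String[] args) {
            String s = "Grüße€"; // ü and ß need 2 bytes each in UTF-8, € needs 3
            System.out.println("characters : " + s.length());                                // 6
            System.out.println("UTF-8 bytes: " + s.getBytes(StandardCharsets.UTF_8).length); // 10
        }
    }

    So a VARCHAR2(6) column declared with the default byte-length semantics could no longer hold this value after migrating to AL32UTF8, which is where the truncation warnings in a character set scan come from.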

  • Printing ZPL (Zebra) data to printer spooler without character conversion

    Hi all,
    We are printing shipping labels from UPS, with a process where we receive the ZPL label code directly from UPS, and we just need to pass the data to the printer to get the labels. We have already implemented this with FedEx and some custom labels, and it works perfectly. The problem with the UPS label data is that it contains non-printable characters (in the MaxiCode data field). When passed to the SAP printer spooler (see the code example below), the data gets corrupted because SAP interprets these non-printable characters as printer control codes.
    I have verified this by saving the ZPL data to a local file, before printing it through the SAP spooler. I then print this raw data and compare the output with the labels printed from the spooler. The MaxiCode (the big 2D barcode) is different in these labels. UPS has also tested the labels, and rejected them because of incorrect data in the barcode.
    For printing, we are using printers defined as type "PLAIN", but I also tried using the "LZEB2" device type with the same result. The error we see in the spooler entry is this:
    Print ctrl S_0D_ is not defined for this printer. Page 1, line 2, col. 2201
    Print output may not be as intended
    The printer control code differs depending on the label. I have examined the spooler data in "raw" mode, and there is always an ASCII character 28 (hex 1C) in front of the characters that SAP thinks are control codes, and this is why I think these non-printable characters are the reason for the problems.
    This is the function module I use to print the ZPL data (and as stated above, this works fine for Fedex and custom labels). The ZPL data is converted to binary format before passed to the function module, but I also tried to send the data in text format with another FM, but the result is the same. I have experimented with the "codepage" parameter, and this one gives the least amount of errors, and some labels actually get through without errors. But still at least 50% of the labels gets corrupted, with log entries like above.
    CALL FUNCTION 'RSPO_SR_WRITE_BINARY'
          EXPORTING
            handle           = lv_spool_handle
            data             = lv_label_line_bin
            length           = lv_len
            codepage         = '2010'
          EXCEPTIONS
            handle_not_valid = 1
            operation_failed = 2
            OTHERS           = 3.
    Does anyone know if there is a way to send data to the spooler without character conversion or interpretation of printer control codes? Or is there any other smart way to get around this problem?
    /Leif

    I do a more direct output to the spooler, to avoid any issues with the WRITE statement and SAP's report output processing. At the same time, I insert line breaks so that the output is easy to debug in the spooler if needed. Also included is the code to detect the escape code (ASCII 28) and to insert a control code ZZUPS in its place (you can skip this for FedEx). Here's a simplified example, but please note this is for a Unicode system; some minor changes are required in a non-Unicode system.
    CONSTANTS: lc_spcode TYPE c LENGTH 5 VALUE 'ZZUPS',
               lc_xlen TYPE i VALUE 5.
       DATA: lv_print_params TYPE pri_params,
             lv_spool_handle TYPE sy-tabix,
             lv_name TYPE tsp01-rq0name,
             lv_spool_id TYPE rspoid,
             lv_crlf(2) TYPE c,
             lv_lf TYPE c,
             lstr_label_data TYPE zship_label_data_s,
             lv_label_line TYPE char512,
             lv_label_line_bin TYPE x LENGTH 1024,
             lv_len TYPE i,
             ltab_label_data_255 TYPE TABLE OF char512,
             ltab_label_data TYPE TABLE OF x,
             lv_c1 TYPE i,
             lv_c2 TYPE i,
             lv_cnt1 TYPE i,
             lv_cnt2 TYPE i,
             lv_x(2) TYPE x.
       FIELD-SYMBOLS: <n> TYPE x.
       lv_crlf = cl_abap_char_utilities=>cr_lf.
       lv_lf = lv_crlf+1(1).
       lv_name = 'ZPLLBL'.
    CALL FUNCTION 'RSPO_SR_OPEN'
         EXPORTING
           dest                   = i_dest
           name                   = lv_name
           prio                   = '5'
           immediate_print        = 'X'
           titleline              = i_title
           receiver               = sy-uname
    *      lifetime               = '0'
           doctype                = ''
         IMPORTING
           handle                 = lv_spool_handle
           spoolid                = lv_spool_id
         EXCEPTIONS
           device_missing         = 1
           name_twice             = 2
           no_such_device         = 3
           operation_failed       = 4
           OTHERS                 = 5.
       IF sy-subrc <> 0.
         RAISE spool_open_failed.
       ENDIF.
    LOOP AT i_label_data INTO lstr_label_data.
         CLEAR ltab_label_data_255.
         SPLIT lstr_label_data-label_data AT lv_lf INTO TABLE ltab_label_data_255.
         LOOP AT ltab_label_data_255 INTO lv_label_line.
           IF lv_label_line NE ''.
             lv_len = STRLEN( lv_label_line ).
    *       Convert character to hex type
             lv_c1 = 0.
             lv_c2 = 0.
             DO lv_len TIMES.
               ASSIGN lv_label_line+lv_c1(1) TO <n> CASTING.
               MOVE <n> TO lv_x.
               IF lv_x = 28.
                 lv_cnt1 = 0.
                 lv_label_line_bin+lv_c2(1) = lv_x.
                 lv_c2 = lv_c2 + 1.
                 DO lc_xlen TIMES.
                   ASSIGN lc_spcode+lv_cnt1(1) TO <n> CASTING.
                   MOVE <n> TO lv_x.
                   lv_cnt2 = lv_c2 + lv_cnt1.
                   lv_label_line_bin+lv_c2(2) = lv_x.
                   lv_c2 = lv_c2 + 2.
                   lv_cnt1 = lv_cnt1 + 1.
                   lv_len = lv_len + 1.
                 ENDDO.
               ELSE.
                 lv_label_line_bin+lv_c2(2) = lv_x.
                 lv_c2 = lv_c2 + 2.
               ENDIF.
               lv_c1 = lv_c1 + 1.
             ENDDO.
    *       Print binary data to spool
             lv_len = lv_len * 2. "Unicode is 2 bytes per character
             CALL FUNCTION 'RSPO_SR_WRITE_BINARY'
               EXPORTING
                 handle                 = lv_spool_handle
                 data                   = lv_label_line_bin
                 LENGTH                 = lv_len
               EXCEPTIONS
                 handle_not_valid       = 1
                 operation_failed       = 2
                 OTHERS                 = 3.
             IF sy-subrc <> 0.
               RAISE spool_write_failed.
             ENDIF.
           ENDIF.
         ENDLOOP.
       ENDLOOP.
       CALL FUNCTION 'RSPO_SR_CLOSE'
         EXPORTING
           handle = lv_spool_handle.
       IF sy-subrc <> 0.
         RAISE spool_close_failed.
       ENDIF.

  • "character conversion error" while parsing xml files

    Hello,
    I'm trying to parse MusicXML (Recordare) files, but I'm getting an exception.
    I'm using the SAX parser (javax.xml.parsers.SAXParser).
    Here is the code I use to instantiate it:
    final javax.xml.parsers.SAXParserFactory saxParserFactory = javax.xml.parsers.SAXParserFactory.newInstance();
    final javax.xml.parsers.SAXParser saxParser = saxParserFactory.newSAXParser();
    final org.xml.sax.XMLReader parser = saxParser.getXMLReader();
    I'm using my own handler, but I get the same exception even if I use org.xml.sax.helpers.DefaultHandler.
    The error I get is:
    Character conversion error: "Illegal ASCII character, 0xc2" (line number may be too low).
    The first few lines of my xml files look like this:
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <!DOCTYPE score-partwise
    PUBLIC "-//Recordare//DTD MusicXML 0.6 Partwise//EN"
    "http://www.musicxml.org/dtds/partwise.dtd">
    <score-partwise>
    [...etc...]
    If I delete the <!DOCTYPE ...> line, then I don't get the exception anymore. But the MusicXML files I get (from another program) always contain this line, and it would be quite some work to delete it from every file manually.
    So does anyone know if there is a way to avoid deleting that line in every file, while still being able to parse the XML files without exceptions?
    Or does anyone know what the exact cause of the exception is? (Because I don't know what exactly causes it.)
    Thank you in advance.
    Greetz,
    Jipo

    > So does anyone know if there is a way to avoid deleting that line in every file, while still being able to parse the xml files without exceptions?
    OK, this is side-stepping the real problem, but I've used this code to filter out DTD references for other reasons:
       public static InputStream filterOutDTDRef(InputStream in) throws IOException {
          BufferedReader iniReader = new BufferedReader(new InputStreamReader(in));
          StringBuffer newXML = new StringBuffer();
          for(String line = iniReader.readLine(); line!=null; line = iniReader.readLine())
             newXML.append(line+"\n");
          in.close();
          int s = newXML.indexOf("<!DOCTYPE ");
          if(s!=-1)
             newXML.replace(s,newXML.indexOf(">",s)+1,"");
          return new ByteArrayInputStream(newXML.toString().getBytes());
       }
    It actually speeds up the parsing phase too (since the DTD references were on the web, and the XML standard mandates a fetch for each XML file parsed). You can feed the above into the InputSource constructor that takes an InputStream argument.
    Now for the real problem... 0xC2 is "LATIN CAPITAL LETTER A WITH CIRCUMFLEX" according to a Unicode chart - which is not an ASCII character (as the error message correctly reports). I'm not sure why the file is being parsed as ASCII, though. You could try passing a FileReader to the InputSource and hope it picks up the default character encoding of your system, and that that character encoding matches the file. Or you could try passing in a FileReader constructed with an explicit character encoding (e.g. "UTF8") and see if that does the trick?
    asjf
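
    As an alternative to stripping the DOCTYPE line from each file, a custom EntityResolver can make the SAX parser skip the external DTD entirely. A sketch along those lines (the file name is a placeholder, and it has not been tested against MusicXML specifically):

    import java.io.StringReader;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.DefaultHandler;

    public class NoDtdParse {
        public static void main(String[] args) throws Exception {
            XMLReader parser = SAXParserFactory.newInstance().newSAXParser().getXMLReader();

            // Return an empty stream for any external entity (such as partwise.dtd),
            // so the parser never tries to fetch or read the DTD at all.
            parser.setEntityResolver((publicId, systemId) ->
                    new InputSource(new StringReader("")));

            parser.setContentHandler(new DefaultHandler());
            parser.parse(new InputSource("score.xml")); // placeholder file name
        }
    }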

  • Characterset conversion from US7ASCII to WE8ISO8859P1

    Hi,
    I need to do a character set conversion from US7ASCII to WE8ISO8859P1 on a 10g database. I need to make the proper analysis before doing this conversion. Kindly let me know of any links/URLs/MetaLink note IDs where I can find relevant documentation and understand this, so that I can continue with the conversion.
    Regards,
    Amby.

    Start here:
    http://www.oracle.com/technology/tech/globalization/pdf/TWP_Character_Set_Migration_Best_Practices_10gR2.pdf
    Also, consider migrating to WE8MSWIN1252 in place of WE8ISO8859P1. If you have Windows clients, then WE8MSWIN1252 should be used anyway. Even if you do not have Windows clients, WE8MSWIN1252 is a binary superset of WE8ISO8859P1 and can hold all WE8ISO8859P1 codes + ca. 15 characters more. It is better to use it in case Windows clients are used in future.
    -- Sergiusz

  • Getting ÿþ as saved conversations from Lync in Outlook in Office 2013

    Hi,
    I've been trying to get to the bottom of this and have found similar posts, but no one seems to have an answer.
    When I IM someone using Lync 2013, they get a pop-up notification, but instead of the message they see ÿþ<. Once they open the chat window, they can see my typed text. Occasionally, certain people can't see the first line of my chat, but as long as they keep the chat window open, they can see everything new I type.
    All my conversations that are saved in Outlook show ÿþ< for the text and are unreadable. I've disabled the saving of conversations because they have become worthless.
    I believe it has to do with the BOM, but I have not been able to find a way to fix this.
    If I copy a conversation from the chat window and paste it into Microsoft Word it shows ÿþ<, but if I paste it into notepad the conversation appears.
    (I had inserted a screenshot here, but am unable to because I am unable to figure out how to get my account "verified")
    I've tried changing the preferred encoding for outgoing messages to Unicode (UTF-8) in Outlook, but this had no effect, and I can't find a similar option in Lync 2013.
    (I had inserted a screenshot here, but am unable to because I am unable to figure out how to get my account "verified")
    I enabled logging for Lync and the event IDs that come up are 1, 11 and 12, to which I cannot find any information for at the moment.
    Any help and or suggestions would be appreciated.
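
    For reference, ÿþ is what the UTF-16 little-endian byte order mark (bytes 0xFF 0xFE) looks like when text is decoded as Latin-1 instead of UTF-16, which supports the BOM theory above. A small Java sketch of the mismatch (not Lync-specific):

    import java.nio.charset.StandardCharsets;

    public class BomDemo {
        public static void main(String[] args) {
            // A UTF-16LE message starts with the byte order mark 0xFF 0xFE.
            byte[] utf16le = "<hello>".getBytes(StandardCharsets.UTF_16LE);
            byte[] withBom = new byte[utf16le.length + 2];
            withBom[0] = (byte) 0xFF;
            withBom[1] = (byte) 0xFE;
            System.arraycopy(utf16le, 0, withBom, 2, utf16le.length);

            // Decoded as UTF-16 the text is readable; decoded as Latin-1 it starts
            // with "ÿþ<" followed by the letters interleaved with NUL bytes.
            System.out.println(new String(withBom, StandardCharsets.UTF_16));
            System.out.println(new String(withBom, StandardCharsets.ISO_8859_1));
        }
    }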

    Hi,
    Did the issue happen only for you or for multiple users?
    Please try deleting the Lync user profile and the information in the registry, then repair Office 2013.
    The path of Lync User Profile: %UserProfile%\AppData\Local\Microsoft\Office\15.0\Lync
    The path for information on Registry: HKCU\Software\Microsoft\Office\15.0\Lync\[email protected]
    Then test the issue again.
    Best Regards,
    Eason Huang
    Eason Huang
    TechNet Community Support

  • Script needed to generate a list of paragraph and character styles from the Book Level

    Hello,
    I am using FrameMaker 11 in the Adobe Technical Communication Suite 4 and I need to find a script that will generate a list
    of paragraph and character styles from the book level.
    I am working with unstructured FrameMaker books, but will soon be looking at getting a conversion table developed
    that will allow me to migrate all my data over to DITA (1.1 for now).
    Any thoughts or ideas on this are very much appreciated.
    Regards,
    Jim

    Hi Jim,
    I think the problem you are having with getting a response is that you are asking someone to write a script for you. Normally, you would have to pay someone for this, as it is something that folks do for a living.
    Nonetheless, I had a few minutes to spare, so I worked up the following script that I believe does the job. It is very slow, clunky, and totally non-elegant, but I think it works. It leverages the book error log mechanism, which is built in and accessible by scripts, but is splendidly unattractive. I hope this gives you a starting point. It could be made much more beautiful, of course, but that would take lots more time.
    Russ
    ListAllFormatsInBook();
    function ListAllFormatsInBook()
    {
        var doc, path, fmt;
        var book = app.ActiveBook;
        if(!book.ObjectValid()) book = app.FirstOpenBook;
        if(!book.ObjectValid())
        {
            alert("No book window is active. Cannot continue.");
            return;
        }
        CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
        CallErrorLog(book, 0, 0, "** Book format report for:");
        CallErrorLog(book, 0, 0, book.Name);
        var comp = book.FirstComponentInBook;
        while(comp.ObjectValid())
        {
            path = comp.Name;
            doc = SimpleOpen (path, false);
            if(doc.ObjectValid())
            {
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, doc, 0, "");
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, 0, 0, "Paragraph formats:");
                fmt = doc.FirstPgfFmtInDoc;
                while(fmt.ObjectValid())
                {
                    CallErrorLog(book, 0, 0, "  - " + fmt.Name);
                    fmt = fmt.NextPgfFmtInDoc;
                }
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, 0, 0, "Character formats:");
                fmt = doc.FirstCharFmtInDoc;
                while(fmt.ObjectValid())
                {
                    CallErrorLog(book, 0, 0, "  - " + fmt.Name);
                    fmt = fmt.NextCharFmtInDoc;
                }
            }
            else
            {
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
                CallErrorLog(book, 0, 0, "!!!  Could not open: " + comp.Name + " !!!");
                CallErrorLog(book, 0, 0, "-----------------------------------------------------------");
            }
            comp = comp.NextComponentInBook;
        }
    }
    function CallErrorLog(book, doc, object, text)
    {
        var arg;
        arg = "log ";
        if(book == null || book == 0 || !book.ObjectValid())
            arg += "-b=0 ";
        else arg += "-b=" + book.id + " ";
        if(doc == null || doc == 0 || !doc.ObjectValid())
            arg += "-d=0 ";
        else arg += "-d=" + doc.id + " ";
        if(object == null || object == 0 || !object.ObjectValid())
            arg += "-O=0 ";
        else arg += "-O=" + object.id + " ";
        arg += "--" + text;
        CallClient("BookErrorLog", arg);
    }

  • Problem in the character conversion

    Hi Guys,
    I am facing a problem with character conversion.
    I am posting data from SAP to a third-party system using XI, by converting the whole input message to a String. I am using the SOAP adapter to communicate from XI to the third-party system.
    The third-party system needs the String to be wrapped in CDATA so that it will not choke on the special characters. I did wrap the output string in CDATA using ABAP mapping, but when I do that, XI converts the angle brackets < and > into &lt; and &gt;. My assumption is that it is double encoding.
    example -
    before map -  <AppSystemInfo>
    after mapping  it is converted as -  <![CDATA[ &ltAppSystemInfo&gt]]>
    Edited by: Vamsi on Jun 17, 2010 10:00 PM
    Edited by: Vamsi on Jun 17, 2010 10:01 PM

    Did you try to see the actual output?
    If you are checking this in mapping testing, it will show it like this, because the conversion is for XML; XML does not do anything wrong with the special characters, it simply escapes them, which is why they appear converted like that.
    Try running the end-to-end interface and check at the receiver side how the data looks.
    Thanks,
    Hetal
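
    As a side note, the escaping behaviour described here can be reproduced with any standard XML serializer: markup placed inside a text value is escaped on output, while a genuine CDATA section is not. A minimal Java/DOM sketch, purely to illustrate the escaping (it is not the XI mapping itself):

    import java.io.StringWriter;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class CdataDemo {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("payload");
            doc.appendChild(root);

            // Markup added as plain text is escaped to &lt; / &gt; on output ...
            root.appendChild(doc.createTextNode("<![CDATA[<AppSystemInfo/>]]>"));
            // ... whereas a real CDATA section keeps the angle brackets as-is.
            root.appendChild(doc.createCDATASection("<AppSystemInfo/>"));

            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            System.out.println(out);
        }
    }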
