Unicode type coercion - revisited

I am still having a heck of a time with reading and comparing stuff from text files using vanilla AppleScript (I would prefer not to use 3rd party additions or applications). The file contents are basically ASCII text, but they can be stored as either plain text or Unicode files. Strings can be converted to Unicode, but there doesn't appear to be any coercion from Unicode back to plain text. I have a cheapo conversion subroutine that just strips out the null characters of a Unicode string, but there has to be a better way to do it. Since it looks like Unicode is the way things are going, is it better to just use everything as Unicode text? Before I start tearing up my script, are all the various places like dialogs, the Finder, shell scripts and the like Unicode savvy? Tracking all the AppleScript changes is a bit tricky, since the last manual covers something like version 1.3.7. If someone is familiar with this, I would appreciate the help.
G4 Tower 733MHz   Mac OS X (10.4.8)  

Hello
Here's a slightly more consistent version, which will always replace any character in the source Unicode text that cannot be represented in the system's primary encoding with "?". Consequently it will not yield a raw byte sequence extracted from intermediate styled text, which may be improperly interpreted in the system's primary encoding.
Tested with AppleScript 1.8.3 under OS 9.1 (English).
Hope this may be of some help.
H
-- SCRIPT
(*
  utxt2string
  v0.2
  * solely using the special behaviour of the 'as string' coercion for Unicode text that contains
    a character which cannot be represented by any of the installed fonts.
*)
set s to "abcdefghijklmnopqrstuvwxyzáàâäãåçéèêëíìîïñóòôöõúùûüæøœÿ" -- string in System's primary encoding (e.g. MacRoman)
set ut to s as Unicode text -- UTF-16
set cyYa to «data utxt042F» as anything -- U+042F : CYRILLIC CAPITAL LETTER YA
set cmd to «data utxt2318» as anything -- U+2318 : PLACE OF INTEREST SIGN (= COMMAND KEY)
set ut to ut & cyYa & cmd
set s1 to utxt2string(ut)
return {ut, s1}
on utxt2string(ut)
  (*
    Unicode text ut : source Unicode text
    return string : string (in the system's primary encoding) that corresponds to ut. -- [*0]
  *)
local x
if ut's class is not Unicode text then error "parameter error" number 9000 -- for safety
set x to «data utxtD800» as anything -- an invalid surrogate pair (an orphan high surrogate)
return ((ut & x) as string)'s text 1 thru -2 -- [*1]
  (*
    NOTES.
    [0] This string will be a plain string («class TEXT») (not styled text («class STXT»)) where
      each character unrepresentable in the system's primary encoding is replaced by "?".
    [1] The 'as string' coercion for Unicode text that contains a character
      which cannot be represented by any of the installed fonts yields a plain string (not styled text),
      where each character that is not representable in the system's primary encoding is replaced by "?".
      (Here, the character x appended at the end of ut is unrepresentable (invalid), which gives
      this coercion the desired behaviour.)
  *)
end utxt2string
-- END OF SCRIPT
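The same "replace unrepresentable characters with ?" behaviour can be illustrated outside AppleScript. Here is a minimal Python sketch, used purely for illustration (MacRoman is assumed as the system's primary encoding, matching the script above):

```python
# Convert a Unicode string to a legacy single-byte encoding
# (e.g. MacRoman, the classic Mac OS primary encoding),
# replacing any character the encoding cannot represent with "?".
def utxt2string(text: str, encoding: str = "mac_roman") -> str:
    # errors="replace" substitutes "?" for unencodable characters on
    # encode, mirroring the 'as string' coercion described above.
    return text.encode(encoding, errors="replace").decode(encoding)

print(utxt2string("abc\u042F\u2318"))  # -> "abc??" (YA and COMMAND are not in MacRoman)
```

Characters that MacRoman can represent (the accented letters in the test string, for instance) pass through unchanged; only the out-of-repertoire ones become "?".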
  Mac OS 9.1.x  

Similar Messages

  • Unicode type not convertible

    Hi,
Need some help in modifying the code ... I am working on a mass download of data into a text file from the tables. A few of these tables have fields in decimal or currency, and I need to read them into a character string. I know the structure of the internal table I read the data into has to exactly match the table structure in a Unicode environment. How do I handle that in the code when I am pulling the tables dynamically and reading their data in a sequential manner? Hence I am getting the above error.
Data: begin of i_list occurs 0,
        i_tabledata(2550) type c,
      end of i_list.

select * from (v_tabname) into i_list. "=> Unicode type not convertible error...
  append i_list.
  clear i_list.
endselect.
    The value of 'v_tabname' comes from the table DD02l dynamically.
    Any thoughts?
    Thanks,
    VG

As per your requirement, I developed this code snippet.
I download the data from ZTESTVBAK and upload the data to YTESTVBAK. I'm not sure why you're transferring the data to a string and then trying to read the string?
    Check this code snippet:
    PARAMETERS: p_tab TYPE tabname OBLIGATORY DEFAULT 'ZTESTVBAK',
                p_dwld RADIOBUTTON GROUP grp1 DEFAULT 'X',
                p_upld RADIOBUTTON GROUP grp1.
    DATA: dref TYPE REF TO data,
          v_tab TYPE tabname,
          lcx_sql_err TYPE REF TO cx_sy_sql_error,
          v_errtxt TYPE string.
    FIELD-SYMBOLS: <itab> TYPE STANDARD TABLE,
                   <wa> TYPE ANY.
    CREATE DATA dref TYPE STANDARD TABLE OF (p_tab).
    ASSIGN dref->* TO <itab>.
    IF p_dwld = 'X'.
      SELECT * FROM (p_tab) INTO TABLE <itab> UP TO 100 ROWS.
      IF sy-subrc = 0.
        CALL FUNCTION 'GUI_DOWNLOAD'
          EXPORTING
            filename                = 'C:\dyntab.txt'
            write_field_separator   = 'X'
            confirm_overwrite       = 'X'
          TABLES
            data_tab                = <itab>
          EXCEPTIONS
            file_write_error        = 1
            no_batch                = 2
            gui_refuse_filetransfer = 3
            invalid_type            = 4
            no_authority            = 5
            unknown_error           = 6
            header_not_allowed      = 7
            separator_not_allowed   = 8
            filesize_not_allowed    = 9
            header_too_long         = 10
            dp_error_create         = 11
            dp_error_send           = 12
            dp_error_write          = 13
            unknown_dp_error        = 14
            access_denied           = 15
            dp_out_of_memory        = 16
            disk_full               = 17
            dp_timeout              = 18
            file_not_found          = 19
            dataprovider_exception  = 20
            control_flush_error     = 21
            OTHERS                  = 22.
        IF sy-subrc = 0.
          WRITE: / 'File download successful.'.
        ENDIF.
      ENDIF.
    ELSEIF p_upld = 'X'.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename                = 'C:\dyntab.txt'
          has_field_separator     = 'X'
        TABLES
          data_tab                = <itab>
        EXCEPTIONS
          file_open_error         = 1
          file_read_error         = 2
          no_batch                = 3
          gui_refuse_filetransfer = 4
          invalid_type            = 5
          no_authority            = 6
          unknown_error           = 7
          bad_data_format         = 8
          header_not_allowed      = 9
          separator_not_allowed   = 10
          header_too_long         = 11
          unknown_dp_error        = 12
          access_denied           = 13
          dp_out_of_memory        = 14
          disk_full               = 15
          dp_timeout              = 16
          OTHERS                  = 17.
      IF sy-subrc = 0.
        v_tab = p_tab.
    *   Lock the table records
        CALL FUNCTION 'ENQUEUE_E_TABLEE'
          EXPORTING
            tabname        = v_tab
          EXCEPTIONS
            foreign_lock   = 1
            system_failure = 2
            OTHERS         = 3.
        IF sy-subrc = 0.
          TRY .
              MODIFY (p_tab) FROM TABLE <itab>.
            CATCH cx_sy_sql_error INTO lcx_sql_err.
              v_errtxt = lcx_sql_err->get_text( ).
          ENDTRY.
        ENDIF.
    *   Unlock the table records
        CALL FUNCTION 'DEQUEUE_E_TABLE'
          EXPORTING
            tabname = v_tab.
        COMMIT WORK.
      ENDIF.
    ENDIF.
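GUI_DOWNLOAD with write_field_separator = 'X' writes the internal table as tab-separated text, and GUI_UPLOAD with has_field_separator = 'X' splits each line back into fields. The round trip is conceptually just this (an illustrative Python sketch; the file name and row values are made up):

```python
import os
import tempfile

# Round-trip a table of rows through a tab-separated text file,
# analogous to GUI_DOWNLOAD / GUI_UPLOAD with a field separator.
rows = [("0001", "2024-01-15"), ("0002", "2024-02-20")]  # hypothetical VBAK-like rows
path = os.path.join(tempfile.gettempdir(), "dyntab.txt")

# "Download": one line per row, fields joined by tabs
with open(path, "w", encoding="utf-8") as f:
    for row in rows:
        f.write("\t".join(row) + "\n")

# "Upload": split each line on tabs to rebuild the table
with open(path, encoding="utf-8") as f:
    loaded = [tuple(line.rstrip("\n").split("\t")) for line in f]

assert loaded == rows
```

This is also why non-character fields (decimals, currency) are problematic in a Unicode system: everything in the file is text, so the receiving structure's fields must be character-like or explicitly converted.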
    Hope this helps.
    BR,
    Suhas

  • Unicode Type Conversion Error

    Hi Friend,
I am working on a Unicode project and I need some help.
I have an error.
I'm using a structure (Z0028) and passing values to an internal table.
At that point it shows an error.
This error is due to a type conversion problem.
The structure has one packed data type, so if I select the
Unicode check, it shows an error.
I will send an example program and the error also.
Please give some solution to solve this.
    REPORT  YPRG1                                   .
    TABLES: Z0028.
    DATA:I_Z0028 TYPE Z0028 OCCURS 0 WITH HEADER LINE .
    SELECT * FROM Z0028 INTO TABLE I_Z0028 .
    IF SY-SUBRC <> 0 .
      WRITE:/ ' NO DATA'.
    ENDIF.
      LOOP AT I_Z0028.
        WRITE:/ I_Z0028.
      ENDLOOP.
    Regards,
    Kalidas.T

    Hi,
    Display fields
    do like this..
    REPORT YPRG1 .
    TABLES: Z0028.
    DATA:I_Z0028 TYPE Z0028 OCCURS 0 WITH HEADER LINE .
    SELECT * FROM Z0028 INTO TABLE I_Z0028 .
IF SY-SUBRC <> 0.
    WRITE:/ ' NO DATA'.
    ENDIF.
LOOP AT I_Z0028.
  WRITE:/ I_Z0028-field1,
          I_Z0028-field2,
          I_Z0028-field3.
ENDLOOP.
    Regards,
    Prashant

  • Sub-apps event passing in multi-versioned application - type coercion fails

    I have the following structure on an application I'm working on:
    AirWrapper, basically just an AIR wrapper for the core. Includes AIR specific definitions of various classes needed by the core.
        |---- loads core.swf using SWFLoader, contains functionality for handling data saving/loading, subapp loading, etc.
                 |-----loads subapp.swf using SWFLoader, with loadForCompatibility = true;
    From subapp.swf I'm dispatching a ServiceReferenceEvent (custom event), which is supposed to be caught by the core.swf. If I set loadForCompatibility = false, then it works as it should. However if I set loadForCompatibility = true, then I get a
    TypeError: Error #1034: Type Coercion failed: cannot convert fi.activeark.platform.events::ServiceReferenceEvent@1e0afeb1 to fi.activeark.platform.events.ServiceReferenceEvent.
    Now if I've understood the documentation correctly, the reason this error pops up is that the subapp is loaded into a sibling application domain of core.swf and is therefore using its own definition of ServiceReferenceEvent (even though both the core and the subapp in fact reference the same .as file). My research on the issue suggests that I should be able to fix this by bootstrapping the ServiceReferenceEvent class into AirWrapper. This should make both the core and the subapp use the same definition.
    The problem is that this doesn't work. Does anyone have any thoughts on why? Alternatively, I would be happy if someone could suggest an alternate method for communicating between a main and a subapp swf with loadForCompatibility = true (preferably one that would allow me to pass custom events)? I need loadForCompatibility = true because we're expecting subapps to be created with different versions of the Flex framework over the life cycle of the application.

    Thanks Alex! I got it to work as you described. I realized that I of course also need to marshal every object that the event I send through references, which means even more extra code. Another issue is that the event that I'm marshalling is referencing an object that must implement a particular interface. What I've tried is creating a basic object, add references to the implemented functions, and then type cast it to the interface in question. I've also tried type casting directly but all fails. Is it possible to marshal an interface?
    Basically what I want to have happen is that the subapp I'm loading dispatches an event to the core. The event contains a reference to an object which implements an interface that the core.swf also knows about. The methods of this interface are then used as callbacks from the core.swf. If marshalling an interface is not possible, do you have any suggestions on how I could achieve a similar thing in some other way?

  • Unicode type X issue

    Hi,
    I have this code:
        X1 type X value '13',
        X2 type X value '10'.
    Can anyone tell me how to replace this code? For example, X1 type X value '09' can be replaced with the CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB attribute.
    Thanks in advance!
    Regards,
    Aleria

    DATA: x1 TYPE c,
          x2 TYPE c.
    x1 = cl_abap_conv_in_ce=>uccp( '0013' ).
    x2 = cl_abap_conv_in_ce=>uccp( '0010' ).
    check this link :[link|http://help.sap.com/saphelp_nw04/helpdata/en/79/c554d9b3dc11d5993800508b6b8b11/content.htm]
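CL_ABAP_CONV_IN_CE=>UCCP constructs a character from its Unicode code point, given as four hexadecimal digits. Note that ABAP x-literals are hexadecimal too, so the usual CR/LF pair is written '0D'/'0A' (code points 000D/000A), not decimal 13/10. An illustrative Python sketch of what UCCP does:

```python
# Build a character from its Unicode code point given as four hex
# digits, analogous to CL_ABAP_CONV_IN_CE=>UCCP( '....' ).
def uccp(hex_code: str) -> str:
    return chr(int(hex_code, 16))

cr = uccp("000D")  # carriage return, decimal 13
lf = uccp("000A")  # line feed, decimal 10
tab = uccp("0009")  # horizontal tab, like CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB
assert cr + lf == "\r\n" and tab == "\t"
```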

  • Unicode vs. Non-Unicode with MI

    Hi all,
    We are looking at doing an installation of MI. We are currently Non-Unicode on our ERP system, although a Unicode conversion project is not too far away.
    According to the document: "How To Design a SAP NetWeaver Based System Landscape" (NW 7.0, ver 1, March 2008) on pg 15, it says:
    "It is required that an MI system with AS ABAP has the same Unicode type as the MI back-end system: If your MI back-end system is a Unicode system, also install a Unicode MI system (that is, both AS ABAP and AS Java on Unicode); if your MI back-end system is non-Unicode, install an MI system with non-Unicode AS ABAP (that is, AS Java on Unicode and AS ABAP on non-Unicode).
    Recommendation
    For new installations, it is recommended to install a Unicode based system for all SAP applications and SAP NetWeaver deployments that require an AS ABAP usage type based system (AS JAVA is only available in Unicode). In future all new installations from SAP will be Unicode only."
    This is clearly conflicting...We are an English-only shop, we have a Unicode conversion imminent, and SAP's direction is clearly Unicode.
    Should we even consider a Non-Unicode install?
    Thanks,
    Troy

    Hi,
    install a Unicode middleware system, for the following reasons:
    - Soon the back end will be Unicode enabled.
    - The MI client is Unicode enabled anyway, which means end users can enter Unicode characters anyway. After the Unicode conversion of the back end, your MI server would be the only non-Unicode component.
    - Having the middleware as Unicode does no harm, even if the back end is non-Unicode (apart from the bigger DB size due to the code page).
    Best Regards, Thomas

  • Unicode bound parameter in criteria

    I am using Oracle 9iR2 with ODBC driver version 9.2.0.5.4 (the latest as of this writing) with a Visual C++ application. I am trying to perform a query from a table with an NVARCHAR2 column in the WHERE clause as a bound parameter. The query looks like this:
    SELECT * FROM UNICODE_P WHERE UNICODE_COLUMN = ?
    The column UNICODE_COLUMN is an NVARCHAR2 (UTF-16) column. The table contains one row, and the column in question in that row contains 5 Japanese characters (10 bytes). When calling SQLBindParameter, I am using SQL_C_WCHAR for the C type and SQL_WVARCHAR for the SQL type. Attempting to fetch the data results in SQL_NO_DATA_FOUND.
    I also get SQL_NO_DATA_FOUND when I attempt an UPDATE with a unicode column in the criteria. I can perform INSERTs and UPDATEs using SQLBindParameter in this way (except for in the where clause in the update statement), and they work as expected.
    I have found that if I set the "Force SQL_WCHAR Support" option on my DSN, the query does return the row in question; however, this is not acceptable for my application, since that causes the driver to identify every normal character type column (char, varchar2, or clob) as being a Unicode type column.
    This works as expected when I run my app against other databases (MS SQL Server and IBM DB2). Has anyone experienced this problem and know how to get around it? Is it a bug in the Oracle ODBC driver?
    Any help is greatly appreciated!
    -Chris


  • About UTF8 ,Unicode, NVARCHAR2

    Hi,
    I have a UTF8 database and converted all the data types to NVARCHAR2 and NCHAR so I could store Unicode characters. I have changed the program that reads the file using UTL_FILE from fopen/get_line to fopen_nchar/get_line_nchar to make sure the program handles Unicode characters.
    In one of the existing programs, in the WHERE clause as well as in assignments of strings to NVARCHAR2 variables, should I prefix with N (where N converts the characters into Unicode) wherever the columns or variables are NVARCHAR2 or NCHAR data types?
    i.e.
    In the WHERE clause, let's say the status column of a table is STATUS NVARCHAR2(64). Should I change status = 'valid' to status = N'valid', where N converts the characters into the Unicode type explicitly, instead of relying on Oracle to convert implicitly?
    Please give your suggestions?
    Thanks.
    Vin.

    All our columns are NVARCHAR2 in the UTF8 database. I have been told to use N in the SELECT statement and in the WHERE clause just to be on the safe side, so that when a string is compared to an NVARCHAR2 column it does explicit conversion instead of depending on Oracle to do implicit conversion internally.
    select * from tab
    where status = N'testing'
    What do you think about adding this N?

  • Unable to purge cache on unix

    Hi gurus,
    I have installed OBIEE 11.1.1.5 on Red Hat Linux 5. When I try to purge the cache of this OBIEE server from a Windows system, it works perfectly fine. But when I try to clear the cache from a Unix system, it throws the following error message:
    ================================
    Oracle BI ODBC Client
    Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
    Connection open with info:
    [0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16
    You are not licensed to use this ODBC driver with the DataDirect ODBC Driver Manager under the license you have purchased. If you wish to purchase a license, then you may use the Driver Manager for a period of 15 days, during which time you are required to obtain a license. You can order a license by calling DataDirect Technologies at 800-876-3101 in North America and +44 (0) 1753-218 930 elsewhere. Thank you for your cooperation.
    ==========================
    This is the command I am using -
    ./nqcmd -d "AnalyticsWeb" -u biadmin -p biadmin123 -s "/usr/powercenter/OBIEE_DEV/Oracle_BI1/Purge/CachePurge.sql" -o "/usr/powercenter/OBIEE_DEV/Oracle_BI1/Purge/Cachepurge.log".
    I am sure about the user, password, and paths of the SQL and log files. Has anyone faced this issue? If you have any resolution, please guide me.
    Thanks in Advance,
    Regards,
    Sai.

    Oops, sorry. I didn't remember the beginning of your post.
    Remove your update of LD_LIBRARY_PATH.
    In 11g, the environment variables are set with the help of the opmn.xml file:
    http://gerardnico.com/wiki/weblogic/opmn.xml
    You can check it there:
    <ias-component id="coreapplication_obis1" inherit-environment="true">
    <environment>
    <variable id="LD_LIBRARY_PATH" value="$ORACLE_HOME/common/ODBC/Merant/5.3/lib$:$ORACLE_HOME/bifoundation/server/bin$:$ORACLE_HOME/bifoundation/web/bin$:$ORACLE_HOME/clients/epm/Essbase/EssbaseRTC/bin$:$ORACLE_HOME/bifoundation/odbc/lib$:$ORACLE_INSTANCE$:$ORACLE_HOME/lib:/u01/app/oracle/product/TimesTen/tt1122/lib" append="true"/>
    </environment>
    In 11g, you have to run bi-init to set your environment variables:
    http://gerardnico.com/wiki/dat/obiee/bi-init
    Did you do that ?

  • FM to convert XString to String

    I am developing a simple application to upload an Excel file and display its contents in a table in Solution Manager using Web Dynpro.
    But 'HR_KR_XSTRING_TO_STRING' is not available in DS1.
    Can you help me find a replacement for this function module?
    TYPES :
           BEGIN OF str_itab,
           name(10) TYPE c,
           age(10) TYPE c,
           END OF str_itab.
           DATA : t_table1 TYPE STANDARD TABLE OF str_itab,
             i_data TYPE STANDARD TABLE OF string,
             lo_nd_sflight TYPE REF TO if_wd_context_node,
             lo_el_sflight TYPE REF TO if_wd_context_element,
             l_string TYPE char200,
             fs_table TYPE str_itab,
             l_xstring TYPE char200,
             fields TYPE string_table,
             lv_field TYPE string.
           DATA : t_table TYPE if_main=>elements_data_tab,
             data_table TYPE if_main=>elements_data_tab.
    *   get single attribute
              wd_context->get_attribute(
                EXPORTING name  = `DATASOURCE`
                IMPORTING value = l_xstring ).
      CALL FUNCTION 'HR_KR_XSTRING_TO_STRING'
        EXPORTING
          in_xstring = l_xstring
        IMPORTING
          out_string = l_string.
      SPLIT l_string  AT cl_abap_char_utilities=>newline INTO TABLE i_data.
    *   Bind with table element.
      LOOP AT i_data INTO l_string.
        SPLIT l_string AT cl_abap_char_utilities=>horizontal_tab INTO TABLE fields.
        READ TABLE fields INTO lv_field INDEX 1.
        fs_table-name = lv_field.
        READ TABLE fields INTO lv_field INDEX 2.
        fs_table-age = lv_field.
        APPEND fs_table TO t_table1.
      ENDLOOP.
      lo_nd_sflight = wd_context->get_child_node( 'DATA_TAB' ).
      lo_nd_sflight->bind_table( t_table1 ).
    Thanks in advance
    Akshatha

    It is not appropriate to ask general ABAP questions such as this in the Web Dynpro ABAP Forum. As your punishment I will help with your question.
    In this sample, buffer is the XSTRING and text_buffer is the string. You have to supply an encoding to tell the system what codepage/Unicode type the XSTRING is in.
    data: convin type ref to cl_abap_conv_in_ce.

    call method cl_abap_conv_in_ce=>create
      exporting
        encoding = encoding
        input    = buffer
      receiving
        conv     = convin.

    call method convin->read
      importing
        data = text_buffer.
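An XSTRING is just a byte sequence, so turning it into a string means decoding those bytes with the correct encoding, which is what the snippet above does. The same step in an illustrative Python sketch (the file content here is invented):

```python
# Decode a raw byte buffer (the XSTRING) into a text string, the step
# performed above by cl_abap_conv_in_ce=>create( ... ) plus read( ).
buffer = "name\tage\nalice\t30".encode("utf-8")  # hypothetical uploaded file content

# The encoding must match the codepage the file was really written in.
text_buffer = buffer.decode("utf-8")

# Then split into lines and tab-separated fields, as the ABAP code does.
lines = text_buffer.split("\n")
fields = lines[1].split("\t")
assert fields == ["alice", "30"]
```

If the encoding you pass does not match the file's real codepage, the decode either fails or silently produces wrong characters, which is why the answer stresses supplying it explicitly.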

  • Crystal XI R2 exporting issues with double-byte character sets

    NOTE: I have also posted this in the Business Objects General section with no resolution, so I figured I would try this forum as well.
    We are using Crystal Reports XI Release 2 (version 11.5.0.313).
    We have an application that can be run using multiple cultures/languages, chosen at login time. We have discovered an issue when exporting a Crystal report from our application while using a double-byte character set (Korean, Japanese).
    The original text when viewed through our application in the Crystal preview window looks correct:
    性能 著概要
    When exported to Microsoft Word, it also looks correct. However, when we export to PDF or even RPT, the characters are not being converted. The double-byte characters are rendered as boxes instead. It seems that the PDF and RPT exports are somehow not making use of the linked fonts Windows provides for double-byte character sets. This same behavior is exhibited when exporting a PDF from the Crystal report designer environment. We are using Tahoma, a TrueType font, in our report.
    I did discover some new behavior that may or may not have any bearing on this issue. When a text field containing double-byte characters is just sitting on the report in the report designer, the box characters are displayed where the Korean characters should be. However, when I double click on the text field to edit the text, the Korean characters suddenly appear, replacing the boxes. And when I exit edit mode of the text field, the boxes are back. And they remain this way when exported, whether from inside the design environment or outside it.
    Has anyone seen this behavior? Is SAP/Business Objects/Crystal aware of this? Is there a fix available? Any insights would be welcomed.
    Thanks,
    Jeff

    Hi Jef
    I searched on the forums and got the following information:
    1) If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys. Once you have located the registry key just mentioned, highlight the font face name of the font you want to link to and then, from the Edit menu, click Modify. On a new line in the "Value data" field of the Edit Multi-String dialog box, enter "path and file to link to," "face name of the font to link".
    2) "Fonts in general, especially TrueType and OpenType, are 'Unicode'."
    Since you are using a TrueType font, it may be a Unicode type already. However, if Bud's suggestion works, then nothing better than that.
    Also, could you please check the output from crystal designer with different version of pdf than the current one?
    Meanwhile, I will look out for any additional/suitable information on this issue.

  • PDF with garbled text after editing on a Mac

    Hi all, hope someone can help. I'm struggling to get to the bottom of a very bizarre issue. I have a number of PDFs that were originally created using ABBYY's FineReader OCR software. They display fine, and I can "copy" text from the documents to the clipboard with no issues.
    However, as soon as I bring them over to the Mac side and make a change to a document using OS X's Preview, things go wrong. As soon as any changes are made, I can no longer copy text from the document to the clipboard - the text that ends up on the clipboard is garbled. However, the display text is still perfectly legible.
    For example, here's a link to a very simple and basic PDF document that was originally a perfectly fine PDF but became mangled.
    Broken PDF
    Using Acrobat I've removed all the graphics and most of the text from the document, leaving just a single text box ("AUTUMN SPECIAL!").
    So if I highlight the text on the page which reads:
    AUTUMN SPECIAL!
    what ends up on the clipboard is:
    *)﴿*%& (﴾'"!# $  
    I've done a whole ton of reading about the internals of PDFs and I'm pretty certain that this is something to do with the CMAP character-to-glyph mappings; but I've no idea what to do to fix it. Acrobat's PreFlight check for the PDF/A 2a standard tells me:
    "Text cannot be mapped to Unicode"
    "Type 2 CID font: CIDToGIDMap invalid or missing"
    Also, an inventory report (available here) shows the letter "A" being mapped to a space, the letter "C" to an exclamation mark, etc. - all obviously wrong. However, "Analyze and Fix" won't fix the problem.
    Does anyone have any advice on how to fix this? I know it's definitely a bug somewhere on the Mac side and the obvious answer would be to avoid editing documents using OS X Preview, but unfortunately I have several hundred documents in this state. Is there any way to fix the mapping, or otherwise return the text to a state where it can be accurately copied from the document?
    Thanks very much in advance for any help or advice given.

    I doubt you could fix this. Generally a "garbled" file is considered unusable for copying, and that's that. If a PDF uses random mappings instead of standard ones it looks fine on screen, but text extraction is impossible.
    Short of converting every page to bitmap and OCRing again.
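The garbled copy/paste is exactly what a scrambled character-to-Unicode (ToUnicode CMap) table produces: the glyphs drawn on screen are correct, but text extraction goes through the mapping. A toy Python sketch of the effect (the mapping below is invented, loosely echoing what PreFlight reported):

```python
# Simulate a PDF whose ToUnicode CMap maps glyph codes to the wrong
# characters: display uses the glyph shapes (correct), but "copy"
# goes through the mapping table (garbled).
display_text = "AUTUMN"

# Hypothetical scrambled mapping, like "A" -> space, etc.
bad_cmap = {"A": " ", "U": "*", "T": ")", "M": "%", "N": "&"}

copied = "".join(bad_cmap.get(ch, "?") for ch in display_text)
assert copied != display_text  # right glyphs on screen, garbage on the clipboard
```

Fixing such a file means rebuilding the ToUnicode table to match the glyphs, which, as the reply notes, usually isn't practical short of re-OCRing.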

  • Cannot make AutoArchive delete messages in Outlook 2010

    I have recently upgraded to Outlook 2010 from Outlook 2007. I previously had AutoArchive set up to delete files from two of the folders in my pst file by date. I figured I would be able to set it up the same way in the new version. First, I set up the main
    autoarchive settings:
    Then I set up each folder I want auto-deleted individually:
    This worked fine in the old version of Outlook. However, in 2010, it doesn't ever delete any messages. When I run AutoArchive manually, that doesn't do anything either.
    Based on this page and various Google searches, I have already done the following testing / troubleshooting:
    In spot checks, I could not find any items that had the "Do not AutoArchive this item" checkbox marked.
    The majority of the Received dates are the same as the last Modified date. Just to be sure, I made the registry change described in the link above, then used Outlook normally for two days. No emails were deleted in that time.
    None of the emails in question are marked as Important or have a user-defined category.
    I ran the scanpst.exe on my pst file. It did say there were internal errors (no details of the errors, even when I clicked on the details button), so I let it fix them. Then, I ran it again and it said there were no errors. I used Outlook normally for two more
    days. It still didn't delete any emails.
    The pst file is not on a network share.
    The pst file is a unicode-type file, which has a maximum size of either 20GB or 50GB, depending on what source you read. My file is nearly 3GB. It is therefore not full.
    My mailbox, which is on a Microsoft Exchange, is also not full.
    None of this changes the fact that the AutoArchive is not deleting any emails as it is set to do. Am I missing something, or does this feature just not work in Outlook 2010? Thanks in advance for any help with this, as it is annoying.

    Bob-
    I finally gave up in frustration.  I had to hunt down a third party utility to dump all my Outlook for Mac data back into a format that could be used by Outlook for Windows.  Outlook for Mac can import from several sources, including directly from .pst files, but gives you no way to go the other way.  Installed Outlook in Windows running in Parallels and dumped Outlook for Mac.  Outlook is just about the only program I run in Windows anymore.
    By the way, none of the suggestions in the Microsoft forum came close to solving this.  I wouldn't recommend Outlook for Mac to anyone.

  • Bug in 10.1.2: Encodings

    Hi !
    Our company uses JDeveloper for developing our solution. We wanted to migrate from 9.0.5.2 to 10.1.2, but we encountered a problem.
    The encoding can only be set to cp1250. I imagine that this is a bug, since in 9.0.5.2 this works OK. Is it possible to get a patch to fix this problem? We would need at least all the Unicode types.
    Andy

    Hi !
    I found out where the problem is. On startup, the "Default" setting is checked, and this is the problem. When you uncheck this option, the encoding list is not repopulated, so you have to force it (by closing the Preferences window and opening it again), and then it works. A coworker of mine discovered this quite accidentally. This should be fixed, by either populating the list always, or repopulating the list after the option is changed. I suppose the first option would be better.
    Andy

  • Special characters not shown in OBIEE report

    Hi
    We are facing an issue with column values containing special characters in the reports (OBIEE 11g, 11.1.1.6.5). Instead of the actual column values, i.e. the special characters, NULL is displayed. It would be very helpful if any work-around/pointer could be provided for the resolution of this issue.
    Thanks

    Hi,
    Go with Dhar's suggestion. While installing the database itself, there is a "Unicode type" option for special characters.
    Now create one more instance and install the database using the Unicode option to resolve this.
    I faced this for Arabic characters, and this is how I found the setting.
    for more reference http://mkashu.blogspot.com
    regards
    VG
