Unicode Literal to Unicode Character

I have a list of Unicode code points (00C1, 00E1, 0103, etc.).
I am displaying them as \u00C1 (in a Java string: System.out.println("\\u00C1");).
How would I go about converting that literal string to the actual Unicode character?
I want \\u00C1 to become the actual \u00C1 character.
All I'm really doing right now is letting the user enter that data, parsing their string for all \uXXXX escape sequences, and turning them into the actual Unicode characters.

// Parse the four hex digits and cast the resulting code point to a char:
char c = (char) Integer.valueOf("00C1", 16).intValue();
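
Building on that, here is a minimal sketch (the class and method names are my own, not from the thread) that scans the user's input for \uXXXX escape sequences and replaces each with the character it denotes:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnicodeUnescape {
    // Matches a literal backslash, 'u', and exactly four hex digits.
    private static final Pattern UNICODE_ESCAPE = Pattern.compile("\\\\u([0-9A-Fa-f]{4})");

    // Replaces every \uXXXX escape in the input with the character it denotes.
    public static String unescape(String input) {
        Matcher m = UNICODE_ESCAPE.matcher(input);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Prints: A acute: Á, a acute: á
        System.out.println(unescape("A acute: \\u00C1, a acute: \\u00E1"));
    }
}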

Similar Messages

  • Retrieve the unicode from a chinese character stored in MS Access database

    Hi,
    I have a table in an MS Access database with Chinese characters. When I read it with my SQL queries I get "?" instead of the Chinese character. When I compute the code point I get 003F, which is the code for the question mark. How do I get the correct code point of the Chinese character?
    Connection conn = DriverManager.getConnection("jdbc:odbc:myDB");
    Statement stmt = conn.createStatement();
    String sqlString = "SELECT Char FROM MyTable WHERE id=2";
    ResultSet rs = stmt.executeQuery(sqlString);
    String s1 = rs.getString("Char");    // s1 is "?" instead of the Chinese character
    byte[] b = s1.getBytes(Charset.forName("UTF-16"));
    int firstByte = 0;
    short anUnsignedByte = 0;
    String unicode = "";
    for (int n = 0; n < b.length; n++) {
        String temp = "";
        firstByte = (0x000000FF & ((int) b[n]));
        anUnsignedByte = (short) firstByte;
        temp = Integer.toHexString(anUnsignedByte);
        if (temp.length() == 1) {
            temp = "0" + temp;
        }
        unicode = unicode + temp;
    }
    unicode = unicode.substring(4);
    System.out.println("unicode: " + unicode);  // Here I get the code 003f, for the question mark.

    Apparently getString() is already applying the wrong encoding to the bytes which are in the data, so you can't do anything with what it returns. (This isn't surprising since it's Access you are asking about.) I suggest using the getObject() method and see what it returns. If it returns the same string then you are out of luck. But maybe you'll get lucky and it will return an array of bytes containing the correct data. Don't get too hopeful though.
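    As a rough sketch of that suggestion (reusing the rs result set from the question; whether the JDBC-ODBC bridge actually hands back a byte array here is exactly what you would be testing):

    Object raw = rs.getObject("Char");
    if (raw instanceof byte[]) {
        // The driver handed us raw bytes: dump them in hex so the real code points can be inspected.
        StringBuilder hex = new StringBuilder();
        for (byte value : (byte[]) raw) {
            hex.append(String.format("%02x", value));
        }
        System.out.println("raw bytes: " + hex);
    } else {
        // The driver already decoded the value (possibly with the wrong code page), so the damage is done.
        System.out.println("getObject() returned: " + raw + " (" + raw.getClass().getName() + ")");
    }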

  • How to setup the connection between non-unicode client and unicode server?

    Hi,
    we ran a program on a 4.6x non-Unicode system that calls data from a Unicode server (ECC 6.0); the program was not run on the Unicode server itself.
    When the program finished, the English characters looked normal, but Korean, Chinese, etc. came out as unreadable text.
    I think the conversion from Unicode to non-Unicode is the main reason.
    The general SAP Notes cover the RFC connection configuration on the Unicode side,
    but I want to know how to set up the non-Unicode system to interface with the Unicode system.
    I hope for your great answers.

    Hi,
    For that you have to install the Korean and Chinese languages separately on the non-Unicode system, then try to connect to the Unicode system. It should work for you.
    Regards,
    Vijay Kumar

  • Peoplesoft convert Oracle non-unicode database to unicode database

    I am following doc 1437384.1 to convert a PeopleSoft database from non-Unicode to Unicode.
    I use the following export statement (as user PS)
    SET NO TRACE;
    SET OUTPUT output_file.dat;
    SET NO DATA;
    EXPORT *;
    And the following import statement (as user sysadm)
    SET NO TRACE;
    SET NO DATA;
    SET INPUT output_file;
    SET LOG log_file;
    SET UNICODE ON;
    SET STATISTICS OFF;
    SET ENABLED_DATATYPE 9.0;
    IMPORT *;
    Before I do the Data Pump import, I am comparing the objects on the source database:
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE COUNT(*)
    INDEX 33797
    LOB 2775
    TABLE 28829
    TRIGGER 9
    VIEW 21208
    on oracsc63 (targetdb):
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE COUNT(*)
    INDEX 23748
    LOB 2170
    TABLE 19727
    I don't have the same number of objects. When I do the import, this means that around 10000 tables will not be in the UTF-8 character set.
    Any ideas how I can solve this? Who has experience with these PeopleSoft conversions?

    Hello Jacques,
    please check sapnote #808505 (Secondary connection to Oracle DB w/ different character set).
    Regards
    Stefan

  • Unicode and non-unicode

    What is the difference between Unicode and non-Unicode?
    Please explain Unicode briefly.
    Thanks in advance

    A 16-bit character encoding scheme allowing characters from Western European, Eastern European, Cyrillic, Greek, Arabic, Hebrew, Chinese, Japanese, Korean, Thai, Urdu, Hindi and all other major world languages, living and dead, to be encoded in a single character set. The Unicode specification also includes standard compression schemes and a wide range of typesetting information required for worldwide locale support. Symbian OS fully implements Unicode. A 16-bit code to represent the characters used in most of the world's scripts. UTF-8 is an alternative encoding in which one or more 8-bit bytes represents each Unicode character. A 16-bit character set defined by ISO 10646. A code similar to ASCII, used for representing commonly used symbols in a digital form. Unlike ASCII, however, Unicode uses a 16-bit dataspace, and so can support a wide variety of non-Roman alphabets including Cyrillic, Han Chinese, Japanese, Arabic, Korean, Bengali, and so on. Supporting common non-Roman alphabets is of interest to community networks, which may want to promote multicultural aspects of their systems.
    ABAP Development under Unicode
    Prior to Unicode the length of a character was exactly one byte, allowing implicit typecasts or memory-layout oriented programming. With Unicode this situation has changed: One character is no longer one byte, so that additional specifications have to be added to define the unit of measure for implicit or explicit references to (the length of) characters.
    Character-like data in ABAP are always represented with the UTF-16 - standard (also used in Java or other development tools like Microsoft's Visual Basic); but this format is not related to the encoding of the underlying database.
    A Unicode-enabled ABAP program (UP) is a program in which all Unicode checks are effective. Such a program returns the same results in a non-Unicode system (NUS) as in a Unicode system (US). In order to perform the relevant syntax checks, you must activate the Unicode flag in the screens of the program and class attributes.
    In a US, you can only execute programs for which the Unicode flag is set. In future, the Unicode flag must be set for all SAP programs to enable them to run on a US. If the Unicode flag is set for a program, the syntax is checked and the program executed according to the rules described in this document, regardless of whether the system is a US or a NUS. From now on, the Unicode flag must be set for all new programs and classes that are created.
    If the Unicode flag is not set, a program can only be executed in an NUS. The syntactical and semantic changes described below do not apply to such programs. However, you can use all language extensions that have been introduced in the process of the conversion to Unicode.
    As a result of the modifications and restrictions associated with the Unicode flag, programs are executed in both Unicode and non-Unicode systems with the same semantics to a large degree. In rare cases, however, differences may occur. Programs that are designed to run on both systems therefore need to be tested on both platforms.
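    To see the "one character is no longer one byte" point in concrete terms, here is a small Java illustration (mine, not part of the original reply) of how the byte length of the same character differs between a single-byte code page, UTF-8 and UTF-16:

    import java.nio.charset.StandardCharsets;

    public class CharBytes {
        public static void main(String[] args) {
            String text = "Á";  // U+00C1
            System.out.println(text.getBytes(StandardCharsets.ISO_8859_1).length); // 1 byte in a single-byte code page
            System.out.println(text.getBytes(StandardCharsets.UTF_8).length);      // 2 bytes in UTF-8
            System.out.println(text.getBytes(StandardCharsets.UTF_16BE).length);   // 2 bytes in UTF-16
        }
    }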
    Refer to the below related threads
    Re: Why the select doesn't run?
    what is unicode
    unicode
    unicode
    Regards,
    Santosh

  • Unicode vs non-unicode

    Hi All,
    Can anyone please tell me what is the differenece between unicode and non-unicode SAP systems. I heard that all the new versions are unicode. What is the difference, purpose and significance of these?
    Thanks
    Cyrus

    Hi Cyrus,
    You are correct that all new SAP Applications and SAP NetWeaver technology will only be available based on unicode (example: SAP NetWeaver MDM, SAP NetWeaver Visual Composer). Unicode code pages enable a system to store and process almost every character symbol in the world.
    This means that the applications run on and can process characters from a Unicode code page, as opposed to a single-character-set code page like Latin-1 (restricted to Western European language characters only). The database and its schemas are able to store all Unicode characters.
    The non-Unicode single-code-page systems are supported by SAP for older applications for historical reasons: Unicode was not available when they were released. These systems are restricted to processing characters from their specific code page only. This means there can be restrictions if they need to support language combinations that span multiple code pages; basically it is not possible. In addition, some languages, such as Thai, are not fully available on single-code-page systems, and the Euro symbol is not available.
    Unicode is the future for all applications, SAP or otherwise. The significance at the moment is for customers who need to convert to Unicode to support additional business requirements, such as additional languages. Converting a system, although a simple process, is not trivial in terms of time and resources.
    For additional info check out the following sites:
    http://service.sap.com/unicode@SAP
    http://service.sap.com/i18N
    http://www.unicode.org
    I hope this helps,
    Mike.

  • How will I know my existing DB is in Unicode or Non-Unicode

    I know you can get character set information with the following query:
    select PARAMETER, value from nls_database_parameters
    where parameter = 'NLS_CHARACTERSET';
    The result is WE8MSWIN1252 (my DB is hosted on a Windows Server 2003 64-bit machine and was created with default parameters).
    My question is: how do I know whether WE8MSWIN1252 is Unicode or non-Unicode?
    Thank you for reading.
    Thanks

    Only the values 'AL32UTF8' and 'UTF8' indicate a Unicode encoding if returned by this query. All other character sets are non-Unicode. The query:
    select PARAMETER, value from nls_database_parameters
    where parameter = 'NLS_NCHAR_CHARACTERSET';
    can also return AL16UTF16, which is Unicode as well, but this character set is valid for NCHAR/NVARCHAR2/NCLOB data types only.
    -- Sergiusz
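    If you want to run the same check from application code, here is a hedged JDBC sketch (the connection string and credentials are placeholders, and it assumes an Oracle JDBC driver on the classpath):

    import java.sql.*;

    public class CharsetCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "select value from nls_database_parameters "
                   + "where parameter = 'NLS_CHARACTERSET'")) {
                if (rs.next()) {
                    String cs = rs.getString(1);
                    boolean unicode = cs.equals("AL32UTF8") || cs.equals("UTF8");
                    System.out.println(cs + (unicode ? " is a Unicode character set" : " is not Unicode"));
                }
            }
        }
    }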

  • Differnce between unicode and non unicode

    Hi everybody, I want to know the difference between Unicode and non-Unicode and for what purposes they are used. Please explain briefly: what is the transaction code for this, how do I check the version, and how do I convert from Unicode to non-Unicode?
    Advance Thanks
    Vishnuprasad.G

    Hello Vishnu,
    before Release 6.10, SAP software only used codes where every character is represented by one byte; character sets like these are therefore also called single-byte code pages. However, each of these character sets is only suitable for a limited number of languages.
    Problems arise if you try to work with texts written in different incompatible character sets in one central system. If, for example, a system only has a West European character set, other characters cannot be correctly processed.
    As of 6.10, to resolve these issues, SAP has introduced Unicode. Each character is generally mapped using 2 bytes and this offers a maximum of 65 536 bit combinations.
    Thus, a Unicode-compatible ABAP program is one where all Unicode checks are in effect. Such programs return the same results in UC systems as in non-UC systems. To perform the relevant syntax checks, you must activate the "UC checks" flag in the screens of the program and class attributes.
    With transaction UCCHECK you can check a program for syntax errors in a Unicode environment.
    Bye,
    Peter

  • About unicode and non-unicode

    Hi experts,
    can anybody tell me
    what is unicode and non-unicode in interview point of view.Just 2 or 3 sentences....
    Thanks in advance

    Unicode provides multilingual capability in the SAP system.
    The important Unicode transaction is UCCHECK: if you give it a report name, it lists the different error codes. In general you get errors for structure mismatches, obsolete statements, OPEN DATASET, DESCRIBE statements and so on.
    Moreover, all obsolete function modules have to be replaced. Look at the following examples; they may help you.
    Before the Unicode error:
       lt_hansp = lt_0201-endda+0(4) - lt_0002-gbdat+0(4).
    Solution.
    data :abc type i,
          def type i.
    move  lt_0201-endda+0(4) to abc.
    move    lt_0002-gbdat+0(4) to def.
    lt_hansp-endda = abc - def.
    Before the Unicode error:
       WRITE: /1 'CO:',CO(110).
    Solution.
    FIELD-SYMBOLS: <fs_co> type any.
    assign co to <fs_co>.
    WRITE: /1 'CO:',<fs_co>(110).
    DESCRIBE 002     In Unicode, DESCRIBE LENGTH can only be used with the IN BYTE MODE or IN CHARACTER MODE addition.
    Before the Unicode error:
        describe field <tab_feld> length len.
    Solution.
    describe field <tab_feld> length len IN character mode.
    Before the Unicode error:
        DESCRIBE FIELD DOWNTABLA LENGTH LONG.
    Solution.
    DESCRIBE FIELD DOWNTABLA LENGTH LONG IN byte MODE.
    DO 002     Could not specify the access range automatically. This means that you need a  RANGE addition     
    Before the Unicode error:
    DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1)
    Solution.
      DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1) RANGE aktuell .
    Before the Unicode error:
    DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2.
    Solution.
    DATA: BEGIN OF text,
            gtx_line1 TYPE rp50m-text1,
            gtx_line2 TYPE rp50m-text2,
            gtx_line3 TYPE rp50m-text3,
          END OF text.
    DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2 RANGE text..
    Before the Unicode error:
    DO ev_restlen TIMES
        VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1).
    Solution.
      DO ev_restlen TIMES
         VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1) range ev_hstr.
    MESSAGEG!2     IT_TBTCO and "IT_ALLG" are not mutually convertible. In Unicode programs, "IT_TBTCO" must have the same structure layout as "IT_ALLG", independent of  the length of a Unicode character.     
    Before the Unicode error:
             IT_TBTCO = IT_ALLG.
    Solution.
    IT_TBTCO-header = IT_ALLG-header.
    MESSAGEG!3     FIELDCAT_LN-TABNAME and "WA_DISP" are not mutually convertible in a Unicode program     
    Before the Unicode error:
         IF GEH_TA+15(73) NE RETTER+15(73).
    Solution.
          FIELD-SYMBOLS: <GEHTA> TYPE ANY,
                         <RETTER> TYPE ANY.
          ASSIGN: GEH_TA TO <GEHTA>,
                  RETTER TO <RETTER>.
    IF <GEHTA>+15(73) NE <RETTER>+15(73).
    Before the Unicode error:
           IMP_EP_R3_30 = RECRD_TAB-CNTNT.
    Solution.
        FIELD-SYMBOLS:  <imp_ep_r3_30> TYPE X,
                          <recrd_tab-cntnt> TYPE X.
          ASSIGN IMP_EP_R3_30 TO <imp_ep_r3_30> CASTING.
          ASSIGN RECRD_TAB-CNTNT TO <recrd_tab-cntnt> CASTING.
           <imp_ep_r3_30> = <recrd_tab-cntnt>.
    Before the Unicode error:
                and    pernr  = gt_pernr
    Solution.
                  and    pernr  = gt_pernr-pernr
    MESSAGEG!7     EBC_F0 and "EBC_F0_255(1)" are not comparable in Unicode programs.     
    Before the Unicode error:
       IF CHARACTER NE LINE_FEED.
    Solution.
        IF CHARACTER NE LINE_FEED-X.
    MESSAGEG!A     A line of "IT_ZMM_BINE" and "OUTPUT_LINES" are not mutually convertible. In a  Unicode program "IT_ZMM_BINE" must have the same structure layout as  "OUTPUT_LINES" independent of the length of a Unicode character.     
    Before the Unicode error:
    *data: lw_wpbp  type pc206.
    Solution.
    data: lw_wpbp  type pc205.
    Before the Unicode error:
       LOOP AT seltab INTO  ltx_p0078.
    Solution.
    DATA: WA_SELTAB like line of SELTAB.
    CLEAR WA_SELTAB.
    MOVE-CORRESPONDING ltx_p0078 to wa_seltab.
    move-corresponding wa_seltab to ltx_p0078.
    MESSAGEG?Y     The line type of "DTAB" must be compatible with one of the types "TEXTPOOL".     
    Before the Unicode error:
    DATA:
       BEGIN OF dtab OCCURS 100.
         text(100),
    include structure textpool.
       End of changes
    SET TITLEBAR '001' WITH dtab-text+9.
    Solution.
    the following declaration should be mentioned in the declaration of the textpool.
    DATA:
       BEGIN OF dtab OCCURS 100.
      text(100),
    include structure textpool.
       End of changes
      SET TITLEBAR '001' WITH dtab-entry.
    MESSAGEG@1     TFO05_TABLE cannot be converted to a character-type field.     
    Before the Unicode error:
    WRITE: / PA0015, 'Fehler bei MODIFY'.
    Solution.
    WRITE: / PA0015+0, 'Fehler bei MODIFY'.
    MESSAGEG@3     ZL-C1ZNR must be a character-type data object (data type C, N, D, T or  STRING) .     
    Before the Unicode error:
         con_tab  TYPE x VALUE '09',
    Solution.
           con_tab  TYPE string VALUE '09',
    Before the Unicode error:
    data:   g_con_ascii_tab(1)  type x   value '09'.
    Solution.
       data:   g_con_ascii_tab  type STRING   value '09'.
    MESSAGEG@E     HELP_ANLN0 must be a character-type field (data type C, N, D, or T). an open  control structure introduced by "INTERFACE".
    Before the Unicode error:
    WRITE SATZ-MONGH TO SATZ-MONGH CURRENCY P0008-WAERS.
    WRITE SATZ-JAH55 TO SATZ-JAH55 CURRENCY P0008-WAERS.
    WRITE SATZ-EFF55 TO SATZ-EFF55 CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_ERSF TO SATZ-SOFE_ERSF CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_ERSP TO SATZ-SOFE_ERSP CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EIN TO SATZ-SOFE_EIN CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
    WRITE SATZ-ERHO_ERR TO SATZ-ERHO_ERR CURRENCY P0008-WAERS.
    WRITE SATZ-ERHO_EIN TO SATZ-ERHO_EIN CURRENCY P0008-WAERS.
    WRITE SATZ-JAH55_FF TO SATZ-JAH55_FF CURRENCY P0008-WAERS.
    Solution.
      DATA: SATZ1_MONGH(16),
            SATZ_JAH551(16),
            SATZ_EFF551(16),
            SATZ_SOFE_EREU1(16),
            SATZ_SOFE_ERSF1(16),
            SATZ_SOFE_ERSP1(16),
            SATZ_SOFE_EIN1(16),
            SATZ_ERHO_ERR1(16),
            SATZ_ERHO_EIN1(16),
            SATZ_JAH55_FF1(16).
      WRITE SATZ-MONGH TO SATZ1_MONGH CURRENCY P0008-WAERS.
      WRITE SATZ-JAH55 TO SATZ_JAH551 CURRENCY P0008-WAERS.
      WRITE SATZ-EFF55 TO SATZ_EFF551 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_EREU TO SATZ_SOFE_EREU1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_ERSF TO SATZ_SOFE_ERSF1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_ERSP TO SATZ_SOFE_ERSP1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_EIN TO SATZ_SOFE_EIN1 CURRENCY P0008-WAERS.
      WRITE SATZ-ERHO_ERR TO SATZ_ERHO_ERR1 CURRENCY P0008-WAERS.
      WRITE SATZ-ERHO_EIN TO SATZ_ERHO_EIN1 CURRENCY P0008-WAERS.
      WRITE SATZ-JAH55_FF TO SATZ_JAH55_FF1 CURRENCY P0008-WAERS.
      SATZ-MONGH = SATZ1_MONGH.
      SATZ-JAH55 = SATZ_JAH551.
      SATZ-EFF55 = SATZ_EFF551.
      SATZ-SOFE_EREU = SATZ_SOFE_EREU1.
      SATZ-SOFE_ERSF = SATZ_SOFE_ERSF1.
      SATZ-SOFE_ERSP = SATZ_SOFE_ERSP1.
      SATZ-SOFE_EIN = SATZ_SOFE_EIN1.
      SATZ-ERHO_ERR = SATZ_ERHO_ERR1.
      SATZ-ERHO_EIN = SATZ_ERHO_EIN1.
      SATZ-JAH55_FF = SATZ_JAH55_FF1.
    MESSAGEG-0     VESVR_EUR must be a character-type data object (data type C, N, D, T or  STRING).     
    Before the Unicode error:
                TRANSLATE vesvr_eur USING '.,'.
                TRANSLATE espec_eur USING '.,'.
                TRANSLATE fijas_eur USING '.,'.
    Solution.
                 data: vesvreur(16),
                       especeur(16),
                       fijaseur(16).
                 vesvreur = vesvr_eur.
                 especeur = espec_eur.
                 fijaseur = fijas_eur.
                 TRANSLATE vesvreur USING '.,'.
                 TRANSLATE especeur USING '.,'.
                 TRANSLATE fijaseur USING '.,'.
                 vesvr_eur = vesvreur.
                 espec_eur = especeur.
                 fijas_eur = fijaseur.
    MESSAGEG-D     A line of "LT_0021" is not convertible. The line type must have the same structure layout as "LT_0021" regardless of the length of a Unicode character.
    Before the Unicode error:
    data: lt_0021    like p0021 occurs 0 with header line.
    Solution.
        DATA: LT_0021 LIKE PA0021 OCCURS 0 WITH HEADER LINE.
    Before the Unicode error:
             append sim_data to p0007.
    Solution.
           DATA:wa_p0007 type p0007.
           move-corresponding sim_data to wa_p0007.
              append wa_p0007 to p0007.
    MESSAGEG-F     The structure "CO(110)" does not start with a character-type field. In Unicode  programs in such cases, offset/length declarations are not allowed      
    Before the Unicode error:
         TRANSFER COBEZ+8 TO DSN.
    Solution.
                FIELD-SYMBOLS: <fs_cobez> TYPE ANY.
                ASSIGN COBEZ TO <fs_cobez>.
                TRANSFER <fs_cobez>+8 TO DSN.
    Before the Unicode error:
         WRITE: /1 COBEZ+16.
    Solution.
    FIELD-SYMBOLS <F_COBEZ> TYPE ANY.
    ASSIGN COBEZ TO <F_COBEZ>.
          WRITE: /1 <F_COBEZ>+16.
    MESSAGEG-G     The length declaration "171" exceeds the length of the character-type start  (=38) of the structure. This is not allowed in Unicode programs.
    Before the Unicode error:
       write: /1 '-->',
                 pa0201(250).
    Solution.
    field-symbols <fs_pa0201> type any.
    ASSIGN pa0201 TO <fs_pa0201>.
        write: /1 '-->',
                  <fs_pa0201>(250).
    MESSAGEG-H     The offset declaration "160" exceeds the length of the character-type start (=126) of the structure. This is not allowed in Unicode programs.
    Before the Unicode error:
    WRITE:/ SATZ(80),
             / SATZ+80(80),
             / SATZ+160(80),
             / SATZ+240(80),
             / SATZ+320(27).
    Solution.
      FIELD-SYMBOLS <FS_SATZ> TYPE ANY.
      ASSIGN SATZ TO <FS_SATZ>.
      WRITE:/ <FS_SATZ>(80),
              / <FS_SATZ>+80(80),
              / <FS_SATZ>+160(80),
              / <FS_SATZ>+240(80),
              / <FS_SATZ>+320(27).
    MESSAGEG-I     The sum of the offset and length (=504) exceeds the length of the start (=323) of the structure. This is not allowed in Unicode programs .     
    Before the Unicode error:
              /5 PARAMS+80(80),
    Solution.
        FIELD-SYMBOLS: <PARAMS> TYPE ANY.
        ASSIGN PARAMS TO <PARAMS>.
               /5 <PARAMS>+80(80),
    MESSAGEGWH     P0041-DAR01 and "DATE_SPEC" are type-incompatible.     
    Before the Unicode error:
       DO 5 TIMES VARYING I0008 FROM P0008-LGA01 NEXT P0008-LGA02.
    Solution.
        DO 5 TIMES VARYING I0008-LGA FROM P0008-LGA01 NEXT P0008-LGA02. "D07K963133
    Before the Unicode error:
       DO VARYING ls_data_aux FROM p0041-dar01 NEXT p0041-dar02.
    Solution.
         DO VARYING ls_data_aux-dar01 FROM p0041-dar01 NEXT p0041-dar02.
    MESSAGEGY/     The type of the database table and work area (or internal table) "P0050" are  not Unicode-convertible      
    Before the Unicode error:
    select * from  pa9705 client specified
            into ls_9705
    Solution.
       select * from  pa9705 client specified
              into corresponding fields of  ls_9705
    Before the Unicode error:
         select        * from  pa0202 client specified
                into ls_0202
    Solution.
           select  * from  pa0202 client specified
                  into corresponding fields of ls_0202
    OPEN   001     One of the additions "FOR INPUT", "FOR OUTPUT", "FOR APPENDING" or "FOR UPDATE" was expected.     
    Before the Unicode error:
    OPEN DATASET FICHERO IN TEXT MODE.
    Solution.
      OPEN DATASET FICHERO IN TEXT MODE FOR INPUT ENCODING NON-UNICODE.
    OPEN   002     IN... MODE was expected.     
    Before the Unicode error:
    OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE.
    Solution.
        OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE ENCODING non-unicode.
    OPEN   004     In "TEXT MODE" the "ENCODING" addition must be specified.     
    Before the Unicode error:
       open dataset dat for output in text mode.
    Solution.
         open dataset dat for output in text mode ENCODING NON-UNICODE.
    UPLO     Upload/Ws_Upload and Download/Ws_Download are obsolete, since they are not  Unicode-enabled; use the class cl_gui_frontend_services
    Before the Unicode error:
    move p_filein to disk_datei.
      CALL FUNCTION 'WS_UPLOAD'
           EXPORTING
                filename        = disk_datei
                FILETYPE        = FILETYPE
           TABLES
                DATA_TAB        = DISK_TAB
           EXCEPTIONS
                FILE_OPEN_ERROR = 1
                FILE_READ_ERROR = 2.
    Solution.
    DATA: file_name type string.
    move p_filein to file_name.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
        FILENAME                = file_name
        FILETYPE                = 'ASC'
        HAS_FIELD_SEPARATOR     = 'X'
       HEADER_LENGTH           = 0
       READ_BY_LINE            = 'X'
       DAT_MODE                = SPACE
       CODEPAGE                = SPACE
       IGNORE_CERR             = ABAP_TRUE
       REPLACEMENT             = '#'
    *   VIRUS_SCAN_PROFILE      =
    * IMPORTING
    *   FILELENGTH              =
    *   HEADER                  =
      CHANGING
        DATA_TAB                = disk_tab[]
      EXCEPTIONS
        FILE_OPEN_ERROR         = 1
        FILE_READ_ERROR         = 2
        NO_BATCH                = 3
        GUI_REFUSE_FILETRANSFER = 4
        INVALID_TYPE            = 5
        NO_AUTHORITY            = 6
        UNKNOWN_ERROR           = 7
        BAD_DATA_FORMAT         = 8
        HEADER_NOT_ALLOWED      = 9
        SEPARATOR_NOT_ALLOWED   = 10
        HEADER_TOO_LONG         = 11
        UNKNOWN_DP_ERROR        = 12
        ACCESS_DENIED           = 13
        DP_OUT_OF_MEMORY        = 14
        DISK_FULL               = 15
        DP_TIMEOUT              = 16
        NOT_SUPPORTED_BY_GUI    = 17
        ERROR_NO_GUI            = 18
        others                  = 19.
    Before the Unicode error:
       CALL FUNCTION 'WS_DOWNLOAD'
            EXPORTING
                 filename = fich_dat
                 filetype = typ_fich
            TABLES
                 data_tab = t_down.
    Solution.
    data: filename1 type string,
          filetype1(10).
    move fich_dat to filename1.
    move typ_fich to filetype1.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
      EXPORTING
        FILENAME                  = filename1
        FILETYPE                  = filetype1
        WRITE_FIELD_SEPARATOR     = 'X'
      CHANGING
        DATA_TAB                  = t_down[].
    Before the Unicode error:
    *CALL FUNCTION 'UPLOAD'
        TABLES
             DATA_TAB                =  datos
       EXCEPTIONS
            CONVERSION_ERROR        = 1
            INVALID_TABLE_WIDTH     = 2
            INVALID_TYPE            = 3
            NO_BATCH                = 4
            UNKNOWN_ERROR           = 5
            GUI_REFUSE_FILETRANSFER = 6
            OTHERS                  = 7
    Solution.
    DATA: file_table type table of file_table,
          filetable type file_table,
          rc type i,
          filename type string.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>FILE_OPEN_DIALOG
      CHANGING
        FILE_TABLE              = file_table
        RC                      = rc
    EXCEPTIONS
       FILE_OPEN_DIALOG_FAILED = 1
       CNTL_ERROR              = 2
       ERROR_NO_GUI            = 3
       NOT_SUPPORTED_BY_GUI    = 4
       others                  = 5.
    READ table file_table into filetable index 1.
    move filetable to filename.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
        FILENAME                = filename
        FILETYPE                = 'ASC'
        HAS_FIELD_SEPARATOR     = 'X'
       HEADER_LENGTH           = 0
       READ_BY_LINE            = 'X'
       DAT_MODE                = SPACE
       CODEPAGE                = SPACE
       IGNORE_CERR             = ABAP_TRUE
       REPLACEMENT             = '#'
    *   VIRUS_SCAN_PROFILE      =
    * IMPORTING
    *   FILELENGTH              =
    *   HEADER                  =
      CHANGING
        DATA_TAB                = datos[]
      EXCEPTIONS
        FILE_OPEN_ERROR         = 1
        FILE_READ_ERROR         = 2
        NO_BATCH                = 3
        GUI_REFUSE_FILETRANSFER = 4
        INVALID_TYPE            = 5
        NO_AUTHORITY            = 6
        UNKNOWN_ERROR           = 7
        BAD_DATA_FORMAT         = 8
        HEADER_NOT_ALLOWED      = 9
        SEPARATOR_NOT_ALLOWED   = 10
        HEADER_TOO_LONG         = 11
        UNKNOWN_DP_ERROR        = 12
        ACCESS_DENIED           = 13
        DP_OUT_OF_MEMORY        = 14
        DISK_FULL               = 15
        DP_TIMEOUT              = 16
        NOT_SUPPORTED_BY_GUI    = 17
        ERROR_NO_GUI            = 18
        others                  = 19.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Before the Unicode error:
    CALL FUNCTION 'DOWNLOAD'
       EXPORTING
         filename            = p_attkit
         filetype            = 'ASC'
       TABLES
         data_tab            = tb_attrkit
       EXCEPTIONS
         invalid_filesize    = 1
         invalid_table_width = 2
         invalid_type        = 3
         no_batch            = 4
         unknown_error       = 5
         OTHERS              = 6.
    Solution.
               DATA : lv_filename    TYPE string,
                       lv_filen       TYPE string,
                       lv_path        TYPE string,
                       lv_fullpath    TYPE string.
                DATA: Begin of wa_testata,
                      lv_var(10) type c,
                      End of wa_testata.
                DATA: testata like standard table of wa_testata.
                OVERLAY p_attkit WITH lv_filename.
                CALL METHOD cl_gui_frontend_services=>file_save_dialog
                  EXPORTING
                *    WINDOW_TITLE         =
                *    DEFAULT_EXTENSION    =
                      default_file_name    = lv_filename
                *    WITH_ENCODING        =
                *    FILE_FILTER          =
                *    INITIAL_DIRECTORY    =
                   PROMPT_ON_OVERWRITE  = 'X'
                  CHANGING
                    filename             = lv_filen
                    path                 = lv_path
                    fullpath             = lv_fullpath
                *   USER_ACTION          =
                *   FILE_ENCODING        =
                  EXCEPTIONS
                    cntl_error           = 1
                    error_no_gui         = 2
                    not_supported_by_gui = 3
                    OTHERS               = 4.
                IF sy-subrc <> 0.
                  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                             WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
                ENDIF.
                CALL FUNCTION 'GUI_DOWNLOAD'
                        EXPORTING
                *        BIN_FILESIZE                    =
                          filename                        = lv_fullpath
                          filetype                        = 'ASC'
                        APPEND                          = ' '
                        WRITE_FIELD_SEPARATOR           = ' '
                        HEADER                          = '00'
                        TRUNC_TRAILING_BLANKS           = ' '
                        WRITE_LF                        = 'X'
                        COL_SELECT                      = ' '
                        COL_SELECT_MASK                 = ' '
                        DAT_MODE                        = ' '
                        CONFIRM_OVERWRITE               = ' '
                        NO_AUTH_CHECK                   = ' '
                        CODEPAGE                        = ' '
                        IGNORE_CERR                     = ABAP_TRUE
                        REPLACEMENT                     = '#'
                        WRITE_BOM                       = ' '
                        TRUNC_TRAILING_BLANKS_EOL       = 'X'
                        WK1_N_FORMAT                    = ' '
                        WK1_N_SIZE                      = ' '
                        WK1_T_FORMAT                    = ' '
                        WK1_T_SIZE                      = ' '
                *      IMPORTING
                *        FILELENGTH                      =
                        TABLES
                          data_tab                        = tb_attrkit
                          fieldnames                      = testata
                       EXCEPTIONS
                         file_write_error                = 1
                         no_batch                        = 2
                         gui_refuse_filetransfer         = 3
                         invalid_type                    = 4
                         no_authority                    = 5
                         unknown_error                   = 6
                         header_not_allowed              = 7
                         separator_not_allowed           = 8
                         filesize_not_allowed            = 9
                         header_too_long                 = 10
                         dp_error_create                 = 11
                         dp_error_send                   = 12
                         dp_error_write                  = 13
                         unknown_dp_error                = 14
                         access_denied                   = 15
                         dp_out_of_memory                = 16
                         disk_full                       = 17
                         dp_timeout                      = 18
                         file_not_found                  = 19
                         dataprovider_exception          = 20
                         control_flush_error             = 21
                     OTHERS                          = 22.
                IF sy-subrc <> 0.
                  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
                ENDIF.

  • Cannot convert between unicode and non-unicode string data types.

    I'm trying to copy the data from 21 tables in a SQL 2005 database to a MS Access database using SSIS. Before converting the SQL database from 2000 to 2005 we had this process set up as a DTS package that ran every month for years with no problem.  The only way I can get it to work now is to delete all of the tables from the Access DB and have SSIS create new tables each time. But when I try to create an SSIS package using the SSIS Import and Export Wizard to copy the SQL 2005 data to the same tables that SSIS itself created in Access I get the "cannot convert between unicode and non-unicode string data types" error message. The first few columns I hit this problem on were created by SSIS as the Memo datatype in Access and when I changed them to Text in Access they started to work. The column I'm stuck on now is defined as Text in the SQL 2005 DB and in Access, but it still gives me the "cannot convert" error.

    I was getting the same error while transferring data from SQL 2005 to Excel, but using the following methods I was able to transfer the data. Hopefully they also help you.
    1) Using the Data Conversion transformation
       The data type you need to select is DT_WSTR (Unicode in SQL Server 2005 terms).
    2) Using the Derived Column transformation
       The expression you need to use is:
        (DT_WSTR, 20) (note: 20 can be replaced by your character size)
    Note:
    Both methods above create a replica of your existing column (the default name will be "Copy of <column name>").
    While mapping the data, do not map the actual column to the destination; instead select the column that was created by either of the above transformations (the replicated column).

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task which has SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine
    on my local machine when I use the Data Conversion transformation to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Server Agent job, it gives me the unicode and non-unicode string data types error for that field. I have checked the registry
    settings and they are the same on my local machine and the server. I have tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What is Unicode and non Unicode data formats
    Unicode : 
    A Unicode character takes more bytes to store in the database. As we all know, many global businesses want to grow worldwide, and to do so they want to provide services to customers worldwide by supporting different languages like Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to do business and to attract more customers, and that makes life easier for both parties.
    To store customer data in the database, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to store these international characters. Big vendors like Oracle, Microsoft, IBM and others started providing international character support so that the data can be stored and retrieved correctly, avoiding hiccups when doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether non-Unicode data is stored using double-byte character sets. All non-East-Asian languages and Thai store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space used by a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS), so for these languages there is almost no difference in storage between non-Unicode and Unicode.
    Encoding Formats: 
    Some of the common encoding formats for Unicode are UCS-2, UTF-8, UTF-16 and UTF-32, and they have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses the UCS-2 encoding format to store Unicode data; under this mechanism, all Unicode characters are stored using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
    provider all internally represent Unicode data as UCS-2.
    The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data that is encoded as UTF-8 include:
    For example, if your business is using a website supporting ASP pages, then this is what happens:
    If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
    setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations :
    The effect of Unicode data on performance is complicated by a variety of factors that include the following:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules 
    2. The difference between sorting double-byte and single-byte characters 
    3. Code page conversion between client and server
    Operations like >, < and ORDER BY are resource-intensive, and it is difficult to get correct results if the code page conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
    because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
    Non-Unicode:
    Non-Unicode is the opposite of Unicode. With non-Unicode it is easy to store languages like English, but not Asian languages that need more bytes to be stored correctly; otherwise truncation occurs.
    Now, let's look at some advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database, so we save a lot of disk space.
    2. Moving database files from one server to another takes less time.
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode Data Types: Comparison Chart
    The primary difference between non-Unicode and Unicode data types is the ability of Unicode to easily handle the storage of foreign language characters, which also requires more storage space.
    Non-Unicode (char, varchar, text) vs. Unicode (nchar, nvarchar, ntext):
    Storage: both store data in fixed or variable length.
    Padding: char data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters the system pads it with 5 blanks); nchar is the same as char. varchar stores the actual value and does not pad with blanks; nvarchar is the same as varchar.
    Storage size: non-Unicode requires 1 byte of storage per character; Unicode requires 2 bytes per character.
    Maximum length: char and varchar can store up to 8000 characters; nchar and nvarchar can store up to 4000 characters.
    Non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters."
    Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
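    As a small, hedged illustration of how this difference shows up on the application side (the connection string, table and column names here are invented for the example), JDBC distinguishes the two families with setString for char/varchar versus setNString for nchar/nvarchar:

    import java.sql.*;

    public class UnicodeColumns {
        public static void main(String[] args) throws SQLException {
            // Hypothetical table: CREATE TABLE Names (ascii_name varchar(50), intl_name nvarchar(50))
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://dbhost;databaseName=Demo", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO Names (ascii_name, intl_name) VALUES (?, ?)")) {
                ps.setString(1, "Yamada");   // varchar: stored in the database code page, 1 byte per character
                ps.setNString(2, "山田");     // nvarchar: stored as Unicode (UCS-2/UTF-16), 2 bytes per character
                ps.executeUpdate();
            }
        }
    }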
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • What is the programming (ABAP) difference between Unicode and non Unicode?

    What is the programming(ABAP) difference between Unicode and non Unicode?
    Edited by: NIV on Apr 12, 2010 1:29 PM

    Hi,
    The difference between programming for Unicode and non-Unicode is that you have to make some adjustments to your custom ("Z") programs so that they comply with the Unicode standard checks.
    In the past, SAP developments used several systems to encode the characters of different alphabets, for example ASCII, EBCDIC, or double-byte code pages.
    These coding systems mostly use 1 byte per character, which can encode up to 256 characters. However, alphabets such as Japanese or Chinese use a larger number of characters, which is why those languages use double-byte code pages with 2 bytes per character.
    In order to unify the different alphabets, it was decided to implement a single coding system that uses 2 bytes per character regardless of the language. That system is called Unicode.
    Unicode is also the official way to implement ISO/IEC 10646 and is supported by many operating systems and all modern browsers.
    The way to verify whether a program has been adjusted is to run the UCCHECK transaction. Additionally, you can check it with the syntax check (making sure the Unicode check flag is active).
    The main adjustments/replacements are (examples, old statement followed by its replacement):
    ASSIGN TEXT+H-SY-INDEX TO <F1> by
    ASSIGN TEXT+H-SY-INDEX(*) TO <F1>.
    DATA INIT (50) VALUE '/'. by
    DATA INIT (1) VALUE '/'.
    DESCRIBE FIELD text LENGTH length2 by
    DESCRIBE FIELD text LENGTH length2 IN CHARACTER MODE.
    T_ZSMY_DEMREG_V1 = record_tab by
    MOVE-CORRESPONDING record_tab TO t_zsmy_demreg_v1.
    escape_trick = hot3. by
    escape_trick-x1 = hot3.
    itab_txt TYPE wt by
    ITAB_TXT TYPE TABLE OF TEXTPOOL
    DATA: string3 (3) TYPE X VALUE B2023 '3 'by
    DATA: string3 (6) B2023 TYPE c VALUE '3 '.
    OPEN DATASET file_name IN TEXT MODE by
    OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
    or
    OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    TRANSLATE record FROM CODE PAGE a_codepage by
    TRANSLATE record USING a_codepage.
    CALL FUNCTION 'DOWNLOAD' by
    CALL METHOD cl_gui_frontend_services => gui_download
    CALL FUNCTION 'WS_DOWNLOAD' by
    CALL METHOD cl_gui_frontend_services => gui_download
    CALL FUNCTION 'UPLOAD' by
    CALL METHOD cl_gui_frontend_services => gui_upload
    CALL FUNCTION 'WS_UPLOAD' by
    CALL METHOD cl_gui_frontend_services => gui_upload
    PERFORM APPEND_XFEBRE USING HEAD+2. by
    PERFORM APPEND_XFEBRE USING HEAD+2(98).
    Best Regards
    Fabio Rodriguez

  • Open dataset in UTF8. Problems between Unicode and non Unicode

    Hello,
    I am currently testing the file transfer between Unicode and non-Unicode systems.
    I transferred some Japanese KNA1 data (Mandt, Name1, Name2, City) from a non-Unicode system to a file with this option:
    set local language pi_langu.
      open dataset pe_file in text mode encoding utf-8 for output with byte-order mark.
    Now I want to read the file from a unicode system. The code looks like this:
    open dataset file in text mode encoding utf-8 for input skipping byte-order mark.
    The characters look fine but they are shifted: Name1 is correct, but now parts of the City characters are in Name2...
    If I open the file in a non-Unicode system with the same coding, the data is OK again!
    Is there a problem with spaces between Unicode and non-Unicode?

    Hello again,
    after implementing and testing this method, we saw that the conversion always takes place in the Unicode system.
    For example: we have a char(35) field in the MDMP system with several Japanese characters. As soon as we transfer the data into the file and look at the binary data, the field is only 28 characters long; several spaces are missing. Now, if we open the file in our Unicode system using the mentioned class, the size grows to 35 characters.
    On the other hand, if we export data from the Unicode system using this method, the size shrinks from 35 characters to 28 so the MDMP system can interpret the data.
    As soon as all systems are on Unicode, this method becomes obsolete/wrong, because we no longer want to cut off or add the spaces; it isn't needed anymore.
    The better way would be to create a "real" UTF-8 file in our MDMP system. The question is: is there a method to somehow add the missing spaces in the MDMP system?
    So that it works something like this:
          OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK.
    "MDMP (with ECC 6.0 by the way)
    if charsize = 1.
    *add missing spaces to the structure
    *transfer strucutre to file
    *UNICODE
    else.
    *just transfer struc to file -> no conversion needed anymore
    endif.
    I thought maybe this could somehow work with the class CL_ABAP_CONV_OUT_CE, but until now I have had no luck...
    Normally I would think that if I am creating a UTF-8 file, this work is done automatically by the transfer command.
  • Fonts for printing - unicode or non-unicode

    Hi, how can I find out whether the fonts I am using for SAPscript forms and Smartforms are Unicode compatible or not?
    Regards,
    Vijayalakshmi

    Hello Kyle,
    thank you very much.
    I had thought that I could install the J2EE engine as Unicode or non-Unicode, and likewise install SAP EP as Unicode or non-Unicode.
    There is a CD from SAP named "NON_Unicode_INST_Master_CD".
    This CD contains the path "IM08_SOLARIS_64/SAPINST/UNIX/SUNOS_64", and there is a sapinst there with which I can install a portal.
    Is that portal then a Unicode installation, even though it is located on the "NON_Unicode_INST_Master_CD"?
    Regards,
    Werner

  • Importing non-unicode data into unicode 10gR2 database

    Hi:
    I will have to import non-Unicode data into a Unicode 10gR2 database. The systems the data is coming from are the following: CODA, Timberline, COMMS, CMS, LIMS. These are all RDBMS, SQL-enabled systems. We are talking about pretty big amounts of data (a couple hundred GB combined).
    Did anybody go through a similar exercise?
    I know I'll have to set nls_length_semantics to CHAR.
    What other recommendations could you guys give?
    TIA,
    Greg

    I think "nls_length_semantics" isn't mandatory at this point. You should extract a small amount of data from every source and do some test loads into the Oracle 10g database to see how the conversion behaves.
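    One way to script such a probe (a hedged sketch; the sample strings, the assumed source code page and the class name are all placeholders) is to compare character counts with UTF-8 byte counts, which is also where CHAR length semantics become relevant:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class LengthProbe {
        public static void main(String[] args) {
            // Placeholder samples; in practice pull real rows from CODA, Timberline, etc.
            String[] samples = {"plain ascii", "Łódź", "München", "東京タワー"};
            Charset source = Charset.forName("windows-1252");  // assumed single-byte source code page
            Charset target = StandardCharsets.UTF_8;           // AL32UTF8 on the 10gR2 side

            for (String s : samples) {
                int chars = s.length();
                int utf8Bytes = s.getBytes(target).length;
                boolean fitsSource = source.newEncoder().canEncode(s);
                // Columns sized with BYTE semantics can overflow once the data becomes multi-byte UTF-8,
                // which is why CHAR length semantics (or wider columns) are recommended.
                System.out.printf("%-12s chars=%d utf8Bytes=%d representableInSource=%b%n",
                        s, chars, utf8Bytes, fitsSource);
            }
        }
    }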
