Unicode and Non-Unicode Instances in one Transport Landscape

We have a 4.7 landscape that includes a shared global development system supporting two regional landscapes. The shared global development system is used for all ABAP/Workbench activity and for global customization used by both regional production systems. Each regional landscape consists primarily of three instances - Regional Configuration, Quality Assurance, and Production. The transport landscape includes all of these systems, with transport routes for both global and regional changes.
A conversion to Unicode is also being planned for the global development system and one regional landscape. It is possible that we will not convert the other regional landscape, due to pending discussions on consolidation. This means one of the regional landscapes would be receiving global transports from a Unicode-based system.
All the information I've located implies there are no real technical constraints: make sure you have the right R3trans versions, don't use non-Latin-1 languages, etc. Basic caveats for a heterogeneous environment ....
Is anyone currently supporting a complete, productive landscape that includes Unicode and non-Unicode systems? If so, were any issues or problems encountered with transports across the systems (insignificant or significant)?
Information on actual experiences will be greatly appreciated ....
Many thanks in advance.

Hi Laura,
Although I do not have live, practical experience, this is what I can share.
I have been working on a non-Unicode to Unicode conversion project. While we were in the discussion phase, one possible scenario was that part of the landscape would remain non-Unicode. Based on the research I did, by reading and by directly interacting with some excellent SAP consultants, I came to know there are absolutely no issues in transporting ABAP programs from a Unicode system to a non-Unicode system. In a Unicode system the ABAP code has already been checked and corrected against the stricter syntax checks, and it is downward compatible with ABAP code on lower ABAP versions and non-Unicode systems. Hence I believe there should not be any issues; however, as I mentioned, this is not from practical experience.
Thanks.
Chetan

Similar Messages

  • Is it possible to add value item and non stock item in one billing?

    Is it possible to add value item and non stock item in one billing?

    Hi,
    Yes, it is possible. Take the example of a service scenario, where material used in servicing and service charges (labour) can be billed in a single invoice.
    The billing document type, customer, and other header data should be the same.
    Reward points if useful
    Regards,
    Amrish Purohit

  • Retrieving spatial and non spatial data in one query

    Hello. I am having slight difficulties using JDBC to retrieve both spatial and non-spatial data in the same query. The following is code from a sample program of mine that retrieves spatial data from spatial tables.
    (In spatialquery geom is a geometry column and city is simply the name of the city):
    try {
        Geometry geom = null;
        String database = "jdbc:oracle:thin:@" + m_host + ":" + m_port + ":" + m_sid;
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        con = (OracleConnection) DriverManager.getConnection(database, sUsername, sPassword);
        GeometryAdapter sdoAdapter =
            OraSpatialManager.getGeometryAdapter("SDO", "8.1.7", STRUCT.class, null, null, con);
        String spatialquery = "SELECT a1.geom, a1.city \n" +
                              "FROM cities a1";
        Statement stmt = con.createStatement();
        OracleResultSet rs = (OracleResultSet) stmt.executeQuery(spatialquery);
        int noOfFeatures = 2;
        while (rs.next()) {
            for (int i = 1; i <= noOfFeatures; i++) {
                STRUCT dbObject = (STRUCT) rs.getObject(i);
                try {
                    geom = sdoAdapter.importGeometry(dbObject);
                } catch (GeometryInputTypeNotSupportedException e) {
                    System.out.println("Input Type not supported");
                } catch (InvalidGeometryException e) {
                    System.out.println("Invalid geometry");
                }
                System.out.println(geom);
            }
        } // end while loop
    This retrieves the spatial data fine; however, when I attempt to retrieve the non-spatial data I keep getting a "ClassCastException" error. I understand it is something to do with the "STRUCT dbObject = (STRUCT)rs.getObject(i);" line. Can anyone tell me how to retrieve both spatial and non-spatial data in one query using JDBC? I have tried nearly everything at this stage. Cheers, Joe
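    One possible fix, sketched only under the assumptions of the sample above (same connection, adapter and query): the ClassCastException comes from casting every column to STRUCT, but only the geometry column is an oracle.sql.STRUCT. Cast just that column and read the non-spatial column with an ordinary getter such as getString:
    // Sketch only - reuses stmt, sdoAdapter and spatialquery from the snippet above
    OracleResultSet rs = (OracleResultSet) stmt.executeQuery(spatialquery);
    while (rs.next()) {
        // column 1 (geom) is the only SDO_GEOMETRY column, so only it is cast to STRUCT
        STRUCT dbObject = (STRUCT) rs.getObject(1);
        try {
            Geometry geom = sdoAdapter.importGeometry(dbObject);
            String city = rs.getString(2);   // column 2 (city) is plain character data
            System.out.println(city + ": " + geom);
        } catch (GeometryInputTypeNotSupportedException e) {
            System.out.println("Input Type not supported");
        } catch (InvalidGeometryException e) {
            System.out.println("Invalid geometry");
        }
    }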

  • Filter Data with Merged and non merged columns in one

    Hi there,
    I have an Excel spreadsheet that has both merged and non-merged columns. What I want to be able to do is filter a row that has a merged column and non-merged columns, but when I filter, it only takes the first line rather than the merged and non-merged columns.
    With this data I have one merged column which spans 6 rows, and then in the same row I have 6 rows with different points in them. What I want to do is filter my list but still be able to see the merged column as well as the 6 points.
    Any ideas are much appreciated.
    Cheers
    SAN

    You cannot filter across a row - so I have assumed that what you mean is that some cells in columns are merged to serve as the headers, and the data is in the same columns but in the row(s) below the header. If this is incorrect, ignore this post.
    For this example, I have assumed E to J are the column of data and merged cells, column K is free, and the first merged header is in row 1:
    In K1, enter
    =E1
    in K2, enter
    =IF(COUNTA(E2:J2)=1,E2,K1)
    and copy down. Then filter based on column K, and it will show the headers and the data for the selected header value.
    HTH, Bernie

  • Can we have xl and non-xl card in one chassis?

    Hello,
    The Cisco documentation mentions that to work in XL mode, all cards in the chassis must be XL (apart from the scalable license requirement). Is this a chassis-specific or VDC-specific requirement?
    In the case below we have created two VDCs in one N7K switch and installed the scalable license. VDC 2 has mixed (XL and non-XL) module interfaces, whereas VDC 3 has only XL module interfaces.
    VDC 2-
    xl + non xl M-series module interfaces
    vdc 3 -
    only XL M-series module interfaces
    Can anyone please confirm if my below understanding is correct ?
    1. VDC 2 will work in non-XL mode.
    2. VDC 3 will work in XL mode.

    Hi babu,
    You can do this in one view itself.
    Create one attribute of type wdy_boolean in the view context.
    Bind this attribute to the read-only property of the table, and initially set its value to abap_true in the WDDOINIT method.
    Then create one button, say "EDIT", in the view and create an action for the edit button. In that action, set the above attribute value to
    abap_false.
    So initially the table will be in display mode, and when you click the edit button it will become editable.
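    A rough sketch of those two methods (the context attribute name READ_ONLY and the action name EDIT are made up for illustration):
    METHOD wddoinit.
      " table starts out read-only
      wd_context->set_attribute( name  = 'READ_ONLY'
                                 value = abap_true ).
    ENDMETHOD.

    " handler of the EDIT button's action - switch the table to editable
    METHOD onactionedit.
      wd_context->set_attribute( name  = 'READ_ONLY'
                                 value = abap_false ).
    ENDMETHOD.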
    Hope you got some idea.
    Regards
    Srinvias

  • Performing ORM and non-ORM transactions in one request.

    During the processing of a request, we need to perform an ORM statement and a non-ORM cftransaction on 2 different data sources.
    Example:
    # Note, 'someObject' is a persistent CFC with a datasource attribute of 'DSN1'.
    <cftransaction>
         <cfset myObjects = EntityLoad('someObject') />
    </cftransaction>
    <cftransaction>
         <cfquery name="test" datasource="DSN2">
               INSERT INTO ...
         </cfquery>
    </cftransaction>
    Whenever we hit the 2nd cftransaction block, we get the following error:
    Message=A transaction cannot be started on more than one datasource.
    This works in CF 9.0.1, but fails on CF 9.0.1 HF3, CF 9.0.2, and on CF 10.

    javax.servlet.ServletRequest method isSecure() - "Returns a boolean indicating
    whether this request was made using a secure channel, such as HTTPS."
    Chris Scott wrote:
    >
    What's the best way to separate SSL and non-SSL transactions in a single web app? I.e., when the user logs in, the login form is submitted over an SSL connection, but from then on only certain pages/forms use SSL. If there's one JVM with the session info, how can we be sure what needs to be secured goes through the SSL server?

  • Mixing Drop Frame and Non Drop Frame In One Timeline

    We did a 7-camera multicam shoot of a rock concert over the weekend. Upon capturing, we realized two of our camera operators were shooting in Non Drop Frame mode while the remaining 5 were shooting Drop Frame. Can someone describe to me how I can eventually get all seven cameras into one timeline? I realize I might have to output/convert the non drop frame stuff. And since we're on the topic, I'd also like to ask (in case I find this out later) what I might do if we find a camera that shot 24P (29.97). Thanks in advance!
    G5 DUAL QUAD & 2 X Power PC G4 (Dual 533) & 550 TiBook   Mac OS X (10.2.x)  

    There is nothing different about the footage from the cameras...they all run at 29.97fps. The only difference is in the way the timecode NUMBERS are treated. Drop frame code simply loses 2 numbers (00 and 01) every minute except for every 10th minute. That's it. They all run at the same speed, so putting them all in the same timeline shouldn't cause any problems.
    Trying to make a multiclip based on timecode will be a problem, however. You will have to find a common frame on all and use that.
    Shane

  • Grouped and non-grouped SELECT in one query: help!

    Look first at "Wrong result when I use CASE" on this forum. There I wanted to get the user who created and the user who solved a problem (let's call it a Validation Error (VE) from now on).
    The thing is: I already have a query which returns lots of information about a VE.
    The query in the previous thread returned additional info about that VE (that is, the creating_user and the solving_user). The 1st query is not a grouped select, but the second is! Still, I need to combine those two in one query.
    1st query:
    select ve.seq,
         max((case vah.action when 'C' then vah.ur_code else null end)) created,
         max((case vah.action when 'S' then vah.ur_code else null end)) solved
    from validation_errors ve
    left outer join ver_action_histories vah
    on (ve.seq = vah.ver_seq AND ve.log_date = vah.ver_log_date)
    where ve.seq = 12860687
    group by ve.seq;
    Result:
    seq       | created | solved
    12860687    Bob       Bob
    Don't mind the "where"-clause, it is just to make the query go faster.
    what I do is: I join the VE with the ver_action_histories table which contains the users and what action they performed on a VE.
    Now I just want to add that information to the results of an other query which also returns lots of information about a VE.
    2nd query:
    select ve.seq "VE seq", mh.seq "Counter seq",
              ve.log_date, ve.solve_date, ve.solved Status, ve.failure_code, ve.mde_code,
              mh.meter_type,
              iv.mr_type, iv.mr_reason,
              ih.mmr_seq
    from validation_errors ve
    inner join meter_histories mh
    on (ve.mhy_seq = mh.seq)
    left outer join index_values iv
    on (ve.mhy_seq = iv.mhy_seq AND ve.log_date =iv.timestamp)
    left outer join index_histories ih
    on (ve.mhy_seq = ih.ive_mhy_seq)
    where ve.seq = 8653936
    and sysdate >= mh.start_date
    and sysdate < mh.end_date;
    Don't mind the "where" and "and"-clauses ... I hope the result of this query will simplify things ...
    Result:
    seq        |   counter seq | log_date | solved_date | status    | failure_code | ...
    12860687       4568          1-jan-06   2-jan-06      Solved      ABC
    Now the actual question: Is it possible to combine those queries in one query? I just want the results of the first query (creating_user and solving_user) to be added as columns to the second result. Performance is very important. Please tell me that it's possible?
    Wanted Result:
    seq        |   counter seq | log_date | solved_date | status    | failure_code | created  | solved  | ...
    12860687       4568          1-jan-06   2-jan-06      Solved      ABC            Bob        Bob
    If anything I explained is unclear, please tell me so I can try to explain it in an easier way.

    Try an in line view:
    select *
    from
    ( <your first query goes here > ) a
    , ( <your second query goes here > ) b
    where a.seq = b.seq

  • SIK Transport files and None unicode SAP system

    Dear all,
    I have a question about SIK Transport files.
    As you know, when we install BOE SIK,we need transport some files into SAP system.
    There is  a TXT file for describing how to use SIK transport files in SAP system.
    I found that there is no detail about non-Unicode SAP systems in this TXT file.
    All of it is about Unicode.
    If your SAP system is running on a BASIS system earlier than 6.20, you must use the files listed below:
    (These files are ANSI.)
    Open SQL Connectivity transport (K900084.r22 and R900084.r22)
    Info Set Connectivity transport (K900085.r22 and R900085.r22)
    Row-level Security Definition transport (K900086.r22 and R900086.r22)
    Cluster Definition transport (K900093.r22 and R900093.r22)
    Authentication Helpers transport (K900088.r22 and R900088.r22)
    If your SAP system is running on a 6.20 BASIS system or later, you must use the files listed below:
    (These files are Unicode enabled.)
    Open SQL Connectivity transport (K900574.r21 and R900574.r21)
    Info Set Connectivity transport (K900575.r21 and R900575.r21)
    Row-level Security Definition transport (K900576.r21 and R900576.r21)
    Cluster Definition transport (K900585.r21 and R900585.r21)
    Authentication Helpers transport (K900578.r21 and R900578.r21)
    The following files must be used on an SAP BW system:
    (These files are Unicode enabled.)
    Content Administration transport (K900579.r21 and R900579.r21)
    Personalization transport (K900580.r21 and R900580.r21)
    MDX Query Connectivity transport (K900581.r21 and R900581.r21)
    ODS Connectivity transport (K900582.r21 and R900582.r21)
    If our SAP BASIS system is beyond 6.20, but it is not a Unicode system,
    could we use these transport files on a non-Unicode SAP system?
    Thanks!
    Wayne

    Hi Wayne,
    the text and the installation guide clearly advise based on the version of your underlying BASIS system, and differentiate between 6.20 and 6.40 and higher.
    So, based on the fact that your system is a BI 7 system, you are in the category of a 6.40 (or higher) BASIS system and therefore you have to use the Unicode-enabled transports.
    ingo

  • Unicode and non-unicode

    What is the difference between Unicode and non-Unicode?
    Briefly explain Unicode.
                                                            Thanks in advance.

    Unicode is a 16-bit character encoding scheme allowing characters from Western European, Eastern European, Cyrillic, Greek, Arabic, Hebrew, Chinese, Japanese, Korean, Thai, Urdu, Hindi and all other major world languages, living and dead, to be encoded in a single character set. The Unicode specification also includes standard compression schemes and a wide range of typesetting information required for worldwide locale support. Symbian OS fully implements Unicode.
    It is a 16-bit code, defined by ISO 10646, for representing the characters used in most of the world's scripts; UTF-8 is an alternative encoding in which one or more 8-bit bytes represent each Unicode character.
    It is a code similar to ASCII, used for representing commonly used symbols in digital form. Unlike ASCII, however, Unicode uses a 16-bit dataspace, and so can support a wide variety of non-Roman alphabets including Cyrillic, Han Chinese, Japanese, Arabic, Korean, Bengali, and so on. Supporting common non-Roman alphabets is of interest to community networks, which may want to promote multicultural aspects of their systems.
    ABAP Development under Unicode
    Prior to Unicode the length of a character was exactly one byte, allowing implicit typecasts or memory-layout oriented programming. With Unicode this situation has changed: One character is no longer one byte, so that additional specifications have to be added to define the unit of measure for implicit or explicit references to (the length of) characters.
    Character-like data in ABAP are always represented with the UTF-16 - standard (also used in Java or other development tools like Microsoft's Visual Basic); but this format is not related to the encoding of the underlying database.
    A Unicode-enabled ABAP program (UP) is a program in which all Unicode checks are effective. Such a program returns the same results in a non-Unicode system (NUS) as in a Unicode system (US). In order to perform the relevant syntax checks, you must activate the Unicode flag in the screens of the program and class attributes.
    In a US, you can only execute programs for which the Unicode flag is set. In future, the Unicode flag must be set for all SAP programs to enable them to run on a US. If the Unicode flag is set for a program, the syntax is checked and the program executed according to the rules described in this document, regardless of whether the system is a US or a NUS. From now on, the Unicode flag must be set for all new programs and classes that are created.
    If the Unicode flag is not set, a program can only be executed in an NUS. The syntactical and semantic changes described below do not apply to such programs. However, you can use all language extensions that have been introduced in the process of the conversion to Unicode.
    As a result of the modifications and restrictions associated with the Unicode flag, programs are executed in both Unicode and non-Unicode systems with the same semantics to a large degree. In rare cases, however, differences may occur. Programs that are designed to run on both systems therefore need to be tested on both platforms.
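    A tiny illustration of the "unit of measure" point above (sketch only; the field and literal are made up, and the byte results depend on whether the system is Unicode):
    DATA: lv_word(5) TYPE c VALUE 'HELLO',
          lv_chars   TYPE i,
          lv_bytes   TYPE i.

    " 1 byte per character on a non-Unicode system, 2 bytes (UTF-16) on a Unicode system
    WRITE: / 'Bytes per character:', cl_abap_char_utilities=>charsize.

    " With the Unicode checks active, the unit of measure must be stated explicitly
    DESCRIBE FIELD lv_word LENGTH lv_chars IN CHARACTER MODE.  " 5 in both system types
    DESCRIBE FIELD lv_word LENGTH lv_bytes IN BYTE MODE.       " 5 on non-Unicode, 10 on Unicode
    WRITE: / 'Characters:', lv_chars, 'Bytes:', lv_bytes.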
    Regards,
    Santosh

  • Substring between unicode and non-unicode

    Hi, experts,
    We are upgrading our system from 4.7 to 6.0 in Chinese, but after that, there is a problem:
    We have a program which handles some txt files created by a non-SAP, non-Unicode system.
    For example, there is a line containing '你好      1234', and we extract the information as below:
    data: field1 type string, field2 type string.
    field1 = line+0(10).
    field2 = line+10(4).
    the result in 4.7 is:
    field1 = '你好'
    field2 = '1234'
    but, in ECC, field1 is '你好12'  and field2 is '34'.
    can any one help me? thank you!

    hey, max, thanks for your help!
    I am sorry I did not show my question clearly!
    There are 6 spaces between '你好' and '1234' in the line '你好      1234', and the first 10 characters of the line may be all numbers, all Chinese characters, or numbers and Chinese characters together.
    In 4.7, line+0(10) was always correct, but in ECC, because it is a Unicode system, it is only correct when the string contains only single-byte characters and no double-byte characters.
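    One possible workaround, sketched only (it assumes the legacy files use SAP codepage '8400' for Simplified Chinese - substitute whatever codepage the external system really writes): reproduce the old byte-based offsets by converting the line back to its legacy byte form, slicing bytes, and converting each slice back to characters.
    DATA: lo_out   TYPE REF TO cl_abap_conv_out_ce,
          lo_in    TYPE REF TO cl_abap_conv_in_ce,
          lv_xline TYPE xstring,
          field1   TYPE string,
          field2   TYPE string.

    " back to the legacy byte representation, so the old byte offsets apply again
    lo_out = cl_abap_conv_out_ce=>create( encoding = '8400' ).
    lo_out->convert( EXPORTING data = line IMPORTING buffer = lv_xline ).

    " first 10 bytes -> field1 (as in the 4.7 version of the program)
    lo_in = cl_abap_conv_in_ce=>create( encoding = '8400' input = lv_xline+0(10) ).
    lo_in->read( IMPORTING data = field1 ).

    " next 4 bytes -> field2
    lo_in = cl_abap_conv_in_ce=>create( encoding = '8400' input = lv_xline+10(4) ).
    lo_in->read( IMPORTING data = field2 ).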

  • Differnce between unicode and non unicode

    Hi everybody, I want to know the difference between Unicode and non-Unicode and for what purposes these are used. Explain briefly: what is the t-code for that, how do I check the version, and how do I convert Unicode to non-Unicode?
    Advance Thanks
    Vishnuprasad.G

    Hello Vishnu,
    before Release 6.10, SAP software only used codes where every character is represented by one byte; therefore character sets like these are also called single-byte codepages. However, every one of these character sets is only suitable for a limited number of languages.
    Problems arise if you try to work with texts written in different incompatible character sets in one central system. If, for example, a system only has a West European character set, other characters cannot be correctly processed.
    As of 6.10, to resolve these issues, SAP has introduced Unicode. Each character is generally mapped using 2 bytes and this offers a maximum of 65 536 bit combinations.
    Thus, a Unicode-compatible ABAP program is one where all Unicode checks are in effect. Such programs return the same results in UC systems as in non-UC systems. To perform the relevant syntax checks, you must activate the "UC checks" flag in the screens of the program and class attributes.
    With transaction UCCHECK you can check a set of programs for syntax errors in a Unicode environment.
    Bye,
    Peter

  • About unicode and non-unicode

    Hi experts,
    can anybody tell me
    What is Unicode and non-Unicode from an interview point of view? Just 2 or 3 sentences...
    Thanks in advance

    Unicode provides multilingual capability in the SAP system.
    Apart from that, the important Unicode t-code is UCCHECK: if you give it a report name, you will get the different error codes. In general we get errors for structure mismatches, obsolete statements, OPEN DATASET, DESCRIBE statements and so on.
    Moreover, all obsolete function modules are replaced. Look at the following; it may help you.
    Before the Unicode error
       lt_hansp = lt_0201-endda+0(4) - lt_0002-gbdat+0(4).
    Solution.
    data :abc type i,
          def type i.
    move  lt_0201-endda+0(4) to abc.
    move    lt_0002-gbdat+0(4) to def.
    lt_hansp-endda = abc - def.
    Before the Unicode error:
       WRITE: /1 'CO:',CO(110).
    Solution.
    FIELD-SYMBOLS: <fs_co> type any.
    assign co to <fs_co>.
    WRITE: /1 'CO:',<fs_co>(110).
    DESCRIBE 002     In Unicode, DESCRIBE LENGTH can only be used with the IN BYTE MODE or IN CHARACTER MODE addition.
    Before the Unicode error:
        describe field <tab_feld> length len.
    Solution.
    describe field <tab_feld> length len IN character mode.
    Before the Unicode error:
        DESCRIBE FIELD DOWNTABLA LENGTH LONG.
    Solution.
    DESCRIBE FIELD DOWNTABLA LENGTH LONG IN byte MODE.
    DO 002     Could not specify the access range automatically. This means that you need a  RANGE addition     
    Before the Unicode error:
    DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1)
    Solution.
      DO 7 TIMES VARYING i FROM aktuell(1) NEXT aktuell+1(1) RANGE aktuell .
    Before the Unicode error:
    DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2.
    Solution.
    DATA: BEGIN OF text,
            gtx_line1 TYPE rp50m-text1,
            gtx_line2 TYPE rp50m-text2,
            gtx_line3 TYPE rp50m-text3,
          END OF text.
    DO 3 TIMES VARYING textfeld FROM gtx_line1 NEXT gtx_line2 RANGE text..
    Before the Unicode error:
    DO ev_restlen TIMES
        VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1).
    Solution.
      DO ev_restlen TIMES
         VARYING ev_zeichen FROM ev_hstr(1) NEXT ev_hstr+1(1) range ev_hstr.
    MESSAGEG!2     IT_TBTCO and "IT_ALLG" are not mutually convertible. In Unicode programs, "IT_TBTCO" must have the same structure layout as "IT_ALLG", independent of  the length of a Unicode character.     
    Before the Unicode error:
             IT_TBTCO = IT_ALLG.
    Solution.
    IT_TBTCO-header = IT_ALLG-header.
    MESSAGEG!3     FIELDCAT_LN-TABNAME and "WA_DISP" are not mutually convertible in a Unicode program     
    Before the Unicode error:
         IF GEH_TA+15(73) NE RETTER+15(73).
    Solution.
          FIELD-SYMBOLS: <GEHTA>  TYPE ANY,
                         <RETTER> TYPE ANY.
          ASSIGN: GEH_TA TO <GEHTA>,
                  RETTER TO <RETTER>.
    IF <GEHTA>+15(73) NE <RETTER>+15(73).
    Before the Unicode error:
           IMP_EP_R3_30 = RECRD_TAB-CNTNT.
    Solution.
        FIELD-SYMBOLS:  <imp_ep_r3_30> TYPE X,
                          <recrd_tab-cntnt> TYPE X.
          ASSIGN IMP_EP_R3_30 TO <imp_ep_r3_30> CASTING.
          ASSIGN RECRD_TAB-CNTNT TO <recrd_tab-cntnt> CASTING.
           <imp_ep_r3_30> = <recrd_tab-cntnt>.
    Before the Unicode error:
                and    pernr  = gt_pernr
    Solution.
                  and    pernr  = gt_pernr-pernr
    MESSAGEG!7     EBC_F0 and "EBC_F0_255(1)" are not comparable in Unicode programs.     
    Before the Unicode error:
       IF CHARACTER NE LINE_FEED.
    Solution.
        IF CHARACTER NE LINE_FEED-X.
    MESSAGEG!A     A line of "IT_ZMM_BINE" and "OUTPUT_LINES" are not mutually convertible. In a  Unicode program "IT_ZMM_BINE" must have the same structure layout as  "OUTPUT_LINES" independent of the length of a Unicode character.     
    Before the Unicode error:
    *data: lw_wpbp  type pc206.
    Solution.
    data: lw_wpbp  type pc205.
    Before the Unicode error:
       LOOP AT seltab INTO  ltx_p0078.
    Solution.
    DATA: WA_SELTAB like line of SELTAB.
    CLEAR WA_SELTAB.
    MOVE-CORRESPONDING ltx_p0078 to wa_seltab.
    move-corresponding wa_seltab to ltx_p0078.
    MESSAGEG?Y     The line type of "DTAB" must be compatible with one of the types "TEXTPOOL".     
    Before the Unicode error:
    DATA:
       BEGIN OF dtab OCCURS 100.
         text(100),
    include structure textpool.
       End of changes
    SET TITLEBAR '001' WITH dtab-text+9.
    Solution.
    the following declaration should be mentioned in the declaration of the textpool.
    DATA:
       BEGIN OF dtab OCCURS 100.
      text(100),
    include structure textpool.
       End of changes
      SET TITLEBAR '001' WITH dtab-entry.
    MESSAGEG@1     TFO05_TABLE cannot be converted to a character-type field.     
    Before the Unicode error:
    WRITE: / PA0015, 'Fehler bei MODIFY'.
    Solution.
    WRITE: / PA0015+0, 'Fehler bei MODIFY'.
    MESSAGEG@3     ZL-C1ZNR must be a character-type data object (data type C, N, D, T or  STRING) .     
    Before the Unicode error:
         con_tab  TYPE x VALUE '09',
    Solution.
           con_tab  TYPE string VALUE '09',
    Before the Unicode error:
    data:   g_con_ascii_tab(1)  type x   value '09'.
    Solution.
       data:   g_con_ascii_tab  type STRING   value '09'.
    MESSAGEG@E     HELP_ANLN0 must be a character-type field (data type C, N, D, or T).
    Before the Unicode error:
    WRITE SATZ-MONGH TO SATZ-MONGH CURRENCY P0008-WAERS.
    WRITE SATZ-JAH55 TO SATZ-JAH55 CURRENCY P0008-WAERS.
    WRITE SATZ-EFF55 TO SATZ-EFF55 CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_ERSF TO SATZ-SOFE_ERSF CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_ERSP TO SATZ-SOFE_ERSP CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EIN TO SATZ-SOFE_EIN CURRENCY P0008-WAERS.
    WRITE SATZ-SOFE_EREU TO SATZ-SOFE_EREU CURRENCY P0008-WAERS.
    WRITE SATZ-ERHO_ERR TO SATZ-ERHO_ERR CURRENCY P0008-WAERS.
    WRITE SATZ-ERHO_EIN TO SATZ-ERHO_EIN CURRENCY P0008-WAERS.
    WRITE SATZ-JAH55_FF TO SATZ-JAH55_FF CURRENCY P0008-WAERS.
    Solution.
      DATA: SATZ1_MONGH(16),
            SATZ_JAH551(16),
            SATZ_EFF551(16),
            SATZ_SOFE_EREU1(16),
            SATZ_SOFE_ERSF1(16),
            SATZ_SOFE_ERSP1(16),
            SATZ_SOFE_EIN1(16),
            SATZ_ERHO_ERR1(16),
            SATZ_ERHO_EIN1(16),
            SATZ_JAH55_FF1(16).
      WRITE SATZ-MONGH TO SATZ1_MONGH CURRENCY P0008-WAERS.
      WRITE SATZ-JAH55 TO SATZ_JAH551 CURRENCY P0008-WAERS.
      WRITE SATZ-EFF55 TO SATZ_EFF551 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_EREU TO SATZ_SOFE_EREU1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_ERSF TO SATZ_SOFE_ERSF1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_ERSP TO SATZ_SOFE_ERSP1 CURRENCY P0008-WAERS.
      WRITE SATZ-SOFE_EIN TO SATZ_SOFE_EIN1 CURRENCY P0008-WAERS.
      WRITE SATZ-ERHO_ERR TO SATZ_ERHO_ERR1 CURRENCY P0008-WAERS.
      WRITE SATZ-ERHO_EIN TO SATZ_ERHO_EIN1 CURRENCY P0008-WAERS.
      WRITE SATZ-JAH55_FF TO SATZ_JAH55_FF1 CURRENCY P0008-WAERS.
      SATZ-MONGH = SATZ1_MONGH.
      SATZ-JAH55 = SATZ_JAH551.
      SATZ-EFF55 = SATZ_EFF551.
      SATZ-SOFE_EREU = SATZ_SOFE_EREU1.
      SATZ-SOFE_ERSF = SATZ_SOFE_ERSF1.
      SATZ-SOFE_ERSP = SATZ_SOFE_ERSP1.
      SATZ-SOFE_EIN = SATZ_SOFE_EIN1.
      SATZ-ERHO_ERR = SATZ_ERHO_ERR1.
      SATZ-ERHO_EIN = SATZ_ERHO_EIN1.
      SATZ-JAH55_FF = SATZ_JAH55_FF1.
    MESSAGEG-0     VESVR_EUR must be a character-type data object (data type C, N, D, T or  STRING).     
    Before the Unicode error:
                TRANSLATE vesvr_eur USING '.,'.
                TRANSLATE espec_eur USING '.,'.
                TRANSLATE fijas_eur USING '.,'.
    Solution.
                 data: vesvreur(16),
                       especeur(16),
                       fijaseur(16).
                 vesvreur = vesvr_eur.
                 especeur = espec_eur.
                 fijaseur = fijas_eur.
                 TRANSLATE vesvreur USING '.,'.
                 TRANSLATE especeur USING '.,'.
                 TRANSLATE fijaseur USING '.,'.
                 vesvr_eur = vesvreur.
                 espec_eur = especeur.
                 fijas_eur = fijaseur.
    MESSAGEG-D     A line of "LT_0021" is not convertible into the target line type. The line type must have the same structure layout as "LT_0021" regardless of the length of a Unicode character.
    Before the Unicode error:
    data: lt_0021    like p0021 occurs 0 with header line.
    Solution.
        DATA: LT_0021 LIKE PA0021 OCCURS 0 WITH HEADER LINE.
    Before the Unicode error:
             append sim_data to p0007.
    Solution.
           DATA:wa_p0007 type p0007.
           move-corresponding sim_data to wa_p0007.
              append wa_p0007 to p0007.
    MESSAGEG-F     The structure "CO(110)" does not start with a character-type field. In Unicode  programs in such cases, offset/length declarations are not allowed      
    Before the Unicode error:
         TRANSFER COBEZ+8 TO DSN.
    Solution.
                FIELD-SYMBOLS: <fs_cobez> TYPE ANY.
                ASSIGN COBEZ TO <fs_cobez>.
                TRANSFER <fs_cobez>+8 TO DSN.
    Before the Unicode error:
         WRITE: /1 COBEZ+16.
    Solution.
    FIELD-SYMBOLS <F_COBEZ> TYPE ANY.
    ASSIGN COBEZ TO <F_COBEZ>.
          WRITE: /1 <F_COBEZ>+16.
    MESSAGEG-G     The length declaration "171" exceeds the length of the character-type start  (=38) of the structure. This is not allowed in Unicode programs.
    Before the Unicode error:
       write: /1 '-->',
                 pa0201(250).
    Solution.
    field-symbols <fs_pa0201> type any.
    ASSIGN pa0201 TO <fs_pa0201>.
        write: /1 '-->',
                  <fs_pa0201>(250).
    MESSAGEG-H     The offset declaration "160" exceeds the length of the character-type start (=126) of the structure. This is not allowed in Unicode programs.
    Before the Unicode error:
    WRITE:/ SATZ(80),
             / SATZ+80(80),
             / SATZ+160(80),
             / SATZ+240(80),
             / SATZ+320(27).
    Solution.
      FIELD-SYMBOLS <FS_SATZ> TYPE ANY.
      ASSIGN SATZ TO <FS_SATZ>.
      WRITE:/ <FS_SATZ>(80),
              / <FS_SATZ>+80(80),
              / <FS_SATZ>+160(80),
              / <FS_SATZ>+240(80),
              / <FS_SATZ>+320(27).
    MESSAGEG-I     The sum of the offset and length (=504) exceeds the length of the start (=323) of the structure. This is not allowed in Unicode programs .     
    Before the Unicode error:
              /5 PARAMS+80(80),
    Solution.
        FIELD-SYMBOLS: <PARAMS> TYPE ANY.
        ASSIGN PARAMS TO <PARAMS>.
               /5 <PARAMS>+80(80),
    MESSAGEGWH     P0041-DAR01 and "DATE_SPEC" are type-incompatible.     
    Before the Unicode error:
       DO 5 TIMES VARYING I0008 FROM P0008-LGA01 NEXT P0008-LGA02.
    Solution.
        DO 5 TIMES VARYING I0008-LGA FROM P0008-LGA01 NEXT P0008-LGA02. "D07K963133
    Before the Unicode error:
       DO VARYING ls_data_aux FROM p0041-dar01 NEXT p0041-dar02.
    Solution.
         DO VARYING ls_data_aux-dar01 FROM p0041-dar01 NEXT p0041-dar02.
    MESSAGEGY/     The type of the database table and work area (or internal table) "P0050" are  not Unicode-convertible      
    Before the Unicode error:
    select * from  pa9705 client specified
            into ls_9705
    Solution.
       select * from  pa9705 client specified
              into corresponding fields of  ls_9705
    Before the Unicode error:
         select        * from  pa0202 client specified
                into ls_0202
    Solution.
           select  * from  pa0202 client specified
                  into corresponding fields of ls_0202
    OPEN   001     One of the additions "FOR INPUT", "FOR OUTPUT", "FOR APPENDING" or "FOR UPDATE" was expected.     
    Before the Unicode error:
    OPEN DATASET FICHERO IN TEXT MODE.
    Solution.
      OPEN DATASET FICHERO IN TEXT MODE FOR INPUT ENCODING NON-UNICODE.
    OPEN   002     IN... MODE was expected.     
    Before the Unicode error:
    OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE.
    Solution.
        OPEN DATASET P_OUT FOR OUTPUT IN TEXT MODE ENCODING non-unicode.
    OPEN   004     In "TEXT MODE" the "ENCODING" addition must be specified.     
    Before the Unicode error:
       open dataset dat for output in text mode.
    Solution.
         open dataset dat for output in text mode ENCODING NON-UNICODE.
    UPLO     Upload/Ws_Upload and Download/Ws_Download are obsolete, since they are not  Unicode-enabled; use the class cl_gui_frontend_services
    Before the Unicode error:
    move p_filein to disk_datei.
      CALL FUNCTION 'WS_UPLOAD'
           EXPORTING
                filename        = disk_datei
                FILETYPE        = FILETYPE
           TABLES
                DATA_TAB        = DISK_TAB
           EXCEPTIONS
                FILE_OPEN_ERROR = 1
                FILE_READ_ERROR = 2.
    Solution.
    DATA: file_name type string.
    move p_filein to file_name.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
        FILENAME                = file_name
        FILETYPE                = 'ASC'
        HAS_FIELD_SEPARATOR     = 'X'
       HEADER_LENGTH           = 0
       READ_BY_LINE            = 'X'
       DAT_MODE                = SPACE
       CODEPAGE                = SPACE
       IGNORE_CERR             = ABAP_TRUE
       REPLACEMENT             = '#'
    *  VIRUS_SCAN_PROFILE      =
    * IMPORTING
    *  FILELENGTH              =
    *  HEADER                  =
      CHANGING
        DATA_TAB                = disk_tab[]
      EXCEPTIONS
        FILE_OPEN_ERROR         = 1
        FILE_READ_ERROR         = 2
        NO_BATCH                = 3
        GUI_REFUSE_FILETRANSFER = 4
        INVALID_TYPE            = 5
        NO_AUTHORITY            = 6
        UNKNOWN_ERROR           = 7
        BAD_DATA_FORMAT         = 8
        HEADER_NOT_ALLOWED      = 9
        SEPARATOR_NOT_ALLOWED   = 10
        HEADER_TOO_LONG         = 11
        UNKNOWN_DP_ERROR        = 12
        ACCESS_DENIED           = 13
        DP_OUT_OF_MEMORY        = 14
        DISK_FULL               = 15
        DP_TIMEOUT              = 16
        NOT_SUPPORTED_BY_GUI    = 17
        ERROR_NO_GUI            = 18
        others                  = 19.
    Before the Unicode error:
       CALL FUNCTION 'WS_DOWNLOAD'
            EXPORTING
                 filename = fich_dat
                 filetype = typ_fich
            TABLES
                 data_tab = t_down.
    Solution.
    data: filename1 type string,
          filetype1(10).
    move fich_dat to filename1.
    move typ_fich to filetype1.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
      EXPORTING
        FILENAME                  = filename1
        FILETYPE                  = filetype1
        WRITE_FIELD_SEPARATOR     = 'X'
      CHANGING
        DATA_TAB                  = t_down[].
    Before the Unicode error:
    *CALL FUNCTION 'UPLOAD'
        TABLES
             DATA_TAB                =  datos
       EXCEPTIONS
            CONVERSION_ERROR        = 1
            INVALID_TABLE_WIDTH     = 2
            INVALID_TYPE            = 3
            NO_BATCH                = 4
            UNKNOWN_ERROR           = 5
            GUI_REFUSE_FILETRANSFER = 6
            OTHERS                  = 7
    Solution.
    DATA: file_table type table of file_table,
          filetable type file_table,
          rc type i,
          filename type string.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>FILE_OPEN_DIALOG
      CHANGING
        FILE_TABLE              = file_table
        RC                      = rc
    EXCEPTIONS
       FILE_OPEN_DIALOG_FAILED = 1
       CNTL_ERROR              = 2
       ERROR_NO_GUI            = 3
       NOT_SUPPORTED_BY_GUI    = 4
       others                  = 5.
    READ table file_table into filetable index 1.
    move filetable to filename.
    CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_UPLOAD
      EXPORTING
        FILENAME                = filename
        FILETYPE                = 'ASC'
        HAS_FIELD_SEPARATOR     = 'X'
       HEADER_LENGTH           = 0
       READ_BY_LINE            = 'X'
       DAT_MODE                = SPACE
       CODEPAGE                = SPACE
       IGNORE_CERR             = ABAP_TRUE
       REPLACEMENT             = '#'
    *  VIRUS_SCAN_PROFILE      =
    * IMPORTING
    *  FILELENGTH              =
    *  HEADER                  =
      CHANGING
        DATA_TAB                = datos[]
      EXCEPTIONS
        FILE_OPEN_ERROR         = 1
        FILE_READ_ERROR         = 2
        NO_BATCH                = 3
        GUI_REFUSE_FILETRANSFER = 4
        INVALID_TYPE            = 5
        NO_AUTHORITY            = 6
        UNKNOWN_ERROR           = 7
        BAD_DATA_FORMAT         = 8
        HEADER_NOT_ALLOWED      = 9
        SEPARATOR_NOT_ALLOWED   = 10
        HEADER_TOO_LONG         = 11
        UNKNOWN_DP_ERROR        = 12
        ACCESS_DENIED           = 13
        DP_OUT_OF_MEMORY        = 14
        DISK_FULL               = 15
        DP_TIMEOUT              = 16
        NOT_SUPPORTED_BY_GUI    = 17
        ERROR_NO_GUI            = 18
        others                  = 19.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Before the Unicode error:
    CALL FUNCTION 'DOWNLOAD'
       EXPORTING
         filename            = p_attkit
         filetype            = 'ASC'
       TABLES
         data_tab            = tb_attrkit
       EXCEPTIONS
         invalid_filesize    = 1
         invalid_table_width = 2
         invalid_type        = 3
         no_batch            = 4
         unknown_error       = 5
         OTHERS              = 6.
    Solution.
               DATA : lv_filename    TYPE string,
                       lv_filen       TYPE string,
                       lv_path        TYPE string,
                       lv_fullpath    TYPE string.
                DATA: Begin of wa_testata,
                      lv_var(10) type c,
                      End of wa_testata.
                DATA: testata like standard table of wa_testata.
                OVERLAY p_attkit WITH lv_filename.
                CALL METHOD cl_gui_frontend_services=>file_save_dialog
                  EXPORTING
    *           WINDOW_TITLE         =
    *           DEFAULT_EXTENSION    =
                      default_file_name    = lv_filename
    *           WITH_ENCODING        =
    *           FILE_FILTER          =
    *           INITIAL_DIRECTORY    =
                   PROMPT_ON_OVERWRITE  = 'X'
                  CHANGING
                    filename             = lv_filen
                    path                 = lv_path
                    fullpath             = lv_fullpath
    *          USER_ACTION          =
    *          FILE_ENCODING        =
                  EXCEPTIONS
                    cntl_error           = 1
                    error_no_gui         = 2
                    not_supported_by_gui = 3
                     OTHERS               = 4.
                IF sy-subrc <> 0.
                  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                             WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
                ENDIF.
                CALL FUNCTION 'GUI_DOWNLOAD'
                        EXPORTING
    *                BIN_FILESIZE                    =
                          filename                        = lv_fullpath
                          filetype                        = 'ASC'
                        APPEND                          = ' '
                        WRITE_FIELD_SEPARATOR           = ' '
                        HEADER                          = '00'
                        TRUNC_TRAILING_BLANKS           = ' '
                        WRITE_LF                        = 'X'
                        COL_SELECT                      = ' '
                        COL_SELECT_MASK                 = ' '
                        DAT_MODE                        = ' '
                        CONFIRM_OVERWRITE               = ' '
                        NO_AUTH_CHECK                   = ' '
                        CODEPAGE                        = ' '
                        IGNORE_CERR                     = ABAP_TRUE
                        REPLACEMENT                     = '#'
                        WRITE_BOM                       = ' '
                        TRUNC_TRAILING_BLANKS_EOL       = 'X'
                        WK1_N_FORMAT                    = ' '
                        WK1_N_SIZE                      = ' '
                        WK1_T_FORMAT                    = ' '
                        WK1_T_SIZE                      = ' '
    *                 IMPORTING
    *                   FILELENGTH                      =
                        TABLES
                          data_tab                        = tb_attrkit
                          fieldnames                      = testata
                       EXCEPTIONS
                         file_write_error                = 1
                         no_batch                        = 2
                         gui_refuse_filetransfer         = 3
                         invalid_type                    = 4
                         no_authority                    = 5
                         unknown_error                   = 6
                         header_not_allowed              = 7
                         separator_not_allowed           = 8
                         filesize_not_allowed            = 9
                         header_too_long                 = 10
                         dp_error_create                 = 11
                         dp_error_send                   = 12
                         dp_error_write                  = 13
                         unknown_dp_error                = 14
                         access_denied                   = 15
                         dp_out_of_memory                = 16
                         disk_full                       = 17
                         dp_timeout                      = 18
                         file_not_found                  = 19
                         dataprovider_exception          = 20
                         control_flush_error             = 21
                         OTHERS                          = 22.
                IF sy-subrc <> 0.
                  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
                ENDIF.

  • SSIS Package : While Extracting Sharepoint Lookup column, getting error 'Cannot convert between unicode and non-unicode string data types'

    Hello,
    I am working on a project where there is a need to extract SharePoint list data and import it into a SQL Server table. I have a few lookup columns in the list.
    Steps in my Data Flow :
    Sharepoint List Source
    Derived Column
    its formula : SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1))
    Data Conversion
    OLE DB Destination
    But I am getting the error that it cannot convert between unicode and non-unicode string data types.
    I am not sure what I am missing here.
    In Data Conversion, what should be the Data Type for the Look up column?
    Please suggest here.
    Thank you,
    Mittal.

    You have a data conversion transformation. Now, in the destination, are you mapping the results of the derived column transformation or of the data conversion transformation? To avoid this error you need to use the data conversion output.
    You can eliminate the need for the data conversion with the following in the derived column (creating a new column):
    (DT_STR,100,1252)(SUBSTRING([BusinessUnit],FINDSTRING([BusinessUnit],"#",1)+1,LEN([BusinessUnit])-FINDSTRING([BusinessUnit],"#",1)))
    The 100 is the length and 1252 is the code page (I almost always use 1252) for interpreting the string.
    Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task which has SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine
    on my local machine when I use the data conversion task to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Agent job, it gives me the unicode and non-unicode string data types error for the field. I have checked the registry
    settings and they are the same on my local machine and the server. I tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What is Unicode and non Unicode data formats
    Unicode : 
    A Unicode character takes more bytes to store the data in the database. As we all know, many global businesses want to increase their business worldwide and grow at the same time; they want to widen their business by providing
    services to customers worldwide, supporting different languages like Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to do their business and to attract more and more customers, and that makes life
    easier for both parties.
    To store the customer data in the database, the database must support a mechanism to store international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come
    up with new mechanisms to support or store these international characters in the database. Some of the big vendors like Oracle, Microsoft, IBM and other database vendors started providing international character support so that the data can be stored
    and retrieved accordingly, to avoid any hiccups while doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether non-Unicode data is stored by using double-byte character sets. All non-East Asian languages and the Thai language store non-Unicode characters
    in single bytes. Therefore, storing these languages as Unicode uses two times the space that is used specifying a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character
    sets (DBCS). Therefore, for these languages, there is almost no difference in storage between non-Unicode and Unicode.
    Encoding Formats: 
    Some of the common encoding formats for Unicode are UCS-2, UTF-8, UTF-16, UTF-32 have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions Microsoft uses the encoding format UCS-2 to store the UTF-8 data. Under
    this mechanism, all Unicode characters are stored by using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
    provider all internally represent Unicode data as UCS-2.
    The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives Unicode data that is encoded as UTF-8 include:
    For example, if your business is using a website supporting ASP pages, then this is what happens:
    If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
    setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations :
    The effect of Unicode data on performance is complicated by a variety of factors that include the following:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules 
    2. The difference between sorting double-byte and single-byte characters 
    3. Code page conversion between client and server
    Performing operations like >, <, ORDER BY are resource intensive and will be difficult to get correct results if the codepage conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
    because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
    Non-Unicode :
    Non-Unicode is exactly the opposite of Unicode. Using non-Unicode it is easy to store languages like English, but not Asian languages that need more bits to be stored correctly; otherwise truncation will occur.
    Now, let’s see some of the advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database hence we will save lot of hard disk space. 
    2. Moving of database files from one server to other takes less time. 
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode Data Types: Comparison Chart
    The primary difference between unicode and non-Unicode data types is the ability of Unicode to easily handle the storage of foreign language characters which also requires more storage space.
    Non-Unicode (char, varchar, text) vs. Unicode (nchar, nvarchar, ntext):
    1. Storage: both store data in fixed or variable length.
    2. Padding: char pads data with blanks to fill the field size (for example, if a char(10) field contains 5 characters the system pads it with 5 blanks); nchar behaves the same. varchar stores the actual value and does not pad with blanks; nvarchar behaves the same.
    3. Space per character: non-Unicode requires 1 byte of storage; Unicode requires 2 bytes.
    4. Maximum length: char and varchar can store up to 8000 characters; nchar and nvarchar can store up to 4000 characters.
    5. Non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters."
    6. Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
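    For instance, a quick way to see the storage difference on SQL Server (the table and values below are made up, purely for illustration):
    CREATE TABLE #unicode_demo (
        name_ascii   varchar(10),   -- non-Unicode: 1 byte per character (code-page dependent)
        name_unicode nvarchar(10)   -- Unicode: 2 bytes per character (UCS-2 in SQL Server)
    );

    INSERT INTO #unicode_demo VALUES ('Hello', N'Hello');

    SELECT DATALENGTH(name_ascii)   AS bytes_ascii,    -- 5
           DATALENGTH(name_unicode) AS bytes_unicode   -- 10
    FROM #unicode_demo;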
    Thanks Shiven:) If Answer is Helpful, Please Vote
