Convert tables from 4.6C to ECC

Hi All,
Please provide me with a list of the tables that were replaced between 4.6C and ECC 6.0.
Regards,
Sudheer

Also see http://erp.fmpmedia.com/Default.aspx?alias=erp.fmpmedia.com/english

Similar Messages

  • Convert table from % to Pixels

    Does anyone know how to convert a table that is already full of
    cells and text from % to pixels? I don't want to try to build
    another table and copy the cell contents into it; this table is too
    complex. You can see it at:
    http://alternativecancer.us/#table2

    The link did not work.
    Either do it in the code - or, if you are not happy with this, ensure
    the Properties panel is open, then select the table using the tag
    selector and enter the table width in pixels. Then select each column
    in turn and add the width of the column in pixels (Horizontal Width).
    For the details, put the following into Dreamweaver's help system:
    Resizing tables, columns, and rows
    ~Malcolm N....

  • Reading APO tables from an ECC ABAP program

    Hi,
    I want to know if it is possible to read APO tables from an ABAP program in ECC 6.0.
    If it is possible, please let me know how.
    Regards

    Hi,
    There's a remote-enabled function module (think it's either RFC_READ_TABLE or RFC_TABLE_READ) to which you can pass a table name, some selection criteria and a list of fields to be returned. That should allow you to read those table entries from the remote system.
    Regards,    Andy
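
    A minimal sketch of that call, assuming an RFC destination named 'APOCLNT100' pointing at the APO system and /SAPAPO/MATKEY as an example table (both are placeholders - substitute your own destination, table, and WHERE clause):

    REPORT zread_apo_table.

    DATA: lt_options TYPE STANDARD TABLE OF rfc_db_opt,
          lt_fields  TYPE STANDARD TABLE OF rfc_db_fld,
          lt_data    TYPE STANDARD TABLE OF tab512,
          ls_option  TYPE rfc_db_opt,
          ls_data    TYPE tab512.

    * Selection criteria are passed as lines of an Open SQL WHERE clause.
    ls_option-text = 'MATNR LIKE ''Z%'''.
    APPEND ls_option TO lt_options.

    CALL FUNCTION 'RFC_READ_TABLE'
      DESTINATION 'APOCLNT100'          " placeholder RFC destination
      EXPORTING
        query_table = '/SAPAPO/MATKEY'  " example APO table
        delimiter   = '|'
      TABLES
        options     = lt_options
        fields      = lt_fields         " left empty: return all fields
        data        = lt_data
      EXCEPTIONS
        table_not_available = 1
        OTHERS              = 2.

    IF sy-subrc = 0.
      LOOP AT lt_data INTO ls_data.
        WRITE / ls_data-wa.             " each row comes back as one delimited string
      ENDLOOP.
    ENDIF.

    Note that RFC_READ_TABLE returns each row as a single 512-character string, so it works best for narrow tables or a restricted field list.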

  • Accessing ECC tables from XSLT mapping

    Hi All,
    I have a requirement where I need to access an SAP table from a PI XSLT mapping.
    Please provide inputs on how to achieve it.
    Thanks,
    Navneeth K.

    Hello,
    You can refer to this document found in SAP Help
    http://help.sap.com/saphelp_nw04/helpdata/en/55/7ef3003fc411d6b1f700508b5d5211/frameset.htm
    And a sample blog
    /people/pooja.pandey/blog/2005/06/27/xslt-mapping-with-java-enhancement-for-beginners
    In your case, the idea is to call a Java class inside the XSLT mapping. So to access the ECC table, you can use a Java mapping class, which makes it easier to implement an RFC lookup.
    Hope this helps,
    Mark
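
    If the lookup ends up calling a custom remote-enabled function module on the ECC side rather than a generic one like RFC_READ_TABLE, a minimal sketch of such a module might look like this (the name, parameters, and the MAKT example are all hypothetical):

    FUNCTION z_xslt_table_lookup.
    *"----------------------------------------------------------------------
    *"*"Local Interface (remote-enabled):
    *"  IMPORTING
    *"     VALUE(IV_MATNR) TYPE  MATNR
    *"  EXPORTING
    *"     VALUE(EV_MAKTX) TYPE  MAKTX
    *"----------------------------------------------------------------------
    * Read one material description; the Java lookup class would call
    * this module from the XSLT mapping via the lookup API / JCo.
      SELECT SINGLE maktx FROM makt
        INTO ev_maktx
        WHERE matnr = iv_matnr
          AND spras = sy-langu.
    ENDFUNCTION.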

  • Convert data from internal table to XML file.

    Hi All,
    I am selecting data from the database into one internal table.
    Now I want to convert the data from the internal table to XML file format and save it to my desktop. Please suggest how I can achieve this requirement.
    Kindly reply ASAP.

    Use this FM: SAP_CONVERT_TO_XML_FORMAT.
    Check this link too -
    Re: Data Export in XML format
    XML files from ABAP programs
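
    A minimal sketch of the conversion and download, assuming the TRUXS_XML_TABLE line type commonly shown for the converted output (verify the FM interface in SE37 on your release; the MARA selection and file path are just examples):

    DATA: lt_mara TYPE STANDARD TABLE OF mara,
          lt_xml  TYPE truxs_xml_table,
          lv_file TYPE string VALUE 'C:\temp\mara.xml'.

    SELECT * FROM mara INTO TABLE lt_mara UP TO 100 ROWS.

    * Convert the internal table into XML lines.
    CALL FUNCTION 'SAP_CONVERT_TO_XML_FORMAT'
      TABLES
        i_tab_sap_data       = lt_mara
      CHANGING
        i_tab_converted_data = lt_xml
      EXCEPTIONS
        conversion_failed    = 1
        OTHERS               = 2.

    IF sy-subrc = 0.
      " Save the XML lines to the desktop (presentation server).
      CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          filename = lv_file
        CHANGING
          data_tab = lt_xml
        EXCEPTIONS
          OTHERS   = 1.
    ENDIF.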

  • Call to ECC table from BI

    I have this routine in a BI transformation:
    select single CONTACTNAME INTO RESULT FROM THREIC_CONTACT WHERE CONT_GUID =
         source-system-cont-guid.
    Error: THREIC_CONTACTNAME not recognised
    Any ideas how to access the table?

    BI and ECC are different systems. You cannot just select from an ECC table inside the BI system that way.
    What you can do is simply:
    create a DataSource in your ECC system based on the THREIC_CONTACT table.
    Then load the data into an ODS or some master data.
    Now you can refer to the table (the active data table of the ODS, or the P/Q/X/Y tables of the master data) and use a select statement.
    Another possible way: if you are writing this routine for data you extract from ECC, you can enhance your DataSource to apply this control on the R/3 side.
    Maybe the gurus have another idea.
    Hope this helps
    Derya
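
    For illustration, once the data has been loaded, the routine can read the DSO's active table locally. In this hypothetical sketch the DSO name ZCONT, its generated active table /BIC/AZCONT00, and the /BIC/ field names are all placeholders:

    * Hypothetical: THREIC_CONTACT has been loaded into DSO ZCONT, whose
    * generated active table /BIC/AZCONT00 now exists locally in BI.
    SELECT SINGLE /bic/zcontname
      INTO result
      FROM /bic/azcont00
      WHERE /bic/zcontguid = source_fields-cont_guid.
    IF sy-subrc <> 0.
      CLEAR result.
    ENDIF.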

  • Convert table keys from random to sequential :: algorithm

    Hello
    I hope I am at the right forum.
    We are using Oracle as our DB.
    One of our major data tables has random keys; this is due to legacy issues.
    This table is getting big, and the random access pattern is slowing the system down.
    We want to change the table IDs from random to sequential.
    In order to do that we have to change the original IDs to sequential ones.
    Our table has 2 columns:
    id (number - 10)
    value (varchar - 1024)
    The table contains about 500 million rows (as of now).
    This table also has junk records that need to be deleted.
    I am trying to think of a way to convert all the random id values into sequential ones, and change all of their references accordingly.
    I tried to look for an algorithm to do that.
    Is copying such a table time consuming?
    Is there a way to change the IDs into sequential ones without copying the table?
    I'm a Java developer, not a DBA, so I don't really know the "cost" of these actions.
    Any idea would be great.
    Thanks,
    Carol

    Does your table have a trigger? A primary key? Other tables with foreign key references?
    You could probably use an Oracle sequence and update your table, with a before-update trigger to propagate the old-to-new key change to the dependent tables. However, since you have millions of rows, and the update will have to go through every single row, it will be a slow process.
    You could also have a look at Tom Kyte's solution for the update cascade of primary keys:
    http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteupdatecascade.html
    Nicolas.

  • Replication Z-Tables from ECC to CRM

    We have a client with some Z-tables in ECC, and we need to replicate these tables to CRM. Does anybody have any information about this?
    Thanks in advance
    Sebas

    Hi Sebas,
    The following link shows the replication from CRM to ECC. The same approach can be followed for replication from ECC to CRM.
    Replication of Z table from CRM to R/3 - No mBDoc Created
    You can also try this:
    1. Create the Z-tables in both ECC and CRM.
    2. Create customizing adapter objects in R3AC3.
    3. Copy the standard FM 'CRM_BUPA_MAP_ADREREG_CI' to a custom FM and write the source code. Load the object in R3AS.
    4. Create a variant and schedule it based on the requirement.
    Thanks and regards,
    Madhukar Reddy

  • How to convert a .txt file (from the application server) to an internal table

    hi all
    I want to convert a .txt file (from the application server) to an internal table. I am getting the contents into the itab, but I am not able to remove the '#' characters.
    Can anybody help me?
    Thanks.

    The # is a representation of the tab character, so you need to split each line at the tab. Something like this:
    report zrich_0001.
    data: str type string.
    data: begin of itab occurs 0,
          fld1(10) type c,
          fld2(10) type c,
          fld3(10) type c,
          end   of itab.
    data:
          dsn(100) value '/usr/sap/TST/sys/test.txt'.
    clear itab.  refresh itab.
    * Read the data in text mode; the tabs inside each line are split off below.
    open dataset dsn for input in text mode encoding default.
    do.
      read dataset dsn into str.
      if sy-subrc = 0.
    *   cl_abap_char_utilities=>horizontal_tab is the tab character.
        split str at cl_abap_char_utilities=>horizontal_tab
              into itab-fld1 itab-fld2 itab-fld3.
        append itab.
      else.
        exit.
      endif.
    enddo.
    close dataset dsn.
    loop at itab.
      write:/ itab-fld1, itab-fld2, itab-fld3.
    endloop.
    Regards,
    Rich Heilman

  • How to insert data into a table from an xml document

    Using the XML SQL Utility, how do I insert data into a table from an XML document, using the SQL*Plus prompt?
    If I use xmlgen.insertXML(....), it requires a CLOB, which I don't have, only the XML doc.
    Can't I insert directly from the doc to the table?
    The xmlgen examples I have seen first convert a table to a CLOB XML string and then insert it into another table.
    Isn't there any other way?

    Your question is a little perplexing.
    If you're using the XML SQL Utility from
    the command line, just use putXML:
    java OracleXML putXML

  • Importing internal table from one program to another program

    Hi everybody,
    I have one small doubt.
    I am using a SUBMIT statement and passing values from this program to another program's selection screen. The logic is written in that program, and one of its internal tables is exported to memory under a memory ID. Now I have to import those internal table values into my program by using the IMPORT statement. I am using the following syntax:
    import itab from memory id 'program name'.
    but I am getting an error saying the program name is unknown.
    What is the exact syntax for this?
    Thanking you,
    giri.

    hi,
    check these statements.
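    Before the full syntax reference below, note that the ID in EXPORT/IMPORT ... MEMORY names the data cluster, not the program; both sides must use the same ID. A minimal sketch (the program names and the ID are made up):

    * Called program: export the table to ABAP memory under an ID.
    REPORT zcalled.
    DATA itab TYPE STANDARD TABLE OF mara.
    SELECT * FROM mara INTO TABLE itab UP TO 10 ROWS.
    EXPORT itab = itab TO MEMORY ID 'ZMY_ITAB'.

    * Calling program: submit, then import under the same ID.
    REPORT zcaller.
    DATA itab TYPE STANDARD TABLE OF mara.
    SUBMIT zcalled AND RETURN.
    IMPORT itab = itab FROM MEMORY ID 'ZMY_ITAB'.
    IF sy-subrc <> 0.
      WRITE / 'Nothing found under memory ID ZMY_ITAB.'.
    ENDIF.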
    IMPORT - Get data
    Variants:
    1. IMPORT obj1 ... objn FROM DATA BUFFER f.
    2. IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    3. IMPORT obj1 ... objn FROM MEMORY.
    4. IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    5. IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    6. IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    7. IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    8. IMPORT obj1 ... objn FROM LOGFILE ID key.
    9. IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    10. IMPORT (itab) FROM ... .
    In some cases, the syntax rules that apply to Unicode programs are different than those for non-Unicode programs. For more details, see Storing Cluster Tables.
    Variant 1
    IMPORT obj1 ... objn FROM DATA BUFFER f.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports the data objects obj1 ... objn from the data buffer declared. The data buffer must be of type XSTRING . The data objects obj1 ... objn can be fields, structures, complex structures, or tables. The system imports all the data that has been stored in the data buffer f using the EXPORT ... TO DATA BUFFER statement and is listed here. It also checks that the structure used in the IMPORT statement matches the one in the EXPORT statement.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. The contents of all the objects remain unchanged.
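    For instance, a minimal round trip through a data buffer (illustrative names):

    DATA: buf    TYPE xstring,
          lv_out TYPE i VALUE 42,
          lv_in  TYPE i.

    * Serialize into the XSTRING buffer, then read it back.
    EXPORT num = lv_out TO DATA BUFFER buf.
    IMPORT num = lv_in FROM DATA BUFFER buf.
    * lv_in is now 42 and sy-subrc = 0.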
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables (the IMPORT statement fills the additional fields with initial values); make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or map byte-type fields to XSTRING-type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is
    relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition prevents the system from triggering a
    runtime error, if an error occurs when the character set is converted. '#' is used as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular
    character cannot be converted when the character set is converted.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    No character-type fields are converted. To convert
    a field, you must create a field (or structure) that is identical to the exported field or structure, except that all its character-type components must be replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters.
    Moreover, you cannot use this addition in conjunction with the additions 3, 4, 5, 6, or 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the
    character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
    Variant 2
    IMPORT obj1 ... objn FROM INTERNAL TABLE itab.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ACCEPTING PADDING
    4. ... ACCEPTING TRUNCATION
    5. ... IGNORING STRUCTURE BOUNDARIES
    6. ... IGNORING CONVERSION ERRORS
    7. ... REPLACEMENT CHARACTER c
    8. ... IN CHAR-TO-HEX MODE
    9. ... CODE PAGE INTO f1
    10. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See No implicit field names in cluster.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from the specified internal table itab. The first column in the internal table must be of the predefined type INT2 and the second must be type X. To define the first column you must refer to a data element in the ABAP Dictionary that has the predefined type INT2.
    All data that was stored in the internal table itab using EXPORT ... TO INTERNAL TABLE and listed, is imported. The system checks that the EXPORT and IMPORT structures match.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the specified data cluster were imported, the rest remain unchanged (it is possible that no data object was imported).
    SY-SUBRC = 4:
    The data objects could not be imported.
    The contents of all listed objects remain unchanged
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    Places the object in the field f.
    Addition 3
    ... ACCEPTING PADDING
    Effect
    This addition allows you to add new fields to the ends
    of structures, even to substructures and internal tables (the additional fields are filled with initial value during the IMPORT). It also allows you to increase the size of existing fields (C, N, X, P, I1, and I2) and to map Char fields to STRING type fields or byte fields to XSTRING type fields.
    Addition 4
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR
    field or omit the last component on the highest level (till Release 4.6 this was possible without specifying an addition).
    Addition 5
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is
    relevant, that is, that any substructures match. With this addition, the system also ignores alignment changes arising from the Unicode conversion (for example, due to subsequent insertion of named includes).
    This addition rules out any subsequent structural enhancements (addition 3) or structural shortening (addition 4) because with this addition it is the structural limits and include limits that are to be ignored.
    As from Release 6.10, the include information will also be stored in the dataset, so that it is possible to also check whether the includes match, that is substructures and includes (named or unnamed) are treated the same. When importing data that was exported in a Release lower than 6.10, the includes are not checked.
    Addition 6
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition has the effect that an error in the
    character set conversion does not cause a runtime error. The system uses "#" as a replacement character.
    Addition 7
    ... REPLACEMENT CHARACTER c
    Effect
    The system uses the specified replacement character if a
    character cannot be converted during a character set conversion. If this addition is not specified, the system uses "#" as a replacement character.
    This addition can only be used in conjunction with addition 6.
    Addition 8
    ... IN CHAR-TO-HEX MODE
    Effect
    No character type fields are converted. For this you
    must create a field or structure that is identical to the exported field or exported structure, except that all character type fields must be replaced with hexadecimal fields.
    This addition, which is only allowed in programs with a set Unicode flag, allows you to import binary data disguised as single byte characters. This addition cannot be used in conjunction with additions 3, 4, 5, 6, and 7.
    Addition 9
    ... CODE PAGE INTO f1
    Effect
    The codepage of the exported data is stored in the
    character-type field f1 (for example, to be able to analyze the data imported with the addition IN CHAR-TO-HEX MODE).
    Addition 10
    ... ENDIAN INTO f2
    Effect
    The byte order (LITTLE or BIG) of the
    exported data is stored in the field f2 (for example, to be able analyze the data imported using the addition IN CHAR-TO-HEX MODE). The field f2 must be of type ABAP_ENDIAN, defined in type group ABAP. You must therefore include the type group ABAP in the ABAP program with a TYPE-POOLS statement.
    Variant 3
    IMPORT obj1 ... objn FROM MEMORY.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... ID key
    4. ... ACCEPTING PADDING
    5. ... ACCEPTING TRUNCATION
    6. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See You Must Enter Identification and Cannot Use Implicit Field Names in Clusters.
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures or tables) from a data cluster in the ABAP memory (see EXPORT). Reads in all data without an ID that was exported to memory with "EXPORT ... TO MEMORY.". In contrast to the variant IMPORT FROM DATABASE, it does not check that the structure matches in EXPORT and IMPORT.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because the ABAP memory was empty.
    The contents of all objects remain unchanged.
    Note
    You should always use the addition 3 (... ID key) with the statement. Otherwise, the effect of the variant is not certain (EXPORT statements in different parts of a program overwrite each other in the ABAP memory), since it exists only for reasons of compatibility with R/2.
    Additional methods for selecting and deleting data clusters in the ABAP memory are provided by the system class CL_ABAP_EXPIMP_MEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... ID key
    Effect
    Imports only data stored in ABAP memory under the ID key.
    Notes
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Addition 4
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables (the IMPORT statement fills the additional fields with initial values); make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or map byte-type fields to XSTRING-type fields.
    Addition 5
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR field, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 6
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 3 (enlarge structure) or addition 4 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO MEMORY, DELETE FROM MEMORY, FREE MEMORY
    Variant 4
    IMPORT obj1 ... objn FROM SHARED MEMORY itab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (after itab(ar) or ID key )
    5. ... ACCEPTING PADDING
    6. ... ACCEPTING TRUNCATION
    7. ... IGNORING STRUCTURE BOUNDARIES
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See You Cannot Use Implicit Field Names in Clusters and You Cannot Use Table Work Areas.
    Effect
    Imports the data objects obj1 ... objn (fields, structures, complex structures, or tables) from shared memory. The data objects are read using the ID key from the area ar in the table itab (cf. EXPORT TO SHARED MEMORY). You must use itab to specify a database table although the system reads from a memory table with the appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged. (In some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. You may have used the wrong ID. The contents of all the objects remain unchanged.
    Notes
    The table dbtab named according to SHARED MEMORY must be declared using TABLES (except in addition 2).
    The structure of fields (field symbols and internal tables) to be imported must match the structure of the objects exported in the dataset. The objects must be imported under the same names as those under which they were exported. Otherwise, they will not be imported.
    The key length consists of: the client (3 digits, but only if tab is client-specific); area (2 characters); ID; and line number (4 bytes). It must not exceed 64 bytes - that is, the ID must not be longer than 55 characters, if the table is client- specific.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the shared memory are provided by the system class CL_ABAP_EXPIMP_SHMEM.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is stored in the field f.
    Addition 3
    ... CLIENT g (before ID key)
    Effect
    The data is imported from client g (provided the import/export table tab is client-specific). The client g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (after itab(ar) or ID key)
    Effect
    You need to use this addition if user data fields have been stored in the application buffer and are to be read from there. The work area wa is used instead of the table work area. The target area must correspond to the structure of the called table tab.
    Addition 5
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables (the IMPORT statement fills the additional fields with initial values); make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or map byte-type fields to XSTRING-type fields.
    Addition 6
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 7
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 4 (enlarge structure) or addition 5 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Related
    EXPORT TO SHARED MEMORY, DELETE FROM SHARED MEMORY
    Variant 5
    IMPORT obj1 ... objn FROM SHARED BUFFER itab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key)
    4. ... TO wa (last addition or after itab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas.
    See Cannot Use Implicit Field Names in Clusters and Cannot Use Table Work Areas.
    Effect
    Imports data objects obj1 ... objn (fields or
    tables) from the cross-transaction application buffer. The data objects are read in the application buffer using the ID key of the area ar of the buffer area for the table itab (see EXPORT TO SHARED BUFFER). You must use dbtab to specify a database table although the system reads from a memory table with an appropriate structure.
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this means that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table from the application buffer with the structure INDX:
    TYPES: BEGIN OF ITAB3_LINE,
             CONT(4),
           END OF ITAB3_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD VALUE 'KEYVALUE',
          F1(4),
          F2(8) TYPE P DECIMALS 0,
          ITAB3 TYPE STANDARD TABLE OF ITAB3_LINE,
          INDX_WA TYPE INDX.
    Import data.
    IMPORT F1 = F1 F2 = F2 ITAB3 = ITAB3
           FROM SHARED BUFFER INDX(ST) ID INDXKEY TO INDX_WA.
    After import, the data fields INDX-AEDAT and
    INDX-USERA in front of CLUSTR are filled with
    the values in the fields before the EXPORT
    statement.
    Notes
    You must declare the table dbtab, named after DATABASE using a TABLES statement.
    The structure of the fields, structures, and internal tables to be imported must match the structure of the objects exported to the dataset. Moreover, the objects must be imported with the same name used to export them. Otherwise, the import is not performed.
    The maximum total key length is 64 bytes. It must include: a client if the table is client-specific (3 characters); an area (2 characters); identification; and line counter (4 bytes). This means that the number of characters available for the identification of a client-specific table is 55 characters.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the cross-transaction application buffer are provided by the system class CL_ABAP_EXPIMP_SHBUF.
    Please consult Data Area and Modularization Unit Organization documentation as well.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in the field f
    Addition 3
    ... CLIENT g (after dbtab(ar))
    Effect
    Takes the data from the client g (if the import/export table dbtab is client-specific). The client g must be a character-type data object (but not a string).
    Addition 4
    ... TO wa (as the last addition or after itab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the application buffer and then read them from there later. The system uses a work area wa instead of a table work area. The target area must have the same structure as the table tab.
    Example
    DATA: INDX_WA TYPE INDX,
          F1.
    IMPORT F1 = F1 FROM SHARED BUFFER INDX(AR)
                   CLIENT '001' ID 'TEST'
                   TO INDX_WA.
    WRITE: / 'AEDAT:', INDX_WA-AEDAT,
           / 'USERA:', INDX_WA-USERA,
           / 'PGMID:', INDX_WA-PGMID.
    Variant 6
    IMPORT obj1 ... objn FROM DATABASE dbtab(ar) ID key.
    Extras:
    1. ... = f (for each object to be imported)
    2. ... TO f (for each object to be imported)
    3. ... CLIENT g (before ID key )
    4. ... USING form
    5. ... TO wa (last addition or after dbtab(ar))
    6. ... MAJOR-ID id1 (instead of ID key)
    7. ... MINOR-ID id2 (with MAJOR-ID id1 )
    8. ... ACCEPTING PADDING
    9. ... ACCEPTING TRUNCATION
    10. ... IGNORING STRUCTURE BOUNDARIES
    11. ... IGNORING CONVERSION ERRORS
    12. ... REPLACEMENT CHARACTER c
    13. ... IN CHAR-TO-HEX MODE
    14. ... CODE PAGE INTO f1
    15. ... ENDIAN INTO f2
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Fieldnames in Clusters and Cannot Use Table Work Areas.
    Effect
    Imports data objects obj1 ... objn (fields, structures, complex structures, or tables) from the data cluster with ID key in area ar of the database table dbtab (see EXPORT TO DATABASE).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported, probably because an incorrect ID was used.
    The contents of all objects remain unchanged.
    Example
    Import two fields and an internal table:
    TYPES: BEGIN OF TAB3_TYPE,
              CONT(4),
           END OF TAB3_TYPE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4), F2 TYPE P,
          TAB3 TYPE STANDARD TABLE OF TAB3_TYPE WITH
                    NON-UNIQUE DEFAULT KEY,
          WA_INDX TYPE INDX.
    INDXKEY = 'INDXKEY'.
    IMPORT F1   = F1
           F2   = F2
           TAB3 = TAB3 FROM DATABASE INDX(ST) ID INDXKEY
           TO WA_INDX.
    Notes
    You must declare the table dbtab, named after DATABASE, using the TABLES statement (except in addition 5).
    The structure of fields, field strings and internal tables to be imported must match the structure of the objects exported to the dataset. In addition, the objects must be imported under the same name used to export them. If this is not the case, either a runtime error occurs or no import takes place.
    Exception: You can lengthen or shorten the last field if it is of type CHAR, or add/omit CHAR fields at the end of the structure.
    The key, key, must be a character-type data object (but not a string).
    Additional methods for selecting and deleting data clusters in the database table specified are provided by the system class CL_ABAP_EXPIMP_DB.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Addition 3
    ... CLIENT g (before the ID key)
    Effect
    Data is taken from the client g (in client-specific import/export databases only). Client g must be a character-type data object (but not a string).
    Example
    DATA: F1,
          WA_INDX TYPE INDX.
    IMPORT F1 = F1 FROM DATABASE INDX(AR) CLIENT '002' ID 'TEST'
                   TO WA_INDX.
    Addition 4
    ... USING form
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Effect
    Does not read the data from the database. Instead, calls the FORM routine form for each record read from the database without this addition. This routine can take the data key of the data to be retrieved from the database table work area and write the retrieved data to this work area. The name of the routine has the format <name of database table>_<name of form>; it has one parameter which describes the operation (READ, UPDATE or INSERT). The routine must set the field SY-SUBRC in order to show whether the function was successfully performed.
    Addition 5
    ... TO wa (after key or after dbtab(ar))
    Effect
    You need to use this addition if you want to save user data fields in the cluster database and then read from there. The system uses the work area wa instead of a table work area. The target area entered must have the same structure as the table dbtab.
    Example
    DATA WA LIKE INDX.
    DATA F1.
    IMPORT F1 = F1 FROM DATABASE INDX(AR)
                   CLIENT '002' ID 'TEST'
                   TO WA.
    WRITE: / 'AEDAT:', WA-AEDAT,
           / 'USERA:', WA-USERA,
           / 'PGMID:', WA-PGMID.
    Addition 6
    ... MAJOR-ID id1 (instead of the ID key).
    Addition 7
    ... MINOR-ID id2 (with MAJOR-ID id1)
    This addition is not allowed in an ABAP Objects context. See Cannot Use Generic Identification.
    Effect
    Searches for a record the first part of whose ID (length of id1) matches id1 and whose second part - if MINOR-ID id2 is also declared - is greater than or equal to id2.
    Addition 8
    ... ACCEPTING PADDING
    Effect
    This addition allows you to: append new fields to the end of structures, sub-structures, and internal tables (the IMPORT statement fills the additional fields with initial values); make existing fields (C, N, X, P, I1, and I2) longer; map character-type fields to STRING-type fields; or map byte-type fields to XSTRING-type fields.
    Addition 9
    ... ACCEPTING TRUNCATION
    Effect
    This addition allows you to shorten the last CHAR fields, or to omit the last component at the top level. (Until Release 4.6, you could do this without using an addition).
    Addition 10
    ... IGNORING STRUCTURE BOUNDARIES
    Effect
    This addition means that only the fragment sequence is relevant - that is, that any sub-structures match. If you use this addition, the system ignores any alignment changes necessitated by Unicode - such as inserting named includes.
    You cannot use this addition with either addition 8 (enlarge structure) or addition 9 (shorten structure), since it specifies that structure and include boundaries are to be ignored.
    From Release 6.10 onwards, the include information is stored in datasets, so that the system can also check that includes match - that is, that sub-structures and includes (named or unnamed) are treated equally. When data is imported in a Release prior to 6.10, includes are not checked.
    Addition 11
    ...IGNORING CONVERSION ERRORS
    Effect
    This addition prevents the system from triggering a runtime error, if an error occurs when the character set is converted. '#' is used as a replacement character.
    Addition 12
    ... REPLACEMENT CHARACTER c
    Effect
    The replacement character is used if a particular character cannot be converted when the character set is converted. If you do not use this addition, '#' is used as a replacement character.
    This addition can only be used in conjunction with addition 11.
    Addition 13
    ... IN CHAR-TO-HEX MODE
    Effect
    No character-type fields are converted. To convert a field, you must create a field (or structure) that is identical to the exported field or structure, except that all its character-type components must be replaced with hexadecimal fields.
    You can only use this addition in Unicode programs, to allow you to import camouflaged binary data as single-byte characters. Moreover, you cannot use this addition in conjunction with the additions 8, 9, 10, 11, and 12.
    Addition 14
    ... CODE PAGE INTO f1
    Effect
    The code page of the exported data is stored in the character-type field f1 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition.
    Addition 15
    ... ENDIAN INTO f2
    Effect
    The byte order(LITTLE or BIG) of the exported data is stored in the field f2 - for example, to analyze data that has been imported with the IN CHAR-TO-HEX MODE addition. The field f2 must have the type ABAP_ENDIAN, which is defined in the type group ABAP. For this reason, the type group ABAP must be included in the ABAP program using a TYPE-POOLS statement.
    Variant 7
    IMPORT obj1 ... objn FROM DATASET dsn(ar) ID key.
    This variant is not allowed in an ABAP Objects context. See Cannot Use Clusters in Files
    Note
    This variant is no longer supported and cannot be used.
    Variant 8
    IMPORT obj1 ... objn FROM LOGFILE ID key.
    Note
    This statement is for internal use only.
    Incompatible changes or further developments may occur at any time without warning or notice.
    Extras:
    1. ... = f (for each field f to be imported)
    2. ... TO f (for each field f to be imported)
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Implicit Field Names in Clusters
    Effect
    Imports data objects (fields, field strings or internal tables) from the update data. You must specify the update key assigned by the system (with current request number) as the key.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The existing data objects in the data cluster specified were imported. The rest remain unchanged (in some circumstances, this may mean that no data objects were imported).
    SY-SUBRC = 4:
    The data objects could not be imported. An incorrect ID may have been used.
    The contents of all objects remain unchanged.
    Addition 1
    ... = f (for each object to be imported)
    Addition 2
    ... TO f (for each object to be imported)
    Effect
    The object is placed in field f.
    Variant 9
    IMPORT DIRECTORY INTO itab FROM DATABASE dbtab(ar) ID key.
    Extras:
    1. ... CLIENT g (after dbtab(ar))
    2. ... TO wa (last addition or after dbtab(ar))
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Table Work Areas.
    Effect
    Imports an object directory stored under the specified ID with EXPORT TO DATABASE into the table itab. The internal table itab may not have the type HASHED TABLE or ANY TABLE.
    The key, key, must be a character-type data object (but not a string).
    The Return Code is set as follows:
    SY-SUBRC = 0:
    The directory was successfully imported.
    SY-SUBRC = 4:
    The directory could not be imported, probably because an incorrect ID was used.
    The internal table itab must have the same structure as the Dictionary structure CDIR (INCLUDE STRUCTURE).
    Addition 1
    ... CLIENT g (before ID key)
    Effect
    Takes data from the client g (only with client-specific import/export databases). Client g must be a character-type data object (but not a string).
    Addition 2
    ... TO wa (last addition or after dbtab(ar))
    Effect
    Uses the work area wa instead of the table work area. When you use this addition, you do not need to declare the table dbtab, named after DATABASE using a TABLES statement. The work area entered must have the same structure as the table dbtab.
    Example
    Directory of a cluster consisting of two fields and an internal table:
    TYPES: BEGIN OF TAB3_LINE,
             CONT(4),
           END OF TAB3_LINE,
           BEGIN OF DIRTAB_LINE.
             INCLUDE STRUCTURE CDIR.
    TYPES  END OF DIRTAB_LINE.
    DATA: INDXKEY LIKE INDX-SRTFD,
          F1(4),
          F2(8)   TYPE P decimals 0,
          TAB3    TYPE STANDARD TABLE OF TAB3_LINE,
          DIRTAB  TYPE STANDARD TABLE OF DIRTAB_LINE,
          INDX_WA TYPE INDX.
    INDXKEY = 'INDXKEY'.
    EXPORT F1 = F1
           F2 = F2
           TAB3 = TAB3
           TO DATABASE INDX(ST) ID INDXKEY " TAB3 has 17 entries
           FROM INDX_WA.
    IMPORT DIRECTORY INTO DIRTAB FROM DATABASE INDX(ST) ID INDXKEY
           TO INDX_WA.
    Then, the table DIRTAB contains the following:
    NAME     OTYPE  FTYPE  TFILL  FLENG
    F1         F      C      0      4
    F2         F      P      0      8
    TAB3       T      C      17     4
    The meaning of the individual fields is as follows:
    NAME:
    Name of stored object
    OTYPE:
    Object type (F: Field, R: Field string / Dictionary structure, T: Internal table)

  • How can I convert table object into table record format?

    I need to write a stored procedure to convert a table object into table records. The stored procedure will take a table object IN and then pass the data into another stored procedure with a table record IN. The data passed in may contain more than one record in the table object. Is there any example I can take a look at? Thanks.

    I'm afraid it's a bit laborious, but here's an example.
    I think it's a good idea to work with SQL objects rather than PL/SQL nested tables.
    SQL> CREATE OR REPLACE TYPE emp_t AS OBJECT
      2      (eno NUMBER(4)
      3      , ename  VARCHAR2(10)
      4      , job VARCHAR2(9)
      5      , mgr  NUMBER(4)
      6      , hiredate  DATE
      7      , sal  NUMBER(7,2)
      8      , comm  NUMBER(7,2)
      9      , deptno  NUMBER(2));
    10  /
    Type created.
    SQL> CREATE OR REPLACE TYPE staff_nt AS TABLE OF emp_t
      2  /
    Type created.
    Now we've got some types, let's use them. I've only implemented this as one public procedure, but you can see the principles in action.
    SQL> CREATE OR REPLACE PACKAGE emp_utils AS
      2      TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
      3      PROCEDURE pop_emp (p_emps in staff_nt);
      4  END  emp_utils;
      5  /
    Package created.
    SQL> CREATE OR REPLACE PACKAGE BODY emp_utils AS
      2      FUNCTION emp_obj_to_rows (p_emps IN staff_nt) RETURN EmpCurTyp IS
      3          rc EmpCurTyp;
      4      BEGIN
      5          OPEN rc FOR SELECT * FROM TABLE( CAST ( p_emps AS staff_nt ));
      6          RETURN rc;
      7      END  emp_obj_to_rows;
      8      PROCEDURE pop_emp (p_emps in staff_nt) is
      9          e_rec emp%ROWTYPE;
    10          l_emps EmpCurTyp;
    11      BEGIN
    12          l_emps := emp_obj_to_rows(p_emps);
    13          FETCH l_emps INTO e_rec;
    14          LOOP
    15              EXIT WHEN l_emps%NOTFOUND;
    16              INSERT INTO emp VALUES e_rec;
    17              FETCH l_emps INTO e_rec;
    18          END LOOP;
    19          CLOSE l_emps;
    20      END pop_emp;   
    21  END;
    22  /
    Package body created.
    Looks good. Let's see it in action...
    SQL> DECLARE
      2      newbies staff_nt :=  staff_nt();
      3  BEGIN
      4      newbies.extend(2);
      5      newbies(1) := emp_t(7777, 'APC', 'CODER', 7902, sysdate, 1700, null, 40);
      6      newbies(2) := emp_t(7778, 'J RANDOM', 'HACKER', 7902, sysdate, 1800, null, 40);
      7      emp_utils.pop_emp(newbies);
      8  END;
      9  /
    PL/SQL procedure successfully completed.
    SQL> SELECT * FROM emp WHERE deptno = 40
      2  /
         EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
          7777 APC        CODER           7902 17-NOV-05       1700                    40
          7778 J RANDOM   HACKER          7902 17-NOV-05       1800                    40
    Cheers, APC

  • Replication of Z table from CRM to R/3 - No mBDoc Created

    I need to transfer the contents of a bespoke customer table from CRM into R/3, off the back of delta changes being made to the CRM table.  To help us to achieve this we have performed the following steps so far:
    1. Created the customer table in both systems.
    2. Created a new messaging BDoc in CRM and linked it to the R/3 Site Type.
    3. Created a new mapping module in CRM that takes the data from the BDoc and maps it to the BAPI structure.
    4. Created a new Adapter Object that links to my BDoc, contains the new customer table as the source table in CRM and contains the mapping Module mentioned above. 
    5. Created a new Replication Object based on my new BDoc.
    6. Created a new Publication and assigned it to the Replication Object.
    7. Created a new Subscription and assigned to the Publication and Replication Object. Also assigned it to my R/3 site.
    On the R/3 side we have created a mapping module to map the data from the BAPI structure into the equivalent R/3 table. We also have an entry in table CRMSUBTAB.
    However when I insert an entry in the customer table in CRM no BDoc is being created. In fact I cannot see anything at all in the system that indicates that it has even tried to capture the change and invoke the Middleware process.
    What am I doing wrong?
    Do I need anything else (some sort of delta program?) that picks up the parameters from the table update and feed them into my process?  The literature that I have found (and it is not much) does not mention anything like this though.
    Any help would be greatly appreciated as this is now a very urgent requirement.
    Regards
    Ian
    Edited by: IAN HAWLEY on Aug 21, 2008 9:42 AM

    Ian,
    I did expect the follow-up questions. Check my explanation below; I hope it will answer your queries:
    1. I assume all of the activities performed to date are still valid to supplement your solution, e.g. the BDoc, Replication Object, Publication and Subscription details?
    2. The R/3 to CRM Mapping Module. Is this required to allow messages to be sent back from R/3 to update the BDoc, e.g. a sort of validation to prove that the posting has completed ok?
    FM ZMAP_BAPIMTCS_TO_MBDOC in CRM maps the BAPIMTCS format data and builds the BDoc. This BAPIMTCS format is a temporary one and is not the final data format that is taken to ECC. This function module also takes care of receiving the response message from ECC once the BDoc data reaches ECC and is updated there. If any error occurs during the update, it is captured in the error table of the BDoc and the status of the BDoc is set to 'Error'. If no error occurs, the status of the BDoc is set to 'Confirmed'.
    3. The Extractor Module in CRM. Does this get the data out of the table and will it work for deltas?
    Yes, it should work for delta too. The delta load makes use of the same program and flow as the initial load (SMOF_DOWNLOAD).
    4. CRMSUBTAB in CRM. I knew that we populated this in R/3, I did not realise we would need it in CRM as I assumed it was R/3 specific.
    5. You list the sequence of FM calls at the end. I was confused by the order. As we are initiating data to be sent from CRM to R/3, should some of these be in the reverse order, e.g. ZMAP_MBDOC_TO_BAPIMTCS would be called before ZMAP_BAPIMTCS_TO_MBDOC, as we would pass data into the BDoc to send it to R/3 before we then received an update message back?
    Step 1: Z_EXTRACT_MODULE will be called (it calls ZPICK_DATA_FROM_CRM). This function module calls the standard function module CRS_SEND_TO_SERVER, which triggers the other function modules.
    Step 2: Create function module ZDATA_TO_BAPIMTCS (missed mentioning this earlier) in CRM, to map the data in the final internal table to BAPIMTCS format. This format is temporary and will be used to build the BDoc data.
    Step 3: Create function module ZMAP_BAPIMTCS_TO_MBDOC in CRM, to map the BAPIMTCS format data and build the BDoc. This BAPIMTCS format is a temporary one and is not the final data format that is taken to ECC. This function module also takes care of receiving the response message from ECC once the BDoc data reaches ECC and is updated there. If any error occurs during the update, it is captured in the error table of the BDoc and the status of the BDoc is set to 'Error'. If no error occurs, the status of the BDoc is set to 'Confirmed'.
    Step 4: Create function module ZMAP_MBDOC_TO_BAPIMTCS in CRM, to build the final BAPIMTCS structure from the BDoc. This BAPIMTCS is the final data structure that goes to ECC. The table name, object key, and relation key relevant for the BAPIMTCS are filled in this function module.
    Step 5: Create function module Z_LOAD_PROXY_FINAL in ECC, to receive the data from CRM. The BAPIMTCS data is received, mapped to local internal tables, and then posted to the custom tables through the function module Z_UPDATE_ECC. Any errors are captured in this function module and returned to CRM using the standard function module CRS_SEND_TO_SERVER.
    To reduce the load on the interface, at the final stage it was decided to fetch the data completely in ECC. So the incoming data from CRM is ignored and the data is fetched completely from ECC tables.
    6. Is there a test FM available for the extract, e.g. is CRM_SAMPLE_EXTRACT_MODULE the one to copy?
        No, you have to develop this extractor FM, say ZPICK_DATA_FROM_CRM, and it should be called in Z_EXTRACT_MODULE.
    Apologies for any spelling errors, as I too am running to a meeting.
    Update me on the status.
    Bobby
    Edited by: Bobby on Aug 22, 2008 2:13 PM

  • Populating ADF Table from Multi-Dimensional Array

    Hello!
    I'm trying to populate an ADF table from a multi-dimensional array.
    Let's say that my array is
    String [] [] myArr = new String [3][5].
    On my page backing bean, I have a private attribute called tmpArr like this...
    String [] [] tmpArr;
    ...which I will initialize later after I know the proper dimensions. The dimensions will come from a multimap that contains a key and another array (for the key's value) containing values for the key. So once I know the dimensions I initialize my array with...
    tmpArr = new String [x][y] where x and y are the dimensions (counters).
    Now I have my multidimensional array. On a JSP page I have an ADF table, and I'm setting the value for the table to the array (the table's value property is bound to the backing bean's tmpArr attribute).
    Like so:
    <af:panelForm id="availableOptions"
    binding="#{myBackingBean.availableOptionsValues}">
    <af:table emptyText="No items were found" rows="10"
    value="#{myBackingBean.tmpArr}" var='myArr'>
    Now I need to know how to do the following:
    1) Set the table's columns based on the number of attributes on the array.
    2) Set the table's rows based on the array's length.
    3) Set each table cell value to values on the array's 2nd dimension. I'm assuming that ADF takes care iterating through the array, and that I should do something like...
    <af:outputText value="#{myArr[][0]}"/>
    <af:outputText value="#{myArr[][1]}"/>
    etc...
    However, this isn't working...
    javax.faces.el.ReferenceSyntaxException: myArr[][0]
    ...bla bla bla...
    Was expecting one of:
    <INTEGER_LITERAL> ...
    <FLOATING_POINT_LITERAL> ...
    <STRING_LITERAL> ...
    "true" ...
    "false" ...
    "null" ...
    "not" ...
    "empty" ...
    <IDENTIFIER> ...
    Is there a blog or resource (article, book, etc) that shows how this is done? Anyone has done this and would like to share how?
    Thank you.

    This is in fact possible. I'm not sure about the "best practice" for doing this, but there are a couple of ways to do it.
    You can either create a managed bean and then right-click on it and use the wizard to create a data control, or you can do it as per below.
    The af:table will convert an array into a collection.
    Once you have created an array and generated the accessors in a bean, you can then reference the multi-dimensional array from a table as per below.
    <af:table value="#{pageFlowScope.PageBean.sessionArr}" var="row" rowBandingInterval="0" id="t1" varStatus="status">
    <af:column sortable="false" headerText="col1" id="c1">
    <af:outputText value="#{pageFlowScope.PageBean.sessionArr[status.index][0]}" id="ot1"/>
    </af:column>
    </af:table>
    String[][] sessionArr = new String[5][2];
    public void setSessionArr(String[][] sessionArr) {
        this.sessionArr = sessionArr;
    }
    public String[][] getSessionArr() {
        sessionArr[0][0] = "rice";
        sessionArr[1][0] = "water";
        return sessionArr;
    }
    EDIT: For either of the above methods the managed bean must have a scope of pageFlow or longer.
    Cheers,
    Aaron
    Edited by: Aaron Rapp on Oct 6, 2011 3:28 PM

  • BI 7.0: Source System upgrade from R/3 Enterprise to ECC 6.0

    Background:
    I am relatively new to the BW team and will be going through my first source system upgrade.
    We currently have BI 7.0 SPS 17 connected to source system R/3 Enterprise EP1.
    We are upgrading the source system to ECC 6.0.
    In Development and QA Environments:
    we will have access to both the old (R/3 Enterprise) and the new (ECC 6.0) source systems.
    We have an opportunity to compare the BW objects, data flow and data loading from
    both source systems.
    In Production, however, we will just upgrade over a 3-day downtime.
    One question that comes to mind in this regard...
    What sort of things should we be checking before and after the upgrade in the Dev/QA and Prod environments, and what tools are available that can help the analysis and validation process?
    Some of the suggestions that were given to me include following points.
    Before Upgrade:
    1. Check data, taking pre-images of it for comparison after the upgrade.
    2. Perform a proper analysis of source-system-related BW objects.
      - A complete listing of actively used DataSources, etc.
    3. Make the delta queues empty.
    Make sure that all existing deltas are loaded into BW. The last delta request must not deliver any data and must be green in the monitor.
    4. Stop process chains in BI (remove them from the schedule) and collection runs on the R/3 side.
    After Upgrade:
    1. After the OLTP upgrade, refer to note 458305 and other notes that are relevant to the actual upgrade
    (depending on the R/3 system release / R/3 plug-in to BI plug-in compatibility).
    2. Check logical system connections: transactions SMQS, RSMO and SM59. If there is no access, we can use program RSRFCPIN_NEW for an RFC test.
    3. Check and/or activate control parameters for data transfer: SBIW -> General Settings ->
    Maintain Control Parameters for Data Transfer
    http://help.sap.com/saphelp_nw70/helpdata/EN/51/85d6cf842825469a51b9a666442339/frameset.htm
    4. Check for changes to extract structures in the LBWE Customizing Cockpit
        - OSS Notes 328181, 396647, 380078 and 762951
    5. Check if all the required transfer structures are active. See OSS Note 324520 for mass activation.
    6. Check if all source-system-related BW objects are active - transfer rules, communication structures, update rules, DTPs, etc.
    Below is a link to some useful programs in this regard.
    https://www.sdn.sap.com/irj/scn/wiki?path=/pages/viewpage.action&pageid=35458
    7. Test all important DataSources using RSA3 and check for OLTP DataSource changes.
    As soon as BW notices that the time stamp at the DataSource in the OLTP is newer than the one in the transfer structure, it requests replication of the DataSource and activation of the transfer structure. Transfer the relevant DataSources only if required, and transfer only the ones that have changed (RSA5 -> Delta).
    8. Create data flow objects (transfer rules, InfoPackages, transformations, DTPs) for the replicated new/changed DataSources, if needed.
    9. Check all CMOD enhancements.
    If we are using a customer exit with the extractor in the OLTP, see Note 393492.
    10. Check for Unicode (all custom programs or function modules for DataSources).
    11. Check all the queues in RSA7, start delta runs and test data consistency.
    For delta problems: in the BW system, start the 'RSSM_OLTP_INIT_DELTA_UPDATE' program for the DataSource and the source (OLTP) system; the init selections are then transferred from BW into the ROOSPRMSC and ROOSPRMSF tables in the source system so that the delta can continue.
    12. Take a backup of data posted during the upgrade for contingency planning.
    13. Run the entire set of process chains once if possible and let them pick up no data or the usual master data.
    Since we have a lot of experts in this forum who have probably gone through such a scenario many times, I wanted to ask: please advise if I have stated anything incorrectly or if I am missing any additional steps, OSS Notes, or important details...

    Thanks Rav for your detailed post and very helpful contribution by posting all the information you had regarding the upgrade.
    We have similar scenario -
    We are upgrading our source system from 4.7 to ECC 6.0. We have our BI system with BI 7.0 at support pack 19 (SAPKW70019).
    Our strategy in the ECC deployment ->
    In development we copied our old DEV 4.7 system DXX to a new ECC system DXY (new system ID).
    In production we are going to use the same system PRXX, upgraded from 4.7 to ECC.
    Now we are in the testing phase of ECC with all interfaces, like BI, in Dev.
    My questions are below ->
    How can we change/transfer the mapping of all our DataSources in the Dev BW system BID to the new ECC dev system DXY (e.g. logical system LOGDXY0040) from the old dev system DXX (e.g. logical system LOGDXX0040)? We do not want to create new DataSources or change all transfer rules/InfoSources for old BW 3.x solutions on our BI.
    Also, in the new ECC source system copy we see all DataSources in red in transaction RSA7. Do we need to initialize all the DataSources again from our BW BID system?
    Is there an easy way to handle the above scenario?
    Please let me know if you have any further helpful information from your ECC 6.0 upgrade connecting to a BI 7.0 system.
    I have found some other links which have some pieces of information regarding the topic -
    Upgrade R/3 4.6C to ECC 6.0 already in BI 7.0
    http://sap.ittoolbox.com/groups/technical-functional/sap-bw/sap-r3-migration-and-sap-bw-version-1744205
    BI 7.0: Source System upgrade  from R/3 Enterprise to ECC 6.0
    Re: ECC 6.0 Upgrade
    Re: Impact of ECC 6.0 upgrade on BI/BW Environment.
    ECC 5.0 to ECC 6.0 upgrade
    Thanks
    Prasanth
