Data clusters

Hi experts,
Is there any way to see the data/structure of the data stored in a database cluster?
I am actually getting a mismatch error when using the IMPORT statement.
So I would like to check whether the structures I am using are the same.
This is kinda urgent. Please reply.
Goldie.

hi
check these links
Table Clusters
http://help.sap.com/saphelp_bw31/helpdata/en/fc/eb3bf8358411d1829f0000e829fbfe/content.htm
hope this helps,
priya

Similar Messages

  • What are data clusters? Please see...

    Hello all,
    What is the meaning of this:
    "The PCLn database tables are used to store data clusters (such as results from Time Management, Travel Management, and Payroll)."
    What are data clusters, and is there any documentation on them? Is there any special way to access them using Open SQL?
    Thanks,
    Charles.

    Hi,
    [Data Clusters|http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb3bb7358411d1829f0000e829fbfe/content.htm]
    [Storage Media for Data Clusters|http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb3bc4358411d1829f0000e829fbfe/content.htm]
    [clusters|www.hrexpertonline.com/downloads/12-04.doc ]
    Thanks and Regards,
    Naveen Dasari

  • Exporting data clusters with type version

    Hi all,
    let's assume we are saving some ABAP data as a cluster to the database using the EXPORT ... TO DATABASE functionality, e.g.
    EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    Some days later, the data can be imported:
    IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
    Some months or years later, however, the IMPORT may crash: since ABAP types are routinely extended, new fields may have been added to the structures VBAK or VBAP in the meantime.
    The data are not lost, however: using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This creates data objects matching the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002 etc., replacing the original names MANDT, VBELN, etc.
    So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
    EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
    Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
    (In an optimized version, the type info could be stored in a separate cluster and referenced from the data cluster by a version number only, for efficiency.)
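    There is no built-in TYPES addition for EXPORT, but as a rough sketch (an assumption, not standard functionality) of how LT_TYPES could be filled with DDIC component metadata via RTTI before the export - table kind and key information for LT_VBAP would still need separate handling:
    " Hedged sketch: collect the DDIC component metadata of LS_VBAK via RTTI
    " and store it in the same cluster as the data.
    DATA: ls_vbak  TYPE vbak,
          lt_vbap  TYPE TABLE OF vbap,
          lt_types TYPE ddfields,
          lo_struc TYPE REF TO cl_abap_structdescr.
    lo_struc ?= cl_abap_typedescr=>describe_by_data( ls_vbak ).
    lt_types = lo_struc->get_ddic_field_list( ).
    EXPORT types FROM lt_types
           vbak  FROM ls_vbak
           vbap  FROM lt_vbap
           TO DATABASE indx(qt) ID 'TEST'.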
    In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
    IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
    DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
    assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
    assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
    IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
    Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
    For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
    My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
    Regards,
    Rüdiger

    The IMPORT statement works fine if the target internal table has all fields of the source internal table plus some additional fields at the end, similar to an append structure of VBAK.
    Here is the snippet used:
    TYPES:
      BEGIN OF ty,
        a TYPE i,
      END OF ty,
      BEGIN OF ty2.
        INCLUDE TYPE ty.
    TYPES:
        b TYPE i,
      END OF ty2.
    DATA: lt1 TYPE TABLE OF ty,
          ls  TYPE ty,
          lt2 TYPE TABLE OF ty2.
    ls-a = 2. APPEND ls TO lt1.
    ls-a = 4. APPEND ls TO lt1.
    EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
    IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
    I guess the IMPORT statement would behave fine if the current VBAK has more fields than the older VBAK.

  • Importing Data Clusters

    Dear All,
    How do I import data clusters from a database table to view the data they contain? I know, or at least I think, that I have to use the IMPORT command, which has the following syntax:
    IMPORT parameter_list FROM medium [conversion_options].
    But how do I find out what the parameters and their types are, and is the medium the name of the table? Is there a tool that enables me to read data clusters, similar to SE16 for table data?
    Thank you for your help,
    Philon

    See this simple example:
    *& Report ZTEST_AMEM1 - main program, uses EXPORT
    REPORT ztest_amem1.
    TABLES: lfa1.
    DATA: BEGIN OF i_lfa1 OCCURS 0,
            lifnr LIKE lfa1-lifnr,
            name1 LIKE lfa1-name1,
            land1 LIKE lfa1-land1,
          END OF i_lfa1.
    START-OF-SELECTION.
      SELECT lifnr
             name1
             land1 FROM lfa1
             INTO TABLE i_lfa1 UP TO 100 ROWS.
    * export to ABAP memory
      EXPORT i_lfa1 TO MEMORY ID 'SAP'.
      SUBMIT ztest_amem2 AND RETURN.
      WRITE: / 'hello'.
    *& Report ZTEST_AMEM2 - called program, uses IMPORT
    REPORT ztest_amem2.
    DATA: BEGIN OF j_lfa1 OCCURS 0,
            lifnr LIKE lfa1-lifnr,
            name1 LIKE lfa1-name1,
            land1 LIKE lfa1-land1,
          END OF j_lfa1.
    START-OF-SELECTION.
      IMPORT i_lfa1 TO j_lfa1 FROM MEMORY ID 'SAP'.
      LOOP AT j_lfa1.
        WRITE: / j_lfa1-lifnr, j_lfa1-name1, j_lfa1-land1.
      ENDLOOP.
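    The example above passes the cluster through ABAP memory; for the database case from the original question, a minimal sketch (the INDX area 'QT', the ID 'TEST' and the cluster name VENDORS are placeholders; the names in IMPORT must match those used in EXPORT) would be:
    " Hedged sketch: writing and reading a data cluster in the standard INDX table.
    DATA: lt_out TYPE TABLE OF lfa1,
          lt_in  TYPE TABLE OF lfa1.
    SELECT * FROM lfa1 INTO TABLE lt_out UP TO 100 ROWS.
    EXPORT vendors FROM lt_out TO DATABASE indx(qt) ID 'TEST'.
    " the importing program must use the same cluster name and a compatible type
    IMPORT vendors TO lt_in FROM DATABASE indx(qt) ID 'TEST'.
    IF sy-subrc <> 0.
      WRITE: / 'No cluster found for this ID'.
    ENDIF.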

  • Saving data clusters

    Hi,
    I've just got a question concerning the best way to save my data. I'm using LV 7.0 and Vision 7.1 on a two-photon microscope.
    I'm acquiring images, the heartbeat of the animal, and a stimulation signal, which all depend on the same internal clock.
    Now, for each image, I record the heartbeat and the signal from the stimulation apparatus simultaneously (this equals about 50 values per signal per image).
    Saving clusters that include two arrays of data is not possible. Knowing that I would like to open and work on the files easily afterwards and play all of it back simultaneously (the film showing the cells should run above the waveforms showing the heartbeat and the stimulation signals), what would be the best way to save these large data files?
    Converting all images into arrays and saving a set of arrays, or saving everything one by one (images, heartbeat and stimulation) and then using a VI to open everything at once, ...?
    I know this is probably not very complicated, but if anyone has some experience, I would be grateful. As we will use it to record and examine films lasting about ten minutes to half an hour, acquiring images at 10-20 Hz, the amount of data becomes overwhelming very quickly...
    Thanks

    But you can indeed save and read data in the form of clusters (containing arrays, strings, numerics, booleans, etc.).
    Look at the attached VIs.
    regards
    Dev
    Attachments:
    write cluster.vi ‏27 KB
    read cluster.vi ‏29 KB

  • How to add tables in data clustering after table creation

    Hi,
    I want to use clustered tables, but the issue is that I have already created the tables without a cluster, and now I want to create a cluster and add the tables to it,
    but I can't find a solution.
    I am using
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14231/clustrs.htm#i1006586
    Thanks
    Umesh

    You have a couple of choices, but none of them is what you want.
    Not knowing the version, cluster type, quantity of data, or other details, my first thought would be to rename the table, create a new one inside the cluster, and then do an INSERT INTO ... SELECT * FROM.

  • How to overwrite the old data length in data clusters

    Hello All,
    I'm getting the short dump CONNE_IMPORT_WRONG_COMP_LENG because the new data length of one field (Host) is not reflected in the data cluster.
    How do I overwrite the old data length with the new data length?
    Thanks,
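    One thing worth checking (an assumption based on the dump, not a confirmed fix): the IMPORT statement has conversion options that tolerate changed component lengths, for example:
    " Hedged sketch: importing an old cluster into a structure whose field
    " length has changed since the EXPORT. All names here are placeholders.
    TYPES: BEGIN OF ty_host,
             host TYPE c LENGTH 64,   " assumed: the field whose length was extended
           END OF ty_host.
    DATA lt_data TYPE TABLE OF ty_host.
    IMPORT hostdata TO lt_data
      FROM DATABASE indx(zh) ID 'HOSTS'
      ACCEPTING PADDING        " target components may be longer than the stored ones
      ACCEPTING TRUNCATION.    " target components may be shorter than the stored ones
    Alternatively, the old clusters can be deleted and exported again after the structure change, so that the stored length matches the new one.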


  • Recognizing data clusters

    I have an exercise in computer logic. I have an array of XY values that, when plotted on an XY graph, form clusters. Sometimes there is one cluster of plots, sometimes two, sometimes three. The cluster shapes are somewhat irregular, and sometimes the clusters slightly overlap.
    Anyway, my program needs to be able to recognize, for any given x,y plot, which cluster that plot belongs to. This is so my program can analyze that plot separately.
    Possible solutions I'm investigating are classical logic, fuzzy logic, and a simulated neural network. Frankly, I'm not much of a mathematician, so a classical logic solution would probably be easiest for me.

    easy shades,
    I've got one question: it looks like the value in your array shows which region it attaches to. Is this right? Then it is easy:
    Ton
    Attachments:
    Example_BD.png ‏43 KB

  • Reading Data Clusters from Database ??

    Hi at all,
    how can I read data from a cluster table?
    In the database table "STXL" there are the fields "CLUSTR" and "CLUSTD", and I want to see their content. But before I can see it, I have to read this data cluster.
    How does it work?
    IMPORT tab = itab
    FROM DATABASE stxl(tx)
    ID   wa_TDOBJECT
    TO wa_stxl.

    Well, normally you have function modules which enable you to do selects or whatever with cluster tables.
    In your case it would be the FM READ_TEXT or related FMs. I have been doing ABAP programming for quite a while now, and I never came across a situation where I had to read a cluster table manually.
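    A minimal sketch of the READ_TEXT route (the text object, ID, and name below are placeholders; the real values come from the key of the text in STXH/STXL):
    " Hedged sketch: reading long text stored in STXL via READ_TEXT
    " instead of importing the cluster manually.
    DATA: lt_lines TYPE STANDARD TABLE OF tline,
          ls_line  TYPE tline.
    CALL FUNCTION 'READ_TEXT'
      EXPORTING
        id        = '0001'          " text ID (placeholder)
        language  = sy-langu
        name      = '0000001234'    " text name, e.g. a document number (placeholder)
        object    = 'VBBK'          " text object (placeholder)
      TABLES
        lines     = lt_lines
      EXCEPTIONS
        not_found = 1
        OTHERS    = 2.
    IF sy-subrc = 0.
      LOOP AT lt_lines INTO ls_line.
        WRITE: / ls_line-tdline.
      ENDLOOP.
    ENDIF.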

  • Fetching data from 'Z' clusters

    Hi Gurus,
    We have an existing 'Z' cluster in the HR system. I want to know how we can read data from such 'Z' clusters.
    I tried searching for macros and other methods, but it seems that macros can only be used to read the standard cluster tables.
    Please provide your inputs for the same.
    Regards,
    Shlesha

    Hello Shlesha,
    Do you mean cluster tables or data clusters (INDX-type tables)?
    If you want to read data from an INDX-type table you have to use the [IMPORT FROM DATABASE|http://help.sap.com/abapdocu_702/en/abapimport_medium.htm#!ABAP_ALTERNATIVE_4@4@] statement.
    Cheers,
    Suhas
    PS: You can use the same statement to read custom data clusters. It is not restricted to INDX only!
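    If it is a custom cluster stored in one of the PCLn tables, a rough sketch (the table PCL2, the RELID 'ZZ', the key value, and the line type are all assumptions) of reading it directly with IMPORT:
    " Hedged sketch: reading a custom ('Z') cluster with IMPORT.
    DATA: lv_key   TYPE pcl2-srtfd VALUE '00001234',
          lt_zdata TYPE TABLE OF string.   " replace with the real structure stored in the cluster
    IMPORT zdata TO lt_zdata FROM DATABASE pcl2(zz) ID lv_key.
    IF sy-subrc <> 0.
      WRITE: / 'No cluster found for key', lv_key.
    ENDIF.
    For the standard HR clusters, the RP-IMP-* macros additionally handle buffering, so they remain the preferred way there.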

  • SAP paging overflow when storing data in the ABAP/4 memory.

    I am trying to create a DataSource in BI 7.0 in the Data Warehousing Workbench. But along the way, when I need to select a view, I get an error detailed in the following extract from the error file. Please go through it and assist.
    Runtime Errors         MEMORY_NO_MORE_PAGING
    Date and Time          06.06.2009 14:21:35
    Short text
    SAP paging overflow when storing data in the ABAP/4 memory.
    What happened?
    The current program requested storage space from the SAP paging area,
    but this request could not be fulfilled. You may need to increase the size
    of this area in the SAP system profile.
    What can you do?
    Note which actions and input led to the error.
    For further help in handling the problem, contact your SAP administrator
    You can use the ABAP dump analysis transaction ST22 to view and manage
    termination messages, in particular for long term reference.
    Error analysis
    The ABAP/4 runtime system and the ABAP/4 compiler use a common
    interface to store different types of data in different parts of
    the SAP paging area. This data includes the
    ABAP/4 memory (EXPORT TO MEMORY), the SUBMIT REPORT parameters,
    CALL DIALOG and CALL TRANSACTION USING, as well as internally defined
    macros (specified with DEFINE).
    To store further data in the SAP paging area, you attempted to
    allocate a new SAP paging block, but no more blocks were
    available.
    When the SAP paging overflow occurred, the ABAP/4 memory contained
    entries for 20 different IDs.
    Please note:
    To facilitate error handling, the ABAP/4 memory was
    deleted.
    How to correct the error
    The amount of storage space (in bytes) filled at termination time was:
    Roll area...................... 8176
    Extended memory (EM)........... 13587912
    Assigned memory (HEAP)......... 0
    Short area..................... " "
    Paging area.................... 40960
    Maximum address space.......... " "
    By calling Transaction SM04 and choosing 'Goto' -> 'Block list',
    you can display an overview of the current roll and paging memory
    levels resulting from active users and their transactions. Try to
    decide from this whether another program requires a lot of memory
    space (perhaps too much).
    The system log contains more detailed information about the
    termination. Check for any unwanted recursion.
    Determine whether the error also occurs with small volumes of
    data. Check the profile (parameter "rdisp/PG_MAXFS", see
    Installation Guidelines).
    Is the disk or the file system that contains the paging file
    full to the extent that it cannot be increased, although it has
    not yet reached the size defined in the profile? Is the
    operating system configured to accommodate files of such a
    size?
    The ABAP processor stores different types of data in the SAP
    paging area. These include:
    (1) Data clusters (EXPORT ... TO MEMORY ...)
    (2) Parameters for calling programs (SUBMIT REPORT ...),
    Dialog modules (CALL DIALOG ...) and transactions
    (CALL TRANSACTION USING ...)
    (3) Internally defined program macros (DEFINE ...)
    Accordingly, you should check the relevant statements in a program
    that results in an overflow of the SAP paging area.
    It is critical when many internal tables, possibly with
    different IDs, are written to memory (EXPORT).
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "MEMORY_NO_MORE_PAGING" " "
    "SAPLWDTM" or "LWDTMU20"
    "TABC_ACTIVATE_AND_UPDATE"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a program of your own or a modified SAP
    program: The source code of the program
    In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.

    Hi Huggins,
    Maintenance of the Paging File is owned by your basis team.
    They should increase this in order for your transaction to process successfully.
    Just for your reference, in case the OS used is Windows Server 2003, the paging file value can be checked as follows:
    Right-click My Computer > Properties.
    Then go to the Advanced tab;
    there should be a Performance section, click Settings;
    then the Advanced tab again. The paging file can be seen there
    (and can be adjusted from there as well).
    The value of the paging file will in general depend on the RAM available in the hardware.
    Hope this helps. Thanks a lot.
    - Jeff

  • How to pass data from one session to another?

    What does SAP memory use to pass data between sessions?
    What is the syntax of "Export to ..."? Where is it used?

    hi Suman Vijay,
    EXPORT obj1 ... objn TO MEMORY.
    Exports the objects obj1 ... objn (fields, structures, or tables) as a data cluster to ABAP/4 memory.
    EXPORT obj1 ... objn TO DATABASE dbtab(ar) ID key.
    Exports the objects obj1 ... objn (fields, structures, or tables) as a data cluster to the database table dbtab.
    IMPORT f itab FROM MEMORY.
    Imports data objects (fields or tables) from ABAP/4 memory. Reads in all data without an ID that was exported to memory with "EXPORT ... TO MEMORY."
    IMPORT f itab FROM DATABASE dbtab(ar) ID key.
    Imports data objects (fields, field strings, or internal tables) with the ID key from the area ar of the database table dbtab.
    EXPORT obj1 ... objn TO MEMORY.
    If you call a transaction, report, or dialog module (with CALL TRANSACTION, SUBMIT, or CALL DIALOG), the contents of ABAP/4 memory are retained, even across several levels. The called transaction can then retrieve the data from there using IMPORT ... FROM MEMORY.
    Each new EXPORT ... TO MEMORY statement overwrites any old data, so no data is appended.
    EXPORT obj1 ... objn TO DATABASE dbtab(ar) ID key.
    The database table dbtab must have a standardized structure.
    The database table dbtab is divided into different logically related areas (ar, a 2-character name).
    You can export collections of data objects (known as data clusters) under a freely definable key (field key) to an area of this database table.
    IMPORT allows you to import individual data objects from this cluster.
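    A minimal end-to-end sketch of the memory variant (the program names ZCALLER and ZCALLED and the memory ID are placeholders):
    " Hedged sketch: passing an internal table between two programs of the
    " same external session via ABAP memory.
    " --- program ZCALLER ---
    REPORT zcaller.
    DATA: lt_numbers TYPE TABLE OF i,
          lv_n       TYPE i.
    DO 3 TIMES.
      lv_n = sy-index.
      APPEND lv_n TO lt_numbers.
    ENDDO.
    EXPORT lt_numbers FROM lt_numbers TO MEMORY ID 'ZNUMBERS'.
    SUBMIT zcalled AND RETURN.   " the called program can now IMPORT the cluster
    " --- program ZCALLED ---
    REPORT zcalled.
    DATA lt_numbers TYPE TABLE OF i.
    IMPORT lt_numbers TO lt_numbers FROM MEMORY ID 'ZNUMBERS'.
    IF sy-subrc <> 0.
      WRITE: / 'Nothing was exported under this ID'.
    ENDIF.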
    thanks
    Sachin

  • How to Export data to memory and Import data from memory?

    hi
    I have the following code in a program.
    The data is not filled from memory. I have to find out what is wrong in the code.
    REPORT ZIFT_TEST1.
    SELECT-OPTIONS : so_budat FOR bkpf-budat,
                     sd_saknr FOR ska1-saknr.
      EXPORT so_budat TO MEMORY ID 'ZBUDAT'.
      EXPORT sd_saknr TO MEMORY ID 'ZSAKNR'.
      SUBMIT ZIFT_TEST2 AND RETURN.
    REPORT ZIFT_TEST2.
    SELECT-OPTIONS so_budat FOR bsis-budat NO DATABASE SELECTION.
    SELECT-OPTIONS: SD_SAKNR    FOR  SKA1-SAKNR MATCHCODE OBJECT SAKO.
      import so_budat = so_budat from memory id 'ZBUDAT'.
      import sd_saknr from memory id 'ZSAKNR'.
    Regards
    Iftikhar Ali
    Islamabad.

    Program 1 ----
    REPORT demo_program_rep3 NO STANDARD PAGE HEADING.
    DATA: number TYPE i,
          itab   TYPE TABLE OF i.
    SET PF-STATUS 'MYBACK'.
    DO 5 TIMES.
      number = sy-index.
      APPEND number TO itab.
      WRITE / number.
    ENDDO.
    TOP-OF-PAGE.
      WRITE 'Report 2'.
      ULINE.
    AT USER-COMMAND.
      CASE sy-ucomm.
        WHEN 'MBCK'.
          EXPORT itab TO MEMORY ID 'HK'.
          LEAVE.
      ENDCASE.
    Program 2 ----
    REPORT demo_programm_leave NO STANDARD PAGE HEADING.
    DATA: itab TYPE TABLE OF i,
          num  TYPE i.
    SUBMIT demo_program_rep3 AND RETURN.
    IMPORT itab FROM MEMORY ID 'HK'.
    LOOP AT itab INTO num.
      WRITE / num.
    ENDLOOP.
    TOP-OF-PAGE.
      WRITE 'Report 1'.
      ULINE.
    End of program 2 ----
    Now copy these programs with the same names as mentioned above and execute the program demo_programm_leave; you will understand clearly.
    Notes: A logical memory model illustrates how the main memory is distributed from the view of executable programs. A distinction is made here between external sessions and internal sessions.
    An external session is usually linked to an R/3 window. You can create an external session by choosing System -> Create Session, or by entering /o in the command field. An external session is broken down further into internal sessions. Program data is only visible within an internal session. Each external session can include up to 20 internal sessions (stacks).
    Every program you start runs in an internal session.
    To copy a set of ABAP variables and their current values (a data cluster) to the ABAP memory, use the EXPORT ... TO MEMORY ID statement. The ID (up to 32 characters) is used to identify the different data clusters.
    If you repeat an EXPORT ... TO MEMORY ID statement for an existing data cluster, the new data overwrites the old.
    To copy data from ABAP memory to the corresponding fields of an ABAP program, use the IMPORT ... FROM MEMORY ID statement.

  • How we can see the abap memory data

    How can we see the ABAP memory data?
    Find the code below:
    import lsind
             report_title
             table_name
             report_field
             change_display
             show_hide
             conversion_exits
             table_description
             form_program
             select_form
             update_form
             line_size
             line_count
             records[]
             fields[]
             header_fields[]
             select_fields[]
             xrep[]
             from memory id 'LZUT5U11'.
    Regards
    santhosh
    mail-id : [email protected]

    Dear Santosh,
    ABAP MEMORY:
    A logical memory model illustrates how the main memory is distributed from the view of executable programs. A distinction is made here between external sessions and internal sessions .
    An external session is usually linked to an R/3 window. You can create an external session by choosing System/Create session, or by entering /o in the command field. An external session is broken down further into internal sessions. Program data is only visible within an internal session. Each external session can include up to 20 internal sessions (stacks).
    Every program you start runs in an internal session.
    All "squares" with rounded "corners" displayed in the status diagram represent a set of data objects in the main memory.
    The data in the main memory is only visible to the program concerned.
    CALL TRANSACTION and SUBMIT AND RETURN open a new internal session that forms a new program context. The internal sessions in an external session form a memory stack. The new session is added to the top of the stack.
    When a program has finished running, the top internal session in the stack is removed, and the calling program resumes processing.
    The same occurs when the system processes a LEAVE PROGRAM statement.
    LEAVE TO TRANSACTION removes all internal sessions from the stack and opens a new one containing the program context of the calling program.
    The ABAP memory is initialized after the program is called. In other words, you cannot transfer any data to a program called with LEAVE TO TRANSACTION via the ABAP memory.
    SUBMIT replaces the internal session of the program performing the call with the internal session of the program that has been called. The new internal session contains the program context of the called program with which it is performed.
    When a function module is called, the following steps are executed:
    A check is made to establish whether your program has called a function module of the same function group previously.
    If this is not the case, the system loads the associated function group to the internal session of the calling program as an additional program group. This initializes its global data.
    If your program used a function module of the same function group before the current call, the function module that you have called up at present can access the global data of the function group. The function group is not reloaded.
    Within the internal session, all of the function modules that you call from the same group access the global data of that group.
    If, in a new internal session, you call a function module from the same function group as in internal session 1, a new set of global data is initialized for the second internal session. This means that the data accessed by function modules called in session 2 may be different from that accessed by the function modules in session 1.
    You can call function modules asynchronously as well as synchronously. To do so, you must extend the function module call using the addition STARTING NEW TASK 'task'. Here, 'task' is a symbolic name in the calling program that identifies the external session in which the called program is executed.
    Function modules that you call using the addition STARTING NEW TASK 'task' are executed independently of the calling program. The calling program is not interrupted.
    To make function modules available for local asynchronous calls, you must identify them as executable remotely (processing type: Remote-enabled module).
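    As a rough illustration (the function module name Z_LONG_RUNNING and its parameter IV_PARAM are assumptions; the function module must be remote-enabled):
    " Hedged sketch: asynchronous call of a remote-enabled function module.
    DATA lv_value TYPE i VALUE 42.
    CALL FUNCTION 'Z_LONG_RUNNING'
      STARTING NEW TASK 'TASK_001'
      EXPORTING
        iv_param = lv_value.
    " the calling program continues immediately; the FM runs in its own session
    WRITE: / 'Call dispatched, continuing...'.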
    There are various ways of transferring data between programs that are running in different program contexts (internal sessions). You can use:
    (1) The interface of the called program (standard selection screen, or interface of a
    subroutine, function module, or dialog module)
    (2) ABAP memory
    (3) SAP memory
    (4) Database tables
    (5) Local files on your presentation server.
    For further information about transferring data between an ABAP program and your presentation server, refer to the documentation for the function modules WS_UPLOAD and WS_DOWNLOAD.
    Function modules have an interface, which you can use to pass data between the calling program and the function module itself (there is also a comparable mechanism for ABAP subroutines). If a function module supports RFC, certain restrictions apply to its interface.
    If you are calling an ABAP program that has a standard selection screen, you can pass values to the input fields. There are two options here:
    By using a variant of the standard selection screen in the program call
    By passing actual values for the input fields in the program call
    If you want to call a report program without displaying its selection screen (default setting), but still want to pass values to its input fields, there is a variety of techniques that you can use.
    The WITH addition allows you to assign values to the parameters and select-options fields on the standard selection screen.
    If the selection screen is to be displayed when the program is called, use the addition: VIA SELECTION-SCREEN.
    Use the pattern button in the ABAP Editor to insert a program call via SUBMIT. The structure shows you the names of data objects that you can complete with the standard selection screen.
    For further information on working with variants and further syntax variants for the WITH addition, see the key word documentation in the ABAP Editor for SUBMIT.
    You can use SAP memory and ABAP memory to pass data between different programs.
    The SAP memory is a user-specific memory area for storing field values. It is available in all of the open sessions in a user's terminal session, and is reset when the terminal session ends. You can use its contents as default values for screen fields. All external sessions can access SAP memory. This means that it is only of limited use for passing data between internal sessions.
    The ABAP memory is also user-specific, and is local to each external session. You can use it to pass any ABAP variables (fields, structures, internal tables, complex objects) between the internal sessions of a single external session.
    Each external session has its own ABAP memory. When you end an external session (/i in the command field), the corresponding ABAP memory is released automatically.
    To copy a set of ABAP variables and their current values (a data cluster) to the ABAP memory, use the EXPORT ... TO MEMORY ID statement. The ID (up to 32 characters) is used to identify the different data clusters.
    If you repeat an EXPORT TO MEMORY ID statement to an existing data cluster, the new data overwrites the old.
    To copy data from ABAP memory to the corresponding fields of an ABAP program, use the IMPORT FROM MEMORY ID statement.
    The fields, structures, internal tables, and complex objects in a data cluster in ABAP memory must be declared identically in both the program from which you exported the data and the program into which you import it.
    To release a data cluster, use the FREE MEMORY ID statement.
    You can import just parts of a data cluster with IMPORT, since the objects are named in the cluster.
    In the SAP memory, you can define memory areas (SET/GET parameters, or parameter IDs), which you can then address by a name of up to 20 characters.
    You can fill these memory areas either using the contents of input/output fields on screens, or using the ABAP statement:
    SET PARAMETER ID 'pid' FIELD f.
    The memory area with the name pid now has the value of f.
    You can use the contents of a memory area to display a default value in an input field on a screen.
    You can also read the memory areas from the SAP memory using the ABAP statement GET PARAMETER ID 'pid' FIELD f. The field f then contains the value from parameter pid.
    The link between an input/output field and a memory area in SAP memory is inherited from the data element on which the field is based. You can enable the set parameter or get parameter attributes in the input/output field attributes.
    Once you have set the Set parameter attribute for an input/output field, you can fill it with default values from SAP memory. This is particularly useful for transactions that you call from another program without displaying the initial screen. For this purpose, you must activate the Set parameter functionality for the input fields of the first screen of the transaction.
    You can:
    (1) Copy the data that is to be used for the first screen of the transaction to be called to the parameter ID in the SAP memory. To do so, use the statement SET PARAMETER immediately before calling the transaction.
    (2) Start the transaction using CALL TRANSACTION or LEAVE TO TRANSACTION.
    If you do not want to display the initial screen, use the
    AND SKIP FIRST SCREEN addition.
    (3) The system program that starts the transaction fills the input fields that do not already have default values and for which the Get parameter attribute has been set with values from SAP memory.
    The Technical information for the input fields in the transaction you want to call contains the names of the parameter IDs that you need to use.
    Parameter IDs should be entered in table TPARA. This happens automatically if you create them via the Object navigator.
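    Putting steps (1) and (2) together, a small sketch (the parameter ID 'AUN' for the sales document number and transaction VA03 are just illustrative; the real parameter ID comes from the technical information of the input field):
    " Hedged sketch: calling a transaction without its initial screen,
    " passing the key value via SAP memory (SET/GET parameter).
    DATA lv_vbeln TYPE vbeln_va VALUE '0000004711'.
    SET PARAMETER ID 'AUN' FIELD lv_vbeln.
    CALL TRANSACTION 'VA03' AND SKIP FIRST SCREEN.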
    Programs that you call using the statements SUBMIT , LEAVE TO TRANSACTION , SUBMIT AND RETURN, or CALL TRANSACTION run in their own SAP LUW, and update requests receive their own update key.
    When you use SUBMIT and LEAVE TO TRANSACTION, the SAP LUW of the calling program ends. If no COMMIT WORK statement occurred before the program call, the update requests in the log table remain incomplete and cannot be processed. They can no longer be executed. The same applies to inline changes that you make using PERFORM ... ON COMMIT.
    Data that you have written to the database using inline changes is committed the next time a new screen is displayed.
    If you use SUBMIT AND RETURN or CALL TRANSACTION to insert a program and then return to the calling program, the SAP LUW of the calling program is resumed when the called program ends. The LUW processing of calling and called programs is independent.
    In other words, inline changes are committed the next time a new screen is displayed. Update requests and calls using PERFORM ... ON COMMIT require an independent COMMIT WORK statement in the SAP LUW in which they are running.
    Function modules run in the same SAP LUW as the program that calls them.
    If you call transactions with nested calls, each transaction needs its own COMMIT WORK, since each transaction maps its own SAP LUW.
    The same applies to calling executable programs, which are called using SUBMIT AND RETURN.
    The statement CALL TRANSACTION allows you to:
    Shorten the user dialog when calling the transaction with CALL TRANSACTION ... USING itab.
    Determine the type of update (asynchronous, local, or synchronous) for the called transaction. For this purpose, use the addition CALL TRANSACTION ... USING itab UPDATE 'update_mode', where update_mode can have the values A (asynchronous), L (local), or S (synchronous).
    Combining the two options enables you to call several transactions in sequence (logical chain), to reduce their screen sequence, and to postpone processing of SAP LUW 2 until processing of SAP LUW 1 has been completed.
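    For example (a minimal sketch; the BDC table would normally be filled with real screen and field entries, which are omitted here):
    " Hedged sketch: calling a transaction with a batch input table,
    " suppressed screens and synchronous update.
    DATA lt_bdcdata TYPE TABLE OF bdcdata.
    " ... fill lt_bdcdata with screen and field entries ...
    CALL TRANSACTION 'VA02' USING lt_bdcdata
      MODE 'N'        " no screen display
      UPDATE 'S'.     " synchronous update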
    When you call a function module asynchronously using the CALL FUNCTION ... STARTING NEW TASK 'task' statement, it runs in its own SAP LUW.
    Programs that are executed with a SUBMIT AND RETURN or CALL TRANSACTION statement start their own LUW processing. You can use these to perform nested (complex) LUW processing.
    You can use function modules as modularization units within an SAP LUW.
    Function modules that are called asynchronously are suitable for programs that allow parallel processing of some of their components.
    All techniques are suitable for including programs with purely display functions.
    Note that a function module called with CALL FUNCTION STARTING NEW TASK is executed as a new logon. It, therefore, sees a separate SAP memory area. You can use the interface of the function module for data transfers.
    Example: In your program, you want to call a display transaction that is displayed in a separate window (amodal). To do so, you encapsulate the transaction call in a function module, which you mark as a remote-enabled module. You use the function module interface to accept values that you write to the SAP memory. You then call the transaction in the function module using CALL TRANSACTION ... AND SKIP FIRST SCREEN. You call the function module itself asynchronously.
    Type 'E' locks for nested program calls may be requested more than once for the same object. This behavior can be described as follows:
    Lock entries from function modules called synchronously increment the cumulative counter and are therefore successful.
    Lock entries from programs called with CALL TRANSACTION or SUBMIT AND RETURN are refused. The object to be locked by the called program is displayed as already locked by another user.
    Programs that you call using SUBMIT or LEAVE TO TRANSACTION cannot come into conflict with lock entries from the calling program, since the old program ends when the call is made. When a program ends, the system deletes all of the lock entries that it had set.
    Lock requests belonging to the same user from different R/3 windows or logons are treated as lock requests from other users.
    Regards,
    Rajesh.
    Please reward points if found helpful.

  • Saving Data Objects in INDX-Type Database

    Hi,
    I am using the EXPORT statement to store data from an internal table into the INDX table:
    EXPORT gt_update TO DATABASE indx(tt) ID index.
    And then I am using the IMPORT statement to get that data back:
    IMPORT gt_update TO tb_update FROM DATABASE indx(tt) ID 'TTUPDATE'.
    The 'TT' is put into INDX-RELID and 'TTUPDATE' into INDX-SRTFD.
    The EXPORT statement was working fine and a record got created in the INDX table, but the IMPORT is not working. Can anyone please let me know what could be the problem?
    'gt_update' and 'tb_update' are of the same type in the IMPORT statement.
    Also please let me know what exactly is the significance of the field INDX-SRTF2.
    Thanks a lot in Advance.

    My reason for using data clusters is that there is a huge amount of data to be retrieved from the database, so we have decided to get that data and put it into data clusters for faster access to the data in the program.
    INDX tables are ideal for storing complex structures, including deep data objects. I don't really know if the access is faster, but it definitely takes less space (as the data are compressed).
    But I have a doubt: ideally we should have another program with the SELECT statement on the required table, which needs to be scheduled periodically as a background job to update the data cluster, right? Otherwise how would the data cluster hold the latest data?
    It really depends on where your initial data comes from, i.e. how the data you want to put or update in the data cluster are produced. If it is a report which generates some result, and based on that you want to update the data cluster, then what you say is true. You would need to run it either manually (periodically) or simply schedule a job which does the task.
    But as for the update of the data cluster itself, I don't think we should use Open SQL statements to achieve that.
    What I think you have to do is: each time, do a select to know which cluster to extract, then import the result. Next you need to change gt_update locally and place it back (with EXPORT) in the right data cluster. Data with the same name under the same RELID and cluster key will be completely replaced with the new (changed) table.
    This is the only way of updating a data cluster I can think of, but maybe there is some other.
    Regards
    Marcin
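    A compact sketch of that read-modify-write cycle (re-using the names from this thread; the line type of GT_UPDATE is a placeholder):
    " Hedged sketch: updating a data cluster. The cluster stored under the
    " same RELID and ID is completely replaced by the EXPORT.
    DATA gt_update TYPE TABLE OF string.   " placeholder for the real line type
    IMPORT gt_update TO gt_update FROM DATABASE indx(tt) ID 'TTUPDATE'.
    " ... modify gt_update locally ...
    APPEND 'new entry' TO gt_update.
    EXPORT gt_update FROM gt_update TO DATABASE indx(tt) ID 'TTUPDATE'.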
