Multi Table GL Import

We are trying to use a custom table for importing/posting journals in GL. Is it sufficient to enter the custom table name in GL_INTERFACE_CONTROL, or are other changes necessary?
Thanks
Raghu.

Hi,
use an internal table to hold the data.
Export this internal table under a memory ID.
Then import it at the other end.
Check the sample code (the field is a placeholder - declare whatever fields you need):
DATA : BEGIN OF ITABY OCCURS 0,
         MATNR TYPE MATNR, " placeholder field - replace with your own
       END OF ITABY.
EXPORT ITABY TO MEMORY ID 'ABC'.
At the other end use:
DATA : BEGIN OF ITABY OCCURS 0,
         MATNR TYPE MATNR, " must match the structure used for EXPORT
       END OF ITABY.
IMPORT ITABY FROM MEMORY ID 'ABC'.
Let me know if you have any queries.
Regards,
Raj

Similar Messages

  • Help with multi-table mapping for one-to-many object inheritance

    Hi,
    I have posted on here before regarding this (Toplink mapping for one-to-many object inheritance), but I am still having problems mapping my object model to my schema.
    Object model
    The Person and Organisation objects contain base information and have the primary keys person_id and organisation_id. It is important that there is no duplication of person and organisation records, no matter how many times they are saved in different roles.
    There are two types of licenceholder in the problem domain, and the ILicenceHolder interface defines information and methods that are common to both. The PersonalLicenceHolder object represents one of these types of licenceholder, and is always a person, so this class extends Person and implements ILicenceHolder.
    The additional information and methods that are required by the second type of licenceholder are defined in the interface IPremisesLicenceHolder, which extends ILicenceHolder. Premises licence holders can either be people or organisations, so I have two objects to represent these - PremisesLicenceHolderPerson which implements IPremisesLicenceHolder and extends Person, and PremisesLicenceHolderOrganisation which implements IPremisesLicenceHolder and extends Organisation.
    The model is further complicated by the fact that any single Person may be both a PersonalLicenceHolder and a PremisesLicenceHolderPerson, and may be so several times over. In this case, the same basic Person information needs to be linked to several different sets of licenceholder information. In the same way, any single Organisation may be a PremisesLicenceHolderOrganisation several times over.
    Sorry this is complicated!
    Schema
    I have Person and Organisation tables containing the basic information with the primary keys person_id and organisation_id.
    I have tried to follow Donald Smith's advice and have created a Role table to record the specialised information for the different types of licence holder. I want the foreign keys in this table to be licenceholder_id and licence_id. Licenceholder_id will reference either organisation_id or person_id, and licence_id will reference the primary key of the Licence table to link the licenceholder to the licence. Because I am struggling with the mapping, I have changed licenceholder_id to person_id in an attempt to get it working with the Person object before I try the Organisation.
    Then, when a new licenceholder is added, if the person/organisation is already in the database, a new record is created in the Role table linking the existing person/organisation to the existing licence rather than duplicating the person/organisation information.
    Mapping
    I am trying to use the toplink mapping workbench to map my PremisesLicenceHolderPerson object to my schema. I have mapped all inherited attributes to superclass (Person). The primary table that the attributes are mapped to is Person, and I have used the multi-table info tab to add Roles as an additional table and map the remaining attributes to that.
    I have created the references PERSON_ROLES which maps person.person_id to roles.person_id, ROLES_PERSON which maps roles.person_id to person.person_id and ROLES_LICENCE which maps roles.licence_id to licence.licence_id.
    I think I have put in all the relationships, but I cannot get rid of the error message "The following primary key fields are unmapped: PERSON_ID".
    Please can somebody tell me how to map this properly?
    Thank you.

    I'm not positive about your mappings, but it looks like the Person object should really have a 1:M or M:M mapping to the Licenceholder table. Your object model should then be similar, in that a Person could have many Licences, instead of being a LicenceHolder. From the looks of it, you have it set up from the LicenceHolder perspective. What could be done instead is for a LicenceHolder to have a 1:1 reference to a Person data object, rather than actually being a Person. This would allow the person data to be easily shared among licences.
    LicenceHolder1 has an entry in the LicenceHolder table and the Person table. LicenceHolder2 also has entries in these tables, but uses the same entry in the Person table - essentially it is the same person/person_id. If both are new objects, TopLink would try to insert the same Person object into the Person table twice. I'm not sure how you have gotten around, or are planning to get around, this problem.
    Since you are using inheritance, LicenceHolder needs a writable mapping to the person.person_id field - most commonly done through a direct-to-field mapping. From the description, it looks like roles.person_id is a foreign key in the multiple-table mapping, meaning it would be set based on the value in the person.person_id field, but person.person_id isn't actually mapped in the object. Check that the ID attribute LicenceHolder inherits from Person hasn't been remapped in the LicenceHolder descriptor to a different field.
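    As an illustration of the writable direct-to-field mapping described above, here is a hedged sketch using a TopLink descriptor amendment method. The class name, attribute name and table/field names are assumptions based on the thread, not the poster's code; in the Mapping Workbench the same effect is achieved by mapping the inherited attribute directly to PERSON.PERSON_ID.
    // Hedged sketch - names are assumptions, not the poster's actual code.
    import oracle.toplink.descriptors.ClassDescriptor;

    public class LicenceHolderAmendment {
        // Registered as a descriptor amendment method for PremisesLicenceHolderPerson.
        public static void amendDescriptor(ClassDescriptor descriptor) {
            // Give the inherited "id" attribute (assumed name) a writable mapping
            // to the primary table's key, so PERSON_ID is no longer unmapped.
            descriptor.addDirectMapping("id", "PERSON.PERSON_ID");
        }
    }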
    Best Regards,
    Chris

  • VLD-1119: Unable to generate Multi-table Insert statement for some or all t

    Hi All -
    I have a map in OWB 10.2.0.4 which is ending with following error: -
    VLD-1119: Unable to generate Multi-table Insert statement for some or all targets.
    Multi-table insert statement cannot be generated for some or all of the targets due to upstream graphs of those targets are not identical on "active operators" such as "join".
    The map is created with following logic in mind. Let me know if you need more info. Any directions are highly appreciated and many thanks for your inputs in advance: -
    I have two source tables, say T1 and T2. They are full outer joined in a joiner, and the output of this join is passed to an expression to evaluate column values based on
    business logic, i.e. if T1 is available then take T1.C1, else take T2.C1, and so on.
    A flag is also evaluated in the expression because these intermediate results need to be joined to a third source table, say T3, with different conditions.
    Based on the value taken, a flag is set in the expression, which is used in a splitter to get results into three intermediate tables based on the flag value evaluated earlier.
    These three intermediate tables are all truncate/insert, and they are unioned to fill a final target table.
    Visually it is something like this:
    T1 --+
         +--> Join1 (FULL OUTER) --> Expression --> Splitter --+--> Joiner1 (with T3) --+
    T2 --+                                                     +--> Joiner2 (with T3) --+--> UNION --> Target Table
                                                               +--> Joiner3 (with T3) --+
    Please suggest.

    I verified that there is a limitation with the splitter operator which will not let you generate a multi-split having more than 999 columns in all.
    I had to use two separate splitters to achieve what I was trying to do.
    So the situation is now:
    Source -> Splitter -> Split 1 -> Insert into table -> Union1 --> Final table A
    Source -> Splitter -> Split 2 -> Insert into table -> Union1

  • Multi-table INSERT with PARALLEL hint on 2 node RAC

    A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below (note that a conditional multi-table insert needs the ALL keyword, which the original post omitted):
    create table t1 ( x int );
    create table t2 ( x int );
    insert /*+ APPEND parallel(t1,5) parallel(t2,5) */ all
    when (dummy='X') then into t1(x) values (y)
    when (dummy='Y') then into t2(x) values (y)
    select dummy, 1 y from dual;
    I can see multiple sessions using the query below, but only on one instance. This happens not only for the above statement but also for statements where real tables (tables with more
    than 20 million records) are used.
    select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
    sql.sql_text
    from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
    WHERE p.sid = s.sid
    and p.serial# = s.serial#
    and p.sid = ps.sid
    and p.serial# = ps.serial#
    and s.sql_address = sql.address
    and s.sql_hash_value = sql.hash_value
    and qcsid=945
    Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
    Thanks,
    Mahesh

    Please take a look at these 2 articles below
    http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
    http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
    thanks
    http://swervedba.wordpress.com
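    A hedged first check for this symptom, supplementing the articles above: PX slaves stay on the coordinator's node when an instance group confines them, so the instance-group settings are worth ruling out. The parameters below are standard Oracle ones, but whether they are the cause here is an assumption, and the group name is illustrative.
    -- Check whether an instance-group setting confines the PX slaves
    -- to the query coordinator's instance:
    SHOW PARAMETER instance_groups
    SHOW PARAMETER parallel_instance_group
    -- If a group spanning both nodes is configured (name illustrative),
    -- point the session at it:
    ALTER SESSION SET parallel_instance_group = 'ALLNODES';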

  • View links in multi table relations

    Is it advisable (in terms of performance, e.g.) to create view links and view objects as local variables in multi-table relations?
    Example: the JDev online help says to use
    such multi-table relations like this:
    // A (one) -> B (many) -> C (many)
    ViewLink a2b = appMod.findViewLink("AtoB");
    ViewLink b2c = appMod.findViewLink("BtoC");
    ViewObject aV = a2b.getSource();
    ViewObject bV = a2b.getDestination();
    ViewObject cV = b2c.getDestination();
    while (aV.hasNext()) {
        Row aR = aV.next();
        while (bV.hasNext()) {
            Row bR = bV.next();
            while (cV.hasNext()) {
                Row cR = cV.next();
            }
        }
    }
    I would rather keep everything concerning
    a, b and c together, especially when more
    tables (d, e, ...) are added, like this:
    ViewLink a2b = appMod.findViewLink("AtoB");
    ViewObject aV = a2b.getSource();
    while (aV.hasNext()) {
        Row aR = aV.next();
        ViewLink b2c = appMod.findViewLink("BtoC");
        ViewObject bV = a2b.getDestination();
        while (bV.hasNext()) {
            Row bR = bV.next();
            ViewObject cV = b2c.getDestination();
            while (cV.hasNext()) {
                Row cR = cV.next();
            }
        }
    }
    Is there anything to be said against this approach (in terms of performance, for example)? I don't remember exactly
    whether this was the approach used in the HotelReservationSystem example.
    Thanks.
    Rx

    For this to work you have to either build a view based on the entities from which you need attributes (joined by the FK), or build a ViewObject with a SQL statement giving you all the attributes you need.
    The first case enables you to edit the attributes; the second gives you read-only access to the attributes.
    What you are trying to do isn't a master-detail connection; you are doing a join of some tables.
    Timo

  • Any general tips on getting better performance out of multi table insert?

    I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
    I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
    First let me describe my scenario to see if you agree that my performance is slow...
    It's an insert all command, which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
    Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
    Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
    Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
    Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
    Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
    Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
    The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary to be honest.
    Indexes?
    Table 1 has 3 index + PK.
    Table 2 has 0 index + FK + PK.
    Table 3 has 4 index + FK + PK
    Table 4 has 3 index + FK + PK
    Table 5 has 4 index + FK + PK
    None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns, 2-3 max. The rest are on single columns.
    The query itself looks something like this:
    insert /*+ append */ all
    when 1=1 then
    into table1 (...) values (...)
    into table2 (...) values (...)
    when a=b then
    into table3 (...) values (...)
    when a=c then
    into table3 (...) values (...)
    when p=q then
    into table4(...) values (...)
    when x=y then
    into table5(...) values (...)
    select .... from source_table
    Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query run serially, according to my session browser).
    Now for the performance:
    It does about 8,000 rows per minute on table1. That means it should also have about that much in table2, table3 and table4, and then a subset of that in table5.
    Does that seem normal, or am I expecting too much?
    I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
    If it seems my performance is slow, what else do you think I should try? Is there any information I could gather to see if maybe it's a poorly configured database for this?
    P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ORA-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
    Edited by: trant on Jun 27, 2011 9:29 PM
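    A side note on the observation above that the parallel hint made the statement run serially: parallel DML is disabled by default at session level, and without enabling it a PARALLEL hint on the INSERT target is quietly ignored. Whether that is the cause here is an assumption, but it is cheap to rule out:
    -- Must be issued in the same session, before the INSERT; direct-path
    -- parallel INSERT only actually runs in parallel once this is set.
    ALTER SESSION ENABLE PARALLEL DML;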

    Looks like there are about 54 sessions on my database; 7 of them belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
    In v$session_event there are 546 rows. If I filter to the SIDs of my current session and order by micro_wait_time desc:
    510     events in waitclass Other     30670     9161     329759     10.75     196     3297590639     1736664284     1893977003     0     Other
    512     events in waitclass Other     32428     10920     329728     10.17     196     3297276553     1736664284     1893977003     0     Other
    243     events in waitclass Other     21513     5     329594     15.32     196     3295935977     1736664284     1893977003     0     Other
    223     events in waitclass Other     21570     52     329590     15.28     196     3295898897     1736664284     1893977003     0     Other
    241     row cache lock     1273669     0     42137     0.03     267     421374408     1714089451     3875070507     4     Concurrency
    241     events in waitclass Other     614793     0     34266     0.06     12     342660764     1736664284     1893977003     0     Other
    241     db file sequential read     13323     0     3948     0.3     13     39475015     2652584166     1740759767     8     User I/O
    241     SQL*Net message from client     7     0     1608     229.65     1566     16075283     1421975091     2723168908     6     Idle
    241     log file switch completion     83     0     459     5.54     73     4594763     3834950329     3290255840     2     Configuration
    241     gc current grant 2-way     5023     0     159     0.03     0     1591377     2685450749     3871361733     11     Cluster
    241     os thread startup     4     0     55     13.82     26     552895     86156091     3875070507     4     Concurrency
    241     enq: HW - contention     574     0     38     0.07     0     378395     1645217925     3290255840     2     Configuration
    512     PX Deq: Execution Msg     3     0     28     9.45     28     283374     98582416     2723168908     6     Idle
    243     PX Deq: Execution Msg     3     0     27     9.1     27     272983     98582416     2723168908     6     Idle
    223     PX Deq: Execution Msg     3     0     25     8.26     24     247673     98582416     2723168908     6     Idle
    510     PX Deq: Execution Msg     3     0     24     7.86     23     235777     98582416     2723168908     6     Idle
    243     PX Deq Credit: need buffer     1     0     17     17.2     17     171964     2267953574     2723168908     6     Idle
    223     PX Deq Credit: need buffer     1     0     16     15.92     16     159230     2267953574     2723168908     6     Idle
    512     PX Deq Credit: need buffer     1     0     16     15.84     16     158420     2267953574     2723168908     6     Idle
    510     direct path read     360     0     15     0.04     4     153411     3926164927     1740759767     8     User I/O
    243     direct path read     352     0     13     0.04     6     134188     3926164927     1740759767     8     User I/O
    223     direct path read     359     0     13     0.04     5     129859     3926164927     1740759767     8     User I/O
    241     PX Deq: Execute Reply     6     0     13     2.12     10     127246     2599037852     2723168908     6     Idle
    510     PX Deq Credit: need buffer     1     0     12     12.28     12     122777     2267953574     2723168908     6     Idle
    512     direct path read     351     0     12     0.03     5     121579     3926164927     1740759767     8     User I/O
    241     PX Deq: Parse Reply     7     0     9     1.28     6     89348     4255662421     2723168908     6     Idle
    241     SQL*Net break/reset to client     2     0     6     2.91     6     58253     1963888671     4217450380     1     Application
    241     log file sync     1     0     5     5.14     5     51417     1328744198     3386400367     5     Commit
    510     cursor: pin S wait on X     3     2     2     0.83     1     24922     1729366244     3875070507     4     Concurrency
    512     cursor: pin S wait on X     2     2     2     1.07     1     21407     1729366244     3875070507     4     Concurrency
    243     cursor: pin S wait on X     2     2     2     1.06     1     21251     1729366244     3875070507     4     Concurrency
    241     library cache lock     29     0     1     0.05     0     13228     916468430     3875070507     4     Concurrency
    241     PX Deq: Join ACK     4     0     0     0.07     0     2789     4205438796     2723168908     6     Idle
    241     SQL*Net more data from client     6     0     0     0.04     0     2474     3530226808     2000153315     7     Network
    241     gc current block 2-way     5     0     0     0.04     0     2090     111015833     3871361733     11     Cluster
    241     enq: KO - fast object checkpoint     4     0     0     0.04     0     1735     4205197519     4217450380     1     Application
    241     gc current grant busy     4     0     0     0.03     0     1337     2277737081     3871361733     11     Cluster
    241     gc cr block 2-way     1     0     0     0.06     0     586     737661873     3871361733     11     Cluster
    223     db file sequential read     1     0     0     0.05     0     461     2652584166     1740759767     8     User I/O
    223     gc current block 2-way     1     0     0     0.05     0     452     111015833     3871361733     11     Cluster
    241     latch: row cache objects     2     0     0     0.02     0     434     1117386924     3875070507     4     Concurrency
    241     enq: TM - contention     1     0     0     0.04     0     379     668627480     4217450380     1     Application
    512     PX Deq: Msg Fragment     4     0     0     0.01     0     269     77145095     2723168908     6     Idle
    241     latch: library cache     3     0     0     0.01     0     243     589947255     3875070507     4     Concurrency
    510     PX Deq: Msg Fragment     3     0     0     0.01     0     215     77145095     2723168908     6     Idle
    223     PX Deq: Msg Fragment     4     0     0     0     0     145     77145095     2723168908     6     Idle
    241     buffer busy waits     1     0     0     0.01     0     142     2161531084     3875070507     4     Concurrency
    243     PX Deq: Msg Fragment     2     0     0     0     0     84     77145095     2723168908     6     Idle
    241     latch: cache buffers chains     4     0     0     0     0     73     2779959231     3875070507     4     Concurrency
    241     SQL*Net message to client     7     0     0     0     0     51     2067390145     2000153315     7     Network
    (yikes, is there a way to wrap that in the equivalent of other forums' code tag?)
    v$session_wait;
    223     835     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     10     WAITING
    241     22819     row cache lock     cache id     13     000000000000000D     mode     0     00     request     5     0000000000000005     3875070507     4     Concurrency     -1     0     WAITED SHORT TIME
    243     747     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     7     WAITING
    510     10729     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     2     WAITING
    512     12718     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     4     WAITING
    v$sess_io:
    223     0     5779     5741     0     0
    241     38773810     2544298     15107     27274891     0
    243     0     5702     5688     0     0
    510     0     5729     5724     0     0
    512     0     5682     5678     0     0

  • Problems with ReportDocument and multi-table dataset...

    Hi,
    First post here and hoping someone can help with me with this problem.
    I have an ASP.NET app running using CR 2008 Basic to produce invoices - these are fairly complex and use multiple tables - the data used is provided in lists, and they work fine when displayed/printed/exported from the viewer.
    My client now wants the ability to batch print these, so I'm trying to develop a page which will create a ReportDocument object and then print/export as required.
    As soon as I started down this path I received the 'invalid logon parameters' problem - so to simplify things and get past this, I've developed a simple one-page report which takes data from a dataset I've populated and passed to it.
    Here's the problem:
    1) If I put one table in the dataset and add a field from that table to the report it works OK.
    2) If I use a second table and field (without the first) it works OK.
    3) as soon as I have both tables and the report has a field from each, I get the 'Invalid logon parameters' error.
    The tables are fine (since I can use each individually), the report is fine (it only has 1 or 2 fields and works with the individual tables)... it's only when I have both tables that I get problems...
    ... this is driving me up the wall, and if CR can't handle this there's no way it's going to handle the more complex invoices with subreports.
    Can anyone suggest what I'm doing wrong... or tell me whether I'm just pushing CR beyond its capabilities?
    The code I'm using to generate the ReportDocument is:
    List<Invoice> rinv = Invoice.SelectOneContract(inv.Invoice_ID);
    List<InvoiceLine> rline = InvoiceLine.SelectInvoice(inv.Invoice_ID);
    DataSet ds = new DataSet();
    ds.Tables.Add(InvoiceLineTable(rline));
    ds.Tables.Add(InvoiceTable(rinv));
    rdoc.FileName = Server.MapPath("~/Invoicing/test.rpt");
    rdoc.SetDataSource(ds.Tables);
    rdoc.ExportToDisk(ExportFormatType.PortableDocFormat, @"c:\test\test.pdf");
    ... so not rocket science, and the error is always raised at the 'ExportToDisk' line.
    Thanks in advance!

    I got nowhere trying to create a ReportDocument and pass it a multi-table dataset, so I decided to do it the 'dirty' way by adding all the controls to an aspx page and referring to them.
    I know I can do this because the whole issue is printing a report that is currently viewed.
    So ... I've now added the ObjectDataSources to the page as well as the CrystalReportSource and the CrystalViewer ...
    .. I've tested the page and the report appears within the viewer with all the correct data ...
    ...so with a certain amount of excitement I've added the following line to the code behind file:
    rptSrcContract.ReportDocument.PrintToPrinter(1, true, 1, 1);
    ... then I run the page and predictably the first thing that comes up is:
    Unable to connect: incorrect log on parameters.
    .. this is madness!
    1) The data is retrieved and is in the correct format, otherwise the report would not display.
    2) The rptSrcContract.ReportDocument exists... otherwise it would not display in the viewer.
    So why does this want to log on to a database when the data is retrieved successfully and the security is running under a 'Network Services' account anyway? (Actually I know it has nothing to do with logging onto the database; this is just the generic Crystal Reports 'I have a problem' message.)
    ... sorry if this is a bit of an angry rant... didn't get much sleep last night because of this... all I want to be able to do is print a report from code... surely this should be possible?
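    One commonly suggested check for this symptom (a hedged sketch, not a confirmed fix for this thread): Crystal Reports matches the DataTables in the dataset to the report's tables by name, and a name mismatch surfaces as the same generic logon error. The table names below are assumptions and must match whatever the .rpt actually uses; the helper methods and rdoc are the poster's own.
    // Hedged sketch - table names are assumptions, not taken from the thread.
    List<Invoice> rinv = Invoice.SelectOneContract(inv.Invoice_ID);
    List<InvoiceLine> rline = InvoiceLine.SelectInvoice(inv.Invoice_ID);
    DataSet ds = new DataSet();
    DataTable lineTable = InvoiceLineTable(rline);
    lineTable.TableName = "InvoiceLine";   // assumed; must match the table name in test.rpt
    ds.Tables.Add(lineTable);
    DataTable invTable = InvoiceTable(rinv);
    invTable.TableName = "Invoice";        // likewise assumed
    ds.Tables.Add(invTable);
    rdoc.SetDataSource(ds);                // bind the whole DataSet rather than ds.Tables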

  • Internal table with Import and Export

    Hi All,
    Hi all
    Please let me know the use of internal tables with import and export parameters and SET/GET parameters: in what type of cases can we use these? Please give me the syntax with some examples.
    Please give me a detailed analysis of the above.
    Regards,
    Prabhu

    Hi Prabhakar,
    There are three types of memory:
    1. ABAP memory
    2. SAP memory
    3. External memory
    1. We use EXPORT/IMPORT TO/FROM MEMORY ID when we want to transfer data within ABAP memory.
    2. We use SET PARAMETER ID / GET PARAMETER ID to transfer data within SAP memory.
    3. We use EXPORT/IMPORT TO/FROM SHARED BUFFER to transfer data within external memory.
    ABAP memory: two reports in the same session share ABAP memory.
    SAP memory: two different sessions share SAP memory.
    For example, if we call two different transactions, SE38 and SE11, they are both in SAP memory.
    External memory: two different logons share external memory.
    <b>Syntax</b>
    To fill the input fields of a called transaction with data from the calling program, you can use the SPA/GPA technique. SPA/GPA parameters are values that the system stores in the global, user-specific SAP memory. SAP memory allows you to pass values between programs. A user can access the values stored in the SAP memory during one terminal session for all parallel sessions. Each SPA/GPA parameter is identified by a 20-character code. You can maintain them in the Repository Browser in the ABAP Workbench. The values in SPA/GPA parameters are user-specific.
    ABAP programs can access the parameters using the SET PARAMETER and GET PARAMETER statements.
    To fill one, use:
    SET PARAMETER ID <pid> FIELD <f>.
    This statement saves the contents of field <f> under the ID <pid> in the SAP memory. The code <pid> can be up to 20 characters long. If there was already a value stored under <pid>, this statement overwrites it. If the ID <pid> does not exist, double-click <pid> in the ABAP Editor to create a new parameter object.
    To read an SPA/GPA parameter, use:
    GET PARAMETER ID <pid> FIELD <f>.
    This statement fills the value stored under the ID <pid> into the variable <f>. If the system does not find a value for <pid> in the SAP memory, it sets SY-SUBRC to 4, otherwise to 0.
    To fill the initial screen of a program using SPA/GPA parameters, you normally only need the SET PARAMETER statement.
    The relevant fields must each be linked to an SPA/GPA parameter.
    On a selection screen, you link fields to parameters using the MEMORY ID addition in the PARAMETERS or SELECT-OPTIONS statement. If you specify an SPA/GPA parameter ID when you declare a parameter or selection option, the corresponding input field is linked to that input field.
    On a screen, you link fields to parameters in the Screen Painter. When you define the field attributes of an input field, you can enter the name of an SPA/GPA parameter in the Parameter ID field in the screen attributes. The SET parameter and GET parameter checkboxes allow you to specify whether the field should be filled from the corresponding SPA/GPA parameter in the PBO event, and whether the SPA/GPA parameter should be filled with the value from the screen in the PAI event.
    When an input field is linked to an SPA/GPA parameter, it is initialized with the current value of the parameter each time the screen is displayed. This is the reason why fields on screens in the R/3 System often already contain values when you call them more than once.
    When you call programs, you can use SPA/GPA parameters with no additional programming overhead if, for example, you need to fill obligatory fields on the initial screen of the called program. The system simply transfers the values from the parameters into the input fields of the called program.
    However, you can control the contents of the parameters from your program by using the SET PARAMETER statement before the actual program call. This technique is particularly useful if you want to skip the initial screen of the called program and that screen contains obligatory fields.
    Reading Data Objects from Memory
    To read data objects from ABAP memory into an ABAP program, use the following statement:
    Syntax
    IMPORT <f1> [TO <g1>] <f2> [TO <g2>] ... FROM MEMORY ID <key>.
    This statement reads the data objects specified in the list from a cluster in memory. If you do not use the TO <gi> option, the data object <fi> in memory is assigned to the data object in the program with the same name. If you do use the option, the data object <fi> is read from memory into the field <gi>. The name <key> identifies the cluster in memory. It may be up to 32 characters long.
    You do not have to read all of the objects stored under a particular name <key>. You can restrict the number of objects by specifying their names. If the memory does not contain any objects under the name <key>, SY-SUBRC is set to 4. If, on the other hand, there is a data cluster in memory with the name <key>, SY-SUBRC is always 0, regardless of whether it contains the data object <fi>. If the cluster does not contain the data object <fi>, the target field remains unchanged.
    Saving Data Objects in Memory
    To write data objects from an ABAP program into ABAP memory, use the following statement:
    Syntax
    EXPORT <f1> [FROM <g1>] <f2> [FROM <g2>] ... TO MEMORY ID <key>.
    This statement stores the data objects specified in the list as a cluster in memory. If you do not use the FROM <gi> option, the data object <fi> is saved under its own name. If you use the FROM <gi> option, the data object <gi> is saved under the name <fi>. The name <key> identifies the cluster in memory. It may be up to 32 characters long.
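    A short, concrete illustration of both mechanisms described above ('MAT' is the standard SPA/GPA parameter ID for the material number; the memory ID, variable names and sample value are placeholders):
    DATA: gv_matnr TYPE matnr VALUE '100-100'.
    * SAP memory: survives across sessions of one logon
    SET PARAMETER ID 'MAT' FIELD gv_matnr.       " fill the SPA/GPA parameter
    GET PARAMETER ID 'MAT' FIELD gv_matnr.       " SY-SUBRC = 4 if nothing is stored

    * ABAP memory: shared between programs of one session
    DATA: gt_flights TYPE STANDARD TABLE OF sflight.
    EXPORT gt_flights TO MEMORY ID 'FLIGHTS'.    " in the calling program
    IMPORT gt_flights FROM MEMORY ID 'FLIGHTS'.  " in the called program; SY-SUBRC = 4 if the cluster is missing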
    Check this link.
    http://www.sap-img.com/abap/difference-between-sap-and-abap-memory.htm
    Thanks,
    Susmitha.
    Reward points for helpful answers.

  • Remove multi table property defined in Parent descriptor

    I tried to remove a multi-table property (which is defined in a parent descriptor) in a child, but it does not get removed.
    <!-- Parent descr -->
    <item-descriptor name="parent" ...>
         <table name="testTable" type="multi" id-column-name="id" multi-column-name="seq_num">
              <property name="testProperty" column-name="property1_id" data-type="map" component-data-type="string">
              </property>
         </table>
    </item-descriptor>
    <!-- Child descr -->
    <item-descriptor name="child" super-type="parent" sub-type-value="child" xml-combine="append">
         <table name="testTable" type="multi" id-column-name="id" multi-column-name="seq_num">
              <property name="testProperty" xml-combine="remove">
              </property>
         </table>
    </item-descriptor>
    The above code does not remove the property "testProperty" in the child descriptor.
    If I have an auxiliary table type, rather than a multi table type, it works fine. I am not sure why it is not working with a multi table property.
    Does anyone have any idea?
    Thanks!!!

    Hi,
    xml-combine is applicable when combining two or more XML definition files.
    When you say it worked for a table of type auxiliary, were the properties in the same XML file or in different XML files?
    In your case, you are actually trying to override a property from a parent item descriptor in a child item descriptor.
    I would suggest removing testProperty from the parent item descriptor and letting the child item descriptors define testProperty.
    Hope this helps.
    Keep posting updates / questions.
    Thanks,
    Gopinath Ramasamy

  • Multi table insert

    Hello boys,
    I would like to do an insert into 3 tables:
    insert into t1
    if found insert into t2
    if not found insert into t3.
    So 2 out of 3 tables will end up with data.
    I would like to do it in one multi-table insert.
    I can't use selects from the destination tables because they are in the terabytes. I tried a number of combinations of primary keys and error-logging tables ("log errors reject limit unlimited") but without any success.
    Any ideas?

    Hello Exor,
    You said:
    insert into t1
    if found insert into t2
    if not found insert into t3.
    What is the "found" condition? What are you checking, and have you looked into the INSERT ALL clause, if that is what you need? Remember you can have a complex join in your SELECT and insert into 3 or 4 tables based on your "found" condition. This might be the fastest way to move data into 3 tables, instead of using a cursor or bulk collect.
      INSERT ALL
        WHEN order_total < 1000000 THEN
          INTO small_orders
        WHEN order_total > 1000000 AND order_total < 2000000 THEN
          INTO medium_orders
        WHEN order_total > 2000000 THEN
          INTO large_orders
      SELECT order_id, order_total, sales_rep_id, customer_id
        FROM orders;
    Regards
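    A hedged sketch of the "found" pattern the reply hints at, assuming the lookup can be expressed as an outer join in the driving SELECT (all table and column names below are illustrative, not from the thread):
      -- t1 always receives the row; t2 or t3 receives it depending on whether
      -- a matching row exists in the lookup table (detected via LEFT JOIN).
      INSERT ALL
        WHEN 1 = 1 THEN
          INTO t1 (id, val) VALUES (id, val)
        WHEN match_id IS NOT NULL THEN
          INTO t2 (id, val) VALUES (id, val)
        WHEN match_id IS NULL THEN
          INTO t3 (id, val) VALUES (id, val)
      SELECT s.id, s.val, m.id AS match_id
        FROM source_tab s
        LEFT JOIN lookup_tab m ON m.id = s.id;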

  • Creating record on second main table during import

    Hello all,
    I am importing data to a main table (materials), and I have a second main table linked to the materials main table to store supporting data. Assuming I have a new record being imported that contains an entity that doesn't exist in that second main table, is it possible to create a record inside the second main table? This functionality exists for lookup tables: if the lookup record doesn't exist, you can configure the map to create the record in the lookup table. Can the same thing be accomplished with multiple main tables? I'm having trouble with this because I can't get any field aside from the primary key of the second main table to show up in the destination fields in the Import Manager.

    Hi,
    As you said: assuming I have a new record being imported that contains an entity that doesn't exist in that second main table, is it possible to create a record inside the second main table?
    This scenario is quite possible.
    I have a workaround, and it should work in my view.
    In this case you have to create two maps: one for the main table import (primary) and another for the second main table (secondary).
    Before importing to the main table (primary), the file should be imported to the second main table (secondary) by placing it in the ready inbound port of the second main table.
    That way, record entities that do not exist in the secondary main table get created, and existing records get updated.
    Now, when the same source file is imported for the main table, the record entity will already be there in the secondary main table, and as such you should not face any issue while importing through the main table.
    Kindly let me know if you face any issue.
    Thanks and Regards,
    Mandeep Saini

  • Multi-table mapping is not inserting into the primary table first.

    I have an inheritance mapping where the children are mapped to the parent table OIDs with the "Multi-Table Info" tab.
    One of the children is not inserting properly. Its insert is attempting to insert into one of the tables from the "Additional Tables" of "Multi-Table Info" instead of the primary table that it is mapped to.
    The other children insert correctly. This child is not much different from those.
    I looked through the forums but found nothing similar.

    I would expect the children to be inserted into both the primary table and the additional table. Is the object in question inserted into the primary table at all? Is the problem that it is being inserted into the additional table first? If it is, are the primary key names different? Is it a foreign key relationship?
    If the object in question has no fields in the additional table, is it mapped to the additional table? (It should not be.)
    Perhaps providing the deployment XML will help determine the problem.
    --Gordon

  • Table in import and export parameters

    Hi,
    this is a general doubt: can we pass an internal table in the import/export parameters of FMs? I know there are TABLES parameters, but can this be done with import/export parameters?
    Regards,
    Vijay

    Hi Vijay,
    1. If we pass it through TABLES,
       then we can pass an internal table
       WITH HEADER LINE.
    2. But if we use import/export parameters,
       it has to be a TABLE TYPE
       defined in the Data Dictionary
       (this table type will not have any header line).
    Regards,
    Amit M.
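    A minimal sketch of the second case (both names below are made up for illustration; the table type would be defined in SE11):
    * Declare the internal table with a dictionary table type (no header line)
    DATA: lt_data TYPE ztt_my_rows.        " assumed SE11 table type

    CALL FUNCTION 'Z_PROCESS_DATA'         " assumed FM whose IMPORTING parameter uses ztt_my_rows
      EXPORTING
        it_data = lt_data.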

  • Issue with trigger, multi-table insert and error logging

    I find that if I try to perform a multi-table insert with error logging on a table that has a trigger, then some constraint violations result in an exception being raised as well as logged:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL> create table t1 (id integer primary key);
    Table created.
    SQL> create table t2 (id integer primary key, t1_id integer,
    2           constraint t2_t1_fk foreign key (t1_id) references t1);
    Table created.
    SQL> exec dbms_errlog.create_error_log ('T2');
    PL/SQL procedure successfully completed.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    0 rows created.
    SQL> create or replace trigger t2_trg
    2 before insert or update on t2
    3 for each row
    4 begin
    5 null;
    6 end;
    7 /
    Trigger created.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    insert all
    ERROR at line 1:
    ORA-02291: integrity constraint (EOR.T2_T1_FK) violated - parent key not found
    This doesn't appear to be a documented restriction. Does anyone know if it is a bug?

    Tony Andrews wrote:
    "This doesn't appear to be a documented restriction. Does anyone know if it is a bug?"
    Check The Execution Model for Triggers and Integrity Constraint Checking in the documentation.
    SY.

  • JCA IConnection call to BAPI with table in table as import and export param

    Is it possible to call a BAPI with a table within a table as import and export data?
    Please give a small code example if this is possible.
    Thanks in advance.

    Inner tables can be handled with:
    IRecordSet innerTable = (IRecordSet)outerTable.getObject("INNER_TABLE");

Maybe you are looking for

  • How to create one delivery for multiple sales order

    Hi, friends. Can someone explain the steps to be followed in creating one delivery for multiple sales orders in SAP SD? Regards, AKASH. Message was edited by: AKASH TAMBI

  • Inventory management in BW7.0

    Hi, we have implemented 0IC_C03 in the 7.x flow. We are facing an issue with the 2LIS_03_BX load. Usually when this is loaded to the cube it gets loaded with record type '1' in the 3.x flow, but in the 7.0 flow it gets loaded with record type 0, and when i do m

  • Sed and perl not replacing a letter in a file

    I have a file, 1.htm. I want to replace a letter ṣ (s with a dot below). I tried with both sed and perl and it does not replace: sed -i 's/ṣ/s/g' "1.htm" and perl -i -pe 's/ṣ/s/g' "1.htm". Can anyone suggest what to do? 1.html

  • How to find active object ?

    Hi, can anybody please guide me how to find active BW objects like InfoCubes, ODS objects, InfoSources and InfoObjects? Please answer in detail. Thanks and regards, DEP

  • I can't sign in to my account on someone else's computer

    I want to access MY iTunes account from someone else's computer to download songs.