Insert Infotype with Table Part

Hi All,
I am trying to insert infotype NNNN, including the information in the table part, using a function module.
I have found FM RH_INSERT_INFTY_EXP. This FM has a parameter called TNNN which is supposed to hold the table part information. However, when I tried to use it, the table part was not filled.
I saw that PNNNN-ITXNR is relevant here. However, incrementing this number manually for each entry I write seems unwieldy.
Can anybody help me here?
Thanks, Johannes

Hi Johannes,
Use FM 'RH_INSERT_INFTY'.
Pass values for the following parameters when calling the FM.
  REPID               = SY-REPID
  FORM                = 'FILL_TASK_DATA'
Create a subroutine with the same name as you passed above for 'FORM',
and in this subroutine you can populate the data for HRTNNNN.
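For illustration, here is a minimal sketch of that pattern. Infotype 9999, the data object names, the VTASK value and the exact callback interface are assumptions on my part; check the real signature of RH_INSERT_INFTY in SE37 before relying on it.

  DATA: gt_p9999   TYPE TABLE OF p9999,   " record part (PNNNN)
        gt_hrt9999 TYPE TABLE OF hrt9999. " table part (HRTNNNN)

  CALL FUNCTION 'RH_INSERT_INFTY'
    EXPORTING
      vtask = 'D'               " direct update - assumed value
      repid = sy-repid          " program that contains the callback form
      form  = 'FILL_TASK_DATA'
    TABLES
      innnn = gt_p9999.

  " Callback performed by the FM; it supplies the HRTNNNN rows, so you
  " never have to maintain PNNNN-ITXNR yourself.
  FORM fill_task_data TABLES p_hrtnnnn STRUCTURE hrt9999.
    p_hrtnnnn[] = gt_hrt9999[].
  ENDFORM.

The point of the REPID/FORM pair is that the framework calls your subroutine to fetch the table part and links it to the record via ITXNR internally.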

Similar Messages

  • ABAP HR How to create infotype with table control in it like Infotype 0008

    Hi Experts,
    I need your help. My client's requirement is to create a custom infotype just like infotype 0008, which contains a table control to save amounts and wage types. I tried to create the infotype with a table control using PM01, but that table control is in display mode only. I have searched almost everywhere for how to create a custom infotype with a table control, but the threads in the forum are all unanswered, and most of them only cover creating the infotype itself. I am already done with the infotype; my main problem is the table control.
    If anyone has a suggestion for this, please share it with me.
    <removed by moderator> I am looking for a positive reply.
    Edited by: Thomas Zloch on Aug 30, 2011 12:54 PM

    Hi,
    I've created several infotypes with a table control, and it is always the same story. You have to create a custom Z table to store the table control data (if you want to allow an unlimited number of records), so in the PSXXXX structure you need to add a TABNR field to link the PAXXXX table and the Z table, just like the table-type OM infotypes.
    Then, in your code, you have to handle every possible operation (INS, MOD, DEL, ...) and update the Z table accordingly (the standard code won't do that).
    If your table control fields appear in display mode, take a look at Groups 1 and 3 of your fields; they must be set to the usual values for a PA infotype.
    If you have more questions, just ask.
    Regards
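    As a rough illustration of the TABNR link described above (the Z table ZHR_TC_DATA, the P9XXX work area and all data object names are hypothetical):

      DATA: gt_tc_rows TYPE TABLE OF zhr_tc_data,
            gs_tc_row  TYPE zhr_tc_data.

      " PBO: load the table control rows that belong to the current record
      SELECT * FROM zhr_tc_data INTO TABLE gt_tc_rows
        WHERE tabnr = p9xxx-tabnr.

      " PAI, on save: write the rows back under the same TABNR
      DELETE FROM zhr_tc_data WHERE tabnr = p9xxx-tabnr.
      LOOP AT gt_tc_rows INTO gs_tc_row.
        gs_tc_row-tabnr = p9xxx-tabnr.
        INSERT zhr_tc_data FROM gs_tc_row.
      ENDLOOP.

    The idea is simply that PAXXXX keeps one TABNR per record and the unlimited rows live in the Z table keyed by that TABNR.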

  • Custom infotype with table

    Hi guys,
    I have to create a custom infotype with a table in it. I have created the PS structure (PM01) with fields A, B, C, D, E. I entered the infotype characteristics and technical attributes, and activated the PA table and P structure. In the layout editor I created a table control and put the fields A, B and C in it: I dragged the input/output fields onto the table control and then dragged the text fields above them. When I check the layout, I get the following error in the flow logic:
    Program MP988800 Screen 2000
    The field P9888-ZA is not assigned to a loop. "LOOP...ENDLOOP" must appear in "PBO" and "PAI".
    Can anyone tell me how and where I should write the code for the table control to get it working? Thanks a lot.

    Hi Ranjeth,
    I have tried the code you provided, but I am getting errors. Can you please tell me what IT_TBCTRL_BEHAVIOR and IS_TBCTRL_BEHAVIOR are?
    I am getting the following errors:
    Statement CONTROLS is not defined.
    IS_TBCTRL_BEHAVIOR-CODETXT not defined.
    IS_TBCTRL_BEHAVIOR-RATE not defined.
    I have replaced i_bc_tbctrl with the name of the table control I defined in the layout editor, and I have put your code in the flow logic. Check the code below.
    CONTROL: options TYPE TABLEVIEW USING SCREEN 2000.
    PROCESS BEFORE OUTPUT.
      LOOP AT IT_TBCTRL_BEHAVIOR
           INTO IS_TBCTRL_BEHAVIOR
           WITH CONTROL options
           CURSOR options-CURRENT_LINE.
        MODULE options_GET_LINES.
      ENDLOOP.
    * general infotype-independent operations
      MODULE BEFORE_OUTPUT.
      CALL SUBSCREEN subscreen_empl   INCLUDING empl_prog empl_dynnr.
      CALL SUBSCREEN subscreen_header INCLUDING header_prog header_dynnr.
    * infotype-specific operations
      MODULE P9111.
      MODULE HIDDEN_DATA.
    PROCESS AFTER INPUT.
      LOOP AT IT_TBCTRL_BEHAVIOR.
        CHAIN.
          FIELD IS_TBCTRL_BEHAVIOR-CODETXT.
          FIELD IS_TBCTRL_BEHAVIOR-RATE.
          MODULE options_MODIFY ON CHAIN-REQUEST.
        ENDCHAIN.
      ENDLOOP.
    * process exit commands
      MODULE EXIT AT EXIT-COMMAND.
    * processing after input:
    * check and mark if there was any input - all fields that
    * accept input HAVE TO BE listed here
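    For what it's worth, here is a hedged sketch of how these pieces are usually split up (the line type ZTC_LINE and its fields are assumptions based on the code above). The "Statement CONTROLS is not defined" error typically means the CONTROLS declaration was placed in the screen flow logic; it belongs in the module pool, together with DATA declarations for the internal table and work area:

      * In the module pool (TOP include), not in the flow logic:
      CONTROLS: options TYPE TABLEVIEW USING SCREEN 2000.
      DATA: it_tbctrl_behavior TYPE TABLE OF ztc_line, " hypothetical line type
            is_tbctrl_behavior TYPE ztc_line.          " with fields CODETXT, RATE

      * Screen 2000 flow logic keeps only the LOOP ... ENDLOOP blocks:
      PROCESS BEFORE OUTPUT.
        LOOP AT it_tbctrl_behavior INTO is_tbctrl_behavior
             WITH CONTROL options CURSOR options-current_line.
          MODULE options_get_lines.
        ENDLOOP.

      PROCESS AFTER INPUT.
        LOOP AT it_tbctrl_behavior.
          CHAIN.
            FIELD is_tbctrl_behavior-codetxt.
            FIELD is_tbctrl_behavior-rate.
            MODULE options_modify ON CHAIN-REQUEST.
          ENDCHAIN.
        ENDLOOP.

    The screen fields on the table control must also be named after the work area (e.g. IS_TBCTRL_BEHAVIOR-CODETXT) so the LOOP can bind them, and the referenced modules must exist in the module pool.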

  • How to insert into a table with a nested table which refer to another table

    Hello everybody,
    As the title of this thread might not be very understandable, I'm going to explain it :
    In a context of a library, I have an object table about Book, and an object table about Subscriber.
    In the table Subscriber, I have a nested table modeling the Loan made by the subscriber.
    And finally, this nested table refers to the Book table.
    Here the code concerning the creation of theses tables :
    Book:
    create or replace type TBook as object
    (
      number int,
      title varchar2(50)
    );
    Loan:
    create or replace type TLoan as object
    (
      book ref TBook,
      loaning_date date
    );
    create or replace type NTLoan as table of TLoan;
    Subscriber:
    create or replace type TSubscriber as object
    (
      sub_id int,
      name varchar2(25),
      loans NTLoan
    );
    Now, my problem is how to insert into a table of TSubscriber... I tried this query, without any success:
    insert into OSubscriber values
    (1, 'LEVEQUE', NTLoan(
      select TLoan(ref(b), '10/03/85') from OBook b where b.number = 1))
    Of course, there is an occurrence of book in the table OBook with the number attribute 1.
    Oracle returned me this error :
    SQL error : ORA-00936: missing expression
    00936. 00000 - "missing expression"
    Thank you for your help

    1) NUMBER is a reserved word - you can't use it as an identifier:
    SQL> create or replace type TBook as object
      2  (
      3  number int,
      4  title varchar2(50)
      5  );
      6  /
    Warning: Type created with compilation errors.
    SQL> show err
    Errors for TYPE TBOOK:
    LINE/COL ERROR
    0/0      PL/SQL: Compilation unit analysis terminated
    3/1      PLS-00330: invalid use of type name or subtype name

    2) Subquery must be enclosed in parentheses:
    SQL> create table OSubscriber of TSubscriber
      2  nested table loans store as loans
      3  /
    Table created.
    SQL> create table OBook of TBook
      2  /
    Table created.
    SQL> insert
      2    into OBook
      3    values(
      4           1,
      5           'No Title'
      6          )
      7  /
    1 row created.
    SQL> commit
      2  /
    Commit complete.
    SQL> insert into OSubscriber
      2    values(
      3           1,
      4           'LEVEQUE',
      5           NTLoan(
      6                  (select TLoan(ref(b),DATE '1985-10-03') from OBook b where b.num = 1)
      7                 )
      8          )
      9  /
    1 row created.
    SQL> select  *
      2    from  OSubscriber
      3  /
        SUB_ID NAME
    LOANS(BOOK, LOANING_DATE)
             1 LEVEQUE
    NTLOAN(TLOAN(000022020863025C8D48614D708DB5CD98524013DC88599E34C3D34E9B9DBA1418E49F1EB2, '03-OCT-85'))
    SY.

  • Inserting data with the help of nested table...!!!

    The following block is giving the error
    ORA-06502: PL/SQL: numeric or value error
    The signature and signature_bkp tables have the same structure.
    Can anybody help me solve this issue? The block copies records from one table to another.
    Thanking you in advance.
    DECLARE
      CURSOR c1 IS
        SELECT *
          FROM signature
         WHERE creation_time > TRUNC (SYSDATE) - 100
           AND ROWNUM < 102;
      TYPE sig_typ IS TABLE OF signature%ROWTYPE;
      sig_t sig_typ;
    BEGIN
      OPEN c1;
      FETCH c1 BULK COLLECT INTO sig_t;
      CLOSE c1;
      FORALL i IN sig_t.FIRST .. sig_t.LAST
        INSERT INTO signature_bkp VALUES sig_t (i);
      COMMIT;
    END;
    --DKar

    Or whether an INSERT statement with a SELECT clause will do that for you
    By using this technique, it took 47:08:45 to copy 7252 rows, while using a cursor FOR loop took 49:03:23 to copy 13567 rows. So there was approx. a 40% increase in performance by using PL/SQL. I thought it could be even faster using the bulk-binding features and nested tables.
    Or, I just want to know how to correct the block of code given in my first message without changing its logic.
    Thanks
    --DKar

  • How to insert into 2 tables from the same page (with one button  link)

    Hi,
    I have the following 2 tables....
    Employees
    emp_id number not null
    name varchar2(30) not null
    email varchar2(50)
    hire_date date
    dept_id number
    PK = emp_id
    FK = dept_id
    Notes
    note_id number not null
    added_on date not null
    added_by varchar2(30) not null
    note varchar2(4000)
    emp_id number not null
    PK = note_id
    FK = emp_id
    I want to do an insert into both tables via the application, from the same page, with one button (link). I have made a form to add an employee, with an Add button; adding an employee is no problem.
    Now, on the same page, I have added an HTML text area in another region, where the user can write a note. But how do I get the note inserted into the Notes table when the user clicks the Add button?
    In other words, when the user clicks Add, the employee information should be inserted into the Employees table and the note should be inserted into the Notes table.
    How do I go about doing this?
    Thanks.

    Hi,
    These are my After Submit Processes...
    After Submit
    30     Process Row of NOTES     Automatic Row Processing (DML)     Unconditional
    30     Process Row of EMPLOYEES     Automatic Row Processing (DML)     Unconditional
    40     reset page     Clear Cache for all Items on Pages (PageID,PageID,PageID)     Unconditional
    40     reset page     Clear Cache for all Items on Pages (PageID,PageID,PageID)     Unconditional
    40     reset page     Clear Cache for all Items on Pages (PageID,PageID,PageID)     Unconditional
    40     reset page     Clear Cache for all Items on Pages (PageID,PageID,PageID)     Unconditional
    50     Insert into Tables     PL/SQL anonymous block     Conditional
    My pl/sql code is the same as posted earlier.
    Upon inserting data into the forms and clicking the add button, I get this error...
    ORA-06550: line 1, column 102: PL/SQL: ORA-00904: "NOTES": invalid identifier ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored
         Error      Unable to process row of table EMPLOYEES.
    Is there something wrong with the pl/sql code or is it something else?

  • Taking More Time while inserting into the table (With foriegn key)

    Hi All,
    I am facing a problem while inserting values into the master table.
    The problem:
    Table A -- User master table (Reg No, Name, etc.)
    Table B -- Transaction table (foreign key reference to Table A).
    While inserting data into Table B, I also need to insert the reg no into Table B, which is mandatory. I followed the logic shown in the SRDemo.
    Before inserting, we need to query Table A first to have the values in TableABean.java:
    final TableA tableA = (TableA) uow.executeQuery("findUser", TableA.class, regNo);
    Then we need to create the instance for TableB:
    TableB tableB = (TableB) uow.newInstance(TableB.class);
    tableB.setID(bean.getID());
    tableA.addTableB(tableB); -- this is to insert the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
    This query takes too much time if there are many rows in TableB for that particular registration number, and because of this the insert into TableB becomes slow.
    For example: TableA regNo 101, with few entries in TableB, means inserting a record takes less than 1 sec;
    regNo 102, with more entries in TableB, means inserting a record takes more than 2 sec.
    So different users see different delays when they enter transactions in TableB.
    I need to avoid this, since in future it will take even more time (from 2 sec to 10 sec) as the volume of data increases.
    Please help me resolve this issue; I am facing it now in production.
    Thanks & Regards
    VB

    Hello,
    Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing you delays that you want to avoid there might be two quick ways I can see:
    1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. This might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query performance options when you do need the data, though.
    2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB); instead, only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) check that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments, though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing the TableA if so.
    Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
    http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
    Best Regards,
    Chris

  • Create infotype with PM01 + field length in table

    Hello my HR friends. A friend of mine created an infotype with tcode PM01. So far everything is OK, but when she finished, I found an error and neither she nor I know how to solve it.
    Picture of infotype: http://img197.imageshack.us/img197/8236/infotype1.jpg
    The field marked with the rectangle has these possible values:
    http://img529.imageshack.us/img529/1887/infotype2.jpg
    The problem is that if I choose the first possible value, the field stays like this:
    http://img836.imageshack.us/img836/4494/infotype3.jpg and then, when I'm about to save, the system says that the content doesn't exist in the table that is feeding that textbox.
    Does anyone know how to solve this problem, or is the solution to make the possible values shorter or the text box bigger?
    Regards, and thanks in advance for the help.
    Mário

    Instead of INT, use the DEC or NUMC data types.
    To understand why INT cannot be used, try searching for hardware restrictions in terms of general computing.

  • Record not inserting into sap table with connector framework ?

    Here is the code, but the record is not being inserted into the table, although the same piece of code works fine when updating a record:
    try {
        interaction = connection.createInteractionEx();
        IInteractionSpec interactionSpec = interaction.getInteractionSpec();
        String functionName = "Z_XYZ";
        interactionSpec.setPropertyValue("Name", functionName);
        String writingTable = "MYTABLE";
        // build the import parameters
        RecordFactory rf = interaction.getRecordFactory();
        MappedRecord importParams = rf.createMappedRecord("input");
        importParams.put("ATTR1", "VALUE1");
        importParams.put("ATTR2", "VALUE2");
        // build the table parameter and add one row
        IFunction function = connection.getFunctionsMetaData().getFunction(functionName);
        IStructureFactory sf = interaction.retrieveStructureFactory();
        IRecordSet table = (IRecordSet) sf.getStructure(function.getParameter(writingTable).getStructure());
        table.insertRow();
        table.setString("ATNAME", "VALUE");
        table.setString("ATWRT", "VALUE");
        importParams.put(writingTable, table);
        // execute the function module
        MappedRecord output = (MappedRecord) interaction.execute(interactionSpec, importParams);
    } catch (Exception e) {
    Any idea?
    Thanks
    MMK

    Hi Mohan,
    Does creating the record through SE37 with the same input work?
    Yoav.

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. Central to our data monitoring system is an Oracle database running Oracle Standard Edition One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small-record data.
    One table in particular (the raw incoming data: 350 GB, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors". Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar, usually empty; a few numerics, including a source id; a timestamp; and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes: a unique one on (sourceid, timestamp), and a non-unique one on (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point the secondary index slowed the IOT to a crawl.)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long-term data (the customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds (since we are Standard One) this is just not possible.
    Now, what we are observing about the inserts into this table:
    - Inserts are much slower for a "wider" cardinality of the sourceids being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me: as I understand it, Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle requests more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10 GB of extra RAM per quarter to 6 months; we're at about 50 GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table and indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of the data.
    We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached, and allow us to handle our "older than 3 months" data at partition level, greatly reducing table and index fragmentation.
    Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10 GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ suggested by one of the contributors to this thread. A direct path load will not re-use any free space left by the deletes. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index (create time)
    Your delete logic (based on arrival time) will smash your indexes, as you are always deleting from the left-hand side of the index; it means you will have what we call a right-hand index - in other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
    ALTER INDEX indexname COALESCE;
    This coalesce should be considered as a regular task (maybe after each 80% delete). You also seem to have several sourceids per timestamp. If that is the case, you should think about compressing this index:
    create index indexname (sourceid, timestamp) compress;
    or
    alter index indexname rebuild compress;
    You will do it only once. Your index will have a smaller size and may be more efficient than it currently is. The index compression adds extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri

  • Urgent!!!!!  Inserting data in table control of infotype.

    Hi Experts,
    I want to insert data into a custom infotype. I am using FM HR_INFOTYPE_OPERATION for this purpose, but the custom infotype contains a table control.
    The table control of the infotype has 20 rows containing the fields NAME01, ADDR01 up to NAME20, ADDR20.
    How do I insert data into the table control fields of the infotype if I want to use FM HR_INFOTYPE_OPERATION?
    Please suggest if there is another way to do it.
    Thanks.

    Thanks for your reply.
    I am calling FM HR_INFOTYPE_OPERATION in a loop over a table.
    The table contains multiple employee numbers (PERNR). The PERNRs can all be different, or there can be several records for the same PERNR.
    Suppose there are four records in the table and the first two are for the same PERNR: how would NAME01 and NAME02 be assigned then?
    Now, the third record is for a new PERNR, so it should be NAME01 again.
    So the question is how, in each pass of the loop, I assign NAMEnn for the different PERNRs.
    NAMEnn and ADDRnn were an example. I am sending my code here:
    LOOP AT it_data.
      gs_9000-pernr = it_data-pernr.
      gs_9000-currentamount03 = it_data-curramt.
      gs_9000-mtdamount03 = it_data-mtd.
      gs_9000-qtdamount03 = it_data-qtd.
      gs_9000-ytdamount03 = it_data-ytd.
      gs_9000-roll12amount03 = it_data-roll.
      CALL FUNCTION 'BAPI_EMPLOYEE_ENQUEUE'
        EXPORTING
          number = gs_9000-pernr
        IMPORTING
          return = returne.
      CALL FUNCTION 'HR_INFOTYPE_OPERATION'
        EXPORTING
          infty         = '9000'
          number        = gs_9000-pernr
          validitybegin = '20080801'
          record        = gs_9000
          operation     = 'INS'
          tclas         = 'A'
          dialog_mode   = '0'
        IMPORTING
          return        = return
          key           = key.
      IF return IS NOT INITIAL.
        WRITE: / 'Error Occurred'.
      ENDIF.
      CALL FUNCTION 'BAPI_EMPLOYEE_DEQUEUE'
        EXPORTING
          number = gs_9000-pernr.
    ENDLOOP.
    So in the above code,
    gs_9000-currentamount03 = it_data-curramt.
    gs_9000-mtdamount03 = it_data-mtd.
    gs_9000-qtdamount03 = it_data-qtd.
    gs_9000-ytdamount03 = it_data-ytd.
    gs_9000-roll12amount03 = it_data-roll.
    are table control fields. So how would I assign
    gs_9000-currentamount04 = it_data-curramt.
    gs_9000-mtdamount04 = it_data-mtd.
    gs_9000-qtdamount04 = it_data-qtd.
    gs_9000-ytdamount04 = it_data-ytd.
    gs_9000-roll12amount04 = it_data-roll.
    in the loop over the table?
    Thanks
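    One hedged way to handle the numbered fields is to address the components of gs_9000 dynamically instead of hard-coding the suffix. This is only a sketch built on the names in the code above; lv_index and lv_fname are mine, and it assumes it_data is sorted by PERNR (with PERNR as its first field) and that HR_INFOTYPE_OPERATION is called once per PERNR after all of its rows have been filled:

      DATA: lv_index(2) TYPE n,
            lv_fname    TYPE string.
      FIELD-SYMBOLS <fs_value> TYPE any.

      LOOP AT it_data.
        AT NEW pernr.              " restart the row counter per employee
          lv_index = '00'.
          CLEAR gs_9000.
        ENDAT.
        lv_index = lv_index + 1.
        gs_9000-pernr = it_data-pernr.

        " build the component name: CURRENTAMOUNT01, CURRENTAMOUNT02, ...
        CONCATENATE 'CURRENTAMOUNT' lv_index INTO lv_fname.
        ASSIGN COMPONENT lv_fname OF STRUCTURE gs_9000 TO <fs_value>.
        IF sy-subrc = 0.
          <fs_value> = it_data-curramt.
        ENDIF.
        " ... repeat the same pattern for MTDAMOUNTnn, QTDAMOUNTnn, etc.

        AT END OF pernr.
          " enqueue, call HR_INFOTYPE_OPERATION once with the filled
          " gs_9000, then dequeue - exactly as in your code above.
        ENDAT.
      ENDLOOP.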

  • Understanding logminer results -- inserting row into table with CLOB field

    While using LogMiner I have noticed that inserts into rows that contain a CLOB field (I assume this applies to other LOB type fields as well; I have only tested with CLOB so far) are actually recorded as two DML entries:
    -- the first entry is the insert operation, which inserts all values with an EMPTY_CLOB() for the CLOB field
    -- the second entry is the update that sets the actual CLOB value (this is true even if the value of the CLOB field is not being set explicitly)
    This separation makes sense, as the values may be stored in separate locations, etc.
    However, what I am tripping over is the fact that the first entry, the insert, has a RowId value of 'AAAAAAAAAAAAAAAAAA', which is invalid if I attempt to use it in a flashback query such as:
    SELECT * FROM PERSON AS OF SCN ##### WHERE RowId = 'AAAAAAAAAAAAAAAAAA'
    The second operation, the update of the CLOB field, has the valid RowId.
    Now, again, this makes sense if the insert of the new row is not really considered "done" until the two steps are complete. However, is there some way to group these operations together when analyzing the log contents, to know that these two operations are a "matched set"?
    Not a total deal breaker, but it would be nice to know what is happening under the hood here so I don't act on any false assumptions.
    Thanks for any input.
    To replicate:
    Create a table with a CLOB field:
      CREATE TABLE DEVUSER.TESTTABLE
      ( ID NUMBER
      , FULLNAME VARCHAR2(50)
      , AGE NUMBER
      , DESCRIPTION CLOB
      );
    Capture the before SCN:
      SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Insert a new row in the test table:
      INSERT INTO TESTTABLE(ID,FULLNAME,AGE) VALUES(1,'Robert BUILDER',35);
      COMMIT;
    Capture the after SCN:
      SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Start a LogMiner session with the bracketing SCN values and options:
      EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN=>2619174, ENDSCN=>2619191, -
        OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + -
        DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER)
    Query the logs for the changes in that range:
      SELECT commit_scn, xid, operation, table_name, row_id,
             sql_redo, sql_undo, rs_id, ssn
        FROM V$LOGMNR_CONTENTS
       ORDER BY xid ASC, sequence# ASC;
    Results:
      2619178  0C00070028000000  START                AAAAAAAAAAAAAAAAAA  set transaction read write
      2619178  0C00070028000000  INSERT   TESTTABLE   AAAAAAAAAAAAAAAAAA  insert into "DEVUSER"."TESTTABLE" ...
      2619178  0C00070028000000  UPDATE   TESTTABLE   AAAFEXAABAAALEJAAB  update "DEVUSER"."TESTTABLE" set "DESCRIPTION" = NULL ...
      2619178  0C00070028000000  COMMIT               AAAAAAAAAAAAAAAAAA  commit
    Edited by: 958701 on Sep 12, 2012 9:05 AM
    Edited by: 958701 on Sep 12, 2012 9:07 AM

    Scott,
    Thanks for the reply.
    I am inserting into the table over a database link.
    I am using the new version of HTML Db (2.0)
    HTML Db is connected to an Oracle 10 database I think, however the table I am trying to insert data into (via the database link) is in an Oracle 8 database - this is why we created a link to it as we couldn't have the HTML Db interacting with the Oracle 8 database directly due to compatibility problems (or so I've been told)
    Simon

  • Internal error - insert in sorted tabl ZADRU with

    Hello Experts,
    When I open the BP transaction for one of the business partners, we get the following error:
    Internal error - insert in sorted tabl ZADRU with
    Requesting your help to resolve this issue. Is there a SAP note to handle such scenarios? It seems the data has become inconsistent.
    Thanks,
    Rohit

    Hi Rohit,
    Table ZADRU is your own development (a custom table); please ask your ABAP team about this table.
    Denis

  • How to insert records with LONG RAW columns from one table to another

    Does anybody know how to use a subquery to insert records with columns of LONG RAW datatype from one table into another? Can I add a WHERE clause to the subquery? Thanks.

    INSERT INTO ... SELECT statements are not supported for LONG or LONG RAW. You will have to either use PL/SQL or convert your LONG RAW columns to BLOBs.

  • Large insert op into table with indexes

    Hi,
    Oracle 8.1.7.0. Empty table (after truncate) with two indexes. I need to insert about 40 billion records. Which is the better way to complete this task:
    1. Drop the indexes, insert the data, then rebuild the indexes.
    2. Simply insert the data into the table.
    Thanks.

    The only way to find out is to test... For example, I did a test on my single-CPU box with Oracle 9i. My test was to load all the rows from DBA_SOURCE (only 650k rows). I found that a single insert statement with the bitmap indexes online ran faster than the total elapsed time for taking the indexes offline, inserting, and bringing the indexes back up...
    With 40 billion rows, I presume you're using partitioned tables and enabling parallel DML. Thus, your test results will be much different from mine...
    In past ETL projects I worked on, I found little difference in timing. I decided that I didn't want to drop indexes (it was 8i), so I loaded the empty tables with indexes (and constraints) enabled...
    Stan
