Commit in procedures after every 100000 records possible?

Hi All,
I am using an ODI procedure to insert data into a table.
I checked that in the ODI procedure there is an option of selecting a transaction and setting the commit option to 'Commit after every 1000 records'.
Since the record count to be inserted is 38489152, I would like to know whether this option is configurable.
Can I ensure that commits are made after every 100000 records instead of every 1000?
Thank You.
Prerna

I recently wrote about this:
http://dwteam.in/commit-interval-in-odi/
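If the built-in option stops at 1000, one common workaround is a plain PL/SQL block run from the procedure. A minimal sketch of the commit-every-100000 pattern (not ODI-specific; src_table and tgt_table are placeholder names, not from your scenario):
DECLARE
   CURSOR c_src IS SELECT * FROM src_table;
   TYPE t_rows IS TABLE OF src_table%ROWTYPE;
   l_rows t_rows;
BEGIN
   OPEN c_src;
   LOOP
      FETCH c_src BULK COLLECT INTO l_rows LIMIT 100000;
      EXIT WHEN l_rows.COUNT = 0;
      FORALL i IN 1 .. l_rows.COUNT
         INSERT INTO tgt_table VALUES l_rows(i);
      COMMIT; -- commit after each batch of up to 100000 rows
   END LOOP;
   CLOSE c_src;
END;
/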
Thanks
Bhabani
http://dwteam.in

Similar Messages

  • ResultSet "hangs" after every 10 records

    Hi
    Please could somebody help me.
    I have extracted a ResultSet from a database which contains between 100 and 200 records (5 fields each).
    If I call rset.next(), printing a count after each call, my program hangs for about 2 minutes after every 10 records.
    For example:
    int count = 0;
    while (rset.next()) {
        System.out.println("" + ++count);
    }
    Prints:
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    Waits here for two minutes and then carries on
    11
    ...
    20
    Waits again for 2 minutes, etc.
    Has anyone had this problem or does anyone know how to fix it?
    FYI: prstat reports tiny CPU and memory usage so the hardware is not responsible.
    Thanks a lot in advance

    Hi All
    It must be the network - setFetchSize is unsupported in both Statement and ResultSet in the driver set I am using.
    It is running through a 10baseT switch at the moment, which may be the problem, so I will stick it on the backbone and try again.
    Thanks again for your help.

  • Total after every 25 records

    Dear Friends,
    I would like to write a query that also returns the total of some columns after every 25 records,
    like this:
    ccno salary
    1 5000
    2 10000
    25 80000
    total       <total of above 25>
    26 25000
    27 10000
    50 13000
    total       <total of above 50>
    Can we achieve this?
    Waiting for a reply.

    with tab as (
    select 1 ccno,100 salary from dual union all
    select 2 ccno,200 salary from dual union all
    select 3 ccno,300 salary from dual union all
    select 4 ccno,400 salary from dual union all
    select 5 ccno,500 salary from dual union all
    select 6 ccno,600 salary from dual union all
    select 7 ccno,700 salary from dual
    )--end of test data
    select ccno,
           salary,
           case when mod(row_number() over (order by ccno), 3) = 0 then sum(salary) over (order by ccno) else null end as sumsal
    from tab
    CCNO                   SALARY                 SUMSAL                
    1                      100                                          
    2                      200                                          
    3                      300                    600                   
    4                      400                                          
    5                      500                                          
    6                      600                    2100                  
    7                      700                                          
    7 rows selected
    Change the 3 in the mod to 25 for your data.

  • Commit after every 1000 records

    Hi dears ,
    I have to update or insert around 100,000 (1 lakh) records every day on an incremental basis.
    Currently the commit happens only after all the records are processed; if something fails in between, all my processed records get rolled back.
    I need to commit after every so many records, say every 1000 records.
    Does anyone know how to do it?
    Thanks in advance
    Regards
    Raja

    Raja,
    There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk mode mappings. Bulk mode mappings commit according to the Bulk Size (which is also a configuration setting of the mapping).
    When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
    If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values are different, Bulk Size overrides the Commit Frequency and Warehouse Builder implicitly performs a commit for every Bulk Size rows.
    Regards,
    Ilona

  • Add a row after every n records

    Hi
    I have a query that returns only one column
    Column1
    a
    b
    c
    d
    g
    e
    f
    g
    h
    I want to add 01 as the first row, then 02 after 5 records, then 03 after another 5 records, and so on, i.e.
    Column1
    01
    a
    b
    c
    d
    e
    02
    f
    g
    h
    How can this be done?

    Hi,
    Nice post.
    Regards, Salim.
    Another solution:
    SELECT res
      FROM t
    model
    dimension by( row_number()over(partition by 1 order by rownum) rn)
    measures(col1,cast ( col1 as varchar2(20)) as res, count(1)over(partition by 1) cpt,trunc(rownum/5) diff)ignore nav
    (diff[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
    case when diff[cv(rn)] is present then  diff[cv(rn)]
    else   case when mod(cv(rn),5)=0 then
           diff[cv(rn)-1]+1
           else diff[cv(rn)-1]end 
    end,
    res[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
      case when mod(cv(rn),5)=0 then 
           to_char((cv(rn)/5),'fm00')
      else  col1[cv(rn)-diff[cv(rn)]]end )
    SQL> WITH t AS
             (SELECT 'a' col1 FROM DUAL UNION ALL
              SELECT 'b' FROM DUAL UNION ALL
              SELECT 'c' FROM DUAL UNION ALL
              SELECT 'd' FROM DUAL UNION ALL
              SELECT 'g' FROM DUAL UNION ALL
              SELECT 'e' FROM DUAL UNION ALL
              SELECT 'f' FROM DUAL UNION ALL
              SELECT 'g' FROM DUAL UNION ALL
              SELECT 'h' FROM DUAL UNION ALL
              SELECT 'i' FROM DUAL UNION ALL
              SELECT 'j' FROM DUAL UNION ALL
              SELECT 'k' FROM DUAL UNION ALL
              SELECT 'l' FROM DUAL UNION ALL
              SELECT 'm' FROM DUAL UNION ALL
              SELECT 'o' FROM DUAL UNION ALL
              SELECT 'p' FROM DUAL UNION ALL
              SELECT 'q' FROM DUAL UNION ALL
              SELECT 'z' FROM DUAL UNION ALL
              SELECT 'z' FROM DUAL UNION ALL
              SELECT 'z' FROM DUAL UNION ALL
              SELECT 'y' FROM DUAL)
         SELECT res
           FROM t
         model
         dimension by( row_number()over(partition by 1 order by rownum) rn)
         measures(col1,cast ( col1 as varchar2(20)) as res, count(1)over(partition by 1) cpt,trunc(rownum/5) diff)ignore nav
         (diff[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
         case when diff[cv(rn)] is present then  diff[cv(rn)]
         else   case when mod(cv(rn),5)=0 then
                diff[cv(rn)-1]+1
                else diff[cv(rn)-1]end
         end,
         res[for rn from 1 to cpt[1]+trunc((cpt[1]+cpt[1]/5)/5) increment 1]=
           case when mod(cv(rn),5)=0 then
                to_char((cv(rn)/5),'fm00')
           else  col1[cv(rn)-diff[cv(rn)]]end )
    SQL> /
    RES
    a
    b
    c
    d
    01
    g
    e
    f
    g
    02
    h
    i
    j
    k
    03
    l
    m
    o
    p
    04
    q
    z
    z
    z
    05
    y
    26 rows selected.
    SQL>
    Edited by: Salim Chelabi on 2009-04-15 13:35
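    For comparison, a simpler sketch of the same requirement without the MODEL clause, assuming the rows can be numbered with ROWNUM in the desired order (here the marker row precedes each group of 5, as in the original request):
    with numbered as (
      select col1, rownum rn from t
    )
    select case when is_marker = 1 then to_char(ceil(rn / 5), 'fm00') else col1 end as res
    from (
      select col1, rn, 0 as is_marker from numbered
      union all
      -- one extra marker row in front of rows 1, 6, 11, ...
      select null, rn, 1 as is_marker from numbered where mod(rn, 5) = 1
    )
    order by rn, is_marker desc;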

  • Need to commit after every 10,000 records inserted?

    What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script?
    DECLARE
       l_max_repa_id x_received_p.repa_id%TYPE;
       l_max_rept_id x_received_p_trans.rept_id%TYPE;
    BEGIN
       SELECT MAX (repa_id)
         INTO l_max_repa_id
         FROM x_received_p
        WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);
       SELECT MAX (rept_id)
         INTO l_max_rept_id
         FROM x_received_p_trans
        WHERE rept_repa_id = l_max_repa_id;
       INSERT INTO x_p_requests_arch
          SELECT *
            FROM x_p_requests
           WHERE pare_repa_id <= l_max_rept_id;
       DELETE FROM x_p_requests
        WHERE pare_repa_id <= l_max_rept_id;
       COMMIT;
    END;

    1006377 wrote:
    we are moving between 5 and 10 million records from the one table to the other table and it takes forever.
    Please could you provide me with a script just to commit after every x amount of records? :)

    I concur with the other responses.
    Committing every N records will slow down the process, not speed it up.
    The fastest way to move your data (and 10 million rows is nothing, we do those sorts of volumes frequently ourselves) is to use a single SQL statement to do an INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement as appropriate).
    If those SQL statements are running slowly then you need to look at what's causing the performance issue of the SELECT statement, and tackle that issue, which may be a case of simply getting the database statistics up to date, or applying a new index to a table etc. or re-writing the select statement to tackle the query in a different way.
    So, deal with the cause of the performance issue, don't try and fudge your way around it, which will only create further problems.
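    As a sketch of that single-statement approach, using the table names from the original script (the APPEND hint requests a direct-path load; treat it as an illustration, not a guaranteed fix):
    INSERT /*+ APPEND */ INTO x_p_requests_arch
    SELECT *
      FROM x_p_requests
     WHERE pare_repa_id <= l_max_rept_id;
    COMMIT; -- a direct-path insert must be committed before the same session queries the table again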

  • Is it possible to automatically add a comma after every 6 characters typed in a text field?

    Hope someone can help me.
    What I basically need help with is how to make Acrobat add a comma after every 6 characters in a text field:
    XXXXXX,YYYYYY, ZZZZZZ etc

    I'm sorry, but I did not understand that (i'm using Acrobat Pro X)
    Am I supposed to go to:
    Text Field Properties > Format > Custom
    and then use Custom Format Script or Custom Keystroke Script?
    I tried both and it did not work.
    And do the Text Field have to be named "chunkSize"?
    Seems like it works. I had to move to the next formfield in order to see the effect.
    Is it possible to make it happen in real time (as you type the comma is inserted?)

  • Avoid Commit after every Insert that requires a SELECT

    Hi everybody,
    Here is the problem:
    I have a table of generator alarms which is populated daily. On a daily basis there are approximately 50,000 rows to be inserted into it.
    Currently I have one month's data in it... approximately 900,000 rows.
    Here is the main problem:
    Before each INSERT command, the whole table is checked to see whether the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked; together these two columns identify a unique record when combined with an AND in the WHERE clause.
    We have also implemented partitioning on this table; it is partitioned on the basis of OccuranceDate, and each partition holds 5 days' data.
    say
    01-Jun to 06 Jun
    07-Jun to 11 Jun
    12-Jun to 16 Jun
    and so on
    26-Jun to 30 Jun
    NOW:
    We have a COMMIT command within the insertion loop, and each row is committed once inserted, making approximately 50,000 commits daily.
    Question:
    Can we commit data after, say, each 500 inserted rows? My real question is: can we query, using SELECT, records which are just inserted but not yet committed?
    A friend told me that you can query records which were inserted in the same connection session but not yet committed.
    Can anyone help?
    Sorry for the long question, but it was needed to make you understand the real issue. :(
    Khalid Mehmood Awan
    khalidmehmoodawan @ gmail.com
    Edited by: user5394434 on Jun 30, 2009 11:28 PM

    Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there they will give suggestions on optimizing it.
    Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
    Also, not committing on time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, in between, or at the end.
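    On the main question: yes, within the same session a SELECT sees that session's own uncommitted changes, so committing less frequently would not break the existence check. A minimal sketch (generator_alarms is a placeholder table name; the column names are from the question):
    INSERT INTO generator_alarms (SiteName, OccuranceDate) VALUES ('SITE-01', SYSDATE);
    -- no COMMIT yet; this query, run in the same session, already sees the new row:
    SELECT COUNT(*)
      FROM generator_alarms
     WHERE SiteName = 'SITE-01';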
    Regards,
    K.

  • Calling Delta Merge in DS after every commit

    Hi Folks,
    I am using delta extraction logic in DS to extract a large table from ECC (50 million rows) to the HANA database. The commits in the DS job have been configured for every 10,000 records. Three questions:
    1) Should I disable the delta merge in the HANA database for this target table prior to the initial load, and then manually perform the delta merge in HANA once the initial load is complete? Is that the right approach? Or
    2) Should I manually perform the delta merge in the DS job to make sure the table is merged after every commit? If yes, how do I call the delta merge command in DS jobs, and how can I do it per commit?
    3) Can I invoke the delta merge in DS as part of the delta extraction logic after the initial load is completed in DS?
    Any advice will definitely be appreciated.
    Thanks,
    -Hari

    Hi Jim
    If your big table requires a merge, AUTOMERGE will pick it up. The mergedog process checks it every 60 seconds, so that should be all right for your requirement.
    If the table doesn't need to be merged, it won't.
    Manually handling the delta merge is a fine-tuning action that is most often neither required nor recommended.
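    If option 1 (merging manually around the initial load only) is chosen anyway, the HANA SQL side could look roughly like this; the schema and table names are placeholders, and this is a sketch rather than a recommendation:
    ALTER TABLE "MYSCHEMA"."MYTABLE" DISABLE AUTOMERGE;
    -- ... run the initial load from the DS job ...
    MERGE DELTA OF "MYSCHEMA"."MYTABLE"; -- one manual merge after the load
    ALTER TABLE "MYSCHEMA"."MYTABLE" ENABLE AUTOMERGE;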
    - Lars

  • COMMIT after every 10000 rows

    I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace size!
    create or replace procedure delete_rows(v_days number)
    is
       l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
       where_cond VARCHAR2(32767);
    begin
       where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' ))';
       l_sql_stmt := l_sql_stmt || where_cond;
       IF v_days IS NOT NULL THEN
           EXECUTE IMMEDIATE l_sql_stmt;
       END IF;
    end;
    I think I can use a cursor and commit at every 10,000 in %ROWCOUNT, but even before posting the thread, I feel I will get bounced! ;-)
    Please help me out in this!
    Cheers
    Sarma!

    Hello
    In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
    SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
    Table created.
    SQL>
    SQL> select count(*) from dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
      COUNT(*)
         35726
    SQL>
    SQL> DECLARE
       ln_DelSize   NUMBER := 10000;
       ln_DelCount  NUMBER;
    BEGIN
       LOOP
          DELETE FROM dt_test_delete
          WHERE last_ddl_time < SYSDATE - 100
          AND rownum <= ln_DelSize;
          ln_DelCount := SQL%ROWCOUNT;
          dbms_output.put_line(ln_DelCount);
          EXIT WHEN ln_DelCount = 0;
          COMMIT;
       END LOOP;
    END;
    /
    10000
    10000
    10000
    5726
    0
    PL/SQL procedure successfully completed.
    SQL>
    HTH
    David
    Message was edited by:
    david_tyler

  • Commit for every 1000 records in INSERT INTO ... SELECT statement

    Hi, I've the following INSERT INTO ... SELECT statement.
    The SELECT statement (which has joins) returns around 60 million (6 crore) rows of data. I need to insert that data into another table.
    Please suggest me the best way to do that.
    I'm using the INSERT INTO ... SELECT statement, but I want to use a commit statement for every 1000 records.
    How can I achieve this?
    insert into emp_dept_master
    select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
       from emp e , dept d
      where e.deptno = d.deptno       ------ how to use commit for every 1000 records
    Thanks

    Smile wrote:
    Hi I've the following INSERT into SELECT statement.
    The SELECT statement (which has joins) has around 60 million (6 crore) rows of data. I need to insert that data into another table.

    Does the other table already have records, or is it empty?
    If it's empty then you can drop it and create it as:
    create table your_another_table
    as
    <your select statement that returns 60000000 records>

    Please suggest me the best way to do that.
    I'm using the INSERT into SELECT statement, but I want to use a commit statement for every 1000 records.

    That is not the best way. Frequent commits may lead to ORA-01555 errors.
    [url http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923]A nice article from AskTom on this one
    How can i achieve this ..
    insert into emp_dept_master
    select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
    from emp e , dept d
    where e.deptno = d.deptno       ------ how to use commit for every 1000 records .
    It depends on the reason behind you wanting to split your transaction into small chunks. Most of the time there is no good reason for that.
    If you are trying to improve performance by doing so, you are mistaken; it will only degrade the performance.
    To improve the performance you can use the APPEND hint in the insert, you can try PARALLEL DML, and if you are on 11g and above you can use [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH]DBMS_PARALLEL_EXECUTE to break your insert into chunks and run it in parallel.
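    A rough sketch of those suggestions applied to the statement from the question (the degree of parallelism here is an arbitrary illustration, not a tuned value):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(edm, 4) */ INTO emp_dept_master edm
    SELECT e.ename, d.dname, e.empno, e.empno, e.sal
      FROM emp e, dept d
     WHERE e.deptno = d.deptno;
    COMMIT; -- one commit at the end; a direct-path insert must be committed before the table is queried again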
    So if you can tell the actual objective we could offer some help.

  • After I covered 250 miles I got the automated congratulations, but now after every run I get the recorded message again saying I just covered another 250 miles. Can I reset or stop it without completely resetting everything?

    I run with the Nano and shoe sensor. After I covered 250 miles I got the automated congratulations, but now after every run I get the recorded message again saying I just covered another 250 miles. Can I reset or stop it without completely resetting everything?

    Same issue here.  This is really disappointing.  I used to look forward to the milestone messages after each run, especially when I was surprised by a celebrity voice.  Now, it's the same thing every single time, "Congratulations on another 250 miles.  Way to go!" or something along those lines.  I was proud of the 250 mile mark, but please...I don't want to hear it every time! 
    I hope there's some movement on this issue.
    ~ Heather

  • Backflush to ECC after every operation required... Is it possible?

    Hi,
    I have 10 operations in my routing and need to consume inventory at operations 2, 4 and 9 (say). My problem is: can I see the ECC inventory backflushed after every operation?
    Component A inventory is 10 in ECC, and in SAP ME it is also 10.
    Example: in operation 2 I consume one, so the SAP ME inventory should become 9 and the ECC inventory should also become 9 at the same time.
    Backflush after the work order/SFC is complete is OK, but what I need is backflush after every operation. Is it possible?
    Regards
    Suhas

    Hi Jay
    I would think you could accomplish this by configuring some variables. You would need one variable to store a total of the others. You would configure the other variables to store a zero initially, then a one when clicked. The buttons would trigger an advanced action that would set the variable to one, then check the tally. Perhaps reveal a hidden button after all buttons had been clicked. The buttons would simply jump to different slides. You wouldn't insert buttons on the slides so the user would be forced to view them in full.
    I have somewhat of an example up at the link below.
    Click here to view
    My example forces the user to view four slidelets in any order before allowing the next slide to be reached.
    Hopefully this will help you in some way... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks

  • How can I use a page break for every 5 records in SAP Scripts?

    After every 5 records I have to go to a new page. What is the procedure to do this? If possible, send me the code.

    Hi John..
    this is the way..
    IN THE PRINT PROGRAM...
    DATA : V_MOD TYPE I.
    LOOP AT itab.
       CALL FUNCTION 'WRITE_FORM'.
       V_MOD = SY-TABIX MOD 5.
       IF V_MOD = 0. " page break after every 5th record
           CALL FUNCTION 'CONTROL_FORM'
             EXPORTING
                COMMAND = 'NEW-PAGE'.
       ENDIF.
    ENDLOOP.
    Reward if helpful.

  • Temporary performance degradation after inserting many records

    I run into the following performance problem:
    I'm accessing Oracle from Java (JDBC).
    2 tables:
    DISPLAYS
    PAYLOADINFO
    PAYLOADINFO contains a foreign key to DISPLAYS.
    A process writes 100000 records into PAYLOADINFO in a single transaction. The transaction is committed. Each of the records places the SAME id in the foreign key (i.e. all payload info originates from the same display).
    After commit succeeds, a query (on other tables referencing the DISPLAY table) is launched.
    1) With the foreign key, queries become VERY slow for a while. After this 'while', queries get back up to speed.
    2) with NO foreign key linking the two tables, subsequent queries keep their original performance.
    I read somewhere that this could be caused by unbalanced indexing ? Is this the case ? Where should I look for a solution ?
    best regards,
    Geert

    Can you define "a while"? Are we talking minutes? hours? days?
    When are statistics gathered in the database?
    Do you see a change in the query plan? Or do you see other tasks consuming system resources?
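    If stale optimizer statistics turn out to be the cause, a first step worth trying (a sketch; PAYLOADINFO is the table name from the post) is to gather statistics right after the bulk insert:
    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
          ownname => USER,
          tabname => 'PAYLOADINFO',
          cascade => TRUE); -- cascade => TRUE also refreshes the index statistics
    END;
    /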
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
