Incremental commit

Hi All,
We have a table with more than 64 lakh (6.4 million) records, and a direct delete causes buffer problems. So we planned a BULK COLLECT ... FORALL delete with a LIMIT of 50000.
Is there an alternative way to do this using Oracle 10g features?
Regards,
Umesh

The fastest way to delete is a truncate.
Consider moving all the data that should be kept into a separate table, then truncating the original table, then moving everything back.
Also, I wonder if the buffer problem is connected to using PL/SQL for the deletion. In pure SQL you might hit other problems, but usually not a buffer limit.
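When most of the 6.4 million rows are to go and only a fraction kept, the copy-out/truncate/copy-back pattern looks roughly like the sketch below. The table name big_table and the keep-predicate are illustrative, not from the thread:

```sql
-- Copy the rows to keep (the predicate is a placeholder for your own criteria).
CREATE TABLE big_table_keep AS
  SELECT * FROM big_table WHERE status = 'ACTIVE';

-- Truncate is near-instant and generates almost no undo/redo.
TRUNCATE TABLE big_table;

-- Direct-path insert the keepers back, then drop the scratch table.
INSERT /*+ APPEND */ INTO big_table
  SELECT * FROM big_table_keep;
COMMIT;
DROP TABLE big_table_keep;
```

Note that TRUNCATE is DDL and commits implicitly, so make sure no other session depends on the in-between state.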

Similar Messages

  • Using forall and bulkcollect together

    Hey group,
    I am trying to use BULK COLLECT and FORALL together.
    I have a bulk collect on 3 columns, and the insert is on more than 3 columns. Can anybody tell me how to reference those collection objects from the bulk collect statement?
    You can see the procedure below; I have highlighted the things I am trying.
    Please let me know if I am not clear enough.
    PROCEDURE do_insert
    IS
    PROCEDURE process_insert_record
    IS
    CURSOR c_es_div_split
    IS
    SELECT div_id
    FROM zrep_mpg_div
    WHERE div_id IN ('PC', 'BP', 'BI', 'CI', 'CR');
    PROCEDURE write_record
    IS
    CURSOR c_plan_fields
    IS
    SELECT x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
    FROM cp_div_xref@dm x, comp_plan@dm cp
    WHERE x.comp_plan_id = cp.comp_plan_id
    AND x.div = v_div
    AND x.sorg_cd = v_sorg_cd
    AND x.comp_plan_yr = TO_NUMBER (TO_CHAR (v_to_dt, 'yyyy'));
    TYPE test1 IS TABLE OF c_plan_fields%ROWTYPE
    INDEX BY BINARY_INTEGER;
    test2 test1;
    BEGIN -- write_record
    OPEN c_plan_fields;
    FETCH c_plan_fields bulk collect INTO test2;
    CLOSE c_plan_fields;
    ForAll X In 1..test2.last
    INSERT INTO cust_hier
    (sorg_cd, cust_cd, bunt, --DP
    div,
    from_dt,
    to_dt,
    cust_ter_cd, cust_rgn_cd, cust_grp_cd,
    cust_area_cd, sorg_desc, cust_nm, cust_ter_desc,
    cust_rgn_desc, cust_grp_desc, cust_area_desc,
    cust_mkt_cd, cust_mkt_desc, curr_flag,
    last_mth_flag, comp_plan_id, comp_plan_cd,
    comp_plan_nm, asgn_typ, lddt)
    VALUES (v_sorg_cd, v_cust_cd, v_bunt, --DP
    v_div,
    TRUNC (v_from_dt),
    TO_DATE (TO_CHAR (v_to_dt, 'mmddyyyy') || '235959',
    'mmddyyyyhh24miss'),
    v_ter, v_rgn, v_grp,
    v_area, v_sorg_desc, v_cust_nm, v_cust_ter_desc,
    v_rgn_desc, v_grp_desc, v_area_desc,
    v_mkt, v_mkt_desc, v_curr_flag,
    v_last_mth_flag, test2(x).comp_plan_id,test2(x).comp_plan_cd,
    test2(x).comp_plan_nm, v_asgn_typ, v_begin_dt);
    v_plan_id := 0;
    v_plan_cd := 0;
    v_plan_nm := NULL;
    v_out_cnt := v_out_cnt + 1;
    IF doing_both
    THEN
    COMMIT;
    ELSE
    -- commiting v_commit_rows rows at a time.
    IF v_out_cnt >= v_commit_cnt
    THEN
    COMMIT;
    p.l ( 'Commit point reached: '
    || v_out_cnt
    || 'at: '
    || TO_CHAR (SYSDATE, 'mm/dd hh24:mi:ss'));
    v_commit_cnt := v_commit_cnt + v_commit_rows;
    END IF;
    END IF;
    END write_record;

    Ugly code.
    Bulk processing does what in PL? One and one thing only. It reduces context switching between the PL and SQL engines. That is it. Nothing more. It is not magic that increases performance. And there is a penalty to pay for the reduction in context switching - memory. Very expensive PGA memory.
    To reduce the context switches, bigger chunks of data are passed between the PL and SQL engines. You have coded a single fetch for all the rows from the cursor. All that data collected from the SQL engine has to be stored in the PL engine. This requires memory. The more rows, the more memory. And the memory used is dedicated non-shared server memory. The worse kind to use on a server where resources need to be shared in order for the server to scale.
    Use the LIMIT clause. That controls how many rows are fetched. And thus you manage just how much memory is consumed.
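    The LIMIT clause slots into the fetch like this (a sketch reusing the cursor and collection names from the post; the array size of 1000 is an arbitrary choice, and the column list is trimmed for brevity):

    ```sql
    OPEN c_plan_fields;
    LOOP
      -- fetch at most 1000 rows per round trip, capping PGA consumption
      FETCH c_plan_fields BULK COLLECT INTO test2 LIMIT 1000;
      EXIT WHEN test2.COUNT = 0;
      FORALL x IN 1 .. test2.COUNT
        INSERT INTO cust_hier (comp_plan_id, comp_plan_cd, comp_plan_nm)
        VALUES (test2(x).comp_plan_id, test2(x).comp_plan_cd, test2(x).comp_plan_nm);
    END LOOP;
    CLOSE c_plan_fields;
    ```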
    And why the incremental commit? What do you think you are achieving with that? Except consuming more resources by doing more commits.. not to mention risking data integrity as this process can now fail and result in only partial changes. And only changing some of the rows when you need to change all the rows is corrupting the data in the database in my book.
    Also, why use PL at all? Surely you can do an INSERT INTO <table1> SELECT .. FROM <table2>,<table3> WHERE ...
    And with this, you can also use parallel processing (DML). You can use direct path inserts. You do not need a single byte of PL engine code or memory. You do not have to ship data between the PL and SQL engines. What you now have is fast performing code that can scale.
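    The set-based version being described might be sketched as follows; the column lists are abbreviated and the parallel degree of 4 is illustrative:

    ```sql
    ALTER SESSION ENABLE PARALLEL DML;

    -- One statement: no PL/SQL engine, no collections, direct-path load.
    INSERT /*+ APPEND PARALLEL(cust_hier 4) */ INTO cust_hier
           (sorg_cd, div, comp_plan_id, comp_plan_cd, comp_plan_nm)
    SELECT x.sorg_cd, x.div, x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
    FROM   cp_div_xref@dm x
    JOIN   comp_plan@dm cp ON cp.comp_plan_id = x.comp_plan_id;

    COMMIT;
    ```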

  • "execute aggregation process" fails if views already exist?

    After loading data with the following command...
    import database $5.$6 data
    from load_buffer with buffer_id 1
    add values create slice;
    I am attempting to generate aggregated views using the following command...
    execute aggregate process
    on database $app_name.$db_name
    stopping when total_size exceeds 1.5
    enable alternate_rollups;
    This command fails if I already have a set of views on the database. I was hoping that this command would be able to add to the existing set of aggregated views realizing that there were no views set up for the new slice that was just generated.
    Is there a way (with MAXL) to build aggregated views on a database that only creates views for a new slice without going through the process of defining my own views?
    Thanks...
    Bill

    The views are automatically updated when you create the new slice, no further processing is required.
    From the DBAG:
    You can incrementally commit the data load buffer to an aggregate storage database to create a slice. After loading the new slice into the database, Essbase creates all necessary views on the slice (such as aggregate views) before the new data is visible to queries.

  • Sequence value incrementing by one after commit

    I am using the following code to insert a sequence number into a form field in a data block pre-insert trigger. When I commit the record the trigger fires again and increments the sequence again. So the record I get in the database has a primary key value of 1 more than what was on the form. Any help would be appreciated.
    Joe Merkel
    BEGIN
         SELECT DAD_ID_SEQUENCE.NEXTVAL INTO :DAD_ROSCO_INFO.DAD_ID
         FROM SYS.DUAL;
         :DAD_RO_NOTES.DAD_ID := :DAD_ROSCO_INFO.DAD_ID;
         :DAD_FR_NOTES.DAD_ID := :DAD_ROSCO_INFO.DAD_ID;
    EXCEPTION
         WHEN OTHERS THEN
         MESSAGE('Unable to assign DAD ID');
         RAISE FORM_TRIGGER_FAILURE;
    END;

    I was just talking to a colleague that suggested that. Thanks for your response. I will look into it.

  • How do I set my numbers to automatically add comma separators at thousand increments?

    I make a lot of charts with large numbers and it's time consuming to add all the commas every time. For example 1,234,567 instead of 1234567. I need the formula to work for all sizes of numbers.

    @ kmpam: This is a mockup of a table similar to yours applying the series of queries as per the link "thousands separator":
    @ [Ariel] I tried to use your script but it gives Error Number: 55/Error String: Object does not support the property or method 'lenght'>Line: 5

  • MODEL clause to process a comma separated string

    Hi,
    I'm trying to parse a comma separated string using SQL so that it will return the parsed values as rows;
    eg. 'ABC,DEF GHI,JKL' would return 3 rows;
    'ABC'
    'DEF GHI'
    'JKL'
    I'm thinking that I could possibly use the MODEL clause combined with regular expressions to solve this, as I've already got a bit of SQL which does the opposite, i.e. turning the rows into 1 comma separated string;
    select id, substr( concat_string, 2 ) as string
    from (select 1 id, 'ABC' string from dual union all select 1, 'DEF GHI' from dual union all select 1, 'JKL' from dual)
    model
    return updated rows
    partition by ( id )
    dimension by ( row_number() over (partition by id order by string) as position )
    measures ( cast(string as varchar2(4000) ) as concat_string )
    rules
    upsert
    iterate( 1000 )
    until ( presentv(concat_string[iteration_number+2],1,0) = 0 )
    ( concat_string[0] = concat_string[0] || ',' || concat_string[iteration_number+1] )
    order by id;
    Can anyone give me some pointers how to parse the comma separated string using regexp and create as many rows as needed using the MODEL clause?

    Yes, you could do it without using ITERATE, but FOR ... INCREMENT is pretty much the same loop. A couple of improvements:
    a) there is no need for the CHAINE measure
    b) there is no need for CASE in the RULES clause
    c) NVL can be applied at the measures level
    with t as (select 1 id, 'ABC,DEF GHI,JKL,DEF GHI,JKL,DEF GHI,JKL,DEF,GHI,JKL' string from dual
       union all
        select 2,'MNO' string from dual
        union all
       select 3,null string from dual
        )
    SELECT  id,
             string
      FROM   T
       MODEL
        RETURN UPDATED ROWS
        partition by (id)
        DIMENSION BY (0 POSITION)
        MEASURES(
                 string,
                 NVL(LENGTH(REGEXP_REPLACE(string,'[^,]+','')),0)+1 NB_MOT
                )
        RULES
        (
         string[FOR POSITION FROM  1 TO NB_MOT[0] INCREMENT 1] = REGEXP_SUBSTR(string[0],'[^,]+',1,CV(POSITION))
        )
    SQL> with t as (select 1 id, 'ABC,DEF GHI,JKL,DEF GHI,JKL,DEF GHI,JKL,DEF,GHI,JKL' string from dual
      2     union all
      3      select 2,'MNO' string from dual
      4      union all
      5     select 3,null string from dual
      6      )
      7   SELECT  id,
      8           string
      9    FROM   T
    10     MODEL
    11      RETURN UPDATED ROWS
    12      partition by (id)
    13      DIMENSION BY (0 POSITION)
    14      MEASURES(
    15               string,
    16               NVL(LENGTH(REGEXP_REPLACE(string,'[^,]+','')),0)+1 NB_MOT
    17              )
    18      RULES
    19      (
    20       string[FOR POSITION FROM  1 TO NB_MOT[0] INCREMENT 1] = REGEXP_SUBSTR(string[0],'[^,]+',1,CV(POSITION))
    21      )
    22  /
            ID STRING
             1 ABC
             1 DEF GHI
             1 JKL
             1 DEF GHI
             1 JKL
             1 DEF GHI
             1 JKL
             1 DEF
             1 GHI
             1 JKL
             2 MNO
            ID STRING
             3
    12 rows selected.
    SQL> SY.

  • Error in the procedure while tried to increment the seq

    Hello,
    I tried the following but it is giving errors.
    Please help me with this.
    CREATE OR REPLACE PROCEDURE Seq_inc AS
       vmaxarrec number(10);
       vseq number(10);
          select max(recid) into vmaxarrec from acc_rec;
          select SEQ_ACC_REC.currval into vseq from dual;
        BEGIN
          FOR i IN vseq .. vmaxarrec  LOOP
            select SEQ_ACC_REC.nextval  from dual;
          END LOOP;
    END;
    And it gives the following errors...
    LINE/COL ERROR
    4/7 PLS-00103: Encountered the symbol "SELECT" when expecting one of
    the following:
    begin function package pragma procedure subtype type use
    <an identifier> <a double-quoted delimited-identifier> form
    current cursor
    The symbol "begin" was substituted for "SELECT" to continue.
    11/9 PLS-00103: Encountered the symbol "end-of-file" when expecting
    one of the following:
    begin case declare end exception exit for goto if loop mod
    null pragma raise return select update while with
    LINE/COL ERROR
    <an identifier> <a double-quoted delimited-identifier>
    <a bind variable> << close current delete fetch lock insert
    open rollback savepoint set sql execute commit forall merge
    Thanks

    smile wrote:
    SQL> CREATE OR REPLACE PROCEDURE Seq_inc AS
    2     vmaxarrec number(10);
    3     vseq number(10);
    4       
    5      BEGIN   
    6      select max(recid) into vmaxarrec from acc_rec;
    7      select SEQ_ACC_REC.currval into vseq from dual;
    8       
    9        FOR i IN vseq .. vmaxarrec  LOOP
    10          select SEQ_ACC_REC.nextval  from dual;
    11        END LOOP;
    12   
    13      END ;
    14  /
    Warning: Procedure created with compilation errors.
    SQL> sho err
    Errors for PROCEDURE SEQ_INC:
    LINE/COL ERROR
    10/9     PLS-00428: an INTO clause is expected in this SELECT statement
    I tried with the above correction, and I still get errors.
    It looks to me like you're trying to reset the sequence number to a new starting value. In reality there's very little point in doing this, though I've come across a few test scenarios where it's good to start with the same sequence number each time.
    The logic for changing a sequence to a particular value is along the lines of:
    SQL> select test.nextval from dual;
       NEXTVAL
           125
    SQL> var v_inc number;
    SQL> var v_resetno number;
    SQL> exec :v_resetno := 50;
    PL/SQL procedure successfully completed.
    SQL> exec execute immediate 'select -(test.nextval-:x)-1 from dual' into :v_inc using :v_resetno;
    PL/SQL procedure successfully completed.
    SQL> exec execute immediate 'alter sequence test increment by '||:v_inc;
    PL/SQL procedure successfully completed.
    SQL> select test.nextval from dual;
       NEXTVAL
            49
    SQL> alter sequence test increment by 1;
    Sequence altered.
    SQL> select test.nextval from dual;
       NEXTVAL
            50
    SQL> select test.nextval from dual;
       NEXTVAL
            51
    SQL>
    Note: In your code you are reading the currval of the sequence before you know that you've read the nextval. That will give an error, because currval is only known within the current session after a nextval has been obtained.
    So, as a procedure, you want something like:
    CREATE OR REPLACE PROCEDURE Seq_inc AS
      v_maxarrec number;
      v_inc      number;
      v_seq      number;
    BEGIN   
      select max(recid)+1 into v_maxarrec from acc_rec; -- get the required sequence value
      select -(seq_acc_rec.nextval-v_maxarrec)-1 into v_inc from dual; -- determine the difference
      execute immediate 'alter sequence seq_acc_rec increment by '||v_inc; -- alter the sequence
      select seq_acc_rec.nextval into v_seq from dual; -- query the sequence to reset it
      execute immediate 'alter sequence seq_acc_rec increment by 1'; -- alter the sequence to increment by 1 again
    END;
    (untested)

  • Error in self increment varible value using "FORALL"

    CREATE OR REPLACE PROCEDURE BULK_COLLCT_PASS
    AS
    TYPE VAR_TYP IS VARRAY (32767) OF VARCHAR2 (32767);
    V_DSH_CM_NUMBER VAR_TYP;
    V_DSH_DATE VAR_TYP;
    V_DSH_TIME VAR_TYP;
    V_DSD_CM_NUMBER VAR_TYP;
    V_PLU_CODE VAR_TYP;
    V_DSD_DATE VAR_TYP;
    V_str_id VAR_TYP;
    LN_ITM NUMBER:=0;
    str_id number := 30001;
    CURSOR CUR_DBMG_SAL_HEAD
    IS
    SELECT DSH.CM_NUMBER,D_DSH_CM_DATE, D_DSH_CM_TIME
    FROM DBMG_SAL_HEAD DSH
    WHERE ROWNUM<6;
    BEGIN
    OPEN CUR_DBMG_SAL_HEAD;
    LOOP
    FETCH CUR_DBMG_SAL_HEAD BULK COLLECT
    INTO V_DSH_CM_NUMBER,
    V_DSH_DATE,
    V_DSH_TIME;
    FOR indx IN V_DSH_CM_NUMBER.FIRST .. V_DSH_CM_NUMBER.LAST
    LOOP
    SELECT CM_NUMBER, PLU_CODE,V_DSH_DATE(indx)
    BULK COLLECT
    INTO V_DSD_CM_NUMBER, V_PLU_CODE,V_DSD_DATE
    FROM DBMG_SAL_DETL DSD
    WHERE DSD.CM_NUMBER = V_DSH_CM_NUMBER(indx);
    --block1
    Ln_Itm := 0;
    FOR ind IN 1..V_DSD_CM_NUMBER.COUNT
    loop
    INSERT INTO PC_ALL_TAB
    VALUES(V_DSH_CM_NUMBER(indx),
    V_DSD_DATE(ind),
    V_DSD_CM_NUMBER(ind),
    V_PLU_CODE(ind),
    LN_ITM,
    str_id);
    LN_ITM := LN_ITM +1;
    end loop;
    --block2                 
    END LOOP;
    EXIT WHEN CUR_DBMG_SAL_HEAD%NOTFOUND;
    END LOOP;
    commit;
    CLOSE CUR_DBMG_SAL_HEAD;
    DBMS_OUTPUT.PUT_LINE('COMPLETE..!');
    END ;
    Hi,
    I am using the above code, in which the code between "--block1" and "--block2" increments the ln_itm value by 1 each time,
    so that after completion the output is as below.
    SELECT DSH_CM_NUMBER, LN_ITM FROM PC_ALL_TAB;
    DSH_CM_NUMBER     LN_ITM
    1     4177424     0
    2     4177422     0
    3     4177426     0
    4     4177426     1
    5     4177426     2
    6     4177425     0
    7     4177427     0
    8     4177427     1
    9     4177427     2
    For each repeating value of cm_number it increases by 1.
    But I want to change "--block1 to --block2" to a FORALL. I did this, but I am not getting the incrementing value of ln_itm in the output.
    Kindly help me...
    Code after changing to FORALL:
    CREATE OR REPLACE PROCEDURE BULK_COLLCT_PASS
    AS
    TYPE VAR_TYP IS VARRAY (32767) OF VARCHAR2 (32767);
    V_DSH_CM_NUMBER VAR_TYP;
    V_DSH_DATE VAR_TYP;
    V_DSH_TIME VAR_TYP;
    V_DSD_CM_NUMBER VAR_TYP;
    V_PLU_CODE VAR_TYP;
    V_DSD_DATE VAR_TYP;
    V_str_id VAR_TYP;
    LN_ITM NUMBER:=0;
    str_id number := 30001;
    CURSOR CUR_DBMG_SAL_HEAD
    IS
    SELECT DSH.CM_NUMBER,D_DSH_CM_DATE, D_DSH_CM_TIME
    FROM DBMG_SAL_HEAD DSH
    WHERE ROWNUM<6;
    BEGIN
    OPEN CUR_DBMG_SAL_HEAD;
    LOOP
    FETCH CUR_DBMG_SAL_HEAD BULK COLLECT
    INTO V_DSH_CM_NUMBER,
    V_DSH_DATE,
    V_DSH_TIME;
    FOR indx IN V_DSH_CM_NUMBER.FIRST .. V_DSH_CM_NUMBER.LAST
    LOOP
    SELECT CM_NUMBER, PLU_CODE,V_DSH_DATE(indx)
    BULK COLLECT
    INTO V_DSD_CM_NUMBER, V_PLU_CODE,V_DSD_DATE
    FROM DBMG_SAL_DETL DSD
    WHERE DSD.CM_NUMBER = V_DSH_CM_NUMBER(indx);
    --block1
    /*Ln_Itm := 0;
    FOR ind IN 1..V_DSD_CM_NUMBER.COUNT
    loop
    INSERT INTO PC_ALL_TAB
    VALUES(V_DSH_CM_NUMBER(indx),
    V_DSD_DATE(ind),
    V_DSD_CM_NUMBER(ind),
    V_PLU_CODE(ind),
    LN_ITM,
    str_id);
    LN_ITM := LN_ITM +1;
    end loop; */
    FORALL ind IN 1..V_DSD_CM_NUMBER.COUNT
    INSERT INTO PC_ALL_TAB
    VALUES(V_DSH_CM_NUMBER(indx),
    V_DSD_DATE(ind),
    V_DSD_CM_NUMBER(ind),
    V_PLU_CODE(ind),
    LN_ITM,
    str_id);
    LN_ITM := LN_ITM +1;
    --block2                 
    END LOOP;
    EXIT WHEN CUR_DBMG_SAL_HEAD%NOTFOUND;
    END LOOP;
    commit;
    CLOSE CUR_DBMG_SAL_HEAD;
    DBMS_OUTPUT.PUT_LINE('COMPLETE..!');
    END ;
    o/p :- SELECT DSH_CM_NUMBER, LN_ITM FROM PC_ALL_TAB;
    DSH_CM_NUMBER     LN_ITM
    1     4177424           0
    2     4177422           1
    3     4177426           2
    4     4177426           2
    5     4177426           2
    6     4177425           3
    7     4177427           4
    8     4177427           4
    9     4177427           4
    I need result as below...but using "FORALL"
    DSH_CM_NUMBER     LN_ITM
    1     4177424     0
    2     4177422     0
    3     4177426     0
    4     4177426     1
    5     4177426     2
    6     4177425     0
    7     4177427     0
    8     4177427     1
    9     4177427     2

    Double post
    How to increment value using "FORALL" instead of for loop
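    For reference, the usual way to get the per-group counter without the row-by-row loop is to compute it in the detail query itself with ROW_NUMBER(), bulk collect it into its own collection, and let FORALL bind it like any other column. A sketch against the column names from the post (the ORDER BY column is an assumption):

    ```sql
    SELECT cm_number,
           plu_code,
           -- restarts at 0 for each cm_number, matching the desired output
           ROW_NUMBER() OVER (PARTITION BY cm_number ORDER BY plu_code) - 1 AS ln_itm
    FROM   dbmg_sal_detl;
    ```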

  • Performance issue in the Incremental ETL

    Hi Experts,
    we have a performance issue: the task "Custom_SIL_OvertimeFact" is taking more than 55 mins.
    This task reads the .csv file coming from the business every 2 weeks, with a few rows of data updated and appended.
    Every day this task runs and scans the whole 70,000 records, when in reality we just have 4-5 records updated/appended.
    Is there a way to shrink the time taken by this task?
    When the record in the .csv file comes only once a month, why does the mapping have to scan this many records on the daily incremental load?
    Please provide me some suggestions.
    best regards,
    Kam

    The reason you have to scan all the records is that you do not have an identifier to find what has been updated.
    Because it's an insert/update job, it would be using a normal load.
    Truncating the table and reloading it would help you in terms of performance, since then you would be able to use a bulk load.
    If there is any reason you can't do that, then you can also look at the update strategy and check that all the fields used to determine whether a particular transaction is insert/update/existing have an index defined on them. Apart from this, you should try to minimize the number of indexes on this table, or make sure that you are dropping them while the load is running and recreating them once it has finished.
    Also check your commit point and see if you can bump it up.
    Edited by: Sid_BI on Jan 28, 2010 4:03 PM

  • How to commit primary key in a multi level form

    Hi ,
    I am using Jdeveloper 10.1.2.3. ADF - Struts jsp application.
    I have an application that has multiple levels before it finally commits, say:
    Level 1 - enter name , address, etc details -- Master Table Employee
    Level 2 - Add Education details -- Detail Table Education
    Level 3 - Experience -- Detail Table Experience.
    Level 4 - adding a Approver -- Detail Table ApplicationApproval
    In all this from Level 1 I generate a document number which is the primary key that links all these tables.
    Now if User A starts Level 1 and moves to Level 2, he gets document no = 100, and then User B starts Level 1 and also gets document no = 100, because no commit has been executed.
    I have noticed that the system crashes if User B calls vo.validate().
    How can I handle this case, as the doc no is the only primary key?

    Hi,
    This is what my department has been doing since before I joined, and it has been working in our multi-user environment for many years.
    We have a table called DOC_SRNO which holds a row with our starting docno and the next number in the running sequence. We have a procedure that returns the next number to the calling application and increments the next number by 1 in the table, and the final commit in the application commits all of this.
    I am not sure how this was working so far, but each of those applications was for different employees; I am assuming this is how it worked.
    Now the application I am working on has no distinct value, so two users could generate the same docno and proceed.
    I will try the DB sequence, but here is what I tried: I call the next number from DOC_SRNO and commit this table update, then proceed with this docno, so both users get different docnos.
    But my session crashes when I go to the next level to insert into the detail table of my multi-level form. When I try to get the current row from the VO which is in context, it crashes.
    Here are the steps.
    Three tables: voMainTable1, voDetailTable1 and voDetailTable2.
    voMainTable1 on create row 1: I generate a new docno - post changes.
    voMainTable1 on create row 2: I generate another docno - post changes.
    Set voMainTable1 in context.
    Now I call voDetailTable1, and to get the docno to join master and detail, I try voMainTable1.getCurrentRow. Here it crashes.
    How can I avoid this?

  • How to count no of commas in a text

    Hi, I have one input/output field in which a mail-id will be entered.
    Now I want to count the number of commas in that text, so that I can know the number of mail-ids entered.

    Hi Pramila,
    You can proceed based on the logic below (a WHILE ... ENDWHILE over the string, incrementing a count each time the character is a comma).
    say data = 'abc,der,haf,kaj,adgdrhgtedh'.
       string_length = strlen( data ).
       chpos_curr = 0.
          WHILE chpos_curr < string_length.
    *     Read the data one character at a time
            gv_char = data+chpos_curr(1).
            IF gv_char = ','.
              count = count + 1.
            ENDIF.
            chpos_curr = chpos_curr + 1.
          ENDWHILE.
    Please see if this works.
    Regards
    MV
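    For comparison, the same count needs no loop at all in Oracle SQL (the regexp variant of this trick appears in the MODEL-clause thread above): commas = total length minus the length with commas removed.

    ```sql
    SELECT LENGTH('abc,der,haf,kaj') - LENGTH(REPLACE('abc,der,haf,kaj', ',')) AS commas
    FROM   dual;  -- 3
    ```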

  • Retrieve new row's auto-increment key from dataprovider

    ** cross posted on SDN JSC Forum and Netbeans J2EE Nabble forum **
    I have a page that is bound to a MySQL table rowset or, alternately, a new (append()'ed) row. The row's primary key is a MySQL auto-increment Integer (translates to a Long). After commit()'ing the new row to the database, how would I then get the new row's ID (fieldname "ID")? If I refresh() the dataprovider and try to get the value, an error is thrown.
    Some background: The dataprovider's backing rowset has a query parameter for the ID field. I set it to "0" if the page is creating a new row or I set it to the unique ID if the page is displaying an existing row. After the new row is committed, I need the new ID set the query parameter and synch up related rows in different tables that are keyed off the ID in this (master) table. I'd like to do this without "guessing" what the ID would be in advance as that method isn't foolproof in heavy usage.
    Additionally, it strikes me as a useful workaround if the unique ID was a UUID (mySQL UUID() function) that does not run the risk of any concurrency issues. Has anyone used this solution in VWP? How would I make the call for the underying DB (in this case MySQL, but could be anything) to generate the UUID? Is the call DB specific or is there a JDBC equivalent?
    UPDATE: never mind the GUID question, I can use the java.rmi.dgc.VMID class to autogenerate unique GUID's for this purpose. Being from a Lotus Notes background, I'm used to the power and flexibility such Unique ID's bring to the app. dev's portfolio.
    Thanks in adv. for any help .
    -Jake

    JSF together with JBoss Seam offers some real good possibilities to improve developer productivity. Right now, with JSF you can create forms where fields are persistent (saved in a database), but you have to write code to persist fields. In Notes/Domino, every field you drop in the form is automatically set to persist without writing a single piece of code. JBoss Seam aims to provide the missing glue to tie the JSF beans with business logic (EJB) and persistent layer (JPA). I think tools for Seam are still not mature. I would love to see JSC/VWB utilizing Seam. I know there is a NetBean plugin for Seam but it was written for old NetBeans (not VWP or JSC), so it doesn't work with the visual web pack where you can drag and drop JSF components.

  • Commit after 10000 rows

    Hi
    I am inserting around 2.5 million records in a conversion project.
    Let me know how I can commit after every 10000 rows. Please also tell me whether I can use bulk insert or bulk bind, because I have never used them. Please resolve my problem.
    Thanks
    Madhu

    As Sundar said, per the link from Tom you are better off not committing in the loop, otherwise it can give you a snapshot-too-old error.
    Still, if you want to:
    1. Set a counter to 0: ct number := 0;
    Increment the counter in the loop: ct := ct + 1;
    IF ct = 10000 THEN
    COMMIT;
    ct := 0;
    END IF;
    2. You can also use BULK COLLECT and FORALL, and commit.
    But still follow the thread as per Tom.
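    Option 2 above could be sketched like this, assuming placeholder tables src_table and dest_table with identical column layouts:

    ```sql
    DECLARE
      CURSOR c_src IS SELECT * FROM src_table;
      TYPE t_rows IS TABLE OF src_table%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO dest_table VALUES l_rows(i);
        COMMIT;  -- one commit per 10,000-row batch (Tom's snapshot-too-old caveat still applies)
      END LOOP;
      CLOSE c_src;
    END;
    /
    ```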

  • Commit after every three UPDATEs in CURSOR FOR loop

    DB Version: 11g
    I know that experts in here despise the concept of COMMITing inside loop.
    But most of the UPDATEs being fired by the code below are updating around 1 million records and it is breaking our UNDO tablespace.
    begin
    for rec in
          (select owner,table_name,column_name 
          from dba_tab_cols where column_name like 'ABCD%' and owner = p_schema_name)
          loop
            begin
    execute immediate 'update '||rec.owner||'.'||rec.table_name||' set '||rec.column_name||' = '''||rec.owner||'''';
            end;
          end loop;
    end;
    We are not expecting an ORA-01555 error, as these are just batch updates.
    I was thinking of implementing something like
    FOR i IN 1..myarray.count
    LOOP
        DBMS_OUTPUT.PUT_LINE('event_key at ' || i || ' is: ' || myarray(i));
        INSERT INTO emp
        (
            empid,
            event_id,
            dept,
            event_key
        )
        VALUES
        (
            v_empid,
            3423,
            p_dept,
            myarray(i)
        );
        if (MOD(i, p_CommitFreq) = 0)  -- when the loop counter is exactly divisible by p_CommitFreq, COMMIT
        then
            commit;
        end if;
    END LOOP;
    (Found in an OTN thread)
    But i don't know how to access the loop counter value in a CURSOR FOR loop.

    To be fair, what is really despised is code that takes an operation that could have been performed in a single SQL statement and steps through it in the slowest possible way, committing pointlessly as it goes along (exactly like the example you found). Your original version doesn't do that - it looks more like some sort of one-off migration where you have to set every value of every column that matches some naming standard pattern to a constant. If that's the case, and if there are huge volumes involved and you can't simply add a bit more undo, then I don't see much wrong with committing after each update, especially if you track how far you've got so you can restart cleanly if it fails.
    If you really want an incrementing counter in an unnamed cursor, apart from the explicit variable others have suggested, you could add rownum to the cursor (alias it to something that isn't an Oracle keyword), although it could complicate the ORDER BY that you might be considering for the restart logic. You could have that instead of the redundant 'owner' column in your example which is always the same as the constant p_schema_name.
    Another approach would be to keep track of SQL%ROWCOUNT after each update by adding it to a variable, and commit when the total number of rows updated so far reaches, say, a million.
    Could the generated statement use a WHERE clause, or does it really have to update every row in every table it finds?
    Could there be more than one column to update per table? If so it might be worth generating one multi-column update statement per table, although it'll complicate things a bit.
    btw you don't need the inner 'begin' and 'end' keywords, and whoever supplied the MOD example you found should know that three or four spaces usually make a good indent and you don't need brackets around IF conditions.
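    The SQL%ROWCOUNT suggestion, keeping the shape of the original block, might look like this; the one-million threshold and the bind-variable usage are illustrative:

    ```sql
    declare
      l_done pls_integer := 0;
    begin
      for rec in (select table_name, column_name
                  from   dba_tab_cols
                  where  column_name like 'ABCD%' and owner = p_schema_name)
      loop
        -- a bind variable also sidesteps quoting the literal in the generated SQL
        execute immediate 'update '||p_schema_name||'.'||rec.table_name||
                          ' set '||rec.column_name||' = :v' using p_schema_name;
        l_done := l_done + sql%rowcount;
        if l_done >= 1000000 then  -- commit roughly every million rows updated
          commit;
          l_done := 0;
        end if;
      end loop;
      commit;
    end;
    ```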

  • Commit after every 1000 records

    Hi dears,
    I have to update or insert around 1 lakh (100,000) records every day on an incremental basis.
    While doing it, the commit happens only after completing all the records. In case of some problem in between, all my processed records get rolled back.
    I need to commit after every frequency of records, say 1000 records.
    Does anyone know how to do it?
    Thanks in advance
    Regards
    Raja

    Raja,
    There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk-mode mappings. Bulk-mode mappings commit according to the bulk size (which is also a configuration setting of the mapping).
    When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
    If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values are different, Bulk Size overrides the Commit Frequency and Warehouse Builder implicitly performs a commit for every bulk size.
    Regards,
    Ilona
