Update Statement against 1.4 million rows

Hi,
I am trying to execute an UPDATE statement against a table with over a million rows in it.
NAME          Null?     Type
------------  --------  --------------
PR_ID         NOT NULL  NUMBER(12,0)
PR_PROP_CODE  NOT NULL  VARCHAR2(180)
VALUE                   CLOB(4000)
SHRT_DESC               VARCHAR2(250)
VAL_CHAR                VARCHAR2(500)
VAL_NUM                 NUMBER(12,0)
VAL_CLOB                CLOB(4000)
UNIQUE_ID               NUMBER(12,0)
The update I am trying to do is to take the column VALUE and, based on some parameters, update one of three columns. When
I run the SQL it just sits there with no error. I gave it 24 hours before killing the process.
UPDATE PR.PR_PROP_VAL PV
SET PV.VAL_CHAR = (
SELECT a.value_char FROM (
select
ppv.unique_id,
CASE ppv.pr_prop_code
WHEN 'BLMBRG_COUNTRY' THEN to_char(ppv.value)
WHEN 'BLMBRG_INDUSTRY' THEN to_char(ppv.value)
WHEN 'BLMBRG_TICKER' THEN to_char(ppv.value)
WHEN 'BLMBRG_TITLE' THEN to_char(ppv.value)
WHEN 'BLMBRG_UID' THEN to_char(ppv.value)
WHEN 'BUSINESSWIRE_TITLE' THEN to_char(ppv.value)
WHEN 'DJ_EUROASIA_TITLE' THEN to_char(ppv.value)
WHEN 'DJ_US_TITLE' THEN to_char(ppv.value)
WHEN 'FITCH_MRKT_SCTR' THEN to_char(ppv.value)
WHEN 'ORIGINAL_TITLE' THEN to_char(ppv.value)
WHEN 'RD_CNTRY' THEN to_char(ppv.value)
WHEN 'RD_MRKT_SCTR' THEN to_char(ppv.value)
WHEN 'REPORT_EXCEP_FLAG' THEN to_char(ppv.value)
WHEN 'REPORT_LANGUAGE' THEN to_char(ppv.value)
WHEN 'REUTERS_RIC' THEN to_char(ppv.value)
WHEN 'REUTERS_TITLE' THEN to_char(ppv.value)
WHEN 'REUTERS_TOPIC' THEN to_char(ppv.value)
WHEN 'REUTERS_USN' THEN to_char(ppv.value)
WHEN 'RSRCHDIRECT_TITLE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_TITLE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'SUMMIT_TOPIC' THEN to_char(ppv.value)
WHEN 'SUMNET_EMAIL_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_EMAIL_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'XPEDITE_TOPIC' THEN to_char(ppv.value)
END value_char
from pr.pr_prop_val ppv
where ppv.pr_prop_code not in
('BLMBRG_BODY','ORIGINAL_BODY','REUTERS_BODY','SUMMIT_FAX_BODY',
'XPEDITE_EMAIL_BODY','XPEDITE_FAX_BODY','PR_DISCLOSURE_STATEMENT', 'PR_DISCLAIMER')
) a
WHERE
a.unique_id = pv.unique_id
AND a.value_char is not null)
Thanks for any help you can provide.
Graham

What about this:
UPDATE pr.pr_prop_val pv
SET    pv.val_char = TO_CHAR(pv.value)
WHERE  pv.pr_prop_code IN ('BLMBRG_COUNTRY', 'BLMBRG_INDUSTRY', 'BLMBRG_TICKER', 'BLMBRG_TITLE', 'BLMBRG_UID', 'BUSINESSWIRE_TITLE',
                           'DJ_EUROASIA_TITLE', 'DJ_US_TITLE', 'FITCH_MRKT_SCTR', 'ORIGINAL_TITLE', 'RD_CNTRY', 'RD_MRKT_SCTR',
                           'REPORT_EXCEP_FLAG', 'REPORT_LANGUAGE', 'REUTERS_RIC', 'REUTERS_TITLE', 'REUTERS_TOPIC', 'REUTERS_USN',
                           'RSRCHDIRECT_TITLE', 'SUMMIT_FAX_BODY_FONT_SIZE', 'SUMMIT_FAX_TITLE', 'SUMMIT_FAX_TITLE_FONT_SIZE',
                           'SUMMIT_TOPIC', 'SUMNET_EMAIL_TITLE', 'XPEDITE_EMAIL_TITLE', 'XPEDITE_FAX_BODY_FONT_SIZE', 'XPEDITE_FAX_TITLE',
                           'XPEDITE_FAX_TITLE_FONT_SIZE', 'XPEDITE_TOPIC')
AND    pv.value IS NOT NULL
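If a single pass generates too much undo for your environment, a batched variant along these lines is possible. This is only a sketch: the restart predicate pv.val_char IS NULL is an assumption, and only works if VAL_CHAR starts out null for the rows being migrated.

```sql
BEGIN
  LOOP
    UPDATE pr.pr_prop_val pv
    SET    pv.val_char = TO_CHAR(pv.value)
    WHERE  pv.pr_prop_code IN ('BLMBRG_COUNTRY', 'BLMBRG_INDUSTRY') -- use the same full IN list as above
    AND    pv.value IS NOT NULL
    AND    pv.val_char IS NULL      -- restart predicate: skips rows already converted
    AND    ROWNUM <= 50000;         -- batch size; tune to your undo capacity
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

The single straight UPDATE above is still the simplest option; only batch like this if undo space is genuinely the constraint.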

Similar Messages

  • How to tune the Update statement for 20 million rows

    Hi,
    I want to update 20 million rows of a table. I wrote the PL/SQL code like this:
    DECLARE
    v1
    v2
    cursor C1 is
    select ....
    BEGIN
    Open C1;
    loop
fetch C1 bulk collect into v1,v2 LIMIT 1000;
exit when v1.count = 0; -- check the collection, not C1%NOTFOUND, or the last partial batch is lost
    forall i in v1.first..v1.last
    update /*+INDEX(tab indx)*/....
    end loop;
    commit;
    close C1;
    END;
    The above code took 24 mins to update 100k records, so for around 20 million records it will take 4800 mins (80 hrs).
    How can I tune the code further ? Will a simple Update statement, instead of PL/SQL make the update faster ?
    Will adding few more hints help ?
    Thanks for your suggestions.
    Regards,
    Yogini Joshi

    Hello
You have implemented this update in the slowest possible way. Row-by-row processing in a cursor loop should be an absolute last resort. If you post the SQL in your cursor, there is a very good chance we can re-code it as a single UPDATE statement with a subquery, which will be the fastest possible way to run this. Please remember to use the {noformat}{noformat} tags before and after your code so the formatting is preserved.
David
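As a sketch of the single-statement rewrite David describes (the table and column names here are made up, since the original cursor SQL was not posted):

```sql
-- Hypothetical names: replace target_tab/source_tab/id/new_val with your real objects.
UPDATE target_tab t
SET    t.some_col = (SELECT s.new_val
                     FROM   source_tab s
                     WHERE  s.id = t.id)
WHERE  EXISTS (SELECT 1
               FROM   source_tab s
               WHERE  s.id = t.id);
```

The WHERE EXISTS clause stops rows without a match from being set to NULL, and a single set-based UPDATE avoids the per-batch fetch/bind overhead of the cursor loop.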

  • Get the number of rows affected by update statement

    Hi
    I'm working on an application that uses OCI as interface against an Oracle database.
I'm trying to find out how I can get the number of rows affected by a statement executed with OCIStmtExecute. It is not a SELECT query.
    Best regards,
    Benny Tordrup


  • How can i use multiple row subquery in update statement

Hi All,
I am using a group function in my update statement, and I need to update more than one row, so I need to use a multiple-row
subquery. Please tell me how to use a multiple-row subquery in an update statement.
    For example
When I run it like this, I get an error:
    update dail_att set outtime in (select max(r2.ptime) from temp_att where empcode=r2.enpno and
    barcode=r2.cardn and attend_date=r2.pdate group by enpno,pdate,cardn);
    Pls tell me how to use with example
    Thanks & regards
    Srikkanth.M

Hi,
Thanks for your response. Let me clarify what I need.
First step: fetch the records from a text file and store them in table T1.
Next step: I have separated the text using substring and stored it in different columns of a table.
There are two shifts, 0815 to 1645 and 1200 to 2000.
Here I represents IN and O represents OUT.
    Empno date time inout
    001 01-01-10 0815 I
    002 01-01-10 0815 I
    003 01-01-10 0818 I
    001 01-01-10 1100 0
    001 01-01-10 1130 I
    002 01-01-10 1145 0
    002 01-01-10 1215 I
    004 01-01-10 1200 I
    005 01-01-10 1215 I
    004 01-01-10 1315 O
    004 01-01-10 1345 I
    001 01-01-10 1645 0
    002 01-01-10 1715 0
    003 01-01-10 1718 0
    004 01-01-10 2010 0
    005 01-01-10 2015 0
This is my T1 table; I have taken the data from the text file and stored it in this table. From this table I need to move the data to another table, T2.
T2 looks like this:
    Empno Intime Intrin Introut Outtime Date
    001 0815 1100 1130 1645 01-01-10
    002 0815 1145 1215 1715 01-01-10
    003 0818 1718 01-01-10
    004 1200 1315 1345 2010 01-01-10
    005 1215 2015 01-01-10
This is what I am trying to do, but I have a few problems. Please give me a solution with a good example.
My code is:
    declare
         emp_code varchar2(25);
    in_time varchar2(25);
    out_time varchar2(25);
    Cursor P1 is
    Select REASON,ECODE,READMODE,EMPD,ENPNO,FILL,PDATE,PTIME,INOUT,CARDN,READERN
    From temp_att
    group by REASON,ECODE,READMODE,EMPD,ENPNO,FILL,PDATE,PTIME,INOUT,CARDN,READERN
    ORDER BY enpno,pdate,ptime;
    begin
         for r2 in p1 loop
    declare
    bar_code varchar2(25);
    begin
    select barcode into bar_code from dail_att where empcode=r2.enpno and attend_date=r2.pdate;
    For r3 in (select empcode,empname,barcode,intime,intrin,introut,addin,addout,outtime,attend_date from dail_att)loop
    if r2.inout ='O' then
    update dail_att set outtime =(select max(r2.ptime) from temp_att where empcode=r2.enpno and barcode=r2.cardn and attend_date=r2.pdate group by r2.cardn,r2.enpno,r2.pdate );
    end if;
    end loop;     
    exception
         when no_data_found then
         if r2.inout ='I' then
                   insert into dail_att(barcode,empcode,intime,attend_date)(select r2.cardn,r2.enpno,min(r2.ptime),r2.pdate from temp_att group by r2.cardn,r2.enpno,r2.pdate );
         end if;
    end;
    end loop;
    commit;     
         end;
Please tell me what correction I need to make in the update statement. I have used a subquery with a group function, but it returns only one row, while I need it to return many rows, so I need to use a multiple-row subquery in the update statement.
    Thanks In Advance
    Srikkanth.M
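For what it's worth, a single correlated UPDATE along these lines might replace the loop entirely. This is only a sketch using the column names from the post, and it assumes temp_att.inout = 'O' marks the out-punches:

```sql
UPDATE dail_att d
SET    d.outtime = (SELECT MAX(t.ptime)          -- collapses the multi-row subquery to one value per row
                    FROM   temp_att t
                    WHERE  t.enpno = d.empcode
                    AND    t.cardn = d.barcode
                    AND    t.pdate = d.attend_date
                    AND    t.inout = 'O')
WHERE  EXISTS (SELECT 1                          -- restricts the update to rows that have an out-punch
               FROM   temp_att t
               WHERE  t.enpno = d.empcode
               AND    t.cardn = d.barcode
               AND    t.pdate = d.attend_date
               AND    t.inout = 'O');
```

The key point is that the subquery must be correlated to the row being updated; the version in the post correlates to the loop variable r2 instead, which is why it returns the same value for every row.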

  • Update all rows in a table which has 8-10 million rows take for ever

    Hi All,
    Greetings!
I have to update 8 million rows in a table: basically, I have to reset batch_id to the current batch number. The table contains 8-10 million rows; I have tried a bulk update and it still takes a long time. Below is the table structure:
    sales_stg (it has composite key of product,upc and market)
    =======
    product_id
    upc
    market_id
    batch_id
    process_status
I have to set batch_id to the current batch id (a number) and process_status to zero, for all rows where batch_id = 0.
I tried a bulk update and it takes more than 2 hrs (I limited each update to 1000 rows).
    Any help in this regard.
    Naveen.

    The fastest way will probably be to not use a select loop but a direct update like in William's example. The main downside is if you do too many rows you risk filling up your rollback/undo; to keep things as simple as possible I wouldn't do batching except for this. Also, we did some insert timings a few years ago on 9iR1 and found that the performance curve on frequent commits started to level off after 4K rows (fewer commits were still better) so you could see how performance improves by performing fewer commits if that's an issue.
    The other thing you could consider if you have the license is using the parallel query option.
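A sketch of the direct-update approach described above, using the column names from the post (:cur_batch is a bind variable you supply, and the parallel hint assumes your edition and session settings allow parallel DML):

```sql
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(s 4) */ sales_stg s
SET    s.batch_id       = :cur_batch,
       s.process_status = 0
WHERE  s.batch_id = 0;

COMMIT;
```

One statement, one commit: no fetch loop and no per-batch overhead, at the cost of needing enough undo for the whole update.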

  • How to lock a row before update using UPDATE statement ?

I want to lock a row before updating it using the UPDATE statement. Is there a "reserved word" I can append to an update statement so that the row gets locked? Also, will the lock be released automatically once the update is done, or do I have to release it explicitly?
    how can I do this ?
    any help is greatly appreciated.
    Thanks,
    Srini.

For detailed information, see http://otn.oracle.com/doc/server.815/a67779/ch4l.htm#10900
The lock will be released by COMMIT.
    FOR UPDATE Examples
    The following statement locks rows in the EMP table with clerks located in New York and locks rows in the DEPT table with departments in New York that have clerks:
    SELECT empno, sal, comm
    FROM emp, dept
    WHERE job = 'CLERK'
    AND emp.deptno = dept.deptno
    AND loc = 'NEW YORK'
    FOR UPDATE;
    The following statement locks only those rows in the EMP table with clerks located in New York. No rows are locked in the DEPT table:
    SELECT empno, sal, comm
    FROM emp, dept
    WHERE job = 'CLERK'
    AND emp.deptno = dept.deptno
    AND loc = 'NEW YORK'
    FOR UPDATE OF emp.sal;
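Putting the two pieces together, the usual pessimistic-locking pattern is SELECT ... FOR UPDATE followed by UPDATE ... WHERE CURRENT OF. A sketch against the same EMP table (the 10% raise is just an example):

```sql
DECLARE
  CURSOR c_emp IS
    SELECT empno, sal
    FROM   emp
    WHERE  job = 'CLERK'
    FOR UPDATE OF sal;          -- all matching rows are locked when the cursor opens
BEGIN
  FOR r IN c_emp LOOP
    UPDATE emp
    SET    sal = r.sal * 1.1
    WHERE  CURRENT OF c_emp;    -- updates the row just fetched, no second lookup
  END LOOP;
  COMMIT;                       -- releases all the locks
END;
/
```

There is no explicit unlock call: COMMIT (or ROLLBACK) releases every lock the transaction holds.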

  • Data update for 76 million rows

    Hello All,
We have added a new column to one of our tables and we need to populate it. The problem is that the table has 76 million rows; if I issue the update command to populate it, it will take approximately 120 hours to complete. Please let me know the best way of doing this. Can I run it in batches, applying a commit in between?
    Thanks.

It'd be something like this:
    DECLARE
      V_QRY          VARCHAR2(10000);
      V_COMMIT_RANGE INTEGER;
    BEGIN
  V_COMMIT_RANGE := 10000; -- Change this according to your environment;
                           -- it prevents exceeding the rollback segments.
      LOOP
        V_QRY := 'UPDATE transaction_fact a '
              || '   SET a.start_time = (SELECT TO_DATE ( TO_CHAR (:v_year        , ''fm0000'') '
              || '                                     || TO_CHAR (:v_month       , ''fm00'') '
              || '                                     || TO_CHAR (:v_day_of_month, ''fm00''), '
              || '                                     ''YYYYMMDD'' '
              || '                                    ) '
              || '                       FROM TIME '
              || '                      WHERE TIME.time_id = a.time_id) '
              || ' WHERE a.start_time IS NULL '
              || '   AND ROWNUM <= ' || V_COMMIT_RANGE;
        EXECUTE IMMEDIATE V_QRY
                    USING YEAR
                        , MONTH
                        , day_of_month;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('no content');
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
Assumptions made:
    a) YEAR, MONTH and day_of_month are all variables;
    b) a.start_time has null values for all rows (or all you wish to update);
Although this will do the job, it might not be as performant as you need it to be. So, if your Oracle version allows you to rename tables, you should consider looking into Walter's post (if you haven't already).
    All the best..
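A sketch of the rename approach alluded to above. The column list here is hypothetical (adapt it to the real table), and you must re-create indexes, constraints, and grants on the new table afterwards:

```sql
CREATE TABLE transaction_fact_new NOLOGGING AS
SELECT f.time_id,
       f.some_measure,                          -- ...and all other existing columns (hypothetical)
       COALESCE(f.start_time,
                TO_DATE(TO_CHAR(:v_year,         'fm0000')
                     || TO_CHAR(:v_month,        'fm00')
                     || TO_CHAR(:v_day_of_month, 'fm00'), 'YYYYMMDD')) AS start_time
FROM   transaction_fact f;

-- Then swap the tables in:
-- RENAME transaction_fact TO transaction_fact_old;
-- RENAME transaction_fact_new TO transaction_fact;
```

A one-pass CREATE TABLE AS SELECT generates far less undo/redo than 76 million single-row updates, which is why the rename route is usually faster for a full-table population.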

  • Poor timing for update of a million rows in TimesTen

This is not a scientific test, but I am disappointed in my results.
I created a SALES table in TT 11.2.1.4.0 in the image of the Oracle 11g table sh.SALES; the data also came from SALES. Just make sure that you have a million rows in your version of sh.SALES in Oracle. Spool it out to /var/tmp/abc.log as follows:
    set feedback off
    set pagesize 0
    set verify off
    set timing off
    select prod_id ||','||cust_id||','||to_char(TIME_ID,'YYYY-MM-DD')||','||channel_id||','||PROMO_ID||','||QUANTITY_SOLD||','||AMOUNT_SOLD
    from sys.sales;
    exit
    Now use
    ttbulkcp -i -s "," DSN=ttdemo1 SALES /var/tmp/abc.log
The TT table description is as follows, with no indexes:
    Table HR.SALES:
    Columns:
    PROD_ID NUMBER NOT NULL
    CUST_ID NUMBER NOT NULL
    TIME_ID DATE NOT NULL
    CHANNEL_ID NUMBER NOT NULL
    PROMO_ID NUMBER NOT NULL
    QUANTITY_SOLD NUMBER (10,2) NOT NULL
    AMOUNT_SOLD NUMBER (10,2) NOT NULL
    1 table found.
    (primary key columns are indicated with *)
    The data store has 1024MB PermStore and 512MB TempStore
    [ttimdb1]
    Driver=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/libtten.so
    DataStore=/work/oracle/TimesTen_store/ttimdb1
    PermSize=1024
    TempSize=512
    OracleId=MYDB
    DatabaseCharacterSet=WE8MSWIN1252
    ConnectionCharacterSet=WE8MSWIN1252
Now do a simple UPDATE. Remember it is all table scan!
    Command> set autocommit 0
    Command> showplan 1
    Command> timing 1
    Command> UPDATE SALES SET AMOUNT_SOLD = AMOUNT_SOLD + 10.22;
    Query Optimizer Plan:
    STEP: 1
    LEVEL: 1
    OPERATION: TblLkSerialScan
    TBLNAME: SALES
    IXNAME: <NULL>
    INDEXED CONDITION: <NULL>
    NOT INDEXED: <NULL>
    STEP: 2
    LEVEL: 1
    OPERATION: TblLkUpdate
    TBLNAME: SALES
    IXNAME: <NULL>
    INDEXED CONDITION: <NULL>
    NOT INDEXED: <NULL>
    1000000 rows updated.
    Execution time (SQLExecute) = 76.141563 seconds.
I tried a few times but I still cannot make it go below 60 seconds. Oracle 11g does it better.
    Any help and advice is appreciated.
    Thanks,
    Mich

    Guys,
while running the job and watching UNIX top, I am getting this:
    Mem: 4014080k total, 3729940k used, 284140k free, 136988k buffers
    Swap: 10241396k total, 8k used, 10241388k free, 3283284k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    11428 oracle 19 0 2138m 802m 799m S 17 20.5 0:29.58 ttIsqlCmd
    5559 oracle 18 0 2158m 719m 711m S 11 18.4 7:03.63 timestensubd
    5874 root 16 0 1964 628 548 S 7 0.0 1:06.14 hald-addon-stor
    4910 root 16 0 2444 368 260 S 5 0.0 0:16.69 irqbalance
    17 root 10 -5 0 0 0 S 3 0.0 0:20.78 kblockd/0
So there is memory available and no swap usage. The system does not look overloaded or anything. However, there is a wait somewhere!
    TIME_OF_1ST_CONNECT: Tue Jan 19 12:23:30 2010
    DS_CONNECTS: 11
    DS_DISCONNECTS: 0
    DS_CHECKPOINTS: 0
    DS_CHECKPOINTS_FUZZY: 0
    DS_COMPACTS: 0
    PERM_ALLOCATED_SIZE: 1048576
    PERM_IN_USE_SIZE: 134048
    PERM_IN_USE_HIGH_WATER: 134048
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 19447
    TEMP_IN_USE_HIGH_WATER: 19511
    SYS18: 0
    TPL_FETCHES: 0
    TPL_EXECS: 0
    CACHE_HITS: 0
    PASSTHROUGH_COUNT: 0
    XACT_BEGINS: 6
    XACT_COMMITS: 5
    XACT_D_COMMITS: 0
    XACT_ROLLBACKS: 0
    LOG_FORCES: 0
    DEADLOCKS: 0
    LOCK_TIMEOUTS: 0
    LOCK_GRANTS_IMMED: 148
    LOCK_GRANTS_WAIT: 0
    SYS19: 0
    CMD_PREPARES: 3
    CMD_REPREPARES: 0
    CMD_TEMP_INDEXES: 0
    LAST_LOG_FILE: 240
    REPHOLD_LOG_FILE: -1
    REPHOLD_LOG_OFF: -1
    REP_XACT_COUNT: 0
    REP_CONFLICT_COUNT: 0
    REP_PEER_CONNECTIONS: 0
    REP_PEER_RETRIES: 0
    FIRST_LOG_FILE: 209
    LOG_BYTES_TO_LOG_BUFFER: 120
    LOG_FS_READS: 0
    LOG_FS_WRITES: 0
    LOG_BUFFER_WAITS: 0
    CHECKPOINT_BYTES_WRITTEN: 0
    CURSOR_OPENS: 5
    CURSOR_CLOSES: 5
    SYS3: 0
    SYS4: 0
    SYS5: 0
    SYS6: 0
    CHECKPOINT_BLOCKS_WRITTEN: 0
    CHECKPOINT_WRITES: 0
    REQUIRED_RECOVERY: 0
    SYS11: 0
    SYS12: 1
    TYPE_MODE: 0
    SYS13: 0
    SYS14: 0
    SYS15: 0
    SYS16: 0
    SYS17: 0
    SYS9:
    Command> UPDATE SALES SET AMOUNT_SOLD = AMOUNT_SOLD + 10.22;
    1000000 rows updated.
    Execution time (SQLExecute) = 86.476318 seconds.
    Command> monitor;
    TIME_OF_1ST_CONNECT: Tue Jan 19 12:23:30 2010
    DS_CONNECTS: 11
    DS_DISCONNECTS: 0
    DS_CHECKPOINTS: 0
    DS_CHECKPOINTS_FUZZY: 0
    DS_COMPACTS: 0
    PERM_ALLOCATED_SIZE: 1048576
    PERM_IN_USE_SIZE: 134079
    PERM_IN_USE_HIGH_WATER: 252800
    TEMP_ALLOCATED_SIZE: 524288
    TEMP_IN_USE_SIZE: 19512
    TEMP_IN_USE_HIGH_WATER: 43024
    SYS18: 0
    TPL_FETCHES: 0
    TPL_EXECS: 0
    CACHE_HITS: 0
    PASSTHROUGH_COUNT: 0
    XACT_BEGINS: 13
    XACT_COMMITS: 12
    XACT_D_COMMITS: 0
    XACT_ROLLBACKS: 0
    LOG_FORCES: 6
    DEADLOCKS: 0
    LOCK_TIMEOUTS: 0
    LOCK_GRANTS_IMMED: 177
    LOCK_GRANTS_WAIT: 0
    SYS19: 0
    CMD_PREPARES: 4
    CMD_REPREPARES: 0
    CMD_TEMP_INDEXES: 0
    LAST_LOG_FILE: 246
    REPHOLD_LOG_FILE: -1
    REPHOLD_LOG_OFF: -1
    REP_XACT_COUNT: 0
    REP_CONFLICT_COUNT: 0
    REP_PEER_CONNECTIONS: 0
    REP_PEER_RETRIES: 0
    FIRST_LOG_FILE: 209
    LOG_BYTES_TO_LOG_BUFFER: 386966680
    LOG_FS_READS: 121453
    LOG_FS_WRITES: 331
    LOG_BUFFER_WAITS: 8
    CHECKPOINT_BYTES_WRITTEN: 0
    CURSOR_OPENS: 6
    CURSOR_CLOSES: 6
    SYS3: 0
    SYS4: 0
    SYS5: 0
    SYS6: 0
    CHECKPOINT_BLOCKS_WRITTEN: 0
    CHECKPOINT_WRITES: 0
    REQUIRED_RECOVERY: 0
    SYS11: 0
    SYS12: 1
    TYPE_MODE: 0
    SYS13: 0
    SYS14: 0
    SYS15: 0
    SYS16: 0
    SYS17: 0
    SYS9:
    Command> commit;
    Execution time (SQLTransact) = 0.000007 seconds.

  • Update Statement Updating Too Many Rows

    Hiya,
    I am trying to run this update statement:
    UPDATE PROCEDURE_PRICE p
    SET p.term_date = (SELECT t.tdate
                        FROM PROCEDURE_PRICE_TMP t
                        WHERE p.seq_proc_price = t.zzz_nullterm_seq_proc_price
                        AND p.procedure_code = t.procedure_code
                        AND p.price_schedule = t.price_schedule)
    And it is updating all 600000+ records in the PROCEDURE_PRICE table.
    I am only expecting it to update 60 records, because that is what I find when running this query:
    select p.term_date,
         t.tdate,
         p.seq_proc_price,
         t.zzz_nullterm_seq_proc_price,
         p.procedure_code,
         t.procedure_code,
         p.price_schedule,
         t.price_schedule
    from procedure_price p, procedure_price_tmp t
    WHERE p.seq_proc_price = t.zzz_nullterm_seq_proc_price
    AND p.procedure_code = t.procedure_code
    AND p.price_schedule = t.price_schedule;
    Can't seem to figure out what is wrong, and I know it is something simple.
    Thanks in advance :).

    You probably want something like
    UPDATE PROCEDURE_PRICE p
       SET p.term_date = (SELECT t.tdate
                            FROM PROCEDURE_PRICE_TMP t
                           WHERE p.seq_proc_price = t.zzz_nullterm_seq_proc_price
                             AND p.procedure_code = t.procedure_code
                             AND p.price_schedule = t.price_schedule)
    WHERE EXISTS (SELECT 1
                     FROM PROCEDURE_PRICE_TMP t
                    WHERE p.seq_proc_price = t.zzz_nullterm_seq_proc_price
                      AND p.procedure_code = t.procedure_code
                      AND p.price_schedule = t.price_schedule)
Basically, you want to slap a WHERE EXISTS clause on the UPDATE that ensures that your inner query returns a row.
    Justin

  • Find affected rows when using OCIBindArrayOfStruct for UPDATE statement

    If I run a bulk UPDATE query using OCIBindArrayOfStruct, is there a way I can tell which+ rows successfully updated?
    I have a file of records of which 75% of PKs are already in the table and will be updated, the other 25% aren't and should be inserted later. I want to attempt an UPDATE statement on each entry in the file, using OCIBindArrayOfStruct to execute in bulk, and then check which entries in my bulk array successfully updated. If an array entry isn't successfully updated then I will assume it should be inserted and will store it for a Direct Path load later.
    Many thanks for any advice you can give as I've been trawling through the docs trying to find a solution for ages now.
    Edited by: Alasdair on 15-Oct-2010 02:13

    To get count from DB using dynamic SQL, you might need form to call a DB function that can run a query and return a number.
    ie
    CREATE OR REPLACE FUNCTION get_count(pTable VARCHAR2, pWhere VARCHAR2) RETURN NUMBER IS
   vCount NUMBER;
    BEGIN
       EXECUTE IMMEDIATE
          'SELECT COUNT(1) FROM '||pTable||' WHERE '||pWhere
       INTO vCount;
       RETURN vCount;
END;
Then in your form you do:
   vUpDCnt := get_count(pTable=>'some_table',pWhere=>'...');
Hope this helps.

  • How To Insert a Row If The Update Statement Fails?

    Hi folks,
For some reason I just can't get this working, so I wanted to throw it out to everyone; it's a simple one, so it won't take much of your time.
Going through each row of the cursor, I want to update a table; if the record doesn't exist, then I want to insert it... but nothing is happening.
    IF v_related_enrolement < 1
    THEN
    BEGIN
    -- Record NOT found in Study_block_associations
    -- Insert Student record into ADM_EVER_REGISTERED table
    -- Set Ever_Registered column to 'N'
    UPDATE adm_ever_registered
    SET ever_registered = 'N'
    WHERE personal_id = v_personal_id
    AND appl_code = v_appl_code;
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    INSERT INTO adm_ever_registered VALUES(v_personal_id, v_appl_code, v_choice_no, 'N');
    END;
    ELSE

    It's better to use a merge statement in this case.
Your code doesn't work because of the false assumption that an UPDATE statement which updates no rows fails with a NO_DATA_FOUND exception; in fact it is a successful update statement that simply updates zero rows. So instead of the exception handler, use an "IF SQL%ROWCOUNT = 0 THEN" construct to make your current code work. The best option, as said, is to switch to a MERGE statement.
    Regards,
    Rob.
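A sketch of the SQL%ROWCOUNT version Rob describes, using the names from the posted code:

```sql
UPDATE adm_ever_registered
SET    ever_registered = 'N'
WHERE  personal_id = v_personal_id
AND    appl_code   = v_appl_code;

IF SQL%ROWCOUNT = 0 THEN
  -- no row was updated, so create one instead
  INSERT INTO adm_ever_registered
  VALUES (v_personal_id, v_appl_code, v_choice_no, 'N');
END IF;
```

No exception handler is needed: an UPDATE that touches zero rows succeeds, it just reports zero through SQL%ROWCOUNT.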

  • Best method to update database table for 3 to 4 million rows

    Hi All,
I have 3 to 4 million rows in an Excel file, and we have to load them into a Z-table.
The intent is to load and keep 18 months of history in this table.
What is the best way to load this volume of data into the Z-table from an Excel file?
If it is done from a program, is the best way to use the FM 'GUI_DOWNLOAD' to get those entries into an internal table and then directly do:
INSERT Z_TABLE FROM IT_DOWNLOAD.
I think that with this huge amount of data it will dump.
Please suggest the best possible way, or pseudo code, to insert these entries into the Z_TABLE.
    Thanks in advance..

    Hi,
You get the dump because you are uploading that many records from the Excel file into an internal table in one go.
In this case, do the following:
    data : w_int type i,
             w_int1 type i value 1.
    data itab type standard table of ALSMEX_TABLINE with header line.
    do.
       refresh itab.
   w_int = w_int1.
       w_int1 = w_int + 25000.
       CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
      EXPORTING
        FILENAME                      = <filename>
        I_BEGIN_COL                   = 1
        I_BEGIN_ROW                   = w_int
        I_END_COL                     = 10
        I_END_ROW                     = w_int1
      TABLES
        INTERN                        = itab
    * EXCEPTIONS
    *   INCONSISTENT_PARAMETERS       = 1
    *   UPLOAD_OLE                    = 2
    *   OTHERS                        = 3
    IF SY-SUBRC <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    if itab is not initial.
    write logic to segregate the data from itab to the main internal table and then
    insert records from the main internal table to database table.
    else.
    exit.
    endif.
    enddo.
    Regards,
    Siddarth

  • Update statement takes too long to run

    Hello,
I am running this simple update statement, but it takes too long. It ran for 16 hours before I cancelled it, without finishing. The destination table I am updating has 2.6 million records, but I am only updating 206K of them. If I add ROWNUM < 20 to the update statement, it works just fine and updates the right column with the right information. Do you have any idea what could be wrong with my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
    UPDATE DEV_OCS.DOCMETA IPM
    SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
    FROM [email protected] LKP
    WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND
IPM.XIPMSYS_APP_ID = 2)
    WHERE
    IPM.XIPMSYS_APP_ID = 2;
    Thanks,
    Ilya

    matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behaviour. For example:
    {code}
    SQL> set linesize 132
    SQL> explain plan for
    2 update emp e
    3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL> explain plan for
    2 update emp e
    3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3247731149
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    Remote SQL Information (identified by operation id):
    3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
    16 rows selected.
    SQL>
    {code}
As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
    {code}
    SQL> explain plan for
    2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
    4 /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3568118945
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
    | 1 | UPDATE | EMP | | | | | | |
    | 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
    | 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
    | 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
    | 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
    PLAN_TABLE_OUTPUT
    |* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
    | 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    6 - filter("T"."DEPTNO"=:B1)
    Remote SQL Information (identified by operation id):
    PLAN_TABLE_OUTPUT
    5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
    25 rows selected.
    SQL>
    {code}
I do know the materialize hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
    SY.

  • Is there a way to BULK COLLECT with FOR UPDATE and not lock ALL the rows?

    Currently, we fetch a cursor on a few million rows using BULK COLLECT.
    In a FORALL loop, we update the rows.
What is happening now is that while we run this procedure, another session is running a MERGE statement on the same table, and a DEADLOCK is created between them.
    I'd like to add to the cursor the FOR UPDATE clause, but from what i've read,
    it seems that this will cause ALL the rows in the cursor to become locked.
    This is a problem, as the other session is running MERGE statements on the table every few seconds, and I don't want it to fail with ORA-0054 (resource busy).
    What I would like to know is if there is a way, that only the rows in the
    current bulk will be locked, and all the other rows will be free for updates.
    To reproduce this problem:
    1. Create test table:
    create table TEST_TAB
    ( ID1 VARCHAR2(20),
      ID2 VARCHAR2(30),
      LAST_MODIFIED DATE
    );
    2. Add rows to test table:
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('416208000770698', '336015000385349', to_date('15-11-2009 07:14:56', 'dd-mm-yyyy hh24:mi:ss'));
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('208104922058401', '336015000385349', to_date('15-11-2009 07:11:15', 'dd-mm-yyyy hh24:mi:ss'));
    insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
    values ('208104000385349', '336015000385349', to_date('15-11-2009 07:15:13', 'dd-mm-yyyy hh24:mi:ss'));
    3. Create test procedure:
    CREATE OR REPLACE PROCEDURE TEST_PROC IS
      TYPE id1_typ is table of TEST_TAB.ID1%TYPE;
      TYPE id2_typ is table of TEST_TAB.ID2%TYPE;
      id1_arr id1_typ;
      id2_arr id2_typ;
      CURSOR My_Crs IS
        SELECT ID1, ID2
          FROM TEST_TAB
         WHERE ID2 = '336015000385349'
           FOR UPDATE;
    BEGIN
      OPEN My_Crs;
      LOOP
        FETCH My_Crs BULK COLLECT
          INTO id1_arr, id2_arr LIMIT 1;
        FORALL i IN 1 .. id1_arr.COUNT
          UPDATE TEST_TAB
             SET LAST_MODIFIED = SYSDATE
           WHERE ID2 = id2_arr(i)
             AND ID1 = id1_arr(i);
        dbms_lock.sleep(15);
        EXIT WHEN My_Crs%NOTFOUND;
      END LOOP;
      CLOSE My_Crs;
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR(-20000,
          'Test Update ' || SQLCODE || ' ' || SQLERRM);
    END TEST_PROC;
    4. Create another procedure to check if table rows are locked:
    create or replace procedure check_record_locked(p_id in TEST_TAB.ID1%type) is
      cursor c is
        select 'dummy'
          from TEST_TAB
         where ID2 = '336015000385349'
           and ID1 = p_id
           for update nowait;
      e_resource_busy exception;
      pragma exception_init(e_resource_busy, -54);
    begin
      open c;
      close c;
      dbms_output.put_line('Record ' || to_char(p_id) || ' is not locked.');
      rollback;
    exception
      when e_resource_busy then
        dbms_output.put_line('Record ' || to_char(p_id) || ' is locked.');
    end check_record_locked;
    5. In one session, run the procedure TEST_PROC.
    6. While it's running, in another session, run this block:
    begin
    check_record_locked('208104922058401');
    check_record_locked('416208000770698');
    check_record_locked('208104000385349');
    end;
    7. You will see that all three records are identified as locked.
    Is there a way that only one row will be locked, while the other two remain unlocked?
    Thanks,
    Yoni.
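    A different approach, not from this thread and only available where FOR UPDATE SKIP LOCKED is supported (it is documented from 11g), is to lock and process one small batch at a time, committing between batches so that at any moment at most one batch of rows is locked. A hedged sketch: the LAST_MODIFIED filter is an assumed "still to do" marker, since each commit releases the locks and the loop would otherwise reselect the same rows.

    ```sql
    DECLARE
      TYPE rid_tab IS TABLE OF ROWID;
      l_rids rid_tab;
    BEGIN
      LOOP
        -- ROWNUM limits how many rows this iteration locks;
        -- SKIP LOCKED silently passes over rows another session holds.
        SELECT ROWID
          BULK COLLECT INTO l_rids
          FROM TEST_TAB
         WHERE ID2 = '336015000385349'
           AND LAST_MODIFIED < SYSDATE - 1/24   -- assumed "still to do" marker
           AND ROWNUM <= 100                    -- batch size
           FOR UPDATE SKIP LOCKED;
        EXIT WHEN l_rids.COUNT = 0;
        FORALL i IN 1 .. l_rids.COUNT
          UPDATE TEST_TAB
             SET LAST_MODIFIED = SYSDATE
           WHERE ROWID = l_rids(i);
        COMMIT;  -- releases this batch's locks before the next one is taken
      END LOOP;
    END;
    /
    ```

    The concurrent MERGE session then only ever collides with the current batch, and SKIP LOCKED keeps the two sessions from deadlocking over it.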

    I don't have database access on weekends (look at it as a template).
    Suppose you
    create table help_iot
    ( bucket number,
      id1    varchar2(20),
      constraint help_iot_pk primary key (bucket, id1)
    )
    organization index;
    I'm not very sure about the create table syntax above.
    declare
      maximal_bucket number := 10000; -- a few hundred rows per bucket if you must update a few million rows
      the_sysdate date := sysdate;
      type row_tab is table of test_tab%rowtype;
      l_rows row_tab; -- target for the locking SELECT; PL/SQL requires an INTO
    begin
      execute immediate 'truncate table help_iot'; -- DDL must be dynamic inside PL/SQL
      insert into help_iot
      select ntile(maximal_bucket) over (order by id1) bucket, id1
        from test_tab
       where id2 = '336015000385349';
      for i in 1 .. maximal_bucket
      loop
        select id1, id2, last_modified
          bulk collect into l_rows
          from test_tab
         where id2 = '336015000385349'
           and id1 in (select id1
                         from help_iot
                        where bucket = i)
           for update of last_modified;
        update test_tab
           set last_modified = the_sysdate
         where id2 = '336015000385349'
           and id1 in (select id1
                         from help_iot
                        where bucket = i);
        commit;
        dbms_lock.sleep(15);
      end loop;
    end;
    Regards
    Etbin
    I introduced the_sysdate in case last_modified must be the same for all updated rows.
    Edited by: Etbin on 29.11.2009 16:48

  • Rogue implicit SELECT FOR UPDATE statement in Forms 9i (9.0.4.0.19)

    All,
    Out of 200 production forms, one form occasionally and incorrectly "selects for update" an entire 3-million-row table during an update transaction. This creates 100+ archive logs.
    We cannot repeat the event via testing, but the rogue SELECT statement has been captured from the SGA and is listed below. It's plain to see that somehow the WHERE clause was truncated to a "W", which was then parsed as a table alias, resulting in the entire table being locked.
    Has anyone seen anything like this?
    SELECT ROWID,DISTRIBUTION_PARTY,DISTRIBUTION_PARTY_NAME,CORRESPOND_SEQ_NUM,DISTRIBUTION_SEQ_NUM,VENDOR_NUM,DEPENDENT_SEQ_NUM,INTERNAL_ATTORNEY_USER_NAME,MAIL_LOC,ORIGINAL_FLAG
    FROM CLAIM_DISTRIBUTION_DATA W
    FOR UPDATE OF DISTRIBUTION_PARTY NOWAIT

    Find out where this SELECT statement is issued from, first of all. Is it in your code, or is it issued implicitly by Forms? Since it selects ROWID, I assume it is an implicit Forms call.
    Do you use ON-UPDATE triggers anywhere in this form? ON-LOCK?
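    One hedged way to narrow down where the statement is issued from is to catch it in the shared pool and look at the MODULE/ACTION columns, which Forms populates with the form and block names. The column names assume a 10g-style dictionary, and the LIKE pattern is an assumption based on the captured statement above:

    ```sql
    SELECT s.module,          -- Forms typically puts the form name here
           s.action,
           s.executions,
           s.first_load_time,
           s.sql_text
      FROM v$sql s
     WHERE s.sql_text LIKE 'SELECT ROWID,DISTRIBUTION_PARTY%'
       AND s.sql_text LIKE '%FOR UPDATE%';
    ```

    If MODULE points at one form and block, that is where to look for ON-LOCK or ON-UPDATE trigger customizations.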
