Update 65 million rows

Hi!
What's the best way to update 65 million rows? I don't have enough free space to create a new table.
Thanks
André

declare
   v_records number;
begin
   LOOP
      UPDATE table_name
      SET col_name = value
      WHERE condition   -- the condition must exclude already-updated rows, or the loop never ends
      AND ROWNUM <= 10000;
      v_records := SQL%ROWCOUNT;
      COMMIT;           -- commit each batch so the undo requirement stays small
      IF v_records = 0 THEN
         EXIT;
      END IF;
   END LOOP;
END;

Cheers
Sarma.
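If you are on Oracle 11gR2 or later, DBMS_PARALLEL_EXECUTE packages the same batching idea up for you: it splits the table into ROWID chunks, runs the update once per chunk, and commits each chunk. A minimal sketch, reusing the placeholder names from the block above (the task name, chunk size and parallel level are illustrative):

BEGIN
   DBMS_PARALLEL_EXECUTE.create_task('upd_task');
   DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
      task_name   => 'upd_task',
      table_owner => USER,
      table_name  => 'TABLE_NAME',
      by_row      => TRUE,
      chunk_size  => 10000);
   DBMS_PARALLEL_EXECUTE.run_task(
      task_name      => 'upd_task',
      sql_stmt       => 'UPDATE table_name SET col_name = value
                         WHERE condition AND rowid BETWEEN :start_id AND :end_id',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 4);
   DBMS_PARALLEL_EXECUTE.drop_task('upd_task');
END;
/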

Similar Messages

  • Update millions of rows

    Dear all,
    DB : 10.2.0.4.
    OS : Solaris 5.10
    I have a partitioned table with millions of rows and am updating a field like
    update test set amount=amount*10 where update='Y';
    When I run this query it generates many archive logs, as it doesn't commit the transaction anywhere.
    Please give me an idea of where I can commit this transaction after every 2000 rows, so that archive log generation will not be so heavy.
    Please guide
    Kai

    There's not a lot you can do about the amount of redo you generate (unless perhaps you make the table unrecoverable, and that might not help much either).
    It's possible that if the column being updated is in an index, dropping that index during the update and recreating it afterwards might help, but that could land you in more trouble.
    One area of concern is the amount of undo space for the large transaction; this could even be exceeded and your statement might fail, and that might be a reason for splitting it into smaller transactions.
    Certainly there is no point in splitting it down into 2000-record chunks; I'd want to aim much higher than that.
    If you feel you want to divide the work, the record may contain a field that could be used, e.g. create_date, or if you are able to work partition by partition (see the sketch below) that might help.
    If archive log management is the problem then speaking to the DBA should help.
    Hope these thoughts help - but you are responsible for any actions you take, regards - bigdelboy
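    A minimal sketch of the partition-by-partition idea, committing once per partition. The table name comes from the post, but the flag column is renamed to update_flag here because UPDATE itself is a reserved word and could not be a column name as posted:

    BEGIN
       FOR p IN (SELECT partition_name
                 FROM   user_tab_partitions
                 WHERE  table_name = 'TEST') LOOP
          EXECUTE IMMEDIATE
             'UPDATE test PARTITION (' || p.partition_name || ')
              SET amount = amount * 10
              WHERE update_flag = ''Y''';
          COMMIT;  -- one transaction per partition keeps the undo requirement bounded
       END LOOP;
    END;
    /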

  • Help on table update - millions of rows

    Hi,
    I am trying to do the following, but the process is taking a lot of time. Can someone help me with the best way to do it?
    qtemp1 - 500,000 rows
    qtemp2 - 50 Million rows
    UPDATE qtemp2 qt
    SET product = (SELECT product_cd
                   FROM qtemp1 qtemp
                   WHERE qt.quote_id = qtemp.quote_id)
    WHERE processed_ind = 'P';
    I have created indexes on product, product_cd and quote_id on both tables.
    Thank you

    There are two basic I/O read operations that need to be done to find the required rows.
    1. In QTEMP1 find row for a specific QUOTE_ID.
    2. In QTEMP2 find all rows where PROCESSED_IND is equal to 'P'.
    For every row in (2), the I/O in (1) is executed. So, if (2) returns 10 million rows, then (1) will be executed 10 million times.
    So you want QTEMP1 to be optimised for access via QUOTE_ID - at best it should be using a unique index.
    Access on QTEMP2 is more complex. I assume that the process indicator is a column with low cardinality. In addition, being a process status indicator, it is likely a candidate for being changed via UPDATE statements - in which case it is a very poor candidate for either a B+ tree index or a Bitmap index.
    Even if indexed, a large number of rows may be of process type 'P' - in which case the CBO will rightly decide not to waste I/O on the index, but instead spend all the I/O on a faster full table scan.
    In this case, (2) will result in all 50 million rows being read - and (1) being called for each row that has a 'P' process indicator.
    Any way you look at this, it is a major processing request for the database to perform. It involves a lot of I/O and can involve a huge number of nested SQL calls against QTEMP1... so this is obviously going to be a slow process. The majority of the elapsed processing time will be spent waiting for I/O from disks.
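    One way to sidestep the per-row subquery entirely is a single set-based statement. A hedged sketch using MERGE, assuming quote_id is unique in qtemp1 (otherwise the update is ambiguous):

    MERGE INTO qtemp2 qt
    USING qtemp1 qtemp
    ON (qt.quote_id = qtemp.quote_id)
    WHEN MATCHED THEN
       UPDATE SET qt.product = qtemp.product_cd
       WHERE qt.processed_ind = 'P';

    Note that, unlike the original UPDATE, rows with no match in qtemp1 keep their old product value instead of being set to NULL.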

  • Enhance a SQL query with update of millions of rows

    Hi ,
    I have this query developed to update around 200 million rows on my production database. I did my best, but please share your recommendations/concerns to make it faster.
    DECLARE @ORIGINAL_ID AS BIGINT
    SELECT FID001 INTO #Temp001_
    FROM INBA004 WHERE RS_DATE>='1999-01-01'
    AND RS_DATE<'2014-01-01' AND CLR_f1st='SSLM'
    and FID001 >=12345671
    WHILE (SELECT COUNT(*) FROM #Temp001_ ) <>0
    BEGIN
    SELECT TOP 1 @ORIGINAL_ID=FID001 FROM #Temp001_ ORDER BY FID001
    PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' STARTED'
    SELECT DISTINCT FID001
    INTO #OUT_FID001
    FROM OUTTR009 WHERE TRANSACTION_ID IN (SELECT TRANSACTION_ID FROM
    INTR00100 WHERE FID001 = @ORIGINAL_ID)
    UPDATE A SET RCV_Date=B.TIME_STAMP
    FROM OUTTR009 A INNER JOIN INTR00100 B
    ON A.TRANSACTION_ID=B.TRANSACTION_ID
    WHERE A.FID001 IN (SELECT FID001 FROM #OUT_FID001)
    AND B.FID001=@ORIGINAL_ID
    UPDATE A SET Sending_Date=B.TIME_STAMP
    FROM INTR00100 A INNER JOIN OUTTR009 B
    ON A.TRANSACTION_ID=B.TRANSACTION_ID
    WHERE A.FID001=@ORIGINAL_ID
    AND B.FID001 IN (SELECT FID001 FROM #OUT_FID001)
    DELETE FROM #Temp001_ WHERE FID001=@ORIGINAL_ID
    DROP TABLE #OUT_FID001
    PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' FINISHED'
    END

    DECLARE @x INT
    SET @x = 1
    WHILE @x < 44000000  -- Set appropriately
    BEGIN
        UPDATE Table SET a = c+d where ID BETWEEN @x AND @x + 10000
        SET @x = @x + 10000
    END
    Make sure that the ID column has a clustered index (CI) on it.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • How to write a cursor to check every row of a table which has millions of rows

    Hello everyone.
    I need help, please... Below is the script (sample data); you can run it directly in SQL Server Management Studio.
    Here we need to update the PPTA_Status column in the Donation table. There WILL BE 3 statuses: A1, A2 and Q.
    We need to update the PPTA_status of January-month donations only. We need to write a cursor. As this is sample data we have only some donations (rows), but the real table has millions of rows, and we need to check every row.
    If I run the cursor for January, the cursor should take every row, row by row, for all the rows of January.
    We have donations in the don_sample table; I need to check the test_results in the result_sample table for those donations and update the PPTA_status column.
    We need to check all the donations of the January month one by one. For every donation, we need to check the 2 previous donations, in the following way.
    If we want to find the previous donations of a donation, we first look for the donor of that donation; then we can find the previous donations of that donor. Like this we need to check the 2 previous donations.
    If there are 2 previous donations and they both have test results, we need to update the PPTA_STATUS column of this donation to 'Q'.
    If the 2 previous donation_numbers have test_code values (9,10,11) in the result_sample table, it means those donations have results.
    The BWX72 donor in the sample data I gave is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and that one has a result in the result_sample table, then set this donation's status to A2 (after checking the result of this donation also).
    The ZBW24 donor in the sample data I gave is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and that one does NOT have a result in the result_sample table, then set this donation's status to A1 (after checking the result of this donation also).
    The PGH56 donor in the sample data I gave is an example of the above scenario.
    Like this we need to check all the donations in the don_sample table; it has millions of rows for every month.
    We need to join don_sample and result_sample by donation_number, and we need to check the test_code column for the result.
    -- creating table
    CREATE TABLE [dbo].[DON_SAMPLE](
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [donor_number] [varchar](12) NULL,
    [ppta_status] [varchar](5) NULL,
    [first_time_donation] [bit] NULL,
    [days_since_last_donation] [int] NULL
    ) ON [PRIMARY]
    --inserting values
    Insert into [dbo].[DON_SAMPLE] ([donation_number],[donation_date],[donor_number],[ppta_status],[first_time_donation],[days_since_last_donation])
    Select '27567167','2013-12-11 00:00:00.000','BWX72','A',1,0
    Union ALL
    Select '36543897','2014-12-26 00:00:00.000','BWX72','A',0,32
    Union ALL
    Select '47536542','2014-01-07 00:00:00.000','BWX72','A',0,120
    Union ALL
    Select '54312654','2014-12-09 00:00:00.000','JPZ41','A',1,0
    Union ALL
    Select '73276321','2014-12-17 00:00:00.000','JPZ41','A',0,64
    Union ALL
    Select '83642176','2014-01-15 00:00:00.000','JPZ41','A',0,45
    Union ALL
    Select '94527541','2014-12-11 00:00:00.000','ZBW24','A',0,120
    Union ALL
    Select '63497874','2014-01-13 00:00:00.000','ZBW24','A',1,0
    Union ALL
    Select '95786348','2014-12-17 00:00:00.000','PGH56','A',1,0
    Union ALL
    Select '87234156','2014-01-27 00:00:00.000','PGH56','A',1,0
    --- creating table
    CREATE TABLE [dbo].[RESULT_SAMPLE](
    [test_result_id] [int] IDENTITY(1,1) NOT NULL,
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [test_code] [varchar](5) NULL,
    [test_result_date] [datetime] NULL,
    [test_result] [varchar](50) NULL,
    [donor_number] [varchar](12) NULL
    ) ON [PRIMARY]
    SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] ON  -- required: explicit test_result_id values are inserted below
    ---- inserting values
    Insert into [dbo].RESULT_SAMPLE( [test_result_id], [donation_number], [donation_date], [test_code], [test_result_date], [test_result], [donor_number])
    Select 278453,'27567167','2013-12-11 00:00:00.000','0009','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 278454,'27567167','2013-12-11 00:00:00.000','0010','2014-01-20 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 278455,'27567167','2013-12-11 00:00:00.000','0011','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 387653,'36543897','2014-12-26 00:00:00.000','0009','2014-01-24 00:00:00.000','N','BWX72'
    Union ALL
    Select 387654,'36543897','2014-12-26 00:00:00.000','0081','2014-01-24 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 387655,'36543897','2014-12-26 00:00:00.000','0082','2014-01-24 00:00:00.000','N','BWX72'
    UNION ALL
    Select 378245,'73276321','2014-12-17 00:00:00.000','0009','2014-01-30 00:00:00.000','N','JPZ41'
    Union ALL
    Select 378246,'73276321','2014-12-17 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 378247,'73276321','2014-12-17 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','JPZ41'
    UNION ALL
    Select 561234,'83642176','2014-01-15 00:00:00.000','0081','2014-01-19 00:00:00.000','N','JPZ41'
    Union ALL
    Select 561235,'83642176','2014-01-15 00:00:00.000','0082','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 561236,'83642176','2014-01-15 00:00:00.000','0083','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 457834,'94527541','2014-12-11 00:00:00.000','0009','2014-01-30 00:00:00.000','N','ZBW24'
    Union ALL
    Select 457835,'94527541','2014-12-11 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 457836,'94527541','2014-12-11 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587345,'63497874','2014-01-13 00:00:00.000','0009','2014-01-29 00:00:00.000','N','ZBW24'
    Union ALL
    Select 587346,'63497874','2014-01-13 00:00:00.000','0010','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587347,'63497874','2014-01-13 00:00:00.000','0011','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 524876,'87234156','2014-01-27 00:00:00.000','0081','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524877,'87234156','2014-01-27 00:00:00.000','0082','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524878,'87234156','2014-01-27 00:00:00.000','0083','2014-02-03 00:00:00.000','N','PGH56'
    SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] OFF
    select * from DON_SAMPLE
    order by donor_number
    select * from RESULT_SAMPLE
    order by donor_number

    You didn't mention the version of SQL Server. It's important, because SQL Server 2012 makes the job much easier (and will also run much faster, by dodging a self-join). (As Kalman said, the OVER clause contributes to this answer.)
    Both approaches below avoid needing the cursor at all. (There was part of your explanation I didn't understand fully, but I think these suggestions work regardless.)
    Here's a SQL 2012 answer, using LAG() to look up the previous 1 and 2 donation codes by donor. (EDIT: I overlooked a couple of things in this post; please refer to my follow-up post for the final/fixed answer. I'm leaving this post with my overlooked items, for posterity.)
    With Results_Interim as (
    Select *
    , count('x') over(partition by donor_number) as Ct_Donations
    , Lag(test_code, 1) over(partition by donor_number order by donation_date) as PrevDon1
    , Lag(test_code, 2) over(partition by donor_number order by donation_date) as PrevDon2
    from RESULT_SAMPLE
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
    when PrevDon1 in (9, 10, 11) then 'A2'
    when PrevDon1 is not null then 'A1'
    End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01-01' and Test_result_Date < '2014-02-01'
    Order by Donor_Number, donation_date
    And here's a SQL 2005-or-greater version, not using the new SQL 2012 features:
    With Results_Temp as (
    Select *
    , count('x') over(partition by donor_number) as Ct_Donations
    , Row_Number() over(partition by donor_number order by donation_date) as RN_Donor
    from RESULT_SAMPLE
    )
    , Results_Interim as (
    Select R1.*, P1.test_code as PrevDon1, P2.Test_Code as PrevDon2
    From Results_Temp R1
    left join Results_Temp P1 on P1.Donor_Number = R1.Donor_Number and P1.Rn_Donor = R1.RN_Donor - 1
    left join Results_Temp P2 on P2.Donor_Number = R1.Donor_Number and P2.Rn_Donor = R1.RN_Donor - 2
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
    when PrevDon1 in (9, 10, 11) then 'A2'
    when PrevDon1 is not null then 'A1'
    End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01-01' and Test_result_Date < '2014-02-01'
    Order by Donor_Number, donation_date

  • Alternatives to using update to flag rows of data

    Hello
    I'm looking into finding ways of improving the speed of a process, primarily by avoiding the use of an update statement to flag a row in a table.
    The problem with flagging the row is that the table has millions of rows in it.
    The logic is a bit like this,
    Set flag_field to 'F' for all rows in the table (e.g. 104 million rows updated).
    for a set of rules loop around this
    Get all rows that satisfy criteria for a processing rule. (200000 rows found)
    Process the rows collected data (200000 rows processed)
    Set flag_field to 'T' for those rows processed (200000 rows updated)
    end loop
    Once a row in the table has been processed it shouldn't be collected as part of any criteria of another rule further down the list, hence it needs to be flagged so that it doesn't get picked up again.
    With there being millions of rows in the table and rules sometimes processing only 200k rows, I've learnt recently that this will create a lot of undo to be written and will thus take a long time. (Any thoughts on that?)
    Can anyone suggest anything other then using an update statement to flag each row to avoid it being processed again?
    Appreciate the help

    With regard to speeding up the process, the answer is "it depends". It all hinges on exactly what you mean by "Process the rows collected data". What does the processing involve?

    The processing involved is very straightforward.
    The data in these large tables stays unchanged. For the sake of this example, I'll call this table orig_data_table; the rules are set by users who have their own tables built (for this example we'll call it target_data_table).
    The rules are there to take rows from the orig_data_table and insert them into the target_data_table.
    E.g.:
    update orig_data_table set processed='F';
    Rule #1 states: select * from orig_data_table where feature_1 = 310; -- this applies to 3000 rows in orig_data_table.
    These 3000 rows get inserted into target_data_table.
    The final step is to flag these 3000 rows in the orig_data_table so that they are not used by any rule following rule #1:
    update orig_data_table set processed='T' where feature_1 = 310;
    commit;
    Rule #2 states: select * from orig_data_table where feature_1=310 and destination='Asia' and orig_data_table.processed = 'F'; -- so it won't pick up the 3000 rows that were processed as part of rule #1.
    Once rule #2 has got its rows from orig_data_table (e.g. 400000 rows), those get inserted into the target_data_table, followed by them being flagged to avoid being retrieved again:
    update orig_data_table set processed='T' where destination='Asia';
    commit;
    Continue on to rule #3...
    - Is the process some kind of transformation of the ~200,000 selected rows which could possibly be achieved in SQL alone, or is it an extremely complex transformation which cannot be done in pure SQL?

    It's not at all complex. As I say, the data in orig_data_table is unchanged bar the processed field, which is initially set to 'F' for all rows and then set to 'T' after each rule for the rows fulfilling that rule's criteria.

    - Does the FLAG_FIELD exist purely for the use of this process or is it referred to by other procedures?

    The flag_field is only for this purpose and not used elsewhere.

    - Having said that, as a first step to simply avoid the update of the flag field, I would suggest that you use bulk processing and include a table of booleans to act as the indicator for whether a particular row has been processed or not.

    Could you elaborate a bit more on this table of booleans for me? It sounds an interesting approach for me to test (see the sketch below)...
    Many thanks again
    Sandip
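    For what it's worth, here is a minimal sketch of the "table of booleans" idea: the processed state is kept in a PL/SQL associative array keyed by ROWID instead of in a flag column, so no UPDATE (and no undo) is needed. The table names come from the example above, but the id/payload columns are hypothetical stand-ins, the row-by-row inserts would be bulk-collected in practice, and with hundreds of millions of rows the array's memory footprint would need watching:

    DECLARE
       TYPE t_done IS TABLE OF BOOLEAN INDEX BY VARCHAR2(18);  -- keyed by ROWID string
       l_done t_done;
    BEGIN
       -- Rule #1: copy matching rows and remember them in memory.
       FOR r IN (SELECT rowid AS rid, id, payload
                 FROM orig_data_table WHERE feature_1 = 310) LOOP
          INSERT INTO target_data_table (id, payload) VALUES (r.id, r.payload);
          l_done(ROWIDTOCHAR(r.rid)) := TRUE;  -- flagged in memory, not on disk
       END LOOP;
       -- Rule #2: same pattern, skipping rows already processed by rule #1.
       FOR r IN (SELECT rowid AS rid, id, payload
                 FROM orig_data_table
                 WHERE feature_1 = 310 AND destination = 'Asia') LOOP
          IF NOT l_done.EXISTS(ROWIDTOCHAR(r.rid)) THEN
             INSERT INTO target_data_table (id, payload) VALUES (r.id, r.payload);
             l_done(ROWIDTOCHAR(r.rid)) := TRUE;
          END IF;
       END LOOP;
       COMMIT;
    END;
    /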

  • Results not updating for every row in BIpublisher

    Hi all, I have a BI Publisher report that updates the results correctly for the first row; for the second row, instead of computing its own results, it is picking the values from the first row and using the same values for all the other rows.
    Here is my report format
    Month saves Total, for 30,60,90,120
    Jan
    feb
    Mar
    Total is the number of enrolls for each month, and the day columns show how many are still active after 30, 60, 90, 120... days. The following example should give an idea of exactly what's happening.
    Eg:
                 Total  30days  60days  90days  120days  150days  180days  210days
    Jan saves      330     287     274     270      263      262      259      257
    Feb saves      298     255     242     238      231      230      227      225
    Mar saves      291     248     235     231      224      223      220      218
    So what is happening is, let's say for example there are a total of 330 enrolls in January; after 30 days 287 are still active, after 60 days 274 are still active, after 90 days 270... etc.
    I am getting the January active values correctly.
    But going forward, when I look at the values for February: the total enrolls for February is 298, but the number still active after 30 days is not correct.
    It is subtracting the same amounts as for Jan. Looking at the numbers, it subtracts -43, -13, -4, -7, -1, -3, -2 for Jan, which are the cancels after the consecutive periods.
    It subtracts the same amounts for Feb as well, but my actual cancels for Feb are different; they should be 45, 12, 8, 9, 2, 2.
    It is doing the same for all the months.
    There needs to be a change in the code. The following is the XSL code I am using:
    <?xdoxslt:set_variable($_XDOCTX, 'v_SavesCanceled', SAVES_CANCELED_COUNT)?>
    <?xdoxslt:set_variable($_XDOCTX, 'v_RtSaves', xdoxslt:get_variable($_XDOCTX, 'v_RtSaves') - xdoxslt:get_variable($_XDOCTX, 'v_SavesCanceled'))?>
    <?xdoxslt:get_variable($_XDOCTX, 'v_RtSaves')?>
    The cancels should be updated for each row, but it is picking up the same cancels for every month.
    Hope it is clear; let me know if you need any further info.
    Any help is appreciated.
    Thanks

    Thank you very much for your help. The RTF file you sent worked and is updating the results as required. Thank you.
    Thanks

  • Flex updates at a row level in a grid

    I need to update rows in a grid on a frequent basis, and I don't want to refresh the grid.
    Is there any method I can use?
    Using Flex Grid -> updates at a row level in a grid

    I mean DataGrid. I am trying to change the data rows based on a realtime data feed.
    The first time, I'll add all the Employees to the Grid. Later, individual requests will change only the Status column.
    Is there any other way I can update without refreshing the DataGrid?
    <mx:DataGrid id="dg" height="260" width="900" x="0" y="20">
       <mx:columns>
          <mx:DataGridColumn headerText="EmpID" dataField="EmpID" width="10" visible="false"/>
          <mx:DataGridColumn headerText="Emp Name" dataField="Emp Name" width="110"/>
          <mx:DataGridColumn headerText="Status" dataField="Status" width="80"/>
       </mx:columns>
    </mx:DataGrid>

  • How to update a single row of data table

    How can we update a single row of the data table by clicking a button in the same row?
    Thanks in advance.

    Hi!
    What do you mean by 'update'? Get fresh data from the DB, or change the data and commit it to the DB?
    If commit, try reading here:
    http://developers.sun.com/jscreator/learning/tutorials/2/inserts_updates_deletes.html
    Thanks,
    Roman.

  • Multiple updates to same row

    Hi experts,
    I still can't figure out how Oracle handles multiple updates to the same row. For instance, I have 3 update statements:
    update supplier set supp_type = 'k' where supp_code = '1';
    update supplier set supp_type = 'j' where supp_code = '1';
    update supplier set supp_type = 'm' where supp_code = '1';
    I keep getting a final result of supp_type = 'k' when it should actually be 'm', yet when I execute the mapping it shows 3 update operations, which baffles me as to how Oracle handles simultaneous updates to the same row. I even tried disabling parallel DML on the table object, but am unsure whether this actually helps. I tried putting a sorter operator, and then a key lookup operator after it in my mapping, to compare the supp_code field in the sorter with the target table's supp_code field and retrieve the relevant row to update; but instead of 3 update operations, it now updates supp_type in all my records to NULL. Can anyone explain to me how I should go about dealing with this?

    Hi experts,
    I just took a look at the code section generated for the key lookup operator named SUPPLIER_WH_SURRKEY01 and I feel something is wrong with the generated code. I have pasted the relevant code section from the key lookup operator below.
    ORDER BY
    "SUPPLIER_CV"."RSID$" ASC ) *"SORTER" ON ( ( ( "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = "SPIDERWEB2"."GET_EXPIRATI_0_EXPIRATI" ) ) AND ( ( "SUPPLIER_WH_SURRKEY01"."SUPPCODE" = "SORTER"."SUPP_CODE$1" ) ) )*
    WHERE
    ( ( "SUPPLIER_WH_SURRKEY01"."SUPPKEY" IS NULL ) OR ( "SUPPLIER_WH_SURRKEY01"."SUPPKEY" = "SUPPLIER_WH_SURRKEY01"."SUPPKEY" ) );
    Can anyone explain the code in bold (between the asterisks)? I have no clue as to what it means. Furthermore, that bolded code looks similar to what I expected to find in the WHERE clause, except that instead of "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = "SPIDERWEB2"."GET_EXPIRATI_0_EXPIRATI", I expected to find
    "SUPPLIER_WH_SURRKEY01"."EXPIRATION_DATE" = '31-dec-4000', because my key lookup operator checks against a constant with the value '31-dec-4000'. The constant's name is CONSTANT itself, while my mapping's name is SPIDERWEB2 (I'm not too sure why the generated code refers to my mapping name instead of my constant).
    Edited by: user8915380 on 17-Mar-2010 00:52

  • Problem: trying to update all detail rows on pre-commit (MASTER DETAIL FORM

    Hi:
    I've got a MASTER-DETAIL form, and I need to update every detail row of the form (if the master was updated) before committing the changes. The problem is that I cannot do that in, for instance, PRE-COMMIT or ON-COMMIT... it's an "illegal operation". I achieved part of it by coding KEY-COMMIT, but that did not solve the whole problem. First, take a look at the kind of code I want to execute before committing.
    The form-level KEY-COMMIT trigger code is something like this:
    DECLARE
       tot_line NUMBER (3);
       line     NUMBER (3);
    BEGIN
       IF NAME_IN ('system.form_status') = 'CHANGED' THEN
          GO_BLOCK ('DETAIL');
          LAST_RECORD;
          tot_line := GET_BLOCK_PROPERTY ('DETAIL', current_record);
          FIRST_RECORD;
          line := 1;
          LOOP
             :detail.quant := :detail.quant + 1;
             EXIT WHEN line = tot_line;
             NEXT_RECORD;
             line := line + 1;
          END LOOP;
          FIRST_RECORD;
          GO_BLOCK ('MASTER');
       END IF;
       COMMIT;
    END;
    The problem arises, for instance, when the users close the form with the "X" button (top right, near the minimize button)... If they do that, Forms asks "Do you want to save changes?", and then the update of the detail rows is not executed...
    But there are other situations where this happens... for instance EXECUTE_QUERY when I change a record...
    Can anyone help?
    Joao Oliveira

    Use a PRE-UPDATE trigger (Master block).
    begin
       update <detail_table>
       set quant = quant + 1
       where <detail_table>.<relation_column1> = :<Master_block>.<relation_item1>
       and <detail_table>.<relation_columnN> = :<Master_block>.<relation_itemN>
       and <detail_block_WHERE>;
    EXCEPTION WHEN OTHERS THEN NULL;
    end;

  • SQL Server: if the row exists it updates, but a newly inserted row won't update

    I am working with ASP.NET and C# and using SQL Server. I insert a row in one place and it works fine, but when I try to update the same row it doesn't do anything. Updating an existing row works with the same code, but not if I have just inserted the row.
    My code is below:
    String companyName = txCompany.Text.ToString();
    SqlConnection con = new SqlConnection(conString);
    SqlCommand cmd = new SqlCommand(
        "INSERT INTO DocUp (CompanyName) VALUES (@CompanyName)", con);
    cmd.Parameters.AddWithValue("@CompanyName", companyName);
    try
    {
        con.Open();
        cmd.ExecuteNonQuery();
    }
    catch (Exception er)
    {
        Response.Write("<script language='javascript'>alert('Connection Problem On Insert');</script>");
    }
    finally
    {
        con.Close();
    }

    string strQuery =
        "UPDATE DocUp SET QuoteFileName=@QuoteFileName, ContentType=@ContentType, " +
        "QuoteFileData=@Data WHERE CompanyName=@Company";
    // renamed to cmd2 so the snippet compiles alongside the insert command above
    SqlCommand cmd2 = new SqlCommand(strQuery);
    cmd2.Parameters.Add("@QuoteFileName", SqlDbType.VarChar).Value = filename;
    cmd2.Parameters.Add("@ContentType", SqlDbType.VarChar).Value = "application/pdf";
    cmd2.Parameters.Add("@Data", SqlDbType.Binary).Value = bytes;
    cmd2.Parameters.Add("@Company", SqlDbType.VarChar).Value = companyName;
    InsertUpdateData(cmd2);

    private Boolean InsertUpdateData(SqlCommand cmd)
    {
        String strConnString = System.Configuration.ConfigurationManager
            .ConnectionStrings["S7V001_11022014ConnectionString1"].ConnectionString;
        SqlConnection con = new SqlConnection(strConnString);
        cmd.CommandType = CommandType.Text;
        cmd.Connection = con;
        try
        {
            con.Open();
            cmd.ExecuteNonQuery();
            return true;
        }
        catch (Exception ex)
        {
            Response.Write(ex.Message);
            return false;
        }
        finally
        {
            con.Close();
            con.Dispose();
        }
    }

    Please use the "Insert code block" button to insert *readable* code blocks.
    try
    {
        con.Open();
        cmd.ExecuteNonQuery();
        return true;
    }
    You have not shared which query/functions you are using to insert the data. Please share.
    Also, this looks more like a .NET-related question, and if you agree, I would recommend posting it in the .NET forum:
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=csharpgeneral
    I would also recommend you read "How to ask questions in Technical Forums" from the link below.
    Satheesh
    My Blog |
    How to ask questions in technical forum
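    Not part of the replies above, but a hedged T-SQL sketch of the usual insert-or-update pattern, reusing the column names from the post. Trying the UPDATE first and inserting only when nothing matched avoids the unconditional INSERT followed by a separate UPDATE:

    BEGIN TRANSACTION;

    UPDATE DocUp WITH (UPDLOCK, HOLDLOCK)
    SET QuoteFileName = @QuoteFileName,
        ContentType   = @ContentType,
        QuoteFileData = @Data
    WHERE CompanyName = @Company;

    IF @@ROWCOUNT = 0
        INSERT INTO DocUp (CompanyName, QuoteFileName, ContentType, QuoteFileData)
        VALUES (@Company, @QuoteFileName, @ContentType, @Data);

    COMMIT TRANSACTION;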

  • I want a user to only be able to update/delete the rows they inserted

    hi guys,
    I have a table that 2 users are inserting into. They can also update/delete the rows in the table. However, I do not want them to be able to update/delete the other user's rows. I only want them to have update/delete on their own rows.
    How can this be achieved?
    thanks

    Another idea if you really have just two (or a fixed set of N) users.
    Does your table have a generic primary key (PK)?
    You could use two (N) sequences with two (N) distinct ranges of numbers, e.g. user A uses sequence values less than 1000000000, the other one values greater than or equal to 1000000000.
    create sequence <user_a>.pk_seq start with 1;
    create sequence <user_b>.pk_seq start with 1000000000;
    An insert trigger uses <user_a>.pk_seq or <user_b>.pk_seq to generate the PK for new records, depending upon the current user.
    An update trigger allows updates only if the PK of the record to be updated is in the range of sequences belonging to the current user.
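    A minimal sketch of that update/delete trigger under the reply's assumptions (the table shared_tab, its pk column, and the user names are hypothetical):

    CREATE OR REPLACE TRIGGER shared_tab_owner_chk
    BEFORE UPDATE OR DELETE ON shared_tab
    FOR EACH ROW
    BEGIN
       -- USER_A owns PKs below 1000000000, USER_B the rest (ranges from the reply)
       IF (USER = 'USER_A' AND :OLD.pk >= 1000000000)
          OR (USER = 'USER_B' AND :OLD.pk < 1000000000) THEN
          RAISE_APPLICATION_ERROR(-20001,
             'You may only update or delete rows you inserted.');
       END IF;
    END;
    /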

  • Updating billions of rows from historical data

    Hi,
    we have a task of updating billions of rows in some tables (one at a time) for some past DATEs.
    Could you please suggest the best approach to follow in such cases, where simple UPDATE statements would take who knows how much time?
    The scenario is something like this:
    TEST table.
    col1 col2 col3
    Now we added one more column, col4.
    Now all the records in this table need to be updated so that col4 = col3 for past data (past in the sense that by the time this is executed in the production DB, sysdate-1 will be past data).
    I thought of using a FORALL kind of clause, but I still don't know whether that is going to be completely useful or not.
    I am using Oracle 10g Release 2.
    Please give me your expert opinions on this scenario, so that the script to update such voluminous data can execute successfully with good performance.
    Thanks,
    Aashish

    Hi Mohamed,
    thanks for your help.
    However, in my case it's not possible to drop and recreate the table only to update a single column with values from another column.
    I am trying to do a POC for this using a BULK UPDATE but I am not able to...
    I did something like this to test with 5,000,000 records:
    create table test
    ( col1 varchar2(100),
      col2 varchar2(100));
    I inserted 5,000,000 records into the col1 column of this table. Then I tried something like:
    declare
    CURSOR s_cur IS
    SELECT col1
    FROM test;
    TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
    s_array fetch_array;
    begin
    OPEN s_cur;
      LOOP
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 10000;
        FORALL i IN 1..s_array.COUNT
        UPDATE TEST
        SET col2 = s_array(i); -- dont know if this is correct
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      COMMIT;
    END;

    It gives me some error. Can you please correct me on this?
    Still, I am looking for a better way...
    Rgds,
    Aashish
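    Not from the thread, but a hedged sketch of what a working version might look like: fetch ROWIDs alongside col1 so each FORALL update hits exactly the row the value came from, and test the collection count after the fetch so the final partial batch isn't skipped. (If one big transaction is acceptable, a plain set-based "UPDATE test SET col2 = col1;" would be simpler still.)

    DECLARE
       CURSOR s_cur IS SELECT rowid AS rid, col1 FROM test;
       TYPE t_rid  IS TABLE OF ROWID;
       TYPE t_col1 IS TABLE OF test.col1%TYPE;
       l_rids  t_rid;
       l_col1s t_col1;
    BEGIN
       OPEN s_cur;
       LOOP
          FETCH s_cur BULK COLLECT INTO l_rids, l_col1s LIMIT 10000;
          EXIT WHEN l_rids.COUNT = 0;  -- test after the fetch so the last partial batch is processed
          FORALL i IN 1 .. l_rids.COUNT
             UPDATE test
             SET    col2  = l_col1s(i)
             WHERE  rowid = l_rids(i);
          COMMIT;  -- per-batch commit keeps the undo requirement small
       END LOOP;
       CLOSE s_cur;
    END;
    /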

  • Loading millions of rows using SQL*loader to a table with constraints

    I have a table with constraints and I need to load millions of rows into it using SQL*Loader.
    What is the best way to do this? That is, which SQL*Loader options should I use for the best loading performance, and how should I deal with the constraints?
    Regards

    - Check if your table has check constraints (like column NOT NULL). If you trust the data in the file you have to load, you can disable these constraints before the load and re-enable them after the loader finishes.
    - Check if you can modify the table and place it in NOLOGGING mode (generates less redo, but ONLY in SOME conditions).
    Hope it helps
    Rui Madaleno
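    Not from the reply above, but a hedged sketch of how those steps might look in practice; the table, constraint, and file names are hypothetical, and direct=true (direct-path load) is usually the single biggest win for bulk loads:

    -- Disable trusted constraints and reduce redo before the load:
    ALTER TABLE big_tab DISABLE CONSTRAINT big_tab_amount_chk;
    ALTER TABLE big_tab NOLOGGING;

    -- Run the load from the shell (direct path):
    --   sqlldr userid=scott/tiger control=big_tab.ctl log=big_tab.log direct=true

    -- Re-enable (and validate) afterwards, and restore logging:
    ALTER TABLE big_tab ENABLE VALIDATE CONSTRAINT big_tab_amount_chk;
    ALTER TABLE big_tab LOGGING;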
