Poor timing for update of a million rows in TimesTen
This is not a scientific test, but I am disappointed in my results.
I created a SALES table in TimesTen 11.2.1.4.0 in the image of the Oracle 11g table sh.SALES, and populated it with the same data. Just make sure that you have a million rows in your version of sh.SALES in Oracle. Spool it out to /var/tmp/abc.log as follows:
set feedback off
set pagesize 0
set verify off
set timing off
select prod_id ||','||cust_id||','||to_char(TIME_ID,'YYYY-MM-DD')||','||channel_id||','||PROMO_ID||','||QUANTITY_SOLD||','||AMOUNT_SOLD
from sys.sales;
exit
Now use
ttbulkcp -i -s "," DSN=ttdemo1 SALES /var/tmp/abc.log
The TimesTen table description is as follows, with no index:
Table HR.SALES:
Columns:
PROD_ID NUMBER NOT NULL
CUST_ID NUMBER NOT NULL
TIME_ID DATE NOT NULL
CHANNEL_ID NUMBER NOT NULL
PROMO_ID NUMBER NOT NULL
QUANTITY_SOLD NUMBER (10,2) NOT NULL
AMOUNT_SOLD NUMBER (10,2) NOT NULL
1 table found.
(primary key columns are indicated with *)
The data store has 1024MB PermStore and 512MB TempStore
[ttimdb1]
Driver=/home/oracle/TimesTen/11.2.1.4.0/TimesTen/ttimdb1/lib/libtten.so
DataStore=/work/oracle/TimesTen_store/ttimdb1
PermSize=1024
TempSize=512
OracleId=MYDB
DatabaseCharacterSet=WE8MSWIN1252
ConnectionCharacterSet=WE8MSWIN1252
Now do a simple UPDATE. Remember, it is all table scan!
Command> set autocommit 0
Command> showplan 1
Command> timing 1
Command> UPDATE SALES SET AMOUNT_SOLD = AMOUNT_SOLD + 10.22;
Query Optimizer Plan:
STEP: 1
LEVEL: 1
OPERATION: TblLkSerialScan
TBLNAME: SALES
IXNAME: <NULL>
INDEXED CONDITION: <NULL>
NOT INDEXED: <NULL>
STEP: 2
LEVEL: 1
OPERATION: TblLkUpdate
TBLNAME: SALES
IXNAME: <NULL>
INDEXED CONDITION: <NULL>
NOT INDEXED: <NULL>
1000000 rows updated.
Execution time (SQLExecute) = 76.141563 seconds.
I tried a few times, but I still cannot make it go below 60 seconds. Oracle 11g does it better.
Any help and advice is appreciated.
Thanks,
Mich
Guys,
while running the job and watching UNIX top, I am getting this:
Mem: 4014080k total, 3729940k used, 284140k free, 136988k buffers
Swap: 10241396k total, 8k used, 10241388k free, 3283284k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11428 oracle 19 0 2138m 802m 799m S 17 20.5 0:29.58 ttIsqlCmd
5559 oracle 18 0 2158m 719m 711m S 11 18.4 7:03.63 timestensubd
5874 root 16 0 1964 628 548 S 7 0.0 1:06.14 hald-addon-stor
4910 root 16 0 2444 368 260 S 5 0.0 0:16.69 irqbalance
17 root 10 -5 0 0 0 S 3 0.0 0:20.78 kblockd/0
So there is free memory and no swap usage. The system does not look overloaded or anything. However, there is a wait somewhere! Here is the monitor output before the UPDATE:
TIME_OF_1ST_CONNECT: Tue Jan 19 12:23:30 2010
DS_CONNECTS: 11
DS_DISCONNECTS: 0
DS_CHECKPOINTS: 0
DS_CHECKPOINTS_FUZZY: 0
DS_COMPACTS: 0
PERM_ALLOCATED_SIZE: 1048576
PERM_IN_USE_SIZE: 134048
PERM_IN_USE_HIGH_WATER: 134048
TEMP_ALLOCATED_SIZE: 524288
TEMP_IN_USE_SIZE: 19447
TEMP_IN_USE_HIGH_WATER: 19511
SYS18: 0
TPL_FETCHES: 0
TPL_EXECS: 0
CACHE_HITS: 0
PASSTHROUGH_COUNT: 0
XACT_BEGINS: 6
XACT_COMMITS: 5
XACT_D_COMMITS: 0
XACT_ROLLBACKS: 0
LOG_FORCES: 0
DEADLOCKS: 0
LOCK_TIMEOUTS: 0
LOCK_GRANTS_IMMED: 148
LOCK_GRANTS_WAIT: 0
SYS19: 0
CMD_PREPARES: 3
CMD_REPREPARES: 0
CMD_TEMP_INDEXES: 0
LAST_LOG_FILE: 240
REPHOLD_LOG_FILE: -1
REPHOLD_LOG_OFF: -1
REP_XACT_COUNT: 0
REP_CONFLICT_COUNT: 0
REP_PEER_CONNECTIONS: 0
REP_PEER_RETRIES: 0
FIRST_LOG_FILE: 209
LOG_BYTES_TO_LOG_BUFFER: 120
LOG_FS_READS: 0
LOG_FS_WRITES: 0
LOG_BUFFER_WAITS: 0
CHECKPOINT_BYTES_WRITTEN: 0
CURSOR_OPENS: 5
CURSOR_CLOSES: 5
SYS3: 0
SYS4: 0
SYS5: 0
SYS6: 0
CHECKPOINT_BLOCKS_WRITTEN: 0
CHECKPOINT_WRITES: 0
REQUIRED_RECOVERY: 0
SYS11: 0
SYS12: 1
TYPE_MODE: 0
SYS13: 0
SYS14: 0
SYS15: 0
SYS16: 0
SYS17: 0
SYS9:
Command> UPDATE SALES SET AMOUNT_SOLD = AMOUNT_SOLD + 10.22;
1000000 rows updated.
Execution time (SQLExecute) = 86.476318 seconds.
Command> monitor;
TIME_OF_1ST_CONNECT: Tue Jan 19 12:23:30 2010
DS_CONNECTS: 11
DS_DISCONNECTS: 0
DS_CHECKPOINTS: 0
DS_CHECKPOINTS_FUZZY: 0
DS_COMPACTS: 0
PERM_ALLOCATED_SIZE: 1048576
PERM_IN_USE_SIZE: 134079
PERM_IN_USE_HIGH_WATER: 252800
TEMP_ALLOCATED_SIZE: 524288
TEMP_IN_USE_SIZE: 19512
TEMP_IN_USE_HIGH_WATER: 43024
SYS18: 0
TPL_FETCHES: 0
TPL_EXECS: 0
CACHE_HITS: 0
PASSTHROUGH_COUNT: 0
XACT_BEGINS: 13
XACT_COMMITS: 12
XACT_D_COMMITS: 0
XACT_ROLLBACKS: 0
LOG_FORCES: 6
DEADLOCKS: 0
LOCK_TIMEOUTS: 0
LOCK_GRANTS_IMMED: 177
LOCK_GRANTS_WAIT: 0
SYS19: 0
CMD_PREPARES: 4
CMD_REPREPARES: 0
CMD_TEMP_INDEXES: 0
LAST_LOG_FILE: 246
REPHOLD_LOG_FILE: -1
REPHOLD_LOG_OFF: -1
REP_XACT_COUNT: 0
REP_CONFLICT_COUNT: 0
REP_PEER_CONNECTIONS: 0
REP_PEER_RETRIES: 0
FIRST_LOG_FILE: 209
LOG_BYTES_TO_LOG_BUFFER: 386966680
LOG_FS_READS: 121453
LOG_FS_WRITES: 331
LOG_BUFFER_WAITS: 8
CHECKPOINT_BYTES_WRITTEN: 0
CURSOR_OPENS: 6
CURSOR_CLOSES: 6
SYS3: 0
SYS4: 0
SYS5: 0
SYS6: 0
CHECKPOINT_BLOCKS_WRITTEN: 0
CHECKPOINT_WRITES: 0
REQUIRED_RECOVERY: 0
SYS11: 0
SYS12: 1
TYPE_MODE: 0
SYS13: 0
SYS14: 0
SYS15: 0
SYS16: 0
SYS17: 0
SYS9:
Command> commit;
Execution time (SQLTransact) = 0.000007 seconds.
Similar Messages
-
Best method to update database table for 3 to 4 million rows
Hi All,
I have 3 to 4 million rows in my Excel file, and we have to load them into a Z-table.
The intent is to load and keep 18 months of history in this table.
So what is the best way to get this huge volume of data from the Excel file into the Z-table?
If it is done from a program, is the best way to use the FM 'GUI_DOWNLOAD' to get those entries into an internal table and directly do the following?
INSERT Z_TABLE FROM IT_DOWNLOAD.
I think for this huge amount of data it will go to a dump.
Please suggest the best possible way, or any pseudocode, to insert these huge entries into the Z_TABLE.
Thanks in advance.
Hi,
You get the dump because of uploading that many records into an internal table from the Excel file.
In this case, do the following:
data : w_int  type i,
       w_int1 type i value 1.
data itab type standard table of ALSMEX_TABLINE with header line.
do.
  refresh itab.
  w_int = w_int1.
  w_int1 = w_int + 25000.
  CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
    EXPORTING
      FILENAME    = <filename>
      I_BEGIN_COL = 1
      I_BEGIN_ROW = w_int
      I_END_COL   = 10
      I_END_ROW   = w_int1
    TABLES
      INTERN      = itab
* EXCEPTIONS
*   INCONSISTENT_PARAMETERS = 1
*   UPLOAD_OLE              = 2
*   OTHERS                  = 3
    .
  IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
  ENDIF.
  if itab is not initial.
*   segregate the data from itab into the main internal table, then
*   insert the records from the main internal table into the database table.
  else.
    exit.
  endif.
enddo.
Regards,
Siddarth -
DELETE QUERY FOR A TABLE WITH MILLION ROWS
Hello,
I have a requirement where I have to compare 2 tables - both having around a million rows - and delete data based on a single column.
DELETE FROM TABLE_A WHERE COLUMN_A NOT IN
(SELECT COLUMN_A FROM TABLE_B)
COLUMN_A has an index defined on it in both tables. Still it is taking a long time. What is the best way to achieve this? Any workaround?
thanks
How many rows are you deleting from this table? If the percentage is large, then the better option is:
1) Create a new table containing the rows you want to keep, i.e. WHERE COLUMN_A IN
(SELECT COLUMN_A FROM TABLE_B)
2) TRUNCATE table_A
3) Insert into table_A (select * from the new table)
4) If you have any constraints, then maybe they can be disabled.
thanks -
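For what it's worth, the rebuild approach described above can be sketched like this. This is only a sketch: the table names come from the post, the survivors are the rows whose COLUMN_A does appear in TABLE_B, and you would need to re-create indexes, constraints, and grants on TABLE_A afterwards.

```sql
-- 1) Build a copy containing only the rows that should survive the delete
CREATE TABLE table_a_keep AS
  SELECT a.*
    FROM table_a a
   WHERE a.column_a IN (SELECT b.column_a FROM table_b b);

-- 2) Empty the original table quickly, with minimal undo
TRUNCATE TABLE table_a;

-- 3) Reload the surviving rows (direct-path insert)
INSERT /*+ APPEND */ INTO table_a
  SELECT * FROM table_a_keep;
COMMIT;

DROP TABLE table_a_keep;
```

This avoids row-by-row delete overhead when most of the table is being removed.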
Which Oracle error for UPDATE that finds no rows to update
Can someone please confirm for me which Oracle error I will get when trying to do an UPDATE statement with WHERE clauses when no rows match the WHERE clauses? I'm thinking it will be 1403, but want to be sure.
Thank you.
Thanks for the answers, and my apologies if I asked in the wrong forum. My program is C++, and I believe the update will produce a DAOSQLException. My intended question was what the error number for that exception would be, and it appears that it would be 1403.
I am not just writing SQL statements, for which I realize (as many pointed out) that there would be no errors, just zero rows updated.
Thanks. -
Select for update returns no rows even though there is no locking thread
I'm using the iBATIS library over Oracle SQL for my query. The select for update statement returns no rows. This happens intermittently. When this was happening last time, I executed the select statement in SQL Developer (but without the 'for update') and got rows. This situation is not easily reproducible, so I've not yet been able to ascertain whether rows are returned in SQL Developer with the 'for update' clause. But I know for sure that there was no other thread locking the rows. How could this be happening?
The select for update statement returns no rows
Why do you think that a select for update will always return rows?
The FOR UPDATE clause is there not to guarantee the presence of rows, but to lock the rows when they are present:
sql> select * from t;
A B C
1 1 step1
2 2 step2
3 3 step3
Then session 1 issues the following select:
SELECT *
FROM t
WHERE a = 1
FOR UPDATE NOWAIT;
If session 2 issues the same select before session 1 commits or rolls back:
SELECT *
FROM t
WHERE a = 1
FOR UPDATE NOWAIT;
it will get the following error:
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
But if session 2 issues this kind of select instead:
sql> SELECT *
2 FROM t
3 WHERE a = 99
4 FOR UPDATE NOWAIT;
no rows selected
You see then that a select for update can return no rows.
Best Regards
Mohamed Houri -
Pros and cons of select for update clause
hi,
Can anybody explain what are the
pros and cons of select for update clause
11.2.0.1
As commented, there are no pros versus cons in this case.
What is important is to understand conceptually what this does and why it would be used.
Conceptually, a select for update reads and locks row(s). It is known as pessimistic locking.
Why would you want to do that? Well, you have a fat client (Delphi, for example) and multiple users. When userA updates an invoice, you want that invoice's row(s) locked to prevent others from making updates at the same time. Without locking, multiple users updating the same invoice will result in existing updated data being overwritten by old data that has also been updated, a situation called lost updates.
For web based clients that are stateless, pessimistic locking does not work - as the clients do not have state and pessimistic locking requires state. Which means an alternative method to select for update needs to be used to prevent lost updates. This method is called optimistic locking.
So it is not about pros versus cons. It is about understanding how the feature/technique/approach works, when to use it, and when it is not suited. Not all problems are nails, and not all solutions are the large hammer for driving in nails. -
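As an illustration of the optimistic locking mentioned above, the usual implementation adds a version (or timestamp) column. A minimal sketch, assuming a hypothetical INVOICES table with a ROW_VERSION column:

```sql
-- Read the row; no lock is taken, just remember the version
SELECT invoice_id, amount, row_version
  FROM invoices
 WHERE invoice_id = 42;

-- Write back only if nobody changed the row in the meantime
UPDATE invoices
   SET amount      = :new_amount,
       row_version = row_version + 1
 WHERE invoice_id  = 42
   AND row_version = :row_version_read_earlier;

-- If this updates zero rows, another user got there first:
-- report a conflict instead of silently causing a lost update.
```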
FOR UPDATE OF how does it work?
Hi,
I was wondering what happens in the database when a SELECT...FOR UPDATE... is executed. How does Oracle handle these Row Share Table Locks? How does the DB know which rows are locked?
Thanks for any feedback
Maurice
When you do a SELECT ... FOR UPDATE, Oracle locks each row that is returned by the SELECT. The mechanics of locking in Oracle is that the lock bit is actually stored in the row, rather than being centralized in the v$lock table. This allows Oracle to be very scalable and prevents you from worrying about things like lock escalation, but it makes it hard for individual DBAs and developers to know which rows are locked.
Justin -
Update all rows in a table which has 8-10 million rows take for ever
Hi All,
Greetings!
I have to update 8 million rows in a table. Basically, I have to reset the batch_id to the current batch number. The table contains 8-10 million rows, and even with a bulk update it takes a long time. Below is the table structure:
sales_stg (it has composite key of product,upc and market)
=======
product_id
upc
market_id
batch_id
process_status
I have to update batch_id and process_status, setting batch_id to the current batch_id (a number) and process_status to zero. I have to update all the rows with these values where batch_id = 0.
I tried a bulk update and it takes more than 2 hrs (I limit each batch to 1000 rows).
Any help in this regard is appreciated.
Naveen.
The fastest way will probably be to not use a select loop but a direct update like in William's example. The main downside is that if you do too many rows you risk filling up your rollback/undo; to keep things as simple as possible I wouldn't do batching except for this. Also, we did some insert timings a few years ago on 9iR1 and found that the performance curve on frequent commits started to level off after 4K rows (fewer commits were still better), so you could see how performance improves by performing fewer commits if that's an issue.
The other thing you could consider if you have the license is using the parallel query option. -
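A sketch of what that direct update could look like for the SALES_STG table described above. The PARALLEL hint assumes you are licensed for and have enabled parallel DML, and the current batch id is a bind variable here:

```sql
-- Optional, license permitting: let the update run in parallel
ALTER SESSION ENABLE PARALLEL DML;

-- One set-based statement instead of a select loop with batched commits
UPDATE /*+ PARALLEL(s 4) */ sales_stg s
   SET s.batch_id       = :current_batch_id,
       s.process_status = 0
 WHERE s.batch_id = 0;

COMMIT;
```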
Data update for 76 million rows
Hello All,
We have added a new column to one of our tables and we need to populate it. The problem is that the table has 76 million rows. If I issue the update command to populate it, it will take approximately 120 hrs to complete. Please let me know the best way of doing this. Can I run it in batches, applying a commit in between?
Thanks.
It'd be something like this:
DECLARE
V_QRY VARCHAR2(10000);
V_COMMIT_RANGE INTEGER;
BEGIN
V_COMMIT_RANGE := 10000; -- Change this according with your environment
-- Prevents exceeding rollback segments.
LOOP
V_QRY := 'UPDATE transaction_fact a '
|| ' SET a.start_time = (SELECT TO_DATE ( TO_CHAR (:v_year , ''fm0000'') '
|| ' || TO_CHAR (:v_month , ''fm00'') '
|| ' || TO_CHAR (:v_day_of_month, ''fm00''), '
|| ' ''YYYYMMDD'' '
|| ' ) '
|| ' FROM TIME '
|| ' WHERE TIME.time_id = a.time_id) '
|| ' WHERE a.start_time IS NULL '
|| ' AND ROWNUM <= ' || V_COMMIT_RANGE;
EXECUTE IMMEDIATE V_QRY
USING YEAR
, MONTH
, day_of_month;
EXIT WHEN SQL%ROWCOUNT = 0;
COMMIT;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('no content');
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
Assumptions made:
a) YEAR, MONTH and day_of_month are all variables;
b) a.start_time has null values for all rows (or all you wish to update);
Although this will do the job, it might not be as performant as you need it to be. So, if your Oracle version allows you to rename tables, you should consider looking into Walter's post (if you haven't already).
All the best.. -
How to tune the Update statement for 20 million rows
Hi,
I want to update 20 million rows of a table. I wrote the PL/SQL code like this:
DECLARE
v1
v2
cursor C1 is
select ....
BEGIN
Open C1;
loop
fetch C1 bulk collect into v1, v2 LIMIT 1000;
exit when C1%NOTFOUND;
forall i in v1.first..v1.last
update /*+INDEX(tab indx)*/....
end loop;
commit;
close C1;
END;
The above code took 24 mins to update 100k records, so for around 20 million records it will take 4800 mins (80 hrs).
How can I tune the code further ? Will a simple Update statement, instead of PL/SQL make the update faster ?
Will adding few more hints help ?
Thanks for your suggestions.
Regards,
Yogini Joshi
Hello
You have implemented this update in the slowest possible way. Cursor FOR loops should be an absolute last resort. If you post the SQL in your cursor there is a very good chance we can re-code it as a single update statement with a subquery, which will be the fastest possible way to run this. Please remember to use the {noformat}{noformat} tags before and after your code so the formatting is preserved.
David -
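Since the cursor SQL was not posted, the single-statement shape being suggested can only be sketched with illustrative names; something like:

```sql
-- One set-based UPDATE with a subquery replaces the whole
-- OPEN / FETCH BULK COLLECT / FORALL loop
UPDATE target_tab t
   SET t.some_col = (SELECT s.new_value
                       FROM source_tab s
                      WHERE s.id = t.id)
 WHERE EXISTS (SELECT 1
                 FROM source_tab s
                WHERE s.id = t.id);
COMMIT;
```

The WHERE EXISTS clause restricts the update to rows that actually have a match, so unmatched rows are not set to NULL.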
Is there a way to BULK COLLECT with FOR UPDATE and not lock ALL the rows?
Currently, we fetch a cursor on a few million rows using BULK COLLECT.
In a FORALL loop, we update the rows.
What is happening now, is that we run this procedure at the same time, and there is another session running a MERGE statement on the same table, and a DEADLOCK is created between them.
I'd like to add to the cursor the FOR UPDATE clause, but from what i've read,
it seems that this will cause ALL the rows in the cursor to become locked.
This is a problem, as the other session is running MERGE statements on the table every few seconds, and I don't want it to fail with ORA-00054 (resource busy).
What I would like to know is if there is a way, that only the rows in the
current bulk will be locked, and all the other rows will be free for updates.
To reproduce this problem:
1. Create test table:
create table TEST_TAB
(
  ID1           VARCHAR2(20),
  ID2           VARCHAR2(30),
  LAST_MODIFIED DATE
);
2. Add rows to test table:
insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
values ('416208000770698', '336015000385349', to_date('15-11-2009 07:14:56', 'dd-mm-yyyy hh24:mi:ss'));
insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
values ('208104922058401', '336015000385349', to_date('15-11-2009 07:11:15', 'dd-mm-yyyy hh24:mi:ss'));
insert into TEST_TAB (ID1, ID2, LAST_MODIFIED)
values ('208104000385349', '336015000385349', to_date('15-11-2009 07:15:13', 'dd-mm-yyyy hh24:mi:ss'));
3. Create test procedure:
CREATE OR REPLACE PROCEDURE TEST_PROC IS
TYPE id1_typ is table of TEST_TAB.ID1%TYPE;
TYPE id2_typ is table of TEST_TAB.ID2%TYPE;
id1_arr id1_typ;
id2_arr id2_typ;
CURSOR My_Crs IS
SELECT ID1, ID2
FROM TEST_TAB
WHERE ID2 = '336015000385349'
FOR UPDATE;
BEGIN
OPEN My_Crs;
LOOP
FETCH My_Crs bulk collect
INTO id1_arr, id2_arr LIMIT 1;
Forall i in 1 .. id1_arr.COUNT
UPDATE TEST_TAB
SET LAST_MODIFIED = SYSDATE
where ID2 = id2_arr(i)
and ID1 = id1_arr(i);
dbms_lock.sleep(15);
EXIT WHEN My_Crs%NOTFOUND;
END LOOP;
CLOSE My_Crs;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20000,
'Test Update ' || SQLCODE || ' ' || SQLERRM);
END TEST_PROC;
4. Create another procedure to check if table rows are locked:
create or replace procedure check_record_locked(p_id in TEST_TAB.ID1%type) is
cursor c is
select 'dummy'
from TEST_TAB
WHERE ID2 = '336015000385349'
and ID1 = p_id
for update nowait;
e_resource_busy exception;
pragma exception_init(e_resource_busy, -54);
begin
open c;
close c;
dbms_output.put_line('Record ' || to_char(p_id) || ' is not locked.');
rollback;
exception
when e_resource_busy then
dbms_output.put_line('Record ' || to_char(p_id) || ' is locked.');
end check_record_locked;
5. in one session, run the procedure TEST_PROC.
6. While it's running, in another session, run this block:
begin
check_record_locked('208104922058401');
check_record_locked('416208000770698');
check_record_locked('208104000385349');
end;
7. you will see that all records are identified as locked.
Is there a way that only 1 row will be locked, and the other 2 will be unlocked?
Thanks,
Yoni.
I don't have database access on weekends (look at this as a template), but suppose you
create table help_iot
(bucket number,
 id1 varchar2(20),
 constraint help_iot_pk primary key (bucket, id1))
organization index;
Not very sure about the create table syntax above.
declare
  maximal_bucket number := 10000; -- will update a few hundred rows at a time if you must update a few million rows
  the_sysdate date := sysdate;
begin
  execute immediate 'truncate table help_iot';
  insert into help_iot
  select ntile(maximal_bucket) over (order by id1) bucket, id1
    from test_tab
   where id2 = '336015000385349';
  for i in 1 .. maximal_bucket
  loop
    -- lock just this bucket's rows
    for r in (select id1, id2, last_modified
                from test_tab
               where id2 = '336015000385349'
                 and id1 in (select id1
                               from help_iot
                              where bucket = i)
                 for update of last_modified)
    loop
      null;
    end loop;
    update test_tab
       set last_modified = the_sysdate
     where id2 = '336015000385349'
       and id1 in (select id1
                     from help_iot
                    where bucket = i);
    commit;
    dbms_lock.sleep(15);
  end loop;
end;
Regards
Etbin
introduced the_sysdate if last_modified must be the same for all updated rows
Edited by: Etbin on 29.11.2009 16:48 -
Update Statement against 1.4 million rows
Hi,
I am trying to execute and update statement against a table with over a million rows in it.
NAME Null? Type
PR_ID NOT NULL NUMBER(12,0)
PR_PROP_CODE NOT NULL VARCHAR2(180)
VALUE CLOB(4000)
SHRT_DESC VARCHAR2(250)
VAL_CHAR VARCHAR2(500)
VAL_NUM NUMBER(12,0)
VAL_CLOB CLOB(4000)
UNIQUE_ID NUMBER(12,0)
The update I am trying to do is to take the column VALUE and, based on some parameters, update one of three columns. When I run the SQL it just sits there with no error. I gave it 24 hours before killing the process.
UPDATE PR.PR_PROP_VAL PV
SET PV.VAL_CHAR = (
SELECT a.value_char FROM
(select
ppv.unique_id,
CASE ppv.pr_prop_code
WHEN 'BLMBRG_COUNTRY' THEN to_char(ppv.value)
WHEN 'BLMBRG_INDUSTRY' THEN to_char(ppv.value)
WHEN 'BLMBRG_TICKER' THEN to_char(ppv.value)
WHEN 'BLMBRG_TITLE' THEN to_char(ppv.value)
WHEN 'BLMBRG_UID' THEN to_char(ppv.value)
WHEN 'BUSINESSWIRE_TITLE' THEN to_char(ppv.value)
WHEN 'DJ_EUROASIA_TITLE' THEN to_char(ppv.value)
WHEN 'DJ_US_TITLE' THEN to_char(ppv.value)
WHEN 'FITCH_MRKT_SCTR' THEN to_char(ppv.value)
WHEN 'ORIGINAL_TITLE' THEN to_char(ppv.value)
WHEN 'RD_CNTRY' THEN to_char(ppv.value)
WHEN 'RD_MRKT_SCTR' THEN to_char(ppv.value)
WHEN 'REPORT_EXCEP_FLAG' THEN to_char(ppv.value)
WHEN 'REPORT_LANGUAGE' THEN to_char(ppv.value)
WHEN 'REUTERS_RIC' THEN to_char(ppv.value)
WHEN 'REUTERS_TITLE' THEN to_char(ppv.value)
WHEN 'REUTERS_TOPIC' THEN to_char(ppv.value)
WHEN 'REUTERS_USN' THEN to_char(ppv.value)
WHEN 'RSRCHDIRECT_TITLE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_TITLE' THEN to_char(ppv.value)
WHEN 'SUMMIT_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'SUMMIT_TOPIC' THEN to_char(ppv.value)
WHEN 'SUMNET_EMAIL_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_EMAIL_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_BODY_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_TITLE' THEN to_char(ppv.value)
WHEN 'XPEDITE_FAX_TITLE_FONT_SIZE' THEN to_char(ppv.value)
WHEN 'XPEDITE_TOPIC' THEN to_char(ppv.value)
END value_char
from pr.pr_prop_val ppv
where ppv.pr_prop_code not in
('BLMBRG_BODY','ORIGINAL_BODY','REUTERS_BODY','SUMMIT_FAX_BODY',
'XPEDITE_EMAIL_BODY','XPEDITE_FAX_BODY','PR_DISCLOSURE_STATEMENT', 'PR_DISCLAIMER')
) a
WHERE
a.unique_id = pv.unique_id
AND a.value_char is not null
Thanks for any help you can provide.
Graham
What about this:
UPDATE pr.pr_prop_val pv
SET pv.val_char = TO_CHAR(pv.value)
WHERE pv.pr_prop_code IN ('BLMBRG_COUNTRY', 'BLMBRG_INDUSTRY', 'BLMBRG_TICKER', 'BLMBRG_TITLE', 'BLMBRG_UID', 'BUSINESSWIRE_TITLE',
'DJ_EUROASIA_TITLE', 'DJ_US_TITLE', 'FITCH_MRKT_SCTR', 'ORIGINAL_TITLE', 'RD_CNTRY', 'RD_MRKT_SCTR',
'REPORT_EXCEP_FLAG', 'REPORT_LANGUAGE', 'REUTERS_RIC', 'REUTERS_TITLE', 'REUTERS_TOPIC', 'REUTERS_USN',
'RSRCHDIRECT_TITLE', 'SUMMIT_FAX_BODY_FONT_SIZE', 'SUMMIT_FAX_TITLE', 'SUMMIT_FAX_TITLE_FONT_SIZE',
'SUMMIT_TOPIC', 'SUMNET_EMAIL_TITLE', 'XPEDITE_EMAIL_TITLE', 'XPEDITE_FAX_BODY_FONT_SIZE', 'XPEDITE_FAX_TITLE',
'XPEDITE_FAX_TITLE_FONT_SIZE', 'XPEDITE_TOPIC')
AND pv.value IS NOT NULL -
Re: Transactions and Locking Rows for Update
Dale,
Sounds like you either need an "optimistic locking" scheme, usually
implemented with timestamps at the database level, or a concurrency manager.
A concurrency manager registers objects that may be of interest to multiple
users in a central location. It takes care of notifying interested parties
(i.e., clients,) of changes made to those objects, using a "notifier" pattern.
The optimistic locking scheme is relatively easy to implement at the
database level, but introduces several problems. One problem is that the
first person to save their changes "wins" - every one else has to discard
their changes. Also, you now have business policy effectively embedded in
the database.
The concurrency manager is much more flexible, and keeps the policy where
it probably belongs. However, it is more complex, and there are some
implications to performance when you get to the multiple-thousand-user
range because of its event-based nature.
Another pattern of lock management that has been implemented is a
"key-based" lock manager that does not use events, and may be more
effective at managing this type of concurrency for large numbers of users.
There are too many details to go into here, but I may be able to give you
more ideas in a separate note, if you want.
Don
At 04:48 PM 6/5/97 PDT, Dale "V." Georg wrote:
I have a problem in the application I am currently working on, which it
seems to me should be easily solvable via appropriate use of transactions
and database locking, but I'm having trouble figuring out exactly how to
do it. The database we are using is Oracle 7.2.
The scenario is as follows: We have a window where the user picks an
object from a dropdown list. Some of the object's attributes are then
displayed in that window, and the user then has the option of editing
those attributes, and at some point hitting the equivalent of a 'save' button
to write the changes back to the database. So far, so good. Now
introduce a second user. If user #1 and user #2 both happen to pull up
the same object and start making changes to it, user #1 could write back
to the database and then 15 seconds later user #2 could write back to the
database, completely overlaying user #1's changes without ever knowing
they had happened. This is not good, particularly for our application
where editing the object causes it to progress from one state to the next,
and multiple users trying to edit it at the same time spells disaster.
The first thing that came to mind was to do a select with intent to update,
i.e. 'select * from table where key = 'somevalue' with update'. This way
the next user to try to select from the table using the same key would not
be able to get it. This would prevent multiple users from being able to
pull the same object up on their screens at the same time. Unfortunately,
I can think of a number of problems with this approach.
For one thing, the lock is only held for the duration of the transaction, so
I would have to open a Forte transaction, do the select with intent to
update, let the user modify the object, then when they saved it back again
end the transaction. Since a window is driven by the event loop I can't
think of any way to start a transaction, let the user interact with the
window, then end the transaction, short of closing and re-opening the
window. This would imply having a separate window specifically for
updating the object, and then wrapping the whole of that window's event
loop in a transaction. This would be a different interface than we wanted
to present to the users, but it might still work if not for the next issue.
The second problem is that we are using a pooled DBSession approach
to connecting to the database. There is a single Oracle login account
which none of the users know the password to, and thus the users
simply share DBSession resources. If one user starts a transaction
and does a select with intent to update on one DBSession, then another
user starts a transaction and tries to do the same thing on the same
DBSession, then the second user will get an error out of Oracle because
there's already an open transaction on that DBSession.
At this point, I am still tossing ideas around in my head, but after
speaking with our Oracle/Forte admin here, we came to the conclusion
that somebody must have had to address these issues before, so I
thought I'd toss it out and see what came back.
Thanks in advance for any ideas!
Dale V. Georg
Indus Consultancy Services [email protected]
Mack Trucks, Inc. [email protected]
>
>
>
>
====================================
Don Nelson
Senior Consultant
Forte Software, Inc.
Denver, CO
Corporate voice mail: 510-986-3810
aka: [email protected]
====================================
"I think nighttime is dark so you can imagine your fears with less
distraction." - Calvin
We have taken an optimistic data locking approach. Retrieved values are
stored as initial values; changes are stored separately. During update, the key
value(s) or the entire retrieved set is used in the where criteria to validate
that the data set is still in the initial state. This allows good decoupling
of the data access layer. However, optimistic locking allows multiple users
to access the same data set at the same time, but then only one can save
changes, the rest would get an error message that the data had changed. We
haven't had any need to use a pessimistic lock.
Pessimistic locking usually involves some form of open session or DBMS level
lock, which we haven't implemented for performance reasons. If we do find the
need for a pessimistic lock, we will probably use cached data sets that are
checked first, and returned as read-only if already in the cache.
-DFR
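The "initial values in the where criteria" check described above can be sketched like this (illustrative names; the :old_* binds hold the values as originally retrieved):

```sql
UPDATE invoice
   SET status = :new_status,
       amount = :new_amount
 WHERE invoice_id = :key
   AND status     = :old_status
   AND amount     = :old_amount;

-- Zero rows updated means the data changed since it was read:
-- raise the "data has changed" error instead of overwriting.
```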
The App Store says an update is available for an iPhone app. When I try to install it, I get the message "the item you are trying to buy is no longer available" for three apps in a row. Even when I go to the specific app and press Update, the update begins to install and then aborts with this error message. iPhone 3GS running iOS 5.1.1. There is also a very long delay after each keystroke while typing this message.
That error message is usually indicative of a problem with the App Store. If you look through the forums, you'll see that a number of people are experiencing the problem. Try again tomorrow.
-
How to unlock a row locked with the FOR UPDATE clause
If a procedure uses the FOR UPDATE clause, it locks the selected row so that only one client can update it, while other clients can only read that row in the meantime.
My question is: when is the row unlocked, and what should we do in the procedure to unlock it? In the example below, the cursor uses FOR UPDATE OF CLIENT_COUNT; when will that row be unlocked?
create or replace PROCEDURE newprocedur(inMerid IN VARCHAR2, outCount OUT NUMBER) AS
  CURSOR c1 IS
    SELECT CLIENT_COUNT
      FROM OP_TMER_CONF_PARENT
     WHERE MER_ID = inMerid
       FOR UPDATE OF CLIENT_COUNT;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO outCount;
    EXIT WHEN c1%NOTFOUND;
    outCount := outCount + 1;
    UPDATE OP_TMER_CONF_PARENT
       SET CLIENT_COUNT = outCount
     WHERE CURRENT OF c1;
  END LOOP;
  CLOSE c1;
END;

Hi,
Basically you are incrementing CLIENT_COUNT by 1. Why fetch the rows one by one and update them individually? You could do it in a single update:
UPDATE OP_TMER_CONF_PARENT
   SET CLIENT_COUNT = CLIENT_COUNT + 1
 WHERE MER_ID = inMerid;
This will increment CLIENT_COUNT by one for every row with the given MER_ID.
To answer the locking question: the row stays locked until the transaction ends. Issue a COMMIT (or ROLLBACK) after the update; that releases the lock and makes your changes visible to other users.
To lock the rows before the update you can use the same SELECT statement as in your cursor:
SELECT CLIENT_COUNT
  FROM OP_TMER_CONF_PARENT
 WHERE MER_ID = inMerid
   FOR UPDATE OF CLIENT_COUNT;
You can further modify the procedure (for example with FOR UPDATE NOWAIT) so that other users are told immediately when the row is already being updated.
Regards
Yoonas
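Putting the suggestions above together, a minimal rewrite of the procedure could look like this. This is a sketch, not the original poster's code, and it assumes MER_ID identifies a single row (RETURNING ... INTO raises an error if the UPDATE touches more than one row):

```sql
CREATE OR REPLACE PROCEDURE newprocedur(inMerid  IN  VARCHAR2,
                                        outCount OUT NUMBER) AS
BEGIN
  -- One statement takes the row lock, increments the counter,
  -- and reads back the new value.
  UPDATE OP_TMER_CONF_PARENT
     SET CLIENT_COUNT = CLIENT_COUNT + 1
   WHERE MER_ID = inMerid
  RETURNING CLIENT_COUNT INTO outCount;

  COMMIT;  -- ends the transaction and releases the row lock
END;
```

If the caller should control the transaction instead, drop the COMMIT and commit from the calling code; the lock is held until that commit happens.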