Lock table in share mode

Hi ...
What is the benefit of explicitly locking a table using the SQL command
"lock table <table_name> in share mode"?
Is there any performance gain?
code....
==================
*Here, we lock the table in share mode
EXEC SQL
LOCK TABLE GLDM IN SHARE MODE
END-EXEC.
EXEC SQL
LOCK TABLE GLDV IN SHARE MODE
END-EXEC.
EXEC SQL DECLARE D_ACCOUNT_CURSOR CURSOR FOR
SELECT /*+ FIRST_ROWS */
===================================================

As others have pointed out, this type of locking should be unnecessary. Unless there is documentation explaining why the locks are there, we can only guess the intention. Here are a couple of thoughts:
a) The cursor select is used to create results that are copied from one table to another (e.g. the query is against GLDM and the results are used to update GLDV in a loop) - a lock on the table being updated might then be seen as necessary to ensure that no other process updates it in a conflicting fashion whilst the loop is running - thus avoiding problems of inconsistent results, or locking/deadlocking issues.
b) If the cursor loop takes a long time to run then (for read consistency purposes) the session has to keep generating clones of blocks that have changed since the query started. So the table locking may have been introduced as a way of reducing this cloning work by stopping any other changes. This could be seen as a performance enhancer - at a cost of blocking anyone else who wants to update the tables.
c) For reasons similar to (b) it may have been introduced to reduce the risk of Oracle error ORA-01555 (Snapshot too old).
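As a rough illustration of what the share lock changes in practice (post_flag is just a made-up column for the example):
-- Session 1: take the share lock, as the embedded SQL above does
LOCK TABLE gldm IN SHARE MODE;
-- Session 2: queries are unaffected ...
SELECT COUNT(*) FROM gldm;
-- ... but any DML against GLDM now waits until session 1 commits or rolls back
UPDATE gldm SET post_flag = post_flag WHERE ROWNUM = 1;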
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk

Similar Messages

  • Locked table by an insert

    Hi,
sometimes an insert locks the whole table concerned; this transaction always comes from the web.
    Any idea?
    regards.

    Here are some questions:
    How can the table be locked, preventing DML by other users?
    1) Is your application or users issuing SELECT FOR UPDATE commands
    2) Is your application or users issuing LOCK TABLE IN EXCLUSIVE MODE
    3) Is referential integrity being checked on foreign keys that do not have an index? If so, shared locks can be issued against the table limiting DML execution until the RI checks are complete.
    Are you sure there is a table lock?
    1) What are users waiting on when this problem occurs? Check v$session_wait for this. Also join v$session and v$lock on SID to see who has the lock(s) blocking the table.
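    A rough starting point for that check (a sketch only; assumes you can query the V$ views):
    -- Who is waiting, and on what?
    SELECT sid, event, seconds_in_wait
      FROM v$session_wait;
    -- Who holds a blocking lock, and who is waiting for one?
    SELECT s.sid, s.username, s.program, l.type, l.lmode, l.request, l.block
      FROM v$session s, v$lock l
     WHERE l.sid = s.sid
       AND (l.block = 1 OR l.request > 0);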

  • Best way to Lock a Table in exclusive mode ?

    Hi
I have a procedure that updates a record in a table. This table is accessed by many workstations, and these stations update it several times. On average this table is updated 900,000 times a day. My Oracle version is 9.02.
    In this table there are no deletes.
    The records that are in this table are inserted by other process that is working ok.
My procedure does this:
    Receive some parameters ( param1 , param2, param3)
    IF condition = true THEN
    LOCK TABLE my_table IN EXCLUSIVE MODE;
    SELECT my_table_id INTO v_my_table_id FROM my_table WHERE available = 'Y' AND rownum = 1;
UPDATE my_table SET my_date = sysdate, my_field1 = param1, available = 'N' WHERE my_table_id = v_my_table_id;
    COMMIT;
The problem here is that sometimes this process is very slow, the lock on the table remains for a long time, and then suddenly the table is released.
I reviewed the CPU of the server where the database is, and it is normal.
This process has been working for 2 years and is now failing.
Could you help me to know if there is a better way to do this process?
I can't understand this behaviour, since it is a simple transaction.
    I really appreciate your help
    Lorein

    A couple of cents from my side.
As Daniel and Justin indicated, you are abusing Oracle severely with your approach. You are forcing serialisation - only a single process at a time can get an available row from the table. If there are 100 workstations looking for work, only one at a time can get a "work available" row from the table.
This has nothing to do with Oracle, or Oracle's locking, but everything to do with how you designed your code to get a "work available" row.
    The basic problem is that
    a) you do not care which specific row to lock, you only want any single random row with "work available"
    b) Oracle's locking is designed to lock a row, or set of rows, as specified by explicit user instruction
This is contradictory. You do not explicitly state which single row to lock. You lock ALL rows with "work available" - the where rownum=1 does NOT lock only the first row. That criterion is MEANINGLESS when Oracle SELECTs the rows to process (and lock). The rownum criterion is only applied AFTERWARDS... which means after the locking has happened.
    There are a couple of alternative and better performing approaches that you can use instead.
One that springs to mind is a form of optimistic locking - but instead of being optimistic about the locking part, you're optimistic about selecting an unlocked row.
    Basic pseudo code:
    1. select the rowid of a random row from "work available" rows (using DBMS_RANDOM, SAMPLE or similar techniques - and NOT rownum!)
    2. attempt to lock that rowid using NO WAIT
    3. if the lock fails, re-start at step 1
    4. random row has been found and locked, proceed to process and update
5. commit
    The optimistic part is steps 1 and 2 - the assumption is that doing a single row random select will most of the time select a row that is not locked, allowing multiple processes at the same time to select and lock and process different rows.
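    A minimal PL/SQL sketch of that pseudo code, reusing the table and column names from the post (the random-pick query and the retry loop are just one possible way to implement steps 1-3; a real version would also cap the retries and handle "no work at all"):
    DECLARE
      resource_busy EXCEPTION;
      PRAGMA EXCEPTION_INIT(resource_busy, -54);     -- ORA-00054: resource busy
      v_rowid  ROWID;
      v_id     my_table.my_table_id%TYPE;
      param1   my_table.my_field1%TYPE := NULL;      -- stands in for the procedure parameter
    BEGIN
      LOOP
        BEGIN
          -- 1. pick one random candidate row (the rownum here only trims the
          --    already randomised, non-locking select)
          SELECT rid INTO v_rowid
            FROM (SELECT rowid AS rid
                    FROM my_table
                   WHERE available = 'Y'
                   ORDER BY dbms_random.value)
           WHERE ROWNUM = 1;
          -- 2. try to lock just that one row, without waiting
          SELECT my_table_id INTO v_id
            FROM my_table
           WHERE rowid = v_rowid
             AND available = 'Y'
             FOR UPDATE NOWAIT;
          EXIT;                                      -- 4. locked, go and process it
        EXCEPTION
          WHEN resource_busy THEN NULL;              -- 3. someone beat us to it, retry
          WHEN no_data_found THEN NULL;              -- row taken meanwhile, retry
        END;
      END LOOP;
      UPDATE my_table
         SET my_date = SYSDATE, my_field1 = param1, available = 'N'
       WHERE my_table_id = v_id;
      COMMIT;                                        -- 5. commit releases the row lock
    END;
    /
    On later Oracle releases the same "grab any available work row" pattern is usually handled with SELECT ... FOR UPDATE SKIP LOCKED or with Advanced Queuing rather than a hand-rolled retry loop.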

  • Lock Table and permit only SELECT-Statement

    Hi all,
Can I lock a table and permit only SELECT statements on it?
    Regards
    Leonid

    Hi Kamal,
I would like to configure it in such a way that while my SELECT statement runs, another user cannot insert data into the table.
If possible, I would even like the lock to work in such a way that the other user's rows go into a buffer, and once the table is unlocked, all the buffered rows are written into the table.
I do it like this:
SQL script script_test.sql:
    set echo off
    set verify off
    set pagesize 0
    set termout off
    set heading off
    set feedback off
    lock table mytable in share row exclusive mode;
    spool c:\Temp\script_info.lst
    select id||'|'||dae||'|'||name||'|'||name1||'|'||hiredate
    ||'|'||street||'|'||nr||'|'||plznum||'|'||city
    ||'|'||email||'|'||telephon||'|'||cddfeas||'|'||number
    ||'|'||why||'|'||fgldwer||'|'||wahl||'|'||adress
    ||'|'||las
    from mytable
where las is null;
    spool off
    spool c:\Temp\select_from_all_tables.lst
    select *
    from all_tables
order by owner;
    spool off
    spool c:\Temp\select1_from_all_tables.lst
    select *
    from all_tables
order by owner;
    spool off
    update mytable
    set las = 'x';
    commit;
    set feedback on
    set heading on
    set termout on
    set pagesize 24
    set verify on
    set echo on
    Afterwards I start another session:
    insert into briefwahl
    values(38,'11.06.2003 09:37','Test','Test','01.01.1990',
    'Test','12','90000','Test',
    '[email protected]','12345657','test','123',
    'test','test','test','test',
    null);
Then I go into the first session and start the script. I immediately go into the second session and do a commit. And although I have the table locked, the new rows also end up in the spooled .lst file. Why, I do not understand. And all rows in the table get updated.
    Regadrs
    Leonid
P.S. Sorry for my English. I wrote this with a translator.

  • Lock table: How to notify others

    Hi,
I've created a stored procedure that inserts into a table.
The first and last statements of the BEGIN...END part of the stored procedure lock the table successfully.
    BEGIN
          LOCK TABLE MY_TABLE IN EXCLUSIVE MODE NOWAIT;
            COMMIT;
         LOCK TABLE MY_TABLE IN SHARE MODE NOWAIT;
    EXCEPTION
         WHEN OTHERS THEN
               dbms_output.PUT_LINE(SQLERRM);
              ROLLBACK;
    END;
/
The purpose is to lock the table so that if another user accidentally tries to execute the script s/he won't be able to modify the data in the table.
However, if the procedure is already running when another user tries to execute it, the 2nd user simply sees the procedure terminate very quickly (it normally takes around 15-20 minutes based on the data). There is no message that the table is locked or that the procedure is in progress in another session.
    How can I notify the 2nd user that the procedure is already in progress from another user or that the table is locked? What should I put in the EXCEPTION part maybe to do that?
    Thank you in advance.
    Regards,
    John.

> PRAGMA EXCEPTION_INIT(already_lock, -54); is the value 54 meant for the in-built exception "table locked"? If so, even without having a custom exception, Oracle by itself would have thrown it in the second session, right??? Please do correct me if I am wrong.
It is -54, and yes it is.
> What will be the impact if I have a WAIT instead of NOWAIT? Will my second session just be on hold, waiting for the lock to be released???
The second session will wait... but WAIT doesn't exist, just write lock table t3 IN EXCLUSIVE MODE ;
I advise you to read this for the LOCK command : http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_914a.htm#2064408
and this for exceptions : http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/07_errs.htm#784
Nicolas.
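    Putting the pieces of this exchange together, a short sketch of the idea (the table name and message text are only placeholders):
    DECLARE
      already_locked EXCEPTION;
      PRAGMA EXCEPTION_INIT(already_locked, -54);    -- ORA-00054: resource busy
    BEGIN
      LOCK TABLE my_table IN EXCLUSIVE MODE NOWAIT;
      -- ... the 15-20 minute load runs here ...
      COMMIT;                                        -- the commit releases the lock
    EXCEPTION
      WHEN already_locked THEN
        RAISE_APPLICATION_ERROR(-20001,
          'MY_TABLE is locked - the load procedure is already running in another session.');
    END;
    /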

  • MM42 change material, split valuation at batch level, M301, locking table

    Dear All,
I'm working on ECC 6.0 Retail and I have activated split valuation at batch level. Now in MBEW I have almost 14,400 entries for this specific material.
If I try to change some material data (MM42) I receive the error message M3021 "A system error has occurred while locking" and then "Lock table overflow".
I used SM12 to look at the lock entries (while MM42 is still running) and it seems that MBEW is the problem.
What should I do? Does the system have to modify every entry in MBEW for any material modification? Is there any possibility to skip this?
    Thank you.

    Hi,
    Symptom
    Key word: Enqueue
    FM: A system error has occurred in the block handler
    Message in the syslog: lock table overflowed
    Other terms
    M3021 MM02 F5 288 F5288 FBRA
    Reason and Prerequisites
    The lock table has overflowed.
    Cause 1: Dimensions of the lock table are too small
    Cause 2: The update lags far behind or has shut down completely, so that the lock entries of the update requests that are not yet updated cause the lock table to overflow.
    Cause 3: Poor design of the application programs. A lock is issued for each object in an application program, for example a collective run with many objects.
    Solution
    Determine the cause:
    SM12 -> Goto -> Diagnosis (old)
    SM12 -> Extras -> Diagnosis (new)
    checks the effectiveness of the lock management
    SM12 -> Goto -> Diagnosis in update (old)
    SM12 -> Extras -> Diagnosis in update (new)
    checks the effectiveness of the lock management in conjunction with updates
    SM12 -> OkCode TEST -> Error handling -> Statistics (old, only in the enqueue server)
    SM12 -> Extras -> Statistics (new)
    shows the statistics of the lock management, including the previous maximum fill levels (peak usage) of the partial tables in the lock table
    If the owner table overflows, cause 2 generally applies.
    In the alert monitor (RZ20), an overrunning of the (customizable) high-water marks is detected and displayed as an alert reason.
The size of the lock table can be set with the profile parameter "enque/table_size =", which specifies the size of the lock table in kilobytes. The setting must be made in the profile of the enqueue server (..._DVEBM..). The change only takes effect after a restart of the enqueue server.
    The default size is 500 KB in the Rel 3.1x implementation of the enqueue table. The resulting sizes for the individual tables are:
    Owner table: approx 560.
    Name table: approx 560.
    Entry table: approx 2240.
    As of Rel 4.xx the new implementation of the lock table takes effect.
    It can also be activated as described in note 75144 for the 3.1I kernel. The default size is 2000 KB. The resulting sizes for the individual tables are:
    Owner table: approx 5400
    Name table: approx 5400
    Entry table: approx 5400
Example: with the "enque/table_size = 32000" profile parameter, the size of the enqueue table is set to 32000 KB. The tables can then have approx 40,000 entries.
    Note that the above sizes and numbers depend on various factors such as the kernel release, patch number, platform, address length (32/64-bit), and character width (Ascii/Unicode). Use the statistics display in SM12 to check the actual capacity of the lock table.
    If cause 2 applies, an enlargement of the lock table only delays the overflow of the lock table, but it cannot generally be avoided.
    In this case you need to eliminate the update shutdown or accelerate the throughput of the update program using more update processes. Using CCMS (operation modes, see training BC120) the category of work processes can be switched at runtime, for example an interactive work process can be converted temporarily into an update process, to temporarily increase the throughput of the update.
For cause 3, you should consider tuning the task function. Instead of issuing a large number of individual locks, it may be better to use generic locks (wildcards) to lock a complete subarea. This will also considerably improve performance.

  • Lock table overflow - Delta (fetch)

    Hi,
I have an InfoCube which contains a large amount of data. Before starting to extract data from this InfoCube, I want to set the datamart status to "fetched" so that extractions start only with newly arriving requests.
    However, when I choose the processing mode of DTP as "9 No Data Transfer; Delta Status in Source: Fetched" and execute the DTP, it ends with an error: "Lock table overflow".
    Is there any way to solve this without increasing the lock table parameters (enque/table_size) ?
    Regards,
    Erdem

You first need to resolve the "Lock table overflow" issue, using the method below in RZ10.
Here you can solve the problem in two ways:
1) increase the size of the lock table via parameter
enque/table_size
2) or increase the enqueue work processes from 1 to 2 or 3
via parameter rdisp/wp_no_enq
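    As a sketch, the corresponding instance profile entries maintained via RZ10 would look like this (the values are only placeholders):
    enque/table_size = 32000
    rdisp/wp_no_enq = 2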
    Ask Basis team for their assistance on the same.
    Thanks
    Murali

  • Lock table Overflow as the file size is 50 MB BW side.

    Hello Everyone,
I have an XML IDoc file as input which is usually more than 50 MB in size.
Usually I am getting "Lock table overflow" at the receiver BW side. This error is pointing to Inbound_Asynchronous_Idoc.
I have tried dividing the input XML IDoc file into smaller groups by handling them in chunk mode of the sender communication channel.
However, since it is tRFC, if it gets processed in PI but there is a lock table overflow error on the outbound side, then it fails.
I have tried to process the 50 MB file in parts by processing 5 MB at a time, but does this mean that BW also processes the data in parts, or does it get the entire 50 MB to process in one stretch?
Since the input is IDoc XML, I was not able to make use of Recordsets per Message, so I am making use of chunk mode.
Am I doing this correctly?
    Regards,
    Ravi

    Hi Ravikanth,
If I make use of the logic mentioned in the link that you provided, do I then have to remove the chunk mode from the communication channel?
Secondly, mine is an SLSFCT IDoc XSD that I am using here as source and target as well.
The hierarchy becomes like this after implementing the logic mentioned in the link:
    Messages
    Message1
    Z1ZBSD_SLSFCT01
    IDOC
    BEGIN
    EDI_DC40
For Messages and Message1 there is no mapping on the target side.
For Z1ZBSD_SLSFCT01 it is 1..1 in the source and 0..unbounded in the target.
For IDOC it is mapped to a constant, and BEGIN to a constant with value 1.
And then EDIDC in source and target are mapped to each other with an occurrence of 1..1.
Is there something wrong that I am doing? Because after this, again the files are not getting divided.
    Regards

  • How to Insert a record in a database table in debugging mode in production

    Hi,
    How to Insert a record in a database table in debugging mode in production ?
    Waiting for kind response.
    Best Regards,
    Padhy
    Moderator Message : Duplicate post locked.
    Moderator message : Warning. Don't create multiple threads for same question.

    Hi Senthil,
    Regards,
    Phani Raj Kallur

  • Issue: Lock table is out of available object entries

    Hi all,
We have a method that adds records into BDB, and after there are more than 10000 records, if we continue adding records into BDB (for example, add 400 records) and then do another update/add operation on BDB, it fails.
The error message is "Lock table is out of available object entries".
How can we resolve it?
    Thanks.
    Jane.

First, the BDB stats are as below:
    1786 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    1959 Maximum number of locks at any one time
    126 Number of current lockers
    136 Maximum number of lockers at any one time
    26 Number of current lock objects
    1930 Maximum number of lock objects at any one time
    21M Total number of locks requested (21397151)
    21M Total number of locks released (21397099)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
Then I ran the method to insert 29 records into BDB; the BDB isn't locked yet, and the stats are:
    1794 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    1959 Maximum number of locks at any one time
    134 Number of current lockers
    136 Maximum number of lockers at any one time
    26 Number of current lock objects
    1930 Maximum number of lock objects at any one time
    22M Total number of locks requested (22734514)
    22M Total number of locks released (22734462)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
Then I ran the method again to insert records; the issue "Lock table is out of available locks" occurred, and the BDB stats are:
    1795 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    2000 Maximum number of locks at any one time
    135 Number of current lockers
    137 Maximum number of lockers at any one time
    27 Number of current lock objects
    1975 Maximum number of lock objects at any one time
    26M Total number of locks requested (26504607)
    26M Total number of locks released (26504553)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
Why does this issue occur, and how can it be resolved?
    Thanks very much.
    Jane

  • Lock table is out of available object entries

Hi,
I am using DB version 4.6.21.
I have created a table to which other applications write concurrently.
The table is opened with DB_THREAD and every application writes in DB_WRITE_CURSOR mode, and I am not using any locking subsystem; only READ_COMMITTED and DB_WRITE_CURSOR are used by the applications to access the table.
On a PC it works properly.
But on an AT91SAM9260EK with kernel 2.6.23.9, the Berkeley DB error
"Lock table is out of available object entries"
comes up... what would be the reason?

    Hi Ratheesh,
    Please search through the forum; similar locking subsystem configuration issues have already been discussed.
    In short, you'll need to increase the number of lock objects:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/lock/max.html
    I see you're using the DB_WRITECURSOR flag which is specific to CDS (Concurrent Data Store), so you should size the locking subsystem appropriately to CDS: the number of lock objects needed is two per open database (one for the database lock, and one for the cursor lock when the DB_CDB_ALLDB option is not specified). The locking subsystem configuration should be similar for all the processes accessing the environment, or not specified for the processes that just join the environment.
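    One way to apply that sizing (a sketch only; the numbers below are placeholders, to be chosen from your own db_stat output) is a DB_CONFIG file in the environment home directory:
    set_lk_max_objects 5000
    set_lk_max_locks   5000
    set_lk_max_lockers 5000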
    If you still see this error message reported, provide some information on your OS/platform, information on how the processes access the environment and the locking statistics (db_stat -N -Co -h <env_dir>).
    Regards,
    Andrei

  • Lock table is out of available lock entries

    Hi,
I'm using BDB 4.8 via Berkeley DB XML. I'm adding a lot of XML documents (ca. 1000) in one transaction and get "Lock table is out of available lock entries". My number of locks is set to 100000 (it's too much, but still...).
I know that I probably should not put so many docs in the same transaction, but why does BDB throw a "not enough locks" error? Aren't 100000 locks enough? (I also tried setting 1 million for testing purposes.)
As a side question, may I change the number of locks after environment creation (but before opening it)?
P.S. I hope this is not off-topic for this forum.
    Thanks in advance,
    Vyacheslav

    Hello,
    As you mention, "Lock table is out of available lock entries" indicates that there are more locks than your underlying database environment is configured for. Please take a look at the documentation on "Configuring locking: sizing the system" section of the Berkeley DB Reference Guide at:
http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/lock_max.html
    From there:
    The maximum number of locks required by an application cannot be easily estimated. It is possible to calculate a maximum number of locks by multiplying the maximum number of lockers, times the maximum number of lock objects, times two (two for the two possible lock modes for each object, read and write). However, this is a pessimal value, and real applications are unlikely to actually need that many locks. Reviewing the Lock subsystem statistics is the best way to determine this value.
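    For example, an environment configured for 1,000 lockers and 1,000 lock objects would have a pessimal bound of 1,000 x 1,000 x 2 = 2,000,000 locks - far more than most applications ever hold at once, which is why the statistics are the better guide.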
    What information is the lock subsystem statistics showing? You can get this with db_stat -c or programmatically with the environment lock_stat method.
    Thanks,
    Sandra

  • Lock tables when load data

Is there any way to lock tables when I insert data with SQL*Loader, or does Oracle do it for me automatically?
How can I do this?
    Thanks a lot for your help

Are there any problems if, in the middle of my load (and commits), a user updates or queries data?
The only problem that I see is that you may run short of undo space (rollback segment space) if your undo space is limited and the user is running a long SELECT query, for example: but this problem would only trigger ORA-1555 for the SELECT query or (less likely, since you have several COMMITs) ORA-16XX because the load transaction would not find enough undo space.
Data is not visible to other sessions unless the session which is loading the data commits it. That's the way Oracle handles the read committed isolation level for transactions.
See http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2689
Or what happens if, when I want to insert data, someone has the table busy?
You will get blocked if you try to insert data that has the same primary key as a row being inserted by a concurrent transaction.

  • System error: Unable to lock table/view V_T077K_M

    Hi,
I am not able to work on field selection (OMSG); I am getting the error "System error: Unable to lock table/view V_T077K_M". I checked in SM12 as well, but did not find anything there.
Please could you help me in this regard?

Contact your Basis team; there might be an overflow of the lock table because of mass changes by other users (even on other tables).

  • Timeout when inserting row in a locked table

How can I set the timeout before an INSERT statement fails with error ORA-02049 (timeout: distributed transaction waiting for lock) when the entire table has been locked with LOCK TABLE?
    Documentation says to modify DISTRIBUTED_LOCK_TIMEOUT parameter, but it is obsolete in Oracle 8i.
    Any idea ???

You could set an alarm() in a signal handler. Then on return (for whatever reason) you clear the alarm, inspect the return code of the SQL execute call and determine what happened (i.e. did the transaction complete or did the alarm get you).
    Hope it helps.
    -Lou
    [email protected]
