Truncate table lock issue

Hi,
    In my SQL 2000 stored procedure, there are 9 TRUNCATE TABLE commands.
     But the stored procedure takes over 15 minutes to execute.
     The TRUNCATE TABLE commands themselves are not the problem in this stored procedure.
     But an INSERT query into that table is very slow.
     The INSERT query selects the output of some views and inserts it into the table, and it is these SELECTs that are slow.
     I searched some forums. They said it is a locking problem on the object: query execution is delayed for an exclusively locked object.
     How can I remove the lock so that my stored procedure runs faster?

Here is the bad news: you need some understanding of indexing to get results out of this. But since we don't know anything about your queries, we cannot really give much better advice. What we can say is that TRUNCATE TABLE has nothing to
do with it. Probably not locking either. Locking would come from concurrent activity in the database. Is there any?
Normally we ask people to submit the code they have a problem with. In this case we would also need to see the view definitions. (Else we would still be in the dark.) And if the views are built on views, we need to see those as well. This may result in more
code than is practical to handle here, but I leave that up to you. As a hint: in the next step we are very likely to ask for the table and index definitions.
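As a first step you can check whether anything is actually blocked while the procedure runs. A minimal check for SQL Server 2000 (sp_who2 gives a similar picture):
   -- Sessions that are currently blocked, and who is blocking them
   SELECT spid, blocked, lastwaittype, waittime, cmd
   FROM   master..sysprocesses
   WHERE  blocked <> 0;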
Erland Sommarskog, SQL Server MVP, [email protected]

Similar Messages

  • Row locking issue with version enabled tables

    I've been testing the effect of locking in version-enabled tables in order to assess Workspace Manager restrictions when updating records in different workspaces. I have encountered a locking problem: I can't seem to update different records of the same table in different sessions if those same records have previously been updated and committed in another workspace.
    I'm running the tests on 11.2.0.3.  I have ROW_LEVEL_LOCKING set to ON.
    Here's a simple test case (I have many other test cases which fail as well but understanding why this one causes a locking problem will help me understand the results from my other test cases):
    --Change tablespace names as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    --update 2 records in a non-LIVE workspace in preparation for updating in different workspaces later
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    quit;
    --Now in a separate session (called session 1 for this example) run the following without committing the changes:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --Now in another session (session 2) update a different record from the same table.  The below update will hang waiting on the transaction in session 1 to complete (via commit/rollback):
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    I'm surprised that records with different ids can't be updated in different sessions, i.e. why does session 1 block the update of record 2, which is not being updated anywhere else? I've tried this using different non-LIVE workspaces with similar results. I've tried changing table properties, e.g. initrans, and still get a lock. The changes to table properties are successfully propagated to the _LT tables but not to all the related Workspace Manager tables created for table T1 above. I'm not sure if this is the issue.
    Note an example of the background workspace manager query that may create the lock is something like:
    UPDATE TESTWSM.T1_LT SET LTLOCK = WMSYS.LT_CTX_PKG.CHECKNGETLOCK(:B6 , LTLOCK, NEXTVER, :B3 , 0,'UPDATE', VERSION, DELSTATUS, :B5 ), NEXTVER = WMSYS.LT_CTX_PKG.GETNEXTVER(NEXTVER,:B4 ,VERSION,:B3 ,:B2 ,683) WHERE ROWID = :B1
    Any help with this will be appreciated.  Thanks in advance.

    Hi Ben,
    Thanks for your quick response.
    I've tested your suggestion and it does work with 2 workspaces, but the same problem is encountered when additional workspaces are created.
    It seems that if multiple workspaces are used in a multi-user environment, locks will be inevitable, which will degrade performance, especially if a long transaction is used.
    Deadlocks can also be encountered where eventually one of the sessions is rolled back by the database. 
    Is there a way of avoiding this e.g. by controlling the creation of workspaces and table updates?
    I've updated my test case below to demonstrate the extra workspace locking issue.
    --change tablespace name as required
    create table t1 (id varchar2(36) not null, name varchar2(50) not null) tablespace XXX;
    alter table t1 add constraint t1_pk primary key (id) using index tablespace XXX;
    exec dbms_wm.gotoworkspace('LIVE');
    insert into t1 values ('1', 'name1');
    insert into t1 values ('2', 'name2');
    insert into t1 values ('3', 'name3');
    commit;
    exec dbms_wm.enableversioning('t1');
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM1');
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = name||'changed' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    --end of original test case, start of additional workspace locking issue:
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record in both workspaces
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    exec dbms_wm.gotoworkspace('LIVE');
    exec dbms_wm.createworkspace('TESTWSM2');
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = name||'changed2' where id in ('1', '2');
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this now gets locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --update record 3 in TESTWSM2
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating LIVE
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating TESTWSM1 workspace too - so all have been updated since TESTWSM2 was created
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changed' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;
    --try updating every workspace afresh
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changedA' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM1');
    update t1 set name = 'changedB' where id = '3';
    commit;
    exec dbms_wm.gotoworkspace('TESTWSM2');
    update t1 set name = 'changedC' where id = '3';
    commit;
    Session 1:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '1';
    --this is still locked out by session 1
    session 2:
    exec dbms_wm.gotoworkspace('LIVE');
    update t1 set name = 'changed' where id = '2';
    Session 1:
    rollback;
    Session 2:
    rollback;

  • I want to know when we issue truncate table statement in oracle .

    I want to know what happens when we issue a TRUNCATE TABLE statement in Oracle. No log is written to the redo log, yet we can recover the data using flashback or an SCN. I want to know where the information for a truncated table is actually stored in the Oracle database. Please explain in detail, step by step.

    Hi,
    I truncated a table and after that I restored the data. See the example below. I want to know from where it is restored -
    from which log file is it restored?
    create table mytab (n number, x varchar2(90), d date);
    alter table mytab enable row movement;
    Table altered.
    SQL> insert into mytab values (1,'Monsters of Folk',sysdate);
    1 row created.
    SQL> insert into mytab values (2,'The Frames',sysdate-1/24);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select CURRENT_SCN from v$database;
    CURRENT_SCN
    972383
    SQL> select * from mytab;
    N
    X
    D
    1
    Monsters of Folk
    30-DEC-12
    2
    The Frames
    30-DEC-12
    N
    X
    D
    SQL> set lines 10000
    SQL> /
    N X D
    1 Monsters of Folk 30-DEC-12
    2 The Frames 30-DEC-12
    SQL> select to_char(sysdate,'yyyymmdd hh24:mi:ss') from dual;
    TO_CHAR(SYSDATE,'
    20121230 09:29:24
    SQL> set timing on
    SQL> truncate table mytab;
    Table truncated.
    Elapsed: 00:00:15.75
    SQL> select * from mytab as of timestamp TO_TIMESTAMP('20121230 09:29:24','yyyymmdd hh24:mi:ss');
    N X D
    1 Monsters of Folk 30-DEC-12
    2 The Frames 30-DEC-12
    Elapsed: 00:00:00.28
    SQL> insert into mytab select * from mytab as of timestamp TO_TIMESTAMP('20121230 09:29:24','yyyymmdd hh24:mi:ss');
    2 rows created.
    Elapsed: 00:00:00.01
    SQL>

  • Buffer table and Locking issues

    We have a real table and a buffer table. The real table gets locked on occasion for processing, at which times incoming data is deposited into the buffer table.
    I am looking for a way to dump all the information from the buffer table into the real table whenever the real table is released from its lock. I would prefer not to write and schedule a program to run based on time (i.e. every 2 seconds, a procedure is called that looks to see if the table still has a lock on it...)
    Does anyone have any ideas how I can do this? I have thought about using the DBMS_ALERTS package or DBMS_AQ somehow, but I am not as familiar with the two as I'd like. I'd be happy to explain this further if required.
    Thank you.

    Thank you for the suggestion, but this does not look like a viable option. For this to work, the trigger would have to fire every time a DDL event was fired, and I would have to build if...then logic into the trigger. Besides, from the documentation, I don't think table locks would be considered DDL.
    Any other ideas? Again, thanks for the help. I did not previously know I could create triggers on these type of events.
    RW
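    (For what it's worth, the DBMS_ALERT idea mentioned above could look roughly like the sketch below; real_table, buffer_table and the alert name are placeholders, and the session that releases the real table has to cooperate by signalling.)
    -- In the session that has just finished with (and released) the real table:
    BEGIN
       DBMS_ALERT.SIGNAL('REAL_TABLE_RELEASED', 'released');
       COMMIT;   -- the alert is only delivered on commit
    END;
    /
    -- In a background session that flushes the buffer:
    DECLARE
       l_message VARCHAR2(1800);
       l_status  INTEGER;
    BEGIN
       DBMS_ALERT.REGISTER('REAL_TABLE_RELEASED');
       DBMS_ALERT.WAITONE('REAL_TABLE_RELEASED', l_message, l_status, 600);  -- wait up to 10 minutes
       IF l_status = 0 THEN   -- 0 = alert received, 1 = timed out
          INSERT INTO real_table SELECT * FROM buffer_table;
          DELETE FROM buffer_table;
          COMMIT;
       END IF;
       DBMS_ALERT.REMOVE('REAL_TABLE_RELEASED');
    END;
    /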

  • TRUNCATE TABLE NOT WORKING AFTER DROPPING CONSTRAINTS

    Hi,
    I have a table with a foreign key constraint. I know you can't truncate tables when there are foreign key constraints. So I drop the constraints before running the TRUNCATE TABLE command. But SQL Server is still stating there are foreign key constraints
    even after they have just been dropped.
    When I use SQL Server Management Studio to generate a drop & create script on this table, or any other table with an FK constraint, the generated script fails stating that there are still foreign key constraints??
    I have the same problem for every table that has FK constraints, for those without FK, TRUNCATE table works without issues.
    The end goal is to reset the identity value of the primary key. Since DBCC does not work on Azure, TRUNCATE TABLE is the only way left, especially if you can't even drop and recreate tables with FK constraints.
    What am I missing here?
    Peter

    Hi,
    Thanks for posting here.
    TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is faster and uses fewer system and transaction log resources.
    TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE statement.
    If the table contains an identity column, the counter for that column is reset to the seed value defined for the column. If no seed was defined, the default value 1 is used. To retain the identity counter, use DELETE instead.
    Restrictions
    You cannot use TRUNCATE TABLE on tables that:
    •Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that references itself.)
    •Participate in an indexed view.
    •Are published by using transactional replication or merge replication.
    For tables with one or more of these characteristics, use the DELETE statement instead.
    TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. For more information, see CREATE TRIGGER (Transact-SQL).
    Truncating Large Tables
    Microsoft SQL Server has the ability to drop or truncate tables that have more than 128 extents without holding simultaneous locks on all the extents required for the drop.
    Permissions
     The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you
    can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.
    You cannot truncate a table which has an FK constraint on it.
    Typically my process for this is:
    Drop the constraints
    Trunc the table
    Recreate the constraints.
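    In T-SQL that sequence might look roughly like this (dbo.Parent, dbo.Child and the constraint/column names below are placeholders):
    ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;   -- the FK lives on the referencing (child) table
    TRUNCATE TABLE dbo.Parent;                                -- also reseeds the identity column
    ALTER TABLE dbo.Child ADD CONSTRAINT FK_Child_Parent
        FOREIGN KEY (ParentId) REFERENCES dbo.Parent (Id);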
    Hope this helps you.
    Girish Prajwal

  • Error when trying to truncate table from within PL/SQL

    Hello.....
    Does anyone know why I get an error when trying to execute the following command from within PL/SQL?
    truncate table event;
    The error is as follows:
    ORA-06550: line 20, column 14:
    PLS-00103: Encountered the symbol "TABLE" when expecting one of the following:
    := . ( @ % ;
    The symbol ":= was inserted before "TABLE" to continue.

    And as a DDL, it does a COMMIT just before it runs. Whether you want a COMMIT or not.
    And with all the locking and serialization issues associated with a DDL.
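    (For reference, the usual workaround is dynamic SQL, since PL/SQL does not accept DDL directly:)
    BEGIN
       EXECUTE IMMEDIATE 'TRUNCATE TABLE event';
    END;
    /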

  • Timesten LOCK issue

    We are facing a TimesTen lock timeout issue in the client environment very frequently, but it does not happen in the local test environment even though we have tried many times to reproduce it. Below is the error message:
    An exception occured while inserting data to USER_DATA_SATGING_TABLE : [TimesTen][TimesTen 11.2.1.8.0 ODBC Driver][TimesTen]TT6003: Lock request denied because of time-out
    Details: Tran 199.4005 (pid 11169) wants Xni lock on rowid BMUFVUAAABPBAAAOjo, table DATA_STAGING_TABLE. But tran 194.2521 (pid 11169) has it in X (request was X). Holder SQL (DELETE FROM USER_DATA_STAGING_TABLE WHERE STATUS= 'COMPLETE') -- file "table.c", lineno 15230, procedure "sbTblStorLTupSlotAlloc":false
    SYS.SYSTEMSTATS output on lock details:
    < lock.locks_granted.immediate, 5996359, 0, 0 >
    < lock.locks_granted.wait, 0, 0, 0 >
    < lock.timeouts, 8, 0, 0 >
    < lock.deadlocks, 0, 0, 0 >
    < lock.locks_acquired.table_scans, 10256, 0, 0 >
    < lock.locks_acquired.dml, 4, 0, 0 >
    After rolling back the transaction using ttXactAdmin, everything runs fine without any issue, but the problem recurs two or three days later.
    Previous lock information from the ttXactAdmin command:
    PID   Context      TransID     TransStatus  Resource  ResourceID           Mode  SqlCmdID     Name
    Program File Name: java
    9291  0x16907fd0   198.6732    Active       Database  0x01312d0001312d00   IX    0
                                                Command   1056230912           S     1056230912
          0x16988810   199.4371    Active       Database  0x01312d0001312d00   IX    0
                                                Row       BMUFVUAAABxAAAADD9   X     1055528832   USER.USER_DATA_STAGING_TABLE
                                                Table     718096               IXn   1055528832   USER.USER_DATA_STAGING_TABLE
                                                EndScan   BMUFVUAAAAKAAAAGD1   U     1055528832   USER.USER_DATA_STAGING_VIEW
                                                Table     718176               IXn   1055528832   USER.USER_DATA_STAGING_VIEW
                                                Command   1056230912           S     1056230912
          0x1882e610   201.16275   Active       Database  0x01312d0001312d00   IX    0
    TTVERSION : TimesTen Release 11.2.1.8.0 (64 bit Linux/x86_64) (tt11:17001) 2011-02-02T02:20:46Z
    sys.odbc.ini file configuration:
    UID=user
    PWD=user
    Driver=/opt/TimesTen/tt11/lib/libtten.so
    DataStore=/var/TimesTen/tt11/ds/TT_USER_DS
    LogDir=/var/TimesTen/tt11/ttlog/TT_User
    CacheGridEnable = 0
    PermSize         = 1000
    TempSize         = 200
    CkptLogVolume    = 200
    DatabaseCharacterSet=WE8ISO8859P1
    Connections      = 128
    Here we are using an XLA message listener to get notifications on USER.USER_DATA_STAGING_VIEW, which in turn reads from the table USER.USER_DATA_STAGING_TABLE. A delete operation on this table locks it and blocks further inserts.
    I want to know:
    1) Why is this happening?
    2) Why does it happen in this specific environment, while we cannot reproduce it in the other environment?
    3) How can we resolve this issue?
    This is a very critical issue which blocks our further progress on the client solution. Expecting your reply ASAP.

    For critical issues you should always log an SR as Steve advised. These forums are not monitored 24x7 and are not the best way to get rapid assistance for critical problems.
    For this issue it seems that a DELETE operation on the table was holding a lock on a row that an INSERT operation needed to acquire (it looks like an index 'next key' lock). The inserter was not able to acquire the lock within the defined timeout (which by default is 10 seconds unless you have configured it to be different). 10 seconds is a long time, so the question is why the DELETE operation was not committed (which would release the locks) within 10 seconds. Maybe it was deleting a very large number of rows? If that is the issue, then the best solution is to break such a large delete up into a series of smaller deletes. For example:
    repeat
       DELETE FIRST 256 FROM USER_DATA_STAGING_TABLE WHERE STATUS= 'COMPLETE';
       COMMIT;
    until 'rows deleted' = 0
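    A rough PL/SQL version of that loop (a sketch only; TimesTen 11.2.1 supports PL/SQL, and dynamic SQL is used here so the TimesTen-specific FIRST clause goes straight to the SQL engine):
    BEGIN
       LOOP
          EXECUTE IMMEDIATE
             'DELETE FIRST 256 FROM user_data_staging_table WHERE status = ''COMPLETE''';
          EXIT WHEN SQL%ROWCOUNT = 0;
          COMMIT;   -- release the locks held by this batch before starting the next one
       END LOOP;
       COMMIT;
    END;
    /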
    Alternatively you could just increase the lock timeout value (but that is really just 'hiding' the issue).
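    (If you do go that route, the per-connection lock timeout can be changed in ttIsql with the ttLockWait built-in procedure, e.g. call ttLockWait(30); for a 30-second wait.)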
    Chris

  • Multiple parallel loads into one table with TABLE LOCK option

    Hi everyone:
    We have several SSIS packages where each package has the basic design of having one sequence container. Inside each sequence container there can be anywhere from 2 to 9 data flow tasks, where each task selects data from a different table, but all 2-9 tasks do
    an OLE DB fast load (with the table lock option checked) into the same single destination table.
    The number of rows we're pulling from the various sources might be anywhere from 5 up to 100,000.
    Right now this doesn't seem to be causing any issues, but I wanted to check to see if this set up (since we're doing a review) could potentially cause problems down the road?
    We seem to think that each parallel task will acquire its data as normal, and just won't be able to insert until one of the other parallel tasks completes its fast load. To us, that's no big deal as we're at least able to acquire the data in parallel.
    What are everyone's thoughts?
    Thanks!

    >>> We seem to think that each parallel task will acquire its data as normal, and just won't be able to insert until one of the other parallel tasks completes its fast load.
    That is a correct observation.
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/

  • Full Load: Error while executing :TRUNCATE TABLE: S_ETL_PARAM

    Hi All,
    We are using BI Apps 7.9.6.1. The full load was running fine, but now we are facing a problem with truncating the table "S_ETL_PARAM".
    I have restarted the Informatica server and also the DAC server, but I am still getting the same message in the DAC log: *"NOMALY INFO::: Error while executing : TRUNCATE TABLE:S_ETL_PARAM*
    *MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DBConnection_OLTP:SIEBTRUN ('siebel.S_ETL_PARAM')*
    *Values :*
    *Null Map*
    *EXCEPTION CLASS::: java.lang.Exception"*
    Any Suggestion.....
    Thanks in Advance,
    Deepak

    Are you trying to run an incremental load when you get this truncate error? Can you re-run the full load and see whether that still runs OK? Please also check your DW-side database logs, such as the alert log, for any DB-level issue. Such errors do not produce friendly messages on the DAC/Informatica side.

  • [ADF-11.1.2] Locking issue with SQL 92

    I see one Locking issue with SQL92 Oracle ADF Application.
    ADF Version: [ADF-11.1.2]
    Database: Oracle 10g Express Edition
    Situation 1:
    With Following setting:
    File: Application Resource > Description > ADF META-INF > adf-config.xml
        <startup>
          <amconfig-overrides>
            <config:Database jbo.SQLBuilder="SQL92" jbo.locking.mode="optimistic"/>
          </amconfig-overrides>
        </startup>
    I have a page showing record 'x' of a view object. I open the same record on another page. Now I have the same record showing on two different tabs of the browser.
    1. I modify first record and save it. It worked... Got commited to database.
    2. I go to the second tab, modify the same record and try to save it. It throws an error - Another user has changed the row with primary key oracle.jbo.Key[38 ]. As expected...
    3. I then reopen the same record in a 3rd browser tab, modify it and try to save it. It just hangs... as if it is processing the record endlessly.
    If I see the Log:
    <BaseSQLBuilderImpl> <doEntitySelectForAltKey> [312] BaseSQLBuilderImpl Executing doEntitySelect ... (true)
    <BaseSQLBuilderImpl> <doEntitySelectForAltKey> [313] Generating new LOCK statement
    <BaseSQLBuilderImpl> <buildSelectString> [314] Built select: 'SELECT ID, CI_ID, COLUMN_NAME, DISPLAY_COLUMN_NAME, COLUMN_VALUE, CREATE_DATE, CREATE_BY FROM ESUSER.CI_AUDIT'
    <BaseSQLBuilderImpl> <doEntitySelectForAltKey> [315] Executing LOCK "SELECT ID, CI_ID, COLUMN_NAME, DISPLAY_COLUMN_NAME, COLUMN_VALUE, CREATE_DATE, CREATE_BY FROM ESUSER.CI_AUDIT WHERE ID=? FOR UPDATE"
    <BaseSQLBuilderImpl> <bindWhereAttrValue> [316] Where binding param 1: 38
    That's it.. nothing happens further. If I execute the above query in SQL Worksheet, it doesn't return a result either; it just hangs:
    SELECT ID, CI_ID, COLUMN_NAME, DISPLAY_COLUMN_NAME, COLUMN_VALUE, CREATE_DATE, CREATE_BY FROM ESUSER.CI_AUDIT WHERE ID='38' FOR UPDATE
    I can execute the above query for other records of the same table, but not for '38'. Even if I issue a commit to the database, it does not help.
    I have to restart the database services to bring everything to normal state.
    Situation 2:
    With Oracle as Database :
        <startup>
          <amconfig-overrides>
            <config:Database jbo.SQLBuilder="Oracle" jbo.locking.mode="optimistic"/>
          </amconfig-overrides>
        </startup>
    Everything is working fine, i.e. at step 3 the record is modified successfully, with the following log:
    <OracleSQLBuilderImpl> <doEntitySelectForAltKey> [27] OracleSQLBuilder Executing doEntitySelect on: ESUSER.CI_AUDIT (true)
    <ADFLogger> <begin> Entity read all attributes
    <OracleSQLBuilderImpl> <buildSelectString> [28] Built select: 'SELECT ID, CI_ID, COLUMN_NAME, DISPLAY_COLUMN_NAME, COLUMN_VALUE, CREATE_DATE, CREATE_BY FROM ESUSER.CI_AUDIT CIAudit'
    <OracleSQLBuilderImpl> <doEntitySelectForAltKey> [29] Executing LOCK...SELECT ID, CI_ID, COLUMN_NAME, DISPLAY_COLUMN_NAME, COLUMN_VALUE, CREATE_DATE, CREATE_BY FROM ESUSER.CI_AUDIT CIAudit WHERE ID=? FOR UPDATE NOWAIT
    <ADFLogger> <addContextData> Entity read all attributes
    <OracleSQLBuilderImpl> <bindWhereAttrValue> [30] Where binding param 1: 38
    <ADFLogger> <addContextData> Entity read all attributes
    <ADFLogger> <end> Entity read all attributes
    <ADFLogger> <end> Lock Entity
    <ADFLogger> <begin> Before posting the entity's changes
    <ADFLogger> <begin> Updating audit columns
    <ADFLogger> <end> Updating audit columns
    <ADFLogger> <end> Before posting the entity's changes
    <OracleSQLBuilderImpl> <doEntityDML> [31] OracleSQLBuilder Executing, Lock 2 DML on: ESUSER.CI_AUDIT (Update)
    <OracleSQLBuilderImpl> <buildUpdateStatement> [32] UPDATE buf CIAudit>#u SQLStmtBufLen: 210, actual=60
    <OracleSQLBuilderImpl> <doEntityDML> [33] UPDATE ESUSER.CI_AUDIT CIAudit SET COLUMN_VALUE=? WHERE ID=?
    <ADFLogger> <begin> Entity DML
    <OracleSQLBuilderImpl> <bindUpdateStatement> [34] Update binding param 1: cip7ri1
    <OracleSQLBuilderImpl> <bindWhereAttrValue> [35] Where binding param 2: 38
    <ADFLogger> <addContextData> Entity DML
    <ADFLogger> <end> Entity DML
    Can anyone please tell me what the issue with the SQL92 setting is?
    Edited by: Anandsagar Sah on Mar 11, 2012 8:10 AM

    The framework works correctly in Situation #1. Please note that the locking statement in this case is "SELECT ... FOR UPDATE", not "SELECT ... FOR UPDATE NOWAIT" (as it is in Situation #2). When you entered the 2nd tab and tried to update the row, the framework executed the locking statement and the row was locked (and it remained locked, because the framework detected that another user had modified the row, so it stopped processing and no COMMIT was executed). When you entered the 3rd tab and tried to update the row, the framework tried to lock the row again, but the locking statement was blocked by the existing lock and started waiting on the lock from the 2nd tab. So this is expected behaviour.
    The interesting question is why you do not get any error in Situation #2. In my opinion you should get an error, because the locking statement from the 3rd tab should fail immediately (the row should have been locked from the 2nd tab and the locking statement uses the NOWAIT option). I suspect that when the DB is Oracle and you use the Oracle SQLBuilder, ADF issues a DB savepoint at the beginning of the DML operation and rolls back to the savepoint in case of a failure, so the 2nd tab has not left any lock. You can check this by turning on SQL trace on the DB server.
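    (One way to do that, assuming you can identify the application's session in V$SESSION, is DBMS_MONITOR, e.g.:
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
    where 123/456 are placeholders for the SID and SERIAL# of the ADF connection.)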
    Dimitar

  • FOR UPDATE cursor is causing Blocking/ Dead Locking issues

    Hi,
    I am facing one of the more complex issues regarding blocking / deadlocking. Please find the details below and help me / suggest the best approach to go ahead with this.
    It is in the core Investment Banking domain. In our day-to-day business we use many transaction tables for processing trades and placing orders. Specifically, there are two main transaction tables:
    1)     Transaction table 1
    2)     Transaction table 2
    Both of these tables hold huge amounts of data. In one of our applications, to maintain data integrity (during this process we do not want other users to change these rows), we have placed a SELECT ... FOR UPDATE cursor on these two tables, so all the rows are locked during the process. We have batch jobs (shell scripts) calling this procedure; they run 9 times per day, about 1 hour each, starting at 7:15 AM and finishing around 5 PM. The reason we run the same procedure multiple times is that our business wants to see the voucher before it is finalized, because an order can be placed and then updated/cancelled several times in a single day. At the end of the day we send the finalized update to our client.
    20 07 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 08 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 09 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 10 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 11 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 12 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 13 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 14 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 15 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 16 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    20 17 * * 1-5 home/bin/app_process_prc.sh >> home/bin/app1/process.out
    Current Program will look like:
    App_Prc_1
    BEGIN
    /***** Takes the order details from the source and populates them into the table *****/
    CURSOR Cursor_Upload IS
    SELECT col1, col2 ... FROM transaction_table1 t1, source_table1 s
    WHERE t1.id_no = s.id_no
    AND t1.id_flag = 'N'
    FOR UPDATE OF t1.id_flag;
    /* Used for inserting another entry if any updates happened on the source table for the records inserted using the 1st cursor. */
    CURSOR Cursor_Update IS
    SELECT col1, col2 ... FROM transaction_table2 t2, transaction_table1 t1
    WHERE t1.id_no = t2.id_no
    AND t1.id_flag = 'Y'
    AND t1.dml_action IN ('U', 'D') -- retrieves the records that were recently updated or deleted among the rows inserted into transaction table 1 by that INSERT
    FOR UPDATE OF t1.id_no, t1.id_flag;
    BLOCK 1
    BEGIN
    FOR v_upload IN Cursor_Upload
    LOOP
    INSERT INTO transaction_table2 ( id_no, dml_action, ... ) VALUES ( v_upload.id_no, 'I', ... ) RETURNING id_no INTO v_no; -- 'I' marks an INSERT
    /* Update the flag in the source table after the population ( N -> Y ): N = order not placed yet, Y = order processed for the first time */
    UPDATE transaction_table1
    SET id_flag = 'Y'
    WHERE id_no = v_no;
    END LOOP;
    EXCEPTION WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END ;
    BLOCK 2
    BEGIN -- block 2 starts
    FOR v_update IN Cursor_Update
    LOOP
    INSERT INTO transaction_table2 ( id_no, id_prev_no, dml_action, ... ) VALUES ( v_id_seq_no, v_update.id_no, ... ) RETURNING id_no INTO v_no;
    UPDATE transaction_table1
    SET id_flag = 'Y'
    WHERE id_no = v_no;
    END LOOP;
    EXCEPTION WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END; -- block2 end
    END app_proc; -- Main block end
    Sample output in Transaction table1 :
    Id_no | Tax_amt | re_emburse_amt | Activ_DT | Id_Flag | DML_ACTION
    01 1,835 4300 12/JUN/2009 N I  ( these DML actions are triggered whenever any DML operation occurs in this table )
    02 1,675 3300 12/JUN/2009 Y U
    03 4475 6500 12/JUN/2009 N D
    Sample output in Transaction table2 :
    Id_no | Prev_id_no Tax_amt | re_emburse_amt | Activ_DT
    001 01 1,835 4300 12/JUN/2009 11:34 AM ( the 2nd cursor populates this value, because an update happened for the record below; this is the 2nd voucher )
    01 0 1,235 6300 12/JUN/2009 09:15 AM ( the 1st cursor populates this record when the job runs the first time )
    02 0 1,675 3300 12/JUN/2009 8:15AM
    003 03 4475 6500 12/JUN/2009 11:30 AM
    03 0 1,235 4300 12/JUN/2009 10:30 AM
    Now the issue is:
    When this process runs, our other application jobs fail, because they also use these two main transaction tables, so deadlocks are detected in those applications.
    Solution needed:
    Can anyone suggest how to rectify these blocking / locking / deadlock issues? I want my other applications to be able to use these tables while this process is running.
    Regards,
    Maran

    hmmm.... this leads to a warning:
    SQL> ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
    Session altered.
    CREATE OR REPLACE PROCEDURE MYPROCEDURE
    AS
       MYCOL VARCHAR(10);
    BEGIN
       SELECT col2
       INTO MYCOL
       FROM MYTABLE
       WHERE col1 = 'ORACLE';
    EXCEPTION
       WHEN PIERRE THEN
          NULL;
    END;
    SP2-0804: Procedure created with compilation warnings
    SQL> show errors
    Errors for PROCEDURE MYPROCEDURE:
    LINE/COL                                                                          ERROR
         12/9        PLW-06009: procedure “MYPROCEDURE” PIERRE handler does not end in RAISE or RAISE_APPLICATION_ERROR
         :)
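    (The point of PLW-06009 is that a handler which does not end in RAISE silently swallows the error. A sketch of a handler that keeps the diagnostics but still fails loudly:)
    EXCEPTION
       WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE(SQLERRM);   -- log what happened
          RAISE;                           -- re-raise so the caller still sees the failure
    END;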

  • Truncate table loses privs on table...

    Oracle11g/RHEL5
    Hi,
    We have a table that is inserted into and then deleted every so often. The current method that was being used to remove the rows was the TRUNCATE TABLE option. However, we found out that when that statement is issued, any user that had privs on that table can no longer use it, even after it is re-created. So it looks like the next option is to use DELETE FROM table_name. However, that is not good for performance and undo. This table is about 300,000 rows and will grow in the future. How could we get around this issue?
    Thanks.

    Hi,
    JrOraDBA wrote:
    Oracle11g/RHEL5
    We have a table that is inserted into and then deleted every so often. The current method that was being used to remove the rows was the TRUNCATE TABLE option. However, we found out that when that statement is issued, any user that had privs on that table no longer works even after it is re-created.
    You must have done something else. TRUNCATE TABLE does not change privileges.
    What do you mean by "after it is re-created"? You don't have to issue another CREATE statement after TRUNCATE TABLE.
    Do you mean after re-populating the table? Do you mean that other users, who did have privileges, can still DESCRIBE the table right after it is TRUNCATEd, but they can't do anything with it after INSERTs have been done?
    I suspect you DROPped the table at some point.
    Double-check that the problem exists when all you do is TRUNCATE TABLE.
    Post a script that re-creates the problem. There's no need to include lots of data: one column and one row will show the problem just fine.
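    For example, a minimal script of that kind (t_demo and some_user are made-up names), with nothing but a TRUNCATE involved, would be:
    -- As the table owner:
    CREATE TABLE t_demo (c1 NUMBER);
    GRANT SELECT ON t_demo TO some_user;
    INSERT INTO t_demo VALUES (1);
    COMMIT;
    TRUNCATE TABLE t_demo;
    -- some_user should still be able to SELECT from the owner's t_demo: grants survive a TRUNCATE.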

  • Table lock up solution in oracle 10g

    The business where I work wants to create a pop-up message screen, with Yes/No buttons, to resolve a locking issue. The problem is that there are two forms updating the same database tables. What the business wants is a pop-up screen backed by a new table holding the user id, the message, etc. They want to build a form library routine that checks this table every so often for new messages and, if it finds one, launches the new simple form. A message can be put in this table if one user is trying to update a row that has been locked by another user, so the user who locked the row can be informed immediately that he has to act.
    Thank you.

    You can use a SELECT ... FOR UPDATE NOWAIT statement to acquire locks on the rows to be updated. If it goes through, you proceed with processing your transaction; if it fails to acquire the locks, it throws ORA-00054, which you can handle in an exception handler to show a message that the record is locked by another user.
    Hope it helps!
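    A minimal PL/SQL sketch of that approach (some_table, its id/name columns and the key value are placeholders):
    DECLARE
       row_locked EXCEPTION;
       PRAGMA EXCEPTION_INIT(row_locked, -54);   -- ORA-00054: resource busy, NOWAIT specified
       l_id   some_table.id%TYPE := 42;
       l_name some_table.name%TYPE;
    BEGIN
       SELECT name INTO l_name
         FROM some_table
        WHERE id = l_id
          FOR UPDATE NOWAIT;                     -- fails immediately if another session holds the row lock
       UPDATE some_table SET name = 'changed' WHERE id = l_id;
       COMMIT;
    EXCEPTION
       WHEN row_locked THEN
          DBMS_OUTPUT.PUT_LINE('Record is locked by another user - please try again later.');
    END;
    /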

  • Truncate table taking too much time

    hi guys,
    Thanks in advance.
    Oracle version = 9.2.0.5.0
    OS version = SunOS Ganesha1 5.9 Generic_122300-05 sun4u sparc SUNW,Sun-Fire-V890
    Application: PeopleSoft
    Version = 8.4
    Everything had been running fine; since last week, whenever a process such as billing or d_dairy is executed,
    it selects some temporary tables and starts truncating them, and the truncates take 5 to 8 minutes even when the table has 0 rows.
    If more than one user runs a process at the same time (even different processes), they end up blocked on locks.
    regs
    deep..

    Hi,
    Here I am sending the details of the technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thanku

  • ORA-01502: Index or Partition is in unusable status. while truncating table

    Hi All,
    One of our developers complained that he is getting ORA-01502: index or partition is in unusable state while truncating a table in our data warehouse production database. He is using the following commands:
    Alter index <index_name> unusable;
    Truncate table <table_name> ;
    He is running a script to truncate each table, and each time he passes the table name as an input parameter to the script. He uses the same method to truncate four tables, each having a BITMAP and a REGULAR index. For two tables everything works fine, but for the other two tables he gets ORA-01502 for the BITMAP indexes. It is a weekly process and every week he gets the same issue. I checked the index status; they are all in VALID status.
    As a workaround I created a table with a BITMAP and a regular index in our dev database, made the indexes unusable, checked their status, and then truncated the table. The important thing here is that the indexes become VALID when I truncate the table.
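    (For reference, that behaviour can be reproduced with a small script like the one below; the object names are made up. TRUNCATE TABLE marks unusable index segments usable again.)
    CREATE TABLE t_stage (c1 NUMBER);
    CREATE BITMAP INDEX t_stage_bx ON t_stage (c1);
    ALTER INDEX t_stage_bx UNUSABLE;
    SELECT status FROM user_indexes WHERE index_name = 'T_STAGE_BX';   -- UNUSABLE
    TRUNCATE TABLE t_stage;
    SELECT status FROM user_indexes WHERE index_name = 'T_STAGE_BX';   -- VALID again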
    I suspect that my developer's indexes were already in UNUSABLE status (before he ran the ALTER INDEX command), and that when he truncated the table, Oracle tried to validate the indexes and threw ORA-01502 because the indexes had been unusable for a while.
    I tried searching for the mechanism of the TRUNCATE TABLE command and its effect on indexes, but I did not have any luck; no one talks about indexes when truncating a table. Can anyone please help me?
    Sorry for the lengthy post. Any help is greatly appreciated, and I thank everyone in advance.

    DDL for Indexes getting ORA-01502 error
    CREATE BITMAP INDEX DWHMGR.ACT_TXN_LN_STG_01_XN3 ON DWHMGR.ACCT_TXN_LINE_TERM_BAL_STG_01 (TERM_BAL_CD ASC) TABLESPACE "BALFD_INX_04" NOLOGGING PCTFREE 1 INITRANS 2 MAXTRANS 255 STORAGE
    ( INITIAL 8M NEXT 8M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT );
    CREATE BITMAP INDEX DWHMGR.ACCT_TERM_BAL_STG_01_XN3 ON DWHMGR.ACCT_TERM_BAL_STG_01 (TERM_BAL_CD ASC) TABLESPACE "BALFD_INX_04" NOLOGGING PCTFREE 1 INITRANS 2 MAXTRANS 255 STORAGE
    ( INITIAL 8M NEXT 8M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT );
    Indexes that have no issues.
    CREATE INDEX DWHMGR.ACCT_TERM_BAL_STG_01_XN2 ON DWHMGR.ACCT_TERM_BAL_STG_01 (ACCT_REF_NB ASC) TABLESPACE "BALFD_INX_04" NOLOGGING PCTFREE 1 INITRANS 2 MAXTRANS 255 STORAGE (INITIAL 8M
    NEXT 8M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT );
    CREATE BITMAP INDEX DWHMGR.ACCT_PRC_STG_01_XN1 ON DWHMGR.ACCT_PRC_STG_01 (ACCT_ORG_CD ASC) TABLESPACE "BALFD_INX_04" NOLOGGING PCTFREE 1 INITRANS 2 MAXTRANS 255 STORAGE ( INITIAL 8M
    NEXT 8M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT );
    Please look at the DDL of the indexes and let me know if you need any other information.
