Abnormal session termination: table stays locked

Hi all,
I am testing some table locks. After I run this statement:
select * from testt for update nowait;
I abnormally terminate the session, and then the table testt stays locked for about 20 minutes.
I checked some documentation: setting the parameter SQLNET.EXPIRE_TIME = 5 in the sqlnet.ora file is supposed to release that lock. I set it to 5 and tested again, but the lock still takes 20 minutes to release. Why?
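For reference, SQLNET.EXPIRE_TIME is server-side dead connection detection: it belongs in the sqlnet.ora on the database server, generally only applies to connections established after it is set, and the lock is only released once PMON cleans up the dead session, so the release is rarely immediate. A minimal sketch:

```
# sqlnet.ora on the database server
# Probe each client connection every 5 minutes; if the probe fails,
# the dead session is cleaned up by PMON, which releases its locks.
SQLNET.EXPIRE_TIME = 5
```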

I've changed this now to use the URL variable. I am used to using that to go from a results page to a details page where the recordset on the details page is used. I hadn't realised that you could just set a link to:
<a href="addNomination.php?LodgeID=<?php echo($_GET['LodgeID']); ?>">Add Nomination</a>
where there is no recordset on a page, just for the purposes of passing the variable. (Yeah, I know - this is probably really basic!)
I do have one last question on this though - my form action is:
<form method="post" id="form1" action="<?php echo KT_escapeAttribute(KT_getFullUri()); ?>">
And then the link to a NominationAdded page is in the php code at the top of the AddNomination page:
$ins_nominations->registerTrigger("END", "Trigger_Default_Redirect", 99, "NominationAdded.php");
What would be useful would be to pass the URL variable through again, so that on the NominationAdded page I can have a link back to the same Lodge to add another Nomination without finding the Lodge again.
But I'm not sure what the syntax would be, as it would mean embedding PHP within existing PHP, so it's not just:
$ins_nominations->registerTrigger("END", "Trigger_Default_Redirect", 99, "nominationAdded.php?LodgeID=echo($_GET['LodgeID'])");
I assume this must be possible, but I'm not sure of the exact syntax?
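The usual way is plain string concatenation rather than nesting an echo inside the string. A sketch (the '42' fallback is only for illustration; $ins_nominations is the trigger object from the page above):

```php
<?php
// Build the redirect target by concatenating the query-string value.
// Assumes LodgeID arrives in the URL as on the other pages.
$lodgeId = isset($_GET['LodgeID']) ? $_GET['LodgeID'] : '42';
$redirect = "NominationAdded.php?LodgeID=" . urlencode($lodgeId);
// then pass $redirect to the trigger registration, e.g.:
// $ins_nominations->registerTrigger("END", "Trigger_Default_Redirect", 99, $redirect);
echo $redirect;
```

urlencode() keeps the link safe if LodgeID ever contains characters that are special in URLs.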

Similar Messages

  • ODI 10g - session keep a table locked

    Hi,
    We have a random issue with an ODI session that keeps a lock on a table, even after replication is finished and the session becomes inactive.
    It generated deadlocks, as a trigger has to update the target table.
    What happened:
    - the user application creates rows (13)
    - an ODI scenario replicates the rows (contract table)
    - a 2nd scenario based on the same source with another subscriber runs a stored procedure to create records in another table (around 30, positions table)
    This 2nd scenario locked the target table, and when the run of the procedure finished and committed, the lock was not released.
    - ODI replicates another table (price) 30 minutes later; a trigger on the target updates the position table with the new values
    ---> the trigger failed with a deadlock (ORA-00060)
    ---> ODI failed as the trigger raised back the error
    This issue happened after 10 hours of the same activity without issue, chained many times, but suddenly the lock became persistent (more than 4 hours).
    What can I do?
    We use ODI 10g 10.1.3.5.0 - RDBMS 10.2.0.4

    Hi !
    For small tables wich are mostly accessed with full table scan you can use
    ALTER TABLE <table_name> STORAGE (BUFFER_POOL KEEP);KEEP pool should be properly sized , setting will cause once the table is read oracle will avoid flushing it out from pool.
    T

  • Question On Table Locks

    Hello,
    I have a question on locks.
    USER A runs a batch job of insert statements with 100,000 records and does a commit after every 1,000 records.
    USER B is also running a batch job on similar tables and is blocked due to locks held by USER A.
    I have identified that USER A is blocking USER B, and now I need USER B to continue with its batch job. My question is that I need to kill USER A's session without making him lose all the data he has already inserted. In short, as SYS, can I commit his inserted transactions on his behalf?
    I assume that if I kill his session he will lose all the INSERTs he performed, since he hasn't committed up to that point.
    Please Help.

    > is there any way i could save the INSERTED transactions of USER A?
    No. User A commits, or User A is killed and rolls back.
    > i'm also confused about this type of lock i see
    If you supply details of what the sessions are waiting on / locks held, then this will clarify.
    > do u suggest i commit every 500 records
    No. In general, you should commit at the end of a transaction - all or nothing. Committing every X is nasty.
    > a lot of referential constraints with other tables and this could be causing the lock
    You can get locking problems with unindexed foreign keys.
    If you could provide more details of what's going on in both sessions / what they're waiting on this should clarify.
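On the unindexed foreign key point, a simplified data-dictionary query can flag candidates (this sketch assumes single-column foreign keys; composite keys need a position-wise comparison):

```sql
-- List single-column foreign keys with no index whose leading column matches.
SELECT acc.owner, acc.table_name, acc.constraint_name, acc.column_name
FROM   all_cons_columns acc
JOIN   all_constraints  ac
  ON   ac.owner           = acc.owner
 AND   ac.constraint_name = acc.constraint_name
WHERE  ac.constraint_type = 'R'
AND    NOT EXISTS (
         SELECT 1
         FROM   all_ind_columns aic
         WHERE  aic.table_owner     = acc.owner
         AND    aic.table_name      = acc.table_name
         AND    aic.column_name     = acc.column_name
         AND    aic.column_position = 1
       );
```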

  • What causes "Timed out server" in WL Cluster and a question about session in WL Cluster

              Hi.
              We are using WebLogic 5.1 with SP8. We have been encountering a problem in our
              clustered environment. We set up our (clustered) environment to have 3 instances (WL1,
              WL2, WL3) of WebLogic running in one box. When one instance, let's say WL1, responds
              to a request (the 1st request for a session), the session is bound to that instance
              until the session is terminated/expired, which means subsequent requests for that
              session can only be served by WL1 (we tried stopping WL1, but WL2 and WL3 won't accept
              the request -- which only causes the page to time out). Shouldn't it behave in such
              a way that the other instances can serve the request (failover)?
              Also, occasionally we encounter "Timed out server" in one of the WL instances.
              When this happens, that instance no longer takes in requests. Would anyone know
              what causes "Timed out server"? Does it only happen in a clustered environment?
              Need some help ASAP.
              Thanks in advance.
              

              What do you have in front of WebLogic for Load Balancing?
              What are the IP addresses/hostnames of your three instances? What hostname are
              you using in your http requests? Is your DNS configured to do failover?
              Make sure that you have session replication turned on. See the edocs.bea.com.
              Mike Reiche

  • Too many sessions inserting in table

    Hi All
    I am new to batch program processing. Please help me with this.
    I have an application in which more than one session
    will try to insert millions of records into the same table.
    I need to know what care I should take over this.
    Will tables be locked, or do I not need to worry about this?
    Further in the program, one application may delete several lakhs of records while
    another session might still be inserting into it.
    What do I do in this case?
    Also, if one session is trying to update while another session is inserting/deleting records,
    then what care do I take?
    Thanks
    Ashwin N.

    It is a very bad idea to have more than one session attempt to insert millions of rows (if that is a literal statement of the requirement). If you have paid for it, use the Parallel Server option, but always try to have only a single batch process running unless it is absolutely unavoidable. Otherwise you are increasing the risk of latch contention, both for the table itself and also for rollback segments.
    Other things to check (these are database-type things):
    - have a big rollback segment assigned to the batch session;
    - consider committing every 1000 records to cut down on rollback usage;
    - if you're really going to have several sessions doing monster inserts like this, give your table lots of freelist groups;
    - make sure the table and its indexes have enough empty space by pre-assigning extents;
    - make sure that the tablespaces are big enough in case additional extents are required.
    As for row contention: rows you insert cannot be locked by someone else, nor are they affected by someone else inserting, updating or deleting other rows (although you might suffer from block header contention), unless that other session has issued a LOCK TABLE statement. Only once you have committed your insert will other sessions be able to see, update and delete your new rows (because Oracle doesn't support DIRTY_READ :-) )
    One way of reading your question is that you think someone may be trying to delete the records you're inserting. I hope that's not the case, but if it is, someone ought to have a look at the business model.
    rgds, APC
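If a single batch process is feasible, a direct-path insert is often the better shape for a monster load. A sketch with illustrative table names (note that the APPEND hint makes Oracle take an exclusive table lock for the duration of the insert, so run it as the only writer on the table):

```sql
-- Direct-path (APPEND) insert: loads above the high-water mark and takes
-- an exclusive table lock until commit.
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM staging_table;
-- This session cannot query target_table again until it commits (ORA-12838).
COMMIT;
```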

  • Identifying deadlocked resources in graph with 1 row lock and 1 table lock

    Hi, I have run into repeated occurrences of the deadlock graph at the bottom of this post and have a few questions about it:
    1. It appears that proc 44, session 548 is holding a row lock (X). Is the waiter, proc 30, session 542, trying to acquire a row lock (X) as well, or an exclusive table lock (X) on the table containing that row?
    2. Under what circumstances would something hold a row exclusive table lock (SX) and want to upgrade it to a share row exclusive table lock (SSX)?
    3. Our table cxml_foldercontent has a column 'structuredDataId' with an FK to cxml_structureddata.id and an ON DELETE SET NULL action. Would this help explain why an "update" to one table (e.g. cxml_foldercontent) would also need to acquire a lock on a foreign table, cxml_structureddata?
    4. What is the difference between "Current SQL statement:" and "Current SQL statement for this session:"? That terminology is confusing. Is session 542 executing the "update" or the "delete"?
    5. In the "Rows waited on:" section, is it saying that session 542 is waiting on obj - rowid = 0000BE63 - AAAL5jAAGAAA6tZAAK, or that it has the lock on that row and other things are waiting on it?
    A couple of notes:
    - the cxml_foldercontent.structuredDataId FK column has an index on it already
    Deadlock graph:
                           ---------Blocker(s)--------  ---------Waiter(s)---------
    Resource Name                    process session holds waits  process session holds waits
    TX-003a0011-000003d0        44       548     X               30        542             X
    TM-0000be63-00000000       30       542     SX              44        548     SX    SSX
    session 548: DID 0001-002C-000002D9     session 542: DID 0001-001E-00000050
    session 542: DID 0001-001E-00000050     session 548: DID 0001-002C-000002D9
    Rows waited on:
    Session 542: obj - rowid = 0000BE63 - AAAL5jAAGAAA6tZAAK
      (dictionary objn - 48739, file - 6, block - 240473, slot - 10)
    Session 548: no row
    Information on the OTHER waiting sessions:
    Session 542:
      pid=30 serial=63708 audsid=143708731 user: 41/CASCADE
      O/S info: user: cascade, term: unknown, ospid: 1234, machine:
                program: JDBC Thin Client
      application name: JDBC Thin Client, hash value=2546894660
      Current SQL Statement:
    update cascade.cxml_foldercontent set name=:1 , lockId=:2 , isCurrentVersion=:3 , versionDate=:4 , metadataId=:5 , permissionsId=:6 , workflowId=:7 , isWorkingCopy=:8 , parentFolderId=:9 , relativeOrder=:10 , cachePath=:11 , isRecycled=:12 , recycleRecordId=:13 , workflowComment=:14 , draftUserId=:15 , siteId=:16 , prevVersionId=:17 , nextVersionId=:18 , originalCopyId=:19 , workingCopyId=:20 , displayName=:21 , title=:22 , summary=:23 , teaser=:24 , keywords=:25 , description=:26 , author=:27 , startDate=:28 , endDate=:29 , reviewDate=:30 , metadataSetId=:31 , expirationNoticeSent=:32 , firstExpirationWarningSent=:33 , secondExpirationWarningSent=:34 , expirationFolderId=:35 , maintainAbsoluteLinks=:36 , xmlId=:37 , structuredDataDefinitionId=:38 , pageConfigurationSetId=:39 , pageDefaultConfigurationId=:40 , structuredDataId=:41 , pageStructuredDataVersion=:42 , shouldBeIndexed=:43 , shouldBePublished=:44 , lastDatePublished=:45 , lastPublishedBy=:46 , draftOriginalId=:47 , contentTypeId=:48  where id=:49
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    delete from cascade.cxml_structureddata where id=:1

    Mohamed Houri wrote:
    > What is important for a foreign key is to be indexed (of course if the parent table is deleted/merged/updated, or if a performance reason imposes it). Whether this index is unique or not doesn't matter (as far as I know). But you should ask yourself the following question: what is the meaning of having a 1-to-1 relationship between a parent and a child table? If you succeed in creating a unique index on your FK, then this means that each PK value corresponds to at most one FK value! Isn't it? Is this what you want to have?
    Thanks; as I mentioned above, cxml_structureddata is actually the child table of cxml_foldercontent, with 1 or more records' owningEntityId referring to rows in cxml_foldercontent. The reason for the FK on cxml_foldercontent.structuredDataId is a little ambiguous, but it is explained above.
    > Will a TX enqueue held in mode X always be waited on by another TX enqueue row lock (X)? Or can it be waited on by an exclusive (X) table lock? Not really clear.
    Sorry, are you saying my question is unclear, or that it's not clear which type of eXclusive lock session 542 is trying to acquire in the first line of the trace? Do you think that the exclusive lock being held by session 548 in the first line is on rows in cxml_foldercontent (due to the ON DELETE SET NULL on these child rows), or on rows in cxml_structureddata that it's actually deleting?
    Is there any way for me to tell for certain?
    The first enqueue is a TX (transaction enqueue) held by session 548 in mode X (exclusive). This session represents the blocking session. At the same time the locked row is waited on by the blocked session (542), and the wait is for mode X (exclusive). So, to put it simply, we have here session 542 waiting for session 548 to release its lock (perhaps by committing/rolling back). At this step we are not in the presence of a deadlock.
    The second line of the deadlock graph shows that session 542 is the blocking session, holding a TM enqueue (DML lock) in SX (shared exclusive) mode, while session 548 (the waiting session) is blocked by session 542 and is waiting for SSX mode.
    Here we see that 548 is blocking session 542 via a TX enqueue, and session 542 is blocking session 548 via a TM enqueue ---> that is the deadlock. Oracle will then immediately choose a victim session arbitrarily (542 or 548) and kill its process, letting the remaining session continue its work.
    > That is your situation explained here.
    Thanks; any idea why session 542 (the DELETE from cxml_structureddata) would be trying to upgrade its lock to SSX? Is this lock mode required to update a child table's foreign key columns when using an ON DELETE SET NULL action? Having read more about SSX, I'm not sure I understand in which cases it's used. Is there a way for me to confirm with 100% certainty which tables the TM enqueue locks are being held on? Is session 548 definitely trying to acquire SSX mode on my cxml_foldercontent table, or could it be the cxml_structureddata table?
    > (a) Verify that all your FKs are indexed (be careful that the FK columns should be at the leading edge of the index)
    Thanks, we've done this already. When you say the "leading edge", do you mean for a composite index? These indexes are all single-column.
    > (b) Verify the logic of the DML against cxml_foldercontent
    Can you be more specific? Any idea what I'm looking for?
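One way to pin down which table each TM lock is on: the middle part of the TM resource name is the object_id in hex, so it can be looked up in the data dictionary (0000be63 here is decimal 48739, which matches the "dictionary objn" in the trace):

```sql
-- Decode the TM resource id from the deadlock graph into a table name.
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  object_id = TO_NUMBER('be63', 'xxxxxxxx');
```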

  • DB2 Table locks after SAP Gui crashes

    Hi,
    I have a really annoying problem. My SAP GUI sometimes crashes when I have nearly the limit of sessions open, and then I get serious problems with table locks in DB2.
    Background:
    When I run a program which is updating/deleting/inserting data in SAP tables and the SAP GUI crashes, the table entries that were being processed at that time get locked. So even when SAP then does the rollback, the locks still exist!
    The SAP Basis guys have restarted the system for me when I have hit this problem before, but it's really annoying when this happens again...
    I mean, of course the SAP GUI shouldn't crash, but anyway the locks should be deleted by SAP during rollback, right?
    Does anybody know this problem and does anybody know how to solve it without restarting the machine?
    Many thanks!

    Hi Markus,
    true, the SAP GUI is also an issue I have to solve... But anyway, the table lock should not occur.
    Think about some other situations where the process can be interrupted:
    - calling a BAPI synchronously via RFC, which is updating/inserting/deleting table records
    - a Windows bluescreen
    - etc.
    Whatever the cause, it is really annoying to need to restart the system whenever a process gets interrupted...
    Any ideas to this anyone?
    Thank you!

  • Killing a table lock

    Hello all,
    How can one kill a table lock from SQL ?
    Or is that feasible ?
    Thanks.

    If you cannot commit or rollback for any reason (terminal or system not reachable or locked by the user)
    Find the locks and the sid that owns it
    Also find the session's serial #
    select username,sid,serial# from v$session;
    alter system kill session 'sid,serial#';
    e.g.
    alter system kill session '16,454';

  • Time out parameter to avoid Table locking

    Hi,
    I am looking for any configurable parameter for setting a timeout to avoid table locking. What's happening now is: if I run SELECT ... FOR UPDATE from one session, Oracle holds a lock until I do a commit. And if I run the same query from another session, it waits for an unspecified time without returning any error. Using the query with the NOWAIT option does not serve my purpose.
    Any help in this regard is appreciated
    Thanks
    Sam

    Are you looking for a way to time out the original query, or are you looking for a way for the second query to wait for some time and then abort if it is unable to lock the row(s)?
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com
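For the second case, Oracle (9i and later) supports a bounded wait on the row locks: instead of NOWAIT failing immediately or the default blocking indefinitely, WAIT n raises ORA-30006 once the timeout expires. A sketch (table name illustrative):

```sql
-- Wait at most 10 seconds for the row locks, then raise
-- ORA-30006: resource busy; acquire with WAIT timeout expired.
SELECT * FROM some_table FOR UPDATE WAIT 10;
```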

  • How to release table locks?

    I was running Data Services (4.0 SP2) to load data into Hana (1.0 SP2 patch 20) using the Bulk load option. Apparently, the DS process hung and now the table I was writing to is locked although the DS Job is dead. How do you release the table lock without restarting Hana DB?

    Hello Sachin,
    you could try these statements. The first one will cancel the currently executed operation, the second one will disconnect the idle session:
    ALTER SYSTEM CANCEL SESSION session_id;
    ALTER SYSTEM DISCONNECT SESSION session_id;
    Regards,
    Mark

  • BPEL Toplink Adapter not releasing table lock

    We have a bpel process that attempts to do a merge on an Oracle table through the database adapter. If the merge operation is unsuccessful for any reason the process retries for a set number of times. We have found that a lock is generated on the tables in question and not released the first time. When the process retries it then hangs as it is waiting for the locks to be released.
    We are using version 10.1.2.0.2
    Thanks,
    Ashley

    Hi Ashley,
    I don't believe this is related to cache settings.
    Let's say a merge writes several rows and then fails updating the last row; at this point it has locks on the first n - 1 rows. The merge never acquires read locks.
    Because the merge failed, though, the transaction is rolled back, and locks always get released on a rollback.
    I confirmed with Glenn that when the BPEL engine catches an exception, the transaction is ended (releasing all locks either way), and the next retry occurs in a new transaction.
    Some guesses I have:
    -in 10.1.2.0.2 turn on usesBatchWriting="true", then all the writes will happen in one statement.
    -whenever a bpel process waits between steps, if the wait time is very small the instance may be kept in memory. This could mean that the next retry occurs in the same transaction as the first, so maybe you could try increasing the retry interval?
    -investigate why merge is failing at all, maybe it failed in the first place because another process had a lock on the rows.
    Thanks
    Steve

  • Disk space transaction and temp table lock in Oracle

    Hi,
    Today many sessions got a disk space transaction lock and a temp table lock, and I am seeing these locks for the first time in my production database.
    Is there any workaround to avoid this contention?
    Thanks
    Prakash GR

    Post your version (all 3 decimal places).
    Post the SELECT statement and results that have led you to this conclusion.
    Other than the fact that you have seen a number: what, precisely, is the issue?

  • Table Lock DB2

    Hello All,
    Just wanted some advice, here is my situation.
    I have a message-driven bean whose messages I want to ensure I process only once,
    so I am storing the MDB ids in a DB2 database
    (a simple table that just stores MDB ids plus some other MDB data).
    I would like to take a table lock to keep MDB reading/inserting consistent. In my transaction I would
    check if the id is there already, and insert it if it's not.
    I think I can use a SELECT FOR UPDATE in DB2, but I have also been reading about setting an isolation level of TRANSACTION_SERIALIZABLE. Which would be the best approach?

    I have also just found another DB2 SQL statement, LOCK TABLE - not sure if this is another good solution.
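A possible alternative to a table lock, sketched under assumed table/column names: let a unique constraint do the deduplication, and treat the duplicate-key error (SQLSTATE 23505 in DB2) as "already processed":

```sql
-- Illustrative schema: the primary key enforces at-most-once processing.
CREATE TABLE mdb_messages (
  message_id VARCHAR(64) NOT NULL PRIMARY KEY,
  payload    VARCHAR(1024)
);

-- A second insert of the same id fails with SQLSTATE 23505 (duplicate key),
-- which the MDB can catch and skip - no explicit table lock needed.
INSERT INTO mdb_messages (message_id, payload) VALUES (?, ?);
```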

  • Killing table locks

    Hi all,
    Good day..
    The DB version is 10.2.0.4. I need to write a script which has to kill any table locks in the DB that are more than 10 minutes old.
    thanks,
    baskar.l

    hi sb,
    DECLARE
    CURSOR c IS
    SELECT c.owner,
          c.object_name,
          c.object_type,
          b.SID,
          b.serial#,
          b.status,
          b.osuser,
          b.machine
    FROM v$locked_object a, v$session b, dba_objects c
    WHERE b.SID = a.session_id AND a.object_id = c.object_id
    and c.object_name in (MES.JSW_CRM_C_HR_COIL_INFO,MES.JSW_CRM_C_HR_COIL_INFO);
    c_row c%ROWTYPE;
    1_sql VARCHAR2(100);
    BEGIN
    OPEN C;
    LOOP
    FETCH c INTO c_row;
    EXIT WHEN c%NOTFOUND;
    l_sql := 'alter system kill session '''||c_row.sessionid||','||c_row.serialid||'''';
    EXECUTE IMMEDIATE l_sql;
    END LOOP;
    CLOSE c;
    END;
    But when executing it I get:
    1_sql VARCHAR2(100);
    ERROR at line 15:
    ORA-06550: line 15, column 1:
    PLS-00103: Encountered the symbol "1" when expecting one of the following:
    begin function package pragma procedure subtype type use
    <an identifier> <a double-quoted delimited-identifier> form
    current cursor
    thanks,
    baskar.l
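The PLS-00103 arises because PL/SQL identifiers cannot start with a digit: 1_sql on line 15 should be l_sql. The object names in the IN list also need to be quoted string literals, and the cursor exposes the v$session columns sid and serial#, not sessionid/serialid. A corrected sketch (the last_call_et > 600 filter as an approximation of "locked for more than 10 minutes" is an assumption, as is running with the ALTER SYSTEM privilege):

```sql
DECLARE
  l_sql VARCHAR2(100);   -- identifier must not start with a digit
BEGIN
  FOR c_row IN (SELECT b.sid, b.serial#
                FROM   v$locked_object a, v$session b, dba_objects c
                WHERE  b.sid       = a.session_id
                AND    a.object_id = c.object_id
                AND    c.object_name IN ('JSW_CRM_C_HR_COIL_INFO')  -- quoted literals
                AND    b.last_call_et > 600)  -- assumption: last call > 10 minutes ago
  LOOP
    l_sql := 'alter system kill session '''
             || c_row.sid || ',' || c_row.serial# || '''';
    EXECUTE IMMEDIATE l_sql;
  END LOOP;
END;
/
```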

  • SYST: Abnormal termination (ANLB-LGJAN not equal to ANLC-GJAHR)

    Hi All
    An asset was scrapped in 2009 with an unplanned depreciation value of $100, which resulted in a negative NBV of -100.
    This impacts the closing of 2009 and the opening of 2011.
    I am trying to post manual depreciation (TT 640) dated Dec 2010, but I am getting the error below:
    "SYST: Abnormal termination (ANLB-LGJAN not equal to ANLC-GJAHR)" Asset NL010000002004510000
    Internal tables ANLB-LGJAN and ANLC-GJAHR of asset NL010000002004510000 are not synchronous.
    I checked SDN and the message listed below is similar to my error, but I am not able to understand much of it; my understanding is that there is no FY 2010 in table ANLC, which is causing this error.
    Could you clarify what steps need to be done?
    How do I synchronize ANLB-LGJAN and ANLC-GJAHR?

    Hi Aravind,
    most of the time it is one of the following reasons from the link you have found.
    Another link with examples: Error AS02 AA 698 - SYST: Abnormal termination (T_ANLB not equal to T_ANLC)
    1) Are the table fields T093C-LGJAHR, ANLB-LGJAN and ANLC-GJAHR in line?
    Example of an inconsistency:
    T093C-LGJAHR: 2010
    ANLB-LGJAN and ANLC-GJAHR: 2009
    Solution: repeat the fiscal year change.
    2) Is the table field T082AVIEWB-AUTHORITY set to 0 instead of 2?
    Example:
    MANDT  FIAA_VIEW  AFAPL  AFABER  AUTHORITY
    100    01         ABC    01      2
    100    01         ABC    02      2
    100    01         ABC    15      2
    100    01         ABC    20      2
    100    01         ABC    30      2
    100    01         ABC    34      2
    100    01         ABC    35      2
    100    01         ABC    37      0   <- set to 2
    Solution:
    In note 900767 you will find the report RACORR_VIEW0_SET to correct this.
    3) Do ANLB and ANLC have the same number of areas?
    Example:
    ANLB has areas 01, 02, 03.
    ANLC has areas 01, 02, 03 and 10.
    The solution in that case would not be as easy.
    regards Bernhard
