Lock table is out of available object entries

Hi,
I am using DB 4.6.21.
I have created a table that several applications write to concurrently.
The table is opened with DB_THREAD and every application writes through a DB_WRITECURSOR cursor. I am not configuring the locking subsystem explicitly; only READ_COMMITTED and DB_WRITECURSOR are used by the applications to access the table.
On a PC this works properly, but on an AT91SAM9260EK board with kernel 2.6.23.9 I get:
Berkeley DB error: Lock table is out of available object entries
What could be the reason?

Hi Ratheesh,
Please search through the forum; similar locking subsystem configuration issues have already been discussed.
In short, you'll need to increase the number of lock objects:
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/lock/max.html
I see you're using the DB_WRITECURSOR flag, which is specific to CDS (Concurrent Data Store), so you should size the locking subsystem appropriately for CDS: the number of lock objects needed is two per open database, one for the database lock and one for the cursor lock when the DB_CDB_ALLDB option is not specified. The locking subsystem configuration should be the same for all the processes accessing the environment, or left unspecified for the processes that just join the environment.
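For illustration only, here is a rough sketch of sizing the lock table before DB_ENV->open() for a CDS environment; the path and the limits below are placeholders, not recommendations for your workload:

#include <stdio.h>
#include <db.h>

int main(void)
{
    DB_ENV *dbenv;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return (1);
    }
    /* Roughly two lock objects per open database handle: one for the
     * database lock and one for the cursor lock; add some headroom. */
    dbenv->set_lk_max_objects(dbenv, 100);
    dbenv->set_lk_max_lockers(dbenv, 100);
    dbenv->set_lk_max_locks(dbenv, 100);

    if ((ret = dbenv->open(dbenv, "/path/to/env",
        DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL | DB_THREAD, 0)) != 0) {
        fprintf(stderr, "DB_ENV->open: %s\n", db_strerror(ret));
        (void)dbenv->close(dbenv, 0);
        return (1);
    }
    /* ... open databases and write through DB_WRITECURSOR cursors ... */
    (void)dbenv->close(dbenv, 0);
    return (0);
}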
If you still see this error message, provide some information on your OS/platform, on how the processes access the environment, and the locking statistics (db_stat -N -Co -h <env_dir>).
Regards,
Andrei

Similar Messages

  • Issue: Lock table is out of available object entries

    Hi all,
    We have a method that adds records to BDB. Once there are more than 10000 records, if we continue adding records (for example another 400) and then do another update/add operation, it fails.
    The error message is "Lock table is out of available object entries".
    How can we resolve it?
    Thanks.
    Jane.

    First, the BDB stats are as below:
    1786 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    1959 Maximum number of locks at any one time
    126 Number of current lockers
    136 Maximum number of lockers at any one time
    26 Number of current lock objects
    1930 Maximum number of lock objects at any one time
    21M Total number of locks requested (21397151)
    21M Total number of locks released (21397099)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
    Then I ran the method to insert 29 records into BDB; BDB is not locked up yet, and the stats are:
    1794 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    1959 Maximum number of locks at any one time
    134 Number of current lockers
    136 Maximum number of lockers at any one time
    26 Number of current lock objects
    1930 Maximum number of lock objects at any one time
    22M Total number of locks requested (22734514)
    22M Total number of locks released (22734462)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
    Then I ran the method again to insert records; the issue "Lock table is out of available locks" occurred, and the BDB stats are:
    1795 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    2000 Maximum number of locks possible
    2000 Maximum number of lockers possible
    2000 Maximum number of lock objects possible
    52 Number of current locks
    2000 Maximum number of locks at any one time
    135 Number of current lockers
    137 Maximum number of lockers at any one time
    27 Number of current lock objects
    1975 Maximum number of lock objects at any one time
    26M Total number of locks requested (26504607)
    26M Total number of locks released (26504553)
    0 Total number of lock requests failing because DB_LOCK_NOWAIT was set
    0 Total number of locks not immediately available due to conflicts
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    736KB The size of the lock region
    0 The number of region locks that required waiting (0%)
    Why does this issue occur, and how can we resolve it?
    Thanks very much.
    Jane

  • Db_associate fails with : Lock table is out of available lock entries

    Hi
    Occasionally on startup my app needs to rebuild its secondary database, so I call db_associate with DB_CREATE set. If the primary db is large the associate fails with "Lock table is out of available lock entries". Both databases are hashes, so I have not configured any special lock sizes as I didn't think I needed to.
    What lock configuration does db_associate need to succeed?
    I am using version 4.7.25
    Thanks
    Ashley

    Hi Ashley,
    You should size the locking subsystem with values high enough that the secondary database rebuilds successfully. Then run the "db_stat -C" utility to see how many locks, lock objects and lockers the operation needed, and reconfigure the locking subsystem with values slightly larger than that, so you are sure to have enough resources the next time you rebuild the secondary.
    "Lock table is out of available lock entries" means that the Berkeley DB locking subsystem has not been configured for enough locks. For more information, see the "Configuring locking: sizing the system" section of the Berkeley DB Reference Guide, included in your download package and also available at:
    http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/lock_max.html
    To see what locks are held in the database environment at any time, you can dump the lock table using the -Cl options:
    % db_stat -h [database environment directory] -Cl -N
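    To make that concrete, here is a rough sketch of the associate step with the environment sized beforehand; the callback, handle names and limits are placeholders rather than values tuned for your data:
    #include <string.h>
    #include <db.h>
    /* Hypothetical secondary-key extractor -- replace with your real one. */
    static int
    get_skey(DB *sdb, const DBT *pkey, const DBT *pdata, DBT *skey)
    {
        memset(skey, 0, sizeof(DBT));
        skey->data = pdata->data;   /* assumption: the key is the whole data item */
        skey->size = pdata->size;
        return (0);
    }
    int
    rebuild_secondary(DB_ENV *dbenv, DB *primary, DB *secondary)
    {
        /* dbenv is assumed to have been sized before DB_ENV->open(), e.g.
         *     dbenv->set_lk_max_locks(dbenv, 100000);
         *     dbenv->set_lk_max_objects(dbenv, 100000);
         *     dbenv->set_lk_max_lockers(dbenv, 10000);
         * then trimmed down later based on db_stat -C output. */
        return (primary->associate(primary, NULL, secondary, get_skey, DB_CREATE));
    }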
    Additional documentation:
    db_stat: http://download.oracle.com/docs/cd/E17076_02/html/api_reference/C/db_stat.html
    set_lk_max_locks: http://download.oracle.com/docs/cd/E17076_01/html/api_reference/C/envset_lk_max_locks.html
    Bogdan Coman

  • Perl & "Lock table is out of available lock entries"

    Hi,
    I am able to retrieve about ten thousand results with ./dbxml,
    but when I use the same query from a Perl script,
    I get the error:
    Lock table is out of available lock entries
    Query collection("frwiki_10_0.dbxml")/DOCUMENT/Sentence failed (1)
    Error: Could not fetch next DOM element for doc id: 2, nid: 8CB6 in /Users/francois/Desktop/INRIA/EasyRef2_DBXML/script/../lib/session.pm, line 175
    Database handles still open at environment close
    Open database handle: frwiki_10_0.dbxml/structural_stats
    Open database handle: frwiki_10_0.dbxml/secondary_document_statistics_double
    Open database handle: frwiki_10_0.dbxml/secondary_document_index_double
    Open database handle: frwiki_10_0.dbxml/secondary_document_statistics_string
    Open database handle: frwiki_10_0.dbxml/secondary_document_index_string
    Open database handle: frwiki_10_0.dbxml/node_nodestorage
    Open database handle: frwiki_10_0.dbxml/secondary_document
    Open database handle: frwiki_10_0.dbxml/secondary_dictionary
    Open database handle: frwiki_10_0.dbxml/primary_dictionary
    Open database handle: frwiki_10_0.dbxml/secondary_sequence
    Open database handle: frwiki_10_0.dbxml/secondary_configuration
    Segmentation fault
    On other forums people mention "Configuring locking: sizing the system" and DB_ENV->set_lk_max_locks(),
    but how can I configure my environment the same way from a Perl script?

    I'm configuring my environment in my Java application via a DB_CONFIG file, and it definitely works. When I had problems like yours, I just increased the number of locks, lockers and objects, and everything started to work just fine. Are you sure that you have put DB_CONFIG into the environment home directory? Did you increase the number of locks, lockers and objects sufficiently?
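    For example, a DB_CONFIG file in the environment home directory along these lines (the numbers are only placeholders to tune from db_stat -C output; the lock-table sizes generally take effect when the environment regions are created):
    set_lk_max_locks 100000
    set_lk_max_lockers 10000
    set_lk_max_objects 100000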
    Vyacheslav
    UPD. Does your application have enough permissions to read DB_CONFIG file?
    Edited by: detonator413 on Oct 21, 2009 12:33 PM

  • Lock table is out of available lock entries

    Hi,
    I'm using BDB 4.8 via Berkeley DB XML. I'm adding a lot of XML documents (ca. 1000) in one transaction and get "Lock table is out of available lock entries". My lock count is set to 100000 (which is already a lot, but still...).
    I know that I probably should not put so many docs in the same transaction, but why does BDB throw a "not enough locks" error? Aren't 100000 locks enough? (I also tried setting 1 million for testing purposes.)
    As a side question, may I change the number of locks after environment creation (but before opening it)?
    P.S. I hope this is not off-topic for this forum.
    Thanks in advance,
    Vyacheslav

    Hello,
    As you mention, "Lock table is out of available lock entries" indicates that more locks are needed than your underlying database environment is configured for. Please take a look at the "Configuring locking: sizing the system" section of the Berkeley DB Reference Guide at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/lock_max.html
    From there:
    The maximum number of locks required by an application cannot be easily estimated. It is possible to calculate a maximum number of locks by multiplying the maximum number of lockers, times the maximum number of lock objects, times two (two for the two possible lock modes for each object, read and write). However, this is a pessimal value, and real applications are unlikely to actually need that many locks. Reviewing the Lock subsystem statistics is the best way to determine this value.
    What information are the lock subsystem statistics showing? You can get them with db_stat -c or programmatically with the environment's lock_stat method.
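    A small sketch of the programmatic route, assuming an already-open environment handle (error handling trimmed):
    #include <stdio.h>
    #include <stdlib.h>
    #include <db.h>
    void
    print_lock_usage(DB_ENV *dbenv)
    {
        DB_LOCK_STAT *sp;
        if (dbenv->lock_stat(dbenv, &sp, 0) != 0)
            return;
        printf("locks:   %lu max used of %lu configured\n",
            (unsigned long)sp->st_maxnlocks, (unsigned long)sp->st_maxlocks);
        printf("objects: %lu max used of %lu configured\n",
            (unsigned long)sp->st_maxnobjects, (unsigned long)sp->st_maxobjects);
        printf("lockers: %lu max used of %lu configured\n",
            (unsigned long)sp->st_maxnlockers, (unsigned long)sp->st_maxlockers);
        free(sp);   /* the statistics buffer is allocated by the library */
    }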
    Thanks,
    Sandra

  • Lock table is out of available locker entries

    My mail server version is -->
    Sun Java(tm) System Messaging Server STORE 6.3-0.15 (built Feb 9 2007)
    This does not always happen, but sometimes I find the following error messages in the default log file.
    ---- default log --
    [12/May/2009:12:00:00 +0900] epajo01 imdbverify[4106]: General Notice: verify database snapshots started
    [12/May/2009:12:00:00 +0900] epajo01 imdbverify[4106]: General Notice: verify snapshots finished: Total verified 3 Total failed = 0
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Notice: imexpire started, functions: expire purge
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Lock table is out of available locker entries
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: DBERR: can't get locker id for file descriptor 20: Not enough space
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Critical: Unable to lock index for user/journal_ms08_i: Not enough space
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/02/91/=journal_ms08_i/store.exp: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Critical: Unable to lock index for user/journal_ms08_y: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    -------- omit ---------
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/e1/91/=journal_ms08_g/store.exp: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Lock table is out of available locker entries
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: DBERR: can't get locker id for file descriptor 23: Not enough space
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Critical: Unable to lock index for user/journal_ms08_w: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/73/e5/=journal/store.exp: Not enough space
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/e2/91/=journal_ms08_w/store.exp: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Critical: Unable to lock index for user/journal_ms08_h: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/f1/91/=journal_ms08_h/store.exp: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Critical: Unable to lock index for user/journal_ms08_x: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Error: locking error: Locker does not exist
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: Store Error: locking {StoreRoot}/=user/f2/91/=journal_ms08_x/store.exp: Invalid argument
    [12/May/2009:12:00:01 +0900] epajo01 imexpire[4115]: General Notice: Expire finished
    When these error messages appear in the default log file, all emails are enqueued but not dequeued.
    Emails just pile up in the queue directory and keep increasing.
    To work around it, I restart the mail server (with the stop-msg and start-msg commands) and then the emails are dequeued.
    On a website I found that these error messages can appear when a product using Berkeley DB is interrupted, for example with Control-C.
    I know JMS uses Berkeley DB, but there was no chance Control-C was used at that time.
    From the logs I can only assume that imexpire started while the snapshot job had not completely finished (although the log file shows the snapshot job as finished), i.e. imexpire started before the snapshot job released its lockers. But I am not sure.
    I don't know why this happens or what a solid workaround would be.
    If someone has had the same problem as mine, please reply.
    Thanks in advance.
    Edited by: leeky41 on May 12, 2009 7:22 AM

    leeky41 wrote:
    "Sun Java(tm) System Messaging Server STORE 6.3-0.15 (built Feb 9 2007)"
    Please always provide the full output of ./imsimta version. I cannot tell, for example, what platform you are using (Solaris SPARC/x86/Linux).
    "I don't know why this thing happened and what is solid workaround."
    I suggest your first step is to upgrade to a recent release of MS6.3 and see if the problem persists.
    Regards,
    Shane.

  • How to find out the program behind an entry made in the MCHB table through a background job

    Hi,
    We have a case where table MCHB was updated with zero quantity for batches by user ZZZBATCH, which is used for background jobs.
    Table MCHB was updated with batches with zero quantity even though the process order was set with the deletion flag.
    Now our client wants to know which program made this entry.
    We don't know which background job/program it was.
    We are sure it was run as a background job, since user ZZZBATCH is set up for background jobs.
    Please help us find out how this entry was created in the MCHB table.

    Find out from the CDPOS table the date/time this entry was made. Then in SM37 look for the jobs that were running at that time; you should get some clue.
    If you can't do that, just go through all the programs run by ZZZBATCH and see which of them uses the MCHB table or some BAPIs with *BATCH*.
    cheers
    Rav

  • Out of line object 'DataSource' -- but when re-saving table properties, no issues

    I've got a PowerPivot workbook that's connecting to an Oracle database, then calculating a few measures. When I "test the connection" in PowerPivot, it's fine. When I go to "Table Properties" and re-save, it seems to pull the dataset fine.
    But refreshing within Excel gives me:
    We couldn't refresh the table 'AthleticGPABlahBlah' from the connection 'Student_PLSQLBlahBlah'. Here's the error message we got: Out of line object 'DataSource', referring to ID(s) 'c6427542-blahblah', has been specified but not used. The following
    system error occurred: Unspecified error
    My data types seem right, and I don't have any big dates. Is my workbook just corrupt? I'd love to not re-do it, so any tips to getting this fixed would be great!

    Hi Hensen,
    You mentioned that you don't have any big dates but do you have any dates that are too small? What happens when you use the Table Properties to restrict dates to all be on or after 1/1/2000, for example?
    Regards,
    Michael Amadi
    Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
    Website: http://www.nimblelearn.com, Twitter:
    @nimblelearn

  • Command object locks table (TX)

    I'm having locking problems with ODP.NET. I'm updating the same table with many connections in many threads. I get table locks for hours (over 24 hours) and I can't find any timeout to set. The CommandTimeout on the Command object is not implemented.
    In the V$LOCK table I can see two locks on the same table made by two session IDs from the same computer.
    I'm using transactions on the Connection object.
    How do I set a timeout for the update so that the lock will disappear?
    String connString = "Data Source=" + dsn + ";"
                      + "User ID=" + user + ";"
                      + "Password=" + pwd;
    OracleConnection conn = new OracleConnection(connString);
    conn.Open();  // the connection must be opened before executing the command
    String sql = "UPDATE ABDATA2 SET ABNAVN='SOLNA STAD ' WHERE OBJID = -21805738";
    OracleCommand command = new OracleCommand(sql, conn);  // was m_conn; use the connection created above
    //command.CommandTimeout = 60000; Exists but not supported
    command.ExecuteNonQuery();
    Is there anyone who has a clue?

    Hi Neo, thanks for responding;
    I did not publish the report with saved data, and I'm not sure what you mean about appending the string value?  Could you explain that part?
    The parameter I created is declared as a string with no default value, with a name of "CodeTableName". I used the Create Parameter button in the Command object window to make it.
    I then added the parameter name to my SQL statement as listed above.
    The actual code table names in the database are longer than what the parameter calls for. They all start with "TIBURON.ZZ_" and end with "_Codes". I didn't want the users to have to remember the full names, so that's why the SQL statement shows those additional parts.
    The report works perfectly when I run it from Crystal Reports 9 or CR11 itself. It's only when I upload the report to our web server that the users aren't given a prompt to enter a parameter. They only have a button labeled "Run Report".
    Any ideas?
    Thanks,
    Joe

  • Listing out the plsql objects which update tables

    1) Is there a way I can list which PL/SQL objects issue DML statements against a table? I know we can use the USER_DEPENDENCIES or DBA_DEPENDENCIES views, but they list packages or procedures even if a column is only used to define a referenced datatype or the code is commented out. I just want to see the objects that actually issue DML statements.
    2) Is it possible to see column-level details? Meaning, if one column is updated in one procedure and another column is updated in another procedure, can I list the procedures together with the columns they would update or insert?
    I appreciate your help. Thank you.

    Do a join of dba_dependencies and dba_hist_sql_plan/v$sql_plan
    thanks
    http://swervedba.wordpress.com/

  • How to use a common object from two tables without a join

    Hi,
    I have two tables called A and B. In table A I have the following objects:
    1.weekend
    2.S1(measure)
    3.S2(measure)
    4.S3(measure)
    5.S4(measure)
    And in table B I have the following columns:
    1.week end
    2.p1(measure)
    3.p2(measure)
    4.p3(measure)
    5.p4(measure)
    Now in the universe I created all the measure objects, i.e. s1, s2, s3, s4, p1, p2, p3, p4, plus A.weekend and B.weekend.
    Instead of using weekend twice, I want to use it only once because it is common to both tables.
    If I use a join between these tables I get the values fine.
    But without a join, is there anything I can do at universe level to create a common object usable from both tables? I tried aggregate awareness, but at report time it generates two SQL statements which are not synchronized.
    Please help me with this.

    Hi,
    Although the Weekend column is present in both tables, when you create a single object in the universe, the universe can only identify the relationship with the table referenced when the object was created.
    So there will be no relationship identified with the other table's measures.
    You therefore need to create two Weekend objects in the universe (in two classes).
    Case 1: You need not join these two tables in the universe. When you create two queries in WebI, the Weekend objects are synchronized automatically (if both are of the same datatype).
    Case 2: If you join these two tables in the universe, your SQL may contain Weekend from Table 1 with measures from Table 2, or Weekend from Table 2 with measures from Table 1.
    In short, you need to create two objects in the universe, but your query may contain a single object based on Case 2.
    Regards,
    Vamsee

  • I don't remember my entry passcode, after many failures I'm now locked out of thumb print entry. Help

    I don't remember my entry passcode, after many failures I'm now locked out of thumb print entry. Help

    Hi ..
    Follow the instructions here >  iOS: Forgotten passcode or device disabled after entering wrong passcode

  • Lock table overflow - BRF Plus - can it work with many entries in tables ?

    hi,
    When I try to open a table expression in BRFplus with 500 entries in the web UI,
    I get the error "Lock table overflow", and I see more than 2000 entries in SM12 for the FDT_ tables,
    and the system cannot create any more locks (so other applications are not working).
    Why is that? Can BRFplus work with more than 100 entries in a table expression at all?
    Can anyone tell from experience? This is a huge issue, I believe.
    thank you,
    Regards,
    Michal Krawczyk

    Hi Michal,
    You are running a NW 701 system. This was the first version of BRFplus and the DB schema was not good for high volume.
    I have written some notes recommending decision tables of up to 100 rows (of course other factors, like the number of columns, are also important).
    In NW 702 the DB schema has been changed; decision tables with 10,000 rows are possible and performance is better by a factor of 100 and more.
    In your specific case you may consider increasing the number of locks that are possible. But this is a workaround rather than a solution.
    BR,
    Carsten

  • MySQL lock table size Exception

    Hi,
    Our users get random error pages from vibe/tomcat (Error 500).
    If the user tries it again, it works without an error.
    here are some errors from catalina.out:
    Code:
    2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
    2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
    at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
    at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
    2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
    org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
    Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
    at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
    2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
    2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
    2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
    2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
    2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
    It always logs MySQL error code 1206:
    MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
    1206 (ER_LOCK_TABLE_FULL)
    The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
    The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
    In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
    Can I set the value to 134217728 (128MB), or will this cause other problems? Will this setting solve my problem?
    Thanks for your help.

    I already found an entry from Kablink:
    https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
    But I think this can't be a permanent solution...
    Our MySQL Server version is 5.0.95 running on sles11
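    For what it's worth, innodb_buffer_pool_size is normally raised in the [mysqld] section of my.cnf, and on MySQL 5.0 the server needs a restart for the change to take effect; the value below is just the 128MB you mention, not a tuned recommendation:
    [mysqld]
    innodb_buffer_pool_size = 128M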

  • How to Create an Alert on a V$Lock Table

    Hi Gurus,
    Can anyone help me with how to create an event alert on the lock table (V$LOCK), so that whenever a table is locked I can send an email notification to the user with the locking history?
    Any approach is highly appreciated.

    Hi,
    Since event alerts in eBS are in fact based on the trigger concept, and it's not allowed to
    create triggers on objects owned by SYS, I don't think this will be possible.
    Regards
