Enqueue Replication Server - Lock Table Size

Note: I think I had posted this wrongly under ABAP Development, hence I request the moderator to kindly delete this post. Thanks.
Dear Experts,
If the Enqueue Replication Server is configured, can you tell me how to check the lock table size value that we set using the profile parameter enque/table_size?
If the enqueue server is configured on the same host as the CI, it can be checked using
ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
As it is a standalone Enqueue Server, I don't know where to check this value.
Thanking you in anticipation.
Best Regards
L Raghunahth

Hi Raghunath,
Check the following links:
http://help.sap.com/saphelp_nw2004s/helpdata/en/37/a2e3ab344411d3acb00000e83539c3/content.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/44/5efc11f3893672e10000000a114a6b/content.htm
Regards
Bhaskar
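A minimal command-line sketch for checking this at OS level (a hedged example, not from the thread; <SID>, <nn> and <host> are placeholders for a standard profile layout, and option 2 of ensmon is the replication/status request mentioned further down on this page):
Code:
# Configured value in the instance profile of the standalone enqueue server ((A)SCS)
grep enque/table_size /usr/sap/<SID>/SYS/profile/<SID>_ASCS<nn>_<host>
# Query the running standalone enqueue server (2 = replication information)
ensmon pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS<nn>_<host> 2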

Similar Messages

  • Enqueue Server Lock Table

    Hi,
    Is there a way to manually release a locked resource in the Enqueue Server lock table, for instance through the Visual Admin?
    Thanks in advance.

    No.
    Later versions offer this, but not in the Visual Admin.
    Regards,
    Benny

  • Enqueue Replication Server.

    Dear Experts,
    If the Enqueue Replication Server is configured, can you tell me how to check the lock table size value that we set using the profile parameter enque/table_size?
    If the enqueue server is configured on the same host as the CI, it can be checked using
    ST02 --> Detail Analysis Menu --> Storage --> Shared Memory Detail --> Enque Table
    As it is a standalone Enqueue Server, I don't know where to check this value.
    Thanking you in anticipation.
    Best Regards
    L Raghunahth

    Hello,
    I haven't worked on the standalone Enqueue Server; however, it is worth checking help.sap.com for this.
    I did a bit of searching and found that monitoring can be done via ensmon.
    Check this link, Monitoring standalone Enqueue Server.
    http://help.sap.com/saphelp_nw70/helpdata/EN/bb/84ba9b96e0a94f94ade7c73df93404/frameset.htm
    Also, there might be a few interesting things in the trace files.
    http://help.sap.com/saphelp_nw70/helpdata/EN/cb/42f83df31a42fe8e266502cccdd9a0/frameset.htm
    Regards,
    Siddhesh

  • Enqueue replication server does not terminate after failover

    Hi,
    We are trying to set up high availability for the enqueue server, where the enqueue server runs on node A and the ERS on node B at all times.
    Whenever the enqueue server is stopped on node A, it automatically fails over to node B, but after the lock table has been replicated, the enqueue server does not terminate the ERS running on node B. As a result, both the enqueue server and the ERS keep running on the same host (failover node B), which should not be the case.
    We haven't configured polling in this scenario; SAP Note 1018968 describes the same behaviour, however it is applicable only to versions 640 and 700.
    Ideally, when the enqueue server switches to node B, it should terminate the ERS on that node after replication, and the HA software would then take care of restarting it on node A.
    Our ERS is version 701; could anyone please let me know whether the same behaviour applies to version 701 as well?
    Or is there any additional configuration to be done to make it work?
    Thanks in advance.
    Cheers !!!
    Ashish

    Hi Naveed,
    Stopping the ERS is supposed to be taken care of by SAP itself, not by the HA software.
    Once the ERS stops on node B, a fault is reported, and as a result the HA software will restart the ERS on node A.
    Please refer to a section of SAP Note 1018968 - 'Enqueue replication server does not terminate after failover'
    "Therefore, the cluster software must only organise the restart of the replication server and does not need to do anything for the shutdown."
    Another blog about the same:
    http://www.symantec.com/connect/blogs/veritas-cluster-server-sap  
    - After the successful lock table takeover, the Enqueue Replication Server will fault on this node (initiated by SAP). Veritas Cluster Server recognizes this failure and initiates a failover to a remaining node to create SAP Enqueue redundancy again. The Enqueue Replication Server will receive the complete Enqueue table from the Enqueue Server (SCS) and later Enqueue lock updates in a synchronous fashion.
    So it is not about the HA software; it is SAP that should control the ERS on node B (a quick check is sketched below).
    Cheers !!!
    Ashish
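    If you want to see whether the ERS instance is in fact still running on node B after the takeover, a hedged sketch (assuming sapcontrol is available on that node and <nn> is the ERS instance number, 10 in your case):
    Code:
    # Run on node B; lists all processes of the ERS instance with their current state
    sapcontrol -nr <nn> -function GetProcessList
    If the replication server process is still shown as running there, the self-termination described in SAP Note 1018968 did not take place.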

  • MySQL lock table size Exception

    Hi,
    Our users get random error pages from vibe/tomcat (Error 500).
    If the user tries it again, it works without an error.
    Here are some errors from catalina.out:
    Code:
    2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
    2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
    at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
    at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
    2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
    org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
    Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
    at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
    2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
    2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
    2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
    2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
    2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
    2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
    2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
    org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
    at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
    It always logs the Mysql error code 1206:
    MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
    1206 (ER_LOCK_TABLE_FULL)
    The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
    The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
    In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
    Can I set the value to 134217728 (128 MB), or will this cause other problems? Will this setting solve my problem?
    Thanks for your help.

    I already found an entry from Kablink:
    https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
    But I think this can't be a permanent solution...
    Our MySQL Server version is 5.0.95 running on sles11
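    A minimal sketch of the check and the change (a hedged example; it assumes you can edit my.cnf and restart mysqld, and that 128 MB fits into the server's memory):
    Code:
    # Show the currently active value (in bytes)
    mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
    # In /etc/my.cnf, section [mysqld], set e.g.:  innodb_buffer_pool_size = 128M
    # The variable is not dynamic in MySQL 5.0, so restart the server afterwards:
    rcmysql restart   # SLES 11 init script; use your platform's service command otherwise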

  • ECC Cluster Enqueue Replication Server

    Hi Experts,
    I installed SAP ECC EHP5 in a Windows Cluster Environment. I followed all the steps in the installation guide. When I execute the following command to check the status of the Enqueue Replication Server:
    enqt.exe pf=<profile> 2
    The following message appears:
       Nr  Man UserName Name-- M -
    Arg-- -UsVB-----
    -Object- TCOD B
    Entries in Backup-File...: 0
    Instead of the following message:
    Replication is enabled in server, repl. server is connected. Replication is active...
    Am I missing any additional configuration?
    The trace file had this information:
    trc file: "dev_eq_trc_7804", trc level: 1, release: "720"
    Wed Oct 19 14:02:02 2011
    Enqueue Info: enque/use_pfclock2 = FALSE
    Enqueue Info: enque/use_pfclock2 = FALSE
    Enqueue Info: enque/disable_replication = 0
    Enqueue Info: replication enabled
    Enqueue Info: enque/replication_dll not set
    LstRestore: no old replication configured
    I manually set the enque/disable_replication parameter to 2 because it was previously set to 0 in the instance profile.
    Any ideas?
    Thanks a lot.
    Kind Regards

    Hi Estaban,
    The command is "ensmon", not "enqt". Check the example below:
    ensmon pf=<ERS profile> 2
    Best regards,
    Orkun Gedik
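    A hedged example of the full call on Windows (the profile path is a placeholder for your ERS or SCS instance profile; ensmon.exe ships with the kernel like enqt.exe):
    Code:
    ensmon.exe pf=<drive>:\usr\sap\<SID>\SYS\profile\<SID>_ERS<nn>_<host> 2
    With replication working you should then see the "Replication is enabled in server, repl. server is connected. Replication is active" message quoted above.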

  • Enqueue Replication Server Installation

    Hi Experts,
    I have EP 7.0 SP14 (SR3) on IBM AIX with Oracle 10g.
    I have installed the SCS and DB in high availability and the CI and DI in standalone mode. My setup has only a Java instance and no ABAP instance at all. We use HACMP software for the failover scenario.
    I am planning to add an Enqueue Replication Server to the high-availability solution.
    However, I could not find any information about the installation of the Enqueue Replication Server in the Master Guide for my environment, and the Master DVD does not have the option of installing the Enqueue Replication Server as it does for an MSCS Enqueue Replication Server installation.
    Can anyone let me know how to proceed with installing the Enqueue Replication Server?
    I have seen the link below, but it did not help:
    http://help.sap.com/saphelp_nw04s/helpdata/en/36/67973c3f5aff39e10000000a114084/frameset.htm
    Regards,
    Karthick Eswaran

    Hi Karthick,
    I have not done this before, but I am planning to do it soon. Did this not help? http://help.sap.com/saphelp_nw04s/helpdata/en/de/cf853f11ed0617e10000000a114084/content.htm
    Plus have you seen these two notes: 821904, 823941.
    -Regards

  • SQL Server log table sizes

    Our SQL Server 2005 database (IdM 7.1.1 with patch 13 recently applied, running on Win2003 and app server 8.2) has grown to 100 GB. The repository was created with the provided create_waveset_tables.sqlserver script.
    In looking at the table sizes, the space hogs are:
    Data space:
        log       7.6 GB
        logattr   1.8 GB
        slogattr 10.3 GB
        syslog   38.3 GB
    Index space:
        log       4.3 GB
        logattr   4.3 GB
        slogattr 26.9 GB
        syslog    4.2 GB
    As far as usage goes, we have around 20K users, we do a nightly recon against AD, and we have 3 daily ActiveSync processes for 3 other attribute sources. So there is a lot of potential for heavy-duty logging to occur.
    We need to do something before we run out of disk space.
    Is the level of logging tunable somehow?
    If we lh export "default" and "users", then wipe out the repo and reload the init, default, and users, what will we have lost besides a history of attribute updates?

    Hi,
    I just fired up my old 7.1 environment to have a look at the syslog and slogattr tables. They looked safe to delete, as I could not find any "magic" rows in there. So I shut down my app server and issued
    TRUNCATE TABLE syslog
    TRUNCATE TABLE slogattr
    from my SQL tool. After restarting the app server everything is still working nicely.
    The syslog and slogattr tables store technical information about errors, such as being unable to connect to resource A, or Active Sync against C not being configured properly. They do not store provisioning errors; those go straight to the log/logattr tables. So from my point of view it is OK to clean out syslog and slogattr once in a while.
    But there is one thing which I think is not OK: having so many errors in the first place. Before you truncate your syslog you should run a syslog report to identify some of the problems in the environment.
    Once they are identified and fixed, you shouldn't have many new entries in your syslog per day. There will always be a few, network hiccups and the like, but not as many as you seem to have today.
    Regards,
    Patrick
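    To confirm the numbers before and after the cleanup, a hedged sketch using the standard sp_spaceused procedure (sqlcmd shown here; the server and database names are placeholders, and any query tool works just as well):
    Code:
    sqlcmd -S <server> -d <waveset_db> -E -Q "EXEC sp_spaceused 'syslog'; EXEC sp_spaceused 'slogattr';"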

  • Locking Table Size?

    Hi M.M Team,
    I noticed that my site can have uneven table cell sizes when viewed in different browsers. IE is good, but Firefox isn't. Is there a way to lock the table sizes, please, so that this doesn't happen?
    Thanks
    Ray

    You gotta stop using the Property inspector to set the font, the color, or the size. It creates 'spew' in your stylesheets.
    I believe the problem you are having can be simplified if you consider this example:
    Put a 2-row by 2-column table on the page. Merge the two right-hand cells into a single column. Put an image into each left cell, and you will see that they merge vertically seamlessly. Now begin to add content to the merged cell on the right and you will see that at some point you will have forced the two left cells to begin to split apart vertically. The more content you add to the right, the further apart the two left cells will get. See what I mean?
    To solve the problem, instead of making your page sit in a single table, have it in several nested tables. You should be able to put a two-column table on the page with a nested table in each column. Put your navigation in the left nested table and the content in the right nested table. Now, changes to either inner table's structure will not affect the other nested table.
    Murray --- ICQ 71997575
    Adobe Community Expert
    "Ray Dar" <[email protected]> wrote in
    message
    news:ej4nqg$8bl$[email protected]..
    > Hi Murray,
    >
    > On this link
    >
    >
    http://www.myastrospace.com/newscientist.php
    >
    > The NASA TV cell is smaller in FireFox and larger in IE.
    >
    > Not sure why it does it.
    >
    > Thanks.
    >
    > Ray

  • Lock table size change in instance profile RZ10

    I need your help. I changed the lock table size from 10000 to 17000 and then to 20000, but I still have the same table size as before. I used RZ10 to change the parameter enque/table_size.
    The steps I followed are as described in all the documents I could find:
    1. Change the parameter value.
    2. Save it (parameter and instance).
    3. Activate it.
    4. Restart the instance (I just left it for the offline backup to do this).
    Regarding the 4th step, is that enough? After the system came back I checked the parameter in RZ11 and the current value of the parameter is still 10000 (owner entries and granule entries still 12557, as before).
    Am I missing something?
    Thanks
    epeli

    Hi,
    it could be that the offline backup did indeed not restart the instance. From Oracle I know that there is a so-called "reconnect status", in which the SAP instance tries for a defined period of time to log on to the database again after the work processes have lost their connection to the database processes. In this time frame the instance is not to be considered restarted.
    If you check ST02 you can see the point in time when the instance was last really restarted. If this date is before your offline backup, you need to do the restart manually (you can also cross-check the profile file on disk, as sketched below).
    Best regards, Alexander
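    A quick OS-level cross-check (a hedged sketch; the path is a placeholder for your central instance profile): make sure RZ10 really wrote the new value into the profile file before the next restart.
    Code:
    grep enque/table_size /usr/sap/<SID>/SYS/profile/<SID>_DVEBMGS<nn>_<host>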

  • Query Locks Table in SQL Server 2000!!

    Hi all!
    I am facing a strange problem. I am using MS SQL Server 2000.
    I have a JDBC program. It executes a query on a table (STUDENT) and fetches some records from it.
    The query executes fine the first time, but when the SELECT query is executed on the same table (STUDENT) from another block of code within the same program, the program hangs at the point where the result set is moved to the first record, i.e. RS.next().
    When I try to execute the SELECT query on the same table (STUDENT) from a query tool while the program is running, it also hangs.
    It seems that after running the query on the STUDENT table for the first time, it hangs from the next time onwards. I believe the table gets locked for some reason.
    Is that normal with SQL Server 2000? The same code works fine with other databases.
    Please suggest what has to be done to get this fixed.
    Thanks a lot in advance.
    Arun

    By default (transaction isolation level TRANSACTION_READ_COMMITTED), SQL Server applies a shared read lock when you do a SELECT. This should not prevent other selects on the same row/page, but it will prevent updates/deletes (a sketch for inspecting the current locks follows below).
    I found a link that explains SQL Server locking: http://databasejournal.com/features/mssql/article.php/3289661
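    To see which session is actually holding locks on STUDENT while the program hangs, a hedged sketch using the classic sp_who2 and sp_lock procedures of SQL Server 2000 (osql shown here; the server name is a placeholder):
    Code:
    osql -S <server> -E -Q "EXEC sp_who2; EXEC sp_lock;"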

  • Lock table Overflow as the file size is 50 MB BW side.

    Hello Everyone,
    I have an XML IDoc file as input, which is usually more than 50 MB in size.
    Usually I am getting a lock table overflow on the receiver BW side. This error points to Inbound_Asynchronous_Idoc.
    I have tried dividing the input XML IDoc file into smaller groups by handling them in the chunk mode of the sender communication channel.
    However, since it is tRFC, even if it gets processed in PI, it fails if there is a lock table overflow error on the outbound side.
    I have tried to process the 50 MB file in parts by processing 5 MB at a time, but does this mean that BW also processes the data in parts, or does it get the entire 50 MB to process in one stretch?
    Since the input is IDoc XML, I was not able to make use of Recordset per Message, so I am making use of chunk mode.
    Am I doing this correctly?
    Regards,
    Ravi

    Hi Ravikanth,
    If I make use of the logic mentioned in the link that you provided, do I then have to remove the chunk mode from the communication channel?
    Secondly, mine is an SLSFCT IDoc XSD that I am using here as both source and target.
    The hierarchy becomes like this after implementing the logic mentioned in the link:
    Messages
    Message1
    Z1ZBSD_SLSFCT01
    IDOC
    BEGIN
    EDI_DC40
    For Messages and Message1 there is no mapping on the target side.
    For Z1ZBSD_SLSFCT01 it is 1..1 in the source and 0..unbounded in the target.
    IDOC is mapped to a constant, and BEGIN to a constant with value 1.
    EDIDC in source and target are mapped to each other with an occurrence of 1..1.
    Is there something wrong that I am doing? Because after this the files are still not getting divided.
    Regards

  • Restrictions in Oracle Server (table size, record count ...)

    Hello,
    can somebody tell me if there are any restrictions on table size, record count, or file size (apart from operating system restrictions) in Oracle 8.1.7 and 7.3.4?
    Or where can I find this information? I couldn't find anything in the generic documentation.
    Thank you in advance,
    Hubert Gilch
    SEP Logistik AG
    Ziegelstraße 2, D-83629 Weyarn
    Tel. +49 8020 905-214, Fax +49 8020 905-100
    EMail: [email protected]

    Hello,
    if you are executing a DBMS_AQ.DEQUEUE and then perform a rollback in your code, the counter RETRY_COUNT will not go up by 1.
    You are only reversing your own AQ action. This counter will be used only internally to log unsuccessful dequeue actions.
    Kind regards,
    WoG

  • MM42 change material, split valuation at batch level, M301, locking table

    Dear All,
    I'm working on ECC 6.0 Retail and I have activated split valuation at batch level. Now in MBEW I have almost 14,400 entries for this specific material.
    If I try to change some material data (MM42), I receive the error message M3021 "A system error has occurred while locking" and then "Lock table overflow".
    I used SM12 to look at the lock table (while MM42 is still running) and it seems that MBEW is the problem.
    What should I do? Does the system have to modify every entry in MBEW for any material modification? Is there any possibility to skip this?
    Thank you.

    Hi,
    Symptom
    Key word: Enqueue
    FM: A system error has occurred in the block handler
    Message in the syslog: lock table overflowed
    Other terms
    M3021 MM02 F5 288 F5288 FBRA
    Reason and Prerequisites
    The lock table has overflowed.
    Cause 1: Dimensions of the lock table are too small
    Cause 2: The update lags far behind or has shut down completely, so that the lock entries of the update requests that are not yet updated cause the lock table to overflow.
    Cause 3: Poor design of the application programs. A lock is issued for each object in an application program, for example a collective run with many objects.
    Solution
    Determine the cause:
    SM12 -> Goto -> Diagnosis (old)
    SM12 -> Extras -> Diagnosis (new)
    checks the effectiveness of the lock management
    SM12 -> Goto -> Diagnosis in update (old)
    SM12 -> Extras -> Diagnosis in update (new)
    checks the effectiveness of the lock management in conjunction with updates
    SM12 -> OkCode TEST -> Error handling -> Statistics (old, only in the enqueue server)
    SM12 -> Extras -> Statistics (new)
    shows the statistics of the lock management, including the previous maximum fill levels (peak usage) of the partial tables in the lock table
    If the owner table overflows, cause 2 generally applies.
    In the alert monitor (RZ20), an overrunning of the (customizable) high-water marks is detected and displayed as an alert reason.
    The size of the lock table can be set with the profile parameter "enque/table_size =", which specifies the size of the lock table in kilobytes. The setting must be made in the profile of the enqueue server (..._DVEBM..). The change only takes effect after the restart of the enqueue server.
    The default size is 500 KB in the Rel 3.1x implementation of the enqueue table. The resulting sizes for the individual tables are:
    Owner table: approx 560.
    Name table: approx 560.
    Entry table: approx 2240.
    As of Rel 4.xx the new implementation of the lock table takes effect.
    It can also be activated as described in note 75144 for the 3.1I kernel. The default size is 2000 KB. The resulting sizes for the individual tables are:
    Owner table: approx 5400
    Name table: approx 5400
    Entry table: approx 5400
    Example: with the profile parameter "enque/table_size = 32000", the size of the enqueue table is set to 32000 KB. The tables can then have approx. 40,000 entries.
    Note that the above sizes and numbers depend on various factors such as the kernel release, patch number, platform, address length (32/64-bit), and character width (Ascii/Unicode). Use the statistics display in SM12 to check the actual capacity of the lock table.
    If cause 2 applies, an enlargement of the lock table only delays the overflow of the lock table, but it cannot generally be avoided.
    In this case you need to eliminate the update shutdown or accelerate the throughput of the update program using more update processes. Using CCMS (operation modes, see training BC120) the category of work processes can be switched at runtime, for example an interactive work process can be converted temporarily into an update process, to temporarily increase the throughput of the update.
    For cause 3, you should consider a tuning of the task function. Instead of issuing a large number of individual locks, it may be better to use generic locks (wildcard) to block a complete subarea. This will also allow you to considerably improve the performance.

  • Error locking table TBTCO

    Hi,
    We are having a locking issue, "Error locking table TBTCO". In our SM12 statistics, the Maximum Number of Lock Owners is 3603.
    Even after increasing enque/table_size, the number of lock owners is not affected. Our environment has an enqueue replication server.
    The profile parameter enque/serverinst = $(SCSID) (00).
    The enqueue replication server instance number is 10.
    What correction needs to be done?
    Regards,
    RMav

    You have to check the reason for the lock error.
    Get the work process number that logged the error in SM21, then check the trace file of this work process. At the same timestamp as the error in SM21 you will find the root cause in the work process trace.
    Usually this is caused by an overflow of the enqueue table, but not always. For this reason you have to check the work process trace for the real root cause.
    To check whether the enqueue table is suffering an overflow, go to SM12 -> Extras -> Statistics.
    If in the statistics you find that the "Maximum Fill Level" has reached the maximum number of locks for these items, for example:
    Maximum Number of Lock Owners 39481
    Maximum Fill level 458                               
    Current Fill Level -25
    Maximum Number of Lock Arguments 39481  <<<
    Maximum Fill level 39480                                 <<<
    Current Fill Level 15
    Maximum Number of Lock Entries 39481    <<<
    Maximum Fill level 39480                             <<<
    this means that the enqueue table was really suffering from an overflow situation.
    Just for your information, TBTCO is the table for the status of background jobs. So every job needs to write its status to this table, and it also locks this table first.
    Clébio
