Lock database

Hi all,
I have a client/server architecture in my project: the Java application is the client and the DBMS is on the server. So it's possible that n clients are connected to the database, each with its own connection. The tables in the database use only auto-incremented primary keys; suppose they are all named "id". When I insert a row, I need to know the id of that row right after the insertion. But I need to somehow lock the database to guarantee that my insert statement is immediately followed by my query for the current value of the id sequence (the value representing the id of the inserted row). As I understand things, another client could insert a row between my insertion and my retrieval of the id, and if that understanding is correct, I would retrieve the wrong id.
In JDBC, I can set auto-commit to false, execute the insert and the id query with a Statement object, and then commit, but I don't know whether this locks the database.
Can anyone help me find a way to guarantee that two SQL statements are executed in sequence in the database using JDBC?
I am using PostgreSQL.

The sequence value is session-local, not shared across connections. From the Postgres docs on currval: http://www.postgresql.org/docs/8.2/interactive/functions-sequence.html
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Notice that because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
So, you should be fine. It's good practice, as alluded to, to perform these operations all in the same transaction.
- Saish
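A minimal JDBC sketch of the whole pattern, assuming a hypothetical table `users` with an auto-incremented `id` column; rather than querying the sequence in a second statement, it asks the driver for the generated key, which is inherently safe against concurrent inserts on other connections:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class InsertAndGetId {
    // Hypothetical table "users" with an auto-incremented primary key "id".
    static final String INSERT_SQL = "INSERT INTO users (name) VALUES (?)";

    // Inserts a row and returns the id generated for THIS statement only;
    // other clients inserting concurrently cannot affect the value returned.
    static long insertAndGetId(Connection conn, String name) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement(INSERT_SQL, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, name);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong("id"); // PostgreSQL returns the inserted row's columns
            }
        }
    }
}
```

With PostgreSQL you could equally use `INSERT ... RETURNING id`, or `SELECT currval('seq')` in the same session as the insert; all three avoid any need to lock the database.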

Similar Messages

  • Digikam - Detected locked database file

    Hello!
    I have problem with Digikam. Konsole reports this:
    digikam(1153)/digikam (core) Digikam::DatabaseCoreBackendPrivate::checkRetrySQLiteLockError: Detected locked database file. There is an active transaction. Waited but giving up now.
Digikam opens normally and I can see my albums, but when I click on one of them, nothing happens. The path to my pictures is /data/Pictures [on my second hdd]. I tried everything I could think of, but no success.
    My system:
    - KDE 5
    - Kernel  4.0.4-2-ARCH x86_64
    - digikam 4.9.0-2
    - kipi-plugins 4.9.0-2
    - fully updated
    My digikamrc:
    [Database Settings]
    Database Connectoptions=
    Database Hostname=
    Database Name=/data/Pictures/
    Database Name Thumbnails=/data/Pictures/
    Database Password=
    Database Port=-1
    Database Type=QSQLITE
    Database Username=
    Internal Database Server=false
    What is locking the database? Any thoughts?

    Hi all
    Found it.
    It was a MenuCalendarClock upgrade and nothing to do with Leopard per se.
    Kind Regards
    Eric

  • DB Adapter Locking Database Rows in Distributed Delete Polling Strategy

I am stuck with an issue. To explain it in simple steps:
I am creating a database polling adapter with the Distributed Delete polling strategy in OSB, to run in a clustered environment.
We are using custom SQL so that records are not deleted after they are fetched; only a Status column is updated.
The polling query and delete SQL are as follows:
    <query name="ReqJCAAdapterSelect" xsi:type="read-all-query">
    <criteria operator="equal" xsi:type="relation-expression">
    <left name="Status" xsi:type="query-key-expression">
    <base xsi:type="base-expression"/>
    </left>
    <right xsi:type="constant-expression">
    <value>READY</value>
    </right>
    </criteria>
    <reference-class>ReqJCAAdapter.ItemTbl</reference-class>
    <refresh>true</refresh>
    <remote-refresh>true</remote-refresh>
    <lock-mode>lock-no-wait</lock-mode>
    <container xsi:type="list-container-policy">
    <collection-type>java.util.Vector</collection-type>
    </container>
    </query>
    <delete-query>
    <call xsi:type="sql-call">
    <sql>update ITEM_TBL
    set STATUS = 'IN_PROCESS'
    where ID = #ID</sql>
    </call>
    </delete-query>
If any error occurs, the Service Error handler updates the Status to ERROR.
The problem I am facing: in the request pipeline, if we want to do any update on the same record, we find that the row is locked and the update is not allowed, so the process cannot proceed.
Likewise, if an error occurs in the request pipeline, the Service Error handler is supposed to update the status to ERROR, but the same thing happens and the process cannot proceed.
In the response pipeline, however, we can successfully update the status of the same record.
We have tried both XA and non-XA datasources, but no luck.
    Any help in this is appreciated.
    Regards,
    Dilip


  • Locking database fields using Servlet

    Hi,
I am developing a servlet application that accesses and/or modifies a database on a server. I am now asked to lock fields while a user is accessing the information, so that other users who have already loaded the same page cannot change the fields the first user is currently viewing.
The problem is that my server, running the servlet, is the only one accessing the DB (via JDBC), and the connection is always made with the same username/password. The connection is held only for a short time, i.e. the time needed to build the HTML page.
Is there a way for the servlet to know who is connected to it? With that, I could ask each user for a username and password and then lock tables in the DB on their behalf.
Am I being clear enough? I don't feel so... ;)
    Thanks in advance for any hints!
    Snoozer

    Did you try to use:
    request.getRemoteUser()
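request.getRemoteUser() only works when the container itself authenticates users; otherwise the servlet has to do its own bookkeeping, typically keyed by the HTTP session. A minimal, container-independent sketch of such per-user edit locks (all names here are illustrative, not part of the Servlet API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks which user is currently editing which record, so the servlet can
// refuse a second editor instead of holding DB table locks across requests.
class EditLockRegistry {
    private final Map<String, String> lockOwnerByRecordId = new ConcurrentHashMap<>();

    // Returns true if 'user' now holds the edit lock on 'recordId'
    // (acquiring it, or re-entering a lock they already own).
    boolean tryLock(String recordId, String user) {
        return lockOwnerByRecordId.putIfAbsent(recordId, user) == null
                || user.equals(lockOwnerByRecordId.get(recordId));
    }

    // Releases the lock only if 'user' actually owns it.
    void unlock(String recordId, String user) {
        lockOwnerByRecordId.remove(recordId, user);
    }

    String ownerOf(String recordId) {
        return lockOwnerByRecordId.get(recordId);
    }
}
```

In a real servlet you would call tryLock/unlock from doGet/doPost using the session's login name, and expire stale locks (e.g. on session timeout) so an abandoned browser tab does not pin a record forever.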

  • Container Managed Transaction locking Database- RollBack

    Hi,
    I am trying to call a Stateless Session Bean (B) in domain D2 from another stateless session bean (A) in domain 1 within a transaction.
    The <trans-attribute> in both ejb-jar.xmls is "Required".
Bean B accesses the database and the method successfully inserts the record (not committed), but the database is getting locked, because of which the transaction rolls back.
    Any pointers in this regard is greatly appreciated.
    Regards,
    Harsha

    Hi,
I suspect there is a problem with your JMS provider, and that the message would not be enqueued if the connection is not closed before the exception is raised. If that is the case, I recommend reporting the problem to your JMS provider.
    Arnaud

  • Commit after locking database ?????

    Hi Experts.,
Is it necessary to commit work even though I am holding a lock on the database?
Reasons and an example needed.
    <REMOVED BY MODERATOR>
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 2:19 PM

    Hi,
    The database system uses locks to ensure that two or more users cannot change the same data simultaneously, since this could lead to inconsistent data being written to the database. A database lock can only be active for the duration of a database LUW. They are automatically released when the database LUW ends. In order to program SAP LUWs, we need
    a lock mechanism within the R/3 System that allows us to create locks with a longer lifetime.
    The Open SQL statements INSERT, UPDATE, MODIFY, and DELETE allow you to program database changes.
    The SAP lock concept is based on lock objects. Lock objects allow you to set an SAP lock for an entire application object. An application object consists of one or more entries in a database table, or entries from more than one database table that are linked using foreign key
    relationships.
    Before you can set an SAP lock in an ABAP program, you must first create a lock object in the ABAP Dictionary. A lock object definition contains the database tables and their key fields on the basis of which you want to set a lock. When you create a lock object, the system automatically generates two function modules with the names ENQUEUE_<lock object name> and DEQUEUE_<lock object name>. You can then set and release SAP locks in your ABAP program by calling these function modules in a CALL FUNCTION statement.
    There are two types of lock in the R/3 System:
      Shared lock
    Shared locks (or read locks) allow you to prevent data from being changed while you are reading it. They prevent other programs from setting an exclusive lock (write lock) to
    change the object. It does not, however, prevent other programs from setting further read locks.
      Exclusive lock
    Exclusive locks (or write locks) allow you to prevent data from being changed while you are changing it yourself. An exclusive lock, as its name suggests, locks an application
    object for exclusive use by the program that sets it. No other program can then set either a shared lock or an exclusive lock for the same application object.
    Example :
    The user requests a given flight i.e (on screen 100) display or update it (on screen 200). If the user chooses Change, the table entry is locked; if he or she chooses Display, it is not.
    The PAI processing for screen 100 in this transaction processes the user input and prepares for the requested action (Change or Display). If the user chooses Change, the program locks the relevant database object by calling the corresponding ENQUEUE function.
MODULE USER_COMMAND_0100 INPUT.
  CASE OK_CODE.
    WHEN 'SHOW'....
    WHEN 'CHNG'.
      <...Authority-check and other code...>
      CALL FUNCTION 'ENQUEUE_ESFLIGHT'
        EXPORTING
          MANDT  = SY-MANDT
          CARRID = SPFLI-CARRID
          CONNID = SPFLI-CONNID
        EXCEPTIONS
          FOREIGN_LOCK   = 1
          SYSTEM_FAILURE = 2
          OTHERS         = 3.
      IF SY-SUBRC NE 0.
        MESSAGE ID SY-MSGID
                TYPE 'E'
                NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
  ENDCASE.
ENDMODULE.
    At the end of a transaction, the locks are released automatically. However, there are exceptions if you have called update routines within the transaction. You can release a lock explicitly by calling the corresponding DEQUEUE module. As the programmer, you must decide for yourself the point at which it makes most sense to release the locks (for example, to make the data available to other transactions).
    The subroutine UNLOCK_FLIGHT calls the DEQUEUE function module for the lock object ESFLIGHT:
FORM UNLOCK_FLIGHT.
  CALL FUNCTION 'DEQUEUE_ESFLIGHT'
    EXPORTING
      MANDT  = SY-MANDT
      CARRID = SPFLI-CARRID
      CONNID = SPFLI-CONNID
    EXCEPTIONS
      OTHERS = 1.
  SET SCREEN 100.
ENDFORM.
    You might use this for the BACK and EXIT functions in a PAI module for screen 200 in this example transaction. In the program, the system checks whether the user leaves the screen without having saved his or her changes. If so, the PROMPT_AND_SAVE routine sends a reminder, and gives the user the opportunity to save the changes. The flight can be unlocked by calling the UNLOCK_FLIGHT subroutine.
MODULE USER_COMMAND_0200 INPUT.
  CASE OK_CODE.
    WHEN 'SAVE'....
    WHEN 'EXIT'.
      CLEAR OK_CODE.
      IF OLD_SPFLI NE SPFLI.
        PERFORM PROMPT_AND_SAVE.
      ENDIF.
      PERFORM UNLOCK_FLIGHT.
      LEAVE TO SCREEN 0.
    WHEN 'BACK'....
    I hope with this example you get a clear picture of lock concept.
    <REMOVED BY MODERATOR>
    Thanks.
    Dhanashri.
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 2:19 PM

  • ODBC Server lock database and site goes in timeout

    My configuration:
    Coldfusion Enterprise 9.0.1 32 bit on Windows Server 2008 Web Edition 64 bit, 4Gb Ram, 2 Xeon processors
The sites using a Microsoft Access database randomly go into timeout. This is because the "ODBC Server" locks the database (an .ldb file appears in the db folder). If I restart the "ODBC Server" service, the database is released and the site works again. Do you have any idea how to resolve this issue?

    The log file is:
    02/06 09:08:15 Error [jrpp-1210] - File not found: /csv/_foto_pup.cfm The specific sequence of files included or processed is: C:\HostingSpaces\andrea.guarda\associazioni.csv.vda.it\wwwroot\csv\_foto_pup.cfm''
    02/06 09:15:42 Error [jrpp-1210] - File not found: /csv/index.cfm The specific sequence of files included or processed is: C:\HostingSpaces\andrea.guarda\associazioni.csv.vda.it\wwwroot\csv\index.cfm''
    02/06 09:16:30 Information [jrpp-1207] - Generating rss FEED
    02/06 09:16:30 Information [jrpp-1207] - FEED generation completed {Time taken=16 ms}
    Exception thrown by error-handling template:
    coldfusion.server.ServiceFactory$ServiceNotAvailableException: The Metrics service is not available.
    at coldfusion.server.ServiceFactory.getMetricsService(ServiceFactory.java:159)
    at coldfusion.filter.ExceptionFilter.handleException(ExceptionFilter.java:139)
    at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:84)
at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
    at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
    at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
    at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
    at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
    at coldfusion.filter.CachingFilter.invoke(CachingFilter.java:62)
    at coldfusion.CfmServlet.service(CfmServlet.java:200)
    at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
    at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
    at jrun.servlet.FilterChain.service(FilterChain.java:101)
    at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
    at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
    at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
    at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
    at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
    at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
    at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
    at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
    at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
    06/02 09:20:15 error ROOT CAUSE:
    coldfusion.runtime.RequestTimedOutException: The request has exceeded the allowable time limit Tag: CFQUERY
    at coldfusion.runtime.CfJspPage.checkRequestTimeout(CfJspPage.java:2907)
    at coldfusion.tagext.sql.QueryTag.setupCachedQuery(QueryTag.java:799)
    at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:586)
at cfindex2ecfm1283733899._factor3(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:86)
at cfindex2ecfm1283733899._factor6(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
at cfindex2ecfm1283733899._factor7(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
at cfindex2ecfm1283733899.runPage(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
    at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:231)
    at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:416)
    at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
    at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:381)
    at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
    at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
    at coldfusion.filter.PathFilter.invoke(PathFilter.java:94)
    at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
    at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
    at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
    at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
    at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
    at coldfusion.filter.CachingFilter.invoke(CachingFilter.java:62)
    at coldfusion.CfmServlet.service(CfmServlet.java:200)
    at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
    at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
    at jrun.servlet.FilterChain.service(FilterChain.java:101)
    at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
    at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
    at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
    at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
    at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
    at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
    at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
    at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
    at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
    javax.servlet.ServletException: ROOT CAUSE:
    coldfusion.runtime.RequestTimedOutException: The request has exceeded the allowable time limit Tag: CFQUERY
    at coldfusion.runtime.CfJspPage.checkRequestTimeout(CfJspPage.java:2907)
    at coldfusion.tagext.sql.QueryTag.setupCachedQuery(QueryTag.java:799)
    at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:586)
at cfindex2ecfm1283733899._factor3(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:86)
at cfindex2ecfm1283733899._factor6(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
at cfindex2ecfm1283733899._factor7(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
at cfindex2ecfm1283733899.runPage(C:\HostingSpaces\andrea.guarda\rusticart.it\wwwroot\arredamento\index.cfm:1)
    at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:231)
    at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:416)
    at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
    at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:381)
    at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
    at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
    at coldfusion.filter.PathFilter.invoke(PathFilter.java:94)
> How do you temporarily overcome the problem for now, restart CF ODBC services?
Yes, I have the server monitor enabled with timeout alerts configured; when I receive a mail, I restart the ODBC Server and everything works fine for 4-5 days.
> When CF has stopped talking to the ACCESS datasource, does the ODBCAD32 panel still verify OK?
Yes, odbcad32 works fine. The Microsoft Access driver version on the server is 14.00.4670 (Office 2010); could this be the problem?

  • ACS Locked Database

    Folks - our Windows server disk filled up.  Cisco ACS 4.2 threw a fatal error and locked the database.
    I've cleared up space on disk and reloaded server.  ACS processes start, but sockets do not.  It appears that database is still locked, and CSMON is shutting down the sockets.
    Any ideas on how to unlock database and bring up GUI?  Following are logs.
    CSMon 11/28/2010 17:48:57 E 0092 0592 0x0 ODBC Operation faild with the following information: Message=[Sybase][ODBC Driver][Adaptive Server Anywhere]Disk full 'Fatal error: disk full C:\Program Files\CiscoSecure ACS v4.2\CSDB\ACS.log' -- transaction rolled back, SqlState=S1000, NativeError=-304
    CSMon 11/28/2010 17:48:58 A 0191 0592 0x0 SL:getValue (dword)- execution failed
    CSMon 11/28/2010 17:49:08 E 0092 0208 0x0 ODBC Operation faild with the following information: Message=[Sybase][ODBC Driver][Adaptive Server Anywhere]Connection was terminated, SqlState=S1000, NativeError=-308
    CSMon 11/28/2010 17:49:08 A 0932 0208 0x0 SL:isKeyExist - execution failed
    CSMon 11/29/2010 11:56:19 E 0092 1392 0x0 ODBC Operation faild with the following information: Message=[Sybase][ODBC Driver][Adaptive Server Anywhere]Connection was terminated, SqlState=S1000, NativeError=-308
    CSMon 11/29/2010 11:56:19 A 0147 1392 0x0 SL:setValue (string) - execution failed
    CSMon 11/29/2010 11:56:19 E 0092 1392 0x0 ODBC Operation faild with the following information: Message=[Sybase][ODBC Driver][Adaptive Server Anywhere]Connection was terminated, SqlState=S1000, NativeError=-308
    CSMon 11/29/2010 11:56:19 A 0147 1392 0x0 SL:setValue (string) - execution failed
    CSMon 11/29/2010 11:56:19 A 0429 1392 0x0 CSMon Shutdown: Stopping Notifications.
    CSMon 11/29/2010 11:56:19 A 0501 1392 0x0 CSMon Shutdown, stopping Problem Monitoring.
    CSMon 11/29/2010 11:56:19 A 0448 1392 0x0 CSMon Shutdown: Stopping Worker Threads.
    CSMon 11/29/2010 11:56:19 A 0451 1392 0x0 CSMon Shutdown: Closing connection to CSAuth API.
    CSMon 11/29/2010 11:56:21 A 0454 1392 0x0 CSMon Shutdown: Closing Mail and NTLog stuff.
    CSMon 11/29/2010 11:56:21 A 0457 1392 0x0 CSMon Shutdown: Cleaning up Synchronisations stuff.
    CSMon 11/29/2010 11:56:21 A 0280 1392 0x0 CSMon Shutdown: Closing Winsock.
    CSMon 11/29/2010 11:56:23 A 0288 1392 0x0 ******************************************
    CSMon 11/29/2010 11:56:23 A 0289 1392 0x0 *           Shutdown Complete            *
    CSMon 11/29/2010 11:56:23 A 0290 1392 0x0 ******************************************

    I ran into a similar issue.  In my case, the services would not start after resolving the space issue and rebooting.
    I was able to resolve the issue by renaming the acs.log (transaction log) file in the ...\CiscoSecure ACS v4.2\CSDB\ folder. 
    From there, I was able to restart the CSAuth service and all subsequent services.
    The offending acs.log file was marked Read-Only and had a length of 0 bytes.  Once started, the DB created a new file with a file size of 192KB.

  • Lock(Enque/Deque) is a logical lock & database lock

    Hi All,
I have gone through various portals, and SDN too, and found that these locks are logical locks and sometimes database locks as well, but I still have some confusion:
1) If these are logical locks, why are they created in SE11? Once created, function modules are generated, and all FMs are stored somewhere, right? So they would be in the database.
2) If these are database locks, why do they live at the application server level? And a few more things.
Can anybody explain, with valid reasons, which queue they fall into and why?
    Thanks.
    An

    Hi Anurag,
The SAP locks are created with the naming standard EZ. They are not connected to the DB. It works like this:
1. You create a lock object; an enqueue and a dequeue FM are generated automatically (for example, CALL FUNCTION 'DEQUEUE_EZARS_TRHDR_T').
2. In your program you add CALL FUNCTION <name of your new lock enqueue FM>.
3. If the object is not locked, you can modify it.
4. Unlock it with the second (dequeue) FM.
Information about these locks is stored in a queue on a specific application server (usually the central instance); in SM51 you can see it with the description "Enqueue".
So this is a kind of mechanism that works only if everybody plays by the rules. If someone writes their own report that accesses your objects but omits this FM call (or ignores its result), they are free to change the objects as they like, even if someone else has already locked them.
To use DB locks, other techniques are used, such as SELECT SINGLE * FOR UPDATE.

  • Locking database records

I want to lock database table records while I am changing them in one transaction...

Hi,
You can create a lock object in SAP through transaction SE11; give it a meaningful name starting with EZ, for example EZTEST_LOCK.
This will auto-generate two function modules, ENQUEUE_<lock object> and DEQUEUE_<lock object>.
Example: in HR, when we enter a personnel number in the master data maintenance screen, SAP does not allow any other user to use the same personnel number for changes.
Technically:
When you create a lock object, the system automatically creates two function modules:
1. ENQUEUE_<lock object name>, to insert the object into a queue.
2. DEQUEUE_<lock object name>, to remove the object queued by the above FM.
You have to use these function modules in your program.
Eg:
TABLES: vbak.

CALL FUNCTION 'ENQUEUE_EZLOCK3'
  EXPORTING
    mode_vbak = 'E'
    mandt     = sy-mandt
    vbeln     = vbak-vbeln
    X_VBELN   = ' '
    _SCOPE    = '2'
    _WAIT     = ' '
    _COLLECT  = ' '
  EXCEPTIONS
    FOREIGN_LOCK   = 1
    SYSTEM_FAILURE = 2
    OTHERS         = 3.
IF sy-subrc <> 0.
  MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
TO LOCK
Execute the CALL FUNCTION statement:
CALL FUNCTION 'ENQUEUE_<lock object>'
  EXPORTING ...
  EXCEPTIONS ...
CASE SY-SUBRC.
ENDCASE.
TO UNLOCK
Execute the CALL FUNCTION statement:
CALL FUNCTION 'DEQUEUE_<lock object>'
  EXPORTING ...
It is important to unlock the entry so others can update it.
    Regards,
    Padmam.

  • Lock/isolation with secondary databases

    Hi all,
Some more questions from me - sorry for the never-ending stream, but I'm hoping the solutions will at least help others who hit the same or similar issues.
My environment is a reasonable size (~21M rows, ~10G) and is made up of a primary DB enforcing uniqueness on keys and a secondary providing an alternate sort order for traversal.
All writes occur in a single thread on the primary DB (naturally), whilst essentially all reads occur on the secondary, in multiple threads.
    I noticed that I very quickly got lock timeout errors looking something like:
    04:45:30.349 [Grizzly(14)] ERROR c.a.w.web.resources.SearchResource - (JE 5.0.55) Lock expired. Locker 902555936 -1_Grizzly(14)_ThreadLocker: waited for lock on database=calendar_routecost LockAddr:2118856719 LSN=0xaf8/0x95e879 type=READ grant=WAIT_NEW timeoutMillis=500 startTime=1342586729848 endTime=1342586730348
    Owners: [<LockInfo locker="2925905 3536_Loader_Txn" type="WRITE"/>]
    Waiters: []
    com.sleepycat.je.LockTimeoutException: (JE 5.0.55) Lock expired. Locker 902555936 -1_Grizzly(14)_ThreadLocker: waited for lock on database=calendar_routecost LockAddr:2118856719 LSN=0xaf8/0x95e879 type=READ grant=WAIT_NEW timeoutMillis=500 startTime=1342586729848 endTime=1342586730348
    Owners: [<LockInfo locker="2925905 3536_Loader_Txn" type="WRITE"/>]
    Waiters: []
    I've switched all my reads to READ_UNCOMMITTED in an effort to keep the time that records are locked to an absolute minimum, however, in the case above it's obviously WRITE locks in the Loader thread that's causing the lock timeout (the locked database mentioned is the secondary in the example).
    From my reading of the documentation, it appears that all records modified within a transaction retain exclusive locks during the entire lifetime of the transaction - is this correct? If so, doesn't this somewhat defeat the purpose of transactional isolation if a transaction takes an exclusive lock on records whilst modifying them?
    In my case, transactions were large and fairly long-lived (ie: up to 4 seconds) which was well in excess of read timeouts. I found if I drastically reduced my transaction size, I was able to keep transaction times <500ms, which means the lock timeout doesn't occur, however, it seems like a very fragile solution.
    Additionally, I am accessing the secondary DB using a StoredSortedMap, rather than directly through cursors, which from my understanding is transaction-aware providing the DB is configured as being transactional (which mine is).
    So, my questions are:
    - Is this the right approach to avoiding lock timeouts?
    - Is there a way to give readers lock preference to writers? (I'd prefer writes block than reads)
    - Is it better to make transactions tiny and frequent (ie: thousands per second) to avoid this?
    - Is there an advantage to switching off the Collections interface to the DB?

    > I've switched all my reads to READ_UNCOMMITTED in an effort to keep the time that records are locked to an absolute minimum, however, in the case above it's obviously WRITE locks in the Loader thread that's causing the lock timeout (the locked database mentioned is the secondary in the example).
    Using read-uncommitted is a good way to reduce contention, if you can really live with the fact that the data you read is not committed and its transaction may be aborted (undone).
    > From my reading of the documentation, it appears that all records modified within a transaction retain exclusive locks during the entire lifetime of the transaction - is this correct? If so, doesn't this somewhat defeat the purpose of transactional isolation if a transaction takes an exclusive lock on records whilst modifying them?
    On the contrary, locking is what isolates the transactions. Holding the write lock until the end of the txn is textbook two-phase locking, although some databases use other techniques. Have you read the "Writing Transactional Applications" guide that's in our docs? It explains locking and how to write your application to deal with lock exceptions.
    > In my case, transactions were large and fairly long-lived (ie: up to 4 seconds) which was well in excess of read timeouts. I found if I drastically reduced my transaction size, I was able to keep transaction times <500ms, which means the lock timeout doesn't occur, however, it seems like a very fragile solution.
    In all databases, the smaller the transaction the better. If you can reduce your txn size, the real question is why wouldn't you.
    If you have good reasons for having a long transaction, adjusting the lock timeout may be necessary. The default of 500ms is just that, a default, and there is nothing wrong with changing it.
    > Additionally, I am accessing the secondary DB using a StoredSortedMap, rather than directly through cursors, which from my understanding is transaction-aware providing the DB is configured as being transactional (which mine is).
    Yes, the collections API uses the per-thread txn (if you've set one) and uses auto-commit otherwise.
    > - Is this the right approach to avoiding lock timeouts?
    In addition to what I said above, you will probably need to do retries, as described in the guide I mentioned.
    > - Is there a way to give readers lock preference to writers? (I'd prefer writes block than reads)
    No, there isn't.
    > - Is it better to make transactions tiny and frequent (ie: thousands per second) to avoid this?
    Absolutely.
    > - Is there an advantage to switching off the Collections interface to the DB?
    In general, no.
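    The retry pattern recommended above can be sketched in plain Java. This is a simplification, not JE's actual API: `LockConflictException` below is a local stand-in for `com.sleepycat.je.LockConflictException`, and the `Callable` is a placeholder for a unit of work that begins a transaction, does its small batch of writes, and commits (aborting before letting the exception propagate).

    ```java
    import java.util.concurrent.Callable;

    public class RetryDemo {
        // Local stand-in for JE's LockConflictException (illustrative only)
        static class LockConflictException extends RuntimeException {}

        // Retry a transactional unit of work on lock conflicts, up to maxRetries times.
        static <T> T runWithRetry(Callable<T> txnWork, int maxRetries) throws Exception {
            for (int attempt = 0; ; attempt++) {
                try {
                    return txnWork.call();  // begin txn, do work, commit - all inside
                } catch (LockConflictException e) {
                    // txnWork must have aborted its txn before the exception escapes
                    if (attempt >= maxRetries) throw e;
                    // optionally back off briefly here before retrying
                }
            }
        }

        public static void main(String[] args) throws Exception {
            final int[] calls = {0};
            // Simulated work: conflicts twice, then succeeds on the third attempt
            String result = runWithRetry(() -> {
                if (++calls[0] < 3) throw new LockConflictException();
                return "committed";
            }, 5);
            System.out.println(result + " after " + calls[0] + " attempts");
            // prints: committed after 3 attempts
        }
    }
    ```

    Keeping each unit of work small (a few records) makes both the conflict window and the cost of a retry small, which is why tiny, frequent transactions win.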
    --mark

  • DBSNMP account locked alert email is not generated.

    In OEM 11g, when the DBSNMP account was locked, the database target would go down and an automated alert would be sent.
    But in OEM 12c we are not receiving any notifications when the DBSNMP account is locked, as the target is not going down. How can this alert be monitored?

    Hi,
    The DB status metric goes through a connection pool to connect to the target database. Once the connection is established, DBSNMP password changes/expiry etc. won't affect the collection of the metric, so it won't show the target as down. There isn't currently any metric that detects issues with the DBSNMP account; this is an open enhancement for us. Let me try to find a solution and get back to you on this.
    Regards,
    Ana

  • A Good Guideline book for Database systems

    Could anyone tell me what the best book is regarding database systems that covers issues like transaction processing, locks, database recovery, database design, concurrency control techniques, etc.?
    I need the book to explain the concepts and then to explain how they are implemented in Oracle databases.
    Thanks

    I will just add my generalised tuppence-worth:
    Anything written by Tom Kyte or Jonathan Lewis is a must.
    Most stuff published by O'Reilly probably deserves a look: I know their technical review process is tight and their production qualities are high.
    Anything published by Oracle Press needs to be treated with caution. The later books (last year or two) are OK, but much before then is distinctly iffy in terms of technical precision and accuracy.
    Practically anything published by Rampant Press should be on your 'desperate and dateless' list. Whilst the material some of their stuff contains is mostly factual, it is also often highly simplistic and equally often just a dry regurgitation of the official documentation with not much by way of explanation or insight offered to supplement it. Their material reads often as if it's written to a deadline rather than to a quality control directive. Grammar is often poor; technical subtleties are often missed or described plain inaccurately; production quality is oftentimes dubious.
    None of that makes them evil (deadlines are a fact of life in a commercial environment, after all), but it does mean I would turn to their stuff last and least, if at all.

  • Facing low Performance when iterating records of database using cursor

    Hi ,
    I inserted nearly 80,000,000 records into a database, by reading a file whose size is nearly 800MB, in 10 minutes.
    When I iterate over the records using a Cursor with the default lock mode, it takes nearly 1 hour.
    My Bdb details are as follows
    Environment : Non transactional , non locking
    Database : Deferred write.
    Cache : 80% of JVM ( -Xms=1000M -Xmx=1200m )
    Could you please explain why it is taking such a long time ? did i make any mistakes on settings ?
    Thanks
    nvseenu
    Edited by: nvseenu on Jan 15, 2009 5:47 AM

    Hello Gary,
    StoredMap is a convenience API wrapper for a Database. It has the same performance and multi-threading characteristics as a Database. You don't need to synchronize a StoredMap, or use Database to get better performance.
    The lock conflicts are the thing to focus on here. This is unrelated to the topic discussed earlier in this thread.
    How many threads are inserting and how many performing queries?
    What other work, other than inserting and reading, are these threads performing?
    Does any thread keep an iterator (which is a cursor) open?
    How large are the data items in the map?
    What is the resolution of the timestamp? Milliseconds?
    I don't think the exception you posted is complete. Please post the full exception including the cause exception.
    I can't tell from the exception but it looks like multiple insertion threads are conflicting with each other, not with the query threads. If you test only the insertions (no queries), do the lock conflicts still occur?
    One possibility is that multiple insertion threads are using the same timestamp as the key. Only one thread will be able to access that key at a time, the others will wait. Even so, I don't understand why it's taking so long to perform the inserts. But you can easily make the key unique by appending a sequence number -- use a two part key {timestamp, sequence}.
    Please upgrade to JE 3.3 in order to take advantage of the improvements and so we can better support you. We're not actively working with JE 3.2, which is very outdated now.
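    The two-part {timestamp, sequence} key suggested above can be sketched in plain Java. This is illustrative only: the fixed-width, big-endian layout is one reasonable choice because it keeps byte-wise key comparison in chronological order, and the `AtomicLong` sequence makes keys unique even when many threads insert within the same millisecond.

    ```java
    import java.nio.ByteBuffer;
    import java.util.concurrent.atomic.AtomicLong;

    public class UniqueKeys {
        private static final AtomicLong SEQ = new AtomicLong();

        // Build a 16-byte key: {timestamp millis, sequence}. Big-endian (ByteBuffer's
        // default) so that byte-wise comparison preserves chronological order.
        static byte[] makeKey(long timestampMillis) {
            return ByteBuffer.allocate(16)
                    .putLong(timestampMillis)
                    .putLong(SEQ.getAndIncrement())
                    .array();
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            byte[] k1 = makeKey(now);
            byte[] k2 = makeKey(now);  // same millisecond, still a distinct key
            System.out.println(java.util.Arrays.equals(k1, k2));  // prints: false
        }
    }
    ```

    With unique keys, concurrent inserters no longer contend for the lock on a single shared key.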
    --mark

  • Commiting/Rollback changes in database.

    Request you to help me understand the following scenario
    Considering an example:
    We have employees table. The table has a column "Salary"
    And we have 4 employees in the database.
    Now we want to do an activity wherein we want to deduct amount 10000(cumulative) in all from the 4 employees.
    EMP1 - 2000
    EMP2-3000
    EMP3-4000
    EMP4- 1000
    Finally there needs to only one activity of - 10000 is to be made.
    Four requests (emp1-emp4) are sent by the web service to deduct the above-mentioned amounts, but I want the database not to commit the changes yet. A second request will then be sent by the web service to the database to either commit or rollback, depending.
    So is this possible, and if yes, how shall I proceed? Please let me know if the example is unclear; I will try my best to make it comprehensible.

    akm006 wrote:
    > Four requests(emp1-emp4) is sent by web service to deduct the above mentioned amount. but i want the database must not commit the changes. A second request will be sent by web service to database to either commit or rollback depending.
    Not possible.
    Web-based client server architecture is stateless.
    This means that there is no session state kept between client calls. Each and every client call is seen by the network and the database, as a NEW call.
    So if you do not commit the database changes in the 1st web service call, those changes will be (should be) rolled back when that web service's database session terminates. The 2nd web service call will be a new/different database session - and it cannot commit changes made by, and in, another session.
    For this reason, pessimistic locking (database default) is not recommended for web-based client server. Optimistic locking (in the application layer) needs to be used instead.
    I suggest that you take a close look at what stateless architecture means and how it works - and how optimistic locking needs to be used instead of standard database's pessimistic locking.
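    The optimistic-locking pattern described above usually means carrying a version number with each row: an update only succeeds if the version the client originally read is still current. The sketch below simulates it with an in-memory map standing in for the SQL pattern `UPDATE employees SET salary = ?, version = version + 1 WHERE id = ? AND version = ?`; the names and table shape are illustrative, not from the original post.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class OptimisticLockDemo {
        static class Row {
            int salary;
            int version;
            Row(int salary, int version) { this.salary = salary; this.version = version; }
        }

        final Map<String, Row> table = new HashMap<>();

        // Mimics: UPDATE employees SET salary = ?, version = version + 1
        //         WHERE id = ? AND version = ?
        // Returns true if exactly one row was updated (the version still matched).
        synchronized boolean updateIfUnchanged(String id, int newSalary, int expectedVersion) {
            Row row = table.get(id);
            if (row == null || row.version != expectedVersion) return false;  // lost the race
            row.salary = newSalary;
            row.version++;
            return true;
        }

        public static void main(String[] args) {
            OptimisticLockDemo db = new OptimisticLockDemo();
            db.table.put("EMP1", new Row(50000, 1));

            // Two stateless requests both read version 1; only the first update wins.
            System.out.println(db.updateIfUnchanged("EMP1", 48000, 1));  // prints: true
            System.out.println(db.updateIfUnchanged("EMP1", 47000, 1));  // prints: false (stale version)
        }
    }
    ```

    The losing request detects the stale version, re-reads the row, and decides whether to retry - all without holding any database lock between the two web service calls.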
