Best Practice: Combine prepared statements with ;

Hi,
I would like to know what the best practice is for combining prepared statements to produce the query below:
INSERT INTO my_table (value) VALUES (?); SELECT LAST_INSERT_ID()
The reason for this is that I have written a simple DB wrapper to handle my database connections and queries, based on a property file containing SQL strings. I would prefer not to change this wrapper code, but to be able to specify combined queries in the SQL string if possible.
Thanks in advance

Have you thought about using batch statements?
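If the wrapper ultimately runs plain JDBC against MySQL, another option besides batching (a hedged sketch, not the poster's wrapper code) is to drop the second statement and ask the driver for the generated key; the table and column names come from the post, everything else is assumed.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithGeneratedKey {
    // Returns the auto-increment key of the inserted row, i.e. the same value
    // that "SELECT LAST_INSERT_ID()" would report on this connection.
    public static long insertValue(Connection con, String value) throws SQLException {
        String sql = "INSERT INTO my_table (value) VALUES (?)";
        try (PreparedStatement ps =
                 con.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, value);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1);
            }
        }
    }
}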

Similar Messages

  • Could not find prepared statement with handle %.

    Greetings. I've seen several posts for this error on the web, but no clear cut answers. I captured the code below in profiler, with the intention of replaying in mgmt studio.
    However, the attempt ends in the following error: "Could not find prepared statement with handle 612."
    declare @p1 int
    set @p1=612
    declare @p2 int
    set @p2=0
    declare @p7 int
    set @p7=0
    exec sp_cursorprepexec @p1 output,@p2 output,N'@P0 int,@P1 int,@P2 int,@P3 int,@P4 bit',N'EXEC dbo.mySproc @P0,@P1,@P2,@P3,@P4 ',4112,8193,@p7 output,219717,95,NULL,1,0
    select @p1, @p2, @p7
    Something noteworthy is that my sproc only has 5 input parameters, but this makes it look like it has many more.
    How do I manipulate the code enough to make it work in mgmt studio? Thanks!
    TIA, ChrisRDBA

    In profiler you would normally see RPC:Starting and RPC:Completed. The statement shown in RPC:Starting is the one you need to pick because, as Erland explained, RPC:Completed would show "funky" behavior.
    Balmukund Lakhani

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather vague concept: we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial-and-error corrections to a file only to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per
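    For what it's worth, here is a minimal sketch of one way to point Coherence at an external cache configuration and keep a cache reference alive independently of the EJB lifecycle. The system property name assumes a Coherence 3.x-era release, and the file path and cache name are invented placeholders, so treat this only as a starting point.
    // Hedged sketch: override the cache configuration packaged in coherence.jar
    // via a system property instead of re-packaging the jar. Property name
    // assumes Coherence 3.x; the path and cache name are placeholders.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheConfigCheck {
        private static NamedCache cache;

        public static synchronized NamedCache getCache() {
            if (cache == null) {
                // Equivalent to -Dtangosol.coherence.cacheconfig=... on the JVM command line
                System.setProperty("tangosol.coherence.cacheconfig",
                        "/opt/app/config/my-cache-config.xml");
                // Coherence reports which configuration it loaded when the first cache is requested
                cache = CacheFactory.getCache("my-cache");
            }
            return cache;
        }
    }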

  • Best practice for sharing data with a modal window

    Hi team,
    what would be the best practice for sharing data with a modal
    window? I use a modal window to display record details from a
    record list, but I am not quite sure how to access the data from
    the components in the main application in the modal window.
    Any hints would be welcome
    Best
    Frank

    Pass a reference to the parent into the modal popup. Then you
    can reference anything in the parent scope.
    I haven't done this in 2.0 yet so I can't give you code. I'll
    post if I do.
    Oh, also, you can reference the parent using parentDocument.
    So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • Prepared Statement with ORDER BY

    I am trying to use ORDER BY with a prepared statement, but it is not ordering.
    String sql = "SELECT * FROM MATERIAL WHERE LOWER(NAMEE) LIKE ('%' || ? || '%') ORDER BY ? ";
    PreparedStatement ps = CM.getStatement(sql);
    ps.setString(1, p);
    ps.setString(2, sort);
    ResultSet r = ps.executeQuery();
    Can anyone tell me how I can use a prepared statement with ORDER BY?

    You cannot parameterize column names and such, only literal values. You should build the ORDER BY clause dynamically, as sketched below.
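    A minimal sketch of that approach with plain JDBC: the table and column names come from the post, while the whitelist contents, method names and the use of a raw Connection instead of the CM helper are assumptions for illustration.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Set;

    public class MaterialDao {
        // Only columns listed here may be used for sorting (guards against SQL injection).
        private static final Set<String> SORTABLE = Set.of("NAMEE", "ID", "PRICE");

        public ResultSet findByName(Connection con, String namePart, String sortColumn)
                throws SQLException {
            if (!SORTABLE.contains(sortColumn.toUpperCase())) {
                throw new IllegalArgumentException("Unsupported sort column: " + sortColumn);
            }
            // The LIKE value stays a bind parameter; only the validated column name is concatenated.
            String sql = "SELECT * FROM MATERIAL WHERE LOWER(NAMEE) LIKE ('%' || ? || '%') "
                       + "ORDER BY " + sortColumn;
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setString(1, namePart.toLowerCase());
            return ps.executeQuery();
        }
    }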

  • Best practices for apps integration with third party systems ?

    Hi all
    I would like to know if there is any document from Oracle, or from your own experience, regarding best practices for integrating Oracle Apps with third-party systems.
    For example, let's say a customization in a given module (e.g. Payables) needs to provide data to a third-party system. Consider the following:
    outbound interface:
    1) Should the third-party system be given direct access to the Oracle database, so it can query a particular payments table/view to look for data?
    2) Or should Oracle create a file for the third-party system, so that it can read it and do what it needs to do?
    inbound:
    1) Should the third party log in directly and insert data into the tables which hold response data?
    2) Or again, should the third party create a file that Oracle Apps will pick up for further processing?
    Again, there could be a lot of company-specific constraints, such as whether the interface has to be real time or not, etc.
    How do companies make sure third-party systems are not dipping directly into other systems (Oracle Apps or others), so that integration follows certain best practices?
    What role does enterprise architecture play in this? Can we apply SOA standards? Should we use request/reply via TIBCO, etc.?
    Many Oracle Apps customizations interact more or less directly with third-party systems, by including code that logs into the respective third-party systems and vice versa.
    Let me know if you have done this differently; that would help the Oracle Apps community.
    thanks
    rrb.

    You want to send an IDoc to a third-party (non-SAP) system.
    What kind of system is it? Can it handle HTTP requests,
    or
    can it handle a web service?
    Which version of R/3 are you using?
    What mechanism does the receiving system have for receiving data?
    Regards
    Raja

  • Could not find prepared statement with handle 13

    Hi,
    I'm having a terrible problem: When I try to execute a SQL Query the following exception is thrown:
    * "java.sql.SQLException: [Macromedia][SQLServer JDBC Driver][SQLServer]Could not find prepared statement with handle 13."
    This exception is thrown on this line:
    boolean returnResultSet = ((PreparedStatement)sqlStatement).execute();
    The sqlStatement object is a java.sql.PreparedStatement that was received as a Statement in the method definition.
    The following query is being executed in this PreparedStatement:
    SELECT id_promocao, ds_nome, id_tipo, ds_sinopse, dt_lancamento, pt_site, pt_caminho_relativo, fl_ativo FROM TAB_CINE_GM ORDER BY ds_nome
    I'm using Macromedia JRun 4 build 61650 and I'm using MS-SQL Server 2000 as a database server.
    If anyone can help, I'll thanks a lot.
    Helcio Chaves
    São Paulo - SP - Brazil
    [email protected]

    There is a common way to check the runtime type:
    if (sqlStatement instanceof PreparedStatement) {
        returnResultSet = ((PreparedStatement) sqlStatement).execute();
    } else {
        returnResultSet = sqlStatement.execute(); // a plain Statement would need the SQL text here
    }
    By the way, I can't understand why you're trying to cast sqlStatement to PreparedStatement at all. Because of polymorphism (for all non-static Java methods), the call is dispatched to the object's actual runtime type anyway; the difference is only that execute() with no arguments is declared on PreparedStatement but not on Statement.
    Enjoy,
    Pavel

  • Best practice identifying ERT modules with SAP / IS-Utilities

    Hi everybody,
    I'm looking for the best practice identifying ERT modules with SAP / IS-Utilities (electricity).
    Here's the physical device set up :
    The ERT modules are internal to the electricity meter. They're integrated into a multi-purpose electronic circuit, so they can't be removed physically as a separate device.
    The ERT modules are used to transmit data from the meter to a radio frequency receiver (handheld or drive-by). The main data transmitted is the consumption reading, so the receiver stores the ERT module number and the reading value.
    There may be one or more ERT modules in a single meter, and each ERT module transmits its own specific consumption reading (energy reading, demand reading, etc.).
    Each ERT module has its own manufacturer number.
    My issue is :
    To find a way to identify in IS-U the ERT module within the meter's register group (or somewhere else?) in order to relate each register to its ERT module number.
    The purpose of all this is to create reading orders that carry the ERT module number for each register.
    This way we can match, using a unique key, each reading order with its corresponding reading value uploaded from the radio frequency receiver (handheld or drive-by).
    Thanks for your help and ideas on best practice.

    Hi,
    1) The system (application) environment of BI (what is integrated in it - e.g. within the portal there is storage for unstructured information like documents, or virtual rooms for collaboration between departments - and what does it do)
    Document management, via the RSA1 transaction of BI, lets you attach unstructured documents at a specific level in BI.
    2) How does development in BI work (development environment, coding, debugging, building, deployment and test) and which is used more heavily (ABAP or ABAP OO)? Here, I don't mean how to write ABAP or ABAP OO programs, only the infrastructure from development to transport to a target system.
    BI has a separate tool and GUI to perform all the extract, transform and load (ETL) activities. ABAP is part of BI, but you don't need extensive ABAP knowledge; basic ABAP is sufficient to write routines and extractors.
    3) How is a BI system configured by default after installation?
    A BASIS person may be able to help you out with the configuration, but this is not the job of the BI person.
    4) Good guides (e-books) to learn ABAP and ABAP OO (as practice-oriented as possible)
    You can search for the SAM Series "Learn ABAP in 24 Days" book. This book is sufficient to learn the ABAP required for working in BI.
    But apart from ABAP, you will have to learn the BI system completely to work efficiently.
    Regards,
    Durgesh.

  • FindByPrimaryKey: Could not find prepared statement with handle 3

    I've inherited a WL61 application and been asked to make it work under WL81. We're using SQL Server 2000. We only access two tables. The XML got auto-converted during the upgrade, but I had to correct the RDBMS column names in the weblogic-cmp-jar.xml
    The application mostly works, except that findByPrimaryKey fails with:
    ERROR ExecuteThread: '14' for queue: 'weblogic.kernel.Default' Administrator : TargetSessionBean - Error finding promotion with ID <2>
    javax.ejb.FinderException: Problem in findByPrimaryKey while preparing or executing statement: 'weblogic.jdbc.wrapper.PreparedStatement_weblogic_jdbc_base_BasePreparedStatement@95':
    java.sql.SQLException: [BEA][SQLServer JDBC Driver][SQLServer]Could not find prepared statement with handle 3.
    java.sql.SQLException: [BEA][SQLServer JDBC Driver][SQLServer]Could not find prepared statement with handle 3.
    at weblogic.jdbc.base.BaseExceptions.createException(Unknown Source)
    at weblogic.jdbc.base.BaseExceptions.getException(Unknown Source)
    I've checked the database table and the row exists with the appropriate PK (in this case a promotion with ID <2>).
    In the WL61 version the findByPrimaryKey was explicitly defined in the weblogic-cmp-rdbms-jar.xml as follows:
    <finder>
    <method-name>findByPrimaryKey</method-name>
    <method-params>
    <method-param>com.fujitsu.ftxs.corema.server.PromotionPK</method-param>
    </method-params>
    <finder-query><![CDATA[ (= $0 promotionId) ]]></finder-query>
    <finder-expression>
    <expression-number>0</expression-number>
    <expression-text><![CDATA[@0.promotionId]]></expression-text>
    <expression-type>int</expression-type>
    </finder-expression>
    </finder>
    But I understand that with WL81 I should no longer define this - it's done implicitly - so I've removed this finder definition.
    Any help appreciated. Thanks,
    - Andy Abel

    I fixed it by switching from the BEA driver:
    DriverName="weblogic.jdbc.sqlserver.SQLServerDriver"
    URL="jdbc:bea:sqlserver://host:1433"
    to the Microsoft driver:
    DriverName="com.microsoft.jdbc.sqlserver.SQLServerDriver"
    URL="jdbc:microsoft:sqlserver://host:1433"
    If anyone can explain why the Microsoft Driver works and the BEA driver does not I'd like to know.
    Thanks,
    - Andy Abel
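    As a quick standalone sanity check (a hedged sketch outside of WebLogic, not part of the CMP configuration), the same driver class and URL from the post can be exercised directly; the database name, user and password are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MsSqlConnectCheck {
        public static void main(String[] args) throws Exception {
            // Driver class and URL taken from the working configuration above
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://host:1433;DatabaseName=mydb",
                    "user", "password");
            System.out.println("Connected: " + !con.isClosed());
            con.close();
        }
    }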

  • Could not find prepared statement with handle 1.

    [Macromedia][SQLServer JDBC Driver][SQLServer]Could not find prepared statement with handle 1.
    I'm getting this error message in what appear to be random ways. The first time I look at a page I might not get it, but the second time I might. I discovered that removing a cfqueryparam tag worked, but that is not really a safe solution. I checked that the cf_sql_type matched the database field, and in one case changed a cf_sql_varchar to a cf_sql_char so it would match a SQL Server nchar(10) field. But still these errors. Any ideas? I've not had any luck Googling this.
    I should add that I'm running Coldfusion 9 as a Tomcat webapp on a Linux server. The database is SQL Server 2005, I think.

    Here's the one that is breaking now:
    <cfquery name="CheckCredentials" datasource="#application.crossreg_dsn#">
                                            SELECT [name_first]+' '+[name_last] as name
                                                        ,p.[uni]
                                                        ,p.email
                                                        ,p.role_id
                                                         ,r.role_name
                                                      ,p.external_program_id
                                              FROM [CrossReg].[dbo].[People] p
                                               INNER JOIN dbo.Roles r on r.role_id = p.role_id
                                              WHERE uni = <cfqueryparam cfsqltype="cf_sql_char" value="#Session.username#">
    </cfquery>
    Session.username is being returned from a CAS authentication system. I've never had troubles with it before.

  • Best practice: Using break statement inside for loop

    Hi All,
    Is using a break statement inside a FOR loop a best practice or not?
    I have given some sample code:
    1. With a break statement
    2. With a boolean variable that decides whether to come out of the loop or not.
    for (int i = 0; i < 10; i++) {
        if (i == 5) {
            break;
        }
    }

    boolean breakForLoop = false;
    for (int i = 0; i < 10 && !breakForLoop; i++) {
        if (i == 5) {
            breakForLoop = true;
        }
    }
    The example may be a stupid one, but I want to know which one is better.
    Thanks and Regards,
    Ashok kumar B.

    Actually, it's bad practice to use break anywhere other than in conjunction with a switch statement. Presumably, if you favour:
    boolean test = true;
    while (test) {
        test = foo && bar;
        if (test) {
            // ...
        }
    }
    over
    for (;;) {
        if (!(foo && bar)) break;
        // ...
    }
    then you also favour
    boolean test = foo && bar;
    if (test) {
        // ...
    }
    over
    if (foo && bar) {
        // ...
    }
    Or can you justify your statement with any example which doesn't cause more complexity, more variables in scope, and multiple assignments and tests?
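    To make the trade-off above concrete, here is a small illustrative comparison in the spirit of the original question; the array-search scenario, names and data are invented for the example.
    public class BreakVsFlag {
        // Early exit with break: one loop variable, one test per iteration.
        static int indexOfWithBreak(int[] values, int target) {
            int found = -1;
            for (int i = 0; i < values.length; i++) {
                if (values[i] == target) {
                    found = i;
                    break;
                }
            }
            return found;
        }

        // Same search with a flag: an extra variable in scope and an extra test per iteration.
        static int indexOfWithFlag(int[] values, int target) {
            int found = -1;
            boolean done = false;
            for (int i = 0; i < values.length && !done; i++) {
                if (values[i] == target) {
                    found = i;
                    done = true;
                }
            }
            return found;
        }

        public static void main(String[] args) {
            int[] data = {3, 1, 4, 1, 5};
            System.out.println(indexOfWithBreak(data, 4)); // prints 2
            System.out.println(indexOfWithFlag(data, 4));  // prints 2
        }
    }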

  • Best practice - transitions without state?

    I'm noticing that, sometimes, it's easier to just do a
    Transition on a component without having the changes in a State.
    For example, I can't seem to get a sequence of Resize effects
    when the properties are set in State, but I succeed when I empty
    out the state. No combination of SetPropertyAction seems to work.
    Is this a best practice? Or is it considered sloppy? How do
    you decide what to put in a State, and what to put in a
    Transition?

    I have been coding Flex for a little over 2 years and the
    first project used states. We moved away from states because we ran
    into a couple bugs (in Flex 2) and it made the code less easy to
    read and organize.
    Transitions seemed more natural from a coding aspect. So now
    most everything is done in transitions and once in a while a custom
    component will use states.

  • Best practice for preparation of install of Leopard from Tiger

    Hi all, I am about to upgrade my iMac with an Archive and Install of Leopard 10.5 from Tiger 10.4.11.
    I've done this dozens of times on other machines, but I never really thought about the best way to prepare for an install, I have already backed up this machine using Carbon Copy Cloner, is there anything else I should do before hitting the button?

    Run the Disk Utility Repair function (or at least Verify), and also Repair Permissions, which takes many minutes. This will help assure the integrity of your directory before you begin.
    I am not sure why you are choosing to Archive and Install. An Upgrade install will be offered, and is thought to be just as effective, and can be a little more convenient in that your non-Apple Applications are not moved into the "Previous System" folder.
    The absolute BEST practice would be (after you make TWO backups) to erase your hard drive with the Write Zeroes/Zero All Data option. This will take many hours, but will force the drive to substitute spare blocks for any found to be defective after the zeroing. But I must say that in the absence of error messages indicating developing disk trouble, this is incredible overkill. You asked, so I am answering the question as you asked it.
    Message was composed over a long period of time due to multiple interruptions.

  • Best Practices for sharing media with iMovie and FCPX

    So I've a large iMovie Events directory, and would like to use that media with both iMovie and FCPX projects.
    I'd rather not duplicate the media, so would prefer to import as references into FCPX.
    The dilemma is that I see that it's possible to modify or move media from within the iMovie application, and therefore break the reference to that media with FCPX.
    I only see two options:  (1) Never Ever modify the location/name of media in the iMovie Events file (even from within the iMovie app) since I would break an FCPX link if that media is referenced, or (2) always import (copy) the iMovie events into the FCPX Event Library making an independent original so that I can confidently operate on those media files in either application.
    I'd surely rather not have to do (2) (e.g. doubling my storage demands) to gain the flexibility of using either application to edit the video, but I really don't want to live with the restrictions of (1).
    Thoughts / Solutions?  What might you consider as options or best practices?

    Unless there is some other reason, users should own the right to share their mailboxes - it shouldn't be something that demands administrator management (if only so that the administrators aren't swamped by user requests for sharing their mailboxes). 
    For true shared mailboxes, when the mailbox is created, full access is granted by an administrator.

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions, so we can perform database operations and JMS operations within a single transaction.
    Some of our team have tried out the TopLink support for external transactions and come up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to comment on whether these recommendations are valid and in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (highlighted in blue in the original post) the findSomeObject method will read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork still depend on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use it, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon
