Why commit after preparing some SQL

Hi:
In the "TTCLASSES GUIDE TimesTen 6.0", all the example code follows the rule of calling commit just after preparing some SQL.
This is strange to me. Why commit after prepare?
Does prepare start a transaction?
Does prepare lock some tables?
What if I just don't call commit?
Shall I call commit after I drop the prepared SQL?
Regards
hardrock

Hi,
TTClasses, and the demos, are coded to try to 'enforce' TimesTen 'best practice. They include commit after the prepares since until very recent releases of TimesTen it was highly recommended to commit after prepare since prepare did acquire, and hold, locks on some of the system catalog tables. Also, Prepare does start a transaction so you will need to commit or rollback at some point to complete that transaction e.g. before you can disconnect. In Tt 7.0 and later releases, prepare no longer holds the locks - it releases them at the end of the prepare operation. So, it is less important to commit after prepares in TT 7.0. However, prepare does still start a transaction so a commit (or rollback) is still needed at some point.
Now, in general, an application should perform all of its prepares just once at startup time (or at least at connection-open time), so it is usually no big deal to open a connection, do all the prepares, and then do one commit to close the transaction.
Dropping a prepared statement does not require a commit as it does not start a transaction and does not lock anything.
Chris

Similar Messages

  • Can I use commit in between PL/SQL statements

    Hi,
    I have written a program unit in which I used insert statements, and after that I used the commit command.
    But at runtime it gets an Oracle error, "unable to insert", because I used a mix of non-database items and database items, and that is why the error occurs.
    My question is: can I use commit after executing some statements in a program unit procedure?
    Thanks in advance.

    FORMS_DDL restrictions
    The statement you pass to FORMS_DDL may not contain bind variable references in the string, but the
    values of bind variables can be concatenated into the string before passing the result to FORMS_DDL.
    For example, this statement is not valid:
    Forms_DDL ('Begin Update_Employee (:emp.empno); End;');
    However, this statement is valid, and would have the desired effect:
    Forms_DDL ('Begin Update_Employee ('||TO_CHAR(:emp.empno)
    ||');End;');
    However, you could also call a stored procedure directly, using Oracle8's shared SQL area over
    multiple executions with different values for emp.empno:
    Update_Employee (:emp.empno);
    SQL statements and PL/SQL blocks executed using FORMS_DDL cannot return results to Form
    Builder directly.
    In addition, some DDL operations cannot be performed using FORMS_DDL, such as dropping a table
    or database link, if Form Builder is holding a cursor open against the object being operated upon.
    Sarah

  • Viewing photos on Picasa Web after uploading - some are blank. Why?

    Having trouble viewing photos in Picasa Web albums. When I upload them from my Picasa program to the web, some do not get loaded. There are lots of blanks between photos.
    Picasa Help told me it was a problem with Mozilla Firefox and to right-click the placeholder in the browser; however, I have been unable to do this.

    If images are missing then check that you aren't blocking images from some domains.
    See:
    * http://kb.mozillazine.org/Images_or_animations_do_not_load
    * Check the permissions for the domain in the current tab in Tools > Page Info > Permissions
    * Check that images are enabled: Tools > Options > Content: [X] Load images automatically
    * Check the exceptions in Tools > Options > Content: Load Images > Exceptions
    * Check the "Tools > Page Info > Media" tab for blocked images (scroll through all the images)
    There are also extensions (Tools > Add-ons > Extensions) and security software (firewall, anti-virus) that can block images.

  • Why, often after making updates, does this appear: some files on the server may be missing or incorrect. Clear browser cache and try again.


    Hi Gaurav,
    I can see it fine, but for someone who works at the congress it appears:
    http://www.oeso.org/monaco_conference2015/endorsements.html
    Do you have an update to clear the browser cache automatically in Adobe Muse CC?
    If not, how can I prevent browser caching?
    Is it correct if I put it in Page Properties for the Home Master, in the HTML section?
    Gaurav Sharma wrote:
    Hi,
    Could you please provide a URL of the site, so we can check it? Also, take a look at this thread, discussing the same issue:
    Some files on the server may be missing or incorrect

  • Why can't my iPhone switch on after downloading some apps?


    You can't what? You can't do the reset?
    Do you have any battery life left? Plug your phone into the wall charger, wait 5 minutes, then do the reset while the phone is plugged in.

  • Commit after PL/SQL procedure successfully completed?

    Hello. I have a question, it may be stupid but here goes:
    When I run a script like this do I have to commit after it is completed?
    SQL> @merge_candidates_INC933736.sql
    PL/SQL procedure successfully completed.

    XerXi wrote:
    Hello. I have a question, it may be stupid but here goes:
    When I run a script like this do I have to commit after it is completed?
    SQL> @merge_candidates_INC933736.sql
    PL/SQL procedure successfully completed.
    How would anyone know? You didn't post what the script does.
    If the script contains nothing but DDL to create objects then you do NOT need to add a COMMIT, since Oracle will implicitly commit DDL statements.
    A COMMIT should be performed at the END of a transaction. Since we don't know what DML, if any, is contained in your script, we have no idea how many transactions might be represented.
    So for scripts that contain DML you should add a COMMIT to the script after each transaction has been completed.
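    The point that DML needs an explicit COMMIT can be seen with a small sketch. This uses Python's sqlite3 only because it is self-contained and runnable; the thread is about Oracle, where the same rule applies (a session that ends without committing has its uncommitted DML rolled back):

```python
# Illustration with Python's sqlite3 (not Oracle): DML that is never
# committed is rolled back when the connection closes.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (n INTEGER)")
conn.commit()

conn.execute("INSERT INTO t VALUES (1)")   # DML, no commit
conn.close()                               # pending change is rolled back

conn = sqlite3.connect(path)
lost = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(lost)    # 0: the uncommitted insert did not survive

conn.execute("INSERT INTO t VALUES (1)")
conn.commit()                              # end the transaction properly
conn.close()

conn = sqlite3.connect(path)
kept = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(kept)    # 1: the committed insert is durable
```

    The specifics of when a statement implicitly opens a transaction differ between engines, but the "COMMIT at the end of each transaction" advice above is the portable part.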

  • Why does Adobe Premiere CS6 slow playback after importing some audio?

    Adobe Premiere CS6.0 v6.0.2
    Mac Pro:
    1 x 4-core 3.2GHz CPU
    16GB RAM (4 sticks of 4GB @ 1066 MHz)
    1TB drive with OS/Apps on it
    2TB 7200 RPM media drive
    ATI Radeon HD 5770
    Sequence:
    30Fps
    1920x1080
    Media:
    Mostly .MXF files from a Canon XF100; these clips normally play nicely with Premiere.
    Issue:
    After importing some audio clips (.wav/.mp3, possibly other extensions) to the timeline, on occasion the video playback becomes choppy and the audio slows down by about 3%, making any voices in the video deeper and slower. The problem is very annoying, but it can be temporarily fixed with a computer restart; it then comes back.
    Things to note:
    When creating a rendered output (exporting the movie out to a .mov), the slowed, choppy video does not appear in the final movie.
    Temp. solution:
    The only way I can fix this problem is by restarting the computer. Simply restarting Adobe Premiere and closing any non-essential applications does not fix the problem.
    Any ideas on how to fix this are much appreciated.

    Try converting your MP3 to AIFF, QT, or something else Macs like. See if that helps.

  • Commit after insertion

    I use Oracle 10g Release 2. I'm trying to improve the performance of my database insertions. Every two days I perform a process that inserts 10,000 rows into a table. I call a PL/SQL procedure for each row, which checks the data and performs the insert into the table.
    Should I commit after every call to the procedure?
    Or is it better to perform one commit at the end of the 10,000 calls to the insertion procedure? So the question is: is "commit" a cheap operation?
    Any ideas to improve performance with this operation?

    > So the question is : is "commit" a cheap operation ??
    Yes. A commit for a billion rows is as fast as a commit for a single row.
    So there is no commit overhead for a commit on a large transaction versus a commit on a small transaction. So is this the right question to ask? The commit itself does not impact performance.
    But HOW you use the commit in your code does. Which is why the points raised by Daniel are important: how the commit is used. In Oracle, the "best place" is at the end of the business transaction. When the business transaction is done and dusted, commit. That is, after all, the very purpose of the commit command: protecting the integrity of the data and the business transaction.
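    The commit-placement point can be sketched as follows. Python's sqlite3 is used purely for illustration (the thread concerns Oracle; absolute costs differ between engines, and an in-memory database makes commits especially cheap), comparing one commit per row against one commit at the end of the batch:

```python
# Sketch: commit per row vs. one commit per batch (sqlite3 illustration).
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

rows = range(10000)

t0 = time.perf_counter()
for n in rows:
    conn.execute("INSERT INTO t VALUES (?)", (n,))
    conn.commit()                    # commit per row: many tiny transactions
per_row = time.perf_counter() - t0

conn.execute("DELETE FROM t")
conn.commit()

t0 = time.perf_counter()
for n in rows:
    conn.execute("INSERT INTO t VALUES (?)", (n,))
conn.commit()                        # one commit at the end of the batch
batched = time.perf_counter() - t0

print(f"per-row: {per_row:.3f}s  batched: {batched:.3f}s")
```

    Either way the table ends up with the same 10,000 rows; what changes is how many transactions the work is split into, which is exactly the "where to commit" question above.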

  • SSIS packages are failing after upgrade to SQL server 2014

    Hi,
    I have some SSIS packages running on SQL Server 2012.
    After I upgraded SQL Server from 2012 to 2014, the SSIS jobs are failing in SQL Agent.
    I can see it is related to data-source connectivity from the SQL Agent job: it seems it is not able to identify the connection manager, and the connection adapter was not upgraded.
    I read some articles about this, and they say it is not able to connect from the SQL Server Agent job.
    Also, I can see that the package runs if I run it manually using the SQL 2012 runtime.
    Why is it not running on SQL 2014?
    Did I miss anything while upgrading to SQL Server 2014?
    Please give me some suggestions to solve this issue.
    And is there any way I can change SQL Server Agent 2014 to adapt to this and run?
    Below is the error:
    The package failed to load due to error 0xC0010014: "One or more errors occurred. There should be more specific errors preceding this one that explain the details of the errors. This message is used as a return value from functions that encounter errors." This occurs when CPackage::LoadFromXML fails.
    Regards,
    Vinodh Selvaraj.

    I think you have typed this error message in yourself.
    Anyway, as it says, there should be more errors preceding this one. Do you have any other errors that describe the exact issue, stating at which task it fails?
    If not, then there are various reasons behind this issue: a 3rd-party connection manager such as Oracle Attunity, or it may be a 32/64-bit issue.
    You may try executing the package in 32-bit mode from the SQL Agent job.
    Please refer:
    http://blogs.msdn.com/b/farukcelik/archive/2010/06/16/why-package-load-error-0xc0010014-in-cpackage-loadfromxml-error-appears-while-trying-to-run-an-ssis-package.aspx
    http://www.bidn.com/blogs/timmurphy/ssas/1397/package-failed-to-load-due-to-error-0xc0010014
    -Vaibhav Chaudhari

  • Impossible to open .dtproj file after re-installing SQL Server Management Studio

    Hi all,
    Today, after re-installing SQL Server Management Studio from a package downloaded from the MS web site (SQLManagementStudio_x64_FRA.exe), I tried to open an SSIS package file via Visual Studio and I get a message stating that this type of project (.dtproj) is not supported.
    Here are the events that led to this problem:
    We have SQL Server 2005 installed on a server and are planning to migrate to SQL Server 2012.
    1 - Earlier this year, to solve a problem that we had with the SSMS client, we installed the SSMS 2012 client on my workstation.
    At this point, it worked fine for SSMS, and when I tried to access SSIS it converted the packages to SSIS 2012 and worked fine too.
    2 - Yesterday, I tried to access SSMS and I got the message that the test period had expired; at this point I could still open an SSIS file.
    3 - To solve the problem with SSMS we decided to re-install it. It worked fine for SSMS, but now I am not able to open an SSIS file.
    Do you have any idea what the problem is and what I should do to solve it?
    It seems that some SSIS components are missing; how can I get them back?

    You are welcome Sylviep,
    Based on what I see, you want to create BI projects (e.g. an SSIS project), so it will be enough to install SSDT, which is part of the SQL Server installation media. I do not see why you would re-install SQL Server itself.
    Arthur My Blog

  • Commit after a select query

    Do we need to commit after a SELECT statement in any case (in any transaction mode)?
    Why do we need to commit after selecting from a table in another database using a DB link?
    If I execute a SQL query, does it really start a transaction in the database?
    I could not find any entry in v$transaction after executing a select statement, which implies no transactions are started.
    Regards,
    Sandeep

    Welcome to the forum!
    >
    Do we need to commit after a select statement in any case (in any transaction mode)?
    >
    Yes, you need to issue COMMIT or ROLLBACK, but only if you issue a 'SELECT ... FOR UPDATE', because that locks the selected rows and they remain locked until you commit or roll back. Other sessions trying to update one of your locked rows will hang until the locks are released, or will get
    >
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    >
    In DB2 a SELECT will create share locks on the rows and updates of those rows by other sessions could be blocked by the share locks. So there the custom is to COMMIT or ROLLBACK after a select.
    >
    Why do we need to commit after selecting from a table in another database using a DB link?
    >
    See Hooper's explanation of this at http://hoopercharles.wordpress.com/2010/01/27/neat-tricks/
    And see the 'Remote PL/SQL' section of this - http://psoug.org/reference/db_link.html
    A quote from it
    >
    Why does it seem that a SELECT over a db_link requires a commit after execution ?
    Because it does! When Oracle performs a distributed SQL statement Oracle reserves an entry in the rollback segment area for the two-phase commit processing. This entry is held until the SQL statement is committed even if the SQL statement is a query.
    If the application code fails to issue a commit after the remote or distributed select statement, then the rollback segment entry is not released. If the program stays connected to Oracle but goes inactive for a significant period of time (such as a daemon, wait for alert, wait for mailbox entry, etc.), then when Oracle needs to wrap around and reuse the extent, Oracle has to extend the rollback segment because the remote transaction is still holding its extent. This can result in the rollback segments extending to either their maximum extent limit or consuming all free space in the rbs tablespace, even where there are no large transactions in the application. When the rollback segment tablespace is created using extendable files, the files can end up growing well beyond any reasonable size necessary to support the transaction load of the database.
    Developers are often unaware of the need to commit distributed queries and as a result often create distributed applications that cause, experience, or contribute to rollback-segment-related problems like ORA-01650 (unable to extend rollback). The requirement to commit distributed SQL exists even with the automated undo management available in version 9 and newer. If the segment is busy with an uncommitted distributed transaction, Oracle will either have to create a new undo segment to hold new transactions or extend an existing one. Eventually undo space could be exhausted, but prior to this it is likely that data would have to be discarded before the undo_retention period has expired.
    Note that, per the Distributed manual, a remote SQL statement is one that references all of its objects at a remote database, so that the statement is sent to that site to be processed and only the result is returned to the submitting instance, while a distributed transaction is one that references objects at multiple databases. For the purposes of this FAQ there is no difference, as both need a commit after issuing any form of distributed query.

  • COMMIT after every 10000 rows

    I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace size!
    create or replace procedure delete_rows(v_days number)
    is
    l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
    where_cond VARCHAR2(32767);
    begin
       where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' )) ';
       l_sql_stmt := l_sql_stmt ||where_cond;
       IF v_days IS NOT NULL THEN
           EXECUTE IMMEDIATE l_sql_stmt;
       END IF;
    end;
    I think I can use cursors and commit at every 10,000 in %ROWCOUNT, but even before posting the thread, I feel I will get bounces! ;-)
    Please help me out in this!
    Cheers
    Sarma!

    Hello
    In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
    SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
    Table created.
    SQL> SELECT COUNT(*) FROM dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
      COUNT(*)
    ----------
         35726
    SQL> DECLARE
           ln_DelSize  NUMBER := 10000;
           ln_DelCount NUMBER;
         BEGIN
           LOOP
             DELETE FROM dt_test_delete
             WHERE last_ddl_time < SYSDATE - 100
             AND rownum <= ln_DelSize;
             ln_DelCount := SQL%ROWCOUNT;
             dbms_output.put_line(ln_DelCount);
             EXIT WHEN ln_DelCount = 0;
             COMMIT;
           END LOOP;
         END;
         /
    10000
    10000
    10000
    5726
    0
    PL/SQL procedure successfully completed.
    SQL>
    HTH
    David
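    The same batched-delete idea can be sketched in Python's sqlite3 (illustration only; the original is Oracle PL/SQL, and sqlite has no rownum, so a LIMIT on a rowid subquery plays that role; the table, column names, and data below are made up):

```python
# Sketch: delete matching rows in batches, committing after each batch,
# so no single transaction holds undo/log space for the whole delete.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dt_test_delete (object_id INTEGER, age_days INTEGER)")
conn.executemany("INSERT INTO dt_test_delete VALUES (?, ?)",
                 [(i, 50 + i % 100) for i in range(35726)])
conn.commit()

BATCH = 10000
deleted_total = 0
while True:
    # LIMIT on the rowid subquery caps each batch, like "rownum <= n"
    cur = conn.execute(
        "DELETE FROM dt_test_delete WHERE rowid IN ("
        " SELECT rowid FROM dt_test_delete WHERE age_days > 100 LIMIT ?)",
        (BATCH,))
    if cur.rowcount == 0:
        break                        # nothing left that matches
    deleted_total += cur.rowcount
    conn.commit()                    # commit after each batch

print(deleted_total)
```

    As in David's PL/SQL loop, the loop exits on the first batch that deletes zero rows, and every non-empty batch is committed before the next one starts.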

  • Commit after select?

    Is it necessary to issue a commit after each SELECT in Oracle? Can it influence database performance (SELECTs without commit)?
    Thank you for answer.
    Lenka

    Hello
    I would imagine it is an artifact from using SQL Server or DB2 or something similar. For certain transaction isolation levels, SQL Server (for example) has to lock the rows being queried so that a consistent view of the data can be returned, so committing after a select ensures that these locks are removed, allowing others to read and write the data.
    Oracle handles things differently, writers don't block readers and readers don't block writers. It is all part of the multi version read consistency model which is covered in the concepts guide. There are also some very interesting articles on asktom:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:10261219059254362776::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1886476148373
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c01_02intro.htm#46633
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2414
    HTH
    David

  • Log file consumed all drive space; will not commit after adding space

    SQL 2008 - I have a drive that is 250GB that holds both the database and log files for a given database; nothing else is on the drive. The database file is ~2GB in size and the log file is ~248GB, filling up the entire drive. I have had issues in the past where there was not enough free space for the data in the log file to commit to the database file. Since this is a virtual machine, I increased the drive to 550GB to give it enough overhead to commit data, and restarted SQL.
    The log file data still did not commit. I took a full backup and also tried to shrink the database. Now the database file is ~1GB and the log has grown to ~286GB. Please advise, and note that I am a systems administrator, not a DBA by trade.

    Hi,
    I am quite sure your database's recovery model is FULL and you have not taken a transaction log backup. Have you?
    I also suspect you don't have enough space to take a log backup. If you can, please take a log backup, maybe twice, to truncate the logs, and then shrink the log file. Only a transaction log backup truncates the log (almost every time, unless some long-running transaction is holding the log) and makes it reusable, so that it can either be shrunk or reused.
    If it is UAT, you can change the recovery model of the database to SIMPLE and then shrink the logs. After that, change the recovery model back to FULL and take a full backup of the database.
    PS: Schedule regular log backups for your databases in the FULL recovery model.

  • Can I know what "Commit" is after failover to Azure?

    I want to know about the "Commit" button of protected items.
    SETUP RECOVERY: [Between an on-premises Hyper-V site and Azure]
    After failover from the on-premises Hyper-V site to Azure, the protected item shows a "Commit" button.
    The "Commit" job includes "Prerequisite check" and "Commit".
    Regards,
    Yoshihiro Kawabata

    In ASR, failover can be thought of as a two-phase activity:
    1) The actual failover, where you bring up the VM in Azure using the latest recovery point available.
    2) Committing the failover to that point.
    Now the question on your mind will be why we have these two phases. The reason is as follows.
    Let's say you have configured your VM to have 24 recovery points with hourly app-consistent snapshots. When you fail over, ASR automatically picks up the latest point in time that is available for failover (say 9:35 AM that day). If you were not happy with that recovery point because of some consistency issue in the application data, you can use the Change Recovery Point button (gesture) in the ASR portal to choose a different recovery point (say an app-consistent snapshot from 9:00 AM that day) to perform the failover.
    Once you are satisfied with the snapshot that has been failed over to Azure, you can hit the Commit button. Once you hit the Commit button, you will not be able to change your recovery point.
    Let me know if you have more questions.
    Regards,
    Anoop KV
