Commit after crash

Hi,
I need help. Forms (compiled modules) built with FORMS 5 or FORMS 6 crash at runtime when the current form calls another form; the crash appears to be caused by a shortage of RAM. If the user has issued a POST before the form crashes, the crash ends up committing the posted changes in the database. I have noticed that this does not happen on every workstation. What could be the cause, and how can I correct it?
Thanks,
Dennis.

P.S. I can also crash the form in another way (I can send you the form for testing). Please answer me!

Similar Messages

  • Problems with PObject::destroy(void*) and commit after this

    Hi there
    I have a problem deleting garbage objects.
    For example, I create my object:
    MyObject * o = new (conn, "MY#TABLE") MyObject();
    if (some_condition)
        PObject::destroy(o);
    delete o;        // this works well
    // but after this
    conn->commit();
    I get the error
    OCI-21710: argument is expecting a valid memory address of an object
    Can anyone help?
    Can I delete/destroy an object that has been created with new (conn, table)
    without running into this problem on the commit that follows?

    Thank you for the answer,
    but what do you think about this code?
    MyObject * o = new (conn, "MY#TABLE") MyObject();
    if (some_condition)
        o->markDelete();    // mark the object for deletion first
    PObject::destroy(o);
    delete o;
    // but after this
    conn->commit();
    This works without error.

  • How to restore illustrator file after crash?

    How do I restore Adobe Illustrator files after a crash on Windows 8, and how can I back the files up?

    To enable us to help you better, you need to provide as many details as you can about the problem you are experiencing.
    If you design your question effectively, you can get good information from people who are knowledgeable about the topic and who are happy to help you.
    Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all.
    What troubleshooting have you done so far?
    Asking an effective question will get you help faster; read how in these suggestions for asking for help on a site:
    http://www.catb.org/~esr/faqs/smart-questions.html
    Wanikiya and Dyami--Team Zigzag

  • Commit after insertion

    I use Oracle 10g Release 2. I'm trying to improve the performance of my database insertions. Every two days I run a process that inserts 10,000 rows into a table. I call a PL/SQL procedure
    for every row, which checks the data and performs the INSERT command on the table.
    Should I commit after every call to the procedure?
    Or is it better to perform one commit at the end of the 10,000 calls to the insertion procedure? So the question is: is "commit" a cheap operation?
    Any ideas to improve performance of this operation?

    > So the question is: is "commit" a cheap operation?
    Yes. A commit for a billion rows is as fast as a commit for a single row.
    So there is no extra commit overhead for a commit on a large transaction versus a commit on a small transaction. But is this the right question to ask? The commit itself does not impact performance.
    HOW you use the commit in your code, however, does. Which is why the points raised by Daniel are important: how the commit is used. In Oracle, the "best place" is at the end of the business transaction. When the business transaction is done and dusted, commit. That is, after all, the very purpose of the commit command - protecting the integrity of the data and the business transaction.
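    A minimal sketch of that advice, assuming a hypothetical staging table staging_rows and a hypothetical procedure insert_checked_row that validates and inserts one row: loop over the batch and commit once when the whole business transaction is complete, rolling back if anything fails.
    BEGIN
        FOR r IN (SELECT id, payload FROM staging_rows) LOOP   -- hypothetical source of the 10,000 rows
            insert_checked_row(r.id, r.payload);                -- hypothetical procedure: checks data, then inserts
        END LOOP;
        COMMIT;                                                 -- one commit for the whole batch
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;                                           -- keep the business transaction atomic
            RAISE;
    END;
    /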

  • Firefox 10.0.2 import/merge history (not just bookmarks) from Firefox 3.6.13, 3.6.15, & recovered (after crashes)

    In Mac Firefox 10.0.2, is there a way to import/merge Mac Firefox 3.6.13 and 3.6.15 history (not just bookmarks), and history recovered after crashes, and if so, how? Thank you!

    You can find the Import menu entry in the Bookmarks Manager (Library)
    *Bookmarks > Show All Bookmarks > Import & Backup > Import Data from Another Browser
    *http://kb.mozillazine.org/Import_bookmarks
    Import & Backup is the third button on the toolbar in the Library that looks like a star.
    You should be able to export the bookmarks via that button.
    See also:
    *https://support.mozilla.org/kb/how-do-i-use-bookmarks

  • Commit after 10000 rows

    Hi
    I am inserting around 2.5 million records in a conversion project.
    Please let me know how I can commit after every 10,000 rows, and whether I can use bulk insert or bulk bind (I have never used them). Please help me resolve this.
    Thanks
    Madhu

    As Sundar said, per the link to Tom's advice you are better off not committing in the loop, otherwise it may give you a "snapshot too old" error.
    Still, if you want to:
    1. Set a counter to 0: ct number := 0;
       Increment the counter in the loop: ct := ct + 1;
       IF ct = 10000 THEN
           COMMIT;
           ct := 0;   -- reset the counter after each commit
       END IF;
    2. You can also use BULK COLLECT and FORALL, and commit per batch (see the sketch below).
    But still follow the thread as per Tom's advice.
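    A minimal sketch of option 2 above, assuming a hypothetical source table src_rows and target table tgt_rows: fetch with BULK COLLECT in batches of 10,000, insert with FORALL, and commit after each batch (keeping in mind Tom's caveat about intermediate commits).
    DECLARE
        CURSOR c IS SELECT id, payload FROM src_rows;           -- hypothetical source
        TYPE t_rows IS TABLE OF c%ROWTYPE INDEX BY PLS_INTEGER;
        l_rows t_rows;
    BEGIN
        OPEN c;
        LOOP
            FETCH c BULK COLLECT INTO l_rows LIMIT 10000;       -- one batch of up to 10,000 rows
            EXIT WHEN l_rows.COUNT = 0;
            FORALL i IN 1 .. l_rows.COUNT
                INSERT INTO tgt_rows (id, payload)              -- hypothetical target
                VALUES (l_rows(i).id, l_rows(i).payload);
            COMMIT;                                             -- commit once per batch
        END LOOP;
        CLOSE c;
    END;
    /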

  • Commit after deleting records

    Hi All,
    I wrote one DELETE command. This command deletes 20 million records, but I want to commit after every 100,000 deleted records. How can I do that?
    please guide me.
    Thank you.

    Depends on your delete statement.
    Sometimes you can group the deletes into logical units.
    Compare and consider the following three approaches.
    1) This deletes all data from all previous months in one statement:
    DELETE FROM myTable WHERE insertDate < trunc(sysdate,'mm');
    2) Delete each month separately and commit in between:
    DELETE FROM myTable WHERE insertDate < trunc(sysdate,'year');
    COMMIT;
    DELETE FROM myTable
    WHERE insertDate >= trunc(sysdate,'year')
    AND insertDate < add_months(trunc(sysdate,'year'),1) -- "January"
    AND insertDate < trunc(sysdate,'mm'); -- do not delete too much!
    COMMIT;
    DELETE FROM myTable
    WHERE insertDate >= trunc(sysdate,'year')
    AND insertDate < add_months(trunc(sysdate,'year'),2) -- "February"
    AND insertDate < trunc(sysdate,'mm'); -- do not delete too much!
    COMMIT;
    DELETE FROM myTable
    WHERE insertDate >= trunc(sysdate,'year')
    AND insertDate < add_months(trunc(sysdate,'year'),3) -- "March"
    AND insertDate < trunc(sysdate,'mm'); -- do not delete too much!
    COMMIT;
    -- ... one DELETE and COMMIT per month, up to ...
    DELETE FROM myTable
    WHERE insertDate >= trunc(sysdate,'year')
    AND insertDate < add_months(trunc(sysdate,'year'),12) -- "December"
    AND insertDate < trunc(sysdate,'mm'); -- do not delete too much!
    COMMIT;
    3) Delete based on ID in logical groups of 100,000 rows:
    SELECT min(id), max(id) INTO v_min, v_max
    FROM myTable WHERE insertDate < trunc(sysdate,'mm');
    FOR i IN 1 .. trunc((v_max - v_min)/100000) + 1 LOOP
      DELETE FROM myTable
      WHERE id >= v_min + ((i-1) * 100000)
      AND id < v_min + (i * 100000)
      AND id <= v_max
      AND insertDate < trunc(sysdate,'mm'); -- not strictly needed; maybe remove it depending on the execution plan
      COMMIT;
    END LOOP;

  • Commit after 2000 records in update statement but am not using loop

    Hi
    My Oracle version is Oracle 9i.
    I need to commit after every 2,000 records. Currently I am using the statement below, without a loop. How can I do this?
    Do I need to use ROWNUM?
    BEGIN
    UPDATE
    (SELECT A.SKU,M.TO_SKU,A.TO_STORE FROM
    RT_TEMP_IN_CARTON A,
    CD_SKU_CONV M
    WHERE
    A.SKU=M.FROM_SKU AND
    A.SKU<>M.TO_SKU AND
    M.APPROVED_FLAG='Y')
    SET SKU = TO_SKU,
         TO_STORE=(SELECT(
              DECODE(TO_STORE,
              5931,'931',
              5935,'935',
              5928,'928',
              5936,'936'))
              FROM
              RT_TEMP_IN_CARTON WHERE TO_STORE IN ('5931','5935','5928','5936'));
    COMMIT;
    end;
    Thanks for your help

    > I need to commit after every 2000 records
    Why?
    Committing every n rows is not recommended.
    > Currently am using the below statement without using the loop. how to do this?
    Use a loop? (not recommended; see the sketch below for what that would look like)
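    For completeness, a minimal sketch of the loop-based batching the reply alludes to (and discourages), assuming a hypothetical table big_table with a processed_flag column that marks rows still to be updated; each pass updates at most 2,000 rows and commits. A single UPDATE with one commit at the end is normally the better choice.
    BEGIN
        LOOP
            UPDATE big_table                        -- hypothetical table
            SET    processed_flag = 'Y'
            WHERE  processed_flag = 'N'
            AND    ROWNUM <= 2000;                  -- limit each pass to 2,000 rows
            EXIT WHEN SQL%ROWCOUNT = 0;             -- stop when nothing is left to update
            COMMIT;                                 -- commit after each batch (the discouraged pattern)
        END LOOP;
    END;
    /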

  • Avoid Commit after every Insert that requires a SELECT

    Hi everybody,
    Here is the problem:
    I have a table of generator alarms which is populated daily. On a daily basis there are approximately 50,000 rows to be inserted into it.
    Currently I have one month's data in it, approximately 900,000 rows.
    Here is the main problem:
    Before each INSERT command, the whole table is checked to see whether the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked; together (combined with AND in the WHERE clause) these two columns identify a unique record.
    We have also partitioned this table, basically on OccuranceDate, and each partition holds 5 days' data.
    say
    01-Jun to 06 Jun
    07-Jun to 11 Jun
    12-Jun to 16 Jun
    and so on
    26-Jun to 30 Jun
    NOW:
    We have a COMMIT inside the insertion loop, so each row is committed as soon as it is inserted, making approximately 50,000 commits daily.
    Question:
    Can we commit after, say, every 500 inserted rows? But my real question is: can we query, with SELECT, records that have just been inserted but not yet committed?
    A friend told me that you can query records that were inserted in the same connection/session but not yet committed.
    Can anyone help?
    Sorry for the long question, but it was needed to make you understand the real issue. :(
    Khalid Mehmood Awan
    khalidmehmoodawan @ gmail.com

    Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there they will give suggestions on optimizing it.
    Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
    Also, not committing on time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, in between, or at the end.
    Regards,
    K.
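    On the specific question of reading rows that are inserted but not yet committed: a session in Oracle always sees its own uncommitted changes, so the existence check (or any other SELECT) run in the same session will find them; other sessions will not see them until the COMMIT. A minimal sketch, assuming a hypothetical alarms table keyed on SiteName and OccuranceDate:
    INSERT INTO alarms (SiteName, OccuranceDate, message)   -- hypothetical table
    VALUES ('SITE-01', SYSDATE, 'overheat');
    -- No COMMIT yet: the new row is visible only to this session.
    SELECT COUNT(*)
    FROM   alarms
    WHERE  SiteName = 'SITE-01'
    AND    OccuranceDate > SYSDATE - 1;                     -- counts the uncommitted row in this session
    -- The same SELECT in another session would not see the row until COMMIT.
    COMMIT;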

  • ODI commit after N rows

    hi
    I am trying to insert millions of rows into a target Oracle table from an Oracle source using ODI.
    Is it possible to specify a commit after every n rows, say after every 100,000 rows?
    Kindly let me know any suggestions.

    No, I am using an ODI interface to populate the table.
    LKM: SQL to Oracle
    IKM: Oracle Incremental Update
    The interface failed with an "unable to extend the tablespace" error. I want to commit the records after every, say, 100,000 rows.

  • Commit after a select query

    Do we need to commit after a select statement in any case (in any transaction mode)?
    Why do we need to commit after selecting from a table in another database using a DB link?
    If I execute a SQL query, does it really start a transaction in the database?
    I could not find any entry in v$transaction after executing a select statement which implies no transactions are started.
    Regards,
    Sandeep

    Welcome to the forum!
    >
    Do we need to commit after a select statement in any case (in any transaction mode)?
    >
    Yes you need to issue COMMIT or ROLLBACK but only if you issue a 'SELECT .... FOR UPDATE' because that locks the rows selected and they will remain locked until released. Other sessions trying to update one of your locked rows will hang until released or will get
    >
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    >
    In DB2 a SELECT will create share locks on the rows and updates of those rows by other sessions could be blocked by the share locks. So there the custom is to COMMIT or ROLLBACK after a select.
    >
    Why do we need to commit after selecting from a table in another database using a DB link?
    >
    See Hooper's explanation of this at http://hoopercharles.wordpress.com/2010/01/27/neat-tricks/
    And see the 'Remote PL/SQL section of this - http://psoug.org/reference/db_link.html
    A quote from it
    >
    Why does it seem that a SELECT over a db_link requires a commit after execution ?
    Because it does! When Oracle performs a distributed SQL statement Oracle reserves an entry in the rollback segment area for the two-phase commit processing. This entry is held until the SQL statement is committed even if the SQL statement is a query.
    If the application code fails to issue a commit after the remote or distributed select statement then the rollback segment entry is not released. If the program stays connected to Oracle but goes inactive for a significant period of time (such as a daemon, wait for alert, wait for mailbox entry, etc...) then when Oracle needs to wrap around and reuse the extent, Oracle has to extend the rollback segment because the remote transaction is still holding its extent. This can result in the rollback segments extending to either their maximum extent limit or consuming all free space in the rbs tablespace even where there are no large transactions in the application. When the rollback segment tablespace is created using extendable files then the files can end up growing well beyond any reasonable size necessary to support the transaction load of the database. Developers are often unaware of the need to commit distributed queries and as a result often create distributed applications that cause, experience, or contribute to rollback segment related problems like ORA-01650 (unable to extend rollback). The requirement to commit distributed SQL exists even with automated undo management available with version 9 and newer. If the segment is busy with an uncommitted distributed transaction Oracle will either have to create a new undo segment to hold new transactions or extend an existing one. Eventually undo space could be exhausted, but prior to this it is likely that data would have to be discarded before the undo_retention period has expired.
    Note that per the Distributed manual that a remote SQL statement is one that references all its objects at a remote database so that the statement is sent to this site to be processed and only the result is returned to the submitting instance, while a distributed transaction is one that references objects at multiple databases. For the purposes of this FAQ there is no difference, as both need to commit after issuing any form of distributed query.
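    A minimal sketch of the rule quoted above, assuming a hypothetical database link remote_db: even a plain query over the link reserves an undo/rollback entry for two-phase commit processing, so release it with a COMMIT (or ROLLBACK) once you are done.
    SELECT COUNT(*)
    FROM   employees@remote_db;   -- hypothetical remote table over a hypothetical db link
    -- The distributed query has reserved an undo entry for two-phase commit processing.
    COMMIT;                       -- releases it, even though no data was changed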

  • COMMIT after every 10000 rows

    I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace size!
    create or replace procedure delete_rows(v_days number)
    is
       l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
       where_cond VARCHAR2(32767);
    begin
       where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' )) ';
       l_sql_stmt := l_sql_stmt || where_cond;
       IF v_days IS NOT NULL THEN
           EXECUTE IMMEDIATE l_sql_stmt;
       END IF;
    end;
    I think I can use a cursor and commit at every 10,000 %ROWCOUNT, but even before posting the thread, I feel I will get bounced! ;-)
    Please help me out with this!
    Cheers
    Sarma!

    Hello
    In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
    SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
    Table created.
    SQL>
    SQL> select count(*) from dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
      COUNT(*)
         35726
    SQL>
    SQL> DECLARE
           ln_DelSize   NUMBER := 10000;
           ln_DelCount  NUMBER;
         BEGIN
           LOOP
             DELETE
             FROM   dt_test_delete
             WHERE  last_ddl_time < SYSDATE - 100
             AND    rownum <= ln_DelSize;
             ln_DelCount := SQL%ROWCOUNT;
             dbms_output.put_line(ln_DelCount);
             EXIT WHEN ln_DelCount = 0;
             COMMIT;
           END LOOP;
         END;
         /
    10000
    10000
    10000
    5726
    0
    PL/SQL procedure successfully completed.
    SQL>
    HTH
    David

  • Commit after alter table statement or not?

    Hi,
    Is it necessary to put a commit after the following statement, or is it automatically committed?
    Alter table tab_name drop column col_name;
    Thanks

    Khurram,
    > Isnt Eric you are, i mean isnt yours synonym :)
    Erm... simple answer: no. We are not the same person. I just know that Eric, like yourself, makes good contributions to these threads, and then someone like that comes onto the forums trying to make himself look better and put down the regular contributors, which isn't really on, is it? I think you'll agree.
    CREATE PUBLIC SYNONYM Eric FOR Blushadow;
    hehe.
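    As for the original question: Oracle DDL such as ALTER TABLE issues an implicit commit, so no explicit COMMIT is needed, and any pending DML in the session is committed along with it. A minimal sketch, reusing the table from the question and a hypothetical second column:
    INSERT INTO tab_name (other_col) VALUES ('pending');   -- hypothetical uncommitted change
    ALTER TABLE tab_name DROP COLUMN col_name;              -- DDL commits implicitly, including the INSERT above
    -- No COMMIT is required; a ROLLBACK here would not undo either statement.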

  • Commit after select?

    Is it necessary to give a commit after each SELECT in Oracle? Can SELECTs without a commit influence database performance?
    Thank you for the answer.
    Lenka

    Hello
    I would imagine it is an artifact of using SQL Server or DB2 or something similar. For certain transaction isolation levels, SQL Server (for example) has to lock the rows being queried so that a consistent view of the data can be returned, so committing after a select ensures that these locks are removed, allowing others to read and write the data.
    Oracle handles things differently, writers don't block readers and readers don't block writers. It is all part of the multi version read consistency model which is covered in the concepts guide. There are also some very interesting articles on asktom:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:10261219059254362776::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1886476148373
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c01_02intro.htm#46633
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2414
    HTH
    David

  • Why commit after prepare some SQL

    Hi:
    In the "TTCLASSES GUIDE TimesTen 6.0", all the example code follow the rule that call commit just after the code prepapre some SQL.
    This is strange to me. Why commit after prepare?
    Does prepare start a transaction?
    Does prepare lock some table?
    What if I just don't call commit?
    Shall I call commit after I drop the prepared SQL?
    Regards
    hardrock

    Hi,
    TTClasses, and the demos, are coded to try to 'enforce' TimesTen 'best practice'. They include a commit after the prepares since, until very recent releases of TimesTen, it was highly recommended to commit after prepare because prepare acquired, and held, locks on some of the system catalog tables. Also, prepare does start a transaction, so you will need to commit or rollback at some point to complete that transaction, e.g. before you can disconnect. In TT 7.0 and later releases, prepare no longer holds the locks - it releases them at the end of the prepare operation. So it is less important to commit after prepares in TT 7.0. However, prepare does still start a transaction, so a commit (or rollback) is still needed at some point.
    Now, in general, an application should be performing all its prepares just once at 'startup' time (or at least at connection open time) so in general there is no big deal to open a connection, do all the prepares and then do one commit to close the transaction.
    Dropping a prepared statement does not require a commit as it does not start a transaction and does not lock anything.
    Chris
