Commit in cursors

Is it possible to perform a commit inside a cursor loop? Please give me an example.
DECLARE
  CURSOR cur_emp IS
    SELECT substr(a.name, 1, 12) NAME
      FROM tmp_employee_dtl a
     WHERE ROWNUM < 5
     ORDER BY CODE;
BEGIN
  FOR lp_emp IN cur_emp
  LOOP
    INSERT INTO M_FORM_EMPLOYEE (NAME) VALUES (lp_emp.name);
    COMMIT;
  END LOOP;
END;
This example is OK...
Edited by: Sukanta on Aug 19, 2011 9:45 PM

Not entirely correct. Performing a commit means allocating a SQL handle for that statement and parsing it. This creates a cursor.
To use this cursor (i.e. do the actual commit), the execute interface of the cursor is called.
PL/SQL does this transparently, but a COMMIT statement is still a cursor - albeit an implicit one in PL/SQL.
There are other problems with executing a commit cursor while fetching from another cursor, such as ignoring Oracle's multi-version concurrency control design: every commit creates new versions of the changed data, which can invalidate the older read-consistent version the still-open cursor is reading from (ORA-01555: snapshot too old).
Executing a commit cursor on every loop iteration also results in slower performance and more resource usage in Oracle. That approach only suits less well designed databases whose monolithic lock managers consume resources for locking; in Oracle, a billion row locks are as cheap as zero row locks, as there is no locking overhead.
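To illustrate, a minimal sketch of the usual alternative for the original example (same TMP_EMPLOYEE_DTL and M_FORM_EMPLOYEE tables as above): do the work as a single SQL statement and commit once, after the transaction's work is complete, so the commit cursor is executed exactly once.
BEGIN
  -- one set-based INSERT ... SELECT replaces the row-by-row cursor loop
  INSERT INTO m_form_employee (name)
    SELECT substr(a.name, 1, 12)
      FROM tmp_employee_dtl a
     WHERE ROWNUM < 5;
  -- commit once, after all the work in the transaction is done
  COMMIT;
END;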

Similar Messages

  • When i type a comma the cursor skips back a space

    In Mail, every time I type a comma, the cursor skips back one space... help! I'm working on a 2008 iMac.
    Thank you, in advance!
    Jennifer

    http://support.apple.com/kb/TS1248 Intel-based Apple Portables: Troubleshooting unresponsive trackpad issues

  • Question about using Db cursors safely outside a transaction

    I have a question re: proper use of the Dbc (Db cursor) to avoid
    self-deadlock when two cursors are opened by the same thread and used only
    for reading. (This is when open in full transactional mode, with an
    environment and all.)
    My first question is, should I bother? Or is it equivalent to always starting a
    transaction, even before read operations? My reading of the doc says it is
    not equivalent. The doc says "in the presence of transactions all locks are
    held until commit". I read it as saying that reading with a cursor outside a
    transaction provides degree 2 isolation (cursor stability), with greater
    concurrency than reading inside a transaction (which provides degree 3
    isolation). What I'm attempting here is to avoid degree 3 isolation, and
    locks held until commit, for cursor reads -- I want the greater concurrency.
    This is just a technique to do reading outside a transaction, all writing
    will be done within transactions, and all reading will be done with a txnid
    too, whenever a transaction is in progress. Am I OK so far?
    Here's how I imagine getting away with it. Tell me if this works please :-)
    and (of course) if there's a better way. Suppose DB_THREAD was specified,
    but to avoid the dreaded self-deadlock when using multiple cursors outside a
    transaction, all such cursors are used only to read and are dup'ed from a
    single cursor (one kept as a prototype in each thread for that purpose,
    solely to ensure a single locker ID for all these read cursors; the
    prototype cursor is never used to lock anything). As I read the doc, that
    prevents the thread from self-deadlocking, provided it doesn't start any
    transactions while such read cursors are extant (other than the prototype
    cursor, which shouldn't matter because it never holds any locks). I
    understand that if the same thread did start a transaction, it could
    self-deadlock with such a read cursor, but I won't be doing that.
    My next question is, am I still OK if multiple Db's are placed in one file?
    The doc says in that case lock contention can only occur for page
    allocations -- seeming to imply that competition among read-only cursors
    won't ever cause it -- though they could be contending with other threads
    that are writing -- could thread self-deadlock ever occur here? I'm not
    worried about real deadlock with the other (writer) thread, just thread
    self-deadlock between my two read cursors.
    Many thanks if you could confirm my reasoning (or dash my hopes) before I
    find out the long, hard nondeterministic way if it's any good.

    Good, so I've overdesigned my solution -- I can dispense with dup()ing a prototype cursor and just use multiple read-only cursors made by Db::cursor with a NULL txnid.
    Many thanks for your prompt response.
    I am David Christie at samizdat.org BTW -- we exchanged email last year re: a potential C++ std container interface for Berkeley DB -- sometime before the merger. I hope it's all worked out well for your sleepycat guys. It's nice to see some of you are still around and responding to the open source community.

  • Any general tips on getting better performance out of multi table insert?

    I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general. Now that the query is built and works fine, I am sad to see it is quite slow.
    I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
    First let me describe my scenario to see if you agree that my performance is slow...
    It's an INSERT ALL command, which ends up inserting into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
    Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
    Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
    Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
    Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
    Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
    Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
    The parallelism is just about the only customization I have done myself. Why 4? I don't know; it's pretty arbitrary, to be honest.
    Indexes?
    Table 1 has 3 index + PK.
    Table 2 has 0 index + FK + PK.
    Table 3 has 4 index + FK + PK
    Table 4 has 3 index + FK + PK
    Table 5 has 4 index + FK + PK
    None of the indexes are anything crazy; maybe 3 or 4 of them are on multiple columns, 2-3 columns max. The rest are on single columns.
    The query itself looks something like this:
    insert /*+ append */ all
      when 1=1 then
        into table1 (...) values (...)
        into table2 (...) values (...)
      when a=b then
        into table3 (...) values (...)
      when a=c then
        into table3 (...) values (...)
      when p=q then
        into table4 (...) values (...)
      when x=y then
        into table5 (...) values (...)
    select .... from source_table
    Hints I have tried are APPEND, no APPEND, and PARALLEL (though adding PARALLEL seemed to make the query run in serial, according to my session browser).
    Now for the performance:
    It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
    Does that seem normal or am I expecting too much?
    I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much, but maybe 30k or so on each table is a reasonable goal?
    If it seems my performance is slow, what else do you think I should try? Is there any information I could gather to see whether the database is poorly configured for this?
    P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ora-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder whether the sheer amount of data queued for commit/rollback is causing some of the problem?
    Edited by: trant on Jun 27, 2011 9:29 PM
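    Regarding the P.S.: one common pattern for restartability (a sketch only, not a claim about this particular database) is to load in committed chunks with BULK COLLECT and FORALL rather than one monolithic INSERT ALL. The column list below is a hypothetical placeholder and only one of the five target tables is shown; note also the caveat at the top of this thread - committing while the driving cursor is still open can expose the run to ORA-01555 if undo space is tight.
    DECLARE
      -- hypothetical: the cursor selects exactly the columns of TABLE1, in order
      CURSOR c_src IS
        SELECT col1, col2
          FROM source_table;
      TYPE t_rows IS TABLE OF c_src%ROWTYPE INDEX BY PLS_INTEGER;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO table1 VALUES l_rows(i);   -- one target table shown for brevity
        COMMIT;   -- per-chunk commit: restartable, but no longer one atomic transaction
      END LOOP;
      CLOSE c_src;
    END;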

    Looks like there are about 54 sessions on my database, 7 of which belong to me (2 taken by TOAD, 4 by my parallel slave sessions and 1 by the master of those 4).
    In v$session_event there are 546 rows; if I filter to the SIDs of my current sessions and order by time_waited_micro desc:
    510     events in waitclass Other     30670     9161     329759     10.75     196     3297590639     1736664284     1893977003     0     Other
    512     events in waitclass Other     32428     10920     329728     10.17     196     3297276553     1736664284     1893977003     0     Other
    243     events in waitclass Other     21513     5     329594     15.32     196     3295935977     1736664284     1893977003     0     Other
    223     events in waitclass Other     21570     52     329590     15.28     196     3295898897     1736664284     1893977003     0     Other
    241     row cache lock     1273669     0     42137     0.03     267     421374408     1714089451     3875070507     4     Concurrency
    241     events in waitclass Other     614793     0     34266     0.06     12     342660764     1736664284     1893977003     0     Other
    241     db file sequential read     13323     0     3948     0.3     13     39475015     2652584166     1740759767     8     User I/O
    241     SQL*Net message from client     7     0     1608     229.65     1566     16075283     1421975091     2723168908     6     Idle
    241     log file switch completion     83     0     459     5.54     73     4594763     3834950329     3290255840     2     Configuration
    241     gc current grant 2-way     5023     0     159     0.03     0     1591377     2685450749     3871361733     11     Cluster
    241     os thread startup     4     0     55     13.82     26     552895     86156091     3875070507     4     Concurrency
    241     enq: HW - contention     574     0     38     0.07     0     378395     1645217925     3290255840     2     Configuration
    512     PX Deq: Execution Msg     3     0     28     9.45     28     283374     98582416     2723168908     6     Idle
    243     PX Deq: Execution Msg     3     0     27     9.1     27     272983     98582416     2723168908     6     Idle
    223     PX Deq: Execution Msg     3     0     25     8.26     24     247673     98582416     2723168908     6     Idle
    510     PX Deq: Execution Msg     3     0     24     7.86     23     235777     98582416     2723168908     6     Idle
    243     PX Deq Credit: need buffer     1     0     17     17.2     17     171964     2267953574     2723168908     6     Idle
    223     PX Deq Credit: need buffer     1     0     16     15.92     16     159230     2267953574     2723168908     6     Idle
    512     PX Deq Credit: need buffer     1     0     16     15.84     16     158420     2267953574     2723168908     6     Idle
    510     direct path read     360     0     15     0.04     4     153411     3926164927     1740759767     8     User I/O
    243     direct path read     352     0     13     0.04     6     134188     3926164927     1740759767     8     User I/O
    223     direct path read     359     0     13     0.04     5     129859     3926164927     1740759767     8     User I/O
    241     PX Deq: Execute Reply     6     0     13     2.12     10     127246     2599037852     2723168908     6     Idle
    510     PX Deq Credit: need buffer     1     0     12     12.28     12     122777     2267953574     2723168908     6     Idle
    512     direct path read     351     0     12     0.03     5     121579     3926164927     1740759767     8     User I/O
    241     PX Deq: Parse Reply     7     0     9     1.28     6     89348     4255662421     2723168908     6     Idle
    241     SQL*Net break/reset to client     2     0     6     2.91     6     58253     1963888671     4217450380     1     Application
    241     log file sync     1     0     5     5.14     5     51417     1328744198     3386400367     5     Commit
    510     cursor: pin S wait on X     3     2     2     0.83     1     24922     1729366244     3875070507     4     Concurrency
    512     cursor: pin S wait on X     2     2     2     1.07     1     21407     1729366244     3875070507     4     Concurrency
    243     cursor: pin S wait on X     2     2     2     1.06     1     21251     1729366244     3875070507     4     Concurrency
    241     library cache lock     29     0     1     0.05     0     13228     916468430     3875070507     4     Concurrency
    241     PX Deq: Join ACK     4     0     0     0.07     0     2789     4205438796     2723168908     6     Idle
    241     SQL*Net more data from client     6     0     0     0.04     0     2474     3530226808     2000153315     7     Network
    241     gc current block 2-way     5     0     0     0.04     0     2090     111015833     3871361733     11     Cluster
    241     enq: KO - fast object checkpoint     4     0     0     0.04     0     1735     4205197519     4217450380     1     Application
    241     gc current grant busy     4     0     0     0.03     0     1337     2277737081     3871361733     11     Cluster
    241     gc cr block 2-way     1     0     0     0.06     0     586     737661873     3871361733     11     Cluster
    223     db file sequential read     1     0     0     0.05     0     461     2652584166     1740759767     8     User I/O
    223     gc current block 2-way     1     0     0     0.05     0     452     111015833     3871361733     11     Cluster
    241     latch: row cache objects     2     0     0     0.02     0     434     1117386924     3875070507     4     Concurrency
    241     enq: TM - contention     1     0     0     0.04     0     379     668627480     4217450380     1     Application
    512     PX Deq: Msg Fragment     4     0     0     0.01     0     269     77145095     2723168908     6     Idle
    241     latch: library cache     3     0     0     0.01     0     243     589947255     3875070507     4     Concurrency
    510     PX Deq: Msg Fragment     3     0     0     0.01     0     215     77145095     2723168908     6     Idle
    223     PX Deq: Msg Fragment     4     0     0     0     0     145     77145095     2723168908     6     Idle
    241     buffer busy waits     1     0     0     0.01     0     142     2161531084     3875070507     4     Concurrency
    243     PX Deq: Msg Fragment     2     0     0     0     0     84     77145095     2723168908     6     Idle
    241     latch: cache buffers chains     4     0     0     0     0     73     2779959231     3875070507     4     Concurrency
    241     SQL*Net message to client     7     0     0     0     0     51     2067390145     2000153315     7     Network
    (yikes, is there a way to wrap that in the equivalent of other forums' code tags?)
    v$session_wait;
    223     835     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     10     WAITING
    241     22819     row cache lock     cache id     13     000000000000000D     mode     0     00     request     5     0000000000000005     3875070507     4     Concurrency     -1     0     WAITED SHORT TIME
    243     747     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     7     WAITING
    510     10729     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     2     WAITING
    512     12718     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     4     WAITING
    v$sess_io:
    223     0     5779     5741     0     0
    241     38773810     2544298     15107     27274891     0
    243     0     5702     5688     0     0
    510     0     5729     5724     0     0
    512     0     5682     5678     0     0

  • Reg different kinds of select stmts

    Hi All,
    Hope all are doing well,
    Could anyone tell me the different kinds of SELECT statements?
    regards,
    abc xyz

    hi,
    SELECT
    Basic form
    SELECT result [target] FROM source [where] [GROUP BY fields] [ORDER BY order].
    Effect
    Retrieves an extract and/or a set of data from a database table or view (see Relational database ). SELECT belongs to the OPEN SQL command set.
    Each SELECT command consists of a series of clauses specifying different tasks:
    The SELECT result clause specifies
    whether the result of the selection is a table or a single record,
    which columns the result is meant to have and
    whether the result is allowed to include identical lines.
    The INTO target clause specifies the target area into which the selected data is to be read. If the target area is an internal table, the INTO clause specifies
    whether the selected data is to overwrite the contents of the internal table or
    whether the selected data is to be appended to the contents and
    whether the selected data is to be placed in the internal table all at once or in several packets.
    The INTO clause can also follow the FROM clause.
    You can omit the INTO clause. The system then makes the data available in the table work area (see TABLES ) dbtab . If the SELECT clause includes a "*", the command is processed like the identical SELECT * INTO dbtab FROM dbtab statement. If the SELECT clause contains a list a1 ... an , the command is executed like SELECT a1 ... an INTO CORRESPONDING FIELDS OF dbtab FROM dbtab .
    If the result of the selection is meant to be a table, the data is usually (for further information, see the INTO clause) read line by line within a processing loop introduced by SELECT and concluded by ENDSELECT . For each line read, the processing passes through the loop once. If the result of the selection is meant to be a single record, the closing ENDSELECT is omitted.
    The FROM source clause specifies the source (database table or view ) from which the data is to be selected. It also determines
    the type of client handling,
    the behavior for buffered tables and
    the maximum number of lines to be read.
    The WHERE where clause specifies the conditions which the result of the selection must satisfy. It thus determines the lines of the result table. Normally - i.e. unless a client field is specified in the WHERE clause - only data of the current client is selected. If you want to select across other clients, the FROM clause must include the addition ... CLIENT SPECIFIED .
    The GROUP-BY fields clause combines groups of lines together into single lines. A group is a set of lines which contain the same value for every database field in the GROUP BY clause.
    The ORDER-BY order clause stipulates how the lines of the result table are to be ordered.
    Each time the SELECT statement is executed, the system field SY-DBCNT contains the number of lines read so far. After ENDSELECT , SY-DBCNT contains the total number of lines read.
    The return code value is set as follows:
    SY-SUBRC = 0 At least one line was read.
    SY-SUBRC = 4 No lines were read.
    SY-SUBRC = 8 The search key was not fully qualified.
    (only with SELECT SINGLE ). The returned single record is an arbitrary line of the solution set.
    Example
    Output the passenger list for the Lufthansa flight 0400 on 28.02.1995:
    TABLES SBOOK.
    SELECT * FROM SBOOK
      WHERE
        CARRID   = 'LH '      AND
        CONNID   = '0400'     AND
        FLDATE   = '19950228'
      ORDER BY PRIMARY KEY.
      WRITE: / SBOOK-BOOKID, SBOOK-CUSTOMID,   SBOOK-CUSTTYPE,
               SBOOK-SMOKER, SBOOK-LUGGWEIGHT, SBOOK-WUNIT,
               SBOOK-INVOICE.
    ENDSELECT.
    Performance
    In client/server environments, storing database tables in local buffers (see SAP buffering ) can save considerable amounts of time because the time required to make an access via the network is much more than that needed to access a locally buffered table.
    Notes
    A SELECT command on a table for which SAP buffering is defined in the ABAP/4 Dictionary is normally satisfied from the SAP buffer by bypassing the database. This does not apply with
    - SELECT SINGLE FOR UPDATE
    - SELECT DISTINCT in the SELECT clause ,
    - BYPASSING BUFFER in the FROM clause ,
    - ORDER BY f1 ... fn in the ORDER-BY clause ,
    - aggregate functions in the SELECT clause ,
    - when using an IS [NOT] NULL WHERE condition ,
    or if the generic key part is not qualified in the WHERE condition for a generically buffered table.
    Authorization checks are not supported by the SELECT statement, so you must program these yourself.
    In dialog systems, the database system locking mechanism cannot always guarantee to synchronize the simultaneous access of several users to the same dataset. In many cases, it is therefore advisable to use the SAP locking mechanism .
    Changes to data in a database are only finalized after a database commit (see LUW ). Prior to this, any database update can be reversed by a database rollback (see Programming transactions ). At the lowest isolation level (see the section on the "uncommitted read" under Locking mechanism ), this can result in the dataset selected by the SELECT command not really being written to the database. While a program is selecting data, a second program can add, change or delete lines at the same time. Then, the changes made by the second program are reversed by rolling back the database system. The selection of the first program thus reflects only a very temporary state of the database. If such "phantom data" is not acceptable for a program, you must either use the SAP locking mechanism or at least set the isolation level of the database system to "committed read" (see Locking mechanism ).
    In a SELECT-ENDSELECT loop, the CONTINUE statement terminates the current loop pass prematurely and starts the next.
    If one of the statements in a SELECT ... ENDSELECT loop results in a database commit, the cursor belonging to the SELECT ... ENDSELECT loop is lost and the processing terminates with a runtime error. Since each screen change automatically generates a database commit, statements such as CALL SCREEN , CALL DIALOG , CALL TRANSACTION or MESSAGE are not allowed within a SELECT ... ENDSELECT loop.
    Related OPEN CURSOR , FETCH and CLOSE CURSOR
    SELECT clause
    Variants
    1. SELECT [SINGLE [FOR UPDATE] | DISTINCT] *
    2. SELECT [SINGLE [FOR UPDATE] | DISTINCT] s1 ... sn
    3. SELECT [SINGLE [FOR UPDATE] | DISTINCT] (itab)
    Effect
    The result of a SELECT statement is itself a table . The SELECT clause describes which columns this table is supposed to have.
    In addition, you can use the optional addition SINGLE or DISTINCT if you want only certain lines of the solution set to be visible for the calling program:
    SINGLE The result of the selection is a single record . If this record cannot be uniquely identified, the first line of the solution set is selected. The addition FOR UPDATE protects the selected record against parallel changes by other transactions until the next database commit occurs (see LUW and Database locking ). If the database system detects a deadlock, the result is a runtime error.
    DISTINCT Any lines which occur more than once are automatically removed from the selected dataset.
    Note
    To ensure that a record is uniquely determined, you can fully qualify all fields of the primary key by linking them together with AND in the WHERE condition.
    Note
    Performance
    The additions SINGLE FOR UPDATE and DISTINCT exclude the use of SAP buffering .
    The addition DISTINCT requires sorting on the database server and should therefore only be specified if duplicates are likely to occur.
    Variant 1
    SELECT [SINGLE [FOR UPDATE] | DISTINCT] *
    Effect
    In the result set, the columns correspond exactly in terms of order, ABAP/4 Dictionary type and length to the fields of the database table (or view ) specified in the FROM clause .
    Example
    Output all flight connections from Frankfurt to New York:
    TABLES SPFLI.
    SELECT * FROM SPFLI
             WHERE
               CITYFROM = 'FRANKFURT' AND
               CITYTO   = 'NEW YORK'.
      WRITE: / SPFLI-CARRID, SPFLI-CONNID.
    ENDSELECT.
    Example
    Output all free seats on the Lufthansa flight 0400 on 28.02.1995:
    TABLES SFLIGHT.
    DATA   SEATSFREE TYPE I.
    SELECT SINGLE * FROM SFLIGHT
                    WHERE
                      CARRID   = 'LH '      AND
                      CONNID   = '0400'     AND
                      FLDATE   = '19950228'.
    SEATSFREE = SFLIGHT-SEATSMAX - SFLIGHT-SEATSOCC.
    WRITE: / SFLIGHT-CARRID, SFLIGHT-CONNID,
             SFLIGHT-FLDATE, SEATSFREE.
    Variant 2
    SELECT [SINGLE [FOR UPDATE] | DISTINCT] s1 ... sn
    Effect
    The order, ABAP/4 Dictionary type and length of the columns of the result set are explicitly defined by the list s1 ... sn . Each si has the form
    ai or ai AS bi .
    Here, ai stands either for
    a field f of the database table or
    an aggregate expression.
    bi is an alternative name for the i-th column of the result set.
    When using INTO CORRESPONDING FIELDS OF wa in the INTO clause , you can specify an alternative column name to assign a column of the result set uniquely to a column of the target area.
    An aggregate expression uses an aggregate function to group together data from one or all columns of the database table. Aggregate expressions consist of three or four components:
    An aggregate function, immediately followed by an opening parenthesis
    DISTINCT (optional)
    The database field f
    A closing parenthesis
    All components of an aggregate expression must be separated by at least one blank.
    The following aggregate functions are available:
    MAX Returns the greatest value in the column determined by the database field f for the selected lines. Specifying DISTINCT does not change the result. NULL values are ignored unless all values in a column are NULL values. In this case, the result is NULL .
    MIN Returns the smallest value in the column determined by the database field f for the selected lines. Specifying DISTINCT does not change the result. NULL values are ignored unless all values in a column are NULL values. In this case, the result is NULL .
    AVG Returns the average value in the column determined by the database field f for the selected lines. AVG can only apply to a numeric field. NULL values are ignored unless all values in a column are NULL values. In this case, the result is NULL .
    SUM Returns the sum of all values in the column determined by the database field f for the selected lines. SUM can only apply to a numeric field. NULL values are ignored unless all values in a column are NULL values. In this case, the result is NULL .
    COUNT Returns the number of different values in the column determined by the database field f for the selected lines. Specifying DISTINCT is obligatory here. NULL values are ignored unless all values in a column are NULL values. In this case, the result is 0
    COUNT( * ) Returns the number of selected lines. If the SELECT command contains a GROUP BY clause , it returns the number of lines for each group. The form COUNT(*) is also allowed.
    If ai is a field f , MAX( f ) , MIN( f ) or SUM( f ) , the corresponding column of the result set has the same ABAP/4 Dictionary format as f . With COUNT( f ) or COUNT( * ) , the column has the type INT4 , with AVG( f ) the type FLTP .
    If you specify aggregate functions together with one or more database fields in a SELECT clause, all database fields not used in one of the aggregate functions must be listed in the GROUP-BY clause . Here, the result of the selection is a table.
    If only aggregate functions occur in the SELECT clause, the result of the selection is a single record. Here, the SELECT command is not followed later by an ENDSELECT .
    Notes
    This variant is not available for pooled tables and cluster tables .
    If the SELECT clause contains a database field of type LCHAR or LRAW , you must specify the appropriate length field immediately before.
    Notes
    Performance
    Specifying aggregate functions excludes the use of SAP buffering .
    Since many database systems do not manage the number of table lines and therefore have to retrieve this at some cost, the function COUNT( * ) is not suitable for checking whether a table contains a line or not. To do this, it is best to use SELECT SINGLE f ... for any table field f .
    If you only want to select certain columns of a database table, you are recommended to specify a list of fields in the SELECT clause or to use a View .
    Examples
    Output all flight destinations for Lufthansa flights from Frankfurt:
    TABLES SPFLI.
    DATA   TARGET LIKE SPFLI-CITYTO.
    SELECT DISTINCT CITYTO
           INTO TARGET FROM SPFLI
           WHERE
             CARRID   = 'LH '       AND
             CITYFROM = 'FRANKFURT'.
      WRITE: / TARGET.
    ENDSELECT.
    Output the number of airline carriers which fly to New York:
    TABLES SPFLI.
    DATA   COUNT TYPE I.
    SELECT COUNT( DISTINCT CARRID )
           INTO COUNT FROM SPFLI
           WHERE
             CITYTO = 'NEW YORK'.
    WRITE: / COUNT.
    Output the number of passengers, the total weight and the average weight of luggage for all Lufthansa flights on 28.02.1995:
    TABLES SBOOK.
    DATA:  COUNT TYPE I, SUM TYPE P DECIMALS 2, AVG TYPE F.
    DATA:  CONNID LIKE SBOOK-CONNID.
    SELECT CONNID COUNT( * ) SUM( LUGGWEIGHT ) AVG( LUGGWEIGHT )
           INTO (CONNID, COUNT, SUM, AVG)
           FROM SBOOK
           WHERE
             CARRID   = 'LH '      AND
             FLDATE   = '19950228'
           GROUP BY CONNID.
      WRITE: / CONNID, COUNT, SUM, AVG.
    ENDSELECT.
    Variant 3
    SELECT [SINGLE [FOR UPDATE] | DISTINCT] (itab)
    Effect
    Works like SELECT [SINGLE [FOR UPDATE] | DISTINCT] s1 ... sn if the internal table itab contains the list s1 ... sn as ABAP/4 source code, and like SELECT [SINGLE [FOR UPDATE] | DISTINCT] * , if itab is empty. The internal table itab can only have one field which must be of type C and cannot be more than 72 characters long. itab must appear in parentheses and there should be no blanks between the parentheses and the table name.
    Note
    With this variant, the same restrictions apply as for SELECT [SINGLE [FOR UPDATE] | DISTINCT] s1 ... sn .
    Example
    Output all Lufthansa flight routes:
    TABLES: SPFLI.
    DATA:   FTAB(72) OCCURS 5 WITH HEADER LINE.
    REFRESH FTAB.
    FTAB = 'CITYFROM'. APPEND FTAB.
    FTAB = 'CITYTO'.   APPEND FTAB.
    SELECT DISTINCT (FTAB)
           INTO CORRESPONDING FIELDS OF SPFLI
           FROM SPFLI
           WHERE
             CARRID   = 'LH'.
      WRITE: / SPFLI-CITYFROM, SPFLI-CITYTO.
    ENDSELECT.
    check this one:
    http://www.sts.tu-burg.de/teaching/sap_r3/ABAP4/select.htm

  • Select for all entries

    Hi,
          I am new to ABAP reports. I want to know why we should use SELECT FOR ALL ENTRIES in a query when we can retrieve the data directly by accessing the table in the database dictionary.
          Experts, please give me the reasons; I want to understand the concept behind it. It would be better if you could kindly explain this with the help of code.
         With regards,
           Abir.

    HI
    GOOD
    SELECT
    Basic form
    SELECT result [target] FROM source [where] [GROUP BY fields] [ORDER BY order].
    Effect
    Retrieves an extract and/or a set of data from a database table or view (see Relational database ). SELECT belongs to the OPEN SQL command set.
    Each SELECT command consists of a series of clauses specifying different tasks:
    The SELECT result clause specifies
    whether the result of the selection is a table or a single record,
    which columns the result is meant to have and
    whether the result is allowed to include identical lines.
    The INTO target clause specifies the target area into which the selected data is to be read. If the target area is an internal table, the INTO clause specifies
    whether the selected data is to overwrite the contents of the internal table or
    whether the selected data is to be appended to the contents and
    whether the selected data is to be placed in the internal table all at once or in several packets.
    The INTO clause can also follow the FROM clause.
    You can omit the INTO clause. The system then makes the data available in the table work area (see TABLES ) dbtab . If the SELECT clause includes a "*", the command is processed like the identical SELECT * INTO dbtab FROM dbtab statement. If the SELECT clause contains a list a1 ... an , the command is executed like SELECT a1 ... an INTO CORRESPONDING FIELDS OF dbtab FROM dbtab .
    If the result of the selection is meant to be a table, the data is usually (for further information, see the INTO clause) read line by line within a processing loop introduced by SELECT and concluded by ENDSELECT . For each line read, the processing passes through the loop once. If the result of the selection is meant to be a single record, the closing ENDSELECT is omitted.
    The FROM source clause specifies the source (database table or view ) from which the data is to be selected. It also determines
    the type of client handling,
    the behavior for buffered tables and
    the maximum number of lines to be read.
    The WHERE where clause specifies the conditions which the result of the selection must satisfy. It thus determines the lines of the result table. Normally - i.e. unless a client field is specified in the WHERE clause - only data of the current client is selected. If you want to select across other clients, the FROM clause must include the addition ... CLIENT SPECIFIED .
    The GROUP-BY fields clause combines groups of lines together into single lines. A group is a set of lines which contain the same value for every database field in the GROUP BY clause.
    The ORDER-BY order clause stipulates how the lines of the result table are to be ordered.
    Each time the SELECT statement is executed, the system field SY-DBCNT contains the number of lines read so far. After ENDSELECT , SY-DBCNT contains the total number of lines read.
    The return code value is set as follows:
    SY-SUBRC = 0 At least one line was read.
    SY-SUBRC = 4 No lines were read.
    SY-SUBRC = 8 The search key was not fully qualified.
    (only with SELECT SINGLE ). The returned single record is an arbitrary line of the solution set.
    Example
    Output the passenger list for the Lufthansa flight 0400 on 28.02.1995:
    TABLES SBOOK.
    SELECT * FROM SBOOK
      WHERE
        CARRID   = 'LH '      AND
        CONNID   = '0400'     AND
        FLDATE   = '19950228'
      ORDER BY PRIMARY KEY.
      WRITE: / SBOOK-BOOKID, SBOOK-CUSTOMID,   SBOOK-CUSTTYPE,
               SBOOK-SMOKER, SBOOK-LUGGWEIGHT, SBOOK-WUNIT,
               SBOOK-INVOICE.
    ENDSELECT.
    Note
    Performance
    In client/server environments, storing database tables in local buffers (see SAP buffering ) can save considerable amounts of time because the time required to make an access via the network is much more than that needed to access a locally buffered table.
    Notes
    A SELECT command on a table for which SAP buffering is defined in the ABAP/4 Dictionary is normally satisfied from the SAP buffer by bypassing the database. This does not apply with
    - SELECT SINGLE FOR UPDATE
    - SELECT DISTINCT in the SELECT clause ,
    - BYPASSING BUFFER in the FROM clause ,
    - ORDER BY f1 ... fn in the ORDER-BY clause ,
    - aggregate functions in the SELECT clause ,
    - when using IS [NOT] NULL WHERE condition ,
    or if the generic key part is not qualified in the WHERE condition for a generically buffered table.
    Authorization checks are not supported by the SELECT statement, so you must program these yourself.
    In dialog systems, the database system locking mechanism cannot always guarantee to synchronize the simultaneous access of several users to the same dataset. In many cases, it is therefore advisable to use the SAP locking mechanism .
    Changes to data in a database are only finalized after a database commit (see LUW ). Prior to this, any database update can be reversed by a database rollback (see Programming transactions ). At the lowest isolation level (see the section on the "uncommitted read" under Locking mechanism ), this can result in the dataset selected by the SELECT command not really being written to the database. While a program is selecting data, a second program can add, change or delete lines at the same time. Then, the changes made by the second program are reversed by rolling back the database system. The selection of the first program thus reflects only a very temporary state of the database. If such "phantom data" is not acceptable for a program, you must either use the SAP locking mechanism or at least set the isolation level of the database system to "committed read" (see Locking mechanism ).
    In a SELECT-ENDSELECT loop, the CONTINUE statement terminates the current loop pass prematurely and starts the next.
    If one of the statements in a SELECT ... ENDSELECT loop results in a database commit, the cursor belonging to the SELECT ... ENDSELECT loop is lost and the processing terminates with a runtime error. Since each screen change automatically generates a database commit, statements such as CALL SCREEN , CALL DIALOG , CALL TRANSACTION or MESSAGE are not allowed within a SELECT ... ENDSELECT loop.
    Related OPEN CURSOR , FETCH and CLOSE CURSOR
    GO THROUGH THIS LINK
    http://www.geocities.com/SiliconValley/Campus/6345/select.htm
    THANKS
    MRUTYUN

  • How to fix ORA-33262: Analytic workspace MY_AW does not exist ?

    Here is my version of : http://download.oracle.com/docs/cd/E11882_01/olap.112/e10795/select.htm#CBBGEGFA
    I don't understand why I get this ORA-33262, it just does not make sense ...
    The API should be doing AW ATTACH MYSCHEMA.MY_AW RO
    Solution 1 : The DBA on my team thinks there is no way for her to create a synonym (or something like that) for MYSCHEMA.MY_AW so that I (the API) can use just MY_AW. Does anyone know a way to do it?
    Maybe the second call to _displayResult() needs to ATTACH the AW while the first does not ...
    Solution 2 : How could I specify to the API to use MYSCHEMA.MY_AW instead of MY_AW ?
    Thanks in advance !
    JP
    DataProvider dp = new DataProvider();
    OracleConnection connection = (OracleConnection)connectionFactory.getConnection(); // LOGGED USING PROXY AUTH WITH THE USER (COULD BE ANY USER) AUTHENTICATED TO THE WEB APP AND ACTUALLY REQUESTING THIS TEST
    UserSession session = dp.createSession(connection);
    MdmRootSchema mdmRootSchema = (MdmRootSchema)dp.getMdmMetadataProvider().getRootSchema();
    MdmDatabaseSchema mdmGlobalSchema = (MdmDatabaseSchema)mdmRootSchema.getDatabaseSchema("MYSCHEMA");
    MdmPrimaryDimension mdmDim = (MdmPrimaryDimension)mdmGlobalSchema.getTopLevelObject(dimension.getName());
    test.testExample(mdmDim);
    public void testExample(MdmPrimaryDimension mdmProdDim) {
    // Get the Source for the short label attribute of the dimension.
    MdmAttribute labelAttribute = mdmProdDim.getShortValueDescriptionAttribute();
    // prodShortLabel, which is the Source for the short value description attribute of the PRODUCT_AWJ dim
    Source prodShortLabel = labelAttribute.getSource();
    // prodHier, which is the Source for the Product Primary hierarchy.
    MdmLevelHierarchy mdmProdHier = (MdmLevelHierarchy) mdmProdDim.getDefaultHierarchy();
    StringSource prodHier = (StringSource) mdmProdHier.getSource();
    // levelSrc, which is the Source for the Family level of the Product Primary hierarchy of the PRODUCT_AWJ dimension.
    MdmHierarchyLevel mdmHierarchyLevel = mdmProdHier.getHierarchyLevels().iterator().next();
    Source levelSrc = mdmHierarchyLevel.getSource();
    MdmAttribute mdmAncestorAttribute = mdmProdHier.getAncestorsAttribute();
    // prodHierAncsAttr, which is the Source for the ancestors attribute of the hierarchy.
    Source prodHierAncsAttr = mdmAncestorAttribute.getSource();
    MdmAttribute mdmParentAttribute = mdmProdHier.getParentAttribute();
    // prodHierParentAttr, which is the Source for the parent attribute of the hierarchy.
    Source prodHierParentAttr = mdmParentAttribute.getSource();
    int pos = 1;
    // Get the element at the specified position of the level Source.
    Source levelElement = levelSrc.at(pos);
    // Select the element of the hierarchy with the specified value.
    Source levelSel = prodHier.join(prodHier.value(), levelElement);
    // Get ancestors of the level element.
    Source levelElementAncs = prodHierAncsAttr.join(prodHier, levelElement);
    // Get the parent of the level element.
    Source levelElementParent = prodHierParentAttr.join(prodHier, levelElement);
    // Get the children of a parent.
    Source prodHierChildren = prodHier.join(prodHierParentAttr, prodHier.value());
    // Select the children of the level element.
    Source levelElementChildren = prodHierChildren.join(prodHier, levelElement);
    // Get the short value descriptions for the elements of the level.
    Source levelSrcWithShortDescr = prodShortLabel.join(levelSrc);
    // Get the short value descriptions for the children.
    Source levelElementChildrenWithShortDescr =
    prodShortLabel.join(levelElementChildren);
    // Get the short value descriptions for the parents.
    Source levelElementParentWithShortDescr =
    prodShortLabel.join(prodHier, levelElementParent, true);
    // Get the short value descriptions for the ancestors.
    Source levelElementAncsWithShortDescr =
    prodShortLabel.join(prodHier, levelElementAncs, true);
    // Commit the current Transaction.
    commit();
    // Create Cursor objects and display their values.
    System.out.println("Level element values:");
    _displayResult(levelSrcWithShortDescr, false); // WORKS WITH NO PROB, I SEE THE NORMAL OUTPUT AS IN THE EXAMPLE
    System.out.println("\nLevel element at position " + pos + ":");
    _displayResult(levelElement,false); // I GET ORA-33262, SEE BELOW
    System.out.println("\nParent of the level element:");
    _displayResult(levelElementParent,false);
    System.out.println("\nChildren of the level element:");
    _displayResult(levelElementChildrenWithShortDescr,false);
    System.out.println("\nAncestors of the level element:");
    _displayResult(levelElementAncs,false);
    }
    private void _displayResult(Source source, boolean displayLocVal) {
    CursorManager cursorManager =
    dp.createCursorManager(source); // Exception for ORA-33262 is thrown here
    Cursor cursor = cursorManager.createCursor();
    cpw.printCursor(cursor, displayLocVal);
    // Close the CursorManager.
    cursorManager.close();
    }
    Error class: Express Failure
    Server error descriptions:
    DPR: cannot create server cursor, Generic at TxsOqDefinitionManager::crtCurMgrs4
    SEL: Unable to generate an execution plan for the query, Generic at TxsOqExecutionPlanGenerator::generate(TxsOqSourceSnapshot*)
    INI: XOQ-00703: error executing OLAP DML command "(AW ATTACH MY_AW RO : ORA-33262: Analytic workspace MY_ AW does not exist.
    )", Generic at TxsOqAWManager::executeCommand
    Edited by: J-P on Jun 29, 2010 9:58 AM

    David Greenfield wrote:
    The error happens when the server is executing the OLAP DML command "AW ATTACH CENSUS RO". This in turn gets an error of the form "ORA-33262: Analytic workspace MY_AW does not exist."
    Oops, sorry. When posting I always try to get rid of names specific to my application/company; I forgot to change CENSUS to MY_AW in the error message.
    In particular it is attaching the AW named CENSUS, but is complaining about MY_AW. I have a few questions.
    (1) In which AW does your PRODUCT_AWJ dimension live? CENSUS or MY_AW?
    I don't use PRODUCT_AWJ (I kept the comments from Oracle's example), at line 8, I use dimension.getName() instead
    This dimension lives in the AW named "MY_AW"
    >
    (2) Can you connect CENSUS on the command line (logged in as the same user where the error occurs)?
    Using AWM, I tried the command "AW ATTACH MY_AW RO"; it does not work, I get the same ORA error. I have to use "AW ATTACH MYSCHEMA.MY_AW RO".
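    For reference, the owner-qualified attach can also be tried from SQL*Plus via DBMS_AW (just a sketch, assuming the DBMS_AW package is accessible to the connecting user):
    BEGIN
      -- attach the analytic workspace read-only, using the owner-qualified name
      dbms_aw.execute('AW ATTACH MYSCHEMA.MY_AW RO');
    END;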
    (3) Is there any kind of AUTOGO or PREMIT_READ program in CENSUS that refers to MY_AW?
    No, sorry again, my mistake
    >
    (4) Have you set up any kind of security on the cubes (via AWM) that may be firing during the attach?
    Not to my knowledge. The DBA knows I have this problem, so she should have thought about it... I'll ask her just in case.
    Thank you,
    JP

  • Where I might be going wrong in Bulk collect, forall

    DECLARE
      CURSOR cur_sub_rp IS
        SELECT a.sub_account, b.rate_plan, b.sub_last_name, a.sub_ssn
          FROM stg_sub_master_month_history a, sub_phone_rateplan b
         WHERE a.sub_account = b.sub_account (+)
           AND a.month_id = b.month_id;
      TYPE t_values_tab IS TABLE OF cur_sub_rp%ROWTYPE INDEX BY BINARY_INTEGER;
      values_tab t_values_tab;
    BEGIN
      OPEN cur_sub_rp;
      LOOP
        FETCH cur_sub_rp BULK COLLECT INTO values_tab LIMIT 1000;
        EXIT WHEN cur_sub_rp%NOTFOUND;
      END LOOP;
      CLOSE cur_sub_rp;
      FORALL i IN values_tab.FIRST .. values_tab.LAST
        INSERT INTO sub_phn_1 VALUES values_tab(i);
    END;
    When I select records from sub_phn_1, it shows no rows selected.
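    For comparison, a minimal corrected sketch of the same block (same cursor and tables as above): the FORALL has to run inside the loop, once per fetched batch, because each FETCH overwrites values_tab; exiting on the collection count instead of %NOTFOUND guarantees the final partial batch is processed, and the COMMIT at the end is added only so other sessions can see the rows.
    DECLARE
      CURSOR cur_sub_rp IS
        SELECT a.sub_account, b.rate_plan, b.sub_last_name, a.sub_ssn
          FROM stg_sub_master_month_history a, sub_phone_rateplan b
         WHERE a.sub_account = b.sub_account (+)
           AND a.month_id = b.month_id;
      TYPE t_values_tab IS TABLE OF cur_sub_rp%ROWTYPE INDEX BY BINARY_INTEGER;
      values_tab t_values_tab;
    BEGIN
      OPEN cur_sub_rp;
      LOOP
        FETCH cur_sub_rp BULK COLLECT INTO values_tab LIMIT 1000;
        EXIT WHEN values_tab.COUNT = 0;       -- nothing left to process
        FORALL i IN 1 .. values_tab.COUNT     -- insert this batch before fetching the next
          INSERT INTO sub_phn_1 VALUES values_tab(i);
      END LOOP;
      CLOSE cur_sub_rp;
      COMMIT;                                 -- make the rows visible to other sessions
    END;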

    I have a working example which is close to your script (it has already been posted before):
    --Create some data
    DROP TABLE emp;
    DROP TABLE dept;
    CREATE TABLE emp
    (empno NUMBER(4) NOT NULL,
    ename VARCHAR(10),
    job VARCHAR(9),
    mgr NUMBER(4),
    hiredate DATE,
    sal NUMBER(7, 2),
    comm NUMBER(7, 2),
    deptno NUMBER(2));
    INSERT INTO emp
         VALUES (7369, 'SMITH', 'CLERK', 7902, '17-DEC-1980', 800, NULL, 20);
    INSERT INTO emp
         VALUES (7499, 'ALLEN', 'SALESMAN', 7698, '20-FEB-1981', 1600, 300, 30);
    INSERT INTO emp
         VALUES (7521, 'WARD', 'SALESMAN', 7698, '22-FEB-1981', 1250, 500, 30);
    INSERT INTO emp
         VALUES (7566, 'JONES', 'MANAGER', 7839, '2-APR-1981', 2975, NULL, 20);
    CREATE TABLE dept
    (deptno NUMBER(2),
    dname VARCHAR(14),
    loc VARCHAR(13) );
    INSERT INTO dept
         VALUES (20, 'RESEARCH', 'DALLAS');
    INSERT INTO dept
         VALUES (30, 'SALES', 'CHICAGO');
    COMMIT;
    DECLARE
       CURSOR c1
       IS
          (SELECT e.*, d.dname
             FROM emp e JOIN dept d ON d.deptno = e.deptno);
       TYPE c1_tbl_typ IS TABLE OF c1%ROWTYPE
          INDEX BY PLS_INTEGER;
       emp_tbl   c1_tbl_typ;
    BEGIN
       OPEN c1;
       FETCH c1
       BULK COLLECT INTO emp_tbl;
       CLOSE c1;
       --Test emp_tbl
       FOR i IN 1 .. emp_tbl.COUNT
       LOOP
          DBMS_OUTPUT.put_line (emp_tbl (i).empno || ', ' || emp_tbl (i).dname);
          NULL;
       END LOOP;
    END;
    --or :
    DECLARE
       CURSOR c1
       IS
          (SELECT *
             FROM emp
            WHERE deptno = 20);
       TYPE c1_tbl_typ IS TABLE OF c1%ROWTYPE
          INDEX BY PLS_INTEGER;
       emp_tbl   c1_tbl_typ;
    BEGIN
       OPEN c1;
       LOOP
          FETCH c1
          BULK COLLECT INTO emp_tbl LIMIT 100;
          FORALL i IN 1 .. emp_tbl.COUNT
             INSERT INTO emp_1
                  VALUES emp_tbl (i);
          EXIT WHEN c1%NOTFOUND;
       END LOOP;
       CLOSE c1;
    END;
    SELECT *
      FROM emp_1;
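    Applying that same pattern back to the original block: the FORALL has to run inside the fetch loop, before the NOTFOUND test, and the inserts need a COMMIT; otherwise at most the final partial batch ever reaches SUB_PHN_1, and even that is not committed. A minimal corrected sketch of the original block (keeping the posted WHERE clause as-is):
    DECLARE
       CURSOR cur_sub_rp IS
          SELECT a.sub_account, b.rate_plan, b.sub_last_name, a.sub_ssn
            FROM stg_sub_master_month_history a, sub_phone_rateplan b
           WHERE a.sub_account = b.sub_account (+)
             AND a.month_id = b.month_id;
       TYPE t_values_tab IS TABLE OF cur_sub_rp%ROWTYPE INDEX BY BINARY_INTEGER;
       values_tab   t_values_tab;
    BEGIN
       OPEN cur_sub_rp;
       LOOP
          FETCH cur_sub_rp BULK COLLECT INTO values_tab LIMIT 1000;
          FORALL i IN 1 .. values_tab.COUNT   -- insert each batch; 1..COUNT is also safe when a batch comes back empty
             INSERT INTO sub_phn_1 VALUES values_tab (i);
          EXIT WHEN cur_sub_rp%NOTFOUND;
       END LOOP;
       CLOSE cur_sub_rp;
       COMMIT;
    END;
    /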
    --Additionally, what if your select does not return records?

  • Use of select... end select

    Hi
    Why do we use SELECT and ENDSELECT... is there any alternative for that?

    Hello,
    Consider the following scenario
    data : int_dath type table occurs 0 with header line. "with header line
    table     :    key1 key2 key3 Var1 Var2 Var3
    record1 :      X      Y     Z       1     2       3
    record2 :      X      Y     M      4     5      6
    In our program we need to pass just key1 and key2 and get all relevant records.
    That means if we write:
    Select * from table into int_dath where key1 = 'X' and Key2 = 'Y'.
    just one record will come, but there is more than one record for that key combination. If you do a syntax check it will ask for "endselect",
    because we are trying to pass multiple records into the header line.
    One alternative solution:
    Select * from table into table int_dath where key1 = 'X' and Key2 = 'Y'.
    If we mention the table keyword before the internal table, the select statement will pass all the records with key1 = 'X' and Key2 = 'Y' into int_dath.
    There we can do a read operation or looping, whichever is suitable for the requirement.
    SAP never recommends SELECT ... ENDSELECT, and it is a total performance disaster if we use loops or nested selects inside a SELECT ... ENDSELECT.
    Please check the following SAP note on performance:
    If a SELECT - ENDSELECT loop contains a statement that triggers a database commit, the cursor belonging to the loop is lost and a program termination and runtime error occur. Remote Function Calls and changes of screen always lead to a database commit. The following statements are consequently not allowed within a SELECT-ENDSELECT loop: CALL FUNCTION ... STARTING NEW TASK, CALL FUNCTION ... DESTINATION, CALL FUNCTION ... IN BACKGROUND TASK, CALL SCREEN, CALL DIALOG, CALL TRANSACTION, and MESSAGE.
    Reward if helpful and please respond if you have clarity with the explanation above
    Regards
    Byju

  • (V8.X) Points to consider when setting the OPEN_CURSORS parameter very high

    Product: ORACLE SERVER
    Date written: 2003-06-02
    (V8.X) Points to consider when setting the OPEN_CURSORS parameter very high
    ==============================================================
    PURPOSE
    This note covers the OPEN_CURSORS parameter that can be set in the initSID.ora
    file, and ways to release open cursors from memory.
    Explanation
    The default value of the OPEN_CURSORS parameter is 50. Below we look at the
    impact of setting it to a very large value, and at how to keep cursors from
    staying open after a commit.
    1. OPEN_CURSORS = n
    This parameter is the maximum number of cursors a single session can have open
    at one time. Each increment of 1 uses about 25 bytes of fixed UGA, so a value
    that is too large can hurt Oracle's memory performance.
    With OPEN_CURSORS=1000, a contiguous area of 25 * 1000 = 25 Kbytes is allocated
    in memory. If you are running MTS, remember that a session's UGA is stored in
    the SGA rather than the PGA; if allocating the cursor area for one session would
    push other cached objects out of the shared pool, it is better to reduce the
    OPEN_CURSORS parameter.
    2. CLOSE_CACHED_OPEN_CURSORS = TRUE
    If your PL/SQL commits frequently without closing its cursors, the cursors are
    closed automatically when the process is killed and the session ends; but even
    after a CLOSE CURSOR, a cursor may not be released from memory immediately.
    If you cannot predict when such cursors will be released, set the parameter
    below in the initSID.ora file; then a commit or rollback issued from PL/SQL or
    an application such as Pro*C closes the cursors automatically.
    close_cached_open_cursors = true
    In other words, if the goal is to release cursors from memory quickly, setting
    this parameter is one option.
    If this parameter is false, cursors opened by PL/SQL are kept open so that
    successive executions of the same statement do not need to open a new cursor.
    In more detail: when using PL/SQL, the cursors referencing the objects used by
    a procedure are cached in memory, and this parameter makes those cursors close
    automatically when a commit or rollback occurs.
    That is, they are closed per transaction (commit, rollback).
    However, setting it to true can cause a latch bottleneck, so decide carefully
    between true and false.
    If the application reuses the same SQL statements very frequently, it is better
    to leave this false, since you do not want cursors referencing currently
    executing objects to be closed.
    3. SESSION_CACHED_CURSORS = N
    In the past there was also a parameter called session_cached_cursors=n.
    The session_cached_cursors parameter existed only up to Oracle Server V7.1.
    The number specified is the number of cursors that can be kept open and cached
    for the duration of a session.
    4. COMMIT WORK RELEASE in Pro*C or PL/SQL
    A commit ends the current transaction, applies the changes made so far to the
    database, and returns the resources that were in use.
    Within PL/SQL, a commit alone does not completely finish the work until END is
    reached; COMMIT WORK RELEASE; frees all resources and disconnects the session
    from the database.
    For reference, 8.1.5 introduced the command commit force;, an option that
    commits only the current transaction.
    5. CURSOR_SPACE_FOR_TIME = TRUE
    This parameter keeps an object in the shared pool for as long as there is an
    open cursor referencing its shared SQL area.
    Example
    none
    Reference Documents
    <Note:30781.1>
    <Note:1009170.6>
    Oracle8i Designing and Tuning for Performance
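    As a quick way to see how many cursors each session currently holds open (useful when judging whether OPEN_CURSORS is sized sensibly), a query along these lines against the standard v$ views can be used; this is a sketch and not part of the original note:
    SELECT s.sid, s.username, st.value open_cursors
      FROM v$session s, v$sesstat st, v$statname n
     WHERE st.sid = s.sid
       AND n.statistic# = st.statistic#
       AND n.name = 'opened cursors current'
     ORDER BY st.value DESC;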

  • Running procedures concurrently

    Hi,
    I am having some issues with the time it takes for one of my packages to complete.
    Here is an overview of what it does.
    1) Extracts data input last night and populates a specific table - basic raw data with FK's to other tables
    2) Validate a given set of data fields, 20 of them, based on the raw data inserted above
    I have created one function to validate each data item i.e.
    fnc_validate_dob
    fnc_validate_address
    fnc_validate_telephone
    There are 20 of these, each with varying complexity. They are all called one after the other. The problem is, there are 6 validation functions that take approx 15 mins each to complete - so instantly we have 90 mins taken up, which is unacceptable.
    Is there anyway I can call these so they run concurrently?
    Any ideas would be very helpful.
    Regards
    Mark

    Hi,
    Each of the functions update a specific table that holds rejections - linked back to the raw data by the key.
    The business what to identify what fields are failing and also see the data entered, so they can change business process etc.
    Most of the validation functions are pretty straightforward - it is just the 6 that I am having trouble with, due to the way the data is stored and how it has to be extracted.
    I will try and post some code to help explain.
    Here is the main body of the validation script for a particular field.
        --VALIDATE FIN CLASS CODE
        --AS STORED IN QUESTIONS AND ANSWERS, BRING BACK ALL RECORDS FIRST - INCLUDING
        --NULLS, THEN FILTER THESE TO REMOVE VALID CODES
        INSERT INTO E2e_Rejected_Fin_Class
          (Appt_Id,
           Rejected,
           Value_Entered,
           Control)
          SELECT Ap.Appt_Id,
                 0 Rejected,
                 (SELECT a."answer_externalcode"
                  FROM   Ugp.e_Episode       e,
                         Ugp.e_Answerepisode Ae,
                         Ugp.e_Answer        a,
                         Ugp.e_Question      q
                  WHERE  e."episode_id" = Ap.Episode_Id
                  AND    ae."episode_id" = e."episode_id"
                  AND    a."answer_id" = ae."answer_id"
                  AND    q."quest_id" = a."quest_id"
                  AND    e."episode_iscurrent" = 1
                  AND    q."quest_externalcode" = 'Finclass'),
                 Ap.Control
          FROM   E2e_Appointments Ap;
        COMMIT;
        --OPEN CURSOR TO IDENTIFY THOSE RECORDS WITH AN ENTRY AND VALIDATE THEM
        --SET REJECTED = 1 IF THE CODE IS INVALID
        OPEN Cur_Fin_Class;
        LOOP
          FETCH Cur_Fin_Class
            INTO Rec_Fin_Class;
          EXIT WHEN Cur_Fin_Class%NOTFOUND;
          IF Commons.Pkg_Data_Validation.Fnc_Validate_Financial_Class(Rec_Fin_Class.Value_Entered,
                                                                      Rec_Fin_Class.Funder) = 1 THEN
            UPDATE E2e_Rejected_Fin_Class
            SET    Rejected = 1
            WHERE  Fin_Class_Id = Rec_Fin_Class.Fin_Class_Id;
          END IF;
        END LOOP;
        CLOSE Cur_Fin_Class;
        COMMIT;
        --DELETE EVERYTHING FROM TABLE WHERE REJECTED = 0
        DELETE FROM E2e_Rejected_Fin_Class WHERE Rejected = 0;
        COMMIT;
    And this is the function that is called - it returns 1 if the value fails validation:
    FUNCTION Fnc_Validate_Financial_Class(Pin_Fin_Class IN VARCHAR2,
                                            Pin_Funder    IN VARCHAR2) RETURN NUMBER IS
        -- Author:  MLLOYD
        -- Purpose: Validate Financial Class - fails if null and funder != 'NONE'
        --          (InsuranceType_code from UG where Self Paid
        -- Created: 22/12/2010
        -- Revision History
        -- Date            Version        Comments
        -- 22/12/2010         1           Created
      BEGIN
        IF Pin_Fin_Class IS NULL
           AND Pin_Funder != 'NONE' THEN
          RETURN 1;
        ELSE
          RETURN 0;
        END IF;
      END Fnc_Validate_Financial_Class;
      -- ************************************************************************************
    I am thinking that it isn't actually the validation that takes the time, it is the initial data select - I have the DBAs looking at tuning the database for me, as I think the required indexes are there.
    It would be great to be able to call all of the 6 procedures together.
    Regards
    Mark
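    One way to get the six slow validations to run concurrently is to submit each one as its own job, so each runs in its own session. A sketch using DBMS_SCHEDULER (10g and later), where run_validation_1 .. run_validation_6 are hypothetical wrapper procedures, one per slow validation step:
    BEGIN
       FOR i IN 1 .. 6 LOOP
          DBMS_SCHEDULER.create_job (
             job_name   => 'VALIDATE_JOB_' || i,
             job_type   => 'PLSQL_BLOCK',
             job_action => 'BEGIN run_validation_' || i || '; END;',  -- hypothetical wrappers
             enabled    => TRUE,    -- start immediately
             auto_drop  => TRUE);   -- remove the job definition once it finishes
       END LOOP;
    END;
    /
    Each job is a separate session and transaction, so every wrapper has to commit its own work, and the calling process needs to poll USER_SCHEDULER_JOBS (or a status table) to know when all six have finished before moving on.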

  • CPU used by oracle user

    Hi,
    On 10g R2, on Windows Server 2003:
    Oracle user SCOTT connects from SQL*Plus and runs a large query. Is there any way to see how much CPU he uses?
    Thanks.

    CPU used by a user(session wise)
    select a.sid, a.username, a.osuser, c.name, b.value
    from v$session a, v$sesstat b, v$statname c
    where a.sid = b.sid
    and b.statistic# = c.statistic#
    and c.name like '%CPU%'
    and a.username='SCOTT'
    order by a.sid, c.name
    Per user (session-wise), the following information:
    1.cpu usages
    2.memory sorts
    3.table scan
    4.commit
    5.cursor
    6.physical reads
    7.logical reads
    8.BCHR (buffer cache hit ratio)
    select a.sid, a.username, a.osuser, c.name, b.value
    from v$session a, v$sesstat b, v$statname c
    where a.sid = b.sid
    and b.statistic# = c.statistic#
    and (c.name like '%CPU%' or c.name like '%sorts (memory)%' or c.name like '%table scan%' or c.name like '%commit%' or c.name like '%cursor%' or c.name like '%read%' or c.name like '%buffer%' or c.name like '%cache%')
    and b.value > 0
    order by a.sid, c.name
    If you have the licences for the diagnostic pack and performance pack, you can look at the active session history for this type of information. You can query view DBA_HIST_ACTIVE_SESS_HISTORY, or run the ashrpt.sql script from $ORACLE_HOME/rdbms/admin, or use OEM for a graphic representation of the workload over time.
    HTH
    Girish Sharma
    Edited by: Girish Sharma on Dec 8, 2009 5:23 PM
    "and a.username='SCOTT'" added.

  • Keyword autocomplete

    The Keyword Autocomplete in its current form is causing me many headaches. For example, say that I have previously entered the keyword "Palo Alto Baylands". Now I have a photograph taken in Palo Alto, and I wish to add the keyword "Palo Alto" to it. I type in "Palo Alto" and hit "Enter". Lightroom cleverly suggests that the keyword I want is "Palo Alto Baylands" and fills that in.
    I type fast, and unless the word has 20+ letters, can finish typing it before Lightroom can suggest it and I can recognize it, or arrow down and pick it from a list.
    Two ideas:
    1) Require the user to hit "Tab" to accept an autocompleted keyword. This is a standard procedure for this type of feature that I recall from text terminal interface with Unix operating systems.
    2) Allow us to turn keyword autocomplete OFF. I've searched everywhere and cannot find where to do this, and no references in Help.
    Better yet, implement both.
    Thanks.

    I loved drop-down choices for keywords in LR 1.1.  It saved many keystrokes and avoided misspelling and multiple similar keywords (bell tower, belltower, Bell tower, etc.)  LR 2.2 has ruined this feature by creating a separate field for this feature. Thus, to enter 3 keywords for one image, you have to go back and forth, mouse to keyboard, 3 times - VERY inefficient.
    THE ideal for maximum efficiency: A shortcut key that places the cursor in a blank keyword window or a comma-space-cursor in a field that has previous entries.  Then as you type, present a drop-down of previously used entries - then you arrow to highlight - <enter><enter> adds the keyword and closes the window or <enter>-comma creates a space and allows you to start typing a new keyword. You can now go through unlimited photos without ever touching the mouse.

  • How to update these two tables

    Hello,
    I have two tables (in Oracle 11g R2), and have to lock certain rows in each one of them for update...
    Here is the sample data and the expected result after updating; please help me with the update statements.
    Thanks in advance!!!
    drop table t1;
    drop table t2;
    create table t1(
    t1_id     number(5) primary key,
    t1_col2   varchar2(20),
    t1_col3   varchar2(10),
    t2_id     varchar2(5));
    create table t2(
    t2_id    varchar2(5) primary key,
    t2_col2  varchar2(10),
    t2_col3  number(2),
    t1_id    number);
    insert into t1 values(1, '1 - col2', 'AB', null);
    insert into t1 values(2, '2 - col2', 'AB', null);
    insert into t1 values(3, '3 - col2', 'AB', null);
    insert into t1 values(4, '4 - col2', 'AC', null);
    insert into t1 values(5, '5 - col2', 'AC', null);
    insert into t1 values(6, '6 - col2', 'AC', null);
    insert into t1 values(7, '7 - col2', 'AC', null);
    insert into t1 values(8, '8 - col2', 'AC', null);
    insert into t1 values(9, '9 - col2', 'AC', null);
    insert into t1 values(10, '10 - col2', 'AC', null);
    commit;
    insert into t2 values('11001', 'ABC', 12, null);
    insert into t2 values('11021', 'ABC', 12, null);
    insert into t2 values('11022', 'ABC', 12, null);
    insert into t2 values('11023', 'ABC', 12, null);
    insert into t2 values('11024', 'ABC', 12, null);
    insert into t2 values('11025', 'ABC', 12, null);
    insert into t2 values('11030', 'ABC', 12, null);
    insert into t2 values('11035', 'ABC', 12, null);
    insert into t2 values('11051', 'ABC', 12, null);
    insert into t2 values('11061', 'ABC', 12, null);
    insert into t2 values('11071', 'ABC', 12, null);
    insert into t2 values('11081', 'ABC', 11, null);
    insert into t2 values('11091', 'ABC', 11, null);
    commit;
    declare
      cursor c1 is select *
                     from t1
                    where t1_id in(select t1_id from (select t1_id from t1 where t1_col3 = 'AC' order by t1_id) where rownum <= 5)
                   for update;
      cursor c2 is select *
                     from t2
                    where t2_id in(select t2_id from (select t2_id from t2 where t2_col3 = 12 order by t2_id) where rownum <= 5)
                   for update;
    begin
      for rec_c1 in c1 loop
      end loop;
    end;
    The result must look like:
         T1_ID T1_COL2              T1_COL3    T2_ID
             4 4 - col2             AC         11001
             5 5 - col2             AC         11021
             6 6 - col2             AC         11022
             7 7 - col2             AC         11023
             8 8 - col2             AC         11024
    T2_ID T2_COL2       T2_COL3      T1_ID
    11001 ABC                12          4
    11021 ABC                12          5         
    11022 ABC                12          6
    11023 ABC                12          7
    11024 ABC                12          8

    With the help of Bencol :-)
    DECLARE
       CURSOR c1
       IS
          SELECT a.t1_id, b.t2_id
            FROM t1 a CROSS JOIN t2 b
           WHERE (a.t1_id, b.t2_id) IN (SELECT t1.t1_id, t2.t2_id
                                          FROM    (SELECT t1_id
                                                        , ROW_NUMBER () OVER (ORDER BY t1_id) t1_rn
                                                     FROM t1
                                                    WHERE t1_col3 = 'AC') t1
                                               JOIN
                                                  (SELECT t2_id
                                                         , ROW_NUMBER () OVER (ORDER BY t2_id) t2_rn
                                                     FROM t2
                                                    WHERE t2_col3 = 12) t2
                                               ON t1.t1_rn = t2.t2_rn
                                         WHERE t1.t1_rn <= 5)
          FOR UPDATE;
    BEGIN
       FOR rec_c1 IN c1
       LOOP
          UPDATE t1
             SET t2_id = rec_c1.t2_id
           WHERE t1_id = rec_c1.t1_id;
          UPDATE t2
             SET t1_id = rec_c1.t1_id
           WHERE t2_id = rec_c1.t2_id;
       END LOOP;
    END;
    /
    Regards.
    Al
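    If the explicit FOR UPDATE locking is not strictly required, the same pairing (the nth 'AC' row of t1 matched to the nth t2_col3 = 12 row of t2) can also be done set-based. A sketch for t1 only - a symmetric MERGE would set t2.t1_id:
    MERGE INTO t1 tgt
    USING (SELECT a.t1_id, b.t2_id
             FROM (SELECT t1_id, ROW_NUMBER () OVER (ORDER BY t1_id) rn
                     FROM t1 WHERE t1_col3 = 'AC') a
             JOIN (SELECT t2_id, ROW_NUMBER () OVER (ORDER BY t2_id) rn
                     FROM t2 WHERE t2_col3 = 12) b
               ON a.rn = b.rn
            WHERE a.rn <= 5) m
       ON (tgt.t1_id = m.t1_id)
     WHEN MATCHED THEN
        UPDATE SET tgt.t2_id = m.t2_id;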

  • ORA-01456 : may not perform insert/delete/update operation

    When I use the following stored procedure with Crystal Reports, the following error occurs.
    ORA-01456 : may not perform insert/delete/update operation inside a READ ONLY transaction
    Kindly help me on this, please.
    My stored procedure is as under:-
    create or replace
    PROCEDURE PROC_FIFO
    (CV IN OUT TB_DATA.CV_TYPE,FDATE1 DATE, FDATE2 DATE,
    MSHOLD_CODE IN NUMBER,SHARE_ACCNO IN VARCHAR)
    IS
    --DECLARE VARIABLES
    V_QTY NUMBER(10):=0;
    V_RATE NUMBER(10,2):=0;
    V_AMOUNT NUMBER(12,2):=0;
    V_DATE DATE:=NULL;
    --DECLARE CURSORS
    CURSOR P1 IS
    SELECT *
    FROM FIFO
    WHERE SHARE_TYPE IN ('P','B','R')
    ORDER BY VOUCHER_DATE,
    CASE WHEN SHARE_TYPE='P' THEN 1
    ELSE
    CASE WHEN SHARE_TYPE='R' THEN 2
    ELSE
    CASE WHEN SHARE_TYPE='B' THEN 3
    END
    END
    END,
    TRANS_NO;
    RECP P1%ROWTYPE;
    CURSOR S1 IS
    SELECT * FROM FIFO
    WHERE SHARE_TYPE='S'
    AND TRANS_COST IS NULL
    ORDER BY VOUCHER_DATE,TRANS_NO;
    RECS S1%ROWTYPE;
    --BEGIN QUERIES
    BEGIN
    DELETE FROM FIFO;
    --OPENING BALANCES
    INSERT INTO FIFO
    (VOUCHER_NO,VOUCHER_TYPE,VOUCHER_DATE,TRANS_QTY,TRANS_AMT,TRANS_RATE,
    SHOLD_CODE,SHARE_TYPE,ACC_NO,SHARE_CODE,TRANS_NO)
    SELECT TO_CHAR(FDATE1,'YYYYMM')||'001' VOUCHER_NO,'OP' VOUCHER_TYPE,
    FDATE1-1 VOUCHER_DATE,
    SUM(
    CASE WHEN
    --((SHARE_TYPE ='S' AND DTAG='Y')
    SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) TRANS_QTY,
    SUM(TRANS_AMT),
    NVL(CASE WHEN SUM(TRANS_AMT)<>0
    AND
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    )<>0 THEN
    SUM(TRANS_AMT)/
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) END,0) TRANS_RATE,
    MSHOLD_CODE SHOLD_CODE,'P' SHARE_TYPE,SHARE_ACCNO ACC_NO,
    SHARE_CODE,0 TRANS_NO
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE<FDATE1
    --AND
    --(SHARE_TYPE='S' AND DTAG='Y')
    --OR SHARE_TYPE IN ('P','R','B'))
    group by TO_CHAR(FDATE1,'YYYYMM')||'001', MSHOLD_CODE,SHARE_ACCNO, SHARE_CODE;
    COMMIT;
    --TRANSACTIONS BETWEEND DATES
    INSERT INTO FIFO
    (TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE)
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    CASE WHEN SHARE_TYPE='S' THEN
    NVL(TRANS_RATE-COMM_PER_SHARE,0)
    ELSE
    NVL(TRANS_RATE+COMM_PER_SHARE,0)
    END
    ,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,NULL TRANS_COST,SHARE_TYPE
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE BETWEEN FDATE1 AND FDATE2
    AND
    ((SHARE_TYPE='S' AND DTAG='Y')
    OR SHARE_TYPE IN ('P','R','B'));
    COMMIT;
    --PURCHASE CURSOR
    IF P1%ISOPEN THEN
    CLOSE P1;
    END IF;
    OPEN P1;
    LOOP
    FETCH P1 INTO RECP;
    V_QTY:=RECP.TRANS_QTY;
    V_RATE:=RECP.TRANS_RATE;
    V_DATE:=RECP.VOUCHER_DATE;
    dbms_output.put_line('V_RATE OPENING:'||V_RATE);
    dbms_output.put_line('OP.QTY2:'||V_QTY);
    EXIT WHEN P1%NOTFOUND;
    --SALES CURSOR
    IF S1%ISOPEN THEN
    CLOSE S1;
    END IF;
    OPEN S1;
    LOOP
    FETCH S1 INTO RECS;
    EXIT WHEN S1%NOTFOUND;
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    IF ABS(RECS.TRANS_QTY)<=V_QTY
    AND V_QTY<>0
    AND RECS.TRANS_COST IS NULL THEN
    --IF RECS.TRANS_COST IS NULL THEN
    --dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    --dbms_output.put_line('BAL1:'||V_QTY);
    UPDATE FIFO
    SET TRANS_COST=V_RATE,
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO
    AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATE TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('BAL2:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    END IF;
    IF ABS(RECS.TRANS_QTY)>ABS(V_QTY)
    AND V_QTY<>0 AND RECS.TRANS_COST IS NULL THEN
    UPDATE FIFO
    SET
    TRANS_QTY=-V_QTY,
    TRANS_COST=V_RATE,
    TRANS_AMT=ROUND(V_QTY*V_RATE,2),
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO;
    --AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATING 100000:'||TO_CHAR(V_QTY));
    dbms_output.put_line('UPDATING 100000 TRANS_NO:'||TO_CHAR(RECS.TRANS_NO));
    INSERT INTO FIFO
    (TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE,PUR_DATE)
    VALUES
    (MCL.NEXTVAL,RECS.VOUCHER_NO,RECS.VOUCHER_TYPE,
    RECS.VOUCHER_DATE,RECS.TRANS_QTY+V_QTY,
    RECS.TRANS_RATE,(RECS.TRANS_QTY+V_QTY)*RECS.TRANS_RATE,RECS.SHOLD_CODE,
    RECS.SHARE_CODE,RECS.ACC_NO,
    RECS.DTAG,NULL,'S',V_DATE);
    dbms_output.put_line('INSERTED RECS.QTY:'||TO_CHAR(RECS.TRANS_QTY));
    dbms_output.put_line('INSERTED QTY:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    dbms_output.put_line('INSERTED V_QTY:'||TO_CHAR(V_QTY));
    dbms_output.put_line('INSERTED RATE:'||TO_CHAR(V_RATE));
    COMMIT;
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    IF V_QTY>0 THEN
    V_QTY:=V_QTY+RECS.TRANS_QTY;
    ELSE
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    --dbms_output.put_line('BAL3:'||V_QTY);
    END LOOP;
    V_QTY:=0;
    V_RATE:=0;
    END LOOP;
    CLOSE S1;
    CLOSE P1;
    OPEN CV FOR
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,B.SHARE_CODE,B.ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE, B.SHARE_NAME,A.PUR_DATE
    FROM FIFO A, SHARES B
    WHERE A.SHARE_CODE=B.SHARE_CODE
    --AND A.SHARE_TYPE IS NOT NULL
    ORDER BY VOUCHER_DATE,SHARE_TYPE,TRANS_NO;
    END PROC_FIFO;
    Thanks and Regards,
    Luqman

    Copy from TODOEXPERTOS.COM
    Problem Description
    When running a RAM build you get the following error as seen in the RAM build
    log file:
    14:52:50 2> Updating warehouse tables with build information...
    Process Terminated In Error:
    [Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    (SIGENG02) ([Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    ) Please contact the administrator of your Oracle Express Server application.
    Solution Description
    Here are the following suggestions to try out:
    1. You may want to use oci instead of odbc for your RAM build, provided you
    are running an Oracle database. This is setup through the RAA (relational access
    administrator) maintenance procedure.
    Also make sure your tnsnames.ora file is setup correctly in either net80/admin
    or network/admin directory, to point to the correct instance of Oracle.
    2. Commit or rollback the current transaction, then retry running your
    RAM build. Seems like one or more of your lookup or fact tables have a
    read-only lock on them. This occurs if you modify or add some records to your
    lookup or fact tables but forget to issue a commit or rollback. You need to do
    this through SQL*Plus.
    3. You may need to check what permissions have been given to the relational user.
    The error could be a permissions issue.
    You must give the 'connect' permission or role to the RAM/relational user. You may
    also try giving the 'dba' and 'resource' privileges/roles to this user as a test. In order to
    keep it simple, make sure all your lookup, fact and wh_ tables are created on
    a single new tablespace. Create a new user with the above privileges as the
    owner of the tablespace, as well as the owner of the lookup, fact and wh_
    tables, in order to see if this is a permissions issue.
    In this particular case, the problem was resolved by using oci instead of odbc,
    as explained in suggestion #1.
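    For what it is worth, ORA-01456 simply means that DML was attempted inside a transaction that was started READ ONLY - something reporting tools sometimes do on your behalf, depending on connection settings. A minimal reproduction, using a hypothetical table t purely for illustration:
    CREATE TABLE t (n NUMBER);     -- hypothetical table, for illustration only
    SET TRANSACTION READ ONLY;
    INSERT INTO t VALUES (1);      -- raises ORA-01456
    COMMIT;                        -- ends the read-only transaction
    INSERT INTO t VALUES (1);      -- now succeeds
    So the fix is either to end the read-only transaction (commit or rollback) before calling a procedure that does DML, or to make sure the reporting connection does not start one in the first place.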
