Behaviour of insert ... select statement

Hi,
If we use an insert ... select statement like
insert into TableA
select *
from TableB;
and TableB has 250000 rows, what exactly happens?
Will all 250000 rows be fetched into the database buffer cache (with index scans performed on TableB as applicable) and then inserted into TableA,
or
will it be done in parallel?
We are loading a large volume of data and are facing performance problems where index scans are slowing things down.
Just Curious :)
Rushang.

It's not a secret: Oracle will perform the select, using indexes if it decides to (depending on source table size, stats, optimizer mode, etc.). Rows may be pulled into memory or written to temp space (e.g., if you were returning many, many rows that needed to be grouped). The rows are then inserted.
So, if you have an index on tableb.col1, Oracle may use that index, or it may do a full table scan (FTS). Either way, it will only select the rows that need to be inserted.
The insert does not prevent the select from working as it normally would.
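If the goal is to speed up a large insert ... select, a direct-path (and optionally parallel) insert is the usual technique. A minimal sketch, assuming TableA/TableB from the question and that parallel DML is acceptable on your system (the degree of 4 is an arbitrary example):
-- sketch: direct-path, optionally parallel INSERT ... SELECT
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(a, 4) */ INTO TableA a
SELECT /*+ PARALLEL(b, 4) */ *
FROM TableB b;

COMMIT;  -- a direct-path insert must be committed before the same session can query TableA again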

Similar Messages

  • Smart scan not working with Insert Select statements

    We have observed that smart scan is not working with insert ... select statements, but it does work when the select statements are executed alone.
    Can you please help us explain this behavior?

    There is a specific exadata forum - you would do better to post the question there: Exadata
    I can't give you a definitive answer, but it's possible that this is simply a known limitation similar to the way that "Create table as select" won't run the select statement the same way as the basic select if it involves a distributed query.
    Regards
    Jonathan Lewis
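    One quick way to check whether the select part was offloaded is to compare the session's smart scan statistics before and after running the statement. A hedged sketch (these statistic names exist on Exadata-aware releases; off Exadata the values simply stay at zero):
    -- sketch: session-level smart scan statistics
    SELECT n.name, s.value
    FROM   v$mystat s
    JOIN   v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');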

  • Insert select statement or insert in cursor

    hi all,
    I need a performance comparison between inserting rows one by one from a cursor and a single insert ... select statement,
    for example:
    1. insert into TableA (ColumnA, ColumnB) select A, B from TableB;
    2. cursor my_cur is select A, B from TableB;
    for my_rec in my_cur loop
    insert into TableA (ColumnA, ColumnB) values (my_rec.A, my_rec.B);
    end loop;
    Also, "bulk collect into" can be used (a sketch of that variant appears after the timings below).
    Which one has better performance?
    Thanks for your help,
    kadriye

    What's stopping you from making 100,000 rows of test data and trying it for yourself?
    Edit: I was bored enough to do it myself.
    Starting insert as select 22-JUL-08 11.43.19.544000000 +01:00
    finished insert as select. 22-JUL-08 11.43.19.825000000 +01:00
    starting cursor loop 22-JUL-08 11.43.21.497000000 +01:00
    finished cursor loop 22-JUL-08 11.43.35.185000000 +01:00
    The two second gap between the two is for the delete.
    Message was edited by:
    Dave Hemming
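    For the "bulk collect into" variant mentioned in the question, a minimal sketch might look like this (it assumes TableA/TableB with the columns from the example; the LIMIT of 1000 is an arbitrary choice):
    -- sketch: BULK COLLECT / FORALL variant of the same load
    DECLARE
      TYPE t_a IS TABLE OF TableB.A%TYPE;
      TYPE t_b IS TABLE OF TableB.B%TYPE;
      l_a t_a;
      l_b t_b;
      CURSOR c IS SELECT A, B FROM TableB;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_a, l_b LIMIT 1000;
        EXIT WHEN l_a.COUNT = 0;
        FORALL i IN 1 .. l_a.COUNT
          INSERT INTO TableA (ColumnA, ColumnB) VALUES (l_a(i), l_b(i));
      END LOOP;
      CLOSE c;
      COMMIT;
    END;
    /
    The plain insert ... select is normally still the fastest of the three because everything stays inside the SQL engine; BULK COLLECT/FORALL mainly pays off when some row-by-row PL/SQL processing is unavoidable.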

  • Insert select statement is taking ages

    Sybase version: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008
    Hi guys,
    I have a question about the performance of a statement that is very slow and I'd like to have your input.
    I have the SQL statement below that is taking ages to execute, and I can't work out how to improve it:
    insert SST_TMP_M_TYPE select M_TYPE from MKT_OP_DBF M join TRN_HDR_DBF T on M.M_ORIGIN_NB=T.M_NB where T.M_LTI_NB=@Nson_lti
    @Nson_lti is the same datatype as T.M_LTI_NB
    M.M_ORIGIN_NB=T.M_NB have the same datatype
    TRN_HDR_DBF has 1424951 rows and indexes on M_LTI_NB and M_NB
    table MKT_OP_DBF has 870305 rows
    table MKT_OP_DBF has an index on M_ORIGIN_NB column
    Statistics for index:                   "MKT_OP_ND7" (nonclustered)
    Index column list:                      "M_ORIGIN_NB"
         Leaf count:                        3087
         Empty leaf page count:             0
         Data page CR count:                410256.0000000000000000
         Index page CR count:               566.0000000000000000
         Data row CR count:                 467979.0000000000000000
         First extent leaf pages:           0
         Leaf row size:                     12.1161512343373872
         Index height:                      2
    The representation of M_ORIGIN_NB is
    Statistics for column:                  "M_ORIGIN_NB"
    Last update of column statistics:       Mar  9 2015 10:48:57:420AM
         Range cell density:                0.0000034460903826
         Total density:                     0.0053334921767125
         Range selectivity:                 default used (0.33)
         In between selectivity:            default used (0.25)
    Histogram for column:                   "M_ORIGIN_NB"
    Column datatype:                        numeric(10,0)
    Requested step count:                   20
    Actual step count:                      20
         Step     Weight                    Value
            1     0.00000000        <       0
            2     0.07300889        =       0
            3     0.05263098       <=       5025190
            4     0.05263098       <=       9202496
            5     0.05263098       <=       12664456
            6     0.05263098       <=       13129478
            7     0.05263098       <=       13698564
            8     0.05263098       <=       14735554
            9     0.05263098       <=       15168461
           10     0.05263098       <=       15562067
           11     0.05263098       <=       16452862
           12     0.05263098       <=       16909265
           13     0.05263212       <=       17251573
           14     0.05263098       <=       18009609
           15     0.05263098       <=       18207523
           16     0.05263098       <=       18404113
           17     0.05263098       <=       18588398
           18     0.05263098       <=       18793585
           19     0.05263098       <=       18998992
           20     0.03226340       <=       19574408
    If I look at the showplan, I can see that the indexes on TRN_HDR_DBF are used, but not the one on MKT_OP_DBF:
    QUERY PLAN FOR STATEMENT 16 (at line 35).
        STEP 1
            The type of query is INSERT.
            The update mode is direct.
            FROM TABLE
                MKT_OP_DBF
                M
            Nested iteration.
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 32 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                TRN_HDR_DBF
                T
            Nested iteration.
            Index : TRN_HDR_NDX_NB
            Forward scan.
            Positioning by key.
            Keys are:
                M_NB  ASC
            Using I/O Size 4 Kbytes for index leaf pages.
            With LRU Buffer Replacement Strategy for index leaf pages.
            Using I/O Size 4 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            TO TABLE
                SST_TMP_M_TYPE
            Using I/O Size 4 Kbytes for data pages.
    I was expecting the query to also use the index on MKT_OP_DBF.
    Thanks for your advice,
    Simon

    The total density number for the MKT_OP_DBF.M_ORIGIN_NB column doesn't look very good:
         Range cell density:           0.0000034460903826
         Total density:                0.0053334921767125
    Notice that the total density value is three orders of magnitude larger than the range cell density, which can indicate a largish number of duplicates.  (NOTE: This wide difference between range cell and total density can be referred to as 'skew' - more on this later.)
    Do some M_ORIGIN_NB values have a large number of duplicates?  What does the following query return:
    =====================
    select top 30 M_ORIGIN_NB, count(*)
    from   MKT_OP_DBF
    group by M_ORIGIN_NB
    order by 2 desc, 1
    =====================
    The total density can be used to estimate the number of rows expected for a join (eg, TRN_HDR_DBF --> MKT_OP_DBF).  The largish total density number, when thrown into the optimizer's calculations, may be causing the optimizer to think that the volume of *possible* joins will be more expensive than a join in the opposite direction (MKT_OP_DBF --> TRN_HDR_DBF) which in turn means (as Jeff's pointed out) that you end up table scanning MKT_OP_DBF (as the outer table) because of no SARGs.
    From your description it sounds like you've got the necessary indexes to support a TRN_HDR_DBF --> MKT_OP_DBF join order. (Though it wouldn't hurt to see the complete output from sp_helpindex for both tables just to make sure we're on the same sheet of music.)
    It's hard to say much more without additional details (eg, complete stats for both tables, sp_help for both tables - if you decide to post these I'd recommend posting them as a *.txt attachment).
    I'm assuming you *know* that a join from TRN_HDR_DBF --> MKT_OP_DBF should be much quicker than what you're currently seeing.  If this is the case, I'd probably want to start with:
    =====================
    exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
    go
    exec sp_recompile MKT_OP_DBF
    go
    -- run your query again
    =====================
    By removing the skew from the total density (ie, set total density = range cell density = 0.00000344...) you're telling the optimizer that it can expect a much smaller number of joins for the join order of TRN_HDR_DBF --> MKT_OP_DBF ... and that may be enough for the optimizer to use TRN_HDR_DBF to drive the query.
    NOTE: If sp_modifystats/REMOVE_SKEW_FROM_DENSITY provides the desired join order, keep in mind that you'll need to re-issue this command after each update stats command that modifies the stats on the M_ORIGIN_NB column.  For example, modify your update stats maintenance job to issue sp_modifystats/REMOVE_SKEW_FROM_DENSITY for those special cases where you know it helps query performance.
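    As a hedged sketch of what that maintenance step might look like (the update statistics flavour shown here is just an example - fold it into whatever job you already run):
    =====================
    -- stats maintenance sketch (adapt to your existing job)
    update index statistics MKT_OP_DBF
    go
    -- re-apply the skew removal after every stats refresh that touches M_ORIGIN_NB
    exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
    go
    exec sp_recompile MKT_OP_DBF
    go
    =====================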

  • Strange Behaviour for sequential SELECT statements

    I have all my handles ready to be used.
    I do a StmtPrepare, StmtExecute, StmtFetch
    and I get (let's say) 3 results...
    The result is correct.
    Then, using the very same handles,
    I do a StmtPrepare, StmtExecute, StmtFetch
    and again I get 3 results...
    This result is incorrect:
    I should be getting 10 results, but I only get the first 3.
    If in the first pass I get 4 results, then in the second
    pass I also get 4 results, as if result_count_of_second = result_count_of_first!?
    I get no errors or warnings; everything seems OK.
    I have also checked the SQL statements from TOAD.
    What might cause this, and is there any trick to overcome it?
    Should I free and re-allocate the statement handle?
    (I don't want to do this, but...!?)

    Could you provide some snippets of the code that is
    giving the problem?
    What do you mean that you should be getting 10 results?
    Are you getting an OCI_NO_DATA earlier than expected (on
    the third row instead of the 10th)?
    I think that if you gave some code snippets of your
    OCI program and an example of what results you are
    expecting and what results you are actually getting,
    we might be in a better position to help.

  • Number of rows inserted is different in bulk insert using select statement

    I am facing a problem with a bulk insert that uses a SELECT statement.
    My sql statement is like below.
    strQuery :='INSERT INTO TAB3
    (SELECT t1.c1,t2.c2
    FROM TAB1 t1, TAB2 t2
    WHERE t1.c1 = t2.c1
    AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
    EXECUTE IMMEDIATE strQuery ;
    These SQL statements are inside a procedure. And this procedure is called from C#.
    The number of rows returned by the "SELECT" query is 70.
    On the very first call of this procedure, the number of rows inserted using strQuery is *70*.
    But on the next call (in the same transaction) of the procedure, the number of rows inserted is only *50*.
    And if we keep calling this procedure repeatedly, it will sometimes insert 70 rows, sometimes 50, etc. It is showing some inconsistency.
    On my initial analysis it was found that the default optimizer mode is "ALL_ROWS". When I changed the optimizer mode to "rule", this issue does not occur.
    Has anybody faced this kind of issue?
    Can anyone tell me what the reason for it could be, or suggest any other workaround?
    I am using Oracle 10g R2 version.
    Edited by: user13339527 on Jun 29, 2010 3:55 AM
    Edited by: user13339527 on Jun 29, 2010 3:56 AM

    You very likely have concurrent transactions running on the database:
    >
    By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
    >
    If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
    Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
    You can try to run your test with:
    set transaction isolation level serializable;
    If the problem is not solved, you need to search for possible Oracle bugs on My Oracle Support with keywords like:
    wrong results 10.2
    Edited by: P. Forstmann on 29 June 2010 13:46
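    A hedged sketch of how the test could be wrapped (my_bulk_insert_proc is a hypothetical name for the procedure that runs strQuery; SET TRANSACTION must be the first statement of the transaction):
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN
      my_bulk_insert_proc;   -- hypothetical: the procedure containing EXECUTE IMMEDIATE strQuery
    END;
    /

    COMMIT;  -- the isolation level only lasts until the transaction ends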

  • Insert using select statement

    Hi,
    I am trying to insert values using a select statement, but this is not working:
    INSERT INTO contribution_temp_upgrade
    (PRO_ID,
    OBJECT_NAME,
    DELIVERY_DATE,
    MODULE_NAME,
    INDUSTRY_CATERGORIZATION,
    ADVANTAGES,
    REUSE_DETAILS)
    VALUES
    SELECT
    :P1_PROJECTS,
    wwv_flow.g_f08(vRow),
    wwv_flow.g_f09(vRow),
    wwv_flow.g_f10(vRow),
    wwv_flow.g_f11(vRow),
    wwv_flow.g_f12(vRow),
    wwv_flow.g_f13(vRow)
    FROM DUAL;
    Please let me know what I am missing.
    Thanks
    Sudhir

    Try this
    INSERT INTO contribution_temp_upgrade
    (PRO_ID,
    OBJECT_NAME,
    DELIVERY_DATE,
    MODULE_NAME,
    INDUSTRY_CATERGORIZATION,
    ADVANTAGES,
    REUSE_DETAILS)
    SELECT
    :P1_PROJECTS,
    wwv_flow.g_f08(vRow),
    wwv_flow.g_f09(vRow),
    wwv_flow.g_f10(vRow),
    wwv_flow.g_f11(vRow),
    wwv_flow.g_f12(vRow),
    wwv_flow.g_f13(vRow)
    FROM DUAL;
    Note: when you are inserting values via a select statement, you should not specify the keyword "values".
    I assume you have already assigned a value to your bind variable :P1_PROJECTS and that the rest of the functions will return some value.
    Regards,
    Prazy
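    To make the note concrete, here is a tiny self-contained contrast of the two insert forms (table t and its columns are illustrative, not part of the original post):
    CREATE TABLE t (c1 VARCHAR2(10), c2 VARCHAR2(10));

    INSERT INTO t (c1, c2) VALUES ('a', 'b');            -- VALUES takes a single row of expressions
    INSERT INTO t (c1, c2) SELECT 'c', 'd' FROM dual;    -- a subquery replaces the VALUES clause entirely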

  • How to insert variable value using select statement - Oracle function

    Hi,
    I have a function which inserts a record on the basis of some condition:
    INSERT INTO Case
    (Case_ID,
    Case_Status,
    Closure_Code,
    Closure_Date)
    SELECT newCaseID,
    caseStatus,
    Closure_Code,
    Closure_Date
    FROM Case
    WHERE Case_ID = caseID;
    Now I want a new case status value in place of the caseStatus value in the select statement. I have a variable m_caseStatus and I want to use the value of this variable in the above select statement.
    How can I use it?
    thanks

    Hi,
    > Now I want a new case status value in place of the caseStatus value in the select statement. I have a variable m_caseStatus and I want to use the value of this variable in the above select statement. How can I use it?
    Do not select Case_Status in the inner select, so a NULL will be inserted; then, after the insert, update the case status with m_caseStatus.
    Regards.
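    A minimal sketch of that suggestion, reusing the table and column names from the post (the values in the DECLARE section are placeholders for whatever the real function receives):
    -- insert with NULL for Case_Status, then update it; omitting the column entirely works the same way
    DECLARE
      caseID       Case.Case_ID%TYPE     := 101;    -- placeholder
      newCaseID    Case.Case_ID%TYPE     := 202;    -- placeholder
      m_caseStatus Case.Case_Status%TYPE := 'OPEN'; -- placeholder
    BEGIN
      INSERT INTO Case (Case_ID, Case_Status, Closure_Code, Closure_Date)
      SELECT newCaseID, NULL, Closure_Code, Closure_Date
      FROM   Case
      WHERE  Case_ID = caseID;

      UPDATE Case
      SET    Case_Status = m_caseStatus
      WHERE  Case_ID = newCaseID;
    END;
    /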

  • Select Statement takes more time after immediate insert statement..

    Hello,
    I found the below scenario:
    1. I have a table TABLE1 which has an index on the COL1 field. It has around 40 columns and 100000 rows.
    2. Whenever I insert 100000 rows in bulk (changing the indexed key column) and then execute a SELECT statement in the same session, it takes around 3 minutes to complete.
    3. However, if I open a new session and execute the same select statement, it returns in 2-3 seconds.
    I didn't see anything unusual in the explain plan.. :(
    I suspect buffer cleanout is what is taking the time. Please let me know your opinion.
    Thanks in advance,
    Sach

    sach09 wrote:
    > However, if I open a new session and execute the same select statement, it returns in 2-3 seconds.
    Are you running the query in the other session after running it from the first?
    Aman....

  • SQL insert with select statement having strange results

    So I have the below sql (edited a bit). Now here's the problem.
    I can run the select statement just fine and I get 48 rows back. When I run it as part of the insert statement, a total of 9062 rows are inserted. What gives?
    <SQL>
    INSERT INTO mars_aes_data
    (rpt_id, shpdt, blno, stt, shpr_nad, branch_tableS, csgn_nad,
    csgnnm1, foreign_code, pnt_des, des, eccn_no, entity_no,
    odtc_cert_ind, dep_date, equipment_no, haz_flag, schd_no,
    schd_desc, rec_value, iso_ulti_dest, odtc_exempt, itn,
    liscence_no, liscence_flag, liscence_code, mblno, mot,
    cntry_load, pnt_load, origin_state, airline_prefix, qty1, qty2,
    ref_val, related, routed_flag, scac, odtc_indicator, seal_no,
    line_no, port_export, port_unlading, shipnum, shprnm1, veh_title,
    total_value, odtc_cat_code, unit1, unit2)
    SELECT 49, schemaP.tableS.shpdt, schemaP.tableS.blno,
    schemaP.tableS.stt, schemaP.tableS.shpr_nad,
    schemaP.tableM.branch_tableS, schemaP.tableS.csgn_nad,
    schemaP.tableS.csgnnm1, schemaP.tableD.foreign_code,
    schemaP.tableS.pnt_des, schemaP.tableS.des,
    schemaP.tableD.eccn_no, schemaP.tableN.entity_no,
    schemaP.tableD.odtc_cert_ind, schemaP.tableM.dep_date,
    schemaP.tableM.equipment_no, schemaP.tableM.haz_flag,
    schemaP.tableD.schd_no, schemaP.tableD.schd_desc,
    schemaP.tableD.rec_value,
    schemaP.tableM.iso_ulti_dest,
    schemaP.tableD.odtc_exempt, schemaP.tableM.itn,
    schemaP.tableD.liscence_no,
    schemaP.tableM.liscence_flag,
    schemaP.tableD.liscence_code, schemaP.tableS.mblno,
    schemaP.tableM.mot, schemaP.tableS.cntry_load,
    schemaP.tableS.pnt_load, schemaP.tableM.origin_state,
    schemaP.tableM.airline_prefix, schemaP.tableD.qty1,
    schemaP.tableD.qty2,
    schemaC.func_getRefs@link (schemaP.tableS.ptt, 'ZYX'),
    schemaP.tableM.related, schemaP.tableM.routed_flag,
    schemaP.tableM.scac, schemaP.tableD.odtc_indicator,
    schemaP.tableM.seal_no, schemaP.tableD.line_no,
    schemaP.tableM.port_export,
    schemaP.tableM.port_unlading, schemaP.tableS.shipnum,
    schemaP.tableS.shprnm1, schemaP.tableV.veh_title,
    schemaP.tableM.total_value,
    schemaP.tableD.odtc_cat_code, schemaP.tableD.unit1,
    schemaP.tableD.unit2
    FROM schemaP.tableD@link,
    schemaP.tableM@link,
    schemaP.tableN@link,
    schemaP.tableS@link,
    schemaP.tableV@link
    WHERE tableM.answer IN ('123', '456')
    AND SUBSTR (tableS.area, 1, 1) IN ('A', 'S')
    AND entity_no IN
    ('A',
    'B',
    'C',
    'D',
    'E')
    AND TO_DATE (SUBSTR (tableM.time_stamp, 1, 8), 'YYYYMMDD')
    BETWEEN '01-Mar-2009'
    AND '31-Mar-2009'
    AND tableN.shipment= tableD.shipment(+)
    AND tableN.shipment= tableS.shipnum
    AND tableN.shipment= tableM.shipment(+)
    AND tableN.shipment= tableV.shipment(+)
    </SQL>
    Edited by: user11263048 on Jun 12, 2009 7:23 AM
    Edited by: user11263048 on Jun 12, 2009 7:27 AM

    Can you change this:
    BETWEEN '01-Mar-2009'
    AND '31-Mar-2009'
    To this:
    BETWEEN TO_DATE('01-Mar-2009', 'DD-MON-YYYY')
    AND TO_DATE('31-Mar-2009','DD-MON-YYYY')
    That may make no difference, but you should never rely on implicit conversions like that; they're always likely to cause you nasty surprises.
    If you're still getting the discrepancy, instead of an INSERT ... SELECT, can you try a CREATE TABLE ... AS SELECT, just to see if you get the same result.
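    As a small self-contained illustration of the explicit-conversion point (it only touches DUAL, and the sample date is arbitrary):
    -- explicit TO_DATE on both sides keeps the comparison independent of NLS_DATE_FORMAT
    SELECT CASE
             WHEN TO_DATE('20090315', 'YYYYMMDD')
                  BETWEEN TO_DATE('01-Mar-2009', 'DD-Mon-YYYY')
                      AND TO_DATE('31-Mar-2009', 'DD-Mon-YYYY')
             THEN 'in range'
             ELSE 'out of range'
           END AS range_check
    FROM dual;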

  • Only select statements work (update/insert hang)

    Hi, I am running CF MX version 7,0,2,142559 Standard Edition,
    and ColdFusion hangs every time I attempt an insert or update
    statement against Oracle 8i and 9i using the JDBC thin driver and an
    ODBC socket driver.
    Select statements work fine. I have tried everything I could
    think of and I get the same results. All rights are given to the
    datasource and the user. I can do insert and update statements via
    another application (Toad) with the same Oracle user.
    Any suggestions??? I don't see any hot fixes for this, but
    that doesn't mean one doesn't exist.
    Also, many times it causes the system CPU utilization to stick
    at 100% until I restart ColdFusion.
    Thanks for any help.

    Hi,
    I had similar results on Oracle 10g while using CFMX 7.02. I
    actually updated the macromedia_drivers.jar from the ColdFusion
    support site:
    http://www.adobe.com/cfusion/knowledgebase/index.cfm?id=42dcb10a
    It is an update to the DataDirect JDBC drivers. Try that. If not,
    make sure you have the latest JDBC drivers from Oracle, since
    previous versions would make the updates/inserts hang.

  • SELECT - INSERT/UPDATE statement mapping.

    This might be a silly question, but are there any Java libraries that can perform an arbitrary SELECT statement to INSERT/UPDATE statement mapping? Or indeed, is this in fact possible?
    Just to be a little clearer, if we had the statement;
    SELECT [Field1] FROM [Table1]
    It would map down to;
    INSERT INTO [Table1] VALUES [Field1]='foo'
    Please excuse my INSERT syntax if it is incorrect, I hardly ever use these statements.
    The reason that I ask is because I am writing a piece of software that can perform quite complex extractions from databases, and it would be nice to be able to automatically 'reverse' the extraction, and perform updates on the extracted data.
    Unfortunately, because of the architecture and requirements of the application, I can not just update a resultset and call the update() method.
    Thanks for your time.
    Ben

    You can define a Bean or a simple class that describes a table record type (the table structure), defining all fields, the primary key and maybe more.
    Deriving a select, insert or update statement from it can then be done with ease.

  • How to get the inserted row primary key without using a select statement

    How can I return the primary key of an inserted row without using a select statement?
    Edited by: 849614 on Apr 4, 2011 6:13 AM

    Yes, thanks to all who helped me. It's working fine with
    getGeneratedKeys:
    String hh = "INSERT INTO DIPOFFERTE (DIPOFFERTEID, AUDITUSERIDMODIFIED) VALUES (DIPOFFERTE_SEQ.nextval, ?)";
    String[] generatedColumns = {"DIPOFFERTEID"};
    try (PreparedStatement pstmt = conn.prepareStatement(hh, generatedColumns)) {
        pstmt.setLong(1, 1);          // AUDITUSERIDMODIFIED
        pstmt.executeUpdate();
        try (ResultSet rs = pstmt.getGeneratedKeys()) {
            rs.next();
            // the generated DIPOFFERTEID
            long orderId = rs.getLong(1);
        }
    }
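    If the insert is issued from PL/SQL rather than JDBC, the RETURNING clause gives the same result without getGeneratedKeys. A hedged sketch reusing the table and sequence names from the post:
    DECLARE
      v_id DIPOFFERTE.DIPOFFERTEID%TYPE;
    BEGIN
      INSERT INTO DIPOFFERTE (DIPOFFERTEID, AUDITUSERIDMODIFIED)
      VALUES (DIPOFFERTE_SEQ.NEXTVAL, 1)
      RETURNING DIPOFFERTEID INTO v_id;   -- the generated key comes straight back
      DBMS_OUTPUT.PUT_LINE('Generated key: ' || v_id);
    END;
    /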

  • How to pass values in select statement as a parameter?

    Hi,
    A very simple question: how do I pass the values that I get from the cursor into a select statement? If the table1 values are 1, 2, 3, 4, etc., then on each pass of the cursor I get one value in the variable "offer".
    So I want to pass that value to the select statement. How do I do it?
    The code below does not work.
    drop table L1;
    create table L1
    (col1 varchar(300) null) ;
    insert into L1 (col1)
    select filter_name from table1 ;
    SET SERVEROUTPUT ON;
    DECLARE
    offer table1.col1%TYPE;
    factor INTEGER := 0;
    CURSOR c1 IS
    SELECT col1 FROM table1;
    BEGIN
    OPEN c1; -- PL/SQL evaluates factor
    LOOP
    FETCH c1 INTO offer;
    EXIT WHEN c1%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(offer);
    select * from table1 f where f.filter_name =:offer ;
    factor := factor + 1;
    DBMS_OUTPUT.PUT_LINE(factor);
    END LOOP;
    CLOSE c1;
    END;

    Hi User,
    You are looking for something like this: passing the value to the cursor as a parameter.
    DECLARE
       CURSOR CURR (V_DEPT IN NUMBER)    --- Cursor Declaration which accepts the deptno as parameter.
       IS
          SELECT *
            FROM EMP
           WHERE DEPTNO = V_DEPT;    --- The, Input V_DEPT is passed here.
       L_EMP   EMP%ROWTYPE;
    BEGIN
       OPEN CURR (30);       -- Opening the Cursor to Process the Value for Department Number 30 and Processing it with a Loop below.
       DBMS_OUTPUT.PUT_LINE ('Employee Details for Deptno:30');
       LOOP
          FETCH CURR INTO L_EMP;
          EXIT WHEN CURR%NOTFOUND;
          DBMS_OUTPUT.PUT ('EMPNO: ' || L_EMP.EMPNO || ' is ');
          DBMS_OUTPUT.PUT_LINE (L_EMP.ENAME);
       END LOOP;
       CLOSE CURR;
       DBMS_OUTPUT.PUT_LINE ('Employee Details for Deptno:20'); -- Opening the Cursor to Process the Value for Department Number 20
       OPEN CURR (20);
       LOOP
          FETCH CURR INTO L_EMP;
          EXIT WHEN CURR%NOTFOUND;
          DBMS_OUTPUT.PUT ('EMPNO: ' || L_EMP.EMPNO || ' is ');
          DBMS_OUTPUT.PUT_LINE (L_EMP.ENAME);
       END LOOP;
       CLOSE CURR;
    END;
    Thanks,
    Shankar
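    Applied to the original block, a minimal sketch could look like this (it keeps the poster's table names, reads col1 from L1 where that column actually lives, and uses SELECT ... INTO with the variable referenced without a colon; COUNT(*) is used just so there is a single value to fetch):
    SET SERVEROUTPUT ON;
    DECLARE
      offer     L1.col1%TYPE;
      v_matches INTEGER;
      CURSOR c1 IS
        SELECT col1 FROM L1;
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 INTO offer;
        EXIT WHEN c1%NOTFOUND;
        -- a query inside PL/SQL needs an INTO clause; the variable is used directly, no colon
        SELECT COUNT(*)
          INTO v_matches
          FROM table1 f
         WHERE f.filter_name = offer;
        DBMS_OUTPUT.PUT_LINE(offer || ': ' || v_matches || ' matching row(s) in table1');
      END LOOP;
      CLOSE c1;
    END;
    /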

  • Passing value as a parameter in select statement

    Hi,
    (This is the same question, verbatim, as "How to pass values in select statement as a parameter?" above; see that thread for the full setup and PL/SQL block.)

    Hi Greg,
    Thanks for the response. No, there is no ODB.net involved here.
    If I remove the : from :offer, I get this error now.
    The changed SQL is:
    select * from table1 f where f.filter_name = offer;
    Error report:
    ORA-06550: line 16, column 23:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 16, column 9:
    PL/SQL: SQL Statement ignored
    06550. 00000 - "line %s, column %s:\n%s"
    *Cause:    Usually a PL/SQL compilation error.
    *Action:
