Bulk fetch with dbms_hs_passthrough

Can we use bulk fetch with dbms_hs_passthrough?
Currently I use code like this:
c := dbms_hs_passthrough.open_cursor@mssql;
dbms_hs_passthrough.parse@mssql(c,
'select
RecordId
, some_VERY_long_column_from_mssoft_sql_server as shortcol
from table');
LOOP
nr := dbms_hs_passthrough.fetch_row@mssql(c);
EXIT WHEN nr = 0;
dbms_hs_passthrough.get_value@mssql(c, 1, l_obj_type.RECORDID);
I am on 9.2 and would like to use bulk fetch; the source table consists of more than a million records.

Hi,
DBMS_HS_PASSTHROUGH has no option to do a bulk fetch. However, it does support result sets for some types of gateways. If you are using Generic Connectivity (HSODBC), result sets are not supported.
Regards,
Ed
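Since the package only fetches one row per call, the practical compromise is to keep the FETCH_ROW/GET_VALUE loop but buffer the retrieved values in PL/SQL collections and write them to the local table in batches with FORALL, so at least the local inserts are bulk-bound. A minimal sketch only; the link name mssql, the remote table name, and the local table local_tab(recordid, shortcol) are placeholders, not names from the thread:
DECLARE
  TYPE t_num_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  TYPE t_chr_tab IS TABLE OF VARCHAR2(4000) INDEX BY BINARY_INTEGER;
  l_ids  t_num_tab;
  l_vals t_chr_tab;
  c      BINARY_INTEGER;
  nr     BINARY_INTEGER;
  i      BINARY_INTEGER := 0;
BEGIN
  c := dbms_hs_passthrough.open_cursor@mssql;
  dbms_hs_passthrough.parse@mssql(c,
    'select RecordId, longcol as shortcol from remote_table');
  LOOP
    nr := dbms_hs_passthrough.fetch_row@mssql(c);
    EXIT WHEN nr = 0;
    i := i + 1;
    dbms_hs_passthrough.get_value@mssql(c, 1, l_ids(i));
    dbms_hs_passthrough.get_value@mssql(c, 2, l_vals(i));
    IF i = 1000 THEN  -- flush a batch of 1000 rows with one bulk insert
      FORALL j IN 1 .. i
        INSERT INTO local_tab (recordid, shortcol)
        VALUES (l_ids(j), l_vals(j));
      i := 0;
    END IF;
  END LOOP;
  IF i > 0 THEN       -- flush the final, partial batch
    FORALL j IN 1 .. i
      INSERT INTO local_tab (recordid, shortcol)
      VALUES (l_ids(j), l_vals(j));
  END IF;
  dbms_hs_passthrough.close_cursor@mssql(c);
  COMMIT;
END;
The row-by-row round trips to the remote side remain, but the local DML no longer context-switches once per row.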

Similar Messages

  • Which method does the actual bulk fetch from database in ADF?

    Hi,
    I'm looking to instrument my ADF code to see where bottlenecks are. Does anyone know which method does the bulk fetch from the database so that I can override it?
    Thanks
    Kevin

    Hi,
    I think you need to be more specific. ADF is a metadata binding layer that delegates data queries to the business service.
    Frank

    Sorry - to be specific, I probably mean BC4J - when a query runs in a view object.

  • SQL SERVER BULK FETCH AND INSERT/UPDATE?

    Hi All,
           I am currently working with C and SQL Server 2012. My requirement is to bulk fetch records and insert/update them in another table with some business logic.
           How do I do this?
           Thanks in Advance.
    Regards
    Yogesh.B

    > is there a possibility that I can do a bulk fetch and place it in an array, even inside a stored procedure ?
    You can use temporary tables or table variables and have them indexed as well.
    >After I have processed my records, tell me a way that I will NOT go, RECORD by RECORD basis, even inside a stored procedure ?
    As I said earlier, you can perform UPDATEs on these temporary tables or table variables and finally INSERT/UPDATE your base table.
    >Arrays are used just to minimize the traffic between the server and the program area. They are used for efficient processing.
    In your case you will first have to populate the array (using some of your queries from the server), which means you will first load the array, do some updates, and then send the rows back to the server - hence the network engagement.
    So I just gave you some thoughts I feel could be useful for your implementation. Like we say, there are many ways, so pick the one that works well for you in the long run with good scalability.
    Good luck!

  • [nQSError: 17012] Bulk fetch failed. (HY000)

    Hi All,
    Sometimes my report throws the following error message:
    ORA-03135. Attached is the query, which results in an error after running for 31 minutes. Below is the error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 17001] Oracle Error code: 3135, message: ORA-03135: connection lost contact Process ID: 25523 Session ID: 774 Serial number: 19622 at OCI call OCIStmtFetch. [nQSError: 17012] Bulk fetch failed. (HY000)
    Please give me the solution.
    Thanks & Regards,
    Nantha.

    I see the irony was lost as your reply remained imprecise and uninformative. "Server side everything ok" - what does that even mean? BI server? Database server? What about the network? What about firewall issues with the expire_time parameter in the sqlnet.ora? Are you working with virtual machines on either side (or both)?
    http://catb.org/~esr/faqs/smart-questions.html
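    If a firewall dropping idle connections turns out to be the cause of the ORA-03135, dead connection detection can be enabled on the database server. A sketch of the sqlnet.ora entry (the 10-minute interval is only an example value):
    # sqlnet.ora on the database server: probe idle sessions every
    # 10 minutes so stateful firewalls do not silently drop them
    SQLNET.EXPIRE_TIME = 10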

  • Problem with bulk fetch

    Hi,
    I am using 9i and I have to bulk fetch into a table type of some object type.
    My object is:
    CREATE TYPE PART_SRN AS OBJECT (
      PART_NO VARCHAR2(15),
      PRN_SRN VARCHAR2(20)
    );
    and in the PL/SQL block I have declared the table type, i.e.:
    TYPE REC_P IS TABLE OF PART_SRN;
    REC REC_P;
    Now I need to bulk fetch the query result into REC:
    SELECT X, Y BULK COLLECT INTO ?
    FROM ABC;
    Can anybody tell me how to do it?
    Thanks in advance
    Piyush

    Like this?:
    create or replace type emp_typ as object (
       empno      number (4),
       ename      varchar (10),
       job        varchar (9),
       mgr        number (4),
       hiredate   date,
       sal        number (7, 2),
       comm       number (7, 2),
       deptno     number (2)
    );
    /
    declare
       type rec_p is table of emp_typ;
       rec   rec_p;
    begin
       select emp_typ (empno, ename, job, mgr, hiredate, sal, comm, deptno)
       bulk collect into rec
         from emp;
    end;
    /

  • Bulk Fetch stored procedure.

    I am new to the oracle world.
    Does anyone have a good but simple example of a bulk fetch that shows the creation of the container variable?

    declare
     /* Declare index-by table of records type */
     type emp_rec_tab is table of emp%rowtype index by binary_integer;

     /* Declare table variable */
     emptab emp_rec_tab;

     /* Declare REF CURSOR variable using the SYS_REFCURSOR type
        available in 9i and above */
     rcur sys_refcursor;

     /* Declare ordinary cursor */
     cursor ocur is select * from emp;

    begin

     /* bulk fetch using implicit cursor */
     select * bulk collect into emptab from emp;
     dbms_output.put_line( SQL%ROWCOUNT || ' rows fetched at once from implicit cursor');
     dbms_output.put_line('---------------------------------------------');

     /* bulk fetch from ordinary cursor */
     open ocur;
     fetch ocur bulk collect into emptab;
     dbms_output.put_line( ocur%ROWCOUNT || ' rows fetched at once from ordinary cursor');
     dbms_output.put_line('---------------------------------------------');
     close ocur;

     /* bulk fetch from ordinary cursor using LIMIT clause */
     open ocur;
     loop
      fetch ocur bulk collect into emptab limit 4;
      dbms_output.put_line(
        emptab.count ||
        ' rows fetched at one iteration from ordinary cursor using limit');
      exit when ocur%notfound;
     end loop;
     close ocur;
     dbms_output.put_line('---------------------------------------------');

     /* bulk fetch from ref cursor */
     open rcur for select * from emp;
     fetch rcur bulk collect into emptab;
     dbms_output.put_line( rcur%ROWCOUNT || ' rows fetched at once from ref cursor');
     dbms_output.put_line('---------------------------------------------');
     close rcur;

     /* bulk fetch from ref cursor using LIMIT clause */
     open rcur for select * from emp;
     loop
      fetch rcur bulk collect into emptab limit 4;
      dbms_output.put_line( emptab.count ||
      ' rows fetched at one iteration from ref cursor using limit');
      exit when rcur%notfound;
     end loop;
     close rcur;
     dbms_output.put_line('---------------------------------------------');

     /* bulk fetch using execute immediate */
     execute immediate 'select * from emp' bulk collect into emptab;
     dbms_output.put_line( SQL%ROWCOUNT || ' rows fetched using execute immediate');
     dbms_output.put_line('---------------------------------------------');

    end;
    /
    14 rows fetched at once from implicit cursor
    14 rows fetched at once from ordinary cursor
    4 rows fetched at one iteration from ordinary cursor using limit
    4 rows fetched at one iteration from ordinary cursor using limit
    4 rows fetched at one iteration from ordinary cursor using limit
    2 rows fetched at one iteration from ordinary cursor using limit
    14 rows fetched at once from ref cursor
    4 rows fetched at one iteration from ref cursor using limit
    4 rows fetched at one iteration from ref cursor using limit
    4 rows fetched at one iteration from ref cursor using limit
    2 rows fetched at one iteration from ref cursor using limit
    14 rows fetched using execute immediate

    PL/SQL procedure successfully completed.

    Rgds.

  • Bulk Fetch Exception Handling

    How do you use exceptions with BULK FETCH?

    <How do you use exceptions with BULK FETCH?>
    I've never gotten an exception in a bulk fetch, aside from getting the error when the query select list does not match the variables it's being selected into. I think that normal exception handling would apply.
    I'm assuming you mean BULK COLLECT. Did you mean bulk binds, the FORALL statement?

  • Bulk fetch taking long time.

    I have created a procedure in which I am fetching data from a remote db using a database link.
    I have 1 million rows to fetch and the row size is around 200 bytes (the table has 10 attributes of about 20 bytes each).
    OPEN cur_noit;
    FETCH cur_noit BULK COLLECT INTO rec_get_cur_noit;
    CLOSE cur_noit;
    The problem is that it is taking more than 4 hours just to fetch the data. I need to know the contributing factors, how to check them, and most importantly what can be done. For example:
    1. If the DB link is slow - how can I check the speed of the DB link?
    2. I am fetching a large volume, so is my PGA full or not used in an optimized way? How can I check the size of the PGA, increase it, and set an optimum value?
    My CPU usage seems fine.
    Please let me know what else could be the reasons.
    *I know I can use the LIMIT clause in bulk fetches. Kindly let me know if its absence could also be a reason for the problem above.

    A couple more things: I am using Oracle 9i.
    1. I need to transform the data as well (multiplying a column value by a fixed integer, or setting a variable to another string; the local table has a couple more attributes for which I need to fetch values from another table), so it will not be an exact replication.
    2. I will not take all the rows from the remote DB; I have a WHERE clause by which I find the subset of what I want to copy.
    Do you think it is achievable by the methods below?
    Apologies, I am a novice in this and just googled a bit about the methods you suggested, so please ignore my noviceness.
    Materialized views:
    - It is going to make a local copy of the whole table, thereby taking space on my current DB.
    - If I make a materialized view just before starting the copy, what difference would it make? I.e. I am again first copying the data from the remote db and then fetching from this materialized view. Aren't we doing more processing now, i.e. using the network while building the materialized view plus fetching from it, thereby taking the same memory as before?
    There is also always a possibility of delay in refresh, i.e. between when tuples are changed in the remote DB and when I copy them into my actual table from the materialized view.
    Merge:
    I am using BULK COLLECT and bulk-bound FORALL inserts into my local table. Do you think this method would be faster and could solve the problem? I have explained above what I am intending to do.
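    For reference, a minimal sketch of the LIMIT pattern combined with a transformation and a remote-side WHERE clause as described above. All object and column names (remote_db, remote_tab, local_tab, id, val, load_flag) are placeholders, and scalar collections are used because 9i cannot reference record fields inside FORALL:
    DECLARE
      TYPE t_ids  IS TABLE OF NUMBER;
      TYPE t_vals IS TABLE OF NUMBER;
      l_ids  t_ids;
      l_vals t_vals;
      CURSOR cur_noit IS
        SELECT id, val * 10                 -- do the transformation in SQL
          FROM remote_tab@remote_db
         WHERE load_flag = 'N';             -- filter on the remote side
    BEGIN
      OPEN cur_noit;
      LOOP
        FETCH cur_noit BULK COLLECT INTO l_ids, l_vals LIMIT 1000;  -- bounded PGA use
        EXIT WHEN l_ids.COUNT = 0;
        FORALL i IN 1 .. l_ids.COUNT
          INSERT INTO local_tab (id, val) VALUES (l_ids(i), l_vals(i));
      END LOOP;
      CLOSE cur_noit;
      COMMIT;
    END;
    The LIMIT keeps the PGA footprint flat instead of buffering all million rows in one collection, which is the most likely cause of the multi-hour fetch.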

  • Bulk Fetch from a Cursor

    Hi all,
         Can you please give your comments on the code below.
         we are facing a situation where the value of <cursor_name>%notfound is misleading. The way we are overcoming the issue is by moving the 'exit when cur_name%notfound' stmt to just before the end loop.
    open l_my_cur;
    loop
    fetch l_my_cur bulk collect
    into l_details_array;
    --<< control_comes_here>>
    --<< l_details_array.count gives me the correct no of rows>>
    exit when l_my_cur%NOTFOUND;
    --<< control never reaches here>>
    --<< %notfound is true>>
    --<< %notfound is false only when there are as many records fetched as the limit (if set)>>
    forall i in 1 .. l_details_array.count
    insert into my_table ....( .... ) values ( .... l_details_array(i) ...);
    --<< This is never executed :-( >>
    end loop;
    Thanks,
    Sunil.

    Read
    fetch l_my_cur bulk collect
    into l_details_array;
    as
    fetch l_my_cur bulk collect
    into l_details_array LIMIT 10000;
    I am trying to process 10,000 rows at a time from a possible 100,000 records.
    Sunil.
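    For the archives: with a LIMIT fetch, %NOTFOUND is already true when the final fetch returns fewer rows than the limit, even though those rows still need processing. The robust pattern tests the collection count instead and exits before the DML only when nothing was fetched. A sketch with hypothetical table names (source_tab, my_table, which must have matching columns):
    DECLARE
      CURSOR l_my_cur IS SELECT id, val FROM source_tab;
      TYPE t_details IS TABLE OF l_my_cur%ROWTYPE;
      l_details_array t_details;
    BEGIN
      OPEN l_my_cur;
      LOOP
        FETCH l_my_cur BULK COLLECT INTO l_details_array LIMIT 10000;
        EXIT WHEN l_details_array.COUNT = 0;  -- not %NOTFOUND: the short
                                              -- final batch must still be used
        FORALL i IN 1 .. l_details_array.COUNT
          INSERT INTO my_table VALUES l_details_array(i);
      END LOOP;
      CLOSE l_my_cur;
    END;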

  • Error  while bulk fetch

    Hi,
    Can anyone help me find the error in this sample code? The code is:
    declare
    TYPE dept_list is table of dept%rowtype;
    d_list dept_list;
    cursor c_dep is select * from dept ;
    begin
    open c_dep;
    fetch c_dep bulk collect into d_list;
    end;
    Error Message
    fetch c_dep bulk collect into d_list;
    ERROR at line 16:
    ORA-06550: line 16, column 31:
    PLS-00597: expression 'D_LIST' in the INTO list is of wrong type
    ORA-06550: line 16, column 1:
    PL/SQL: SQL Statement ignored

    OK, I just found a way around it. It works, but that error is probably a bug, because workarounds are not really cute.
    I declared a nested table compatible with the element type of the associative array:
    wrk_juros t_juros_plano;
    and changed the line that was causing the error,
    fetch c_juros bulk collect into wrk_juros_plano(p_ind_segreg);
    to
    fetch c_juros bulk collect into wrk_juros;
    wrk_juros_plano(p_ind_segreg) := wrk_juros;
    Awesome =\

  • CURSOR BULK FETCH

    Hi, does anyone have sample code for using
    FETCH cursor_name BULK COLLECT ....
    I don't know how to use this; please help.
    Arun

    DECLARE
      TYPE var1_table_type IS TABLE OF NUMBER;
      var1_table var1_table_type;
    BEGIN
      SELECT col1 BULK COLLECT INTO var1_table FROM your_table;
      FOR i IN 1 .. var1_table.COUNT LOOP
        NULL;  -- process var1_table(i) here
      END LOOP;
    END;
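    Since the question asks specifically about FETCH cursor_name BULK COLLECT, here is the explicit-cursor form as well (a sketch; the scott.emp table is assumed purely for illustration):
    DECLARE
      CURSOR c_emp IS SELECT ename FROM emp;
      TYPE t_names IS TABLE OF emp.ename%TYPE;
      l_names t_names;
    BEGIN
      OPEN c_emp;
      FETCH c_emp BULK COLLECT INTO l_names;  -- all rows in one fetch
      CLOSE c_emp;
      FOR i IN 1 .. l_names.COUNT LOOP
        dbms_output.put_line(l_names(i));     -- process each fetched value
      END LOOP;
    END;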

  • Better client than sqlplus for bulk fetch testing

    Hi,
    I'm doing some tests of row retrieval speed via Net8 and need a better client than sqlplus itself.
    There is an arraysize limit of 5000 in sqlplus and it's not oriented towards massive row fetching, although I'm using set termout off.
    Tests are in a 10.2.0.3 environment on a 100Mbit ethernet network.
    So is there any better client I can use? Or do I need to write one myself :) ?
    I've tried pmdtm (the Informatica fetch utility) but it has some problems with thread synchronization; basically strace profiling returns
    % time     seconds  usecs/call     calls    errors syscall
    57.35    1.738975         161     10819      2145 futex
    41.35    1.253799       32149        39           poll
      1.21    0.036717           3     11869           read
      0.08    0.002491           1      2163           write
      0.00    0.000000           0        50           fcntl
      0.00    0.000000           0        19           clock_gettime
    100.00    3.031982                 24959      2145 total
    so instead of reading it's latching :).
    Regards
    GregG

    GregG wrote:
    it's not oriented towards massive row fetching, although I'm using set termout off.
    You can use the SQL*Plus AUTOTRACE command to disable query result printing:
    SQL> set autotrace  traceonly;
    SQL> select * from dba_objects;
    18816 rows selected.
    Execution Plan
    Plan hash value: 1919983379
    | Id  | Operation                      | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |             | 16154 |  3265K|    75   (3)| 00:00:01 |
    |   1 |  VIEW                          | DBA_OBJECTS | 16154 |  3265K|    75   (3)| 00:00:01 |
    |   2 |   UNION-ALL                    |             |       |       |            |          |
    |*  3 |    TABLE ACCESS BY INDEX ROWID | SUM$        |     1 |    26 |     0   (0)| 00:00:01 |
    |*  4 |     INDEX UNIQUE SCAN          | I_SUM$_1    |     1 |       |     0   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID | OBJ$        |     1 |    25 |     3   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | I_OBJ1      |     1 |       |     2   (0)| 00:00:01 |
    |*  7 |    FILTER                      |             |       |       |            |          |
    |*  8 |     HASH JOIN                  |             | 19706 |  2290K|    72   (3)| 00:00:01 |
    |   9 |      TABLE ACCESS FULL         | USER$       |    66 |  1122 |     3   (0)| 00:00:01 |
    |* 10 |      HASH JOIN                 |             | 19706 |  1962K|    69   (3)| 00:00:01 |
    |  11 |       INDEX FULL SCAN          | I_USER2     |    66 |  1452 |     1   (0)| 00:00:01 |
    |* 12 |       TABLE ACCESS FULL        | OBJ$        | 19706 |  1539K|    67   (2)| 00:00:01 |
    |* 13 |     TABLE ACCESS BY INDEX ROWID| IND$        |     1 |     8 |     2   (0)| 00:00:01 |
    |* 14 |      INDEX UNIQUE SCAN         | I_IND1      |     1 |       |     1   (0)| 00:00:01 |
    |  15 |     NESTED LOOPS               |             |     1 |    30 |     2   (0)| 00:00:01 |
    |* 16 |      INDEX SKIP SCAN           | I_USER2     |     1 |    20 |     1   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN          | I_OBJ4      |     1 |    10 |     1   (0)| 00:00:01 |
    |  18 |    NESTED LOOPS                |             |     1 |    43 |     3   (0)| 00:00:01 |
    |  19 |     TABLE ACCESS FULL          | LINK$       |     1 |    26 |     2   (0)| 00:00:01 |
    |  20 |     TABLE ACCESS CLUSTER       | USER$       |     1 |    17 |     1   (0)| 00:00:01 |
    |* 21 |      INDEX UNIQUE SCAN         | I_USER#     |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter(BITAND("S"."XPFLAGS",8388608)=8388608)
       4 - access("S"."OBJ#"=:B1)
       6 - access("EO"."OBJ#"=:B1)
       7 - filter(("O"."TYPE#"<>1 AND "O"."TYPE#"<>10 OR "O"."TYPE#"=1 AND  (SELECT 1
                  FROM "SYS"."IND$" "I" WHERE "I"."OBJ#"=:B1 AND ("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR
                  "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR "I"."TYPE#"=7 OR
                  "I"."TYPE#"=9))=1) AND ("O"."TYPE#"<>4 AND "O"."TYPE#"<>5 AND "O"."TYPE#"<>7 AND
                  "O"."TYPE#"<>8 AND "O"."TYPE#"<>9 AND "O"."TYPE#"<>10 AND "O"."TYPE#"<>11 AND
                  "O"."TYPE#"<>12 AND "O"."TYPE#"<>13 AND "O"."TYPE#"<>14 AND "O"."TYPE#"<>22 AND
                  "O"."TYPE#"<>87 AND "O"."TYPE#"<>88 OR BITAND("U"."SPARE1",16)=0 OR ("O"."TYPE#"=4 OR
                  "O"."TYPE#"=5 OR "O"."TYPE#"=7 OR "O"."TYPE#"=8 OR "O"."TYPE#"=9 OR "O"."TYPE#"=10 OR
                  "O"."TYPE#"=11 OR "O"."TYPE#"=12 OR "O"."TYPE#"=13 OR "O"."TYPE#"=14 OR
                  "O"."TYPE#"=22 OR "O"."TYPE#"=87) AND ("U"."TYPE#"<>2 AND
                  SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' OR "U"."TYPE#"=2 AND
                  "U"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')) OR  EXISTS
                  (SELECT 0 FROM SYS."USER$" "U2",SYS."OBJ$" "O2" WHERE "O2"."OWNER#"="U2"."USER#" AND
                  "O2"."TYPE#"=88 AND "O2"."DATAOBJ#"=:B2 AND "U2"."TYPE#"=2 AND
                  "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))))))
       8 - access("O"."SPARE3"="U"."USER#")
      10 - access("O"."OWNER#"="U"."USER#")
      12 - filter("O"."NAME"<>'_NEXT_OBJECT' AND "O"."NAME"<>'_default_auditing_options_'
                  AND BITAND("O"."FLAGS",128)=0 AND "O"."LINKNAME" IS NULL)
      13 - filter("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR
                  "I"."TYPE#"=6 OR "I"."TYPE#"=7 OR "I"."TYPE#"=9)
      14 - access("I"."OBJ#"=:B1)
      16 - access("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','curren
                  t_edition_id')))
           filter("U2"."TYPE#"=2 AND "U2"."SPARE2"=TO_NUMBER(SYS_CONTEXT('userenv','curren
                  t_edition_id')))
      17 - access("O2"."DATAOBJ#"=:B1 AND "O2"."TYPE#"=88 AND "O2"."OWNER#"="U2"."USER#")
      21 - access("L"."OWNER#"="U"."USER#")
    Statistics
              0  recursive calls
              0  db block gets
           3397  consistent gets
             78  physical reads
              0  redo size
         908471  bytes sent via SQL*Net to client
          14213  bytes received via SQL*Net from client
           1256  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          18816  rows processed
    SQL>

  • Bulk Fetch From an Oracle Sequence

    I am trying to get a range of sequence values from an Oracle sequence.
    I am using the option shown below, using the query:
    SELECT SEQUENCE_NAME.NEXTVAL FROM SYS.DUAL CONNECT BY LEVEL <= 10;
    The above SQL gets 10 sequence values.
    I just wanted to check: is the implementation below safe in a multi-user environment?
    Is the statement shown below atomic? I.e. would multiple parallel executions of the same function cause any inconsistencies?
    EXECUTE IMMEDIATE 'SELECT SEQUENCE_NAME.NEXTVAL ' ||
      'FROM SYS.DUAL CONNECT BY LEVEL <= ' || TO_CHAR(i_quantity)
      BULK COLLECT INTO v_seq_list;
    FUNCTION select_sequence_nextval_range(
       i_quantity      IN  INTEGER)
    RETURN INTEGER IS
      o_nextval INTEGER;
      v_seq_list sequence_list;
    BEGIN
      EXECUTE IMMEDIATE 'SELECT SEQUENCE_NAME.NEXTVAL ' ||
      'FROM SYS.DUAL CONNECT BY LEVEL <= ' || TO_CHAR(i_quantity)
      BULK COLLECT INTO v_seq_list;
      -- Get the first poid value.
      o_nextval := v_seq_list(1);
      RETURN o_nextval;
    END select_sequence_nextval_range;

    Acquire Lock
    You acquire a lock on a sequence? That's news to me - please post the code that does that. I certainly hope you don't mean you are directly accessing the SYS.SEQ$ table to lock the row for that sequence - it isn't nice to mess with Oracle's tables!
    For couple of JAVA/C applications the usage of sequence number is pretty big. Could be 100,000 for one single application processing.
    How does that correlate with your previous statement that you get 10 at a time?
    Sequences aren't designed for use cases that require gap-free sets of numbers or for use cases that require consecutive sets of numbers.
    We wanted to implement the range get of sequence using a different mechanism.
    For few other applications; we just need one sequence number for the application processing. So we use the select seq.nextval to get the value. So the same sequence number needs to serve the role of giving a single value as well as a consecutive range of values.
    Then you may need to consider using your own table to track the chunks that need to be allocated. You would use a scheme similar to what Greg.Spall discussed except you would keep the 'chunk' data in your own table.
    I'm not talking about using your own table to control actual one-by-one sequence number generation - that is a very bad idea. But if you need to work with large ranges that are allocated infrequently there is nothing wrong with using your own function and your own table to keep track of those allocations.
    The 'one by one' number generation would be handled by an actual sequence. The generation of a 'start value' and an 'end value' would be handled by accessing your custom table. Each row in that table would have 'start_value' and 'available_numbers' columns.
    Your function would take a parameter for how many numbers you need. For just one number the function would call the sequence.nextval and return that along with a count of '1'.
    For a range the function would:
    1. find a row in the table with an 'available_numbers' value large enough to satisfy the request,
    2. lock the row for update
    3. capture the 'start_value' for return to the user
    4. adjust both the 'start_value' and 'available_numbers' values to account for the range being allocated
    5. update the table and commit
    6. return the 'start_value' and 'number_allocated' to the user (number_allocated might be LESS than requested perhaps)
    The above is a viable solution ONLY if the frequency of allocation and the size of allocation avoids the serialization issues associated with trying to allocate your own sequence numbers.
    Those issues can be somewhat mitigated by having the table store multiple rows with each row having a large chunk of values that can be allocated. Then your function query can get the first 'unlocked' row and avoid serializing just because one row is currently locked.
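    A rough sketch of steps 1 through 6 above; the table seq_chunks and its columns are placeholders, error handling for "no chunk large enough" is omitted, and the multi-row/skip-locked refinement just described is left out for brevity:
    -- assumed allocation table:
    --   CREATE TABLE seq_chunks (start_value NUMBER, available_numbers NUMBER);
    CREATE OR REPLACE FUNCTION allocate_range(i_quantity IN INTEGER)
      RETURN INTEGER
    IS
      v_start seq_chunks.start_value%TYPE;
    BEGIN
      -- steps 1-2: find a row with enough numbers left and lock it
      SELECT start_value INTO v_start
        FROM seq_chunks
       WHERE available_numbers >= i_quantity
         AND ROWNUM = 1
         FOR UPDATE;
      -- steps 3-5: carve the requested range off the front of the chunk
      UPDATE seq_chunks
         SET start_value       = start_value + i_quantity,
             available_numbers = available_numbers - i_quantity
       WHERE start_value = v_start;
      COMMIT;  -- sketch only: the commit could equally be left to the caller
      -- step 6: the caller now owns v_start .. v_start + i_quantity - 1
      RETURN v_start;
    END allocate_range;
    /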

  • Loading data into multiple tables - Bulk collect or regular Fetch

    I have a procedure to load data from one source table into eight different destination tables. The 8 tables have some of the columns of the source table with a common key.
    I have run into a couple of problems and have a few questions where I would like to seek advice:
    1.) Procedure with and without the BULK COLLECT clause took the same time for 100,000 records. I thought I would see improvement in performance when I include BULK COLLECT with LIMIT.
    2.) Updating the Load_Flag in source_table happens only for a few records, not all. I had expected all records to be updated.
    3.) Are there other suggestions to improve the performance? or could you provide links to other posts or articles on the web that will help me improve the code?
    Notes:
    1.) 8 Destination tables have at least 2 Million records each, have multiple indexes and are accessed by application in Production
    2.) There is an initial load of 1 Million rows with a subsequent daily load of 10,000 rows. Daily load will have updates for existing rows (not shown in code structure below)
    The structure of the procedure is as follows
    Declare
    TYPE dest_type IS TABLE OF source_table%ROWTYPE;
    dest_tab dest_type ;
    iCount NUMBER := 0;
    cursor source_cur is select * from source_table FOR UPDATE OF load_flag;
    BEGIN
    OPEN source_cur;
    LOOP
    FETCH source_cur -- BULK COLLECT
    INTO dest_tab ; -- LIMIT 1000
    EXIT WHEN source_cur%NOTFOUND;
    FOR i in dest_tab.FIRST .. dest_tab.LAST LOOP
    <Insert into app_tab1 values key, col12, col23, col34 ;>
    <Insert into app_tab2 values key, col15, col29, col31 ;>
    <Insert into app_tab3 values key, col52, col93, col56 ;>
    UPDATE source_table SET load_flag = 'Y' WHERE CURRENT OF source_cur ;
    iCount := iCount + 1 ;
    IF iCount = 1000 THEN
    COMMIT ;
    iCount := 0 ;
    END IF;
    END LOOP;
    END LOOP ;
         COMMIT ;
    END ;

    Assuming you are on 10g or later, the PL/SQL compiler generates the bulk fetch for you automatically, so your code is the same as (untested):
    DECLARE
        iCount NUMBER := 0;  -- must be initialised, or iCount stays NULL
        CURSOR source_cur is select * from source_table FOR UPDATE OF load_flag;
    BEGIN
        OPEN source_cur;
        FOR r IN source_cur
        LOOP
            <Insert into app_tab1 values key, col12, col23, col34 ;>
            <Insert into app_tab2 values key, col15, col29, col31 ;>
            <Insert into app_tab3 values key, col52, col93, col56 ;>
            UPDATE source_table SET load_flag = 'Y' WHERE CURRENT OF source_cur ;
            iCount := iCount + 1 ;
            IF iCount = 1000 THEN
                COMMIT ;
                iCount := 0 ;
            END IF;
        END LOOP;
        COMMIT ;
    END;
    However most of the benefit of bulk fetching would come from using the array with a FORALL statement, which the PL/SQL compiler can't automate for you.
    If you are fetching 1000 rows at a time, purely from a code simplification point of view you could lose iCount and the IF...COMMIT...END IF and just commit each time after looping through the 1000-row array.
    However I'm not sure how committing every 1000 rows helps restartability, even if your real code has a WHERE clause in the cursor so that it only selects rows with load_flag = 'N' or whatever. If you are worried that it will roll back all your hard work on failure, why not just commit in your exception handler?
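    A sketch of that FORALL variant, keeping the poster's placeholder destination tables (app_tab1 etc.) and the 1000-row batch size; all column names here (key_col, col12, col23) are hypothetical, and scalar collections are used because 10g cannot reference record fields inside FORALL:
    DECLARE
        CURSOR source_cur IS
            SELECT key_col, col12, col23
              FROM source_table
             WHERE load_flag = 'N';
        TYPE num_tab IS TABLE OF NUMBER;
        l_keys num_tab;
        l_c12  num_tab;
        l_c23  num_tab;
    BEGIN
        OPEN source_cur;
        LOOP
            FETCH source_cur BULK COLLECT INTO l_keys, l_c12, l_c23 LIMIT 1000;
            EXIT WHEN l_keys.COUNT = 0;
            -- one FORALL per destination table; each is a single bulk DML call
            FORALL i IN 1 .. l_keys.COUNT
                INSERT INTO app_tab1 (key_col, col12, col23)
                VALUES (l_keys(i), l_c12(i), l_c23(i));
            -- ... repeat a FORALL for app_tab2 through app_tab8 ...
            -- mark the whole batch as loaded in one bulk statement
            FORALL i IN 1 .. l_keys.COUNT
                UPDATE source_table SET load_flag = 'Y'
                 WHERE key_col = l_keys(i);
        END LOOP;
        CLOSE source_cur;
        COMMIT;
    END;
    Note the FOR UPDATE/WHERE CURRENT OF and the intermediate commits are dropped here: updating by key and committing once at the end avoids fetching across commits on a FOR UPDATE cursor.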

  • SAVE EXCEPTIONS when fetching from cursors by BULK COLLECT possible?

    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    Hello,
    I'm using a cursor FETCH with BULK COLLECT INTO mydata...
    Is it possible to SAVE EXCEPTIONS like with FORALL? Or is there any other way to handle exceptions during bulk fetches?
    Regards,
    Martin

    The cursor's SELECT statement uses the TO_DATE(juldat,'J') function (for converting a Julian date value to DATE), but some rows contain an invalid juldat value (leading to ORA-01854).
    I want to handle these rows' exceptions like in FORALL.
    But it could also be any other (non-Oracle/self-made) function within "any" BULK instruction raising (un)wanted exceptions... how can I handle those?
    Martin
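    SAVE EXCEPTIONS exists only for FORALL DML; a BULK COLLECT fetch either succeeds completely or raises. A common workaround is to make the conversion itself safe so the fetch can no longer fail, and flag the bad rows instead. A sketch, assuming a hypothetical wrapper function around the juldat conversion from the post:
    -- Returns NULL instead of raising on an invalid Julian value, so a
    -- BULK COLLECT over safe_juldat(juldat) cannot fail on conversion.
    CREATE OR REPLACE FUNCTION safe_juldat(p_juldat IN VARCHAR2) RETURN DATE IS
    BEGIN
      RETURN TO_DATE(p_juldat, 'J');
    EXCEPTION
      WHEN OTHERS THEN
        RETURN NULL;  -- caller treats NULL as "invalid juldat"
    END safe_juldat;
    /
    The cursor then selects both safe_juldat(juldat) and the raw juldat, and rows where the converted value IS NULL can be logged or skipped. Any row-level failures in the subsequent DML can still be caught with FORALL ... SAVE EXCEPTIONS and inspected via SQL%BULK_EXCEPTIONS.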
