Complex query - improve performance with nested arrays, bulk insert....?

Hello, I have an extremely complicated query that has a structure similar to:
Overall Query
---SubQueryA
-------SubQueryB
---SubQueryB
---SubQueryC
-------SubQueryA
The subqueries themselves are slow, and having to run them multiple times is much too slow! Ideally, I would be able to run each subquery once and then reuse the results. I cannot use standard Oracle tables, and I would need to keep the results of the subqueries in memory.
I was thinking I could write a PL/SQL script that ran the subqueries at the beginning and stored the results in memory. Then, in the overall query, I could loop through my results in memory and join the results of the various subqueries to one another.
Some questions:
-What is the best data structure to use? I've been looking around and there are nested arrays, and there's the bulk insert functionality, but I'm not sure which is best to use.
-The advantage of the method I'm suggesting is that I only have to run each subquery once. But when I start joining the results of the subqueries to one another, will I take a performance hit? Will Oracle be unable to optimize the joins?
thanks in advance!
Coop

>
I cannot use standard oracle tables
>
What does this mean? If you have subqueries, I assume you have tables to drive them? You're in an Oracle forum, so I assume the tables are Oracle tables.
If so, you can look into the WITH clause; it can 'cache' the query results for you and reuse them multiple times. It is also helpful in making large queries with many subqueries more readable.
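A minimal sketch of that approach (table, column, and condition names here are invented for illustration; the MATERIALIZE hint is undocumented but commonly used to encourage Oracle to cache the factored subquery instead of inlining it):

```sql
-- Each subquery is defined once; Oracle can materialize the result
-- and reuse it wherever the name is referenced below.
WITH sub_a AS (
  SELECT /*+ MATERIALIZE */ key_col, val_a
  FROM   big_table_a
  WHERE  expensive_condition = 'Y'
),
sub_b AS (
  SELECT key_col, val_b
  FROM   big_table_b
)
SELECT a.key_col, a.val_a, b.val_b
FROM   sub_a a
JOIN   sub_b b  ON b.key_col  = a.key_col
JOIN   sub_a a2 ON a2.key_col = b.key_col;  -- sub_a reused, not re-run
```

Because the factored results are joined like ordinary row sources, the optimizer can still reorder and optimize the joins between them.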

Similar Messages

  • Improve Performance with QaaWS with multiple RefreshButtons??

    HI,
I read that a connection opens a maximum of 2 QaaWS. I want to improve performance.
Currently I try to refresh 6 connections with one button. Would it improve performance if I split this 1 button with 6 connections into 3 buttons with 2 connections each?
    Thanks,
    BWBW

    Hi
HTTP 1.1 limits the number of concurrent HTTP requests to a maximum of two, so your dashboard will actually be able to send & receive a maximum of 2 requests simultaneously; a third will stand by until one of the first two is handled.
    QaaWS performance is mostly affected by database performance, so if you plan to move to LO to improve performance, I'd recommend you use LO from WebI parts, as if you use LO to consume a universe query, you will experience similar performance limitations.
    If you actually want to consume WebI report parts, and need report filters, you can also consider XI 3.1 SP2 BI Services, where performance is better than QaaWS, and interactions are also easier to implement.
    Hope that helps,
    David.

  • How will the query improve performance when we use hint index_ss

    hi,
I got a SQL query from our Reports team. Initially, when I ran the query, it took a long time. I tuned the query using the TOAD SQL optimizer, ran the tuned query, and it fetched the rows faster than before. I observed that a new hint, INDEX_SS, had been added to the query. Can someone explain to me how this INDEX_SS hint increases performance, with an example?

    As always the online documentation (at http://tahiti.oracle.com) comes with the answer.
    Sadly you need an extra pair of voluntary fingers to search and read it.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements006.htm#SQLRF50415
    Sybrand Bakker
    Senior Oracle DBA
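For what it's worth, INDEX_SS requests an index skip scan, which lets Oracle use a composite index even when the query does not filter on the index's leading column: the index is probed once per distinct value of that leading column. A hypothetical sketch (table, index, and column names are made up):

```sql
-- Composite index whose leading column (gender) has few distinct values.
CREATE INDEX orders_ix ON orders (gender, order_id);

-- The query filters only on order_id, so a normal range scan is impossible,
-- but a skip scan can probe the index once per distinct gender value
-- instead of full-scanning the table.
SELECT /*+ INDEX_SS(o orders_ix) */ *
FROM   orders o
WHERE  o.order_id = 12345;
```

The benefit is largest when the skipped leading column is of low cardinality; with many distinct leading values, the skip scan degenerates into many probes and may be slower than a full scan.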

  • Multiple log groups per thread to improve performance with high redo writes

    I am reading Pro Oracle 10g RAC on Linux (good book). On p.35 the authors state that they recommend 3-5 redo log groups per thread if there is a "large" amount of redo.
Why does having more redo log groups improve performance? Does Oracle parallelize the writes?

Redo logs are configured per instance. From experience, you need at least 3 redo log groups per thread to allow switching and to give the archiver sufficient time to complete before the first redo log group is reused. When there is a large amount of redo activity, the groups will switch more often, and it is important that archiving has completed before an existing redo log group is reused; otherwise the database/instance may hang.
I think that is what the author is referencing here: have sufficient redo log groups (based on the activity of your environment) to allow switching while giving the archiver enough time to complete.
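For reference, extra groups are added per thread with ALTER DATABASE; the group number, file paths, and size below are placeholders:

```sql
-- Add a fourth redo log group (two mirrored members) to thread 1,
-- giving the archiver more headroom between log switches.
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4
  ('/u01/oradata/orcl/redo04a.log',
   '/u02/oradata/orcl/redo04b.log') SIZE 512M;
```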

  • Blob truncated with DbFactory and Bulk insert

    Hi,
    My platform is a Microsoft Windows Server 2003 R2 Server 5.2 Service Pack 2 (64-bit) with an Oracle Database 11g 11.1.0.6.0.
    I use the client Oracle 11g ODAC 11.1.0.7.20.
Some strange behavior happens when using DbFactory and a bulk command with a Blob column and a parameter larger than 65536 bytes. Let me explain.
First I create a dummy table in my schema:
create table dummy (a number, b blob)
To use bulk insert we can use code A with Oracle objects (this executes successfully):
    byte[] b1 = new byte[65530];
    byte[] b2 = new byte[65540];
    Oracle.DataAccess.Client.OracleConnection conn = new Oracle.DataAccess.Client.OracleConnection("User Id=login;Password=pws;Data Source=orcl;");
    OracleCommand cmd = new OracleCommand("insert into dummy values (:p1,:p2)", conn);
    cmd.ArrayBindCount = 2;
    OracleParameter p1 = new OracleParameter("p1", OracleDbType.Int32);
    p1.Direction = ParameterDirection.Input;
    p1.Value = new int[] { 1, 2 };
    cmd.Parameters.Add(p1);
    OracleParameter p2 = new OracleParameter("p2", OracleDbType.Blob);
    p2.Direction = ParameterDirection.Input;
    p2.Value = new byte[][] { b1, b2 };
    cmd.Parameters.Add(p2);
conn.Open(); cmd.ExecuteNonQuery(); conn.Close();
We can write the same thing at an abstract level using the DbProviderFactories (code B):
    var factory = DbProviderFactories.GetFactory("Oracle.DataAccess.Client");
    DbConnection conn = factory.CreateConnection();
    conn.ConnectionString = "User Id=login;Password=pws;Data Source=orcl;";
    DbCommand cmd = conn.CreateCommand();
    cmd.CommandText = "insert into dummy values (:p1,:p2)";
    ((OracleCommand)cmd).ArrayBindCount = 2;
    DbParameter param = cmd.CreateParameter();
    param.ParameterName = "p1";
    param.DbType = DbType.Int32;
    param.Value = new int[] { 3, 4 };
    cmd.Parameters.Add(param);
    DbParameter param2 = cmd.CreateParameter();
    param2.ParameterName = "p2";
    param2.DbType = DbType.Binary;
    param2.Value = new byte[][] { b1, b2 };
    cmd.Parameters.Add(param2);
conn.Open(); cmd.ExecuteNonQuery(); conn.Close();
But this second code doesn't work: the second byte array is truncated to 4 bytes. It looks like an Int16 overflow.
When DbType.Binary is used, Oracle maps it to OracleDbType.Raw rather than OracleDbType.Blob, so the problem seems to be with the raw type. BUT if we use the same code without bulk insert, it works! So the problem is somewhere else...
Why use a DbConnection? To be able to switch easily to another database type.
So why use "((OracleCommand)cmd).ArrayBindCount"? To be able to use specific functionality of each database.
I can fix the issue by casting the DbParameter to an OracleParameter and setting the OracleDbType to Blob, but why does the second code not work with bulk insert while it works with a simple query?

BCP and BULK INSERT do not work the way you expect them to. What they do is consume fields in round-robin fashion: they first look for data for the first field, then for the second field, and so on.
So in your case, they will first read one byte, then 20 bytes, etc., until they have read the two bytes for field 122. At that point they will consume bytes until they find a carriage-return/line-feed sequence.
You say that some records in the file are incomplete. Say a row has only 60 fields, and field 61 is four bytes. BCP and BULK INSERT will then read data for field 61 as CR+LF plus the first two bytes of the next row; CR+LF has no special meaning,
it is just data at this point.
You will have to write a program to parse the file, or use SSIS. BCP and BULK INSERT are not your friends in this case.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • ABAP Select statement performance (with nested NOT IN selects)

    Hi Folks,
I'm working on the ST module, using the document flow table VBFA. The query takes a large amount of time and is timing out in production. I am hoping that someone can give me a few tips to make this run faster. In our test environment, this query takes 12+ minutes to process.
        SELECT vbfa~vbeln
               vbfa~vbelv
               Sub~vbelv
               Material~matnr
               Material~zzactshpdt
               Material~werks
               Customer~name1
               Customer~sortl
      FROM vbfa JOIN vbrk AS Parent ON ( Parent~vbeln = vbfa~vbeln )
             JOIN vbfa AS Sub ON ( Sub~vbeln = vbfa~vbeln )
             JOIN vbap AS Material ON ( Material~vbeln = Sub~vbelv )
             JOIN vbak AS Header ON ( Header~vbeln = Sub~vbelv )
             JOIN vbpa AS Partner ON ( Partner~vbeln = Sub~vbelv )
             JOIN kna1 AS Customer ON ( Customer~kunnr = Partner~kunnr )
          INTO (WA_Transfers-vbeln,
                WA_Transfers-vbelv,
                WA_Transfers-order,
                WA_Transfers-MATNR,
                WA_Transfers-sdate,
                WA_Transfers-sfwerks,
                WA_Transfers-name1,
                WA_Transfers-stwerks)
          WHERE vbfa~vbtyp_n = 'M'       "Invoice
          AND vbfa~fktyp = 'L'           "Delivery Related Billing Doc
          AND vbfa~vbtyp_v = 'J'         "Delivery Doc
          AND vbfa~vbelv IN S_VBELV
          AND Sub~vbtyp_n = 'M'          "Invoice Document Type
          AND Sub~vbtyp_v = 'C'          "Order Document Type
          AND Partner~parvw = 'WE'       "Ship To Party(actual desc. is SH)
          AND Material~zzactshpdt IN S_SDATE
      AND ( Parent~fkart = 'ZTRA' OR Parent~fkart = 'ZTER' )
          AND vbfa~vbelv NOT IN
             ( SELECT subvbfa~vbelv
               FROM vbfa AS subvbfa
           WHERE subvbfa~vbelv = vbfa~vbelv
               AND   subvbfa~vbtyp_n = 'V' )           "Purchase Order
          AND vbfa~vbelv NOT IN
             ( SELECT DelList~vbeln
               FROM vbfa AS DelList
           WHERE DelList~vbeln = vbfa~vbelv
               AND   DelList~vbtyp_v = 'C'             "Order Document Type
               AND   DelList~vbelv IN                  "Delivery Doc
                  ( SELECT OrderList~vbelv
                    FROM vbfa AS OrderList
                 WHERE OrderList~vbtyp_n = 'H' ) ).    "Return Ord
          APPEND WA_Transfers TO ITAB_Transfers.
        ENDSELECT.
    Cheers,
    Chris

I am sending you some performance issues that should be kept in mind while coding.
1. Do not use SELECT *; instead, select only the required field list.
2. Do not fetch data from CLUSTER tables.
3. Do not use nested SELECT statements; you have used a nested select here, which reduces performance to a great extent. Instead, use views/joins. Also keep in mind not to use a join condition across more than three tables unless otherwise required; split such selects into three or four statements and use SELECT ... FOR ALL ENTRIES.
4. Extract the data from the database at once, consolidated upfront into an internal table, i.e. use the INTO TABLE <itab> clause instead of SELECT ... ENDSELECT.
5. Avoid the ORDER BY clause in SELECT statements; use SORT <itab> instead.
6. Whenever you need to calculate MAX, MIN, AVG, SUM, or COUNT, use aggregate functions and the GROUP BY clause instead of calculating them yourself.
7. Do not read the same table once for validation and again for data extraction; select the data only once.
8. When the intention is validation, use SELECT SINGLE or SELECT ... UP TO 1 ROWS.
9. If possible, always use array operations to update database tables.
10. The order of the fields in the WHERE clause of a SELECT statement should match the order of the fields in the table's index.
11. Never release the object unless it has been thoroughly checked with ST05/SE30/SLIN.
12. Avoid identical SELECT statements.

  • Performance with Nested Views

Hello everyone, we are having a performance problem with Oracle optimizing nested views, some with outer joins. One of the guys just sent me the following statement and I'm wondering if anyone can tell me if it sounds correct. The query is about 300 lines long so I won't post it here, but hopefully someone can still help shed some light on the statement below.
When Oracle executes a view, it optimizes the query plan only for the columns of the view which are used and eliminates unnecessary joins, function calls, etc. In this case the join is a LEFT OUTER, i.e. optional, and since the columns are not used I would hope Oracle would eliminate it from the plan, but it didn't.
    Thanks for any help,
    Michael Cunningham

    Depending on the version of Oracle (this was introduced in 10.2), it is possible that Oracle can eliminate a join. The Oracle Optimizer group has a nice blog post that discusses the requirements for join elimination to happen. Basically, Oracle has to be sure that the additional columns aren't being used but also that the join does not change the number of rows that might be returned.
    Justin
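To illustrate the conditions Justin describes, here is a hypothetical sketch (all names invented): the outer-joined table can only be dropped when its columns are unused and a unique constraint guarantees the join cannot change the number of rows returned.

```sql
CREATE TABLE depts (dept_id NUMBER PRIMARY KEY, dept_name VARCHAR2(30));
CREATE TABLE emps  (emp_id  NUMBER PRIMARY KEY, dept_id NUMBER);

CREATE VIEW emp_v AS
  SELECT e.emp_id, e.dept_id, d.dept_name
  FROM   emps e
  LEFT OUTER JOIN depts d ON d.dept_id = e.dept_id;

-- dept_name is never referenced, and depts.dept_id is unique, so the
-- outer join can neither add nor remove rows; from 10.2 onward the
-- optimizer may eliminate depts from the plan entirely.
SELECT emp_id FROM emp_v;
```

Without the primary key on depts.dept_id, the join could multiply rows and elimination would not be legal, which may be what is blocking it in your 300-line query.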

  • Improve performance with union all

    Hello there,
    Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
SQL> show parameter optimizer
ORA-00942: table or view does not exist
I have the following query, using the following input variables:
    - id
    - startdate
    - enddate
    The query has the following format
    - assume that the number of columns are the same
    - t1 != t3 and t2 != t4
select ct.*
from
( select t1.*
  from   tabel1 t1
    join tabel2 t2
      on t2.key = t1.key
  union all
  select t3.*
  from   tabel3 t3
    join tabel4 t4
      on t4.key = t3.key
) ct
where ct.id = :id
  and ct.date >= :startdate
  and ct.date < :enddate
order by ct.date
It is performing really slowly; after the first read it performs fast.
    I tried the following thing, which was actually even slower!
with t1c as
( select t1.*
  from   tabel1 t1
    join tabel2 t2
      on t2.key = t1.key
  where t1.id = :id
    and t1.date >= :startdate
    and t1.date < :enddate
),
t2c as
( select t3.*
  from   tabel3 t3
    join tabel4 t4
      on t4.key = t3.key
  where t3.id = :id
    and t3.date >= :startdate
    and t3.date < :enddate
)
select ct.*
from
( select *
  from   t1c
  union all
  select *
  from   t2c
) ct
order by ct.date
So in words, I have a 'union all' construction reading from different tables with matching columns 'id' and 'date'.
How can I improve this? Can it be improved? If you do not know the answer but have a suggestion, I will be happy as well!
    Thanks in advance!
    Kind regards,
    Metroickha

    >
    So in words, I have an 'union all' construction reading from different tables with matching columns 'id' and 'date'.
    How can I improve this? Can it be improved? If you do not know the answer, but maybe a suggestion, I will be happy aswell!!!
    >
    If you want to improve on what Oracle is doing you first need to know 'what Oracle is doing'.
    Post the execution plans for the query that show what Oracle is doing.
    Also post the DDL for the tables and indexes and the record counts for the tables and ID/DATE predicates.
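For example, a plan can be captured with EXPLAIN PLAN and DBMS_XPLAN (substitute the actual statement being tuned):

```sql
EXPLAIN PLAN FOR
SELECT ct.* FROM ... ;   -- the slow query goes here

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Posting that output along with the DDL lets others see whether the UNION ALL branches are being filtered early or only after both full result sets are built.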

  • Complex algorithm - improve performance

    HI,
I have a complex algorithm which takes almost 900 milliseconds per line to process. I have to calculate the probability for hundreds of thousands of lines. Just imagine having to calculate this for some 500,000 lines, which is not feasible at all. Can anyone help me simplify this algorithm and improve the performance?
This is very urgent and your help is desperately needed. Please help.
Thanks in advance...
// logFact(i) is a helper (defined elsewhere in the class) returning log(i!).
double probability(int a, int b, int c, int d) {
     int a1 = 0;
     int b1 = 0;
     int c1 = 0;
     int d1 = 0;
     double p = 0.0;
     double q = 0.0;
     double sum = 0.0;
     if (a * b * c * d == 0) {
          if (d == 0) {
               // swap a<->b and c<->d
               a1 = a;
               a = b;
               b = a1;
               a1 = c;
               c = d;
               d = a1;
          }
     } else if (((a + 0.0) / (b + 0.0)) < ((c + 0.0) / (d + 0.0))) {
          a1 = a;
          a = b;
          b = a1;
          a1 = c;
          c = d;
          d = a1;
     }
     p = logFact(a + b) +
          logFact(a + c) +
          logFact(b + d) +
          logFact(c + d) -
          logFact(a + b + c + d);
     for (int i = b; i >= 0; i--) {
          b1 = i;
          a1 = a + b - b1;
          c1 = a + c - a1;
          d1 = b + d - b1;
          q = logFact(a1) +
               logFact(b1) +
               logFact(c1) +
               logFact(d1);
          sum = Math.exp(p - q) + sum;
          if (a * b * c * d == 0) {
               break;
          }
     }
     return sum;
}

    I've never done any hardcore optimization before, so everyone can feel free to laugh at my suggestions. Here's my few cents input:
    #1
if(a*b*c*d == 0) {
     if (d == 0) {
}
This seems rather silly, why can't you just eliminate this to:
if(d==0) {
}
Maybe I'm missing something obvious...
    #2
else if (((a + 0.0)/(b + 0.0)) < ((c + 0.0)/(d + 0.0))) {
First, why are the 4 additions done? Is this done to cast to double or something? I've never seen this before...
Are a, b, c, or d ever negative? If not, then you can eliminate the divisions by making them multiplications, as:
else if ( (a*d < c*b) ) {
If you can't make this assumption about the sign of a, b, c or d, then you can't do this...
    #3
a1 = a;
a = b;
b = a1;
a1 = c;
c = d;
d = a1;
These 6 lines are done to swap 'a' with 'b' and 'c' with 'd'. I have no idea if this will help, but maybe you can try something like this (note: Java can't switch on a boolean, so use if/else):
boolean swap = false; // at top
if (d == 0 || ( (a/b) < (c/d) )) {
  swap = true;
}
if (!swap) {
  // the whole algorithm goes here with the unswapped variables
} else {
  // repeat the algorithm with the swapped variables here
}
Let me know if these help at all because I'm curious about it now...

  • How do I improve performance with exchange

    My corporate email is on an exchange 2010 server. Mail.app works with the server but there are a number of performance problems when sending and receiving, which are also well documented in other discussion threads. I have tried every suggested configuration but performance remains extremely poor, especially when compared to my personal IMAP accounts, and the performance I was used to in Outlook 2011 under Mountain Lion.
    Looking through my logs I see multiple entries of:
    03/12/2013 14:37:55.007 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:55.008 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:56.456 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:56.456 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:57.989 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:58.143 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:37:59.677 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    03/12/2013 14:38:00.930 Mail[5018]: CFNetwork SSLHandshake failed (-9800)
    I have installed and trusted the certificates for our exchange server and can browse to the owa front end with no certificate warnings.
    Interestingly, I don't have the same problem with reminders or calendar which are connected to the same exchange server.
    Does anyone have any ideas how to overcome this problem?

    Some additional information...
I fired up Wireshark to take a look. For all communications to our Exchange server (Mail, Calendar, Notes...), the SSL negotiation includes a downgrade to TLS v1.
I notice that Mavericks CFNetwork defaults to TLS v1.2 and that BEAST mitigation leads to some problems, for example: https://trac.adium.im/ticket/16550
Exchange Server 2010 is TLS v1.0 only: http://support.microsoft.com/kb/2709167
    I'll do some more Wireshark investigation.

  • KVO with nested arrays?

    I'm working on an app with the typical UITableView displaying rows grouped into sections. I have an NSMutableArray of Section objects, each of which has an array of Entry objects. Manual KVO is working for adding and removing Section objects. I can use a key path of "sections" and everything works as expected.
    I'd like to use KVO for operations on the entries array in each Section object so I can add/remove table rows as entries are added/removed in each Section. Using a key path of "sections.entries" doesn't work. I get an NSInvalidArgumentException with [<NSCFArray 0x104fbb0> addObserver:forKeyPath:options:context:] is not supported. Key path: entries in the console. After a bit of thought, the error makes sense: the sections property is an NSMutableArray, which of course doesn't have an entries property.
    How do I do this? Is there a better way that I'm missing?

    Well, after a bit of fiddling I found a solution. I'm not sure if it's the right solution, but it appears to work.
    I had been assuming that the key path had to be somehow related to actual properties of objects. But, on a whim, I made up a key path that corresponded to nothing, and it worked. I can issue and observe events on my fake key path just fine.

  • Improve performance with WL 9 by using WL express license

Hi,
We just moved to WL 9 and the development response time is quite slow compared to 8.1.
Does anyone think that using only the WL Express 9 license could improve performance, since most of the features will be disabled?
Thanks,
Ami


  • Improving performance with overlapping MCs?

I'm having some performance issues in certain areas of a flash game I'm making.
    Here's the situation:
I have a big movieclip containing a map that can be scrolled and scaled. There are some panels with a bunch of MC buttons, controls, etc. on the right side of the stage. These are separate from the map and stay in the same place.
Everything runs fine until the large map is zoomed in and/or scrolled such that the right-side buttons and controls overlap the map. Then the framerate drops from a nice smooth 30 fps to about 3 fps while the map is moving. There are no issues with the map moving when they don't overlap.
Now the thing is, there's no real interaction between the buttons and the map. No transparency or anything; in fact, I'm perfectly happy with the map disappearing behind a side panel. There are only about 3 panels that the screen-drawing routine would have to worry about, as all the buttons rest on top of the panels, so why is my performance taking such a hit?
(Hmmm. Actually, I do have a lot of transparency, but each element is on top of a solid background, and none interact with the map.)

    I did do some reading on it. I tend not to use code without understanding how it works - at least a rudimentary understanding.
At the moment, I'm not changing anything about how the buttons look: size, rotation, placement, color, etc. This suggests that I can keep them cached starting at runtime while testing the caching. (Though I also tried caching them when starting the simulation, and disabling caching when the buttons could be used, too.)
Basically, I have a game where you use the buttons to supply program commands to robots on the map. Once this is done, you run the program and the robots execute their programming. You can't issue new commands while they are running (and the map is moving), so the buttons are disabled, and there is no reason they would need to change in appearance. When in the 'programming' stage, the map is not moving because the sim is not running. This seems to suggest that cacheAsBitmap could be enabled at all times.
At the moment, I've had to resort to making the _visible property of the button panel false during the sim stage. However, this is not an ideal solution; it's just what I've had to resort to in order to test the workings of the game. Eventually, I'll want the player to be able to see how the commands are executing while the sim is running.

  • Improving performance with IN clause

We use a lot of those IN clauses, for good or bad, and I am trying to improve their performance.
I have looked at the documentation several times and can't seem to find a way to bind the values in an IN clause. Is there anything else that can be done to improve IN-clause performance in OCI?
    Thanks a lot

    Hi,
    You can refer to the following URL on asktom website for detailed explanation about IN & Exists
    http://asktom.oracle.com/pls/ask/f?p=4950:8:3465613697817080707::NO::F4950_P8_DISPLAYID,F4950_P8_B:953229842074,Y
    HTH
    Cheers,
    Giridhar Kodakalla
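One common technique, sketched here with an invented collection type name, is to bind the whole value list as a single collection and join against it via TABLE(), instead of concatenating literals into the IN list. The statement text then stays constant regardless of how many values are passed, so it can be parsed once and reused:

```sql
-- One-time setup: a SQL collection type to hold the IN-list values.
CREATE TYPE num_list AS TABLE OF NUMBER;

-- The entire list is one bind variable (:id_list), bound from OCI as a
-- collection; the CARDINALITY hint gives the optimizer a rough row estimate.
SELECT t.*
FROM   my_table t
WHERE  t.id IN (SELECT /*+ CARDINALITY(v 10) */ column_value
                FROM   TABLE(:id_list) v);
```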

  • Problem with Associative Array & Bulk Collect

    Hi All,
I encountered a compilation error when I executed the following block. The improper reference to elements of the associative array (at var.party_name (r) in the code) is causing the problem. Please help me correct it.
    Regards,
    Kashif
    DECLARE
   TYPE record1 IS RECORD (
      party_name           racsf_cust_pre_staging_int.party_name%TYPE,
      shipping_address_1   racsf_cust_pre_staging_int.shipping_address_1%TYPE,
      cust_acct_site_id    racsf_cust_pre_staging_int.cust_acct_site_id%TYPE
   );
   TYPE typ_cust_pre_staging_int IS TABLE OF record1
      INDEX BY BINARY_INTEGER;
       var   typ_cust_pre_staging_int;
    BEGIN
       SELECT party_name,
              shipping_address_1,
              cust_acct_site_id
       BULK COLLECT INTO var
         FROM racsf_cust_pre_staging_int;
       FORALL r IN var.FIRST .. var.LAST
          UPDATE racsf_cust_pre_staging_int
             SET party_name = var.party_name (r),
                 shipping_address_1 = var.shipping_address_1 (r)
           WHERE cust_acct_site_id = var.cust_acct_site_id (r);
    END;

>
I thought if I bulk collect and update all records, it will avoid context switching b/w SQL & PL/SQL engines and decrease the execution time.
>
Right idea to get rid of the FOR loop.
But if you can do it in a single SQL statement and avoid collections, that is best.
How many records are in the collection?
Use LIMIT if necessary, as already mentioned; 500 to 1000 might be a sensible limit.
I believe this is representative of your current error (I'm using an insert and a new table for simplicity):
    SQL> create table t
      2  (col1 number
      3  ,col2 number);
    Table created.
    SQL>
    SQL> declare
      2   cursor c_t
      3   is
      4     select rownum col1, rownum col2
      5     from   dual
      6     connect by rownum <= 10000;
      7   type tt_t is table of t%rowtype index by pls_integer;
      8   v_t  tt_t;
      9  begin
    10    open c_t;
    11    loop
    12        fetch c_t bulk collect into v_t limit 1000;
    13        exit when v_t.count = 0;
    14        forall i in 1 .. v_t.count
    15          insert into t
    16          (col1,col2)
    17          values
    18          (v_t.col1(i), v_t.col2(i));
    19    end loop;
    20  end;
    21  /
            (v_t.col1(i), v_t.col2(i));
    ERROR at line 18:
    ORA-06550: line 18, column 14:
    PLS-00302: component 'COL1' must be declared
    ORA-06550: line 18, column 27:
    PLS-00302: component 'COL2' must be declared
    ORA-06550: line 18, column 27:
    PLS-00302: component 'COL2' must be declared
    ORA-06550: line 18, column 23:
    PL/SQL: ORA-00904: "V_T"."COL2": invalid identifier
    ORA-06550: line 15, column 9:
    PL/SQL: SQL Statement ignored
SQL>
If you change to the following you will run into PLS-00436 - a right PITA.
    SQL> declare
      2   cursor c_t
      3   is
      4     select rownum col1, rownum col2
      5     from   dual
      6     connect by rownum <= 10000;
      7   type tt_t is table of t%rowtype index by pls_integer;
      8   v_t  tt_t;
      9  begin
    10    open c_t;
    11    loop
    12        fetch c_t bulk collect into v_t limit 1000;
    13        exit when v_t.count = 0;
    14        forall i in 1 .. v_t.count
    15          insert into t
    16          (col1,col2)
    17          values
    18          (v_t(i).col1, v_t(i).col2);
    19    end loop;
    20  end;
    21  /
            (v_t(i).col1, v_t(i).col2);
    ERROR at line 18:
    ORA-06550: line 18, column 10:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
    ORA-06550: line 18, column 10:
    PLS-00382: expression is of wrong type
    ORA-06550: line 18, column 23:
    PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
    ORA-06550: line 18, column 23:
    PLS-00382: expression is of wrong type
    ORA-06550: line 18, column 10:
    PL/SQL: ORA-22806: not an object or REF
    ORA-06550: line 15, column 9:
PL/SQL: SQL Statement ignored
For workarounds other than using single arrays per column, see Adrian Billington's article here:
    http://www.oracle-developer.net/display.php?id=410
    Easiest solution is one array per column:
    SQL> declare
      2   cursor c_t
      3   is
      4     select rownum col1, rownum col2
      5     from   dual
      6     connect by rownum <= 10000;
      7   type tt_t1 is table of t.col1%type index by pls_integer;
      8   type tt_t2 is table of t.col2%type index by pls_integer;
      9   v_t1  tt_t1;
    10   v_t2  tt_t2;
    11  begin
    12    open c_t;
    13    loop
    14        fetch c_t bulk collect into v_t1,v_t2 limit 1000;
    15        exit when v_t1.count = 0;
    16        forall i in 1 .. v_t1.count
    17          insert into t
    18          (col1,col2)
    19          values
    20          (v_t1(i), v_t2(i));
    21    end loop;
    22  end;
    23  /
    PL/SQL procedure successfully completed.
SQL>
The best solution would have been a single piece of SQL:
    SQL>    insert into t
      2     (col1,col2)
      3     select rownum col1, rownum col2
      4     from   dual
      5     connect by rownum <= 10000
      6  /
    10000 rows created.
    SQL>
