Sub-SELECT in Bulk INSERT - Performance Clarification

I have two tables, emp_new and emp_old. I need to load all data from emp_old into emp_new. There is a transaction_id column in emp_new whose value needs to be fetched from a main_transaction table, which also includes a region_code column. Something like -
TRANSACTION_ID REGION_CODE
100 US
101 AMER
102 APAC
My bulk insert query looks like this -
INSERT INTO emp_new
(col1,
col2,
transaction_id)
SELECT
col1,
col2,
(select transaction_id from main_transaction where region_code = 'US')
FROM emp_old
There will be millions of rows to load this way. I would like to know whether the sub-SELECT that fetches the transaction_id gets re-executed for every row, which would be very costly, and I am looking for a way to avoid that. The main_transaction table is pre-loaded and its values are not going to change. Is there a way (via some hint) to indicate that the sub-SELECT should not be re-executed for every row?
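For what it's worth, one way to sidestep the question entirely is to express the lookup as a join rather than a scalar sub-SELECT. A minimal sketch (same table and column names as above, not a tested statement):
INSERT INTO emp_new (col1, col2, transaction_id)
SELECT o.col1,
       o.col2,
       t.transaction_id
FROM emp_old o
CROSS JOIN (SELECT transaction_id
            FROM main_transaction
            WHERE region_code = 'US') t;
Whether this is actually faster than the scalar sub-SELECT (which Oracle can often cache per distinct input value) is something to verify against the real data and plan.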
On a different note, the execution plan of the above bulk INSERT looks like -
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | INSERT STATEMENT | | 11M| 54M| 6124 (4)|
| 1 | INDEX FAST FULL SCAN| EMPO_IE2_IDX | 11M| 54M| 6124 (4)|
EMPO_IE2_IDX -> Index on emp_old
I'm surprised to see that the table main_transaction does not feature in the execution plan at all. Does this mean that the sub-SELECT will not be re-executed for every row? However, at least for the first read, I would expect the table to appear in the plan.
Can someone help me understand this?
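One simple way to see the plan that really ran (rather than the estimated one) is to execute the statement once and then pull the cursor's plan, e.g.:
select * from table(dbms_xplan.display_cursor);
The reply below illustrates how this can differ from the estimated plan shown by EXPLAIN PLAN / AUTOTRACE.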

Dear,
> From 10.2, AUTOTRACE uses DBMS_XPLAN anyway
Yes, but with the remark that it uses the estimated part of DBMS_XPLAN, i.e. explain plan for + select * from table(dbms_xplan.display). Isn't it?
mhouri> cl scr
mhouri> desc t
Name                    Null?    Type
ID                               VARCHAR2(10)
NAME                             VARCHAR2(100)
mhouri> set linesize 150
mhouri> var x number
mhouri> exec :x:=99999
PL/SQL procedure successfully completed.
mhouri> explain plan for
  2  select sum(length(name)) from t where id >  :x;
Explained.
mhouri> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT                                                                                                                                    
Plan hash value: 1188118800                                                                                                                          
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                
|   0 | SELECT STATEMENT             |      |     1 |    23 |     4   (0)| 00:00:01 |                                                                
|   1 |  SORT AGGREGATE              |      |     1 |    23 |            |          |                                                                
|   2 |   TABLE ACCESS BY INDEX ROWID| T    |    58 |  1334 |     4   (0)| 00:00:01 |                                                                
|*  3 |    INDEX RANGE SCAN          | I    |    11 |       |     2   (0)| 00:00:01 |                                                                
PLAN_TABLE_OUTPUT                                                                                                                                    
Predicate Information (identified by operation id):                                                                                                  
   3 - access("ID">:X)                                                                                                                               
15 rows selected.
mhouri> set autotrace on
mhouri> select sum(length(name)) from t where id >  :x;
SUM(LENGTH(NAME))                                                                                                                                    
            10146                                                                                                                                    
Execution Plan
Plan hash value: 1188118800                                                                                                                          
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                
|   0 | SELECT STATEMENT             |      |     1 |    23 |     4   (0)| 00:00:01 |                                                                
|   1 |  SORT AGGREGATE              |      |     1 |    23 |            |          |                                                                
|   2 |   TABLE ACCESS BY INDEX ROWID| T    |    58 |  1334 |     4   (0)| 00:00:01 |                                                                
|*  3 |    INDEX RANGE SCAN          | I    |    11 |       |     2   (0)| 00:00:01 |                                                                
Predicate Information (identified by operation id):                                                                                                  
   3 - access("ID">:X)                                                                                                                               
Statistics
          0  recursive calls                                                                                                                         
          0  db block gets                                                                                                                           
         15  consistent gets                                                                                                                         
          0  physical reads                                                                                                                          
          0  redo size                                                                                                                               
        232  bytes sent via SQL*Net to client                                                                                                        
        243  bytes received via SQL*Net from client                                                                                                  
          2  SQL*Net roundtrips to/from client                                                                                                       
          0  sorts (memory)                                                                                                                          
          0  sorts (disk)                                                                                                                            
          1  rows processed                                                                                                                          
mhouri> set autotrace off
mhouri> select sum(length(name)) from t where id >  :x;
SUM(LENGTH(NAME))                                                                                                                                    
            10146                                                                                                                                    
mhouri> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT                                                                                                                                    
SQL_ID  7zm570j6kj597, child number 0                                                                                                                
select sum(length(name)) from t where id >  :x                                                                                                       
Plan hash value: 1842905362                                                                                                                          
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                          
|   0 | SELECT STATEMENT   |      |       |       |     5 (100)|          |                                                                          
|   1 |  SORT AGGREGATE    |      |     1 |    23 |            |          |                                                                          
|*  2 |   TABLE ACCESS FULL| T    |    59 |  1357 |     5   (0)| 00:00:01 |                                                                          
Predicate Information (identified by operation id):                                                                                                  
   2 - filter(TO_NUMBER("ID")>:X)                                                                                                                    
19 rows selected.
mhouri> spool off
Best regards
Mohamed Houri

Similar Messages

  • How to get current month from filename and bulk insert from text file into table?

    I set up some dynamic SQL to help me bulk copy data from a text file to a table.  This works fine for files that come in every day; I get the previous day’s data, based on the file name that’s placed
    in the folder.  That’s why I’m using the ‘-1’.  The dates will look like this: '20140131', so I'm using type 112.
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\system.local\ms\london\FTP\' + convert(varchar, getdate()-1, 112) + '_INDEXPRICES_EOM.SPC'''
    declare @cmd1 nvarchar(1000)
    print (@cmd1)
    select @cmd1 = 'bulk insert [dbo].[SB_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    I think the syntax will be somewhat similar to this:
    YEAR(date_column)=YEAR(getdate()) AND MONTH(date_column)=MONTH(getdate())
    I’m not totally sure how to incorporate that into my current syntax.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    I tried a couple versions of this.
    Declare @StartDate Date, @EndDate Date
    Select @StartDate = convert(varchar, getdate()-28, 112), @EndDate = convert(varchar, getdate()-1, 112)
    BEGIN
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\ms\london\FTP\' + ''' between ''' + Convert(Varchar(10), @StartDate, 101) + ''' and ''' + Convert(Varchar(10), @EndDate, 101) + '''_SP.SPC'''
    declare @cmd1 nvarchar(1000)
    print (@cmd1)
    select @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')'
    print(@cmd1)
    exec (@cmd1)
    END
    Here’s the string:
    bulk insert [dbo].[SPBMI_Monthly] from '\\ms\london\FTP\' between '02/03/2014' and '03/02/2014'_SP.SPC' with (FIELDTERMINATOR = '\t', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR='0x0a')
    The error message I keep getting is:
    Msg 156, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'between'.
    Msg 319, Level 15, State 1, Line 1
    Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
    I feel like I’m already pushing this thing to the limit. 
    Maybe this last part isn’t possible.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
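    For what it's worth, a sketch of one possible direction (reusing the names from the attempt above): keep BULK INSERT pointed at a single file per execution and loop over the date range, since the file path itself cannot contain a BETWEEN condition.
    -- Sketch only: assumes one file per day named yyyymmdd_SP.SPC; a missing daily file will raise an error.
    DECLARE @d date = DATEADD(DAY, -28, GETDATE());
    DECLARE @EndDate date = DATEADD(DAY, -1, GETDATE());
    DECLARE @fullpath1 varchar(1000), @cmd1 nvarchar(1000);
    WHILE @d <= @EndDate
    BEGIN
        SELECT @fullpath1 = '''\\ms\london\FTP\' + CONVERT(varchar(8), @d, 112) + '_SP.SPC''';
        SELECT @cmd1 = 'bulk insert [dbo].[SPBMI_Monthly] from ' + @fullpath1
                     + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 5, LASTROW = 675, ROWTERMINATOR=''0x0a'')';
        PRINT @cmd1;
        EXEC (@cmd1);
        SET @d = DATEADD(DAY, 1, @d);
    END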

  • Bug in Bulk Insert?

    So, I'm working with a client today and we discovered some missing records from his file.  I looked into it a bit; the first 5 rows were being truncated.  I thought that is bizarre because the data starts on row 7.  As such, I set up my Bulk
    Insert like this.
    declare @fullpath1 varchar(1000)
    select @fullpath1 = '''\\london-sql\FTP\' + convert(varchar, getdate()- @intFlag , 112) + '_SPGT.SPL'''
    declare @cmd1 nvarchar(1000)
    select @cmd1 = 'bulk insert [dbo].[SPGT_Daily] from ' + @fullpath1 + ' with (FIELDTERMINATOR = ''\t'', FIRSTROW = 7, ROWTERMINATOR=''0x0a'')'
    exec (@cmd1)
    So, I open this file, which comes from a Unix system, and I get this.
    So, this stock data actually starts on ROW2, not ROW5.  We figured it out pretty quick, and we're all set now.  I'm just not sure why Excel, Wordpad, Notepad, etc, would all show the data starting on ROW7 (field names are in ROW6), and Bulk Insert
    thinks the data is in ROW2 (field names are in ROW1).
    This seems very strange.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    BULK INSERT is a tool to read binary files. In difference to Notepad etc., it is not predisposed towards "lines", but is completely agnostic to the matter. You tell BULK INSERT to insert data into a six-column table with tab as field delimiter and
    \n as row terminator. BULK INSERT starts reading bytes until it finds a tab. First field, check. It continues reading bytes until the next tab. Check. And when it comes to the sixth field it reads bytes until it sees a newline. Check. If it happens to see
    a newline while looking for a tab, that is just another byte of the data.
    If you apply this way of thinking, you will find that BULK INSERT considered the first six lines to be a single record. (Which it unfortunately calls a row.)
    Erland Sommarskog, SQL Server MVP, [email protected]
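    To make the byte-oriented view above concrete, here is a small hypothetical illustration: with a two-column, tab-delimited target, header lines that contain no tab are simply more bytes of the first field, so FIRSTROW counts logical records, not visual lines.
    -- Hypothetical staging table and file path, for illustration only.
    CREATE TABLE dbo.StagingTwoCols (c1 varchar(4000), c2 varchar(4000));
    BULK INSERT dbo.StagingTwoCols
    FROM '\\london-sql\FTP\sample.txt'
    WITH (FIELDTERMINATOR = '\t',
          ROWTERMINATOR = '0x0a',
          FIRSTROW = 2);   -- skips the first logical record, however many text lines it spans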

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third party application reporting slow insertion performance. I traced the session and found that most of the elapsed time for one insert execution is "SQL*Net more data from client"; it appears bulk binding is being used here because one execution has 200 rows inserted. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this, and what other possible directions should I explore?
    Here is the trace report from a 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution, no problem at all. The network folks confirm that the network should not be an issue either; ping time from the app server to the DB server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call     count    cpu  elapsed  disk  query  current  rows
    -------  -----  -----  -------  ----  -----  -------  ----
    Parse        1   0.00     0.00     0      0        0     0
    Execute      1   0.02    14.29     1     94     2565   200
    Fetch        0   0.00     0.00     0      0        0     0
    total        2   0.02    14.29     1     94     2565   200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                           Times Waited  Max. Wait  Total Waited
    ----------------------------------------  ------------  ---------  ------------
    SQL*Net more data from client                       28       6.38         14.19
    db file sequential read                              1       0.02          0.02
    SQL*Net message to client                            1       0.00          0.00
    SQL*Net message from client                          1       0.00          0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were data inside a table, which gave me very fast insertion into the database. But I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.
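    For reference, a minimal sketch (against a hypothetical two-column table) of the kind of PL/SQL bulk-bind test mentioned in the question, which binds all 200 rows into a single execution:
    -- Assumes: CREATE TABLE demo_tab (id NUMBER, name VARCHAR2(100));
    DECLARE
      TYPE t_ids IS TABLE OF NUMBER;
      TYPE t_names IS TABLE OF VARCHAR2(100);
      l_ids t_ids := t_ids();
      l_names t_names := t_names();
    BEGIN
      FOR i IN 1 .. 200 LOOP
        l_ids.EXTEND; l_ids(l_ids.LAST) := i;
        l_names.EXTEND; l_names(l_names.LAST) := 'row ' || i;
      END LOOP;
      FORALL i IN 1 .. l_ids.COUNT
        INSERT INTO demo_tab (id, name) VALUES (l_ids(i), l_names(i));
      COMMIT;
    END;
    /
    If this runs quickly while the JDBC path spends its time in "SQL*Net more data from client", the driver-side batch size and the SQL*Net SDU settings are worth a look.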

  • Number of rows inserted is different in bulk insert using select statement

    I am facing a problem in bulk insert using SELECT statement.
    My sql statement is like below.
    strQuery :='INSERT INTO TAB3
    (SELECT t1.c1,t2.c2
    FROM TAB1 t1, TAB2 t2
    WHERE t1.c1 = t2.c1
    AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
    EXECUTE IMMEDIATE strQuery ;
    These SQL statements are inside a procedure. And this procedure is called from C#.
    The number of rows returned by the SELECT query is 70.
    On the very first call of this procedure, the number of rows inserted using strQuery is 70.
    But on the next call (in the same transaction) of the procedure, the number of rows inserted is only 50.
    And if we keep calling this procedure, it sometimes inserts 70 rows and sometimes 50; it is inconsistent.
    My initial analysis found that the default optimizer mode is ALL_ROWS. When I changed the optimizer mode to RULE, the issue did not occur.
    Has anybody faced this kind of issue?
    Can anyone tell me what the reason for this issue might be, or suggest any other workaround?
    I am using Oracle 10g R2 version.
    Edited by: user13339527 on Jun 29, 2010 3:55 AM
    Edited by: user13339527 on Jun 29, 2010 3:56 AM

    You have very likely concurrent transactions on the database:
    >
    By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
    >
    If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
    Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
    You can try to run your test with:
    set transaction isolation level serializable;
    If the problem is not solved, you need to search for possible Oracle bugs on My Oracle Support with keywords like:
    wrong results 10.2
    Edited by: P. Forstmann on 29 June 2010 13:46

  • Insert Using SELECT & Sub-SELECT

    Hi All,
    I am trying to insert records into a table using SELECT statement. The SELECT statement has a Sub-SELECT statement as follows:
    INSERT INTO table1(c1,c2,c3,c4,c5)
    SELECT c1,c2, (SELECT MAX(C3)+a1.c3
    FROM table1
    WHERE c1 = var1
    AND c2 = a1.c2
    GROUP BY c3),c4,c5
    FROM table1 a1
    WHERE c1 = var1
    The above works fine when run from SQL*Plus but gives a compilation error when included in a PL/SQL package.
    I am using Oracle 8.1.7.
    Could anyone please tell me if I have missed something?
    Thanks,
    Satyen.

    In 8i, you will need to use dynamic SQL to execute this statement because the PL/SQL parser does not understand all SQL syntax (including SELECT in a column list).
    execute immediate
      'INSERT INTO table1(c1,c2,c3,c4,c5)' ||
      ' SELECT c1, c2, (SELECT MAX(C3)+a1.c3 FROM table1 WHERE c1 = :var1 AND c2 = a1.c2 GROUP BY c3), c4, c5' ||
      '   FROM table1 a1 WHERE c1 = :var1' using var1, var1;

  • Complex query - improve performance with nested arrays, bulk insert....?

    Hello, I have an extremely complicated query, that has a structure similar to:
    Overall Query
    ---SubQueryA
    -------SubQueryB
    ---SubQueryB
    ---SubQueryC
    -------SubQueryA
    The subqueries themselves are slow, and having to run them multiple times is much too slow! Ideally, I would be able to run each subquery once and then reuse the results. I cannot use standard Oracle tables, and I would need to keep the results of the subqueries in memory.
    I was thinking I write a pl/sql script that did the subqueries at the beginning and stored the results in memory. Then in the overall query, I could loop through my results in memory, and join the results of the various subqueries to one another.
    some questions:
    -what is the best data structure to use? I've been looking around and there are nested arrays, and there's the bulk insert functionality, but I'm not sure which is the best to use
    -the advantage of the method I'm suggesting is that I only have to do each subquery once. But, when I start joining the results of the subquery to one another, will I take a performance hit? will Oracle not be able to optimize the joins?
    thanks in advance!
    Coop

    > I cannot use standard oracle tables
    What does this mean? If you have subqueries, I assume you have tables to drive them. You're in an Oracle forum, so I assume the tables are Oracle tables.
    If so, you can look into the WITH clause; it can 'cache' the query results for you and reuse them multiple times, which is also helpful in making large queries with many subqueries more readable.
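    A minimal sketch of that idea with hypothetical table and column names; the MATERIALIZE hint (undocumented, but widely used) encourages Oracle to evaluate each factored subquery once and reuse the result:
    WITH sub_a AS (
      SELECT /*+ MATERIALIZE */ dept_id, SUM(amount) AS total_a
      FROM sales_a
      GROUP BY dept_id
    ),
    sub_b AS (
      SELECT /*+ MATERIALIZE */ dept_id, SUM(amount) AS total_b
      FROM sales_b
      GROUP BY dept_id
    )
    SELECT a.dept_id, a.total_a, b.total_b
    FROM sub_a a
    JOIN sub_b b ON b.dept_id = a.dept_id;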

  • BULK INSERT into View w/ Instead Of Trigger - DML ERROR LOGGING Issue

    Oracle 10.2.0.4
    I cannot figure out why I cannot get bulk insert errors to aggregate and allow the insert to continue when bulk inserting into a view with an Instead Of trigger. Whether I use the LOG ERRORS clause or SQL%BULK_EXCEPTIONS, the insert works until it hits the first exception and then exits.
    Here's what I'm doing:
    1. I'm bulk inserting into a view with an Instead Of Trigger on it that performs the actual updating on the underlying table. This table is a child table with a foreign key constraint to a reference table containing the primary key. In the Instead Of Trigger, it attempts to insert a record into the child table and I get the following exception: 5:37:55 ORA-02291: integrity constraint (FK_TEST_TABLE) violated - parent key not found, which is expected, but the error should be logged in the table and the rest of the inserts should complete. Instead the bulk insert exits.
    2. If I change this to bulk insert into the underlying table directly, it works, all errors get put into the error logging table and the insert completes all non-exception records.
    Here's the "test" procedure I created to test my scenario:
    View: V_TEST_TABLE
    Underlying Table: TEST_TABLE
    PROCEDURE BulkTest
    IS
    TYPE remDataType IS TABLE of v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
    varRemData remDataType;
    begin
    select /*+ DRIVING_SITE(r)*/ *
    BULK COLLECT INTO varRemData
    from TEST_TABLE@REMOTE_LINK
    where effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
    and terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
    BEGIN
    FORALL idx IN varRemData.FIRST .. varRemData.LAST
    INSERT INTO v_TEST_TABLE VALUES varRemData(idx) LOG ERRORS INTO dbcompare.ERR$_TEST_TABLE ('INSERT') REJECT LIMIT UNLIMITED;
    EXCEPTION WHEN others THEN
    DBMS_OUTPUT.put_line('ErrorCode: '||SQLCODE);
    END;
    COMMIT;
    end;
    I've reviewed Oracle's documentation on both DML logging tools and neither has any restrictions (at least that I can see) that would prevent this from working correctly.
    Any help would be appreciated....
    Thanks,
    Steve

    Thanks, obviously this is my first post, I'm desperate to figure out why this won't work....
    This code I sent is only a test proc to try and troubleshoot the issue; the version with the debug statement is only there to capture the insert failing and not aggregating the errors, and that won't be in the real proc.....
    Thanks,
    Steve
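    For completeness, a sketch of the FORALL ... SAVE EXCEPTIONS variant, using the same hypothetical objects as the test proc above. Without the SAVE EXCEPTIONS keyword a FORALL stops at the first error, so this is worth ruling out:
    DECLARE
      bulk_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
      TYPE remDataType IS TABLE OF v_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
      varRemData remDataType;
    BEGIN
      SELECT * BULK COLLECT INTO varRemData
      FROM TEST_TABLE@REMOTE_LINK
      WHERE effectiveday < to_date('06/16/2012 04','mm/dd/yyyy hh24')
      AND terminationday > to_date('06/14/2012 04','mm/dd/yyyy hh24');
      BEGIN
        FORALL idx IN varRemData.FIRST .. varRemData.LAST SAVE EXCEPTIONS
          INSERT INTO v_TEST_TABLE VALUES varRemData(idx);
      EXCEPTION
        WHEN bulk_errors THEN
          FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line('Row ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                                 ' failed with ORA-' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
          END LOOP;
      END;
      COMMIT;
    END;
    /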

  • BCP-style bulk insert from remote C++ ODBC Native client application

    I am trying to find documentation or sample code for performing bulk inserts into SQL Server 2012 from a remote client using the ODBC Native Client driver from Linux.  We currently perform INSERT statements on blocks of data, wrapping them in BEGIN/COMMIT,
    and achieve roughly half the throughput of bcp reading from a delimited text file.  While there are many web pages talking about bulk inserts via the native driver, this page (http://technet.microsoft.com/en-us/library/ms130792.aspx) seems closest to
    what I'm after but doesn't go into any detail or give API calls.  The referenced header file is just a bunch of options and constants, so presumably one gains access to bulk functions via the standard ODBC mechanism; the question is how.
    For clarity, I am NOT interested in:
    BULK INSERT: because it requires a server-side data file or a UNC path with appropriate permissions (doesn't work from Linux)
    INSERT ... SELECT * FROM OPENROWSET(BULK...): same problem as above
    IRowsetFastload: OLEDB, but I need ODBC on Linux.
    Basically, I want to emulate BCP.  I don't want to *run* BCP because it requires landing data to disk. 
    Thanks
    john
    John Lilley Chief Architect RedPoint Global Inc.

    Other than block inserts within BEGIN/COMMIT transaction blocks or running bcp, is there anything else that can be done on Linux?
    No other option from Linux that I am aware of.  The SQL Server Native Client ODBC driver also supports table-valued-parameters, which can be used to stream data but the Linux ODBC driver API doesn't have a way to do that either.  That said, I would
    still expect file-based BCP to significantly outperform inserts with large batches.  I've seen a rate of 100K/sec. with this technique, including the file create overhead but much depends on the particulars of your use case.
    Consider voting for this on Connect.  BCP is on the roadmap but no date yet: 
    https://connect.microsoft.com/SQLServer/SearchResults.aspx?SearchQuery=linux+odbc+bcp
    Also, I filed a Connect item for TVP support:
    https://connect.microsoft.com/SQLServer/feedback/details/874616/add-tvp-support-to-sql-server-odbc-driver-for-linux
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Simple insert or bulk insert

    Hi,
    I have a problem regarding bulk inserts.
    I have 4 tables, each with 16 partitions, and I am inserting nearly 200,000 records into each table.
    Previously I tried a bulk insert:
    open insert_tab1;
    loop
    fetch insert_tab1 bulk collect into type_insert_tab1 limit 40000; -- total fetched is nearly 200,000 records
    forall i in 1..type_insert_tab1.count
    insert into tab1 values type_insert_tab1(i);
    commit;
    exit when insert_tab1%notfound;
    end loop;
    I did a similar insert for the three other tables, with the commit statement, inserting approximately 200,000 records into each table.
    But I got a snapshot too old error. How can I modify this to reduce commits and buffer use, and lessen execution time?
    Or shall I just use:
    (col1,
    col7)
    select * from tab2;
    Thanks in advance

    > But I got snapshot too old error. How can I modify this to reduce commits and buffer use and lessen execution time?
    You can reduce the number of commits by taking the commit out of the loop.
    It might be worth looking at the execution plan of the cursor in case it is less efficient than it could be.
    @ user11087632:
    Direct path insert uses fewer system resources, not more.
    The target table would need to be defined as NOLOGGING to get the full benefit; also indexes will affect direct path performance, and any foreign key constraints or row-level triggers will make it silently revert to conventional insert. And of course you lose the logging.
    Edited by: William Robertson on Jun 4, 2009 11:23 PM
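    For what it's worth, a sketch of the single-statement route mentioned above, written as a direct-path insert with one commit at the end (the column list is illustrative):
    -- APPEND makes it a direct-path insert; indexes, triggers and foreign keys on tab1 can still slow it down.
    INSERT /*+ APPEND */ INTO tab1 (col1, col2, col3, col4, col5, col6, col7)
    SELECT col1, col2, col3, col4, col5, col6, col7
    FROM tab2;
    COMMIT;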

  • Bulk Insert, Domain Based Attributes

    Hi
    I have a product model similar to the ProductSample model that ships with MDS, i.e. Product Category and Product Sub Category entities. Both of these entities have been set up to automatically generate a code, as a code does not exist in the business, only a description.
    The product entity has both the category and sub category attributes as domain-based attributes which results in storing the code physically and also displaying the description.
    For a bulk insert scenario where the business has a number of new products that they want to add (due to a new range), I was suggesting that they use the Excel Add-in. This would allow them to cut and paste the data into the product entity. However, as the category
    and sub category are domain-based attributes, they will need to know the ID, and they will only know the name.
    From an MDS functionality point of view, is it possible to get it to look up the code from the name?
    My current understanding on how to achieve this would mean I would need to have both code and name attributes on the Product entity for Category and Sub. The name attributes would be free text and the code attribute would be populated from a business rule.
    Downside to this approach is that you lose out on the nice dropdown pick list and the user does not know what a valid entry is as they can no longer select one.
    How is this usually implemented or handled in MDS?
    Cheers
    Kevin

    Hi Kevin,
    Another approach (if the one above does NOT suit you).
    Once again, as Reza suggested and  I agree with him totally, this is a job for SSIS.
    For what it's worth...
    Whilst I understand what you are trying to do, may I suggest another approach. What concerns me is users adding data willy-nilly and that YOU have no control as to what is going on. Secondly, what is more disconcerting is the thought of losing relational
    integrity, especially if you are using derived hierarchies within MDS. 
    I would handle this in a different manner, using SSIS and a filesystem watcher.
    Let the users submit their spreadsheets (with their updates) to a common directory on the server. Implement a .NET FileSystemWatcher (this takes 10 minutes for a newbie). This file system watcher will launch a DOS batch file on the arrival of a spreadsheet
    within the given directory. The DOS batch file fires dtexec to start an SSIS package. This package, together with SQL Server procedures, will ensure that the correct codes are obtained and the attribute data is correctly inserted into the correct entities (with
    the correct relationships).
    While this sounds vague, I do it all the time. I am more than prepared to help you get going, should you wish any assistance. I KNOW that this is not the answer that you are looking for, HOWEVER it is perhaps the most effective.
    sincerest regards
    Steve Simon SQL Server MVP
    [email protected]

  • Max limit for bulk insert.

    Hi friends,
    We have 100 million records in a table on the production database. We are planning to move the data to a test environment for performance tuning of the database and queries.
    We have created a DB Link between these DB's to transfer the data between the database.
    We are planning to move the data in the following fashion:
    insert into tab1(.....)
    select * from tab1@prod_db_link;
    do we have any limit for bulk insert?

    > We are planning to move the data to test environment for performance tuning of the database and
    queries.
    Flawed premise. What makes you think that the test environment will match production with 100% accuracy? - as that is what you need in order to
    a) identify the performance bottlenecks
    b) solve these
    You are very much mistaken if you think you can identify actual performance issues happening on prod by mucking about on a test environment that just happens to have the same data volume.
    Even data volume alone is meaningless as a copy of the data is logical - any potential issues with pctfree and pctused will not be reflected. Any potential hot spots on disk will not be reflected. The copy will not have the same number of segments and extents. Etc. Etc.
    And this is just the data.. never mind numerous other issues ranging from actual hardware and operating system to Oracle instance configuration and production load and processing.

  • ODBC, bulk inserts and dynamic SQL

    I am writing an application running on Windows NT 4 and using the Oracle ODBC driver (8.01.05.00) that inserts many rows at a time (10000+) into an Oracle 8i database.
    At present, I am using a stored procedure to insert each row into the database. The stored procedure uses dynamic SQL because I can only determine the table and field names at run time.
    Due to the large number of records, it tends to take a while to perform all the inserts. I have tried a number of solutions such as using batches of SQL statements (e.g. "INSERT...;INSERT...;INSERT..."), but the Oracle ODBC driver only seems to act on the first statement in the batch.
    I have also considered using the FORALL statement and the SQL*Loader utility.
    My problem with FORALL is that I'm not sure it works on dynamic SQL statements and, even if it did, how do I pass an array of statements to the stored procedure?
    I ruled out SQL*Loader because I could not find a way to invoke it from an ODBC statement. Secondly, it requires the spawning of a new process.
    What I am really after is something similar to the SQL Server (forgive me!) BULK INSERT statement where you can simply create an input file with all the records you want to insert, and pass it along in an ODBC statement such as "BULK INSERT <filename>".
    Any ideas??

    Hi,
    I faced this same situation years ago (Oracle 7.2!) and had the following alternatives.
    1) Use a 3rd party tool such as Sagent or CA Info pump (very pricey $$$)
    2) Use VisualC++ and OCI to hook into the array insert routines (there are examples of these in the Oracle Home).
    3) Use SQL*Loader (the best performance, but no real control of what's happening).
    I ended up using (2) and used the Rogue Wave dbtools.h++ library to speed up the development.
    These days, I would also suggest you take a look at Perl on NT (www.activestate.com) and the DBlib modules at www.perl.org. I believe they will also do bulk loading.
    Your problem is that your program is using Oracle ODBC, when you should be using Oracle OCI for best performance.

    Hi Experts, I have tried to search the thread for locking approval on MSS, but couldn't find anything. We would like to restrict the Manager from approving time for a particular period. Say (One Week). How can we restrict the same? Is it done through