Fast insert

I need to migrate data from one table to another table. The source table has around 6 million rows. I am using the insert below to achieve it, but it is taking a very long time. Is there any way to reduce this time? Please advise.
INSERT      /*+ append */ INTO t1
    SELECT      bl_no_uid, ca_seq, 0, updt_stamp, updt_user,
              orgl_stamp, orgl_user, cncl_flg, bl_no,
                vsl_curr, voy_curr, line_code, SYSDATE,
                '?????', '-', mol_ca_no, ca_seq_offset, vsl_seq,
                vsl_code_orgl, voy_code_orgl, NULL, bl_dt,
                aprv_dt, first_load_port, NULL,
                first_load_port_reg, NULL, last_dsch_port,
                NULL, last_dsch_port_reg, NULL,
                last_dsch_port_eta, NULL, NULL,
                NULL, load_port, NULL, dsch_port,
                NULL, rev_ton,
                teus_void, NULL, NULL, NULL, NULL,
                NULL, NULL, NULL, NULL, NULL, bkg_no,
                shp_code, cns_code, n1p_code, issue_ofc,
                place_rcpt, place_deliv, bkg_ofc, extract_sts,
                ctl_num, entry_ofc, ca_item_flg, ap_chg_flg,
                NULL, NULL
      FROM t2;

Rename table t1 to t1_tmp, then recreate t1 with a NOLOGGING CTAS:
create table t1 nologging as
    SELECT      bl_no_uid, ca_seq, 0, updt_stamp, updt_user,
              orgl_stamp, orgl_user, cncl_flg, bl_no,
                vsl_curr, voy_curr, line_code, SYSDATE,
                '?????', '-', mol_ca_no, ca_seq_offset, vsl_seq,
                vsl_code_orgl, voy_code_orgl, NULL, bl_dt,
                aprv_dt, first_load_port, NULL,
                first_load_port_reg, NULL, last_dsch_port,
                NULL, last_dsch_port_reg, NULL,
                last_dsch_port_eta, NULL, NULL,
                NULL, load_port, NULL, dsch_port,
                NULL, rev_ton,
                teus_void, NULL, NULL, NULL, NULL,
                NULL, NULL, NULL, NULL, NULL, bkg_no,
                shp_code, cns_code, n1p_code, issue_ofc,
                place_rcpt, place_deliv, bkg_ofc, extract_sts,
                ctl_num, entry_ofc, ca_item_flg, ap_chg_flg,
                NULL, NULL
      FROM t2
      union all
      select field1, field2, ..... from t1_tmp;
Afterwards, create all the indexes.
If your t1 table is a master table which has many references, then you will have to try the NOLOGGING and APPEND hints instead.
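A minimal sketch of that hint-based route, assuming t1 can be switched to NOLOGGING for the load, the session is allowed to run parallel DML, and nothing else needs the new rows before the commit (NOLOGGING direct-path loads are not recoverable from the archived redo, so take a backup afterwards):
ALTER TABLE t1 NOLOGGING;
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t1, 4) */ INTO t1
SELECT /*+ PARALLEL(t2, 4) */ ...          -- same select list as above
  FROM t2;

COMMIT;
ALTER TABLE t1 LOGGING;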
Regards
Singh

Similar Messages

  • Fast insert 5000 rows from memory

    Hi,
    I have a user loading Excel data on his screen - it's via a webserver application. So now he has a couple of thousand rows in memory.
    What would be a fast way of inserting this data in a table?
    insert into target_table (a,b,c,d) values (...)
    * 10,000
    - a PL/SQL block with insert statements? Not good: PL/SQL blocks are limited in length.
    - separate insert statements?
    - a procedure that does an insert with bind variables? proc_insert (a,b,c,d)?
    It would be great if you can help me out.

    S11 wrote:
    The client can submit it as structured data, such as separate insert statements.
    What other way could he deliver this in to be faster than 5000 separate inserts?
    5000 separate insert statements are NOT slow.
    Observe - firing 5000 insert statements using PL/SQL:
    SQL> create table foo_tab(
      2          id      number,
      3          d       date,
      4          n       number
      5  );
    Table created.
    SQL>
    SQL> declare
      2          n       number;
      3  begin
      4          n := dbms_utility.get_time;
      5 
      6          for i in 1..5000
      7          loop
      8                  insert into foo_tab values( i, sysdate, dbms_utility.get_time );
      9          end loop;
    10 
    11          dbms_output.put_line( 'Total insert process: '||(to_char(dbms_utility.get_time-n)/100)||' seconds' );
    12  end;
    13  /
    Total insert process: .88 seconds
    PL/SQL procedure successfully completed.
    SQL>
    SQL> select
      2          (max(n)-min(n))/100             as TOTAL_SQL_SEC,
      3          count(*)/((max(n)-min(n))/100)  as ROWS_PER_SEC
      4  from foo_tab;
    TOTAL_SQL_SEC ROWS_PER_SEC
              .82   6097.56098
    SQL>
    During the time this loop was run, the database was basically able to deal with over 6000 inserts per second (from this single session).
    Sure, inserts from an external client will be slower, and through no fault of the database. The client needs to ship the data across the network infrastructure to the server. The most effective way of doing this is via bulk processing and packing as much data as possible per TCP packet (as governed by the MTU/Maximum Transmission Unit size of the network architecture used).
    But looking at the pure database side (which is effectively what the above sample code does), the server is able to deal with 5000 separate inserts in less than one second.
    So what is the most effective database interface? SQL. Plain and simple.
    How do you use SQL itself effectively? Besides coding optimal SQL, it means coding sharable SQL. The above example uses a single SQL cursor in the SQL Shared Pool to execute those 5000 insert statements.
    This is not rocket science. ;-)
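    For completeness, a minimal sketch of the array-binding variant: the client hands all rows to the database as collections and a single bulk-bound statement inserts them. The package name and collection types are hypothetical; it reuses the foo_tab table from the example above:
    CREATE OR REPLACE PACKAGE load_pkg IS
      TYPE num_tab  IS TABLE OF NUMBER;
      TYPE date_tab IS TABLE OF DATE;
      PROCEDURE insert_rows (p_ids IN num_tab, p_dates IN date_tab, p_vals IN num_tab);
    END load_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY load_pkg IS
      PROCEDURE insert_rows (p_ids IN num_tab, p_dates IN date_tab, p_vals IN num_tab) IS
      BEGIN
        -- One round trip, one shared cursor, all rows inserted in a single bulk bind.
        FORALL i IN 1 .. p_ids.COUNT
          INSERT INTO foo_tab (id, d, n) VALUES (p_ids(i), p_dates(i), p_vals(i));
      END insert_rows;
    END load_pkg;
    /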

  • Is there a better way to write a SQL insert script

    I am running out of ideas while working on one scenario and thought you guys would be able to guide me in the right way.
    So, the scenario: I have a table table1(fieldkey, DisplayName, type), where fieldkey is the primary key. This table has n rows, and the value of fieldkey in the nth row is n. So if we have 1000 records, then the last row has fieldkey = 1000.
    Below is my sample data.
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1001, 'COfficer',100);
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1002, 'PData',100);
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1003, 'EDate',100);
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1004, 'PData',200);
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1005, 'EDate',300);
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    (1006, 'Owner',400);
    This way of inserting rows with hardcoded values of fieldkey was creating problems when the same table was used by other developers for their own new functionality.
    So, I thought of using max(fieldkey) + 1 from that table in my insert script. This script file runs every time during software installation.
    I also thought of using a count to see whether a row with the same display name and type already exists in the table. If it exists, do not insert a new row; if not, insert one.
    It looks like I will have to run the count query every time before I insert a row.
    select max(fieldkey) + 1 into ll_fieldkey from table1;
    select count(*) into ll_count from table1 where ltrim(upper(displayname)) = 'COFFICER' and type = 100;
    if ll_count = 0 then
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    ( ll_fieldkey, 'COfficer',100);
    ll_fieldkey := ll_fieldkey + 1;
    end if;
    select count(*) into ll_count from table1 where ltrim(upper(displayname)) = 'PDATA' and type = 100;
    if ll_count = 0 then
    Insert into table1 (FIELDKEY,DISPLAYNAME,TYPE) values
    ( ll_fieldkey, 'PData',100);
    ll_fieldkey := ll_fieldkey + 1;
    end if;
    ... and so on for all the insert statements. So, I was wondering if there is some better way to handle this situation of inserting the data.
    Thank you

    Hi!
    To check whether the same display name and type already exist in the table, I would use a unique key; but then, instead of the IF statements, you should code some exception handlers. Hm... a unique key is, in my opinion, a better solution than
    coding the checks yourself.
    As for faster inserts, that is, smaller code: if there are no rules for the values and the values are fixed, you have to do these 100 inserts in any case. If you can "calculate" the values then maybe you can figure out some code, but the effect will be the same as a hundred insert statements one after another. A procedure with these 100 inserts is not such a bad solution.
    You can fill a nested table with the values and then use a FORALL ... SAVE EXCEPTIONS insert (see the sketch below); together with the above-mentioned unique key, maybe that will be better.
    T
    Edited by: ttt on 10.3.2010 13:01
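    A minimal sketch of that FORALL ... SAVE EXCEPTIONS idea, assuming a unique key exists on (DISPLAYNAME, TYPE); the seed values are hypothetical, and rows that hit the key are simply reported and skipped:
    DECLARE
      TYPE name_tab IS TABLE OF VARCHAR2(100);
      TYPE type_tab IS TABLE OF NUMBER;
      -- Hypothetical seed data; the real script would hold its 100-odd rows here.
      v_names    name_tab := name_tab('COfficer', 'PData', 'EDate', 'Owner');
      v_types    type_tab := type_tab(100, 100, 300, 400);
      v_base     NUMBER;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);
    BEGIN
      SELECT NVL(MAX(fieldkey), 0) INTO v_base FROM table1;
      -- One bulk statement; duplicates violate the unique key, are saved, the rest go in.
      FORALL i IN 1 .. v_names.COUNT SAVE EXCEPTIONS
        INSERT INTO table1 (fieldkey, displayname, type)
        VALUES (v_base + i, v_names(i), v_types(i));
    EXCEPTION
      WHEN dml_errors THEN
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          DBMS_OUTPUT.put_line('Skipped row '
            || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ': '
            || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
        END LOOP;
    END;
    /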

  • Does auto statistic slow down inserts?

    I found some code on the web to loop through my table's columns dropping indexes so I could do fast inserts overnight.  The table has five million records.
    The loop threw an error on one of the indexes:
    "Cannot drop the index 'ACCOUNTS._WA_Sys_0000000F_31EC6D26', because it does not exist or you do not have permission."
    which I gather is an auto-statistics index. Do I need to find a way to drop this index? Currently my INSERTs are still not terribly fast (even though all the indexes showing in the Indexes window have been dropped).

    The is_hypothetical column is zero, per this query:
    select * from sys.indexes AS i
        left join sys.index_columns AS ic on ic.object_id = i.object_id
        where i.object_id = object_id('Accounts')
    object_id             837578022
    name                  NULL
    index_id              0
    type                  0
    type_desc             HEAP
    is_unique             0
    data_space_id         1
    ignore_dup_key        0
    is_primary_key        0
    is_unique_constraint  0
    fill_factor           0
    is_padded             0
    is_disabled           0
    is_hypothetical       0
    allow_row_locks       1
    allow_page_locks      1
    has_filter            0
    filter_definition     NULL
    object_id             NULL   (sys.index_columns side; no matching row)
    index_id              NULL
    index_column_id       NULL
    column_id             NULL
    key_ordinal           NULL
    partition_ordinal     NULL
    is_descending_key     NULL
    is_included_column    NULL
    The id 837578022 is the same ID showing in the original query so I'm pretty sure it's the same index at issue.
    As long as it's not having a significant impact on my INSERTs, I have no quarrel with it. I guess I'll just presume the best, for now.

  • Why do these insert vary so much in performance?

    I have a table and a package similar to those shown in the DDL below.
    Table TABLE1 is populated in chunks of 10000 records from a remote database, through TABLE1_PKG,
    by receiving arrays of data for three of its fields and a scalar value for a set identifier
    in the column named NUMBER_COLUMN_3.
    I have two databases with following record count in the table:
         DATABASE1: 55862629
         DATABASE2: 64225247
    When I executed the procedure to move 50000 records to each of the two databases, it took 20 seconds to
    populate DATABASE1 and 150 seconds to populate DATABASE2.  The network was ruled out as a factor because I recorded
    in the package how long each of the five 10000-row chunks took to insert in each of the two databases, as follows:
    Records Being Inserted  Time it took in DATABASE1     Time it took in DATABASE2
    First  10000             3 seconds                    27 seconds
    Second 10000             4 seconds                    26 seconds
    Third  10000             6 seconds                    40 seconds
    Fourth 10000             4 seconds                    31 seconds
    Fifth  10000             4 seconds                    26 seconds
    When I look at the explain plan in both databases I see following:
    | Id  | Operation                | Name | Cost  |
    |   0 | INSERT STATEMENT         |      |     1 |
    |   1 |  LOAD TABLE CONVENTIONAL |      |       |
    My questions:
         1) Does the explain plan indicate that direct load was not used?
         2) If the answer to 1 is Yes, is it possible to use Direct Load or a faster insert method in this case?
         3) Any ideas what could be causing the 7.5 to 1 difference between the two databases?
    Please note that these two databases are non production so load is negligible.
    CREATE TABLE TABLE1 (
      TABLE1_ID                VARCHAR2(255)   NOT NULL,
      NUMBER_COLUMN_1          NUMBER,
      NUMBER_COLUMN_2          NUMBER,
      NUMBER_COLUMN_3          NUMBER
    );
    ALTER TABLE TABLE1 ADD CONSTRAINT TABLE1_PK PRIMARY KEY (TABLE1_ID);
    CREATE INDEX NUMBER_COLUMN_3_IX ON TABLE1(NUMBER_COLUMN_3);
    CREATE OR REPLACE PACKAGE TABLE1_PKG IS
      TYPE VARCHAR2_ARRAY      IS TABLE OF VARCHAR2(4000);
      TYPE NUMBER_ARRAY        IS TABLE OF NUMBER;
      TYPE DATE_ARRAY          IS TABLE OF DATE;
      PROCEDURE Insert_Table1 (
        Table1_Id_Array_In         TABLE1_PKG.VARCHAR2_ARRAY,
        Number_Column1_Array_In    TABLE1_PKG.NUMBER_ARRAY,
        Number_Column2_In          TABLE1_PKG.NUMBER_ARRAY,
        NUMBER_COLUMN_3_In         NUMBER
      );
    END;
    CREATE OR REPLACE PACKAGE BODY TABLE1_PKG IS
      PROCEDURE Insert_Table1 (
        Table1_Id_Array_In         TABLE1_PKG.VARCHAR2_ARRAY,
        Number_Column1_Array_In    TABLE1_PKG.NUMBER_ARRAY,
        Number_Column2_In          TABLE1_PKG.NUMBER_ARRAY,
        NUMBER_COLUMN_3_In         NUMBER
      )
      IS
      BEGIN
        FORALL I IN 1..Table1_Id_Array_In.Count
          INSERT /*+ APPEND */ INTO TABLE1 (TABLE1_ID, NUMBER_COLUMN_1, NUMBER_COLUMN_2, NUMBER_COLUMN_3)
          VALUES (Table1_Id_Array_In(I), Number_Column1_Array_In(I), Number_Column2_In(I), NUMBER_COLUMN_3_In);
      END Insert_Table1;
    END;
    Thanks,
    Thomas

    I found the answer for why direct path is not used when I do an INSERT INTO TABLE1@SOMEDATABASE SELECT ...:
      http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9014.htm#i2163698
    Direct-path INSERT is subject to a number of restrictions. If any of these
    restrictions is violated, then Oracle Database executes conventional INSERT serially
    without returning any message, unless otherwise noted:
    You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements.
    Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
    If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
    The target table cannot be part of a cluster.
    The target table cannot contain object type columns.
    The target table cannot have any triggers or referential integrity constraints defined on it.
    The target table cannot be replicated.
    A transaction containing a direct-path INSERT statement cannot be or become distributed.
    My table is being replicated and I am trying it via a distributed transaction.
    I am still puzzled as to why it took 2 minutes and 44 seconds to insert 10000 rows in our production database, but that's something I'll investigate if time permits.  For now I've rewritten the process to use INSERT ... SELECT if the number of records in the batch is less than or equal to a configured number (currently set at 400000); otherwise it moves the data in chunks, for now using BULK COLLECT in the source to pass arrays of data and FORALL inserts in the target (a sketch follows below).  If time allows in the future I will try to rewrite it to use chunking combined with INSERT ... SELECT.
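    A minimal sketch of that chunked BULK COLLECT / FORALL pattern, assuming the copy is driven from the target side over a database link called SOMEDATABASE, that the remote source table mirrors TABLE1, a 10000-row chunk size, and a placeholder constant for NUMBER_COLUMN_3 (the real code passes it through TABLE1_PKG):
    DECLARE
      CURSOR c_src IS
        SELECT table1_id, number_column_1, number_column_2
        FROM   table1@somedatabase;
      TYPE id_tab  IS TABLE OF VARCHAR2(255);
      TYPE num_tab IS TABLE OF NUMBER;
      v_ids id_tab;
      v_n1  num_tab;
      v_n2  num_tab;
    BEGIN
      OPEN c_src;
      LOOP
        -- Pull one chunk across the link, then insert it locally in one bulk call.
        FETCH c_src BULK COLLECT INTO v_ids, v_n1, v_n2 LIMIT 10000;
        EXIT WHEN v_ids.COUNT = 0;
        FORALL i IN 1 .. v_ids.COUNT
          INSERT INTO table1 (table1_id, number_column_1, number_column_2, number_column_3)
          VALUES (v_ids(i), v_n1(i), v_n2(i), 1);   -- 1 is a placeholder set identifier
        COMMIT;
      END LOOP;
      CLOSE c_src;
    END;
    /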
    Thanks to all for your help,
    Thomas

  • Oracle 10 Million records insert using Pro c

    Hi,
    As I am new to Oracle 10g Pro*C and SQL*Loader, I would like to know a few things about them.
    My requirement is that I need to read a 20 GB file (20 million lines) line by line, convert each line into database records with a little data manipulation, and insert them
    into an Oracle 10g database table.
    I read some articles which say that Pro*C is faster than SQL*Loader in performance (faster insertion), and also that Pro*C talks to
    Oracle directly and puts the data pages directly into Oracle, not through the SQL engine or parser.
    Even in the Pro*C samples, I have seen a for loop to insert multiple records. Will each insertion cost
    more time?
    Is there any bulk insert option in Pro*C, like 10 million rows in one shot?
    Or can Pro*C upload a file's data into an Oracle database table?
    If anyone has already posted this question, please point me to the thread number or id.
    Thank you,
    Ganesh
    Edited by: user12165645 on Nov 10, 2009 2:06 AM

    Alex Nuijten wrote:
    Personally I would go for either an External Table or SQL*Loader, mainly because I've never used Pro*C.. ;)
    And so far I never needed to resort to another option because of poor performance.
    I fully agree. Also, we are talking about "only" 10 million rows. This will take some time, but probably not more than 30 minutes, max. 2 hours, depending on many factors including storage, network and current database activity. I've seen systems where such a load took only a few minutes.
    I guess if there is a difference between Pro*C and external tables I would still opt for external tables, because they are so much easier to use.
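    A minimal sketch of the external-table route, assuming a comma-delimited file bigfile.dat in a directory the database can read; every name here (directory, columns, target table) is hypothetical:
    CREATE OR REPLACE DIRECTORY load_dir AS '/u01/loads';

    CREATE TABLE staging_ext (
      col1  VARCHAR2(50),
      col2  NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('bigfile.dat')
    )
    REJECT LIMIT UNLIMITED;

    -- The data manipulation happens in SQL; APPEND gives a direct-path load.
    INSERT /*+ APPEND */ INTO target_table (c1, c2)
    SELECT UPPER(col1), col2 * 100
    FROM   staging_ext;
    COMMIT;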

  • Direct Insert path

    I want to insert a large amount of data over a dblink using a direct-path insert; however, as the rollback segment is quite small, I need to commit after every 2000 rows or so.
    insert /*+ append */ into abc_monthly@dblink select * from abc partition(ODFRC_20040201);
    any help guys !
    Or is there any other way to make this insert faster?
    Edited by: Avi on Aug 12, 2009 9:41 AM

    Hi,
    Don't shoot the messenger but your append hint will be ignored:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:549493700346053658
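    A minimal sketch of a chunked alternative, with hypothetical column names for abc; it assumes your version accepts bulk binds against the remote table, and committing per chunk means you must handle restarts yourself:
    DECLARE
      CURSOR c_src IS
        SELECT col1, col2              -- hypothetical column list for abc
        FROM   abc PARTITION (ODFRC_20040201);
      TYPE num_tab  IS TABLE OF NUMBER;
      TYPE date_tab IS TABLE OF DATE;
      v_col1 num_tab;
      v_col2 date_tab;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO v_col1, v_col2 LIMIT 2000;
        EXIT WHEN v_col1.COUNT = 0;
        FORALL i IN 1 .. v_col1.COUNT
          INSERT INTO abc_monthly@dblink (col1, col2)
          VALUES (v_col1(i), v_col2(i));
        COMMIT;   -- keeps each transaction's undo small
      END LOOP;
      CLOSE c_src;
    END;
    /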

  • Faster table creation

    Hi all,
    10.2.0.5 on Solaris 10
    We have the below query which is time consuming.
    {code}
    create table callst nologging parallel 12 as
    SELECT
    CONTRNO    CONTRNO,
    SUBSCR_TYPE    SUBSCR_TYPE,
    AREA    AREA,
    SUBNO    SUBNO,
    CHARGETYPE    CHARGETYPE,
    BILLCODE1    BILLCODE,
    TRUNC (MIN (TRANSDATE))    TRANSDATE,
    TRUNC (MAX (TRANSDATE))    TRANSDATE_TO,
    BILLTEXT    BILLTEXT,
    SUM(BILLAMOUNT)    BILLAMOUNT,
    FACTOR    FACTOR,
    SYSDATE    UPDDATE,
    TARIFFCLASS1    TARIFFCLASS,
    COUNT(1)    NO_OF_CALLS,
    SUM(DURATION)    DURATION,
    SUM(GROSS_AMOUNT)    GROSS_AMOUNT,
    SUM(ACT_DURATION)    ACT_DURATION,
    AR_BILLTEXT    AR_BILLTEXT,
    CALL_TYPE    CALL_TYPE,
    FACTOR    FACTOR_INT,
    LAST_TRAFFIC_DATE    LAST_TRAFFIC_DATE,
    RATE_TYPE    RATE_TYPE,
    TARIFF_GROUP    TARIFF_GROUP,
    RATE_POS    RATE_POS,
    RATE_PLAN    TARIFF_PROFILE,
    TARIFF_GROUP    ORG_TARIFF_GROUP
    FROM HISTCALLS
    WHERE (CONTRNO , SUBNO) IN (
    SELECT CONTRNO , SUBNO FROM COMPLETE_HEADER_SUBS)
    AND LAST_TRAFFIC_DATE ='21-SEP-2013'
    AND TRANSDATE +0  > TO_DATE ('01-AUG-2013 000000','DD-MON-YYYY HH24MISS')
    GROUP BY
    CONTRNO, SUBSCR_TYPE, AREA, SUBNO, CHARGETYPE, TARIFFCLASS1, BILLCODE1, BILLTEXT, AR_BILLTEXT, LAST_TRAFFIC_DATE, FACTOR, RATE_PLAN, CALL_TYPE, TARIFF_GROUP, RATE_TYPE,RATE_POS;
    {code}
    How can I put the above in a loop, insert the values, and commit frequently for faster inserts?
    Please advise.

    Hi,
    For all performance questions, see the forum FAQ: https://forums.oracle.com/message/9362003
    Not that this has much to do with performance, but I noticed you're saying
    AND LAST_TRAFFIC_DATE ='21-SEP-2013'
    If last_traffic_date is a DATE, you should compare it to another DATE, such as the result of TO_DATE ('21-SEP-2013', 'DD-MON-YYYY').
    If last_traffic_date is not a DATE, why isn't it a DATE?

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third party application reporting slow insertion performance. When I traced the session I found that most of the elapsed time for one insert execution is on "SQL*Net more data from client"; it appears bulk binding is being used here, because one execution inserted 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from a 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution: no problem at all. Network folks confirm that the network should not be an issue either; ping time from the app server to the db server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call      count    cpu   elapsed   disk   query   current   rows
    Parse         1   0.00      0.00      0       0         0      0
    Execute       1   0.02     14.29      1      94      2565    200
    Fetch         0   0.00      0.00      0       0         0      0
    total         2   0.02     14.29      1      94      2565    200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                            Times Waited   Max. Wait   Total Waited
    ----------------------------------------   ------------   ---------   ------------
    SQL*Net more data from client                        28         6.38          14.19
    db file sequential read                               1         0.02           0.02
    SQL*Net message to client                             1         0.00           0.00
    SQL*Net message from client                           1         0.00           0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were data inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.

  • Index not created but it still reserved space on the disk

    Hi all,
    I have got a problem with creating an index. I am using oracle 10.2 DB on Windows 2003 Server.
    There are several jobs that run at night to collect data from another database. Therefore I always drop the indexes first (for faster inserts, because the tables are really big) and recreate them after the data is loaded.
    Now, when recreating one of the indexes, it gave me the error ORA-01652: unable to extend temp segment by 8192 in tablespace CEXPLUS,
    which I interpreted to mean that there is no space left in tablespace CEXPLUS. I figured out which disk the tablespace is stored on. At that time I still had 55.5 MB of free space left. Then I deleted some files and got 600 MB of free space. Afterwards I created the index again, but after an hour or so there was still no index and now I only have 712 KB of free space.
    Where is the free space if the index was not created and how can I get it back?
    How can I figure out how much space I will need for the index?
    Any help is highly appreciated.
    Regards,
    Denise
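    One way to estimate the index size up front is DBMS_SPACE.CREATE_INDEX_COST (available from 10g onwards); a minimal sketch, assuming the table's statistics are reasonably current and using a hypothetical index DDL:
    SET SERVEROUTPUT ON
    DECLARE
      l_used  NUMBER;   -- bytes needed for the index data itself
      l_alloc NUMBER;   -- bytes that would be allocated in the tablespace
    BEGIN
      DBMS_SPACE.CREATE_INDEX_COST(
        ddl         => 'CREATE INDEX my_big_ix ON my_big_table (col1) TABLESPACE cexplus',
        used_bytes  => l_used,
        alloc_bytes => l_alloc);
      DBMS_OUTPUT.put_line('Used  MB: ' || ROUND(l_used  / 1024 / 1024));
      DBMS_OUTPUT.put_line('Alloc MB: ' || ROUND(l_alloc / 1024 / 1024));
    END;
    /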

    Hi,
    I used this query (I am not sure whether it is correct):
    SELECT tablespace_name, (sum(bytes)/1024)/1024 mb
    from dba_free_space
    group by tablespace_name
    and it says that for the tablespace CEXPLUS (in which I want to create the failed index) there is still 8074 MB left. But I know that on the server itself the hard disk has only 712 KB of free space. How can that be? And still my question: before I created the failed index I had 600 MB of free space; how can I get it back?
    What about the sort segment? Is it still used by the failed index? Or is it used by other transactions, meaning that it has been overwritten? And if I restart the database, will the space it had taken be free again, meaning will I then have my 600 MB back?
    Thanks for your help.
    Regards,
    Denise

  • Query plan changes when query is used in CREATE TABLE AS

    We're puzzled by the fact that EXPLAIN PLAN gives much different output for a SELECT statement than it does when the same statement is used in CREATE TABLE ... AS SELECT.
    The bad part is that the CREATE TABLE version performs very badly, and that's what we want the query for.
    Why does this happen? Is there a difference (from the database's point of view) between retrieving a set of rows to display to the user and putting that same set into a new table? Doesn't this make it harder to diagnose and fix query performance problems?
    Here's our query:
    create table query_test AS
    select term, parentTerm, apidb.tab_to_string(cast(collect(trim(to_char(internal)))
                       as apidb.varchartab), ', ') as internal
                 from (
                     select distinct ga.organism as term,
                                     ga.species as parentTerm,
                                     tn.taxon_id as internal
                     from apidb.GeneAttributes ga, SRES.TaxonName tn, sres.Taxon t,
                          dots.AaSequence aas, dots.SecondaryStructure ss
                     where ga.organism = tn.name
               and tn.taxon_id = t.taxon_id
                       and t.taxon_id = aas.taxon_id
       and aas.aa_sequence_id = ss.aa_sequence_id
               and t.rank != 'species'
               union
                     select distinct ga.species as term,
                       '' as parentTerm,
                                     ts.taxon_id as internal
                     from apidb.GeneAttributes ga, SRES.TaxonName tn, apidb.taxonSpecies ts,
                          dots.aasequence aas, dots.SecondaryStructure ss
                     where ga.organism = tn.name
                      and tn.taxon_id = ts.taxon_id
                      and ts.taxon_id = aas.taxon_id
                     and aas.aa_sequence_id = ss.aa_sequence_id
       group by term, parentTerm;
    With the CREATE TABLE, the plan looks like this:
    | Id  | Operation                             | Name                      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT                |                           |  2911 |  5986K|       | 18840   (1)| 00:03:47 |
    |   1 |  LOAD AS SELECT                       | QUERY_TEST                |       |       |       |            |          |
    |   2 |   VIEW                                |                           |  2911 |  5986K|       | 18669   (1)| 00:03:45 |
    |   3 |    SORT GROUP BY                      |                           |  2911 |   332K|       | 18660   (1)| 00:03:44 |
    |   4 |     VIEW                              |                           |  2911 |   332K|       | 18659   (1)| 00:03:44 |
    |   5 |      SORT UNIQUE                      |                           |  2911 |   292K|       | 18659   (6)| 00:03:44 |
    |   6 |       UNION-ALL                       |                           |       |       |       |            |          |
    |*  7 |        HASH JOIN                      |                           |  2907 |   292K|  2160K| 17762   (1)| 00:03:34 |
    |   8 |         TABLE ACCESS FULL             | GENEATTRIBUTES10650       | 40957 |  1679K|       |   795   (1)| 00:00:10 |
    |*  9 |         HASH JOIN                     |                           | 53794 |  3204K|  1552K| 16675   (1)| 00:03:21 |
    |* 10 |          HASH JOIN                    |                           | 37802 |  1107K|       | 12326   (1)| 00:02:28 |
    |* 11 |           HASH JOIN                   |                           | 37945 |   629K|       | 10874   (1)| 00:02:11 |
    |  12 |            INDEX FAST FULL SCAN       | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|       |    33   (0)| 00:00:01 |
    |  13 |            INDEX FAST FULL SCAN       | AASEQUENCEIMP_REVIX6      |  7886K|    82M|       | 10816   (1)| 00:02:10 |
    |* 14 |           TABLE ACCESS FULL           | TAXON                     |   514K|  6530K|       |  1450   (1)| 00:00:18 |
    |  15 |          TABLE ACCESS FULL            | TAXONNAME                 |   760K|    22M|       |  2721   (1)| 00:00:33 |
    |* 16 |        HASH JOIN                      |                           |     4 |   380 |       |   886   (1)| 00:00:11 |
    |  17 |         NESTED LOOPS                  |                           |   730 | 64970 |       |   852   (1)| 00:00:11 |
    |* 18 |          HASH JOIN                    |                           |     1 |    78 |       |   847   (1)| 00:00:11 |
    |  19 |           NESTED LOOPS                |                           |       |       |       |            |          |
    |  20 |            NESTED LOOPS               |                           |    17 |   612 |       |    51   (0)| 00:00:01 |
    |  21 |             TABLE ACCESS FULL         | TAXONSPECIES10646         |    12 |    60 |       |     3   (0)| 00:00:01 |
    |* 22 |             INDEX RANGE SCAN          | TAXONNAME_IND01           |     1 |       |       |     2   (0)| 00:00:01 |
    |  23 |            TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |       |     4   (0)| 00:00:01 |
    |  24 |           TABLE ACCESS FULL           | GENEATTRIBUTES10650       | 40957 |  1679K|       |   795   (1)| 00:00:10 |
    |* 25 |          INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |   768 |  8448 |       |     5   (0)| 00:00:01 |
    |  26 |         INDEX FAST FULL SCAN          | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|       |    33   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       7 - access("GA"."ORGANISM"="TN"."NAME")
       9 - access("TN"."TAXON_ID"="T"."TAXON_ID")
      10 - access("T"."TAXON_ID"="TAXON_ID")
      11 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      14 - filter("T"."RANK"<>'species')
      16 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      18 - access("GA"."ORGANISM"="TN"."NAME")
      22 - access("TN"."TAXON_ID"="TS"."TAXON_ID")
      25 - access("TS"."TAXON_ID"="TAXON_ID")
    46 rows selected.
    Without the CREATE TABLE, the plan for the SELECT alone looks like this:
    | Id  | Operation                           | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                           |     2 |   234 |  1786   (1)| 00:00:22 |
    |   1 |  SORT GROUP BY                      |                           |     2 |   234 |  1786   (1)| 00:00:22 |
    |   2 |   VIEW                              |                           |     2 |   234 |  1785   (1)| 00:00:22 |
    |   3 |    SORT UNIQUE                      |                           |     2 |   198 |  1785  (48)| 00:00:22 |
    |   4 |     UNION-ALL                       |                           |       |       |            |          |
    |*  5 |      HASH JOIN                      |                           |     1 |   103 |   949   (1)| 00:00:12 |
    |   6 |       NESTED LOOPS                  |                           |   199 | 19303 |   915   (1)| 00:00:11 |
    |   7 |        NESTED LOOPS                 |                           |    13 |  1118 |   850   (1)| 00:00:11 |
    |   8 |         NESTED LOOPS                |                           |    13 |   949 |   824   (1)| 00:00:10 |
    |   9 |          VIEW                       | VW_DTP_E387155E           |    13 |   546 |   797   (1)| 00:00:10 |
    |  10 |           HASH UNIQUE               |                           |    13 |   546 |   797   (1)| 00:00:10 |
    |  11 |            TABLE ACCESS FULL        | GENEATTRIBUTES10650       | 40957 |  1679K|   795   (1)| 00:00:10 |
    |  12 |          TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |     3   (0)| 00:00:01 |
    |* 13 |           INDEX RANGE SCAN          | TAXONNAME_IND02           |     1 |       |     2   (0)| 00:00:01 |
    |* 14 |         TABLE ACCESS BY INDEX ROWID | TAXON                     |     1 |    13 |     2   (0)| 00:00:01 |
    |* 15 |          INDEX UNIQUE SCAN          | PK_TAXON                  |     1 |       |     1   (0)| 00:00:01 |
    |* 16 |        INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |    15 |   165 |     5   (0)| 00:00:01 |
    |  17 |       INDEX FAST FULL SCAN          | SECONDARYSTRUCTURE_REVIX9 | 37945 |   222K|    33   (0)| 00:00:01 |
    |  18 |      NESTED LOOPS                   |                           |     1 |    95 |   834   (1)| 00:00:11 |
    |  19 |       NESTED LOOPS                  |                           |     1 |    89 |   833   (1)| 00:00:10 |
    |* 20 |        HASH JOIN                    |                           |     1 |    78 |   828   (1)| 00:00:10 |
    |  21 |         NESTED LOOPS                |                           |       |       |            |          |
    |  22 |          NESTED LOOPS               |                           |    13 |   949 |   824   (1)| 00:00:10 |
    |  23 |           VIEW                      | VW_DTP_2AAE9FCE           |    13 |   546 |   797   (1)| 00:00:10 |
    |  24 |            HASH UNIQUE              |                           |    13 |   546 |   797   (1)| 00:00:10 |
    |  25 |             TABLE ACCESS FULL       | GENEATTRIBUTES10650       | 40957 |  1679K|   795   (1)| 00:00:10 |
    |* 26 |           INDEX RANGE SCAN          | TAXONNAME_IND02           |     1 |       |     2   (0)| 00:00:01 |
    |  27 |          TABLE ACCESS BY INDEX ROWID| TAXONNAME                 |     1 |    31 |     3   (0)| 00:00:01 |
    |  28 |         TABLE ACCESS FULL           | TAXONSPECIES10646         |    12 |    60 |     3   (0)| 00:00:01 |
    |* 29 |        INDEX RANGE SCAN             | AASEQUENCEIMP_REVIX6      |   768 |  8448 |     5   (0)| 00:00:01 |
    |* 30 |       INDEX RANGE SCAN              | SECONDARYSTRUCTURE_REVIX9 |     1 |     6 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       5 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
      13 - access("ITEM_1"="TN"."NAME")
      14 - filter("T"."RANK"<>'species')
      15 - access("TN"."TAXON_ID"="T"."TAXON_ID")
      16 - access("T"."TAXON_ID"="TAXON_ID")
      20 - access("TN"."TAXON_ID"="TS"."TAXON_ID")
      26 - access("ITEM_1"="TN"."NAME")
      29 - access("TS"."TAXON_ID"="TAXON_ID")
      30 - access("AA_SEQUENCE_ID"="SS"."AA_SEQUENCE_ID")
    50 rows selected.
    Edited by: JohnI on Jul 18, 2011 2:19 PM
    Edited by: JohnI on Jul 18, 2011 2:28 PM

    Charles Hooper wrote a series of blog entries on a similar topic some time ago: http://hoopercharles.wordpress.com/2010/12/15/select-statement-is-fast-insert-into-using-the-select-statement-is-brutally-slow-1/ (including a lot of useful comments) and two follow-up articles. I have to confess that I did not read the posts again, but I think you will find some good ideas there on how to analyze the problem.
    Regards
    Martin Preiss
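    Separately, one way to compare what actually ran (rather than what EXPLAIN PLAN estimates) is DBMS_XPLAN.DISPLAY_CURSOR; a minimal sketch, where the sql_id is a placeholder you would look up in V$SQL, and 'ALLSTATS LAST' only shows actual row counts if the statement ran with STATISTICS_LEVEL = ALL or the gather_plan_statistics hint:
    -- Run the CTAS (or the plain SELECT), then find its cursor and display the plan it used.
    SELECT sql_id, child_number, sql_text
    FROM   v$sql
    WHERE  sql_text LIKE 'create table query_test%';

    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('an_sql_id_here', 0, 'ALLSTATS LAST'));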

  • What's the Difference Between OLAP and OLTP?

    Hi,
    What's the difference between OLAP and OLTP? And which one is best?
    -Arun.M.D

    Hi,
       The big difference when designing for OLAP versus OLTP is rooted in the basics of how the tables are going to be used. I'll discuss OLTP versus OLAP in the context of the design of dimensional data warehouses. However, keep in mind there are more architectural components that make up a mature, best practices data warehouse than just the dimensional data warehouse.
    Corporate Information Factory, 2nd Edition by W. H. Inmon, Claudia Imhoff, Ryan Sousa
    Building the Data Warehouse, 2nd Edition by W. H. Inmon
    With OLTP, the tables are designed to facilitate fast inserting, updating and deleting of rows of information with each logical unit of work. The database design is highly normalized, usually to at least 3NF. Each logical unit of work in an online application will have a relatively small scope with regard to the number of tables that are referenced and/or updated. Also, the online application itself handles the majority of the work of joining data to facilitate the screen functions, which means the user doesn't have to worry about traversing large data relationship paths. Expect a heavy dose of lookup/reference tables and much focus on referential integrity between foreign keys. The physical design of the database needs to take into consideration the need for inserting rows when deciding on physical space settings. A good book for getting a solid base understanding of modeling for OLTP is The Data Modeling Handbook: A Best-Practice Approach to Building Quality Data Models by Michael C. Reingruber and William W. Gregory.
    Example: Let's say we have a purchase order management system. We need to be able to take orders for our customers, and we need to be able to sell many items on each order. We need to capture the store that sold the item, the customer that bought the item (and where we need to ship things and where to bill), and we need to make sure that we pull from the valid store_items to get the correct item number, description and price. Our OLTP data model will contain a CUSTOMER_MASTER, a CUSTOMER_ADDRESS_MASTER, a STORE_MASTER, an ITEM_MASTER, an ITEM_PRICE_MASTER, a PURCHASE_ORDER_MASTER and a PURCHASE_ORDER_LINE_ITEM table. Then we might have a series of M:M relationships; for example, an ITEM might have a different price for specific time periods at specific stores.
    With OLAP, the tables are designed to facilitate easy access to information. Today's OLAP tools make the job of developing a query very easy. However, you still want to minimize the extensiveness of the relational model in an OLAP application. Users don't have the will and means to learn how to work through a complex maze of table relationships, so you'll design your tables with a high degree of denormalization. The most prevalent design scheme for OLAP is the star schema, popularized by Ralph Kimball. The star schema has a FACT table that contains the elements of data that are used arithmetically (counting, summing, averaging, etc.). The FACT table is surrounded by lookup tables called dimensions. Each dimension table provides a reference to the things that you want to analyze by. A good book to understand how to design OLAP solutions is The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses by Ralph Kimball.
    Example: let's say we want to see some key measures about purchases. We want to know how many items and the sales amount that are purchased by what kind of customer across which stores. The FACT table will contain a column for Qty-purchased and Purchase Amount. The DIMENSION tables will include the ITEM_DESC (contains the item_id & Description), the CUSTOMER_TYPE, the STORE (Store_id & store name), and TIME (contains calendar information such as the date, the month_end_date, quarter_end_date, day_of_week, etc).
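    A minimal star-schema sketch of that purchases example, with hypothetical table and column names:
    CREATE TABLE dim_item     (item_key     NUMBER PRIMARY KEY, item_id VARCHAR2(20), description VARCHAR2(100));
    CREATE TABLE dim_customer (customer_key NUMBER PRIMARY KEY, customer_type VARCHAR2(30));
    CREATE TABLE dim_store    (store_key    NUMBER PRIMARY KEY, store_id VARCHAR2(20), store_name VARCHAR2(100));
    CREATE TABLE dim_time     (time_key     NUMBER PRIMARY KEY, calendar_date DATE, month_end_date DATE, day_of_week VARCHAR2(10));

    -- The fact table holds only dimension keys plus the additive measures.
    CREATE TABLE fact_purchases (
      item_key        NUMBER REFERENCES dim_item,
      customer_key    NUMBER REFERENCES dim_customer,
      store_key       NUMBER REFERENCES dim_store,
      time_key        NUMBER REFERENCES dim_time,
      qty_purchased   NUMBER,
      purchase_amount NUMBER
    );

    -- Typical OLAP question: sales amount by store and customer type.
    SELECT s.store_name, c.customer_type, SUM(f.purchase_amount) AS total_sales
    FROM   fact_purchases f
    JOIN   dim_store s    ON s.store_key    = f.store_key
    JOIN   dim_customer c ON c.customer_key = f.customer_key
    GROUP  BY s.store_name, c.customer_type;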

  • Using Bulkinto

    I have got a procedure which is taking a long time on its inserts, so I want to change the procedure below to use bulk collect and FORALL. It throws errors when I use FORALL and bulk collect.
    The errors are: PLS-00497 and
    PLS-00435: DML statement without BULK In-BIND cannot be used inside FORALL
    So can anybody help change the code below to use FORALL and bulk collect, so that it runs faster?
    part of a procedure :
    PROCEDURE do_insert
    IS
    PROCEDURE process_insert_record
    IS
    CURSOR c_es_div_split
    IS
    SELECT div_id
    FROM ZREP_MPG_DIV
    WHERE div_id IN ('PC','BP','BI','CI','CR');
    PROCEDURE write_record
    IS
    CURSOR c_plan_fields
    IS
    SELECT x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
    FROM cp_div_xref@dm x, comp_plan@dm cp
    WHERE x.comp_plan_id = cp.comp_plan_id
    AND x.div = v_div
    AND x.sorg_cd = v_sorg_cd
    AND x.comp_plan_yr = TO_NUMBER (TO_CHAR (v_to_dt, 'yyyy'));
    BEGIN -- write_record
    OPEN c_plan_fields;
    FETCH c_plan_fields INTO v_plan_id, v_plan_cd, v_plan_nm;
    CLOSE c_plan_fields;
    forall i in targ_tab.first.. targ_tab.last
    INSERT INTO CUST_HIER (
    sorg_cd,
    cust_cd,
    bunt, --DP
    div,
    from_dt,
    to_dt,
    cust_ter_cd,
    cust_rgn_cd,
    cust_grp_cd,
    cust_area_cd,
    sorg_desc,
    CUST_NM,
    cust_ter_desc,
    cust_rgn_desc,
    cust_grp_desc,
    cust_area_desc,
    cust_mkt_cd,
    cust_mkt_desc,
    curr_flag,
    last_mth_flag,
    comp_plan_id,
    comp_plan_cd,
    comp_plan_nm,
    asgn_typ,
    lddt
    )
    VALUES (
    v_sorg_cd,
    v_cust_cd,
    v_bunt, --DP
    v_div,
    TRUNC (v_from_dt),
    TO_DATE ( TO_CHAR (v_to_dt, 'mmddyyyy') || '235959',
    'mmddyyyyhh24miss'),
    v_ter,
    v_rgn,
    v_grp,
    v_area,
    v_sorg_desc,
    v_cust_nm,
    v_cust_ter_desc,
    v_rgn_desc,
    v_grp_desc,
    v_area_desc,
    v_mkt,
    v_mkt_desc,
    v_curr_flag,
    v_last_mth_flag,
    v_plan_id,
    v_plan_cd,
    v_plan_nm,
    v_asgn_typ,
    v_begin_dt
    );
    v_plan_id := 0;
    v_plan_cd := 0;
    v_plan_nm := NULL;
    v_out_cnt := v_out_cnt + 1;
    IF doing_both
    THEN
    COMMIT;
    ELSE
    -- commiting v_commit_rows rows at a time.
    IF v_out_cnt >= v_commit_cnt
    THEN
    COMMIT;
    p.l ( 'Commit point reached: ' || v_out_cnt || 'at: ' ||
    TO_CHAR (SYSDATE, 'mm/dd hh24:mi:ss'));
    v_commit_cnt := v_commit_cnt + v_commit_rows;
    END IF;
    END IF;
    END write_record;
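    PLS-00435 means the INSERT inside the FORALL never subscripts a collection with the loop index; every value it binds must come from a collection indexed by i. A minimal sketch of the pattern, using hypothetical collections for just a few of the columns (the real procedure would need one collection per column it currently writes from scalar variables, filled where write_record is called today):
    DECLARE
      TYPE cust_tab IS TABLE OF CUST_HIER.cust_cd%TYPE INDEX BY PLS_INTEGER;
      TYPE div_tab  IS TABLE OF CUST_HIER.div%TYPE     INDEX BY PLS_INTEGER;
      TYPE dt_tab   IS TABLE OF CUST_HIER.from_dt%TYPE INDEX BY PLS_INTEGER;
      t_cust_cd cust_tab;
      t_div     div_tab;
      t_from_dt dt_tab;
      t_to_dt   dt_tab;
    BEGIN
      -- Instead of inserting inside write_record, append each row's values:
      --   t_cust_cd(t_cust_cd.COUNT + 1) := v_cust_cd;   -- and so on, per column
      -- Then one FORALL sends the whole batch as a single bulk-bound statement:
      FORALL i IN 1 .. t_cust_cd.COUNT
        INSERT INTO CUST_HIER (cust_cd, div, from_dt, to_dt)
        VALUES (t_cust_cd(i), t_div(i), t_from_dt(i), t_to_dt(i));
      COMMIT;
    END;
    /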

    Here is the whole procedure. I didn't paste all of it (it's very big), but I think you can figure out what it is doing. All the cursors are tuned and running fast; the inserts are what take the time. So I thought of using bulk collect and FORALL, as Oracle says it's 30x faster than normal row-by-row inserts.
    CREATE OR REPLACE PROCEDURE DW.load_xyz
    AS
    v_in_cnt NUMBER;
    v_out_cnt NUMBER;
    v_tot_in NUMBER := 0;
    v_tot_out NUMBER := 0;
    v_updt_cnt NUMBER;
    v_dup_cnt NUMBER;
    v_commit_rows NUMBER := 10000;
    v_commit_cnt NUMBER := 10000;
    v_begin_dt DATE;
    v_end_dt DATE;
    v_last_mth_dt DATE; --last bus month END DATE
    v_last_day_dt DATE; --last bus day END DATE
    first_rec BOOLEAN := TRUE;
    doing_both BOOLEAN := FALSE;
    mpg_only BOOLEAN := FALSE;
    v_next_row NUMBER;
    last_targ NUMBER;
    last_rec BOOLEAN;
    no_targ_found BOOLEAN;
    write_dup BOOLEAN;
    v_orig_from_dt DATE;
    v_orig_to_dt DATE;
    v_from_yr NUMBER (4);
    v_to_yr NUMBER (4);
    v_yr_diff NUMBER (4);
    -- Variables for cust_hier fields
    v_sorg_cd CUST_HIER.sorg_cd%TYPE;
    v_cust_cd CUST_HIER.cust_cd%TYPE;
    v_bunt CUST_HIER.bunt%TYPE; --DP
    v_div CUST_HIER.div%TYPE;
    v_from_dt CUST_HIER.from_dt%TYPE; -- source from_dt
    v_to_dt CUST_HIER.to_dt%TYPE; -- source to_dt
    v_from_dt_orig CUST_HIER.from_dt%TYPE; -- original from_dt
    v_to_dt_orig CUST_HIER.to_dt%TYPE; -- original to_dt
    t_from_dt CUST_HIER.from_dt%TYPE; -- target from_dt
    t_to_dt CUST_HIER.to_dt%TYPE; -- target to_dt
    v_ter CUST_HIER.cust_ter_cd%TYPE;
    v_rgn CUST_HIER.cust_rgn_cd%TYPE;
    v_grp CUST_HIER.cust_grp_cd%TYPE;
    v_area CUST_HIER.cust_area_cd%TYPE;
    v_mkt CUST_HIER.cust_mkt_cd%TYPE;
    v_sorg_desc CUST_HIER.sorg_desc%TYPE;
    v_cust_nm CUST_HIER.CUST_NM%TYPE;
    v_cust_ter_desc CUST_HIER.cust_ter_desc%TYPE;
    v_rgn_desc CUST_HIER.cust_rgn_desc%TYPE;
    v_grp_desc CUST_HIER.cust_grp_desc%TYPE;
    v_area_desc CUST_HIER.cust_area_desc%TYPE;
    v_mkt_desc CUST_HIER.cust_mkt_desc%TYPE;
    v_curr_flag CUST_HIER.curr_flag%TYPE;
    v_last_mth_flag CUST_HIER.last_mth_flag%TYPE;
    v_asgn_typ CUST_HIER.asgn_typ%TYPE;
    v_plan_cd CUST_HIER.comp_plan_cd%TYPE;
    v_plan_nm CUST_HIER.comp_plan_nm%TYPE;
    v_plan_id CUST_HIER.comp_plan_id%TYPE;
    v_mpg MPG_DIV_S004.mpg_id%TYPE;
    v_src_values VARCHAR2 (2000);
    v_prev_values VARCHAR2 (2000);
    /* Error handling variables */
    v_error_code NUMBER;
    v_error_message VARCHAR2 (200);
    /* CURSORS */
    ** Source records - CUSTs with only ONE of DIV or MPG type assignment
    ** Used in sequential logic, no reading of committed records.
    CURSOR c_mpg
    IS
    SELECT/*+ ALL_ROWS */
    DISTINCT sorg_cd, mpg, bunt, --DP
    bunt_orig, -- ES/CS 5/2002
    div, cust_cd, to_dt, from_dt,
    cust_rgn_cd, cust_grp_cd, cust_area_cd, CUST_NM,
    cust_rgn_desc, cust_grp_desc, cust_area_desc, sorg_desc,
    cust_ter_cd, cust_ter_desc, mkt_cd, mkt_desc,
    'mpg' asgn_typ, dflt_flag
    FROM CUST_HIER_TAB z
    WHERE sorg_cd IN ('S004')
    AND z.mpg IN ('01', '02', '05', '06')
    --AND z.cust_cd = '27174'
    ORDER BY z.sorg_cd,
    z.mpg,
    z.cust_cd,
    z.to_dt DESC,
    z.dflt_flag DESC,
    z.from_dt DESC;
    -- changed cursor this record is based on from c_mpg_only to c_mpg 5/2002 CJL
    src_rec c_mpg%ROWTYPE;
    ** Source records - CUSTs with mpg assignments only assign for AB,AF,PC,RD
    ** Used in sequential logic, no reading of committed records.
    ** Added NV and VS, CS 5/2002.
    ** Added AX 12/2005
    ** Added VC 8/2006
    CURSOR c_div_only_mpg
    IS
    SELECT/*+ ALL_ROWS */
    DISTINCT z.sorg_cd, z.mpg, z.bunt, -- DP
    z.bunt_orig, -- ES/CS 5/2002
    NVL (zd.div_id, 'na') div,
    cust_cd, to_dt, from_dt, cust_rgn_cd, cust_grp_cd,
    cust_area_cd, CUST_NM, cust_rgn_desc, cust_grp_desc,
    cust_area_desc, sorg_desc, cust_ter_cd, cust_ter_desc,
    mkt_cd, mkt_desc,
    DECODE (z.div, 'na', 'mpg', 'div') asgn_typ,
    dflt_flag
    FROM CUST_HIER_TAB z, ZREP_MPG_DIV zd
    WHERE sorg_cd IN ('S004','S096')
    AND ( z.mpg = zd.mpg_id
    OR z.div = zd.div_id)
    -- Added NV, VS, CS 5/2002
    -- Added AX 12/2005
    -- Added VC 8/2006
    AND zd.div_id IN ('AB', 'AF', 'PC', 'RD', 'NV', 'VS', 'CS', 'AX','VC')
    AND NOT EXISTS (SELECT 1
    FROM CUST_HIER_TAB c2
    WHERE c2.cust_cd = z.cust_cd
    AND c2.div = zd.div_id)
    ORDER BY z.sorg_cd,
    z.cust_cd,
    z.bunt_orig, -- ES/CS 5/2002
    div,
    asgn_typ,
    z.to_dt DESC,
    z.dflt_flag DESC,
    z.from_dt DESC;
    ** Source records - CUSTs with both div and mpg assignments
    ** Used in process_new_record logic where target is read, not sequential
    ** 5/2002 CJL - added field bunt_orig to source tab for use in order by.
    ** This was necessary to allow ES PC/NV customer-territories to be written
    ** before VI. PC/NV team splits required division level allignments. From
    ** a customer not srep perspective ES territories need to take priority.
    CURSOR c_mpg_and_div
    IS
    SELECT/*+ ALL_ROWS */
    DISTINCT z.sorg_cd, z.mpg, z.bunt, --DP
    z.bunt_orig, -- ES/CS 5/2002
    NVL (zd.div_id, 'na') div,
    cust_cd, to_dt, from_dt, cust_rgn_cd, cust_grp_cd,
    cust_area_cd, CUST_NM, cust_rgn_desc, cust_grp_desc,
    cust_area_desc, sorg_desc, cust_ter_cd, cust_ter_desc,
    mkt_cd, mkt_desc,
    DECODE (z.div, 'na', 'mpg', 'div') asgn_typ, dflt_flag
    FROM CUST_HIER_TAB z, ZREP_MPG_DIV zd
    WHERE sorg_cd IN ('S004','S096')
    AND ( z.mpg = zd.mpg_id
    OR z.div = zd.div_id)
    -- Added NV, VS, CS 5/2002
    -- Added AX 12/2005
    -- Added VC 8/2006
    AND zd.div_id IN ('AB', 'AF', 'PC', 'RD', 'NV', 'VS', 'CS', 'AX','VC')
    AND EXISTS (SELECT 1
    FROM CUST_HIER_TAB c2
    WHERE c2.cust_cd = z.cust_cd
    AND c2.div = zd.div_id)
    ORDER BY z.sorg_cd,
    z.cust_cd,
    z.bunt_orig, -- ES/CS 5/2002
    div,
    asgn_typ,
    z.to_dt DESC,
    z.dflt_flag DESC,
    z.from_dt DESC;
    ** Set of divisions to expand to for an MPG
    ** mpg_div_s004 is unique to this process in the US datamart.
    ** Only the divisions that are NOT used in a division level allignment
    ** are included in mpg_div_s004.
    CURSOR c_mpg_div
    IS
    SELECT div_id
    FROM MPG_DIV_S004
    WHERE mpg_id = v_mpg
    ORDER BY div_id;
    CURSOR c_targ_dates (
    p_sorg VARCHAR2,
    p_cust VARCHAR2,
    p_div VARCHAR2,
    p_bunt VARCHAR2
    ) --DP
    IS
    -- cust_hier_key table replaced by view 5/2002
    SELECT/*+ INDEX (cust_hier_pk) */
    from_dt, to_dt
    FROM CUST_HIER
    WHERE sorg_cd = p_sorg
    AND bunt = p_bunt --DP
    AND div = p_div
    AND cust_cd = p_cust
    ORDER BY to_dt DESC;
    TYPE t_targ_dates IS TABLE OF c_targ_dates%ROWTYPE
    INDEX BY BINARY_INTEGER;
    targ_tab t_targ_dates;
    PROCEDURE do_insert
    IS
    PROCEDURE process_insert_record
    IS
    CURSOR c_es_div_split
    IS
    SELECT div_id
    FROM ZREP_MPG_DIV
    WHERE div_id IN ('PC','BP','BI','CI','CR');
    PROCEDURE write_record
    IS
    CURSOR c_plan_fields
    IS
    SELECT x.comp_plan_id, x.comp_plan_cd, cp.comp_plan_nm
    FROM cp_div_xref@dm x, comp_plan@dm cp
    WHERE x.comp_plan_id = cp.comp_plan_id
    AND x.div = v_div
    AND x.sorg_cd = v_sorg_cd
    AND x.comp_plan_yr = TO_NUMBER (TO_CHAR (v_to_dt, 'yyyy'));
    BEGIN -- write_record
    OPEN c_plan_fields;
    FETCH c_plan_fields INTO v_plan_id, v_plan_cd, v_plan_nm;
    CLOSE c_plan_fields;
    INSERT INTO CUST_HIER (
    sorg_cd,
    cust_cd,
    bunt, --DP
    div,
    from_dt,
    to_dt,
    cust_ter_cd,
    cust_rgn_cd,
    cust_grp_cd,
    cust_area_cd,
    sorg_desc,
    CUST_NM,
    cust_ter_desc,
    cust_rgn_desc,
    cust_grp_desc,
    cust_area_desc,
    cust_mkt_cd,
    cust_mkt_desc,
    curr_flag,
    last_mth_flag,
    comp_plan_id,
    comp_plan_cd,
    comp_plan_nm,
    asgn_typ,
    lddt
    )
    VALUES (
    v_sorg_cd,
    v_cust_cd,
    v_bunt, --DP
    v_div,
    TRUNC (v_from_dt),
    TO_DATE ( TO_CHAR (v_to_dt, 'mmddyyyy') || '235959',
    'mmddyyyyhh24miss'),
    v_ter,
    v_rgn,
    v_grp,
    v_area,
    v_sorg_desc,
    v_cust_nm,
    v_cust_ter_desc,
    v_rgn_desc,
    v_grp_desc,
    v_area_desc,
    v_mkt,
    v_mkt_desc,
    v_curr_flag,
    v_last_mth_flag,
    v_plan_id,
    v_plan_cd,
    v_plan_nm,
    v_asgn_typ,
    v_begin_dt
    );
    v_plan_id := 0;
    v_plan_cd := 0;
    v_plan_nm := NULL;
    v_out_cnt := v_out_cnt + 1;
    IF doing_both
    THEN
    COMMIT;
    ELSE
    -- commiting v_commit_rows rows at a time.
    IF v_out_cnt >= v_commit_cnt
    THEN
    COMMIT;
    p.l ( 'Commit point reached: ' || v_out_cnt || 'at: ' ||
    TO_CHAR (SYSDATE, 'mm/dd hh24:mi:ss'));
    v_commit_cnt := v_commit_cnt + v_commit_rows;
    END IF;
    END IF;
    END write_record;
    FUNCTION write_div
    RETURN BOOLEAN
    IS
    return_true_false BOOLEAN;
    BEGIN
    IF v_to_dt < TO_DATE ('08012001', 'mmddyyyy')
    AND ( v_div = 'BH'
    OR v_div = 'TH')
    THEN
    -- Start of BH/TH at CRM
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ( '10012001', 'mmddyyyy')
    AND v_div = 'RD'
    THEN
    -- Start of RD at USA/VI
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ( '01012002', 'mmddyyyy')
    AND ( v_div = 'DD'
    OR v_div = 'CK')
    THEN
    -- Start of DD/CK at VI
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ('12012001', 'mmddyyyy')
    AND v_div = 'NV'
    THEN
    -- Start of NV at ES
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ('01012002', 'mmddyyyy')
    AND v_div = 'LP'
    THEN
    -- Start of LP at ES
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ('01012003', 'mmddyyyy')
    AND ( v_div = 'AN'
    OR v_div = 'AX'
    OR v_div = 'BT'
    OR v_div = 'FB'
    OR v_div = 'VH')
    THEN
    -- Start of AN,AX,BT,FB,VH at CS
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ('01012005', 'mmddyyyy')
    AND ( v_div = 'CR'
    OR v_div = 'CI'
    OR v_div = 'BP'
    OR v_div = 'BI'
    OR v_div = 'PM')
    THEN
    -- Start of CR,CI,BP,BI at ES and PM at CRM
    return_true_false := FALSE;
    ELSIF v_to_dt < TO_DATE ('01012005', 'mmddyyyy')
    AND v_div = 'VC'
    THEN
    -- Start of legacy VC at VI/ABT
    return_true_false := FALSE;
    ELSE
    return_true_false := TRUE;
    END IF;
    RETURN return_true_false;
    END write_div;
    BEGIN -- process_insert_record
    --p.l('Start process_insert_record');
    IF v_last_day_dt BETWEEN v_from_dt AND v_to_dt
    THEN
    v_curr_flag := 1;
    ELSE
    v_curr_flag := 0;
    END IF;
    IF v_last_mth_dt BETWEEN v_from_dt AND v_to_dt
    THEN
    v_last_mth_flag := 1;
    ELSE
    v_last_mth_flag := 0;
    END IF;
    IF mpg_only
    THEN
    -- for each division to expand to
    --p.l('Loop Bunt/MPG: ' || v_bunt || '/' || v_mpg);
    FOR v_mpg_div IN c_mpg_div
    LOOP
    v_div := v_mpg_div.div_id;
    --p.l('Div in loop: ' || v_div);
    IF write_div
    THEN
    write_record;
    END IF;
    END LOOP;
    --p.l('End Loop Bunt/MPG');
    --p.l(' ');
    ELSE
    --p.l(' No Loop Bunt/MPG/div: '||v_bunt||'/'||v_mpg||'/'||v_div);
    IF v_div = 'PC'
    -- ES/05 split of PC division into 4 new divisions.
    THEN
    FOR cdiv IN c_es_div_split
    LOOP
    v_div := cdiv.div_id;
    IF write_div
    THEN
    write_record;
    END IF;
    END LOOP;
    v_div := 'PC';
    ELSE
    IF write_div
    THEN
    write_record;
    END IF;
    END IF;
    END IF;
    --p.l('End process_insert_record');
    END process_insert_record;
    BEGIN -- do_insert
    --p.l('Sorg: ' || v_sorg_cd|| ' Cust: ' || v_cust_cd  );
    --p.l(' DIV: ' || v_div||'from_dt: '||v_from_dt||' to_dt: '||v_to_dt);
    v_orig_from_dt := v_from_dt;
    v_orig_to_dt := v_to_dt;
    v_from_yr := TO_NUMBER (TO_CHAR (v_from_dt, 'yyyy'));
    v_to_yr := TO_NUMBER (TO_CHAR (v_to_dt, 'yyyy'));
    v_yr_diff := v_to_yr - v_from_yr;
    IF v_yr_diff = 0
    THEN
    process_insert_record;
    END IF;
    IF v_yr_diff > 0
    THEN -- write first year record
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr), 'mmddyyyy');
    process_insert_record;
    IF v_yr_diff = 1
    THEN -- write 2nd year record
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 2
    THEN -- write 2nd and 3rd records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 3
    THEN -- write 2nd, 3rd and 4th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 4
    THEN -- write 2nd, 3rd, 4th and 5th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 5
    THEN -- write 2nd, 3rd, 4th 5th and 6th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 6
    THEN -- write 2nd, 3rd, 4th 5th 6th 7th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 7
    THEN -- write 2nd, 3rd, 4th 5th 6th 7th 8th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    -- v_to_dt := v_orig_to_dt; -- Todd: this is wrong, I commented it out.
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 7), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 8
    THEN -- write 2nd, 3rd, 4th 5th 6th 7th 8th 9th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 7), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 7), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 8), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSIF v_yr_diff = 9
    THEN -- write 2nd, 3rd, 4th 5th 6th 7th 8th 9th 10th records
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 1), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 2), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 3), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 4), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 5), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 6), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 7), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 7), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 8), 'mmddyyyy');
    v_to_dt := TO_DATE ('1231' || TO_CHAR (v_from_yr + 8), 'mmddyyyy');
    process_insert_record;
    v_from_dt := TO_DATE ('0101' || TO_CHAR (v_from_yr + 9), 'mmddyyyy');
    v_to_dt := v_orig_to_dt;
    process_insert_record;
    ELSE -- abend with error
    p.l ('End year: ' || v_to_yr || ' is greater than 2007');
    RAISE_APPLICATION_ERROR (-20001, 'End Year greater than 2007');
    END IF;
    END IF;
    v_from_dt := v_orig_from_dt;
    v_to_dt := v_orig_to_dt;
    END do_insert;
    PROCEDURE save_source -- save source in global variables
    IS
    BEGIN
    v_sorg_cd := src_rec.sorg_cd;
    v_cust_cd := src_rec.cust_cd;
    v_mpg := src_rec.mpg;
    v_bunt := src_rec.bunt; --DP
    v_div := src_rec.div;
    v_from_dt := src_rec.from_dt;
    v_to_dt := src_rec.to_dt;
    v_from_dt_orig := src_rec.from_dt;
    v_to_dt_orig := src_rec.to_dt;
    v_ter := src_rec.cust_ter_cd;
    v_rgn := src_rec.cust_rgn_cd;
    v_grp := src_rec.cust_grp_cd;
    v_area := src_rec.cust_area_cd;
    v_sorg_desc := src_rec.sorg_desc;
    v_cust_nm := src_rec.CUST_NM;
    v_cust_ter_desc := src_rec.cust_ter_desc;
    v_rgn_desc := src_rec.cust_rgn_desc;
    v_grp_desc := src_rec.cust_grp_desc;
    v_area_desc := src_rec.cust_area_desc;
    v_mkt := src_rec.mkt_cd;
    v_mkt_desc := src_rec.mkt_desc;
    v_asgn_typ := src_rec.asgn_typ;
    v_prev_values := v_ter || v_rgn || v_grp || v_area;
    END save_source;
    PROCEDURE clear_keys -- clear key fields
    IS
    BEGIN
    v_sorg_cd := NULL;
    v_cust_cd := NULL;
    v_mpg := NULL;
    v_bunt := NULL;
    v_div := NULL;
    v_from_dt := NULL;
    v_to_dt := NULL;
    v_from_dt_orig := NULL;
    v_to_dt_orig := NULL;
    END clear_keys;
    PROCEDURE fill_targ_tab (
    sorg_in IN CUST_HIER.sorg_cd%TYPE,
    cust_in IN CUST_HIER.cust_cd%TYPE,
    div_in IN CUST_HIER.div%TYPE,
    bunt_in IN CUST_HIER.bunt%TYPE --DP
    )
    IS
    BEGIN
    targ_tab.DELETE;
    no_targ_found := TRUE;
    last_targ := 0;
    FOR targ_dates_rec IN c_targ_dates (
    sorg_in,
    cust_in,
    div_in,
    bunt_in
    ) --DP
    LOOP
    --p.l('Found a targ record');
    no_targ_found := FALSE;
    v_next_row := NVL (targ_tab.LAST, 0) + 1;
    targ_tab (v_next_row) := targ_dates_rec;
    END LOOP;
    last_targ := targ_tab.LAST;
    END fill_targ_tab;
    PROCEDURE process_new_record
    IS
    BEGIN
    /*
    ** Put all existing target records for given sorg,
    ** cust and div in a PL/SQL table
    */
    fill_targ_tab (v_sorg_cd, v_cust_cd, v_div, v_bunt); --DP
    IF no_targ_found
    THEN
    --p.l ('1-no_targ_found after looking for ' || v_div);
    do_insert;
    ELSE -- possibly add but do not over-ride
    last_rec := FALSE;
    FOR i IN 1 .. last_targ
    LOOP
    IF i = last_targ
    THEN
    --p.l ('2-This is the last targ rec in pl/sql tab');
    last_rec := TRUE; -- just for readability later
    END IF;
    t_from_dt := targ_tab (i).from_dt;
    t_to_dt := targ_tab (i).to_dt;
    --p.l ('3-Executed tab loop ' || i || ' times.');
    --p.l ('3-targ rec start: ' || t_from_dt || 'targ rec end  : ' || t_to_dt);
    IF v_to_dt > t_to_dt
    THEN
    --p.l ('4-when src_end is after targ_end = always an insert' );
    IF v_from_dt > t_to_dt
    THEN
    --p.l ('5-when src_st after targ_end, insert');
    do_insert;
    EXIT;
    ELSIF v_from_dt >= t_from_dt
    THEN
    --p.l ('6-when s_st is at or after targ_st, insert src');
    --p.l ('6-after targ by setting the new s_st date');
    v_from_dt := TRUNC (t_to_dt) + 1;
    do_insert;
    EXIT;
    ELSE
    --p.l ('7-when src_st is before targ_st, insert src after targ');
    --p.l ( '7-then look to see if src can be added after targ' );
    v_from_dt := TRUNC (t_to_dt) + 1;
    do_insert;
    --write_dup := FALSE;
    v_to_dt := TRUNC (t_from_dt) - 1 / (24 * 60 * 60);
    v_from_dt := v_from_dt_orig;
    IF last_rec
    THEN
    --p.l ( '8-when this is last targ, insert after targ' );
    do_insert;
    -- naturally exits with last_rec
    ELSE
    --p.l ('9-when not last targ rec, get another one');
    NULL;
    END IF;
    END IF;
    ELSIF v_from_dt >= t_from_dt
    THEN
    --p.l ('10-s_from >= t_from and s_to < t_to == contained/dup');
    --do_dup_insert;
    EXIT;
    ELSE
    --p.l ('15-s_from < t_from and s_to <= t_to');
    IF v_to_dt >= t_from_dt
    THEN
    --p.l ('16-src is before targ, adjust s_to');
    v_to_dt := TRUNC (t_from_dt) - 1 / (24 * 60 * 60);
    IF last_rec
    THEN
    --p.l ('19-when last targ rec, insert');
    do_insert;
    -- naturally exits with last_rec
    ELSE
    --p.l ('20-when more targ, get another, look for end');
    NULL;
    END IF;
    ELSE
    --p.l ('21-src_from and src_to < t_from');
    IF last_rec
    THEN
    --p.l ('22-when last targ rec, insert');
    do_insert;
    -- naturally exits with last_rec
    ELSE
    --p.l ( '23-when more targ recs, targ could still fit' );
    NULL;
    END IF;
    END IF;
    END IF; -- starting if
    END LOOP; -- targ_tab loop
    END IF; -- no_dirs_found
    END process_new_record;
    BEGIN
    -- Initialize meta data variables
    v_begin_dt := SYSDATE;
    v_in_cnt := 0;
    v_out_cnt := 0;
    /*
    ** Get dates for the last end of business day and the last closed month
    */
    SELECT d_val
    INTO v_last_day_dt
    FROM GP_PARMS
    WHERE KEY = 'end_of_business'
    AND parm = 'day';
    SELECT d_val
    INTO v_last_mth_dt
    FROM GP_PARMS
    WHERE KEY = 'end_of_business'
    AND parm = 'month';
    p.l (' ');
    p.l ('End of business day is: ' || TO_CHAR (v_last_day_dt, 'mm/dd/yyyy'));
    p.l ('End of business month is: ' || TO_CHAR (v_last_mth_dt, 'mm/dd/yyyy'));
    p.l (' ');
    -- DELETE RECORDS THAT ARE CURRENTLY IN THE TARGET TABLE
    truncate_table ('cust_hier');
    p.l ('Table cust_hier truncated.');
    -- Create the default record
    INSERT INTO CUST_HIER (
    sorg_cd,
    bunt, --DP
    div,
    from_dt,
    to_dt,
    cust_cd,
    cust_ter_cd,
    cust_rgn_cd,
    cust_grp_cd,
    cust_area_cd,
    CUST_NM,
    cust_ter_desc,
    cust_rgn_desc,
    cust_grp_desc,
    cust_area_desc,
    sorg_desc,
    cust_mkt_cd,
    cust_mkt_desc,
    comp_plan_id,
    comp_plan_cd,
    comp_plan_nm,
    curr_flag,
    last_mth_flag,
    asgn_typ,
    lddt
    ) VALUES (
    'S004',
    'na', --DP
    'na',
    TO_DATE ('01011900', 'mmddyyyy'),
    TO_DATE ('12319999235959', 'mmddyyyyhh24miss'),
    'na',
    'na',
    'na',
    'na',
    'na',
    'Not Known',
    'Not Known',
    'Not Known',
    'Not Known',
    'Not Known',
    'US Sales Org',
    'USA',
    'United States',
    0,
    'na',
    'Missing',
    1,
    1,
    'div',
    SYSDATE
    );
    COMMIT;
    doing_both := TRUE;
    mpg_only := FALSE;
    p.l (
    'Start CUSTs with both MPG and DIV assignments at: ' ||
    TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    p.l (' ');
    /*
    ** Process AB AF PC RD Divisions for customers with MPG and DIV assignments
    ** Added NV and VS, CS 5/2002.
    ** Added AX 12/2005
    ** Added VC 8/2006
    */
    first_rec := TRUE;
    clear_keys;
    OPEN c_mpg_and_div;
    LOOP
    FETCH c_mpg_and_div INTO src_rec;
    EXIT WHEN c_mpg_and_div%NOTFOUND;
    v_in_cnt := v_in_cnt + 1;
    --write_dup := TRUE;
    save_source;
    process_new_record;
    END LOOP;
    COMMIT;
    CLOSE c_mpg_and_div;
    p.l (
    'Customers with both MPG and DIV assignments done: ' ||
    TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    p.l ('Input records: ' || v_in_cnt || ', Output records: ' || v_out_cnt);
    v_tot_in := v_tot_in + v_in_cnt;
    v_tot_out := v_tot_out + v_out_cnt;
    v_in_cnt := 0;
    v_out_cnt := 0;
    p.l (
    'Total Input records: ' || v_tot_in || ', Total Output records: ' ||
    v_tot_out);
    p.l ('Input/Output ratio: ' || TO_CHAR (v_tot_in / v_tot_out, '999.99'));
    p.l (' ');
    /*
    ** Start of MPG only processing, sequential without read of target
    */
    doing_both := FALSE;
    mpg_only := FALSE;
    p.l (' ');
    p.l (
    'Start sequential loop processing at: ' ||
    TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    /*
    ** Process AB AF PC RD Divisions for customers with only MPG assignments
    ** Added NV and VS, CS 5/2002.
    ** Added AX 12/2005
    ** Added VC 8/2006
    */
    first_rec := TRUE;
    clear_keys;
    OPEN c_div_only_mpg;
    LOOP
    FETCH c_div_only_mpg INTO src_rec;
    -- Process last record
    IF c_div_only_mpg%NOTFOUND
    AND NOT first_rec
    THEN
    do_insert;
    END IF;
    EXIT WHEN c_div_only_mpg%NOTFOUND;
    v_in_cnt := v_in_cnt + 1;
    IF src_rec.sorg_cd = v_sorg_cd --src_key = prev_key
    AND src_rec.cust_cd = v_cust_cd
    AND src_rec.bunt = v_bunt --DP
    AND src_rec.div = v_div
    THEN
    --p.l ('S1-same key, looking for same hierarchy');
    v_src_values :=
    src_rec.cust_ter_cd || src_rec.cust_rgn_cd || src_rec.cust_grp_cd ||
    src_rec.cust_area_cd;
    IF v_src_values = v_prev_values
    THEN
    --p.l ( 'S2-same key and hierarchy' );
    IF src_rec.from_dt <= v_from_dt
    THEN
    --p.l( 'S3-if src from_dt is earlier, use it' );
    v_from_dt := src_rec.from_dt;
    END IF;
    ELSE
    p.l ( 'S4 same key but new hierarchy, check date' );
    --p.l ( 'S4 S_from_dt: ' || src_rec.from_dt ||
    -- ' V_from_dt: ' || v_from_dt);
    --insert only when src from_dt is earlier even if hier has changed
    IF src_rec.from_dt < v_from_dt
    THEN
    --p.l ('S5-Insert previous and save source');
    do_insert;
    -- set source rec to_dt to be just prior to previous rec from_dt
    -- this enforces contiguous alignments (no time overlaps)
    src_rec.to_dt := TRUNC (v_from_dt) - 1 / (24 * 60 * 60);
    save_source;
    END IF;
    END IF;
    ELSE
    --p.l ( 'S6- New Key' );
    IF first_rec
    THEN
    first_rec := FALSE;
    ELSE
    --p.l( 'S7- Inserting previous record' );
    do_insert;
    END IF;
    save_source;
    END IF; -- src_key = prev_key
    END LOOP; -- rec in c_div_only_mpg cursor loop
    COMMIT;
    CLOSE c_div_only_mpg;
    p.l (
    'Division assign. Customers with only MPG assign. done: ' ||
    TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    p.l ('Input records: ' || v_in_cnt || ', Output records: ' || v_out_cnt);
    v_tot_in := v_tot_in + v_in_cnt;
    v_tot_out := v_tot_out + v_out_cnt;
    v_in_cnt := 0;
    v_out_cnt := 0;
    p.l (
    'Total Input records: ' || v_tot_in || ', Total Output records: ' ||
    v_tot_out);
    p.l ('Input/Output ratio: ' || TO_CHAR (v_tot_in / v_tot_out, '999.99'));
    p.l (' ');
    /*
    ** Process pure MPG assignments
    */
    doing_both := FALSE;
    mpg_only := TRUE;
    first_rec := TRUE;
    clear_keys;
    OPEN c_mpg;
    LOOP
    FETCH c_mpg INTO src_rec;
    -- Process last record
    IF c_mpg%NOTFOUND
    AND NOT first_rec
    THEN
    do_insert;
    END IF;
    EXIT WHEN c_mpg%NOTFOUND;
    v_in_cnt := v_in_cnt + 1;
    -- Troubleshooting output
    p.l('New record');
    p.l('V sorg/cust/bunt: ' || v_sorg_cd || v_cust_cd || v_bunt);
    p.l('S sorg/cust/bunt: ' || src_rec.sorg_cd || src_rec.cust_cd ||
    src_rec.bunt);
    p.l(' ');
    IF src_rec.sorg_cd = v_sorg_cd --src_key = prev_key
    AND src_rec.cust_cd = v_cust_cd
    AND src_rec.mpg = v_mpg
    THEN
    --p.l ('MS1-same key, looking for same hierarchy');
    v_src_values :=
    src_rec.cust_ter_cd || src_rec.cust_rgn_cd || src_rec.cust_grp_cd ||
    src_rec.cust_area_cd;
    IF v_src_values = v_prev_values
    THEN
    --p.l ( 'MS2-same key and hierarchy' );
    IF src_rec.from_dt <= v_from_dt
    THEN
    --p.l( 'MS3-if src from_dt is earlier, use it' );
    v_from_dt := src_rec.from_dt;
    END IF;
    ELSE
    p.l ( 'MS4 same key but new hierarchy, check date' );
    --p.l ( 'MS4 S_from_dt: ' || src_rec.from_dt ||
    -- ' V_from_dt: ' || v_from_dt);
    --insert only when src from_dt is earlier even if hier has changed
    IF src_rec.from_dt < v_from_dt
    THEN
    --p.l ('MS5-Insert previous and save source');
    do_insert;
    -- set source rec to_dt to be just prior to previous rec from_dt
    -- this enforces contiguous alignments (no time overlaps)
    src_rec.to_dt := TRUNC (v_from_dt) - 1 / (24 * 60 * 60);
    save_source;
    END IF;
    END IF;
    ELSE
    --p.l ( 'MS6- New Key' );
    IF first_rec
    THEN
    first_rec := FALSE;
    --p.l('First record in c_mpg cursor detected');
    ELSE
    --p.l( 'MS7- Inserting previous record' );
    do_insert;
    END IF;
    save_source;
    END IF; -- src_key = prev_key
    END LOOP; -- rec in c_mpg cursor loop
    COMMIT;
    CLOSE c_mpg;
    mpg_only := FALSE;
    p.l (
    'Customers with only MPG assignments done: ' ||
    TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    p.l ('Input records: ' || v_in_cnt || ', Output records: ' || v_out_cnt);
    p.l (' ');
    v_tot_in := v_tot_in + v_in_cnt;
    v_tot_out := v_tot_out + v_out_cnt;
    p.l (
    'Total Input records: ' || v_tot_in || ', Total Output records: ' ||
    v_tot_out);
    p.l ('Analyze tables started: '||TO_CHAR (SYSDATE, 'mm/dd/yyyy hh24:mi:ss'));
    atab ('cust_hier');
    v_end_dt := SYSDATE;
    p.l (' ');
    p.l ('Finished at: ' || TO_CHAR (v_end_dt, 'mm/dd/yyyy hh24:mi:ss'));
    EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
    v_error_code := SQLCODE;
    v_error_message := SUBSTR (SQLERRM, 1, 200);
    p.l ('Failure during end of business date processing');
    p.l ('ERROR : ' || v_error_message);
    raise_application_error(-20001, v_error_message);
    WHEN OTHERS
    THEN
    v_error_code := SQLCODE;
    v_error_message := SUBSTR (SQLERRM, 1, 200);
    p.l ('Error: ' || v_error_message);
    p.l ('Sorg: ' || v_sorg_cd || ' CUST: ' || v_cust_cd);
    p.l ( ' DIV: '||v_div||' BUNT: '||v_bunt||' from_dt: '||v_from_dt||
    ' to_dt: ' || v_to_dt); --DP
    p.l ( 'v_from: '||v_from_dt||' v_to: '||v_to_dt||' v_pcd: '|| v_plan_cd);
    p.l ( 'o_from: ' || v_orig_from_dt || ' o_to: ' || v_orig_to_dt ||
    ' fyr: ' || v_from_yr || ' tyr: ' || v_to_yr ||
    ' yr_dif: ' || v_yr_diff);
    raise_application_error(-20001, v_error_message);
    END;
    /
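    The ELSIF chain in do_insert repeats the same statements once for every possible
    year difference and has to abend past a fixed cutoff year. If this ever needs to
    go beyond 2007, the per-year branches can be collapsed into a loop. The sketch
    below is only an illustration against the variable names used in the listing
    (v_from_dt, v_to_dt, v_from_yr, v_to_yr, v_orig_from_dt, v_orig_to_dt,
    process_insert_record); it is not a tested drop-in replacement.
    -- Sketch: split [v_from_dt, v_to_dt] into one record per calendar year
    FOR yr IN v_from_yr .. v_to_yr
    LOOP
        IF yr > v_from_yr THEN
            v_from_dt := TO_DATE ('0101' || TO_CHAR (yr), 'mmddyyyy');
        END IF;
        IF yr < v_to_yr THEN
            v_to_dt := TO_DATE ('1231' || TO_CHAR (yr), 'mmddyyyy');
        ELSE
            v_to_dt := v_orig_to_dt;   -- last slice keeps the original end date
        END IF;
        process_insert_record;
    END LOOP;
    v_from_dt := v_orig_from_dt;       -- restore the original dates, as the
    v_to_dt := v_orig_to_dt;           -- posted procedure does at the end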

  • Oracle: Please implement simple performance improvement in JDBC (Thin) Driver

    Oracle should put dynamic prefetch into their (thin) JDBC driver. I don't use the OCI driver, so I don't know how this applies to it.
    Some of you may be aware that tweaking a statement's row prefetch size can improve performance quite a lot. IIRC, the default prefetch is about 30 and that's pretty useless for queries returning large (# of rows) resultsets, but it's pretty good for queries returning only a small number of records. Just as an example, when running a simple SELECT * FROM foo WHERE ROWNUM <= somenumber query, here's what I got:
    Prefetch = 1: 10000 rows = 15 secs, 1000 rows = 1.5 secs, 10 rows = 30 ms
    Prefetch = 500: 10000 rows = 2.5 secs, 1000 rows = 280 ms, 10 rows = 80 ms
    Prefetch = 2000: 10000 rows = 2 secs, 1000 rows = 700 ms, 10 rows = 460 ms
    In our experience, the default of 30 (?) is too low for most applications; 500 to 1000 would be a better default. In the end, though, the only way to get the best performance is to adjust the prefetch size to the expected number of rows for every query. While that sounds like a reasonable effort for developers of a simple client/server application, it just won't work in a 3-tier system that deals with connection pools in an application server. So here's my suggestion on how Oracle should address this:
    Instead of having just a single prefetch setting for the statement (or connection), there should be an 'initial' prefetch value (with a default of somewhere between 1 and 50) and a maximum prefetch value (with a default of somewhere between 500 and 5000). When the driver pulls the first batch of records from the server, it should use the initial prefetch. If there are more records to fetch, it should fetch them using the maximum prefetch. This would let the driver perform much better for both small and large resultsets while, at the same time, remaining transparent to the application (and application developer).
    [email protected]

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but with no luck. I also ran the process on my laptop, which produced better and faster performance.
    As a workaround (not practical as a general solution), I created flat files and defined the data as an external table; Oracle then reads the data in those files as if it were rows inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Running Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.
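    For reference, the flat-file / external-table load described above looks roughly
    like the sketch below. The directory path, file name, table names, and column list
    are placeholders rather than the poster's actual objects; the final INSERT uses the
    append hint for a direct-path load.
    -- Placeholder directory pointing at the flat files
    CREATE DIRECTORY load_dir AS '/u01/app/load';

    -- External table over the flat file (placeholder columns)
    CREATE TABLE src_ext (
        id   NUMBER,
        nm   VARCHAR2(40),
        amt  NUMBER
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY load_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
        )
        LOCATION ('src_data.csv')
    )
    REJECT LIMIT UNLIMITED;

    -- Direct-path insert from the external table into the target table
    INSERT /*+ append */ INTO target_tab (id, nm, amt)
    SELECT id, nm, amt
      FROM src_ext;
    COMMIT;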
