Creating partitions is taking too long

hi,
In our 11g database we have 15 tables that are range-list partitioned. Range partitioning on some of these tables is taking nearly 10 minutes per table, and the whole process is taking nearly 1 hour. What we actually do is split the existing partitions based on the high bound value to create new ones.
We used to do the same on our 10g database, but there the whole process finished in 10 minutes.
Were there any major changes to partitioning in the 11g database?
And how can we speed up this partition creation?
I went through the Oracle 11g documentation but didn't find anything about speeding up partition creation.
Please help us.

I will explain in detail.
Ours is a data warehousing project, and we usually get massive data in the form of flat files for each period. All our table creation scripts have partition/subpartition templates like this:
PARTITION BY RANGE (period)
SUBPARTITION BY LIST (etl)
SUBPARTITION TEMPLATE (
    SUBPARTITION sp_ersvctf VALUES ('0') TABLESPACE large_data)
(PARTITION part_ersvctf
    VALUES LESS THAN (1) TABLESPACE large_data,
 PARTITION partn_ersvctf
    VALUES LESS THAN (MAXVALUE) TABLESPACE large_data);
So before loading the data into these tables, we have a package that works out which partition that period's data goes in, checks whether that partition already exists, and otherwise splits the next higher partition. For example, when loading period=30 data into an empty table (fresh database), it splits partn_ersvctf (the MAXVALUE partition) into two, part_30 and partn_ersvctf, and loads the data into part_30. Even these splits of empty partitions are taking 10 minutes for some tables in 11g.
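For reference, a minimal sketch of the kind of statements such a package would issue for the split, with an illustrative table name and split value that are not from the original post:

-- Does the partition for period 30 already exist?
SELECT COUNT(*) FROM user_tab_partitions
WHERE table_name = 'MY_FACT_TABLE' AND partition_name = 'PART_30';
-- If not, split the MAXVALUE partition so that periods up to 30 get
-- their own partition. When all rows land on one side of the split
-- point, Oracle can do a fast split without copying data.
ALTER TABLE my_fact_table
  SPLIT PARTITION partn_ersvctf AT (31)
  INTO (PARTITION part_30, PARTITION partn_ersvctf)
  UPDATE INDEXES;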
The reason we cannot create the partitions in advance is that we are using the same process as on 10g (tested successfully), where this whole dynamic partitioning process finished in a few minutes, and we want to migrate it unchanged to 11g. Doing this in 11g is taking longer than we expected, so I am wondering whether there are any major changes to 11g partitioning that I am missing.
I hope I have given a clear picture now.
Thanks for your help, Justin. Please let me know if you need any further information.

Similar Messages

  • Creating intermedia index is taking too long!!

    Creating an interMedia index is taking too long; then memory is insufficient and the system goes down.
    Please help.
    Platform: Windows 2000 Pro, Oracle 8.1.7
    Linux Red Hat, Oracle 8.1.7

    Use CTX_OUTPUT.START_LOG to begin logging index requests. Then create the index and see what's going on.
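    A minimal sketch of that logging sequence (the log file and index names are illustrative):

        -- Log Oracle Text indexing progress while the index builds.
        EXEC CTX_OUTPUT.START_LOG('ctx_index_build.log');
        -- CREATE INDEX my_text_idx ON docs(text) INDEXTYPE IS CTXSYS.CONTEXT;
        EXEC CTX_OUTPUT.END_LOG;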
    o.

  • I have created a slide show for a funeral but it is taking too long to transfer to sub (like 24 hrs). Is this normal or should I be doing this another way?

    You are not the only one, though I have not retried my installation. Here is what happened to me; please verify whether you had the same (it sounds like it).
    1. Installed 10.8.4 on top of 10.8.3 and the install went fine.
    2. I have a Nifty Drive installed (though no SD card in it), but did not realize the potential problem you suggest.
    3. It's a MacBook Pro with Retina display.
    4. Boot took forever: it was 40 seconds from login to the desktop on 10.8.3, but on 10.8.4 it took 10-15 minutes with a cursor in the upper left on a white/gray screen.
    5. I reinstalled 10.8.3 and all works as before.
    6. But it sounds like after removing the Nifty Drive I should reinstall? The problem I described was the same as yours.

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. Each script has two parts: first it copies data from the original table to the archive table, then it deletes the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. Please help... More info below
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
       );
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to the table schema3.OTW, and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' here is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
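    A minimal sketch of that partition-based approach, with illustrative table and partition names, assuming monthly range partitions on the DATE column:

        -- Archiving a month becomes a metadata-only operation instead of
        -- a large DELETE: drop the partition, or truncate it to keep it.
        ALTER TABLE app.mon_txns_part DROP PARTITION p_2012_01 UPDATE GLOBAL INDEXES;
        ALTER TABLE app.mon_txns_part TRUNCATE PARTITION p_2012_02 UPDATE GLOBAL INDEXES;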

  • Taking too long to get LOV

    HI,
    I have created a custom folder in which the query returns 0.5 million records.
    I have created an item class on the airline_name column, which is used in the worksheet as a parameter.
    The problem is that it takes too long, about 2 minutes, to get the LOV when the user wants to search for an exact name.
    Thanks,
    Himanshu Tiwari

    Hi,
    Usually, you should not use the folder that the report is based on to define the LOV. You should use a separate folder to define the LOV that is optimised to return the content of the LOV.
    Rod West

  • Database open (recovery) taking too long

    Hi,
    I've been using your awesome Berkeley DB Java Edition for a couple of years and have been very happy with it.
    I am currently facing an issue trying to open the database after a disk-full situation (which left the database unable to write, and hence not closed properly).
    While recovery seems to be operating, it has been taking an inordinate amount of time: 16 hours so far. My database holds around 200GB of data, which inflated to over 450GB during deletion of entries, gobbling up all free space on disk.
    My questions are:
    * Should I continue to wait for recovery?
    * Is there any chance that recovery is looping?
    * Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    Some other information that may help:
    * The recovery has decreased the size of the last significant file, and has created 3 new files since it started running.
    * I have been monitoring the open files (using lsof), and they change every now and then to other files, though a good amount of the time is spent near the end of the database.
    Thus, I feel like recovery is running normally, just taking too long. Please let me know your opinion.
    A few other things I should mention regarding my issue:
    * The database was, till yesterday, running on BDB JE 3.3.75. After running several hours of recovery, I upgraded to 4.1.10 (since I read about a possible recovery-looping bug in one of the versions).
    * Once 4.1.10 started recovery, it spat out errors regarding the last 2 files. Only on deleting those 2 files (the last being 0 bytes, the 2nd-last being about 5k) did the recovery start. Note that the older 3.3.75 recovery never complained about those files. I can post the errors here if relevant.
    * Some of the .jdb files (about 500 out of the 47,000 files that make up the database) are 100 MB files, since I had experimented with larger file sizes for a few days, then reverted the setting.
    Would any of these affect a successful recovery?
    My setup is:
    OS:Linux CentOS 5.2, 64-bit, kernel 2.6.18-92.el5
    JVM: Sun Java 1.6.0_20, 64-bit
    Memory: 16 GB RAM, of which 8 GB is allocated to the java process (-Xmx8000M -Xms8000M)
    BDB cache set to use 6GB RAM (envconfig.setCacheSize(6000000000))
    Only the BDB basic API is being used (Environment, database, cursors). We do not use DPL, or HA features.
    Awaiting your kind response,
    Sushant A

    Hi Sushant,
    Regarding "Should I continue to wait for recovery?" and "Is there any chance that recovery is looping?":
    I'm not aware of a bug that would cause recovery to loop, however, you may want to take thread dumps to see if it is progressing. It isn't easy to tell, however, since each phase of recovery is in fact a loop. What you can tell easily from the thread dumps is whether recovery is blocked (completely stopped) for some reason. I don't know of a bug that would cause this, but it's something I would check for.
    Assuming it is not blocked, I suggest that you leave recovery running, and additionally (in parallel) try to obtain some information about your log. While recovery is running you can run the DbPrintLog utility, which does not itself run recovery. I suggest running the following command, which will tell us in general what your log looks like and in particular how far apart the checkpoints are:
    java -jar je-x.y.z.jar DbPrintLog -h <envHome> -S > <output>
    Please post the output.
    If checkpoints are not running in your application for some reason, or they are running very infrequently, this can cause VERY long recoveries. Unfortunately, you may have such a problem in your app and not be aware of it, until you crash and have to recover. To guard against this sort of thing in the future, you should keep an eye on the checkpoint frequency. EnvironmentStats.getNCheckpoints and getEndOfLog can together be used to tell how much log is written between checkpoints. We will also be able to see this from the DbPrintLog -S output.
    Regarding "Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?": DbDump normally runs recovery. DbDump with the -r or -R option does not run recovery, but has other drawbacks. With -r, a large amount of memory may be necessary to dump an accurate representation of your data set. If this fails because you run out of memory, -R can be used, but this will dump multiple versions of each record and it will be up to you to interpret the output.
    If regular recovery does not succeed, then DbDump -r is the next thing to try.
    Regarding "Would any of these above affect a successful recovery?": No, I don't believe so.
    --mark

  • Importing a table with a BLOB column is taking too long

    I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
    1 - Set buffer to 500 MB
    2 - Pre-created the table and turned off logging
    3 - Set indexes=N
    4 - Set constraints=N
    5 - I have 10 online redo logs of 200 MB each
    6 - Even turned off logging at the database level with disablelogging = true
    It is still taking too long to load the table with the BLOB column. The BLOB field contains PDF files.
    For your info:
    Computer: Sun V490 with 16 CPUs, Solaris 10
    Memory: 10 gigabytes
    SGA: 4 gigabytes
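    A sketch of setting 2 above (pre-created table with logging turned off), assuming an illustrative table blob_docs with a BLOB column pdf_file, not the poster's actual names:

        -- Minimize redo generation for the table and its LOB segment.
        ALTER TABLE blob_docs NOLOGGING;
        ALTER TABLE blob_docs MODIFY LOB (pdf_file) (NOCACHE NOLOGGING);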

    Legatti,
    I have feedback=10000. However, by monitoring the import I know that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
    Thanks for your reply.

  • SQL Update statement taking too long..

    Hi All,
    I have a simple update statement that goes through a table of 95000 rows, and it is taking too long to update; here are the details:
    Oracle Version: 11.2.0.1 64bit
    OS: Windows 2008 64bit
    desc temp_person;
    Name           Null?     Type
    PERSON_ID      NOT NULL  NUMBER(10)
    DISTRICT_ID    NOT NULL  NUMBER(10)
    FIRST_NAME               VARCHAR2(60)
    MIDDLE_NAME              VARCHAR2(60)
    LAST_NAME                VARCHAR2(60)
    BIRTH_DATE               DATE
    SIN                      VARCHAR2(11)
    PARTY_ID                 NUMBER(10)
    ACTIVE_STATUS  NOT NULL  VARCHAR2(1)
    TAXABLE_FLAG             VARCHAR2(1)
    CPP_EXEMPT               VARCHAR2(1)
    EVENT_ID       NOT NULL  NUMBER(10)
    USER_INFO_ID             NUMBER(10)
    TIMESTAMP      NOT NULL  DATE
    CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
    Index created.
    ANALYZE INDEX tmp_rs_PERSON_ED COMPUTE STATISTICS;
    Index analyzed.
    explain plan for update temp_person
      2  set first_name = (select trim(f_name)
      3                    from ext_names_csv
      4                               where temp_person.PERSON_ID=ext_names_csv.p_id
      5                               and   temp_person.DISTRICT_ID=ext_names_csv.ed_id);
    Explained.
    @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3786226716
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT            |                | 82095 |  4649K|  2052K  (4)| 06:50:31 |
    |   1 |  UPDATE                     | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL         | TEMP_PERSON    | 82095 |  4649K|   191   (1)| 00:00:03 |
    |*  3 |   EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV  |     1 |   178 |    24   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    19 rows selected.
    By the looks of it, the update is going to take 6 hrs!!!
    ext_names_csv is an external table that has the same number of rows as the PERSON table.
    ROHO@rohof> desc ext_names_csv
    Name       Null?    Type
    P_ID                NUMBER
    ED_ID               NUMBER
    F_NAME              VARCHAR2(300)
    L_NAME              VARCHAR2(300)
    Can anyone help diagnose this, please?
    Thanks
    Edited by: rsar001 on Feb 11, 2011 9:10 PM

    Thank you all for the great ideas; you have been extremely helpful. Here is what we did to resolve the issue.
    We started with Etbin's idea to create a table from the external table, so that we can index and reference it more easily than an external table. We did the following:
    SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
    Table created.
    SQL> desc ext_person
    Name       Null?    Type
    P_ID                NUMBER
    ED_ID               NUMBER
    FST_NAME            VARCHAR2(300)
    LST_NAME            VARCHAR2(300)
    SQL> select count(*) from ext_person;
      COUNT(*)
         93383
    SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
    Index created.
    SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
    PL/SQL procedure successfully completed.
    We had a look at the plan with the original SQL query that we had:
    SQL> explain plan for update temp_person
      2  set first_name = (select fst_name
      3                    from ext_person
      4                               where temp_person.PERSON_ID=ext_person.p_id
      5                               and   temp_person.DISTRICT_ID=ext_person.ed_id);
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1236196514
    | Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT             |                | 93383 |  1550K|   186K (50)| 00:37:24 |
    |   1 |  UPDATE                      | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEMP_PERSON    | 93383 |  1550K|   191   (1)| 00:00:03 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| EXTT_PERSON    |     9 |  1602 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN          | EXT_PERSON_ED  |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    20 rows selected.
    As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (MERGE); we explained the plan for the new query and here are the results:
    SQL> explain plan for MERGE INTO temp_person t
      2  USING (SELECT fst_name ,p_id,ed_id
      3  FROM  ext_person) ext
      4  ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
      5  WHEN MATCHED THEN
      6  UPDATE set t.first_name=ext.fst_name;
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2192307910
    | Id  | Operation            | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | MERGE STATEMENT      |              | 92307 |    14M|       |  1417   (1)| 00:00:17 |
    |   1 |  MERGE               | TEMP_PERSON  |       |       |       |            |          |
    |   2 |   VIEW               |              |       |       |       |            |          |
    |*  3 |    HASH JOIN         |              | 92307 |    20M|  6384K|  1417   (1)| 00:00:17 |
    |   4 |     TABLE ACCESS FULL| TEMP_PERSON  | 93383 |  5289K|       |   192   (2)| 00:00:03 |
    |   5 |     TABLE ACCESS FULL| EXT_PERSON   | 92307 |    15M|       |    85   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
    Note
       - dynamic sampling used for this statement (level=2)
    21 rows selected.
    As you can see, the update now takes 00:00:17 to run (need to say more?) :)
    Thank you all for your ideas that helped us get to the solution.
    Much appreciated.
    Thanks

  • Query taking too long

    I am running a fairly complex query with several table joins, and it is taking too long. What can I do to improve performance?
    Thanks.
    Frank

    Dan's first suggestion is key: if you are doing multiple
    table joins, you want to make sure your indexes are set up on your
    tables correctly. If you have access to the database, this should
    be your first step. Rationalize's stored procedure suggestion is
    also a great idea (again, if you have access to create and manage
    stored procedures on your DB).
    Other than that, most databases have some sort of SQL
    efficiency analysis tool. SQL Server has one built into its Query
    Analyzer tool. I would recommend using something like that to
    streamline your SQL. Like Dan said, something as simple as the
    order of elements in your where clause might make a big
    difference.

  • Query taking too long on Oracle9i

    Hi All
    I am running a query on our prod database (Oracle8i 8.1.7.4) and running the same query on our test db (Oracle9i version 4). The query is taking too long (more than 10 min) in the test db. Both databases are installed on the same machine, IBM AIX V4, and the table schema and data are the same.
    Any help would be appreciated.
    Here are the results.
    FASTER ONE
    ORACLE 8i using Production
    Statistics
    864 recursive calls
    68 db block gets
    159855 consistent gets
    20297 physical reads
    0 redo size
    1310148 bytes sent via SQL*Net to client
    68552 bytes received via SQL*Net from client
    1036 SQL*Net roundtrips to/from client
    28 sorts (memory)
    1 sorts (disk)
    15525 rows processed
    SLOWER ONE
    ORACLE 9i using Test
    Statistics
    819 recursive calls
    80 db block gets
    22981568 consistent gets
    1361 physical reads
    0 redo size
    1194902 bytes sent via SQL*Net to client
    34193 bytes received via SQL*Net from client
    945 SQL*Net roundtrips to/from client
    0 sorts (memory)
    1 sorts (disk)
    14157 rows processed

    319404-
    To help us better understand the problem,
    1) Could you post your execution plan on the two different databases?
    2) Could you list indexes (if any, on these tables)?
    3) Are any of the objects in the 'from list' a view?
    If so, are you using a user defined function to create the view?
    4) Why are you using the table 'cal_instance_relationship' twice in the 'from' clause?
    5) Can't your query be the following?
    SELECT f.person_id, f.course_cd, cv.responsible_org_unit_cd cowner,
           f.fee_cal_type sem, f.fee_ci_sequence_number seq_no,
           sua.unit_cd, uv.owner_org_unit_cd uowner, uv.supervised_contact_hours hours,
           0 chg_rate, SUM(f.transaction_amount) tot_fee, ' ' tally
    FROM   unit_version uv,
           cal_instance_relationship cir1,
           chg_method_apportion cma,
           student_unit_attempt sua,
           course_version cv,
           fee_ass f
    WHERE  f.fee_type = 'VET-MATFEE'
    AND    f.logical_delete_dt IS NULL
    AND    f.s_transaction_type IN ('ASSESSMENT', 'MANUAL ADJ')
    AND    f.fee_ci_sequence_number > 400
    AND    f.course_cd = cv.course_cd
    AND    cv.version_number = (SELECT MAX(v.version_number)
                                FROM   course_version v
                                WHERE  v.course_cd = cv.course_cd)
    AND    f.person_id = sua.person_id
    AND    f.course_cd = sua.course_cd
    AND    f.fee_type = cma.fee_type
    AND    f.fee_ci_sequence_number = cma.fee_ci_sequence_number
    AND    cma.load_ci_sequence_number = cir1.sub_ci_sequence_number
    AND    cir1.sup_cal_type = 'ACAD-YR'
    AND    cir1.sub_cal_type = sua.cal_type
    AND    cir1.sub_ci_sequence_number = sua.ci_sequence_number
    AND    sua.unit_attempt_status NOT IN ('DUPLICATE','DISCONTIN')
    AND    sua.unit_cd = uv.unit_cd
    AND    sua.version_number = uv.version_number
    GROUP BY f.person_id, f.course_cd, cv.responsible_org_unit_cd, f.fee_cal_type,
           f.fee_ci_sequence_number, sua.unit_cd, uv.owner_org_unit_cd,
           uv.supervised_contact_hours;

  • Custom Webdynpro text field is taking too long to accept input values

    Dear All,
                   I have created a custom Web Dynpro for PO header fields in SRM. This WD contains a lot of fields. When I try to put the cursor in a text field, it takes too long for the cursor to appear in that input field. There is no problem with other fields, which have search helps associated with them. The field with the problem is just a text field.
    Please help.
    thanks.

  • R/3 Extraction taking too long to load data into BW

    Hi there,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking endless time.
    Even the extractor checker RSA3 is taking too long to execute. I don't know why it's taking so long, since there is not much data.
    I enhanced the datasource with three fields from BSEG using user exits.
    Is that the reason why it's taking too long? Does a user exit slow down the extraction process?
    What measures should I take to quicken the process?
    Thanks for your time
    Vandana

    Thanks for all your replies.
    Please go through the steps I've gone through:
    - Installed the Business Content; it is version 3.5
    - Changed the update rules and transfer rules, and migrated the datasource to BI 7
    - Enhanced 0FI_AP_4 to include three fields from the BSEG table
    - Ran RSA3; the new fields are showing, but the loading is quite slow
    - Commented out the code and ran RSA3; with little difference, data is showing up
    - Removed the comments and ran again; it's fine, though it takes a little more time than the previous step, but data is showing up
    - Replicated the datasource into BW
    - Created the info package and started the init process (before this, deleted the previously stored init request)
    - Data isn't loading; please see the error message below.
    Diagnosis
    The data request was a full update. In this case, the corresponding table in the source system does not
    contain any data. System response: Info IDoc received with status 8. Procedure: check the data basis in the source system.
    - Checked the transformation between datasource 0FI_AP_4 and InfoSource ZFI_AP_4,
       and I did NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
    - Replicated the datasource 0FI_AP_4 again, but no change.
    Now I don't know what's happening here.
    When I check the datasource 0FI_AP_4 in RSA6, I can see the three new fields from BSEG.
    When I check RSA3, I can see the data getting populated with the three new fields from BSEG.
    When I check the fields of the datasource 0FI_AP_4 in BW, I can see the three new fields. That shows
    the connection between BW and R/3 is fine, doesn't it?
    Now, can anyone please suggest how to go forward from here?
    Thanks for your time
    Vandana

  • RMAN - upgrade catalog taking too long

    Hello,
    I am trying to upgrade the RMAN catalog, and it is taking too long; in fact, after 4 hours of wait time it is still executing the upgrade command.
    Target database: 11.2.0.3 PSU1
    Catalog database: 11.2.0.2
    Upgrading the RMAN catalog to 11.2.0.3.
    I tried executing the command at a very quiet time of the day, when the catalog is not being used by any database. Please advise if you have encountered a similar issue and fixed it.
    Thanks
    Venkat

    Some time into the execution of "upgrade catalog" I got the output below. After that, I let all the RMAN jobs from the different databases connected to the catalog finish completely and re-executed the "upgrade catalog" command, which finally completed the upgrade process without issues.
    Error :
    RMAN> upgrade catalog;
    error creating upgcat_strt_0
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-06004: ORACLE error from recovery catalog database: ORA-00060: deadlock detected while waiting for resource
    Re-executed when no sessions are connected to catalog and the upgrade process is successful .
    RMAN> connect catalog rman/rman@database
    connected to recovery catalog database
    PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.02 in RCVCAT database is not current
    PL/SQL package RMAN.DBMS_RCVMAN version 11.02.00.02 in RCVCAT database is not current
    RMAN> upgrade catalog;
    recovery catalog owner is RMAN
    enter UPGRADE CATALOG command again to confirm catalog upgrade
    RMAN> upgrade catalog;
    recovery catalog upgraded to version 11.02.00.03
    DBMS_RCVMAN package upgraded to version 11.02.00.03
    DBMS_RCVCAT package upgraded to version 11.02.00.03
    Thanks
    Venkat

  • Connect by taking too long

    Hello,
    I have a table "a" that has:
    left_id  right_id  type
          4         5     1
          4         6     1
          4         7     1
          5         9     2
          5        10     2
          5        11     2
          9        13     3
         13        14     4
         10        15     3
    QUERY:
    select left_id, right_id, type from
    a
    connect by left_id = prior right_id
    start with left_id = 4;
    Execution Plan
    Plan hash value: 2739023583
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 29 | 1131 | 18 (12)| 00:00:01 |
    |* 1 | CONNECT BY WITH FILTERING| | | | | |
    |* 2 | INDEX RANGE SCAN | a_PK | 5 | 65 | 3 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 24 | 624 | 13 (0)| 00:00:01 |
    | 4 | CONNECT BY PUMP | | | | | |
    |* 5 | INDEX RANGE SCAN | a_PK | 5 | 65 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("LEFT_ID"=PRIOR "RIGHT_ID")
    2 - access("LEFT_ID"=4)
    5 - access("LEFT_ID"="connect$_by$_pump$_002"."prior right_id ")
    Is there a way to optimize the query?
    The query is taking too long.
    -Thanks
    Karthik
    Edited by: user3934098 on Nov 14, 2010 1:50 AM

    Here is the detailed explanation:
    Version: oracle 10g R2
    Create table statement:
    CREATE TABLE A
    (
        "LEFT_ID" NUMBER(9,0) NOT NULL ENABLE,
        "RIGHT_ID" NUMBER(9,0) NOT NULL ENABLE,
        "TYPE" NUMBER(9,0) NOT NULL ENABLE,
        CONSTRAINT "A_PK" PRIMARY KEY ("LEFT_ID", "RIGHT_ID", "TYPE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "DATA" ENABLE,
        CONSTRAINT "A_FK1" FOREIGN KEY ("TYPE") REFERENCES "B" ("TYPE") ENABLE
    )
    SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
    (
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
    )
    TABLESPACE "DATA" ;
    Insert statements:
    INSERT INTO A VALUES(4, 5, 1);
    INSERT INTO A VALUES(4, 6, 1);
    INSERT INTO A VALUES(4, 7, 1);
    INSERT INTO A VALUES(5, 9, 2);
    INSERT INTO A VALUES(5, 10, 2);
    INSERT INTO A VALUES(5, 11, 2);
    INSERT INTO A VALUES(9, 13, 3);
    INSERT INTO A VALUES(13, 14, 4);
    INSERT INTO A VALUES(10, 15, 3);
    INDEXES:
    CREATE UNIQUE INDEX "A_PK" ON "A" ("LEFT_ID", "RIGHT_ID", "TYPE") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "DATA" ;
    QUERY:
    select left_id, right_id, type from
    a
    connect by left_id = prior right_id
    start with left_id = 4;
    The table has 951053 rows.
    The explain plan is:
    Execution Plan
    Plan hash value: 2739023583
    | Id | Operation                 | Name | Rows | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT          |      |   29 |  1131 |    18  (12)| 00:00:01 |
    |* 1 | CONNECT BY WITH FILTERING |      |      |       |            |          |
    |* 2 | INDEX RANGE SCAN          | a_PK |    5 |    65 |     3   (0)| 00:00:01 |
    |  3 | NESTED LOOPS              |      |   24 |   624 |    13   (0)| 00:00:01 |
    |  4 | CONNECT BY PUMP           |      |      |       |            |          |
    |* 5 | INDEX RANGE SCAN          | a_PK |    5 |    65 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("LEFT_ID"=PRIOR "RIGHT_ID")
    2 - access("LEFT_ID"=4)
    5 - access("LEFT_ID"="connect$_by$_pump$_002"."prior right_id ")
    Now, is there a way to optimize the query? The query takes about a minute to execute. This will be my inner query.
    Is there any other information that you may need? Am I missing something here?
    -Thanks
    Karthik
    Edited by: user3934098 on Nov 14, 2010 2:22 AM

  • Update statement taking too long to execute

    Hi All,
    I'm trying to run this update statement, but it is taking too long to execute.
        UPDATE ops_forecast_extract b SET position_id = (SELECT a.row_id
            FROM s_postn a
            WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
            WHERE position_level = 7
            AND b.am_id IS NULL;
            SELECT COUNT(*) FROM S_POSTN;
            214665
            SELECT COUNT(*) FROM ops_forecast_extract;
            49366
    SELECT COUNT(*)
            FROM s_postn a, ops_forecast_extract b
            WHERE UPPER(a.desc_text) = UPPER(TRIM(b.position_name));
            575
    What could be the reason for the update statement to execute so long?
    Thanks

    polasa wrote:
    Hi All,
    I'm trying to run this update statement, but it is taking too long to execute.
    What could be the reason for the update statement to execute so long?
    You haven't said what "too long" means, but a simple reason could be that the scalar subquery on "s_postn" is using a full table scan for each execution. Potentially this subquery gets executed for each row of the "ops_forecast_extract" table that satisfies your filter predicates. "Potentially" because of the cunning "filter/subquery optimization" of the Oracle runtime engine, which attempts to cache the results of already executed instances of the subquery. Since the in-memory hash table that holds these cached results is of limited size, and since the optimization algorithm depends on the sort order of the data and could suffer from hash collisions, it is unpredictable how well this optimization works in your particular case.
    You might want to check the execution plan; it should tell you at least how Oracle is going to execute the scalar subquery (it doesn't tell you anything about this "filter/subquery optimization" feature).
    Generic instructions on how to generate a useful explain plan output and how to post it here follow:
    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the [code] tag before and [/code] tag after, or the {code} tag before and after, to enhance the readability of the output provided:
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
    In 9i and above, if the "Predicate Information" section is missing from the DBMS_XPLAN.DISPLAY output but you get instead the message "Plan table is old version" then you need to re-create your plan table using the server side script "$ORACLE_HOME/rdbms/admin/utlxplan.sql".
    In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding of where your statement spends its time, you might want to turn on SQL trace as described here:
    When your query takes too long ...
    and post the "tkprof" output here, too.
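    A minimal sketch of that kind of session-level SQL trace (the tracefile identifier is illustrative; level 8 includes wait events):

        ALTER SESSION SET tracefile_identifier = 'slow_update';
        ALTER SESSION SET events '10046 trace name context forever, level 8';
        -- run the UPDATE here, then:
        ALTER SESSION SET events '10046 trace name context off';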
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
