Querying a partition: performance question

Hi Oracle Gurus,
I have a small question: which of these two techniques will give better performance?
1) Querying the table and naming the partition in the FROM clause, like
select count(*) from table_name partition (partition_name);
The table is partitioned on business_date.
2) Querying the partitioned table with a WHERE condition (here the column in the WHERE condition is the date partitioning column):
select count(*) from table_name where table_name.business_date = '1-Dec-2009';
Please let me know if you need any clarification.

The performance difference is not at all related to the data volume. It is solely related to the cost of parsing the query. If anything, the static SQL solution will be more efficient in a production environment.
- If you are using static SQL, Oracle only has to parse the SQL statement once and you can pass in whatever bind values you'd like. If you are using dynamic SQL, Oracle has to parse every SQL statement separately which means generating a brand new query plan (if you repeat the query for the same date, you'd probably only need a soft parse). Generating a query plan takes some time (on the order of tenths of a second probably) in addition to requiring various latches on the shared pool. Those tenths of a second will dwarf whatever potential benefit you get in not having Oracle prune partitions. Plus, you're increasing serialization, pressure on the shared pool, etc.
- Even beyond that, I find it extremely unlikely that your code could determine the partition to use more quickly than Oracle can. Even if it is a simple algorithm to convert the date into a partition name, that's not appreciably less work than what Oracle does to figure out which partition the date value falls in. And Oracle's lookup happens in highly optimized, compiled kernel code, while your application is likely using a higher-level language with less optimized code. At best, if you've got hundreds of thousands of partitions, it's probably a wash.
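To make the comparison concrete, here is a minimal sketch (the table and partition names are hypothetical, not from the original question) showing that both forms resolve to the same single partition; the bind-variable form is the one that parses once and reuses the cursor:

```sql
-- Hypothetical range-partitioned table, partitioned on business_date
CREATE TABLE sales_fact (
    business_date  DATE    NOT NULL,
    amount         NUMBER
)
PARTITION BY RANGE (business_date) (
    PARTITION p_2009_11 VALUES LESS THAN (DATE '2009-12-01'),
    PARTITION p_2009_12 VALUES LESS THAN (DATE '2010-01-01')
);

-- Form 1: naming the partition explicitly; ties the SQL text to one partition,
-- so a different date means different SQL text and a fresh hard parse
SELECT COUNT(*) FROM sales_fact PARTITION (p_2009_12);

-- Form 2: letting the optimizer prune; with a bind variable this parses once
-- and the same plan is reused for every date
SELECT COUNT(*) FROM sales_fact WHERE business_date = :bind_date;
```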
Justin

Similar Messages

  • Can anyone improve this query for maximum performance

    select g_com_bu_entity bunt_entity
         , g_com_rep_cd srep_cd
         , effdt from_dt
         , eff_status
         , g_com_role role
         , g_com_pgm prgm
         , g_com_district district
         , g_com_draw_status draw_status
         , decode(g_com_primary_pgm, 'Y',1, 0) pri_prgm_flag
    FROM ps_g_com_assign_vw@commissions c1
    WHERE effdt =
    (SELECT MAX (effdt)
    FROM ps_g_com_assign_vw@commissions c2
    WHERE c1.g_com_bu_entity = c2.g_com_bu_entity
    AND c1.g_com_rep_cd = c2.g_com_rep_cd);
    Can anyone rewrite it as a regular (single-pass) query for maximum performance?
    Thanks,
    Sreekanth

    Hi Sreekanth,
    Try this, if it helps. (The MAX goes into an uncorrelated inline view with a GROUP BY; a correlated subquery cannot be referenced from the FROM clause the way your original rewrite attempted.)
    select c1.g_com_bu_entity bunt_entity
         , c1.g_com_rep_cd srep_cd
         , c1.effdt from_dt
         , c1.eff_status
         , c1.g_com_role role
         , c1.g_com_pgm prgm
         , c1.g_com_district district
         , c1.g_com_draw_status draw_status
         , decode(c1.g_com_primary_pgm, 'Y', 1, 0) pri_prgm_flag
    FROM ps_g_com_assign_vw@commissions c1
       , (SELECT g_com_bu_entity, g_com_rep_cd, MAX (effdt) effdt_max
          FROM ps_g_com_assign_vw@commissions
          GROUP BY g_com_bu_entity, g_com_rep_cd) t2
    WHERE c1.g_com_bu_entity = t2.g_com_bu_entity
    AND   c1.g_com_rep_cd    = t2.g_com_rep_cd
    AND   c1.effdt           = t2.effdt_max;
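    An analytic function can avoid the second scan of the view entirely; a sketch using the same columns (note that over a database link the analytic may be evaluated locally, so test both forms):

    ```sql
    -- Rank rows per (entity, rep) by effdt descending, then keep the latest
    SELECT bunt_entity, srep_cd, from_dt, eff_status, role, prgm,
           district, draw_status, pri_prgm_flag
    FROM  (SELECT g_com_bu_entity bunt_entity
                , g_com_rep_cd srep_cd
                , effdt from_dt
                , eff_status
                , g_com_role role
                , g_com_pgm prgm
                , g_com_district district
                , g_com_draw_status draw_status
                , decode(g_com_primary_pgm, 'Y', 1, 0) pri_prgm_flag
                , RANK() OVER (PARTITION BY g_com_bu_entity, g_com_rep_cd
                               ORDER BY effdt DESC) rnk
           FROM ps_g_com_assign_vw@commissions)
    WHERE rnk = 1;
    ```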

  • Please help to modify this query for better performance

    Please help to rewrite this query for better performance; it is taking a long time to execute.
    Table t_t_bil_bil_cycle_change contains 1,200,000 rows and table t_acctnumberTab contains 200,000 rows.
    I have created an index on ACCOUNT_ID.
    The query is shown below:
    update rbabu.t_t_bil_bil_cycle_change a
       set account_number =
           ( select distinct b.account_number
             from rbabu.t_acctnumberTab b
             where a.account_id = b.account_id );
    The table structures are shown below:
    SQL> DESC t_acctnumberTab;
    Name           Type         Nullable Default Comments
    ACCOUNT_ID     NUMBER(10)                            
    ACCOUNT_NUMBER VARCHAR2(24)
    SQL> DESC t_t_bil_bil_cycle_change;
    Name                    Type         Nullable Default Comments
    ACCOUNT_ID              NUMBER(10)                            
    ACCOUNT_NUMBER          VARCHAR2(24) Y    

    Ishan's solution is good. I would avoid updating rows which already have the right value; updating them is a waste of time.
    You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id.
    merge into rbabu.t_t_bil_bil_cycle_change a
    using
          ( select distinct account_number, account_id
            from   rbabu.t_acctnumberTab
          ) t
    on    ( a.account_id = t.account_id
            and decode(a.account_number, t.account_number, 0, 1) = 1 )
    when matched then
      update set a.account_number = t.account_number;
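    If the MERGE raises ORA-38104 (a column referenced in the ON clause cannot also be updated), the same "skip unchanged rows" idea can be expressed as a plain UPDATE; a sketch, assuming the same tables:

    ```sql
    -- Update only rows whose account_number actually differs
    -- (DECODE treats two NULLs as equal, matching the MERGE above)
    UPDATE rbabu.t_t_bil_bil_cycle_change a
    SET    a.account_number = ( SELECT b.account_number
                                FROM   rbabu.t_acctnumberTab b
                                WHERE  b.account_id = a.account_id )
    WHERE  EXISTS ( SELECT 1
                    FROM   rbabu.t_acctnumberTab b
                    WHERE  b.account_id = a.account_id
                    AND    DECODE(a.account_number, b.account_number, 0, 1) = 1 );
    ```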

  • Help to rewrite query for best performance

    Hi All,
    Can you kindly help me rewrite the query below for better performance? It is taking more than 20 minutes on our production server.
    SELECT cp.name,mis.secondary_type U_NAME,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-161,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-154,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-154,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-147,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-147,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-140,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-140,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-133,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-133,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-126,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-126,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-119,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-119,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-112,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-112,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-105,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-105,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-98,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-98,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-91,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-91,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-84,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-84,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-77,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-77,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-70,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-70,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-63,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-63,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-56,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-56,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-49,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-49,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-42,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-42,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-35,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-35,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-28,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-28,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-21,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-21,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-14,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage,
    count(CASE WHEN (mis.start_time between To_DATE(to_char(next_day (sysdate-14,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-7,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))THEN mis.event_audit_id END) Usage
    FROM mis_event_audit mis,USER u,com_pros cp where
    mis.user_id=u.email_address and u.cp_id=cp.cp_id
    and (mis.start_time between To_DATE(to_char(next_day (sysdate-161,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy') and To_DATE(to_char(next_day (sysdate-7,'monday'),'MM/DD/YYYY'),'mm/dd/yyyy'))
    GROUP BY cp.name, mis.secondary_type;
    Thanks,
    krish

    Hi, Krish,
    Something like this will probably be faster, because it cuts out most of the function calls:
    WITH  got_cnt  AS
    (
         SELECT    cp.name
         ,         mis.secondary_type               AS u_name
         ,         COUNT (mis.event_audit_id)       AS cnt
         ,         ( TRUNC (SYSDATE,        'IW')
                   - TRUNC (mis.start_time, 'IW')
                   ) / 7 - 1                        AS week_num
         FROM      mis_event_audit  mis
         JOIN      user_table       u    ON  mis.user_id = u.email_address   -- USER is not a good table name
         JOIN      com_pros         cp   ON  u.cp_id     = cp.cp_id
         WHERE     mis.start_time  >= TRUNC (SYSDATE, 'IW') - 161
         AND       mis.start_time  <  TRUNC (SYSDATE, 'IW')
         GROUP BY  cp.name
         ,         mis.secondary_type
         ,         TRUNC (mis.start_time, 'IW')
    )
    SELECT    name
    ,         secondary_type
    ,         SUM (CASE WHEN week_num = 22 THEN cnt END)     AS week_23
    ,         SUM (CASE WHEN week_num = 21 THEN cnt END)     AS week_22
    ,         SUM (CASE WHEN week_num = 20 THEN cnt END)     AS week_21
    --        ... one column per week, down to ...
    ,         SUM (CASE WHEN week_num =  0 THEN cnt END)     AS week_1
    FROM      got_cnt
    GROUP BY  name
    ,         secondary_type
    ;
    TRUNC (d, 'IW') is midnight on the last Monday before or equal to the DATE d. It does not depend on your NLS settings.
    Whenever you're tempted to write an expression as complicated as
    ,     COUNT ( CASE
                      WHEN mis.start_time BETWEEN TO_DATE ( TO_CHAR ( NEXT_DAY ( SYSDATE - 161
                                                                               , 'monday'
                                                                               )
                                                                    , 'MM/DD/YYYY'
                                                                    )
                                                          , 'MM/DD/YYYY'
                                                          )
                                          AND     TO_DATE ( TO_CHAR ( NEXT_DAY ( SYSDATE - 154
                                                                               , 'monday'
                                                                               )
                                                                    , 'MM/DD/YYYY'
                                                                    )
                                                          , 'MM/DD/YYYY'
                                                          )
                      THEN mis.event_audit_id
                  END
                )               AS usage
    seek alternate ways. Oracle provides several handy functions, especially for manipulating DATEs. In particular, "TO_DATE (TO_CHAR ...)" is almost never needed; think very carefully before doing a round-trip conversion like that.
    Besides being more efficient, this will be easier to debug and maintain.
    If you're using Oracle 11.1 (or higher), then you can also use SELECT ... PIVOT in the main query, but I doubt that will be any faster, and it might not be any simpler.
    I hope this answers your question.
    If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables involved, and also post the results you want from that data.
    Explain, using specific examples, how you get those results from that data.
    Simplify the problem as much as possible. For example, instead of posting a problem that covers the last 23 weeks, pretend that you're only interested in the last 3 weeks. You'll get a solution that's easy to adapt to any number of weeks.
    Always say which version of Oracle you're using (e.g., 11.2.0.2.0).
    See the forum FAQ {message:id=9360002}
    For performance problems, there's another page of the forum FAQ {message:id=9360003}, but, before you start that process, let's get a cleaner query, without so many functions.
    Edited by: Frank Kulash on Oct 2, 2012 11:50 AM
    Changed week_num to be non-negative
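    As a quick sanity check of the 'IW' bucketing above, a small sketch that can be run against DUAL (the sample date is an arbitrary choice):

    ```sql
    -- 2012-10-03 was a Wednesday; TRUNC (d, 'IW') snaps it back to the
    -- Monday of that ISO week, 2012-10-01, regardless of NLS settings
    SELECT TRUNC (DATE '2012-10-03', 'IW') AS week_start FROM dual;
    -- week_start = 2012-10-01 00:00:00
    ```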

  • Warehouse partitioning - performance of queries across multiple partitions?

    Hi,
    We are using Oracle 11.2.0.3 and have a large central fact table with several surrogate-id columns, each with a bitmap index and an FK to a dimension table, plus several measures:
    (PRODUCT_ID,
    CUSTOMER_ID,
    DAY_ID,
    TRANS_TYPE_ID,
    REGION_ID,
    QTY,
    VALUE)
    We have 2 distinct sets of queries users run for the most part: ones accessing all transactions for products regardless of when those transactions happened (i.e. non-financial queries), about 70%, and queries determining what happened in a particular week, about 20% of queries.
    The table will eventually have approx 4bn rows.
    We are considering adding an extra DATE column and range-partitioning on it, to allow us to drop old partitions every year; however, this column wouldn't be joined to any other table.
    We are then considering sub-partitioning by hash of product_id, which is the surrogate key for the product dimension.
    Thoughts on performance?
    Queries by their nature would hit several sub-partitions.
    Thoughts on query performance for queries which access several sub-partitions/partitions versus queries running against a single table?
    Any other thoughts on partitioning strategy in our situation are much appreciated.
    Thanks

    >
    Thoughts on query performance of queries which access several sub-partitions/partitions versus queries running against a single table.
    >
    Queries that access multiple partitions can still benefit from partitioning in two cases: 1) when only a subset of the entire table is needed, and 2) when the access is done in parallel.
    Even if 9 of 10 partitions are needed, that can still be better than scanning a single table containing all of the data. And when there is a logical partitioning key (e.g. transaction date) that matches typical query predicate conditions, you get a guaranteed benefit by limiting a query to only 1 (or a small number of) partitions, where an index on a single table might not get used at all.
    Conversely, if all table data is needed (perhaps there is no good partition key) and the parallel option is not available, then I wouldn't expect any performance benefit from partitioning over a single table.
    You don't mention if you have licensed the parallel option.
    >
    Any other thoughts on partitioning strategy in our situation much apprecaited.
    >
    You provide some confusing information. On the one hand you say that 70% of your queries are
    >
    ones accessing all transactions for products regardless of the time those transactions happened
    >
    But then you add that you are
    >
    Considering adding an extra DATE column and range-partitioning on it to allow us to drop old partitions every year
    >
    How can you drop old partitions every year if 70% of the queries need product data 'regardless of the time those transactions happened'?
    What is the actual 'datetime' requirement? And what is your definition of 'a particular week'? Does a week cross month and year boundaries? Does the requirement include MONTHLY, QUARTERLY or ANNUAL reporting?
    Those 'boundary' requirements (and the online/offline need) are critical inputs to the best partitioning strategy. A MONTHLY partitioning strategy means that for some weeks two partitions are needed. A WEEKLY partitioning strategy means that for some months two partitions are needed. Which queries are run more frequently, weekly or monthly?
    Why did you mention sub-partitioning? What benefit do you expect, or what problem are you trying to address? And why hash? Hash partitioning only supports pruning on equality (or IN-list) predicates; for range-style predicates Oracle can't prune hash partitions at all, so ALL subpartitions will be scanned.
    The biggest performance benefit of partitioning is when the partition keys used have a high correspondence with the filter predicates used in the queries that you run.
    Contrarily the biggest management benefit of partitioning is when you can use interval partitioning to automate the creation of new partitions (and subpartitions if used) based solely on the data.
    The other big consideration for partitioning, for both performance and management, is the use of global versus local indexes. With global indexes (e.g. a global primary key) you can't just drop a partition in isolation; the global primary key needs to be maintained by deleting the corresponding index entries.
    On the other hand if your partition key includes the primary key column(s) then you can use a local index for the primary key. Then partition maintenance (drop, exchange) is very efficient.
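    A minimal sketch of the kind of DDL being discussed follows. The interval length, subpartition count, and key columns are illustrative assumptions, not a recommendation; the point is that the partition key is inside the primary key, so the PK index can be LOCAL and partition drops stay cheap:

    ```sql
    -- Weekly interval partitions created automatically, hash subpartitions
    -- on product_id, and a local primary key that includes the partition key
    CREATE TABLE sales_fact (
        trans_date    DATE    NOT NULL,
        product_id    NUMBER  NOT NULL,
        customer_id   NUMBER  NOT NULL,
        qty           NUMBER,
        value         NUMBER,
        CONSTRAINT sales_fact_pk
            PRIMARY KEY (trans_date, product_id, customer_id)
            USING INDEX LOCAL
    )
    PARTITION BY RANGE (trans_date)
    INTERVAL (NUMTODSINTERVAL (7, 'DAY'))
    SUBPARTITION BY HASH (product_id) SUBPARTITIONS 8
    ( PARTITION p0 VALUES LESS THAN (DATE '2013-01-01') );
    ```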

  • 10gR2 Spatial + Partitioning Performance Issues

    I'm trying to get Spatial working reasonably with a (range) partitioned table containing a single 2D point column, with all local indexes.
    If I query directly against a single partition, or have a restriction which "prunes" down to one partition, performance is reasonable, and the plan looks like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    If my query is a bit looser, and ends up hitting 2 or more partitions, things degrade substantially, and I end up with a plan like this:
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    Now this is a simple test case. My real situation is a bit more complex, with more data, more partitions, and another table joined in to do the partition pruning, but it comes down to the same issues.
    I've tried various hints, but have not been able to change the plan substantially.
    I've written a similar test case with btree indexes and it does not have these problems, and actually does pretty good with simple MBR type queries.
    I'll post another message with the spatial test case script...
    --Peter

    Here is the test script (kind of long):
    --create a partitioned table with local spatial index...
    create table foo (
         pid number not null, --partition_id
         id number not null,
         location MDSYS.SDO_GEOMETRY null --needs to be null for CTAS to work
    )
    PARTITION BY RANGE (pid) (
         PARTITION P0 VALUES LESS THAN (1)
    );
    create index pk_foo_idx on foo(pid, id) local;
    alter table foo add constraint pk_foo
    primary key (pid, id)using index pk_foo_idx;
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
         'FOO',
         'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307
    );
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
         'FOO1',
         'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307
    );
    INSERT INTO USER_SDO_GEOM_METADATA
    VALUES (
         'FOO2',
         'LOCATION',
         mdsys.sdo_dim_array(
              mdsys.sdo_dim_element('Longitude', -180, 180, 50),
              mdsys.sdo_dim_element('Latitude', -90, 90, 50)
         ),
         8307
    );
    commit;
    --local spatial index on main partitioned table
    CREATE INDEX foo_sdx ON foo (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT') LOCAL;
    --staging tables for exchanging with partitions later
    create table foo1 as select * from foo where 1=2;
    create table foo2 as select * from foo where 1=2;
    declare
         v_lon number;
         v_lat number;
    begin
         for i in 1..1000000 loop
              v_lat := DBMS_RANDOM.value * 20;
              v_lon := DBMS_RANDOM.value * 20;
              insert into foo1 (pid, id, location) values
              (1, i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
              insert into foo2 (pid, id, location) values
              (2, 1000000+i, MDSYS.SDO_GEOMETRY(2001,8307,MDSYS.SDO_POINT_TYPE(v_lon,v_lat,null),NULL,NULL));
         end loop;
    end;
    /
    commit;
    --index everything the same way
    create index pk_foo_idx1 on foo1(pid, id);
    alter table foo1 add constraint pk_foo1
    primary key (pid, id)using index pk_foo_idx1;
    create index pk_foo_idx2 on foo2(pid, id);
    alter table foo2 add constraint pk_foo2
    primary key (pid, id)using index pk_foo_idx2;
    CREATE INDEX foo_sdx1 ON foo1 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    CREATE INDEX foo_sdx2 ON foo2 (location)
    INDEXTYPE IS MDSYS.SPATIAL_INDEX
         PARAMETERS ('layer_gtype=POINT');
    exec dbms_stats.gather_table_stats(user, 'FOO', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO1', cascade=>true);
    exec dbms_stats.gather_table_stats(user, 'FOO2', cascade=>true);
    alter table foo add partition p1 values less than (2);
    alter table foo add partition p2 values less than (3);
    alter table foo exchange partition p1 with table foo1 including indexes;
    alter table foo exchange partition p2 with table foo2 including indexes;
    drop table foo1;
    drop table foo2;
    --ok, now lets run some queries
    set timing on
    alter session set events '10046 trace name context forever, level 12';
    --easy one, single partition  (trace ET=0.18s)
    select count(*) from (
         select d.pid, d.id
         from foo partition(p1) d
         where
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=3303 pr=0 pw=0 time=1598104 us)
    2596 PARTITION RANGE SINGLE PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1584119 us)
    2596 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: 2 2 (cr=3303 pr=0 pw=0 time=1581494 us)
    2596 DOMAIN INDEX FOO_SDX (cr=707 pr=0 pw=0 time=1550312 us)
    --partition pruning works for 1 partition (trace ET=0.18s),
    --uses pretty much the same plan as above
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    --here's where the trouble starts  (trace ET=6.59s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid in (1,2) and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=10472 pr=0 pw=0 time=6592543 us)
    5188 PARTITION RANGE INLIST PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=3349053 us)
    5188 TABLE ACCESS BY LOCAL INDEX ROWID FOO PARTITION: KEY(INLIST) KEY(INLIST) (cr=10472 pr=0 pw=0 time=6586055 us)
    5188 BITMAP CONVERSION TO ROWIDS (cr=5955 pr=0 pw=0 time=6539205 us)
    2 BITMAP AND (cr=5955 pr=0 pw=0 time=6539145 us)
    2 BITMAP CONVERSION FROM ROWIDS (cr=514 pr=0 pw=0 time=209088 us)
    5188 SORT ORDER BY (cr=514 pr=0 pw=0 time=206661 us)
    5188 DOMAIN INDEX FOO_SDX (cr=514 pr=0 pw=0 time=158447 us)
    12 BITMAP OR (cr=5441 pr=0 pw=0 time=7052201 us)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2650 pr=0 pw=0 time=3356960 us)
    1000000 SORT ORDER BY (cr=2650 pr=0 pw=0 time=3173026 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2650 pr=0 pw=0 time=193 us)(object id 63668)
    6 BITMAP CONVERSION FROM ROWIDS (cr=2791 pr=0 pw=0 time=3292124 us)
    1000000 SORT ORDER BY (cr=2791 pr=0 pw=0 time=3153435 us)
    1000000 INDEX RANGE SCAN PK_FOO_IDX PARTITION: KEY(INLIST) KEY(INLIST) (cr=2791 pr=0 pw=0 time=1000160 us)(object id 63668)
    --this performs better but is ugly and non-general (trace ET=0.35s)
    select count(*) from (
         select d.pid, d.id
         from foo d
         where
         d.pid = 1 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
         UNION ALL
         select d.pid, d.id
         from foo d
         where
         d.pid = 2 and
         sdo_filter(d.location, SDO_geometry(
              2003,8307,NULL,
              SDO_elem_info_array(1,1003,3),
              SDO_ordinate_array(0.1,0.1, 1.1,1.1))
         ) = 'TRUE'
    );

  • Query to Partitioned Table

    I have two partitions:
    upto_mar_2011, upto_jun_2011
    table_name: order
    1> If I run the query as below
    Select * from order where order_date = to_date('23/03/2011','dd/mm/yyyy');
    will Oracle automatically search the data in partition upto_mar_2011,
    or will it search the whole table?
    2> Or do I have to write the query as below
    Select * from order partition (upto_mar_2011) where order_date = to_date('23/03/2011','dd/mm/yyyy');
    Is there any difference between these two queries?

    OraFighter wrote:
    Is it enough to write as below
    Select * from order where order_date = to_date('23/03/2011','dd/mm/yyyy')
    or order_date = to_date('23/05/2011','dd/mm/yyyy');
    and the CBO will search the data only in the partitions where it is available,
    whether it spans one or two or three partitions,
    ignoring all other partitions automatically?
    Correct. It is called partition pruning. This is the approach you should use for application code. Applications should not be concerned with partitioning - they should not need to know partition names and other physical attributes of the physical table they are using.
    You can also see partition pruning when looking at the execution plan of such a SQL statement.
    The only code that will deal with partition names and the physical aspects of the table, is maintenance code. For example, you need to maintain a sliding window of 32 days in the date ranged partitioned table. This code needs to determine which partitions are older than 32 days and drop them. And then add new partitions for future processing. This code deals with the physical layer and not the logical layer.
    It is important to keep the two apart and not mix physical stuff into the logical layer that the app is using. Mixing it means that changes to optimise performance and leverage new features cannot be made to the physical layer, as it will break the app layer code.
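
    Pruning can be confirmed from the plan itself; a minimal sketch (the table and date below are taken from the question, the plan output shape is Oracle's standard DBMS_XPLAN format):

    ```sql
    EXPLAIN PLAN FOR
    SELECT * FROM order_t
    WHERE order_date = TO_DATE('23/03/2011','dd/mm/yyyy');

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- In the output, check the Pstart/Pstop columns of the
    -- PARTITION RANGE SINGLE step: a single partition number
    -- (e.g. "2 | 2") confirms the optimizer pruned to one partition,
    -- with no partition name ever appearing in the application SQL.
    ```

    ("order_t" is used here because ORDER is a reserved word; the real table would need a non-reserved name or quoted identifier.)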

  • Multi Select Choice on af:query has severe performance issue

    A multi-select choice used with af:query through a view criteria causes a severe performance issue on deselection of the "All" checkbox when the data in the list is around 550 rows. The same component works absolutely fine when used in a form layout.
    I can provide a reproducible test case if anyone needs it!
    ***: This is a customer environment issue, and the customer is eager to have multi-select in this case. Appreciate any help!

    Glimpse of repetitive lines from console for the above scenario:
    <DCUtil> <findSpelObject> [2208] DCUtil, returning:oracle.jbo.uicli.binding.JUApplication, for TestSelectChoiceDefaultAMDataControl
    <ADFLogger> <begin> Attaching an iterator binding to a datasource
    <DCIteratorBinding> <getViewObject> [2209] Resolving VO:TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1 for iterator binding:noCtrl_oracle_adfinternal_view_faces_model_binding_FacesCtrlListBinding_59List_60
    <DCUtil> <findSpelObject> [2210] DCUtil, RETURNING: <null> for TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <DCUtil> <findSpelObject> [2211] DCUtil, returning:oracle.jbo.uicli.binding.JUApplication, for TestSelectChoiceDefaultAMDataControl
    <ADFLogger> <begin> Attaching an iterator binding to a datasource
    <DCIteratorBinding> <getViewObject> [2212] Resolving VO:TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1 for iterator binding:noCtrl_oracle_adfinternal_view_faces_model_binding_FacesCtrlListBinding_123List_124
    <DCUtil> <findSpelObject> [2213] DCUtil, RETURNING: <null> for TestSelectChoiceDefaultAM._SESSION_SHARED_APPMODULE_NAME_.SessionAM.DeptReadOnly1
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    .....many times followed by
    <ADFLogger> <addContextData> Attaching an iterator binding to a datasource
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    <ADFLogger> <addContextData> Get LOV list
    <ADFLogger> <begin> Get LOV list
    ...many times

  • Select query in partition

    I created a list-partitioned table using the PERIOD column:
    CREATE TABLE final_test(
    PRODNO VARCHAR2(20),
    PERIOD VARCHAR2(4),
    STATEMENT BLOB)
    PARTITION BY LIST(PERIOD)(PARTITION ST1007 VALUES('1007'), PARTITION
    STMST1107 VALUES('1107'), PARTITION ST1207 VALUES('1207'),PARTITION ST0108 VALUES('0108'))
    I inserted the records in the order below.
    PRODNO PERIOD
    1001 0108
    1002 1107
    1003 1207
    1004 1007
    Now I want to display the prodno and period columns in ascending order of period.
    Is the query below correct, or do we have to use an ORDER BY clause in the select statement?
    select prodno,period from final_test where to_date(period,'mmrr') between to_Date('1007','mmrr')
    and TO_DATE(to_Char(SYSDATE, 'MMRR'), 'MMRR')
    Please suggest.

    To get the data retrieved in a particular order, you have to use an ORDER BY clause. Here is an example; you have limited data:
    SQL> ed
    Wrote file afiedt.buf
      1  WITH final_test AS
      2    (SELECT '1001' PRODNO,'0108' PERIOD FROM dual
      3     UNION ALL
      4    SELECT '1003' ,'1207'    FROM dual
      5     UNION ALL
      6    SELECT  '1002', '1107'     FROM dual
      7     UNION ALL
      8    SELECT  '1004' ,'1007'    FROM dual
      9     )
    10  SELECT prodno,period
    11  FROM final_test
    12  WHERE TO_DATE(period,'mmrr') BETWEEN TO_DATE('1007','mmrr')
    13*   AND TO_DATE(TO_CHAR(SYSDATE, 'MMRR'), 'MMRR')
    SQL> /
    PROD PERI
    1001 0108
    1003 1207
    1002 1107
    1004 1007
    SQL> ed
    Wrote file afiedt.buf
      1  WITH final_test AS
      2    (SELECT '1001' PRODNO,'0108' PERIOD FROM dual
      3     UNION ALL
      4    SELECT '1003' ,'1207'    FROM dual
      5     UNION ALL
      6    SELECT  '1002', '1107'     FROM dual
      7     UNION ALL
      8    SELECT  '1004' ,'1007'    FROM dual
      9     )
    10  SELECT prodno,period
    11  FROM final_test
    12  WHERE TO_DATE(period,'mmrr') BETWEEN TO_DATE('1007','mmrr')
    13    AND TO_DATE(TO_CHAR(SYSDATE, 'MMRR'), 'MMRR')
    14* ORDER BY PRODNO
    SQL> /
    PROD PERI
    1001 0108
    1002 1107
    1003 1207
    1004 1007
    SQL> ed
    Wrote file afiedt.buf
      1  WITH final_test AS
      2    (SELECT '1001' PRODNO,'0108' PERIOD FROM dual
      3     UNION ALL
      4    SELECT '1003' ,'1207'    FROM dual
      5     UNION ALL
      6    SELECT  '1002', '1107'     FROM dual
      7     UNION ALL
      8    SELECT  '1004' ,'1007'    FROM dual
      9     )
    10  SELECT prodno,period
    11  FROM final_test
    12  WHERE TO_DATE(period,'mmrr') BETWEEN TO_DATE('1007','mmrr')
    13    AND TO_DATE(TO_CHAR(SYSDATE, 'MMRR'), 'MMRR')
    14* ORDER BY  PRODNO desc
    SQL> /
    PROD PERI
    1004 1007
    1003 1207
    1002 1107
    1001 0108
    SQL>
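
    Note that ORDER BY PRODNO only happens to match chronological order in this sample. To sort by the period itself, order on the converted date; a sketch reusing the same sample data:

    ```sql
    WITH final_test AS (
      SELECT '1001' prodno, '0108' period FROM dual UNION ALL
      SELECT '1003', '1207' FROM dual UNION ALL
      SELECT '1002', '1107' FROM dual UNION ALL
      SELECT '1004', '1007' FROM dual
    )
    SELECT prodno, period
    FROM final_test
    ORDER BY TO_DATE(period, 'MMRR');
    -- Chronological order of periods: 1007, 1107, 1207, 0108
    -- ('0108' is Jan 2008, so it sorts after Dec 2007 under the RR rule).
    ```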

  • Data pump, Query "1=2" performance?

    Hi guys
    I am trying to export a schema using data pump however I need no data from a few of the tables since they are irrelevant but I'd still like to have the structure of the table itself along with any constraints and such.
    I thought of using the QUERY parameter with a "1=2" query making it so that I can filter out all data from certain tables in the export while giving me everything else.
    While this works, I wonder if Data Pump/Oracle is smart enough not to run this query through the entire table. If it does perform a full table scan, can anybody recommend another way of excluding just the data of certain tables while still getting the table structure itself along with anything else related to it?
    I have been unable to find such information after searching the net for a good while.
    Regards
    Alex

    Thanks.
    Does that mean 1=2 actually scans the entire table so it should be avoided in the future?
    Regards
    Alex
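
    An alternative that avoids a data filter entirely is Data Pump's EXCLUDE=TABLE_DATA, which skips the rows of named tables while still exporting their DDL, constraints, indexes and grants. A sketch as a parameter file (directory, dump file, schema and table names are placeholders):

    ```
    # Run as: expdp <user>/<password> PARFILE=exp_no_bigdata.par
    DIRECTORY=dpump_dir
    DUMPFILE=schema_no_bigdata.dmp
    SCHEMAS=SCOTT
    # Keep the metadata of these tables but export none of their rows:
    EXCLUDE=TABLE_DATA:"IN ('BIG_TABLE1','BIG_TABLE2')"
    ```

    With this approach there is no per-row predicate at all, so the question of whether "1=2" forces a full scan does not arise for those tables.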

  • Impact of Query Logging on Performance of Queries in OBIEE

    I see from [An Oracle BI Blog post|http://obieeblog.wordpress.com/2009/01/19/obiee-performance-tuning-tip-%e2%80%93-turn-off-query-logging/] that Query Logging has a performance impact in OBIEE.
    What is the experience with Query Logging at different levels in a Production environment with, say, 50 or 100 or 500 concurrent users ?
    I am completely new to OBIEE, I know the Database. So, please bear with me.
    Hemant K Chitale

    Kumar's blog that you reference says it all really.
    I don't know if anyone's going to be able to give you the kind of information you're looking for, because it's a no-brainer not to enable this level of logging :)
    Is there a reason you're even considering it?
    Imagine running a low-level trace or debug log in the database for every user session... you just wouldn't do it.

  • Rewrite update query to improve performance - sql attached

    Hi ,
    The following query (pasted below) is taking 4 hours to complete. Table ef has 10 crore (100 million) records, and I have 3 single-column indexes defined on ef, on columns as_at_date, cc and fid. I just want to know if there is a better way to write the query.
    Many thanks
    Rahul
    SQL :
    update ef emp
    set emp.b2r = (
    select e.b2r
    from ef e
    where e.as_at_date = (select max(as_at_date) from ef where b2r is not null)
    and e.b2r is not null
    and emp.fid = e.fid
    and emp.cc = e.cc
    )
    where substr(emp.as_at_date,1,6) >= (select substr(max(fk_reporting_date_id),1,6) from f_c_f)
    and exists (
    select e.b2r
    from ef e
    where e.as_at_date = (select max(as_at_date) from ef where b2r is not null)
    and e.b2r is not null
    and emp.fid = e.fid
    and emp.cc = e.cc
    );

    > which column uniquely identifies every row : UNKNOWN to me (really sorry)
    This is a very critical piece of information, and UNKNOWN is the wrong answer. You need to know such things before putting your hands on the keyboard.
    Anyway, based on your update statement, you can convert it into a merge statement like this:
    merge into ef a
    using (
             select b2r
                  , fid
                  , cc
               from (
                       select b2r
                            , fid
                            , cc
                            , row_number() over (
                                             partition by fid
                                                        , cc
                                                 order by decode(b2r, null, null, as_at_date)
                                                          desc nulls last
                                            ) rno
                         from ef
                    )
              where rno = 1
          ) b
       on (
              a.fid = b.fid and
              a.cc  = b.cc  and
              substr(a.as_at_date,1,6) >= (select substr(max(fk_reporting_date_id),1,6) from f_c_f)
          )
    when matched then
    update set a.b2r = b.b2r;
    Note: The code is untested
    This scans your EF table a smaller number of times. As you are on 11g, you can also try DBMS_PARALLEL_EXECUTE to speed up your SQL.
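
    For reference, the DBMS_PARALLEL_EXECUTE approach mentioned above chunks the table by rowid ranges and runs the DML per chunk in parallel jobs. A sketch (task name, chunk size and parallel level are illustrative; the actual SET/WHERE logic goes where indicated, and the session needs the CREATE JOB privilege):

    ```sql
    DECLARE
      l_task VARCHAR2(30) := 'update_ef_b2r';
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);
      -- Split EF into rowid-range chunks of roughly 100000 rows each
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => l_task,
        table_owner => USER,
        table_name  => 'EF',
        by_row      => TRUE,
        chunk_size  => 100000);
      -- Each chunk runs the statement with its own :start_id/:end_id bounds
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => l_task,
        sql_stmt       => 'update ef set b2r = b2r /* replace with real SET and WHERE */
                           where rowid between :start_id and :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
      DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
    END;
    /
    ```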

  • Query vs listcube performance

    Hey Everyone.  Wondering if anyone else has seen this issue.  We keep getting BWA errors regarding memory, although we really are not coming close on any blade.  That's not really what this post is about, but thought I would mention that.  SAP has been trying to help out but hasn't come up with anything yet.  So here is what I am wondering...
    Generally speaking, shouldn't performance of a query on a BWA cube be similar to running the same selections off that cube through listcube?  Mainly, we are seeing these error messages thrown when a particular query/group of queries is run.  I have tried to make local copies of the queries and have tried to redesign them, but hasn't helped.  Here is the thing... I built the listcube lookup on that cube to be the same as the query.  Made the query only three objects and have select option variables in the default values.  Nothing in the filter area.  The cube has a bit over 300 million records and this query returns around 30 records.  Through listcube, it takes about 6 seconds.  But the query completely times out and then the errors start happening.
    I've been in the BW area for a long time, but am new to BWA.  On the surface it almost seems like the query variables are being treated as hard-coded filters and not being used in the selection of data.  So all 300 million records come back, the result is added to the query cache, the query times out, and now memory is hit because the cache isn't deleted.  I am not saying that is exactly what is happening, but it just seems like it... although I cannot think of any reason why this would be the case.
    Has anyone else seen this type of behavior?  Just to reiterate, I made this a very simplistic query for the purpose of matching it via listcube.  
    Any suggestions are greatly appreciated.  
    Thank you!!

    Hey Everyone.  Thanks for the replies and sorry to be slow in answering your questions!  I am not on the network at the moment so can't lookup our patch levels.  The query doesn't use exception aggregation.  Only key figure is taken directly from the cube with no other manipulation. 
    I haven't let the query run long enough through RSRT to see the messages, but I have seen a screen print from a user which referenced a memory problem and also said something about attributes.  I will have to post later what the true message was.  But as soon as this query is executed, we start seeing the BWA swap memory rising and then we start seeing the typical alert emails.  The query does not use any InfoObject attributes or navigational attributes. 
    I'm just at a loss as to why such a straightforward data selection would take seconds through listcube but start causing errors when executing the actual query.  I realize the query has a generated program that will make it perform differently than the listcube lookup, but the difference is dramatic. 
    I will keep poking around and will post the actual error message first chance I get.
    Thanks again everyone!

  • How can we rewrite this query for better performance

    Hi All,
    The below query is taking a long time to run. Any ideas how to improve the performance by rewriting the query using NOT EXISTS or any other way...
    Help Appreciated.
    /* Formatted on 2012/04/25 18:00 (Formatter Plus v4.8.8) */
    SELECT vendor_id
    FROM po_vendors
    WHERE end_date_active IS NULL
    AND enabled_flag = 'Y'
    and vendor_id NOT IN (
    SELECT vendor_id
    FROM po_headers_all
    WHERE TO_DATE (creation_date) BETWEEN TO_DATE (SYSDATE - 365)
    AND TO_DATE (SYSDATE))
    Thanks

    Try this one :
    This will help you for partial fetching of data
    SELECT /*+ first_rows(50) no_cpu_costing */
    vendor_id
    FROM po_vendors
    WHERE end_date_active IS NULL
    AND enabled_flag = 'Y'
    AND vendor_id NOT IN (
    SELECT vendor_id
    FROM po_headers_all
    WHERE TO_DATE (creation_date) BETWEEN TO_DATE (SYSDATE - 365)
    AND TO_DATE (SYSDATE))
    Overall your query is also fine, because in this query the subquery always contains less data compared to the main query.
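
    Since the original question asked about NOT EXISTS, here is one possible rewrite as an untested sketch. Note it is only equivalent to NOT IN when po_headers_all.vendor_id is never NULL (NOT IN returns no rows if the subquery produces a NULL):

    ```sql
    SELECT v.vendor_id
    FROM po_vendors v
    WHERE v.end_date_active IS NULL
    AND v.enabled_flag = 'Y'
    AND NOT EXISTS (
          SELECT 1
          FROM po_headers_all h
          WHERE h.vendor_id = v.vendor_id
          AND h.creation_date >= SYSDATE - 365
          AND h.creation_date <= SYSDATE);
    -- Comparing creation_date directly, instead of wrapping it in TO_DATE,
    -- also keeps any index on creation_date usable.
    ```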

  • Query regarding Partition table Explain plan

    Hello,
    We are amidst a tuning activity wherein a large table has been partitioned for better administration. During testing, I was analyzing the explain plans for long-running SQL and found a piece that I was unable to understand. The PSTART and PSTOP columns show ROWID as their value, where in a normal partition-pruning scenario they would show the partition number or KEY. I tried to look around for this but did not find enough information. Can anybody explain what it means? If there is a good explanation of it elsewhere, that would be extremely helpful too.
    The snippet from explain plan looks like:
    | Id  | Operation                                | Name                          | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    7 |        TABLE ACCESS BY GLOBAL INDEX ROWID| XXXXXXXXXXXXXXXXXXXX             | 43874 |  9083K|       |  1386   (1)| 00:00:17 | ROWID | ROWID |
    On another similar query it looks like:
    | Id  | Operation                             | Name                         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   6 |     TABLE ACCESS BY GLOBAL INDEX ROWID| XXXXXXXXXXXXXX               | 22455 |  4648K|       |   456   (1)| 00:00:06 |     9 |     9 |
    I have another question with regard to partitioned tables. Do the indexes also need to be partitioned, or benefit from being partitioned? I tried to read about it but did not find conclusive evidence. I am testing it and will post the outcome here, but if anybody has experience with it, some advice would be great.
    Oracle Version:- 10.2.0.4
    Regards,
    Purvesh.

    Hi Purvesh.
    There is a great explanation and example on this topic:
    Ask Tom "explain plan on range-partitioned table"
    Hope this helps.
