Query Optimization with partitioned/subpartitioned tables

I have a 1-billion-row table with 72 range partitions and thousands of list subpartitions. There is a bitmap index on the list-subpartitioning column.
If I execute the following query which requests data from a single partition and a single subpartition:
SQL> select * from cen00_demog_sf1_hp where
ckey like '33011%' and demogname = 'P002006';
the optimizer selects a single partition ('33...') and a single subpartition ('P002006'). The query takes only 6 seconds including displaying the 7,386 row result.
However, if I then execute the following query which selects data from two partitions and a single subpartition within each partition:
SQL> select * from cen00_demog_sf1_hp where
(ckey like '33011%' or ckey like '35001%')
and demogname = 'P002006';
the optimizer scans all partitions rather than just scanning partitions 33 and 35. The query takes over 10 minutes to display the 17,972 row result. If this query is split into two separate queries, the same result set can be realized in a total of 26 seconds, including displaying the results to the screen.
Any suggestions on how the multiple partition query can be modified to avoid an all rows scan?
Thank you.

I had previously tried reversing the WHERE clause statements in the query but it had no effect. The execution plan for the two-partition query follows. Note the "ALL" on the partition range. I purposely did not index the ckey field, believing that partitioning would be sufficient (ckey contains 8,262,363 unique keys out of 1,041,057,738 rows). As a test, a query requesting a demogval based on a single ckey for a particular demogname returns a result in less than a second. Another test requesting data for a particular ckey without a demogname qualifier resulted in a full scan of a single partition (but not all the partitions), taking 4 minutes of elapsed time. Note that the only reason there is a bitmap index on demogname in table cen00_demog_sf1_hp is that normally the query is a join with another table containing an indexed demogname field. In sum, it appears that the optimizer defaults to an all-partition scan when more than one partition is included in the query.
Elapsed: 00:11:38.41
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=53824 Card=4 Bytes=148)
1 0 PARTITION RANGE (ALL) (Cost=53824 Card=4 Bytes=148)
2 1 PARTITION LIST (SINGLE) (Cost=53824 Card=4 Bytes=148)
3 2 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'CEN00_DEMOG_SF1_HP' (TABLE) (Cost=53824 Card=4 Bytes=148)
4 3 BITMAP CONVERSION (TO ROWIDS)
5 4 BITMAP INDEX (SINGLE VALUE) OF 'CEN00_DEMOG_SF1_HP_IDX' (INDEX (BITMAP))
The single-partition query plan, by contrast, shows pruning on the partition range (PARTITION RANGE (SINGLE)).
Elapsed: 00:00:09.69
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=42 Card=133 Bytes=4655)
1 0 PARTITION RANGE (SINGLE) (Cost=42 Card=133 Bytes=4655)
2 1 PARTITION LIST (SINGLE) (Cost=42 Card=133 Bytes=4655)
3 2 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'CEN00_DEMOG_SF1_HP' (TABLE) (Cost=42 Card=133 Bytes=4655)
4 3 BITMAP CONVERSION (TO ROWIDS)
5 4 BITMAP INDEX (SINGLE VALUE) OF 'CEN00_DEMOG_SF1_HP_IDX' (INDEX (BITMAP))
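
Since splitting the statement into two single-partition queries already returns the full result in about 26 seconds, one workaround worth trying (a sketch only, reusing the table and predicates quoted above) is to express the OR as a UNION ALL so that each branch carries a single ckey prefix and can be pruned to its own range partition and 'P002006' list subpartition:

-- Sketch: the two ckey prefixes do not overlap, so UNION ALL returns the
-- same rows as the original OR query; each branch should show
-- PARTITION RANGE (SINGLE) / PARTITION LIST (SINGLE) in its plan.
select * from cen00_demog_sf1_hp
 where ckey like '33011%' and demogname = 'P002006'
union all
select * from cen00_demog_sf1_hp
 where ckey like '35001%' and demogname = 'P002006';

Whether the optimizer prunes each branch as hoped should be confirmed with the execution plan; the rewrite only helps when the branch predicates are disjoint.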

Similar Messages

  • Query performance with "partition by" clause.

    Below is my query:
    select event_type, time, count(event_id) as no_of_events from (
    select e.event_type, t.time , e.id as event_id from time t
    left outer join events e partition by (event_type)
    on t.time < e.end_time and (t.time + 1) > e.start_time
    where t.time >= '2008-01-01' and t.time < '2008-02-01'
    ) events_by_event_type
    group by event_type, time
    order by event_type, time
    The idea is to get a count of active "events" of each "event_type", for each day between two dates. The "time" table has one row for each day. An event is considered active on a day when its start_time/end_time interval overlaps that day's beginning and end.
    The query works but always does a full table scan of the events table.
    I tried creating following indexes on the events table , but none of them is ever used.
    (event_type,start_time)
    (event_type,end_time)
    (event_type,start_time,end_time)
    (start_time)
    (end_time)
    (start_time,end_time)
    How can I avoid the full table scan of the "events" table in the above query ?
    FYI, the events table looks like:
    id number not null primary key,
    event_type number not null,
    start_date date not null,
    end_date date not null

    What I want is to avoid the full table scan on the "events" table. I don't think adding an index on the 'time' table will help there.
    The conditions you have on events are:
    t.time < e.end_time and (t.time + 1) > e.start_time
    So you should have an index on the columns end_time and start_time to avoid a full table scan.
    But anyway is that query slow?
    Bye Alessandro
    Message was edited by:
    Alessandro Rossi
    Plus, I would add two more predicates to the query to enforce a range scan. If I got it right, they should always be true for the rows you want. They are there just to tell the CBO that the scan on end_time has to begin from '2008-02-01' and the scan on start_time has to finish at ('2008-01-01' - 1). Sometimes this kind of additional condition has helped me.
    select event_type, time, count(event_id) as no_of_events
    from (
              select e.event_type, t.time , e.id as event_id from time t
              left outer join events e partition by (event_type) on (
                   t.time between e.start_time - 1 and e.end_time
              )
              where t.time >= '2008-01-01' and t.time < '2008-02-01'
                   and e.end_time >= '2008-02-01' and e.start_time <= '2008-01-01' - 1
    ) events_by_event_type
    group by event_type, time
    order by event_type, time

  • Query optimization with multiple 1-M relationships

    Hello,
    I have a simple example of a problem I'm trying to solve. Three tables A, B, C. A is the 'parent' table with a 1-M relationship with B, which then has a 1-M relationship with C. Each of these tables has a corresponding object. I would like to execute one simple query to retrieve a collection of fully populated A objects (each containing a collection of B objects, and each B object containing a collection of C objects).
    The underlying query should be something like this:
    select * from A, B, C where B.A_ID = A.A_ID and C.B_ID = B.B_ID;
    Is what I'm trying to do possible? How is this done? My initial attempt yielded an A object for each row in the result set which is incorrect. I tried various combinations of indirection, enabling joining, and batch reading with no luck. Ultimately, I need to retrieve this object graph in as few queries as possible (ideally just one).
    Thanks in advance for any help.
    -Jeff

    Thanks for your reply King.
    I'm still not able to get this to work. It's executing two SQL statements:
    1) SELECT A2, A_ID, A1 FROM A
    2) SELECT t0.B2, t0.A_ID, t0.B_ID, t0.B1 FROM B t0, A t1 WHERE (t0.A_ID = t1.A_ID)
    I'm running TopLink 903 (build 425). The thing that concerns me is that it's not even populating the B object (but that second SQL statement does return 3 rows).
    I've tried reading with a named query (with the 3 table join query posted previously) and with session.readAllObjects(A.class). A has a 1-M relationship to B with batch reading and transparent indirection turned on. B then has the same setup with C.
    Does it matter how my foreign keys are set up? Does it matter how the database references are set up?
    I'll keep digging. Any further assistance you can provide would be appreciated.
    Thanks again,
    -Jeff

  • Aggregate Query Optimization (with indexes) for trivial queries

    Table myTable, which is quite large, has an index on the month column.
    "select max(month) from myTable" uses this index and returns quickly.
    "select max(month) from myTable where 1 = 1" does not use this index, falls through to a full table scan, and takes a very long time.
    Can this possibly be a genuine omission in the query optimizer, or is there some setting or another to convince it to perform the latter query more sanely?

    Oracle 11.2.0.1
    SQL> select table_name, num_rows from dba_tables where table_name = 'DWH_ADDRESS_MASTER';
    TABLE_NAME                                 NUM_ROWS
    DWH_ADDRESS_MASTER                        295729948
    SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER;
    | Id  | Operation                  | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT           |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE            |                        |     1 |     8 |            |          |
    |   2 |   INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1  |     1 |     8 |     4   (0)| 00:00:01 |
    SQL> explain plan for select max(last_update_date) from DWH.DWH_ADDRESS_MASTER where 1 = 1;
    | Id  | Operation                   | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE             |                        |     1 |     8 |            |          |
    |   2 |   FIRST ROW                 |                        |     1 |     8 |     4   (0)| 00:00:01 |
    |   3 |    INDEX FULL SCAN (MIN/MAX)| DWH_ADDRESS_MASTER_N1  |     1 |     8 |     4   (0)| 00:00:01 |

  • Query tuning with partition

    Hi to all,
    I have an issue with tuning a query where I have a join among a lot of tables with the same structure, as described below:
    Table A
    VARCHAR2(20) : id_customer
    VARCHAR2(20): external_id
    date : start_customer
    date : end_customer.
    The end_customer field is set when the record is closed; therefore this field may take two possible values: null, or the closing date.
    The table is partitioned by the end date.
    In the query I have a filter because I search for the records that were open at, for example, the last year. Even when I create an index on end_customer I'm not able to use the corresponding index, because the optimizer chooses to select all partitions.
    How can I resolve this problem?
    TIA !

    Correct. In fact I suppose that the problem is when I search for the null values. The filter of my query is:
    table.start_customer <= sysdate and nvl(table.end_customer, sysdate+1) > sysdate
    I read the threads you suggested, and I tried to create a bitmap index, but Oracle does not accept it because it forces me to create the bitmap index only as a local index on the partitions of the table....
    I also tried to set a hint, but the database does not choose this path, probably because there is a setting or something like that.
    TIA!
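    One thing that sometimes helps here (a sketch only; my_table is a placeholder, since the real table name is not posted) is to rewrite the NVL() on the partitioning column as an explicit OR, so the optimizer sees a direct predicate on end_customer and has a chance to prune partitions or use an index on that column:
    -- Sketch: same filter as above, written without NVL() on end_customer
    select *
      from my_table t
     where t.start_customer <= sysdate
       and (t.end_customer > sysdate or t.end_customer is null);
    Whether this actually prunes depends on how the partitions are defined (rows with a NULL end date typically land in the highest range partition), so check the plan after the rewrite.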

  • Inner join query used with 7 Database tables

    Hi All,
    In a report they used an inner join query with 6 database tables; now there is a performance issue with that query.
    It is taking a lot of time to run that query. Please help with how to avoid that performance issue.
    Two of those database tables contain lakhs of records.
    To my knowledge it can be avoided by using secondary indexes for those 2 database tables,
    and by replacing the inner join query with a FOR ALL ENTRIES statement.
    I want to know how to implement the logic using the FOR ALL ENTRIES statement for this.
    So, please give your suggestions on how to avoid this issue.
    Thanking you.
    Moderator message: Please Read before Posting in the Performance and Tuning Forum
    Edited by: Thomas Zloch on Oct 16, 2011 10:27 PM

    Hi,
    And what do you mean by "they used"? If "SAP used it", then you will need to ask SAP for a note.
    FOR ALL ENTRIES is described quite well in the help. Please also search the forum.
    Without the query it won't be possible to tell how it can be optimized; however, you can try using SE30/SAT and ST05. Maybe that will help you.
    BR
    Marcin Cholewczuk

  • Insert performance issue with Partitioned Table.....

    Hi All,
    I have a performance issue with inserts into a table which is partitioned. Without the table being partitioned
    it ran in less time, but after partitioning it took more than double.
    1) The table was created initially without any partitions, and the below insert took only 27 minutes.
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:27:35.20
    2) Now I re-created the table with partitions (range, yearly - below) and the same insert took 59 minutes.
    Is there any way I can achieve better performance during inserts on this partitioned table?
    [Similarly, I have another table with 50 million records; the insert took 10 hrs without partitioning,
    and 18 hours with the table partitioned...]
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4195045590
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 643K| 34M| | 12917 (3)| 00:02:36 |
    |* 1 | HASH JOIN | | 643K| 34M| 2112K| 12917 (3)| 00:02:36 |
    | 2 | VIEW | index$_join$_001 | 69534 | 1290K| | 529 (3)| 00:00:07 |
    |* 3 | HASH JOIN | | | | | | |
    | 4 | INDEX FAST FULL SCAN| PK_ACCOUNT_MASTER_BASE | 69534 | 1290K| | 181 (3)| 00:00
    | 5 | INDEX FAST FULL SCAN| ACCOUNT_MASTER_BASE_IDX2 | 69534 | 1290K| | 474 (2)| 00:00
    | 6 | TABLE ACCESS FULL | TB_SISADMIN_BALANCE | 2424K| 87M| | 6413 (4)| 00:01:17 |
    Predicate Information (identified by operation id):
    1 - access("A"."VENDOR_ACCT_NBR"=SUBSTR("B"."ACCOUNT_NO",1,8) AND
    "A"."VENDOR_CD"="B"."COMPANY_NO")
    3 - access(ROWID=ROWID)
     Open C1;
     Loop
          Fetch C1 Bulk Collect Into C_Rectype Limit 10000;
          Forall I In 1..C_Rectype.Count
               Insert Into test
                    (col1, col2, col3)
               Values
                    (C_Rectype(I).val1, C_Rectype(I).val2, C_Rectype(I).val3);
          V_Rec := V_Rec + Nvl(C_Rectype.Count,0);
          Commit;
          Exit When C_Rectype.Count = 0;
          C_Rectype.delete;
     End Loop;
     End;
    Total Rec Inserted :- 2424233
    PL/SQL procedure successfully completed.
    Elapsed: 00:51:01.22
    Edited by: user520824 on Jul 16, 2010 9:16 AM

    I'm concerned about the view in step 2 and the index join in step 3. A composite index with both columns might eliminate the index join and result in fewer read operations.
    If you know which partition the data is going into beforehand you can save a little bit of processing by specifying the partition (which may not be a scalable long-term solution) in the insert - I'm not 100% sure you can do this on inserts but I know you can on selects.
    The APPEND hint won't help the way you are using it - the VALUES clause in an insert makes it be ignored. Where it is effective and should help you is if you can do the insert in one query - insert into/select from. If you are using the loop to avoid filling up undo/rollback you can use a bulk collect to batch the selects and commit accordingly - but don't commit more often than you have to because more frequent commits slow transactions down.
    I don't think there is a nologging hint :)
    So, try something like
    insert /*+ hints */ into ...
    Select
         A.Ing_Acct_Nbr, currency_Symbol,
         Balance_Date,     Company_No,
         Substr(Account_No,1,8) Account_No,
         Substr(Account_No,9,1) Typ_Cd ,
         Substr(Account_No,10,1) Chk_Cd,
         Td_Balance,     Sd_Balance,
         Sysdate,     'Sisadmin'
    From Ideaal_Cons.Tb_Account_Master_Base A,
         Ideaal_Staging.Tb_Sisadmin_Balance B
    Where A.Vendor_Acct_Nbr = Substr(B.Account_No,1,8)
       And A.Vendor_Cd = b.company_no
          ;
     Edited by: riedelme on Jul 16, 2010 7:42 AM
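     To make the two suggestions above concrete (a sketch only - the index name and the column mapping in the SELECT list are illustrative assumptions, and APPEND/direct-path loading has recovery implications you need to accept first):
     -- Composite index covering both columns used in the hash-join access
     -- predicate, which may remove the index$_join$_001 view in step 2
     create index ideaal_cons.tb_account_master_base_ix1
         on ideaal_cons.tb_account_master_base (vendor_acct_nbr, vendor_cd);
     -- One direct-path insert instead of the bulk-collect loop; the APPEND
     -- hint is only honoured for INSERT ... SELECT, not for INSERT ... VALUES
     insert /*+ append */ into test (col1, col2, col3)
     select a.ing_acct_nbr, b.company_no, substr(b.account_no, 1, 8)
       from ideaal_cons.tb_account_master_base a,
            ideaal_staging.tb_sisadmin_balance b
      where a.vendor_acct_nbr = substr(b.account_no, 1, 8)
        and a.vendor_cd = b.company_no;
     commit;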

  • Is it possible to create table with partition in compress mode

    Hi All,
    I want to create a table with the COMPRESS option, with partitions. When I create it with partitions the compression isn't enabled, but with a normal table creation the compression option is enabled.
    My question is:
    Can't we create a table with partitions/subpartitions in compress mode? Please help.
    Below is the code that i have used for table creation.
    CREATE TABLE temp
      TRADE_ID                    NUMBER,
      SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      SRC_TRADE_ID                VARCHAR2(60 BYTE),
      SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      TRADE_STATUS                VARCHAR2(60 BYTE),
      TRADE_TYPE                  VARCHAR2(60 BYTE),
      SECURITY_TYPE               VARCHAR2(60 BYTE),
      VOLUME                      NUMBER,
      ENTRY_DATE                  DATE,
        REASON                      VARCHAR2(255 BYTE),
    TABLESPACE data
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    NOLOGGING
    COMPRESS
    NOCACHE
    PARALLEL (DEGREE 6 INSTANCES 1)
    MONITORING
    PARTITION BY RANGE (TRADE_DATE)
    SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    SUBPARTITION TEMPLATE
      (SUBPARTITION SALES VALUES ('sales'),
       SUBPARTITION MAG VALUES ('MAG'),
       SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
       SUBPARTITION FIS VALUES ('FIS'),
       SUBPARTITION GD VALUES ('GS'),
       SUBPARTITION ST VALUES ('ST'),
       SUBPARTITION KOR VALUES ('KOR'),
       SUBPARTITION BLR VALUES ('BLR'),
       SUBPARTITION SUT VALUES ('SUT'),
       SUBPARTITION RM VALUES ('RM'),
       SUBPARTITION DEFAULT VALUES (default)
    PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
        LOGGING
            TABLESPACE data
         ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE data,
        SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE data,
        SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE data,
        SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE data,
        SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE data,
        SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE data,
        SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE data,
        SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE data,
        SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE data,
        SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE data,
        SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE data));
    Edited by: user11942774 on 8 Dec, 2011 5:17 AM

    user11942774 wrote:
    I want to create a table with the COMPRESS option, with partitions. When I create it with partitions the compression isn't enabled, but with a normal table creation the compression option is enabled.
    First of all, your CREATE TABLE statement is full of syntax errors. Next time test it before posting - we don't want to spend time on fixing things not related to your question.
    Now, I bet you check the COMPRESSION value of the partitioned table the same way you do it for a non-partitioned table - in USER_TABLES - and therefore get wrong results. Since compression can be enabled at the individual partition level, you need to check COMPRESSION in USER_TAB_PARTITIONS:
    SQL> CREATE TABLE temp
      2  (
      3    TRADE_ID                    NUMBER,
      4    SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      5    SRC_TRADE_ID                VARCHAR2(60 BYTE),
      6    SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      7    ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      8    TRADE_STATUS                VARCHAR2(60 BYTE),
      9    TRADE_TYPE                  VARCHAR2(60 BYTE),
    10    SECURITY_TYPE               VARCHAR2(60 BYTE),
    11    VOLUME                      NUMBER,
    12    ENTRY_DATE                  DATE,
    13      REASON                      VARCHAR2(255 BYTE),
    14    TRADE_DATE                  DATE
    15  )
    16  TABLESPACE users
    17  PCTUSED    0
    18  PCTFREE    10
    19  INITRANS   1
    20  MAXTRANS   255
    21  NOLOGGING
    22  COMPRESS
    23  NOCACHE
    24  PARALLEL (DEGREE 6 INSTANCES 1)
    25  MONITORING
    26  PARTITION BY RANGE (TRADE_DATE)
    27  SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    28  SUBPARTITION TEMPLATE
    29    (SUBPARTITION SALES VALUES ('sales'),
    30     SUBPARTITION MAG VALUES ('MAG'),
    31     SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
    32     SUBPARTITION FIS VALUES ('FIS'),
    33     SUBPARTITION GD VALUES ('GS'),
    34     SUBPARTITION ST VALUES ('ST'),
    35     SUBPARTITION KOR VALUES ('KOR'),
    36     SUBPARTITION BLR VALUES ('BLR'),
    37     SUBPARTITION SUT VALUES ('SUT'),
    38     SUBPARTITION RM VALUES ('RM'),
    39     SUBPARTITION DEFAULT_SUB VALUES (default)
    40    )  
    41  (  
    42   PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
    43      LOGGING
    44          TABLESPACE users
    45       ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE users,
    46      SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE users,
    47      SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE users,
    48      SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE users,
    49      SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE users,
    50      SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE users,
    51      SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE users,
    52      SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE users,
    53      SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE users,
    54      SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE users,
    55      SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE users));
    Table created.
    SQL>
    SQL>
    SQL> SELECT  PARTITION_NAME,
      2          COMPRESSION
      3    FROM USER_TAB_PARTITIONS
      4    WHERE TABLE_NAME = 'TEMP'
      5  /
    PARTITION_NAME                 COMPRESS
    RMS_TRADE_DLY_MAX              ENABLED
    SQL> SELECT  COMPRESSION
      2    FROM USER_TABLES
      3    WHERE TABLE_NAME = 'TEMP'
      4  /
    COMPRESS
    SQL>
    SY.

  • Table creation with partition

    Following is the table creation script with partitioning:
    CREATE TABLE customer_entity_temp (
    BRANCH_ID NUMBER (4),
    ACTIVE_FROM_YEAR VARCHAR2 (4),
    ACTIVE_FROM_MONTH VARCHAR2 (3),
    partition by range (ACTIVE_FROM_YEAR,ACTIVE_FROM_MONTH)
    (partition yr7_1999 values less than ('1999',TO_DATE('Jul','Mon')),
    partition yr12_1999 values less than ('1999',TO_DATE('Dec','Mon')),
    it gives an error
    ORA-14036: partition bound value too large for column
    but if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that so?
    Also, by creating a table in this way and populating the data into their respective partitions, all rows with month less than "JULY" will go into the yr7_1999 partition and all rows with month value between "JUL" and "DEC" will go into the second partition yr12_1999. Where will the data with month value equal to "DEC" go?
    Please help me in solving this problem.
    thanks n regards
    Moloy

    Hi,
    You declared ACTIVE_FROM_MONTH as VARCHAR2(3) and you are trying to check it against a date in your partitioning clause: TO_DATE('Jul','Mon'). So you should first check your data model and what you are trying to achieve exactly.
    With such a partition declaration, you will not be able to insert dates from December 1998 included and onward. The values are strictly less than (<), not less than or equal (<=), hence such rows can't be inserted. I'd advise you to check the MAXVALUE wildcard and the ENABLE ROW MOVEMENT partitioning clause.
    Regards,
    Yoann.
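    If the intent is simply to split 1999 into a first and a second half, one way to avoid comparing a VARCHAR2(3) month against a DATE (a sketch only - the single ACTIVE_FROM date column is an assumption about your data model) is to range-partition on a real DATE column, with a MAXVALUE catch-all so December 1999 and later rows still have somewhere to go:
    create table customer_entity_temp (
      branch_id    number(4),
      active_from  date
    )
    partition by range (active_from) (
      -- rows strictly before 01-Jul-1999
      partition yr7_1999  values less than (to_date('19990701','YYYYMMDD')),
      -- rows from 01-Jul-1999 up to (not including) 01-Dec-1999
      partition yr12_1999 values less than (to_date('19991201','YYYYMMDD')),
      -- everything else, including December 1999 and all later dates
      partition yr_max    values less than (maxvalue)
    );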

  • ORA-00604 ORA-00904 When query partitioned table with partitioned indexes

    Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
    The query runs fine when querying the partitioned table without partitioned indexes.
    Here is the query.
    SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
    al27.accessory_code
    FROM vlc.veh_vdc_accessorization_fact al1,
    vlc.vdc_dim al2,
    vlc.model_attribute_dim al7,
    vlc.ppo_list_dim al18,
    vlc.ppo_list_indiv_type_dim al23,
    vlc.accy_type_dim al27
    WHERE ( al2.vdc_id = al1.vdc_location_id
    AND al7.model_attribute_id = al1.model_attribute_id
    AND al18.mydppolist_id = al1.ppo_list_id
    AND al23.mydppolist_id = al18.mydppolist_id
    AND al23.mydaccytyp_id = al27.mydaccytyp_id
    AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
    AND al2.vdc_name IN
    ('PORT OF BALTIMORE',
    'PORT OF JACKSONVILLE - LEXUS',
    'PORT OF LONG BEACH',
    'PORT OF NEWARK',
    'PORT OF PORTLAND')
    AND al27.accessory_code IN ('42', '43', '44', '45')))
    GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code

    I would recommend that you post this at the following OTN forum:
    Database - General
    General Database Discussions
    and perhaps at:
    Oracle Warehouse Builder
    Warehouse Builder
    The Oracle OLAP forum typically does not cover general data warehousing topics.

  • Partitioning - query on large table v. query accessing several partitions

    Hi,
    We are using partitioning on a large fact table; however, in deciding the partitioning strategy we are looking for advice regarding queries which have to access several partitions versus a query against one large table.
    What is quicker - a query which accesses a large table, or a query which accesses several partitions to return results?
    We need to partition due to size/admin etc., but want to make sure queries which need to access more than one partition are not significantly slower than ones which access a large table by comparison.
    Ones which access just one partition are fine, but some queries have to access several partitions.
    Many Thanks

    Here are your choices stated another way. Is it better to:
    1. Get one week's data by reading one month's data and throwing away 75% of it (assumes partitioning by month)
    2. Get one week's data by reading three weeks of it and throwing away part of two weeks? (assumes partitioning by week)
    3. Get one week's data by reading seven daily partitions and not having to throw away any of it? (assumes daily partitioning)
    I have partitioned as frequently as every 5-15 minutes (banking and telecom) and have yet to find a situation where partitions larger than the minimum date range of the majority of queries make sense.
    Anyone can insert data into a table ... an extra millisecond per insert is generally irrelevant. What you want to do is optimize reading the data where that extra millisecond per row, over millions of rows, adds up to measurable time.
    But this is Oracle, so the best answer to your question is to recommend you not take anyone's advice on this but rather run some tests with real data, in real-world volumes, with real-world DML and queries.

  • Troubles editing tables with partitions

    I'm running SQL Developer 1.5.3 against Oracle 10/11 databases and SQL Developer has trouble with my partitioned tables. Both the schema owner and sys users experience the same problems.
    The first time I try to edit a table, I get an "Error Loading Objects" dialog with a NullPointException message. If I immediately turn around and try to edit the table again, I get the Edit Table dialog. That's annoying but there's at least a work-around.
    Next, if I select the Indexes pane, I can view the first index but selecting another one results in an "Index Error on <table>" error dialog. The message is "There are no table partitions on which to define local index partitions". At this point, selecting any of the other panes (Columns, Primary Key, etc.) results in the same dialog. While the main Partitions tab shows my partitions, I cannot see them in the Edit Table dialog. In fact, the Partition Definitions and Subpartition Templates panes are blank.
    Does anyone else see this behavior? Version 1.5.1 behaved the same way so it's not new.
    Of course I've figured out how to do everything I need through SQL but it would be handy if I could just use the tool.
    Thank you.

    Most of my tables are generated from a script so this morning I decided to just create a very basic partitioned table. It contained a NUMBER primary key and a TIMESTAMP(6) column to use with partitioning. That table worked just fine in SQL Developer.
    At that point I tried to figure out what is different about my tables and I finally found the difference... Oracle Spatial. If I add an MDSYS.SDO_GEOMETRY column to my partitioned table, SQL Developer starts having issues.
    I also have the GeoRaptor plugin installed so I had to wonder if it was interfering with SQL Developer. I couldn't find an option to uninstall an extension so I went into the sqldeveloper/extensions directory and removed GeoRaptorLibs and org.GeoRaptor.jar. GeoRaptor doesn't appear to be installed in SQL Developer anymore but I still see the same behavior.
    It appears that there is an issue in SQL Developer with Oracle Spatial and partitioning. Can someone confirm this?

  • Problems with partition tables

    Hi all,
    I've got some problems with partitioned tables. The script at the bottom runs, but when I want to insert some values it returns an error
    (ORA-06550: line 1, column 30: PL/SQL: ORA-06552: PL/SQL: Compilation unit analysis terminated
    ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed
    ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored)
    and I can't understand why!
    Is there something incorrect in the script or not?
    Please help me
    Thanks in advance
    Steve
    CREATE TABLE TW_E_CUSTOMER_UNIFIED (
    ID_CUSTOMER_UNIFIED VARCHAR2 (27) NOT NULL ,
    START_VALIDITY_DATE DATE NOT NULL ,
    END_VALIDITY_DATE DATE ,
    CUSTOMER_STATUS VARCHAR2 (255)
    )
    PARTITION BY RANGE (START_VALIDITY_DATE)
    SUBPARTITION BY LIST (END_VALIDITY_DATE)
    (
    PARTITION M200909 VALUES LESS THAN (TO_DATE('20091001','YYYYMMDD'))
    (SUBPARTITION M200909_N VALUES (NULL), SUBPARTITION M200909_NN VALUES (DEFAULT)),
    PARTITION M200910 VALUES LESS THAN (TO_DATE('20091101','YYYYMMDD'))
    (SUBPARTITION M200910_N VALUES (NULL), SUBPARTITION M200910_NN VALUES (DEFAULT)),
    PARTITION M200911 VALUES LESS THAN (TO_DATE('20091201','YYYYMMDD'))
    (SUBPARTITION M200911_N VALUES (NULL), SUBPARTITION M200911_NN VALUES (DEFAULT)),
    PARTITION M200912 VALUES LESS THAN (TO_DATE('20100101','YYYYMMDD'))
    (SUBPARTITION M200912_N VALUES (NULL), SUBPARTITION M200912_NN VALUES (DEFAULT)),
    PARTITION M201001 VALUES LESS THAN (TO_DATE('20100201','YYYYMMDD'))
    (SUBPARTITION M201001_N VALUES (NULL), SUBPARTITION M201001_NN VALUES (DEFAULT)),
    PARTITION M201002 VALUES LESS THAN (TO_DATE('20100301','YYYYMMDD'))
    (SUBPARTITION M201002_N VALUES (NULL), SUBPARTITION M201002_NN VALUES (DEFAULT)),
    PARTITION M210001 VALUES LESS THAN (MAXVALUE)
    (SUBPARTITION M210001_N VALUES (NULL), SUBPARTITION M210001_NN VALUES (DEFAULT))
    );

    Hi Hoek,
    the DB version is 10.2 (Italian version, so SET is correct).
    ...there's something strange: now I can INSERT rows but I can't update them!
    I'm using this command string:
    UPDATE TW_E_CUSTOMER_UNIFIED SET END_VALIDITY_DATE = TO_DATE('09-SET-09', 'DD-MON-RR') WHERE
    id_customer_unified = '123' and start_validity_date = TO_DATE('09-SET-09', 'DD-MON-RR');
    And this is the error:
    Error SQL: ORA-14402: updating partition key column would cause a partition change
    14402. 00000 - "updating partition key column would cause a partition change"
    *Cause:    An UPDATE statement attempted to change the value of a partition
    key column causing migration of the row to another partition
    *Action:   Do not attempt to update a partition key column or make sure that
    the new partition key is within the range containing the old
    partition key.
    I think it is impossible to use a PARTITION/SUBPARTITION like that: in fact the update of "END_VALIDITY_DATE" causes a partition change.
    Do you agree, or is an update on a field that implies a partition change possible?
    Regards Steve
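    For what it's worth, ORA-14402 can be lifted by allowing Oracle to migrate the row when the (sub)partition key changes; a minimal sketch (assuming you accept the extra cost of the physical row movement, and keeping the Italian 'SET' month abbreviation from the session above):
    alter table TW_E_CUSTOMER_UNIFIED enable row movement;
    -- With row movement enabled, the update that raised ORA-14402 should be
    -- accepted, and Oracle moves the row into the matching subpartition.
    update TW_E_CUSTOMER_UNIFIED
       set end_validity_date = to_date('09-SET-09', 'DD-MON-RR')
     where id_customer_unified = '123'
       and start_validity_date = to_date('09-SET-09', 'DD-MON-RR');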

  • View Criteria in ADF Query Panel with Table-Class Cast Exception

    Hi,
    I am getting Class Cast Exception when using view criteria for ADF Query Panel with Table. The version I am using is 11g Release 1(11.1.1.2.0)
    Here is what I did:
    1. created a view criteria on a view object
    2. all are optional
    3. all are Strings
    4. Dragged the view criteria as a query component (ADF Query panel with Query table) onto the design layout
    and the error when I clicked the Search button is:
    javax.el.ELException: java.lang.ClassCastException: oracle.jbo.common.ViewCriteriaImpl cannot be cast to oracle.jbo.ViewCriteriaRow
    at com.sun.el.parser.AstValue.invoke(AstValue.java:161)
    at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:283)
    at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcastToMethodExpression(UIXComponentBase.java:1289)
    at oracle.adf.view.rich.component.UIXQuery.broadcast(UIXQuery.java:115)
    at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:148)
    at oracle.adf.view.rich.component.fragment.UIXRegion.broadcast(UIXRegion.java:148)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:812)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:292)
    at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:177)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:191)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at sni.foundation.facesextensions.filters.FoundationFilter.doFilter(FoundationFilter.java:92)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:97)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
    at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:247)
    at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:157)
    at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:94)
    at java.security.AccessController.doPrivileged(Native Method)
    at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
    at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
    at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:138)
    at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:70)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:326)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.lang.ClassCastException: oracle.jbo.common.ViewCriteriaImpl cannot be cast to oracle.jbo.ViewCriteriaRow
    at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding._clearFilterCriteriaRows(FacesCtrlSearchBinding.java:4549)
    at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding._addFilterCriteria(FacesCtrlSearchBinding.java:4603)
    at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding.processQuery(FacesCtrlSearchBinding.java:423)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.el.parser.AstValue.invoke(AstValue.java:157)
    Thanks
    Venkatesh

    Hi Frank.
    I'm using JDev 11.1.1.3.0, in which, as you suggest, the error should no longer be present.
    I can pick my query from the "Saved Search" pick list on the QueryPanel list of queries just fine, and it sets up the filter properly, but when I press the "Search" button, I get the same reported error...
    <RegistrationConfigurator><handleError> Server Exception during PPR, #1
    javax.el.ELException: java.lang.ClassCastException: oracle.jbo.common.ViewCriteriaImpl cannot be cast to oracle.jbo.ViewCriteriaRow
         at com.sun.el.parser.AstValue.invoke(AstValue.java:161)
         at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:283)
         at org.apache.myfaces.trinidad.component.UIXComponentBase.broadcastToMethodExpression(UIXComponentBase.java:1303)
         at oracle.adf.view.rich.component.UIXQuery.broadcast(UIXQuery.java:115)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.broadcastEvents(LifecycleImpl.java:812)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl._executePhase(LifecycleImpl.java:292)
         at oracle.adfinternal.view.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:177)
         at javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:191)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.adfinternal.view.faces.webapp.rich.RegistrationFilter.doFilter(RegistrationFilter.java:97)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
         at oracle.adfinternal.view.faces.activedata.AdsFilter.doFilter(AdsFilter.java:60)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl$FilterListChain.doFilter(TrinidadFilterImpl.java:420)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl._doFilterImpl(TrinidadFilterImpl.java:247)
         at org.apache.myfaces.trinidadinternal.webapp.TrinidadFilterImpl.doFilter(TrinidadFilterImpl.java:157)
         at org.apache.myfaces.trinidad.webapp.TrinidadFilter.doFilter(TrinidadFilter.java:92)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:94)
         at java.security.AccessController.doPrivileged(Native Method)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
         at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:414)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:138)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:330)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.doIt(WebAppServletContext.java:3684)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3650)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2268)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2174)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1446)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.lang.ClassCastException: oracle.jbo.common.ViewCriteriaImpl cannot be cast to oracle.jbo.ViewCriteriaRow
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding._clearFilterCriteriaRows(FacesCtrlSearchBinding.java:4588)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding._addFilterCriteria(FacesCtrlSearchBinding.java:4642)
         at oracle.adfinternal.view.faces.model.binding.FacesCtrlSearchBinding.processQuery(FacesCtrlSearchBinding.java:424)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.sun.el.parser.AstValue.invoke(AstValue.java:157)
         ... 42 more

  • Failing to refresh LOV fields added in the query panel with table

    Hi, I am using JDev 11.1.1.2.0.
    I have a scenario where I need to add 2 cascaded LOVs in a search panel and, on clicking the search button, the result should be displayed in table form.
    For example:
    I have cascaded LOV fields departmentId and Firstname;
    the first name dropdown values depend on the value selected in the DepartmentId dropdown. I need to add only these 2 fields in the search panel, and on clicking the search button the result should be displayed in the emp table.
    I have achieved this by creating a view criteria with the 2 fields and dragging and dropping it as a query panel with table. But my problem is that the Firstname LOV is not populating based on departmentId; it is showing a static dropdown list.
    Please help me with how to achieve this. Thanks in advance.
    Regards
    Alekhya.

    Thanks for the reply. Actually, if I am using those cascaded LOVs in a form, then we can set the properties you mentioned to refresh and display the values correctly in the dropdown.
    My scenario is that I need to use those fields in the query panel header.
    Code snippet:
    <af:panelHeader text="Employees" id="ph1">
    <af:query id="qryId1" headerText="Search" disclosed="true"
    value="#{bindings.EmployeesViewCriteria1Query1.queryDescriptor}"
    model="#{bindings.EmployeesViewCriteria1Query1.queryModel}"
    queryListener="#{bindings.EmployeesViewCriteria1Query1.processQuery}"
    queryOperationListener="#{bindings.EmployeesViewCriteria1Query1.processQueryOperation}"
    resultComponentId="::resId1"/>
    </af:panelHeader>
    I do not have individual field names on which to add those properties.
    Regards
    Alekhya
