Large partitioned tables with WM

Hello
I've got a few large tables (6-10GB+) that will have around 500k new rows added daily as part of an overnight batch job. No rows are ever updated; rows are only inserted, or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than working through the whole list in one go.
I need to provide views of the data as of a point in time, e.g. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
I need to keep at least 10 days' worth of data, and I was planning to partition the tables and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
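To illustrate the kind of thing I mean (names are made up; whether this is supported on the _LT table is exactly my question):
DECLARE
    v_physical VARCHAR2(61);
BEGIN
    -- returns e.g. 'MY_TABLE_LT' for a version-enabled table
    v_physical := DBMS_WM.GetPhysicalTableName('MY_SCHEMA', 'MY_TABLE');
    EXECUTE IMMEDIATE 'ALTER TABLE ' || v_physical || ' DROP PARTITION ptn_20070901';
END;
/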
Thanks in advance
David

Hello Ben
Thank you for your reply.
The table structure we have is like so:
CREATE TABLE hdr
(   pk_id               NUMBER PRIMARY KEY,
    customer_id         NUMBER REFERENCES customer,
    entry_type          NUMBER NOT NULL
);
CREATE TABLE dtl_daily
(   pk_id               NUMBER PRIMARY KEY,
    hdr_id              NUMBER REFERENCES hdr,
    active_date         DATE NOT NULL,
    col1                NUMBER,
    col2                NUMBER
)
PARTITION BY RANGE (active_date)
(   PARTITION ptn_200709
        VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
        TABLESPACE x COMPRESS,
    PARTITION ptn_200710
        VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
        TABLESPACE x COMPRESS
);
CREATE TABLE dtl_hourly
(   pk_id               NUMBER PRIMARY KEY,
    hdr_id              NUMBER REFERENCES hdr,
    active_date         DATE NOT NULL,
    active_hour         NUMBER NOT NULL,
    col1                NUMBER,
    col2                NUMBER
)
PARTITION BY RANGE (active_date)
(   PARTITION ptn_20070901
        VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
        TABLESPACE x COMPRESS,
    PARTITION ptn_20070902
        VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
        TABLESPACE x COMPRESS,
    PARTITION ptn_20070903
        VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
        TABLESPACE x COMPRESS
    -- ...one partition for every day for 20 years
);
The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows having the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
At the moment, changes to customer data require that the content of these tables is rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change this to a near real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys (a sketch follows). If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
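As a sketch, the rebuild for one customer amounts to something like this (hdr_seq is a hypothetical sequence; the real code derives the new rows from the changed customer data):
DELETE FROM dtl_hourly WHERE hdr_id IN (SELECT pk_id FROM hdr WHERE customer_id = :cust_id);
DELETE FROM dtl_daily  WHERE hdr_id IN (SELECT pk_id FROM hdr WHERE customer_id = :cust_id);
DELETE FROM hdr        WHERE customer_id = :cust_id;
-- then re-insert the new set of entries under fresh synthetic keys
INSERT INTO hdr (pk_id, customer_id, entry_type)
VALUES (hdr_seq.NEXTVAL, :cust_id, :entry_type);
-- ...followed by the corresponding dtl_daily/dtl_hourly rows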
For any one customer, there may be rows in the hourly table going out 20 years at an hourly granularity, but once the active date has passed (by 10 days), we no longer need to keep them. This is why we were considering partitioning: it gives us a simple way of dropping off old data and, as a nice side effect, helps the performance of queries that look for active data between a range of dates (which is most of them).
I did have a look at the idea of savepoints, but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table, but instead at close of business every day we create a savepoint like "savepoint_20070921", and instead of using DBMS_WM.GotoDate we would use DBMS_WM.GotoSavePoint? Then every day we would do:
BEGIN
    DBMS_WM.DeleteSavepoint(
        workspace                  => 'LIVE',
        savepoint_name             => 'savepoint_20070910',  -- 10 days ago
        compress_view_wo_overwrite => TRUE);
    DBMS_WM.CompressWorkspace(
        workspace                  => 'LIVE',
        compress_view_wo_overwrite => TRUE,
        firstSP                    => 'savepoint_20070911'); -- the new oldest savepoint
END;
/
Is my understanding correct?
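For the reporting side, I assume a session would then simply do:
EXECUTE DBMS_WM.GotoSavePoint('LIVE', 'savepoint_20070920');
-- queries against the versioned views now see close-of-business data
EXECUTE DBMS_WM.GotoWorkspace('LIVE');  -- back to the latest version afterwards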
David

Similar Messages

  • How to manage large partitioned table

    Dear all,
    we have a large partitioned table with 126 columns and 380GB, not indexed. Can anyone tell me how to manage it? The queries are now taking more than 5 days.
    Looking forward to your reply.
    thank you

    Hi,
    You can store partitioned tables in separate tablespaces. This does the following (see the sketch below):
    • Reduce the possibility of data corruption in multiple partitions
    • Back up and recover each partition independently
    • Control the mapping of partitions to disk drives (important for balancing I/O load)
    • Improve manageability, availability, and performance
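    For example (hypothetical names), one tablespace per range partition:
    CREATE TABLE sales_fact
    (   sale_date  DATE,
        amount     NUMBER
    )
    PARTITION BY RANGE (sale_date)
    (   PARTITION p_2007q1
            VALUES LESS THAN (TO_DATE('2007-04-01','YYYY-MM-DD'))
            TABLESPACE ts_2007q1,
        PARTITION p_2007q2
            VALUES LESS THAN (TO_DATE('2007-07-01','YYYY-MM-DD'))
            TABLESPACE ts_2007q2
    );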
    Remember, as the doc states:
    The maximum number of partitions or subpartitions that a table may have is 1024K-1.
    Lastly you can use SQL*Loader and the import and export utilities to load or unload data stored in partitioned tables. These utilities are all partition and subpartition aware.
    Document Reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm
    Adith

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate the new Oracle 12c enhancements to SecureFiles, I tried to use PDML statements on a non-partitioned table with a LOB column in both the Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

    Yes, I did. I tried with FORCE PARALLEL DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in parallel.
    I tried another statement:
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but not the INSERT AS SELECT!

  • Creating index on large partitioned table

    Is anyone aware of a method for telling how far along the creation of an index on a large partitioned table is? The statement I am executing is like this:
    CREATE INDEX "owner"."new_index"
    ON "owner"."mytable"(col_1, col_2, col_3, col_4)
    PARALLEL 8 NOLOGGING ONLINE LOCAL;
    This is a two-node RAC system on Windows 2003 x64, using ASM. There are more than 500,000,000 rows in the table, and I'd estimate that each row is about 600-1000 bytes in size.
    Thank you.

    You can track the progress from v$session_longops:
    SELECT substr(sid || ',' || serial#, 1, 8)              "sid,srl#",
           substr(opname || '>' || target, 1, 50)           op_target,
           substr(trunc(sofar/totalwork*100) || '%', 1, 5)  progress,
           time_remaining                                   rem,
           elapsed_seconds                                  elapsed
    FROM   v$session_longops
    WHERE  sofar != totalwork
    ORDER BY sid;
    hth

  • Select count from large fact tables with bitmap indexes on them

    Hi..
    I have several large fact tables with bitmap indexes on them. When I do a SELECT COUNT(*) from one of these tables, I get a different result than when I do a SELECT column_one, COUNT(*) FROM the table GROUP BY column_one. I don't have any NULL values in these columns. Is there a patch or a one-off that can rectify this?
    Thx

    You may have corruption in the index if the queries ...
    Select /*+ full(t) */ count(*) from my_table t
    ... and ...
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    ... give different results.
    Look on Metalink for patches, and in the meantime drop and re-create the indexes, or make them unusable and then rebuild them.
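    For example, with my_index standing in for the suspect bitmap index:
    ALTER INDEX my_index UNUSABLE;
    ALTER INDEX my_index REBUILD;
    -- if the index is LOCAL (partitioned), rebuild it partition by partition instead:
    ALTER INDEX my_index REBUILD PARTITION p_200701;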

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The last analyze was on 1/25/2007. Do I need to analyze it? (Queries on this table run very slowly.)
    If I need to analyze it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
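    A sketch of that last suggestion, with hypothetical schema/table names - back up the current statistics, then gather global statistics only:
    BEGIN
        -- create a stat table and save the existing statistics into it
        DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYSCHEMA', stattab => 'STATS_BACKUP');
        DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'BIG_TABLE',
                                      stattab => 'STATS_BACKUP');
        -- global stats only, small sample, partition stats left untouched
        DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'MYSCHEMA',
                                      tabname          => 'BIG_TABLE',
                                      granularity      => 'GLOBAL',
                                      estimate_percent => 5,
                                      cascade          => FALSE);
    END;
    /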
    Justin

  • ORA-00604 ORA-00904 When query partitioned table with partitioned indexes

    Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
    The query runs fine against the partitioned table without the partitioned indexes.
    Here is the query.
    SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
    al27.accessory_code
    FROM vlc.veh_vdc_accessorization_fact al1,
    vlc.vdc_dim al2,
    vlc.model_attribute_dim al7,
    vlc.ppo_list_dim al18,
    vlc.ppo_list_indiv_type_dim al23,
    vlc.accy_type_dim al27
    WHERE ( al2.vdc_id = al1.vdc_location_id
    AND al7.model_attribute_id = al1.model_attribute_id
    AND al18.mydppolist_id = al1.ppo_list_id
    AND al23.mydppolist_id = al18.mydppolist_id
    AND al23.mydaccytyp_id = al27.mydaccytyp_id
    AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
    AND al2.vdc_name IN
    ('PORT OF BALTIMORE',
    'PORT OF JACKSONVILLE - LEXUS',
    'PORT OF LONG BEACH',
    'PORT OF NEWARK',
    'PORT OF PORTLAND')
    AND al27.accessory_code IN ('42', '43', '44', '45') ) )
    GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code;

    I would recommend that you post this at the following OTN forum:
    Database - General
    General Database Discussions
    and perhaps at:
    Oracle Warehouse Builder
    Warehouse Builder
    The Oracle OLAP forum typically does not cover general data warehousing topics.

  • Performance between two partitioned tables with different structures

    Hi,
    I would like to know if there is a difference between two partitioned tables with different structures in terms of performance (access, queries, insertions, updates).
    I explain myself in detail :
    I have a table that stores one value every 10 minutes in a day (so we have 144 values (24*6) in the whole day), with the corresponding id.
    Here is the structure:
    Table T1
    + id        (PK)
    + date      (PK)
    + sample1
    + sample2
    + ...
    + sample144
    The table is partitioned on the date column, with one partition per month. The primary key is on the columns (id, date).
    There is an additional index on the column (id) (is it useful?).
    I would like to know if it is better to have a table with just (id, date, value), so that for one row in the current table we would have 144 rows in the future table. The partitioning would remain as before, with the primary key (and its associated index) on (id, date).
    What are the gains or losses in performance with this new structure (access, DML, storage)?
    I discussed it with the Java developers and they say it is simpler to manage in their code.
    Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Thanks & Regards
    From France
    Oliver

    I mean storage in tablespaces and datafiles on disk.
    Can you please justify it and give me concrete arguments why the two structures are equivalent (except for inserting data into T(id, date, value))? Because I have to make a report.
    I didn't say anything like "the two structures are equivalent (except inserting data into T(id, date, value))". I said that TABLE1(id, date, value) is better than TABLE1(id, date, sample1, ..., sample144) because:
    1) Oracle has a restriction on the number of columns. OK, you can have 144 columns now, but if in the future you must have more than 1000 columns, what will you do?
    2) There are restrictions on table compression (table compression is not supported for tables with more than 255 columns).
    3) Storing values of the same type in different columns is bad practice.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i4383
    I remember seeing Tom's article about this, but I can't find it now, sorry. If I find it I will post it here.

  • Migrating a new partition table with transportable tablespace

    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    Yes, why not.
    1) create a table in a new tablespace on the source as a CTAS from the 2012 partition
    2) transport the tablespace
    3) add a partition to the existing partitioned table, or exchange partition (sketch below)
    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
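    A sketch of those steps with hypothetical names (sales range-partitioned by year):
    -- on the source: isolate the 2012 data in its own tablespace
    CREATE TABLE sales_2012 TABLESPACE ts_2012 AS
        SELECT * FROM sales PARTITION (p_2012);
    -- make ts_2012 read only and transport it (expdp/impdp with TRANSPORT_TABLESPACES);
    -- then on the target:
    ALTER TABLE sales ADD PARTITION p_2012
        VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'));
    ALTER TABLE sales EXCHANGE PARTITION p_2012 WITH TABLE sales_2012;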

  • Exchange partition on large partitioned tables in a data warehouse

    Hi all,
    Oracle 10.2.0.4 (64-bit) and OS 5.3 (64-bit).
    We have large tables in our DWH, TB in size, holding 13 months of data.
    Now our management wants to split these tables in two:
    the current tables would contain data for the current month plus 3 months, and history tables would contain the 9 months of historical data.
    We have no space on the mount point for export/import.
    Will exchange partition work for this? If yes, please provide steps/demo/examples.
    Some partitions are more than 300GB in size.

    user610482 wrote:
    Hi Oracle gurus,
    I need a dynamic script to add a MAXVALUE partition to all 100 tables in the schema; the tables have different partition keys and different tablespaces.
    AVB1_NOTIFICATIONSL has 2 columns in its partition key.
    For example: alter table AVB1_NOTIFICATIONSL add partition DMAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE LARGE_D
    Is the SQL above valid?
    Does it do what you require?

  • Partitioning table with sequence made in java base 36

    Hi, sorry for my poor English.
    I would like to partition a big table with more than 5 million rows on a sequence generated in Java in base 36.
    Thanks in advance for your help.
    Regards

    Is this sequence stored in a column in the database?
    How do you want to partition the table? RANGE? HASH? Or is there some sort of list partitioning scheme you're thinking of?
    If the table already exists, you know that you're going to have to re-create the table as a partitioned table, re-create appropriate constraints, indexes, triggers, etc. and move all the data over, right? Online table redefinition can hide some of this complexity from you (though it may well introduce other sorts of complexity).
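    A minimal sketch of online redefinition, assuming an interim table BIG_TABLE_PART has already been created with the desired partitioned layout:
    DECLARE
        num_errors PLS_INTEGER;
    BEGIN
        DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TABLE');
        DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_PART');
        -- clone indexes, triggers, constraints, etc. onto the interim table
        DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TABLE', 'BIG_TABLE_PART',
            DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, num_errors);
        DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_PART');
    END;
    /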
    Justin

  • Accessing large partitioned tables over a database link - any gotchas?

    Hi,
    We are in the middle of a corporate acquisition and I have a question about using database links to efficiently access large tables. There are two geographically distinct database instances, both on Oracle 10.2.0.5 sitting on Linux boxes.
    The primary instance (PSHR) contains a PeopleSoft HR and Payroll system and sits in our data centre.
    The secondary instance (HGPAY) runs a home grown payroll application and sits in a different data centre to PSHR.
    The requirement is to allow PeopleSoft (PSHR) to display targeted (one employee at a time) payroll data from the secondary instance.
    For example in HGPAY
    CREATE TABLE MY_PAY_DATA AS
    SELECT TO_CHAR(A.RN, '00000000') "EMP" -- This is an 8 digit leading 0 unique identifier
    , '20110' || to_char(B.RN) "PAY_PRD" -- This is a format of fiscal year plus fortnight in year (01-27)
    , C.SOME_KEY -- This is the pay element being considered - effectively random
    , 'XXXXXXXXXXXXXXXXX' "FILLER1"
    , 'XXXXXXXXXXXXXXXXX' "FILLER2"
    , 'XXXXXXXXXXXXXXXXX' "FILLER3"
    FROM ( SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 300) A
    , (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 3) B
    , (SELECT TRUNC(ABS(DBMS_RANDOM.RANDOM())) "SOME_KEY" FROM DUAL CONNECT BY LEVEL <= 300) C
    ORDER BY PAY_PRD, EMP
    HGPAY.MY_PAY_DATA is range partitioned on EMP (approx 300 employees per partition) and list sub-partitioned on PAY_PRD (3 pay periods per sub-partition). I have limited the create statement above to represent one sub-partition of data.
    On average each employee generates 300 rows in this table each pay period. The table has approx 180 million rows and growing every fortnight.
    In PSHR
    CREATE VIEW PS_HG_PAY_DATA (EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3)
    AS SELECT EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3 FROM MY_PAY_DATA@HGPAY
    PeopleSoft would then generate SQL along the lines of
    SELECT * FROM PS_HG_PAY_DATA WHERE EMP = '00002561' AND PAY_PRD = '201025'
    The link between the data centres where PSHR and HGPAY sit is not the best in the world, but I am expecting tens of access requests per day rather than thousands, so I believe the link should have sufficient bandwidth to meet the requirement.
    I have tried a quick test on two production-sized test instances and it works, in that it presents the data; when I look at the explain plan I can see that the remote database is only sending the relevant sub-partition over to PSHR rather than the whole table. Before I pat myself on the back with a "job well done" - is there a gotcha that I am missing in using a dblink to access big partitioned tables?

    Yes, that's about right. A lot of this depends on exactly what happens in various "oops" scenarios: are you, for example, just burning some extra CPU until someone comes to the DBA and says "my query is slow", or does saturating the network have some knock-on effect on critical apps, or do random long-running queries prevent some partition maintenance operations?
    In my mind, the simplest possible solution (assuming you are using a fixed username in the database link) would be to create a profile on HGPAY for the user that is defined for the database link that set a LOGICAL_READS_PER_CALL value that was large enough to handle any "reasonable" request and low enough to quickly kill any session that tried to do something "stupid". Obviously, you'd need to define "stupid" in your environment particularly where the scope of a "simple reconciliation report" is undefined. If there are no political issues and you can adjust the profile values over time as you encounter new reports that slowly increase what is deemed "reasonable" this is likely the simplest approach. If you've got to put in a change request to change the setting that has to be reviewed by the change control board at its next quarterly meeting with the outsourced DBA vendor, on the other hand, you could turn a 30 minute report into 30 hours of work spread over 30 days. In the ideal world, though, that's where I'd start.
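    A sketch of the profile idea (names and the limit value are placeholders to tune):
    CREATE PROFILE hgpay_link_limit LIMIT
        LOGICAL_READS_PER_CALL 1000000;  -- large enough for any "reasonable" request
    ALTER USER hgpay_link_user PROFILE hgpay_link_limit;
    -- kernel resource limits are only enforced when this is set
    ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;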
    Getting more complex, you can use Resource Manager to kill queries that run too long on the wall clock. Since the network is almost certainly going to be the bottleneck, it's probably unlikely that the CPU throttling is going to do much good-- you can probably saturate the network with a very small amount of CPU. Network throttling in my mind is an extra step up in complexity again depending on the specifics of your particular situation and what you're competing with.
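    And a hedged sketch of the Resource Manager option (hypothetical names; as I recall, 10gR2 supports the special CANCEL_SQL switch group, which cancels the current call once it exceeds a wall-clock limit):
    BEGIN
        DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
        DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('LINK_GROUP', 'HGPAY link sessions');
        DBMS_RESOURCE_MANAGER.CREATE_PLAN('LINK_PLAN', 'limit remote reporting queries');
        DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'LINK_PLAN',
            group_or_subplan => 'LINK_GROUP',
            comment          => 'cancel calls running longer than 30 minutes',
            switch_group     => 'CANCEL_SQL',
            switch_time      => 1800);
        DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'LINK_PLAN',
            group_or_subplan => 'OTHER_GROUPS',
            comment          => 'everything else unrestricted');
        DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /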
    Justin

  • Index on Partitioned Table with Some ReadOnly Tablespaces

    We have a warehouse with fact tables range partitioned on date - daily partitions, with each month's worth of partitions put into a specific monthly tablespace. Each month, we set the prior month's tablespace to READONLY. So our table ends up having data in both read-only and read-write tablespaces.
    We now have a change we need to make to one of the fact tables - we need to add a new column AND add an index to that column. But because we have partitions in readonly state, Oracle doesn't let us create the index and it also doesn't let us update the local unique key (unique index).
    Is there a way we can do this without having to put the tablespaces in read-write mode? As importantly, what happens when we offline or drop some of the older tablespaces (for archiving purposes)? We need to find a way to add the index on just the read-write partitions.
    Thanks.

    Hi,
    Oracle 10g has improvements to maintain local partitioned indexes when you use partition DDL commands:
    add partition, split partition, merge partition, move partition.
    Also, the associated indexes no longer have to be stored in the same tablespace as the table (i.e. the answer to your question).
    On Oracle 9i: local indexes are recommended on data warehouse platforms; in an OLTP system, global indexes are more common. On a data warehouse, problems can be isolated to one partition, the partitions moved and made read-only (like yours), and no local indexes need to be rebuilt.
    Regarding your issue (you need to add a new column AND an index on that column to one of the fact tables): to maintain the simplicity + functionality of your DW configuration, I think you need to change the tablespaces to read-write, update the objects, then alter them back to read-only.
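    Sketched with hypothetical names (ts_200701 standing in for one monthly tablespace, fact_t for the fact table):
    ALTER TABLESPACE ts_200701 READ WRITE;   -- repeat for each read-only monthly tablespace
    ALTER TABLE fact_t ADD (new_col NUMBER);
    CREATE INDEX fact_t_newcol_ix ON fact_t (new_col) LOCAL;
    ALTER TABLESPACE ts_200701 READ ONLY;    -- and set them back afterwards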
    fyi
    http://www.oracle.com/technology/deploy/availability/htdocs/online_ops.html

  • Cannot alter partitioned table with spatial column in Oracle 11.2.0.2.0

    Hello,
    I have possibly discovered a bug in Oracle 11.2.0.2.0.
    This script works fine with Oracle 11.2.0.1.0:
    create table GEO_TABLE (
        ID NUMBER(19) not null,
        PART_NAME VARCHAR2(50) not null,
        GEO_POS MDSYS.SDO_GEOMETRY,
        constraint PK_GEO_TABLE primary key (ID)
    )
    SEGMENT CREATION IMMEDIATE
    partition by list ( PART_NAME ) (partition P_DEFAULT values (DEFAULT))
    enable row movement;
    ALTER TABLE GEO_TABLE ADD (COLUMN2 NUMBER(8) DEFAULT 0 NOT NULL);
    With Oracle 11.2.0.2.0 (on SLES 11, 64-bit) I get this error message on the ALTER TABLE statement (translated from the German "Interner Fehlercode, Argumente"):
    SQL Error: ORA-00600: internal error code, arguments: [kkpoffoc], [], [], [], [], [], [], [], [], [], [], []
    00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
    *Cause:    This is the generic internal error number for Oracle program exceptions. This indicates that a process has encountered an exceptional condition.
    *Action:   Report as a bug - the first argument is the internal error number
    Can anyone reproduce this behaviour?
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE     11.2.0.2.0     Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    Metalink does not have any references for this error - please raise an SR with Oracle Support.

  • Importing a Partitioned Table with 10 Million Records.

    I've been trying to import from a dump file using:
    imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf log=/o1/implg.log grants=N &
    import done in US7ASCII character set and UTF8 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    The dump contains a table 'Tab_Mil_Rec' with almost 10 million records and 10 partitions.
    Done in 9i, on Solaris 9.
    The problem is that the process abruptly ends at 'Tab_Mil_Rec'. The table is created but nothing is imported. I checked the log file: it has logged events before this table, but nothing (not even errors or a termination message) after that. No errors were thrown at the OS level either, as far as I can tell - this was run as a background job.
    Can anybody guess what went wrong and what the next step is?

    Hi,
    Can you try importing this table partition by partition?
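    imp accepts table:partition notation, so something like this, one partition at a time (p01 is a placeholder for a real partition name; ignore=Y because the table has already been created):
    imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf tables=(Tab_Mil_Rec:p01) log=/o1/implg_p01.log grants=N ignore=Y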
    Cheers
