Partition table vs redo generation
Hi all,
I'm working with an enterprise 10.2.0.5 Oracle database
I have a huge partitioned table and I'm interested in generating subpartitions.
My question is whether this process (creating these subpartitions) will generate more redo or undo.
Thanks,
dbajug
If you are only adding new, empty, SubPartitions, the redo generation will be minimal. If and when you load data into these SubPartitions, you will notice undo and redo generation, depending on how the data is loaded.
If you SPLIT a non-empty Partition, Oracle has to "move" rows into the newly created partition. This will generate undo and redo.
What you should do is to run a few tests of your planned actions and monitor the volume of undo and redo generated.
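One way to run such a test (a minimal sketch; the table and partition names below are hypothetical) is to snapshot the session's redo and undo statistics before and after the operation:

```sql
-- Before the operation: note the session's current redo/undo figures.
SELECT b.name, a.value
  FROM v$mystat a, v$statname b
 WHERE a.statistic# = b.statistic#
   AND b.name IN ('redo size', 'undo change vector size');

-- Run the planned action, e.g. a split (hypothetical names):
ALTER TABLE big_tab SPLIT PARTITION p_2011
  AT (TO_DATE('01-JUL-2011','DD-MON-YYYY'))
  INTO (PARTITION p_2011_h1, PARTITION p_2011_h2);

-- Re-run the statistics query; the deltas are the redo and undo
-- generated by this session for the split.
```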
Hemant K Chitale
Similar Messages
-
Reducing REDO generation from a current data refresh process
Hello,
I need to resolve an issue where a database schema is maintained with one delete followed by tons of bulk inserts. The problem is that the vast majority of deleted rows are reinserted as-is. This process deletes and reinserts about 1,175,000 rows of data!
The delete clause is:
- delete from table where term >= '200705';
The data before '200705' is very stable and doesn't need to be refreshed.
The table is 9,709,797 rows big.
Here is an excerpt of the cardinalities for each term code:
TERM NB_REGS
200001 117130
200005 23584
200009 123167
200101 115640
200105 24640
200109 121908
200201 117516
200205 24477
200209 125655
200301 120222
200305 26678
200309 129541
200401 123875
200405 27283
200409 131232
200501 124926
200505 27155
200509 130725
200601 122820
200605 27902
200609 129807
200701 121121
200705 27699
200709 129691
200801 120937
200805 29062
200809 130251
200901 122753
200905 27745
200909 135598
201001 127810
201005 29986
201009 142268
201101 133285
201105 18075
This kind of operation is generating a LOT of redo logs: on average 25 GB per day.
What are the best options available to us to reduce redo generation without changing the current process too much?
- make tables NOLOGGING? (with mandatory use of the APPEND hint?)
- use a global temporary table for staging and merge against the real table?
- use partitions and truncate the reloaded one? But that does not reduce the redo generated by the subsequent inserts...?
This does not have to be transactional.
We use 10gR2 on Windows 64 bits.
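If recoverability is acceptable (the data can be re-pushed from the datamart), a direct-path variant could look like this. This is only a sketch; the table names regs and datamart_regs are made up for illustration:

```sql
-- NOLOGGING only helps direct-path operations; conventional DML still logs.
ALTER TABLE regs NOLOGGING;

DELETE FROM regs WHERE term >= '200705';  -- still generates full undo/redo
COMMIT;

-- Direct-path insert: with NOLOGGING (and no database-level FORCE LOGGING),
-- minimal redo is generated for the table data. Index maintenance still
-- produces redo, and the loaded data is unrecoverable until the next backup.
INSERT /*+ APPEND */ INTO regs
SELECT * FROM datamart_regs WHERE term >= '200705';
COMMIT;
```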
Thanks
Bruno
Yes, you got it, these are terms (summer 2007, beginning in May).
Is the perverse effect of truncating and then inserting in direct-path mode pushing the high water mark up day after day, while leaving unused space in the truncated partitions? Maybe we should not REUSE STORAGE on truncation...
This data can be recovered easily from the datamart that pushes it, which means we can use NOLOGGING and direct-path mode without any permanent loss of data.
Should I have one partition for each term, or only one for the stable terms and one for the refreshed terms? -
Question about redo generation
select * from v$version;
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
"CORE 11.2.0.1.0 Production"
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Setup for test:
create table parent_1 (id number(12) NOT NULL);
alter table parent_1 add constraint parent_1_pk primary key (id);
create table parent_2 (id number(12) NOT NULL);
alter table parent_2 add constraint parent_2_pk primary key (id);
create table child_table (ref_id number(12) NOT NULL,ref_id2 number(12) NOT NULL, created_at timestamp(6));
alter table child_table add constraint child_table_pk primary key (ref_id, ref_id2);
alter table child_table add constraint child_table_fk1 foreign key (ref_id) references parent_1(id);
alter table child_table add constraint child_table_fk2 foreign key (ref_id2) references parent_2(id);
insert into parent_1 select rownum from all_objects;
insert into parent_2 values (1);
insert into parent_2 values (2);
insert into child_table (select id, 1, systimestamp from parent_1);
insert into child_table (select id, 2, systimestamp from parent_1);
commit;
Code version 1:
declare
type t_ids is table of NUMBER(12);
v_ids t_ids;
start_redo NUMBER;
end_redo NUMBER;
cursor c_data is SELECT id FROM parent_1;
begin
select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
open c_data;
LOOP
FETCH c_data
BULK COLLECT INTO v_ids LIMIT 1000;
exit;
end loop;
CLOSE c_data;
for pos in v_ids.first..v_ids.last LOOP
BEGIN
insert into child_table values (v_ids(pos), 2, systimestamp);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
END;
END LOOP;
select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/
Version 2:
declare
type t_ids is table of NUMBER(12);
v_ids t_ids;
start_redo NUMBER;
end_redo NUMBER;
cursor c_data is SELECT id FROM parent_1;
ex_dml_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
pos NUMBER;
l_error_count NUMBER;
begin
select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
open c_data;
LOOP
FETCH c_data
BULK COLLECT INTO v_ids LIMIT 1000;
exit;
end loop;
CLOSE c_data;
BEGIN
FORALL i IN v_ids.first .. v_ids.last SAVE EXCEPTIONS
insert into child_table values (v_ids(i), 2, systimestamp);
EXCEPTION
WHEN ex_dml_errors THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
FOR i IN 1 .. l_error_count LOOP
pos := SQL%BULK_EXCEPTIONS(i).error_index;
update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
END LOOP;
END;
select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
end;
/
Version 1 output:
Created redo : 682644
Version 2 output:
Created redo : 7499364
Why is version 2 generating significantly more redo?
As both pieces of code erroneously replace non-procedural code with procedural code, ignoring the power of an RDBMS to process sets, and are examples of slow-by-slow programming,
both pieces of code are undesirable, so the difference in redo generation doesn't matter.
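For reference, a set-based alternative (a sketch using the tables from the setup above) would replace both versions with a single MERGE:

```sql
MERGE INTO child_table c
USING (SELECT id FROM parent_1) p
ON (c.ref_id = p.id AND c.ref_id2 = 2)
WHEN MATCHED THEN
  UPDATE SET c.created_at = systimestamp
WHEN NOT MATCHED THEN
  INSERT (ref_id, ref_id2, created_at)
  VALUES (p.id, 2, systimestamp);
```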
Sybrand Bakker
Senior Oracle DBA -
High REDO Generation for enqueue and dequeue
Hi,
We have found high redo generation while enqueue and dequeue. Which is in-turn affecting our database performance.
Please find a sample test result below :
Create the Type:-
CREATE OR REPLACE
type src_message_type_new_1 as object(
no varchar2(10),
title varchar2(30),
text varchar2(2000));
Create the Queue and Queue Table:-
CREATE OR REPLACE procedure create_src_queue
as
begin
DBMS_AQADM.CREATE_QUEUE_TABLE
(queue_table => 'src_queue_tbl_1',
queue_payload_type => 'src_message_type_new_1',
--multiple_consumers => TRUE,
compatible => '10.1',
storage_clause=>'TABLESPACE EDW_OBJ_AUTO_9',
comment => 'General message queue table created on ' ||
TO_CHAR(SYSDATE,'MON-DD-YYYY HH24:MI:SS'));
commit;
DBMS_AQADM.CREATE_QUEUE
(queue_name => 'src_queue_1',
queue_table => 'src_queue_tbl_1',
comment => 'Test Queue Number 1');
commit;
dbms_aqadm.start_queue('src_queue_1');
commit;
end;
Redo Log Size:-
select
n.name, t.value
from
v$mystat t join
v$statname n
on
t.statistic# = n.statistic#
where
n.name = 'redo size'
Output:-
595184
Enqueue Message into the Queue Table:-
CREATE OR REPLACE PROCEDURE enque_msg_ab
as
queue_options DBMS_AQ.ENQUEUE_OPTIONS_T;
message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
message_id raw(16);
my_message dev_hub.src_message_type_new_1;
begin
my_message:=src_message_type_new_1(
'1',
'This is a sample message',
'This message has been posted on');
DBMS_AQ.ENQUEUE(
queue_name=>'dev_hub.src_queue_1',
enqueue_options=>queue_options,
message_properties=>message_properties,
payload=>my_message,
msgid =>message_id);
commit;
end;
Redo Log Size:-
select
n.name, t.value
from
v$mystat t join
v$statname n
on
t.statistic# = n.statistic#
where
n.name = 'redo size'
Output:-
596740
Can any one tell us the reason for this high redo generation and how can this can be controlled?
Regards,
Koushik
Please find my answers below:
What full version of Oracle?
- 10.1.0.5
How large is the average message?
- A few bytes only; at most 1-2 KB and not more than this.
What kind of performance problem is 300G of redo causing? How? Have you ran a statspack report? What did it show?
- Actually we are facing a performance issue, from an overall perspective, with our daily batch processing, which is now causing a delay in the batch SLA. So we produced an AWR report for our database and found that total redo generation is around 400 GB, among which 300 GB has been generated by the enqueue-dequeue process.
What other activity is taking place on this instance? That is, is all this redo really being generated as the result of the AQ activity or is some of it the result of the messages being processed? How are the messages created?
- Normal batch processing every day. The batch process also generates redo, but the amount is low compared to the enqueue-dequeue process.
Have you looked at providing a separate physical disk stripe for the online redo logs and for the archive log location from the database data file physical disk and IO channels?
- No; as we are not the production DBAs, we don't have direct access to the production database.
What kind of file system and disk are you using?
- I am not sure about it. I will try to confirm with the production DBA. Is there any other way to find out whether it is on a filesystem or a raw device?
Can you please provide any help on this topic.
Regards,
Koushik -
Dear all,
10.2.0.4 on solaris 10
We have a huge partitioned table. Now I want to drop some partitions. Will this generate huge redo? How can I speed up the dropping? Will dropping partitions automatically drop the partitioned indexes as well?
Kai
Hi,
Truncating partitions generates almost no redo. Use of truncate will make your operation faster.
ALTER TABLE <table_name>
TRUNCATE PARTITION <partition_name>;
The below difference makes you understand best.
Deletes perform normal DML. That is, they take locks on rows, they generate redo (lots of it), and they require segments in the UNDO tablespace. Deletes clear records out of blocks carefully. If a mistake is made, a rollback can be issued to restore the records prior to a commit.
Truncates, on the other hand, are DDL and, in a sense, cheat. A truncate moves the High Water Mark of the table back to zero. No row-level locks are taken, and virtually no redo or rollback is generated. By resetting the High Water Mark, the truncate prevents reading of the table's old data, so it has the same effect as a delete, but without the overhead.
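To the original question about dropping partitions, a sketch (table and partition names are hypothetical):

```sql
-- Dropping a partition removes its local index partitions automatically.
-- Global indexes become UNUSABLE unless maintained in the same statement:
ALTER TABLE sales DROP PARTITION p_2009_q1 UPDATE GLOBAL INDEXES;

-- Or truncate instead of delete, as described above:
ALTER TABLE sales TRUNCATE PARTITION p_2009_q1;
```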
Hope I answered your question.
Best regards,
Rafi.
http://rafioracledba.blogspot.com/ -
Hi,
We have a problem with redo generation. For the last few days, redo generation has been higher than normal. There are no changes at the application level. I don't know where to start. I tried to compare AWR reports, but that didn't help.
1. Is it possible to find how much redo is generated for a DML statement, segment by segment (table segment, index segment), when it is executed?
For example: the table M_MARCH has 19 columns and 6 indexes. Another table, M_REPORT, has 59 columns and 5 indexes. The query combines both tables.
We need to find whether the indexes are really usable or not.
2. Is there any other way to reduce redo generation?
Br,
Rajesh
High redo generation can be of two types:
1. During a specific duration of the day.
2. Sudden increase in the archive logs observed.
In both the cases, first thing to be checked is about any modifications done either at the database level(modifying any parameters, any maintenance operations performed,..) and application level (deployment of new application, modification in the code, increase in the users,..).
To know the exact reason for the high redo, we need information about the redo activity and the details of the load. Following information need to be collected for the duration of high redo generation.
1] To know the trend of log switches below queries can be used.
SQL> alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
Session altered.
SQL> select trunc(first_time, 'HH'), count(*)
  2  from v$loghist
  3  group by trunc(first_time, 'HH')
  4  order by trunc(first_time, 'HH');
TRUNC(FIRST_TIME,'HH   COUNT(*)
-------------------- ----------
25-MAY-2008 20:00:00          1
26-MAY-2008 12:00:00          1
26-MAY-2008 13:00:00          1
27-MAY-2008 15:00:00          2
28-MAY-2008 12:00:00          1  <- Indicates 1 log switch from 12PM to 1PM.
28-MAY-2008 18:00:00          1
29-MAY-2008 11:00:00         39
29-MAY-2008 12:00:00        135
29-MAY-2008 13:00:00        126
29-MAY-2008 14:00:00        135  <- Indicates 135 log switches from 2-3 PM.
29-MAY-2008 15:00:00        112
We can also get the information about the log switches from alert log (by looking at the messages 'Thread 1 advanced to log sequence' and counting them for the duration), AWR report.
1] If you are in 10g or higher version and have license for AWR, then you can collect AWR report for the problematic time else go for statspack report.
a) AWR Report
-- Create an AWR snapshot when you are able to reproduce the issue:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
-- After 30 minutes, create a new snapshot:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
-- Now run $ORACLE_HOME/rdbms/admin/awrrpt.sql
b) Statspack Report
SQL> connect perfstat/<Password>
SQL> execute statspack.snap;
-- After 30 minutes
SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport
In the AWR/Statspack report look out for queries with highest gets/execution. You can check in the "load profile" section for "Redo size" and compare it with non-problematic duration.
2] We need to mine the archivelogs generated during the time frame of high redo generation.
-- Use the DBMS_LOGMNR.ADD_LOGFILE procedure to create the list of logs to be analyzed:
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<filename>', options => dbms_logmnr.new);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<file_name>', options => dbms_logmnr.addfile);
-- Start the logminer
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select operation, seg_owner, seg_name, count(*)
  2  from v$logmnr_contents
  3  group by seg_owner, seg_name, operation;
Please refer to below article if there is any problem in using logminer.
Note 62508.1 - The LogMiner Utility
We cannot get the redo size using LogMiner; we can only get the user, operation, and schema responsible for the high redo.
3] Run below query to know the session generating high redo at any specific time.
col program for a10
col username for a10
select to_char(sysdate,'hh24:mi'), username, program, a.sid, a.serial#, b.name, c.value
from v$session a, v$statname b, v$sesstat c
where b.STATISTIC# = c.STATISTIC#
and c.sid = a.sid
and b.name like 'redo%'
order by value;
This will give us all the statistics related to redo. We should be most interested in "redo size" (the total amount of redo generated, in bytes).
This will give us the SID of the problematic session.
In the above query output, look for the statistics with the highest values; they will give a fair idea of the problem. -
Drop partitions in HASH partitioned table
SELECT * FROM product_component_version
NLSRTL 10.2.0.4.0 Production
Oracle Database 10g Enterprise Edition 10.2.0.4.0 64bit
PL/SQL 10.2.0.4.0 Production
TNS for Solaris: 10.2.0.4.0 Production
I have a table which is partitioned by HASH into several partitions. I would like to remove them all, the same way I can DROP partitions in LIST- or RANGE-partitioned tables.
I COALESCE-d my table until only one partition remained. Now I've got a table with one HASH partition and I would like to remove it and end up with an unpartitioned table.
How could this be accomplished?
Thank you!
Verdi wrote:
I have a table which is partitioned by HASH into several partitions. I would like to remove them all the same way I can DROP partitions in a LIST or RANGE partitioned tables.
I COALESCE-d my table until it remained with only one partition. Now I've got a table with one HASH partition and I would like to remove it and to end up with unpartitioned table.
How could it be accomplished?
You cannot turn a partitioned table into a non-partitioned table, but you could create a replacement table (including indexes etc.) and then use the 'exchange partition' option on the partitioned table. This will modify the data dictionary so the data segments for the partition exchange names with the data segments for the new table - which gives you a simple table, holding the data, in minimum time and with (virtually) no undo and redo.
The drawback to this method is that you have to sort out all the dependencies and privileges.
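The exchange could look something like this (a sketch; object names are made up):

```sql
-- 1. Create an empty table with a matching column layout
--    (indexes/constraints should be recreated to match as well):
CREATE TABLE my_tab_plain AS SELECT * FROM my_tab WHERE 1 = 0;

-- 2. Swap segments: the remaining hash partition's data segment now
--    belongs to my_tab_plain; only the data dictionary is updated.
ALTER TABLE my_tab EXCHANGE PARTITION <partition_name> WITH TABLE my_tab_plain;

-- 3. Drop the now-empty partitioned table and take over its name.
DROP TABLE my_tab;
RENAME my_tab_plain TO my_tab;
```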
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
I am using Oracle 9i and Windows XP Professional. I enabled autotrace for the scott user. Now when I issue SELECT, INSERT, UPDATE, or DELETE it shows me statistics, but when I issued ALTER TABLE y ADD (XYZ NUMBER(1)); it didn't give me any statistics. What does this mean:
1. Do DDLs not generate REDO?
2. What should I do to get the statistics for every command?
Thanks & Regards
Hello Sir,
1. alter session set sql_trace=true;
2. DMLs are not showing me information about redo generation.
3. set autotrace traceonly statistics; now it is showing statistics.
But I wish to get information regarding the redo generated by DDLs. How? I mean, how much redo is generated by a DDL?
Thanks
I issued set autotrace traceonly statistics; -
Creating Local partitioned index on Range-Partitioned table.
Hi All,
Database Version: Oracle 8i
OS Platform: Solaris
I need to create a local partitioned index on a column of a range-partitioned table having 8 million records. Is there any way to perform this in the fastest way?
I think we can use the NOLOGGING, PARALLEL, and UNRECOVERABLE options.
But besides considering undo and redo, mainly the time required to perform this activity... which is the best method?
Please guide me to perform it in the fastest way, and also online!
-Yasser
YasserRACDBA wrote:
3. CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) LOCAL
NOLOGGING PARALLEL (DEGREE 14) online;
4. Analyze the table with cascade option.
Do you think this is the only method to perform the operation in the fastest way? The table contains 8 million records and it's a production database.
Yasser,
if all partitions should go to the same tablespace then you don't need to specify it for each partition.
In addition you could use the "COMPUTE STATISTICS" clause then you don't need to analyze, if you want to do it only because of the added index.
If you want to do it separately, then analyze only the index. Of course, if you want to analyze the table, too, your approach is fine.
So this is how the statement could look like:
CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) TABLESPACE CS_BILLING LOCAL NOLOGGING PARALLEL (DEGREE 14) ONLINE COMPUTE STATISTICS;
If this operation exceeds a particular time window, can I kill the process? What's the worst that will happen if I kill this process?
Killing an ONLINE operation is a bit of a mess... You're already quite on the edge (parallel, online, possibly compute statistics) with this statement. The ONLINE operation creates an IOT table to record the changes to the underlying table during the build operation. All these things need to be cleaned up if the operation fails or the process dies/gets killed. This cleanup is supposed to be performed by the SMON process, if I remember correctly. I remember that I once ran into trouble in 8i after such an operation failed; I may even have got an ORA-00600 when I tried to access the table afterwards.
It's not unlikely that your 8.1.7.2 will give you trouble with this kind of statement, so be prepared.
How much time may it take? (Just to be on the safe side)
The time it takes to scan the whole table (if the information can't be read from another index), plus the sorting operation, plus writing the segment, plus any wait time due to concurrent DML/locks, plus the time to process the table that holds the changes made to the table while the index was being built.
You can try to run an EXPLAIN PLAN on your create index statement which will give you a cost indication if you're using the cost based optimizer.
Please suggest any other way that exists to perform this in the fastest way.
Since you will need to sort 8 million rows, if you have sufficient memory you could bump up the SORT_AREA_SIZE for your session temporarily to sort as much as possible in RAM.
-- Use e.g. 100000000 to allow a 100M SORT_AREA_SIZE
ALTER SESSION SET SORT_AREA_SIZE = <something_large>;
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hi,
I am having a problem with the redo generation rate. The size of the redo logfiles in my DB is 400 MB and a log switch happens approximately every 10 minutes, which I think is a very fast interval compared to the size of my redo logfiles. I have also checked the supplemental logging (LogMiner) settings but did not find any problem there.
SQL> select SUPPLEMENTAL_LOG_DATA_MIN,SUPPLEMENTAL_LOG_DATA_PK,SUPPLEMENTAL_LOG_DATA_UI from v$database;
SUP SUP SUP
NO NO NO
Please, can anyone tell me what is wrong with the redo generation rate?
First of all, it simply means your system is doing lots of work (generating 400MB of redo per 10 minutes); if you have work to do, you need to do it. Besides, 400MB per 10 minutes is not that big for some busy systems. So killing sessions may not be a good idea; your system just has that much work to do. If you have millions of rows that need to be loaded, you just have to do it.
Secondly, you may query v$sesstat for the statistic named "redo size" periodically (i.e. every 10 minutes) and get some idea of when these peaks happen during the day. And use SQL_TRACE to get those SQL statements in the tkprof trace report; find out if you can optimize the SQL to generate less redo, or find alternative ways. Some common options to minimize redo:
insert /*+ append */ hint
create table as (select ...)
table nologging
if updating millions of rows, can it be done by creating new tables
is it possible to use the temp table feature (some applications use permanent tables where they could use temp tables)
Anyway, you have to know what your database is doing while generating tons of redo. Until you find out what SQLs are generating the large redo, you can not solve the problem at system level by killing sessions or so.
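Sketches of the options listed above (the table names are made up for illustration):

```sql
-- 1. Direct-path insert (minimal redo only if the table is NOLOGGING
--    and the database is not in FORCE LOGGING mode):
INSERT /*+ APPEND */ INTO target_tab SELECT * FROM source_tab;
COMMIT;

-- 2. CTAS, optionally NOLOGGING:
CREATE TABLE new_tab NOLOGGING AS SELECT * FROM source_tab;

-- 3. Mark an existing table NOLOGGING (affects direct-path operations only):
ALTER TABLE target_tab NOLOGGING;

-- 4. A global temporary table generates undo (and thus some redo),
--    but no redo for the table data itself:
CREATE GLOBAL TEMPORARY TABLE work_tab (id NUMBER) ON COMMIT DELETE ROWS;
```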
Regards,
Jianhui -
Hi,
My question is about redo generation when using the APPEND hint. I have a database which is in FORCE LOGGING mode for a standby database. If I use the APPEND hint, will it generate any redo? I wonder, will the standby DB be the same as the primary after using the APPEND hint?
thanks.
Hi,
thanks for answer.
the sentence says
"if the database is in ARCHIVELOG and FORCE LOGGING mode, then direct-path SQL generates redo for both LOGGING and NOLOGGING tables." This is my case.
I have opened the archived log with dbms_logmnr but I could not find any redo. So I wonder, will the standby DB not be synchronized with the primary?
thanks. -
Global Temporary table and REDO
Dear Friends,
In my production database we are facing a problem of excessive redo generation. After initial analysis, we realised that we are using a lot of global temporary tables for storing temp data/calculations, and they are generating redo.
I know that a GTT doesn't create redo directly, but as it creates UNDO, and undo is protected by redo, it creates some redo, though less than a normal table.
Solution:
I googled and found that if I use direct-path insertion (using the APPEND hint) into a global temporary table then I can avoid this redo generation, as specified in this link (http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:15826034070548)
I tried this solution on my GTT but the APPEND hint is not making any difference. Please check the following results. Could you please guide me if I am doing something wrong, or is there any other way to avoid redo on a GTT?
JM@ORA10G>insert into JM_temp values(1,'aaaaaaaaaaaaaaaaaaaaaaa');
1 row created.
Elapsed: 00:00:00.00
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
Statistics
0 recursive calls
2 db block gets
1 consistent gets
0 physical reads
*280 redo size*
918 bytes sent via SQL*Net to client
967 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
JM@ORA10G>rollback ;
Rollback complete.
JM@ORA10G>insert /*+ append */ into JM_temp values(1,'aaaaaaaaaaaaaaaaaaaaaaa');
1 row created.
Elapsed: 00:00:00.00
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
Statistics
0 recursive calls
2 db block gets
1 consistent gets
0 physical reads
*280 redo size*
917 bytes sent via SQL*Net to client
981 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
Hi,
I tried avoiding GTTs in my code but I realised that they are so tightly integrated that I cannot remove them. The operations I am performing on my GTTs are:
1. Insertion of data
2. Fetch data from main tables with joins on GTT
3. Update GTT with calculated values.
My understanding is that the update steps are generating the most redo.
Please help me: how can I reduce my redo generation in such scenarios?
Thanks. -
Error while creating partition table
Hi friends, I am getting an error while trying to create a partitioned table using range partitioning:
ORA-00906: missing left parenthesis.
I used the following statement to create the partitioned table:
CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER NUMBER,
ORDER_DATE DATE,
CUST_NUM NUMBER,
TOTAL_PRICE NUMBER,
TOTAL_TAX NUMBER,
TOTAL_SHIPPING NUMBER)
PARTITION BY RANGE(ORDER_DATE)
PARTITION SO99Q1 VALUES LESS THAN TO_DATE(‘01-APR-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q2 VALUES LESS THAN TO_DATE(‘01-JUL-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q3 VALUES LESS THAN TO_DATE(‘01-OCT-1999’, ‘DD-MON-YYYY’),
PARTITION SO99Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q1 VALUES LESS THAN TO_DATE(‘01-APR-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q2 VALUES LESS THAN TO_DATE(‘01-JUL-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q3 VALUES LESS THAN TO_DATE(‘01-OCT-2000’, ‘DD-MON-YYYY’),
PARTITION SO00Q4 VALUES LESS THAN TO_DATE(‘01-JAN-2001’, ‘DD-MON-YYYY’)
;
More than one of them. Try this instead:
CREATE TABLE SAMPLE_ORDERS
(ORDER_NUMBER NUMBER,
ORDER_DATE DATE,
CUST_NUM NUMBER,
TOTAL_PRICE NUMBER,
TOTAL_TAX NUMBER,
TOTAL_SHIPPING NUMBER)
PARTITION BY RANGE(ORDER_DATE) (
PARTITION SO99Q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
PARTITION SO99Q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
PARTITION SO99Q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
PARTITION SO99Q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
PARTITION SO00Q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
PARTITION SO00Q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
PARTITION SO00Q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
PARTITION SO00Q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY')))
In the future, if you are having problems, go to Morgan's Library at www.psoug.org.
Find a working demo, copy it, then modify it for your purposes. -
Local index vs global index in partitioned tables
Hi,
I want to know the differences between a global and a local index.
I'm working with partitioned tables of about 10 million rows and 40 partitions.
I know that when your table is partitioned and your index is non-partitioned, it is possible that
some database operations make your index unusable and you have to rebuild it; for example,
when you truncate a partition your global index becomes unusable. Is there any other operation
that makes a global index unusable?
I think the advantage of a global index is that it takes less space than a local one and is easier to rebuild,
and the advantage of a local index is that it is more effective at resolving a query, isn't it?
Any advice and help about local vs global indexes in partitioned tables will be greatly appreciated.
Thanks in advance
Here is the documentation -> http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#sthref2570
In general, you should use global indexes for OLTP applications and local indexes for data warehousing or DSS applications. Also, whenever possible, you should try to use local indexes because they are easier to manage. When deciding what kind of partitioned index to use, you should consider the following guidelines in order:
1. If the table partitioning column is a subset of the index keys, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 2.
2. If the index is unique, use a global index. If this is the case, you are finished. If this is not the case, continue to guideline 3.
3. If your priority is manageability, use a local index. If this is the case, you are finished. If this is not the case, continue to guideline 4.
4. If the application is an OLTP one and users need quick response times, use a global index. If the application is a DSS one and users are more interested in throughput, use a local index.
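For illustration, the two kinds of index on a hypothetical SALES table range-partitioned by SALE_DATE might look like:

```sql
-- Local index: equipartitioned with the table and easy to manage;
-- each table partition gets its own index partition.
CREATE INDEX sales_cust_lix ON sales (cust_id) LOCAL;

-- Global partitioned index: partitioned independently of the table;
-- becomes unusable when table partitions are truncated or dropped
-- without the UPDATE GLOBAL INDEXES clause.
CREATE UNIQUE INDEX sales_id_gix ON sales (sale_id)
  GLOBAL PARTITION BY RANGE (sale_id)
  (PARTITION gp1 VALUES LESS THAN (1000000),
   PARTITION gp2 VALUES LESS THAN (MAXVALUE));
```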
Kind regards,
Tonguç -
Insert statement does not insert all records from a partitioned table
Hi
I need to insert records into a table from a partitioned table. I set up a job and, to my surprise, I found that the insert statement is not inserting all the records from the partitioned table.
For example, when I use a select statement on the partitioned table
it gives me 400 records, but when I insert it gives me only 100 records.
Can anyone help in this matter?
INSERT INTO TABLENAME(COLUMNS)
(SELECT *
FROM SCHEMA1.TABLENAME1
JOIN SCHEMA2.TABLENAME2a
ON CONDITION
JOIN SCHEMA2.TABLENAME2 b
ON CONDITION AND CONDITION
WHERE CONDITION
AND CONDITION
AND CONDITION
AND CONDITION
AND (CONDITION)
GROUP BY COLUMNS
HAVING SUM(COLUMN) > 0