Reduce redo when shrinking a large partitioned LOB table?
Hi,
Oracle 10.2.0.5 - Solaris 10 - Data Guard
We have a large (30 TB) partitioned table with two columns, ID and BODY. It is range partitioned on ID and hash subpartitioned, with a million records per range partition and between 500 GB and 1 TB of data per partition.
We never modify the LOBs, but we do delete around 40% of them over time. After a partition is full (the ID sequence is greater than the partition bound) its data becomes read only. Because of this partitioning pattern, the space freed by deletes is rarely reused by other inserts.
Looking at one of the partitions (one partition = one tablespace), we have a datafile size of 1100 GB, a segment size of 1095 GB, and a DBMS_LOB.GETLENGTH total of 370 GB - so about 725 GB of "free" space.
While we can use something like
alter table test_lob modify partition p1 lob (body) (shrink space cascade);
this generates a lot of redo, which we then have to ship to the standby and apply.
What other methods could be used to reclaim this space with reduced / no redo?
Thanks
Mark
Fran,
As I said, we're using Data Guard - so force logging. I'm looking for an approach that avoids the redo generation rather than just turning it off.
I'm currently wondering whether exchanging partitions plus transportable tablespaces might work, so I can do the work in a non-Data Guard database, then swap it back in and just copy that datafile across. The data is read only now.
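For what it's worth, a rough sketch of that exchange-and-transport idea, assuming one partition per tablespace as described. All object names besides test_lob/p1/body are illustrative, and plugging a tablespace into a Data Guard configuration needs the documented transportable-tablespace procedure (copy the datafile to the standby by hand before the metadata import):
-- On the primary: swap the full, read-only partition out to a standalone table
ALTER TABLE test_lob EXCHANGE PARTITION p1 WITH TABLE test_lob_p1
INCLUDING INDEXES WITHOUT VALIDATION;
-- Make its tablespace read only and transport it to a scratch database
-- that is not running force logging
ALTER TABLESPACE ts_p1 READ ONLY;
-- expdp ... TRANSPORT_TABLESPACES=ts_p1, then copy the datafile across
-- On the scratch database: rebuild the LOB segment compactly with minimal redo
ALTER TABLE test_lob_p1 MOVE LOB (body) STORE AS (TABLESPACE ts_p1_new NOLOGGING);
-- Transport ts_p1_new back, plug it in on the primary, copy its datafile
-- to the standby by hand, then swap the compacted data back in
ALTER TABLE test_lob EXCHANGE PARTITION p1 WITH TABLE test_lob_p1
INCLUDING INDEXES WITHOUT VALIDATION;
The exchanges themselves are dictionary-only operations, so almost no redo is shipped; the bulk of the work happens outside the Data Guard configuration.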
Similar Messages
-
HI Experts,
I need your kind guidance... we have some tables whose sizes are in the GBs and which have a CLOB column. Please guide me: how can I partition a table containing a CLOB? What would be the best way to accomplish this task?
Database Version is 10.2.0.4
thanks and regards,
Edited by: AMIABU on Dec 25, 2010 12:59 AM
http://www.filibeto.org/unix/sun/lib/nonsun/oracle/10.2.0.1.0/B19306_01_200702/appdev.102/b14249/adlob_tables.htm#i1014212
It should help you.
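To add a rough sketch of what the target structure could look like (the table, tablespace names, and bounds here are all made up; the link above covers the details):
CREATE TABLE docs_part
( id NUMBER,
  body CLOB
)
LOB (body) STORE AS (TABLESPACE lob_ts)
PARTITION BY RANGE (id)
( PARTITION p1 VALUES LESS THAN (1000000),
  PARTITION p2 VALUES LESS THAN (2000000),
  PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
-- Load from the existing table in direct path, or use DBMS_REDEFINITION
-- if the table must stay online during the move
INSERT /*+ APPEND */ INTO docs_part SELECT id, body FROM docs;
COMMIT;
-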
"Invalid segment" when shrinking a partitioned table
I'm encountering the following error when trying to shrink space compact for a partitioned table. Would you know how I can go about this?
I need to make it work.
SQL> alter table PS_SGSN_1_MONTH modify partition P_201304 shrink space compact;
alter table PS_SGSN_1_MONTH modify partition P_201304 shrink space compact
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type
My Oracle DB version is 11gR2.
Yes, that would be the right thing to do: check how and where the MV is being used and what downtime you can get to fix this. Check if you can change the materialized view to be created based on the primary key instead of rowid.
Steps would be (sketched below):
1 drop the materialized views related
2 drop the materialized views logs
3 shrink the tables and indexes
4 recreate the materialized views log
5 recreate the materialized views
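A minimal sketch of that cycle, using the table and partition from the error above; the MV name is hypothetical, and shrink also requires row movement:
ALTER TABLE ps_sgsn_1_month ENABLE ROW MOVEMENT;
DROP MATERIALIZED VIEW mv_sgsn_1_month;
DROP MATERIALIZED VIEW LOG ON ps_sgsn_1_month;
ALTER TABLE ps_sgsn_1_month MODIFY PARTITION p_201304 SHRINK SPACE COMPACT;
CREATE MATERIALIZED VIEW LOG ON ps_sgsn_1_month WITH PRIMARY KEY;
-- then recreate mv_sgsn_1_month with its original defining query, e.g.
-- CREATE MATERIALIZED VIEW mv_sgsn_1_month REFRESH FAST WITH PRIMARY KEY AS <original query>;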
Also, there is a bug with the primary key as well. Check this:
Bug 13709220 - ORA-10663 when shrinking a master table of an MVIEW with primary key (Doc ID 13709220.8) -
When to use Filestream partitions?
We have a Web site where we do a lot of document management. We currently have a table with 370,000 records in it. When uploading a new file we check its size, and if it is below 2 GB we store it in a varchar blob column. We currently want to alter that
table and add a Filestream column and transfer the data as shown below. As you see, we are only creating one file folder, and the query will probably run for six hours or so.
We are also thinking about adding up to 5 million audio files stored in a different area. We could conceivably end up with several terabytes of file data. Should we partition and if so how many files should we store in each partition? We are using SQL Server
2012 and Windows Server 2012 R2.
--Create a ROWGUID column
USE CUR
ALTER Table documents
Add DocGUID uniqueidentifier not null ROWGUIDCOL unique default newid()
GO
--Turn on FILESTREAM
USE CUR
ALTER Table documents
SET (filestream_on=FileStreamGroup1)
GO
--Add FILESTREAM column to the table
USE CUR
ALTER Table documents
Add DocContent2 varbinary(max) FILESTREAM null
GO
-- Move data into the new column
UPDATE documents
SET DocContent2=DocContent
where doccontent is not null and doccontent2 is null
GO
--Drop the old column
ALTER Table documents
DROP column DocContent
GO
--Rename the new FILESTREAM column to the old column name
Use CUR
GO
sp_rename 'documents.DocContent2', 'DocContent','Column'
GO
Hi tomheaser,
Quote: Should we partition and if so how many files should we store in each partition?
Yes, if our database contains very large tables, we may benefit from partitioning those tables onto separate filegroups. SQL Server can then access all the drives of each partition at the same time, which may greatly reduce the time to load data.
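For illustration, a sketch of spreading a table across filegroups with a partition function and scheme; the partitioning column (an integer DocId), the filegroup names, and the boundary values below are all hypothetical:
USE CUR
GO
--Partition by an integer document id; roughly one filegroup per million rows
CREATE PARTITION FUNCTION pf_docs (int)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000, 4000000)
GO
CREATE PARTITION SCHEME ps_docs
AS PARTITION pf_docs TO (FG1, FG2, FG3, FG4, FG5)
GO
--New tables and indexes can then be created ON ps_docs(DocId)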
If you only want to reduce query time by increasing the number of filegroups, note that the limit on the maximum number of partitions in SQL Server is 15,000. But in order to maintain a balance between performance and the number of partitions, we need to consider
more things such as memory, partitioned index operations, DBCC commands, and queries. So please consider all those things first, then choose a reasonable number of partitions. For more information about the performance guidelines for table partitioning, please refer
to the following article:
http://msdn.microsoft.com/en-us/library/ms190787(v=sql.110).aspx
If you have any questions, please feel free to let me know.
Regards,
Jerry Li -
Logical standby stopped when trying to create partitions on primary (Urgent)
RDBMS Version: 10.2.0.3
Operating System and Version: Solaris 5.9
Error Number (if applicable): ORA-1119
Product (i.e. SQL*Loader, Import, etc.): Data Guard on RAC
Product Version: 10.2.0.3
logical standby stopped when trying to create partitions on primary(Urgent)
The primary is a 2-node RAC on ASM; we implemented partitions on the primary.
The logical standby stopped applying logs.
Below is the alert.log for logical stdby:
Current log# 4 seq# 860 mem# 0: +RT06_DATA/rt06/onlinelog/group_4.477.635601281
Current log# 4 seq# 860 mem# 1: +RECO/rt06/onlinelog/group_4.280.635601287
Fri Oct 19 10:41:34 2007
create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL
Fri Oct 19 10:41:34 2007
ORA-1119 signalled during: create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL...
LOGSTDBY status: ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
LOGSTDBY Apply process P004 pid=49 OS id=16403 stopped
Fri Oct 19 10:41:34 2007
Errors in file /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
Here is the trace file info:
/u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: SunOS
Node name: iscsv341.newbreed.com
Release: 5.9
Version: Generic_118558-28
Machine: sun4u
Instance name: RT06
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 16387, image: [email protected] (LSP0)
*** 2007-10-19 10:41:34.804
*** SERVICE NAME:(SYS$BACKGROUND) 2007-10-19 10:41:34.802
*** SESSION ID:(1614.205) 2007-10-19 10:41:34.802
knahcapplymain: encountered error=12801
*** 2007-10-19 10:41:34.804
ksedmp: internal or fatal error
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
KNACDMP: *******************************************************
KNACDMP: Dumping apply coordinator's context at 7fffd9e8
KNACDMP: Apply Engine # 0
KNACDMP: Apply Engine name
KNACDMP: Coordinator's Watermarks ------------------------------
KNACDMP: Apply High Watermark = 0x0000.0132b0bc
Sorry, our primary database file structure is different from the standby's. We used db_file_name_convert in the init.ora; it looks like this:
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+OT06_DATA/OT06TSG001/','+RT06_DATA/RT06/','+RECO/OT06TSG001','+RECO/RT06'
*.db_files=2000
*.db_name='OT06'
*.db_recovery_file_dest='+RECO'
Is there anything wrong with this parameter?
I tried this parameter before for cloning using an RMAN backup. That didn't work.
What exactly must be done for db_file_name_convert to work?
Even in this case I think this is the problem: it's not converting the location, and the logical standby halts.
Help me out.
let me know if you have any questions.
Thanks Regards
Raghavendra rao Yella.
Hi reega,
Thanks for your reply, our logical stdby has '+RT06_DATA/RT06'
and primary has '+OT06_DATA/OT06TSG001'
so we are using db_file_name_convert init parameter but it doesn't work.
Are there any particular steps needed to use this parameter? When I tried it for RMAN cloning it didn't work either; as a workaround I used the RMAN SET NEWNAME command for cloning.
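As far as I know, SQL Apply on a logical standby executes the DDL text verbatim and does not apply db_file_name_convert, which would explain what you are seeing. One hedged workaround (a sketch only; test it first) is to skip tablespace DDL on the standby and run the converted statement manually:
-- On the logical standby
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'TABLESPACE');
-- create the tablespace by hand in +RT06_DATA, then restart apply
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;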
Let me know if you have any questions.
Thanks in advance. -
How to manage large partitioned table
Dear all,
we have a large partitioned table with 126 columns and 380 GB of data, not indexed. Can anyone tell me how to manage it? The queries are now taking more than 5 days.
Looking forward to your reply.
Thank you
Hi,
You can store partitioned tables in separate tablespaces (a sketch follows the list below). This lets you:
Reduce the possibility of data corruption in multiple partitions
Back up and recover each partition independently
Control the mapping of partitions to disk drives (important for balancing I/O load)
Improve manageability, availability, and performance
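A minimal sketch, with all names hypothetical:
ALTER TABLE big_table MOVE PARTITION p_2007_01 TABLESPACE ts_2007_01 NOLOGGING;
ALTER TABLE big_table MOVE PARTITION p_2007_02 TABLESPACE ts_2007_02 NOLOGGING;
-- A local index on the common filter columns lets the optimizer prune
-- to the partitions a query actually touches
CREATE INDEX big_table_i1 ON big_table (filter_col) LOCAL PARALLEL 8 NOLOGGING;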
Remember, as the doc states:
The maximum number of partitions or subpartitions that a table may have is 1024K-1.
Lastly you can use SQL*Loader and the import and export utilities to load or unload data stored in partitioned tables. These utilities are all partition and subpartition aware.
Document Reference:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm
Adith -
Accessing large partitioned tables over a database link - any gotchas?
Hi,
We are in the middle of a corporate acquisition and I have a question about using database links to efficiently access large tables. There are two geographically distinct database instances, both on Oracle 10.2.0.5 sitting on Linux boxes.
The primary instance (PSHR) contains a PeopleSoft HR and Payroll system and sits in our data centre.
The secondary instance (HGPAY) runs a home grown payroll application and sits in a different data centre to PSHR.
The requirement is to allow PeopleSoft (PSHR) to display targeted (one employee at a time) payroll data from the secondary instance.
For example in HGPAY
CREATE TABLE MY_PAY_DATA AS
SELECT TO_CHAR(A.RN, '00000000') "EMP" -- This is an 8 digit leading 0 unique identifier
, '20110' || to_char(B.RN) "PAY_PRD" -- This is a format of fiscal year plus fortnight in year (01-27)
, C.SOME_KEY -- This is the pay element being considered - effectively random
, 'XXXXXXXXXXXXXXXXX' "FILLER1"
, 'XXXXXXXXXXXXXXXXX' "FILLER2"
, 'XXXXXXXXXXXXXXXXX' "FILLER3"
FROM ( SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 300) A
, (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 3) B
, (SELECT TRUNC(ABS(DBMS_RANDOM.RANDOM())) "SOME_KEY" FROM DUAL CONNECT BY LEVEL <= 300) C
ORDER BY PAY_PRD, EMP
HGPAY.MY_PAY_DATA is Range Partitioned on EMP (approx 300 employees per partition) and List Sub-Partitioned on PAY_PRD (3 pay periods per sub-partition). I have limited the create statement above to represent one sub-partition of data.
On average each employee generates 300 rows in this table each pay period. The table has approx 180 million rows and growing every fortnight.
In PSHR
CREATE VIEW PS_HG_PAY_DATA (EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3)
AS SELECT EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3 FROM MY_PAY_DATA@HGPAY
PeopleSoft would then generate SQL along the lines of
SELECT * FROM PS_HG_PAY_DATA WHERE EMP = '00002561' AND PAY_PRD = '201025'
The link between the data centres where PSHR and HGPAY sit is not the best in the world, but I am expecting tens of access requests per day rather than thousands, so I believe the link should have sufficient bandwidth to meet the requirements.
I have tried a quick test on two production sized test instances and it works in that it presents the data; when I look at the explain plan I can see that the remote database is only presenting the relevant sub-partition over to PSHR rather than the whole table. Before I pat myself on the back with a "job well done" - is there a gotcha that I am missing in using a dblink to access partitioned big tables?
Yes, that's about right. A lot of this depends on exactly what happens in various "oops" scenarios - are you, for example, just burning some extra CPU until someone comes to the DBA and says "my query is slow", or does saturating the network have some knock-on effect on critical apps, or do random long-running queries prevent some partition maintenance operations?
In my mind, the simplest possible solution (assuming you are using a fixed username in the database link) would be to create a profile on HGPAY for the user that is defined for the database link that set a LOGICAL_READS_PER_CALL value that was large enough to handle any "reasonable" request and low enough to quickly kill any session that tried to do something "stupid". Obviously, you'd need to define "stupid" in your environment particularly where the scope of a "simple reconciliation report" is undefined. If there are no political issues and you can adjust the profile values over time as you encounter new reports that slowly increase what is deemed "reasonable" this is likely the simplest approach. If you've got to put in a change request to change the setting that has to be reviewed by the change control board at its next quarterly meeting with the outsourced DBA vendor, on the other hand, you could turn a 30 minute report into 30 hours of work spread over 30 days. In the ideal world, though, that's where I'd start.
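For concreteness, a sketch of that profile idea; the profile name, link user, and limit value are illustrative:
-- On HGPAY: cap the work any single call from the link user can do
CREATE PROFILE pshr_link_profile LIMIT LOGICAL_READS_PER_CALL 1000000;
ALTER USER hgpay_link_user PROFILE pshr_link_profile;
-- Resource limits are only enforced while this parameter is TRUE
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;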
Getting more complex, you can use Resource Manager to kill queries that run too long on the wall clock. Since the network is almost certainly going to be the bottleneck, it's probably unlikely that the CPU throttling is going to do much good-- you can probably saturate the network with a very small amount of CPU. Network throttling in my mind is an extra step up in complexity again depending on the specifics of your particular situation and what you're competing with.
Justin -
PS Touch needs a warning message when importing files larger than 2048x2048 max resolution
I opened 6500x5000 px files into PS Touch in iPad for some minor retouching. PS Touch - without notification - reduced the images to fit within 2048 x 2048 px. It happily let me open these files, work on them and save and never let me know it was reducing the file size, rendering all the work I did utterly useless since 2048x2048 is far too small for print res for these files.
PS Touch needs a notification or warning when importing files larger than the app's max resolution. Resizing files without notification is just asinine.
Hi Jeff,
For improvements or feature requests - please create an Idea for others to vote for:
Thanks,
Ignacio -
Reducing REDO generation from a current data refresh process
Hello,
I need to resolve an issue where a schema is maintained with one delete followed by a ton of bulk inserts. The problem is that the vast majority of deleted rows are reinserted as-is. This process deletes and reinserts about 1,175,000 rows of data!
The delete clause is:
- delete from table where term >= '200705';
The data before '200705' is very stable and doesn't need to be refreshed.
The table is 9,709,797 rows.
Here is an excerpt of cardinalities for each term code:
TERM NB_REGS
200001 117130
200005 23584
200009 123167
200101 115640
200105 24640
200109 121908
200201 117516
200205 24477
200209 125655
200301 120222
200305 26678
200309 129541
200401 123875
200405 27283
200409 131232
200501 124926
200505 27155
200509 130725
200601 122820
200605 27902
200609 129807
200701 121121
200705 27699
200709 129691
200801 120937
200805 29062
200809 130251
200901 122753
200905 27745
200909 135598
201001 127810
201005 29986
201009 142268
201101 133285
201105 18075
This kind of operation generates a LOT of redo: about 25 GB per day on average.
What are the best options available to us to reduce redo generation without changing the current process too much?
- make tables in no logging ? (with mandatory use of append hint?)
- use of a global temporary table for staging and merging against the true table?
- use of partitions and truncating the reloaded one? But doesn't that leave the redo generated by the subsequent inserts?
This does not have to be transactional.
We use 10gR2 on Windows 64 bits.
Thanks
Bruno
Yes, you got it, these are terms (Summer of 2007, beginning in May).
Is the perverse effect of truncating and then inserting in direct path mode that the high water mark gets pushed up day after day while leaving unused space in the truncated partitions? Maybe we should not REUSE STORAGE on truncation...
This data can be recovered easily from the datamart that pushes it, which means we can use nologging and direct path mode without any «forever loss» of data.
Should I have one partition for each term, or only one for the stable terms and one for the refreshed terms?
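Putting the truncate-and-reload option together, a minimal sketch; the table, staging table, and partition names are hypothetical, and NOLOGGING plus APPEND is only safe because, as noted, the data can be re-pulled from the datamart:
ALTER TABLE regs NOLOGGING;
ALTER TABLE regs TRUNCATE PARTITION p_200709 DROP STORAGE;
INSERT /*+ APPEND */ INTO regs PARTITION (p_200709)
SELECT * FROM regs_stage WHERE term = '200709';
COMMIT;
-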
Increase Mac partition after shrink Windows bootcamp partition
I have a Mid 2010 Macbook Pro running Mac OS X 10.9.4 with a 500GB HD.
The HD is partitioned with 370 GB to Mac and 128 GB to Windows and I decided to shrink the Windows partition to 65 GB because I needed more space on Mac, and I barely use Windows.
I resized the Windows partition using MiniTool Partition Wizard and moved it to the end of the disk, leaving the empty space right after the Mac partition.
I'm able to boot Windows partition and use it normally. The Windows C: disk now has 65 GB.
When I boot on Mac OS X and try to use disk utility to increase the Mac partition, it says that the Windows partition still has 128 GB.
Is it possible to use the empty space I created?
Here's some information about my partitions:
$ sudo gpt -r -vv show disk0
gpt show: disk0: mediasize=500107862016; sectorsize=512; blocks=976773168
gpt show: disk0: Suspicious MBR at sector 0
gpt show: disk0: Pri GPT at sector 1
gpt show: disk0: Sec GPT at sector 976773167
start size index contents
0 1 MBR
1 1 Pri GPT header
2 32 Pri GPT table
34 6
40 409600 1 GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
409640 723603632 2 GPT part - 48465300-0000-11AA-AA11-00306543ECAC
724013272 1269544 3 GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
725282816 251490304 4 GPT part - EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
976773120 15
976773135 32 Sec GPT table
976773167 1 Sec GPT header
$ sudo gdisk /dev/disk0
GPT fdisk (gdisk) version 0.8.10
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
MBR: hybrid
BSD: not present
APM: not present
GPT: present
Found valid GPT with hybrid MBR; using GPT.
Command (? for help): p
Disk /dev/disk0: 976773168 sectors, 465.8 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): E34EA0BB-B94A-4854-AF05-02E0D06A48E5
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 976773134
Partitions will be aligned on 8-sector boundaries
Total free space is 21 sectors (10.5 KiB)
Number Start (sector) End (sector) Size Code Name
1 40 409639 200.0 MiB EF00 EFI System Partition
2 409640 724013271 345.0 GiB AF00 Macbook HD
3 724013272 725282815 619.9 MiB AB00 Recovery HD
4 725282816 976773119 119.9 GiB 0700 BOOTCAMP
$ sudo fdisk /dev/disk0
Disk: /dev/disk0 geometry: 60801/255/63 [976773168 sectors]
Signature: 0xAA55
Starting Ending
#: id cyl hd sec - cyl hd sec [ start - size]
1: EE 1023 254 63 - 1023 254 63 [ 1 - 409639] <Unknown ID>
2: AF 1023 254 63 - 1023 254 63 [ 409640 - 723603632] HFS+
3: AB 1023 254 63 - 1023 254 63 [ 724013272 - 1269544] Darwin Boot
4: 0C 1023 254 63 - 1023 254 63 [ 725282816 - 251490304] Win95 FAT32L
Power off your computer. Power it up again while pressing and holding the option/alt key. When the boot manager appears, select OS X to boot.
-
Index issue with or and between when we set one partition index to unusable
Need to understand why the optimizer is unable to use the index in the "OR" case when we set one index partition to unusable, while the same query with BETWEEN does use the index.
The "OR" condition fetches less data than "BETWEEN", yet the optimizer is still unable to use the index in the "OR" case.
1. Created local index on partitioned table
2. Index partition t_dec_2009 set to unusable
-- Partitioned local Index behavior with “OR” and with “BETWEEN”
SQL> CREATE TABLE t (
2 id NUMBER NOT NULL,
3 d DATE NOT NULL,
4 n NUMBER NOT NULL,
5 pad VARCHAR2(4000) NOT NULL
6 )
7 PARTITION BY RANGE (d) (
8 PARTITION t_jan_2009 VALUES LESS THAN (to_date('2009-02-01','yyyy-mm-dd')),
9 PARTITION t_feb_2009 VALUES LESS THAN (to_date('2009-03-01','yyyy-mm-dd')),
10 PARTITION t_mar_2009 VALUES LESS THAN (to_date('2009-04-01','yyyy-mm-dd')),
11 PARTITION t_apr_2009 VALUES LESS THAN (to_date('2009-05-01','yyyy-mm-dd')),
12 PARTITION t_may_2009 VALUES LESS THAN (to_date('2009-06-01','yyyy-mm-dd')),
13 PARTITION t_jun_2009 VALUES LESS THAN (to_date('2009-07-01','yyyy-mm-dd')),
14 PARTITION t_jul_2009 VALUES LESS THAN (to_date('2009-08-01','yyyy-mm-dd')),
15 PARTITION t_aug_2009 VALUES LESS THAN (to_date('2009-09-01','yyyy-mm-dd')),
16 PARTITION t_sep_2009 VALUES LESS THAN (to_date('2009-10-01','yyyy-mm-dd')),
17 PARTITION t_oct_2009 VALUES LESS THAN (to_date('2009-11-01','yyyy-mm-dd')),
18 PARTITION t_nov_2009 VALUES LESS THAN (to_date('2009-12-01','yyyy-mm-dd')),
19 PARTITION t_dec_2009 VALUES LESS THAN (to_date('2010-01-01','yyyy-mm-dd'))
20 );
SQL> INSERT INTO t
2 SELECT rownum, to_date('2009-01-01','yyyy-mm-dd')+rownum/274, mod(rownum,11), rpad('*',100,'*')
3 FROM dual
4 CONNECT BY level <= 100000;
SQL> CREATE INDEX i ON t (d) LOCAL;
SQL> execute dbms_stats.gather_table_stats(user,'T')
-- Mark partition t_dec_2009 to unusable:
SQL> ALTER INDEX i MODIFY PARTITION t_dec_2009 UNUSABLE;
--- Let’s check whether the usable index partition can be used to apply a restriction: BETWEEN
SQL> SELECT count(d)
FROM t
WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
| Id | Operation | Name | Pstart| Pstop |
| 0 | SELECT STATEMENT | | | |
| 1 | SORT AGGREGATE | | | |
| 2 | PARTITION RANGE SINGLE| | 12 | 12 |
| 3 | INDEX RANGE SCAN | I | 12 | 12 |
--- Let’s check whether the usable index partition can be used to apply a restriction: OR
SQL> SELECT count(d)
FROM t
WHERE
(d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
or
(d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
| Id | Operation | Name | Pstart| Pstop |
| 0 | SELECT STATEMENT | | | |
| 1 | SORT AGGREGATE | | | |
| 2 | PARTITION RANGE OR| |KEY(OR)|KEY(OR)|
| 3 | TABLE ACCESS FULL| T |KEY(OR)|KEY(OR)|
----------------------------------------------------
The "OR" condition fetches less data than "BETWEEN", yet the optimizer is still unable to use the index in the "OR" case.
Regards,
Sachin B.
Hi,
What is your database version?
I ran the same test and optimizer was able to pick the index for both the queries.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL>
SQL> set autotrace traceonly exp
SQL>
SQL>
SQL> SELECT count(d)
2 FROM t
3 WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
4 AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
Execution Plan
Plan hash value: 2381380216
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 8 | 25 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 8 | | | | |
| 2 | PARTITION RANGE ITERATOR| | 8520 | 68160 | 25 (0)| 00:00:01 | 1 | 2 |
|* 3 | INDEX RANGE SCAN | I | 8520 | 68160 | 25 (0)| 00:00:01 | 1 | 2 |
Predicate Information (identified by operation id):
3 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"D"<=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss'))
SQL> SELECT count(d)
2 FROM t
3 WHERE
4 (
5 (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
6 or
7 (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
8 );
Execution Plan
Plan hash value: 3795917108
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 8 | | | | |
| 2 | CONCATENATION | | | | | | | |
| 3 | PARTITION RANGE SINGLE| | 13 | 104 | 2 (0)| 00:00:01 | 2 | 2 |
|* 4 | INDEX RANGE SCAN | I | 13 | 104 | 2 (0)| 00:00:01 | 2 | 2 |
| 5 | PARTITION RANGE SINGLE| | 13 | 104 | 2 (0)| 00:00:01 | 1 | 1 |
|* 6 | INDEX RANGE SCAN | I | 13 | 104 | 2 (0)| 00:00:01 | 1 | 1 |
Predicate Information (identified by operation id):
4 - access("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss'))
6 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"D"<=TO_DATE(' 2009-01-01 23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
filter(LNNVL("D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
LNNVL("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss')))
SQL> set autotrace off
SQL>
Asif Momen
http://momendba.blogspot.com
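One more thing worth trying if a given version will not expand the OR by itself: OR-expansion can be requested explicitly with the USE_CONCAT hint, sketched here on the query from the thread (whether the hint is honored still depends on version and costing):
SELECT /*+ USE_CONCAT */ count(d)
FROM t
WHERE (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
or (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'));
-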
An not import an export file when has cross-schema partition by reference
I'm working on a complex system with about 200 tables stored in *5 schemas* on Oracle 11.2.0.2 (the database is installed on Red Hat Linux).
There are lots of partitioned tables; some are partitioned by reference, and some of those references are cross-schema (the referenced partitioned table and the table partitioned by reference are in 2 different schemas).
When exporting these 5 schemas using Data Pump everything goes fine, but I could not find a way to import them into another database without errors. The problem is in creating the tables that are partitioned by cross-schema references: I get an Insufficient Privilege error.
It seems that impdp first creates all the tables and then applies the grants (including the REFERENCES grants), and that causes the problem. If it applied the grant statements for each table right after creating that table, there would be no problem.
Is there any way I can overcome this problem?
This is the export script:
declare
h1 NUMBER;
begin
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXPORT000185', version => 'COMPATIBLE');
dbms_datapump.set_parallel(handle => h1, degree => 1);
dbms_datapump.add_file(handle => h1, filename => '910202.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''PAYESH_ACCOUNTING'',''PAYESH_CORE'',''PAYESH_CRM'',''PAYESH_LIFE'',''PAYESH_SECURITY'')');
dbms_datapump.add_file(handle => h1, filename => '910202_db4.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.detach(handle => h1);
end;
and this is the import script:
declare
h1 NUMBER;
begin
h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'IMPORT000189', version => 'COMPATIBLE');
dbms_datapump.set_parallel(handle => h1, degree => 1);
dbms_datapump.add_file(handle => h1, filename => '910228.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.add_file(handle => h1, filename => '910128.DMP', directory => 'DATA_PUMP_DIR', filetype => 1);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''PAYESH_ACCOUNTING'',''PAYESH_CORE'',''PAYESH_CRM'',''PAYESH_LIFE'',''PAYESH_SECURITY'')');
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.detach(handle => h1);
end;
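One hedged workaround, assuming a DBA account (with GRANT ANY OBJECT PRIVILEGE) on the target database: grant the cross-schema REFERENCES privileges yourself before running the import, so the reference-partitioned tables can be created. The schema and table names below are placeholders for the real parent tables:
-- Run on the target database before the import
GRANT REFERENCES ON payesh_core.parent_table TO payesh_crm;
GRANT REFERENCES ON payesh_core.parent_table TO payesh_life;
-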
Creating index on large partitioned table
Is anyone aware of a method for telling how far along the creation of an index on a large partitioned table is? The statement I am executing is like this:
CREATE INDEX "owner"."new_index"
ON "owner"."mytable"(col_1, col_2, col_3, col_4)
PARALLEL 8 NOLOGGING ONLINE LOCAL;
This is a two-node RAC system on Windows 2003 x64, using ASM. There are more than 500,000,000 rows in the table, and I'd estimate that each row is about 600-1000 bytes in size.
Thank you.
You can track the progress from v$session_longops:
select
substr(SID ||','||SERIAL# ,1,8) "sid,srl#",
substr(OPNAME ||'>'||TARGET,1,50) op_target,
substr(trunc(SOFAR/TOTALWORK*100)||'%',1,5) progress,
TIME_REMAINING rem,
ELAPSED_SECONDS elapsed
from v$session_longops
where SOFAR!=TOTALWORK
order by sid;
hth -
When downloading a large file, like a movie, Internet Explorer automatically opens a download window offering the choice to "Open" or "Save" the file. This gives one the choice of where to save it - e.g. the "C" drive, or the "My Documents" or "Desktop" folders - but Firefox's download window doesn't. This has always frustrated me because I would rather use Firefox exclusively to access the Internet, but when it comes to saving files downloaded off the Internet, sadly, I have to revert to IE!
If you click on Firefox in the upper-left, then Options, a new window should appear. On that window, click on the General tab. In the middle of the window, you'll see options regarding your downloads, one of which says, "Always ask me where to save files." Click the bubble for this option, then click OK. From then on, you should always be prompted for where you want your files saved.
-
Large partitioned tables with WM
Hello
I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to stop the process that adds the new rows from being an overnight batch to being a near real time process i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
I need to provide views of the data as of a point in time, i.e. what was the content of the tables at close of business yesterday, and for this I am considering using workspaces.
I need to keep at least 10 days worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
Thanks in advance
David
Hello Ben
Thank you for your reply.
The table structure we have is like so:
CREATE TABLE hdr
( pk_id NUMBER PRIMARY KEY,
  customer_id NUMBER REFERENCES customer,
  entry_type NUMBER NOT NULL
);

CREATE TABLE dtl_daily
( pk_id NUMBER PRIMARY KEY,
  hdr_id NUMBER REFERENCES hdr,
  active_date DATE NOT NULL,
  col1 NUMBER,
  col2 NUMBER
)
PARTITION BY RANGE(active_date)
( PARTITION ptn_200709
  VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
  TABLESPACE x COMPRESS,
  PARTITION ptn_200710
  VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
  TABLESPACE x COMPRESS
);

CREATE TABLE dtl_hourly
( pk_id NUMBER PRIMARY KEY,
  hdr_id NUMBER REFERENCES hdr,
  active_date DATE NOT NULL,
  active_hour NUMBER NOT NULL,
  col1 NUMBER,
  col2 NUMBER
)
PARTITION BY RANGE(active_date)
( PARTITION ptn_20070901
  VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
  TABLESPACE x COMPRESS,
  PARTITION ptn_20070902
  VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
  TABLESPACE x COMPRESS,
  PARTITION ptn_20070903
  VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
  TABLESPACE x COMPRESS
  -- ...and so on, one partition per day for 20 years
);

The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows with the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
At the moment, changes to customer data require that the content of these tables be rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change it to a near-real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys. If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
For any one customer, they may have rows in the hourly table that goes out 20 years at a hourly granularity, but once the active date has passed(by 10 days), we no longer need to keep it. This is why we were considering partitioning as it gives us a simple way of dropping off old data, and as a nice side effect, helps to improve performance of queries that are looking for active data between a range of dates (which is most of them).
I did have a look at the idea of savepoints but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table, but instead at close of business every day we create a savepoint like "savepoint_20070921", and instead of using dbms_wm.gotodate we would use dbms_wm.gotosavepoint? Then every day we would do:
BEGIN
  DBMS_WM.DeleteSavepoint(
    workspace                  => 'LIVE',
    savepoint_name             => 'savepoint_20070910', -- 10 days ago
    compress_view_wo_overwrite => TRUE);

  DBMS_WM.CompressWorkspace(
    workspace                  => 'LIVE',
    compress_view_wo_overwrite => TRUE,
    firstSP                    => 'savepoint_20070911'); -- the new oldest savepoint
END;
/
Is my understanding correct?
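If it helps, a minimal sketch of the daily savepoint flow; this is an assumption-laden outline rather than a tested recipe, so check the DBMS_WM parameter defaults in your release:
-- At close of business: mark the day
EXECUTE DBMS_WM.CreateSavepoint('LIVE', 'savepoint_20070921');
-- To report as of an earlier day
EXECUTE DBMS_WM.GotoSavepoint('LIVE', 'savepoint_20070910');
-- Back to current data
EXECUTE DBMS_WM.GotoSavepoint('LIVE', 'LATEST');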
David
Message was edited by:
fixed some formatting
David Tyler