Datafile fragmentation
hi
we are working in a development environment and we regularly resize datafiles; often I have to reduce their size.
But this time I am having a problem: one of my datafiles is 8 GB and its used size is about 4 GB (I checked the used size from OEM and DBA_FREE_SPACE). I want to reduce the datafile by 3 to 4 GB so that I can allocate the space to other datafiles.
But when I run
alter database datafile 'path' resize 4500m; (the used size in this datafile is 4000M)
I get the error
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value
I am not even able to reduce the size of the file to 7500M.
How can I coalesce this datafile?
Resizing a datafile fails with ORA-03297 when you try to shrink it below its highwatermark. To find the limit down to which you can resize, refer to MOS note 130866.1 - How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark, which shows how to calculate the highwatermark for each datafile.
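As a quick check, the file highwatermark can be estimated from dba_extents. This is a sketch only, and it assumes an 8K block size (adjust for your database's db_block_size):

```sql
-- Smallest size each datafile can be resized to, based on the
-- highest allocated extent in the file (the file highwatermark).
-- Assumes db_block_size = 8192; adjust for your database.
select file_id,
       ceil(max(block_id + blocks - 1) * 8192 / 1024 / 1024) min_resize_mb
from   dba_extents
group  by file_id
order  by file_id;
```

Anything above min_resize_mb for a given file should succeed; below it you will keep hitting ORA-03297 until the segments near the end of the file are moved.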
Similar Messages
-
I have a project that requires storing huge images. Since the database seems like the best place to keep the data, I plan to store them in the Oracle database as BLOBs. However, my DBA said that big tables are hard to maintain, that the table has a size limitation, and that I should store the images on the file system instead. My questions:
1. Is this statement correct?
2. Are there any concerns with storing huge images in BLOBs?
tx in advance
BLOB and CLOB datatypes are created with the CREATE or ALTER TABLE or the CREATE or ALTER TYPE commands. In fact, they are created identically to other non-sized datatypes such as DATE and LONG, with the exception of the LOB storage clause.
Oracle BLOB storage requires using the Oracle DBMS_LOB package, an easy interface to Oracle BLOB storage.
The LOB storage clause is not needed if the maximum size of the BLOB does not exceed 4000 bytes: up to 4000 bytes can be stored in-line with the other data in the table. If the length of the BLOB exceeds 4000 bytes, it must be stored either in system-defaulted storage (the same as the default for the table it resides in) or in an explicitly defined LOB storage area.
TIP: I suggest always specifying the LOB storage clause. If you force the system to use default storage each time a BLOB or CLOB exceeds 4000 bytes, you can cause datafile fragmentation and performance problems; the LOB storage clause gives you control instead of the system.
An example of creating a table using an Oracle BLOB datatype is shown in Listing 1. It could just as easily have been a CLOB.
create table internal_graphics (
  graphic_id   number,
  graphic_desc varchar2(30),
  graphic_blob blob,
  graphic_type varchar2(4))
lob (graphic_blob) store as glob_store (
  tablespace raw_data
  storage (initial 100k next 100k pctincrease 0)
  chunk 4
  pctversion 10
  index glob_index (
    tablespace raw_index))
tablespace appl_data
storage (initial 1m next 1m pctincrease 0); -
How to defragment the datafile hwm in EBS R12 database
Hi All,
We are on a 12.0.4 E-Business Suite instance on an 11gR2 database.
We have deleted (purged) some EGO data and freed a lot of space in dba_segments; it was around 2.5 GB. After the purging activity we got:
SQL> select sum(bytes/1024/1024/1024) from dba_segments;
SUM(BYTES/1024/1024/1024)
734.867561
SQL>
SQL> select sum(bytes/1024/1024/1024) from dba_data_files;
SUM(BYTES/1024/1024/1024)
2456.70493
SQL>
But the HWM in the datafiles is not reduced. I checked by moving the big tables, but even then I am not getting the space back at the datafile level. I need to resize the database to 1 TB, take a backup, and clone to a target which has 1 TB of space.
For example, in APPS_TS_TX_DATA we have only 243 GB of segments but the datafiles total about 1000 GB; we need to reduce the datafile size to around 300 GB. How can we do it?
=======
SQL> select sum(bytes/1024/1024/1024) from dba_segments where tablespace_name='APPS_TS_TX_DATA';
SUM(BYTES/1024/1024/1024)
243.981201
SQL> select sum(bytes/1024/1024/1024) from dba_data_files where tablespace_name='APPS_TS_TX_DATA';
SUM(BYTES/1024/1024/1024)
1070.2343
SQL>
==========
I thought of creating a new 300 GB tablespace, moving all objects into it, dropping the old tablespace, and renaming the new one to APPS_TS_TX_DATA, but we have objects like the ones below. Please guide me on the best method of doing this and reducing the database size to 1 TB, so that I can accomplish my task.
====
SQL> select DISTINCT SEGMENT_TYPE,count(*) FROM DBA_SEGMENTS where tablespace_name='APPS_TS_TX_DATA' group by SEGMENT_TYPE;
SEGMENT_TYPE COUNT(*)
INDEX 275
INDEX PARTITION 509
INDEX SUBPARTITION 96
LOB PARTITION 8
LOB SUBPARTITION 96
LOBINDEX 460
LOBSEGMENT 460
TABLE 14615
TABLE PARTITION 2079
TABLE SUBPARTITION 96
10 rows selected.
====
Thanks in advance.
Please see these docs.
How to Reorganize INV Schema / Reclaim the High Watermark [ID 555058.1]
Optimizing Database disk space using Alter table shrink space/move compress [ID 1173241.1]
Why is no space released after an ALTER TABLE ... SHRINK? [ID 820043.1]
Various Aspects of Fragmentation [ID 186826.1]
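For the plain, non-partitioned tables, the reorganization those notes describe can be sketched like this (the object names here are illustrative, not from your system, and tables with LONG columns or certain LOB setups need different handling):

```sql
-- Reclaim space below the HWM of a heavily purged table.
alter table ego_owner.purged_table enable row movement;
alter table ego_owner.purged_table shrink space cascade;

-- Tables that cannot be shrunk can often be rebuilt with MOVE:
alter table ego_owner.purged_table move tablespace apps_ts_tx_data;
-- A MOVE leaves indexes UNUSABLE; rebuild them afterwards:
alter index ego_owner.purged_table_pk rebuild;
```

Only after the segments at the top of the files have been shrunk or moved will ALTER DATABASE DATAFILE ... RESIZE succeed.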
Thanks,
Hussein -
ORA-1653: unable to extend table - but enough space for datafile
We encountered this problem in one of our databases, Oracle Database 10g Release 10.2.0.4.0.
All of our datafiles in all tablespaces are specified with MAXSIZE and AUTOEXTEND ON, but last week the database could not extend a table:
Wed Dec 8 18:25:04 2013
ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
Wed Dec 8 18:25:04 2013
ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
Wed Dec 8 18:25:04 2013
ORA-1653: unable to extend table PCS.T0102 by 128 in tablespace PCS_DATA
ORA-1653: unable to extend table PCS.T0102 by 8192 in tablespace PCS_DATA
Datafile was created as ... DATAFILE '/u01/oradata/PCSDB/PCS_DATA01.DBF' AUTOEXTEND ON NEXT 50M MAXSIZE 31744M
Datafile PCS_DATA01.DBF was only 1 GB in size. The maximum size is 31 GB, but the database did not want to extend this datafile.
As a temporary solution we added a new datafile to the same tablespace. After that, the database and our application started to work correctly.
There is enough free disk space for the database datafiles.
Do you have any ideas where our problem could be and what we should check?
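One thing worth checking first is whether the file really had autoextend headroom at the time. A sketch (the tablespace name is taken from the alert log above):

```sql
-- Autoextend settings and remaining growth headroom per datafile
select file_name,
       bytes / 1024 / 1024              size_mb,
       autoextensible,
       maxbytes / 1024 / 1024           max_mb,
       (maxbytes - bytes) / 1024 / 1024 headroom_mb
from   dba_data_files
where  tablespace_name = 'PCS_DATA';
```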
Thanks
ShivendraNarainNirala wrote:
Hi,
Here I am sharing one example.
SQL> select owner,table_name,blocks,num_rows,avg_row_len,round(((blocks*8/1024)),2)||'MB' "TOTAL_SIZE",
2 round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
3 round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE"
4 from dba_tables where owner in('DWH_SCHEMA1','RM_SCHEMA_DDB','RM_SCHEMA') and round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) > 10 ORDER BY FRAGMENTED_SPACE;
OWNER TABLE_NAME BLOCKS NUM_ROWS AVG_ROW_LEN TOTAL_SIZE ACTUAL_SIZE FRAGMENTED_SPACE
DWH_SCHEMA1 FP_DATA_WLS 14950 168507 25 116.8MB 4.02Mb 112.78MB
SQL> select tablespace_name from dba_segments where segment_name='FP_DATA_WLS' and owner='DWH_SCHEMA1';
TABLESPACE_NAME
DWH_TX_DWH_DATA
SELECT /*+ RULE */ df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name = df.tablespace_name
GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /*+ RULE */ df.tablespace_name tspace,
fs.bytes / (1024 * 1024),
SUM(df.bytes_free) / (1024 * 1024),
Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs,
(SELECT tablespace_name,bytes_free,bytes_used
FROM v$temp_space_header
GROUP BY tablespace_name,bytes_free,bytes_used) df
WHERE fs.tablespace_name = df.tablespace_name
GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
ORDER BY 4 DESC;
set lines 1000
col FILE_NAME format a60
SELECT SUBSTR (df.NAME, 1, 60) file_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;
Tablespace Size (MB) Free (MB) % Free % Used
DWH_TX_DWH_DATA 11456 8298 72 28
FILE_NAME ALLOCATED_MB USED_MB FREE_SPACE_MB
/data1/FPDIAV1B/dwh_tx_dwh_data1.dbf 1216 1216 0
/data1/FPDIAV1B/dwh_tx_dwh_data2.dbf 10240 1942 8298
SQL> alter database datafile '/data1/FPDIAV1B/dwh_tx_dwh_data2.dbf' resize 5G;
alter database datafile '/data1/FPDIAV1B/dwh_tx_dwh_data2.dbf' resize 5G
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value
Although we did move the tables into another tablespace, it doesn't resolve the problem unless we take an export, drop the tablespace, and import it again. We also used the segment advisor, but in vain.
As far as metrics and measurement are concerned, in my experience it comes down to blocks being sparsely used relative to the HWM in the tablespace.
When it comes to partitions, just moving the partitions to remove fragmentation doesn't help.
Apart from that, much has been written about it by Oracle gurus like you.
warm regards
Shivendra Narain Nirala
How does free space differ from fragmented space?
Is all free space considered by you to be fragmented?
"num_rows*avg_row_len" provides a useful result only if statistics are current and accurate. -
Need to understand when redo log file content is written to datafiles
Hi all
I have a question about when the content of the redo log files is written to the datafiles.
Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says that *a filled redo log file is available
after the changes recorded in it have been written to the datafiles*, which would mean that we just need all the redo log files to be filled to "*commit*" changes to the database.
Thanks for the help
Edited by: rachid on Sep 26, 2012 5:05 PM
rachid wrote:
the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles It helps if you include a URL to the page where you found this quote (if you were using the online html manuals).
The wording is poor and should be modified to something like:
<blockquote>
+"a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"+
</blockquote>
Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
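You can watch this happen: a log group only becomes reusable when its status drops from ACTIVE (checkpoint not yet complete) to INACTIVE. For example:

```sql
-- CURRENT  = the group being written to
-- ACTIVE   = filled, but still needed for instance recovery
--            (changed blocks not yet all written to the datafiles)
-- INACTIVE = filled and no longer needed; may be reused
select group#, status from v$log;
```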
If you find the manuals too fragmented to follow you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
Tablespace fragmentation, especially in the SYSTEM tablespace
Hi,
I am working on an Oracle 11gR2 database on AIX.
The database has many tablespaces allocated to specific schemas. These schemas have been in use for a long time; data comes and goes all the time, and the tablespaces have grown far larger than expected. We have removed (truncated) most of the unwanted tables and data from the schemas and are trying to resize the datafiles, but we are not able to do it.
Checking with the query below:
SELECT df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT df.tablespace_name tspace,
fs.bytes / (1024 * 1024),
SUM(df.bytes_free) / (1024 * 1024),
Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs,
(SELECT tablespace_name,bytes_free,bytes_used
FROM v$temp_space_header
GROUP BY tablespace_name,bytes_free,bytes_used) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
ORDER BY 4 DESC;
The output is:
tablespace_name   size_MB   free_space_MB   %free   %used
SYSTEM            34256     28941.375       84      16
SMD                7975      2991.1875      38      62
.................and many others.
Here, if I try to resize the SMD tablespace datafile to (7975 - 2991.1875 =) 4983 MB, it does not allow me to do it.
Also, the SYSTEM tablespace has grown very big, but its used space is very small.
I suspect that fragmentation is the problem here.
Please guide me on what I could do in this case to reduce the space taken by the datafiles.
Thanks in advance.
Thanks for notifying... I had a misconception about it; both queries give almost the same result,
but the first query gives you the result at the datafile level.
SQL> column tablespace_name format a10
SQL> column file_name format a32
SQL> column file_mb format 9999990
SQL> column hwm_mb format 9999990
SQL> column used_mb format 9999990
SQL> column shrnk_mb format 9999990
SQL>
SQL> break on report
SQL> compute sum of file_mb on report
SQL> compute sum of hwm_mb on report
SQL> compute sum of used_mb on report
SQL> compute sum of shrnk_mb on report
SQL>
SQL> select a.*
2 , file_mb-hwm_mb shrnk_mb
3 from (
4 select /*+ rule */
5 a.tablespace_name,
6 a.file_name,
7 a.bytes/1024/1024 file_mb,
8 b.hwm*d.block_size/1024/1024 hwm_mb,
9 b.used*d.block_size/1024/1024 used_mb
10 from
11 dba_data_files a,
12 (select file_id,max(block_id+blocks-1) hwm,sum(blocks) used
13 from dba_extents
14 group by file_id) b,
15 dba_tablespaces d
16 where a.file_id = b.file_id
17 and a.tablespace_name = d.tablespace_name
18 ) a
19 order by a.tablespace_name,a.file_name;
TABLESPACE FILE_NAME FILE_MB HWM_MB USED_MB SHRNK_MB
SYSAUX C:\ORACLEXE\APP\ORACLE\ORADATA\X 710 673 672 37
E\UNDOTBS1.DBF
SYSTEM C:\ORACLEXE\APP\ORACLE\ORADATA\X 360 353 352 7
E\SYSTEM.DBF
UNDOTBS1 C:\ORACLEXE\APP\ORACLE\ORADATA\X 260 258 257 2
E\SYSAUX.DBF
USERS C:\ORACLEXE\APP\ORACLE\ORADATA\X 6340 6017 6016 323
E\USERS.DBF
TABLESPACE FILE_NAME FILE_MB HWM_MB USED_MB SHRNK_MB
sum 7670 7301 7296 369
SQL> SELECT /* + RULE */ df.tablespace_name "Tablespace",
2 df.bytes / (1024 * 1024) "Size (MB)",
3 SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
4 Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
5 Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
6 FROM dba_free_space fs,
7 (SELECT tablespace_name,SUM(bytes) bytes
8 FROM dba_data_files
9 GROUP BY tablespace_name) df
10 WHERE fs.tablespace_name (+) = df.tablespace_name
11 GROUP BY df.tablespace_name,df.bytes
12 UNION ALL
13 SELECT /* + RULE */ df.tablespace_name tspace,
14 fs.bytes / (1024 * 1024),
15 SUM(df.bytes_free) / (1024 * 1024),
16 Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
17 Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
18 FROM dba_temp_files fs,
19 (SELECT tablespace_name,bytes_free,bytes_used
20 FROM v$temp_space_header
21 GROUP BY tablespace_name,bytes_free,bytes_used) df
22 WHERE fs.tablespace_name (+) = df.tablespace_name
23 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
24 ORDER BY 4 DESC;
Tablespace Size (MB) Free (MB) % Free % Used
TEMP 20 17 85 15
USERS 6340 323.3125 5 95
SYSAUX 710 37.25 5 95
SYSTEM 360 7.3125 2 98
UNDOTBS1 260 1.8125 1 99
SQL> -
hi there,
I want to know the maximum size of a datafile.
I'm using Oracle 8.1.7.4 on AIX 4.3.3, with db_block_size=8192.
I have a datafile of 2 GB and I need to expand it.
I was wondering if the maximum datafile size is 2 GB, in which case I should not increase this file but create a new one.
thanks
Without any reference at hand, the AIX (4.3.3, JFS) limits as I recall are:
File size: 2GB
File size if large files enabled: near 64GB
File system size: 64GB with std fragment size.
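Independently of those file-system limits, Oracle itself caps a datafile at 4194303 (2^22 - 1) blocks, so the Oracle-side ceiling depends on db_block_size:

```sql
-- With db_block_size = 8192, the per-datafile ceiling is ~32 GB.
select 4194303 * 8192 / 1024 / 1024 / 1024 as max_file_gb from dual;
```

With an 8K block size that is just under 32 GB, so a 2 GB file can certainly be extended as far as Oracle is concerned; any 2 GB barrier comes from the file system.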
Also observe the ulimit of the user who is using the file system. -
Table fragmentation and reclaiming tablespace
The following scripts check tablespace usage and fragmentation:
select
total.file_name fname,
total.bytes/1024 totsiz,
nvl(sum(free.bytes)/1024,0) avasiz,
(1-nvl(sum(free.bytes),0)/total.bytes)*100 pctusd
from
dba_data_files total,
dba_free_space free
where
total.tablespace_name = 'APPLSYSD'
and total.tablespace_name = free.tablespace_name(+)
and total.file_id=free.file_id(+)
group by
total.tablespace_name,
total.file_name,
total.bytes
The following is the result:
TSNAME NFRAGS MXFRAG TOTSIZ AVASIZ PCTUSD
===============================================
AKD 2 45600 358400 45680 87.25446429
AKX 1 37360 256000 37360 85.40625
APD 2 30680 512000 30720 94
APX 1 23880 460800 23880 94.81770833
ARD 1 6120 204800 6120 97.01171875
ARX 1 73360 409600 73360 82.08984375
AXD 1 159520 204800 159520 22.109375
AXX 3 13880 102400 13960 86.3671875
AZD 1 9040 10240 9040 11.71875
From the above result, which tablespaces have serious fragmentation, so that I can resize datafiles to reclaim space from APPLSYSD?
Please advise,
Amy
Execute the query below and post the result.
col tablespace_name for a25
col file_name for a60
set pages 150
set lines 150
select df.tablespace_name ,
df.file_name ,
df.file_id ,
df.totalspace,
fs.freespace from
(select
tablespace_name,
file_name,
file_id,
round(sum(bytes)/1024/1024,2) as totalspace
from dba_data_files
group by tablespace_name,file_name,file_id) df,
(select
tablespace_name,
file_id,
round(sum(bytes)/1024/1024,2) as freespace
from dba_free_space
group by tablespace_name,file_id) fs
where df.file_id=fs.file_id (+)
order by 5 desc
Regards
RajaBaskar -
Does OCFS2 file system get fragmented
We are running Production & Testing RAC databases on Oracle 9.2.0.8 RAC on Red Hat 4.0 using OCFS2 for the cluster file system.
Every week we refresh our Test database by deleting the datafiles and cloning our Standby database to the Test database. The copying of the datafiles from the Standby mount points to the Test database mount points (same server), seems to be taking longer each time we do this.
My question is : can the OCFS2 file system become fragmented over time from the constant deletion & copying of the datafiles and if so is there a way to defragment it.
Thanks
John
Hi,
I think it will get fragmented if you constantly delete and copy datafiles on OCFS2. You can choose a suitable block size and cluster size based on the actual application, which can reduce file fragmentation.
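Block and cluster size are fixed when the volume is formatted, so choosing them for a datafile volume looks like this (the device name and label are illustrative):

```shell
# Format an OCFS2 volume for large, mostly-sequential datafiles:
# -b block size, -C cluster size (a larger cluster size reduces
# fragmentation for big files), -N node slots, -L volume label.
mkfs.ocfs2 -b 4K -C 1M -N 4 -L racdata /dev/sdb1
```

This only helps for newly formatted volumes; an already fragmented OCFS2 file system cannot be defragmented in place in that release.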
Regards
Terry -
Hi All,
Database Version :11gR2
I have a tablespace with around 32 GB allocated, but only 16 GB of it is actually used. When I tried to resize the datafile, it threw the error
ORA-03297: file contains used data beyond requested RESIZE value
As I understand it, the used blocks in the datafile are not contiguous, probably due to fragmentation, and that is why it cannot be resized. If I export the tablespace using Data Pump and re-import it, that will release the space.
But I want to know if there are any alternative ways to do the same.
Thank You
Arun
Arun,
I don't think we can resolve fragmentation using RMAN.
You can try:
1) export/import
2) Moving the objects from the fragmented tablespace to a new tablespace and then back (you can create a script to do that)
3) Finding the HWM of the tablespace and resizing accordingly.
How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark [ID 130866.1]
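Option 2 is easy to script; a hedged sketch that just generates the DDL (the tablespace names are placeholders):

```sql
-- Generate MOVE statements for plain tables in the old tablespace.
-- (Indexes need ALTER INDEX ... REBUILD, and partitions need
--  MOVE PARTITION; handle those segment types separately.)
select 'alter table ' || owner || '.' || segment_name ||
       ' move tablespace NEW_TS;' ddl
from   dba_segments
where  tablespace_name = 'OLD_TS'
and    segment_type = 'TABLE';
```

Spool the output, review it, then run it; remember the moves leave dependent indexes UNUSABLE until rebuilt.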
Mark your Post as Answered or Helpful if Your question is answered.
Thanks & Regards,
SID
(StepIntoOracleDBA)
Email : [email protected]
http://stepintooracledba.blogspot.in/
http://www.stepintooracledba.com/ -
Tablespaces with Multiple Datafiles
I've got a Version 7.3.3.4 tablespace with multiple datafiles. There's a lot of object fragmentation. I want to be able to
drop and recreate many (but not all) of the objects in this tablespace but exercise some control on which objects go to
which datafiles in order to better utilize the available space on each datafile. I've been told that Oracle assigns objects
to datafiles in a "round robin" fashion. I'd like info on exactly how this process works and if I can have a more direct effect
on where objects are placed within the tablespace?
Thanks,
Paul Hargreaves
[email protected]
Hi,
I am interested in having more information about tablespaces and how they become fragmented.
Perhaps I can help you first by telling you that there is an ALTER command for coalescing space in Oracle 7.3.4:
ALTER TABLESPACE tablespace COALESCE ;
Of course, you need the ALTER TABLESPACE privilege.
This command operates on the entire tablespace.
In fact, I need a query that shows, for each tablespace, how fragmented it is (with segment_name, etc.).
Thanx
Steff -
Why fragmentation in T1 is huge and in T2 is zero
Hi
I have a busy DB 8107, with compatible 8.0.5.0.0, on Windows 2000 SP4
(I am going to 11g very soon.)
I have two tables, T1 in tablespace1 and T2 in tablespace2.
Both T1 and T2 get lots of inserts and deletes; there is huge fragmentation in T1 but not in T2.
tablespace1 and tablespace2 are identical in definition.
EX:
CREATE TABLESPACE tablespace1 DATAFILE
LOGGING
DEFAULT STORAGE (
INITIAL 104K
NEXT 104K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
ONLINE
PERMANENT
EXTENT MANAGEMENT DICTIONARY;
Given that no maintenance has been done to T1 or T2 or their two tablespaces in the last 24 months,
my question is: why is there huge fragmentation in T1 but not in T2?
Thanks
Edited by: user8803475 on Aug 21, 2012 11:14 PM
Think about how the rows are being inserted and deleted.
In general, there are two extremes in the way that data is inserted and removed from a table.
The first way is usually chronological and sees the data getting inserted into the table, sitting there for a while then getting deleted.
In this scenario, rows have a consistent lifespan and the datablocks will in general, get filled then get emptied.
Think of a log table that is retained for 1 year. The rows get inserted and deleted using the same time based criteria.
These data blocks will get filled, get emptied and get reused pretty regularly.
Fragmentation will be near zero.
The second way is more random. The rows get inserted and deleted on entirely different criteria.
Think of a table in which rows get inserted based on time, but get deleted based on some other value in the data.
These data blocks will be filled with inserts, but partially emptied with the deletes.
The database will reuse the space when it falls below a threshold but will otherwise be "fragmented"
That explains the "why did it get this way".
Now, the part about whether you should "fix" it gets more to the point.
Is the "fragmentation" a problem? Probably not.
Disk space is pretty cheap and your time is not.
You say that this is a pretty busy database. Unless your fragmented table has been configured with a bad PCTFREE and PCTUSED setting, the database will reuse the space.
Is it worth your time to tweak the use/reuse of small amounts of disk space?
An Oracle database does not require special maintenance procedures to reuse disk space.
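If you do decide the reuse threshold is the issue, it is a one-line change per segment (the table name here is illustrative); note this only applies to freelist-managed segments, not ASSM tablespaces:

```sql
-- Raise PCTUSED so partially emptied blocks return to the
-- freelist sooner and get refilled by new inserts.
alter table app_owner.t1 pctfree 10 pctused 60;
```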
The management of this space is explained very well in the database concepts guide:
http://docs.oracle.com/cd/E11882_01/server.112/e25789/logical.htm#i13690 -
How do we find tablespace fragmentation and resolve this problem
Hi,
How do we find tablespace fragmentation and resolve it in 10g (R2) on Windows XP?
Regards
Faheem
Hi,
>>Are you using Dictionary Managed or Locally Managed Tablespaces...??
In fact, there is no way to create a DMT if the SYSTEM tablespace is LMT. So what I said in my previous post, "Unless the OP is using DMT ...", is impossible in Oracle 10g ...
SQL> create tablespace tbs_test
2 datafile '/u01/oradata/BDRPS/test01.dbf' size 5m
3 extent management dictionary;
create tablespace tbs_test
ERROR at line 1:
ORA-12913: Cannot create dictionary managed tablespace
SQL> select extent_management from dba_tablespaces
2 where tablespace_name='SYSTEM';
EXTENT_MAN
LOCAL
Cheers
Legatti -
Reduce size of system datafile?
Hi all,
An old Oracle box was just about out of disk space. Investigating, I saw that the SYSTEM tablespace's datafile is sized at 4090 MB... but only 265 MB is actually being used.
I'd like to shrink it to 1 GB or even 500 MB, but when I tried
ALTER DATABASE DATAFILE '/export/home/sw/oracle/eedb/817/u01/oradata/devdb/system01.dbf'
RESIZE 1024M;
I got:
ORA-03297: file contains used data beyond requested RESIZE value
I was able to run the statement using 2048 MB, but is there any way to get back the last gig or so?
Thanks,
Natasha
You know that inside a tablespace there can be fragmentation: the used size does not mean the rest is contiguous free space, because used and unused blocks are interleaved in a non-contiguous way. You can see the fragmentation of a tablespace by mapping it in OEM.
You can try doing this and then resize the file:
SQL> alter tablespace system coalesce;
Tablespace altered.
SQL>
Try to resize little by little until Oracle tells you that it is no longer possible.
Joel Pérez -
Tablespace Datafile Resize ORA-03297
Hi,
In one of our tablespace constituting 4 datafiles, has got some data which is
as follows :
SEGMENT_NAME SEGMENT_TYPE
SYS_C004044 INDEX
SYS_C004315 INDEX
PROJECTRELEASE_INDEX1 INDEX
SYS_C0019289 INDEX
XAK1WBSHIERARCHY INDEX
SYS_IL0000033038C00047$$ LOBINDEX
SYS_IL0000033086C00013$$ LOBINDEX
SYS_IL0000033305C00013$$ LOBINDEX
SYS_IL0000033431C00005$$ LOBINDEX
SYS_IL0000033487C00006$$ LOBINDEX
SYS_IL0000033492C00002$$ LOBINDEX
SYS_IL0000033065C00009$$ LOBINDEX
SYS_IL0000033427C00006$$ LOBINDEX
SYS_IL0000033305C00014$$ LOBINDEX
SYS_IL0000033110C00015$$ LOBINDEX
SYS_IL0000033104C00014$$ LOBINDEX
SYS_LOB0000033038C00047$$ LOBSEGMENT
SYS_LOB0000033427C00006$$ LOBSEGMENT
SYS_LOB0000033065C00009$$ LOBSEGMENT
SYS_LOB0000033492C00002$$ LOBSEGMENT
SYS_LOB0000033487C00006$$ LOBSEGMENT
SYS_LOB0000033431C00005$$ LOBSEGMENT
SYS_LOB0000033305C00014$$ LOBSEGMENT
SYS_LOB0000033086C00013$$ LOBSEGMENT
SYS_LOB0000033104C00014$$ LOBSEGMENT
SYS_LOB0000033110C00015$$ LOBSEGMENT
SYS_LOB0000033305C00013$$ LOBSEGMENT
FORUMMESSAGE_H TABLE
SITEADMIN TABLE
We can move these indexes to different tablespaces, but since these tables have columns of LONG datatype we can't move the tables; we can, however, export the data, drop the tables, and import the data into different tablespaces. But how can I move the LOBSEGMENTs? I believe they were created as a result of indexing of the LOB columns. The datafiles of this tablespace are sized in GBs and I want to reduce their sizes, so I tried alter database datafile '***' resize **m, but it threw an ORA-03297 error. I thought this might be because of tablespace fragmentation. Using dba_free_space I found the blocks where free space is available, but after coalescing the tablespace I still got the same number of records in free space; that means the tablespace is not getting coalesced. What is the reason for this? Please let me know why the tablespace data is not getting moved, so that I can resize the datafiles.
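For the LOBSEGMENT/LOBINDEX pairs specifically: a LOB segment belongs to its table, and provided the table itself is movable, it can be relocated with ALTER TABLE ... MOVE LOB, which carries the LOB index along with it. A sketch (the owner and column names here are placeholders, not from your system):

```sql
-- Relocate the LOB data (and its LOB index) for one LOB column.
alter table app_owner.forummessage_h
  move lob (message_body) store as (tablespace new_lob_ts);
```

Tables that also contain a LONG column cannot be moved this way, which matches the export/drop/import route you describe for those.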
Thanx,
Kamlesh C
In older versions of Oracle on Windows platforms, if you accepted the default names for your datafiles, .ora was used. I think beginning in 8.1.x .dbf became the default, which was more like the standard used on other operating systems.
As already pointed out .ora is most commonly used for configuration files like init.ora.