Datafile issue

Hi
We are using Oracle 10g for our database. The application deletes all records older than 24 hours, and on average about 3,000 new records arrive daily. The old records are deleted, but the datafile size keeps increasing. I even ran ALTER TABLE <tablename> DEALLOCATE UNUSED, but the space is not released. What is the reason? I want to resize the datafile.
Please reply urgently.
regards
murali krishna

You have to Shrink Database Segments
Shrinking Database Segments Online
You use online segment shrink to reclaim fragmented free space below the high water mark in an Oracle Database segment. The benefits of segment shrink are these:
Compaction of data leads to better cache utilization, which in turn leads to better online transaction processing (OLTP) performance.
The compacted data requires fewer blocks to be scanned in full table scans, which in turn leads to better decision support system (DSS) performance.
Segment shrink is an online, in-place operation. DML operations and queries can be issued during the data movement phase of segment shrink. Concurrent DML operations are blocked for a short time at the end of the shrink operation, when the space is deallocated. Indexes are maintained during the shrink operation and remain usable after the operation is complete. Segment shrink does not require extra disk space to be allocated.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#ADMIN10161
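As a rough sketch of the usual sequence (the table name and datafile path below are made-up examples, not from the original post): shrink the segments first, then resize the datafile down towards the new high water mark.

-- Row movement must be enabled before a heap segment can be shrunk
ALTER TABLE my_log_table ENABLE ROW MOVEMENT;

-- Compact the segment and release the freed space back to the tablespace
ALTER TABLE my_log_table SHRINK SPACE;

-- Optionally shrink the dependent index segments as well
ALTER TABLE my_log_table SHRINK SPACE CASCADE;

-- Only after the segments at the end of the file have been shrunk
-- can the datafile itself be resized downwards
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/ORCL/users01.dbf' RESIZE 500M;

If the resize still fails with ORA-03297, some extent is still sitting above the target size; the maxshrink.sql script quoted further down in this thread shows how to find the highest used block in each file.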
Kamran Agayev A. (10g OCP)
http://kamranagayev.wordpress.com

Similar Messages

  • Express edition 11g datafiles Issue

    Hi All,
    After installing Oracle Database Express Edition 11g on 32-bit Windows, there is an issue with the datafile naming convention.
    The datafile undotbs01.dbf belongs to the SYSAUX tablespace & sysaux01.dbf belongs to the UNDO tablespace.
    I don't know the repercussions of this, or whether it is a bug.
    Please help.

    And the fix is easy, as long as you don't mind shutting down the database and doing a startup mount;
    And don't move sysaux.dbf directly onto the existing undotbs1.dbf; using a slightly different name would be a Very Good Idea ...
    $ sqlplus /nolog
    connect / as sysdba
    shutdown immediate;
    ... database closed ...
    exit;
    cd <.dbf files directory>
    mv sysaux.dbf undotbs01.dbf
    mv undotbs1.dbf sysaux.dbf
    $ sqlplus /nolog
    connect / as sysdba
    startup mount
    ... mounted ...
    alter database rename file '<dir>/sysaux.dbf' to '<dir>/undotbs01.dbf';
    alter database rename file '<dir>/undotbs1.dbf' to '<dir>/sysaux.dbf';
    alter database open;
    Of course, for those of you on Windows, it's going to cost four extra keystrokes ... mv ... move ;)
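    Using the file names from the original post (undotbs01.dbf actually holding the SYSAUX data, sysaux01.dbf actually holding the UNDO data), a hedged sketch of the same swap through new names, so that neither file can be overwritten at the OS level (the "_real" names and <dir> placeholder are illustrative):
    $ sqlplus /nolog
    connect / as sysdba
    shutdown immediate;
    exit;
    cd <.dbf files directory>
    mv undotbs01.dbf sysaux01_real.dbf     # this file really contains the SYSAUX data
    mv sysaux01.dbf undotbs01_real.dbf     # this file really contains the UNDO data
    $ sqlplus /nolog
    connect / as sysdba
    startup mount
    alter database rename file '<dir>/undotbs01.dbf' to '<dir>/sysaux01_real.dbf';
    alter database rename file '<dir>/sysaux01.dbf' to '<dir>/undotbs01_real.dbf';
    alter database open;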

  • Datafile issues in manual standby database

    Hi all,
    oracle - 10gR2
    OS - RHEL 2.6
    We have a primary and standby database set up using manual methods ('rsync') to ship archive logs from the primary to the standby, where they are applied. Today I created a new datafile on the primary and saw that it was not replicated on the standby (I was not aware of how this kind of setup behaves). Later I saw that the standby_file_management parameter is set to MANUAL.
    I then saw the following message in the standby alert log (the archive log application script on the standby had not been disabled):
    "Media Recovery Log /u01/app/oracle/oradata/bmprod/archivelog/1_1761_703415336.arc
    File #6 added to control file as 'UNNAMED00006' because
    the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
    The file should be manually created to continue.
    Some recovered datafiles maybe left media fuzzy
    Media recovery may continue but open resetlogs may fail"
    Later I copied the required file from the primary to the standby (after disabling the archive log application script on the standby), renamed the datafile 'UNNAMED00006' to the desired name, and restarted the archive log application script. I then saw the following messages:
    "ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 6 needs more recovery to be consistent
    ORA-01110: data file 6: '/u02/app/oracle/oradata/bmprod/BMTEST.dbf'"
    Now I am unable to figure out what I should do to come out of this situation. At least tell me whether this condition is worse than I thought (I suspect the standby database will have to be recreated).
    Any help is appreciated.......
    regards

    Hi,
    When the standby_file_management parameter is set to MANUAL and you create a datafile (or add one to a tablespace) on the primary, you must manually run the corresponding command on the standby side. If it is set to AUTO, the file is created automatically on the standby, in the corresponding location and with the same name you used when creating the datafile on the primary.
    As the parameter value is MANUAL in your case, you need to create the datafile on the standby side and then perform the recovery.
    Farhan.
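    A hedged sketch of what that usually looks like on the standby: the unnamed placeholder file typically ends up under $ORACLE_HOME/dbs (check the NAME column of V$DATAFILE for file #6 to be sure), and the ORACLE_HOME path below is an assumption:
    -- on the standby, with recovery stopped
    ALTER DATABASE CREATE DATAFILE
      '/u01/app/oracle/product/10.2.0/db_1/dbs/UNNAMED00006'
      AS '/u02/app/oracle/oradata/bmprod/BMTEST.dbf';
    -- avoid the same problem for future datafiles (if an spfile is in use)
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO' SCOPE=BOTH;
    Then restart the archive log application and let recovery catch the file up from the archived logs.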

  • Undo datafile issues

    Hi Dba,
    I am using Oracle 11g. I did a proper (clean) shutdown, then one of my undo datafiles got corrupted, so I removed it and tried to start the database. But the database does not open. Can anyone tell me why this undo datafile is needed? After a clean shutdown there should be no data left in the undo datafile, right? So why is it necessary for starting up the database?
    waiting for your response.
    Thanks in Advance.

    SQL> startup
    ORACLE instance started.
    Total System Global Area 648663040 bytes
    Fixed Size 1335108 bytes
    Variable Size 301990076 bytes
    Database Buffers 339738624 bytes
    Redo Buffers 5599232 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
    ORA-01110: data file 3:
    'D:\APP\ADMINISTRATOR\ORADATA\SIPDB\DATAFILE\O1_MF_UNDOTBS1_7H3FYSG8_.DBF'
    Hi,
    I am getting the above error while starting the database without the undo datafile, after a clean shutdown. After a clean shutdown the undo datafile should not contain any data that is still needed, right? So why does startup still ask for the undo datafile? If we do a proper shutdown, why is the undo datafile still necessary to open the database? Can anyone help me?
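    (There was no reply in this thread, but for completeness, a hedged sketch of one common way out. The file path comes from the error above; the new undo tablespace name and file name are made up. This discards whatever undo was in the lost file, so it is only reasonable after a genuinely clean shutdown.)
    STARTUP MOUNT
    -- OFFLINE DROP is required in NOARCHIVELOG mode; plain OFFLINE is enough in ARCHIVELOG mode
    ALTER DATABASE DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\SIPDB\DATAFILE\O1_MF_UNDOTBS1_7H3FYSG8_.DBF' OFFLINE DROP;
    ALTER DATABASE OPEN;
    -- create a replacement undo tablespace and switch to it
    CREATE UNDO TABLESPACE undotbs2
      DATAFILE 'D:\APP\ADMINISTRATOR\ORADATA\SIPDB\DATAFILE\undotbs02.dbf' SIZE 500M AUTOEXTEND ON;
    ALTER SYSTEM SET UNDO_TABLESPACE='UNDOTBS2' SCOPE=BOTH;
    DROP TABLESPACE undotbs1 INCLUDING CONTENTS;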

  • Unable to resize asm datafile even though I resized the (logical) datafile

    I have a bigfile datafile that went above 16 TB - this is causing me grief in a restore to a NetApp filer that has a 16 TB limit.
    So we went through the hassle of moving data around, and I issued the resize; the alert log shows:
    Mon Dec 10 21:15:06 2012
    alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15900000000000
    Completed: alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15900000000000
    Mon Dec 10 21:40:10 2012
    The datafile itself from v$datafile shows 15 TB - BUT the ASM file is still 18 TB in size.
    Should it not be the same - is this something others have faced, where the ASM file size doesn't match?
    Name     USERS1.303.777062961
    Type     DATAFILE
    Redundancy     MIRROR
    Block Size (Bytes)     8192
    Blocks     2281701377
    Logical Size (KB)     18253611016
    Linux, Exadata, 11.2.0.2 + psu
    An SR has been created but is not getting anywhere ("Why such a large file?", "Are you sure it's really not 18 TB?", etc.).
    Daryl

    So I just ran another test of my real datafile issue and it appears to have corrected itself..
    OEM Shows this: (Correct)
    Block Size (Bytes)     8192
    Blocks     1934814455
    Logical Size (KB)     15478515640
    select file_id, bytes from dba_data_files where tablespace_name = 'USERS1';
    select file#, bytes from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    select file#, trunc(bytes/1024/1024/1024) G from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15850000000000;
    select file_id, bytes from dba_data_files where tablespace_name = 'USERS1';
    select file#, bytes from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    select file#, trunc(bytes/1024/1024/1024) G from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
       FILE_ID                BYTES
            12   15,900,000,002,048
    1 row selected.
         FILE#                BYTES
            12   18,691,697,672,192    <<<< CAUSING ME MUCH MUCH GRIEF!!
    1 row selected.
         FILE#          G
            12      17408
    1 row selected.
    Database altered.
       FILE_ID                BYTES
            12   15,850,000,007,168
    1 row selected.
         FILE#                BYTES
            12   15,850,000,007,168
    1 row selected.
         FILE#          G
            12      14761
    1 row selected.
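    For anyone hitting the same mismatch, a quick way to compare the two sides is to query V$DATAFILE from the database instance and V$ASM_FILE from the ASM instance (the file number 303 below is taken from the OMF name users1.303.777062961; treat the queries as a sketch):
    -- from the database instance
    SELECT file#, bytes/1024/1024/1024 AS gb
      FROM v$datafile
     WHERE name = '+DATA/pcinf/datafile/users1.303.777062961';
    -- from the ASM instance: BYTES is the logical file size,
    -- SPACE is the physical space allocated, including mirror copies
    SELECT group_number, file_number,
           bytes/1024/1024/1024  AS logical_gb,
           space/1024/1024/1024  AS allocated_gb
      FROM v$asm_file
     WHERE file_number = 303;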

  • What is PID and how to set it in ORADEBUG?

    I want to run ORADEBUG, but I don't know how to set the PID.
    My system is Windows Server 2003.
    Any help will be appreciated.
    Thanks.

    Hi, yingkuan,
    Thanks.
    How can I find which PID is the one I want? For example,
    six days ago I ran SQL*Plus to add a new datafile to the tablespace,
    and it is still running now.
    How can I find the PID for this process?
    BTW: thank you very much for answering my other questions.
    But I still have not figured out the "add new datafile" issue.
    There is some one else from the IT dept. working on that for me.
    Thanks,
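    (Not part of the original replies, but as a hedged sketch: the Oracle PID of a session is usually found by joining V$SESSION to V$PROCESS, and can then be passed to ORADEBUG. The username filter below is just an example.)
    SELECT s.sid, s.serial#, p.pid AS ora_pid, p.spid AS os_pid, s.program, s.status
      FROM v$session s
      JOIN v$process p ON p.addr = s.paddr
     WHERE s.username = 'SYSTEM';
    -- then, connected AS SYSDBA in SQL*Plus:
    -- ORADEBUG SETORAPID <ora_pid>
    -- (or ORADEBUG SETOSPID <os_pid>; on Windows the SPID is a thread id)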

  • Oracle error message categorization

    Good morning,
    Do you know of any document / URL classifying ORACLE error messages by category (for example connection issues / memory issues / datafile issues, etc.)?
    The purpose is to grep the log file and generate a call in case of a specific Oracle error.
    In case of a connection issue - a user not allowed to connect to a schema - I should raise a call.
    In case of an autoextend issue, I should raise a call.
    Thank you.

    Arrgh,
    I found this doco, but I was hoping something else existed.
    Thanks
    --Simon

  • Question restore OCFS2 to ASM...

    Hello,
    I have the following question:
    I took an RMAN backup of a database stored on OCFS2. Can I restore it into a database created with ASM storage?
    If not this way, does anyone know a procedure other than export and RMAN?
    Thanks! by the attention.

    OK, you are on an OCFS2 and ASM environment, please don't tell me more, let me see through my crystal ball, I can see you are working on a RAC environment on a RHEL4 x86 OS and a 10gRel2 Database, am I right? Just guessing by odds, but next time just pretend we cannot see what you see and make sure you clearly specify your environment at the beginning of the thread.
    If you are trying to restore from an RMAN backup taken on OCFS into an ASM-based database, assuming your environment is properly configured, first restore the controlfile:
    RMAN> RESTORE CONTROLFILE FROM '<Controlfile Backup path/name>';
    Mount the database:
    RMAN> ALTER DATABASE MOUNT;
    Restore from your backup set to the ASM:
    RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
    Switch to the copy:
    RMAN> SWITCH DATABASE TO COPY;
    Since rman doesn't handle temporary datafiles, issue:
    SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE <Your Temporary TS> ;
    Get rid of the old OCFS temporary tablespace and make this the default.
    Take care of the redo log files, which are still on the OCFS environment by means of ALTER / DROP / CREATE commands so the redo logs are wiped off from the OCFS and migrated to the ASM environment.
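    A rough sketch of that redo log step (the group numbers and sizes below are assumptions; a group can only be dropped once it is INACTIVE and archived):
    -- create new groups inside the ASM disk group
    ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('+DATA') SIZE 200M;
    -- switch until the old OCFS groups become INACTIVE, then drop them
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;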
    Note. This is an outline, not a tutorial nor a HowTo document, so further research has to be made from the rman reference manual before you apply these commands.
    ~ Madrid
    http://hrivera99.blogspot.com

  • Sysaux tablespace issue-missing sysux datafile

    hi,
    I have upgraded a database from 9i to 10g. The upgrade was successful and the DB was in normal use. Due to some functionality testing, the database was restored to a state from the backup taken after the upgrade. But while restoring the database, I forgot to include the path of the SYSAUX tablespace datafile in the controlfile re-creation script.
    I didn't notice this issue for a long time, until I realised the mistake I had made.
    The alert logs showed errors such as:
    ORA-01157: cannot identify/lock data file 124 - see DBWR trace file
    ORA-01110: data file 124: '/oracle/ora/dbs/MISSING00141'
    ORA-27037: unable to obtain file status
    Since the path of the SYSAUX datafile was not referenced in the script, I thought of adding it now, and I issued the following command:
    alter tablespace sysaux add datafile '<path>/sysaux01.dbf' reuse;
    I issued this because the datafile was already present and I thought of just reusing it.
    But what happened is that this datafile was added as an additional datafile for the SYSAUX tablespace, which means SYSAUX is still referring to '/oracle/ora/dbs/MISSING00141' as one of its datafiles.
    One important thing is that there is no such datafile, i.e. '/oracle/ora/dbs/MISSING00141', when I check inside the dbs folder.
    The path is referenced as part of the SYSAUX tablespace, but the physical file is not found.
    So I took this datafile offline by issuing
    alter database datafile '/oracle/ora/dbs/MISSING00141' offline drop
    but the problem I am facing now is that AWR is not opening up, and it throws an error stating that the first datafile of the SYSAUX tablespace, '/oracle/ora/dbs/MISSING00141', is not found.
    ORA-00376: file 124 cannot be read at this time
    ORA-01110: data file 141: '/oracle/ora/dbs/MISSING00141'
    ORA-06512: at line 21.
    Any solution to the problem would be appreciated.
    thanks,
    ahmed

    Hi, I've got exactly the same problem up to the point where I try to offline drop the MISSING0009 datafile.
    The command says it completed OK, but the phantom datafile does not go away and is still reported in dba_data_files, although the online_status is RECOVER and the status is ONLINE.
    I'm not sure how to get rid of the entry if the offline drop command doesn't work. I've tried recreating the control file and deleting the entry for this datafile too, but somehow it returned after that as well!
    I'm also not sure what to do when I do get rid of it, since I can't recover the old sysaux01.dbf datafile: it's not consistent with the database, because a week has passed in between and we're not running archive logging.
    I would assume that all the stuff usually put in the SYSAUX tablespace is therefore lost, but I don't think we use anything that normally puts stuff in there; I've only noticed this since I was trying to install EM.
    Do I have to rebuild the database from scratch? Or any ideas on how I can recover the situation? Can I do something like startup upgrade and re-run the upgrade scripts, which will put their stuff back into SYSAUX...?
    Many thanks for ideas
    Rob
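    (Not in the original replies, but a quick diagnostic sketch for this situation: list exactly which files the controlfile still associates with SYSAUX and what the datafile headers say. The file numbers 124 and 141 come from the errors quoted above.)
    SELECT file_id, file_name, online_status, status
      FROM dba_data_files
     WHERE tablespace_name = 'SYSAUX';
    SELECT file#, name, status, error
      FROM v$datafile_header
     WHERE file# IN (124, 141);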

  • Creating datafile performance issues

    Hi guys,
    I'm using Oracle RAC 10g with 3 nodes, and I needed to create a new datafile in the production environment; I then had some performance issues. Do you have any idea what the cause of this could be?
    CREATE TABLESPACE "TBS_TDE_CMS_DATA"
    DATAFILE
    '+RDATA' SIZE 1000M REUSE AUTOEXTEND ON NEXT 1000M MAXSIZE 30000M
    LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    Any hint would be helpful.
    thanks.

    Why are you creating a 1 GB initial datafile? I think that is too big. Remember that whenever you add a datafile to the database, Oracle first formats the complete datafile, so the operation can cause a performance hit that depends on the datafile size. Check whether you are seeing any I/O problems or spikes in I/O.
    You can try adding a datafile with default values and check how much time it takes, as in the sketch below.
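    (A sketch of that last suggestion; the sizes are just examples, not recommendations.)
    -- smaller initial size, same tablespace attributes
    CREATE TABLESPACE "TBS_TDE_CMS_DATA"
      DATAFILE '+RDATA' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30000M
      LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    -- or time how long adding a modest datafile to an existing tablespace takes
    SET TIMING ON
    ALTER TABLESPACE "TBS_TDE_CMS_DATA" ADD DATAFILE '+RDATA' SIZE 100M;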

  • Windows server locking issue on datafiles

    Hi,
    We have an Oracle database (10.2.0.3) on a Windows 2003 (R2 Standard Edition) server. The issue is that the database is getting shut down due to some errors. When we check the alert.log, we understand that some OS process is locking the database files, due to which the database gets shut down every day around 3.15 - 3.45 am.
    The error as per alert.log is as below;
    Errors in file f:\oracle\product\10.2.0\admin\orclint\bdump\orclint_ckpt_7680.trc:
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    When we check the orclint_ckpt_7680.trc file, we have;
    Dump file f:\oracle\product\10.2.0\admin\orclint\bdump\orclint_ckpt_7680.trc
    Sat Nov 19 03:22:15 2011
    ORACLE V10.2.0.3.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows NT Version V5.2 Service Pack 2
    CPU : 2 - type 586, 2 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:475M/2047M, Ph+PgF:934M/3434M, VA:1266M/2047M
    Instance name: orclint
    Redo thread mounted by this instance: 1
    Oracle process number: 7
    Windows thread id: 7680, image: ORACLE.EXE (CKPT)
    *** 2011-11-19 03:22:15.440
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-11-19 03:22:15.409
    *** SESSION ID:(165.1) 2011-11-19 03:22:15.409
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    error 221 detected in background process
    ORA-00221: error on write to control file
    ORA-00206: error in writing (block 3, # blocks 1) of control file
    ORA-00202: control file: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCLINT\CONTROL02.CTL'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    We have disabled the antivirus and checked; the problem still exists. We need additional help in identifying the OS process that might be locking the datafiles. Also, when we check the Windows Event Log, it shows nothing related to this.
    Please help us with the issue.
    Thanks,
    Rahul

    Try to use the handle tool http://technet.microsoft.com/en-us/sysinternals/bb896655 .
    Example with Oracle XE on Windows 7 with administrator privileges:
    c:\>handle.exe dbf | findstr  SYSTEM
    oracle.exe         pid: 2420   type: File           73C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           744: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: Section        758: \BaseNamedObjects\C:_ORACLEXE_APP_ORACLE_ORADATA_XE_SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           81C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           820: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           844: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           848: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8BC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8C0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8F0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           8F4: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           9AC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           9B0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A18: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A1C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           A9C: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           AA0: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           AD8: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF
    oracle.exe         pid: 2420   type: File           ADC: C:\oraclexe\app\oracle\oradata\XE\SYSTEM.DBF

  • Template Datafile Location issue

    Hello,
    I previously created a new database template based on an existing database. I later deleted this template. Now, when I attempt to create a new database with DBCA using ANY template, e.g. General Purpose, I receive a DBCA screen that was not displayed before.
    The screen is labelled; Template Datafile Location
    It gives the following message:
    The template datafile “{ORACLE_HOME}\assistants\dbca\templates\Data_Warehouse.dfj” specified in the template doesn’t exist. Specify new location of the template datafile:
    I looked in the registry and made sure the parameter for {ORACLE_HOME} is the same as the location of the 'assistants' folder.
    The template files currently in the {ORACLE_HOME}\assistants\dbca\templates\ location are:
    Data_Warehouse.dbc
    General_Purpose.dbc
    New_Database.dbt
    Transaction_Processing.dbc
    Transaction_Processing.dfj
    Any useful advice would be appreciated. Thank you

    Hello,
    My mistake, the files are *.dfj as you say.
    I am using Oracle 9i on a Windows server. I will look into the files on the disks and I will check whether any of my colleagues have these files on their systems too.
    Thanks again.
    I don't know what may have occurred to delete them!!!

  • Is it possible to recover by renaming a stdby datafile for space issues

    Hi All,
    On one of the mount points on the standby database I don't have enough space, and the MRP is stopped with the error below.
    ORA-01237: cannot extend datafile 66
    My question is: can I follow the steps below to recover the standby? I have all the archive logs on the standby and there is no archive log gap.
    I also have space on another mount point on the standby.
    1. Shut down the standby DB and copy the affected datafile from the current mount point to the mount point that has free space.
    2. Rename the datafile.
    3. Start the MRP process again.
    Can i follow the above steps?
    DB version:10.2.0.5
    OS:AIX

    Hi,
    DB_FILE_NAME_CONVERT needs to be set when you are duplicating from the primary to the standby.
    There is no need for that parameter here; make sure STANDBY_FILE_MANAGEMENT is set to AUTO, and that should be fine.
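    As a hedged sketch of the steps from the question (the paths are placeholders; note that ALTER DATABASE RENAME FILE is only permitted on a standby while STANDBY_FILE_MANAGEMENT is set to MANUAL, so it is switched temporarily and set back to AUTO at the end):
    -- on the standby
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SHUTDOWN IMMEDIATE
    -- copy the affected datafile to the mount point with free space at the OS level, then:
    STARTUP MOUNT
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
    ALTER DATABASE RENAME FILE '/u01/old_path/datafile66.dbf' TO '/u02/new_path/datafile66.dbf';
    ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;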

  • EM for tablespace datafile autoexend issue

    Dear Support,
    I need your advice: can I configure a tablespace datafile to autoextend when it reaches 85 percent full instead of 100 percent? Do you know where I can find more information on why datafile autoextend only kicks in at 100 percent? Hope to hear from you soon. Thanks

    Autoextend is a 'last escape' measure. You should not use it as the normal way to let files grow; you should use correctly sized files and extents. Autoextend is only there to make sure that you don't get immediate problems when your files are full.
    I have also thought about this, so I tried the following (see the sketch after this reply):
    I created a tablespace with 3 files, all 3 autoextending and equal in size, locally managed and uniform in extent size (notice the difference between an extent and an extended file!). Autoextend NEXT and MAXSIZE were also made equal for all files.
    Then I created tables in the tablespace (forcing the creation of one extent per table, so the tablespace fills up without having to insert real data). The first three fit in the tablespace as it was created. When creating the fourth table, it didn't fit any more and the second file got extended (why this one? It looks like an arbitrary decision by Oracle). After that, every second table I created extended the second file (and only that file) of the tablespace. (The reason this happened after every second table is that Oracle extended the file so that it would fit two extents, i.e. two new tables of one extent each.)
    My quick conclusion: don't use autoextend, except as a last resort. You should use correctly sized files, tablespaces and extents. Autoextend is only there to make sure you don't have to do 'maintenance in the middle of the night' (free interpretation of Tom Kyte (asktom.com) :) ).
    BTW, if you really want to do (dynamic) striping in Oracle, the only way I know of is to create multiple tablespaces, each on another disk, and create a hash or composite partitioned table where each of the hash partitions is located in a different tablespace. Normal tables are never striped by Oracle! Alternatively, you can let the OS stripe one datafile over multiple disks. You could also every now and then recreate the tablespace over multiple files so that data from a normal table is distributed evenly, but this is quite a hassle.
    I have also had a look through some Oracle books and on the net and, like you, can't find a definitive answer regarding how the Oracle DB decides which datafile to extend.
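    A sketch of the kind of test tablespace described above (the names, paths and sizes are made up):
    CREATE TABLESPACE test_autoext
      DATAFILE '/u01/oradata/ORCL/test_ae01.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M,
               '/u01/oradata/ORCL/test_ae02.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M,
               '/u01/oradata/ORCL/test_ae03.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 100M
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M;
    -- each table allocates one 5M uniform extent with no inserts needed
    -- (on 11.2+ add SEGMENT CREATION IMMEDIATE if deferred segment creation is enabled)
    CREATE TABLE t1 (c NUMBER) TABLESPACE test_autoext;
    CREATE TABLE t2 (c NUMBER) TABLESPACE test_autoext;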

  • Issue shrinking a datafile

    Is it possible to shrink a datafile if I have extents near the end? Is there any way to move those extents? I get the high water mark error when I try to shrink the datafile, even though I have a lot of free space.

    Hi,
    Try this query from http://asktom.oracle.com/pls/asktom/f?p=100:11:4177261019303253::::P11_QUESTION_ID:153612348067
    ----------- maxshrink.sql ----------------------------------
    set verify off
    column file_name format a50 word_wrapped
    column smallest format 999,990 heading "Smallest|Size|Poss."
    column currsize format 999,990 heading "Current|Size"
    column savings  format 999,990 heading "Poss.|Savings"
    break on report
    compute sum of savings on report
    column value new_val blksize
    select value from v$parameter where name = 'db_block_size'
    /
    select file_name,
           ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
           ceil( blocks*&&blksize/1024/1024) currsize,
           ceil( blocks*&&blksize/1024/1024) -
           ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
    from dba_data_files a,
         ( select file_id, max(block_id+blocks-1) hwm
             from dba_extents
            group by file_id ) b
    where a.file_id = b.file_id(+)
    /
    column cmd format a75 word_wrapped
    select 'alter database datafile '''||file_name||''' resize ' ||
           ceil( (nvl(hwm,1)*&&blksize)/1024/1024 )  || 'm;' cmd
    from dba_data_files a,
         ( select file_id, max(block_id+blocks-1) hwm
             from dba_extents
            group by file_id ) b
    where a.file_id = b.file_id(+)
      and ceil( blocks*&&blksize/1024/1024) -
          ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0
    /
    Regards,
    SK
