Trying to shrink datafiles after deleting a huge amount of records

Dear Sir,
I have deleted more than 50 million records, along with the table's partitions for the previous year.
Now I'm trying to shrink a datafile that has more than 10 GB of free space, but the resize still fails with
ORA-03297: file contains used data beyond requested RESIZE value.
How can we shrink these datafiles? Otherwise, what is the best way to delete a huge amount of records and reclaim the disk space?
Thanks and best regards,
Ali Labadi

Hi,
You could read this article by Jonathan Lewis:
http://jonathanlewis.wordpress.com/2010/02/06/shrink-tablespace
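
ORA-03297 means some segment still holds an extent above the requested resize point. A minimal sketch of how to locate the blocking segments, assuming the datafile is file 4 (the file number, path, and target size below are examples, not values from the question):

SELECT owner, segment_name, segment_type,
       MAX(block_id + blocks - 1) AS last_block
FROM   dba_extents
WHERE  file_id = 4                -- the file you are trying to resize
GROUP  BY owner, segment_name, segment_type
ORDER  BY last_block DESC;

-- After moving or rebuilding the segments at the top of the file:
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 5G;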

Similar Messages

  • How to reduce the database size after deleting a huge amount of rows?

    Hi,
    I have a large database. I removed almost half of the data/rows. Now I need to reduce the size of the database file, as I need more disk space for it.
    What should I do, in detail, please?
    Thanks.

    Deleting that much data will have put ghost cleanup into action. You will not be able to free space with a shrink operation until it completes, so wait for it to finish and then start shrinking. Note that shrinking causes fragmentation, and
    you will have to rebuild your indexes after the shrink is complete.
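    A minimal sketch of the shrink itself, assuming the data file's logical name is MyDb_Data and a 10 GB target (both are placeholders; check the real name in sys.database_files):
    USE MyDb;
    -- Shrink the data file to roughly 10 GB (the target is given in MB)
    DBCC SHRINKFILE (MyDb_Data, 10240);
    -- Shrinking fragments indexes, so rebuild them afterwards, for example:
    ALTER INDEX ALL ON dbo.MyTable REBUILD;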

  • Why is the Apple store so expensive, and why can't Apple users access it for free after spending a huge amount on an Apple phone?

    Why is the Apple store so expensive, and why can't Apple users access it for free after spending a huge amount on an Apple phone?

    The Apple store is correct. The warranty is not international, and Apple will not accept or return iPhones shipped from a different country.  You need to ship the phone to somebody in Hong Kong who can take it in to Apple for repair, or pay a third party repair shop in the Philippines to fix it.

  • What is the best practice for deleting a large amount of records?

    hi,
    I need your suggestions on the best practice for regularly deleting a large amount of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To keep the database size from growing too fast, I need a way to remove, every day, all records older than 3 days.
    For an on-premise SQL Server I could use a SQL Server Agent job but, since SQL Azure does not support Agent jobs yet, I have to use a Web job scheduled to run every day to delete all the old records.
    To prevent table locking when deleting too many records at once, my web job code limits the deleted records to 5000 per run and the batch delete count to 1000 each time it calls the purge stored procedure:
    1. Get the total count of old records (older than 3 days).
    2. Compute the total iterations: iterations = (total count / 5000).
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      -- Collect the ids of up to @MaxCount expired rows
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY);
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable]
      WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());
      -- Delete those rows in batches of @BatchCount until none remain
      DECLARE @RowsDeleted INT = 1;
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01';
       DELETE TOP (@BatchCount) FROM [MyTable]
       WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
       SET @RowsDeleted = @@ROWCOUNT;
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting those 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, so the total time comes to around
    11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'CreateTime'.
    2. Peak-time inserts (avgN) are N times the average (avg). E.g., if the average per hour is 10,000 and peak time is 5 times more, that gives 50,000. This doesn't have to be precise.
    3. The desired maximum of records deleted per batch is 5,000; this doesn't have to be exact either.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes per slice.
    4. Create a delete statement and a loop in which iteration I (I from 1 to 1,000) deletes records whose creation time is earlier than (today - 6 days) + 4.32 * I minutes, so the cutoff advances one slice at a time until it reaches the 3-day boundary.
    This way the number of records deleted in each batch is neither even nor known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch is very fast. A sketch of this loop follows.
    Frank
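    A minimal T-SQL sketch of this time-sliced loop, using the same placeholder names ([MyTable], [CreateTime]) as the question; the 259-second step approximates the 4.32-minute slice:
    DECLARE @slice INT = 1, @slices INT = 1000;
    DECLARE @boundary DATETIME = DATEADD(DAY, -3, GETDATE());  -- keep 3 days
    DECLARE @cutoff DATETIME;
    WHILE (@slice <= @slices)
    BEGIN
        -- The cutoff advances ~4.32 minutes per slice, never past the boundary
        SET @cutoff = DATEADD(SECOND, 259 * @slice - 259200, @boundary);
        IF (@cutoff > @boundary) SET @cutoff = @boundary;
        DELETE FROM [MyTable] WHERE [CreateTime] < @cutoff;
        SET @slice = @slice + 1;
    END
    Each DELETE touches only one small time slice, so it finishes quickly and holds locks only briefly; that is what makes many small batches cheaper here than a few large ones.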

  • I try to download iTunes after deleting it and it says it

    I am trying to download iTunes after deleting a corrupted copy. I have deleted everything I can find related to Apple and iTunes, but every time I try to download a new version, it says "thanks for downloading iTunes" but it never actually downloads. I feel like it thinks there is already a version on my computer, which is preventing the actual download. The OS is Windows Vista Home Premium.

    Clear your cache and try again, or switch to a better browser. Hint: Google Chrome
    tt2

  • Shrink table after delete

    Hi,
    I want to remove some data from an Oracle table to free up some space on the hard drive.
    I use a delete clause with a where condition to delete the data, but afterwards, as I understand it, I need some kind of shrink clause to shrink the table.
    I can't use truncate because it doesn't allow a where clause. I need something like:
    delete from table1 where condition1;
    shrink table1; -- free up space

    Simply deleting rows doesn't make blocks eligible for deallocation. Try this:
    truncate table mette reuse storage;
    CALL DBMS_STATS.GATHER_TABLE_STATS(USER,'mette');
    SELECT t.blocks used, s.blocks allocated
    FROM   user_tables t JOIN user_segments s ON s.segment_name = t.table_name
    WHERE  t.table_name = 'METTE';
       USED ALLOCATED
          0     20480
    alter table mette deallocate unused;
    CALL DBMS_STATS.GATHER_TABLE_STATS(USER,'mette');
    SELECT t.blocks used, s.blocks allocated
    FROM   user_tables t JOIN user_segments s ON s.segment_name = t.table_name
    WHERE  t.table_name = 'METTE';
       USED ALLOCATED
          0         8
    alter table mette allocate extent;
    -- ...same query...
       USED ALLOCATED
          0        16
    alter table mette deallocate unused;
    -- ...same query...
       USED ALLOCATED
          0         8

  • Hard drive space unaffected after deleting huge user

    I deleted a user and selected the option to have their home folder removed, but apparently the content is still on my computer. My account uses 75 GB of the 250 GB drive, yet Disk Utility says I only have 90 GB of free space. I am using Yosemite on a 2013 MacBook Pro Retina. I tried to find the deleted user's home folder and could not find it under Computer --> Users. Any suggestions? Thanks

    Those are "expendable" snapshots created when your Time Machine drive is unavailable. If you connect your Time Machine drive and let it work they will be removed automatically.
    If you run low on space before Time Machine can run again, they will begin to be discarded to give you the space you need.

  • How to shrink datafiles after dropping a table

    Hello Gurus,
    I have just dropped a big table, but I see that the datafile hasn't changed.
    Could you help me shrink the file, too?
    Thanks!
    Agat

    Agat,
    Yesterday there was the same question in this forum: Shrink tablespace ? how can i do that?
    Nicolas.
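    One thing worth checking first (a sketch; the datafile path and size are placeholders): on 10g and later a dropped table goes to the recycle bin, so its space is not released until the table is purged.
    -- Release the dropped table's space from the recycle bin
    PURGE RECYCLEBIN;
    -- Then try to shrink the datafile itself
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 2G;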

  • Need help in designing the fetching of a huge amount of records from a DB

    Hello,
    I have a standard application which fetches data from MS SQL Server using this standard code:
    CallableStatement cs;
    Connection connection;
    cs = connection.prepareCall("{call " + PROCREQUEST_SP + "(?,?,?)}");
    cs.execute();
    ResultSet rs = cs.getResultSet();
    while (rs.next())
    Most of the queries the users run return no more than 10,000 records, and the code works OK as is.
    But the database contains more than 7 million records, and I would like to let the user see all of them if he wants to.
    If I allow that with the current code, I eventually get a java.lang.OutOfMemoryError. How can I improve my code to handle this kind of task?

    Hello Roy,
    Yes, the DB is queried again and only the data chunk asked for is fetched. E.g., if the user is on page 2, containing records 1000-2000, and clicks on page 3, only records 2000-3000 should be fetched from the DB. This logic should be handled by your query.
    This saves a lot of memory, so you won't face the OutOfMemoryError.
    Regards,
    Rahul
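    A minimal sketch of such a paging query on SQL Server 2012 or later (the table and column names are placeholders, and a stable ORDER BY key is assumed):
    -- Fetch page 3 at 1,000 rows per page; only those rows cross the wire
    SELECT [RecordId], [Payload]
    FROM   [MyTable]
    ORDER  BY [RecordId]
    OFFSET 2000 ROWS FETCH NEXT 1000 ROWS ONLY;
    On older servers the same effect can be achieved with ROW_NUMBER() in a subquery; either way the application only ever materializes one page.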

  • Loading a huge amount of records every second

    Hi,
    11g version.
    I have a source file located in a Unix file system.
    Every SECOND, about 5,000 records are APPENDED to this logfile.
    I need to load those new records into the database as fast as possible, close to real time.
    The file is written by the RSYSLOG daemon, and it should be parsed and loaded
    into the database as I mentioned (near online).
    What is the best practice for loading the data from the logfile into the database?
    Since the records are APPENDED, it seems external tables can't be used, because the loader
    would have to "remember where it left off" and seek around in the file.
    Thanks

    You do not specify what format the data within the file is in, which might have some bearing on the best solution.
    If significant parsing of the input data is required, then I think this is a job for either a Pro*C program or a Java program.
    How do you handle purging or re-allocation of the input file if it is appended to constantly? Eventually it will fill the disk. Do you start a new file every day at midnight, or what?
    How is the data appended to the log file? What process does this? It would seem that modifying this process might be a consideration.
    HTH -- Mark D Powell --
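    One possible workaround for the "remember where it left off" problem, sketched under assumptions (the directory objects, script name, and single-column layout are all hypothetical): an 11g external table may name a PREPROCESSOR, and that script, rather than the database, can keep the byte offset and emit only the lines appended since the last read.
    CREATE TABLE syslog_ext (
      msg VARCHAR2(4000)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY log_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        -- hypothetical script that remembers its offset between runs
        PREPROCESSOR exec_dir:'tail_new.sh'
        FIELDS TERMINATED BY '|'
      )
      LOCATION ('rsyslog.log')
    );
    A scheduled INSERT ... SELECT from syslog_ext would then pick up only the newly appended records.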

  • Shrink Oracle Table after Deletion

    A few database tables are very big. An Oracle table still holds the disk space occupied by deleted records, according to http://stackoverflow.com/questions/6540546/oracle-10g-table-size-after-deletion-of-multiple-rows. Re: shrink table after delete teaches the following commands to release the idle space from the table.
    ALTER TABLE TABLE1 DEALLOCATE UNUSED;
    ALTER TABLE TABLE1 ENABLE ROW MOVEMENT;
    ALTER TABLE TABLE1 SHRINK SPACE;
    ALTER TABLE TABLE1 MOVE;
    1. Are the commands feasible?
    2. Are they safe?
    3. What will be the impacts of running the commands?
    4. Is there any other workable safe approach shrinking a table?

    Hi,
    I advise using the shrink table operation.
    The tablespace your table belongs to must use ASSM (Automatic Segment Space Management) for shrink to work.
    Shrink is safe, but you need to run two commands:
    1) ALTER TABLE TABLE1 SHRINK SPACE COMPACT; (this is the long operation which moves the data, but it can be done online)
    2) ALTER TABLE TABLE1 SHRINK SPACE; (this is quick if you ran SHRINK SPACE COMPACT before; it only shifts the High Water Mark. Be careful: if you don't run SHRINK SPACE COMPACT first, your table will be locked for a long time.)
    Another point is that execution plans are calculated using the HWM, so shrinking the table (the 2nd command) will invalidate all the cursors in the shared pool that reference the table, and their execution plans need to be recalculated (often not a problem). A sketch of the full sequence follows.
    Regards,
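    The full sequence as a sketch, using TABLE1 from the question (shrink physically relocates rows, hence the row-movement step):
    ALTER TABLE TABLE1 ENABLE ROW MOVEMENT;   -- required: shrink relocates rows
    ALTER TABLE TABLE1 SHRINK SPACE COMPACT;  -- slow, online: compacts the rows
    ALTER TABLE TABLE1 SHRINK SPACE;          -- fast after COMPACT: lowers the HWM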

  • How to insert a large amount of records at a time into Oracle

    Hi, I'm Dilip. I'm a newbie to Oracle. For practice purposes I got some SQL code with a huge amount of records in text format, which I need to copy and paste into SQL*Plus in Oracle 9i. But when I try to paste into SQL*Plus, I'm unable to paste more than 50 lines of code at a time. One of the text files contains 80 thousand lines of record code I need to paste. Please help me. Here is the link to the text file I'm using: http://www.mediafire.com/view/?4o9eo1qjd15kyib . Any kind of help will be much appreciated.

    Run the file as a script instead of pasting it:
    sqlplus user1/pass1
    @sql_text_file.sql
    Doing the above will execute all the SQL statements in the text file.

  • I deleted pictures from my camera roll using my computer since I didn't have enough space to back up my iPad in my 5 GB of space, but after deleting a huge amount of pictures the size of the camera roll is still the same as when I started.

    I deleted pictures from my camera roll using my computer, since I didn't have enough space to back up my iPad in my 5 GB of space. But after deleting a huge amount of pictures, the size of the camera roll is still the same as when I started. Some of these same pictures are still in my Photo Stream. I deleted some of them, this time from the iPad itself, but the size of the camera roll remains the same. Should I not have removed them from the camera roll using the computer?

    The links below have instructions for deleting photos.
    iOS and iPod: Syncing photos using iTunes
    http://support.apple.com/kb/HT4236
    iPad Tip: How to Delete Photos from Your iPad in the Photos App
    http://ipadacademy.com/2011/08/ipad-tip-how-to-delete-photos-from-your-ipad-in-the-photos-app
    Another Way to Quickly Delete Photos from Your iPad (Mac Only)
    http://ipadacademy.com/2011/09/another-way-to-quickly-delete-photos-from-your-ipad-mac-only
    How to Delete Photos from iPad
    http://www.wondershare.com/apple-idevice/how-to-delete-photos-from-ipad.html
    How to: Batch Delete Photos on the iPad
    http://www.lifeisaprayer.com/blog/2010/how-batch-delete-photos-ipad
    (With iOS 5.1, use 2 fingers)
    How to Delete Photos from iCloud’s Photo Stream
    http://www.cultofmac.com/124235/how-to-delete-photos-from-iclouds-photo-stream/
     Cheers, Tom

  • How do I reclaim the unused space after a huge data delete? (very urgent)

    Hello all,
    How do I reclaim the unused space after a huge data delete?
    alter table "ODB"."BLOB_TABLE" shrink space;
    This fails with an ORA-10662 error. Could you please help?

    'Shrink space' has requirements:
    shrink_clause
    The shrink clause lets you manually shrink space in a table, index-organized table or its overflow segment, index, partition, subpartition, LOB segment, materialized view, or materialized view log. This clause is valid only for segments in tablespaces with automatic segment management. By default, Oracle Database compacts the segment, adjusts the high water mark, and releases the recuperated space immediately.
    Compacting the segment requires row movement. Therefore, you must enable row movement for the object you want to shrink before specifying this clause. Further, if your application has any rowid-based triggers, you should disable them before issuing this clause.
    Werner
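    Since the segment here is a LOB table, one hedged possibility (the column name blob_col is hypothetical) is to shrink the LOB segment itself after enabling row movement:
    ALTER TABLE "ODB"."BLOB_TABLE" ENABLE ROW MOVEMENT;
    -- Shrink the LOB segment for one column (basicfile LOBs)
    ALTER TABLE "ODB"."BLOB_TABLE" MODIFY LOB (blob_col) (SHRINK SPACE);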

  • I used Migration Assistant to load a Time Machine backup onto a new Mac. The first TM backup after that took some time, perhaps not surprising. But the backups thereafter have all taken hours, with huge amounts of "indexing" time. Time to reload TM?

    I used Migration Assistant to load a Time Machine backup onto a new Mac. The first TM backup after that took some time, perhaps not surprising. But the backups thereafter have all taken hours, with huge amounts of "indexing" time. Time to reload TM?

    Does every backup require lots of indexing?  If so, the index may be damaged.
    Try Repairing the backups, per #A5 in Time Machine - Troubleshooting.
    If that doesn't help, see the pink box in #D2 of the same link.
