Delete large amounts of data from a table

I have a table with about 10 fields that stores info for customers. Over time, as we have added more customers, the table has grown to about 14 million rows. As the data comes in, a service constantly inserts a row into the table. 90% of the data is not relevant:
I don't need data older than three months, but the most recent data is used to generate tracking reports. My goal is to write a SQL script that purges the data older than a month.
Here is my problem: I can NOT use TRUNCATE TABLE, as I would lose everything. Yesterday I wrote a DELETE statement with a WHERE clause. When I ran it on a test system it locked up my table and the simulated GPS inserts were intermittently failing. Also,
my transaction log grew to over 6 GB as it attempted to log each delete.
My first thought was to delete the data a little at a time, starting with the oldest first, but I was wondering if there is a better way. I am expecting solutions apart from these:
1. Create a temp DB, copy the required data into the temp DB, and truncate the original.
2. Deleting in chunks (i.e., 1,000-10,000 records at a time).
3. Setting the recovery model to simple.

I agree with Satish (+1).
This is the right way to do it. Your database architect should think about this, and you can use partitioning by the condition you delete on (for example, by year if you are deleting old years' data).
Check this link for more details (although Satish already mentioned most of what you need): http://www.sqlservercentral.com/scripts/Truncate+Table/69506/
* In the link there is an SP named TRUNCATE_PARTITION which you can use; a rough sketch of the idea is below.
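A minimal sketch of a partition-based purge, assuming the table is range-partitioned by month on a date column; all object names here (pf_CustomerByMonth, dbo.CustomerTracking, dbo.CustomerTracking_Purge) are hypothetical:

-- find which partition holds rows older than the cut-off
DECLARE @cutoff date = DATEADD(MONTH, -1, CAST(GETDATE() AS date));
SELECT $PARTITION.pf_CustomerByMonth(@cutoff) AS cutoff_partition;

-- SQL Server 2016+: truncate just the old partitions (metadata-only, minimal logging)
TRUNCATE TABLE dbo.CustomerTracking WITH (PARTITIONS (1 TO 2));

-- older versions: switch the old partition into an empty staging table with the same
-- structure and filegroup, then truncate the staging table
ALTER TABLE dbo.CustomerTracking SWITCH PARTITION 1 TO dbo.CustomerTracking_Purge;
TRUNCATE TABLE dbo.CustomerTracking_Purge;

Because the switch and truncate are metadata operations, the inserts keep running and the transaction log stays small, unlike a row-by-row DELETE.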

Similar Messages

  • Copying large amount of data from one table to another getting slower

    I have a process that copies data from one table (big_tbl) into a very big archive table (vb_archive_tbl - 30 million records - a partitioned table). If there are fewer than 1 million records in big_tbl, the copy to vb_archive_tbl is fast (~10 min) and, more importantly, consistent. However, if the number of records in big_tbl is greater than 1 million, copying the data into vb_archive_tbl is very slow (30 min - 4 hours) and very inconsistent. Every few days the time it takes to copy the same amount of data grows significantly.
    Here's an example of the code I'm using, which uses BULK COLLECT and FORALL INSERT to copy the data.
    I occasionally change 'LIMIT 5000' to see performance differences.
    DECLARE
        TYPE t_rec_type IS RECORD (fact_id    NUMBER(12,0),
                                   store_id   VARCHAR2(10),
                                   product_id VARCHAR2(20));
        TYPE CFF_TYPE IS TABLE OF t_rec_type INDEX BY BINARY_INTEGER;
        T_CFF CFF_TYPE;
        CURSOR c_cff IS
            SELECT * FROM big_tbl;
    BEGIN
        OPEN c_cff;
        LOOP
            FETCH c_cff BULK COLLECT INTO T_CFF LIMIT 5000;
            EXIT WHEN T_CFF.COUNT = 0;  -- guard against an empty final fetch
            FORALL i IN T_CFF.first .. T_CFF.last
                INSERT INTO vb_archive_tbl VALUES T_CFF(i);
            COMMIT;                     -- commit once per batch of up to 5000 rows
            EXIT WHEN c_cff%NOTFOUND;
        END LOOP;
        CLOSE c_cff;
    END;
    Thanks you very much for any advice
    Edited by: reid on Sep 11, 2008 5:23 PM

    Assuming that there is nothing else in the code that forces you to use PL/SQL for processing, I'll second Tubby's comment that this would be better done in SQL. Depending on the logic and partitioning approach for the archive table, you may be better off doing a direct-path load into a staging table and then doing a partition exchange to load the staging table into the partitioned table. Ideally, you could just move big_tbl into the vb_archive_tbl with a single partition exchange operation.
    That said, if there is a need for PL/SQL, have you traced the session to see what is causing the slowness? Is the query plan different? If the number of rows in the table is really a trigger, I would tend to suspect that the number of rows is causing the optimizer to choose a different plan (with your sample code, the plan is obvious, but perhaps you omitted some where clauses to simplify things down) which may be rather poor.
    Justin
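    A minimal sketch of the partition-exchange approach described above, assuming vb_archive_tbl is range-partitioned and that a staging table with a matching column layout exists; the names big_tbl_stage and p_2008_09 are illustrative only:
    -- 1) direct-path load the rows to be archived into the staging table
    INSERT /*+ APPEND */ INTO big_tbl_stage
    SELECT * FROM big_tbl;
    COMMIT;
    -- 2) swap the loaded segment into the target partition (a metadata-only operation)
    ALTER TABLE vb_archive_tbl
      EXCHANGE PARTITION p_2008_09 WITH TABLE big_tbl_stage
      INCLUDING INDEXES WITHOUT VALIDATION;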

  • Copying large amount of data from one table to another

    We have a requirement to copy data from a staging area (which consists of several tables) to the actual tables. The copy operation needs to
    1) delete the old values from the actual tables
    2) copy data from staging area to actual tables
    3) commit only after all rows in the staging area are copied successfully. Otherwise, it should rollback.
    Is it possible to complete all these steps in one transaction without causing problems with the rollback segments, etc.? What are the things that I need to consider in order to make sure that we will not run into production problems?
    Also, what other best practices or alternative methods are available to accomplish what is described above?
    Thanks,
    Eser

    It's certainly possible to do this in a single transaction. In fact, that would be the best practice.
    Of course, the larger your transactions are, the more rollback you need. You need to allocate sufficient rollback (undo in 9i) to handle the transaction size you're expecting.
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com
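    A minimal sketch of doing it as one transaction, with illustrative table names (stage_orders, actual_orders) and an assumed key column; the real predicate and column list would come from your schema:
    -- nothing is committed until both steps have succeeded
    DELETE FROM actual_orders
     WHERE order_id IN (SELECT order_id FROM stage_orders);  -- 1) remove the old values
    INSERT INTO actual_orders (order_id, customer_id, order_date, amount)
    SELECT order_id, customer_id, order_date, amount
      FROM stage_orders;                                      -- 2) copy from staging
    COMMIT;                                                   -- 3) commit only at the end
    -- any failure before the COMMIT can simply be rolled back, leaving the actual tables untouched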

  • Deleting large amounts of data

    All,
    I have several tables that each have about 1 million plus rows of historical data that is no longer needed, and I am considering deleting the data. I have heard that deleting the data will actually slow down performance because it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for advice on best practices for deleting large amounts of data from tables.
    For everyone's reference, I am running Oracle 9.2.0.1.0 on Solaris 9. Thanks in advance for the advice.
    Thanks in advance!
    Ron

    Another problem with delete is that it generates a vast amount of redo log (and archived log) information. The better way to get rid of the unneeded data would be to use the TRUNCATE command:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
    The problem with truncate is that it removes all the data from the table. In order to save some data from the table you can do the following (a sketch is shown below):
    1. create another_table as select * from <main_table> where <data you want to keep clause>
    2. save the indexes, constraints, trigger definitions, grants from the main_table
    3. drop the main table
    4. rename <another_table> to <main_table>
    5. recreate indexes, constraints and triggers.
    Another method is to use partitioning to partition the data based on a key (you've mentioned "historical" - the key could be some date column). Then you can drop the historical data partitions when you need to.
    As far as your question about recalculating the statistics - it will not release the storage allocated for the index. You'll need to execute ALTER INDEX <index_name> REBUILD:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
    Mike
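    A minimal sketch of the keep-and-swap steps above; the table name, keep predicate, and index name are illustrative assumptions only:
    -- 1) keep only the rows you still need
    CREATE TABLE hist_keep AS
        SELECT * FROM hist_tbl WHERE event_date >= ADD_MONTHS(SYSDATE, -12);
    -- 2) capture index/constraint/trigger/grant DDL first (e.g. with DBMS_METADATA.GET_DDL)
    -- 3) drop the original and rename the copy into place
    DROP TABLE hist_tbl;
    ALTER TABLE hist_keep RENAME TO hist_tbl;
    -- 4) recreate indexes, constraints, triggers and grants, then gather statistics
    CREATE INDEX hist_tbl_ix1 ON hist_tbl (event_date);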

  • How do I delete large amounts of photos from my ipad?

    a

    Are you using iPhoto on your iPad? Then deleting photos will depend on the album they are in and how they were added to the iPad.
    See this document for iOS 6 and earlier: iPhoto for iOS (iPad): Delete photos from iPhoto
    If you also have a Mac, you can quickly delete large amounts of photos from your Camera Roll by connecting your iPad to your computer using USB and importing the photos into Image Capture, iPhoto, or Aperture. Then set the option to delete the photos after importing.
    Photos you synced to your iPad can only be deleted by syncing again.
    If you are using iOS 7, you can delete larger amounts of photos from your Camera Roll in the "Moments" view:
    In the Photos app you can select all photos in a "Moment" at once by pressing "Select" to the right of the "Moment" name. Then press the Trash icon.
    Regards
    Léonie

  • Couldn't copy large amount of data from enterprise DB to Oracle 10g

    Hi,
    I am using iBATIS to copy data from EnterpriseDB (EDB) to Oracle and vice versa.
    The datatype of a field on EDB is 'text' and the datatype on Oracle is 'SYS.XMLTYPE'.
    I am binding these to a Java String property in a POJO to bind values.
    I could successfully copy a limited amount of data from EDB to Oracle, but if there is more data, I get the following exceptions with different Oracle drivers (although I could read the large amount of data from EDB):
    --- Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeUpdate(MappedStatement.java:107)
    at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.update(SqlMapExecutorDelegate.java:457)
    at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.update(SqlMapSessionImpl.java:90)
    at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.update(SqlMapClientImpl.java:66)
    at com.aqa.pojos.OstBtlData.updateOracleFromEdbBtlWebservice(OstBtlData.java:282)
    at com.aqa.pojos.OstBtlData.searchEdbAndUpdateOracleBtlWebservice(OstBtlData.java:258)
    com.ibatis.common.jdbc.exception.NestedSQLException:
    --- The error occurred in com/aqa/sqlmaps/SQLMaps_OSTBTL_Oracle.xml.
    --- The error occurred while applying a parameter map.
    --- Check the updateOracleFromEDB-InlineParameterMap.
    --- Check the parameter mapping for the 'btlxml' property.
    --- Cause: java.sql.SQLException: setString can only process strings of less than 32766 chararacters
    at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeUpdate(MappedStatement.java:107)
    at com.iba
    I have the latest Oracle 10g JDBC drivers.
    Remember, I can copy any amount of data from Oracle to EDB, but not the other way around.
    Please let me know if you have come across this issue; any recommendation is very much appreciated.
    Thanks,
    CK.

    Hi,
    I finally remembered how I solved this issue previously.
    The JDBC driver isn't able to directly call the insert with an XMLType column. The solution I was using was to build a wrapper procedure in PL/SQL.
    Here it is (for insert, but I suppose the update will be the same):
    create or replace procedure insertXML(file_no_in in number, program_no_in in varchar2, ost_XML_in in clob, btl_XML_in in clob) is
    begin
    insert into AQAOST_FILES (file_no,program_no,ost_xml,btl_xml) values(file_no_in, program_no_in, xmltype(ost_XML_in), xmltype(btl_XML_in));
    end insertXML;
    here is the sqlmap file I used
    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE sqlMap
    PUBLIC "-//ibatis.apache.org//DTD SQL Map 2.0//EN"
    "http://ibatis.apache.org/dtd/sql-map-2.dtd">
    <sqlMap>
         <typeAlias alias="AqAost" type="com.sg2net.jdbc.AqAost" />
         <insert id="insert" parameterClass="AqAost">
              begin
                   insertxml(#fileNo#,#programNo#,#ostXML:CLOB#,#bltXML:CLOB#);
              end;
         </insert>
    </sqlMap>
    and here is a simple program:
    package com.sg2net.jdbc;
    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringWriter;
    import java.sql.Connection;
    import oracle.jdbc.pool.OracleDataSource;
    import com.ibatis.common.resources.Resources;
    import com.ibatis.sqlmap.client.SqlMapClient;
    import com.ibatis.sqlmap.client.SqlMapClientBuilder;
     public class TestInsertXMLType {
          /**
           * @param args
           */
          public static void main(String[] args) throws Exception {
               String resource = "sql-map-config-xmlt.xml";
               Reader reader = Resources.getResourceAsReader(resource);
               SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);
               OracleDataSource dataSource = new OracleDataSource();
               dataSource.setUser("test");
               dataSource.setPassword("test");
               dataSource.setURL("jdbc:oracle:thin:@localhost:1521:orcl");
               Connection connection = dataSource.getConnection();
               sqlMap.setUserConnection(connection);
               AqAost aqAost = new AqAost();
               aqAost.setFileNo(3);
               aqAost.setProgramNo("prg");
               Reader ostXMLReader = Resources.getResourceAsReader("ostXML.xml");
               Reader bltXMLReader = Resources.getResourceAsReader("bstXML.xml");
               aqAost.setOstXML(readerToString(ostXMLReader));
               aqAost.setBltXML(readerToString(bltXMLReader));
               // the "insert" statement maps to the insertXML wrapper procedure defined above
               sqlMap.insert("insert", aqAost);
               connection.commit();
          }
          // reads the whole Reader into a String so it can be bound as a CLOB parameter
          public static String readerToString(Reader reader) {
               StringWriter writer = new StringWriter();
               char[] buffer = new char[2048];
               int charsRead = 0;
               try {
                    while ((charsRead = reader.read(buffer)) > 0) {
                         writer.write(buffer, 0, charsRead);
                    }
               } catch (IOException ioe) {
                    throw new RuntimeException("error while converting reader to String", ioe);
               }
               return writer.toString();
          }
     }
    package com.sg2net.jdbc;
     public class AqAost {
          private long fileNo;
          private String programNo;
          private String ostXML;
          private String bltXML;
          public long getFileNo() {
               return fileNo;
          }
          public void setFileNo(long fileNo) {
               this.fileNo = fileNo;
          }
          public String getProgramNo() {
               return programNo;
          }
          public void setProgramNo(String programNo) {
               this.programNo = programNo;
          }
          public String getOstXML() {
               return ostXML;
          }
          public void setOstXML(String ostXML) {
               this.ostXML = ostXML;
          }
          public String getBltXML() {
               return bltXML;
          }
          public void setBltXML(String bltXML) {
               this.bltXML = bltXML;
          }
     }
    I tested the insert and it works correctly
    ciao,
    Giovanni

  • Transporting large amounts of data from one database schema to another

    Hi,
    We need to move a large amount of data from one 10.2.0.4 database schema to another 11.2.0.3 database.
    Am currently using Data Pump, but it is still quite slow - having to do it in chunks.
    Also, the Data Pump files are quite large, so we have to compress them and move them across the network.
    Is there a better/quicker way?
    Have heard about transportable tablespaces but never used them and don't know about speed - whether quicker than Data Pump.
    The tablespace names are different in both databases.
    Also, the source database is on the Solaris operating system on a Sun box and the
    target database is on AIX on an IBM Power Series box.
    Any ideas would be great.
    Thanks
    Edited by: user5716448 on 08-Sep-2012 03:30
    Edited by: user5716448 on 08-Sep-2012 03:31

    user5716448 wrote:
    Hi,
    We need to move a large amount of data from one 10.2.0.4 database schema to another 11.2.0.3 database.
    Pl quantify "large".
    Am currently using Data Pump, but it is still quite slow - having to do it in chunks.
    Pl quantify "quite slow".
    Also, the Data Pump files are quite large, so we have to compress them and move them across the network.
    Again, pl quantify "quite large".
    Is there a better/quicker way?
    Have heard about transportable tablespaces but never used them and don't know about speed - whether quicker than Data Pump.
    The tablespace names are different in both databases.
    Also, the source database is on the Solaris operating system on a Sun box and the
    target database is on AIX on an IBM Power Series box.
    It may be possible, assuming you do not violate any of these conditions:
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/tspaces013.htm#ADMIN11396
    Any ideas would be great.
    Thanks
    Edited by: user5716448 on 08-Sep-2012 03:30
    Edited by: user5716448 on 08-Sep-2012 03:31
    Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues [ID 1166564.1]
    HTH
    Srini
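    A minimal sketch of the transportable tablespace route, with illustrative tablespace and file names; Solaris/SPARC and AIX/Power are both big-endian, so no RMAN datafile conversion should be needed, but check the linked conditions first:
    -- on the 10.2.0.4 source: make the tablespace read only and export its metadata
    ALTER TABLESPACE app_data READ ONLY;
    -- $ expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_app_data.dmp TRANSPORT_TABLESPACES=app_data
    -- copy the datafile(s) and the dump file to the target host, then on the 11.2.0.3 target:
    -- $ impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=tts_app_data.dmp TRANSPORT_DATAFILES='/u02/oradata/TGT/app_data01.dbf'
    -- the tablespace keeps its original name during transport; it can be renamed afterwards
    ALTER TABLESPACE app_data READ WRITE;
    ALTER TABLESPACE app_data RENAME TO app_data_new;  -- optional: match the target's naming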

  • Power BI performance issue when load large amount of data from database

    I need to load a data set from my database, which has a large amount of data; it takes a long time to initialize the data before I can build a report. Is there any good way to process large amounts of data for Power BI? As I know many people analyse data
    based on Power BI, is there any suggestion for loading large amounts of data from a database?
    Thanks a lot for the help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between half and one third of the time that it used to.
    Thanks,
    M.

  • Transfer Large Amounts of Data From External to External Drives

    I have four 1.5 TB USB 2.0 drives that contain music & movies for use in iTunes or Apple TV. I purchased two 3 TB Thunderbolt drives and would like to transfer the data from the four 1.5 TB drives to the new 3 TB Thunderbolt drives. How do I do this?
    I have tried copy/paste and drag/drop, but only so much moves and then it stops.
    All drives are formatted Mac OS Extended (Journaled).

    First, I just want to say thank you for the suggestion.
    There were no errors shown; it just stopped after several hours, and sleep is turned off. What I started doing is taking approximately 6 GB at a time, doing this 4 or 5 times, and after about 20 to 30 minutes it is done and I do it all over again.
    I will run Disk Utility like you suggested and try that again, but give or take there are probably 2,500 files that are 2+ GB in size, and then all of the music files we have collected total around 90 GB, if not slightly more.

  • What is the best way to migrate large amounts of data from a g3 to an intel mac?

    I want to help my mom transfer her photos and other info from an older blueberry G3 iMac to a new Intel one. There appears to be no migration provision on the older Mac. Also, the FireWire connections are different. Somebody must have done this before.

    Hello
    The cable above can be used to enable Target Disk Mode for the data transfer.
    See http://support.apple.com/kb/ht1661 for more info.
    To enable Target Disk Mode, hold down the "T" key on the keyboard just after the startup sound until you see the FireWire symbol on the screen (it looks like a screen saver), then plug the FireWire cable in between the two Macs.
    HTH
    Pierre

  • How to delete some data from GLPCA table

    Dear Friends,
    We have extracted some data into the GLPCA table via the ECPA component. We
    want to delete these records (the document type and posting date are known).
    However, we do not want other data already existing in GLPCA to be
    deleted. Is there some standard SAP transaction for deletion of partial/selected data
    from EIS tables?
    Please explain clearly how we can delete these records.
    Your quick reply on this issue is highly appreciated.
    Thanks & Regards,
    Naveen Kumar.

    Hello,
    Please check the use of the report ZDELETE_PCA_DATA_BUKRS (mentioned in SAP note 1174360) to delete totals from GLPCT.
    I am not sure of any deletion reports for the GLPCA table.
    If you open the Source code of this program, you can see the following information :
    "This report provides a possibility to delete actual and plan
    *& postings on company code level as an alternative for transaction
    *& 0KE1."
    I do not have much information on this report as, personally I haven't used this anytime. Please run in 'Test Run' mode first and look for the results.
    I hope this helps.
    Thanks and regards,
    Suresh Jayanthi.

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use CachedRowSet on the bean for the page for getting the data. This works very well in my development environment; at the moment I am testing with a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode, does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of paging. How do you specify which set of 20 records to extract in SQL? A sketch of one approach is below.
    Thanks for your help!!
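    A minimal sketch of database-side paging, assuming an Oracle-style database and illustrative table/column names; :offset would be (pageNumber - 1) * 20:
    SELECT *
      FROM (SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.customer_id) AS rn
              FROM customers t)
     WHERE rn BETWEEN :offset + 1 AND :offset + 20;   -- return only the requested 20-row page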

  • Uploading of large amount of data

    Hi all,
    I really hope you can help me. I have to upload quite a large amount of data from flat files to an ODS (via the PSA, of course). But the process takes a very long time. I use the method of loading to the PSA and then packet by packet into the ODS. Loading about 1,300,000 lines from a flat file takes 6 or more hours. That seems strange to me. Is it normal or not? Or should I use another uploading method, or set up the ODS in some way? Thanks

    hi jj,
    welcome to the SDN!
    in my limited experience, 6hrs for 1.3M records is a bit too long. here are some things you could try and look into:
    - load from the application server, not from the client computer (meaning, move your file to the server where BW is running, to minimize network traffic).
    - check your transfer rules and any customer exits related to loading, as the smallest performance-inefficient bits of code can cause a lot of problems.
    - check the size of data packets you're transmitting, as it could also cause problems, via tcode RSCUSTA2 (i think, but i'm not 100% sure).
    hope this helps you out - please remember to give out points as a way of saying thanks to those that help you out, okay? =)
    ryan.

  • Selecting large amounts of data

    I'm selecting a large amount of data from a database and putting the data
    into an object file. For some reason the process dies halfway through with an OutOfMemoryException. Any ideas?
    Secondly, does anyone know how to insert carriage returns with a
    prepared statement?

    I'm selecting a large amount of data from a database and putting the data into an object file. For some reason the process dies halfway through with an OutOfMemoryException. Any ideas?
    Like... you are running out of memory?
    Have you increased the heap size?
    What does "large" mean? 100k or 100gig?
    Secondly, does anyone know how to insert carriage returns with a prepared statement?
    I suppose you mean how to insert one into a text field.
    Do you already have it in the string? If not, embed a newline character in the string (in Java source, "\n").
    If you do have it in the string and you inserted it without error, then how do you know it isn't already in the database?

  • How to delete the data from partition table

    Hi all,
    I am very new to partitioning concepts in Oracle.
    My question is: how do I delete data from a partitioned table?
    Will the query below work?
    delete from table1 partition (P_2008_1212)
    We have defined range partitioning.
    Please help me understand how to delete data from a partitioned table.
    Thanks
    Sree

    874823 wrote:
    delete from table1 partition (P_2008_1212)
    This approach is wrong - as Andre pointed out, this is not how partitioned tables should be used.
    Oracle supports different structures for data and indexes. A table can be a hash table or an index-organised table. It can have B+tree indexes. It can have bitmap indexes. It can be partitioned. Etc.
    How the table implements its structure is a physical design consideration.
    Application code should only deal with the logical data structure. How that data structure is physically implemented has no bearing on the application. Does your application need to know what the indexes are, and the names of the indexes, in order to use a table? Obviously not. So why then does your application need to know that the table is partitioned?
    When your application code starts referring directly to physical partitions, it needs to know HOW the table is partitioned. It needs to know WHAT partitions to use. It needs to know the names of the partitions. Etc.
    And why? All this means is increased complexity in application code, as this code now needs to know and understand the physical data structure. This app code is now more complex, has more moving parts, will have more bugs, and will be more complex to maintain.
    Oracle can take an application's SQL and determine (based on the predicates of the SQL) which partitions to use and not use for executing that SQL. All done totally transparently. The app does not need to know that the table is even partitioned.
    This is a crucial concept to understand and get right.
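    A minimal sketch of the transparent approach described above, with illustrative names (table1 is assumed to be range-partitioned on a date column such as sale_date):
    -- delete by the partition key; the optimizer prunes to the relevant partition(s) automatically
    DELETE FROM table1
     WHERE sale_date < DATE '2009-01-01';
    -- if an entire old partition is no longer needed at all, dropping it is a DBA maintenance task
    ALTER TABLE table1 DROP PARTITION p_2008_1212;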
