Archiving old data from a main database into an archive database

Hello colleagues,
We are trying to create a stored procedure to archive data older than 6 months (180 days) from our production database into a new archive database.
We want to archive only 20,000 rows a day, and we need to schedule the job to run daily. We also want to delete the archived rows from the production database.
Could you please share your experience with archiving?
Thanks

Hi BG516, 
Ok, I got your point now :) 
First, how long does it take to read those 20,000 rows? It shouldn't take long, especially if the table is well indexed to cover that query (basically, an index on the date column that covers the rest of the table). There are many aspects that may affect the process, but my guess is that the big deal will be deleting these old rows from your production table.
Reading these rows will require a shared latch, and since you're reading old data, your daily processes shouldn't be trying to write to those particular pages (again, this depends mainly on the indexes). Deleting them will need an exclusive lock, and that'd be more problematic; reads are far more common than writes in a data warehouse.
When facing this kind of problem, I have always had to find an off-peak period in which to execute the required processes.
A few things that come to mind (a code sketch follows this list):
- Use BULK INSERT when loading the data into your historical table, so you minimize the time you spend reading from the production table.
- Check the number of indexes you'll impact when deleting these rows. The more there are, the worse (more time is needed to maintain them).
- What version of SQL Server are you using? The Elastic Scale feature of Azure SQL Database covers just that scenario (http://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Elastic-Scale).
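To pull these pieces together, here is a minimal T-SQL sketch of a batched archive-and-delete procedure. Every object name (dbo.Orders, OrderID, CreatedDate, ArchiveDB) is a placeholder for illustration, and it assumes the archive database already contains an identically structured table:

-- Minimal sketch; all object names are placeholders.
CREATE PROCEDURE dbo.usp_ArchiveOldOrders
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @cutoff date = DATEADD(DAY, -180, CAST(GETDATE() AS date));

    BEGIN TRANSACTION;

    -- Stage the keys of one daily batch so the INSERT and the DELETE
    -- touch exactly the same 20,000 rows.
    SELECT TOP (20000) OrderID
    INTO #batch
    FROM dbo.Orders
    WHERE CreatedDate < @cutoff
    ORDER BY CreatedDate;

    -- Copy the batch into the archive database ...
    INSERT INTO ArchiveDB.dbo.Orders
    SELECT o.*
    FROM dbo.Orders AS o
    JOIN #batch AS b ON b.OrderID = o.OrderID;

    -- ... and only then delete it from production.
    DELETE o
    FROM dbo.Orders AS o
    JOIN #batch AS b ON b.OrderID = o.OrderID;

    COMMIT TRANSACTION;

    DROP TABLE #batch;
END;

Scheduled as a daily SQL Server Agent job, this moves at most 20,000 rows per run and keeps the copy and the delete in one transaction, so a row is never removed from production before it has reached the archive.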
Regards.
Pau.

Similar Messages

  • Archiving old data from a partitioned table

    Hi,
    While sifting through all the options for archiving the old data from a table which is also indexed, I came across a few methods which could be used:
    1. Use a CTAS to create new tables in a different tablespace from the partitions on the existing table. Then swap these new tables with the data from the partitions (see the sketch after this message), drop the partitions on the existing table (or truncate them, although I am not sure which one is the recommended method), take this tablespace offline and keep it archived. In case you require it in the future, swap these partitions back with the data from the archived tables. I am not sure if I got the method correctly.
    2. Keep an export of all the partitions which need to be archived and keep that .dmp file on a storage medium. Once they are exported, truncate the corresponding partitions in the original table. If required in the future, import these partitions.
    But I have one constraint on my DB: I cannot create a new archive tablespace to hold the tables containing the partitioned data, as I have only 1 tablespace allocated to my application on that DB (there are multiple apps residing on it together). Kindly suggest what option is best suited for me now. Should I go with option 2?
    Thanks in advance.
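    For reference, the "swap" in option 1 is usually done with EXCHANGE PARTITION; a minimal sketch, with all table and partition names as placeholders:
    -- empty table with the same structure as the partitioned table
    CREATE TABLE archive_2008 AS SELECT * FROM orders WHERE 1 = 0;
    -- swap the partition's data segment with the standalone table
    ALTER TABLE orders EXCHANGE PARTITION p_2008 WITH TABLE archive_2008 WITHOUT VALIDATION;
    -- the partition is now empty and can be dropped
    ALTER TABLE orders DROP PARTITION p_2008 UPDATE GLOBAL INDEXES;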

    Hi,
    Thanks a bunch for all your replies. Yeah, I am planning to go ahead with option 2. Below is the method I have decided upon; kindly verify whether my line of understanding is correct:
    1. Export the partition using the command:
    exp ID/passwd file=abc.dmp log=abc.log tables=schemaname.tablename:partition_name rows=yes indexes=yes
    2. Then drop this partition on the original table using the statement:
    ALTER TABLE tablename DROP PARTITION partition_name UPDATE GLOBAL INDEXES;
    If I now want to import this dump file into my original table again, I will first have to create this partition on my table using the statement:
    3. ALTER TABLE tablename ADD PARTITION partition_name VALUES LESS THAN ( '<<>>' ) ;
    4. Then import the data into that partition using:
    imp ID/passwd FILE=abc.dmp log=xyz.log TABLES=schemaname.tablename:partition_name IGNORE=y
    Now my query here is that this partitioned table has a global index associated with it. So once I create this partition and import the data into it, do I need to drop and recreate the global indexes on this table, or is there another method to update the indexes? Kindly suggest.
    Thanks again!! :)
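    For illustration, one alternative to dropping and recreating the global index is to check its status after the import and rebuild it in place if needed; the index and table names below are placeholders:
    -- see whether the global index was left unusable
    SELECT index_name, status FROM user_indexes WHERE table_name = 'TABLENAME';
    -- rebuild in place instead of dropping and recreating
    ALTER INDEX schemaname.global_idx REBUILD;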

  • Approach for archiving table data from an Oracle DB and loading it into an archive server

    Hi,
    I have a requirement where I need to archive and purge old data in a data warehouse.
    The archival strategy will select data from a list of tables and load it into files.
    I also need to use SQL*Loader control files to load the data from those files into the archival server.
    I want to know which is the better approach for loading the data into files:
    should I use UTL_FILE, or spool the table data into files using SELECT statements?
    I also have some CLOB columns in some tables.

    I did something like this a couple of months ago. After some performance tests, the fastest way turned out to be creating the files through UTL_FILE used in a procedure with bulk SQL. Another good idea is to create the files with Python, which handles text files blazingly fast. (Use PL/SQL if you need to create the files on the server; if you have to create them on a remote machine, use something else (Python?))
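    To illustrate the UTL_FILE-plus-bulk-SQL idea, here is a minimal sketch; the directory object, file, table, and column names are all placeholders:
    DECLARE
      TYPE t_lines IS TABLE OF VARCHAR2(4000);
      l_lines t_lines;
      l_file  UTL_FILE.FILE_TYPE;
      CURSOR c IS
        SELECT id || ',' || col1 || ',' || TO_CHAR(created_dt, 'YYYY-MM-DD')
        FROM some_table
        WHERE created_dt < ADD_MONTHS(SYSDATE, -6);
    BEGIN
      l_file := UTL_FILE.FOPEN('ARCHIVE_DIR', 'some_table.csv', 'w', 32767);
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_lines LIMIT 10000;  -- bulk fetch, 10,000 rows at a time
        EXIT WHEN l_lines.COUNT = 0;
        FOR i IN 1 .. l_lines.COUNT LOOP
          UTL_FILE.PUT_LINE(l_file, l_lines(i));
        END LOOP;
      END LOOP;
      CLOSE c;
      UTL_FILE.FCLOSE(l_file);
    END;
    /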

  • Looking for a solution or service to export data from a dynamic form into a database

    As the title of this discussion states, I'm looking for a solution or service to export data from a dynamic form into a database to be used for reporting. Creating the dynamic form is not a problem, it's getting it into a database that's more of a nuisance. A dynamic form is needed in order to provide skip logic, hide/reveal, and other similar dynamic features.
    The database hasn't been created yet, so I'm just looking for the easiest, most effective, and most dependable solution. The key is being able to run reports off the data later.

    So I set up 2 residential-grade routers to test this out. It seems to be working okay-ish. I believe that with directional-antenna routers I should achieve what I need.
    As for security, I configured each device separately. I set them to allow only the MAC address of the other through wireless. This seems to be the best system for me. Once they are connected, even if the other router's MAC is spoofed, it drops automatically, because they are always connected. My wired devices get plugged in and receive an IP from my main network.
    There's more testing to be done, but it seems to be working.
    Thanks for your input and suggestions, guys. I will be marking this topic as answered.

  • Read data from Excel and write into oracle database

    Hi
    I want to know how I can read data from Excel and write it into an Oracle database using Java. Kindly help me find a solution.
    Thanks and Regards
    Neeta

    Hi,
    I am suggesting a solution; I will try it out and let you know soon.
    Make a comma-separated file from your Excel file (assuming that your requirement allows you to make a CSV file).
    This file may be passed as a file object to be read by Java. Using JDBC you should be able to populate the database. You can also use a StringTokenizer if needed.
    Do you not want to go via SQL*Loader?
    And for reading the Excel file itself, do you want to use Java?

  • Get the data  from excel and insert into the database

    I want to read the data from an Excel file and insert it into the database.

    You can use the Apache POI HSSF API to read data from Excel files, and you can use the Sun JDBC API to insert the data into the database.
    This has nothing to do with JSP/JSTL.

  • Load XML data from UNIX Server Directly into Relational Database Tables

    Is there a way I can load data from an XML file into Oracle tables without having the input XML file in an Oracle server directory? My XML file resides on a UNIX application server, and I need to load the data directly into database tables without first moving the file into a database directory.
    I am also looking for a solution that would not load my database much or affect other running processes. Can it be done using SQL*Loader?
    Oracle Database Version is : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

    Thanks for your reply ,
    Could you please quote an example of 'Load the file into that table using SQL*Loader' (from a UNIX server), or point me to an existing thread that relates to my situation?
    The size of the file would be about 3 GB. For a similar requirement, a peer's code that used an XMLTABLE and XPath approach consumed a lot of resources while running and caused the other database applications to slow down. So they came up with this approach:
            Parse the XML with C code using string functions into a CSV or fixed-width .dat file, and then use SQL*Loader to just load the file into the tables.
            This approach is efficient in terms of resources and time (it takes 5 minutes), but I am not confident about parsing XML with string-based C functions.
            Please comment on this approach, and if possible suggest the most efficient way of doing this.
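    For reference, the XMLTABLE approach mentioned above can also read the file through a directory object without staging it in a table first, assuming the file (or a mount of it) is visible from the database host. All directory, file, element, and column names below are placeholders, and, as noted above, this path can be resource-hungry on a 3 GB file:
    -- directory object pointing at the folder that holds the XML file
    CREATE OR REPLACE DIRECTORY xml_dir AS '/u01/app/feeds';
    INSERT INTO target_table (id, name)
    SELECT x.id, x.name
    FROM XMLTABLE('/rows/row'
           PASSING XMLTYPE(BFILENAME('XML_DIR', 'data.xml'),
                           NLS_CHARSET_ID('AL32UTF8'))
           COLUMNS id   NUMBER        PATH 'id',
                   name VARCHAR2(100) PATH 'name') x;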

  • P6 archiving old data

    Hi,
    Does anyone have any experience of archiving old data to a read-only database? We are currently using SQL 2000 and P6. I am interested to know about any problems encountered and any data integrity problems. Thanks

    We suffer from the same issue with archiving. One of our DBs is roughly 50 GB in size, and the time to restore it from backup alone is over 6 hours, which does not include the time to pull tapes from off-site. Our standard IS backup retention policy is only 6 months, so what I end up with is literally hundreds of archive copies of a 20K+ activity schedule living in the production database (hence its large size).
    What we have done to try to contain the growth is that every 6 months an admin sends out an email to all users with a list of archive projects, asking which ones are still needed. Our end users then mark the archives for deletion or export. If deletion is selected, the schedule is simply purged from the database. Export means the archive project is dumped to XER and burned to disc. The hope is that, should we ever need to review an old archive copy, we can import it into a recent copy of the production DB. Luckily the need to bring one of these back has yet to occur.
    Note: by 'archive project' I am generally referring to either a baseline or a what-if copy of a main schedule.

  • Best way to remove old data from db?

    Hi,
    I am looking for ways to keep my database small by removing old data from the database and archiving it somewhere else; the data may need to be put back into the database in the future. One of the things I am concerned about is how to enforce constraints: in other words, when I remove data, all the related PK/FK data should be removed together (see the sketch below). Any suggestions will be greatly appreciated.
    Thanks
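    On the constraint question, one common approach is to declare the child tables' foreign keys with ON DELETE CASCADE, so removing a parent row removes its dependents in the same statement; the table and column names in this sketch are placeholders:
    ALTER TABLE order_items
      ADD CONSTRAINT fk_items_order
      FOREIGN KEY (order_id) REFERENCES orders (order_id)
      ON DELETE CASCADE;
    -- deleting old parents now removes their child rows as well
    DELETE FROM orders WHERE order_date < ADD_MONTHS(SYSDATE, -12);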

    Hope this may help you.
    The concept is: exp/imp and truncate will reclaim the space.
    Method 1:
    Do an export and secure the dump (say dump1) for future purposes.
    Delete the unwanted data.
    Commit.
    Now export the tables again (say dump2).
    Now do the import of dump2.
    /* This import will reclaim the unused space. */
    Method 2:
    /* Depends on the volume of data. */
    -- create a backup of the existing table for the future
    create table backup_table nologging as select * from original_table;
    -- create a copy of the existing table
    create table copy_table nologging as select * from original_table;
    -- truncate the original table, so that the high-water mark is reset
    truncate table original_table;
    -- insert the rows to keep from copy_table back into the original table
    insert /*+ APPEND */ into original_table select * from copy_table where (your conditions for the data to keep);
    commit;
    /* Now you have 3 tables:
    backup_table - with all the data - for future reload
    original_table - with unwanted data deleted and space reclaimed
    copy_table - a copy of the original table
    Now drop the copy_table: */
    drop table copy_table;
    regards

  • Archive - Purge data from F1 and U1 clusters

    Hello Experts,
    I have been given the task of purging data from the F1 cluster, Remuneration Statement forms, and the U1 cluster, Tax Reporter forms, in PCL4.  I was hoping to accomplish this by using an archive function to delete the data but not store the archived files.  I have not been successful in finding much information about purging from these clusters.  I am looking for any advice anyone can provide or a direction to take to reduce this data.   Thank you in advance for your assistance.

    Martin,
    "which would help keep everything intact"
    I don't know what that means. The whole purpose of archiving is to remove data from the 'ready' database and place it somewhere else. Leaving it intact means not archiving at all.
    In the archiving process, data is selected and copied into some sort of storage medium that typically has a lower state of availability than un-archived data. The business decides what level of availability is acceptable, and the archiving policy (e.g., how long the data should remain archived before it is finally physically deleted forever).
    So 'intact' is a bit vague. All the bits of data that the business decides are important are replicated 100% in the archive medium, validated, and then the source records that were archived are physically deleted from the ready database. Functionally, all archived data is intact; it just may be in another format.
    I have never heard of a major ERP system that did not offer archiving in some form. There are also many third-party vendors who offer archiving for the major ERP packages.
    The level of success is hard to predict. There are tools available as standard in SAP that monitor critical factors: memory access, disk access, response times, etc. Here too there are third-party tools that measure critical factors. You can run these before and after the archiving process to measure what success you have had.
    I have never seen anyone who will stand up and say "if you archive x million records from your ready database, you will see a performance increase of y percent." There are too many variables. As they say in the MPG ads, "Your results may vary." You can usually get some qualitative numbers by testing your archiving process in a test or dev system.
    Best Regards,
    DB49

  • Archive old data

    I would like to archive old iCal data without losing it. I would like to publish or export recent data to Google calendars for sharing with my department particularly calendars of who's on duty when. However, I have years of data in my duty calendars, which I don't want to lose in case I need that history. But I don't want to publish years of data, and Google calendars doesn't seem to like that big of an import. Is there a way to export just a date range, or is there a way to archive old data so that the duty calendar has only the past year's information?
    Thanks.

    You can export data from a certain range into an .ics file and then reimport it later if needed. You can also set the dates for syncing to 3, 6, or 12 months.

  • Getting Data from Maintenance view V001N into ABAP program

    Hello Experts,
    I have to fetch data from the maintenance view V001N in my ABAP program. I have used a SELECT statement in my program, but I am getting the syntax error 'V001N is not defined in the ABAP Dictionary as a table, projection view or database view.' V001N is a maintenance view.
    Can anybody help me out with how to get the data from that maintenance view into an internal table in my ABAP program?
    Regards.

    Sunil,
    Check these threads:
    https://forums.sdn.sap.com/click.jspa?searchID=18906946&messageID=6074746
    https://forums.sdn.sap.com/click.jspa?searchID=18906946&messageID=6088233
    In short, query the underlying tables that the view is built on.
    Thanks
    Bala Duvvuri

  • How can I import data from a csv file into databse using utl_file?

    Hi,
    I have two machines (the OS is Windows and the database is Oracle 10g) that are not connected to each other; both have the same database schema, but the data is all different.
    Now, on one machine, I want to take a dump of all the tables into CSV files, e.g. if my table name is test then the exported file is test.csv, if the table name is sample then the CSV file name is sample.csv, and so on.
    Then I want to import the data from these CSV files into the tables on the second machine; if I have 50 such CSV files, the data should be written to 50 tables.
    I am new to this. Could anyone please let me know how I can import the data back into the tables? I can't use SQL*Loader, as I have to satisfy a few conditions while loading the data into the tables. I am stuck and not able to proceed.
    Please let me know how I can do this.
    Thanks,
    Shilpi

    Why do you want to export into a .csv file? Why not use export/import? What is your Oracle version?
    Read http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php
    Regards
    Biju
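    If the CSV-plus-UTL_FILE route is still required (for example, to apply those custom conditions row by row), a minimal sketch of the loading side might look like this; the directory object, file, table, and column names are placeholders:
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
      l_line VARCHAR2(4000);
      l_id   VARCHAR2(100);
      l_name VARCHAR2(400);
    BEGIN
      l_file := UTL_FILE.FOPEN('CSV_DIR', 'test.csv', 'r');
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(l_file, l_line);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;  -- end of file
        END;
        -- split "id,name" on the first comma
        l_id   := SUBSTR(l_line, 1, INSTR(l_line, ',') - 1);
        l_name := SUBSTR(l_line, INSTR(l_line, ',') + 1);
        -- apply your custom loading conditions here before inserting
        IF l_id IS NOT NULL THEN
          INSERT INTO test (id, name) VALUES (l_id, l_name);
        END IF;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
      COMMIT;
    END;
    /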

  • How to transfer a set of data from Excel spread sheet to an Access database

    hi,
    Can anyone please tell me how to transfer a set of data from an Excel spreadsheet to an Access database using an SQL query? I'm currently using the Java API. I have set up the ODBC connections in Administrative Tools, and the file is in the correct location. I have done some coding, but with errors. Please help me get rid of these errors.
    Coding:
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection datacon = DriverManager.getConnection("jdbc:odbc:exdata", "", "");   // Excel DSN
        Connection datacon1 = DriverManager.getConnection("jdbc:odbc:stock1", "", ""); // Access (.mdb) DSN
        Statement datast = datacon.createStatement();
        Statement datast1 = datacon1.createStatement();
        // phy is the Excel worksheet
        ResultSet excelrs = datast.executeQuery("select item_code, sdate, closing_stock from phy");
        while (excelrs.next()) {
            String ic = excelrs.getString("item_code");
            System.out.println(ic);
            String d = excelrs.getString("sdate");
            double cs = excelrs.getDouble("closing_stock");
            // second is the Access table
            int dbrs = datast1.executeUpdate("insert into second values('" + ic + "','" + d + "'," + cs + ")");
        }
        excelrs.close();
    } catch (Exception e) {
        System.out.println(e);
    }
    Error:
    java.sql.SQLException: [Microsoft][ODBC Excel Driver] The Microsoft Jet database engine could not find the object 'C:\JavaGreen\phy.xls'. Make sure the object exists and that you spell its name and the path name correctly.
    thanks,
    kumar.

    JAVA_GREEN wrote:
    No, I haven't mixed them up. But the file from which I have to retrieve the data is in CSV format. Even though I created another CSV driver and tried, I could not find a solution to load/transfer a set of records from one file (in Excel/CSV format) to another file (in mdb format). Please help me. Is there any other method for this data transfer?
    A CSV file is NOT an Excel file. The fact that Excel can import a CSV file doesn't make it an Excel file.
    If you have a CSV file, then you must use a CSV driver, or just use other code (not JDBC) to access it. There is, normally, an ODBC (nothing to do with Java) text driver that can do that.
  • Is there any way so that I Being able to clear out old data from the Order

    Hi Friends,
    Is there any way that I can clear out old data from the Order Gate forecast, tolerance, and confirmed-order-to-date tables on a monthly basis?
    That is, I want the data table to hold only data for a particular time window (for example, if today is 13-11-2008, then my data table must contain data from 13-10-2008 to 13-11-2008).
    Is it possible through ABAP development or through any transaction codes?
    Any pointers will be rewarded.
    Regards

    Hi,
    Before archiving, you have to set the deletion flag for the object (maybe for the PO, MMR, etc., as per your requirement).
    After that, you use the archiving procedure to archive the documents which are already flagged for deletion.
    For MMR you use transaction SARA and its respective archiving object.
    You can use the SDN search to see the many threads on this, or use this link and search as per your requirement:
    http://help.sap.com/saphelp_erp2004/helpdata/EN/75/ee0fa855c811d189900000e8322d00/frameset.htm
    Regards
    Priyanka.P
