Data Archival Practice

We are looking for a solution to archive data based on criteria. We have 10 different tables referencing each other, and we proposed that if any row is marked for archival using an indicator, all rows related to it should be marked for archiving as well.
However, this means we first need to mark rows for archiving across the 10 different tables, and later perform the insert and delete operations.
So each row we want to archive requires 10 + 10 + 10 = 30 operations, and if we identify 100,000 records, that means 3 million operations.
Overall the process is tricky and bulky to perform. I am looking at alternatives such as partitioning, but the partition key is not aligned, so that is not a viable option.
Please let me know if someone is familiar with another approach in SQL Server.
-Mayank

Hi msugandhi,
Since the issue concerns the SQL Server Database Engine, I will move your question to the SQL Server Database Engine forum at http://social.technet.microsoft.com/Forums/en-US/home?forum=sqldatabaseengine. It is more appropriate there and more experts will assist you.
According to your description, I assume there is one column in each table which records the status based on some criteria. If you change a record and want to reduce the number of operations across all tables, one option is to design these tables without relationships, so they are relatively independent; changing one record then does not require changing records in all the other related tables. Alternatively, you can store the affected IDs in a table variable: delete all records from table a that match the criteria, capturing the IDs into that variable, then delete all records from b and c using those IDs.
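A minimal T-SQL sketch of that second approach, assuming a parent table a with child tables b and c keyed by a_id, an archive_flag indicator column, and an a_archive copy table (all names here are hypothetical, for illustration only):

DECLARE @archived TABLE (id INT PRIMARY KEY);

-- Collect the IDs that match the archival criteria in a single pass.
INSERT INTO @archived (id)
SELECT id FROM a WHERE archive_flag = 'Y';

-- Copy the parent rows to the archive table before deleting anything.
INSERT INTO a_archive
SELECT * FROM a WHERE id IN (SELECT id FROM @archived);

-- Delete children first so foreign keys are not violated, then the parent.
DELETE FROM b WHERE a_id IN (SELECT id FROM @archived);
DELETE FROM c WHERE a_id IN (SELECT id FROM @archived);
DELETE FROM a WHERE id IN (SELECT id FROM @archived);

DELETE ... OUTPUT deleted.id INTO @archived is another way to capture the IDs within the delete statement itself, if the delete order allows the parent rows to go first.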
There is a similar issue about deleting rows from multiple related tables. For more information, see:
http://stackoverflow.com/questions/13133148/deleting-rows-from-multiple-related-tables
Thanks,
Sofiya Li
TechNet Community Support

Similar Messages

  • The best practice for data archiving

    Hi
    My client has been using OnDemand for almost 2 years and there are around 2M records in the system (Activities), so I just want to know the best practice for data archiving; we don't care much about data older than 6 months.

    Hi Erik,
    Archival is nothing but deletion.
    Create a backup cube in BW. Copy the data from your planning cube to the backup cube, and then delete that data region from your planning cube.
    Archival will definitely improve the performance of your templates, scripts, etc., since the system will now search a smaller dataset.
    Hope this helps.

  • A question on different options for data archival and compression

    Hi Experts,
    I have a production database of about 5 terabytes in production and about 50 GB in development/QA. I am on Oracle 11.2.0.3 on Linux. We have RMAN backups configured. I have a question about data archival strategy. To keep the OLTP database size optimum, what options can be suggested for data archival?
    1) Should each table have an archival strategy?
    2) What is the best way to archive data - should it be sent to a separate archival database?
    In our environment, an archival strategy is defined for only about 15 tables. For these tables, we copy their data every night to a separate schema meant to store this archived data, and then eventually transfer it to a different archival database. For all other tables, there is no archival strategy.
    What are the different options and best practices that can be reviewed to put a practical data archival strategy in place? I will be most thankful for your inputs.
    Also, what are the different data compression strategies? For example, we have about 25 tables that are read-only. Should they be compressed using the default Oracle 9i basic compression (ALTER TABLE ... COMPRESS)?
    Thanks,
    OrauserN

    You are on 11g, where both read-only and read-write tables are candidates for compression; compressing tables under conventional DML (OLTP compression) was not an option in 10g. Compression saves space and can improve performance, but always test it first. Read the following docs; a brief syntax sketch follows after the links.
    http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf
    http://www.oracle.com/technetwork/database/options/compression/faq-092157.html
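    For the 25 read-only tables, a minimal sketch of basic compression (table and index names are placeholders; note that ALTER TABLE ... MOVE rewrites the segment and leaves its indexes unusable until rebuilt):

    ALTER TABLE ro_table MOVE COMPRESS;   -- rewrite existing blocks compressed
    ALTER INDEX ro_table_pk REBUILD;      -- indexes are UNUSABLE after a MOVE

    For the read-write tables, OLTP compression on 11g requires the Advanced Compression option:

    ALTER TABLE rw_table COMPRESS FOR OLTP;   -- compresses newly written blocks; add MOVE to rewrite existing rows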

  • Put Together A Data Archiving Strategy And Execute It Before Embarking On Sap Upgrade

    A significant amount is invested by organizations in an SAP upgrade project. However, few realize that data archiving before embarking on an SAP upgrade yields significant benefits, not only from a cost standpoint but also through reduced complexity during the upgrade. This article describes why this is a best practice and details what benefits accrue to organizations as a result of data archiving before an SAP upgrade. Avaali is a specialist in the area of Enterprise Information Management. Our consultants come with significant global experience implementing projects for the world's largest corporations.
    Archiving before Upgrade
    It is recommended to undertake archiving before upgrading your SAP system in order to reduce the volume of transaction data that is migrated to the new system. This results in shorter upgrade projects and therefore less upgrade effort and costs. More importantly production downtime and the risks associated with the upgrade will be significantly reduced. Storage cost is another important consideration: database size typically increases by 5% to 10% with each new SAP software release – and by as much as 30% if a Unicode conversion is required. Archiving reduces the overall database size, so typically no additional storage costs are incurred when upgrading.
    It is also important to ensure that data in the SAP system is cleaned before you embark on an upgrade. Most organizations tend to accumulate messy and unwanted data such as old material codes, technical data and subsequent posting data. Cleaning your data beforehand smooths the upgrade process, ensures you only have what you need in the new version, and helps reduce project duration. Consider archiving, or even purging if needed, to achieve this. Make full use of the upgrade and enjoy a new, more powerful and leaner system with enhanced functionality that can take your business to the next level.
    Archiving also yields Long-term Cost Savings
    By implementing SAP Data Archiving before your upgrade project you will also put in place a long term Archiving Strategy and Policy that will help you generate on-going cost savings for your organization. In addition to moving data from the production SAP database to less costly storage devices, archived data is also compressed by a factor of five relative to the space it would take up in the production database. Compression dramatically reduces space consumption on the archive storage media and based on average customer experience, can reduce hardware requirements by as much as 80% or 90%. In addition, backup time, administration time and associated costs are cut in half. Storing data on less costly long-term storage media reduces total cost of ownership while providing users with full, transparent access to archived information.

    Maybe this article can help; it uses XML for structural-change flexibility: http://www.oracle.com/technetwork/oramag/2006/06-jul/o46xml-097640.html

  • Oracle Apps 11i - Data Archival

    Hi,
    Has anyone done data archival on Oracle Apps? I would like to know if there are any best practices or guidelines for it.
    Kindly share your experience of data archival on Oracle Apps.
    Regards
    Sridhar M

    Hi,
    Please see:
    Oracle E-Business Suite Data Archival Strategy
    http://documents.club-oracle.com/downloads.php?do=file&id=1862
    Can We archive the EBS r12 tables data?
    Also see:
    http://it.dspmanagedservices.co.uk/blog-1/bid/60253/Managing-data-growth-on-E-Business-Suite-with-an-archiving-strategy
    Check this PDF.
    Regards,
    Helios

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. Each script has two parts: first copy data from the original table to the archive table, then delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. Please help... more info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to a table in schema3 (OTW), and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many organizations elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop the partitions that contain old data when you no longer need them.
    The other solution is to stop waiting so long to delete the data, so you do not have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. The number of rows being deleted will then be much smaller and, if the stats are kept current, Oracle may decide to use the index.
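    A minimal sketch of the partitioning approach using 11g interval partitioning (names and dates are illustrative, not the poster's actual schema):

    -- One partition per month is created automatically as rows arrive.
    CREATE TABLE mon_txns_part (
      id_txn       NUMBER(12) NOT NULL,
      dat_creation DATE       NOT NULL
      -- ... remaining MON_TXNS columns ...
    )
    PARTITION BY RANGE (dat_creation)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));

    -- Ageing out a month is then a metadata operation, not a row-by-row DELETE.
    ALTER TABLE mon_txns_part DROP PARTITION FOR (DATE '2013-01-15') UPDATE INDEXES;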

  • HR PA Data Archiving

    Hi,
    We are undergoing an archiving project for the HR module. For PD data we can use object BC_HROBJ, and for PCL4 data we can use PA_LDOC. What about the 2000-series infotype data, such as PA2001, PA2002, PA2003, etc.? Because all changes to these infotypes are stored in PCL4 cluster LA, we only need to purge the data from these tables. What is the best way to purge PA2xxx data? We cannot use transaction PA30/PA61 to delete records because user-exit edits prevent any action against old data beyond a certain time frame.
    Thanks very much,
    Li

    This is not directly related to SAP NetWeaver MDM. You may find more information about data archiving on SAP Service Marketplace at http://www.service.sap.com/data-archiving or http://www.service.sap.com/slo.
    Regards,
    Markus

  • Business Partner Data Archiving - Help Required

    Hi,
    I am new to Data Archiving and need help archiving Business Partners in CRM. I tried to archive some BPs but they were not archived. Kindly throw some light on it.
    The problems we face are:
    1) When we try to write a Business Partner to an archive file, the job log shows Finished but no files are created in the system.
    2) When we try to delete the BPs from the database, it doesn't show any archived BPs that could be deleted (I guess this is because there are no archive files).
    The archive object used is CA_BUPA. We have created a variant and set the time as Immediate and the spool to LOCL.
    Kindly let us know if there is a step we are missing here or if we are on the wrong track.
    All answers are appreciated.
    Thanks,
    Prashanth

    Hi,
    To archive a BP the following steps are to be followed.
    A. Mark the BP to be archived in transaction BP > Status tab.
    B. Run dacheck.
    FYI, steps on how to perform dacheck:
    1. In transaction DACONTROL, change the following parameters to these values for CA_BUPA:
       No. calls/packages    1
       Number of Objects    50
       Package Size RFC     20
       Number of Days        1 (if you use the mobile client it should be more)
    2. If a business partner should be archived directly after the archiving note was set, this can be done by resetting the check date with report CRM_BUPA_ARCHIVING_FLAGS. Here, check (set) only the options:
       - Archiving Flag
       - Reset Check Date
       The "Reset all" options should be unchecked.
    3. Go to dacheck and run the process.
    4. This will change the status of the BPs to 'Archivable'.
       Only those BPs which are not involved in any business transactions, install base or Product Partner Range (PPR) will be set to archivable.
    The BPs with status 'Archivable' will then be used by the archiving run.
    Kindly refer to note 646425 before going to step C.
    C. Then run transaction CRM_BUPA_ARC.
    - Make sure that the "selection program" in transaction DACONTROL is maintained as CRM_BUPA_DA_SELECTIONS.
    - Also create a variant, which can be done by executing CRM_BUPA_DA_SELECTION and entering the variant's name in the Variant field. This variant is the business partner range.
    - Also please check note 554460.
    Regards, Gerhard

  • Data archiving for Write Optimized DSO

    Hi Gurus,
    I am trying to archive data in a Write-Optimized DSO.
    It allows me to archive on a request basis, but it archives entire requests in the DSO (meaning all data).
    However, I want to archive only a selected range of requests (a request selection of my own).
    Please guide me.
    I got the details below from SDN. Kindly check.
    Archiving for Write-Optimized DSOs follows request-based archiving, as opposed to the time-slice archiving in a standard DSO. This means that partial request activation is not possible; only complete requests can be archived.
    The characteristic for the time slice can be a time characteristic present in the WDSO, or the request creation date/request loading date. You are not allowed to add additional InfoObjects for semantic groups; the defaults are 0REQUEST & 0DATAPAKID.
    The actual process of archiving remains the same, i.e.:
    Create a Data Archiving Process
    Create and schedule archiving requests
    Restore archiving requests (optional)
    Regards,
    kiruthika

    Hi,
    Please check the OSS Note below:
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d31313338303830%7d
    -Vikram

  • SAP Data Archiving in R/3 4.6C and ECC 6.0.

    Hi Guys,
    Need some suggestions.
    We are currently working on SAP R/3 4.6C and have plans to upgrade to ECC 6.0.
    In the meantime there is a requirement for SAP data archiving to reduce the database size and increase system performance. So I wanted to know whether it is better to do data archiving before or after the upgrade, and which is technically more comfortable. I also wanted to know whether any more advanced methods are available in ECC 6.0 compared to SAP R/3 4.6.
    Please provide your valuable suggestions.
    Thanks and Regards
    Deepu

    Hi Deepu,
    With respect to the archiving process there is no major difference between 4.6 and ECC 6.0. However, you may get more out of an ECC 6.0 system because of archive routing and upgraded write and delete programs (the benefit will depend on your current programs in the 4.6 system). For example, in a 4.6 system the MM_EKKO write program RM06EW30 archives purchasing documents based on company code in the selection criteria, and there is no preprocessing functionality. In ECC 6.0 you can archive by purchasing organization in the selection criteria, and the preprocessing functionality additionally helps in archiving POs.
    If you archive documents in 4.6 and later upgrade to ECC 6.0, SAP assures that you can still retrieve the archived data.
    With this I can say that archiving after the upgrade to ECC 6.0 will be better with respect to the archiving process.
    -Thanks,
    Ajay

  • Vendor Master Data Archiving

    Hi,
    I want to archive vendor master data. I found program SAPF058V, which produces the "FI, Vendor Master Data Archiving: Proposal list".
    This program checks whether, and which, vendor master data can be archived. After running this program with one vendor selected, I get the error message "Links stored incompletely". Can someone please help with this?
    Thanks...Narendra

    Hi,
    Check if you have an entry in table FRUN for PRGID 'SAPF047'. Another option is to set the parameter 'FI link validation off' flag on (i.e. value 'X'). Check the help for this flag; this vendor most likely also has a customer code assigned. You may need to delete this link in the customer and vendor master records first.
    I hope this helps you,
    Regards,
    Eduardo

  • Data Archiving - System Prerequisites

    Hi,
    We are planning to archive some of the tables to reduce TCO (total cost of ownership).
    In this regard, I would like to know the following:
    On the Basis side, I want to check for any prerequisites (like minimum SP level, kernel version, notes to be imported, etc.).
    Is there any document which clearly describes these prerequisites for preparing the system so the archiving work can be carried out without any issues?
    (Note: We are not using an ILM solution for archiving.)
    I am mostly concerned with the SAP Notes that are considered prerequisites.
    Best Regards
    Raghunahth L
    Important System information :
    Our system version is as follows :
    System  -> ERP 2005  (Production Server)
    OS      -> Windows 2003
    DB      -> Oracle 10.2.0.2.0
    SPAM    -> 7.00 / 0023
    Kernel  -> 7.00 (133)
    Unicode -> Yes
    SP Info:
    SAP_BASIS 700(0011)
    SAP_ABA 700(0011)
    PI_BASIS 2005_1_700(0003)
    ST-PI 2005_1_700(0003)
    SAP_BW 700(0012)
    SAP_AP 700(0008)
    SAP_HR 600(0003)
    SAP_AP 700(0008)
    SAP_HR 600(0013)
    SAP_APPL 600(0008)
    EA-IPPE 400(0008)
    EA-DFPS 600(0008)
    EA-HR 600(0013)
    EA-FINSERV 600(0008)
    FINBASIS 600(0008)
    EA-PS 600(0008)
    EA-RETAIL 600(0008)
    EA-GLTRADE 600(0008)
    ECC-DIMP 600(0008)
    ERECRUIT 600(0008)
    FI-CA 600(0008)
    FI-CAX 600(0008)
    INSURANCE 600(0008)
    IS-CWM 600(0008)
    IS-H 600(0008)
    IS-M 600(0008)
    IS-OIL 600(0008)
    IS-PS-CA 600(0008)
    IS-UT 600(0008)
    SEM-BW 600(0008)
    LSOFE 600(0008)
    ST-A/PI 01J_ECC600(0000)
    Tables we are planning to archive
    AGKO, BFIT_A, BFIT_A0, BFO_A_RA, BFOD_A, BFOD_AB, BFOK_A, BFOK_AB, BKPF, BSAD, BSAK, BSAS, BSBW, BSE_CLR, BSE_CLR_ASGMT, BSEG_ADD, BSEGC, BSIM, BSIP, BSIS, BVOR, CDCLS, CDHDR, CDPOS_STR, CDPOS_UID, ETXDCH, ETXDCI, ETXDCJ, FAGL_BSBW_HISTRY, FAGL_SPLINFO, FAGL_SPLINFO_VAL, FAGLFLEXA, FIGLDOC, RF048, RFBLG, STXB, STXH, STXL, TOA01, TOA02, TOA03, TOAHR, TTXI, TTXY, WITH_ITEM, COFIP, COFIS, COFIT, ECMCA, ECMCT, EPIDXB, EPIDXC, FBICRC001A, FBICRC001P, FBICRC001T, FBICRC002A, FBICRC002P, FBICRC002T, FILCA, FILCP, FILCT, GLFLEXA, GLFLEXP, GLFLEXT, GLFUNCA, GLFUNCP, GLFUNCT, GLFUNCU, GLFUNCV, GLIDXA, GLP0, GLPCA, GLPCP, GLPCT, GLSP, GLT0, GLT3, GLTP, GLTPC, GMAVCA, GMAVCP, GMAVCT, JVPO1, JVPSC01A, JVPSC01P, JVPSC01T, JVSO1, JVSO2, JVTO1, JVTO2, STXB, STXH, STXL, TRACTSLA, TRACTSLP, TRACTSLT
    In addition, we have some Z tables to be archived.
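    (To prioritise a list like this, one way to check each candidate table's footprint is a query against the Oracle dictionary; a sketch, assuming access to DBA_SEGMENTS, with the IN-list being just a sample of the tables named above:)

    SELECT segment_name, ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
    FROM   dba_segments
    WHERE  segment_name IN ('BKPF', 'BSIS', 'BSAS', 'GLPCA', 'FAGLFLEXA')
    GROUP  BY segment_name
    ORDER  BY size_gb DESC;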

    Hi,
    Whether you need OSS notes or BC sets as prerequisites depends upon the programs used for the archiving write, delete and read steps. We search for OSS notes in scenarios such as these:
    - If there are no proper selection criteria in the write variant selection;
    - If the program gets terminated due to long processing time;
    - If the percentage of data archived for your selection is low even though the data meets the minimum criteria;
    - If the system allows users to change archived data.
    If SAP has released BC sets for these cases, then we implement them.
    If you have a problem while archiving and, based on your archiving experience, you think that some OSS notes will help, then take a call on implementing them.
    Given the tables you have mentioned, I can say that archiving objects such as FI_DOCUMNT, FI_SL_DATA, EC_PCA_ITM, EC_PCA_DATA, CHANGEDOCU and ARCHIVELINK will be relevant.
    You should search for the latest released BC sets or OSS notes for your system's applications.
    -Thanks,
    Ajay

  • Can I add multple tables to a single Flashback Data archive

    Hi,
    I want to add multiple tables of a schema to a single Flashback Data Archive. Is this possible?
    I don't want to create a separate Flashback Archive for each table.
    Also, can I add an entire schema or a tablespace to a Flashback Archive once it is created?
    Thanks,
    Sachin K

    Is adding multiple tables to a single flashback archive feasible in terms of space? Yes. Multiple tables can share the same policies for data retention and purging. Moreover, since a flashback data archive consists of one or more tablespaces (or subparts thereof), multiple archives can be constructed, each with a different but specific retention period. This means it is possible to construct archives that support different retention needs (a syntax sketch follows the examples below). Here are a few common examples:
    90 days for common short-term historical inquiries
    One full year for common longer-term historical inquiries
    Seven years for U.S. Federal tax purposes
    20 years for legal purposes
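    A minimal sketch of sharing one archive across several tables (tablespace, archive and table names are placeholders):

    CREATE FLASHBACK ARCHIVE fda_1yr
      TABLESPACE fda_ts
      QUOTA 10G
      RETENTION 1 YEAR;

    -- Any number of tables can point at the same archive.
    ALTER TABLE orders    FLASHBACK ARCHIVE fda_1yr;
    ALTER TABLE customers FLASHBACK ARCHIVE fda_1yr;

    Note that enablement is per table; there is no schema- or tablespace-level shortcut, which also answers the second question above.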
    How is the space for archiving data to a flashback archive generally estimated? See "Flashback Data Archiver Process (FBDA)": http://docs.oracle.com/cd/E18283_01/server.112/e16508/process.htm#BABGGHEI
    FBDA automatically manages the flashback data archive for space, organization, and retention. Additionally, the process keeps track of how far the archiving of tracked transactions has occurred.
    FBDA wakes up every 5 minutes by default (*1), checks tablespace quotas every hour, creates history tables only when it has to, and rests otherwise if not called to work. The process adjusts the wake-up interval dynamically based on the workload.
    (*1) Contradicting observations to MOS Note 1302393.1 were made: the note says that FBDA wakes up every 6 seconds as of Oracle 11.2.0.2, but a trace on 11.2.0.3 showed:
    WAIT #140555800502008: nam='fbar timer' ela= 300004936 sleep time=300 failed=0 p3=0 obj#=-1 tim=1334308846055308
    5.1 Space management
    To ease the space management for FDA we recommend following these rules:
    - Create an archive for different retention periods, to optimally use space and take advantage of automatic purging when retention expires.
    - Group tables with the same retention policy (share FDAs), to reduce the maintenance effort of multiple FDAs with the same characteristics.
    - Dedicate tablespaces to one archive and do not set quotas, to reduce maintenance/monitoring effort (quotas are only checked every hour by FBDA).
    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Flashback-Data-Archives-Rechecked-v1.4_final.pdf
    Regards
    Girish Sharma

  • SAP Data Archiving in R/3 Ecc6.0, MS-SQL 2008 Server:

    We are using an MS-SQL 2008 database, and these days our production database is growing quite fast. I want to set up data archiving using transaction DB15. Can anyone give the steps to set up data archiving using DB15?

    Data archiving is bound by legal requirements, depending on the country and the company you work for. Setting up archiving is not just following a few steps; you would also need a system in which to put the archived data.
    Markus

  • What is the major use of TAANA in data archiving?

    People say that TAANA can give the distribution profile of the table entries over different table fields.
    I do not see how this can help us with data archiving.
    Please help.
    Thanks a lot!

    Hi Jennifer,
    I use TAANA all the time to help determine what data is in a particular table. For example, I use it to determine how many records there are for a certain document type, per fiscal year, etc. This information can then be used to determine how many archive runs you will need. If there are millions of rows for a fiscal year, then you will more than likely want to break that up into months or quarters (depending on volume).
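    In SQL terms, what TAANA gives you is essentially a grouped count. A rough sketch against the FI document header (BLART and GJAHR are standard BKPF fields, used here purely as an example):

    SELECT blart AS doc_type, gjahr AS fiscal_year, COUNT(*) AS cnt
    FROM   bkpf
    GROUP  BY blart, gjahr
    ORDER  BY gjahr, blart;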
    Hope this helps.
    Best Regards,
    Karin Tillotson
