Data Archiving space calculation

Hi,
How do we estimate the space saved by archiving an object?
Do we need to take the record counts of the related tables into account?
Thanks,
Priya.

Hi Priya,
There's another way to find out how much space the SAP system gained after the archiving runs.
You can follow the steps below to analyze the gained space:
1. Go to the transaction SARA
2. Click on Statistics
3. On the selection screen, enter the archiving objects you used when launching the archiving jobs
4. You can also enter a date range covering the dates on which you ran the archiving jobs
5. Restrict the status to Complete only
6. Now execute, i.e. display the statistics
7. In the final report, look at the total for DB space (Delete). Keep in mind that there are two figures, DB space (Write) and DB space (Delete); the DB space (Delete) figure tells you how much space you have freed up in the SAP R/3 database, i.e. how much space was gained by the archiving runs.
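For example, if the statistics show DB space (Write) = 10,000 MB and DB space (Delete) = 10,000 MB for your runs, roughly 10 GB was freed in the database; a DB space (Delete) value of 0 usually means the corresponding delete jobs have not completed yet.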
Let me know if this helps you.
Regards,
Shamim

Similar Messages

  • Can I add multiple tables to a single Flashback Data Archive

    Hi,
    I want to add multiple tables of a schema to a single Flashback Data Archive. Is this possible?
    I don't want to create a separate Flashback Data Archive for each table.
    Also, can I add an entire schema or a tablespace to a Flashback Data Archive I have created?
    Thanks,
    Sachin K

    Is adding multiple tables to a single flashback archive feasible in terms of space? Yes, it is. Multiple tables can share the same policies for data retention and purging. Moreover, since an FBDA consists of one or more tablespaces (or subparts thereof), multiple FBDAs can be constructed, each with a different but specific retention period. This means it's possible to construct FBDAs that support different retention needs (see the sketch after the list below). Here are a few common examples:
    90 days for common short-term historical inquiries
    One full year for common longer-term historical inquiries
    Seven years for U.S. Federal tax purposes
    20 years for legal purposes
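    As a minimal sketch (the archive and table names here are made up for illustration), assigning several tables to one existing flashback archive takes just one ALTER TABLE per table:
    -- all three tables then share fba_1year's retention and purging policy
    ALTER TABLE hr.employees FLASHBACK ARCHIVE fba_1year;
    ALTER TABLE hr.departments FLASHBACK ARCHIVE fba_1year;
    ALTER TABLE hr.jobs FLASHBACK ARCHIVE fba_1year;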
    How is the space for archiving data to a flashback archive generally estimated? See Flashback Data Archiver Process (FBDA): http://docs.oracle.com/cd/E18283_01/server.112/e16508/process.htm#BABGGHEI
    FBDA automatically manages the flashback data archive for space, organization, and retention. Additionally, the process keeps track of how far the archiving of tracked transactions has occurred.
    It (FBDA) wakes up every 5 minutes by default (*1), checks tablespace quotas every hour, creates history tables only when it has to, and otherwise rests when not called to work. The process adjusts the wakeup interval dynamically based on the workload.
    (*1) Observations contradicting MOS Note 1302393.1 were made. The note says that FBDA wakes up every 6 seconds as of Oracle 11.2.0.2, but a trace on 11.2.0.3 showed:
    WAIT #140555800502008: nam='fbar timer' ela= 300004936 sleep time=300 failed=0 p3=0 obj#=-1 tim=1334308846055308
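    (The ela figure in the trace is in microseconds, so ela= 300004936 is roughly 300 seconds, which matches both sleep time=300 and the 5-minute default wakeup described above.)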
    5.1 Space management
    To ease space management for FDA, we recommend following these rules:
    - Create an archive for each different retention period, to optimally use space and take advantage of automatic purging when retention expires
    - Group tables with the same retention policy (share FDAs), to reduce the maintenance effort of multiple FDAs with the same characteristics
    - Dedicate tablespaces to one archive and do not set quotas, to reduce maintenance/monitoring effort; quotas are only checked every hour by FBDA
    http://www.trivadis.com/uploads/tx_cabagdownloadarea/Flashback-Data-Archives-Rechecked-v1.4_final.pdf
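    A minimal SQL sketch of those rules (the tablespace and archive names and datafile paths are made up for illustration); omitting the QUOTA clause lets each archive use its whole tablespace:
    -- one dedicated tablespace per archive, one archive per retention period, no quotas
    CREATE TABLESPACE fda_ts_90d DATAFILE '/u01/oradata/fda_ts_90d_01.dbf' SIZE 5G;
    CREATE FLASHBACK ARCHIVE fda_90d TABLESPACE fda_ts_90d RETENTION 90 DAY;
    CREATE TABLESPACE fda_ts_7y DATAFILE '/u01/oradata/fda_ts_7y_01.dbf' SIZE 10G;
    CREATE FLASHBACK ARCHIVE fda_7y TABLESPACE fda_ts_7y RETENTION 7 YEAR;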
    Regards
    Girish Sharma

  • A question on different options for data archival and compression

    Hi Experts,
    I have a production database of about 5 TB and about 50 GB in development/QA. I am on Oracle 11.2.0.3 on Linux. We have RMAN backups configured. I have a question about data archival strategy. To keep the OLTP database size optimal, what options can be suggested for data archival?
    1) Should each table have an archival strategy?
    2) What is the best way to archive data - should it be sent to a separate archival database?
    In our environment, an archival strategy is defined for only about 15 tables. For these tables, we copy their data every night to a separate schema meant to store this archived data, and then eventually transfer it to a different archival database. For all other tables, there is no data archival strategy.
    What are the different options and best practices that can be reviewed to put a practical data archival strategy in place? I will be most thankful for your inputs.
    Also, what are the different data compression strategies? For example, we have about 25 tables that are read-only. Should they be compressed using the default Oracle 9i basic compression (ALTER TABLE ... COMPRESS)?
    Thanks,
    OrauserN

    You are using 11g, and in 11g you can compress read-only as well as read-write tables; both are always candidates for compression. This will save space and could improve performance, but always test it first. This was not an option in 10g. Read the following docs:
    http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf
    http://www.oracle.com/technetwork/database/options/compression/faq-092157.html
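    As a rough sketch on 11.2 (the table and index names are made up): basic compression suits your read-only tables, while OLTP compression, licensed separately as the Advanced Compression option, suits tables that still receive DML. Note that MOVE rebuilds the table, so dependent indexes are left unusable and must be rebuilt:
    -- basic compression for the read-only tables
    ALTER TABLE sales_history MOVE COMPRESS BASIC;
    -- OLTP compression for tables with ongoing DML (Advanced Compression option)
    ALTER TABLE orders MOVE COMPRESS FOR OLTP;
    -- rebuild indexes invalidated by the MOVE
    ALTER INDEX orders_pk REBUILD;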

  • R/3 archiving - wrong calculated field on 2LIS_02_ITM extractor

    Hello.
    Some information and a question for those who have implemented archiving with R/3.
    We have found an issue related to R/3 archiving and 2LIS_02_ITM.
    A calculated field, Delivery Quantity Variance in % (field name MABW; see http://help.sap.com/saphelp_nw04/helpdata/en/58/f6383fdb800804e10000000a114084/frameset.htm), gives wrong data.
    MABW is calculated using MSEG data.
    The MSEG data was archived (and deleted), so the calculation goes wrong and returns an incorrect value.
    Given the archiving process, we will potentially run into problems due to the standard extractor behaviour.
    Have you ever noticed the same behaviour, i.e. wrong calculations in these fields?
    thank you in advance for your cooperation...!
    Dieguz
    The details are the following:
    We have an R/3 PRODUCTION system (called MP0), connected with three BW systems (MP2, MP4, AP4).
    Someone performed an R/3 data archiving, using an archiving tool (IXOS). They archived the data with a one-to-one object approach:
    FI_DOCUMNT archived 18 months from clearing date
    MM_MATBEL archived 24 months from posting date
    SD_VBRK archived 24 months from posting date
    RV_LIKP archived 24 months from posting date
    SD_VBAK archived 24 months from posting date
    MM_ACCTIT 18 months from posting date
    OIG_SHIPMNT 24 months from posting date
    MM_INV_BEL 24 months from posting date
    RL_TA 12 months from posting date
    RL_TB 12 months from posting date
    In the R/3 system, after the archiving, those documents were deleted.
    Our BW systems extract data using custom extractors and standard ones.
    - 2LIS_02_*
    - 2LIS_03_BF
    - 2LIS_11_*
    - 2LIS_17_*
    - 2LIS_13_*
    - 0FI_AR_4
    - 0FI_GL_4
    - 0FI_AP_4
    - 0EC_PCA_3
    - 0CO_OM_CCA_9
    We have found problems with 2LIS_02_ITM.
    During the delta extraction, we found records related to purchase orders, with the GR archived, that have the MABW field (Delivery Quantity Variance in %) set to "-100"; the correct value should be "0" (because the purchase order was completed).
    MP0 is a 4.0B R/3 system, SP 83, plug-in 2004.1.
    Thank you in advance for your cooperation.
    best regards
    Diego

    Hi Waldemar,
    LOEKZ (the deletion indicator in the purchasing document) is not available in the standard extract structure; we have to append that field via an append structure in RSA6.
    BUKRS is available in the standard extract structure:
    1. In the LO cockpit, open the extract structure, select the BUKRS (Company Code) field, and transfer it to the left side.
    2. Click on data source maintenance and check the BUKRS field.
    You will then get the BUKRS field in 2LIS_02_ITM.
    Thanks
    Sree

  • Regarding data Archiving

    Hi All,
    My client needs to archive data in SAP 8.8 each year, having only 1 year completed. Can we archive data that is less than 3 years old?
    Thanks in Advance

    There will not be any technical problem if the archiving is done as per the instructions set by SAP (I suggest you go through the Data Archiving session from SAP). What I meant was that any data less than 3 years old may still be considered new data, and if it is archived you are left with no way to modify anything in it, as it becomes read-only. If the client is looking for extra space on the server through data archiving, you can suggest other steps as well. Again, as I said, it is the client's call after all.
    regards
    johnson

  • New to SAP Data Archiving

    Hi,
    Is it a must to have a third-party tool/software for SAP Data Archiving, or does SAP have its own solution (to create archive files, write archive files, and delete from the database)?
    Any pointers to more information on the different stages of an SAP Data Archiving project? (Any links to how-to documents would be a great help.)
    Regards,
    Rehan

    You have posted your query in the wrong forum.
    This space is for discussion of SAP TDMS topics.
    You may check the Information Lifecycle Management forum for any available information.

  • Put Together A Data Archiving Strategy And Execute It Before Embarking On SAP Upgrade

    Organizations invest a significant amount in an SAP upgrade project. However, few realize that data archiving before embarking on an SAP upgrade yields significant benefits, not only from a cost standpoint but also through reduced complexity during the upgrade. This article describes why this is a best practice and details the benefits that accrue to organizations as a result of archiving data before an SAP upgrade. Avaali is a specialist in the area of Enterprise Information Management; our consultants come with significant global experience implementing projects for the world's largest corporations.
    Archiving before Upgrade
    It is recommended to undertake archiving before upgrading your SAP system in order to reduce the volume of transaction data that is migrated to the new system. This results in shorter upgrade projects and therefore less upgrade effort and cost. More importantly, production downtime and the risks associated with the upgrade will be significantly reduced. Storage cost is another important consideration: database size typically increases by 5% to 10% with each new SAP software release, and by as much as 30% if a Unicode conversion is required. Archiving reduces the overall database size, so typically no additional storage costs are incurred when upgrading.
    It is also important to ensure that the data in the SAP system is cleaned before you embark on an upgrade. Most organizations tend to accumulate messy and unwanted data such as old material codes, technical data and subsequent posting data. Cleaning your data beforehand smooths the upgrade process, ensures you only have what you need in the new version, and helps reduce project duration. Consider archiving, or even purging if needed, to achieve this. Make full use of the upgrade and enjoy a new, more powerful and leaner system with enhanced functionality that can take your business to the next level.
    Archiving also yields Long-term Cost Savings
    By implementing SAP Data Archiving before your upgrade project you will also put in place a long term Archiving Strategy and Policy that will help you generate on-going cost savings for your organization. In addition to moving data from the production SAP database to less costly storage devices, archived data is also compressed by a factor of five relative to the space it would take up in the production database. Compression dramatically reduces space consumption on the archive storage media and based on average customer experience, can reduce hardware requirements by as much as 80% or 90%. In addition, backup time, administration time and associated costs are cut in half. Storing data on less costly long-term storage media reduces total cost of ownership while providing users with full, transparent access to archived information.
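    As a rough illustration of that 5:1 figure: archiving 500 GB of transaction data frees the full 500 GB in the production database, while the resulting archive files occupy only about 100 GB on the cheaper archive storage.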

    Maybe this article can help; it uses XML for structural change flexibility: http://www.oracle.com/technetwork/oramag/2006/06-jul/o46xml-097640.html

  • SAP Data Archiving

    I have a question on archiving statistics.
    I get some statistical information in the spool file of the archive run, and I can also see some additional statistical information when I click the "Statistics" button in transaction SARA.
    I am trying to understand the relation, primarily, among:
    1) Size of Archiving Session in MB
    2) Occupied Database space in MB
    3) Deleted Database space in MB
    Which of these numbers, or which combination of them, reflects the actual disk space freed in the database by the archiving run?
    This information is critical for the upper management.
    Your help is highly appreciated.
    Regards,
    Srinivasa

    Hi Juan,
    Appreciate your inputs. Even though performance improvement is the primary objective for implementing archiving, containing data growth is an objective as well. I am not sure if there are any methodologies to quantify the performance improvements resulting from archiving. Currently I am concerned with quantifying the containment of data growth as a result of archiving.
    I have also been looking at documentation on archiving statistics. SAP says data archiving achieves a 5:1 compression for non-cluster tables, and no further compression for cluster tables.
    Since SAP provides the statistical information (in the spool file of an archive session and also via the "Statistics" button in transaction SARA), I believe it should be correct. But I am confused as to what the numbers in the statistics really reflect.
    Here are the numbers from a production run of CO_ML_IDX (which archives data from CKMI1 table):
    From the spool file the numbers are:
    Size of Archiving Session in MB   -    700.900
    Occupied Database Space in MB     - 10,827.569
    From the statistics display the numbers are:
    Disk Space         -   700.9
    DB Space (Write)   - 10827.57
    DB Space (delete)  -     0
    Did this archive run free up 3,504.5 MB (which is 5 × 700.9) or 10,827.57 MB (the DB space written)? Should the "DB Space (Delete)" figure not reflect the space deleted from the database? If so, it should have some non-zero value.
    How are these three values related to the actual disk space freed up in the database as a result of this archive run?
    How are these three values related to actual disk space freed up in the database as a result of this archive run?
    I would appreciate you sharing your findings.
    Regards,
    Srinivasa

  • Slightly OT: Video Disk Space Calculator app now available

    Hi all,
    I'm an FCP editor who works in a lot of different codecs, so I quite often need to know the disk space needed for a length of footage in a certain one. I know there are a few apps (widgets I think) that do this, but I disable Dashboard and haven't found any others I like.
    As a former programmer by trade, I figured this was a good reason to try learning Cocoa programming. So, as my first project, I present Video Disk Space Calculator (OS X 10.3 and above, 49 KB zip file).
    It's a tiny little app that does two things - calculates hard drive space based on codec/length of footage, and displays/prints out a table of codecs and their data rates. It's completely free of course. I would REALLY appreciate feedback, I figure by correcting issues and adding features I'll learn more about Cocoa programming...
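    (For example, DV at roughly 3.6 MB/s works out to about 216 MB per minute of footage, or around 12.7 GB per hour; the app does that codec-by-codec arithmetic for you.)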
    Download here: Video Disk Space Calculator
    Comments/Questions/Etc: http://www.rabidjackalope.com/vdsc/
    G5 Dual 2.5 GHz, 23" Cinema HD Display, Mac OS X (10.4.6)

    this is very nice! i particularly like the data rate chart.
    how about something similar for bitrates and various compression methods? there are a few calculators out there but i don't like any of them.
    Thanks, glad it is useful.
    I'll keep the bitrate and compression stuff in mind for Version 2 in the future. It's a good idea.
    Thanks,
    Jason

  • MARS v6 Data Archiving

    In the data archiving section (Admin > Data Archiving), what does Remote Storage Capacity in Days mean, and what impact does the number of days have on MARS and the data archive location?
    I set my storage capacity to 90 days. From the MARS documentation, MARS begins archiving from the enable point going forward. So if I start archiving today, I will have to wait 90 days to get 90 days of archived data. The point of my NetPro question is: what happens on the 91st day? Does MARS stop archiving? Does MARS automatically remove the 1st archived day? Or do I need to remove the 1st archived day myself?
    From the MARS documentation: "The data in the NFS or SFTP file share has a life span specified in days. Therefore, to keep a year's worth of data, you would specify 365 days as the value for the Remote Storage Capacity (in Days) field. All data older than 365 days is purged from the archive file."
    How is the data purged?
    Thanks

    Rick,
    The answer to your question is yes. You could set the archive for a larger number, too. The limitation is the size of the box.
    Here is an excerpt from the MARS config guide on the subject, which you may have already read:
    "Each MARS Appliance seamlessly archives data using an expiration date that you specify. When the MARS internal storage reaches capacity, it automatically purges the data in the oldest partition of the local database, roughly 10% of the stored event and session data. The data in the NFS or SFTP file share has a life span specified in days. Therefore, to keep a year's worth of data, you would specify 365 days as the value for the Remote Storage Capacity (in Days) field. All data older than 365 days is purged from the archive file.
    When planning for space requirements, use the following guidance: Estimate 6 GB of storage space for one year's worth of data, received at a sustained 10 events/second. This estimate assumes an average of 200 Bytes/event and a compression factor of 10, both realistic mean values. In addition to capacity planning, plan the placement of your archive server to ensure a reliable network connection that can transmit 10 MB/second exists between the archive server and the MARS Appliance. You should consider using the eth1 interface to avoid high-traffic networks that might introduce latency and to ensure that the backup operation is not competing with other operations in the MARS Appliance. Also, define a default route to the archive server on the MARS Appliance and that you verify any intermediate routers and firewalls allow for multi-hour NFS or SFTP connections to prevent session timeouts during the backup operation."
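    As a quick sanity check of that estimate: 10 events/second × 200 bytes/event is 2,000 bytes/second, or about 172.8 MB per day; over 365 days that is roughly 63 GB of raw data, which a compression factor of 10 brings down to about 6.3 GB, in line with the 6 GB guidance.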
    Hope this helps.

  • HR PA Data Archiving

    Hi,
    We are undertaking an archiving project for the HR module. For PD data we can use object BC_HROBJ, and for PCL4 data we can use PA_LDOC. What about the 2000-series infotype data, such as PA2001, PA2002, PA2003, etc.? Because all changes to these infotypes are stored in PCL4 cluster LA, we only need to purge the data from these tables. What is the best way to purge the PA2xxx data? We cannot use transaction PA30/PA61 to delete records, because user-exit edits prevent any action against old data beyond a certain time frame.
    Thanks very much,
    Li

    This is not directly related to SAP NetWeaver MDM. You may find more information about data archiving on SAP Service Marketplace at http://www.service.sap.com/data-archiving or http://www.service.sap.com/slo.
    Regards,
    Markus

  • Business Partner Data Archiving - Help Required

    Hi,
    I am new to data archiving and need help archiving business partners in CRM. I tried to archive some BPs, but they were not archived. Kindly throw some light on it.
    The problems we face are:
    1) When we try to write a business partner to an archive file, the job log shows Finished, but no files are created in the system.
    2) When we try to delete the BPs from the database, it doesn't show any archived BPs that could be deleted (I guess this is because there are no archive files).
    The archiving object used is CA_BUPA. We have created a variant and set the time to Immediate and the spool to LOCL.
    Kindly let us know if there is any step we are missing here and if we are on the wrong track.
    All answers are appreciated.
    Thanks,
    Prashanth

    Hi,
    To archive a BP, the following steps are to be followed.
    A. Mark the BP to be archived in transaction BP > Status tab.
    B. Run dacheck.
    FYI, steps on how to perform dacheck:
    1. In transaction DACONTROL, change the following parameters to these values for CA_BUPA:
    No. calls/packages   1
    Number of Objects   50
    Package Size RFC     20
    Number of Days       1 (if you use the mobile client it should be more)
    2. If a business partner should be archived directly after the archiving flag was set, this can be done by resetting the check date with report CRM_BUPA_ARCHIVING_FLAGS.
       Here, only check (set) the options:
       - Archiving Flag
       - Reset Check Date
       All other reset options should be unchecked.
    3. Go to dacheck and run the process.
    4. This will change the status of the BPs to 'Archivable'.
       Only those BPs which are not involved in any business transactions, install base, or Product Partner Range (PPR) will be set to archivable.
    The BPs with status 'Archivable' will be used by the archiving run.
    Kindly refer to note 646425 before going to step C.
    C. Then run transaction CRM_BUPA_ARC.
    - Make sure that the "selection program" in transaction DACONTROL is maintained as CRM_BUPA_DA_SELECTIONS.
    - Also create a variant, which can be done by executing CRM_BUPA_DA_SELECTIONS and entering the variant's name in the Variant field. This variant is the business partner range.
    - Also please check note 554460.
    Regards, Gerhard

  • Data archiving for Write Optimized DSO

    Hi Gurus,
    I am trying to archive data in a write-optimized DSO.
    It allows me to archive on a request basis, but it archives all requests in the DSO (meaning all data).
    However, I want to archive from one selected request to another (my own selection of requests).
    Please guide me.
    I got the details below from SDN; kindly check:
    Archiving for write-optimized DSOs follows request-based archiving, as opposed to the time-slice archiving in a standard DSO. This means that partial request archiving is not possible; only complete requests can be archived.
    The characteristic for the time slice can be a time characteristic present in the WDSO, or the request creation date/request loading date. You are not allowed to add additional InfoObjects for semantic groups; the default is 0REQUEST & 0DATAPAKID.
    The actual process of archiving remains the same, i.e.:
    Create a Data Archiving Process
    Create and schedule archiving requests
    Restore archiving requests (optional)
    Regards,
    kiruthika

    Hi,
    Please check the OSS note linked below:
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler%7b5f4150503d3030323030363832353030303030303031393732265f4556454e543d444953504c4159265f4e4e554d3d31313338303830%7d
    -Vikram

  • SAP Data Archiving in R/3 4.6C and ECC 6.0.

    Hi Guys,
    I need some suggestions.
    We are currently working on SAP R/3 4.6 and have plans to upgrade it to ECC 6.0.
    In the meantime, there is a requirement for SAP Data Archiving to reduce the database size and increase system performance. So I wanted to know whether it is better to do data archiving before the upgrade or after, and which is technically more comfortable. I also wanted to know whether any advanced methods are available in ECC 6.0 compared to SAP R/3 4.6.
    Please provide your valuable suggestions.
    Thanks and Regards
    Deepu

    Hi Deepu,
    With respect to the archiving process, there will be no major difference between the 4.6 and ECC 6.0 systems. However, you may get more advantages in ECC 6.0 because of archive routing and upgraded write and delete programs (the upgrade will depend on your current programs in the 4.6 system). For example, in a 4.6 system, archiving object MM_EKKO with write program RM06EW30 archives purchasing documents based on company code in the selection criteria, and there is no preprocessing functionality. In ECC 6.0 you can archive by purchasing organization in the selection criteria, and the preprocessing functionality additionally helps in archiving POs.
    In case you archive documents in 4.6 and later upgrade to ECC 6.0, the SAP system will still let you retrieve the archived data.
    With this I can say that archiving after the upgrade to ECC 6.0 will be better with respect to the archiving process.
    -Thanks,
    Ajay

  • Vendor Master Data Archiving

    Hi,
    I wanted to archive vendor master data. I found program SAPF058V, which gives "FI, Vendor Master Data Archiving: Proposal List".
    This program checks whether, and which, vendor master data can be archived. After running this program with one vendor selected, I am getting the error message "Links stored incompletely". Can someone please help with this?
    Thanks...Narendra

    Hi,
    Check if you have an entry in table FRUN for PRGID 'SAPF047'. Another option is to set the parameter 'FI link validation off' with the flag on (i.e. value 'X'). Check the help for this flag; this vendor probably has a customer code assigned. Perhaps you must try to delete this link in the customer and vendor master first.
    I hope this helps you,
    Regards,
    Eduardo
