Archiving data retention period (best practice)

Hi,
Can anybody provide the standard FI-CO data retention period, i.e. how long data from the following tables has to be retained:
BKPF, BSAD, BSAK, BSAS, BSIS
What is the best business practice?
Thanks in advance
Joseph

Hi,
I would not recommend changing this setting. One major problem is that the archiving function deletes data from the database once it has been transferred to the archive. So far so good, that's one of the archiving features.
But it sometimes happens that you have to reopen an already closed fiscal year. And if you reopen this last closed fiscal year, real-time transactions can fail if data (transactions, etc.) that exists only in the archive at that point in time is missing.
By the way, you can have a look at SAP Note 389920, where the archiving functionality for the FI-AA application is documented.
Regards,
Markus

Similar Messages

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer that is constantly growing its base of centers.  This growth has turned a workload that used to be manageable with but two people into a never-ending sprint with five.  Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass merging data sources.  There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well.  As an example, we have centers in many cities, and each center has its own list of specific stores.  Data Merge cannot handle a single-column, or even a multiple-column, list of these stores very easily, and it has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention that much of the important data is held in attributes rather than in text fields that are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used in multiple documents with different layout styles, and since our store list is ever changing as leases end or begin, over 40 centers this becomes a big hairy monster.  We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward.  I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not.  This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end users that will be softened considerably.  Even so, I'm used to steep learning curves and the associated frustrations -- I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though, as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically, what you do is create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately, and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame.  Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be generated automatically with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • HR Master Data conversion-SAP Best Practices

    Hello there,
    We would like to use the SAP Best Practices for HR master data conversion.
    Now we want to leverage the SAP Best Practices to convert the master data.  Could anyone explain in detail how to do this?
    How do we install the Best Practices only to the extent of the data conversion?  We don't want to use the rest of the Best Practices.
    I know there are some notes out there.
    Any help on the above is highly appreciated.

    Hi,
    I am not very sure if you can install only the required component, but there would be some prerequisites for every installation.
    It will be clearly mentioned in the baseline.
    Also check whether it is available for the country in which you are currently working.
    Use the eCATT test configurations & test scripts.
    Please revert in case you need further details.

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
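    For illustration, a minimal sketch of this approach for one of the tables, assuming the XYZ.PATIENT table from your example and a hypothetical database link named prod_link:
    -- On the source (production) database: a materialized view log enables fast (incremental) refresh
    CREATE MATERIALIZED VIEW LOG ON xyz.patient WITH PRIMARY KEY;
    -- On the destination (warehouse) database: build the materialized view over the database link
    -- and schedule a nightly refresh at 2:00 AM
    CREATE MATERIALIZED VIEW patient_mv
      BUILD IMMEDIATE
      REFRESH FAST
      START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1 + 2/24
      AS SELECT * FROM xyz.patient@prod_link;
    -- A manual (fast) refresh can also be run at any time
    EXEC DBMS_MVIEW.REFRESH('PATIENT_MV', 'F');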
    Justin

  • Material ledger close period best-practice

    Hi experts:
    I am thinking about the model when you activate material ledger actual costing. During the period we will be valuating our COGS according to the standard cost. During period close, I understand that you can allocate the differences (for material already sold) directly to COPA. It means that we can allocate postings to accounts from account determination 'COC' directly to the cost objects that were allocated originally. Is that correct?
    Then it would only be necessary to use COPA post-valuation if you need to break down the differences according to the product costing. Is that correct?
    Thanks in advance for your help
    Best regards
      Jose

    Hi,
    It is possible to reopen a previously closed period. This is at your own risk.
    However, it is not something that we can recommend, because there is a possibility that inconsistencies occur. If you decide to proceed with the process, please review the documentation of program RMMMINIT (SE38).
    Please refer to notes 487381 & 369637 in relation to this program. Also review note 70545.
    You should make sure that you initialize the period via program RMMMINIT as soon as possible, to make sure that you do not create more inconsistencies. I understand that you may encounter error MM017 when running this report. You should bypass this message via debugging.
    Regards
    Louise

  • Collapsed Data Center Tier - Best Practice

    Hey guys,
    I'm working with a company that's doing a data center build-out. This is not a huge build-out and I don't believe I really need a two-tier design (access, core/aggregation); I'm looking for a one-tier design. I say this because they only really have one rack of hosts, and we are not connected to a WAN or campus network - we are a dev shop (albeit a pretty damn big dev shop) that hosts internet sites and web applications to the public.
    My network design relies heavily on VRFs. I treat every web application published to the internet as its own "tenant" with one leaked route, which is my management network, so I have any management servers (continuous deployment, monitoring, etc.) sitting in this subnet that is leaked. Each VRF has its own route to a virtual firewall context of its own and out to the internet.
    Right now we are in a managed data center. I'm going to be building out their own switching environment utilizing the above design and moving away from the managed data center. That being said, I need to pick the correct switches for this one-tier design. I need a good amount of 10GbE port density (124 ports minimum). I was thinking about going with 4x 5672UP or 4x C3064TQ-10GT - these will work as both my access and core (about 61 servers, one fiber uplink to my corporate network, and one fiber uplink to a firewall running multiple device contexts via multiple VLANs).
    That being said - with the use of VRFs, VLANs, and MP-BGP (used to leak my routes), what is the best redundancy topology for this design? If I were using Catalyst 6500s I would do VSS and be done with it, but I don't believe vPC on the Nexus switches works that way; it is really more for a two-tier model (vPC on two cores; the aggregation/access switch connects up to both cores but sees them as one). What I need to accomplish sounds to me like I'm going to be doing this the old-fashioned way: running a port channel between each switch, and hopefully using a non-STP method to avoid loops.
    Am I left with any other options? 

    ISP comes into the collapsed core after a router. A specific firewall interface (firewall is in multi context mode) sits on the "outside" vlan specific to each VRF. 

  • HCM Master data upload sequence & best practices

    Experts,
    What would be the best method and recommended sequence to upload HCM master data into the below infotypes?
    0,1,2,3   6,7,8,9   207,208,209,210  (payroll)
    21,167,168,169,170,171 and 3.series (Benefits)
    PA0795
    PA2006
    PA2012
    PBO795
    T529T
    T530T
    Please advise.
    Thanks in advance.
    NW

    Hi,
    The best method for mass upload is LSMW.
    The sequence will be as follows:
    First you need to create the master data, so the Action tables need to be configured first:
    T529T
    T530T
    Other related PA configuration also needs to be completed.
    Then, when you start uploading data, the sequence will be:
    0, 1, 2, 6, 7, 8, 9, 207, 208, 209, 210, 21, 171, 167, 168, 169, 170, 2006, 2012, 795
    The benefits features (BAREA, BENGR, BSTAT), along with all other benefits-related configuration, also need to be set up prior to uploading benefits information.
    Some other important features like LGMST, TARIF, ABKRS, SCHKZ, etc. also need to be configured prior to uploading the employee master data.
    Hope this will be of help
    Regards,
    Guds

  • Saving zip code data with PHP - best practices

    I have built my client an application that analyzes uploaded zip codes for matches with a standard set of zips. These uploaded zips can be one at a time, or a copy/paste from an XLS file (just 5-digit ZIPs).
    They are now asking me to save these uploaded zips, and I am wondering what would be the best way to do that. My two obvious choices are:
    1. Write them to an external text file with a programmatically generated name, and enter the name in the database, keyed to the user.
    2. Write the zips themselves into a glob field in the database.
    I'm inclined to the former, since I don't think there would ever need to be any further manipulation of these zip codes, but what do you think? Are there other choices I may have overlooked?
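    For illustration, a minimal sketch of what option 2 might look like in Oracle-style SQL (table and column names are hypothetical; adjust types for your database):
    -- One row per upload, keyed to the user; the pasted ZIP list goes into a text column
    CREATE TABLE uploaded_zips (
        upload_id    NUMBER PRIMARY KEY,
        user_id      NUMBER NOT NULL,
        uploaded_at  DATE DEFAULT SYSDATE,
        zip_list     CLOB              -- e.g. '90210,10001,60614'
    );
    -- Option 1 would instead store only a generated file name in place of zip_list
    INSERT INTO uploaded_zips (upload_id, user_id, zip_list)
    VALUES (1, 42, '90210,10001,60614');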
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources

    Dang - sorry. Wrong forum.
    Murray

  • Archive RETENTION Periods Configuration..

    Hi All,
    I am trying to set up archiving for my XI system and have configured it as shown below.
    I just want to know whether my configuration is correct. I am not sure whether the retention periods given are correct. If the configuration is not good, can you correct me?
    Category     Parameter                  Subparameter   Current Value   Default Value
    ARCHIVE      PERSIST_DURATION           ASYNC          1               1
    DELETION     PERSIST_DURATION           ASYNC          1               1
    DELETION     PERSIST_DURATION           HISTORY        60              30
    DELETION     PERSIST_DURATION           SYNC           1               0
    DELETION     PERSIST_DURATION_ERROR     SYNC           1               1
    Retention period
    Retention Period for Asynchronous XML message in the Database
    XML Messages Without Errors Awaiting Deletion      1
    XML Messages Without Errors Awaiting Archiving     1
    Retention Period for Synchronous XML message in the Database
    XML Messages with Errors Awaiting Deletion         1
    XML Messages Without Errors Awaiting Deletion      1
    Retention Period for History Entries in the Database
    History Entries for Deleted XML Messages          60
    This is my requirement for the archive and delete jobs:
    I want to keep my XML messages in SXMB_MONI for 60 days; XML messages older than 60 days need to be archived and deleted.
    For example, if the current date is 02/26/2008, then 12/29/2007 is 60 days back, and everything earlier needs to be deleted.
    Every day I need to run the archive job, and every day I need to delete the archived messages older than 60 days.
    1. Schedule the archiving job (SAP_BC_XMB_ARCHIVE_111): set to execute every day at 2:00 AM.
    2. Schedule the delete jobs:
       1. Delete job for XML messages (SAP_BC_XMB_DELETE_111): set to execute every day at 4:00 AM.
       2. Delete job for history entries (SAP_BC_XMB_HIST_DELETE_111): set to execute every day at 5:00 AM.
    Once I execute these jobs, they run perfectly, but all my messages get deleted and I can see only today's messages in SXMB_MONI.
    Please help me resolve this issue so that all 60 days of messages are kept in SXMB_MONI.
    A different issue: in my SXMSPMAST table I can see that some of the XML messages are marked as DEL in the field ITFACTION. How can I change this DEL to ARCH?
    Thanks,
    Jane

    Dear Jane,
    I had a look at the configurational settings and the desired behavior of the Integration Server. Most likely there is a misunderstanding of one parameter which leads to the untimely deletion that you have observed.
    You state that you set the retention period for history entries to 60 days and you want to keep the messages in technical monitor (SXMB_MONI) for 60 days, too.
    Please note that 'history entry' does not denote a complete XML message. The history entry is kind of a fingerprint of a message that is used for duplicate detection. Keeping the fingerprints does not mean keeping the messages!
    If you want to see XML messages in technical monitor for 60 days we have to distinguish between messages flagged for deletion and messages flagged for archiving:
    - deletion: the only way here is to set the retention period to 60 (DELETION PERSIST_DURATION ASYNC=60). This is the only possibility to ensure these messages are visible in the technical monitor for this period of time.
    - archiving: here it is important to know that archiving works in two steps. First, the messages are written to file. Second, the file is read (to ensure data integrity) and the XML messages are deleted. You can decouple these two steps, i.e. it is possible to run archiving for asynchronous messages after 1 day. The messages are then still available in SXMB_MONI as the delete step is pending, and this deletion can be run 59 days after archiving. However, I strongly recommend not using this option. Instead, please set the retention period for these messages to 60 days, too (ARCHIVE PERSIST_DURATION ASYNC=60). Configure the deletion step to be executed immediately after archiving (TX AOBJ). Additionally, configure the reorganization jobs to be executed daily as you have outlined.
    Regarding your second issue: unfortunately it is currently impossible to change the action to be taken on a message from 'deletion' to 'archiving' after the fact. Please be aware that OSS note 789352 applies to XI 2.0 only. I strongly advise against applying this note to any release higher than 6.20.
    To give you an outlook: SAP is working on a general tool for belated changing of the interface action. The decision was made to deliver this functionality as part of PI 7.01 EHP1 and PI 7.11 EHP1. It is not yet clear whether this tool will be downported. If it is downported, it will probably be shipped as part of the next support packages (XI 3.0 SP22 / PI 7.00 SP16 / PI 7.10 SP6).
    Best Regards,
    Harald Keimer
    XI Development Support
    SAP AG, Walldorf

  • How to load best practices data into CRM4.0 installation

    Hi,
      We have successfully installed CRM4.0 on a lab system and now would like to install the CRM best practice data into it.
      If I refer to the CRM BP help site http://help.sap.com/bp_crmv340/CRM_DE/index.htm,
    It looks like I need to install at least the following in order to run it properly:
    C73: CRM Essential Information 
    B01: CRM Generation 
    C71: CRM Connectivity 
    B09: CRM Replication 
    C10: CRM Master Data 
    B08: CRM Cross-Topic Functions
    I am not sure where to start and where to end. At a minimum I need CRM Sales to start with.
    Is there just one installation CD or several? Also, are they available in the download area of service.sap.com?
    Appreciate the response.

    Of course you need to install the Best Practices configuration, or do your own config.
    Simply installing CRM 4.0 from the distribution CD/DVD will get you a plain-vanilla CRM system with no configuration and obviously no data.  The Best Practices guide you through the process of configuring CRM, and even automate some tasks.  If you use some of the CATT processes of the Best Practices, you can even populate data in your new system (BP data, or replace the input files with your own data).
    In 12 years of SAP consulting, I have NEVER come across a situation where you simply install SAP from the distribution media and can start using it without ANY configuration.
    My advice is to work through the base configuration modules first, either by importing the BP config/data or by following the manual instructions to create the config/data yourself.  Next, look at what your usage of CRM is going to be, for example Internet Sales, Service Management, et cetera, and then install the config for this/these modules.

  • Best practices to share 4 printers on small network running Server 2008 R2 Standard (service pack 1)

    Hello, 
    I'm a new IT admin at a small company (10-12 PCs running Windows 7 or 8) which has 4 printers. I'd like to install the printers either connected to the server or as wireless printers (one is old enough to require a USB connection to a PC, with no network capability), such that every PC has access to each printer.
    Don't worry about the USB printer - I know it's not the best way to share a printer, but it's not a critical printer; I just want it available when its PC is on.
    I've read a lot about the best way to set up printers, including stuff about group policy and print servers, but I am not a network administrator, and I don't really understand any of it. I'd just like to install the drivers on the server or something, and then share them. Right now all the printers do something a little different: one is on a WSD port, two have a little "shared" icon, one has the icon but also a "network" icon... it's very confusing.
    Can anyone help me with a basic setup that I can do for each printer?
    p.s. they all have a reserved IP address.
    Thanks,
    Laura

    You may need to set up a print server... these links may be helpful.
    http://www.techiwarehouse.com/engine/9aa10a93/How-to-Share-Printer-in-Windows-Server-2008-R2
    http://blogs.technet.com/b/yongrhee/archive/2009/09/14/best-practices-on-deploying-a-microsoft-windows-server-2008-windows-server-2008-r2-print-server.aspx
    http://joeit.wordpress.com/2011/06/08/how-do-i-share-a-printer-from-ws2008-r2-to-x86-clients-or-all-printers-should-die-in-a-fire/
    Best,
    Howtodo

  • SAP Best Practice for Water Utilities on ERP2005

    Hi All,
    I want to load SAP Best Practice for Water Utilities on ERP2005. I have downloaded the package from the Marketplace, but there is NO transport file included in it; it only contains documentation. My questions are:
    1. Does anyone know where I can download the transport file, if any?
    2. Should I use the transport file from Baseline Best Practice instead?
    Thank you,
    Bomo

    Hello!
    The file should contain eCATTs with data for the Best Practices preconfigured scenarios, and transactions to install them.
    You can find some information about the preconfigured scenarios here:
    http://help.sap.com/bp_waterutilv1600/index.htm -> Business Information -> Preconfigured Scenarios
    Under the "Installation" path you can find the "Scenario Installation Guide" for Water Utilities.
    I hope this is helpful.
    Vladimir

  • SAP Best Practice for Water Utilities v 1.600

    Hi All,
    I want to install SAP Best Practice for Water Utilities v 1.600. I have downloaded the package (currently only Mat. No. 50079055 "Docu: SAP BP Water Utilities-V1.600" is available) from the Marketplace, but there is NO transport file included in it; it only contains documentation.  Should I use the transport file from Best Practice for Utilities v 1.500?
    Thank you,
    Vladimir

    Hello!
    The file should contain eCATTs with data for the Best Practices preconfigured scenarios, and transactions to install them.
    You can find some information about the preconfigured scenarios here:
    http://help.sap.com/bp_waterutilv1600/index.htm -> Business Information -> Preconfigured Scenarios
    Under the "Installation" path you can find the "Scenario Installation Guide" for Water Utilities.
    I hope this is helpful.
    Vladimir

  • Tabular Model Best Practice - use of views

    Hi,
    I've read on some sites that using views to get the model data is a best practice.
    Does this apply to fact tables only? Friendly names can be configured in the model, so views can be used to restrict data volume, but besides that, what are the other advantages?
    The model needs to know all the relationships between tables, so using a single view that joins everything together to present one big view with all the data isn't useful.
    Best regards

    Yes, I think most people would agree that it isn't helpful to "denormalise" multiple tables into a single view. The model understands the relationships between tables and queries are more efficient with the multiple smaller related tables.
    Views can be helpful in giving a thin layer of independence from the data. You might want to change data types (char() to date etc), split first/last names, trim irrelevant columns or simply isolate the model from future physical table changes.
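    For example, a minimal T-SQL sketch of such a view (the table and column names are hypothetical):
    -- Thin layer over the physical table: cast types, split names, leave out unused columns
    CREATE VIEW dbo.vModelCustomer
    AS
    SELECT
        CustomerID,
        TRY_CAST(BirthDateChar AS date)                          AS BirthDate,  -- char() to date
        LEFT(FullName, NULLIF(CHARINDEX(' ', FullName), 0) - 1)  AS FirstName,  -- split first/last name
        STUFF(FullName, 1, CHARINDEX(' ', FullName), '')         AS LastName
    FROM dbo.Customer;   -- irrelevant physical columns are simply not selected
    The model then imports dbo.vModelCustomer instead of dbo.Customer, so later physical table changes only require adjusting the view.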
    In my view, there aren't any hard and fast rules. Do what is pragmatic and cleanest.
    Hope that helps,
    Richard

  • The best practice for data archiving

    Hi
    My client has been using OnDemand for almost 2 years; there are around 2M records in the system (Activities). I just want to know the best practice for data archiving; we don't care much about data older than 6 months.

    Hi Erik,
    Archival is nothing but deletion.
    Create a backup cube in BW. Copy the data from your planning cube to the backup cube, and then delete that data region from your planning cube.
    Archival will definitely improve the performance of your templates, scripts, etc; since the system will now search from a smaller dataset.
    Hope this helps.
