Best practice for making database connections in Forms 10 apps?

Hi
To upgrade our Forms applications we are moving from version 3 to 10.
Our old system runs Forms applications and the connection to the database is based on the individual user. This means that any tables or views used require that the user has specific access granted to them. We have a bespoke system to manage this which generates scripts (GRANT statements) based on lists of tables and users and their appropriate access.
I have concerns that managing the table access for thousands of individual users in the Forms 10 environment is going to be technically difficult, especially with RADs to consider. Is it feasible to generate and frequently refresh RAD scripts to maintain the current list of users and their permissions?
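For illustration, the kind of grant-script generator described above can be sketched in a few lines. This is only a sketch of the idea, not our actual system; the privilege, table, and user names are made up:

```java
// Sketch of a grant-script generator like the bespoke system described
// above: one GRANT statement per (user, table) pair. All names are
// hypothetical examples.
import java.util.List;

public class GrantScriptGenerator {
    // Build a single GRANT statement.
    public static String grant(String privilege, String table, String user) {
        return "GRANT " + privilege + " ON " + table + " TO " + user + ";";
    }

    // Cross users with tables to produce the full script.
    public static String buildScript(List<String> users,
                                     List<String> tables,
                                     String privilege) {
        StringBuilder script = new StringBuilder();
        for (String user : users) {
            for (String table : tables) {
                script.append(grant(privilege, table, user)).append('\n');
            }
        }
        return script.toString();
    }

    public static void main(String[] args) {
        System.out.print(buildScript(List.of("JSMITH", "ABLOGGS"),
                                     List.of("ORDERS", "CUSTOMERS"),
                                     "SELECT"));
    }
}
```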
I am trying to decide if it is better to:
A) Connect with the same database user (such as "APP_USER") which has access to everything
or
B) Connect with individual usernames/passwords
Currently, the individual users' database passwords are generated weekly, and users have a means to obtain them (once signed in) rather than setting and remembering them. Some views refer to the Oracle USER function to decide what data is returned, so this functionality would need to be preserved.
Any help is greatly appreciated, especially if you can tell me if option A or B is how you connect at your site.
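One possibility that sits between options A and B, offered as a suggestion rather than a confirmed fit for Forms 10, is Oracle proxy authentication: the application connects through a single shared account (like APP_USER), but each session is opened on behalf of the real end user, so views that reference USER still see the individual name. A minimal sketch of the one-off grants this would need, with hypothetical user names:

```java
// Sketch of the per-user setup for Oracle proxy authentication.
// GRANT CONNECT THROUGH is standard Oracle syntax; the user names are
// hypothetical, and whether proxy auth fits a given Forms setup would
// need testing.
import java.util.List;

public class ProxyAuthScript {
    // Allow endUser's sessions to be opened via proxyAccount.
    public static String allowProxy(String endUser, String proxyAccount) {
        return "ALTER USER " + endUser + " GRANT CONNECT THROUGH "
                + proxyAccount + ";";
    }

    public static void main(String[] args) {
        for (String user : List.of("JSMITH", "ABLOGGS")) {
            System.out.println(allowProxy(user, "APP_USER"));
        }
    }
}
```

Once the grants exist, a proxy session can be opened with a connect string of the form app_user[jsmith]/password, so the shared password is the only one the application manages while USER still returns JSMITH.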

Thanks for the advice so far.
It would appear that connecting with individual usernames is not a fundamental error, which I was concerned about.
Will it still be necessary to create and refresh RAD scripts, or is this only an issue when using OID? We have OID here already because we have a website using Oracle Portal. The sign-on process for this connects to Active Directory for authentication.
I do not like the idea of having to schedule a refresh of the RAD scripts, perhaps three times a day, just to keep them current. I do not think the RADs are expected to change this frequently, but perhaps other forum members have experience of this?

Similar Messages

  • Best Practice for trimming content in Sharepoint Hosted Apps?

    Hey there,
    I'm developing a SharePoint 2013 app that is set to be SharePoint Hosted. I have a section within the app that I'd like to be configuration-related, so I would like to only allow certain users or roles to access this content, or even see that it exists (i.e. an Admin button, if you will). What is the best practice for accomplishing this in SharePoint 2013 apps? Thus far, I've been doing everything using jQuery and the REST API, and I'm hoping there's a standard within this that I should be using.
    Thanks in advance to anyone who can weigh in here.
    Mike

    Hi,
    According to
    this documentation, “You must configure a new name in Domain Name Services (DNS) to host the apps. To help improve security, the domain name should not be a subdomain
    of the domain that hosts the SharePoint sites. For example, if the SharePoint sites are at Contoso.com, consider ContosoApps.com instead of App.Contoso.com as the domain name”.
    More information:
    http://technet.microsoft.com/en-us/library/fp161237(v=office.15)
    For production hosting scenarios, you would still have to create a DNS routing strategy within your intranet and optionally configure your firewall.
    The link below will show how to create and configure a production environment for apps for SharePoint:
    http://technet.microsoft.com/en-us/library/fp161232(v=office.15)
    Thanks
    Patrick Liang
    Forum Support

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document for how to distribute Microsoft SQL Server databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but that has its own disadvantages, as it limits which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; this is less of an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This incurs downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally builds an SSIS package that can be saved for repeated executions). It works fine; it is not as practical as the other options, but it is the best way to distribute a database when its version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each with its own advantages and drawbacks, which allows them to align with different business requirements.
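The script-generation route mentioned above can be illustrated with a small helper that turns already-fetched rows into INSERT statements. The table and column names are invented, and the quoting is deliberately naive (real data would need proper escaping):

```java
// Illustrative helper for the "generate scripts" distribution method:
// turn a row (already fetched from the source database) into an INSERT
// statement. Names are hypothetical; string quoting is naive on purpose.
import java.util.List;

public class InsertScripter {
    public static String insert(String table, List<String> columns,
                                List<Object> values) {
        StringBuilder sql = new StringBuilder("INSERT INTO " + table + " (");
        sql.append(String.join(", ", columns)).append(") VALUES (");
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) sql.append(", ");
            Object v = values.get(i);
            // Numbers pass through unquoted; everything else is quoted.
            sql.append(v instanceof Number ? v.toString() : "'" + v + "'");
        }
        return sql.append(");").toString();
    }

    public static void main(String[] args) {
        System.out.println(insert("Devices",
                List.of("DeviceId", "Name"),
                List.of(1, "Tracker-A")));
    }
}
```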

  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administrating, and recently I ran into a situation which I'm not sure if there's some text-book scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with minimal control over them. Problems arise when we make rapid design changes to our system, which require multiple, rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server while people are still using it for development/testing/investigation/etc.
    So, is there some kind of good practice for tracking database changes (which we even write patches for), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information, but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS for example). You could for example, store all the sql scripts. Whenever a change is needed, the user could check the program out from the version control tool and make changes and check it back in. Besides sql scripts, you could also store binary files or any type of source code files in a version control system. This would at least put some things in order. In a version control system, you could associate a number or a string with all the files within a patch.

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm  (ex: XYZ\dv1adm)  as this is the system admin account used on the host server and is a login for the sql server.  I don't want to associate the database with another admin user as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie,
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems etc. easier.
    Ken
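For reference, the ownership change itself is a one-liner of T-SQL. A sketch that builds the statement (the database and login names are examples; ALTER AUTHORIZATION is the SQL Server 2005+ form, and sp_changedbowner is the older equivalent):

```java
// Sketch of the T-SQL that changes a database owner. The database and
// login names below are examples only.
public class ChangeDbOwner {
    public static String alterAuthorization(String database, String login) {
        return "ALTER AUTHORIZATION ON DATABASE::[" + database
                + "] TO [" + login + "];";
    }

    public static void main(String[] args) {
        System.out.println(alterAuthorization("SAPDB", "XYZ\\dv1adm"));
    }
}
```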

  • Best Practices for BI, ADF and Oracle Forms installations on Weblogic

    Hi, I'm researching options on upgrading to Oracle 11g Middleware. My company currently has Oracle Forms 10g running on Oracle Application Server.
    We are interested in using Oracle Forms 11g, ADF and Jdeveloper, and Business Intelligence with Oracle's Weblogic 10.3.5.
    Are there any whitepapers or documentation on best practices for installing all of these components together?
    For instance, can ADF (with JSF 2.x) be installed in the same domain as Oracle Forms 11g but use different managed servers?
    Will Business Intelligence need to be in a separate Oracle home with its own WebLogic installation? I spent a lot of time trying to get JSF upgraded to 2.x in the Business Intelligence installation and could not get it to work.
    I know it's a pretty broad question but thank you for any direction on this.

    Thanks for the reply! I read through the documents and they are very good at explaining how to install the different components individually. I still can't find much on installing them together. I hope it's not just going to be a trial-and-error thing.
    So far I've done the following successfully:
    Installed 10.3.5 weblogic
    Forms and Reports 11g on top of 10.3.5
    I've created an additional managed server for our ADF applications.
    My next step is upgrading the JSF to 2.x. I would have to stage patches 12917525 and 12979653. I'm afraid it will break the forms and reports though. Any ideas?

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed with a table for each device we sell; there are currently 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Each table currently holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've come across during my 9 months at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issues start: they pull their data from our server over the internet while 2,000 units are sending data to our server, and they keep getting read timeouts because SQL Server prioritizes the inserts and holds all the SELECT commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views rather than hard-coded SELECT statements. What difference will this make in terms of performance?
    Thanks in advance.
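On the snapshot question: assuming the server really is SQL Server 2005 or later, enabling READ_COMMITTED_SNAPSHOT makes ordinary SELECT statements read row versions automatically, with no changes to the queries themselves. A sketch of the statement involved (the database name is an example):

```java
// Sketch of the setting that answers "does SQL Server automatically
// select from the snapshots?": once READ_COMMITTED_SNAPSHOT is ON
// (SQL Server 2005+), plain SELECTs under the default isolation level
// read row versions instead of blocking behind writers. Database name
// is an example.
public class SnapshotSetup {
    public static String enableRcsi(String database) {
        return "ALTER DATABASE [" + database
                + "] SET READ_COMMITTED_SNAPSHOT ON;";
    }

    public static void main(String[] args) {
        System.out.println(enableRcsi("TrackingDb"));
    }
}
```

Note that turning this on needs exclusive access to the database for a moment, and it adds tempdb version-store overhead, so it is worth testing off-hours first.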

  • Best practice for making changes to Oracle apps business views and BAs/folders

    HI
    The Oracle BI solution comes with pre-defined business views (database views) and Business Areas and folders. If we want to customize those database views or BAs and folders, what is the best practice to avoid losing the changes during any upgrade?
    For example, the out-of-box Order Management BA that we use heavily needs some additional fields added to the Order Header and Order Lines folders, and we also want to add some custom folders to this BA.
    If we make the changes to the database views behind this BA, would they be lost during an upgrade, or do we have to copy (duplicate) those views, update them, and create a custom BA and folders against those views?
    Thanks

    Hi,
    If you are adding new folders then just add them to the Oracle Business Area. The business area is just a collection of folders. If the business area was changed in an upgrade the new folder would not be deleted.
    If you want to add fields to the existing folders/views then you have two options. Add the field to the defining base view (these are the views beginning OEBV and OEFV) and then regenerate the business views. This may be overwritten if the view is upgraded, but that is unlikely.
    Alternatively, copy the view to create a new version and then map the old folder to the new view and refresh. You may need to re-map the folder if the folder is upgraded, but at least you have a single folder used by both Oracle and custom reports.
    Rod West

  • Symantec antivirus Best practice for oracle database on windows server 2003

    Hi all,
    I have an Oracle 10.2.0.4 database server on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I have event ID 6008. What may be the cause of it?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin
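As a hedged illustration of the exclusion advice above, a filter over typical Oracle file extensions might look like the following. The extension list is indicative, not exhaustive; real exclusions are configured in the antivirus console, and .log here would also match non-Oracle log files:

```java
// Sketch of an exclusion check for antivirus scanning on an Oracle
// database server: skip files with typical Oracle data/control/redo
// extensions. The extension list is indicative only; actual exclusions
// belong in the antivirus product's configuration.
import java.util.Set;

public class OracleFileFilter {
    private static final Set<String> EXCLUDED =
            Set.of(".dbf", ".ctl", ".log", ".arc");

    public static boolean shouldExcludeFromScan(String fileName) {
        int dot = fileName.lastIndexOf('.');
        if (dot < 0) return false;
        return EXCLUDED.contains(fileName.substring(dot).toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(shouldExcludeFromScan("SYSTEM01.DBF"));
    }
}
```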

  • Best practice for backing up and restoring forms

    Greetings. I would like to pose a question to the forum: how many of you, if at all, "back up" your Interactive Forms so that in the event one or multiple become corrupt, you have a method to recover them?
    We recently experienced such a scenario, in which the forms we developed in ABAP and access through our portal had become corrupt. When we attempted to access a form via the portal, we would get a SOAP error. SOAP errors, I understand, can happen for various reasons, but prior to the incident our forms were working just fine. We attempted to retrace our steps to identify the cause of the problem but found we could not replicate the issue. Through analysis of the forms, we identified the corruption in the master page and found that if we copied the sections of the form that were not corrupt to a new master page, the form would work properly again. Our thought is that this cannot be the only method to recover from an incident like this, and we would like to know if others have experienced this or have practices in place that would minimize the impact.
    We asked SAP support whether there was a method to back up Interactive Forms, and the simple answer we received was to download the XML file from transaction SFP. Can others relate to this strategy as a proper "backup" method, or do other best practices exist that would be more ideal?
    Thank you in advance!

    I had many difficulties with this kind of error around two years ago. Of course, it got better with every patch level, and with LCD 8, LCD 8.1, etc. I don't have any problems with the newest solution, but I remember the feeling; I once had to throw away three days of work because of these "errors". But the solution is easy:
    - use versioning, as you do in your ABAP development; that works fine
    - if you would like an extra backup, copy the form into a backup with a name like Z_YOURNAME_BCK_1
    - if you still feel that is not enough, you can always set your forms to be dynamic, which makes the ADS web service change the way the forms are constructed (the form will then have an internal structure; you can check this out if you open a dynamic form in LCD outside SAP, where it works like a charm, whereas if you open a plain print form it asks for an import, which has nothing in common with the earlier template). You can then back up every form you generate with these settings, and if a problem appears, you can always open the backup outside SAP in LCD, copy the hierarchy, and paste it into your SAP window with LCD development.
    Hope that helps.
    Regards Otto

  • What is the best practice to get database connection?

    What are the best practices to follow when making a database connection?

    The driver can be loaded explicitly with the Class.forName method; for example, the following statement loads Sun's JDBC-ODBC bridge driver:
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    Then use the DriverManager class's getConnection method to establish a connection to the data source:
    Connection con = DriverManager.getConnection(url);
    This statement connects to the data source specified by url. If the connection succeeds, it returns a Connection object, con; all subsequent operations on this data source are based on con.
    Executing query statements: this answer covers queries based on the Statement object. To execute a SQL query you first need a Statement object. The following statement creates a Statement object named guo:
    Statement guo = con.createStatement();
    On the Statement object, you can use the executeQuery method to run a query. Its parameter is a String object (a SQL SELECT statement), and its return value is a ResultSet object:
    ResultSet result = guo.executeQuery("SELECT * FROM A");
    This statement returns all rows of table A in result.
    Only after processing the ResultSet object can the query results be displayed to the user. The ResultSet object wraps the table returned by the query, which contains all the query results. The rows of a ResultSet must be processed sequentially, while the columns within each row can be processed in any order. The getXXX methods of ResultSet convert the SQL data types in the result set to Java data types.

  • Any best practices for proxy databases

    Dear all,
    Is there any caveat or best practice when using a proxy database?
    Is it secure and wise to create one on the master device? Can it grow? Or is it similar to an MSSQL linked server?
    Thank You for your patience,
    Arthur

    Hello,
    This statement applies to proxy databases as well:
    Note: For recovery purposes, Sybase recommends that you do not create other system or user databases or user objects on the master device.
    Adaptive Server Enterprise 15.7 ESD #2 > Configuration Guide for Windows > Adaptive Server Devices and System Databases
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38421.1572/doc/html/san1335472527967.html
    The Component Integration Services Users Guide is a very good start. In some respects a proxy database is like a linked server, but the options are many, and it all depends on your use case and remote source.
    Niclas

  • Best practices for making the end result web help printable

    Hi all, using TCS3 Win 7 64 bit.  All patched and up to date.
    I was wondering what the best practices are for the following scenario:
    I am authoring in Frame, link by reference into RH.
    I use Frame to generate PDFs and RH to generate webhelp.
    I have tons of conditional text which ultimately produce four separate versions of PDFs as well as online help - I handle these codes in FM and pull them into RH.
    I use a css on all pages of my RH to make it 'look' right.
    We now need to add the ability for end users to print the webhelp, beyond just CTRL+P, because a) that cuts off the larger images and b) it doesn't show the header, footer, logo, date, etc. (stuff that is in the master pages in FM).
    My thought is doing the following:
    Adding four sentences (one for each condition) in the FM book on the first page. Each one would be coded for audience A, B, C, or D (each of which require separate PDFs) as well as coded with ONLINE so that they don't show up in my printed PDFs that I generate out of Frame. Once the PDFs are generated, I would add a hyperlink in RH (manually) to each sentence and link the associated PDF (this seems to add the PDF file to the baggage files in RH). Then when I generate my RH webhelp, it would show the link, with the PDF, correctly based on the condition of the user looking at the help.
    My questions are as follows:
    1- This seems more complicated than it needs to be. Is it?
    2- I would have to manually update every single hyperlink each time I update my FM book, because I am single sourcing out of Frame and I am unable (as far as I can tell) to link a PDF within the frame doc. I update the entire book (over 1500 pages) once every 6 weeks so while this wouldn't be a common occurrence it will happen regularly, and it would be manual (as far as I can tell)?
    3- Eventually, I would have countless PDFs inside RH. I assume this will eventually impact performance. So this also doesn't seem ideal?
    If anyone has thoughts/suggestions on a simpler way or better way to do this, I'd certainly appreciate it. I have watched the Adobe TV tutorial on adding a master page but that seems to remove the ability to use a css across all my topics and it also requires the manual addition of a manual hyperlink to the PDF file, so that is what I am proposing above, anyway (not sure the benefit, therefore).
    Thanks in advance,
    Adriana

    Anything other than CTRL + P is going to create a lot of work so perhaps I can comment on what you see as drawbacks to that.
    a)that cuts off the larger images and b)it doesn't show header, footer,
    logo, date, etc. (stuff that is in the master pages in FM).
    Larger images.
    I simply make a point of keeping my image sizes down to a size that works. It's not a problem for me but that doesn't mean it will work for you. Here all I am doing is suggesting you review how big a problem that would be.
    Master Page Details
    I have to preface this with the statement that I don't work with FM. The details you refer to print when they are in RoboHelp master pages. Perhaps one of the FM users here can comment on how to get FM master pages to come through.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B.  However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer both Primary and Standby from the same OEM CC console.  I am trying to determine the best practice.  I am not sure whether administering a switchover from Primary to Standby in Cloud Control requires that both targets be monitored in the same environment.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not that you need it to monitor your primary and standby, which is what I meant by the comment.
    The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You will also have the option of performing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states you need to have all targets on one OMS, but that is the best method, given the reason for having OMS: it is a tool to have all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into separate OMS consoles to administer the targets.

  • Best Practice for making material obsolete

    Hi Experts,
    Could anyone please advise me on the best practice for making a material obsolete? If there is no single best practice, what are the different ways it can be made obsolete?
    Please advise me on this.
    Thanking you in advance
    Regards,
    Gopalakrishnan.S

    Archiving is a process rather than a transaction. You have to take care that you fulfill local laws and retain your data for x years for internal and external auditors.
    Archiving requires customizing: you have to tell SAP where your archive is, what name it has, how big it can be, and when a document can be archived (the definition of the retention period).
    You have to talk with your business about how long they want the data kept in the production system (this depends on how long they need to run reports on this data).
    You have to define who can access the archived data and how this data is accessed.
    Who runs archiving?
    Tough question. The data owner is the business, so they should run it.
    But archiving also fragments your tables,
    so many companies have a person who is full-time responsible for archiving and does it for all business units and organisations.
    Archiving is not something that can be done "on the fly"; you will encounter all kinds of errors, and processes that are not designed well will certainly create a lot of problems in archiving and need to be reworked.
    I had an archiving project with 50 days of project time and could develop guidelines for archiving about 30 different objects. There are 100 more objects possible, and we have still not archived master data like materials, vendors and customers because of the many dependencies and objects that need to be archived prior to them.
