Archive Infostructure Database consumption

Hi all,
Does anyone have an idea of the database space consumption of archive info structures?
We're on the verge of implementing AS in relation to data extraction for BW, but we also want to consider the disk space that will be eaten up by the transparent tables created for the info structures.
Best regards to all

The archiving object MM_EKPO is not found in the standard archiving object lists (check in SARA).
If you don't find a standard info structure for your archiving object, you can create your own custom info structure using transaction code SARJ.
But before creating the custom info structure, it is better to analyse the write program of the archiving object to get an idea of the table relationships or mappings that can be used in creating the custom field catalog.
You can follow the below steps to create your own info structure:
1. Go to transaction code SARJ.
2. Choose Environment -> Field Catalogs.
3. Click New Entries and enter the inputs for the columns needed to create the field catalog.
4. Maintain the table relationships using "Other Source Fields".
Once you have created the field catalog, create the info structure, also in transaction SARJ:
enter the info structure name, click Create, then enter the archiving object name and the field catalog.
Then activate the info structure using the Activate button.
Run the write and delete programs to archive the data.
Go to transaction SARE and enter your archiving object; in the drop-down box you will find your newly created info structure.
This should solve your issue.

Similar Messages

  • Steps to archive SAP Database

    Dear Sir,
Please suggest how many steps there are to archive the SAP database and how to execute those steps sequentially.
I am confused about which step I have to execute first, which one after that, and so on.
Please provide me some documents or links through which I can go and successfully execute this task.

    Hello Mushtaque,
    You can use the link
    Regards, Jatin

  • Specifying separate UNC path for Exchange Archive Mailbox Database

    Hi All,
In Exchange Server 2013 SP1, is it possible to specify a UNC path to a separate hardware appliance?
In this case I'd like to create the Archive mailbox database on a CIFS share hosted by EMC Data Domain to perform hardware-based data deduplication & compression.
Must this path also be accessible on the DAG passive node in the DR data center?
    /* Server Support Specialist */

    Hi ,
Most scenarios are not supported, though there are still some scenarios where you can use shares.
"If access to a disk resource requires that a share be mapped, or if the disk resource appears
as a remote server by means of a Universal Naming Convention (UNC) path
(for example, \\servername\sharename) on the network, the disk storage system is not supported
as a location for Exchange Server databases."
This is also true for the Archive DB.
Please note that UNC directly with Exchange 2013 is possible with limited support only, and no
version of Exchange allows you to store databases on a UNC path (except the case where you map
SMB 3.0 shares to a VM).
(Please also note the recommendation not to use deduplication by the storage.)
Don't try to do that, because even if you configure it you will end up in an unsupported situation.
I strongly suggest you avoid it.
In addition, there is a webcast from Scott Schnoll about virtualization and file shares that makes it a little clearer.
While the session is about virtualization, at 13:00 they start a discussion about storage.
The reasons not to use UNC for your databases are discussed there. (Don't worry, they start the discussion with NFS.)
The presentation shown at 21:43 is what you need to think about when planning to use UNC resources.
Storing an Exchange DB on UNC is not supported.
Hi Martin,
So how are we going to deduplicate the data? Any suggestions, perhaps?
My understanding is that Exchange Server 2010 onwards doesn't have data deduplication.
How about mapping the UNC path as a mapped network drive?
    /* Server Support Specialist */

  • How to archive citadel database. we have DSC 6.0.2

We have LabVIEW 6i & DSC 6.0.2. We developed an application using tags. We want to log our data to a Citadel database, but sometimes the data is lost for some reason (like a system hang or restart). So we want to take a backup of our Citadel database. Is it possible?

    Unfortunately, 6.0.2 does not have the capability. This was one of the new 6.1 features. Check Table 3 on page 7 in this manual. The Archive Database is what you need for programmatic archive. In 6.1 there is also a new utility called Historical Data Viewer, that allows you to do it manually.
    I remember seeing some online document about archiving Citadel database. ... oh, ya, here it is (I searched for "archiving citadel" and it was the first hit):
    Have a good weekend.

  • Need to Archive the Database. . .

    Hi There,
    I need to follow the following steps -
1> I need to create a replica of my database (say, for example, if I have 5 tables in my original database, then I need to copy all of these 5 tables along with the data and all the constraints to a newer database while keeping the table and column names the same).
2> Now if a user is updating the data in the original database (i.e. updating some rows or inserting some new rows into a table), then I need to move these changes to the new database once a day. But here the catch is that if the user is deleting some rows from the original database, I have to keep those records as they are, without deleting them from the new database.
Now I need to find a way to do this (whether I have to create some scripts or a Java job or do something else?).
    Please Help :-)

Do you have a primary key or unique index on each of the 5 tables?
If so, it's possible to catch updates and inserts (or even deletes) with triggers
on these 5 tables, in order to store in a trace table the primary keys/unique indexes
of the rows that were inserted/updated (with a status: I for insert, U for update, for example).
Then make a procedure that reads the "trace table" of the day and:
inserts the new data into the archive database when status = I;
updates the data in the archive database when status = U;
deletes from the trace table all the rows processed.
And launch it with dbms_job once per night.
You have to take care with the order of processing of the 5 tables because of the constraints between these tables.
If you do not have primary keys or unique indexes, I recommend you create one
(using a sequence number, for example), otherwise I think your procedure will be quite complex.
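The trigger-and-trace-table idea above can be sketched end to end. This is a minimal, hypothetical illustration using Python with SQLite instead of Oracle PL/SQL and dbms_job; the table names (`orders`, `orders_arch`, `trace`) and columns are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table, its archive copy, and the "trace table" from the answer.
cur.executescript("""
CREATE TABLE orders      (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE orders_arch (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE trace       (pk INTEGER, status TEXT);  -- 'I' = insert, 'U' = update

-- Triggers record the primary key and a status flag, as described above.
CREATE TRIGGER trg_orders_ins AFTER INSERT ON orders
BEGIN INSERT INTO trace VALUES (NEW.id, 'I'); END;

CREATE TRIGGER trg_orders_upd AFTER UPDATE ON orders
BEGIN INSERT INTO trace VALUES (NEW.id, 'U'); END;
""")

# Daytime activity on the transactional database.
cur.execute("INSERT INTO orders VALUES (1, 10.0)")
cur.execute("INSERT INTO orders VALUES (2, 20.0)")
cur.execute("UPDATE orders SET amount = 25.0 WHERE id = 2")

def nightly_apply(cur):
    """Replay the trace table against the archive, then clear it."""
    for pk, status in cur.execute("SELECT pk, status FROM trace").fetchall():
        row = cur.execute("SELECT id, amount FROM orders WHERE id = ?",
                          (pk,)).fetchone()
        if row is None:          # source row already gone; nothing to copy
            continue
        if status == "I":
            cur.execute("INSERT OR REPLACE INTO orders_arch VALUES (?, ?)", row)
        elif status == "U":
            cur.execute("UPDATE orders_arch SET amount = ? WHERE id = ?",
                        (row[1], row[0]))
    cur.execute("DELETE FROM trace")  # all processed rows are removed

nightly_apply(cur)
# Deletes on the source are never propagated, so archived rows survive.
cur.execute("DELETE FROM orders WHERE id = 1")
print(cur.execute("SELECT id, amount FROM orders_arch ORDER BY id").fetchall())
```

In a real Oracle setup the `nightly_apply` step would be a stored procedure scheduled via dbms_job, and, as the answer notes, the tables would need to be processed in an order that respects the foreign-key constraints between them.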

  • Dedicated Archive mailbox databases generate excessive whitespace

Our company has several mailbox databases that are dedicated to archive mailboxes.  I just joined the company a month ago and found one database that was about 500 GB with about 300 GB of whitespace!  So, I created a new DB, moved all the archive
mailboxes, and removed the original DB.  At the time there was minimal whitespace.  Today, the DB is in much the same condition.  What makes this so odd is that there are fewer than 30 mailboxes on the database!  Further, it seems
as though these DBs are also generating far too many transaction logs, causing management to become concerned about drive space and DBs dismounting.   Bottom line: how can I determine what is causing the excess whitespace to be generated?

    I recommend you refer to the following article :
    Exchange 2010 whitespace reclamation
    In addition, as Ed mentioned, you can monitor the available new mailbox space with the following command:
    get-mailboxdatabase  -status | ft Name, AvailableNewMailboxSpace
    Hope this helps!
    Niko Cheng
    TechNet Community Support

  • Emptying archive infostructure: report AS_AINDDEL_DATE

    Dear all,
    In order to delete the infostructures from object  MM_MATBEL, I use the report AS_AINDDEL_DATE.
    The program is run in background.
However, the program runs for 100 hours for one year of data, and 10 hours for one archiving run (one date).
Archiving started in 2002, and this is the first time we are trying to delete infostructures.
    Did you already have performance problems while emptying infostructures ?
    Best regards,

I am not sure why you want to empty the info structures, but if I wanted to empty them I would use transaction SARI --> Status --> Status per info structure --> Delete info structures. This is much faster for emptying the info structures.
The report AS_AINDDEL_DATE is slow because it has to find the specific date and then delete the archived information from the structures. Use this report only if you have a requirement to delete the info structures for a particular date.

  • To find top tables in size to archive a database

    Dear Sir,
    I have to archive top 20 tables of our production database, for this can anybody please provide me some documents/links so i can accomplish my task.

To see the table sizes,
just go to DB02 -> Detailed Analysis -> Choose all objects -> Order by size.
Now for archiving... read the documentation available in SAP Marketplace. Archiving is not as simple as throwing your data into a separate repository; you need to analyse why, how much data, retention periods, benefits, etc.
    Also, read this overview,

  • Archive citadel database in MAX hangs

I am trying to archive a 4.13 GB uncompressed DSC database on LabVIEW 2013 SP1 (although the database was started with LabVIEW 2013).  I have tried twice and it gets stuck (the new location folder size is at 151 MB both times).  The progress bar says "Archiving database=20.5%" and "Copying Data=83.0%".  I have chosen the option "Destroy source data after it has been archived (local computer only)".  The status says "copying data", and no timestamps of files in the new folder have changed after about 9 hours of waiting.  If I cancel, MAX cannot display any Citadel databases.  This is fixed if I reboot.
    Any ideas?

    Hello barkeram,
    Where are you storing the archive (locally or remote)?
    Are you following the steps in one of the following documents?
    Can you try to navigate to the file path of the database and duplicate the file? Afterward, manually add the new database to MAX and try to Archive the new file.
    Thomas C.
    Applications Engineer
    National Instruments

  • Slow opening archived citadel database on first read

I have archived a Citadel database using the archive VI. I then copied this database to another computer. When I try to read data from this archived database, it takes a long time to return the first set of data (5-10 minutes). New attempts to read data execute much more quickly than the first read. Is there a way to improve this opening of the Citadel database? Right now I am using the OPEN DB sub-VI that was sent to me to resolve an error accessing Citadel. The database is not set up as the logging location in the tag engine. This OPEN DB VI is locked, so I can't do any kind of probing to see what is happening. Are you not supposed to work with an archived database?

    I was unable to reproduce your problem. I archived a DB on my machine and moved it to another Win2k machine. The program works fine, whether or not I use the open and close DB VIs. How much data are you trying to retrieve? You also may have a corrupted database, and Citadel is trying to fix its corruptions when you first access the data. Have you tried to access the original (pre-archive) database with the VI? What is the result?
    Michael Shasteen
    Applications Engineering
    National Instruments

  • Archiving a database

    We have the problem of a database growing larger every day which is causing it to slow down.  We want to keep old data though.  Would a possible solution be to rename the database something like DatabaseName2013 and create a new database with the
    old DatabaseName?  The archive database would only be used for reports.  Just wondering if this idea would be easier to implement than Transactional Replication or Partitioning the table by year.
    Fred Schmid

I have seen some applications that have a second database just for archiving; periodically they send old records to the archive database
and then purge them from the main transactional database. However, for some reports you will find you need data from one of the databases, and for other reports you need data from both.
    My first suggestion is to try partitioning (if you have Enterprise Edition).
    Another suggestion is to create a Data Warehouse and an Operational Data Store and send old data and new data (daily) to those entities
    for reporting purposes. Keep one or two years of data only on the transactional database.
    Hope this helps.
    Alberto Morillo
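The move-then-purge pattern described above can be sketched in a few lines. This is a hypothetical illustration using Python with two SQLite connections standing in for the transactional and archive databases; the `sales` table, the `sold_on` column, and the one-year cutoff are assumptions for the example:

```python
import sqlite3

# Transactional ("live") database and a separate archive database.
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, sold_on TEXT, amount REAL)")
live.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "2012-03-01", 100.0),   # old: belongs in the archive
    (2, "2013-11-15", 200.0),   # recent: stays in the transactional DB
])

archive = sqlite3.connect(":memory:")
archive.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, sold_on TEXT, amount REAL)")

def archive_old_rows(live, archive, cutoff):
    """Copy rows older than `cutoff` into the archive DB, then purge them."""
    old = live.execute(
        "SELECT id, sold_on, amount FROM sales WHERE sold_on < ?",
        (cutoff,)).fetchall()
    archive.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", old)
    live.execute("DELETE FROM sales WHERE sold_on < ?", (cutoff,))
    live.commit()
    archive.commit()

# Keep roughly one year of data on the transactional database, as suggested above.
archive_old_rows(live, archive, cutoff="2013-01-01")
print(live.execute("SELECT id FROM sales").fetchall())     # [(2,)]
print(archive.execute("SELECT id FROM sales").fetchall())  # [(1,)]
```

Reports that span the cutoff would then need to query both databases (or a view that unions them), which is exactly the drawback the answer points out, and why partitioning is suggested first where the edition supports it.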

  • How archive infostructures are filled when the delete program runs(sd_cond)

I could not find a suitable forum for this, hence posting it here. I need to know how the ZARIX tables get filled in SAP Archiving. As far as I know, they are filled automatically when the delete job runs, but I could not find any code in the SD_COND_ARCH_DELETE program.
My issue is that my delete program did not fill one of the infostructures, which was active, and this infostructure corresponds to the ZARIX table. On the other hand, this infostructure did get filled manually (verified in ZARIX also).
I am wondering how this could have happened. Can this be because multiple delete jobs are running, creating various sessions? This is mainly with reference to the SD_COND archive object.

There is a separate program to fill infostructures: when you fill them manually (transaction SARJ, Environment -> Fill Structure) a new job is triggered. Have a look at the job that ran when you filled your infostructure manually and analyse the program that ran.
There should not be any problem filling the infostructures due to several archiving sessions running at the same time. If you face the same issue again, see whether any of the archiving jobs (write, store, delete) failed for any reason. If all the jobs finished successfully but an infostructure didn't get filled, look for any OSS note related to this issue. If you can't find anything, raise an SAP message, as the infostructures have to get filled automatically; otherwise there will be no access to the archived data.
    Hope this helps.

  • Broken archive Oracle9i Database Release 2 Enterprise

I'm trying to install Oracle9i Release 2, so I downloaded all four files. Two of them are broken: server_9201_hpunix64_disk2.cpio.gz (648,282,885 bytes) and server_9201_hpunix64_disk3.cpio.gz (538,652,554 bytes).
    Is it kind of early adoption feature?

    Yes. Oracle's current platform strategy is:
    Oracle Platform Roadmap:
    Organizations are deploying Oracle's database, applications, application servers and development tools on a short list of systems that dominate the market. Beginning with the Oracle9i Database, Oracle9i Application Server, and E-Business Suite (Oracle Applications) 11i, the following operating systems will be supported:
    HP Alpha OpenVMS(1)
    HP Tru64 UNIX(2)
IBM Linux/390 & z/Linux(1)
IBM OS/390 & z/OS(1)
    IBM RS/6000 AIX(2)
    Intel Based Server LINUX
    Microsoft Windows 2000
    Microsoft Windows NT for Intel
    Solaris Operating System (SPARC)
    (1) - For Oracle9i Database ONLY
    (2) - 64-bit ONLY
    Several Microsoft Windows clients are supported, including Microsoft Windows XP.
    Oracle will continue to support the Oracle8i Database and current versions of all other products throughout their support life cycle on the operating systems where they were released, including 32-bit versions of Oracle running on HP-UX and IBM RS/6000 AIX systems. Additionally, Oracle will continue to provide 32-bit client library support through the life of the Oracle9i Database. In keeping with our normal product support life cycle process, customers will be informed of status changes to individual product versions currently supported.

  • What is the best backup plan for Archive Databases in Exchange 2013?

    We have Exchange 2013 with Hybrid setup with O365.
We have on-premise Exchange 2013 servers with 3 copies of the primary database & a single copy of the archival DBs.
Now we have to frame a backup policy with Symantec Backup Exec, which has to back up our primary & archival DBs.
In Exchange 2007, before the migration to 2013, our policy for the DBs was weekly full backups & monthly full backups.
Please suggest the best possible backup strategy we can follow with the 2013 DBs,
especially for the archiving DBs.
Our archiving policy has 3 categories: any emails older than 6 months OR 1 year OR 2 years should go to the archive mailbox.
    Keeping this in mind how to design the backup policy ? 
    Manju Gowda

    Hi Manju,
You will not find best practices different from the common backup guidelines, as there is no archive-DB-specific behaviour. Your users may move items to their archive at any time, and your retention policies may move items that matched the policies
at any time. The result is frequently changing content in both mailbox and archive mailbox databases, so you need to back up both the same way. You may also keep archives together with the mailboxes in the mailbox DB.
Please keep in mind that backup usually means data availability in case of system failure. So you may consider doing a less frequent backup of your archive DB, in dependency on the "keep deleted items" (/mailboxes) setting on your mailbox database.
    keep deleted items: 30 days
    backup of archive db: every 14 days
    restore procedure:
    * restore archive DB content
    * add difference from recover deleted items (or Backup Exec single item recovery) for the missing 14 days.
    So it depends more on your process than on a backup principle.

  • Creating a new Infostructure in archiving for Object PM_ORDER

    Hi Guru's,
The SAP standard infostructures used when archiving PM orders do not include 'Order Type'. I have created a new Z infostructure and included the Order Type in the field catalog and the new infostructure, but it doesn't get populated at the point of archiving. Does anyone know why not, or how I can fix this?

    Hi Ben,
    Has your custom archive infostructure been activated?  Also, the infostructure gets populated during the delete phase of the archiving process, not the write phase.  Have you tried manually filling the structure?  If so, what does the joblog show?  Does the tablespace the infostructure is in have enough space?
    These are some things to check.
    Hope this helps.
    Best Regards,
    Karin Tillotson
