Archive Infostructure Database consumption

Hi all,
Does anyone have any idea on the database space consumption of archive info structures?
We're on the verge of implementing the Archive Information System (AS) in relation to data extraction for BW, but we would also like to consider the disk space that will be eaten up by the transparent tables created for the info structures.
Best regards to all

Hi,
Archiving object MM_EKPO is not found in the standard archiving object list (check in SARA).
If you don't find a standard info structure for your archiving object, you can create your own custom info structure using transaction SARJ.
But before creating a custom info structure, it is better to analyse the write program of the archiving object to get an idea of the table relationships or mapping that can be used when creating the custom field catalog.
You can follow the steps below to create your own info structure:
1. Go to transaction SARJ.
2. Choose Environment -> Field Catalogs.
3. Click New Entries and enter the inputs for the columns needed to create the field catalog.
4. Maintain the table relationships using "Other Source Fields".
Once you have created the field catalog, create the info structure, also in transaction SARJ:
enter the info structure name, click Create, and enter the archiving object name and the field catalog.
Then activate the info structure using the Activate button.
Run the write and delete programs to archive the data.
Go to transaction SARE and enter your archiving object; in the dropdown box you will find your newly created info structure.
This should solve your issue.
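Regarding your original question about disk space: the info structures store their data in generated transparent tables (typically named ZARIX*), so after a few archiving sessions you can measure their growth directly at the database level. A minimal sketch, assuming an Oracle database (DBA_SEGMENTS is Oracle-specific; adapt for other DBs):

  -- List the generated infostructure tables and their current size in MB.
  SELECT segment_name,
         ROUND(bytes / 1024 / 1024) AS size_mb
    FROM dba_segments
   WHERE segment_name LIKE 'ZARIX%'
     AND segment_type = 'TABLE'
   ORDER BY bytes DESC;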
Thanks,
Shamim

Similar Messages

  • Steps to archive SAP Database

    Dear Sir,
    Please suggest how many steps are involved in archiving an SAP database and how to execute those steps sequentially.
    I am not sure which step I have to execute first, which one after that, and so on.
    Please provide me some documents or links that I can go through so I can successfully execute this task.
    Regards,
    Mushtaq

    Hello Mushtaq,
    You can use the link
    http://help.sap.com/saphelp_nw70/helpdata/EN/8d/3e4e74462a11d189000000e8323d3a/frameset.htm
    Regards, Jatin

  • Specifying separate UNC path for Exchange Archive Mailbox Database

    Hi All,
    In Exchange Server 2013 SP1, is it possible to specify a UNC path to a separate hardware appliance?
    In this case I'd like to create the archive mailbox database on a CIFS share hosted by an EMC Data Domain to perform hardware-based data deduplication & compression.
    Must this path also be accessible on the DAG passive node in the DR data center?
    /* Server Support Specialist */

    Hi ,
    Most scenarios are not supported, though there are still some scenarios where you can use shares:
    "If access to a disk resource requires that a share be mapped, or if the disk resource appears
    as a remote server by means of a Universal Naming Convention (UNC) path
    (for example, \\servername\sharename) on the network, the disk storage system is not supported
    as a location for Exchange Server databases."
    This is also true for the archive DB.
    Please note that using UNC directly with Exchange 2013 has only limited support, and no
    version of Exchange allows you to store databases on a UNC path (except the case where you map
    SMB 3.0 shares to a VM):
    http://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx#Best
    (Please also note the recommendation not to use deduplication by the storage.)
    Don't try to do that; even if you manage to configure it, you will end up in an unsupported situation.
    I strongly suggest you avoid it.
    In addition, there is a webcast from Scott Schnoll about virtualization and file shares that makes it a little clearer:
    https://channel9.msdn.com/Events/TechDays/Techdays-2014-the-Netherlands/Exchange-Server-2013-Virtualization-Best-Practices
    While the session is about virtualization, at 13:00 they start a discussion about storage.
    The reasons not to use UNC for your databases are discussed there. (Don't worry, they start the discussion with NFS.)
    The presentation shown at 21:43 is what you need to think about when planning to use UNC resources.
    Storing an Exchange DB on UNC is not supported.
    Regards,
    Martin
    Hi Martin,
    So how are we going to deduplicate the data? Any suggestions perhaps?
    My understanding is that Exchange Server 2010 onwards doesn't have data deduplication.
    How about mapping the UNC path as a mapped network drive?
    /* Server Support Specialist */

  • How to archive citadel database. we have DSC 6.0.2

    We have LabVIEW 6i & DSC 6.0.2. We developed an application using tags. We want to log our data to the Citadel database, but sometimes the data is lost for some reason (like a system hang or restart), so we want to take a backup of our Citadel database. How is this possible?

    Pilla,
    Unfortunately, 6.0.2 does not have this capability; it was one of the new 6.1 features. Check Table 3 on page 7 of this manual. The Archive Database feature is what you need for programmatic archiving. In 6.1 there is also a new utility called Historical Data Viewer that allows you to do it manually.
    http://www.ni.com/pdf/manuals/322955b.pdf
    I remember seeing some online document about archiving Citadel database. ... oh, ya, here it is (I searched ni.com for "archiving citadel" and it was the first hit):
    http://zone.ni.com/devzone/conceptd.nsf/webmain/2F24997EAD7C53A686256B6E00686D64?opendocument
    Have a good weekend.
    Dr.Tag

  • Need to Archive the Database. . .

    Hi There,
    I need to follow the following steps -
    1> I need to create a replica of my database (say, for example, if I have 5 tables in my original database, then I need to copy all 5 tables, along with the data and all the constraints, to a new database while keeping the table and column names the same).
    2> Now if a user is updating the data in the original database (i.e. updating some rows or inserting new rows into a table), then I need to move these changes to the new database once a day. But here the catch is that if the user is deleting some rows from the original database, then I have to keep those records as they are, without deleting them from the new database.
    Now I need to find out how this can be done (whether I have to create some scripts, a Java job, or something else?).
    Please Help :-)

    Hi,
    Do you have a primary key or unique index on each of the 5 tables?
    If so, it's possible to catch updates and inserts (or even deletes) with triggers
    on these 5 tables, storing in a trace table the primary key/unique index values
    of the rows that were inserted or updated (with a status: I for Insert, U for Update, for example).
    Then write a procedure that reads the day's rows from the trace table and:
    inserts the new data into the archive database when status = I;
    updates the data in the archive database when status = U;
    deletes from the trace table all the rows processed.
    And launch it with dbms_job once a night.
    You have to take care with the processing order of the 5 tables because of the constraints between them.
    If you don't have primary keys or unique indexes, I recommend you create one
    (using a sequence number, for example); otherwise I think your procedure will be quite complex.
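    A minimal sketch of the idea for one table, assuming Oracle (the table ORDERS, its key ORDER_ID, the column AMOUNT, and the ARCHIVE schema are made-up examples; repeat the pattern for each of your 5 tables):

      -- Trace table recording which rows changed and how (illustrative names).
      CREATE TABLE trace_orders (
        order_id    NUMBER  NOT NULL,               -- PK of the changed row
        change_type CHAR(1) NOT NULL,               -- 'I' = insert, 'U' = update
        change_date DATE    DEFAULT SYSDATE NOT NULL
      );

      -- Trigger that records every insert/update; deletes are deliberately
      -- ignored so that deleted rows survive in the archive database.
      CREATE OR REPLACE TRIGGER trg_orders_trace
      AFTER INSERT OR UPDATE ON orders
      FOR EACH ROW
      BEGIN
        IF INSERTING THEN
          INSERT INTO trace_orders (order_id, change_type) VALUES (:NEW.order_id, 'I');
        ELSE
          INSERT INTO trace_orders (order_id, change_type) VALUES (:NEW.order_id, 'U');
        END IF;
      END;
      /

      -- Nightly procedure: merge traced rows into the archive schema
      -- (or over a DB link to the archive database), then clear the trace.
      CREATE OR REPLACE PROCEDURE apply_trace IS
      BEGIN
        MERGE INTO archive.orders a
        USING (SELECT * FROM orders
                WHERE order_id IN (SELECT order_id FROM trace_orders)) s
        ON (a.order_id = s.order_id)
        WHEN MATCHED THEN UPDATE SET a.amount = s.amount     -- list all non-key columns
        WHEN NOT MATCHED THEN INSERT (order_id, amount)
                              VALUES (s.order_id, s.amount);
        DELETE FROM trace_orders;
        COMMIT;
      END;
      /

      -- Schedule the procedure nightly, e.g.:
      -- DBMS_JOB.SUBMIT(:job, 'apply_trace;', TRUNC(SYSDATE) + 1, 'TRUNC(SYSDATE) + 1');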
    Mike

  • Dedicated Archive mailbox databases generate excessive whitespace

    Our company has several mailbox databases that are dedicated to archive mailboxes. I just joined the company a month ago and found one database that was about 500 GB with about 300 GB of whitespace! So I created a new DB, moved all the archive
    mailboxes, and removed the original DB. At the time there was minimal whitespace. Today, the DB is in much the same condition. What makes this so odd is that there are fewer than 30 mailboxes on the database! Further, it seems
    as though these DBs are also generating far too many transaction logs, and management is becoming concerned about drive space and DBs dismounting. Bottom line: how can I determine what is causing the excess whitespace to be generated?

    Hi,
    I recommend you refer to the following article:
    Exchange 2010 whitespace reclamation
    In addition, as Ed mentioned, you can monitor the available new mailbox space with the following command:
    Get-MailboxDatabase -Status | ft Name, AvailableNewMailboxSpace
    Hope this helps!
    Thanks.
    Niko Cheng
    TechNet Community Support

  • Emptying archive infostructure: report AS_AINDDEL_DATE

    Dear all,
    In order to delete the infostructures from object  MM_MATBEL, I use the report AS_AINDDEL_DATE.
    The program is run in background.
    However, the program runs for 100 hours for one year of data and 10 hours for one archiving run (one date).
    Archiving started in 2002, and this is the first time we are trying to delete infostructures.
    Has anyone had performance problems while emptying infostructures?
    Best regards,

    I am not sure why you want to empty the info structures, but if I wanted to empty them I would use transaction SARI -> Status -> Status per info structure -> Delete info structures. This is much faster for emptying the info structures.
    The report AS_AINDDEL_DATE is slow because it has to find the entries for the specific date and then delete the archived information from the structures. Use this report only if you have a requirement to delete the infostructure entries for a particular date.

  • To find top tables in size to archive a database

    Dear Sir,
    I have to archive the top 20 tables of our production database. Can anybody please provide me some documents/links so I can accomplish this task?
    Regards,
    Mushtaq

    To see the table sizes,
    just go to DB02 -> Detailed Analysis -> Choose all objects -> Order by Size.
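    Alternatively, at the database level (assuming an Oracle-based system; DBA_SEGMENTS is an Oracle view), a query along these lines lists the 20 largest tables:

      -- Top 20 tables by allocated space, in MB.
      SELECT *
        FROM (SELECT owner, segment_name,
                     ROUND(bytes / 1024 / 1024) AS size_mb
                FROM dba_segments
               WHERE segment_type = 'TABLE'
               ORDER BY bytes DESC)
       WHERE ROWNUM <= 20;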
    Now for archiving: read the documentation available in the SAP Marketplace. Archiving is not as simple as throwing your data into a separate repository; you need to analyze why, how much data, retention periods, benefits, etc.
    Also, read this overview,
    http://help.sap.com/SAPHELP_NW04S/helpdata/EN/8f/f3b142304cc511e10000000a1550b0/frameset.htm
    Regards
    Juan

  • Archive citadel database in MAX hangs

    I am trying to archive a 4.13 GB uncompressed DSC database on LabVIEW 2013 SP1 (although the database was started with LabVIEW 2013). I have tried twice and it gets stuck (the new location folder size is at 151 MB both times). The progress bar says "Archiving database=20.5%" and "Copying Data=83.0%". I have chosen the option "Destroy source data after it has been archived (local computer only)". The status says "copying data", and no timestamps of files in the new folder have changed after about 9 hours of waiting. If I cancel, MAX cannot display any Citadel databases; this is fixed if I reboot.
    Any ideas?

    Hello barkeram,
    Where are you storing the archive (locally or remote)?
    Are you following the steps in one of the following documents?
    http://digital.ni.com/public.nsf/allkb/E076A0661E03F1EB862571A800079E7B
    http://digital.ni.com/public.nsf/allkb/2B0C74744BB37391862571F500067C64
    Can you try to navigate to the file path of the database and duplicate the file? Afterward, manually add the new database to MAX and try to Archive the new file.
    Regards,
    Thomas C.
    Applications Engineer
    National Instruments

  • Slow opening archived citadel database on first read

    I have archived a Citadel database using the archive VI. I then copied this database to another computer. When I try to read data from this archived database, it takes a long time to return the first set of data (5-10 minutes). New attempts to read data execute much more quickly than the first read. Is there a way to speed up this opening of the Citadel database? Right now I am using the OPEN DB sub-VI that was sent to me to resolve an "error accessing Citadel" problem. The database is not set up as the logging location in the tag engine. This OPEN DB VI is locked, so I can't do any kind of probing to see what is happening. Are you not supposed to work with an archived database?
    Attachments:
    my_citadel_viewer.llb.zip ‏108 KB

    Bump,
    I was unable to reproduce your problem. I archived a DB on my machine and moved it to another Win2k machine. The program works fine whether or not I use the open and close DB VIs. How much data are you trying to retrieve? You may also have a corrupted database, and Citadel is trying to repair the corruption when you first access the data. Have you tried to access the original (pre-archive) database with the VI? What is the result?
    Regards,
    Michael Shasteen
    Applications Engineering
    National Instruments
    www.ni.com/ask
    1-866-ASK-MY-NI

  • Archiving a database

    We have the problem of a database growing larger every day, which is causing it to slow down. We want to keep the old data, though. Would a possible solution be to rename the database something like DatabaseName2013 and create a new database with the
    old DatabaseName? The archive database would only be used for reports. Just wondering if this idea would be easier to implement than transactional replication or partitioning the table by year.
    Thanks,
    Fred
    Fred Schmid

    Hello,
    I have seen some applications that have a second database just for archiving: periodically they send old records to the archive database
    and then purge them from the main transactional database. However, for some reports you will find you need data from one of the databases, and for other reports you need data from both.
    My first suggestion is to try partitioning, if you have Enterprise Edition (see the sketch below).
    Another suggestion is to create a Data Warehouse and an Operational Data Store and send old data and new data (daily) to those entities
    for reporting purposes. Keep only one or two years of data on the transactional database.
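    A minimal sketch of what partitioning by year can look like (SQL Server syntax; the table, column, and function names are made-up examples):

      -- Partition function: boundary dates split rows into one partition per year.
      CREATE PARTITION FUNCTION pf_by_year (datetime)
      AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01', '2014-01-01');

      -- Partition scheme: map every partition to PRIMARY for simplicity; real
      -- setups often place old years on separate filegroups/cheaper storage.
      CREATE PARTITION SCHEME ps_by_year
      AS PARTITION pf_by_year ALL TO ([PRIMARY]);

      -- Create the table on the partition scheme, keyed by the date column.
      CREATE TABLE dbo.Orders (
        OrderID   int      IDENTITY(1,1) NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NULL
      ) ON ps_by_year (OrderDate);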
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • How archive infostructures are filled when the delete program runs(sd_cond)

    I could not find a suitable forum for this, hence posting it here. I need to know how the ZARIX tables get filled in SAP Archiving. As far as I know, they are filled automatically when the delete job runs, but I could not find any code for this in the SD_COND_ARCH_DELETE program.
    My issue is that my delete program did not fill one of the infostructures, which was active, and this infostructure corresponds to a ZARIX table. On the other hand, the infostructure did get filled when triggered manually (verified in the ZARIX table as well).
    I am wondering how this could have happened. Could this be because multiple delete jobs are running, creating various sessions? This is mainly with reference to the SD_COND archiving object.

    Hi,
    There is a separate program to fill infostructures: when you fill them manually (transaction SARJ, Environment -> Fill Structure), a new job is triggered. Have a look at the job that ran when you filled your infostructure manually and analyse the program that it ran.
    There should not be any problem filling the infostructures due to several archiving sessions running at the same time. If you face the same issue again, check whether any of the archiving jobs (write, store, delete) failed for any reason. If all the jobs finished successfully but an infostructure still didn't get filled, look for any OSS Note related to this issue. If you can't find anything, raise an SAP message, as the infostructures have to be filled automatically; otherwise there is no access to the archived data.
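    To check quickly whether a session actually reached a given infostructure table, you could count its rows per archive key (the table name ZARIXSD1, the column name, and the key value below are only examples; SARJ shows the real generated table for your infostructure, and SE11 shows its columns):

      SELECT COUNT(*)
        FROM zarixsd1                             -- generated infostructure table (name varies)
       WHERE archivekey = '000123-001SD_COND';    -- hypothetical archive key from SARA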
    Hope this helps.

  • Broken archive Oracle9i Database Release 2 Enterprise

    Hi
    I'm trying to install Oracle9i Release 2, so I downloaded all four files from http://otn.oracle.com/software/products/oracle9i/htdocs/hpsoft.html. Two of them are broken: server_9201_hpunix64_disk2.cpio.gz (648,282,885 bytes) and server_9201_hpunix64_disk3.cpio.gz (538,652,554 bytes).
    Is this some kind of early-adoption feature?

    Yes. Oracle's current platform strategy is:
    Oracle Platform Roadmap:
    Organizations are deploying Oracle's database, applications, application servers and development tools on a short list of systems that dominate the market. Beginning with the Oracle9i Database, Oracle9i Application Server, and E-Business Suite (Oracle Applications) 11i, the following operating systems will be supported:
    HP Alpha OpenVMS(1)
    HP Tru64 UNIX(2)
    HP-UX(2)
    IBM Linux/390(1) & z/Linux(1)
    IBM OS/390(1) & z/OS(1)
    IBM RS/6000 AIX(2)
    Intel Based Server LINUX
    Microsoft Windows 2000
    Microsoft Windows NT for Intel
    Solaris Operating System (SPARC)
    (1) - For Oracle9i Database ONLY
    (2) - 64-bit ONLY
    Several Microsoft Windows clients are supported, including Microsoft Windows XP.
    Oracle will continue to support the Oracle8i Database and current versions of all other products throughout their support life cycle on the operating systems where they were released, including 32-bit versions of Oracle running on HP-UX and IBM RS/6000 AIX systems. Additionally, Oracle will continue to provide 32-bit client library support through the life of the Oracle9i Database. In keeping with our normal product support life cycle process, customers will be informed of status changes to individual product versions currently supported.

  • What is the best backup plan for Archive Databases in Exchange 2013?

    Hi,
    We have Exchange 2013 in a hybrid setup with O365.
    We have on-premises Exchange 2013 servers with 3 copies of the primary database & a single copy of the archival DBs.
    Now we have to frame a backup policy with Symantec Backup Exec that has to back up our primary & archival DBs.
    In Exchange 2007, before the migration to 2013, our DB policy was a weekly full backup & a monthly full backup.
    Please suggest the best possible backup strategy we can follow with the 2013 DBs,
    especially for the archiving DBs.
    Our archiving policy has 3 categories: any email older than 6 months OR 1 year OR 2 years should go to the archive mailbox.
    Keeping this in mind, how should we design the backup policy?
    Manju Gowda

    Hi Manju,
    you will not find best practices different from the common backup guidelines, as there is no archive-DB-specific behaviour. Your users may move items to their archive at any time, and your retention policies may move items that matched them
    at any time. The result is frequently changing content in both mailbox and archive mailbox databases, so you need to back up both the same way. You can also keep archives together with mailboxes in the same mailbox DB.
    Please keep in mind that backup usually means data availability in case of system failure, so you may consider a less frequent backup of your archive DB, with a dependency on the "keep deleted items" (/mailboxes) setting on your mailbox database.
    Example:
    keep deleted items: 30 days
    backup of archive db: every 14 days
    restore procedure:
    * restore archive DB content
    * add difference from recover deleted items (or Backup Exec single item recovery) for the missing 14 days.
    So it depends more on your process than on a backup principle.
    Regards,
    Martin

  • Creating a new Infostructure in archiving for Object PM_ORDER

    Hi Gurus,
    The SAP standard infostructures used when archiving PM Orders do not include 'Order Type'. I have created a new Z infostructure and included the Order Type in the field catalog and the new infostructure, but it doesn't get populated at the point of archiving. Does anyone know why not, or how I can fix this?

    Hi Ben,
    Has your custom archive infostructure been activated? Also, the infostructure gets populated during the delete phase of the archiving process, not the write phase. Have you tried manually filling the structure? If so, what does the job log show? Does the tablespace the infostructure is in have enough space?
    These are some things to check.
    Hope this helps.
    Best Regards,
    Karin Tillotson

Maybe you are looking for

  • Tecra A11-1EG - cannot access BIOS after setting supervisor password

    Hello, I wonder if anyone can help ? We have just started to put a supervisor password onto our Tecra A11 laptops so our pupils cannot set passwords on the hard drive etc. To get into the BIOS and set a BIOS password, we press F2 as the machine start

  • 10.5.5 hangs with the beachball

    I have a macbook (dual core 2) and early intel iMac (dual core) 2006. Both have pretty much same setup and now have 10.5.5. My iMac started exhibiting the "beachball of death" freezes more and more in last few days. It started in iTunes when watching

  • Getting loadSourceError using the debugger when trying to open a file referenced by a sourcemap

    Specifically, when using the debugger and trying to open up a source JS file that is referenced in a sourcemap, I now get: Error loading source: loadSourceError Yesterday it was working perfectly. The only thing that seems to have changed was updatin

  • Open item upload - FI

    Hi, I want to upload all open items for vendor/customer and gl. Which BAPI should i use for these. Can i upload all open items in using one bapi or separate BAPIs are available for vendor/customer and gl. T-code are : FB01 or f-02. Please guide me in

  • Query on Import profile in EPMA

    Hi Team, I am using an import profile to load metadata into EPMA HFM application from a flat file(.ads file).I am unable to map the below mentioned HFM properties in the Import profile.These properties are not appearing in the Property selector.Versi