Essbase archive database

Hi All,
Can we archive essbase database from dev and use it on essbase server in prod?
Thanks,
Manoj

I have restored an archive from app1 to Sample, which have different outlines.
You mean app1 and the Sample application have different outlines... and you didn't get any errors while loading?
But I'm unable to do so when trying it across different Essbase servers on different machines.
Let me know what kind of data you are trying to load... level0 or all data?
Which server are you currently logged in to, the Dev server or the Prod server? (I mean, from which server are you trying to load, Dev or Prod?)
Copy this archived data to the Prod server and load it there....
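One way to script this (a sketch only, assuming Essbase 11.x where MaxL archive/restore is available; the application.database name and file paths are placeholders):

```maxl
/* On the Dev server: archive the database while it stays online */
alter database Sample.Basic archive to file '/backups/sample_basic.arc';

/* Copy the .arc file to the Prod server, then restore it there */
alter database Sample.Basic restore from file '/backups/sample_basic.arc';
```

Note that an archive generally restores only to a database whose configuration matches the one it was taken from, which is why restoring across applications with different outlines fails.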
Edited by: Prabhas on Oct 19, 2011 11:33 AM

Similar Messages

  • 1 SQL instance with several archive databases using all AWE RAM memory of the server

    Hello,
    I just migrated my accounting system to a new SQL Server deployment of the software.
    We just purchased the expensive SQL Server Enterprise edition to accommodate it.
    I put some lower-priority replicated databases that we occasionally query on the same instance. I also imported a 70GB old archive DB that we use on very rare occasions. We are not as concerned about performance on these databases
    as we are about the accounting DB on the same instance.
    The MAX memory was set to unlimited on that instance. As soon as I put in this monster 70GB archive database, the AWE memory usage consumed my full 30GB of RAM.
    Is there a way to set memory usage so the archive databases do not get loaded into the AWE, while the critical accounting system DB on the same instance is still taken care of?
    Or do I have to shell out another $3-6k for a separate instance? SQL Server Express has a 4GB limitation, and one of the backup DBs we don't really care about is 20GB, replicated from Azure.

    Hi,
    >>70GB archive database, the AWE memory usage consumed my full 30GB of RAM.
    How did you check that the archive database is using 30 GB? Did you use sys.dm_os_buffer_descriptors? Does the SQL Server service account have Locked Pages in Memory?
    SQL Server brings pages into memory as they are requested. If you access the archive database heavily it is bound to take memory, but if you stop accessing it and access your other database, SQL Server will flush out the archive's pages if required.
    SQL Server manages memory dynamically, so I guess you do not need to worry.
    >>Is there a way to set memory usage so the archive databases do not get loaded into the AWE, while the critical accounting system DB on the same instance is still taken care of?
    No, there is no way; the buffer pool is a shared region.
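To see which database is actually occupying the buffer pool, a common diagnostic query (the database names in the output will be your own):

```sql
-- Buffer pool usage per database, in MB (pages are 8 KB)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```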

  • Multiple Archive databases

    I can split out multiple archive databases just like I split out multiple primary databases on Exch 2013, correct?
    Thank you.

    Hi,
    Do you mean you want to split one archive database into multiple archive databases? If so, you can move some archive mailboxes from one archive database to another archive database using the New-MoveRequest command.
    If I have misunderstood your concern, please feel free to let me know. For more details about the New-MoveRequest command, please refer to
    New-MoveRequest.
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support
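A sketch of the archive-only move described above (the mailbox identity and database name are placeholders):

```powershell
# Move only the user's archive mailbox to another archive database
New-MoveRequest -Identity "jdoe" -ArchiveOnly -ArchiveTargetDatabase "ArchiveDB2"

# Check the progress of the move
Get-MoveRequest -Identity "jdoe" | Get-MoveRequestStatistics
```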

  • How to create an archive database?

    Emails which are older than 1 year should be archived into a separate database from the main mailbox database. It
    is an Exchange 2010 installation. Please let me know the procedure.
    Thank you

    Hi There,
    An archive database is the same as a normal database, and there is no special method to create it.
    You just assign it to users as a secondary mailbox and apply policies to it.
    Once it is configured, the emails stored in the secondary (archive) mailbox will not be cached by Outlook.
    I recommend you exclude the archive DB from automatic mailbox provisioning using the cmdlet below.
    Set-MailboxDatabase "ArchiveDB" -IsExcludedFromProvisioning $true
    Exchange Blog:
    www.ntweekly.com
    MCSA, MCSE, MCITP:SA, MCITP:EA, MCITP:Enterprise Messaging Administrator 2010,MCTS:Virtualization
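Putting the reply together, a minimal end-to-end sketch (the database, server, path, and mailbox names are placeholders; assumes the Exchange 2010 Management Shell):

```powershell
# Create the archive database and exclude it from automatic provisioning
New-MailboxDatabase -Name "ArchiveDB" -Server "MBX01" -EdbFilePath "D:\ArchiveDB\ArchiveDB.edb"
Set-MailboxDatabase "ArchiveDB" -IsExcludedFromProvisioning $true

# Enable a personal archive for a user, placed in that database
Enable-Mailbox -Identity "jdoe" -Archive -ArchiveDatabase "ArchiveDB"
```

A retention policy with a move-to-archive tag (e.g. "older than 1 year") then handles moving the old items automatically.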

  • Reporting data in the Archive Database

    Environment: 10gR3 StandAlone Enterprise.
    I successfully configured Archiving and I can see data being written to the archiving database. I want to now report on the data present in this database. My reports need to be more detailed than what the Archive Viewer displays.
    1. Is there any document that defines the Archive schema?
    2. Are there any SQLs available that make the proper joins against the tables to present the data?
    For example, one report would list every completed instance, below which would be listed the activities and the participants who completed them.
    thanks

    Any help with archive database SQL is appreciated.
    thanks

  • How to enable access of Archive database mails on Outlook.

    We are using Exchange 2010, and I recently created a new database to use as an archive database, to hold messages more than a year old and be accessible to users as a folder in their mailbox. I was able to apply the "All other folders" tag under
    Retention Tags and applied it to a test mailbox. It was successful, and I was able to access those archived mails in a separate folder backed by that database. It was visible in OWA, whereas when we configured the mailbox in Outlook, it did
    not show that archive folder or its mails. Please let me know what other settings I should be looking at to get this working.
    FYI, I tried Microsoft Outlook Professional Plus 2010 and 2013, but neither showed it.

    Hi Shasti,
    According to your description, I understand that Outlook client cannot display the online archive folder, however it works in OWA.
    If I misunderstand your concern, please do not hesitate to let me know.
    I want to double-check some points; please help collect answers to the following questions:
    1. Is Outlook deployed in a Terminal Server environment?
    2. What version of Outlook is it?
    The reason I am asking is that other users have experienced a similar issue, and it may be related to the Outlook license.
    Additionally, I found a similar thread about your question, for your reference:
    https://social.technet.microsoft.com/Forums/en-US/224019df-cbf7-471a-94c5-5a2cd44d6c6e/outlook-2010-not-showing-exchange-2010-archives-owa-does?forum=exchangesvrclientslegacy
    “This is permissions related. Give the user FULL access to the mailbox. We migrated from 2003 to 2010, then later introduced archiving. Granting or removing and regranting permissions to the primary user will resolve this issue. Once you login to RDP or
    another local machine, it may take a few seconds to update, but it will populate.”
    Best Regards,
    Allen Wang

  • Hot backups for essbase planning databases

    Hi gurus
    Can you please advise on the best way to make hot backups of our Essbase Planning databases for disaster recovery?
    We tried DoubleTake, but the files are locked unless Essbase is put in read-only mode.
    Any good tools for this? Anything certified? Heard maybe VBE?
    TIA
    M.

    Hi,
    First of all, have you read the document on backing up Essbase: http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/hfm_backup.pdf
    If you are taking a backup while Essbase is running, you will need to put the databases into read-only mode; this can be done using a MaxL script, and again to take them out of read-only mode.
    There are lots of 3rd-party applications out there for backing up file structures; it all depends on what you want it to do and how much you want to pay.
    If it is a small installation you could even script it yourself, depending on your knowledge, to back up the files and copy them to a SAN or backup drive.
    Cheers
    John
    http://john-goodwin.blogspot.com/
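The read-only toggle mentioned above can be sketched in MaxL (the application.database name and file name are placeholders):

```maxl
/* Put the database in read-only (archive) mode before the file-level backup */
alter database Sample.Basic begin archive to file 'sample_basic.lst';

/* ... run the file-system backup here ... */

/* Return the database to read-write mode */
alter database Sample.Basic end archive;
```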

  • Script to do restructures into Essbase's databases

    Hi All,
    Can anyone help me with the process of creating a script to do restructures on Essbase databases, with steps?
    Thanks

    Have a look at force restructure - Alter Database (Misc)
    I know somebody will come along and provide you with the exact script, but I believe it is better to work it out yourself.
    Cheers
    John
    http://john-goodwin.blogspot.com/
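For reference, the statement behind that suggestion is a one-liner (substitute your own application.database):

```maxl
/* Force a full restructure of the database */
alter database Sample.Basic force restructure;
```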

  • How to prevent duplicate keys in archive database?

    I am struggling with this problem.
    Background: I'm working on a project where I have to build an archive database. The archive database should get
    all data of the operational database. It should even save every update ever made, so it literally contains the entire history of the operational database (this is a must, and the whole project revolves around this idea). This is solved by using Change Data
    Capture. After that, the data should go through a staging area and eventually into the data warehouse database. I came up with a solution and worked it out in the prototype, and it seemed to be working fine. I stupidly forgot to include the foreign keys,
    so the archive database didn't have the original structure, but of course it should (no wonder it went okay without too much hassle).
    Problem: Because we want to store everything in the archive, there will be duplicate primary keys (for instance,
    many identical contact_ids, because a telephone number changes a couple of times). I thought to solve this by adding a new auto-increment primary key that exists purely to make a record unique. But when it comes to foreign keys, it's impossible. You want
    contact_id to be allowed to be duplicated, and in that case it cannot be a primary key. But a foreign key can only reference a primary key or another unique key, not other normal columns.
    Any advice on this? It's an absolute must to store all changes.

    All of you, thanks for replying, I'm happy you're trying to help me out with this problem. 
    Visakh and Louis, thanks that seems like the solution for this case indeed. Yes, the dimensional design appeals more to me as well.
    I read the articles and watched some tutorials. But I can't work it around the solution that I had.
    More background info: I use CDC to track all the changes done in the operational database and SSIS (following one of Matt Mason's tutorials and with a lot of alterations to make it fit for my project). I have this control flow (don't mind that
    error haha):
    (Oh apparently I cannot add images yet, so here's the link for the screenshot:) http://nl.tinypic.com/r/w0p1u0/8
    Basically I create staging tables in my archive database next to my normal archive tables. Then start CDC control task to get the processing range and then it copies everything from the operational database (joined with a few CDC columns) to the staging
    tables. After that the processing range ends so it will only get the rows it hasn't processed before. And then I do some updates on the staging tables and then finally insert everything into the archive tables. The staging tables then can be truncated. After
    this, the data will go to the staging area for transformations and finally to the DWH. The reason for having a staging area between the archive and the DWH is that the archive will not only be used as a source for the DWH but also on its own. The DWH
    will not contain 100% the same stuff as the archive (like maybe some transformations, extra columns with calculated fields, plus some columns don't need to be in the DWH at all). When all the ETL stuff is done in SSIS, I have to use SSAS to define all the
    facts, dimensions, cubes. 
    Example: So I try to work with the SCD type 2. If I understood it correctly (and maybe I didn't): for example, the contact table in archive should have the surrogate key ID (the auto-increment one). The business key is the contact_id
    and can be used uniquely with the time range columns. 
    Following Visakh's post, the ID becomes the key that the foreign key will reference to. For example: 
    Contact table:
    ID: 1 contact_id: 100
    Name: Glenn start_time: 2014-01-01
    End_time: 2014-08-20
    ID: 2 Contact_id: 100
    Name: Danzig Start_time: 2014-08-20
    end_time: NULL
    Sorry, I couldn't style it as table somehow. So the employee changed his name. It makes sense that the time period tells when the first name was valid. 
    Organisation table: 
    ID: 1
    org_id: 20 
    Contact_id: 1
    Start_time: 2014-01-01
    End_time:NULL
    (it references to ID instead of contact_id as suggested)
    The employee belongs to an organisation. It references 1 which is still old data. But this is the last version of the organisation record. 
    So then I need a table to link the 2: 
    organisation_contact table
    contact_id:100
    org_id: 20
    and then I need another one to join with the surrogate key?
    ID: 1
    org_id: 20
    ID: 2
    org_id: 20
    (Guess it would make more sense to have org_id in the contact table but for now it's an example)
    Problems: I don't quite understand how this works. From the example I saw you have to have another table (the fact table) to link it to the surrogate key. Would this mean I have to have facts and dimension tables in my archive database?
    My intention was actually to have all records of the operational databases (all the updates too) in my archive. And after that create the facts and dimensions in the DWH with SSAS. The example looks like I should do it earlier. 
    I don't know how to combine this with the cdc solution. I want to get all the data by using CDC. Like how every update gets registered in the accompanying CDC table. Then the archive will get the CDC data. But then how to combine this in use with SCD. I
    have the surrogate key in archive (ID) and then I make the start and end time columns. I need to point all references to the ID and then make the other table to keep track of the contact_id (original PK) and another key. At last make another table to track
    all the current data in the fact. 
    Another question: would you recommend the SCD task in SSIS? I read it is not that great if you have many rows to work with. What do you think is the best method to implement it?
    Thanks so much again.
    EDIT: What about slowly changing dimensions type 4? It looks like you don't have to change the references of the foreign key then. Why do you prefer 2 over 4?
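The surrogate-key mechanics discussed above can be illustrated with a tiny in-memory sketch (Python is used purely for illustration; the column names follow the contact example in the thread):

```python
from datetime import date

# The SCD type 2 "archive" table: each row is one version of a contact
contacts = []

def upsert_contact(contact_id, name, change_date):
    """Close the current row for this business key and open a new version."""
    for row in contacts:
        if row["contact_id"] == contact_id and row["end_time"] is None:
            row["end_time"] = change_date  # expire the old version
    contacts.append({
        "id": len(contacts) + 1,   # surrogate key, auto-increment, unique
        "contact_id": contact_id,  # business key, may repeat across versions
        "name": name,
        "start_time": change_date,
        "end_time": None,          # None marks the current version
    })

upsert_contact(100, "Glenn", date(2014, 1, 1))
upsert_contact(100, "Danzig", date(2014, 8, 20))  # name change, same contact

# Only one row per business key is "current" at any time
current = [r for r in contacts if r["end_time"] is None]
# Foreign keys elsewhere reference the surrogate "id", never "contact_id"
```

The point of the sketch: `contact_id` repeats freely, uniqueness lives in the surrogate `id`, and the validity range tells you which version any referencing row was pointing at.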

  • CUIS 7.5: ERROR: No Archiver database size is supported on this system

    Dear all,
    I ran the configuration tool on CUIS for the Archiver, but I encounter an error and the configuration exits.
    Please help me on this.
    Thanh
    [1-20-2011 15:29:00] INFO:    Verifying Archiver pre-requisites.
    [1-20-2011 15:29:00] INFO:    Microsoft SQL Server is present.
    [1-20-2011 15:29:00] INFO:    The system has enough fixed drives to support at least one Archiver size.
    [1-20-2011 15:29:00] INFO:    The Archiver requirements have successfully passed verification.
    [1-20-2011 15:29:00] INFO:    One or more CUIS components are available for configuration.
    [1-20-2011 15:29:00] INFO:    CUIS verification complete.
    [1-20-2011 15:29:03] INFO:    User selection: Cisco Archiver
    [1-20-2011 15:29:03] INFO:    Displaying screen: Archiver - Product Selection
    [1-20-2011 15:29:06] INFO:    User selected: Cisco Unified Contact Center Enterprise 7.5(1)
    [1-20-2011 15:29:06] INFO:    Displaying screen: Archiver - User Verification
    [1-20-2011 15:29:27] INFO:    The user has selected to use the default instance.
    [1-20-2011 15:29:27] INFO:    Internal name for instance MSSQLSERVER: MSSQL.1
    [1-20-2011 15:29:27] INFO:    TCP/IP connectivity is enabled for the SQL Server instance: MSSQLSERVER
    [1-20-2011 15:29:28] INFO:    The connection to the local SQL Server as an administrator was successful.
    [1-20-2011 15:29:28] INFO:    Microsoft SQL Server Agent is present.
    [1-20-2011 15:29:28] INFO:    The SQL Server default instance is valid.
    [1-20-2011 15:29:28] INFO:    User selection: Domain user
    [1-20-2011 15:29:28] INFO:    Domain entered: HCMCPT
    [1-20-2011 15:29:28] INFO:    The domain user login was successful.
    [1-20-2011 15:29:28] INFO:    Standard username: ArchiverUser
    [1-20-2011 15:29:28] INFO:    The database security login is already present: HCMCPT\ArchiverUser
    [1-20-2011 15:29:28] INFO:    The common Archiver database is not present.
    [1-20-2011 15:29:28] INFO:    The Archiver product-specific data is not present.
    [1-20-2011 15:29:28] WARNING: Small database size selection is disabled.
    [1-20-2011 15:29:28] WARNING: Large database size selection is disabled.
    [1-20-2011 15:29:28] ERROR:   No Archiver database size is supported on this system.
    [1-20-2011 15:29:38] INFO:    Displaying screen: Archiver - Database Size

    I followed the workaround for the bug, but it did not help; I still encounter the same error.
    Do I need to upgrade to a later version before running the config tool?
    Regards,
    Thanh

  • Archiving old data from a main database into an archived database

    Hello colleagues,
    We are trying to create a stored procedure to archive data older than 6 months (180 days) from our production database into a new archive database.
    We want to archive only 20,000 rows a day, and we need to schedule it on a daily basis. We also want to delete those archived rows from the production database.
    Could you please share your experience with archiving.
    Thanks

    Hi BG516, 
    Ok, I got your point now :) 
    First, how long does it take to read these 20,000 rows? It shouldn't be long, especially if the table is well indexed to cover that query (an index on the date column covering the rest of the table, basically). There are many aspects that may affect
    the process, but my guess is that the big deal will be deleting these old rows from your production table.
    Reading these rows will require a shared latch, and if you're reading old data your daily processes shouldn't be trying to write in those particular pages (again, it depends mainly on the indexes). Deleting them will need an exclusive lock, and that would be more
    problematic; reads are much more common than writes in a data warehouse.
    When facing this kind of problem, I always had to find a non-peak period of time to execute the required processes. 
    A few things that come to my mind: 
    - Use BULK INSERT when loading the data into your historical table so you can minimize the time you spend
    reading from the production table
    - Check the number of indexes you'll impact when deleting these rows. The more, the worse (more time
    needed to maintain them)
    - What version of SQL Server are you using? The Elastic Scale feature from Azure SQL Database covers just
    that scenario (http://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Elastic-Scale)
    Regards.
    Pau.
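The daily 20,000-row move can be sketched in T-SQL (the table and column names are assumptions; schedule it as a SQL Agent job for the daily run, and note that the `OUTPUT ... INTO` target must have no triggers or foreign keys):

```sql
-- Move up to 20,000 rows older than 180 days into the archive
-- and delete them from production in a single atomic statement
DELETE TOP (20000)
FROM ProdDB.dbo.Orders
OUTPUT deleted.* INTO ArchiveDB.dbo.Orders
WHERE OrderDate < DATEADD(DAY, -180, GETDATE());
```

Because the `OUTPUT` clause captures exactly the rows the `DELETE` removed, there is no window where a row exists in neither table or in both.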

  • What is the best backup plan for Archive Databases in Exchange 2013?

    Hi,
    We have Exchange 2013 with Hybrid setup with O365.
    We have on-premise Exchange 2013 servers with 3 copies of the primary database and a single copy of the archival DBs.
    Now we have to frame a backup policy with Symantec Backup Exec, which has to back up our primary and archival DBs.
    In Exchange 2007, before the migration to 2013, our DB policy was a weekly full backup and a monthly full backup.
    Please suggest the best possible backup strategy we can follow with the 2013 DBs,
    especially for the archiving DBs.
    Our archiving policy has 3 categories: any email older than 6 months, 1 year, or 2 years should go to the archive mailbox.
    Keeping this in mind, how should we design the backup policy?
    Manju Gowda

    Hi Manju,
    you will not find best practices different from the common backup guidelines, as there is no archive-DB-specific behaviour. Your users may move items to their archive at any time, and your retention policies may move items that matched them
    at any time. The result is frequently changing content in both mailbox and archive mailbox databases, so you need to back up both the same way. You may also handle archives together with mailboxes in the mailbox DB.
    Please keep in mind that backup usually means data availability in case of system failure. So you may consider doing a less frequent backup of your archive DB, with a dependency on the "keep deleted items" (/mailboxes) setting on your mailbox database.
    Example:
    keep deleted items: 30 days
    backup of archive db: every 14 days
    restore procedure:
    * restore archive DB content
    * add difference from recover deleted items (or Backup Exec single item recovery) for the missing 14 days.
    So it depends more on your process than on a backup principle.
    Regards,
    Martin

  • Exchange 2010 personal archive database massive log file generation

    Exchange Server 2010 SP3 + Update Rollup 4
    Windows Server 2008 R2, all updates
    VMware ESXi 5.5
    Server config: 2 x Xeon Quad Core 2.20GHz, 16GB RAM
    We recently started using personal archives. I created a database for this purpose ("Archive Mailboxes") on the same datastore as our live mailbox database ("Live Mailboxes"). It works great except that the mailbox maintenance generates
    massive amounts of log files, over 220GB per day on average. I need to know why. The Live Mailbox database generates around 70GB of log files every day. The database sizes are: Live = 159.9GB, Archive = 196.8GB. Everything appears to be working fine, there
    are no Error events related to archiving. There are 10025 MSExchangeMailboxAssistant warning events logged every day. I have moved those mailboxes back-and-forth to temp databases (both Live and Archive mailboxes) and the 10025 events have not stopped so I'm
    reasonably certain there is no corruption. Even if there were it still doesn't make sense to me that over 100 log files are generated every single minute of the day for the Archive store. And it's not that the database isn't being fully backed up; it is, every
    day.
    Do I need to disable the 24x7 option for mailbox maintenance to stop this massive log file generation? Should I disable mailbox maintenance altogether for the Archive store? Should I enable circular logging for the Archive store (would prefer to NOT do this,
    though I am 100% certain we have great backups)? It appears to me that mailbox maintenance on the Live store takes around 12 hours to run so I'm not sure it needs the 24x7 option.
    This is perplexing. Need to find a solution. Backup storage space is being rapidly consumed.

    I'm sure it will be fine for maintenance to run only on weekends so I'll do that.
    We use Veeam B&R Enterprise 7.0.0.833. We do not run incremental backups during the day but probably could if necessary. All this is fine and dandy but it still doesn't explain why this process generates so many logs. There are a lot of posts around
    the internet from people with the same issue so it would be nice to hear something from Microsoft, even if this is expected behavior.
    Thank you for the suggestions!

  • Essbase and database in different server

    Hi,
    Is it possible to put Essbase on one server and the database (Oracle, SQL Server) on another server? If it is possible, what do I need to do?
    Thanks,
    PC

    Hi,
    Assuming that you've got powerful boxes for this purpose: if you are going to use the database server just as a repository database server for, say, Planning, then it's fine to have Essbase and the RDBMS on the same server. However, it's not advisable to have Essbase on a server where a heavy data warehouse is co-hosted.
    Alp

  • Help: essbase sample database no data

    Hi all,
    I am a newbie to Essbase. I have installed the Essbase server/client components, but I found there is no data in the sample databases, e.g. ASOsamp.Sample and Demo.Basic, when using Visualize & Explore with the Excel add-in.
    Could someone please tell me how to get real data to experiment with? Did I make a mistake in the installation procedure?
    I would also appreciate knowing where to download other good demonstration databases.
    Thanks in advance.
    seamus

    The data for the database is in a file called Calcdata.txt. It is in export file format, so you don't need a load rule to load it. In EAS, right-click on the database name, select Load Data, then find the file and click OK; it will load without errors. Then you will need to calculate the database. Again, right-click on the database name and select Calculate; select Default and click OK. In a second or two the DB will be calculated.
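The same load can be scripted in MaxL instead of going through the console (a sketch; the application.database and the file name follow the reply above, and the path needs adjusting to where the file actually lives):

```maxl
/* Load the export-format file; no load rule is needed */
import database Sample.Basic data from local text data_file 'Calcdata.txt' on error abort;

/* Run the default calculation */
execute calculation default on Sample.Basic;
```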
