Backup of BO 6.5 Repository

Hi All,
Our client is migrating from BO 6.5 to BI 4.1, so before proceeding further I am planning to take a backup of the reports/repository. Can anyone please help me in this regard?
Regards,
Sahil

Hi Sahil,
There are two tools available for this: the Upgrade Management Tool (UMT) and Promotion Management.
The UMT is used for the version change, while Promotion Management is used for taking the backup.
For more details, go through this document:
Official Product Tutorials – SAP BusinessObjects Business Intelligence Platform 4.x
Let me know if you have any other questions.
Thanks,
Shardendu Pandey

Similar Messages

  • ISE backup ignoring subdirectory defined in repository

    Greetings,
    I have a repository configured like this:
    repository Solarwinds_SFTP_SERVER1
      url sftp://server1/ISE/
      user ISE password hash <password hash>
      host-key host server1
    When I test with show repository, everything works correctly and I see the files in the ISE subdirectory:
    # sh repository Solarwinds_SFTP_SERVER1
    AdminFullDaily-131002-0100.tar.gpg
    AdminFullDaily-131003-0100.tar.gpg
    AdminFullDaily-131004-0100.tar.gpg
    ise_catalog.xml
    repolock.cfg
    Test-131004-1044.tar.gpg
    However, when I run a manual or scheduled backup, the ISE subdirectory is ignored and the files land in the root directory of the SFTP server.
    Any ideas? We are on version 1.1.4 patch 2. We can't move to a newer patch until a separate issue is resolved.

    Can you get the following debugs from the ACS, along with the corresponding logs from the SFTP server?
    debug transfer 7
    debug copy 7
    Enable the debugs on the ACS and run the backup command again.
    ~BR
    Jatin Katyal
    **Do rate helpful posts**

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main things we'd like to be able to do (without using a backup agent within each VM):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single file restore is probably the most difficult/manual. The rest can be done manually from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM... this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from Oracle support on that.
    In short:
    - First, "manual" copies of files into the repository are neither recommended nor supported.
    - Second, we have to go back and forth through templates and an HTTP (or FTP) server.
    Note that when creating a template or creating a new VM from a template, we're talking about full copies. No "fast-clones" (snapshots) are involved.
    This is ridiculous.
    How to back up a VM:
    1) Create a template from the OVM Manager console
    Note: Creating a template requires the VM to be stopped (copying the virtual disk while the VM is running would corrupt data), and the process of creating the template makes changes to the vm.cfg.
    2) Enable Storage Repository Backups as described here:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    3) Mount the NFS export created above on another server
    4) Then create a compressed file (tgz) from the relevant files (cfg + img) in the Repository NFS mount.
    Here is an example listing of such a template archive:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
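    For example, once the relevant vm.cfg and .img files have been gathered into a directory named after the VM, the archive above could be created like this (the mount point below is a hypothetical placeholder for your backup NFS mount):
    # create the compressed template archive listed above
    $ cd /mnt/repo_backup
    $ tar czf OVM_EL5U2_X86_64_PVHVM_4GB.tgz OVM_EL5U2_X86_64_PVHVM_4GB/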
    How to restore a VM:
    1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server
    2) Import to the OVM manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the Virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • Minimum things required to take master and work repository backup.

    How do I back up the master and work repositories?

    Hi,
    In ODI 11g, under the Topology navigator you can click the top-right icon and select Export. You'll be able to export the master and the work repositories.
    You can also use the OdiExportMaster and OdiExportWork tools in a package/procedure/bash script to schedule a daily backup.
    A last alternative is to back up the database schema(s) containing your master/work repositories.
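    As an illustration of that last alternative (a hedged sketch only; the schema names, credentials and directory object below are hypothetical placeholders, not values from this thread), a Data Pump export of the repository schemas could look like:
    # export the master and work repository schemas into one dump file
    expdp system/<password> schemas=ODI_MASTER,ODI_WORK directory=DATA_PUMP_DIR dumpfile=odi_repos.dmp logfile=odi_repos_exp.log
    Restoring would then be an impdp of the same dump file into a clean database.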
    Hope it helps,
    JeromeFr

  • Backup to file system and sbt_tape at the same time?

    Hello!
    Is it possible to do an RMAN backup to disk and SBT_TAPE at the same time (just one backup, not two)? The reason is that I want to copy the RMAN backup files from the local file system via robocopy to another server (in a different building), so that I can use them for fast restores in case of a crash.
    If not, what is the recommended strategy in this case? Backups should be available on tape AND on the file system of a different machine.
    Environment: Oracle 10g, Windows Server 2008, CommVault for backup to tape.
    Thanks for your advice.
    Best regards,
    Christian

    If you manually copy backupsets out of the FRA to another location that is still accessible as "local disk", you can use the CATALOG command in RMAN to catalog the copies.
    Thus, if you copy or move files from the FRA to /mybackups/MYDB, you would use
    "CATALOG START WITH '/mybackups/MYDB'" or individually "CATALOG BACKUPPIECE '/mybackups/MYDB/backuppiece_1'" etc.
    Once you have moved or removed the backupsets out of the FRA, you must use
    "CROSSCHECK BACKUP"
    and
    "DELETE EXPIRED BACKUP"
    to update the RMAN repository. Otherwise, Oracle will continue to account for the disk space consumed by the backup pieces in V$FLASH_RECOVERY_AREA_USAGE and will soon "run out of space".
    You would do the same for archived logs (CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL) if your archived logs go to the FRA via USE_DB_RECOVERY_FILE_DEST.
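    Put together, the housekeeping above could be run as a short RMAN script (a sketch only; /mybackups/MYDB is the hypothetical copy location used in this example):
    # register the manually copied backup pieces with the repository
    CATALOG START WITH '/mybackups/MYDB';
    # mark pieces no longer present in the FRA as expired, then remove their records
    CROSSCHECK BACKUP;
    DELETE EXPIRED BACKUP;
    # the same housekeeping for archived logs kept in the FRA
    CROSSCHECK ARCHIVELOG ALL;
    DELETE EXPIRED ARCHIVELOG ALL;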

  • What would be convenient way to backup mass roles details from GRC ?

    Hello SCN folks,
    For a GRC 10.1 environment in which 100k-plus roles are maintained, what would be an ideal and convenient way to back up the role details periodically? Since the volume is high, a direct export would result in timeouts and huge overhead in the system.
    The following are the generic steps to export the details of one or more roles from GRC:
    Access Management work center -> Role Management -> Role Maintenance screen -> choose one or multiple roles per the search parameters and landscapes -> click "Role Details Export" -> select all attributes to be exported -> click "Export"
    Regards,
    Suvonkar

    Hi Colleen,
    To maintain a periodic backup of all the changes made to role and profile attributes in a large and dynamic SAP environment, and to back up new roles and attributes being added in GRC for provisioning and risk simulation.
    For approver detail changes, the backup also acts as a reference repository of previous approvers. The attributes we want to capture are:
    Role Name
    Landscape
    Role Type
    Description
    Business Process
    Subprocess
    Project Release
    Role Status
    Critical Level
    Sensitivity
    Profile Name
    Profile Description
    Functional Area
    Company
    Assignment Approver
    Role Content Approver
    Certification Period
    Reaffirm Period
    Etc
    Regards,
    Suvonkar

  • How to find expired backup?

    Dear All,
    How do I find an expired RMAN backup?
    How does it get expired (according to which concept)?
    Can an expired RMAN backup not be used for a restore?
    If there is a catalog database, do we need to register the target and then take the backup (is that a must)?
    Sorry for the easy questions, but I'm in a situation where I need to know.
    Please help me :-)
    Note: Oracle 9i
    OS: HP-UX
    regards,
    DB :-)

    Hello,
    Expired backups are those that are no longer available at the location where you took the backup (disk or tape). Running "crosscheck backup" lists the expired backups and also updates the repository, marking those backups as expired. You can delete the expired backups using "delete expired backup".
    If you have copied the backups to a different location using OS commands and the repository is not updated, then the crosscheck command lists these backups as expired. If these backups are required for restoration, you can catalog them at the location where they are currently placed using "catalog start with '<path where the backup is stored>'".
    Since you are on 9i, "catalog start with ..." is not supported. Maybe you can use the DBMS_RCVMAN package, but I am not sure about its usage.
    Regards,
    Shivananda

  • Hello RMAN backup problem.....

    Hi there,
    We have a setup of Windows Server 2003 with Oracle 10g Release 2, and Veritas NetBackup version 5 MP5 is used to trigger the backup. On 1st of July 2007 the backup failed for the first time. The error was RMAN-06059: expected archived log not found, loss of archived log compromises recoverability. No one paid attention then; I have been looking into it over the last couple of days.
    I have certain questions, as I am not big on RMAN.
    1) Is it possible to take an RMAN backup without an RMAN repository (recovery catalog)?
    2) So far I have taken RMAN backups with a separate repository, with the target database registered in the catalog.
    3) I do not find any RMAN repository here, which is why point one comes into the picture.
    4) If it is possible, how do I synchronize the archived logs?
    5) And how do I start the RMAN backup again?
    Please help.

    Hi Alanm,
    I was away for a few days. Thanks for the update that it is possible. In this scenario, please tell me the exact steps, so that I can implement the same as a backup procedure.
    What I meant is that it should be supported with the current setup, which uses the VERITAS backup procedure and has a backup policy. As we do not have an RMAN repository, I wanted to figure out how to resync the archived logs that are missing from their location with VERITAS.
    I want to take a full RMAN backup with the current archived logs as the starting point. I would also like to name the backup sets as I want, which I think is possible.
    So let's ignore the sync part. Please write to me about how I should start RMAN without its repository.
    Please do reply.
    Thanks
    Shiva
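    For reference, a minimal session of the kind being asked about (a sketch only, not a vetted procedure: it assumes the target database's controlfile acts as the RMAN repository, i.e. no recovery catalog, and the tag name is just an illustrative choice) could look like this:
    rman target /
    # mark archived logs missing from disk as expired, then clear their records from the controlfile
    CROSSCHECK ARCHIVELOG ALL;
    DELETE EXPIRED ARCHIVELOG ALL;
    # full backup with a user-chosen tag, including the current archived logs
    BACKUP DATABASE TAG 'FULL_WITH_ARCH' PLUS ARCHIVELOG;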

  • Upgrade to 4.2 SP4 corrupted custom functions in central repository

    I recently upgraded a repository from 4.2 SP1 to 4.2 SP4, and all the custom functions in the central repository (after it was upgraded with Repository Manager) can no longer be checked out; I can't get the latest version, can't even export them... nothing. Each time I interact with anything dependent on these functions I get the following error:
    I've logged out completely and refreshed multiple times, and nothing is solving the issue. The functions show up correctly, but cannot be edited in any way. Please help.
    Both the local and central repos are on MS SQL Server 2008 R2.

    Hi Matthew,
    Did the upgrade process from 4.2 SP1 to 4.2 SP4 complete successfully when you upgraded using Repository Manager?
    If you have a backup of the 4.2 SP1 repository, can you please upgrade that repository to 4.2 SP4 and check the latest version after the upgrade has finished successfully.
    It seems that you are directly doing a check-in of dependent objects first.
    Try the steps below:
    1) First add only the object, without its dependents
    2) Add only the custom function to the central repository and see if that works
    Please note the following points:
    You can add a single object to the central repository, or you can add an object with all of its dependents. When you add a single object, such as a data flow, you add only that object; no dependent objects are added.
    The error message suggests that a dependent object is missing.
    Please refer to the Advanced DS Reference Guide, section
    "To add an object and its dependent objects to the central repository":
    https://help.sap.com/businessobject/product_guides/boexir32SP2/en/xi321_ds_adv_dev_en.pdf
    Regards
    Arun Sasi

  • MDM 5.5.65.71 (Not able to load the repository)

    Hi All,
    We are facing a strange problem with MDM patch 5.5.65.71.
    We recently upgraded from 5.5.64.79 to 5.5.65.71, and we are having a problem with one of our repositories when we try to load it.
    The repository does not load, reporting that a unique constraint is violated and duplicates exist for one of the fields.
    We removed the unique constraint at the console level and loaded the repository to check whether any duplicates exist, but there are no duplicates in that repository.
    We then reinstated the unique constraint at the console level and tried to load the repository again, but could not, as it still reports a unique constraint violation even though there are no records that violate the unique constraint in our repository.
    Has anyone faced this problem?
    Please share your views on how to get rid of this issue.
    Thanks,
    Narendra

    Hi,
    I would suggest, if you have a backup archive file of the repository, trying to delete the repository and unarchive it again from that file; it may solve the problem. It is recommended to check, verify and take a backup of the repository before upgrading the MDM server.
    Please revert if you don't have a backup of the repository.
    Regards,
    Shiv

  • How to restore/duplicate from old backup

    Hi All,
    RMAN Version : 10.2.0.4
    Repository Version : 10.2.0.4
    OS : AIX 6.1
    Target Database : 10.2.0.4
    I have set the RMAN retention policy to 10 days for my database, say "BCMT1". I have duplicated this database successfully to "RTEST", but I restored from a backup within the recovery window.
    What if I want to duplicate/restore from a backup that is older than 10 days and is marked obsolete and deleted from the RMAN repository, but is still available on tape?
    Below is the script to delete the obsolete backups:
    run {
    crosscheck backup;
    DELETE NOPROMPT EXPIRED BACKUP;
    DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 10 DAYS;
    }
    Script to duplicate the database:
    run {
    allocate channel DSK1 type DISK;
    allocate auxiliary channel DSK2 type DISK;
    set until time  "to_date('09/24/10 17:00:00','MM/DD/YY hh24:mi:ss')";
    DUPLICATE TARGET DATABASE TO RTEST1;
    }
    Below are the configuration parameters:
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oraclebackup/BCMT1/%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oraclebackup/BCMT1/%U' MAXPIECESIZE 4 G;
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/orapltp1/product/10.2/dbs/snapcf_BCMT1.f'; # default
    Can the catalog command be used to register the old backups with the RMAN repository and then restore from those backups?

    Thanks for the quick response...
    One more question: since I have deleted the obsolete backups, that information is also gone from the RMAN repository.
    Here is the scenario:
    1) I have backups available on disk from 23rd Sep 2010 to 2nd Oct 2010, so I can easily restore/duplicate my database within this time period (the RMAN repository has this information).
    2) I have backups available on tape from 1st Aug 2010 to 22nd Sep 2010. Now, what if I need to restore/duplicate the database to 26th Aug 2010? I have deleted the obsolete backups, so the information is gone from the repository. How do I proceed?

  • RMAN repository maintenance and availability

    I have 10 production and 10 test databases on 2 servers (test and production). RMAN backups have been set up and all 20 databases have been added to a repository in one of the test server databases, called CODP.
    All the databases are registered in the CODP repository, and the CODP database itself is also registered in that same repository. If the CODP database gets corrupted, all the data in the repository will be lost, with no way to recover it unless there is another backup or setup.
    So my question is: what would be the best approach to handle this situation? I appreciate any comments or advice.
    Regards
    Karunika

    Hello,
    Consider the RMAN repository database as one of your production databases, and have a proper backup strategy for it so that, in case any problem happens, you can immediately restore the RMAN repository. Put your RMAN repository database in archivelog mode and take backups regularly; if appropriate, you can also set up replication of the RMAN repository schema to another remote database.
    Salman

  • Configuring Incremental Backup

    Hi there
    I am designing an RMAN backup strategy:
    - I want to use a repository catalog
    - I want to take a level 0 backup every Saturday night
    - I want to take a level 1 incremental backup every night, except Sunday
    - I want the jobs to be scheduled, and I prefer not to use EM
    - I want to check for logical and physical corruption before taking the backup
    Q1. What is the place of archived logs in an incremental backup?
    Q2. Should I include the archived logs in level 0, in level 1, or in both?
    Q3. Can anyone provide me with a sort of template script to help me get started?
    I know there are plenty of resources, but I wanted a seed to build on.
    I appreciate any help
    Thanks

    Firstly, you cannot back up archived logs at level 0 or 1. When you think about it, archived logs don't change once they are created, so why would you ever back them up with an incremental level 1? Besides, there is no option to do this.
    You can, however, say "backup incremental level 0 database include current controlfile spfile plus archivelog;" in one statement.
    "You can delete the archive logs once incremental backup is complete. It is done automatically by using delete obsolete command."
    That depends on what you want to achieve. I personally prefer to back up archived logs at least 3 times before deleting them (BACKED UP 3 TIMES), and on separate tapes. The balance is between the available disk space for archived logs and the requirements of the client. You would hate to compromise recoverability because of a few lost archived logs all backed up on one corrupted or lost tape.
    You should also run "sql 'alter system archive log current';" before the backup command to capture the maximum data changes.
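    As a seed for the template asked for above, here is a sketch that ties these suggestions together (a sketch only, not a vetted script: it assumes disk as the backup device and the 3-copies archivelog policy described above; scheduling itself would be handled by cron or a similar scheduler):
    run {
    # force a log switch so the latest changes are archived (PLUS ARCHIVELOG also does this)
    sql 'alter system archive log current';
    # level 0 of the whole database, checking blocks for physical and logical corruption
    backup check logical incremental level 0 database include current controlfile spfile plus archivelog;
    # keep each archived log until it has been backed up 3 times, then remove it from disk
    delete noprompt archivelog all backed up 3 times to device type disk;
    }
    For the nightly runs (every night except Sunday), change "incremental level 0" to "incremental level 1".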

  • CPI 1.4 - Failing to restore from backup

    Hi
    We have been running CPI 1.2.1.12 and are upgrading to 1.4.
    The old server is still accessible if I turn it on.
    The new server is up and running and is fully accessible, both through SSH and HTTPS.
    I did a point patch on the old server with PI_1_2_1_12u-Update.1.tar.gz and it was successful.
    I did a backup of the old server and it was successful.
    On the new server I'm trying to restore from the backup, but I get an error:
    cpi/admin# restore cpi-backup-130814-0637.tar.gpg repository defaultRepo application NCS
    Restore will restart the application services. Continue? (yes/no) [yes] ? yes
    Initiating restore.  Please wait...
      Stage 1 of 9: Transferring backup file ...
      -- complete.
      Stage 2 of 9: Decrypting backup file ...
      -- complete.
      Stage 3 of 9: Unpacking backup file ...
      --complete.
    ERROR: Backup Checksum is not matching. Retry restore with a valid backup
    % Application restore failed
    Is anyone able to tell me what I can do to correct the error?
    Best Regards
    Joergen

    I have solved the problem. I did two things, but I'm not sure which of them solved it.
    First I changed the time zone on both the old server and the new server.
    Then I stopped NCS before taking a new backup of the database.
    After that I imported the data into the new CPI 1.4 without errors, and the server is running smoothly.

  • Upgrade/Migrate 9.3.3 Planning apps to 11.1.2.2

    I am in the next step of our upgrade from 9.3.3 to 11.1.2.2 and ready to migrate our Planning applications. I have read many of the posts related to this and have a general list of steps, but I have a slightly different situation than what I have seen posted before.
    To summarize, we have installed 11.1.2.2 on new servers. Our old 9.3.3 is still up and running in a different environment. I have successfully completed the following:
    1.     Foundation Services/Shared Services installed and up and running and users/groups successfully migrated.
    2.     Essbase and EAS installed and up and running.  All Essbase applications (except for the Planning apps), have been successfully migrated from 9.3.3 to 11.1.2.2.
    3.     Planning has been installed and is up and running.  While there are no Planning apps in the new system, I can open the Planning Administrator via WorkSpace.
    So the next step is to get my Planning apps moved/migrated from the 9.3.3 environment to the new 11.1.2.2.
    To complicate matters, we are switching from a SQL Server repository in 9.3.3 to Oracle in our new 11.1.2.2. So the first thing we did was to create a new Oracle schema for our Planning app repository and, using the SQL Developer migration wizard, we copied/migrated the tables and data from the old SQL Server to Oracle. It appears that all the tables and data were successfully copied into Oracle.
    Our old Planning application is called PlanTest. The application owner is a native user called planadm.  My intended migration plan (and my questions) are:
    1.     Do I create a new blank Planning app called PlanTest using the native account planadm in 11.1.2.2?
    2.     If I use my new Oracle repository for this blank app, will it wipe out the old converted data or will it try to upgrade/migrate it?
    3.     Or should I take a backup of my new converted repository first, then drop/recreate the Oracle schema (so no tables) and let Planning build the repository when I create the blank Planning app? Then capture the owner SID in the new HSP_USER table, stop Planning, overlay the 9.3.3 database backup over the newly created repository and replace the owner SID info, then restart Planning.
              ** My concern: are the Planning app repository table structures the same between 9.3.3 and 11.1.2.2?
    4.     Open Planning and hope that the application PlanTest appears!
    5.     Use Planning upgrade Wizard to finish the migration and upgrade the repository.
    6.     Push/Create app to Essbase.
    7.     Export data from old app to .txt load file and reload into new Planning cube.
    8.     Convert/Migrate Business Rules using info from John's blog.
    I am sure I missed something.  Any suggestions or comments on these steps would be appreciated.  Bottom line is I need to get this app (and several others) migrated successfully with all metadata, security, forms, rules etc. working.  Rebuilding them from scratch is NOT an option.
    Also, to clarify.  This is a Classic Planning app.  We are not using EPMA.
    Thanks,
    Mike

    Following the steps, the upgrade/migration of the Planning application worked successfully. The app and forms were all migrated, and I was able to push to the Essbase application successfully. The only issue was security. While the users/groups appear in the migrated Planning app, none of their security access/filter info was migrated. We have Planning security defined down to the member level in the application and it appears none of that migrated successfully, so we would need to manually re-assign all of that security. Any thoughts on what may have happened or a step I may have missed?
    I have also moved on to attempt migration of the Business Rules to Calc Manager. Using John's second alternative method from his blog, I export the rule from 9.3.3 EAS to an HBRRules.xml file, copy it to the appropriate 11.1.2.2 folder, select "Migrate" on the PlanTest planning app in Calc Manager, and select the app and plan type; all seems well, with the Import screen even indicating that the rule and variables were inserted. But I never see the rule appear. Could this be an "owner" issue, or is there something I need to do to the .xml file before the migration step?
    Mike
