RHEL4 MaxDB Online Backup Agent

Hi, can anyone recommend a cost-effective online backup agent for a RHEL4/MaxDB installation?
What experiences have you had with this scenario with regard to HA & DR? How was it configured, what software was used, and are there any gotchas?

We have been running NetVault's product, called backint, for backing up our MaxDB 7.5.0.29 system on a Red Hat Enterprise 2.1 system. We have used this for the last three years and it has been working very well.
At the time we were setting this up, there were no other vendors who could directly back up SAP DB databases along with all of our other Win2K and Red Hat boxes. If we had to do it again, we would probably back up the MaxDB databases to a separate drive array and then back up that, rather than trying to go directly from the DB to the backup software.
Not sure if this will help because it is so late, but I thought I would throw in my two cents.
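For what it's worth, the disk-first approach would look roughly like this with dbmcli (a sketch only; the DBM user, medium name and path are placeholders, not our actual setup):

dbmcli -d MAXDB -u control,secret
> medium_put BackDisk /backup/MAXDB/datafull FILE DATA
> util_connect
> backup_start BackDisk DATA
> util_release
> exit

The resulting file under /backup/MAXDB can then be picked up by the regular file-level backup run.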
Thanks,
Joe Haynes
Heritage Propane
Helena, Montana

Similar Messages

  • Restoration of MaxDB online backup on other server for creating Standby DB

    Hi,
    I took a MaxDB online database backup through Data Protector 6.11. Now I want to restore this backup on another server so that I can configure it as a standby database. I selected config and data for the restore, but it gave me the error below:
    [Critical] From: OB2BAR_SAPDBBAR@ttcsolma "BE1"  Time: 01/11/11 01:19:05
          Error: SAPDB responded with: -24988,ERR_SQL: SQL error
    -9407,System error: unexpected error
    3,Database state: OFFLINE
    Internal errorcode, Error code 9050 "disk_not_accessible"
    So I created the database, but with default values, and then tried the restore again; now I am getting the error below:
    Error: Unable to read the configuration value `All' for SAPDB instance `SID'.
    Does anybody have a document or the steps for configuring a standby MaxDB database through the Data Protector backup/restore method?
    Thanks,
    Narendra

    Hi Siva,
    Thanks for the reply... Yes, I had copied the dbm.ebf, dbm.mdf and dbm.knl files from the source server to the target server, and the backup is visible on the target server. I also created the data/log volumes with the same sizes as on the source server.
    I tried restoring through the DBM GUI with the "recovery with initialization" option, but I get the error message below, and I get the same error message when I restore from the Data Protector GUI. I don't have any problem restoring the backup on the same server; the problem occurs only when restoring it on the target server.
    For testing I took a backup to the file system through the DBM GUI of the source system and restored it on the target system: there is no problem restoring from a file-system backup on the target server. I suspect there is some problem with the PIPE, as I am using a third-party backup tool, i.e. DP 6.11. I want to resolve the problem with the third-party tool because we will take all MaxDB database backups to tape through DP 6.11.
    Through DBM GUI error message:
    Error: SAPDB responded with: -24925,ERR_PREPARE: preparation of backup operation failed
    The list of external backup ID's contains less than 16 ID's.
    Through Data Protector GUI error message:
    [Normal] From: RSM@bkupsvr ""  Time: 1/18/11 8:49:16 AM
          Restore session 2011/01/18-12 started.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:50:58
          Executing the dbmcli command: `user_logon'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:00
          Executing the dbmcli command: `dbm_configset -raw BSI_ENV /var/opt/omni/tmp/SID.bsi_env'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:03
          Executing the dbmcli command: `db_admin'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:03
          Executing the dbmcli command: `util_execute clear log'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:22
          Executing the dbmcli command: `dbm_configset -raw set_variable_10 OB2BACKUPAPPNAME=SID'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:22
          Executing the dbmcli command: `dbm_configset -raw set_variable_11 OB2BACKUPHOSTNAME=ttcmaxdr'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:23
          Executing the dbmcli command: `util_connect'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:24
          Restoring backup 2011/01/17 0061.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:24
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/1 /var/opt/omni/tmp/SID.BACKDP-Data[8].1 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:24
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/2 /var/opt/omni/tmp/SID.BACKDP-Data[8].2 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:25
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/3 /var/opt/omni/tmp/SID.BACKDP-Data[8].3 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:25
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/4 /var/opt/omni/tmp/SID.BACKDP-Data[8].4 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:26
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/5 /var/opt/omni/tmp/SID.BACKDP-Data[8].5 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:27
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/6 /var/opt/omni/tmp/SID.BACKDP-Data[8].6 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:27
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/7 /var/opt/omni/tmp/SID.BACKDP-Data[8].7 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:28
          Executing the dbmcli command: `medium_put BACKDP-Data[8]/8 /var/opt/omni/tmp/SID.BACKDP-Data[8].8 PIPE DATA 0 8 NO NO \"\" BACK'.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:28
          Executing the dbmcli command: `recover_start BACKDP-Data[8] DATA ExternalBackupID "SID 11011761:1 Stream,SID 11011761:2 Stream,SID 11011761:3 Stream,SID 11011761:4 Stream,SID 11011761:5 Stream,SID 11011761:6 Stream,SID 11011761:7 Stream,SID 11011761:8 Stream"'.
    [Critical] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:30
          Error: SAPDB responded with: -24925,ERR_PREPARE: preparation of backup operation failed
    The list of external backup ID's contains less than 16 ID's.
    [Normal] From: OB2BAR_SAPDBBAR@tstmaxdb "SID"  Time: 01/18/11 08:51:30
          Executing the dbmcli command: `exit'.
    [Normal] From: RSM@bkupsvr ""  Time: 1/18/11 8:49:59 AM
          OB2BAR application on "tstmaxdb" disconnected.
    Thanks,
    Narendra
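    In case it helps anyone hitting the same -24925 error: before running recover_start with an ExternalBackupID list, you can ask the backup tool on the target host which external backup IDs it actually reports, using dbmcli (a sketch; the DBM user/password are placeholders, the medium name is the one from the log above, and the backup_ext_ids_* commands are the ones MaxDB documents for external backup tools, so check availability on your version):
    dbmcli -d SID -u control,secret
    > dbm_configset -raw BSI_ENV /var/opt/omni/tmp/SID.bsi_env
    > backup_ext_ids_get BACKDP-Data[8]
    > backup_ext_ids_list
    > exit
    If the list that comes back is empty or incomplete, that would point at the backint/BSI configuration on the target rather than at the database itself.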

  • Online backup client eats CPU!

    I have an online backup client from Mozy installed on my Mac Mini, and the actual backup process is scheduled to run daily at 23:00. The automatic backup is switched off. However, the backup client process is running at all times, and sometimes takes 50 - 60% of the CPU even when the scheduled daily backup is not being taken. This is not always momentary, and seems to continue for minutes, not just a few seconds. As you may imagine, this has a serious effect on the performance of the machine and is very frustrating.
    An additional problem is that it seems impossible to stop the process from the Activity Monitor. It can be force quit, but restarts almost immediately. Repeating the force quit does not seem to help.
    I have referred this to Mozy support several times, but they appear unable to resolve the issue. This is unfortunate, as the backup itself is useful, and works well. I am therefore wondering if there is anything that I can do within OS X to bring this process under control? I was thinking of possibly setting up a cron job to start and stop the backup client process?
    Any thoughts and suggestions would be of interest and gratefully received!
    Nigel

    Thanks for that.
    Unfortunately, it seems my thinking was not quite the best idea, as I have now received the following advice from Mozy Support:
    "I spoke with one of our senior support agents about the idea of quitting the Mozy client when not needed, and he told me that if you were to do that, Mozy won't know when files are changed and so won't back them up when you run a backup. Even when you aren't running a backup, the Mozy client is watching all of the files on the machine and noting when a file needs to be backed up.
    If you like, however, I can still provide you with the commands to effectively quit the Mozy backup daemon and launch it on demand. Alternatively, if the older Mozy client was better on resources and works for you, by all means use that version instead. To avoid having to re-select all of your files while changing the Mozy client version, when you select Uninstall from the MozyHome icon in the menu bar, make sure the checkbox for Keep settings and log files is checked.
    I know the Mozy Mac engineers are working towards reducing the resource footprint as much as possible, but Mozy is first committed to providing the most advanced and feature-rich products. As such there will be performance degradation as subsequent versions are released and run on legacy hardware."
    So switching the daemon off is clearly not a good idea; and the implication of the other comment is that Mozy don't care how much processor their system uses. Having researched elsewhere, it seems that this problem was identified a couple of years ago, and Mozy don't seem to have made any serious effort to resolve it. From my perspective, I just want a reliable backup; the other bells and whistles are not interesting, and I would have thought that other users of a budget product would have similar requirements. However, Mozy seem content to leave us out in the cold. Fortunately, there are now other similar products, and a recent review in Macworld identified one called CrashPlan which seems to offer similar prices to Mozy (or slightly less).
    For the time being I have reinstated an old version of Mozy that did not give this trouble, but will be looking seriously at CrashPlan, as reviews seem quite favourable.
    Nigel
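    For anyone who does want the quit-and-relaunch-on-demand route that Mozy offered, the usual OS X mechanism is launchctl rather than force-quitting from Activity Monitor (a sketch only; the plist label and path below are placeholders, not Mozy's actual ones):
    # stop the backup daemon until it is loaded again
    sudo launchctl unload /Library/LaunchDaemons/com.example.backupd.plist
    # load it again shortly before the scheduled 23:00 run, e.g. from a cron job
    sudo launchctl load /Library/LaunchDaemons/com.example.backupd.plist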

  • SQL online backup.

    Hi all,
    How can I initiate an online backup for a SQL database?
    Please also provide me with the SQL commands for SQL backup.
    Thanks in advance.
    vinnu.

    You can schedule the backup via the SQL Server GUI:
    right-click the database name and select Backup from the resulting list,
    run through the wizard, tick the boxes, set the timings and days, and save.
    Then, to see the SQL, go to the SQL Server Agent, Jobs, and open the job you saved above.
    Look at the steps.
    This will give you the code.
    However, sample code is below:
    --This Transact-SQL script creates a backup job and calls sp_start_job to run the job.
    -- Create job.
    -- You may specify an e-mail address, commented below, and/or pager, etc.
    -- For more details about this option or others, see SQL Server Books Online.
    USE msdb
    EXEC sp_add_job @job_name = 'myTestBackupJob',
        @enabled = 1,
        @description = 'myTestBackupJob',
        @owner_login_name = 'sa',
        @notify_level_eventlog = 2,
        @notify_level_email = 2,
        @notify_level_netsend =2,
        @notify_level_page = 2
    --  @notify_email_operator_name = 'email name'
    go
    -- Add job step (backup data).
    USE msdb
    EXEC sp_add_jobstep @job_name = 'myTestBackupJob',
        @step_name = 'Backup msdb Data',
        @subsystem = 'TSQL',
        @command = 'BACKUP DATABASE msdb TO DISK = ''c:\msdb.dat_bak''',
        @on_success_action = 3,
        @retry_attempts = 5,
        @retry_interval = 5
    go
    -- Add job step (backup log).
    USE msdb
    EXEC sp_add_jobstep @job_name = 'myTestBackupJob',
        @step_name = 'Backup msdb Log',
        @subsystem = 'TSQL',
        @command = 'BACKUP LOG msdb TO DISK = ''c:\msdb.log_bak''',
        @on_success_action = 1,
        @retry_attempts = 5,
        @retry_interval = 5
    go
    -- Add the target servers.
    USE msdb
    EXEC sp_add_jobserver @job_name = 'myTestBackupJob', @server_name = N'(local)'
    -- Run job. Starts the job immediately.
    USE msdb
    EXEC sp_start_job @job_name = 'myTestBackupJob'
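    If you just want a one-off backup without an Agent job, the plain BACKUP commands can also be fired from the command line (a sketch; the instance, database names and paths are examples, and log backups only apply to databases in FULL or BULK_LOGGED recovery):
    sqlcmd -S . -E -Q "BACKUP DATABASE msdb TO DISK = 'c:\backup\msdb.bak' WITH INIT"
    sqlcmd -S . -E -Q "BACKUP LOG YourUserDb TO DISK = 'c:\backup\YourUserDb_log.bak'"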

  • Windows Azure Online Backup vs. StorSimple?

    what's the difference between Windows Azure Online Backup compared with StorSimple+Windows Azure? 
    what scenario is for StorSimple, what scenario is for the Online Backup?
    Ming

    StorSimple is an appliance which can provision LUN(s) comprised of SSD + HDD + cloud storage. Hot/warm/cold blocks are dynamically and automatically managed on a per-LUN basis.
    StorSimple does not, however, have agents (except for SharePoint) which can, for example, back up a SQL database. It would also be ill-advised to put a SQL database on a StorSimple LUN and expect to benefit from the Azure connection. I would advise using DPM for protecting these types of workloads, and using StorSimple for flat files.
    Mike Crowley | MVP
    My Blog --
    Planet Technologies

  • Verizon online backup settings

    Where are the Verizon online backup settings located?

    If you have VZ In Home Agent version 9.0.55,
    you can get a detailed step-by-step procedure on your subject.
    "Play around" with VZ IHA and you will probably find answers to a lot of your questions.
    Enjoy !!
    Tom
    Freedom Essentials, QIP 7100 1,Bose SOLO TV Sound System,,QIP 7216 P2,M1424WR Rev F, iPad 2 WiFi,iPhone 5,TV SYST INFO Release 1.9.5 Build No. 17.45
    Data Object 39.45

  • How does recovery work after an online backup

    Hello,
    While trying to conceptually understand how backup and recovery works, I came accross a question concerning hot (online) backup.
    This is a conceptual question (I am trying to understand how things work), it is not a "how should I proceed/ what should I do step by step" question.
    As far as I understand, an online backup of a tablespace can be performed by copying the OS files making up a tablespace while the database is up and being used (i.e. transactions are modifying data in the database). Before the copying of the OS files starts, the Oracle RDBMS must be notified that an online backup is being taken via "ALTER...BEGIN BACKUP" (such that some additional information is written to the redo log, which may be required for subsequent recovery using the online backup). During recovery the Oracle RDBMS uses the copies of the OS files together with the online and archived redo logs in order to reconstruct all committed transactions, and it further uses the UNDO tablespace to roll back open (uncommitted) transactions.
    Thinking about this, it seems to me, that in order for this to work in all possible scenarios the undo information from the time the backup was taken may be required. Therefore backup of the UNDO tablespace should be taken as well (see the explanation for this assumption below). However browsing the internet (including the Oracle online documentation) I did not find any statements concerning the backup of the UNDO tablespace when an online backup is taken. Moreover I couldn't figure out when exactly such a backup of the UNDO tablespace must be done, to ensure that the database can be recovered in all scenarios.
    I believe that undo information from the time the hot backup was taken may be required e.g. in the following scenario:
    Assume we are taking a hot backup of a given tablespace, i.e. we are copying all OS files that make up this tablespace, while the database is potentially being used. Let D1 be one of the datafiles in our tablespace and let transaction T1 modify datafile D1. Let transaction T1 further be uncommitted while the copy of datafile D1 is being made and let (at least some of) the changes from T1 be included in the backup copy D1' of D1 (because DBWR has already written the modified blocks at the time they were being copied to the backup). Let transaction T1 be rolled back after the copy is completed. D1' will thus contain modifications from T1, while D1 will not.
    Now some time later the datafile D1 is lost. When recovering D1 from the copy D1', the (archived) redo logs will be applied to D1'. Before that, transaction T1 should be rolled back in the copy D1', because modifications from T1 must not appear in the recovered version of the database.
    I do however not understand where the information to roll back transaction T1 exactly comes from. It may still be in the current UNDO tablespace; I do however assume that rollback information is not kept in the UNDO tablespace forever. I see three possible answers to this:
    (a) There are some requirements which I missed so far to backup the UNDO tablespace whenever a hot backup is made.
    (b) Since the Oracle "RDBMS" has to be notified that an online backup is being done, it might store all relevant undo information (e.g. write it to the redo log) when the tablespace is put in backup mode.
    (c) There are situations when recovery is not possible due to "missing old UNDO information".
    Answer (b) seems the most plausible to me. I did however not find any confirmation of this, and if (b) really is the answer, I would be interested to understand what information is stored where by the Oracle RDBMS and how it is used for recovery.
    To summarize I have the following questions:
    (I) Is there any requirement to backup the UNDO tablespace together with an online backup of a tablespace, and if so, where is this stated in the Oracle documentation?
    (II) What mechanisms ensure that uncommitted transactions can be cleared from the online copy of a tablespace (potentially a long time after the copy was taken)?
    (III) Do you know any links (Oracle documentation or other online resources) explaining these details?
    Thank you for any hints and answers
    kind regards
    Martin

    It's a highly technical question and I may be completely wrong given my limited knowledge, but I will attempt to answer anyway. I hope I say something sensible, so bear with me.
    > As far as I understand, an online backup of a tablespace can be performed by copying the OS files making up a tablespace while the database is up and being used (i.e. transactions are modifying data in the database).
    Correct. But it depends on the tool you are going to use. Using OS-level commands like cp, you have to copy the files to the backup location manually. Using RMAN, it is a lot easier and RMAN takes care of everything.
    > Before the copying of the OS files starts, the Oracle RDBMS must be notified that an online backup is being taken via "ALTER...BEGIN BACKUP" (such that some additional information is written to the redo log, which may be required for subsequent recovery using the online backup).
    Again, this is a requirement only in the case of a user-managed backup. In that case, because of the fractured-block issue, it is important that the corresponding older information/image of the buffer is also copied into the redo stream, and that is done when the BEGIN BACKUP command is used. Using RMAN this is not needed, as RMAN can read the consistent image, which it stores in the backup piece, in exactly the same way a SELECT is satisfied by Oracle for a dirty buffer that is yet to be made consistent.
    > During recovery the Oracle RDBMS uses the copies of the OS files together with the online and archived redo logs in order to reconstruct all committed transactions, and it further uses the UNDO tablespace to roll back open (uncommitted) transactions.
    Correct!
    > Thinking about this, it seems to me that in order for this to work in all possible scenarios the undo information from the time the backup was taken may be required. Therefore a backup of the UNDO tablespace should be taken as well. However, browsing the internet (including the Oracle online documentation) I did not find any statements concerning the backup of the UNDO tablespace when an online backup is taken. Moreover, I couldn't figure out when exactly such a backup of the UNDO tablespace must be done, to ensure that the database can be recovered in all scenarios.
    The reason this is not a must is that if a transaction is still active, there is no way Oracle will overwrite its undo information; even if you come back after 100 years, it will be there. The undo segment marks as active those undo blocks which contain the information of a transaction whose status within the transaction table of that undo segment is still marked as active. So it is there all the time in the undo tablespace. Now, for a moment, assume the undo is not there (it would be, but let's assume): the changes made to the undo segment's blocks are also recorded in the redo, as they are just changes happening to a segment like any other (EMP, DEPT), with the difference that they are made not by you but by Oracle. Using that information, if there is a need to replay those changes in the future, the necessary information can be brought back from the redo blocks stored in the redo/archive logs. Yes, if there are pending transactions that require undo information to get rolled back, and you have lost the undo tablespace and have no backup of it, you won't be able to bring the database back, as it would be inconsistent and Oracle will not let you open it. In that case you may need to use hacks to get it up, and that is a really tricky situation.
    > (I) Is there any requirement to backup the UNDO tablespace together with an online backup of a tablespace, and if so, where is this stated in the Oracle documentation?
    As I said above, a backup of it must be there if you are anticipating loss of the undo tablespace. If you have lost it, you need a backup and all the archive logs and redo logs to recover it and bring it back to the point of the current database. The rest Oracle takes care of, as it reapplies the redo contents of the undo segments over the undo segments and makes them consistent.
    > (II) What mechanisms ensure that uncommitted transactions can be cleared from the online copy of a tablespace (potentially a long time after the copy was taken)?
    As I said, a pending transaction's undo is never overwritten by Oracle. It is always kept and marked as active undo. Only the end of the transaction makes it eligible to be overwritten, and even that won't happen immediately (undo_retention kicks in).
    > (III) Do you know any links (Oracle documentation or other online resources) explaining these details?
    I have to see whether this is described step by step somewhere, and I shall update the reply once I find the link. I hope someone else finds it in the meantime.
    HTH
    Aman....
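    To make the user-managed case discussed above concrete, the begin/end backup sequence for a single tablespace looks roughly like this (a sketch; the tablespace name, datafile path and backup directory are made up, and RMAN would do all of this for you):
    sqlplus -s / as sysdba <<EOF
    ALTER TABLESPACE users BEGIN BACKUP;
    EOF
    cp /u01/oradata/DEV/users01.dbf /backup/DEV/     # OS-level copy while the database stays open
    sqlplus -s / as sysdba <<EOF
    ALTER TABLESPACE users END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;                -- make sure the redo covering the copy is archived
    EOF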

  • Open resetlogs is not  working when creating clone db with online backup

    Hi All,
    I am trying to create a clone database using a hot backup of a database.
    Steps that I followed:
    Let the current DB name = DEV
    and the clone database name = DEVCLONE.
    Steps performed on the DEV DB:
    - put the database in backup mode using 'alter database begin backup'
    - copied all the data files to a different folder
    - during the copy I performed some operations on the DB (creating users, tables, DMLs, etc.)
    - in between copying I also performed a log switch
    - after completion of the copy, 'alter database end backup'
    - created a backup control file in a human-readable format (alter database backup controlfile to trace as ........)
    Steps performed on the clone DB side (DEVCLONE):
    - created a parameter file for the database
    - modified the backup control file so that it points to the location of the copied datafiles
    - set the ORACLE_SID
    - then 'sqlplus / as sysdba'
    - startup nomount
    - ran the modified control file script (this created a control file for the clone database)
    - recovered the database using "recover database using backup controlfile"
    I provided the archive files it was asking for (archive logs that had been generated in the DEV DB),
    then I cancelled the recovery by typing "cancel":
    - recover database using backup controlfile until cancel;
    then typed "cancel"
    - then tried to open the database with open resetlogs, but it showed the error below:
    alter database open resetlogs
    ERROR at line 1:
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\DATA_GUARD\DEVHOT\SYSTEM01.DBF'
    Please help me on this.
    Thanks

    Thanks, now I am able to open the DB with open resetlogs.
    Previously, when I had not taken the archive log generated after "alter database end backup", I was not able to open the DB with open resetlogs because the
    fuzzy status of all the datafile headers was YES.
    But after taking the archive log that was generated after "alter database end backup" and applying it on the clone DB (created from the hot backup), the datafile header status changed from YES to NO,
    and so I am able to open the clone DB with open resetlogs.
    Can you please give me a short description of why this happens?
    Thanks.
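    For reference, the fuzzy status mentioned above can be checked directly, and the covering redo forced into an archive log, like this (a sketch, run on the source right after the end of the backup):
    sqlplus -s / as sysdba <<EOF
    ALTER DATABASE END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    SELECT file#, fuzzy, checkpoint_change# FROM v\$datafile_header;
    EOF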

  • Database open fails after online backup recovery

    Hi Friends
    We are trying to set up an additional server using the online backup of our DEV server. We have been following SAP Note 549828 for this. Having restored the online backup, opening the database failed.
    To resolve this, in accordance with SAP Note 549828, we created a backup control file successfully using the command
    create controlfile reuse set database DEV resetlogs noarchivelog
    However on issuance of the command
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
    we run into an error as to
    ORA-00279: change 794638222 generated at 10/25/2007 12:43:20 needed for thread 1
    ORA-00289: suggestion : /oracle/DEV/oraarch/DEVarch1_9766.dbf
    ORA-00280: change 794638222 for thread 1 is in sequence #9766
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/DEV/oraarch/DEVarch1_9766.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/DEV/sapdata1/system_1/system.data1'
    We even manually copied the file system.data1 from the source to the target server but to no avail.
    Also the SQL command
    SELECT FILE#, CHANGE# FROM V$RECOVER_FILE
    displays a different change# for system.data1 while it shows the same number for all the other datafiles.
    Please advise at the earliest as we are stuck. Points awaiting their master.
    Regards
    Lokesh Gupta

    Some inputs in addition to Eric's comments.
    The problem is that you don't have the archives, i.e. offline redo log files, in the correct location.
    In /oracle/DEV/oraarch/DEVarch1_9766.dbf, DEVarch1_9766 is the archive file which is missing from the location /oracle/DEV/oraarch. To recover the DB you need the archives generated during the hot backup.
    Generally these steps will give you the desired result:
    select * from v$logfile;
    We normally switch the log files as many times as there are log groups:
    alter system switch logfile;
    Create a backup directory to hold the hot backup datafiles and archives.
    When the backup is complete, check the backup location to see if all the files are available. You can now either FTP them to the other system or copy the files to another location in the case of cloning on the same system.
    Copy all the files over to their respective filesystems and directories, and then edit the file that was created using "backup controlfile to trace". Copy that file to the remote system and edit it accordingly.
    Check that all the files are in the right location and edit that information in the control file.
    Once the control file script has run successfully and you get "Statement processed", you can start applying the archive logs that you have moved to the archive log destination directory as per the init<sid>.ora file.
    Do a recovery of the database to its consistent state:
    recover database using backup controlfile until cancel;
    The create controlfile command only changes the structure of the database and the SID name; the headers of the datafiles still hold all the required information. The above command will ask you to input the archive log file names one by one to do the recovery, or you can choose the AUTO option. Once the recovery process is complete, open the database with the resetlogs option.
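    Put together, the target-side sequence looks roughly like this (a sketch; the script name and paths are examples):
    export ORACLE_SID=DEV
    sqlplus / as sysdba
    SQL> STARTUP NOMOUNT
    SQL> @create_controlfile.sql          -- the edited "backup controlfile to trace" script
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
    -- feed it the archive logs from /oracle/DEV/oraarch one by one (or AUTO), then CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;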
    Regards
    Vinod

  • Getting error while running script for online backup

    Hi,
    I am running a script for online backup, but it ended up with the error below.
    *ERROR* [Backup Worker Thread] com.day.crx.core.backup.Backup Failed to create temporary directory
    Please help in resolving this.
    Thanks in advance.
    Maheswar

    Hi mahesh,
    If you are using backup feature from crx console, I mean http://localhost:4502/crx/config/backup.jsp  I can say that we had also some problems with this functionalities.
    First off all what you need to check are the permissions, because when you check a source code there is line which creates a File object using a path specified by you to make a backup of repository.
    File targetDir = new File(req.getParameter("targetDir", listDir.getParentFile().getAbsolutePath()));
    You need to have sure that the proper read write access has been granted for this path.
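    A quick way to verify this from the shell (the backup target path and the user the CQ/CRX process runs as are examples here):
    ls -ld /opt/backup/crx-backup
    sudo -u cq5 touch /opt/backup/crx-backup/.write_test && rm /opt/backup/crx-backup/.write_test
    df -h /opt/backup/crx-backup     # the online backup also needs enough free space for its temporary copy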
    Another point: if you are using CQ 5.4, a hotfix may already be available. Please refer to the following link:
    http://dev.day.com/content/kb/home/Crx/CrxSystemAdministration/CRXOnlineBackup.html
    and also to this one:
    http://dev.day.com/content/docs/en/crx/current/release_notes/overview.html, which mentions hotfix #34797, which was applied to the backup.jsp file.
    Regards,
    kasq

  • Problem in new database creation with the help of online  backup

    Dear DBAs,
    I am using an Oracle 11gR2 database on Windows Server 2003. The database is running in ARCHIVELOG mode.
    I have taken an online backup of all datafiles, the controlfile and the spfile. Then I created folders in all the locations required for the new database.
    Then I registered the service of the new database, named 'newdb', with
    oradim -NEW -SID newdb
    Then I created a password file manually in the 'oracle_home\database' location.
    I created a new controlfile named controlfile_01.ctl. The content of the controlfile is as follows:
    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "NEWDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_1_7FK0XG7B_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_1_7FK0XHWB_.LOG'
    ) SIZE 50M,
    GROUP 2 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_2_7FK0XKB8_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_2_7FK0XM0Z_.LOG'
    ) SIZE 50M,
    GROUP 3 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_3_7FK0XNOZ_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_3_7FK0XOWB_.LOG'
    ) SIZE 50M
    DATAFILE
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_SYSTEM_7FK0SKN0_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_SYSAUX_7FK0SKPG_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_UNDOTBS1_7FK0SKTC_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_USERS_7FK0SKWB_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_EXAMPLE_7FK0Z5LK_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\MARSH.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\JOMARSH.DBF'
    CHARACTER SET AL32UTF8
    The controlfile path was registered in the pfile as well.
    Then I brought the database to the nomount stage.
    The problem is that when I try to mount the database, it shows the following error. Can anyone help me to overcome this issue?
    SQL> startup pfile='D:\app\Administrator\product\11.1.0\db_1\database\INITnewdb.ora' nomount;
    ORACLE instance started.
    Total System Global Area 535662592 bytes
    Fixed Size 1334380 bytes
    Variable Size 301990804 bytes
    Database Buffers 226492416 bytes
    Redo Buffers 5844992 bytes
    SQL> ALTER DATABASE MOUNT;
    ALTER DATABASE MOUNT
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    The alert log message is:
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: 'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\CONTROLFILE\CONTROLFILE_01.CTL'
    ORA-27048: skgfifi: file header information is invalid
    OSD-04001: invalid logical block size (OS 1413563730)
    Fri Dec 09 13:11:59 2011
    Checker run found 1 new persistent data failures
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    Thanks & Regards,
    John Marshal.A

    Hi;
    Error: ORA-205
    Text: error in identifying control file <name>
    Cause: The system could not find a control file of the specified name and size.
    Action: Either:
    - Check that the proper control filename is referenced in the CONTROL_FILES initialization parameter in the initialization parameter file and try again.
    - When using mirrored control files, that is, more than one control file is referenced in the initialization parameter file, remove the control filename listed in the message from the initialization parameter file and restart the instance. If the message does not recur, remove the problem control file from the initialization parameter file and create another copy of the control file with a new filename in the initialization parameter file.
    Regards
    Helios
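    A common cause of ORA-27048 / OSD-04001 in this particular situation is that CONTROL_FILES in the pfile points at a text file (for example the saved CREATE CONTROLFILE script) rather than at a binary control file. The script itself is run from SQL*Plus, and it is Oracle that then writes the binary control files to whatever CONTROL_FILES names. A sketch (the script path is an example):
    SQL> SHOW PARAMETER control_files     -- should name the binary control files Oracle is to write
    SQL> @D:\scripts\create_newdb_controlfile.sql
    SQL> SELECT status FROM v$instance;   -- shows MOUNTED once CREATE CONTROLFILE has succeeded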

  • Online Backup of supported Linux VM on Hyper-V 2012 R2 / SC DPM 2012 R2

    Hi,
    I'm trying to set up a lab environment:
    Win 2012 R2 with Hyper-V
    running 2 Linux Machines:
    Linux2 - CentOS 6.4 with manually installed Linux Integration services 3.4
    Linux3 - CentOS 6.4 without LIS (should be already included in CentOS)
    Another machine running Win 2012 R2 Server with SC DPM 2012 R2
    but both VMs show as "Offline" when trying to back them up via DPM. Tried local Windows Server Backup with the same result.
    I am able to backup the VMs "offline" (pausing the VM, taking snapshot, resume VM) but according to MS, SC DPM 2012 R2 should be able to do Online backups for supported Linux VMs (http://blogs.technet.com/b/virtualization/archive/2013/07/24/enabling-linux-support-on-windows-server-2012-r2-hyper-v.aspx)
    The only things in the EventLog are these:
    A storage device in 'Linux3' loaded but has a different version from the server.  Server version 6.0  Client version 4.2 (Virtual machine ID 4F5CDDD8-B855-41CF-83B2-772C1B99090D). The device will work, but this is an unsupported configuration.
    This means that technical support will not be provided until this problem is resolved. To fix this problem, upgrade the integration services. To upgrade, connect to the virtual machine and select Insert Integration Services Setup Disk from the Action menu.
    Any Ideas ?
    Thanks

    Hi,
    That list would need to come from the Windows Hyper-V group; they are responsible for adding the feature to the integration components for the various Linux OSes. DPM just backs up whatever the Hyper-V writer presents to us: if the guest supports online backup, we back it up online; if not, Hyper-V saves the guest before the VSS snapshot is taken and DPM takes the backup from the saved state.
    NEW NOTE ADDED 1-29-14: The Windows group just released “Linux Integration Services Version 3.5 for Hyper-V”. The document mentions that some versions of Red Hat and CentOS are now supported for online backup.
    Live virtual machine backup support
    ======================
    RHEL/CentOS 6.0-6.3
    RHEL/CentOS 5.7-5.8
    RHEL/CentOS 5.5-5.6
    ADDTL NOTES: If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.
    Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to a virtual machine (“pass-through disk”).
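    A quick way to see which integration components a CentOS guest is actually running (independent of DPM) is to query the Hyper-V modules inside the guest:
    /sbin/modinfo hv_vmbus | grep -i version     # reported version should match the LIS release you expect
    lsmod | grep hv_                             # hv_vmbus, hv_storvsc, hv_netvsc, hv_utils should be loaded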
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

  • Can't access online backup and sharing web site

    Does anyone know how to access the Online Backup and Sharing web site from a computer that does not have the O/B/S software installed?  I used to be able to go to My Verizon/My Services/Internet and hit "Launch" for O/B/S, and the link would go directly to the web site for uploading or downloading material.  Now, that link takes me to a Verizon page that asks to activate the O/B/S service, and wants to download the software.  My O/B/S service was activated years ago and has worked until recently without the software installed.  There is no other obvious way to get to the O/B/S web site.  (If it matters, I tried this in both IE 9, with Windows 7 64-bit, and Safari.)
    After some effort, I got the help number for the outside vendor that runs the O/B/S service.  They say that the Verizon web site should link directly to their O/B/S web site, regardless of whether or not the O/B/S software is installed locally, but that the failure to do so is entirely an error of Verizon's on the Verizon web site and they have nothing to do with it.  They also said there is no way to bypass the Verizon web site to get to a Verizon O/B/S account on their web site.  They said they have received many complaints about this from Verizon O/B/S customers, and that the same problem has even arisen when the O/B/S software is installed on the local computer.
    So does anyone know if it is really necessary to install the O/B/S software on any computer used to access O/B/S, and if it is now impossible to access the material from any other computer?  That would be a major reduction in the usefulness of the service that was not announced anywhere that I know of.  I saw a string of comments on this issue in the Forums from a few years ago, but the conclusion was that the problem had been fixed.  It seems to have arisen again.

    Anthony, I tried to private message you several times, but I kept getting an error message stating that my message had invalid HTML that was being removed, and to resend.  It did not actually have any HTML, and hitting the resend button just led to the same message.
    Also, since my last post, I downloaded the OBS software onto my computer.  This gave me access to OBS from that computer.  However, even after doing this, I cannot use the old method of directly getting to the OBS web site from My Verizon.  The link from "Launch" does not work either on the computer with the OBS software or on any other computer.  Rather, the link goes only to the Verizon page saying that my OBS service has not been activated (which it surely has been) and asking to download the OBS software.
    If this is happening to everyone and not just to me, this means that material stored on OBS is only available from a computer that has the OBS software downloaded on it.  That is certainly not supposed to be the case, and it would make OBS much less useful.
    Again, the OBS technical people said that this was a problem that they knew about, but it was entirely a problem with the Verizon web site that they could not do anything about.

  • Online Backup Failed from DB13, brbackup runs on application server

    Hi Everyone,
    I am getting a problem when I schedule an online backup from the DB13 transaction.
    The backup fails because the logical command BRBACKUP executes on our application server host rather than on the DB/CI server host.
    I have pasted the job logs below:
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000389, user ID BASIS)
    Execute logical command BRBACKUP On host xyzapp - (Application server)
    Parameters:-u / -jid ALLOG20110824210000 -c force -t online -m all -p initSEP.sap -a -c force -p initSEP.sap -s
    d
    BR0051I BRBACKUP 7.00 (40)
    BR0252E Function fopen() failed for '/oracle/client/10x_64/instantclient/dbs/initSEP.sap' at location BrInitSapRead-1
    BR0253E errno 2: No such file or directory
    BR0159E Error reading BR*Tools profile /oracle/client/10x_64/instantclient/dbs/initSEP.sap
    BR0280I BRBACKUP time stamp: 2011-09-02 21.01.09
    BR0301E SQL error -12545 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-12545: Connect failed because target host or object does not exist
    BR0310E Connect to database instance SEP failed
    BR0056I End of database backup: begroqyn.log 2011-09-02 21.01.09
    BR0280I BRBACKUP time stamp: 2011-09-02 21.01.09
    BR0054I BRBACKUP terminated with errors
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished
    Please help.
    Thanks,
    Ocean

    Hello,
    The previous contributor is correct: the profile is usually read from ORACLE_HOME. I have never seen this error pointing to the Oracle client directory, so please double-check the following:
    - Ensure that your environment variables ORACLE_HOME and DIR_LIBRARY are correctly set.
    - BR*Tools are always installed on the DB server and can be called from any application server. If you are scheduling from DB13 on the application server, what is the connection setup? RSH, RFC, standalone gateway?
    - If using a standalone gateway, it must be installed on the DB server; please refer to OSS notes #446172 and #1025707.
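    A quick sanity check on the host where BRBACKUP actually runs (SID and paths as in the log above; the ora<sid> login is an example):
    # as the ora<sid> / <sid>adm user on that host
    echo $ORACLE_HOME                      # should point to the database Oracle home, not the instant client
    ls -l $ORACLE_HOME/dbs/initSEP.sap     # the BR*Tools profile BRBACKUP is trying to read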
    Best Regards
    Rachel

  • What is the best online backup for OS 10.5.8? And if I Reinstall OS will I have problems?

    My old Mac has had the flashing question mark file on a blue screen. I waited a couple days, turned it back on and it worked. However, in between that time I went and bought a new Macbook Air. I am relieved to have all of my pics and docs back. I need a good online backup. I downloaded BackBlaze Backup but it is taking forever and I am not sure if it's the best choice.
    My plan is to back up all of my files and then wipe my computer clean with the install disks and give the computer to my little sister. So my question is twofold: which is the best online backup, and will I have any issues wiping the computer clean and doing a reinstall? For example, will all the updates still be available? What is the best thing for me to do?
    Thank you : )

    Just understand it gets pretty expensive after you exceed the free space. 2-5 GBs isn't much space.
    Basic Backup
    For some people Time Machine will be more than adequate. Time Machine is part of OS X. There are two components:
    1. A Time Machine preference pane as part of System Preferences;
    2. A Time Machine application located in the Applications folder, used to manage
         backups and to restore backups.
    Time Machine requires a backup drive that is at least twice the capacity of the
         drive(s) it backs up.
    Alternatively, get an external drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
      1. Carbon Copy Cloner
      2. Get Backup
      3. Deja Vu
      4. SuperDuper!
      5. Synk Pro
      6. Tri-Backup
    Visit The XLab FAQs and read the FAQ on backup and restore.  Also read How to Back Up and Restore Your Files.  For help with all things Time Machine, visit Pondini's Time Machine FAQ.
    Although you can buy a complete external drive system, you can also put one together if you are so inclined.  It's relatively easy and only requires a Phillips head screwdriver (typically.)  You can purchase hard drives separately.  This gives you an opportunity to shop for the best prices on a hard drive of your choice.  Reliable brands include Seagate, Hitachi, Western Digital, Toshiba, and Fujitsu.  You can find reviews and benchmarks on many drives at Storage Review.
    Enclosures for FireWire and USB are readily available.  You can find only FireWire enclosures, only USB enclosures, and enclosures that feature multiple ports.  I would stress getting enclosures that use the Oxford chipsets especially for Firewire drives (911, 921, 922, for example.)  You can find enclosures at places such as;
      1. Cool Drives
      2. OWC
      3. WiebeTech
      4. Firewire Direct
      5. California Drives
      6. NewEgg
    All you need do is remove a case cover, mount the hard drive in the enclosure and connect the cables, then re-attach the case cover.  Usually the only tool required is a small or medium Phillips screwdriver.
