Xserve Backup/Archive Solutions

Hi,
We currently have a Mac Pro running our wiki along with some file sharing and a user directory service. It has a hardware RAID with three 1 TB drives and has served us well so far, with great feedback. We want to extend the wikis to a much larger audience and would like to move to an Xserve. The problem is that our IT department is not willing to support the Xserve and will not allow us to use their central backup system.
Our current usage is one course with 20 students and 4 instructors; to give you an idea of the traffic, we want to extend the wikis to about 20 courses.
My question is: what are my options for backup if we decide to go ahead with the Xserve? We want to be able to restore to at least a week back, and we are looking for automated backups. What would I need in terms of hardware and software? I have no prior experience with backups, so I just want to know what I might be getting into.
Please let me know your thoughts.

2 TB will need to be backed up; that is the capacity of our RAID 5 setup.
Not sure how often it should run, but I want to be able to restore back to at least a week (5 days).
Budget is a low priority at this time, but it should be less than our Xserve setup (quad 2.8 GHz, 4 GB RAM, hardware RAID, 3x 1 TB drives).
None.
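For reference, here is a minimal sketch of the kind of automated setup being asked about: a nightly rsync job that keeps a week of hard-linked daily snapshots on an external drive. The volume names, paths, and retention count below are made up, and a packaged product such as Retrospect or Time Machine would be the more typical answer for an Xserve; this only shows what the bare-bones approach looks like.

#!/bin/bash
# Nightly snapshot backup sketch (hypothetical volume names and paths).
# Keeps 7 daily snapshots; unchanged files are hard-linked to the
# previous snapshot, so they consume disk space only once.
SRC="/Volumes/Data/"                     # RAID volume to protect
DEST="/Volumes/BackupDrive/snapshots"    # external backup drive
KEEP=7
TODAY=$(date +%Y-%m-%d)

mkdir -p "$DEST"

# Link against the most recent snapshot, if one exists.
LATEST=$(ls -1 "$DEST" | sort | tail -n 1)
if [ -n "$LATEST" ]; then
    rsync -a --delete --link-dest="$DEST/$LATEST" "$SRC" "$DEST/$TODAY"
else
    rsync -a --delete "$SRC" "$DEST/$TODAY"
fi

# Prune everything older than the most recent $KEEP snapshots.
COUNT=$(ls -1 "$DEST" | wc -l | tr -d ' ')
if [ "$COUNT" -gt "$KEEP" ]; then
    ls -1 "$DEST" | sort | head -n $((COUNT - KEEP)) | while read -r OLD; do
        rm -rf "${DEST:?}/$OLD"
    done
fi

Scheduled from cron or a launchd job, something like this would cover the "automated, restore back a week" requirement, though it does not give you a bootable clone.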

Similar Messages

  • Best Backup / Archive Solution?

    I'm new to the forum discussions and was hoping someone could offer some advice as to the best and easiest back-up solution in my situation.
    I work for a company that does not have a backup solution for the one G5 Mac connected to their Windows SBS network. Yes, this is insane, because this Mac contains thousands and thousands of graphic files that go back 10+ years and probably occupy 80 GB+ of space.
    The company has a graphic artist who uses this G5 daily, creating more graphic files. I am looking for the best automated backup solution that doesn't require the user to manually start a backup and wait for it to complete, especially since there are 80 GB worth of files. I'm not really concerned with backing up applications or program files; I'm more concerned with archiving the graphic files. I've looked online at solutions such as rsync, disk image files, RAID with a second hard drive, and backup apps paired with external drives such as Carbon Copy Cloner or SuperDuper!, but I wanted to get advice from experienced Mac users who may have dealt with the same issue.
    Any help or advice would be appreciated. Thanks.

    Basic Backup
    Get an external FireWire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility, and you can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    The following utilities can also be used for backup, but cannot create bootable clones:
    1. Backup (requires a .Mac account with Apple both to get the software and to use it.)
    2. Toast
    3. Impression
    4. arRSync
    Apple's Backup is a full backup tool that can also back up across multiple media such as CD/DVD. However, it cannot create bootable backups; it is primarily an "archiving" utility, as are Toast and Impression.
    Impression and Toast are disk-image-based backups only, and are particularly useful if you need to back up to CD/DVD across multiple media.
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore. Also read How to Back Up and Restore Your Files.
    Although you can buy a complete FireWire drive system, you can also put one together yourself if you are so inclined. It's relatively easy and typically requires only a Phillips-head screwdriver. You can purchase hard drives separately, which gives you an opportunity to shop for the best price on a drive of your choice. Reliable brands include Seagate, Hitachi, Western Digital, Toshiba, and Fujitsu. You can find reviews and benchmarks on many drives at Storage Review.
    Enclosures for FireWire and USB are readily available. You can find FireWire-only enclosures, USB-only enclosures, and enclosures that offer multiple ports. I would stress getting enclosures that use the Oxford chipsets (911, 921, or 922, for example). You can find enclosures at places such as:
    Cool Drives
    OWC
    WiebeTech
    Firewire Direct
    California Drives
    NewEgg
    All you need to do is remove the case cover, mount the hard drive in the enclosure, connect the cables, and then re-attach the case cover. Usually the only tool required is a small or medium Phillips screwdriver.
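    For the specific "archive the graphics folder automatically" case above, a scheduled rsync of just the data directory is the bare-bones command-line equivalent of the tools listed; the paths below are hypothetical, and the GUI utilities handle bootable clones and Mac metadata more completely.

    #!/bin/sh
    # Minimal nightly mirror of the graphics folder to an external drive.
    # Hypothetical paths; run it from cron or a launchd agent so the user
    # never has to start it by hand.
    SRC="/Users/artist/GraphicFiles/"
    DEST="/Volumes/FireWireBackup/GraphicFiles/"

    # -a preserves ownership and timestamps; --delete mirrors deletions,
    # so omit it if removed files should stay in the archive.
    rsync -a --delete "$SRC" "$DEST"

    On newer versions of Mac OS X, Apple's bundled rsync also accepts -E to copy extended attributes and resource forks, which can matter for older graphics files.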

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBA's !!
    I am working on a recovery scenario where I have taken one full hot backup of my Portal database (EPR) and restored it on a new test server. I also restored the archive logs for the 6 days following the last full hot backup, and restored the latest (binary) control file to the original locations. Now I have started the recovery scenario as follows...
    1) Installed Oracle 10.2.0.2, compatible with the restored version of Oracle.
    2) Configured tnsnames.ora, listener.ora, and sqlnet.ora with the hostname of the test server.
    3) Restored all hot backup files from tape to the test server.
    4) Restored all archive logs from tape to the test server.
    5) Restored the latest binary control file from tape to the test server.
    6) Started recovery using the following command at the SQL prompt:
    SQL> recover database until cancel using backup controlfile;
    7) Open the database after recovery completes, using the RESETLOGS option.
    In the above scenario I completed steps up to 5) successfully. But when I execute step 6), the recovery completes with the warning: recovery completed but OPEN RESETLOGS may throw the error "system file needs more recovery to be consistent". Please see the following snapshot...
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know what could be the reason behind the recovery failure.
    Note: I tried opening the database using the last full hot backup only, without applying any archives, and the database opens successfully. That means my database installation and configuration are OK.
    Please let me know why my incomplete recovery using the archive logs fails.
    Atul Patil.

    Oh, you made a new thread, so here it is again:
    There is nothing wrong.
    You restored your backup, archives, etc.
    You started your recovery and Oracle applied all the archives, but the archive
    '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    does not exist, because it represents your current online redo log file, which is not present.
    The recovery process cancels by itself.
    The solution is to restart your recovery process with:
    recover database until cancel using backup controlfile
    and when Oracle suggests '/oracle/EPR/oraarch/1_9624_601570270.dbf',
    type CANCEL.
    Now you should be able to open your database with OPEN RESETLOGS.
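    To make that sequence concrete, here is an illustrative sketch only; this recovery is normally done interactively in SQL*Plus, and here the CANCEL response is simply supplied on the line after the RECOVER command so it answers the prompt for the missing archive log.

    #!/bin/sh
    # Illustrative sketch: re-run the cancel-based recovery, answer CANCEL
    # when Oracle suggests the missing archive log (really the lost current
    # online redo log), then open the database with RESETLOGS.
    sqlplus '/ as sysdba' <<'EOF'
    recover database until cancel using backup controlfile;
    CANCEL
    alter database open resetlogs;
    EOF

    If earlier archive logs still need applying, you would answer each prompt with the suggested file first and only type CANCEL when the missing one is suggested.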

  • Error in Web Location path when importing Site Studio Backup Archives

    Hi,
    Webcenter Content version 11.1.1.5.0 running on Linux.
    I have successfully (according to the archive logs) imported two Site Studio Backup Archives: Ravenna Hosting and Site Studio Samples.
    I've not had a problem before with these archive files (installed successfully on my local Windows installation of 11g), but on our DEV server the Web Location path is incorrect.
    Somehow the archive process has introduced the below directory structure:
    /mdaw/mdax/~edisp/
    Example expected location: .../groups/public/documents/wcmwebasset/oracle_logo_15h.png
    Imported location: .../groups/public/weblayout/groups/public/documents/wcmwebasset/mdaw/mdax/~edisp/oracle_logo_15h.png
    - The sites are visible but with broken images
    - The Site Studio Assets can be seen in Site Studio Designer
    - The Web Location path on all content items is incorrect and points to the wrong path
    - Archive has proven to work before (same physical file) and the path information within it appears correct
    Copying the files results in the images appearing, but obviously the Web Location defined within the content items is still incorrect.
    Whilst the Ravenna Hosting site looks OK after copying the files, the Site Studio Samples site is totally broken.
    Obviously the underlying import issue needs to be resolved; has anyone come across this behaviour (the insertion of an incorrect directory structure)?
    Does anyone have any advice on a solution so I can re-import the archive with the files being written to the correct Web Locations?
    Thanks in advance

    The issue is the result of a new feature introduced in version 11.1.1.4.0 called the 'Dispersion Rule'.
    All legacy archives will hit this in future releases, but there is a workaround documented here: http://blogs.oracle.com/ecmarch/entry/working_with_the_new_fsp_dispe

  • Feature request: Importing/backup/archive of tracks with metadata

    Hi everyone,
    I've found that one of the things iTunes does better than any other music player is handling and organising song information. Have you ever seen a PC user's jukebox? Those things are a real mess, with cryptic track names ("mike mills - air - talkie walkie - 05"), artists in the album column, etc...
    All of this information that describes the file itself is called "metadata", and iTunes handles metadata in two separate places: the song file itself (ID3 tags in MP3s, for instance) and the iTunes library file.
    You'll notice that when you drag a music file from one iTunes library to another, the metadata stored in the file itself is preserved (artist, title, track number, etc...), while the metadata stored in the iTunes library isn't (your rating, the play count, the last date played, etc...).
    What I'm suggesting here is a feature that allows one to:
    (a) make a folder / burn a disk with the song files AND a special iTunes meta-data (XML?) file, so that all meta-data is preserved,
    (b) optionally delete the music files from your library after backup (i.e. archiving), and naturally,
    (c) open and read backup/archive folders and disks, and allow one to reintegrate songs, playlists or the entire collection back into one's main music library.
    I hope Apple is listening, because I think this feature would be really easy to implement, and would make a lot of people (especially those who keep their music spread across 2 or more computers) very happy!
    I've already set up a smart playlist ("Give me 4.5 GiB of disabled (unchecked) tracks from my library, sorted by least recently added") that gives me the candidates for archival. Now, all I'm hoping for is a button that allows me to archive (move) everything to a DVD, clear some space on my PowerBook hard drive and rest assured that if I ever want to go back and retrieve a track, it'll be waiting, with the playcounts, song information and ratings all intact!
    ...either that or make all hard drives at least 1000 times bigger. I'm not picky.
    Cheers,
    - tonyboy
    PowerBook G4    

    Thank you for the idea!
    I wrote a simple script to get metadata from files and offset the AE clip times accordingly. It is quite dirty code, but it does the job for me. Maybe someone will want to use this script, so I have put it at a publicly accessible URL: http://zig-zag.ru/usefulshit/ae/xmp_time_shifter.jsx
    My workflow is as follows:
    - I separate the camera footage (.mov, .jpg, etc.) and put it into folders such as camera_01, camera_02, and so on
    - create a new project in AE and a new composition named "camera_01_p1", import the files from the camera_01 folder into the project panel, drag and drop them into the compo, select the "camera_01_p1" compo and run xmp_time_shifter.jsx
    - I have 8 hours of footage, so I duplicate the "camera_01_p1" composition, name it "camera_01_p2", delete all layers except the unshifted ones, rerun the time shifter script, and repeat with the rest
    - do the same with camera_02 (don't forget to adjust the still photo layer duration before time shifting)
    - export to a Premiere project (yeah, thank you for this one!)
    - then I open the project in Premiere and copy everything into the same sequence (different layers where needed)
    - the last step is to sync the video and photo layers at some known moment; this is done easily by placing a marker at a recognizable point on the different video/audio clips and manually shifting entire layers into sync
    - profit!
    Since we are already off-topic for Premiere, I'll put some of my notes here:
    - the XMPFile function does not read paths like "/d/path" on my Windows 7 machine, so I replaced them with "d:/"
    - I don't understand why, but the check if(fname != null){ is not working, so the script will fail on compositions with solid, null, etc. layers
    - I wrote a simple date parser that currently works with the Nikon D3 (.jpg) and Canon 5D Mk II (.mov), but it may fail on some other models/formats

  • Backup archive logs problem using RMAN

    Hi guys
    I got a failure when using RMAN to back up archive log files:
    Starting backup at 20-APR-06
    current log archived
    released channel: c1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup command at 04/20/2006 21:53:57
    RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
    ORA-19625: error identifying file /opt/oracle/flash_recovery_area/DB10G/archivelog/2006_03_17/o1_mf_1_1_21p5c251_.arc
    ORA-27037: unable to obtain file status
    Linux Error: 2: No such file or directory
    Additional information: 3
    RMAN> **end-of-file**
    My archive log files location:
    SQL> show parameter log_archive_dest
    NAME TYPE VALUE
    log_archive_dest_1 string LOCATION=/opt/oracle/oradata/DB10G/arch/
    Current archive log files:
    $ ls /opt/oracle/oradata/DB10G/arch
    1_17_586191737.dbf 1_21_586191737.dbf 1_25_586191737.dbf 1_29_586191737.dbf 1_33_586191737.dbf
    1_18_586191737.dbf 1_22_586191737.dbf 1_26_586191737.dbf 1_30_586191737.dbf 1_34_586191737.dbf
    1_19_586191737.dbf 1_23_586191737.dbf 1_27_586191737.dbf 1_31_586191737.dbf 1_35_586191737.dbf
    1_20_586191737.dbf 1_24_586191737.dbf 1_28_586191737.dbf 1_32_586191737.dbf afiedt.buf
    $
    But when I check v$archived_log:
    SQL> select name,status,deleted from v$archived_log;
    NAME S DEL
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    D YES
    /opt/oracle/flash_recovery_area/DB10G/archivelog/2006_03_17/ A NO
    /opt/oracle/flash_recovery_area/DB10G/archivelog/2006_03_18/ A NO
    /opt/oracle/flash_recovery_area/DB10G/archivelog/2006_03_19/ A NO
    /opt/oracle/oradata/DB10G/redo01.log A NO
    /opt/oracle/oradata/DB10G/redo02.log A NO
    /opt/oracle/oradata/DB10G/redo03.log A NO
    /opt/oracle/oradata/DB10G/arch/1_5_585926175.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_6_585926175.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_7_585926175.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_1_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_2_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_3_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_4_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_5_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_6_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_7_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_8_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_9_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_10_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_11_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_12_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_13_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_14_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_15_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_16_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_17_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_18_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_19_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_20_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_21_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_22_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_23_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_24_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_25_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_26_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_27_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_28_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_29_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_30_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_31_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_32_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_33_586191737.dbf A NO
    /opt/oracle/oradata/DB10G/arch/1_34_586191737.dbf A NO
    There are more records than actual archived log files. How could this happen, and how do I solve it?
    Thanks in advance.
    Sharon

    Hi,
    use the following RMAN command:
    RMAN> crosscheck archivelog all;
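    A common follow-up (my own addition, not stated in the reply) is to also delete the expired records so future backups skip them, then retry the backup. A minimal sketch, assuming OS authentication to the target database:

    #!/bin/sh
    # Reconcile the RMAN catalog with what is actually on disk, then
    # retry the archive log backup.
    rman target / <<'EOF'
    # Mark archive log records whose files are missing as EXPIRED.
    CROSSCHECK ARCHIVELOG ALL;
    # Remove the stale records so they no longer break the backup.
    DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
    # Retry the archive log backup.
    BACKUP ARCHIVELOG ALL;
    EOF

    Only run the DELETE once you are satisfied the files really are gone and not just sitting on another path or on tape.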

  • Document archiving solutions

    Hi
    We have a SharePoint 2010 farm and many users upload documents daily.
    We plan to archive these documents and implement an archiving solution.
    What are the best archiving solutions that work well with SharePoint 2010?
    - I mean, when any user deletes any document, we must be able to get it back.
    - Documents must be available when users search for them.
    adil

    - When any user deletes any document, we must be able to get it back: the Recycle Bin in SharePoint can help you with this. Documents are kept there for 30 days.
    http://office.microsoft.com/en-in/sharepoint-help/manage-the-recycle-bin-of-a-sharepoint-site-collection-HA102772732.aspx
    - Documents must be available when users search for them: by default, search is disabled on the Recycle Bin. You can search it as described here:
    http://webcache.googleusercontent.com/search?q=cache:SYKVx4q6eQ0J:sharepointkings.blogspot.com/2013/07/search-from-recycle-bin.html+&cd=8&hl=en&ct=clnk&gl=in
    If this helped you resolve your issue, please mark it Answered

  • Backup/archive Time Capsule to cloud?

    Hi
    I'm using a 4th generation Time Capsule to back up my two Macs via Time Machine. Is there a reputable cloud-based service that will automatically back up/archive only the Time Capsule?
    thanks
    Bill

    ... just to clarify - the Time Capsule is solely used for Time Machine backups and Wi-Fi provision. There is no "non Time Machine" user data on the device.

    Incomplete recovery when the database is switched to archive log mode after a cold backup

    Product: ORACLE SERVER
    Date written: 2002-04-09
    Incomplete recovery when the database is switched to archive log mode after a cold backup
    ======================================================================
    PURPOSE
    This note verifies, through a test, the recovery procedure for a database that was switched to archive log mode after a cold backup was taken.
    Examples
    No Archive log mode.
    SQL> select * from tab ;
    TNAME TABTYPE CLUSTERID
    EMP TABLE
    EMP1 TABLE
    EMP2 TABLE
    EMP3 TABLE
    EMP4 TABLE
    10 rows selected.
    SQL> select count(*) from emp3 ;
    COUNT(*)
    0
    SQL> select count(*) from emp4 ;
    COUNT(*)
    0
    After taking the cold backup, the database is switched to archive log mode:
    SVRMGR> startup mount
    SVRMGR> archive log list
    Database log mode No Archive Mode
    Automatic archival Enabled
    Archive destination D:\Oracle\oradata\SNAP\archive
    Oldest online log sequence 26
    Current log sequence 28
    SVRMGR> alter database archivelog ;
    SVRMGR> alter database open ;  => switched to archive log mode.
    SQL> select * from tab ;
    TNAME TABTYPE CLUSTERID
    EMP TABLE
    EMP1 TABLE
    EMP2 TABLE
    EMP3 TABLE
    EMP4 TABLE
    10 rows selected.
    SQL> insert into emp3 select * from emp ;
    14 rows created.
    SQL> commit ;
    Commit complete.
    SQL> insert into emp4 select * from emp1 ;
    71680 rows created.
    SQL> commit ;
    Commit complete.
    SQL> select count(*) from emp3 ;
    COUNT(*)
    14
    SQL> select count(*) from emp4 ;
    COUNT(*)
    71680
    ## A log switch occurs.
    SVRMGR> alter system switch logfile ;
    SQL> insert into emp3 select * from emp ; -- recorded in the current log only.
    14 rows created.
    SQL> commit ;
    SQL> select count(*) from emp3 ;
    COUNT(*)
    28
    SQL> select count(*) from emp4 ;
    COUNT(*)
    71680
    # ALL DATABASE CRASH #
    # Recovery procedure #
    1. Restore Cold-backup
    2. modify initSID.ora
    log_archive_start = true
    log_archive_dest_1 = "location=D:\Oracle\oradata\SNAP\archive"
    log_archive_format = %%ORACLE_SID%%T%TS%S.ARC
    3. svrmgrl
    Statement processed.
    SVRMGR> startup mount
    ORACLE instance started.
    Total System Global Area 40703244 bytes
    Fixed Size 70924 bytes
    Variable Size 23777280 bytes
    Database Buffers 16777216 bytes
    Redo Buffers 77824 bytes
    Database mounted.
    SVRMGR> archive log list
    Database log mode No Archive Mode
    Automatic archival Enabled
    Archive destination D:\Oracle\oradata\SNAP\archive
    Oldest online log sequence 26
    Current log sequence 28
    SVRMGR> alter database archivelog ;
    Statement processed.
    SVRMGR> recover database using backup controlfile until cancel ;
    ORA-00279: change 340421 generated at 04/29/2001 23:42:20 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00028.ARC
    ORA-00280: change 340421 for thread 1 is in sequence #28
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    Log applied.
    ORA-00279: change 340561 generated at 04/29/2001 23:47:29 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00029.ARC
    ORA-00280: change 340561 for thread 1 is in sequence #29
    ORA-00278: log file 'D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00028.ARC' no longe
    r needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    Log applied.
    ORA-00279: change 340642 generated at 04/29/2001 23:47:35 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00030.ARC
    ORA-00280: change 340642 for thread 1 is in sequence #30
    ORA-00278: log file 'D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00029.ARC' no longe
    r needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    Log applied.
    ORA-00279: change 340723 generated at 04/29/2001 23:47:40 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00031.ARC
    ORA-00280: change 340723 for thread 1 is in sequence #31
    ORA-00278: log file 'D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00030.ARC' no longe
    r needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    Log applied.
    ORA-00279: change 340797 generated at 04/29/2001 23:48:01 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00032.ARC
    ORA-00280: change 340797 for thread 1 is in sequence #32
    ORA-00278: log file 'D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00031.ARC' no longe
    r needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    cancel
    Media recovery cancelled.
    SVRMGR> alter database open resetlogs ;
    Statement processed.
    SVRMGR>
    SQL> connect scott/tiger
    SQL> select count(*) from emp3 ;
    COUNT(*)
    14
    SQL> select count(*) from emp4 ;
    COUNT(*)
    71680
    # Conclusion #
    The 14 rows recorded only in the current log file cannot be recovered, but the data covered by the archived log files is recovered normally.
    # Caution #
    If, after restoring the cold backup, you open the database, shut it down, and only then switch to archive log mode and run recovery, the SCN changes; an ORA-600 error is raised and media recovery is required, so be careful.
    SVRMGR> startup
    ORACLE instance started.
    Total System Global Area 40703244 bytes
    Fixed Size 70924 bytes
    Variable Size 23777280 bytes
    Database Buffers 16777216 bytes
    Redo Buffers 77824 bytes
    Database mounted.
    Database opened.
    SVRMGR> shutdown
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SVRMGR> startup mount
    ORACLE instance started.
    Total System Global Area 40703244 bytes
    Fixed Size 70924 bytes
    Variable Size 23777280 bytes
    Database Buffers 16777216 bytes
    Redo Buffers 77824 bytes
    Database mounted.
    SVRMGR> alter database archivelog ;
    Statement processed.
    SVRMGR> recover database using backup controlfile until cancel ;
    ORA-00279: change 339542 generated at 04/29/2001 23:30:57 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORADATA\SNAP\ARCHIVE\SNAPT001S00003.ARC
    ORA-00280: change 339542 for thread 1 is in sequence #3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00283: recovery session canceled due to errors
    ORA-00600: internal error code, arguments: [3020], [8390146], [1], [3], [143], [
    240], [], []
    SVRMGR> exit
    Server Manager complete.
    Reference Documents
    ---------------------

  • Meeting place 8.5 backup/archive software

    Hello all, we just migrated to MP 8.5 and there is no longer an FTP option for backup and archiving, only SSH/rsync.
    Does anybody have any recommendations that they are using for the archiving software?
    Thanks,
    Dan

    MP 8.5 only uses rsync over SSH; it no longer offers FTP as a backup/archiving option.
    The steps to configure this are given in the following link:
    http://www.cisco.com/en/US/docs/voice_ip_comm/meetingplace/8_5/english/administration/backup_archive.html#wp1030973
    "Backing Up, Archiving, and Restoring Data on the Application Server"
    HTH
    Manish
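    For a rough idea of what an rsync-over-SSH pull of the archives looks like (the host name, user, and directories below are placeholders, not the actual MeetingPlace layout; the Cisco document above describes the supported configuration):

    #!/bin/sh
    # Hypothetical example: pull the archive directory from the server
    # over SSH to a local backup location.
    rsync -az -e ssh backupuser@mp-app-server.example.com:/archive/ /backups/meetingplace/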

  • RE: CS4 Premiere Pro - what is best to backup/archive video to minimize size

    RE: CS4 Premiere Pro -> what is the best way to back up/archive video to minimize size?
    I use Export > Media to Adobe Media Encoder. I am using AVI, and the AVI file generated seems large, i.e. 6 GB for 27 minutes.
    What is the best format to use to minimize file size?
    Any help will be appreciated, and thanks in advance.

    >what is best to backup/archive video to minimize size?
    That's the wrong question to ask. Ideally you don't change anything about the original media for backup. Its size is its size, and you get enough of whatever storage you need to accommodate that.

  • Tool availability for Archival Solution similar to ILM Assistant?

    Can you suggest some other tool or facility that provides ILM Assistant-like features?
    We want to archive data on a date basis. ILM Assistant does not provide an archiving solution. Is there any other tool available for this?
    Regards,
    Archana.

    Oracle has two solutions for archiving.
    One is DBFS HSM. Oracle SecureFiles provides fast file storage in an Oracle database, and DBFS provides a file system interface to SecureFiles data. The DBFS HSM store allows archiving and transparent recall of SecureFiles data to tape. See http://www.oracle.com/technetwork/database/features/secure-files/dbfs-sf-oow2009-v2-160969.pdf for more info. SecureFiles can be coupled with Oracle's Enterprise Content Management suite so that content can be automatically stored on tape and recalled when necessary.
    The other is Storage Archive Manager. SAM presents a file interface, and any files written to and stored on SAM can be managed in a multi-tiered storage environment that also provides multi-tier transparent archiving. See http://www.oracle.com/us/products/servers-storage/storage/storage-software/031715.htm for more info. SAM can be coupled with Oracle's Enterprise Content Management suite so that content can be automatically stored on SAM and recalled when necessary.
    Regards,
    Dan Ferber

  • Backup, Archive or Export? iPhoto question

    I don't have much space on my hard disk and want to keep my photos on DVDs. I've read here that DVD+R is better than -R, but what's the best way of storing them? Should I back up, archive, or export?
    I am printing contact sheets for each DVD so that once the photos are backed up, I can remove them from iPhoto and only re-import them when I want to use them.
    I've noticed that if I try to export, it will only let me export a maximum of 60 photos, so presumably that's not the way to go? And Share/Burn only lets me view them in iPhoto and not on any other platform (what happens if Apple replaces iPhoto in years to come?).
    Any suggestions gladly welcomed. Thanks.

    For data discs, it's a tossup between +R and -R as far as I know. If you're burning a video DVD via iDVD, then you definitely want -R discs.
    What size are your image files? 60 files on a 4.5 GB disc is about 73 MB each. Are your files that big? Do you want the files to be easily used by iPhoto at a later date and to keep their titles, keywords, and comments intact? If so, use the Share -> Burn menu item and burn either Events or Albums of photos, then delete those photos from the library when done. The disc will be readable by iPhoto, and you can copy photos back into the library if needed for printing or some other project.
    If you just want to store the basic image files, then export the photos to a folder on the desktop using File -> Export -> File Export and select the option to include the title and keywords. They will be written to the new files and will be readable by iPhoto if the files are imported back into the library at a later date.
    Do you Twango?
    TIP: As insurance against the iPhoto database corruption that many users have experienced, I recommend making a backup copy of the Library6.iPhoto database file and keeping it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backing it up after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That ensures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup, and it's good insurance.
    I've created an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 08 libraries and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me PDF file.
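    If you prefer not to use Automator, a one-line shell script run after each import does the same copy; the library path below is the default location and is an assumption on my part, so adjust it if your iPhoto Library lives somewhere else.

    #!/bin/sh
    # Copy the iPhoto database file to the Pictures folder as a safety
    # net, overwriting any previous backup copy (default path assumed).
    cp -p "$HOME/Pictures/iPhoto Library/Library6.iPhoto" \
          "$HOME/Pictures/Library6.iPhoto.backup"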

  • Backup/archive/export verity collection

    Does anyone know how to back up/archive/export a Verity
    collection?
    The CF Help files say to go to Admin > Server Settings > Archives,
    but this is not an option in my version of CFMX.
    Anyone? Thank you so much.
    Nicko


  • New xserve backup solution ???

    Hi, I am searching for a new backup solution which can handle up to 2 TB of data. I would like a 19" rack-mount device which supports Retrospect for OS X Leopard Server.
    Can anyone give me a tip?
    Thank you!

    Here's a question sort of related to this: I am using an Xserve for a similar use, and as our data has expanded, we need more backup room. The Xserve is connected to a RAID, and we need to back up about 1.5 TB from that RAID once a week for offsite storage.
    This will be rotated weekly, so there would be two offsite drives or modules -- one offsite at all times.
    Up to now, we have been using FireWire-connected drives, and rotating them offsite has worked OK. Now we are evaluating four ideas to accomplish this task on a more professional level, and I would welcome feedback on them. Tape is not an option.
    The options proposed are:
    1. Using "raw" drives (1.5 or 2 TB mechanisms) in a simple hard-drive docking station. The drives would be placed in these each week, then stored in a case of some sort when offsite.
    2. Using standard desktop drive/case units (i.e. LaCie, Iomega, etc.).
    3. Using a rack-mounted solution (OWC, Sonnet, etc) that has FireWire/eSATA connectivity and hot-swap drives in sleds. These, too, would be in a case of sorts when offsite.
    4. Using a drive module on the Xserve -- buy, say, two 80 GB ADMs and replace the mechanisms with 1.5 or 2 TB mechanisms. These, too, would be in a case of sorts when offsite. Since the Xserve is connected to a RAID, one other drive would be the boot/backup software drive and remain in place at all times.
    My initial thoughts:
    Option 1 leaves the drive too vulnerable, and the units allow the drives to get hot when used for long periods of time. Drives also are spinning down as they are removed, which could be risky. I don't like this idea.
    Option 2 works (and has worked) but it involves unplugging FW800 and power cables from the drives, and then these heavy units must be dealt with. An OK solution, but not ideal.
    Option 3 appeals to me because it offers some security for the drive (sleds, ability for the drive to spin down once unmounted and before transport) yet without the bulk of an external drive such as a LaCie. Also, being rack mounted, no power or data cables must be touched.
    Option 4 is, like Option 3, simple (no power/data to deal with), and offers some security for the drives. But it involves placing unapproved drives into the Xserve as well as putting lots of insertion/removal cycles on the Xserve drive bays (which may or may not be designed for this use).
    Any thoughts? I'd really like to get some feedback...
    Thanks,
    Pete
