New Xserve backup solution???

Hi, I am searching for a new backup solution which can handle up to 2 TB of data. I would like a 19" rack-mount device which supports Retrospect for OS X Leopard Server.
Can anyone give me a tip?
Thank you!

Here's a question sort of related to this: I am using an Xserve for a similar use, and as our data has expanded, we need more backup room. The Xserve is connected to a RAID, and we need to back up about 1.5 TB from that RAID once a week for offsite storage.
This will be rotated weekly, so there would be two offsite drives or modules -- one offsite at all times.
Up to now, we have been using Firewire connected drives, and rotating them offsite has worked OK. Now we are evaluating three ideas to accomplish this task on a more professional level, and I would welcome feedback on these ideas. Tape is not an option.
Options proposed are:
1. Using "raw" drives (1.5 or 2 TB mechanisms) in a simple hard-drive docking station. The drives would be placed in these each week, then stored in a case of some sort when offsite.
2. Using standard desktop drive/case units (i.e. LaCie, Iomega, etc.).
3. Using a rack-mounted solution (OWC, Sonnet, etc) that has FireWire/eSATA connectivity and hot-swap drives in sleds. These, too, would be in a case of sorts when offsite.
4. Using a drive module on the Xserve -- buy, say, two 80GB ADMs and then replace the mechanisms with 1.5 or 2 TB mechanisms. These, too, would be in a case of sorts when offsite. Since the Xserve is connected to a RAID, one other drive would be the boot/backup software drive and remain in place at all times.
My initial thoughts:
Option 1 leaves the drive too vulnerable, and the units allow the drives to get hot when used for long periods of time. Drives also are spinning down as they are removed, which could be risky. I don't like this idea.
Option 2 works (and has worked) but it involves unplugging FW800 and power cables from the drives, and then these heavy units must be dealt with. An OK solution, but not ideal.
Option 3 appeals to me because it offers some security for the drive (sleds, ability for the drive to spin down once unmounted and before transport) yet without the bulk of an external drive such as a LaCie. Also, being rack mounted, no power or data cables must be touched.
Option 4 is, like Option 3, simple (no power/data to deal with), and offers some security for the drives. But it involves placing unapproved drives into the Xserve as well as putting lots of insertion/removal cycles on the Xserve drive bays (which may or may not be designed for this use).
Any thoughts? I'd really like to get some feedback...
Thanks,
Pete

Similar Messages

  • RMAN Backup Solution

    We are working on a new RMAN backup solution.
    Since ours is a 9i OLTP we are planning to go for backup on disk.
    Once the backup is on disk, it then goes to tape. If we want to restore the database, how can we restore it? We don't use a recovery catalog.
    Thanks in advance

    One of the best ways to help yourself in the Oracle forum is to do a search for existing solutions to your problem before you post a new one. That way, you shorten your thread.
    I posted this last week, and I believe you could easily find it if you did a search.
    RMAN Backup and Recovery Script
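    For what it's worth, a minimal sketch of the disk-then-tape flow described above, assuming no recovery catalog (so the target's control file holds the backup metadata) and an SBT channel already configured for your media manager; file names, paths and formats are illustrative only:

        # weekly_disk_then_tape.rman (illustrative file name)
        BACKUP DATABASE FORMAT '/backup/disk/db_%U';
        BACKUP DEVICE TYPE sbt BACKUPSET ALL;

        # restore_no_catalog.rman -- RMAN reads its metadata from the mounted
        # control file, so no recovery catalog is required
        STARTUP MOUNT;
        RESTORE DATABASE;
        RECOVER DATABASE;
        ALTER DATABASE OPEN;

        # run either file with the rman client connected to the target:
        rman target / cmdfile=weekly_disk_then_tape.rman log=weekly.log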

  • 2x 10.5.2 Xserve + RAID backup solutions - Suggestions please!

    Hello,
    I'll give a brief overview of what I'm rolling out. Limited budget, so keep that in mind.
    Basically I'm just looking for suggestions on backup solutions for this particular setup. I have been hearing mixed reviews of most backup solutions under 10.5.x... (In 10.4.x server I was relatively confident with Retrospect and had performed restores before).
    --2 Xserves: one web server, one for local services like mail, OD, AFP, Cal, etc. Each has 2x 80 GB drives (boot volume + mirror). Advanced server config.
    --1x RAID configured as RAID 6 with 4 TB usable after formatting and parity. Data (user directories, mail store, calendar, web sites and databases, etc.) will be on the RAID volumes; the 80 GB drives in the Xserves are only for OS and log files.
    --All clients are using MacBook Pros and most users have minimal data. Most data will be mail and QuickTime videos (compressed for web, originals on mini DV tapes).
    --MacBook Pros are already live; the current server is SBS 2003 and it handles backups via Retrospect.
    --Backups will have to be D2D; tape is not an option in this case. This means two external drives for backup (one on site and one off site, cycled on Friday). Probably 2 TB FW800 for each drive, as the entire RAID isn't being backed up (some of it is scratch storage, so 2 TB is plenty for now).
    Naturally I want to use Time Machine, but I am hearing horror stories about using Time Machine to back up an advanced server in 10.5. I'm also seeing issues with Cyrus (mail) and Time Machine (I will be running mail services). In addition, I don't think I could do any sort of cycle with the two external drives using Time Machine? The lack of configurability in Time Machine worries me. And finally, I don't know how Time Machine does with active databases (i.e. the SQL database for the web server; I just don't know if Time Machine will cause problems when trying to back up a live database). If someone has experience with Time Machine in a similar setup I'd love to hear your thoughts / suggestions.
    Alternatively there is Retrospect. We already own a license and I am familiar with administering it on 10.4.x. How reliable is it for backing up 10.5.2 clients and 10.5.2 server, including active databases (again, the web server SQL databases, etc.)? It's kind of a pain restoring anything from Retrospect (especially the entire server, if / when it has to be done), but it was tried and true for me in 10.4.x server installations.
    These MacBook Pros will also likely become mobile homes once all is said and done (but currently are not, as the Xserves are not installed yet). Time Machine + mobile home directories seems like a lot of data being backed up that is probably redundant?
    To top it all off, until new office is ready (All gigabit when that happens in ~1 year) network is as follows:
    Servers have 100 mbit full duplex connection
    Clients have 10mbit half duplex (This is where time machine really frightens me with hourly client backup).
    Considering above, I can either:
    A: Run proactive backup script from retrospect for all clients and then nightly backup for select raid volume contents + server boot volumes.
    B: Not backup clients directly via retrospect and only backup contents of the raid volume (Which is where the mobile homes will be stored). Mobile home sync would be set to occur on login / logout only (Considering 10mbit client connections, I don't want active or user initiated sync heh).
    Thoughts / suggestions? Thanks in advance, any suggestions are much appreciated. Hardware is set in stone but backup solution is not so most aspects of backup solution can be changed =)

    First of all, thanks for your reply!
    Well I should clarify in that I wasn't going to operate "network" homes and that they were just going to be mobile homes with sync only occurring at login / logout. Even then, I think I'd run into issues with the 10base-T for the client systems. The servers are on 100base-T full duplex. Currently all of the Windows systems do the same, and logging in / out isn't too terrible, but I couldn't imagine actually running the home directories off the server on 10base-T.
    Do you still think mobile homes would be too much with just the login / logout sync? If so, I'll simply avoid this option until we get our new building and gigabit networking all around in the next year.
    You're correct in that all clients will be running 10.5.2.
    If I am not running mobile homes, I suppose the easiest way then is to in fact back up clients via Time Machine to the RAID volume and then back up the entire RAID volume (hence all of our sites, compressed video files and databases) to a large external drive (maybe I'll get a couple of 3 or 4 TB RAID drives instead... lol). Naturally I'd select the scratch disk directories and tell Time Machine to ignore them. The only issue there being that I may run into space concerns like you say. If it becomes a huge problem and Apple still hasn't made TM more configurable by then, I could use the Time Machine editor app or modify the intervals myself. I'd rather not, and will only do this if it becomes an issue.
    From what I can gather, time machine is not making complete duplicates of an active database (MS Entourage in this case) and is only backing up the changes. I was worried about how it handled this and whether it would detect a change and just backup the entire database again (Which would be a nightmare if it did this for our web directories as they have active databases running).
    Do you know if Time Machine will wreak havoc with active databases (like the ones our websites use, a couple of small SQL databases and one larger one)?
    I think time machine would work well with the method you suggested. So I can just set it to backup all clients to the raid volume and then setup time machine to backup the raid volume to the external disk? For my off site... I suppose I could just bring in the second identical external drive every Friday and copy the time machine backup database from the on site backup drive on to it and then get it out of there each evening.
    As for backing up the servers themselves, they will not be storing much of anything and most everything will be pointed to the RAID volume (hence the 80 GB drives). Because of this, the data on the servers themselves will not change THAT much compared to the RAID volume, which will change drastically every day, so full backups of the servers are fine. Perhaps just setting psyncX (assuming it works in 10.5) to back up the 80 GB boot volume for each server would be best? I could just set psyncX to back up each server boot volume to the same external drive the RAID volume is backing up to, and then each Friday just copy the data from the on-site drive to the external, killing two birds with one stone. (Tape would make this all easier but it isn't gonna happen in this case, sigh.)
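    (A minimal sketch of that psyncX-style boot-volume copy, done instead with the rsync that ships with 10.5 and run nightly from cron or launchd; the volume names and exclusions are illustrative, and Apple's bundled rsync uses -E to carry resource forks / extended attributes.)

        # clone the server's boot volume onto a folder of the rotating backup drive
        sudo rsync -aE --delete \
            --exclude "/Volumes/*" --exclude "/dev/*" \
            --exclude "/private/var/vm/*" --exclude "/private/tmp/*" \
            / "/Volumes/Backup A/ServerBoot/"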
    I worry about the cyrus / mail issues I'm reading about on servers running time machine though.

  • Xserve Backup/Archive Solutions

    Hi,
    We currently have a MacPro running our wiki along with some file sharing and user directory service. It has a hardware RAID with (3) 1 TB drives and has served well so far with great feedback. We want to extend the wikis to a much larger audience and wanted to move to an Xserve. The problem is that our IT department is not cooperating in supporting the Xserve and will not allow us to use their central backup system.
    Our current usage is for one course which includes 20 students and 4 instructors. We are wanting to extend the wikis to about 20 courses to give you an idea for traffic.
    My question is, what are my options for backup solutions if we decide to go ahead with the Xserve? We want to be able to go back at least a week. We are looking for automated backups. What would I need in terms of hardware and software? I have no prior experience with backups, so I just want to know what I might be getting into.
    Please let me know your thoughts.
    Message was edited by: leadstrand

    2 TB will need to be backed up. That is the capacity of our RAID 5 setup.
    Not sure how often it should run, but I want to be able to restore back up to a week (5 days).
    Budget is of low priority at this time, but it should be less than our Xserve setup (Quad 2.8, 4 GB RAM, Hardware RAID, 3x 1 TB drives).
    none
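    A minimal sketch of one way to meet the "go back at least a week" requirement with nightly rotating snapshots, assuming an rsync build with --link-dest support and a directly attached backup volume at least as large as the 2 TB RAID; every path below is illustrative:

        #!/bin/sh
        # keep 7 nightly snapshots of the RAID volume; unchanged files are
        # hard-linked against the previous snapshot, so each night is fully
        # browsable but only changed data uses new space
        SRC="/Volumes/WikiRAID/"
        DST="/Volumes/Backup"
        DAY=$(date +%u)    # 1-7, so each weekday overwrites last week's copy
        rsync -a --delete --link-dest="$DST/latest" "$SRC" "$DST/day$DAY/"
        ln -sfn "day$DAY" "$DST/latest"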

  • Need new backup solution for VMware VMs

    Anthony,
    There are a number of products out there that will meet your requirements. For example, depending on your VMware license you could use VDP. There are other vendors who leverage the VMware host API to get VM snapshots. StorageCraft offers a software solution that will back up your VMs and store them in your data center or on portable media. We create byte/sector level images of your systems that you can quickly restore to the same or to a new environment (like moving from Hyper-V to VMware) or mount as a shareable volume for easy file/folder recovery.
    Whichever solution you choose, be sure to test it in your environment so you know it will meet your specific needs and budget. Let me know if you have any questions about StorageCraft products and applicable discounts.
    Cheers!

    Greetings: I recently became in charge of our server infrastructure and I'm trying to figure out the best backup software for our needs. We currently have BackupAssist to handle Hyper-V VMs, but I will be migrating all of them off and will solely have VMware VMs. At my previous job they used vRanger for backing up the VMs and Backup Exec for the shared data drives. Currently we back up everything to USB drives (fire-safe / waterproof) with BackupAssist, but I don't think this will work well with vSphere. We have 3 hosts with 5 CPUs total and might add a 6th CPU sooner or later. I'm looking for a backup solution that won't break the bank as we don't have a huge budget. I do work for a university, so education discounts would be great. I've looked into Veeam, but haven't tested it nor have I priced it out. The servers are in a data center, so...
    This topic first appeared in the Spiceworks Community

  • HT1175 Time Machine wants to start with a new, full backup instead of using the existing one

    Hi, my Time Machine wants to start a new, full backup, instead of continuing with the existing one on the Time Capsule.
    Looking at the troubleshooting guide, I've found the following piece of info, which is, as 99% of the time with troubleshooting guides, totally useless for resolving my problem...
    Message: Time Machine starts a new, full backup instead of a smaller incremental backup
    There are three reasons why this may occur:
    You performed a full restore.
    Your Computer Name (in Sharing preferences) was changed--when this happens, Time Machine will perform a full backup on its next scheduled backup time.
    If you have had a hardware repair recently, contact the Apple Authorized Service Provider or Apple Retail Store that performed your repair. In the meantime, you can still browse and recover previous backups by right-clicking or Control-clicking the Time Machine Dock icon, and choosing "Browse Other Time Machine disks" from the contextual menu.
    It should be noted (hence the uselessness of the guide) that:
    - I haven't performed ANY full restore
    - I haven't changed my computer name
    - My computer has not had ANY repair, much less a hardware repair. Neither recently, nor in the 9 months I've owned it.
    Which puts me back to square one, with the marginal improvement that now I have a way of browsing my old backups.
    In the event I decide to start all over again, will there be a way for me to "sew" the two backup files?
    Or will I always have to go the silly way around?
    Does anybody have any idea?
    Thank you for your time (machine/capsule, your choice;-)

    Hi John,
    no, that doesn't have anything to do with my problem.
    Just look at the image below.
    Time Machine thinks that the disk I've always used to do my backups has none of them.
    If I open the disk with Finder, obviously they are where they should be.
    It is even possible to navigate them as per the instruction reported above, but even following Pondini's advice #s B5 and B6 (which are more applicable to my problem) didn't solve the problem.
    Actually, not being a coder, I'm always scared of doing more damage than good when I have a Terminal window open.
    My question is: how is it possible that Apple hasn't thought of some kind of user command to tell the idiotic Time Machine software that a hard disk backup is a hard disk backup, no matter what?
    Extremely frustrating and annoying: I'm wasting HOURS trying to find a solution to a problem that should not have arisen in the first place...
    Have a nice weekend
    Antonio
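    (On the "user command" Antonio is asking for: a minimal sketch, assuming OS X 10.7 or later where the tmutil command exists, of re-associating a Mac with backups it no longer recognises; the sparsebundle, machine and volume names are illustrative.)

        # for Time Capsule / network backups, claim the existing sparsebundle:
        sudo tmutil inheritbackup "/Volumes/Data/MyMac.sparsebundle"
        # for a locally attached backup disk, re-associate the volume instead:
        sudo tmutil associatedisk -a / \
            "/Volumes/TM Disk/Backups.backupdb/MyMac/Latest/Macintosh HD"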

  • Questions on new XServe

    Apologies for the lengthy post.
    We're looking to update an older dual G5 XServe to a new Nehalem model. Our current server handles file sharing (AFP & SMB) and is our OD Master (we have 5 other XServes). It's connected via Fibre to an XServe Raid configured with 4 active drives (250 GB each) and a standby backup drive. In total we have @ 700 GB of usable storage. We have a cloned system drive in the machine, and another offsite. We also have extensive Firewire data backups, including offsite.
    We're looking to replace this with a Dual Quad core machine but I'm still mulling over configurations. As the XServe RAID has been removed from the support list, I'm considering removing it from the equation and perhaps just use it for some non critical archiving. It has been wonderfully reliable in the 5 odd years we've had it.
    For the new server I was thinking 3x 1TB drives (& Raid Card) which I suspect would give @ 2TB of storage (more than enough for us - the 700 GB is already adequate but if we're going to upgrade we may as well expand a bit as well).
    And now for the questions...
    Would I be correct in assuming I could expect similar/better performance from the new drives as opposed to the old XServe RAID? Will the G5 XServe's Fibre Card fit the new XServe?
    Are the Solid State Drives user swappable? I was thinking perhaps a second SS drive with a cloned system as a backup.
    Can a boot partition be setup on this RAID setup as a backup to the Solid State Drive? What's the general feeling with regards SS Drives? Are they worth considering or should I just stick to 'conventional' technology?
    Will these XServes boot OK from a firewire device?
    Any advice, suggestions or personal experiences would be appreciated.
    many thanks

    Would I be correct in assuming I could expect similar/better performance from the new drives as opposed to the old XServe RAID?
    Comparable direct-attached storage (DAS) disk storage tends to be faster than SAN storage, yes.
    And you can connect DAS via SAS or SAS RAID PCIe controller, an older PCI-X controller, or via FireWire or USB buses. PCIe disk interconnects will be the fastest path, and USB the slowest.
    Here, I'd tend to consider the Apple RAID controller or a PCIe RAID controller if I needed extra bays.
    Will the G5 XServe's Fibre Card fit the new XServe?
    I don't know. That probably depends on the card, but it's certainly worth a try. Ensure you have the right slot in the new box; you've probably got a PCI-X FC SAN card here, and you'd thus need a PCI-X riser on the new box. Or punt and get a newer PCIe FC SAN card. Put another way, I'd ask Apple to confirm your particular card will work when carried forward, or wait for another responder here.
    Are the Solid State Drives user swappable? I was thinking perhaps a second SS drive with a cloned system as a backup.
    The Xserve drive bays are user swappable.
    Do you need SSD? If you're on your current box and are not already I/O bound, I'd guess not. And the newer HDD drives are faster, too.
    Can a boot partition be setup on this RAID setup as a backup to the Solid State Drive? What's the general feeling with regards SS Drives? Are they worth considering or should I just stick to 'conventional' technology?
    There is flexibility in what you boot your Xserve box off of, yes.
    I've been using SSD off and on for around twenty years in its various incarnations. Back when there were battery-backed DSSI drives (don't ask) and SCSI-1 RAM-based SSD gear.
    Whether SSD is appropriate really depends on what you need for I/O rates. An SSD is one piece of a puzzle of gaining faster I/O, and some of the other available pieces can be faster disks, striping (RAID-0), faster controllers, or faster controllers with bigger caches, or more and larger host caches. If your applications are disk-bound, definitely consider it. But if you're not being limited by the disk I/O speeds for your box, then SSD doesn't (usually) make as much sense.
    (There are other cases where SSD can be appropriate. SSD is more tolerant of getting bounced around than HDD, which is why you see it in laptops and in rough-service and mil-spec environments such as aircraft-mounted computer servers. With an HDD, one good head-slap can ruin your whole day. But I digress.)
    Will these XServes boot OK from a firewire device?
    AFAIK, yes. I haven't had to do that, but that's been a longstanding feature of Xserve boxes.
    Any advice, suggestions or personal experiences would be appreciated.
    I'd relax; the Nehalem-class processors are screaming-fast boxes, and with massively better I/O paths than Intel has had available before.
    Ensure your LAN is Gigabit Ethernet or IEEE 802.11n 5 GHz dual-slot, too. Upgrade from any existing 100 Mb LAN and from pre-n WiFi networking.

  • New Xserve in the rack....where do I go from here?

    Well, our shiny new (and unexpectedly deep and heavy) Xserve arrived this morning, and has been installed in the rack cabinet at our new premises. I think I need help knowing where to go from here on a couple of points.....
    I've been wading through pages of PDF manuals, support pages and forums posts and – to be honest – my head's in a mess!! I'm hoping some of you nice folks out there (Camelot provided some very useful advice in previous posts) could offer me some direction, so I can work out what I need to work out, and understand what I'm doing so I can get this beast up and running.
    Having also bought a couple of Netgear GS724T switches, I had planned to setup a 4gbps trunk between the Xserve and the first switch (with a second 4gbps connection between the switches). We will use an Airport Extreme Base Station as our DHCP server, and I will likely assign static IPs to both the switches whilst configuring them for trunk operation. Things are slightly complicated by the fact that our new BT Versatility installation already appears to have a 4-port wireless router bolted on, but I think we can effectively bypass this straight to the AEBS....
    In order to use link aggregation (i.e. the 4gbps connection to the Xserve), will I need to configure Server first using only 1 ethernet connection? Can I do this 'headless' – I have installed all the Admin Tools (and documentation!!) on my MacBook Pro.... Also, can I even use link aggregation in a 'basic' Standard Server configuration, or will I need to use Advanced?
    We chose to purchase 3x 300 GB 15k SAS drives and therefore also have the RAID card installed. My understanding is that the Server software will be installed on drive 1 (left hand bay), and simply needs 'configured' (did I say 'simply'??!). I also believe I could – without re-installing the OS – change the setup to RAID 5 if I wanted to. I think I need to do this using Disk Utility whilst the Xserve is booted via the install DVD – correct? Can I set up disk mirroring without re-installing, and is the process the same (i.e. boot from DVD, change the setup, re-boot)?
    We run Filemaker Server, and I wondered which initial setup option would be optimal – NO raid, with the OS on one drive, and the database file(s) on another; or one big RAID5 volume with everything on it? I guess with 3 internal drives, we could go for a single drive (OS) plus a 2-drive RAID for files (either mirrored or striped for speed). If we bolt on a couple of 500Gb Firewire drives which we have 'spare' that would allow for backups of both volumes.....?
    Putting everything into consideration, I want to take 'baby steps' with the setup, until I get my head round everything. Initially, all we need is Filemaker and remote access to our databases (through VPN I guess), although I want to add web/mail/iCal etc..etc.. once we've settled into the new offices.
    The whole DNS thing scares me a bit. I can arrange reverse DNS with BT, and point our domain (via FastHosts) to our public IP address so we can run our own Web and Email server. I'm just not convinced this won't be a security vulnerability.....
    I'm a long time Mac user, but I've never used Server, and I rarely use Terminal. This new Xserve venture is exciting, but it also feels a bit unnerving.... Any advice, input and further reading suggestions would be gratefully received!
    Thanks in advance.

    In order to use link aggregation (i.e. the 4gbps connection to the Xserve), will I need to configure Server first using only 1 ethernet connection?
    Ahh, setting up Link Aggregation while headless is always a concern since it will affect the network connection you're using to administer the box. It is possible to do, it just takes some planning. If you can, configure the link over the serial port using networksetup, or put a monitor on the server temporarily.
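    (A minimal sketch of doing that from the command line, assuming the bond-related verbs below exist on your 10.5 build of networksetup -- check networksetup -help first; the bond and device names are illustrative, and the session should run over the serial console or a port that is not joining the bond.)

        networksetup -listallhardwareports          # note the enX device names
        sudo networksetup -createBond bond0 en0 en1 en2 en3
        networksetup -listBonds
        # the new bond still needs a network service on top of it, and matching
        # LACP / trunk configuration on the GS724T ports, before it passes traffic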
    Also, can I even use link aggregation in a 'basic' Standard Server configuration, or will I need to use Advanced?
    I believe so - 'basic' vs. 'Advanced' only controls the set of services that are run and simplifies the admin interface somewhat. I don't think it has any effect on the underlying network setup but I might be wrong (I've never used anything but advanced).
    I also believe I could – without re-installing the OS – changed the setup to RAID5 if I wanted to
    No, that is not possible. Converting to RAID 5 will destroy the current config and reformat the drives. You can migrate to a RAID 1 or RAID 0 array, but not to RAID 5.
    Can I setup disk mirroring without re-installing, and is the process the same (i.e. boot from DVD, change the setup, re-boot)?
    For simple mirroring you can use RAID Admin's Migrate option to migrate the current single drive to a new mirror on the other two drives. Then you can re-use the original drive.
    We run Filemaker Server, and I wondered which initial setup option would be optimal
    In general, RAID 5 is not recommended for database use - or any other use that requires a lot of random writes, although it does depend on volume - if your traffic is sufficiently light it might not be an issue.
    Other than that, for most people the data is most important, so that should be mirrored. It's reasonably easy to reinstall the OS (at least compared to rebuilding all your data).
    Putting everything into consideration, I want to take 'baby steps' with the setup.
    To start, focus on the disks. Everything else (applications, services, network, etc.) can be reworked easily later on. Not so with the disks.
    The whole DNS thing scares me a bit.
    Use BT for your public DNS for sure, but you'll definitely benefit from having working internal DNS, and that's pretty easy to manage, at least for small networks.
    so what DNS name do I use locally when configuring Server?
    You can use anything you like. You can use ourcompany.com - the same as your public domain, but just have to realize that 'server.ourcompany.com' may mean different things depending on whether you're inside your network querying your own DNS server, or external querying BTs (BT will return 12.34.56.78 while your internal DNS would return the 192.168.100.x address).
    This confuses me (from setup guide):
    Ignore the statement in the setup guide. It's perfectly valid to have 'server.domain.com' hosting email for [email protected].
    Do I need to consult our ISP about what DNS name I should give our server
    If you're running your own internal DNS then no, not at all.
    You only need to involve BT for any externally-available hostnames (and they don't even have to match your internal names - it's fine to have 'server.domain.com' on your internal DNS but no 'server.domain.com' in your public DNS, it just means no one can get to 'server.domain.com' from outside your network).
    Filemaker recommends you use the scheduling facility in FMServer for backups, and not system backups
    This is absolutely the case - it's hard to make backups of active/open files, especially databases. Any backup takes time - you read the first byte at time 0, but might not get to the last byte until several minutes later - and you have to consider what happens to changes in between (some may be backed up if they happened before the backup reached that part of the file, others might not, leading to an inconsistent file).
    Filemaker's backup method ensures you have a valid backup of the data.
    I'm now thinking that mirroring the OS on drive 2 might be a good idea, whilst storing Filemaker files in a 'Databases' folder on drive 3 (which is backed up by an FMServer schedule). Other 'Shared' folder(s) on drive 3 could be backed up via Time Machine to an external FW800 drive......
    This really depends on the frequency of change in your database. If it's mostly reads and not many inserts/updates, then reverting to yesterday's backup might not be a problem, but if your data changes constantly it might not be as good.

  • External Fiber Channel Raid options for New Xserve besides Promise V-Trak

    I was wondering if anyone out there knows if there is an alternative option for an external RAID that can be hooked up via 4Gb Fibre Channel to my new Xserve. I am really looking for a more cost-efficient one than the 12K Promise setup from Apple. My technology vendor was telling me that you have to use the Promise RAID as it is the only one that an Xserve will control, due to Apple's licensing of its drivers etc. for its Fibre Channel. Something tells me that this is not true.
    Anyone have any recommendations for a Fibre Channel RAID solution in the 6K dollar range?
    thanks,
    Dan

    Find another vendor. Anyone that tells you the Promise is the only RAID that will work clearly doesn't understand the market.
    As for recommendations, you need to include some idea of capacity, either usable or raw. I can find you a lot of options for $6K if you want 100 GB of data. Not so many if you want 100 TB.

  • Backup solutions for Solaris 10 systems

    I am trying to put together a backup solution for a number of SunFire V240 systems running Solaris 10. We do run our backups using NetWorker, along with a couple of high-powered backup systems. But this solution I am trying to create is something that can be performed by our first responders, our help desk folks. The systems in question are already in production, but we do have days where we can bring them down for maintenance or whatever we need to do. Therefore, I can implement this new solution during the day(s) of maintenance time.
    I was looking at using Solaris Volume Manager and its metadata configurations. But I am unsure how to start with file systems already created. I want to be able to have a backup of the boot disk in the event that the hard drive on which it resides goes down. I want our help desk folks to be able to immediately bring the system right back up with little downtime during the evening to late hours.
    I guess what I am looking for are exact steps to help configure this backup solution with Solaris Volume Manager for our production systems. Can anyone help?

    Hey there,
    Have a look at this excellent guide. I think that you may find what you need here. If not, drop an email back. To begin with, you need an extra disk (or disks), since you will mirror your data. This is sort of a pre-requisite.
    Good luck,
    Pierre
    http://www.adminschoice.com/docs/solstice_disksuite.htm
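    (A minimal sketch of the SVM boot-disk mirror that the guide walks through, assuming a second disk partitioned identically to the boot disk; the device and metadevice names are illustrative, and only the root slice is shown -- swap and the other slices follow the same pattern.)

        # state database replicas on a dedicated slice of each disk
        metadb -a -f -c 3 c1t0d0s7 c1t1d0s7
        # one-way mirror: the existing root slice, then its future second half
        metainit -f d11 1 1 c1t0d0s0
        metainit d12 1 1 c1t1d0s0
        metainit d10 -m d11
        metaroot d10            # updates /etc/vfstab and /etc/system for root
        lockfs -fa && init 6    # flush filesystems and reboot onto the metadevice
        # after the reboot, attach the second half and let it resync
        metattach d10 d12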

  • Is there any doc about netbackup as OVM backup solution?

    Hi,
    I have a customer (IHAC) who is going to purchase OVM, but currently their only concern is that they are planning to use NetBackup as the backup solution, so they want to know clearly which version of NetBackup is certified with OVM.
    I can find a certification link as below:
    https://apex.oraclecorp.com/pls/apex/f?p=9303:8:8478416047898::NO::P8_PID,P5_ISV_ID:2128,2623
    But it would be better to have some more docs, such as best practices and the like.
    Thanks & Regards!

    Oracle 7.1.3 on Linux was probably never released -- 7.1.3 predates generally available Linux.
    You would
    a. install Linux on the new server
    b. install Oracle 10g or 11g on the new server
    c. create an empty database on the new server
    d. export data from the 7.1.3 database
    e. import it into the new database
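    (A minimal sketch of steps d and e with the classic exp/imp utilities; connect strings, file names and the FULL=Y scope are illustrative, and on current versions Data Pump would be used instead.)

        # on the old 7.1.3 host
        exp system/manager FULL=Y FILE=full713.dmp LOG=exp713.log
        # on the new 10g/11g host, once the empty database exists
        imp system/manager FULL=Y FILE=full713.dmp LOG=imp713.log IGNORE=Y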
    Follow the 10g / 11g Upgrade Documentation.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/toc.htm
    or
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10819/toc.htm
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Help replacing my backup solution

    I currently have a Windows Home Server from my PC days.  The house is now all Mac, but I still use the WHS in the following way:
    1. Common files, pictures and videos are stored on the home server.  This allows all Macs in my house to access them.
    2. Important folders are duplicated across two drives in the WHS using Drive Extender.  The new WHS 2011 does not have this, so I won't upgrade.
    3. I use Cloudberry to back up items in 1) to Amazon S3.  It currently does it once a week, but you can set any frequency you want.  Since it runs on the WHS as a service, it always keeps the schedule unless the power is off.
    In this way I basically have my important files (family pictures and videos) on three different drives.  One of those is in the cloud.  I think it is a good backup solution that requires almost no work to maintain.
    Now, the PC with WHS is old and I have had some hardware issues.  Also, WHS was always a niche product and you can't really find hardware with it installed.  You basically have to build your own server.  I also don't like the loss of drive extender in the 2011 product.
    Is there a way I can duplicate this arrangement using Time Capsule or some other Mac-specific products?  I basically want to:
    1. Store files in a common network drive so everyone in my house can access them
    2. Back up the files/folders in 1) to Amazon S3 automatically. 
    Sorry to be long-winded and thanks for the help.

    Basically NO!!
    The Time Capsule is unfortunately not designed as a NAS.. it has no internal backup solution.. it has no expansion nor raid nor servers nor anything really. It is designed as a dumb target for Time Machine. In particular Time Machine running on laptops which will need wireless.. hence the incorporation of the airport to give wireless.
    And there is no Mac.. legitimate that is.. that fits the role. A Mac Mini is used and is even sold as a server. But you do not have 3.5" drives.. and any expansion of drives is only by external boxes. The next step up is a Mac Pro.. super expensive and with a very high power demand. What is needed is a low-power-consumption i3 in a small tower with lots of internal hard disk space.
    I think you will find it is why people with your kind of setup tend to just stick to them.
    I would look at Synology or QNAP .. readyNAS maybe higher end models.. that offer TM compatibility.. although Apple will break it regularly, which is why you buy from the bigger companies that will try and keep firmware upgrades coming.
    Or build your own running something like FreeNAS. But you will never achieve the neatness of the purchased unit.
    I have a feeling Lion Server, which is very cheap cf. the old SL server, was intended for such a role.. the hassle is finding a hardware platform to run it on.. (I wouldn't jump into Lion Server yet anyway.. too buggy for now).
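    (For requirement 2 above -- getting the shared folder into Amazon S3 on a schedule -- a minimal sketch assuming a command-line S3 client such as s3cmd is installed and configured on whatever box hosts the share; the bucket and paths are illustrative.)

        # crontab entry: sync the shared folder to S3 at 02:30 every night
        30 2 * * * s3cmd sync --delete-removed /Volumes/Shared/ s3://my-family-archive/shared/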

  • Incremental cumulative backup solution?

    Hi to all,
    I want to use rman cumulative incremental backup strategy i.e.I want to take one full database backup LEVEL0 every week (Sunday), and one incremental cumulative LEVEL1 database backup every day.
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION ON; # I did this to avoid backing up the same archivelog files again; with this, RMAN will back up only files that were not already backed up.
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;# default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';# default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SNCFORCL.ORA'; # default
    Here are my RMAN scripts:
    weekly.rman
    run {
    BACKUP AS COMPRESSED BACKUPSET
    INCREMENTAL LEVEL = 0 CUMULATIVE
    DEVICE TYPE DISK
    TAG = 'LEVEL0-SUNDAY'
    FORMAT 'D:\RMAN_BACKUP\DB_%d_%u_%s_%T'
    DATABASE;
    crosscheck archivelog all;
    delete expired archivelog all;
    BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL FORMAT 'D:\RMAN_BACKUP\arch_%d_%u_%s_%T';
    CROSSCHECK BACKUP;
    DELETE NOPROMPT OBSOLETE; # deletes backups (LEVEL0 and LEVEL1) and archivelog files, because RETENTION POLICY is REDUNDANCY 1, and frees space for the new LEVEL0 backup
    DELETE NOPROMPT EXPIRED BACKUP;
    crosscheck archivelog all;
    delete expired archivelog all;
    DELETE NOPROMPT ARCHIVELOG UNTIL TIME "SYSDATE-3";
    }
    daily.rman
    run {
    BACKUP AS COMPRESSED BACKUPSET
    INCREMENTAL LEVEL = 1 CUMULATIVE
    DEVICE TYPE DISK
    TAG = 'LEVEL1-MONDAY'
    FORMAT 'D:\RMAN_BACKUP\DB_%d_%u_%s_%T'
    DATABASE;
    BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL FORMAT 'D:\RMAN_BACKUP\arch_%d_%u_%s_%T';
    crosscheck archivelog all;
    delete expired archivelog all;
    DELETE NOPROMPT ARCHIVELOG UNTIL TIME "SYSDATE-3"; # deletes archivelog files older than 3 days; I do not have enough space to store archive files from the whole week
    }
    What do you think about these RMAN scripts?
    Do you have a better solution?
    Thanks in advance for any proposal or suggestion!

    Your script looks good. As an alternative, you can also use the following command:
    Backup archivelog until time 'sysdate-3' backed up 1 times to 'sbt_tape';
    Any suggestions are highly appreciated.
    Thanks,
    Sai.
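    (As a side note on scheduling, a minimal sketch of driving the two script files above from the OS scheduler; the log file names are illustrative, with the weekly command run on Sundays and the daily one on the other days.)

        rman target / cmdfile=D:\RMAN_BACKUP\weekly.rman log=D:\RMAN_BACKUP\weekly.log
        rman target / cmdfile=D:\RMAN_BACKUP\daily.rman log=D:\RMAN_BACKUP\daily.log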

  • Backup Solution For This Specific Setup

    I've been thinking about how I'm going to backup my setup but I'm sort of stuck.
    My setup: 2 Drives
    80GB Intel SSD - boot disk, dual boots Windows 7 and ArchLinux
    500GB HDD  - only for files, not going to be of any concern
    Here's the fdisk -l for the SSD (/dev/sdb)
    Disk /dev/sdb: 80.0 GB, 80026361856 bytes, 156301488 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x9b9dcc69
    Device      Boot      Start        End     Blocks  Id  System
    /dev/sdb1   *          2048     206847     102400   7  HPFS/NTFS/exFAT
    /dev/sdb2            206848  103811071   51802112   7  HPFS/NTFS/exFAT
    /dev/sdb3         104015872  156301311   26142720  83  Linux
    /dev/sdb4         103811072  104015871     102400  83  Linux
    Here's what I'd like: If the SSD fails, I'd like to be able to have an exact clone bootable USB HDD ready to use (so it must be able to dual boot). Basically, I'd like to minimize down time as much as possible. Then I'd like to be able to transfer everything again to a new SSD.
    I understand that I can use rsync for backups: https://wiki.archlinux.org/index.php/Fu … with_rsync but I'm not sure how to adjust it for dual booting. Should I be cloning with dd instead? From what I've read dd will copy everything including the partition table of the old one so that might not be a good idea if I'm planning to move the whole thing to a new SSD?
    Appreciate any input!
    Last edited by devrepublic (2013-02-22 01:32:15)

    Use rsync as it will then only write what has been changed instead of the whole damn thing like dd.  There is a wiki page about how to back up your entire system with rsync... you should probably have a look at that. 
    This of course will work just fine for your Arch Linux, but I don't know enough about Windows to be able to recommend a way to do it with that partition.  I guess you could dd the partition every day, but that doesn't seem very practical.  I would have to imagine that there is probably some kind of incremental Windows backup solution that you could use, but I don't know.
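    (A minimal sketch of that rsync approach from the wiki page linked in the question, plus a raw copy of just the Windows partition as a stop-gap; the mount points under /mnt are illustrative.)

        # Arch side: full-system copy onto the USB HDD's Linux partition
        rsync -aAXH --delete \
            --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
            / /mnt/usbroot
        # Windows side: image only that partition (not the whole disk, so the
        # new SSD's partition table is untouched later)
        dd if=/dev/sdb2 of=/mnt/usbdata/win7-part.img bs=4M conv=noerror,sync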
    You will of course then have to have a bootloader installed on the second drive, set up and ready to boot into either system.  Then just make sure you are booting the right drive all the time and you're good to go. 
    Are you implying that you want to back up to the 500GB drive?  Or are you saying that you want to use an external?

  • Definitive Storage and Backup solution

    Hello, I'm looking for a definitive Storage and Backup solution.
    So far I've been looking on to Drobo 5D or N, LaCie 5big Thunderbolt™ 2, or LaCie 2big Thunderbolt™ 2.
    Networking would be a plus but not a must. I'm open to other suggestions and also wonder if these systems can be considered backup since they are prepared for single or double disk failures.
    Thanks.

    Methodology to protect your data. Backups vs. Archives. Long-term data protection
    Avoid Lacie, they contain Seagate drives inside.  Bad idea. 
    huge storage, low cost, high quality, very small and portable.
    BEST FOR THE COST, Toshiba "tiny giant" 15mm thick  2TB drive (have several of them, lots of storage in tiny package)    $100
    http://www.amazon.com/Toshiba-Canvio-Connect-Portable-HDTC720XK3C1/dp/B00CGUMS48/ref=sr_1_3?ie=UTF8&qid=1390020791&sr=8-3&keywords=toshiba+2tb
    best options for the price, and high quality HD:
    Quality 1TB drives are $50 per TB on 3.5" or  $65 per TB on 2.5"
    Perfect 1TB for $68
    http://www.amazon.com/Toshiba-Canvio-Portable-Hard-Drive/dp/B005J7YA3W/ref=sr_1_1?ie=UTF8&qid=1379452568&sr=8-1&keywords=1tb+toshiba
    Nice 500gig for $50. ultraslim perfect for use with a notebook
    http://www.amazon.com/Toshiba-Canvio-Portable-External-Drive/dp/B009F1CXI2/ref=sr_1_1?s=electronics&ie=UTF8&qid=1377642728&sr=1-1&keywords=toshiba+slim+500gb
    *This one is the BEST portable  external HD available that money can buy:
    HGST Touro Mobile 1TB USB 3.0 External Hard Drive $88
    http://www.amazon.com/HGST-Mobile-Portable-External-0S03559/dp/B009GE6JI8/ref=sr_1_1?ie=UTF8&qid=1383238934&sr=8-1&keywords=HGST+Touro+Mobile+Pro+1TB+USB+3.0+7200+RPM
    Most storage experts agree on the Hitachi 2.5"
    Hitachi is the winner in hard drive reliability survey:
    Hitachi manufactures the safest and most reliable hard drives, according to the Storelab study. Of the hundreds of Hitachi hard drives received, not a single one had failed due to manufacturing or design errors. Adding the highest average lifespans and the best relationship between failures and market share, Hitachi can be regarded as the winner.
    Data Storage Platforms; their Drawbacks & Advantages
    #1. Time Machine / Time Capsule
    Drawbacks:
    1. Time Machine is not bootable, if your internal drive fails, you cannot access files or boot from TM directly from the dead computer.
    OS X Lion, Mountain Lion, and Mavericks include OS X Recovery. This feature includes all of the tools you need to reinstall OS X, repair your disk, and even restore from a Time Machine backup.
    "you can't boot directly from your Time Machine backups"
    2. Time machine is controlled by complex software, and while you can delve into the TM backup database for specific file(s) extraction, this is not ideal or desirable.
    3. Time Machine can and does have the potential for many error conditions in which data corruption can occur, and your important backup files may not be saved correctly, may not be saved at all, or may even be damaged. This extra link of failure in placing software between your data and its recovery is a point of risk and failure. A HD clone is not subject to these errors.
    4. Time Machine mirrors your internal HD, so in cases of data corruption the corruption can immediately spread to the backup, as the two are linked. TM is perpetually (or often) connected to your computer, and because it lacks isolation, errors or corruption migrate to the backup either automatically or are extremely easy to carry over unwittingly.
    5. Time Machine does not keep endless copies of changed or deleted data, and you are often not notified when it deletes them; likewise you may accidentally delete files off your computer and this accident is mirrored on TM.
    6. Restoring from TM is quite time intensive.
    7. TM is a backup and not a data archive, and therefore by definition a low-level security of vital/important data.
    8. TM's working premise is a “black box” backup of OS, apps, settings, and vital data that nearly 100% of users never verify until an emergency hits or their computer's internal SSD or HD is corrupt or dead, and this is an extremely bad working premise for vital data.
    9. Given that data created and stored is growing exponentially, the fact that TM operates as a “store-it-all” backup nexus makes TM inherently incapable of easily backing up massive amounts of data, nor is doing so a good idea.
    10. TM's working premise is a backup of a user's system and active working data, NOT massive amounts of static data, yet most users never take this into consideration, making TM a high-risk locus of data “bloat”.
    11. In the case of Time Capsule, wifi data storage is a less than ideal premise given possible wireless data corruption.
    12. TM like all HD-based data is subject to ferromagnetic and mechanical failure.
    13. *Level-1 security of your vital data.
    Advantages:
    1. TM is very easy to use either in automatic mode or in 1-click backups.
    2. TM is a perfect novice level simplex backup single-layer security save against internal HD failure or corruption.
    3. TM can easily provide a seamless, no-gap record of active data that is not easily achieved with HD clones or HD archives (unless the user is diligent about making data saves).
    #2. HD archives
    Drawbacks:
    1. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    2. Unless the user ritually copies working active data to HD external archives, then there is a time-gap of potential missing data; as such users must be proactive in archiving data that is being worked on or recently saved or created.
    Advantages:
    1. Fills the gap left in a week or 2-week-old HD clone, as an example.
    2. Simplex no-software data storage that is isolated and autonomous from the computer (in most cases).
    3. HD archives are the best idealized storage source for storing huge and multi-terabytes of data.
    4. Best-idealized 1st platform redundancy for data protection.
    5. *Perfect primary tier and level-2 security of your vital data.
    #3. HD clones (see below for full advantages / drawbacks)
    Drawbacks:
    1. HD clones can be incrementally updated to hourly or daily, however this is time consuming and HD clones are, often, a week or more old, in which case data between today and the most fresh HD clone can and would be lost (however this gap is filled by use of HD archives listed above or by a TM backup).
    2. Like all HD-based data is subject to ferromagnetic and mechanical failure.
    Advantages:
    1. HD clones are the best, quickest way to get back to 100% full operation in mere seconds.
    2. Once a HD clone is created, the creation software (Carbon Copy Cloner or SuperDuper) is no longer needed whatsoever, and unlike TM, which requires complex software for its operational transference of data, a HD clone is its own bootable entity.
    3. HD clones are unconnected and isolated from recent corruption.
    4. HD clones allow a “portable copy” of your computer that you can likewise connect to another same Mac and have all your APPS and data at hand, which is extremely useful.
    5. Rather than, as many users do, thinking of a HD clone as a “complementary backup” to the use of TM, a HD clone is superior to TM both in ease of returning to 100% quickly, and in its autonomous nature; while each has its place, TM can and does fill the gap in, say, a 2 week old clone. As an analogy, the HD clone itself is the brick wall of protection, whereas TM can be thought of as the mortar, which will fill any cracks in data on a week, 2-week, or 1-month old HD clone.
    6. Best-idealized 2nd platform redundancy for data protection, and 1st level for system restore of your computers internal HD. (Time machine being 2nd level for system restore of the computer’s internal HD).
    7. *Level-2 security of your vital data.
    HD cloning software options:
    1. SuperDuper HD cloning software APP (free)
    2. Carbon Copy Cloner APP (will copy the recovery partition as well)
    3. Disk utility HD bootable clone.
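    (As a command-line counterpart to option 3, a minimal sketch using asr, the engine behind Disk Utility's Restore tab, assuming the destination clone volume already exists and can be erased; volume names are illustrative, and depending on the OS release asr may insist on unmounting the source for a block copy, which is why the GUI tools above are usually the simpler route.)

        # block-copy the boot volume onto the clone volume, erasing the target
        sudo asr restore --source "/Volumes/Macintosh HD" --target /Volumes/Clone \
            --erase --noprompt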
    #4. Online archives
    Drawbacks:
    1. Subject to server failure, or to suspension of your hosting account due to non-payment.
    2. Subject, due to lack of security on your part, to being attacked and hacked/erased.
    Advantages:
    1. In case of house fire, etc. your data is safe.
    2. In travels, and propagating files to friends and likewise, a mere link by email is all that is needed and no large media needs to be sent across the net.
    3. Online archives are the perfect and best-idealized 3rd platform redundancy for data protection.
    4. Supremely useful in data isolation from backups and local archives in being online and offsite for long-distance security in isolation.
    5. *Level-1.5 security of your vital data.
    #5. DVD professional archival media
    Drawbacks:
    1. DVD single-layer disks are limited to 4.7Gigabytes of data.
    2. DVD media are, given rough handling, prone to scratches and light-degradation if not stored correctly.
    Advantages:
    1. Archival DVD professional blank media is rated for in excess of 100+ years.
    2. DVD is not subject to mechanical breakdown.
    3. DVD archival media is not subject to ferromagnetic degradation.
    4. DVD archival media correctly sleeved and stored is currently a supreme storage method of archiving vital data.
    5. DVD media is once written and therefore free of data corruption if the write is correct.
    6. DVD media is the perfect ideal for “freezing” and isolating old copies of data for reference in case newer generations of data become corrupted and an older copy is needed to revert to.
    7. Best-idealized 4th platform redundancy for data protection.
    8. *Level-3 (highest) security of your vital data. 
    [*Level-4 data security under development as once-written metallic plates and synthetic sapphire and likewise ultra-long-term data storage]
    #6. Cloud based storage
    Drawbacks:
    1. Cloud storage can only be quasi-possessed.
    2. No genuine true security and privacy of data.
    3. Should never be considered for vital data storage or especially long-term.
    4. *Level-0 security of your vital data. 
    Advantages:
    1. Quick, easy and cheap storage location for simplex files for transfer to keep on hand and yet off the computer.
    2. Easy source for small-file data sharing.
    #7. Network attached storage (NAS) and JBOD storage
    Drawbacks:
    1. Subject to RAID failure and mass data corruption.
    2. Expensive to set up initially.
    3. Can be slower than USB, especially over WiFi.
    4. Mechanically identical to USB HD backup in failure potential, higher failure however due to RAID and proprietary NAS enclosure failure.
    Advantages:
    1. Multiple computer access.
    2. Always on and available.
    3. Often has extensive media and application server functionality.
    4. Massive capacity (also its drawback) with multi-bay NAS, perfect for full system backups on a larger scale.
    5. *Level-2 security of your vital data.
    JBOD (just a bunch of disks / drives) storage
    Identical to NAS in form factor except drives are not networked or in any RAID array, rather best thought of as a single USB feed to multiple independent drives in a single powered large enclosure. Generally meaning a non-RAID architecture.
    Drawbacks:
    1. Subject to HD failure but not RAID failure and mass data corruption.
    Advantages:
    1. Simplex multi-drive independent setup for mass data storage.
    2. Very inexpensive dual purpose HD storage / access point.
    3. *Level-2 security of your vital data.
    Bare hard drives and docks. The most reliable and cheapest method of hard drive data storage, archives, and redundancies
    The best method for your data archives and redundancies, which is also the least expensive, the most reliable, and the most compact option is the purchase of naked hard drives and at least one USB 3.0 HD dock ($40 roughly).
    Regarding Time Machine and your MacBook or desktop: while your primary backup is best saved to a conventional USB (or FireWire / Thunderbolt) hard drive inside an enclosure, the most important part of your data protection begins after your 1st / primary Time Machine backup; these are your secondary (and most important) data storage devices, archives and their redundancies.
    However, bare hard drives and docks (below) also work perfectly as a Time Machine backup for home use, since the docking station is certainly not very portable, as a notebook Time Machine backup device should be. Nor should a bare HD be carried around with a notebook; it should remain at home or the office.
    (Picture omitted: six terabytes of 2.5" HDs in a very compact space.)
    Bare hard drives and docks have the lowest cost, the highest reliability, and take up the smallest storage space
    Drawbacks:
    1. Care and knowledge in general handling of naked hard drives (how not to shock a bare HD, and how to hold them properly). Not a genuine drawback.
    Advantages:
    1. By far the least expensive method of mass HD storage on a personal basis. Highest quality naked HD can be purchased in bulk very cheap.
    2. Eliminates the horrible failure point of SATA bridges and interfaces between external drives and the computer.
    3. Per square foot you can store more terabytes of data this way than any other.
    4. Fast, easy, no fuss and most simplex method of data storage on hard drives.
    Time Machine is a system  backup, not a data backup
    Important data you “don’t dare lose” should not be considered ultimately safe, or ideally stored (at the very least not as the sole copy of same), on your Time Machine backup. Hourly and daily fluctuations of your system OS, applications, and software updates are the perfect focus for the simple user to conduct ‘click it and forget it’ backups of the entire system and files on the MacBook HD.
    Bootable clones are the choice of professionals and others in that Time Machine cannot be booted from and requires a working HD to retrieve data from (meaning another computer). Your vital data needs to be and should be ‘frozen’ on some form of media storage, either in a clone, as an archived HD containing important files, or on DVD blank archival media.
    A file that is backed up to Time Machine is unsafe in that if that file is deleted off the computer by accident or lost otherwise, that file will likewise vanish from Time Machine as it reflects changes on the internal computer HD/SSD.

Maybe you are looking for

  • List of "parameter keys"

    Can anyone tell me where to find the current list of "parameter keys"? Im using the run_product built-in and I would like to know what "keys" I can pass to reports? From oracle documentation key     The name of the parameter. The data type of the key

  • Creative Cloud subscription for commercial use?

    I cant seem to find a straight answer to my question of; I am able to subscribe to Adobe Creative Cloud (Australia) for commercial use? Thanks Ashly

  • LEAP Problems (still)

    So far, I have tried all of the "fixes" that have been posted here to get LEAP to work with my MBP. Updating to 10.4.6 didn't solve the problem exactly... Before the update I couldn't connect to the network at all.... Now, I can connect, but the card

  • Serial #'s and my account info

    Hi, I hope I'm posting in the right forum. I uninstalled Photoshop CS and Pagemaker 7 from my main hard drive and reinstalled them on my external hard drive. My main drive was filling up. I could not log into my account with the Adobe ID, my email or

  • OSS Component for GTS

    What component does a customer use to register issues with GTS? Not clear if GTS is under GRC, SLL-LEG or? Ideally trying to find a list of customers for Mill Products & Mining. Thanks Brian