Backup Metadata Controller Questions...

I have two questions regarding the Backup Metadata Controller for Apple's Xsan.
Question 1: Do I need to purchase an Xsan client license ($999) for the Backup Metadata Controller AND the primary? I was hoping the backup could use the same license as the primary.
Question 2: I'm having trouble finding the hardware requirements for the MDC. I was hoping to use a single-CPU G5 Xserve as the primary and a dual G4 Xserve as the backup. We plan to have only a handful of clients and will only be working with DV-25 and MPEG-2 files, no HD stuff here. Will these Xserves do the trick?
Any ideas would be great!
Thanks!

The licensing confused me at first too, until I realized that every computer that is direct-attached to fiber and mounts the Xsan volume directly is a client, needs the software, and therefore needs a separate license. That's the way it works. Computers that connect via a re-share (AFP/NFS/SMB) do not need an extra license.
A Google search for the Xsan MDC requirements brought up this page:
http://docs.info.apple.com/article.html?artnum=302143
System Hardware Requirements
Xserve or Xserve G5
Power Mac G4 (dual 800MHz or faster)
Power Mac G5 (any model)
Consult the manuals for more detailed info:
http://www.apple.com/server/documentation/

Similar Messages

  • How important is a dedicated metadata controller?

    Hello,
    I'm currently setting up a computational cluster consisting of 16 Xserves (eventually 32) and 5 Mac Pros, each of which needs to simultaneously write to a single volume on a shared 10TB Promise VTrak RAID. My motivation for using Xsan is data integrity and redundancy (which is critical), not speed (which is nice, but not necessary).
    For this type of setup, how important is it to have a dedicated metadata controller? Ideally, I'd like to have one Xserve as a dedicated controller for both Xgrid and metadata, the Mac Pros as Xgrid clients and agents, and the remaining Xserves as Xgrid agents. Everything I read says that there will be "decreased performance" if the metadata controller is used for functions other than Xsan, but what exactly is this decreased performance? I can live with some latency in data access while the metadata controller is doing other things, provided all data gets written eventually. There will be no other traffic (mail, web, etc.) on this network.
    I know the first thought is: with so many machines, why not just use one as a dedicated metadata controller? Again, data access isn't the primary goal here; each machine performing computations takes about a day off my computation time, so I want every available asset working towards that goal. Even if I have to wait five minutes to access the data, I'm still way ahead of the game using the other machine for computations.
    In the event that a dedicated metadata controller is vital to using Xsan in this situation, are there other solutions which would perhaps better suit my needs?

    The question really is "how important is the reliability of your Xsan?", because once it becomes a single shared resource, everything depends on it; this is the same in post-production or data-centre workflows.
    The ideal situation is to have no other services pulling clock cycles from the processor, because it could coincide with Xsan requiring resources. The tricky thing in Xsan deployment is latency reduction: the controllers and switches don't look busy, and they aren't, but the smooth running of the system depends on that reduced latency, and when you lose it you're going to run into some strange and probably intermittent problems.
    I've cut corners in the past by running Open Directory and DNS on the backup controller, but never on the primary. Really, it depends how busy the whole system is going to get, but don't expect Xsan to behave simply when the latency starts to go. It's a very robust file system, but it won't slow down in a linear way like a busy web server; it will slow, then hiccup, stop, start, or fall over entirely, which is not good in a critical situation.
    One solution I've used in a similar situation is a Small Tree six-port Ethernet card in an Xserve with all ports bound together (make sure your switch can handle link aggregation). It creates a nice big pipe to a file share in a situation where Xsan won't work. You can't edit video like this, but you can remove the server bottleneck when you have several client computers requiring big file transfers.
    Another route is iSCSI, but it's still early days in the Apple world; ATTO now makes an iSCSI initiator for Macs, but I can't say I've deployed it.
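    If you try the bonded-Ethernet route, the aggregate can be created from the command line. This is only a sketch: "BigPipe" and the en ports are placeholders, and the switch ports must be configured for 802.3ad link aggregation first.

        networksetup -createBond BigPipe en0 en1 en2 en3
        ifconfig bond0    # the new virtual interface should list the member ports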

  • Windows Server Primary & Secondary Domain Controller Question

    lulzchicken wrote:
    "Right now the DHCP is assigning 192.168.200.1 (DNS server) and 8.8.8.8 (Google's DNS) as DNS servers for each client. I don't necessarily want to change these assignment settings."
    Yes, you do. This is absolutely the worst thing you can ever do with DNS. More details on why here -> Ramblings of a Sysadmin: How to do DNS correctly
    Primary and secondary DNS should ALWAYS be internal.
    Your DNS servers should use FORWARDERS to go out to Google. That's the only place that should see Google's DNS servers in your environment.
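    For example, on the internal DNS server (EL1 in this thread), dnscmd can set the forwarders so that only the server itself talks to Google; a sketch:

        dnscmd EL1 /ResetForwarders 8.8.8.8 8.8.4.4
        rem then have the DHCP scope hand out only 192.168.200.1 as the clients' DNS server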

    Hi everyone, thank you for taking the time to listen.
    I have successfully implemented an Active Directory setup using a Primary DC and a Secondary DC with Windows Server 2012 R2.
    EL1 is my PDC and EL2 is my BDC.
    Active Directory is in sync between the two Domain Controllers. Here is my question:
    If I have a Group Policy that sets the wallpaper of each client machine to whatever is in "\\EL1\Wallpaper\wp.jpg", what would happen if that Domain Controller failed? That directory would no longer be available due to the outage, even though the Backup Domain Controller would still be pushing out the policy (pointing at the downed server).
    My idea was to have that directory replicated on the Backup Domain Controller as "\\EL2\Wallpaper\wp.jpg"; however, the policy will still be looking for the file on the primary...
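    One common workaround (a sketch, not from the thread): keep the wallpaper in the domain's NETLOGON share, which replicates to every DC automatically, and point the GPO at a domain-based UNC path rather than at a specific server:

        rem copy once to any DC; SYSVOL/NETLOGON replication distributes it to EL2 as well
        copy wp.jpg \\EL1\NETLOGON\Wallpaper\wp.jpg
        rem then reference it in the GPO by domain name (hypothetical domain shown):
        rem \\yourdomain.local\NETLOGON\Wallpaper\wp.jpg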

  • Backup metadata enumeration failed on protection group

    I am seeing "Backup metadata enumeration failed" intermittently on one protection group. It comes and goes; I'll check one time and the protection group is fine, then check the next day and the warning appears. Though I can't correlate it, it seems that when the warning is present, no item-level restores can take place. Two questions: what's the best way to research/diagnose this warning, and is there any definitive corrective action?

  • STMS Backup Domain Controller.

    Hi,
    I want to configure a backup domain controller in my SAP system. The current controller is on my Production system (PRD); I would like to configure the backup controller on Development (DEV).
    Currently, my OS is AIX. The NFS export for /usr/sap/trans is on Production.
    My question is: if I have a hardware problem on PRD that also impacts my NFS, wouldn't the backup controller I created be pointless as well? Please correct me if I am wrong.
    Hope to get feedback.
    Thanks in advance,
    IAzir.

    Hi,
    I know an individual /usr/sap/trans can be implemented on Windows, but not on UNIX; UNIX uses NFS. I might be wrong. Have you tried it before?

  • Problematic issues in installing backup domain controller on Virtual Machine

    Hello,
    I have a physical domain controller (Windows Server 2012 R2 Standard) installed in my domain environment; this is the first root domain controller.
    I also have Hyper-V Server 2012 R2 installed and joined to that domain.
    Now I want to install an additional (backup) domain controller as a virtual machine hosted on the Hyper-V server. While promoting the VM to a DC, all actions and steps go well, but the problem arises when I press the Install button at the end of the promotion: the installation gets stuck while writing some configuration files on the first DC and during replication. Unfortunately the VM does not get promoted as a DC and it restarts.
    An error event with source NETLOGON is logged on the virtual machine as well.
    Do you have any suggestions on this issue, or experience with how to resolve it?
    Thanks a lot in advance,
    GMG

    "Now I want to install an additional (Backup) domain controller"
    There is no "backup" DC; all DCs are read-write (RW) except RODCs.
    I would recommend first checking the health status of the existing DC using the dcdiag command. Also, please check the IP settings in use: make sure that the existing DC has its primary IP address in use and that public DNS servers are set as forwarders, not in the DC's IP settings. For the new DC, make sure that it points to the existing DC as its primary DNS server; once it is promoted, you can see the recommendations here for updating the configuration: http://social.technet.microsoft.com/wiki/contents/articles/18513.active-directory-replication-issues-basic-troubleshooting-steps-single-ad-domain-in-a-single-ad-forest.aspx
    Please also temporarily disable all security software in use on the DCs, and make sure that the ports needed for AD replication and authentication are not blocked or filtered between the DCs.
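    A concrete starting point for those health checks (both tools ship with Windows Server / RSAT):

        rem comprehensive health check across all DCs, logged to a file
        dcdiag /v /c /e /f:dcdiag.log
        rem replication status for every DC, exported as CSV
        repadmin /showrepl * /csv > repl.csv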

  • Xserve G5 Cluster as Metadata Controller?

    We just got a great deal on a G5 Cluster. Would it make a good metadata controller for our future SAN? The MDC does not use local storage for the metadata right? How much RAM should it have? Thanks for all the info! All you guys are great!

    Thanks for the advice. I do have ARD. It's great on the LAN, but I find it easier to get things done over the WAN using Timbuktu. I wish ARD would let me monitor Windows servers too; I have to keep a couple around and they're a pain to monitor. We always get video cards added to the Xserves anyway. We have a couple of big KVM switches. Sometimes you just have to see the real thing. I'm not into Terminal!

  • Metadata Controller, inability to login to diagnose Xsan issue

    Hi,
    I'm trying to diagnose an issue with a small Xsan (1.x) we have here at the Uni.
    The SAN volume isn't being mounted on the clients, so I assume the finger points at the Xserve G5 metadata controller.
    I can ping it OK, but I can't access it via ssh (remote host refused connection), ARD, or any other means I can think of; the Xserve doesn't have a VGA card, btw.
    What's the best way to proceed whilst (hopefully) preserving the SAN's integrity?
    PS: Before my intervention the SAN had been power-cycled to no avail, and I'm not sure the 'correct' procedure was followed; well, it probably wasn't, as you can't get in to gracefully shut each component down.
    Cheers in advance for any thoughts!

    1. Are you authenticated to the clients that you want to promote to controllers? If so, perhaps you have a problem with your metadata subnet.
    2. Sounds like possibly the same issue.
    In order to change settings, there needs to be communication over the metadata network between clients and controllers. Your logs should have some answers.
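    For pulling those logs, something like this is the usual starting point (a sketch from memory for Xsan 1.x; "SanVol" is a placeholder for your volume name):

        # the controller-side volume log records volume starts/stops and client connections
        tail -n 100 /Library/Filesystems/Xsan/data/SanVol/log/cvlog
        # and confirm the metadata network from a client (placeholder address):
        ping -c 5 10.10.0.1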

  • Will the backup metadata older than 10 days be deleted automatically?

    If the CONTROL_FILE_RECORD_KEEP_TIME parameter is set to 10, will the backup metadata older than 10 days be deleted automatically? No.

    http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmarchi003.htm#sthref58
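    Worth noting (not from the thread): CONTROL_FILE_RECORD_KEEP_TIME only controls how long backup records survive in the control file; the backup pieces themselves are deleted when RMAN enforces a retention policy. A sketch:

        rman target /
        RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
        RMAN> DELETE OBSOLETE;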

  • Can't make a Backup Domain Controller

    I have one Open Directory Master and three replicas. The Master is also set to be the Windows Primary Domain Controller. But none of the replicas can join the domain or join as the Backup Domain Controller.
    When I search the logs on the Master, I see:
    could not find new user/computer luca$ in passdb
    luca (a replica) is in Workgroup Manager. I even added a Kerberos entry for it.
    Any help is greatly appreciated.

    Solved my own problem...
    Turns out that some users on the network took it upon themselves to join a Workgroup with the same name as our Domain. Samba doesn't seem to like that at all. So make sure your workgroup names are never the same as your domain names.

  • Upgrading Domain Controller Questions

    Hello, we currently have 2 domain controllers in our environment, both with Server 2003 R2. We are looking to upgrade them one at a time to 2008 R2 but I have some questions. 
    Here's the environment:
    Server 1 (the one we are going to upgrade first):
    Server 2003 R2
    Domain Controller
    DHCP Server
    DNS Server
    Server 2 (we will be upgrading this in the near future but not just yet):
    Server 2003 R2
    Domain Controller
    DHCP Server
    DNS Server
    File Server with most of the company data
    We also have DNS replication set up between the two servers. 
    My questions:
    Will we run into any issues having two domain controllers with different operating systems?
    We would like the domain controllers to keep the same names and IPs. Any issues with that?
    How do we stop, then re-set up, DNS replication between the two servers?
    Any other gotchas we should be aware of?
    Dan Chandler-Klein

    I don't see any reason not to keep the old names and IPs.
    Before upgrading, make sure AD has no issues:
    look at the Event Viewer, run dcdiag, and check that replication runs clean (repadmin /showrepl), etc.
    Make sure the OS has no warnings/errors.
    It's not a must, but I would move the FSMO roles to another DC before demoting (a sketch follows below).
    Make sure the applications installed on the new DCs (AV/backup agents, etc.) support Windows 2008 R2.
    Make sure all the network applications in your environment support working with a Windows 2008 R2 DC - I recommend testing it in a lab first.
    Make sure the DC you are about to demote is not holding the CA role.
    Most important:
    Make sure you successfully demote the old DC and that no records are left behind in DNS.
    I don't agree with evrimicelli about DC naming, and I wouldn't go for a CNAME record - this can get you into a lot of trouble in the future.
    After demoting the old DC, I would rename it or remove it from the domain; then you can rename the new server with the old DC's name and promote it to a DC with the old DC's IP address.
    I didn't understand the question about DNS replication.
    What kind of DNS zone do you host? If it's AD-integrated (and that's what you should have), you don't need to configure any replication; an AD-integrated DNS zone replicates as part of AD replication between your two DCs.
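    The FSMO check and move mentioned above can be done roughly like this (a sketch; "SERVER2" is a placeholder, and the PowerShell cmdlet requires the Active Directory module, available from 2008 R2 onward):

        netdom query fsmo
        Move-ADDirectoryServerOperationMasterRole -Identity "SERVER2" -OperationMasterRole PDCEmulator,RIDMaster,InfrastructureMaster,SchemaMaster,DomainNamingMaster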

  • Backup problems and questions (v3)

    Hi,
    I've been reading the information regarding backups, and I want to make sure I'm safe. I want to maintain a "hot failover" in the event that my current environment becomes corrupt or unusable. My failover directory is close to 100% safe. I want to maintain two or three versions back of the data, without taking an extraordinary amount of disk space.
    I had originally tried using db_hotbackup for a failover backup, yet I kept running into strange problems; namely, the db_recover step seemed to run on my production environment and was taking an extraordinarily long time. Long story short, after a very tedious trial-and-error period, I've decided to use dbxml_dump as a backup. I'd like to know all the steps it takes when dumping the data.
    My procedure is currently:
    1) Run a checkpoint on current environment.
    2) Copy containers, and then all active and inactive database files to my failover directory ("failover_dir").
    3) Dump from the failover_dir to dump files.
    4) Once dbxml_dump is done, I will run db_archive -d on both the production and failover environments.
    A few questions:
    1) First, of course, is this an acceptable "backup"?
    2) Does dbxml_dump do any kind of verification (db_verify) prior to running? If not, what will happen if it is run against a corrupt database?
    3) After I get the above working, I want to make sure about #4: can I safely delete the log files that are no longer needed (db_archive -d)? That is, can I safely run dbxml_dump again without those files?
    Thanks,
    Kevin

    Thanks, George!
    I would have liked to use db_hotbackup instead of dbxml_dump/load, but, strangely, it's one of the few things that isn't working as advertised. I run a checkpoint every half hour, and db_recover still takes forever. I think dbxml_dump/load is going to work for me--except for one potential issue. If I change my mind, I will post on the BDB forum.
    (2) As for ceasing operations, yes, I am temporarily stopping the reads/writes, copying the dbxml/log files to another directory, and performing dbxml_dump on it.
    My one potential issue is related to the indexes you mention: "dbxml_dump will copy only content/metadata and no index databases; those are recreated during dbxml_load"
    Does that mean 1) that dbxml_dump will output the list of indexes, and when dbxml_load is run it will recreate them, or 2) that there is no index info dumped and I will have to maintain the list and reindex the container after dbxml_load is done? (I'm currently having an issue with 'addIndex', asked in another thread.)
    Also, the docs seem to indicate that you can dbxml_load into an existing container. Yet when I try to do this, I get "Error: File exists". I can successfully run dbxml_load and have it create the container.
    #3 is a great idea--I will incorporate it into my backup routine.
    --kevin
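    For reference, the checkpoint/copy/dump/archive sequence described above might look roughly like this in the shell (paths and container names are hypothetical):

        # 1) force a checkpoint on the production environment
        db_checkpoint -1 -h /prod/env
        # 2) copy containers plus active/inactive log files to the failover directory
        cp /prod/env/*.dbxml /prod/env/log.* /failover_dir/
        # 3) dump from the failover copy, not from production
        dbxml_dump -h /failover_dir -f data.dump data.dbxml
        # 4) remove log files that are no longer needed
        db_archive -d -h /prod/env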

  • Exchange Server 2013 backup and restoration question

    Good afternoon
    I am wondering if I can pick people's brains here if that's not too much trouble.
    I have recently implemented an Exchange 2013 environment with three servers (two multi-role CAS & MBX physical machines, and one further virtual CAS server). Everything is working correctly and mail flow is fine. My question relates to backup and restoration procedures. I am running a single Database Availability Group with the two multi-role servers as members, and I have one mailbox database on the DAG, passive on one member and active on the other. Failover has been tested and is functioning as expected.
    The backup environment I have implemented is as follows. I have a separate Microsoft DPM 2012 server that runs a nightly bare-metal backup of both multi-role servers with a retention range of 14 days, and I have protected the DAG using DPM with a nightly full express backup and 4-hourly syncs on one of the multi-role servers. Furthermore, the mailbox database is protected, again nightly, with a copy backup.
    I am confident I know what to do should the mailbox database become corrupted, lost, or otherwise in need of restoration (it would just be a case of restoring the backed-up mailbox database using DPM to a pre-created recovery database). What I am not so sure about is what I would do should I lose one or both of the multi-role CAS and MBX servers (the third CAS I am not so worried about, as it is not used for incoming mail flow from the internet and we really only use it for ECP, which we did not want to expose to the internet). I wonder what process I should follow to restore my Exchange servers (I know how to perform the bare-metal recoveries using WSB) and what configuration would be required after restoring the bare-metal backups.
    I know this is a reasonably long question, but if anyone has any advice I would appreciate it greatly, in the unlikely event something goes horribly wrong with my Exchange environment.
    Thanks in advance

    Hi,
    Yes, most of the configuration settings are stored in AD; mail data and personal data are stored in the mailbox database. We just need to take these two points into consideration.
    Thanks,
    Simon Wu
    TechNet Community Support
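    For the "pre-created recovery database" step mentioned in the question, the Exchange Management Shell side looks roughly like this (server name, paths, and mailbox names are hypothetical):

        New-MailboxDatabase -Recovery -Name RDB1 -Server MBX01 -EdbFilePath "D:\Recovery\RDB1\RDB1.edb" -LogFolderPath "D:\Recovery\RDB1"
        # once DPM has restored the database files into that path and the DB is mounted:
        New-MailboxRestoreRequest -SourceDatabase RDB1 -SourceStoreMailbox "Jane Doe" -TargetMailbox jdoe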

  • Part-backup, full Library question

    I'm about to get a 32GB flash drive (it's on offer) and will use it to backup some iTunes data. This includes a big project about 17GB in size, with lots of nested playlists and important tagged data. This will easily fit on the drive, along with my Favourites playlist. However, there isn't room for my entire iTunes folder which is over 38GB.
    The question is, how to achieve this without losing any of the important stuff like playlists, tags, ratings, etc? I can, for example, drag the relevant playlists to the flash drive from within iTunes. I could also drag the iTunes Library files there too. But those will refer to the whole iTunes Library, not just the 66% - 75% of it I'm backing up.
    What would happen if I ever tried to recreate iTunes on a different machine, from the iTunes Library file on the flash drive? The Library file and the music won't be in sync with each other.
    I already have a full backup of iTunes on an external HDD, but this is 'important belt and braces' stuff - I want to know that my most important music and playlists and data are preserved should the worst ever happen and I need to use what's on the flash drive.

    I don't think you can do exactly what you want to do -- not easily.
    The following is just theoretical, but it might work (I would back up first to make sure!):
    You could try selectively consolidating some items to an iTunes folder on the flash drive. Consolidating, I believe, just copies files, not moves them (or my references are out of date). Then, once there is a partial duplicate of the library on the flash drive, you could copy the library files, artwork, etc. to the drive, so you reconstruct everything a normal iTunes folder would need to start with a partial version of your current library. Of course there would be many broken links, but you could use one of the Dougscripts to remove dead links for the files you hadn't copied.
    I have never consolidated, but I think you change the media location preference in your iTunes preferences. Then do not let it organize things when you close preferences -- you don't want it to do that automatically for everything. Select the items to be copied in iTunes; there will be a consolidate command you can use to consolidate the selected items. Afterwards, go back and return your iTunes preferences to normal.

  • Basic backup and maintenance questions - ZCM 10.3.3

    Hello all,
    I am looking for information and feedback about ZCM maintenance and database backup.
    We have ZCM 10.3.3 running on a SLES 11 VM. I keep a clone of the ZCM server, made whenever updates are applied, as a backup.
    I guess I have a couple of simple questions:
    1. How do I tell which database type was selected when ZCM was setup?
    2. What are the basic maintenance procedures to address database issues?
    3. What should I be backing up for the database?
    I did not build this ZCM server and I need to know more. Mostly I need to do some database maintenance, as workstation deletions take an extremely long time; I have read elsewhere in the forums that I should be performing DB maintenance, and I just need some sort of procedure.
    Thanks,
    Steve D.

    Hi,
    You can check inside the file zdm.xml to find out which database you have.
    The file is located at:
    On Windows:
    ZENworks_installation_path\conf\datamodel
    On Linux:
    /etc/opt/novell/zenworks/datamodel
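    On the Linux side, a quick way to see the database settings in that file (just a grep; the exact key names vary by version):

        grep -i database /etc/opt/novell/zenworks/datamodel/zdm.xml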
    For database maintenance, you can check the doc Novell Doc: System Planning, Deployment, and Best Practices Guide - Scalability, Fault Tolerance, Maintenance, and Sizing of the Database Server.
    You can also take a look at the disaster recovery guide, because it gives some info on what you need to back up:
    Novell Doc: ZENworks 10 Configuration Management System Administration Reference - Disaster Recovery
    Martin Dallaire
