Hyper-V 2012 Backup - best practice

I'm setting up a new Hyper-V Server 2012 and I'm trying to understand how best to configure it for backups and restores using Windows Server Backup. I have 5 VMs running, and each one has its own iSCSI drive.
I would like to back up the VMs to an external USB disk. I have the choice of backing up Hyper-V using a child partition snapshot, or backing up the VM folder with the config and the VHD files (is it right that Windows overwrites the old backup?).
What's the difference between the two methods? Is incremental backup possible with VMs?
Thanks

Hi,
There are two basic methods you can use to perform a backup. You can:
Perform a backup from the server running Hyper-V.
We recommend that you use this method to perform a full server backup because it captures more data than the other method. If the backup application is compatible with Hyper-V and the Hyper-V VSS writer, you can perform a full server backup that helps protect
all of the data required to fully restore the server, except the virtual networks. The data included in such a backup includes the configuration of virtual machines, snapshots associated with the virtual machines, and virtual hard disks used by the virtual
machines. As a result, using this method can make it easier to recover the server if you need to, because you do not have to recreate virtual machines or reinstall Hyper-V.
Perform a backup from within the guest operating system of a virtual machine.
Use this method when you need to back up data from storage that is not supported by the Hyper-V VSS writer. When you use this method, you run a backup application from the guest operating system of the virtual machine. If you need to use this method, you
should use it in addition to a full server backup and not as an alternative to a full server backup. Perform a backup from within the guest operating system before you perform a full backup of the server running Hyper-V.
iSCSI-based storage is supported for backup by the Hyper-V VSS writer when the storage is connected through the management operating system and the storage is used for virtual hard disks.
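If it helps, here is a minimal sketch of a host-level backup using the Windows Server Backup PowerShell cmdlets on Server 2012. It assumes the Windows Server Backup feature (including its command-line tools) is installed and that the external USB disk is mounted as E:; the drive letter and VM names are placeholders for your environment.

# Sketch only: back up the VMs from the Hyper-V host with Windows Server Backup.
$policy = New-WBPolicy

# Everything the Hyper-V VSS writer exposes (host component plus each VM);
# filter this list if you only want specific guests.
$vms = Get-WBVirtualMachine
Add-WBVirtualMachine -Policy $policy -VirtualMachine $vms

# Point the policy at the USB disk and run a one-off backup.
$target = New-WBBackupTarget -VolumePath "E:"
Add-WBBackupTarget -Policy $policy -Target $target
Start-WBBackup -Policy $policy

# Roughly equivalent command line on Server 2012 (VM names are placeholders):
# wbadmin start backup -backupTarget:E: -hyperv:"VM1,VM2"

The same policy can be scheduled with Set-WBSchedule if you want recurring backups instead of a one-off run.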
For more information, please refer to the following MS articles:
Planning for Backup
http://technet.microsoft.com/en-us/library/dd252619(WS.10).aspx
Hyper-V: How to Back Up Hyper-V VMs from the Host Using Windows Server Backup
http://social.technet.microsoft.com/wiki/contents/articles/216.hyper-v-how-to-back-up-hyper-v-vms-from-the-host-using-windows-server-backup.aspx
Backing up Hyper-V with Windows Server Backup
http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/03/11/backing-up-hyper-v-with-windows-server-backup.aspx
Lawrence
TechNet Community Support

Similar Messages

  • Portal backup best practice!!

    Hi ,
    Can anyone let me know where I can get Portal backup best practices? We are using SAP EP7.
    Thanks,
    Raghavendra Pothula

    Hi Tim: Here's my basic approach for this -- I create either a portal dynamic page or a stored procedure that renders an HTML parameter form. You can connect to the database and render whatever sort of drop-downs, check boxes, etc. you desire. To tie everything together, just make sure that when you create the form, the names of the fields match those of the page parameters created on the page. This way, when the form posts to the same page, it appends the values for the page parameters to the URL.
    By coding the entire form yourself, you avoid the inherent limitations of the simple parameter form. You can also use advanced JavaScript to dynamically update the drop downs based on the values selected or can cause the form to be submitted and update the other drop downs from the database if desired.
    Unfortunately, it is beyond the scope of this forum to give you full technical details, but that is the approach I have used on a number of portal sites. Hope it helps!
    Rgds/Mark M.

  • E-business backup best practices

    What are the E-Business Suite backup best practices, tools or techniques?
    For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.

    user11969921 wrote:
    What are the E-Business Suite backup best practices, tools or techniques?
    For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.
    Please see previous threads for the same topic/discussion -- https://forums.oracle.com/search.jspa?view=content&resultTypes=&dateRange=all&q=backup+EBS&rankBy=relevance
    Please also see RMAN manuals (incremental backup, hot backup, ..etc) -- http://www.oracle.com/pls/db112/portal.portal_db?selected=4&frame=#backup_and_recovery
    Thanks,
    Hussein

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main points we'd like to do ( without using a backup agent within each VM ):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single file restore is probably the most difficult/manual. The rest can be done manually from the .snapshot directories, but when we're talking about having hundreds and hundreds of guests within OVM...this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from Oracle support on that.
    In short:
    - First, "manual" copies of files into the repository are not recommended nor supported.
    - Second, we have to go back and forth through templates and an HTTP (or FTP) server.
    Note that when creating a template or creating a new VM from a template, we're talking about full copies. No "fast-clones" (snapshots) are involved.
    This is ridiculous.
    How to back up a VM:
    1) Create a template from the OVM Manager console.
    Note: Creating a template requires the VM to be stopped (if the copy of the virtual disk is done while the VM is running, it will corrupt data), and the process of creating the template makes changes to the vm.cfg.
    2) Enable storage repository backups using the steps here:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    3) Mount the NFS export created above on another server.
    4) Then create a compressed file (tgz) using the relevant files (cfg + img) from the repository NFS mount.
    Here is an example of the template:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
    How to restore a VM:
    1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server.
    2) Import it into OVM Manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with DB, SSIS, SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively
    scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps do not need dedicated server?)
    4. Separate server for Development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or whether to utilise SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some license cost reductions (Kingsway tools)
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
    7. I need a server to hold file (import and extract) folders for use by SSIS packages, which will be accessible by different users
    I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirement for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for, in terms of separate servers for the different services and/or environments (Dev/Test/Live), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    In all cases, consider that having a separate server may increase licensing or hosting costs.
    Please allow me to recommend Windows Azure for cloud services.
    Answers:
    1. This is always a best practice.
    2. Having SSIS on a separate server allows you to isolate import/export packages, but may increase network traffic between servers. I don't know if your provider charges for incoming or outgoing traffic.
    3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
    4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
    5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
    6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
    7. SSIS packages may be competing with users for access to files. Copying the files to a disk resource available to the SSIS server may be a better solution.
    A few more things to consider:
    Performance of the storage subsystem on the cloud service.
    How many cores? How much RAM?
    Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • SQL backup best practice on VMs that are backed up as a complete VM

    Hi,
    Apologies, as I am sure this has been asked many times before, but I can't really find an answer to my question. So my situation is this: I have two types of backups, agent-based and snap-based backups.
    For the VMs that are being backed up by snapshots, the process is: VMware does the snap, then the SAN takes a snap of the storage, and then the backup is taken from the SAN. We then have full VM backups.
    For the agent-based backups, these are only backing up file-level stuff, so we use this for our SQL cluster and some other servers. These are not snaps/full VM backups, but simply backups of databases and files etc.
    This works well, but there are a couple of servers that need to be in the full VM snap category and therefore can't have the backup agent installed on the VM, as they are already being backed up by the snap technology. So what would be the best practice on these snapped VMs that have SQL installed as well? Should I configure a recurring backup in SQL Management Studio (if this is possible?) which is done before the VM snap backup? Or is there another way I should be backing up the DBs?
    Any suggestions would be very welcome.
    Thanks
    Aaron

    Hello Aaron,
    If I understand correctly, you perform a snapshot backup of the complete VM.
    In that case you also need to create a SQL Server backup schedule to perform Full and Transaction Log backups.
    (if you do a file-level backup of the .mdf and .ldf files with an agent, you also need to do this)
    I would run a database backup before the VM snapshot (to a SAN location if possible), then perform the Snapshot backup.
    You should set up the transaction log backups depending on business recovery needs.
    For instance: if your company accepts a maximum of 30 minutes data loss make sure to perform a transaction log backup every 30 minutes.
    In case of emergency you could revert to the VM Snapshot, restore the full database backup and restore transaction log backups till the point in time you need.
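    As a rough illustration of that schedule (a sketch only; the instance name, database name and share are placeholders, not taken from your post), the two backup types look like this when scripted with the SQL Server PowerShell tools. In practice you would put them in SQL Server Agent jobs or a maintenance plan rather than run them by hand.
    # Sketch: full backup before the VM snapshot window, log backup every 30 minutes.
    # Requires Invoke-Sqlcmd (SQL Server PowerShell tools) and the FULL recovery model
    # on the database for log backups to work. All names/paths are hypothetical.
    $instance  = "SQLCLUSTER\PROD"      # placeholder instance
    $backupDir = "\\SAN01\SqlBackups"   # placeholder SAN share

    # Full database backup, run before the VM snapshot job starts.
    $fullBackup = "BACKUP DATABASE [AppDb] TO DISK = N'$backupDir\AppDb_Full.bak' WITH INIT, COMPRESSION, CHECKSUM;"
    Invoke-Sqlcmd -ServerInstance $instance -Query $fullBackup

    # Transaction log backup, scheduled (e.g. as a SQL Server Agent job) every 30 minutes.
    $logBackup = "BACKUP LOG [AppDb] TO DISK = N'$backupDir\AppDb_Log.trn' WITH COMPRESSION, CHECKSUM;"
    Invoke-Sqlcmd -ServerInstance $instance -Query $logBackup
    Restoring then follows the order described above: recover the VM, restore the full backup WITH NORECOVERY, and apply the log backups up to the required point in time.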

  • Development System Backup - Best Practice / Policy for offsite backups

    Hi, I have not found any recommendations from SAP on best practices/recommendations on backing up Development systems offsite and so would appreciate some input on what policies other companies have for backing up Development systems. We continuously make enhancements to our SAP systems and perform daily backups; however, we do not send any Development system backups offsite which I feel is a risk (losing development work, losing transport & change logs...).
    Does anyone know whether SAP has any recommendations on backing up Development systems offsite? What policies does your company have?
    Thanks,
    Thomas

    Thomas,
    Your question does not mention consideration of both sides of the equation - you have mentioned the risk only.  What about the incremental cost of frequent backups stored offsite?  Shouldn't the question be how the 'frequent backup' cost matches up with the risk cost?
    I have never worked on an SAP system where the developers had so much unique work in progress that they could not reproduce their efforts in an acceptable amount of time, at an acceptable cost. There is typically nothing in dev that is so valuable as to be irreplaceable (unlike production, where the loss of 'yesterday's' data is extremely costly). Given the frequency that an offsite dev backup is actually required for a restore (seldom), and given that the value of the daily backed-up data is already so low, the actual risk cost is virtually zero.
    I have never seen SAP publish a 'best practice' in this area. Every business is different, and I don't see how SAP could possibly make a meaningful recommendation that would fit yours. In your business, the risk (the pro-rata cost of infrequently needing to use offsite storage to replace or rebuild 'lost' low-cost development work) may in fact outweigh the ongoing incremental costs of creating and maintaining offsite daily recovery media. Your company will have to perform that calculation to make the business decision. I personally have never seen a situation where daily offsite backup storage of dev was even close to making any kind of economic sense.
    Best Regards,
    DB49

  • SCOM 2012 Agent - Best Practices with Base Images

    I've read through the
    SCOM 2012 agent installation methods technet article, as well as how to
    install the SCOM 2012 agent via command line, but don't see any best practices with regard to how to include the SCOM 2012 agent in a base workstation image. My understanding is that the SCOM agent's unique identifier is created at the time of client installation,
    is this correct? I need to ensure that this is a supported configuration before I can recommend it. 
    If it is supported, and it does work the way I think it does, I'm trying to find out a way to strip out the unique information so that a new client GUID will be created after the machine is sysprepped, similar to how the SCCM client should be stripped of
    unique data when preparing a base image. 
    Has anyone successfully included a SCOM 2012 (or 2007 for that matter) agent in their base image?
    Thanks, 
    Joe

    Hi
    It is fine to build the agent into a base image but you then need to have a way to assign the agent to a management group. SCOM does this via AD Integration:
    http://technet.microsoft.com/en-us/library/cc950514.aspx
    http://blogs.msdn.com/b/steverac/archive/2008/03/20/opsmgr-ad-integration-how-it-works.aspx
    http://blogs.technet.com/b/jonathanalmquist/archive/2010/06/14/ad-integration-considerations.aspx
    http://thoughtsonopsmgr.blogspot.co.uk/2010/07/active-directory-ad-integration-when-to.html
    http://technet.microsoft.com/en-us/library/hh212922.aspx
    http://blogs.technet.com/b/momteam/archive/2008/01/02/understanding-how-active-directory-integration-feature-works-in-opsmgr-2007.aspx
    You have to be careful in environments with multiple forests if no trust exists.
    http://blogs.technet.com/b/smsandmom/archive/2008/05/21/opsmgr-2007-how-to-enable-ad-integration-for-an-untrusted-domain.aspx
    http://rburri.wordpress.com/2008/12/03/untrusted-ad-integration-suppress-misleading-runas-alerts/
    You might also want to consider group policy or SCCM as methods for installing agents.
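    For what it's worth, a command-line install that leaves management group assignment to AD Integration looks roughly like this. It is only a sketch based on the documented MOMAgent.msi properties; the share path is a placeholder, and you should verify the property names against the installation methods article above before baking this into an image or task sequence.
    # Sketch: install the SCOM 2012 agent so it picks up its management group
    # from AD Integration rather than baking group-specific settings into the image.
    # MOMAgent.msi ships on the SCOM installation media; the path is a placeholder.
    $msi = "\\fileserver\SCOM\Agent\AMD64\MOMAgent.msi"
    Start-Process msiexec.exe -Wait -ArgumentList @(
        "/i", $msi, "/qn",
        "USE_SETTINGS_FROM_AD=1",            # let AD Integration assign the management group
        "USE_MANUALLY_SPECIFIED_SETTINGS=0",
        "ACTIONS_USE_COMPUTER_ACCOUNT=1",    # run the agent action account as Local System
        "AcceptEndUserLicenseAgreement=1"
    )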
    Cheers
    Graham
    Regards, Graham. New System Center 2012 blog:
    http://www.systemcentersolutions.co.uk
    View OpsMgr tips and tricks at
    http://systemcentersolutions.wordpress.com/

  • Backup "best practices" scenarios?

    I'd be grateful for a discussion of different "best practices" regarding backups, taking into consideration that a Time Machine backup is only as good as the external disk it is on.
    My husband & I currently back up our two macs, each to its own 500 GB hard disk using TM. We have a 1TB disk and I was going to make that a periodic repository for backups for each of our macs, in case one of the 500GB disks fails, but was advised by a freelance mac tech not to do that. This was in the middle of talking about other things & now I cannot remember if he said why.
    We need the general backup, and also (perhaps a separate issue) I am particularly interested in safeguarding our iPhoto libraries. Other forms of our data can be redundantly backed up to DVDs but with 50GB iPhoto libraries that doesn't seem feasible, nor is backing up to some online storage facility. My own iPhoto library is too large for my internal disk; putting the working library on an external disk is discouraged by the experts in the iPhoto forum (slow access, possible loss if connexion between computer & disk interrupted during transfer), so my options seem to be getting a larger internal disk or using the internal disk just for the most recent photos and keeping the entire library on an external disk, independent of incremental TM backups.

    It's probably more than you ever wanted to know about backups, but some of this might be useful:
    There are three basic types of backup applications: *Bootable Clone, Archive, and Time Machine.*
    This is a general explanation and comparison of the three types. Many variations exist, of course, and some combine features of others.
    |
    _*BOOTABLE "CLONE"*_
    |
    These make a complete, "bootable" copy of your entire system on an external disk/partition, a second internal disk/partition, or a partition of your internal disk.
    Advantages
    If your internal HD fails, you can boot and run from the clone immediately. Your Mac may run a bit slower, but it will run, and contain everything that was on your internal HD at the time the clone was made or last updated. (But of course, if something else critical fails, this won't work.)
    You can test whether it will run, just by booting-up from it (but of course you can't be positive that everything is ok without actually running everything).
    If it's on an external drive, you can easily take it off-site.
    Disadvantages
    Making an entire clone takes quite a while. Most of the cloning apps have an update feature, but even that takes a long time, as they must examine everything on your system to see what's changed and needs to be backed-up. Since this takes lots of time and CPU, it's usually not practical to do this more than once a day.
    Normally, it only contains a copy of what was on your internal HD when the clone was made or last updated.
    Some do have a feature that allows it to retain the previous copy of items that have been changed or deleted, in the fashion of an archive, but of course that has the same disadvantages as an archive.
    |
    _*TRADITIONAL "ARCHIVE" BACKUPS*_
    |
    These copy specific files and folders, or in some cases, your entire system. Usually, the first backup is a full copy of everything; subsequently, they're "incremental," copying only what's changed.
    Most of these will copy to an external disk; some can go to a network locations, some to CDs/DVDs, or even tape.
    Advantages
    They're usually fairly simple and reliable. If the increments are on separate media, they can be taken off-site easily.
    Disadvantages
    Most have to examine everything to determine what's changed and needs to be backed-up. This takes considerable time and lots of CPU. If an entire system is being backed-up, it's usually not practical to do this more than once, or perhaps twice, a day.
    Restoring an individual item means you have to find the media and/or file it's on. You may have to dig through many incremental backups to find what you're looking for.
    Restoring an entire system (or large folder) usually means you have to restore the most recent Full backup, then each of the increments, in the proper order. This can get very tedious and error-prone.
    You have to manage the backups yourself. If they're on an external disk, sooner or later it will get full, and you have to do something, like figure out what to delete. If they're on removable media, you have to store them somewhere appropriate and keep track of them. In some cases, if you lose one in the "string" (or it can't be read), you've lost most of the backup.
    |
    _*TIME MACHINE*_
    |
    Similar to an archive, TM keeps copies of everything currently on your system, plus changed/deleted items, on an external disk, Time Capsule (or USB drive connected to one), internal disk, or shared drive on another Mac on the same local network.
    Advantages
    Like many Archive apps, it first copies everything on your system, then does incremental backups of additions and changes. But TM's magic is, each backup appears to be a full one: a complete copy of everything on your system at the time of the backup.
    It uses an internal OSX log of what's changed to quickly determine what to copy, so most users can let it do its hourly incremental backups without much effect on system performance. This means you have a much better chance to recover an item that was changed or deleted in error, or corrupted.
    Recovery of individual items is quite easy, via the TM interface. You can browse your backups just as your current data, and see "snapshots" of the entire contents at the time of each backup. You don't have to find and mount media, or dig through many files to find what you're looking for.
    You can also recover your entire system (OSX, apps, settings, users, data, etc.) to the exact state it was in at the time of any backup, even if that's a previous version of OSX.
    TM manages its space for you, automatically. When your backup disk gets near full, TM will delete your oldest backup(s) to make room for new ones. But it will never delete its copy of anything that's still on your internal HD, or was there at the time of any remaining backup. So all that's actually deleted are copies of items that were changed or deleted long ago.
    TM examines each file it's backing-up; if it's incomplete or corrupted, TM may detect that and fail, with a message telling you what file it is. That way, you can fix it immediately, rather than days, weeks, or months later when you try to use it.
    Disadvantages
    It's not bootable. If your internal HD fails, you can't boot directly from your TM backups. You must restore them, either to your repaired/replaced internal HD or an external disk. This is a fairly simple, but of course lengthy, procedure.
    TM doesn't keep its copies of changed/deleted items forever, and you're usually not notified when it deletes them.
    It is fairly complex, and somewhat new, so may be a bit less reliable than some others.
    |
    RECOMMENDATION
    |
    For most non-professional users, TM is simple, workable, and maintenance-free. But it does have its disadvantages.
    That's why many folks use both Time Machine and a bootable clone, to have two, independent backups, with the advantages of both. If one fails, the other remains. If there's room, these can be in separate partitions of the same external drive, but it's safer to have them on separate drives, so if either app or drive fails, you still have the other one.
    |
    _*OFF-SITE BACKUPS*_
    |
    As great as external drives are, they may not protect you from fire, flood, theft, or direct lightning strike on your power lines. So it's an excellent idea to get something off-site, to your safe deposit box, workplace, relative's house, etc.
    There are many ways to do that, depending on how much data you have, how often it changes, how valuable it is, and your level of paranoia.
    One of the best strategies is to follow the above recommendation, but with a pair of portable externals, each 4 or more times the size of your data. Each has one partition the same size as your internal HD for a "bootable clone" and another with the remainder for TM.
    Use one drive for a week or so, then take it off-site and swap with the other. You do have to tell TM when you swap drives, via TM Preferences > Change Disk; and you shouldn't go more than about 10 days between swaps.
    There are other options, instead of the dual drives, or in addition to them. Your off-site backups don't necessarily have to be full backups, but can be just copies of critical information.
    If you have a MobileMe account, you can use Apple's Backup app to get relatively-small amounts of data (such as Address book, preferences, settings, etc.) off to iDisk daily. If not, you can use a 3rd-party service such as Mozy or Carbonite.
    You can also copy data to CDs or DVDs and take them off-site. Re-copy them every year or two, as their longevity is questionable.
    Backup strategies are not a "One Size Fits All" sort of thing. What's best varies by situation and preference.
    Just as an example, I keep full Time Machine backups; plus a CarbonCopyCloner clone (updated daily, while I'm snoozing) locally; plus small daily Backups to iDisk; plus some other things to CDs/DVDs in my safe deposit box. Probably overkill, but as many of us have learned over the years, backups are one area where +Paranoia is Prudent!+

  • Unity Backup Best Practices

    We are looking at our Disaster Recovery when it comes to our Unity Voicemail servers and I am looking for some Best Practices when it comes to backing up Unity. Does anyone have any whitepapers/suggestions? Thanks.

    If you mean Unity backup as in backing up the data, then the suggested method is fourfold:
    1) Using Backup Exec or the like, with a SQL agent, back up the SQL database(s) on Unity.
    2) Using Backup Exec or the like, with an Exchange agent, back up the Exchange database(s) used by Unity.
    3) Using Backup Exec or the like, perform a Windows 2000/2003 full backup including System State. (An open file option is also available for Backup Exec.)
    4) For DR recovery purposes, back up the Unity system using DiRT.
    See this link for more Unity backup info:
    http://www.cisco.com/en/US/products/sw/voicesw/ps2237/products_white_paper09186a00801f4ba7.shtml
    If you mean backing up Unity as far as failover goes, then you can use the link that follows to create a failover environment for your Unity:
    http://www.cisco.com/en/US/products/sw/voicesw/ps2237/prod_installation_guides_list.html

  • 2012 NLB Best Practice (Single vs Multiple NICs)?

    Our environment has used an NLB configuration with two NICs for years: one NIC for the host itself and one for the NLB. We have also been running the NLB in multicast mode. Starting with 2008, we began adding the cluster's MAC address as an ARP entry on our layer three switch. Each server participating in the NLB is on VMware.
    Can someone advise what the best procedure is for handling NLB these days? Although initial tests with one NIC seem to be working, I do notice that we get a popup warning on the participant servers when launching NLB Manager, "Running NLB Manager on a system with all networks bound to NLB might not work as expected", if they are set to run in unicast mode.
    With that said, should we not be running multicast?  Will that present problems down the road?

    Hi enoobmot11,
    You can refer the following KB and the VMware requirement KB:
    Network Load Balancing Best practices
    https://technet.microsoft.com/en-us/library/cc740265%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
    Multiple network adapters
    https://technet.microsoft.com/en-us/library/cc784848(v=ws.10).aspx
    The VMware KB:
    Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    Sample Configuration - Network Load Balancing (NLB) Multicast Mode Configuration (1006558)
    http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1006558
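    As a small illustration, creating (or rebuilding) the cluster in multicast mode with the NetworkLoadBalancingClusters PowerShell module that ships with the NLB feature looks like this; it is only a sketch, and the interface name, cluster name, IP and node name are placeholders:
    # Sketch: create an NLB cluster in multicast mode on a single NIC, then add a node.
    Import-Module NetworkLoadBalancingClusters
    New-NlbCluster -InterfaceName "Ethernet" -ClusterName "WebNLB" `
                   -ClusterPrimaryIP "192.168.10.50" -OperationMode Multicast
    # Join a second host from the first node.
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName "WEB02" -NewNodeInterface "Ethernet"
    With multicast mode you still need the static ARP entry for the cluster MAC on the layer 3 switch, as you are already doing.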
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Backup Best Practices

    What are the best practices for backing up the Contribute
    Server? I found a tech note (
    http://www.adobe.com/go/1238b09),
    but it is from 2005. Is that still accurate?
    Thanks

    I think this is still fine, because no new version came out after CPS 1.1.
    This tech note is for CPS 1.1 only.
    Thanks.

  • Hyper-V 2012 Backup Essentials on CentOS 6.5

    I am running CentOS 6.5 (new install).
    I installed Hyper-V Backup Essentials as below. However, when performing an online backup with Symantec System Recovery 2013 on Windows 2012 Standard (NOT R2), the CentOS 6.5 VM goes into a saved state.
    My understanding is that I should be able to perform a live/online backup of a CentOS 6.5 VM with Hyper-V Backup Essentials installed. Is this correct? Am I missing something? Thanks.
    https://github.com/LIS/backupessentials

    I tried to install the Hyper-V Backup Essentials on a Linux CentOS 6.5 virtual machine and it says that it conflicts with the 6.3 version of the Hyper-V module. How can I force the installation to overwrite the file? Below is the error:
    root@virtualmachine [/LIS-backupessentials-4e3121f/hv/hv-rhel6.5/rpm]# ./install.sh -f
    Installing Hyper-v Backup Essentials
    Preparing...                ########################################### [100%]
            file /usr/sbin/hv_vss_daemon from install of microsoft-hyper-v-vss-rhel65.1.0-20140121.x86_64 conflicts with file from package microsoft-hyper-v-rhel63.3.5-20131212.x86_64
    Installing Hyper-V Backup Essentials failed, Exiting.
    Regards,
    Anastasis

  • Hyper-V 2012 R2 best configuration with Two network ports

    Hi Team,
    I have to design a Windows 2012 R2 Hyper-V cluster with six hosts. Each host has two network ports of 1 Gbps.
    10 to 12 VLANs will be configured across the virtual machines on all six hosts.
    Please let me know how best I can utilize the two network ports to achieve redundancy, speed and network isolation.
    Can I team the two network ports into a single team so that a single virtual switch carries the MGMT, heartbeat and 10-12 VLAN traffic? Is there any downside to this, since I am merging all traffic onto a single team?
    Thanks in advance
    Ravi

    Hi,
    If you are using iSCSI shared storage in a failover cluster, you will need 5 or more NICs.
    More information:
    Hyper-V VLANs part II
    http://blogs.msdn.com/b/adamfazio/archive/2009/06/23/hyper-v-vlans-part-ii.aspx
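    If you do go ahead with just the two 1 Gbps ports teamed, a converged setup would look roughly like the sketch below (team, switch, VLAN IDs and bandwidth weights are placeholders to adapt to your design):
    # Sketch: team both NICs, build one converged vSwitch, and give the host
    # management and cluster/heartbeat vNICs their own VLANs and QoS weights.
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
                    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
                 -AllowManagementOS $false -MinimumBandwidthMode Weight

    # Host management vNIC on VLAN 10, cluster/heartbeat vNIC on VLAN 20 (placeholder IDs).
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 20
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 5

    # VM traffic for the 10-12 VLANs is then set per virtual machine, for example:
    # Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 101
    The trade-off you mention is real: management, heartbeat and VM traffic all share the same 2 Gbps, which is why the reply above suggests more NICs once iSCSI storage is involved; the bandwidth weights at least keep host and cluster traffic from being starved when VM traffic spikes.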
    Hope this helps.
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Time Machine, FileVault & Windows Backup Best Practice

    I read a lot on the forums about people wanting the best way to backup, when they had FileVault and also Windows files.
    This is what I did, I have a 250 GB Lacie external drive:
    1) 2 Partitions: 177 GB Mac OS Extended, 55 GB FAT32 (This disk was entirely FAT32 and MacOS Disk Utility wouldn't resize it to make space or even recognize the unpartitioned space I created in Windows, using a third-party tool. So I resized in Windows, created a new ext2 partition, connected to Mac and erased it as Mac OS Extended. Mac sees 2 partitions now and automatically wants to use one for Time Machine.)
    2) I use FileVault on my home folder, only for personal pictures and documents. I keep the iTunes music folder out of my home folder
    3) Use Time Machine to back up everything but my home folder
    4) Create an encrypted image of my FileVaulted home folder on the same backup disk that I created. For ease, I use the same password as that of my login account. I used the Disk Utility's Image from Folder option for this.
    So my pictures and documents are encrypted on my machine, they are copied as one encrypted file onto the backup drive, and the rest of my machine is backed up by Time Machine. I do not have to log out, though it is a manual step for me to create the image from the folder. I don't mind doing that for now, as I will need to do it only when I add new pictures to this folder. I would then make a new image on the backup drive.
    I think this setup will work for me, if I do find things I can tweak I will post a follow up.

    I set SuperDuper! to back up my PowerMac and to shut down after completing the backup, then I went to sleep. Six hours later, I woke up when I heard the fans spinning profusely and the Mac still not shut down. I got the following error log. Does anyone know what went wrong? Have I potentially 'fried' the CPU?
    system.log:
    Description: System events log
    Size: 25 KB
    Last Modified: 4/22/08 6:11 AM
    Location: /var/log/system.log
    Recent Contents: Apr 22 00:00:01 ian-phuas-power-mac-g5 newsyslog[654]: logfile turned over
    Apr 22 00:01:16 ian-phuas-power-mac-g5 [0x0-0x95095].com.microsoft.PowerPoint[558]: monitor: taskforpid failed (os/kern) failure
    Apr 22 00:04:47 ian-phuas-power-mac-g5 com.apple.launchd[64] ([0x0-0xaa0aa].jp.co.canon.bj.printer.app.MPNavigator[673]): Stray process with PGID equal to this dead job: PID 675 PPID 1 MP Navigator
    Apr 22 00:07:03 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being monitored
    Apr 22 00:07:05 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being force quit
    Apr 22 00:07:09 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being no longer being monitored
    Apr 22 00:07:13 ian-phuas-power-mac-g5 com.apple.launchd[64] ([0x0-0xb00b0].jp.co.canon.bj.printer.app.MPNavigator[693]): Stray process with PGID equal to this dead job: PID 695 PPID 1 MP Navigator
    Apr 22 00:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 00:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 00:12:03 ian-phuas-power-mac-g5 Finder[91]: Cannot find function pointer pluginFactory for factory 053918F4-2869-11D7-A671-000A27E2DB90 in CFBundle/CFPlugIn 0x6c54680 </Users/ianphuac/Library/Contextual Menu Items/ToastIt.plugin> (bundle, loaded)
    Apr 22 00:13:34 ian-phuas-power-mac-g5 Mail[202]: Cannot find function pointer pluginFactory for factory 053918F4-2869-11D7-A671-000A27E2DB90 in CFBundle/CFPlugIn 0x8b911f0 </Users/ianphuac/Library/Contextual Menu Items/ToastIt.plugin> (bundle, loaded)
    Apr 22 01:03:31 ian-phuas-power-mac-g5 /usr/sbin/spindump[760]: process 695 is being monitored
    Apr 22 01:03:33 ian-phuas-power-mac-g5 /usr/sbin/spindump[760]: process 695 is being force quit
    Apr 22 01:03:36 ian-phuas-power-mac-g5 fseventsd[26]: callback_client: ERROR: d2fcallbackrpc() => (ipc/send) invalid destination port (268435459) for pid 700
    Apr 22 01:03:41 ian-phuas-power-mac-g5 SubmitReport[766]: missing kCRProblemReportProblemTypeKey
    Apr 22 01:03:44 ian-phuas-power-mac-g5 /usr/sbin/ocspd[769]: starting
    Apr 22 01:03:45 ian-phuas-power-mac-g5 SubmitReport[766]: Submitted compressed hang report for MP Navigator
    Apr 22 01:04:26 ian-phuas-power-mac-g5 [0x0-0x94094].com.microsoft.DatabaseDaemon[557]: monitor: taskforpid failed (os/kern) failure
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 01:04:27 ian-phuas-power-mac-g5 [0x0-0xbb0bb].com.blacey.SuperDuper![772]: crontab: no crontab for ianphuac
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'name' of class 'NSApplication' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'lastComponentOfFileName' of class 'NSDocument' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'title' of class 'NSWindow' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for superclass of class 'NSAttachmentTextStorage' in suite 'NSTextSuite': 'NSString' is not a valid class name.
    Apr 22 01:04:34 ian-phuas-power-mac-g5 kernel[0]: UniNEnet::monitorLinkStatus - Link is down.
    Apr 22 01:04:44 ian-phuas-power-mac-g5 SuperDuper![772]: Connection failed! Error - no Internet connection
    Apr 22 01:04:56 ian-phuas-power-mac-g5 authexec[800]: executing /Applications/SuperDuper!.app/Contents/MacOS/SDAgent
    Apr 22 01:04:59 ian-phuas-power-mac-g5 KernelEventAgent[23]: tid 00000000 received unknown event (256)
    Apr 22 01:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 01:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 01:16:27 ian-phuas-power-mac-g5 KernelEventAgent[23]: tid 00000000 received unknown event (256)
    Apr 22 01:16:28 ian-phuas-power-mac-g5 fsaclctl[841]: HFSIOC_SETACLSTATE 1 on /Volumes/Lacie Backup returns 0 errno = 2
    Apr 22 01:30:28 ian-phuas-power-mac-g5 com.apple.launchd[1] (0x10b470.nohup[880]): Could not setup Mach task special port 9: (os/kern) no access
    Apr 22 02:00:33 ian-phuas-power-mac-g5 ntpd[14]: sendto(17.83.254.7) (fd=23): No route to host
    Apr 22 02:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 02:10:48 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 02:49:58 ian-phuas-power-mac-g5 kernel[0]: IOPMSlotsMacRISC4::determineSleepSupport has canSleep true
    Apr 22 02:51:34 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:53:35 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:55:36 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:57:36 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:59:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:01:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:03:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:05:38 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:07:38 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:09:39 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 03:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 03:12:39 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:14:12 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:17:13 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:19:13 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:21:44 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:23:44 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:25:45 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:27:45 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:29:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:31:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:33:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:36:47 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:38:47 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:40:48 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:42:48 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:45:49 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:47:49 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:50:50 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:53:20 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:55:21 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:57:21 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:59:22 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:01:52 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:03:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:05:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:07:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:09:56 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 04:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 04:12:57 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:17:28 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:20:29 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:22:29 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:25:30 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:27:30 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:29:31 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:31:31 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:33:32 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:35:32 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:37:32 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:39:33 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:41:33 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:43:34 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:45:34 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:47:34 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:50:35 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:52:35 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 06:07:05 localhost com.apple.launchctl.System[2]: fsck_hfs: Volume is journaled. No checking performed.
    Apr 22 06:07:05 localhost com.apple.launchctl.System[2]: fsck_hfs: Use the -f option to force checking.
    Apr 22 06:07:07 localhost com.apple.launchctl.System[2]: launchctl: Please convert the following to launchd: /etc/mach_init.d/dashboardadvisoryd.plist
    Apr 22 06:07:07 localhost com.apple.launchd[1] (org.cups.cupsd): Unknown key: SHAuthorizationRight
    Apr 22 06:07:07 localhost com.apple.launchd[1] (org.ntp.ntpd): Unknown key: SHAuthorizationRight
    Apr 22 06:07:15 localhost kernel[0]: Darwin Kernel Version 9.2.2: Tue Mar 4 21:23:43 PST 2008; root:xnu-1228.4.31~1/RELEASE_PPC
    Apr 22 06:07:08 localhost kextd[10]: 414 cached, 0 uncached personalities to catalog
    Apr 22 06:07:15 localhost kernel[0]: standard timeslicing quantum is 10000 us
    Apr 22 06:07:14 localhost rpc.statd[17]: statd.notify - no notifications needed
    Apr 22 06:07:15 localhost kernel[0]: vmpagebootstrap: 1020257 free pages and 28319 wired pages
    Apr 22 06:07:14 localhost fseventsd[26]: event logs in /.fseventsd out of sync with volume. destroying old logs. (99878 12 104327)
    Apr 22 06:07:14 localhost bootlog[35]: BOOT_TIME: 1208815622 0
    Apr 22 06:07:16 localhost kernel[0]: migtable_maxdispl = 79
    Apr 22 06:07:14 localhost DirectoryService[31]: Launched version 5.2 (v514.4)
    Apr 22 06:07:15 localhost DumpPanic[29]: Panic data written to /Library/Logs/PanicReporter/2008-04-22-060715.panic
    Apr 22 06:07:17 localhost kernel[0]: 106 prelinked modules
    Apr 22 06:07:14 localhost /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[22]: Login Window Application Started
    Apr 22 06:07:17 localhost kernel[0]: Loading security extension com.apple.security.TMSafetyNet
    Apr 22 06:07:14 localhost DirectoryService[31]: WARNING - dsTouch: file was asked to be opened </Library/Preferences/DirectoryService/.DSIsRunning>: (File exists)
    Apr 22 06:07:17 localhost kernel[0]: calling mpopolicyinit for TMSafetyNet
    Apr 22 06:07:17 localhost kernel[0]: Security policy loaded: Safety net for Time Machine (TMSafetyNet)
    Apr 22 06:07:17 localhost kernel[0]: Loading security extension com.apple.nke.applicationfirewall
    Apr 22 06:07:18 localhost kernel[0]: Loading security extension com.apple.security.seatbelt
    Apr 22 06:07:18 localhost kernel[0]: calling mpopolicyinit for mb
    Apr 22 06:07:18 localhost kernel[0]: Seatbelt MACF policy initialized
    Apr 22 06:07:18 localhost kernel[0]: Security policy loaded: Seatbelt Policy (mb)
    Apr 22 06:07:18 localhost kernel[0]: Copyright (c) 1982, 1986, 1989, 1991, 1993
    Apr 22 06:07:18 localhost kernel[0]: The Regents of the University of California. All rights reserved.
    Apr 22 06:07:18 localhost kernel[0]: MAC Framework successfully initialized
    Apr 22 06:07:19 localhost kernel[0]: using 16384 buffer headers and 4096 cluster IO buffer headers
    Apr 22 06:07:19 localhost kernel[0]: AirPort_Brcm43xx::probe: 05094e80, 0
    Apr 22 06:07:19 localhost kernel[0]: DART enabled
    Apr 22 06:07:19 localhost kernel[0]: FireWire (OHCI) Apple ID 42 PCI now active, GUID 001124fffe7f2582; max speed s800.
    Apr 22 06:07:19 localhost kernel[0]: mbinit: done
    Apr 22 06:07:20 localhost kernel[0]: Security auditing service present
    Apr 22 06:07:20 localhost kernel[0]: BSM auditing present
    Apr 22 06:07:20 localhost kernel[0]: rooting via boot-uuid from /chosen: 7EEABB9C-080B-3F8E-9C95-7E90FFCF646F
    Apr 22 06:07:20 localhost kernel[0]: Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Apr 22 06:07:20 localhost kernel[0]: wl0: Broadcom BCM4320 802.11 Wireless Controller
    Apr 22 06:07:20 localhost fseventsd[26]: log dir: /.fseventsd getting new uuid: 77035FE8-86D2-4B05-AE4E-F897A4A724B3
    Apr 22 06:07:20 localhost kernel[0]: 4.170.25.8.2Got boot device = IOService:/MacRISC4PE/ht@0,f2000000/AppleMacRiscHT/pci@5/IOPCI2PCIBridge/k2-sat a-root@C/AppleK2SATARoot/k2-sata@1/AppleK2SATA/ATADeviceNub@0/AppleATADiskDriver /IOATABlockStorageDevice/IOBlockStorageDriver/Hitachi HDT725032VLA360 Hitachi HDT725032VLA360/IOApplePartitionScheme/AppleHFS_Untitled1@3
    Apr 22 06:07:20 localhost kernel[0]: BSD root: disk1s3, major 14, minor 5
    Apr 22 06:07:20 localhost kernel[0]: jnl: unknown-dev: replay_journal: from: 15033344 to: 16646656 (joffset 0x952000)
    Apr 22 06:07:20 localhost kernel[0]: jnl: unknown-dev: journal replay done.
    Apr 22 06:07:20 localhost mDNSResponder mDNSResponder-170 (Jan 4 2008 18:04:16)[21]: starting
    Apr 22 06:07:21 localhost kernel[0]: HFS: Removed 2 orphaned unlinked files or directories
    Apr 22 06:07:21 localhost kernel[0]: Jettisoning kernel linker.
    Apr 22 06:07:21 localhost kernel[0]: Resetting IOCatalogue.
    Apr 22 06:07:21 localhost kernel[0]: Matching service count = 0
    Apr 22 06:07:21 localhost kernel[0]: Matching service count = 5
    Apr 22 06:07:21: --- last message repeated 4 times ---
    Apr 22 06:07:21 localhost kernel[0]: PowerMac72U3TwinsPIDCtrlLoop::adjustControls state == not ready
    Apr 22 06:07:21 localhost kernel[0]: IOPlatformControl::registerDriver Control Driver AppleSlewClock did not supply target-value, using default
    Apr 22 06:07:21 localhost kernel[0]: IPv6 packet filtering initialized, default to accept, logging disabled
    Apr 22 06:07:21 localhost kernel[0]: UniNEnet: Ethernet address 00:11:24:7f:25:82
    Apr 22 06:07:21 localhost kernel[0]: AirPort_Brcm43xx: Ethernet address 00:11:24:aa:86:4b
    Apr 22 06:07:21 localhost /usr/sbin/ocspd[57]: starting
    Apr 22 06:07:22 localhost kernel[0]: jnl: disk0s3: replay_journal: from: 623616 to: 1369088 (joffset 0xa701000)
    Apr 22 06:07:22 localhost kernel[0]: jnl: disk0s3: journal replay done.
    Apr 22 06:07:22 localhost fseventsd[26]: event logs in /Volumes/Macintosh HD/.fseventsd out of sync with volume. destroying old logs. (20187 0 27000)
    Apr 22 06:07:22 localhost fseventsd[26]: log dir: /Volumes/Macintosh HD/.fseventsd getting new uuid: EAEEEBE3-D7A7-4F79-BBAE-0B9727642A98
    Apr 22 06:07:27 localhost mDNSResponder[21]: Couldn't read user-specified Computer Name; using default “Macintosh-0011247F2582” instead
    Apr 22 06:07:27 localhost mDNSResponder[21]: Couldn't read user-specified local hostname; using default “Macintosh-0011247F2582.local” instead
    Apr 22 06:07:32 localhost kextd[10]: writing kernel link data to /var/run/mach.sym
    Apr 22 06:07:38 localhost mDNSResponder[21]: User updated Computer Name from Macintosh-0011247F2582 to Ian Phua’s Power Mac G5
    Apr 22 06:07:38 localhost mDNSResponder[21]: User updated Local Hostname from Macintosh-0011247F2582 to ian-phuas-power-mac-g5
    Apr 22 06:07:40 ian-phuas-power-mac-g5 configd[33]: setting hostname to "ian-phuas-power-mac-g5.local"
    Apr 22 06:07:45 ian-phuas-power-mac-g5 org.ntp.ntpd[14]: Error : nodename nor servname provided, or not known
    Apr 22 06:07:45 ian-phuas-power-mac-g5 ntpdate[81]: can't find host time.asia.apple.com
    Apr 22 06:07:45 ian-phuas-power-mac-g5 ntpdate[81]: no servers can be used, exiting
    Apr 22 06:07:50 ian-phuas-power-mac-g5 loginwindow[22]: Login Window Started Security Agent
    Apr 22 06:07:51 ian-phuas-power-mac-g5 authorizationhost[90]: MechanismInvoke 0x11f220 retainCount 2
    Apr 22 06:07:51 ian-phuas-power-mac-g5 SecurityAgent[91]: MechanismInvoke 0x101650 retainCount 1
    Apr 22 06:07:51 ian-phuas-power-mac-g5 SecurityAgent[91]: MechanismDestroy 0x101650 retainCount 1
    Apr 22 06:07:51 ian-phuas-power-mac-g5 loginwindow[22]: Login Window - Returned from Security Agent
    Apr 22 06:07:51 ian-phuas-power-mac-g5 authorizationhost[90]: MechanismDestroy 0x11f220 retainCount 2
    Apr 22 06:07:51 ian-phuas-power-mac-g5 loginwindow[22]: USER_PROCESS: 22 console
    Apr 22 06:07:52 ian-phuas-power-mac-g5 com.apple.launchd[1] (com.apple.UserEventAgent-LoginWindow[86]): Exited: Terminated
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s3: replay_journal: from: 71680 to: 792576 (joffset 0xa9a000)
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s3: journal replay done.
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s5: replay_journal: from: 4422144 to: 5159424 (joffset 0x3f8000)
    Apr 22 06:07:59 ian-phuas-power-mac-g5 fseventsd[26]: event logs in /Volumes/Lacie Backup/.fseventsd out of sync with volume. destroying old logs. (97634 1 104373)
    Apr 22 06:08:01 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s5: journal replay done.
    Apr 22 06:08:01 ian-phuas-power-mac-g5 /System/Library/CoreServices/coreservicesd[43]: SFLSharePointsEntry::CreateDSRecord: dsCreateRecordAndOpen(Ian Phua C S's Public Folder) returned -14135
    Apr 22 06:08:01 ian-phuas-power-mac-g5 /System/Library/CoreServices/coreservicesd[43]: SFLSharePointsEntry::CreateDSRecord: dsCreateRecordAndOpen(Ian Phua's Public Folder) returned -14135
    Apr 22 06:08:06 ian-phuas-power-mac-g5 loginwindow[22]: Unable to resolve startup item: status = -36, theURL == NULL = 1
    Apr 22 06:08:06 ian-phuas-power-mac-g5 loginwindow[22]: Unable to resolve startup item: status = -43, theURL == NULL = 1
    Apr 22 06:08:08: --- last message repeated 1 time ---
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: log dir: /Volumes/Lacie Backup/.fseventsd getting new uuid: 0C27B6C4-044E-45BA-BD19-E56DBBC68DC4
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: event logs in /Volumes/Lacie SP/.fseventsd out of sync with volume. destroying old logs. (97169 10 104382)
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: log dir: /Volumes/Lacie SP/.fseventsd getting new uuid: 5BA6D6AF-6DC0-4FE9-94D3-099FAE59249D
    Apr 22 06:08:14 ian-phuas-power-mac-g5 SMARTReporter[130]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 06:08:15 ian-phuas-power-mac-g5 SMARTReporter[130]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 06:08:21 ian-phuas-power-mac-g5 /System/Library/CoreServices/SystemUIServer.app/Contents/MacOS/SystemUIServer[1 08]: CPSGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:21 ian-phuas-power-mac-g5 /System/Library/CoreServices/SystemUIServer.app/Contents/MacOS/SystemUIServer[1 08]: CPSPBGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:29 ian-phuas-power-mac-g5 SubmitReport[140]: CFReadStreamGetError() returned: 12
    Apr 22 06:08:29 ian-phuas-power-mac-g5 SubmitReport[140]: Failed to submit uncompressed panic report for xnu
    Apr 22 06:08:31 ian-phuas-power-mac-g5 Disk Utility[146]: ********
    Apr 22 06:08:31 ian-phuas-power-mac-g5 Disk Utility[146]: Disk Utility started.
    Apr 22 06:08:32 ian-phuas-power-mac-g5 mdworker[69]: (Error) SyncInfo: Boot-cache avoidance timed out!
    Apr 22 06:08:37: --- last message repeated 1 time ---
    Apr 22 06:08:37 ian-phuas-power-mac-g5 /System/Library/CoreServices/Problem Reporter.app/Contents/MacOS/Problem Reporter[138]: CPSGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:37 ian-phuas-power-mac-g5 /System/Library/CoreServices/Problem Reporter.app/Contents/MacOS/Problem Reporter[138]: CPSPBGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:47 ian-phuas-power-mac-g5 Disk Utility[146]: Repairing permissions for “Macintosh Leopard HD”
    Apr 22 06:08:52 ian-phuas-power-mac-g5 kextcache[46]: CFPropertyListCreateFromXMLData(): Old-style plist parser: missing semicolon in dictionary.
    Apr 22 06:09:10 ian-phuas-power-mac-g5 kernel[0]: UniNEnet::monitorLinkStatus - Link is up at 1000 Mbps - Full Duplex (PHY regs 5,6:0xcde1,0x000d)
    Apr 22 06:09:15 ian-phuas-power-mac-g5 mdworker[69]: (Error) SyncInfo: Catalog changed during searchfs too many time. Falling back to fsw search /
    Apr 22 06:09:21 ian-phuas-power-mac-g5 SubmitReport[169]: Submitted uncompressed panic report for xnu
    Apr 22 06:10:00 ian-phuas-power-mac-g5 kextcache[46]: CFPropertyListCreateFromXMLData(): Old-style plist parser: missing semicolon in dictionary.
    Apr 22 06:11:18 ian-phuas-power-mac-g5 simpleScanner[251]: launching daemon...
    Apr 22 06:11:18 ian-phuas-power-mac-g5 simpleScanner[251]: daemon runs all right...
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'name' of class 'NSApplication' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'lastComponentOfFileName' of class 'NSDocument' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'title' of class 'NSWindow' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for superclass of class 'NSAttachmentTextStorage' in suite 'NSTextSuite': 'NSString' is not a valid class name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 [0x0-0x1e01e].com.blacey.SuperDuper![253]: crontab: no crontab for ianphuac
    Apr 22 06:11:27 ian-phuas-power-mac-g5 GrowlHelperApp[255]: Unknown plugin filename extension '(null)' (from filename '(null)' of plugin named 'Bubbles')
    Apr 22 06:11:39 ian-phuas-power-mac-g5 Mail[248]: NSInvalidArgumentException: Unable to obtain sql exclusive lock to commit transaction: 10 (disk I/O error)
    Apr 22 06:11:39 ian-phuas-power-mac-g5 Mail[248]: SyncServices[ISyncSessionDriver]: Caught top level exception: Unable to obtain sql exclusive lock to commit transaction: 10 (disk I/O error) Stack trace: (0x91f3bef4 0x950e676c 0x91f3be04 0x91f3be3c 0x93163d60 0x930ff2c4 0x93101c7c 0x93101b4c 0x93113fc0 0x931134b4 0x93105e64 0x930f3da0 0x9317adfc 0x968704f8 0x95fdab9c)
