Unity Backup Best Practices

We are reviewing disaster recovery for our Unity voicemail servers, and I am looking for best practices for backing up Unity. Does anyone have any whitepapers/suggestions? Thanks.

If you mean backing up Unity's data, the suggested method is four-fold:
1) Using Backup Exec or a similar product with a SQL agent, back up the SQL database(s) on Unity.
2) Using Backup Exec or a similar product with an Exchange agent, back up the Exchange database(s) used by Unity.
3) Using Backup Exec or a similar product, perform a full Windows 2000/2003 backup, including the System State. (An open-file option is also available for Backup Exec.)
4) For disaster recovery purposes, back up the Unity system using DiRT (the Disaster Recovery Tools).
See this link for more Unity backup information:
http://www.cisco.com/en/US/products/sw/voicesw/ps2237/products_white_paper09186a00801f4ba7.shtml
If you mean backing up Unity in the sense of failover, the following link describes how to create a failover environment for Unity:
http://www.cisco.com/en/US/products/sw/voicesw/ps2237/prod_installation_guides_list.html
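Step 3 above (the full Windows backup including the System State) can also be scripted with the built-in ntbackup utility. A minimal sketch, with a placeholder job name and target path; the generated .cmd file would be scheduled and run in cmd.exe on the Unity server:

```shell
# Write out a one-line Windows batch job for the System State backup.
# The switches follow the Windows 2000/2003 ntbackup docs; the job name
# ("UnitySystemState") and the .bkf path are placeholders.
cat > unity-sysstate.cmd <<'EOF'
rem Full System State backup of the Unity server (run via Task Scheduler)
ntbackup backup systemstate /J "UnitySystemState" /F "D:\Backups\sysstate.bkf"
EOF
cat unity-sysstate.cmd
```

This only covers the System State portion; the SQL and Exchange stores still need their agent-based backups as in steps 1 and 2.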

Similar Messages

  • Portal backup best practice!!

    Hi ,
Can anyone let me know where I can find portal backup best practices? We are using SAP EP7.
    Thanks,
    Raghavendra Pothula

Hi Tim: Here's my basic approach for this -- I create either a portal dynamic page or a stored procedure that renders an HTML parameter form. You can connect to the database and render whatever sort of drop-downs, check boxes, etc. you desire. To tie everything together, just make sure that when you create the form, the names of the fields match those of the page parameters created on the page. This way, when the form posts to the same page, it appends the values for the page parameters to the URL.
By coding the entire form yourself, you avoid the inherent limitations of the simple parameter form. You can also use advanced JavaScript to dynamically update the drop-downs based on the values selected, or cause the form to be submitted and update the other drop-downs from the database if desired.
    Unfortunately, it is beyond the scope of this forum to give you full technical details, but that is the approach I have used on a number of portal sites. Hope it helps!
    Rgds/Mark M.

  • E-business backup best practices

What are E-Business Suite backup best practices, tools, or techniques?
For example, what we do now is copy the Oracle folder on the D: partition every two days,
but it takes a long time, and I believe there is a better way.
We are on Windows Server 2003 and E-Business Suite 11i.

    Please see previous threads for the same topic/discussion -- https://forums.oracle.com/search.jspa?view=content&resultTypes=&dateRange=all&q=backup+EBS&rankBy=relevance
    Please also see RMAN manuals (incremental backup, hot backup, ..etc) -- http://www.oracle.com/pls/db112/portal.portal_db?selected=4&frame=#backup_and_recovery
    Thanks,
    Hussein
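The RMAN approach mentioned above could look roughly like the following sketch: a level-1 incremental hot backup (the database must be in ARCHIVELOG mode), with the connection method and retention cleanup left as assumptions for illustration.

```shell
# Sketch: write an RMAN command file for an incremental hot backup, then
# run it if an RMAN client is available. "target /" (OS authentication)
# and the obsolete-backup cleanup are assumptions, not EBS-specific advice.
cat > ebs_backup.rcv <<'EOF'
RUN {
  BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}
EOF
if command -v rman >/dev/null 2>&1; then
  rman target / @ebs_backup.rcv
else
  echo "rman client not found; command file written to ebs_backup.rcv"
fi
```

Unlike copying the Oracle folder, RMAN incrementals only read changed blocks, which is why they are so much faster for routine backups.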

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
Does anybody out there have any tips/best practices on backing up the OVM repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
Some of the main things we'd like to do (without using a backup agent within each VM):
backup/recovery of the entire VM guest
single-file restore of a file within a VM guest
backup/recovery of the entire repository.
The single-file restore is probably the most difficult/manual. The rest can be done manually from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM... this isn't overly appealing to me.
OVM has this lovely manner of naming its underlying VM directories after some ambiguous number that has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

Please find below the response from Oracle support on that.
In short:
- First, "manual" copies of files into the repository are neither recommended nor supported.
- Second, we have to go back and forth through templates and an HTTP (or FTP) server.
Note that when creating a template, or creating a new VM from a template, we're talking about full copies. No "fast clones" (snapshots) are involved.
This is ridiculous.
How to back up a VM:
1) Create a template from the OVM Manager console.
Note: Creating a template requires the VM to be stopped (if the virtual disk is copied while the VM is running, the data will be corrupted), and the process of creating the template makes changes to vm.cfg.
2) Enable storage repository backups using the steps here:
http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
3) Mount the NFS export created above on another server.
4) Then create a compressed file (tgz) from the relevant files (cfg + img) on the repository NFS mount.
    Here is an example of the template:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
How to restore a VM:
1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server.
2) Import it into OVM Manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the Virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image
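The backup half of the procedure above (mount the repository export, then archive the cfg + img files) boils down to a tar command. A sketch, with a stand-in directory in place of the real NFS mount:

```shell
# Stand-in for the mounted repository; a real mount point would be used instead.
REPO=demo-repo
VM=OVM_EL5U2_X86_64_PVHVM_4GB
mkdir -p "$REPO/$VM"
: > "$REPO/$VM/vm.cfg"      # placeholders for the real cfg + img files
: > "$REPO/$VM/System.img"

# Back up: bundle the VM's files into one compressed archive...
tar czf "$VM.tgz" -C "$REPO" "$VM"
# ...and verify its contents, as in the listing above.
tar tzf "$VM.tgz"
```

The resulting .tgz is what then gets uploaded to the HTTP/FTP server for the restore-by-import path.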

• SQL backup best practice on VMs that are backed up as complete VMs

    hi,
Apologies, as I am sure this has been asked many times before, but I can't really find an answer to my question. My situation is this: I have two types of backups, agent-based and snapshot-based.
For the VMs that are backed up by snapshots, the process is: VMware takes the snapshot, then the SAN takes a snapshot of the storage, and then the backup is taken from the SAN. We then have full VM backups.
The agent-based backups only back up file-level data, so we use these for our SQL cluster and some other servers. These are not snapshot/full-VM backups, but simply backups of databases, files, etc.
This works well, but there are a couple of servers that need to be in the full-VM-snapshot category and therefore can't have the backup agent installed, as they are already backed up by the snapshot technology. So what would be best practice for these snapshotted VMs that also have SQL installed? Should I configure a recurring backup in SQL Server Management Studio (if that is possible?) that runs before the VM snapshot backup, or is there another way I should be backing up the databases?
Any suggestions would be very welcome.
Thanks,
Aaron

    Hello Aaron,
    If I understand correctly, you perform a snapshot backup of the complete VM.
    In that case you also need to create a SQL Server backup schedule to perform Full and Transaction Log backups.
(If you do a file-level backup of the .mdf and .ldf files with an agent, you also need to do this.)
    I would run a database backup before the VM snapshot (to a SAN location if possible), then perform the Snapshot backup.
    You should set up the transaction log backups depending on business recovery needs.
    For instance: if your company accepts a maximum of 30 minutes data loss make sure to perform a transaction log backup every 30 minutes.
    In case of emergency you could revert to the VM Snapshot, restore the full database backup and restore transaction log backups till the point in time you need.
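A sketch of the full-then-log schedule described above, as T-SQL written out from a shell script. The database name (AppDB), the SAN paths, and running via sqlcmd are all assumptions -- in practice these would be scheduled as SQL Server Agent jobs.

```shell
# Full backup to run before the VM snapshot (database name/path are placeholders).
cat > backup_full.sql <<'EOF'
BACKUP DATABASE [AppDB] TO DISK = N'\\san\backups\AppDB_full.bak' WITH INIT;
EOF

# Transaction log backup, to be scheduled every 30 minutes for a 30-minute RPO.
cat > backup_log.sql <<'EOF'
BACKUP LOG [AppDB] TO DISK = N'\\san\backups\AppDB_log.trn';
EOF

if command -v sqlcmd >/dev/null 2>&1; then
  sqlcmd -E -i backup_full.sql   # -E: Windows authentication
else
  echo "sqlcmd not found here; run these scripts from SQL Server Agent"
fi
```

Note that log backups require the database to be in the FULL recovery model; in SIMPLE recovery there is no log chain to restore from.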

  • Development System Backup - Best Practice / Policy for offsite backups

Hi, I have not found any SAP recommendations on backing up development systems offsite, so I would appreciate some input on what policies other companies have. We continuously make enhancements to our SAP systems and perform daily backups; however, we do not send any development system backups offsite, which I feel is a risk (losing development work, losing transport & change logs...).
Does anyone know whether SAP has any recommendations on backing up development systems offsite? What policies does your company have?
    Thanks,
    Thomas

    Thomas,
    Your question does not mention consideration of both sides of the equation - you have mentioned the risk only.  What about the incremental cost of frequent backups stored offsite?  Shouldn't the question be how the 'frequent backup' cost matches up with the risk cost?
I have never worked on an SAP system where the developers had so much unique work in progress that they could not reproduce their efforts in an acceptable amount of time, at an acceptable cost. There is typically nothing in dev that is so valuable as to be irreplaceable (unlike production, where the loss of 'yesterday's' data is extremely costly). Given the frequency with which an offsite dev backup is actually required for a restore (seldom), and given that the value of the daily backed-up data is already so low, the actual risk cost is virtually zero.
I have never seen SAP publish a 'best practice' in this area. Every business is different, and I don't see how SAP could possibly make a meaningful recommendation that would fit yours. In your business, the risk (the pro-rata cost of infrequently needing to use offsite storage to replace or rebuild 'lost' low-cost development work) may in fact outweigh the ongoing incremental costs of creating and maintaining offsite daily recovery media. Your company will have to perform that calculation to make the business decision. I personally have never seen a situation where daily offsite backup storage of dev was even close to making any kind of economic sense.
    Best Regards,
    DB49

  • Backup "best practices" scenarios?

    I'd be grateful for a discussion of different "best practices" regarding backups, taking into consideration that a Time Machine backup is only as good as the external disk it is on.
My husband & I currently back up our two Macs, each to its own 500 GB hard disk using TM. We have a 1 TB disk and I was going to make that a periodic repository for backups of each of our Macs, in case one of the 500 GB disks fails, but was advised by a freelance Mac tech not to do that. This came up in the middle of talking about other things, and now I cannot remember if he said why.
We need the general backup, and also (perhaps a separate issue) I am particularly interested in safeguarding our iPhoto libraries. Other forms of our data can be redundantly backed up to DVDs, but with 50 GB iPhoto libraries that doesn't seem feasible, nor is backing up to some online storage facility. My own iPhoto library is too large for my internal disk. Putting the working library on an external disk is discouraged by the experts in the iPhoto forum (slow access, possible loss if the connection between computer and disk is interrupted during transfer), so my options seem to be getting a larger internal disk, or using the internal disk just for the most recent photos and keeping the entire library on an external disk, independent of incremental TM backups.

    It's probably more than you ever wanted to know about backups, but some of this might be useful:
    There are three basic types of backup applications: *Bootable Clone, Archive, and Time Machine.*
    This is a general explanation and comparison of the three types. Many variations exist, of course, and some combine features of others.
    |
    _*BOOTABLE "CLONE"*_
    |
    These make a complete, "bootable" copy of your entire system on an external disk/partition, a second internal disk/partition, or a partition of your internal disk.
    Advantages
    If your internal HD fails, you can boot and run from the clone immediately. Your Mac may run a bit slower, but it will run, and contain everything that was on your internal HD at the time the clone was made or last updated. (But of course, if something else critical fails, this won't work.)
    You can test whether it will run, just by booting-up from it (but of course you can't be positive that everything is ok without actually running everything).
    If it's on an external drive, you can easily take it off-site.
    Disadvantages
    Making an entire clone takes quite a while. Most of the cloning apps have an update feature, but even that takes a long time, as they must examine everything on your system to see what's changed and needs to be backed-up. Since this takes lots of time and CPU, it's usually not practical to do this more than once a day.
    Normally, it only contains a copy of what was on your internal HD when the clone was made or last updated.
    Some do have a feature that allows it to retain the previous copy of items that have been changed or deleted, in the fashion of an archive, but of course that has the same disadvantages as an archive.
    |
    _*TRADITIONAL "ARCHIVE" BACKUPS*_
    |
    These copy specific files and folders, or in some cases, your entire system. Usually, the first backup is a full copy of everything; subsequently, they're "incremental," copying only what's changed.
    Most of these will copy to an external disk; some can go to a network locations, some to CDs/DVDs, or even tape.
    Advantages
    They're usually fairly simple and reliable. If the increments are on separate media, they can be taken off-site easily.
    Disadvantages
    Most have to examine everything to determine what's changed and needs to be backed-up. This takes considerable time and lots of CPU. If an entire system is being backed-up, it's usually not practical to do this more than once, or perhaps twice, a day.
    Restoring an individual item means you have to find the media and/or file it's on. You may have to dig through many incremental backups to find what you're looking for.
    Restoring an entire system (or large folder) usually means you have to restore the most recent Full backup, then each of the increments, in the proper order. This can get very tedious and error-prone.
    You have to manage the backups yourself. If they're on an external disk, sooner or later it will get full, and you have to do something, like figure out what to delete. If they're on removable media, you have to store them somewhere appropriate and keep track of them. In some cases, if you lose one in the "string" (or it can't be read), you've lost most of the backup.
    |
    _*TIME MACHINE*_
    |
    Similar to an archive, TM keeps copies of everything currently on your system, plus changed/deleted items, on an external disk, Time Capsule (or USB drive connected to one), internal disk, or shared drive on another Mac on the same local network.
    Advantages
    Like many Archive apps, it first copies everything on your system, then does incremental backups of additions and changes. But TM's magic is, each backup appears to be a full one: a complete copy of everything on your system at the time of the backup.
It uses an internal OSX log of what's changed to quickly determine what to copy, so most users can let it do its hourly incremental backups without much effect on system performance. This means you have a much better chance to recover an item that was changed or deleted in error, or corrupted.
Recovery of individual items is quite easy, via the TM interface. You can browse your backups just as you do your current data, and see "snapshots" of the entire contents at the time of each backup. You don't have to find and mount media, or dig through many files to find what you're looking for.
You can also recover your entire system (OSX, apps, settings, users, data, etc.) to the exact state it was in at the time of any backup, even if that's a previous version of OSX.
TM manages its space for you, automatically. When your backup disk gets near full, TM will delete your oldest backup(s) to make room for new ones. But it will never delete its copy of anything that's still on your internal HD, or was there at the time of any remaining backup. So all that's actually deleted are copies of items that were changed or deleted long ago.
TM examines each file it's backing up; if it's incomplete or corrupted, TM may detect that and fail, with a message telling you which file it is. That way, you can fix it immediately, rather than days, weeks, or months later when you try to use it.
    Disadvantages
    It's not bootable. If your internal HD fails, you can't boot directly from your TM backups. You must restore them, either to your repaired/replaced internal HD or an external disk. This is a fairly simple, but of course lengthy, procedure.
TM doesn't keep its copies of changed/deleted items forever, and you're usually not notified when it deletes them.
    It is fairly complex, and somewhat new, so may be a bit less reliable than some others.
    |
    RECOMMENDATION
    |
For most non-professional users, TM is simple, workable, and maintenance-free. But it does have its disadvantages.
    That's why many folks use both Time Machine and a bootable clone, to have two, independent backups, with the advantages of both. If one fails, the other remains. If there's room, these can be in separate partitions of the same external drive, but it's safer to have them on separate drives, so if either app or drive fails, you still have the other one.
    |
    _*OFF-SITE BACKUPS*_
    |
    As great as external drives are, they may not protect you from fire, flood, theft, or direct lightning strike on your power lines. So it's an excellent idea to get something off-site, to your safe deposit box, workplace, relative's house, etc.
    There are many ways to do that, depending on how much data you have, how often it changes, how valuable it is, and your level of paranoia.
One of the best strategies is to follow the above recommendation, but with a pair of portable externals, each 4 or more times the size of your data. Each has one partition the same size as your internal HD for a "bootable clone" and another with the remainder for TM.
    Use one drive for a week or so, then take it off-site and swap with the other. You do have to tell TM when you swap drives, via TM Preferences > Change Disk; and you shouldn't go more than about 10 days between swaps.
    There are other options, instead of the dual drives, or in addition to them. Your off-site backups don't necessarily have to be full backups, but can be just copies of critical information.
    If you have a MobileMe account, you can use Apple's Backup app to get relatively-small amounts of data (such as Address book, preferences, settings, etc.) off to iDisk daily. If not, you can use a 3rd-party service such as Mozy or Carbonite.
    You can also copy data to CDs or DVDs and take them off-site. Re-copy them every year or two, as their longevity is questionable.
    Backup strategies are not a "One Size Fits All" sort of thing. What's best varies by situation and preference.
Just as an example, I keep full Time Machine backups; plus a CarbonCopyCloner clone (updated daily, while I'm snoozing) locally; plus small daily Backups to iDisk; plus some other things on CDs/DVDs in my safe deposit box. Probably overkill, but as many of us have learned over the years, backups are one area where +Paranoia is Prudent!+
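At its core, the "bootable clone" idea above is an exact mirror of the source volume. A minimal sketch of just that mirroring step (CCC/SuperDuper! do much more, such as making the copy actually bootable; the directory names here are placeholders):

```shell
SRC=demo-src        # stands in for the source volume
DEST=demo-clone     # stands in for the clone partition
mkdir -p "$SRC" "$DEST"
echo "hello" > "$SRC/file.txt"

# Mirror SRC onto DEST; --delete removes anything no longer on the source,
# which is what keeps a clone an exact copy rather than an archive.
if command -v rsync >/dev/null 2>&1; then
  rsync -a --delete "$SRC/" "$DEST/"
else
  cp -R "$SRC/." "$DEST/"   # crude fallback without deletion semantics
fi
ls "$DEST"
```

The --delete behavior is also why a clone, unlike TM, cannot recover a file you deleted before the last update ran.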

  • Backup Best Practices

    What are the best practices for backing up the Contribute
    Server? I found a tech note (
    http://www.adobe.com/go/1238b09),
    but it is from 2005. Is that still accurate?
    Thanks

I think this is still fine, because no new version came out after CPS 1.1.
This tech note is for CPS 1.1 only.
Thanks.

  • Time Machine, FileVault & Windows Backup Best Practice

    I read a lot on the forums about people wanting the best way to backup, when they had FileVault and also Windows files.
    This is what I did, I have a 250 GB Lacie external drive:
    1) 2 Partitions: 177 GB Mac OS Extended, 55 GB FAT32 (This disk was entirely FAT32 and MacOS Disk Utility wouldn't resize it to make space or even recognize the unpartitioned space I created in Windows, using a third party tool. So I resized in Windows, created a new ext2 partition, connected to Mac and erased it as Mac OS Extended. Mac sees 2 partitions now and automatically wants to use one for TimeMachine)
    2) I use FileVault on my home folder, only for personal pictures and documents. I keep the iTunes music folder out of my home folder
    3) Use Time Machine to back up everything but my home folder
    4) Create an encrypted image of my FileVaulted home folder on the same backup disk that I created. For ease, I use the same password as that of my login account. I used the Disk Utility's Image from Folder option for this.
So my pictures and documents are encrypted on my machine, they are copied as one encrypted file onto the backup drive, and the rest of my machine is backed up by Time Machine. I do not have to log out, though it is a manual step for me to create the image from the folder. I don't mind doing that for now, as I will only need to do it when I add new pictures to this folder; I would then make a new image on the backup drive.
    I think this setup will work for me, if I do find things I can tweak I will post a follow up.
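The encrypted-image step above uses Disk Utility's "Image from Folder", which is hdiutil under the hood. A sketch with placeholder folder/image names, feeding the passphrase via -stdinpass to avoid the interactive prompt:

```shell
# Placeholder source folder standing in for the pictures/documents to protect.
mkdir -p Pictures-to-backup
echo "demo" > Pictures-to-backup/demo.txt

if command -v hdiutil >/dev/null 2>&1; then
  # Create a compressed (UDZO), AES-128-encrypted image of the folder.
  printf 'secret-passphrase' | hdiutil create -encryption AES-128 -stdinpass \
    -srcfolder Pictures-to-backup -format UDZO pictures-backup.dmg
else
  echo "hdiutil is macOS-only; nothing created" > hdiutil-note.txt
fi
```

Running this from a scheduled script would remove the manual step, at the cost of keeping the passphrase somewhere the script can read it.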

I set SuperDuper! to back up my PowerMac and to shut down after completing the backup, then I went to sleep. Six hours later, I woke up when I heard the fans spinning profusely and the Mac still not shut down. I got the following error log. Does anyone know what went wrong? Have I potentially 'fried' the CPU?
    system.log:
    Description: System events log
    Size: 25 KB
    Last Modified: 4/22/08 6:11 AM
    Location: /var/log/system.log
    Recent Contents: Apr 22 00:00:01 ian-phuas-power-mac-g5 newsyslog[654]: logfile turned over
    Apr 22 00:01:16 ian-phuas-power-mac-g5 [0x0-0x95095].com.microsoft.PowerPoint[558]: monitor: taskforpid failed (os/kern) failure
    Apr 22 00:04:47 ian-phuas-power-mac-g5 com.apple.launchd[64] ([0x0-0xaa0aa].jp.co.canon.bj.printer.app.MPNavigator[673]): Stray process with PGID equal to this dead job: PID 675 PPID 1 MP Navigator
    Apr 22 00:07:03 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being monitored
    Apr 22 00:07:05 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being force quit
    Apr 22 00:07:09 ian-phuas-power-mac-g5 /usr/sbin/spindump[691]: process 675 is being no longer being monitored
    Apr 22 00:07:13 ian-phuas-power-mac-g5 com.apple.launchd[64] ([0x0-0xb00b0].jp.co.canon.bj.printer.app.MPNavigator[693]): Stray process with PGID equal to this dead job: PID 695 PPID 1 MP Navigator
    Apr 22 00:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 00:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 00:12:03 ian-phuas-power-mac-g5 Finder[91]: Cannot find function pointer pluginFactory for factory 053918F4-2869-11D7-A671-000A27E2DB90 in CFBundle/CFPlugIn 0x6c54680 </Users/ianphuac/Library/Contextual Menu Items/ToastIt.plugin> (bundle, loaded)
    Apr 22 00:13:34 ian-phuas-power-mac-g5 Mail[202]: Cannot find function pointer pluginFactory for factory 053918F4-2869-11D7-A671-000A27E2DB90 in CFBundle/CFPlugIn 0x8b911f0 </Users/ianphuac/Library/Contextual Menu Items/ToastIt.plugin> (bundle, loaded)
    Apr 22 01:03:31 ian-phuas-power-mac-g5 /usr/sbin/spindump[760]: process 695 is being monitored
    Apr 22 01:03:33 ian-phuas-power-mac-g5 /usr/sbin/spindump[760]: process 695 is being force quit
    Apr 22 01:03:36 ian-phuas-power-mac-g5 fseventsd[26]: callback_client: ERROR: d2fcallbackrpc() => (ipc/send) invalid destination port (268435459) for pid 700
    Apr 22 01:03:41 ian-phuas-power-mac-g5 SubmitReport[766]: missing kCRProblemReportProblemTypeKey
    Apr 22 01:03:44 ian-phuas-power-mac-g5 /usr/sbin/ocspd[769]: starting
    Apr 22 01:03:45 ian-phuas-power-mac-g5 SubmitReport[766]: Submitted compressed hang report for MP Navigator
    Apr 22 01:04:26 ian-phuas-power-mac-g5 [0x0-0x94094].com.microsoft.DatabaseDaemon[557]: monitor: taskforpid failed (os/kern) failure
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 01:04:27 ian-phuas-power-mac-g5 [0x0-0xbb0bb].com.blacey.SuperDuper![772]: crontab: no crontab for ianphuac
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'name' of class 'NSApplication' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'lastComponentOfFileName' of class 'NSDocument' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for type 'NSTextStorage' attribute 'title' of class 'NSWindow' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 01:04:28 ian-phuas-power-mac-g5 SuperDuper![772]: .scriptSuite warning for superclass of class 'NSAttachmentTextStorage' in suite 'NSTextSuite': 'NSString' is not a valid class name.
    Apr 22 01:04:34 ian-phuas-power-mac-g5 kernel[0]: UniNEnet::monitorLinkStatus - Link is down.
    Apr 22 01:04:44 ian-phuas-power-mac-g5 SuperDuper![772]: Connection failed! Error - no Internet connection
    Apr 22 01:04:56 ian-phuas-power-mac-g5 authexec[800]: executing /Applications/SuperDuper!.app/Contents/MacOS/SDAgent
    Apr 22 01:04:59 ian-phuas-power-mac-g5 KernelEventAgent[23]: tid 00000000 received unknown event (256)
    Apr 22 01:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 01:10:42 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 01:16:27 ian-phuas-power-mac-g5 KernelEventAgent[23]: tid 00000000 received unknown event (256)
    Apr 22 01:16:28 ian-phuas-power-mac-g5 fsaclctl[841]: HFSIOC_SETACLSTATE 1 on /Volumes/Lacie Backup returns 0 errno = 2
    Apr 22 01:30:28 ian-phuas-power-mac-g5 com.apple.launchd[1] (0x10b470.nohup[880]): Could not setup Mach task special port 9: (os/kern) no access
    Apr 22 02:00:33 ian-phuas-power-mac-g5 ntpd[14]: sendto(17.83.254.7) (fd=23): No route to host
    Apr 22 02:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 02:10:48 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 02:49:58 ian-phuas-power-mac-g5 kernel[0]: IOPMSlotsMacRISC4::determineSleepSupport has canSleep true
    Apr 22 02:51:34 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:53:35 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:55:36 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:57:36 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 02:59:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:01:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:03:37 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:05:38 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:07:38 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:09:39 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 03:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 03:12:39 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:14:12 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:17:13 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:19:13 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:21:44 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:23:44 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:25:45 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:27:45 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:29:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:31:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:33:46 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:36:47 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:38:47 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:40:48 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:42:48 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:45:49 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:47:49 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:50:50 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:53:20 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:55:21 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:57:21 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 03:59:22 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:01:52 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:03:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:05:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:07:53 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:09:56 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 04:10:47 ian-phuas-power-mac-g5 SMARTReporter[101]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 04:12:57 ian-phuas-power-mac-g5 kernel[0]: PM notification timeout (pid 202, Mail)
    Apr 22 04:52:35: --- last message repeated 17 times ---
    Apr 22 06:07:05 localhost com.apple.launchctl.System[2]: fsck_hfs: Volume is journaled. No checking performed.
    Apr 22 06:07:05 localhost com.apple.launchctl.System[2]: fsck_hfs: Use the -f option to force checking.
    Apr 22 06:07:07 localhost com.apple.launchctl.System[2]: launchctl: Please convert the following to launchd: /etc/mach_init.d/dashboardadvisoryd.plist
    Apr 22 06:07:07 localhost com.apple.launchd[1] (org.cups.cupsd): Unknown key: SHAuthorizationRight
    Apr 22 06:07:07 localhost com.apple.launchd[1] (org.ntp.ntpd): Unknown key: SHAuthorizationRight
    Apr 22 06:07:15 localhost kernel[0]: Darwin Kernel Version 9.2.2: Tue Mar 4 21:23:43 PST 2008; root:xnu-1228.4.31~1/RELEASE_PPC
    Apr 22 06:07:08 localhost kextd[10]: 414 cached, 0 uncached personalities to catalog
    Apr 22 06:07:15 localhost kernel[0]: standard timeslicing quantum is 10000 us
    Apr 22 06:07:14 localhost rpc.statd[17]: statd.notify - no notifications needed
    Apr 22 06:07:15 localhost kernel[0]: vmpagebootstrap: 1020257 free pages and 28319 wired pages
    Apr 22 06:07:14 localhost fseventsd[26]: event logs in /.fseventsd out of sync with volume. destroying old logs. (99878 12 104327)
    Apr 22 06:07:14 localhost bootlog[35]: BOOT_TIME: 1208815622 0
    Apr 22 06:07:16 localhost kernel[0]: migtable_maxdispl = 79
    Apr 22 06:07:14 localhost DirectoryService[31]: Launched version 5.2 (v514.4)
    Apr 22 06:07:15 localhost DumpPanic[29]: Panic data written to /Library/Logs/PanicReporter/2008-04-22-060715.panic
    Apr 22 06:07:17 localhost kernel[0]: 106 prelinked modules
    Apr 22 06:07:14 localhost /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[22]: Login Window Application Started
    Apr 22 06:07:17 localhost kernel[0]: Loading security extension com.apple.security.TMSafetyNet
    Apr 22 06:07:14 localhost DirectoryService[31]: WARNING - dsTouch: file was asked to be opened </Library/Preferences/DirectoryService/.DSIsRunning>: (File exists)
    Apr 22 06:07:17 localhost kernel[0]: calling mpopolicyinit for TMSafetyNet
    Apr 22 06:07:17 localhost kernel[0]: Security policy loaded: Safety net for Time Machine (TMSafetyNet)
    Apr 22 06:07:17 localhost kernel[0]: Loading security extension com.apple.nke.applicationfirewall
    Apr 22 06:07:18 localhost kernel[0]: Loading security extension com.apple.security.seatbelt
    Apr 22 06:07:18 localhost kernel[0]: calling mpopolicyinit for mb
    Apr 22 06:07:18 localhost kernel[0]: Seatbelt MACF policy initialized
    Apr 22 06:07:18 localhost kernel[0]: Security policy loaded: Seatbelt Policy (mb)
    Apr 22 06:07:18 localhost kernel[0]: Copyright (c) 1982, 1986, 1989, 1991, 1993
    Apr 22 06:07:18 localhost kernel[0]: The Regents of the University of California. All rights reserved.
    Apr 22 06:07:18 localhost kernel[0]: MAC Framework successfully initialized
    Apr 22 06:07:19 localhost kernel[0]: using 16384 buffer headers and 4096 cluster IO buffer headers
    Apr 22 06:07:19 localhost kernel[0]: AirPort_Brcm43xx::probe: 05094e80, 0
    Apr 22 06:07:19 localhost kernel[0]: DART enabled
    Apr 22 06:07:19 localhost kernel[0]: FireWire (OHCI) Apple ID 42 PCI now active, GUID 001124fffe7f2582; max speed s800.
    Apr 22 06:07:19 localhost kernel[0]: mbinit: done
    Apr 22 06:07:20 localhost kernel[0]: Security auditing service present
    Apr 22 06:07:20 localhost kernel[0]: BSM auditing present
    Apr 22 06:07:20 localhost kernel[0]: rooting via boot-uuid from /chosen: 7EEABB9C-080B-3F8E-9C95-7E90FFCF646F
    Apr 22 06:07:20 localhost kernel[0]: Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Apr 22 06:07:20 localhost kernel[0]: wl0: Broadcom BCM4320 802.11 Wireless Controller
    Apr 22 06:07:20 localhost fseventsd[26]: log dir: /.fseventsd getting new uuid: 77035FE8-86D2-4B05-AE4E-F897A4A724B3
    Apr 22 06:07:20 localhost kernel[0]: 4.170.25.8.2Got boot device = IOService:/MacRISC4PE/ht@0,f2000000/AppleMacRiscHT/pci@5/IOPCI2PCIBridge/k2-sata-root@C/AppleK2SATARoot/k2-sata@1/AppleK2SATA/ATADeviceNub@0/AppleATADiskDriver/IOATABlockStorageDevice/IOBlockStorageDriver/Hitachi HDT725032VLA360 Hitachi HDT725032VLA360/IOApplePartitionScheme/AppleHFS_Untitled1@3
    Apr 22 06:07:20 localhost kernel[0]: BSD root: disk1s3, major 14, minor 5
    Apr 22 06:07:20 localhost kernel[0]: jnl: unknown-dev: replay_journal: from: 15033344 to: 16646656 (joffset 0x952000)
    Apr 22 06:07:20 localhost kernel[0]: jnl: unknown-dev: journal replay done.
    Apr 22 06:07:20 localhost mDNSResponder mDNSResponder-170 (Jan 4 2008 18:04:16)[21]: starting
    Apr 22 06:07:21 localhost kernel[0]: HFS: Removed 2 orphaned unlinked files or directories
    Apr 22 06:07:21 localhost kernel[0]: Jettisoning kernel linker.
    Apr 22 06:07:21 localhost kernel[0]: Resetting IOCatalogue.
    Apr 22 06:07:21 localhost kernel[0]: Matching service count = 0
    Apr 22 06:07:21 localhost kernel[0]: Matching service count = 5
    Apr 22 06:07:21: --- last message repeated 4 times ---
    Apr 22 06:07:21 localhost kernel[0]: PowerMac72U3TwinsPIDCtrlLoop::adjustControls state == not ready
    Apr 22 06:07:21 localhost kernel[0]: IOPlatformControl::registerDriver Control Driver AppleSlewClock did not supply target-value, using default
    Apr 22 06:07:21 localhost kernel[0]: IPv6 packet filtering initialized, default to accept, logging disabled
    Apr 22 06:07:21 localhost kernel[0]: UniNEnet: Ethernet address 00:11:24:7f:25:82
    Apr 22 06:07:21 localhost kernel[0]: AirPort_Brcm43xx: Ethernet address 00:11:24:aa:86:4b
    Apr 22 06:07:21 localhost /usr/sbin/ocspd[57]: starting
    Apr 22 06:07:22 localhost kernel[0]: jnl: disk0s3: replay_journal: from: 623616 to: 1369088 (joffset 0xa701000)
    Apr 22 06:07:22 localhost kernel[0]: jnl: disk0s3: journal replay done.
    Apr 22 06:07:22 localhost fseventsd[26]: event logs in /Volumes/Macintosh HD/.fseventsd out of sync with volume. destroying old logs. (20187 0 27000)
    Apr 22 06:07:22 localhost fseventsd[26]: log dir: /Volumes/Macintosh HD/.fseventsd getting new uuid: EAEEEBE3-D7A7-4F79-BBAE-0B9727642A98
    Apr 22 06:07:27 localhost mDNSResponder[21]: Couldn't read user-specified Computer Name; using default “Macintosh-0011247F2582” instead
    Apr 22 06:07:27 localhost mDNSResponder[21]: Couldn't read user-specified local hostname; using default “Macintosh-0011247F2582.local” instead
    Apr 22 06:07:32 localhost kextd[10]: writing kernel link data to /var/run/mach.sym
    Apr 22 06:07:38 localhost mDNSResponder[21]: User updated Computer Name from Macintosh-0011247F2582 to Ian Phua’s Power Mac G5
    Apr 22 06:07:38 localhost mDNSResponder[21]: User updated Local Hostname from Macintosh-0011247F2582 to ian-phuas-power-mac-g5
    Apr 22 06:07:40 ian-phuas-power-mac-g5 configd[33]: setting hostname to "ian-phuas-power-mac-g5.local"
    Apr 22 06:07:45 ian-phuas-power-mac-g5 org.ntp.ntpd[14]: Error : nodename nor servname provided, or not known
    Apr 22 06:07:45 ian-phuas-power-mac-g5 ntpdate[81]: can't find host time.asia.apple.com
    Apr 22 06:07:45 ian-phuas-power-mac-g5 ntpdate[81]: no servers can be used, exiting
    Apr 22 06:07:50 ian-phuas-power-mac-g5 loginwindow[22]: Login Window Started Security Agent
    Apr 22 06:07:51 ian-phuas-power-mac-g5 authorizationhost[90]: MechanismInvoke 0x11f220 retainCount 2
    Apr 22 06:07:51 ian-phuas-power-mac-g5 SecurityAgent[91]: MechanismInvoke 0x101650 retainCount 1
    Apr 22 06:07:51 ian-phuas-power-mac-g5 SecurityAgent[91]: MechanismDestroy 0x101650 retainCount 1
    Apr 22 06:07:51 ian-phuas-power-mac-g5 loginwindow[22]: Login Window - Returned from Security Agent
    Apr 22 06:07:51 ian-phuas-power-mac-g5 authorizationhost[90]: MechanismDestroy 0x11f220 retainCount 2
    Apr 22 06:07:51 ian-phuas-power-mac-g5 loginwindow[22]: USER_PROCESS: 22 console
    Apr 22 06:07:52 ian-phuas-power-mac-g5 com.apple.launchd[1] (com.apple.UserEventAgent-LoginWindow[86]): Exited: Terminated
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s3: replay_journal: from: 71680 to: 792576 (joffset 0xa9a000)
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s3: journal replay done.
    Apr 22 06:07:59 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s5: replay_journal: from: 4422144 to: 5159424 (joffset 0x3f8000)
    Apr 22 06:07:59 ian-phuas-power-mac-g5 fseventsd[26]: event logs in /Volumes/Lacie Backup/.fseventsd out of sync with volume. destroying old logs. (97634 1 104373)
    Apr 22 06:08:01 ian-phuas-power-mac-g5 kernel[0]: jnl: disk2s5: journal replay done.
    Apr 22 06:08:01 ian-phuas-power-mac-g5 /System/Library/CoreServices/coreservicesd[43]: SFLSharePointsEntry::CreateDSRecord: dsCreateRecordAndOpen(Ian Phua C S's Public Folder) returned -14135
    Apr 22 06:08:01 ian-phuas-power-mac-g5 /System/Library/CoreServices/coreservicesd[43]: SFLSharePointsEntry::CreateDSRecord: dsCreateRecordAndOpen(Ian Phua's Public Folder) returned -14135
    Apr 22 06:08:06 ian-phuas-power-mac-g5 loginwindow[22]: Unable to resolve startup item: status = -36, theURL == NULL = 1
    Apr 22 06:08:06 ian-phuas-power-mac-g5 loginwindow[22]: Unable to resolve startup item: status = -43, theURL == NULL = 1
    Apr 22 06:08:08: --- last message repeated 1 time ---
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: log dir: /Volumes/Lacie Backup/.fseventsd getting new uuid: 0C27B6C4-044E-45BA-BD19-E56DBBC68DC4
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: event logs in /Volumes/Lacie SP/.fseventsd out of sync with volume. destroying old logs. (97169 10 104382)
    Apr 22 06:08:08 ian-phuas-power-mac-g5 fseventsd[26]: log dir: /Volumes/Lacie SP/.fseventsd getting new uuid: 5BA6D6AF-6DC0-4FE9-94D3-099FAE59249D
    Apr 22 06:08:14 ian-phuas-power-mac-g5 SMARTReporter[130]: ATA Drive: 'Maxtor 6B160M0' - SMART condition not exceeded, drive OK!
    Apr 22 06:08:15 ian-phuas-power-mac-g5 SMARTReporter[130]: ATA Drive: 'Hitachi HDT725032VLA360' - SMART condition not exceeded, drive OK!
    Apr 22 06:08:21 ian-phuas-power-mac-g5 /System/Library/CoreServices/SystemUIServer.app/Contents/MacOS/SystemUIServer[108]: CPSGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:21 ian-phuas-power-mac-g5 /System/Library/CoreServices/SystemUIServer.app/Contents/MacOS/SystemUIServer[108]: CPSPBGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:29 ian-phuas-power-mac-g5 SubmitReport[140]: CFReadStreamGetError() returned: 12
    Apr 22 06:08:29 ian-phuas-power-mac-g5 SubmitReport[140]: Failed to submit uncompressed panic report for xnu
    Apr 22 06:08:31 ian-phuas-power-mac-g5 Disk Utility[146]: ********
    Apr 22 06:08:31 ian-phuas-power-mac-g5 Disk Utility[146]: Disk Utility started.
    Apr 22 06:08:32 ian-phuas-power-mac-g5 mdworker[69]: (Error) SyncInfo: Boot-cache avoidance timed out!
    Apr 22 06:08:37: --- last message repeated 1 time ---
    Apr 22 06:08:37 ian-phuas-power-mac-g5 /System/Library/CoreServices/Problem Reporter.app/Contents/MacOS/Problem Reporter[138]: CPSGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:37 ian-phuas-power-mac-g5 /System/Library/CoreServices/Problem Reporter.app/Contents/MacOS/Problem Reporter[138]: CPSPBGetProcessInfo(): This call is deprecated and should not be called anymore.
    Apr 22 06:08:47 ian-phuas-power-mac-g5 Disk Utility[146]: Repairing permissions for “Macintosh Leopard HD”
    Apr 22 06:08:52 ian-phuas-power-mac-g5 kextcache[46]: CFPropertyListCreateFromXMLData(): Old-style plist parser: missing semicolon in dictionary.
    Apr 22 06:09:10 ian-phuas-power-mac-g5 kernel[0]: UniNEnet::monitorLinkStatus - Link is up at 1000 Mbps - Full Duplex (PHY regs 5,6:0xcde1,0x000d)
    Apr 22 06:09:15 ian-phuas-power-mac-g5 mdworker[69]: (Error) SyncInfo: Catalog changed during searchfs too many time. Falling back to fsw search /
    Apr 22 06:09:21 ian-phuas-power-mac-g5 SubmitReport[169]: Submitted uncompressed panic report for xnu
    Apr 22 06:10:00 ian-phuas-power-mac-g5 kextcache[46]: CFPropertyListCreateFromXMLData(): Old-style plist parser: missing semicolon in dictionary.
    Apr 22 06:11:18 ian-phuas-power-mac-g5 simpleScanner[251]: launching daemon...
    Apr 22 06:11:18 ian-phuas-power-mac-g5 simpleScanner[251]: daemon runs all right...
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'name' of class 'NSApplication' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'lastComponentOfFileName' of class 'NSDocument' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for attribute 'boundsAsQDRect' of class 'NSWindow' in suite 'NSCoreSuite': 'NSData<QDRect>' is not a valid type name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for type 'NSTextStorage' attribute 'title' of class 'NSWindow' in suite 'NSCoreSuite': AppleScript name references may not work for this property because its type is not NSString-derived.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 SuperDuper![253]: .scriptSuite warning for superclass of class 'NSAttachmentTextStorage' in suite 'NSTextSuite': 'NSString' is not a valid class name.
    Apr 22 06:11:25 ian-phuas-power-mac-g5 [0x0-0x1e01e].com.blacey.SuperDuper![253]: crontab: no crontab for ianphuac
    Apr 22 06:11:27 ian-phuas-power-mac-g5 GrowlHelperApp[255]: Unknown plugin filename extension '(null)' (from filename '(null)' of plugin named 'Bubbles')
    Apr 22 06:11:39 ian-phuas-power-mac-g5 Mail[248]: NSInvalidArgumentException: Unable to obtain sql exclusive lock to commit transaction: 10 (disk I/O error)
    Apr 22 06:11:39 ian-phuas-power-mac-g5 Mail[248]: SyncServices[ISyncSessionDriver]: Caught top level exception: Unable to obtain sql exclusive lock to commit transaction: 10 (disk I/O error) Stack trace: (0x91f3bef4 0x950e676c 0x91f3be04 0x91f3be3c 0x93163d60 0x930ff2c4 0x93101c7c 0x93101b4c 0x93113fc0 0x931134b4 0x93105e64 0x930f3da0 0x9317adfc 0x968704f8 0x95fdab9c)

  • Hot backup - best practice

    I have written a simple TimerTask subclass to handle backup for my application. I am doing my best to follow the guidelines in the Getting Started guide. I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup. I have included a fragment of the code that we are using and I am hoping that perhaps someone could validate the approach.
    I should add that we are running a replicated group in which the replicas forward commands to the master using RMI and the Command pattern.
    Kind regards
    James Brook
    // Start backup, find out what needs to be copied.
    backupHelper.startBackup();
    log.info("running backup...");
    // Stop all writes by quiescing the database
    RepositoryNode.getInstance().bdb().quiesce();
    // Ensure all data is committed to disk by running a checkpoint
    // with the default configuration - expensive
    environment.checkpoint(null);
    try {
        String[] filesForBackup = backupHelper.getLogFilesInBackupSet();
        // Copy the files to archival storage.
        for (int i = 0; i < filesForBackup.length; i++) {
            String filePathForBackup = filesForBackup[i];
            try {
                File source = new File(environment.getHome(), filePathForBackup);
                File destination = new File(backupDirectoryPath, new File(filePathForBackup).getName());
                ERXFileUtilities.copyFileToFile(source, destination, false, true);
                if (log.isDebugEnabled()) {
                    log.debug("backed up: " + source.getPath() + " to destination: " + destination);
                }
            } catch (FileNotFoundException e) {
                log.error("backup failed for file: " + filePathForBackup +
                        " the number stored in the holder did not exist.", e);
                ERXFileUtilities.deleteFile(new File(backupDirectory, LATEST_BACKUP_NUMBER_FILE_NAME));
            } catch (IOException e) {
                log.fatal("backup failed for file: " + filePathForBackup, e);
            }
        }
        saveLatestBackupFileNumber(backupDirectory, backupHelper.getLastFileInBackupSet());
    } finally {
        // Remember to exit backup mode, or all log files won't be cleaned
        // and disk usage will bloat.
        backupHelper.endBackup();
        // Allow writes again
        RepositoryNode.getInstance().bdb().unquiesce();
        log.info("finished backup");
    }
    The quiesce() method acquires the writeLock on the ReentrantReadWriteLock.
    One thing I am not sure about is when we should checkpoint the database. Is it safe to do this after acquiring the lock, or should we do it just before?
    The plan is to backup files to a filesystem which is periodically copied to offline backup storage.
    Any help much appreciated.
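    For reference, our quiesce() is essentially a thin wrapper around the write lock; a simplified sketch (the class and the method names other than quiesce() are illustrative, not our exact code):

    ```java
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Writers share the read lock; quiesce() takes the write lock, which waits
    // for in-flight writers to finish and blocks new ones until it is released.
    public class WriteGate {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true); // fair, so backup can't starve

        public void beginWrite()  { lock.readLock().lock(); }   // call around every write
        public void endWrite()    { lock.readLock().unlock(); }

        public void quiesce()     { lock.writeLock().lock(); }  // blocks until writers drain
        public void unquiesce()   { lock.writeLock().unlock(); } // allow writes again

        public boolean isQuiesced() { return lock.isWriteLocked(); }
    }
    ```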

    James,
    You wrote: "I am using the provided DbBackup helper class along with a ReentrantReadWriteLock to quiesce the database. I want to make sure that writes wait for the lock and that a checkpoint happens before the backup." One thing to make clear is that you can choose to do a hot backup or an offline backup. The Getting Started Guide's Performing Backups chapter defines these terms. When doing a hot backup, it is not necessary to stop any database operations or do a checkpoint, and you should use the DbBackup helper class. On the other hand, when doing an offline backup, you would want to quiesce the database, close or sync your environment, and copy your .jdb files. In an offline backup, there is no need to use DbBackup.
    So while it doesn't hurt to do all of the steps for both hot and offline backup, as you are doing, it's more than you need to do.
    You may find the section at http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/logfilesrevealed.html informative; it explains that JE's storage is append-only, which may help you understand why a hot backup does not require stopping database operations. If you want to do a hot backup, the code you show looks fine, though you don't need to quiesce the database or run a checkpoint.
    If you are running replication, you should read the chapter http://www.oracle.com/technology/documentation/berkeley-db/je/ReplicationGuide/dbbackup.html. Backing up a node that is a member of a replicated group is very much like a standalone backup, except you will want to catch that extra possible exception. Also note that backup files belong to a given node, and the files from one node's backup shouldn't be mixed with the files from another node.
    To be clear, suppose you have node A and node B. The backup of node A contains files 00000001.jdb, 00000003.jdb. The backup of node B contains files 00000002.jdb, 00000003.jdb. Although A's 00000003.jdb and B's 00000003.jdb have the same name, they are not the same file and are not interchangeable.
    In a replication system, if you are doing a full restoration, you can use B's full set of files to restore A, and vice versa.
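    To make the hot-backup copy step concrete, here is a minimal sketch of it using only the standard library (the JE-specific calls are left out; in real code the file names would come from DbBackup.getLogFilesInBackupSet() between startBackup() and endBackup()):

    ```java
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;

    // Because JE's log files are append-only and immutable once written, the
    // files named by the backup set can be copied while the application keeps
    // writing -- new writes only ever create later-numbered files.
    public class HotBackupCopy {
        /** Copy each named log file from envHome into backupDir, keeping its name. */
        public static void copyBackupSet(Path envHome, Path backupDir, List<String> fileNames)
                throws IOException {
            Files.createDirectories(backupDir);
            for (String name : fileNames) {
                // REPLACE_EXISTING keeps a retried backup idempotent.
                Files.copy(envHome.resolve(name), backupDir.resolve(name),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
    ```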
    Linda

  • Hyper V 2012 Backup - best practice

    I'm setting up a new Hyper-V Server 2012 and I'm trying to understand how best to configure it for backups and restores using Windows Server Backup. I have 5 VMs running; each one has its own iSCSI drive.
    I would like to back up the VMs to an external USB disk. I have the choice of backing up Hyper-V using a child partition snapshot, or backing up the VM folder with the config and the VHD files (is it right that Windows overwrites the old backup?).
    What's the difference between the two methods? Is incremental backup possible with VMs?
    thanks

    Hi,
    There are two basic methods you can use to perform a backup. You can:
    Perform a backup from the server running Hyper-V.
    We recommend that you use this method to perform a full server backup because it captures more data than the other method. If the backup application is compatible with Hyper-V and the Hyper-V VSS writer, you can perform a full server backup that helps protect all of the data required to fully restore the server, except the virtual networks. The data included in such a backup includes the configuration of virtual machines, snapshots associated with the virtual machines, and virtual hard disks used by the virtual machines. As a result, using this method can make it easier to recover the server if you need to, because you do not have to recreate virtual machines or reinstall Hyper-V.
    Perform a backup from within the guest operating system of a virtual machine.
    Use this method when you need to back up data from storage that is not supported by the Hyper-V VSS writer. When you use this method, you run a backup application from the guest operating system of the virtual machine. If you need to use this method, you should use it in addition to a full server backup and not as an alternative to a full server backup. Perform a backup from within the guest operating system before you perform a full backup of the server running Hyper-V.
    iSCSI-based storage is supported for backup by the Hyper-V VSS writer when the storage is connected through the management operating system and the storage is used for virtual hard disks.
    For more information please refer to following MS articles:
    Planning for Backup
    http://technet.microsoft.com/en-us/library/dd252619(WS.10).aspx
    Hyper-V: How to Back Up Hyper-V VMs from the Host Using Windows Server Backup
    http://social.technet.microsoft.com/wiki/contents/articles/216.hyper-v-how-to-back-up-hyper-v-vms-from-the-host-using-windows-server-backup.aspx
    Backing up Hyper-V with Windows Server Backup
    http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/03/11/backing-up-hyper-v-with-windows-server-backup.aspx
    Lawrence
    TechNet Community Support

  • Re: OVM Repository and VM Guest Backups - Best Practice?

    Hi,
    I have also been looking into how to back up an OVM repository, and I'm currently thinking of doing it with the OCFS2 reflink command, which is what is used by the OVMM 'Thin Clone' option to create a snapshot of a virtual disk's .img file. I thought I could create a script that reflinks all the virtual disks to a separate directory within the repository, then export the repository via OVMM and back up all the snapshots. All the snapshots could then be deleted once the backup was complete.
    The VirtualMachines directory could also be backed up in the same way, though I would have thought it would be safe to back up this directory directly, as the cfg files are very small and change infrequently.
    I would be interested to hear from anyone who has experience of doing something similar, or has any advice on whether this would be doable.
    Regards,
    Andy

    Yes, that is one common way to perform backups. Unfortunately, you'll have to script those things yourself. Some people also use the xm command to pause the VM for just the time it takes to create the reflinks (especially if there is more than one virtual disk in the machine and you want to make sure they are consistent).
    You can read a bit more about it here:
    VM Manager and VM Server - backup and recovery options
    There is a great article (in German, but the scripts are easy to follow) about this: http://www.trivadis.com/fileadmin/user_upload/PDFs/Trivadis_in_der_Presse/120601_DOAG-News_Snapshot_einer_VM_mit_Oracle_VM_3.pdf
    and I have blogged about an alternative way where you clone a running machine and then backup that cloned (and stopped) machine
    http://portrix-systems.de/blog/brost/taking-hot-backups-with-oracle-vm/
    cheers
    bjoern

  • Backup validation best practice  11GR2 on Windows

    Hi all
    I am just reading through some guides on checking for various types of corruption on my database. It seems that having DB_BLOCK_CHECKSUM set to TYPICAL takes care of much of the physical corruption and will alert you to the fact any has occurred. Furthermore RMAN by default does its own physical block checking. Logical corruption on the other hand does not seem to be checked automatically unless the CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that could be run on various objects.
    My question is really, what is best practice for checking for block corruption. Do people even bother regularly checking this and just allow Oracle to manage itself, or is it best practice to have the CHECK LOGICAL command in RMAN (even though its not added by default when configuring backup jobs through OEM) or do people schedule jobs and output reports from a VALIDATE command on a regular basis?
    Many thanks

    Using the CHECK LOGICAL clause is considered a best practice, at least by Oracle Support, according to
    NOTE:388422.1  Top 10 Backup and Recovery best practices
    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
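    Concretely, the clause is simply added to the backup or validate command; a typical form (adjust the rest of the command to your own backup script) is:

    ```
    # add logical checking to the regular backup:
    BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;

    # or check the database without producing a backup:
    BACKUP VALIDATE CHECK LOGICAL DATABASE;
    ```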

  • Best practice for E-business suite 11i or R12 Application backup

    Hi,
    I'm taking RMAN backup of database. What would be "Best practice for E-business suite 11i or R12 Application backup" procedure?
    Right now I'm taking file level backup. Please suggest if any.
    Thanks

    Please review the following thread, it should be helpful.
    Reommended backup and recovery startegy for EBS

  • Best practice for taking Site collection Backup with more than 100GB

    Hi,
    I have a site collection whose data is more than 100 GB. Can anyone please suggest the best practice to take a backup?
    Thanks in advance....
    Regards,
    Saya

    Hi
    I think we can do this using a PowerShell script.
    First load the SharePoint snap-in in PowerShell:
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Web application backup & restore
    Backup-SPFarm -Directory \\WebAppBackup\Development  -BackupMethod Full -Item "Web application name"
    Site Collection backup & restore
    Backup-SPSite http://1632/sites/TestSite  -Path C:\Backup\TestSite1.bak
    Restore-SPSite http://1632/sites/TestSite2  -Path C:\Backup\TestSite1.bak -Force
    Regards
    manikandan
