Snapshot Backups on HP EVA SAN

Hi everyone,
We are implementing a new HP EVA SAN for our SAP MaxDB Wintel environment. As part of the SAN setup we will be utilising the EVA's snapshot technology to perform a nightly backup.
Currently HP Data Protector does not support MaxDB for its "Zero Downtime Backup" (ZDB) concept, so we need to perform LUN snapshots using the EVA's native commands. ZDB would have been nice, as it integrates with SAP and lets the DB/SAP know when a snapshot backup has occurred. However, as I mentioned, this feature is not available for MaxDB (only for SAP on Oracle).
We are aware that SAP supports snapshots on external storage devices as stated in OSS notes 371247 and 616814.
To perform the snapshot we would do something similar to (if not exactly) what note 616814 describes, as below (a scripted sketch follows the command sequence):
To create the split mirror or snapshot, proceed as follows:
             dbmcli -d <database_name> -u <dbm_user>,<password>
                  util_connect <dbm_user>,<password>
                  util_execute suspend logwriter
               ==> Create the snapshot on the EVA
                  util_execute resume logwriter
                  util_release
                  exit
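In practice we would script this. Here is a rough sketch of the wrapper we have in mind (assumptions: dbmcli is on the PATH, the EVA snapshot is triggered via HP's SSSU scripting utility, and DBNAME/DBMUSER/PASSWORD plus snap_eva.txt are placeholders for site-specific values - it would of course need testing on a sandbox first):
     #!/bin/sh
     # Keep ONE dbmcli session open via a named pipe, so the logwriter
     # suspension is guaranteed to still be in effect while the EVA
     # snapshot is taken.
     DBNAME=PRD; DBMUSER=dbm; PASSWORD=secret       # placeholders
     PIPE=/tmp/dbmcli.$$; mkfifo "$PIPE"
     dbmcli -d "$DBNAME" -u "$DBMUSER,$PASSWORD" < "$PIPE" &
     exec 3>"$PIPE"                   # hold the pipe open for writing
     echo "util_connect $DBMUSER,$PASSWORD" >&3
     echo "util_execute suspend logwriter" >&3
     sleep 10                         # crude: give the suspend time to complete
     # Create the snapshot on the EVA (snap_eva.txt = the SSSU SNAP commands)
     sssu "file snap_eva.txt" || echo "WARNING: EVA snapshot failed" >&2
     echo "util_execute resume logwriter" >&3
     echo "util_release" >&3
     echo "exit" >&3
     exec 3>&-; rm -f "$PIPE"         # close the pipe; dbmcli exits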
Obviously MaxDB and SAP are unaware that a "backup" has been performed. This poses a couple of issues that I would like to see if anyone has a solution to.
a.  To enable automatic log backup, MaxDB must know that it has first completed a "full" backup. Is it possible to make MaxDB aware that a snapshot backup of the database has been taken, thus allowing us to enable automatic log backup? (See the sketch below for what we would like to end up running.)
b.  SAP also likes to know it has been backed up. EarlyWatch Alert reports start to get a little upset when you don't perform a backup on the system for a while.
Also, DB12 will report that the system isn't in a recoverable state, when in fact it is. Any workarounds available here?
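For question a, what we would ultimately like to be able to run, once MaxDB considers a complete backup to exist, is just the usual sequence (the medium name below is a placeholder):
     dbmcli -d <database_name> -u <dbm_user>,<password>
          util_connect <dbm_user>,<password>
          autolog_on <log_backup_medium>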
Cheers
Shaun

Hi Shaun,
interesting thread so far...
> It would be nice to see HP and SAP (MaxDB) take the snapshot technology one or two steps further, to provide a guaranteed consistent backup that can be block-level verified.  I think HP's ZDB (zero downtime backup, e.g. snapshots) technology for SAP on Oracle using Data Protector does this now?!?
Hmm... I guess the keyword here is 'market'. If there is enough market potential visible, I tend to believe that both SAP and HP would happily try to deliver such tight integration.
I don't know how this ZDB stuff works with Oracle, but how could the HP software possibly know what an Oracle block should look like?
No, there are just these options to actually check for block consistency in Oracle: use RMAN, use DBV, or use SQL to actually read your data (via EXP, EXPDP, ANALYZE, custom SQL).
Even worse, you might come across block corruptions that these checks do not really cover.
> Data corruption can mean so many things.  If you're talking structure corruption or block corruption, then you do hope that your consistency checks and database backup block checks will bring this to the attention of the DBA.  Hopefully recovery of the DB from tape and rolling forward would resolve this.
Yes, I was talking about data block corruption. Why? Because there is no reliable way to actually perform a semantic check of your data. None.
We (SAP) simply rely on the assumption that whatever the Updater writes to the database is consistent from an application point of view.
Having handled far too many remote consulting messages concerning data rescue due to block corruptions, I can say: getting all readable data out of the corrupt database objects is really the easy part of it.
The problems get big once the application developers need to think up reports to check and repair consistency at the application level.
> However, if you're talking data corruption as in "crap data" has been loaded into the database, or a rogue ABAP has corrupted several million rows of data, then this becomes a little more tricky.  If the issue is identified immediately, restoring from backup is a feasible option for us.
> If the issue happened over 48 hrs ago, then restoring from a backup is not an option.  We are a 24x7x365 manufacturing operation, shipping goods all around the world.  We produce and ship too much product in a 24 hr window for it to be rekeyed (or so the business says) if the data is lost.
Well, in that case you're doomed. Plain and simple. Don't put any effort into getting "tricky"; just never, ever run any piece of code that has not passed the whole test factory. That's really the only chance.
> We would have to get tricky and do things such as restore a copy of the production database to another server, and extract the original "good" documents from the copy back into the original, or hopefully the rogue ABAP can correct whatever mistake they originally made to the data.
That's not a recovery plan - that is praying for mercy.
I know quite a few customer systems that went for this "solution" and had inconsistencies in the system for a long, long time afterwards.
> Look...there are hundreds of corruption scenarios we could talk about, but each issue will have to be evaluated, and the decision to restore or not would be decided based on the issue at hand.
I totally agree.
The only thing that must not happen is: open a call conference and talk about what a corruption is in the first place, why it happened, how it could happen at all... I have spent hours of precious lifetime in such nonsense call confs, only to see that there is no plan for this on the customer side.
> I would love to think that this is something we could do daily to a sandpit system, but with a 1.7TB production database, our backups take 6hrs, a restore would take about 10hrs, and the consistency check ... well a while.
We have customers saving multi-TB databases in far less time - it is possible.
> And what a luxury to be able to do this ... do you actually know of ANY sites that do this?
Quick Backups? Yes, quite a few. Complete Backup, Restore, Consistency Check cycle? None.
So why is that? I believe it's because there is no single button for it.
It's not integrated into the CCMS and/or the database management software.
It might also be (hopefully) that I simply never hear of these customers. See, as a DB support consultant I don't get in touch with "success stories" - I see failures and bugs all day.
To me the correct behaviour would be to actually stop the database once the last verified backup is too old - just like everybody is used to when hitting a LOGFULL / ARCHIVER STUCK situation.
Until then - I guess I will have a lot more data rescue to do...
> Had a read  ...  being from New Zealand I could easily relate to the sheep =)
> That's not what I meant.  Like I said, we are a 24x7x365 system.  We get a maximum of 2 hrs downtime for maintenance a month.  Not that we need it these days, as the systems practically run themselves.  What I meant was that between 7am and 7pm are our busiest peak hours, but we have dispatch personnel, warehouse operations, shift supervisors etc., as well as a huge amount of batch running through the "night" (and day).  We try to maintain a good dialog response during the core hours, and then try to perform all the "other" stuff around these hours, including backups, opt stats, business batch, large BI extractions etc.
> Are we busy all day and night ... yes ... very.
Ah ok - got it!
Especially in such situations I would not try to implement consistency checks on your prod database.
Basically, running a CHECK DATA there does not mean anything. Right after a table finishes its check it can get corrupted, even though the check is still running on other tables. So you never really have a guaranteed consistent state in a running database.
On the other hand, what you really want to know is not "Are there any corruptions in the database?" but "If there were any corruptions in the database, could I get my data back?".
This latter question can only be answered by checking the backups.
> Noted and agreed.  Will do daily backups via MaxDB kernel, and a full verification each week.
One more customer on the bright side
> One last question.  If we "restored" from an EVA snapshot, and had the DB logs up to the current point in time, can you tell MaxDB just to roll forward using these logs even though the restore wasn't initiated via MaxDB?
I don't see a reason why not - if you restore the data and log area and bring the DB to admin mode, then it uses the last successful savepoint for startup.
If you then use recover_start to supply more logs, that should work.
But as always, this is something that needs to be checked on your system.
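For illustration, the sequence described might look like this in dbmcli (a sketch only - the medium name is a placeholder, and the exact recover_start arguments should be checked against the MaxDB documentation for your version):
     dbmcli -d <database_name> -u <dbm_user>,<password>
          db_admin
          util_connect <dbm_user>,<password>
          recover_start <log_backup_medium> LOG
          ==> answer the prompts for further log pieces, then
          db_online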
It has been a really nice discussion - I hope you don't take my comments as offensive; they really aren't meant that way.
KR Lars

Similar Messages

  • SAP Data Storage Migration from HP EVA SAN to NetApp FAS3070 FMC for M5000s

    Good day all
    We need to perform a storage migration for SAP data that currently resides on 2 HP EVA SANs. We have 2 SUN M5000s, 2 SUN E2900s and a couple of V490s, which all connect to the SAN via Cisco 9506 Directors. We have recently commissioned a NetApp Fabric Metrocluster on 2 FAS 3070s, and need to move our SAP data from the EVAs to the new Metrocluster. Our SUN boxes are running Solaris 10. It was suggested that we use LVM to move the data, but I have no knowledge when it comes to Solaris.
    I have some questions, which I hope someone can assist me in answering:
    - Can we perform a live transfer of this data with low risk, using LVM? (Non-disruptive migration of 11TB.)
    - Is LVM a wise choice for this task (see the sketch below for my current understanding)? We have Replicator X too, but have had challenges using it on another Metrocluster.
    - I would like to migrate our Sandbox as a test migration (1.5TB) and use it to judge the speed of the data migration, then move all DEV and QA boxes across before Production data. There are multiple zones on the hardware mentioned above. Is there no simple way of cloning data from the HP to the NetApp, and then re-syncing before going live on the new system?
    - Would it be best to have LUNs created on the new SAN with the same volumes as the HP EVA sizings, or is it equally simple to create "Best Practice" sized LUNs on the other side before copying data across? Hard to believe it would be equally simple, but we would like to size the LUNs properly.
    Please assist, I can get further answers, if there are any questions in this regard.
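    For reference, my current understanding of the SVM (Solaris Volume Manager) mirror approach is the sketch below; device names are hypothetical, and the first step (encapsulating the live slice with metainit -f plus a vfstab change) needs a short outage:
        metainit -f d11 1 1 c2t0d0s0    # encapsulate the existing EVA slice
        metainit d10 -m d11             # one-way mirror on top of it
        metainit d12 1 1 c3t0d0s0       # submirror on the new NetApp LUN
        metattach d10 d12               # attach; resync runs while the FS stays online
        metastat d10                    # wait until both submirrors show "Okay"
        metadetach d10 d11              # drop the EVA side; data now lives on NetApp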

  • VCB/VADP, ESX 4.1 and NetWare vm snapshot backup issue

    Hi!
    If you are running NetWare as a guest in VMware ESX 4.1 and are using backup software that uses the snapshot feature to back up the guest's vmdk, then you may run into an issue that causes the snapshot to fail. This was just documented by VMware on Feb 23 (two days ago), so you may not have seen it yet. Here is the URL to the VMware KB:
    http://kb.vmware.com/kb/1029749
    The fix is to install the older v4.0 vmware tools into the NetWare guest.
    Cheers,
    Ron

    Ron,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Time Machine snapshots, but no hourly backups

    System: MBP (late 2011), 2.8 GHz i7, 16 GB RAM, Mavericks 10.9.5, Samsung SSD and Samsung HDD in lieu of optical drive. Running Bootcamp too.
    My technical acumen: ample and extensive...but I am stumped.
    Problem: in September 2014, I noticed that my hourly TM backups were not happening regularly.  Snapshots were occurring, but no "real" backup.  I scoured the Console messages, and there is NO error...there is NOTHING regarding backups unless I instigate a manual one.  In that instance, the backup completes normally, and I am left with a normal and functional backup....as best as I can tell.  Otherwise?  Nothing. This was not concomitant with any major hardware upgrade(s); the second HDD was added after things stopped working and the RAM was added before. However, I did have 1 bad RAM module (when I went to 16 GB) that needed to be replaced, but as far as I can tell this is only coincidental.
    Here's what I have tried:
    1) Wiping and re-formatting the backup drive (WD My Book 2 TB, FireWire, intended for Mac)
    2) Replacing backup drive (WD My Book 3 TB, USB, intended for Mac)
    3) Reset PRAM, SMC, repaired permissions
    4) Ran Onyx cleaning and maintenance scripts, rebuilt every cache I can think of, removed old and obsolete plists from Library (for deleted software)
    5) Repaired disk from recovery partition (there were no issues)
    6) EVERY suggestion I could find on pondini.org, including "full reset" of TM
    7) EVERY suggestion I could find here on Apple Discussions
    Nothing worked.  At all.  The problem is the total lack of console activity to indicate an issue...I can't fix what I can't see! Anyone know what I could be missing?  Just for verification, below are the console messages re: manual backups, backup drive and snapshots:
    9/30/14 7:10:43.291 AM com.apple.backupd[289]: Starting manual backup
    9/30/14 7:10:43.723 AM com.apple.backupd[289]: Backing up to /dev/disk2s2: /Volumes/Disaster/Backups.backupdb
    9/30/14 7:10:51.261 AM com.apple.backupd[289]: Will copy (145.9 MB) from Macintosh HD
    9/30/14 7:10:51.264 AM com.apple.backupd[289]: Will copy (22 KB) from Secondary HD
    9/30/14 7:10:51.265 AM com.apple.backupd[289]: Found 601 files (145.9 MB) needing backup
    9/30/14 7:10:51.268 AM com.apple.backupd[289]: 4.37 GB required (including padding), 1.69 TB available
    9/30/14 7:11:33.675 AM com.apple.backupd[289]: Copied 731 items (145.9 MB) from volume Macintosh HD. Linked 5147.
    9/30/14 7:11:36.918 AM com.apple.backupd[289]: Copied 15 items (801 KB) from volume Secondary HD. Linked 37.
    9/30/14 7:11:37.584 AM com.apple.backupd[289]: Will copy (24.4 MB) from Macintosh HD
    9/30/14 7:11:37.585 AM com.apple.backupd[289]: Not using file event preflight for Secondary HD
    9/30/14 7:11:37.587 AM com.apple.backupd[289]: Found 30 files (24.4 MB) needing backup
    9/30/14 7:11:37.588 AM com.apple.backupd[289]: 4.23 GB required (including padding), 1.69 TB available
    9/30/14 7:11:39.927 AM com.apple.backupd[289]: Copied 33 items (24.4 MB) from volume Macintosh HD. Linked 700.
    9/30/14 7:11:40.449 AM com.apple.backupd[289]: Copied 1 items (Zero KB) from volume Secondary HD. Linked 4.
    9/30/14 7:11:41.192 AM com.apple.backupd[289]: Created new backup: 2014-09-30-071141
    9/30/14 7:11:41.827 AM com.apple.backupd[289]: Starting post-backup thinning
    9/30/14 7:11:41.829 AM com.apple.backupd[289]: No post-backup thinning needed: no expired backups exist
    9/30/14 7:11:41.834 AM com.apple.backupd[289]: Backup completed successfully.
    9/30/14 7:09:28.000 AM kernel[0]: nspace-handler-set-snapshot-time: 1412075370
    9/30/14 7:09:28.546 AM com.apple.mtmd[62]: Set snapshot time: 2014-09-30 07:09:30 -0400 (current time: 2014-09-30 07:09:28 -0400)
    9/30/14 7:09:31.860 AM mds[63]: (Normal) Volume: volume:0x7ff521108a00 ********** Created snapshot backup index
    9/30/14 7:09:27.000 AM kernel[0]: hfs: mounted Disaster on device disk2s2
    9/30/14 7:09:28.000 AM kernel[0]: hfs: unmount initiated on Disaster on device disk2s2
    9/30/14 7:09:28.000 AM kernel[0]: nspace-handler-set-snapshot-time: 1412075370
    SO: Am I missing something?  I've tried everything short of wiping and restoring my primary SSD because...well, that's a pain and also because I'm afraid that the restoration process will simply restore whatever issue is causing this, leaving me with nothing gained. Thoughts?
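    (In case it helps whoever reads this, a few hedged Terminal diagnostics on Mavericks, using tmutil verbs from its man page:)
        tmutil destinationinfo          # is the backup destination even registered?
        tmutil startbackup --auto       # request a backup the way the scheduler does
        sudo tmutil disablelocal        # pause local snapshots while testing
        sudo tmutil enablelocal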

    When you replaced your MBP's original HD with the Samsung SSD, how did you originally populate its contents? In other words, did you begin with a new OS X installation, followed by using Setup Assistant and migrating  your information from an existing TM backup, or...?
    Edit: A persistent bug with this site prevented replies from appearing until I posted one. Since you resolved the problem by reinstalling OS X I'll only suggest you don't use OnyX or any other utility to "clean" anything or perform any other action for that matter. Although I understand you only used OnyX in an effort to address an existing problem, such utilities are only likely to add to or obscure the underlying cause.

  • Snapshots interfering with differential backups

    1- Set up full backups to happen Saturday nights.
    2- Set up differential backups to happen every night, except Saturdays.
    3- This is on a VM, and we take a snapshot of it every Wednesday night.
    4- SQL Server records the snapshot as a valid, full backup, although it is marked with is_snapshot = 1.
    5- All differential backups after the snapshot are rendered useless. They are based on the snapshot backup, not on the actual valid full backup done on Saturday.
    After some research I found KB article 951288 (http://support.microsoft.com/kb/951288). Apparently this is a known Microsoft issue.
    So, how do we tell SQL Server to ignore the snapshot backup when doing the differential backups? And no, we do not want to stop taking our snapshots on Wednesdays. Is there a solution or a workaround for this?
    It is really annoying and dangerous that SQL Server considers these snapshots valid, viable full backups on which to base its differential backups.
    Help much appreciated,
    Raphael
    rferreira

    Hi Raphael,
    According to the following document, virtualization snapshots for Hyper-V or for any virtualization vendor are not supported for use with SQL Server in a virtual machine. I want to confirm whether there is any other important application running on the VM machine. If not, we can turn off VM snapshots; if the VM machine fails, we just need to restore the database backup. For more detailed information, please refer to the following link:
    Support policy for Microsoft SQL Server products that are running in a hardware virtualization environment
    http://support.microsoft.com/?id=956893
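    For reference, the snapshot-originated entries can be listed straight from msdb (documented backupset columns; the server name is a placeholder):
        sqlcmd -S MYSERVER -E -Q "SELECT database_name, backup_start_date, is_snapshot FROM msdb.dbo.backupset WHERE is_snapshot = 1"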
    Allen Li
    TechNet Community Support

  • SQL backup best practice on VMs that are backed up as a complete VM

    hi,
    apologies, as I am sure this has been asked many times before, but I can't really find an answer to my question. So my situation is this: I have two types of backups, agent based and snap based.
    For the VMs that are being backed up by snapshots the process is: VMware does the snap, then the SAN takes a snap of the storage, and then the backup is taken from the SAN. We then have full VM backups.
    For the agent based backups, these are only backing up file level stuff, so we use this for our SQL cluster and some other servers. These are not snaps/full VM backups, but simply backups of databases and files etc.
    This works well, but there are a couple of servers that need to be in the full VM snap category and therefore can't have the backup agent installed, as the VM is already being backed up by the snap technology. So what would be the best practice on these snapped VMs that have SQL installed as well? Should I configure a recurring backup in SQL Management Studio (if this is possible??) which is done before the VM snap backup? Or is there another way I should be backing up the DBs?
    any suggestions would be very welcome.
    thanks
    aaron

    Hello Aaron,
    If I understand correctly, you perform a snapshot backup of the complete VM.
    In that case you also need to create a SQL Server backup schedule to perform Full and Transaction Log backups.
    (if you do a filelevel backup of the .mdf and .ldf files with an agent you also need to do this)
    I would run a database backup before the VM snapshot (to a SAN location if possible), then perform the Snapshot backup.
    You should set up the transaction log backups depending on business recovery needs.
    For instance: if your company accepts a maximum of 30 minutes of data loss, make sure to perform a transaction log backup every 30 minutes.
    In case of emergency you could revert to the VM Snapshot, restore the full database backup and restore transaction log backups till the point in time you need.
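    As a sketch of that schedule (server, database and the SAN share are placeholders; in production these would live in SQL Server Agent jobs rather than raw sqlcmd calls):
        REM full database backup, run before the nightly VM snapshot
        sqlcmd -S MYSERVER -E -Q "BACKUP DATABASE MyDb TO DISK='\\san\backups\MyDb_full.bak' WITH INIT"
        REM transaction log backup, scheduled every 30 minutes
        sqlcmd -S MYSERVER -E -Q "BACKUP LOG MyDb TO DISK='\\san\backups\MyDb_log.trn'"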

  • CSV disk signature and backup

    I would like to be able to take a snapshot backup of a CSV directly from my SAN and mount it back into the cluster in order to recover files.
    Right now, if I do this, the cluster detects the disk as a CSV (presumably a pre-existing CSV); I guess there is a signature on the disk identifying it as a particular cluster shared volume. This makes perfect sense.
    Is there some way of removing this signature/ID, to ostensibly allow me to have two copies of the same disk mounted at any one time? I.e. one as a CSV, which remains unaffected and in use, and one as a 'normal' disk which I can mount and assign a drive letter to?
    Cheers.

    Mount the disk on a standalone system and use diskpart to change the signature, then mount the disk with the changed signature on the cluster. (Note: the disk must be selected before uniqueid will work.)
    DISKPART> list disk
    DISKPART> select disk <n>
    DISKPART> uniqueid disk                      (shows the current signature)
    DISKPART> uniqueid disk ID=<new signature>
    . : | : . : | : . tim

  • What happens when a database is in backup mode?

    Hi,
    What happens when we put the database in backup mode, i.e. using the command 'ALTER DATABASE BEGIN BACKUP'?
    Thanks...
    Asit
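    In short (a sketch of the mechanics, independent of the replies below): BEGIN BACKUP checkpoints the datafiles and freezes their header checkpoint SCN; DML continues as normal, but the first change to each block is logged as a full block image to guard against fractured blocks, so redo volume increases until END BACKUP:
        SQL> ALTER DATABASE BEGIN BACKUP;
        -- copy the datafiles at OS/SAN level here
        SQL> ALTER DATABASE END BACKUP;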

    jgarry wrote:
    EdStevens wrote:
    jgarry wrote:
    What do you think of the snapshot backup on page 22 of [url http://en.community.dell.com/techcenter/storage/w/wiki/2638.oracle-11g-backup-and-recovery-using-rman-and-equallogic-snapshots-by-sis.aspx]this paper? (No sarcasm, I'm curious about these snap solutions in general. Though I am really down on Dell for what turned out to be a brain-damaged laptop I got for my wife.)
    Well, you can't really make a judgement about a company's enterprise products based on an experience with their consumer products.
    What if it came from what they call a business products catalog? They intermingle laptops with servers.
    Don't know, in general. I do know that for Dell there is a distinct difference between their decidedly consumer laptops (Inspiron) and their "business" Latitude series. I know that at my last job we had a lot of rock solid HP equipment (servers and SAN) - vs. an HP laptop I had that was trouble from Day One. I'm sure there is a point in desktops and laptops where the line can get blurred, but in the case of the OP, he was nowhere near that fuzzy line.
    Perhaps I can find time to read the white paper over the weekend ....

  • RMAN backup for EBS R12

    Hi,
    I have made a backup script for EBS R12 using RMAN with the following parameters:
    run {
         CONFIGURE CONTROLFILE AUTOBACKUP ON;
         CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/mnt/EVA-DISK/RMAN/data/%F';
         CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/mnt/EVA-DISK/RMAN/data/snapcf_PROD.f';
         backup format '/mnt/EVA-DISK/RMAN/data/%d_LVL0_%T_%u_s%s_p%p' database;
         delete obsolete;
         }
    and I have these 3 files as the backup output.
    [root@orafin data]# ls -l /mnt/EVA-DISK/RMAN/data
    total 21089360
    -rw-r-----  1 oraprod dba    31817728 Apr 11 18:10 c-94644369-20090411-00
    -rw-r-----  1 oraprod dba 21510840320 Apr 11 18:10 PROD_LVL0_20090411_04kc7jd6_s4_p1
    -rw-r-----  1 oraprod dba    31735808 Apr 11 18:10 snapcf_PROD.f
    Are these 3 files complete? What else do I need, or what am I missing, in order to recover my database in case of disaster?
    Thanks a lot

    Hi,
    You need to verify that yourself. Restore the RMAN backup and verify that your backup is valid (I believe you already have links to the RMAN documentation in some other thread).
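    For illustration, a first step could be RMAN's VALIDATE option, which reads the backup pieces without actually restoring anything (a hedged sketch; check the exact syntax for your RMAN release):
        run {
             restore database validate;
             restore controlfile validate;
             }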
    Regards,
    Hussein

  • How to Properly Protect a Virtualized Exchange Server - Log File Discontinuity When Performing Child Partition Snapshot

    I'm having problems backing up a Hyper-V virtualized Exchange 2007 server with DPM 2012. The guest has one VHD for the OS, and two pass-through volumes, one for logs and one for the databases. I have three protection groups:
    System State - protects only the system state of the mail server, runs at 4AM every morning
    Exchange Databases - protects the Exchange stores, 15 minute syncs with an express full at 6:30PM every day
    VM - Protecting the server hosting the Exchange VM. Does an child partition snapshot backup of the Exchange server guest with an express full at 9:30PM every day
    The problem I'm experiencing is that every time the VM express full completes I start receiving errors on the Exchange Database synchronizations stating that a log file discontinuity was detected. I did some poking around in the logs on the Exchange server
    and sure enough, it looks like the child partition snapshot backup is causing Exchange to truncate the log files even though the logs and databases are on pass-through disks and aren't covered by the child partition snapshot.
    What is the correct way to back up an entire virtualized Exchange server, system state, databases, OS drive and all?

    I just created a new protection group. I added "Backup Using Child Partition Snapshot\MailServer", short-term protection using disk, and automatically create the replica over the network immediately. This new protection group contains only the child partition
    snapshot backup. No Exchange backups of any kind.
    The replica creation begins. Soon after, the following events show up in the Application log:
    =================================
    Log Name:      Application
    Source:        MSExchangeIS
    Date:          10/23/2012 10:41:53 AM
    Event ID:      9818
    Task Category: Exchange VSS Writer
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      PLYMAIL.mcquay.com
    Description:
    Exchange VSS Writer (instance 7d26282d-5dec-4a73-bf1c-f55d5c1d1ac7) has been called for "CVssIExchWriter::OnPrepareSnapshot".
    =================================
    Log Name:      Application
    Source:        ESE
    Date:          10/23/2012 10:41:53 AM
    Event ID:      2005
    Task Category: ShadowCopy
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      PLYMAIL.mcquay.com
    Description:
    Information Store (3572) Shadow copy instance 2051 starting. This will be a Full shadow copy.
    =================================
    The events continue on, basically snapshotting all of Exchange. From the DPM side, the total amount of data transferred tells me that even though Exchange is truncating its logs, nothing is actually being sent to the DPM server. So this snapshot operation seems to be superfluous. ~30 minutes later, when my regularly scheduled Exchange job runs, it fails because of a log file discontinuity.
    So, in this case at least, a Hyper-V snapshot backup is definitely causing Exchange to truncate the log files. What can I look at to figure out why this is happening?
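    One hedged way to watch this from inside the guest while the host backup runs (standard VSS tooling, nothing DPM-specific) is to check the state and last error of the "Microsoft Exchange Writer" before, during and after the express full:
        vssadmin list writers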

  • Time Machine fails to back up home space.

    Summary:
    I've been having problems with Time Machine since I enabled it. For some reason it refuses to back up any file in my account (/Users/ritchiem/). I have spoken with an Apple 'Genius' and they were dumbfounded. So I ask here in the hope that someone has had this issue occur and knows what the cause is and how to fix it. I'm not really looking for work-arounds, though I will cover that later.
    *Problem Description:*
    A new 500GB external drive was erased and formatted HFS+ Journaled.
    TimeMachine was then opened and enabled on this drive.
    Back Up Now was selected and left to run, assuming it would take a while on the 500GB internal drive that had around 200-250 GB used.
    It ran successfully.
    When entering Time Machine, history shows the top level drive, and navigating down the Users folder shows only my 'test' user account. My user account 'ritchiem' is not listed.
    There was no error presented from Time Machine and the System Preferences panel reported it was successful.
    Console logs show that the backup was performed successfully.
    Steps already taken to address issue:
    I have verified/repaired permissions on my boot drive, a few files were changed but none in /Users/ritchiem
    I have deleted the Time Machine preferences: /Library/Preferences/com.apple.TimeMachine.plist
    I have tried a second external hard drive, in this case it was more interesting:
    It was reported as being too small to perform the backup, so I added some exclusions, such as my large iTunes library.
    This allowed Time Machine to proceed, but again there was no backup performed of any file in home space.
    This is where I thought I must be doing something wrong and went to the Apple Store; the genius tried all the usual suspects, permissions, preferences. But no success.
    It was only then that one of his colleagues picked up on discussion that it was only my user that was being skipped and suggested the risky move of renaming my user. He showed me the 'Advanced' panel in the Account System Preferences. (I never realised you could use a nice gui to do the renaming of a user in all the right Apple places.)
    So I swapped users and renamed my account, and moved my home space to a new user name. mv /Users/ritchiem /Users/martin
    I then logged back in to my account to verify the rename all worked and it had.
    _*Time Machine now worked!*_
    HOWEVER, that is a total hack. I use the username 'ritchiem' all across my local network and it was not recommended to have a user ritchiem but a home space in /Users/martin.
    I renamed my account back to 'ritchiem' and Time Machine again is blocking backups of /Users/ritchiem
    Somewhere on my drive there MUST be an exclusion that is preventing the first user that I created on my 2008 MacBook from being backed up. So far my grep-ing of the filesystem for 'ritchiem' has not shown any likely suspects. If anyone here knows where the mystic plist/file is that I need to zap then that would be great.
    If of course you have another work-around that does not require me to rename either my account or my home space then that would be great.
    Oh and I also tried to trick TimeMachine by creating a symbolic link in /Users to my home space (ln -s /Users/ritchiem /Users/martin). It resolves the link and refuses to backup the contents of my home space just as before.
    Please remember here that in all backups that Time Machine performed there was no error, console.log and the UI all report success.
    Now if I had a new blank machine and some time I could test out the theory that it is always the first user of a freshly installed machine. However, if other people have it working then it is perhaps not that simple.
    TIA
    Martin

    Thank you for all your advice and suggestions. Apologies for the delay; I had difficulty locating my original install DVDs.
    Summary
    I have performed all the steps and checked items as requested however backups are still not including my home space (/Users/ritchiem)
    What I do find interesting is the output of the TimeMachine backup log (at the very end of this VERY long post) which states that 84.8 GB should be backed up but only 24.3 GB is actually backed up.
    I would love to know why TM decided 60.5 GB of data should not be backed up, with no explanation in any of the log files and a successful backup reported.
    Before I go on to the details of what i've done, I should also mention in case it is important that I have of course been logged in with my user (ritchiem) whilst performing all of these backups. I assume that is normal and that you shouldn't have to log out for TM to actually backup your data. To verify this previously I have logged out user ritchiem and logged in as a test account. The path /Users/ritchiem still failed to backup. I have not tried this recently, however, I would be really surprised if you had to log out of your user account for the data to be backed up.
    Detail
    So to answer V.K. first: My drive did indeed have some items that it said I should repair. Hence the search for the install DVDs. I ran the Disk Utility from the Leopard installer and it repaired the drive. The verify now runs green.
    As I had just rebooted the machine and it was all fresh I repeated Pondini's previous steps as well.
    * Disabled TM
    * Exited System Preferences
    * Deleted /Library/Preferences/com.apple.TimeMachine.plist
    I then also deleted the Backups.backupdb from my external drive so everything was fresh and new. I would have just formatted it (which would have been waaay faster, but I have some copies as backups on there for now.)
    I started the TM prefs panel and added a few items to the disabled list to a) speed up the backup but also b) because I have a 320GB internal drive and only a 250GB external. My total disk usage is 288GB, which won't fit, especially with the breathing space TM likes to have.
    Here is the data from console.log from the first two backup runs:
    *The first automated run that occured whilst I was still disabling items*
    Jul 9 00:05:28 ramaII /System/Library/CoreServices/backupd[371]: Backup requested due to disk attach
    Jul 9 00:05:28 ramaII /System/Library/CoreServices/backupd[371]: Starting standard backup
    Jul 9 00:05:28 ramaII /System/Library/CoreServices/backupd[371]: Backing up to: /Volumes/Red-5/Backups.backupdb
    Jul 9 00:05:52 ramaII /System/Library/CoreServices/backupd[371]: Event store UUIDs don't match for volume: ramaII
    Jul 9 00:05:53 ramaII /System/Library/CoreServices/backupd[371]: Backup content size: 288.2 GB excluded items size: 0 bytes for volume ramaII
    Jul 9 00:05:53 ramaII /System/Library/CoreServices/backupd[371]: Starting pre-backup thinning: 347.15 GB requested (including padding), 134.18 GB available
    Jul 9 00:05:53 ramaII /System/Library/CoreServices/backupd[371]: No expired backups exist - deleting oldest backups to make room
    Jul 9 00:05:53 ramaII /System/Library/CoreServices/backupd[371]: Error: backup disk is full - all 0 possible backups were removed, but space is still needed.
    Jul 9 00:05:53 ramaII /System/Library/CoreServices/backupd[371]: Backup Failed: unable to free 347.15 GB needed space
    Jul 9 00:05:59 ramaII /System/Library/CoreServices/backupd[371]: Backup failed with error: Not enough available disk space on the target volume.
    *The second much smaller backup run*
    Jul 9 00:06:29 ramaII /System/Library/CoreServices/backupd[371]: Backup requested by user
    Jul 9 00:06:29 ramaII /System/Library/CoreServices/backupd[371]: Starting standard backup
    Jul 9 00:06:31 ramaII /System/Library/CoreServices/backupd[371]: Backing up to: /Volumes/Red-5/Backups.backupdb
    Jul 9 00:06:32 ramaII /System/Library/CoreServices/backupd[371]: Event store UUIDs don't match for volume: ramaII
    Jul 9 00:15:57 ramaII /System/Library/CoreServices/backupd[371]: Backup content size: 288.2 GB excluded items size: 203.4 GB for volume ramaII
    Jul 9 00:15:57 ramaII /System/Library/CoreServices/backupd[371]: No pre-backup thinning needed: 103.06 GB requested (including padding), 134.18 GB available
    Jul 9 01:10:18 ramaII /System/Library/CoreServices/backupd[371]: Copied 191916 files (24.3 GB) from volume ramaII.
    Jul 9 01:10:19 ramaII /System/Library/CoreServices/backupd[371]: No pre-backup thinning needed: 1.75 GB requested (including padding), 108.05 GB available
    Jul 9 01:10:25 ramaII /System/Library/CoreServices/backupd[371]: Copied 114 files (93 bytes) from volume ramaII.
    Jul 9 01:10:25 ramaII /System/Library/CoreServices/backupd[371]: Starting post-backup thinning
    Jul 9 01:10:25 ramaII /System/Library/CoreServices/backupd[371]: No post-back up thinning needed: no expired backups exist
    Jul 9 01:10:26 ramaII /System/Library/CoreServices/backupd[371]: Backup completed successfully.
    What I find interesting is that it only copied '191916 files (24.3 GB)' when it said it was going to back up '103.06 GB requested (including padding)'.
    As requested, here is the hidden .exclusions.plist file; I printed the contents using the Terminal rather than TinkerTool.
    .exclusions.plist
    ramaII:~ ritchiem$ sudo cat /Volumes/Red-5/Backups.backupdb/ramaII/2009-07-09-011025/.exclusions.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>sourcePaths</key>
    <array>
    <string>/</string>
    </array>
    <key>standardExclusionPaths</key>
    <array/>
    <key>systemFilesExcluded</key>
    <false/>
    <key>userExclusionPaths</key>
    <array>
    <string>/Users/ritchiem/Music</string>
    <string>/Users/ritchiem/dev</string>
    <string>/Applications</string>
    <string>/Users/ritchiem/Desktop</string>
    <string>/Users/ritchiem/Downloads</string>
    <string>/Users/ritchiem/Movies</string>
    <string>/Users/ritchiem/ritchiem.sparseimage</string>
    </array>
    </dict>
    </plist>
    This content mirrors what is in the /Library/Preferences/com.apple.TimeMachine.plist I resaved this to my desktop as a xml plist so I could print it for copying:
    /Library/Preferences/com.apple.TimeMachine.plist
    ramaII:~ ritchiem$ cat Desktop/com.apple.TimeMachine.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>AlwaysShowDeletedBackupsWarning</key>
    <true/>
    <key>AutoBackup</key>
    <true/>
    <key>BackupAlias</key>
    <data>-snipped-</data>
    <key>ExcludeByPath</key>
    <array>
    <string>/Users/ritchiem/Library/Calendars/Calendar Cache</string>
    <string>/Users/cristina/Library/Calendars/Calendar Cache</string>
    </array>
    <key>MaxSize</key>
    <integer>0</integer>
    <key>RequiresACPower</key>
    <false/>
    <key>SkipPaths</key>
    <array>
    <string>~ritchiem/Music</string>
    <string>~ritchiem/dev</string>
    <string>/Applications</string>
    <string>~ritchiem/Desktop</string>
    <string>~ritchiem/Downloads</string>
    <string>~ritchiem/Movies</string>
    <string>~ritchiem/ritchiem.sparseimage</string>
    </array>
    <key>SkipSystemFiles</key>
    <false/>
    </dict>
    </plist>
    *Other Steps*
    Whilst running TM I also set up dtrace to log every file open: (http://www.macosxhints.com/article.php?story=20071031121823710)
    Looking for any files in my home space that it might have opened returned none:
    ramaII:~ ritchiem$ grep "/Users/ritchiem" tm.dtrace |grep "entry backupd" |wc -l
    0
    Yet looking for all the files backed up from the /Users directory there were 458!
    ramaII:~ ritchiem$ grep "/Users/" tm.dtrace |grep "entry backupd" |wc -l
    458
    Dtrace also highlights the files that backupd uses; unfortunately the only files it opened before it started its backup were:
    0 17720 open:entry backupd /Library/Preferences/com.apple.TimeMachine.plist
    0 17720 open:entry backupd /var/db/.TimeMachine.Cookie
    0 18506 open_nocancel:entry backupd /System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions.plist
    The only file on that list I haven't seen mentioned before is StdExclusions, but I'm sure that is the same on everyone's machine... just in case though:
    ramaII:~ ritchiem$ cat /System/Library/CoreServices/backupd.bundle/Contents/Resources/StdExclusions.plist
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <!-- paths we do not want to include in a system backup -->
    <key>PathsExcluded</key>
    <array>
    <string>/.Spotlight-V100</string>
    <string>/.Trashes</string>
    <string>/.fseventsd</string>
    <string>/.hotfiles.btree</string>
    <string>/Backups.backupdb</string>
    <string>/Desktop DB</string>
    <string>/Desktop DF</string>
    <string>/Network/Servers</string>
    <string>/Previous Systems</string>
    <string>/Users/Shared/SC Info</string>
    <string>/Users/Guest</string>
    <string>/dev</string>
    <string>/home</string>
    <string>/net</string>
    <string>/private/var/db/Spotlight</string> <!-- old tiger location of the Spotlight db -->
    <string>/private/var/db/Spotlight-V100</string> <!-- old tiger location of the Spotlight db -->
    </array>
    <!-- paths we need to include in backup so we can restore disk structure, but don't want to backup contents -->
    <key>ContentsExcluded</key>
    <array>
    <string>/Volumes</string>
    <string>/Network</string>
    <string>/automount</string>
    <string>/.vol</string>
    <string>/tmp</string>
    <string>/cores</string>
    <string>/private/tmp</string>
    <string>/private/Network</string>
    <string>/private/tftpboot</string>
    <string>/private/var/automount</string>
    <string>/private/var/log</string>
    <string>/private/var/folders</string>
    <string>/private/var/log/apache2</string>
    <string>/private/var/log/cups</string>
    <string>/private/var/log/fax</string>
    <string>/private/var/log/ppp</string>
    <string>/private/var/log/sa</string>
    <string>/private/var/log/samba</string>
    <string>/private/var/log/uucp</string>
    <string>/private/var/run</string>
    <string>/private/var/spool</string>
    <string>/private/var/tmp</string>
    <string>/private/var/vm</string>
    <string>/private/var/db/dhcpclient</string>
    <string>/private/var/db/fseventsd</string>
    <string>/Library/Caches</string>
    <string>/Library/Logs</string>
    <string>/System/Library/Caches</string>
    <string>/System/Library/Extensions/Caches</string>
    </array>
    <!-- standard user paths we want to skip for each user (subpath relative to root of home directory) -->
    <key>UserPathsExcluded</key>
    <array>
    <string>Library/Application Support/SyncServices</string>
    <string>Library/Caches</string>
    <string>Library/Logs</string>
    <string>Library/Mail/Envelope Index</string>
    <string>Library/Mail/AvailableFeeds</string>
    <string>Library/Mirrors</string>
    <string>Library/PubSub/Database</string>
    <string>Library/PubSub/Downloads</string>
    <string>Library/PubSub/Feeds</string>
    <string>Library/Safari/Icons.db</string>
    <string>Library/Safari/HistoryIndex.sk</string>
    </array>
    </dict>
    </plist>
    Finally, here is the backup log that was generated, in case you spot something I haven't.
    Backup.log
    ramaII:~ ritchiem$ sudo cat /Volumes/Red-5/Backups.backupdb/ramaII/2009-07-09-011025/.Backup.log
    2009-07-09-00:06:32 - Starting backup
    Previous snapshot:
    None
    Will traverse "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    === Starting backup loop #1 ===
    Will use FirstBackupCopier
    Running preflight for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Excluding /Users/ritchiem/Music: 36.1 GB (6083 items)
    Excluding /Users/ritchiem/dev: 20.8 GB (488641 items)
    Excluding /Applications: 9.6 GB (15070 items)
    Excluding /Users/ritchiem/Desktop: 26.3 GB (2444 items)
    Excluding /Users/ritchiem/Downloads: 3.8 GB (1127 items)
    Excluding /Users/ritchiem/Movies: 79.6 GB (167 items)
    Excluding /Users/ritchiem/ritchiem.sparseimage: 27.2 GB (1 items)
    Should copy 2853633 items (84.8 GB) representing 22223169 blocks of size 4096. 35174679 blocks available.
    Preflight complete for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Time elapsed: 9 minutes, 25.000 seconds
    Processing preflight info
    Space needed for this backup: 103.1 GB (27016262 blocks of size 4096)
    Finished processing preflight info
    Copying items from "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Finished copying items for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Time elapsed: 54 minutes, 20.000 seconds
    Copied 191916 items (24.3 GB)
    Gathering events since 390504228.
    Needs new backup due to change in /Library/Preferences
    === Starting backup loop #2 ===
    Will use IncrementalBackupCopier
    Running preflight for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Calculating size of changes
    Should copy 4 items (0 bytes) representing 0 blocks of size 4096. 28325806 blocks available.
    Preflight complete for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Time elapsed: 1.243 seconds
    Processing preflight info
    Space needed for this backup: 1.8 GB (459904 blocks of size 4096)
    Preserving last snapshot /Volumes/Red-5/Backups.backupdb/ramaII/2009-07-09-000552.inProgress/79ECBD8C-12BE-4067-A202-B7033190FD82
    Finished processing preflight info
    Copying items from "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Finished copying items for "ramaII" (mount: '/' fsUUID: 3E4CD731-945C-3BA3-85D5-67B401082442 eventDBUUID: FDD41B79-8CEF-4F57-810C-B1C56476640B)
    Time elapsed: 6.147 seconds
    Copied 114 items (93 bytes)
    Gathering events since 406201626.
    Finalizing completed snapshot
    Finished finalizing completed snapshot
    Backup complete.
    Total time elapsed: 1 hour, 3 minutes, 54.000 seconds
    *Next Steps*
    I've been thinking about what I could try next and not having many ideas. So I'm really hoping someone here will have a flash of inspiration.
    If I find some time at the weekend then I will try renaming my account again and verifying that it does backup under a new name.
    Just in case there is something odd about the first user account created, i.e. UID 501, I will make a new 'ritchiem' account (which will have the next UID, 504) and see if that gets backed up; I can try this while my current account is renamed. If that works then I can copy my files back into that account and verify it isn't something in my home directory that is upsetting TM. Although I doubt this is the case, as it never read any files from home space during the backup process.
    Here's hoping there is something in the above logs etc, that give a clue to what might be going wrong.
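    One more place an exclusion can hide, which no plist will show: the sticky per-item exclusion attribute. A hedged check I still plan to run:
        sudo mdfind "com_apple_backup_excludeItem = 'com.apple.backupd'"
        xattr /Users/ritchiem     (look for com.apple.metadata:com_apple_backup_excludeItem)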
    Cheers
    Martin

  • Using Sync as a one way backup?

    I have a user with a client Mac (MacBook Pro). The MBP actually remains on our network at all times and I want to back up both her Documents and Desktop folders every time she logs out; no need to back up anything else. She has a home folder on our Active Directory controlled file servers. I can manage her AD account via Workgroup Manager because our schema has been extended.
    Is there a way to set up a Home Sync so that it acts as a one-way (client to server) backup each time she logs out? I can't use background sync, mainly because of the issues with the MS database files from Office 2008. I don't actually want to "sync" in this case, just create a snapshot backup of the 2 folders from her Mac each time she logs out. No history of file versions is required.
    Any info would be greatly appreciated. Thanks!

    You may not find step-by-step instructions for making it a one-way sync, but check the details pane in WGM to add keys to the HomeSync prefs; that may get you close.
    also, check here:
    http://images.apple.com/business/solutions/it/docs/BestPractices_ClientMgmt.pdf
    and on http://afp548.com
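    If HomeSync won't bend that way, a hedged alternative is a logout hook that pushes the two folders one way with rsync (username, server volume and script path are placeholders; the hook is registered via com.apple.loginwindow):
        #!/bin/sh
        # /usr/local/bin/backup-home.sh -- register with:
        #   sudo defaults write com.apple.loginwindow LogoutHook /usr/local/bin/backup-home.sh
        rsync -a /Users/jane/Documents/ /Volumes/jane/Documents/
        rsync -a /Users/jane/Desktop/   /Volumes/jane/Desktop/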

  • Qla2310f not shown in cfgadm/luxadm (SAN 4.4.11)

    I have a problem with two QLogic qla2310f HBAs (not Sun branded) that are not detected by luxadm/cfgadm.
    The box is a SUN Fire 280R running solaris 8:
    SunOS [hostname] 5.8 Generic_117350-27 sun4u sparc SUNW,Sun-Fire-280R
    Originally it was connected to an older HP EVA SAN using HP SecurePath 3.0d for multipathing.
    When changing to the new EVA8000 SAN, I did the following:
    1. Unmounted the SAN disks and removed them from Solstice.
    2. Deleted them in spmgr (HP SecurePath).
    3. Unpresented the disks.
    4. Uninstalled HP SecurePath.
    5. Presented disks from the new SAN (EVA8000).
    6. Installed SAN 4.4.11 and required patches, using the "install_it" script.
    7. Modified /kernel/drv/scsi_vhci.conf to enable MPxIO globally and add HP-specific symmetric options for the EVA8000 SAN.
    After the reconfiguration reboot I see the disks - however, I can't see the two qla2310f HBAs in luxadm/cfgadm.
    These utilities only see the internal controller and the two internal disks:
    Luxadm shows:
    luxadm probe
    No Network Array enclosures found in /dev/es
    Found Fibre Channel device(s):
    Node WWN:20000004cfbff6be Device Type:Disk device
    Logical Path:/dev/rdsk/c1t0d0s2
    Node WWN:2000000c500739e5 Device Type:Disk device
    Logical Path:/dev/rdsk/c1t1d0s2
    luxadm fcode_download -p
    Found Path to 0 FC/S Cards
    Complete
    Found Path to 0 FC100/S Cards
    Complete
    Found Path to 1 FC100/P, ISP2200, ISP23xx Devices
    Opening Device: /devices/pci@8,600000/SUNW,qlc@4/fp@0,0:devctl
    Detected FCode Version: ISP2200 FC-AL Host Adapter Driver: 1.15 04/03/22
    Complete
    Found Path to 0 JNI1560 Devices.
    Complete
    Found Path to 0 Emulex Devices.
    Complete
    cfgadm shows:
    cfgadm -l
    Ap_Id Type Receptacle Occupant Condition
    c0 scsi-bus connected configured unknown
    c1 fc-private connected configured unknown
    c2 scsi-bus connected unconfigured unknown
    c3 scsi-bus connected unconfigured unknown
    c4 scsi-bus connected unconfigured unknown
    cfgadm -al
    Ap_Id Type Receptacle Occupant Condition
    c0 scsi-bus connected configured unknown
    c0::dsk/c0t6d0 CD-ROM connected configured unknown
    c1 fc-private connected configured unknown
    c1::21000004cfbff6be disk connected configured unknown
    c1::2100000c500739e5 disk connected configured unknown
    c2 scsi-bus connected unconfigured unknown
    c3 scsi-bus connected unconfigured unknown
    c4 scsi-bus connected unconfigured unknown
    Qlogics SANSurfer CLI shows all three adapters:
    SANsurfer FC HBA CLI
    v1.06.16 Build 57
    Device List Menu
    HBA Model QLA2300/2310:
    1: Port 1 (OS 0): WWPN: 21-00-00-E0-8B-0D-AE-BF Online
    HBA Model QLA2300/2310:
    2: Port 1 (OS 1): WWPN: 21-00-00-E0-8B-0D-B6-BF Online
    HBA Model qlc:
    3: Port 1 (OS 0): WWPN: 21-00-00-03-BA-2F-AA-EF Online
    4: All HBAs
    cat /etc/path_to_inst |grep qla2300
    "/pci@8,700000/QLGC,qla@3" 0 "qla2300"
    "/pci@8,600000/QLGC,qla@1" 1 "qla2300"
    The adapters are recognized at boot as well (and probe-scsi-all sees them, as well as the SAN disks):
    Oct 22 19:15:43 [hostname] qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v5.01 Instance: 0
    Oct 22 19:15:43 [hostname] qla2300: [ID 572349 kern.notice] hba0: QLogic QLA2300 Fibre Channel Host Adapter fcode version 2.00.05 01/29/03
    Oct 22 19:15:50 [hostname] qla2300: [ID 818750 kern.notice] QLogic Fibre Channel Driver v5.01 Instance: 1
    Oct 22 19:15:50 [hostname] qla2300: [ID 572349 kern.notice] hba1: QLogic QLA2300 Fibre Channel Host Adapter fcode version 2.00.05 01/29/03
    I currently use the disks "directly" without using multipathing:
    ls -la /dev/dsk/c5t7d5s6
    lrwxrwxrwx 1 root root 46 Oct 22 13:04 /dev/dsk/c5t7d5s6 -> ../../devices/pci@8,700000/QLGC,qla@3/sd@7,5:g

    I need luxadm/cfgadm to "see" the two qla2310f adapters in order to use multipathing - any ideas?
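    In case it helps: one approach sometimes suggested for non-Sun QLogic cards is rebinding them from the vendor qla2300 driver to Sun's Leadville qlc driver, since luxadm/cfgadm/MPxIO only talk to qlc. A hedged sketch (the exact PCI alias must be taken from prtconf -pv on the box, and the card's FCode may need updating too):
        update_drv -d -i '"pci1077,2300"' qla2300    # detach the vendor driver
        update_drv -a -i '"pci1077,2300"' qlc        # bind the Sun qlc driver
        reboot -- -r                                 # reconfiguration reboot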

    Same here:
    V240 and X4200, QLF2340, Solaris 9 and 10 plus latest 9 recommended patch cluster, connected to McData and HDS.
    Seems like a problem with luxadm and cfgadm, as on V480 with QLC2340 all works well...
    Any new patches/fixes?
    Cheers.

  • Can I create a VM snapshot in Azure with an out-of-box cmdlet? Or do I need to use strange third-party scripts?

    Hi!
    I see that I can install MS Azure PowerShell, which allows me, for example, to create a VM. But what about creating/restoring a snapshot of this VM? In the officially described functions I don't see anything about it.
    There are some methods, but they look like third-party; has anybody used them to take/restore snapshots of VMs in Azure?
    Any links or RTFM appreciated, I am a newbie in this area.
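    (For what it's worth, with the classic Azure module of that era there was no out-of-box snapshot cmdlet; a hedged approximation is to export the VM's configuration and copy its VHD blob - service/VM names and paths below are placeholders:)
        Export-AzureVM -ServiceName "mysvc" -Name "myvm" -Path "C:\vm\myvm.xml"
        # copy the underlying .vhd blob as the "snapshot" (e.g. Start-AzureStorageBlobCopy)
        Import-AzureVM -Path "C:\vm\myvm.xml" | New-AzureVM -ServiceName "mysvc"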
    Best regards,
    Gennady
    SharePoint developer, http://rockietm.wordpress.com, http://demo.arsenal-it.com

    Thank You, Anders!
    It is not a snapshot but a backup - still, it works, thank you very much!
    Best regards,
    Gennady
    SharePoint developer, http://rockietm.wordpress.com, http://demo.arsenal-it.com

  • System hangs running Server Backup - Windows Server 2012 R2 Essentials

    I ran through the Backup Wizard and selected the system, OS, and my data drives to back up; total output is probably about 400-500GB of data to be snapshotted and backed up. It's set to automatically back up daily at 9:00pm, but upon the first backup it just sits there. I figured I'd let it ride until the next day, since I've seen backups take a couple of hours before.
    Well, it's more than 12 hours since it started the process, and I cannot access my server. It doesn't wake the screen, Ctrl-Alt-Del doesn't do anything, and I can't remote in with RDP.
    Any advice? I haven't shut it down or forced a reboot yet. 

    The server hangs when the backup starts, and I cannot see any error message other than the popup after I hard reboot, complaining about the unscheduled shutdown. There is also an error log in the Windows\Logs\WindowsServerBackup directory. The log files are empty, though. They are named according to date.
    e.g. "Backup_Error-07-01-2014_02-00-06"
    I have rebooted the system and the same thing happens when I try to back up again. I do have data on the backup HDD, as I have since disabled Server Backup. After disabling Server Backup, the designated drive is now mounted and I can see the log file created in the only backup folder on it.
    "Backup of volume C: has failed. The operation failed due to a device error encountered with either the source or the destination. If the source or destination volume is on a disk, run CHKDSK /R on the source or destination volume, and
    then retry the operation."
    "Backup of volume D: has failed. Server execution failed"
    All drives are new, and CHKDSK /R doesn't resolve anything in terms of allowing the backup to proceed.
    I should also note that C: is a 512GB SSD with only 36GB of data on it, including the OS. D: is a 2TB HDD with only 18GB of data on it. SQL Server is active during the backup process, but even ending the SQL processes doesn't change the backup outcome.
    Server Backup ends up hanging regardless of everything I've tried.
    I imagine Server Backup would kill any processes that may prevent it from accessing data to snapshot/backup, no?
    I will add the backup HDD does have 3 files in the F:\WindowsImageBackup\NESServer\Backup 2014-01-07 020006 directory.
    @ 9:02pm 1/6/2014 (9:00pm was the scheduled start), a 2,550,784KB file
    @ 12:45am 1/7/2014, a 11,264KB file
    @ 10:49am 1/7/2014, a 272,384KB file
    All three of those files were created from the same backup process. However, checking the Server Manager/Server Backup Tool, that drive doesn't qualify as containing any backup files. "The backup location does not contain any backup. Specify another
    backup location."
