AlwaysOn AG issue during backup

Hi everyone,
I have a strange issue on our multi-instance SQL Server 2012 Enterprise AlwaysOn cluster. I'm considering opening a support case with Microsoft, but I'd like to share the issue with the community first.
The SQL Servers are dedicated to hosting SharePoint 2010 farm content databases.
Scenario
2 VMs (ESX 5.1 hypervisor) running Windows Server 2008 R2; each VM has 2 virtual disks and 4 raw-mapped disks
each VM has the MSFC role installed; the quorum model is "file share majority"; no shared disks are mapped to clustered services
each VM hosts a SQL Server 2012 Enterprise installation with 3 named instances (Development, Testing, Production)
there are 3 Availability Groups configured, one per named instance
The raw-mapped disks are configured as follows: raw1 (500 GB) for SQL data; raw2 (500 GB) for t-log; raw3 and raw4 are dynamic disks configured as a spanned volume (500+500 GB) used as the SQL backup repository.
Issue
Note: this issue appeared only recently, while the entire environment has been in production for a year.
Our full backup job includes backup CHECKSUM and a VERIFY step.
For the last couple of weeks, the VERIFY on one particular content database (CrawlStoreDB, around 80 GB) has caused the availability group to go into an unhealthy state (RESOLVING).
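In backup-job terms the pattern is presumably a checksummed backup followed by a verify pass; a minimal sketch, assuming a local disk target with an illustrative path and options (not taken from the post):

```sql
-- Hypothetical sketch of the job's pattern: back up with checksums,
-- then verify the backup file. Path and WITH options are placeholder
-- assumptions, not the poster's actual job definition.
BACKUP DATABASE [CrawlStoreDB]
TO DISK = N'Z:\Backup\CrawlStoreDB.bak'
WITH CHECKSUM, INIT;

RESTORE VERIFYONLY
FROM DISK = N'Z:\Backup\CrawlStoreDB.bak'
WITH CHECKSUM;
```

It is this second, VERIFYONLY-style pass that coincides with the AG dropping into RESOLVING.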
From the CLUSTER DIAGNOSTIC extended event log I see these entries:
info_message    2013-05-27 15:44:42.4845706    [hadrag] SQLFetch() returns -1 with following information 
info_message    2013-05-27 15:44:42.4845706    [hadrag] ODBC Error: [42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Could not serialize the data for node 'filePath' because it contains a character (0x0000) which is not allowed in XML. To retrieve this data convert it to binary, varbinary or image data type (6842)
info_message    2013-05-27 15:44:42.4845706    [hadrag] No more diagnostics results
info_message    2013-05-27 15:44:42.4845706    [hadrag] Discard the pending result sets  
info_message    2013-05-27 15:44:42.4845706    [hadrag] ODBC Error: [24000] [Microsoft][SQL Server Native Client 11.0]Invalid cursor state (0)  
From the AlwaysOn health event file:
availability_replica_state_change    2013-05-27 15:45:02.6615881    PRIMARY_NORMAL    RESOLVING_NORMAL    CABD99D4-8591-4B37-9160-6738DAE9C851
From the Failover Cluster Manager events:
Cluster resource 'AGPROD' in clustered service or application 'AGPROD' failed.
The Cluster service failed to bring clustered service or application 'AGPROD' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
This situation keeps looping for an hour unless I force a manual failover in SSMS; during that hour the availability group is unreachable (RESOLVING).
If I disable the VERIFY step in the backup job, the issue does not occur.
Any ideas on how to pinpoint the problem?

The cluster is running a query that fails with SQL error 6842: "Could not serialize the data for node 'filePath' because it contains a character (0x0000) which is not allowed in XML. To retrieve this data convert it to binary, varbinary or image data type."
My guess is that the cluster is running the SQL instance health-check procedure sp_server_diagnostics, which returns a VARCHAR(MAX) column filled with XML, and that some diagnostic data contains a 'filePath' node with an embedded null (0x0000), which causes the XML serialization error. As to why it happens during backup verification: the diagnostics probably return information about the backup process, and one of the files involved has the embedded null in its path.
You can try running this to see if it reproduces the issue:

declare @t table
(
    create_time datetime,
    component_type varchar(200),
    component_name varchar(200),
    [state] int,
    state_desc varchar(20),
    data xml
);

insert into @t
exec sp_server_diagnostics;

select * from @t;
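Since the failing node is 'filePath', one speculative follow-up (my suggestion, not David's) is to check whether an embedded CHAR(0) has made it into the backup device paths recorded in msdb's backup history. The table and column below are standard, but whether the offending path actually lands there is an assumption:

```sql
-- Speculative check: look for an embedded NUL (CHAR(0)) in recorded
-- backup device paths; CHAR(0) is the 0x0000 character that cannot be
-- serialized into XML.
SELECT bmf.media_set_id,
       bmf.physical_device_name
FROM msdb.dbo.backupmediafamily AS bmf
WHERE CHARINDEX(CHAR(0), bmf.physical_device_name) > 0;
```

A non-empty result would identify the specific device path carrying the 0x0000 character.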
But this is probably a case for support.
David http://blogs.msdn.com/b/dbrowne/

Similar Messages

  • My 4th gen ipod won't backup after it froze during a software update to 4.3.5. I had to cancel the backup at the start of the update and now it won't let me backup to my computer, freezes during backup or says my backup is corrupted.

    I pressed 'X' during a backup after it froze during an update to software 4.3.5 and now it won't let me backup on my computer. Does anyone have any ideas? It either freezes during backup or says my 'backup is corrupted' or 'I cannot backup on my computer'. I have had to abort quite a few attempted backups since the initial problem, which I think has made it worse. I have already tried restoring my ipod, deleting the aborted backup from my preferences/devices in itunes and deleting it from my user/appdata/etc on my PC hard drive. Any ideas what to try next?

    After hours of frustration I eventually decided to restore. I lost my apps, but i didn't have many and was able to reinstall them for free. During the restore it updated my ipod to the latest software 4.3.5, which i think may have caused the initial problem, but the backup issue remains. Sometimes it stops me from syncing and other times it allows me to skip the backup, but still says that i can't backup on my computer. I'm worried that it has done something to my computer as it took me six hours to do a Bullguard scan, which only normally takes about an hour. The scan didn't come up with any viruses, but it stalled at system32 for ages. The only other thing that i can think of is that a demo CD I uploaded may have caused a problem. iTunes couldn't find a track listing for it but gave me a few different options. i thought I had to choose one to continue, so did that then changed the track details to the correct tracks. I'm now worried that may have introduced an issue or caused a glitch in iTunes. How paranoid am I sounding?  

  • Itunes crashed during backup and I can't get help from apple support on the

    website.
    Stupid Itunes crashed on disk 41 of 46 when backing up.
    There is no help available on the apple website and no way to request help on the apple website for any issue with Itunes.
    How do I fix this problem?
    How do I tell Itunes which disks it backup up so it won't try backing them up all over again. Did it destroy a list of what was backed up? What's going on with itunes anyways? Does it always crash when it backs up this many cds? I don't want to start all over again and ruin another 41 CDR disks trying to back it up. How do I get it to continue from where it crashed or do whatever has to be done to get it to resume where it crashed during backup?
    Stupid program wouldn't let me do anything after it crashed. said it will take a minute or two (more like forever) to cancel backup. Don't even know why it was trying to cancel the backup. I didn't tell it to.

    Apparently there are some bugs in the latest itunes. DO NOT USE ITUNES TO BACKUP YOUR LIBRARY! UNLESS IT COMPLETES THE BACKUP, ITUNES IS NOT SMART ENOUGH TO FIGURE OUT WHICH FILES WERE BACKED UP. TECH SUPPORT DOES NOT KNOW HOW TO MAKE ITUNES KNOW WHAT FILES WERE BACKED UP SO IT CAN CONTINUE FROM WHERE IT LEFT OFF. AFTER 41 CDS WORTH OF BACKUPS, ITUNES CRASHED AND IT TRIED TO CANCEL THE BACKUP BUT NEVER FINISHED DOING THAT.
    SO AFTER 60 CDS WORTH AND ABOUT 6 HOURS OF WASTED TIME, THE ONLY WAY IS TO BACKUP THE ENTIRE LIBRARY ALL OVER AGAIN WHICH IS A TOTAL WASTE OF MONEY, TIME, AND CDS WITH NO GUARANTEE THAT IT WILL WORK ANY BETTER THE SECOND TIME AROUND.
    EVEN TRYING TO RESTORE FROM CDS DOESN'T TELL ITUNES TO UPDATE ITS LIST OF BACKED UP FILES. I'M SURPRISED THAT SUCH A BIG BLUNDER WITH ITUNES HAS GONE UNNOTICED.
    ITS A SERIOUS ENOUGH ISSUE THAT SOMETHING NEEDS TO BE DONE ABOUT IT. I BACKED UP MY ITUNES TO HD AND THERE WERE NO PROBLEMS SO OBVIOUSLY ITS AN ITUNES ISSUE.
    NEEDLESS TO SAY, ITS A VERY ANNOYING PROBLEM TO PUT SO MUCH TIME AND EFFORT INTO GETTING NOWHERE.

  • Disk errors during backup

    Hi, I'm baaaaaaack!
    I have a new (replacement) Netware 6.5 SP8 server running Groupwise,
    with 4 RAID'ed 450GB 10K SAS drives giving me about 1.2TB of space. In
    keeping things arranged close to the way they were before I arrived, for
    now at least, I have a "DATA_POOL" which holds "SYS2" volume. During
    nightly backups, since the new server went in about 12 days ago, I'm
    getting backups failing, and eventually the SYS2 volume dismounted last
    weekend during the full backup.
    I tried doing a poolrebuild even though poolverify showed no errors, per
    Symantec & Novell documentation, and that didn't suffice. Had to do the
    "purge" rebuild, and then was able to re-mount the volume. I wasn't
    sure if the error was somehow related to the server migration, so I
    tried the same full backup again and got the same nasty results.
    Rebuilt again, and made some changes to the OFMCDM.CFG file as
    recommended in some Symantec documentation, and the backups have been
    succeeding.
    However, the logs show lots of entries like the following, still, always
    during the backups:
    3-08-2011 9:43:15 pm: COMN-3.26-178
    Severity = 5 Locus = 3 Class = 0
    NSS-2.70-5009: Pool AIRP_COMM/DATA_POOL_SP had an error
    (20012(beastTree.c[510])) at block 129200397(file block
    -129200397)(ZID 1).
    3-08-2011 9:43:15 pm: COMN-3.26-180
    Severity = 5 Locus = 3 Class = 0
    NSS-2.70-5008: Volume AIRP_COMM/SYS2_SV had an error
    (20012(beastTree.c[510])) at block 129200397(file block
    -129200397)(ZID 1).
    It's not clear to me if this means it *has* to be a disk drive error.
    New disks would seem to increase the likelihood of physical disk
    failure, but it only happens during the Backup Exec 9.2 backups of this
    volume, and never on the SYS volume during those same backups.
    The Groupwise total database is about 100GB, and the total volume space
    is about 800GB, if that matters. (This server is built to GROW.)
    Suggestions? Does this clearly point to a hardware (disk) error even
    though it only happens during backups, and the symptoms changed when I
    made the OFM configuration changes?
    The changes I made to OFMCDM.CFG were:
    InitialCacheSize = 0x0000000000400000 (was 200000)
    CacheSizeExpandThreshold = 0x30 (was 0x1e)
    The old server didn't have any problems backing up, and everything was
    brought over intact/identical in configuration, even though the above
    changes did seem to help somewhat. I'm posting here because this seems
    more of a "storage" (disk) issue than really a backup issue, despite the
    failures occurring during backups.
    Any suggestions, or clarification about the above cause & possible fixes?
    Thanks.
    -- DE

    Thanks, but I don't know of any way to back up Groupwise without using
    OFM; you just can't get the data if you do that because so many
    important files are open. And I do agree that it's OFM if it is not a
    hardware issue.
    Your entries under "device" look like the defaults, which I started
    with, and yet there are a bunch of references on the Symantec (Backup
    Exec) site which suggest changing those. But I will want to compare
    your entries with mine, higher in the file, to see if there is any
    particular setting that is non-default and might address the issue.
    I do have slots for a spare drive on this new server. Currently, the
    cache is directed to the very same volume that is having issues,
    although it worked fine that way in the past on the old server. But I
    can easily get another drive if that will resolve this.
    Thanks.
    -- DE
    [email protected] wrote:
    > Besides other possibilities I would at first take the OFM
    > out the game and run backups without it.
    >
    > In case that does the trick and you need to backup open file
    > - as we did - I would try the following (here we run
    > NW6.0-SP5 with BackupExec 9.1 and OFM from St. Bernard,
    > which comes with BE):
    >
    > Place a single spare disk in the server with a seperate pool
    > and volume and direct the open-file-cache to this device. In
    > our case it is directed to TMP and the ofmcdm.cfg is the
    > following:
    >
    > [General]
    > BreakOnAssert = 0x1
    > BreakOnException = 0x1
    > MinServerMemoryPercent = 0x6
    > StatusPrintLevel = 0x0700000000000000
    > ThreadsPerQueue = 0x3
    > QueuesPerPool = 0x1
    > WriteInactivityPeriod = 0x2
    > SyncTimeOut = 0x3c
    > CacheLFS = 0x0
    > CacheBlockSize = 0x4000
    > GetHamSetStateTimeout = 0x3c
    > LFSCachePoolName =
    > BackupCachePoolName =
    > BreakOnCluster = 0x0
    > VerboseMessageLogging = 0x0
    > SyncLFSVolumes = 0x0
    > CacheToLFSVolumes = 0x0
    > AutoMountVolumes = 0x0
    > LogFileBufferSize = 0x10000
    > LogFileNumBuffers = 0x4
    > LogFileWriteFrequency = 0xf
    > DebugDumpOnSyncFail = 0x0
    > LFSCacheLocation =
    > BackupCacheLocation =
    >
    > [Advanced]
    > SmallEmergencyMemBlockSize = 0x400
    > SmallEmergencyMemBlockCount = 0x32
    > MediumEmergencyMemBlockSize = 0x2800
    > MediumEmergencyMemBlockCount = 0xa
    > LargeEmergencyMemBlockSize = 0x19000
    > LargeEmergencyMemBlockCount = 0x2
    >
    > [Device SYS]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x29
    >
    > [Device BAK]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x29
    >
    > [Device TMP]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x79
    >
    > [Device GWX]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x29
    >
    > [Device IVM]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x29
    >
    > [Device ZFD]
    > MinFreeSpace = 0x0000000000400000
    > InitialCacheSize = 0x0000000000200000
    > CacheSizeExpandThreshold = 0x1e
    > CacheSizeFailThreshold = 0x5
    > Status = 0x29
    >
    >
    >
    > I do not remember all the other tweaking we did with the
    > BE-Support, but this finally did the trick and helped us to
    > run the server without any problems.
    >
    > Sincerely
    >
    > Karl

  • Macbook Pro freezes during Backup

    Hi Folks,
    I have just upgraded my previous TC 500 GB to a 2 TB version and have since been having a serious problem:
    My Macbook freezes (colorful ring of death) completely during backups (only the first backup went through smoothly, all other did not).
    The following applies:
    1) My wife's Macbook does not have these problems during backup on the same time-capsule
2) My relevant "sparsebundle" file is about 310 GB
3) It does NOT make a difference whether I'm connected by wifi or by ethernet -> the mac hangs/freezes either way
4) I tried deleting my (not my wife's) sparsebundle yesterday and then doing a "full reset" of the TC. After that I wanted to back up freshly over ethernet overnight (wifi was switched off). This morning I found my mac frozen without any successful backup done.
    5) no logs could be found in the logging widget -> it was empty
    6) I ran the long version of Apple Hardware Test just to be sure I wasn't having a RAM issue. But all is well there, no problems found.
    7) I did not manually fill in the TC settings again, the TC 2TB was connected to my old TC 500GB and took its settings from it.
    My infrastructure is:
    2x Airport Express as repeaters running on 7.4.2.
    1x 2TB Time Capsule running on 7.5. (but like I said, disabling wifi and connecting by cable made no difference.)
    Can anyone help please?
    Thanks!

Turns out it was faulty RAM, even though it did not show up as faulty in Apple's Hardware Test

  • Event ID 215 during Backup

    Hi,
Hope you are all doing well. We have been facing an issue for the last 10 days: whenever our scheduled backup runs, Event ID 215 (source: ExchangeStoreDB) continuously pops up every 3 minutes. We can't access OWA, nor can we send or receive email during the CAS and Mailbox backups. We use Windows Server Backup full backup, which completes in almost 3.5 hours. This error has been occurring for the last 10 days or so; before that we didn't have this issue during backups. Any thoughts??
    Virgo

    Hello,
Please make sure you are on the latest Exchange version.
I recommend you install the latest rollup and then reschedule backups to check the result.
If the issue persists, I recommend you set up diagnostic logging and test backups until the issue happens again.
     =============================================================
    Set-EventLogLevel "MSExchange Repl\Exchange VSS Writer" -Level Expert
    Set-EventLogLevel "MSExchangeIS\9002 System\Exchange Writer" -Level Expert
    Set-EventLogLevel "MSExchangeIS\9002 System\Backup Restore" -Level Expert 
    And then turn the logging down on those servers by running the following commands:
     ==============================================================
    Set-EventLogLevel "MSExchange Repl\Exchange VSS Writer" -Level Lowest
    Set-EventLogLevel "MSExchangeIS\9002 System\Exchange Writer" -Level Lowest
    Set-EventLogLevel "MSExchangeIS\9002 System\Backup Restore" -Level Lowest
    Cara Chen
    TechNet Community Support

  • Unlocking iphone during backup

    I have backed up and synced many times from the same computer with no password.  Now it wants a password to unlock backup file?!?
    I did not set up this password.  Anyone figure it out yet?


  • Unlocking site during backup

    Hi,
I have created an automated site collection backup script using PowerShell in MOSS 2007. The backup size goes up to approximately 150 GB and takes around 12-15 hours, during which my site collection is locked (site collection and quotas set to read-only). What if I unlock the site collection (site collection and quotas unlocked) while the backup is running?
Will my backup complete successfully if I unlock the site collection manually every day after starting the backup script?
    Thanks

If the backup doesn't fail because of the size, it might complete, but you won't be able to restore it because of the changes made during the backup. I've never actually tried this, but I'm sure you won't get a correct backup on SP 2007. On SP 2010 this wouldn't be an issue using the snapshot option.
V
Impossible is nothing
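For context, the "snapshot option" on SharePoint 2010 backs the site up from a SQL Server database snapshot, so the backup reads a frozen view while the site stays writable. Under the hood that amounts to something like the following sketch (database and file names are placeholders, and database snapshots require Enterprise Edition):

```sql
-- Illustrative only: a database snapshot gives the backup a consistent,
-- read-only view of the content database while users keep writing.
CREATE DATABASE WSS_Content_Snap
ON ( NAME = WSS_Content, FILENAME = N'Z:\Snapshots\WSS_Content.ss' )
AS SNAPSHOT OF WSS_Content;
```

In practice SharePoint drives this for you (via Backup-SPSite's SQL-snapshot switch) rather than through hand-written T-SQL.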

  • Issues during Component Batch Determination for Process Orders

    Hi,
    I am encountering a strange issue during component batch determination of process order.
    1. During creation of process order, when I execute batch determination, systems does determination based on strategies we have set and when I click on Copy it gives an error
    "Log cannot be saved: Object/subobject not specified"
    Message no. BL201
Diagnosis
Log save cancelled because at least one log contains no object or subobject.
Object and subobject are needed to classify application logs because there are several log types. Only a few logs are managed in main memory at runtime, so this classification is not needed.
If the logs are to be saved in the database, object/subobject must be specified for later retrieval.
Procedure for System Administration
Object/subobject can be passed when a log is created (function module BAL_LOG_CREATE) and changed with BAL_LOG_CHANGE.
The possible values for object and subobject must be configured in transaction SLG0.
    If I save the order with CRTD status and come back again in change mode and do determination, It works fine.
    Again when the order is in REL status and do the determination, It gives the same error.
    This happens only for a specific plant, In other plants batch determination works in all scenarios.
I checked all plant-related config for batch determination and couldn't find any discrepancies.
    Please advise how to resolve this issue. Thanks in advance for your help
    Regards,
    Aheesh

There is no direct solution for your requirement; just try this workaround.
While defining the batch determination search strategy for process orders in COB1, there is a column "Quantity proposal" where you can attach routines (written in ABAP code). Make use of this: define new routines in ABAP to fulfill your requirement. Try this, and if found useful, award full points.
    Regards,
    Ajay Nikte

  • Issue during automatic Creation of Work Order from Notification

    Hi,
I have an issue during automatic creation of a work order from a notification.
    BADI implemented: NOTIF_EVENT_POST
    BAPI called in BADI: BAPI_ALM_ORDER_MAINTAIN
I am able to create the work order successfully, but after that I need to update the notification header with the created work order number.
However, I am unable to update it (VIQMEL-AUFNR).
    Can anybody provide solution for the same!
    Thanks,
    Kumar.

    Hi,
    Any inputs on above posted issue!
    Thanks in advance.
    Thanks,
    Kumar.

  • Automating Goods issue during Delivery creation

    Hi all,
We have a requirement to automate goods issue creation for certain types of orders when the delivery is created. The orders that need to be automatically goods-issued during delivery creation are identified based on certain plants. These plants are linked to a certain output type, and the output type routine is the standard program RVADEK01 with some additional code for automating the goods issue creation.
We have a custom table that holds the status of orders, and there is code in the user exit userexit_save_document_prepare which changes the status of the order to closed when the goods issue is done.
But when the delivery is saved and an automatic goods issue needs to happen, by the time the flow reaches this user exit the output type code has not yet been executed, the goods issue is not done, and so the custom table is not updated with the closed status. We therefore need to find a place where we can update the status of the order in that table.
The output type code is not executed even before the other user exit, userexit_save_document. The output type code gets executed, and the goods issue is done, after userexit_save_document, when the COMMIT statement is executed in the subroutine BELEG_SICHEN_POST in the include FV50XF0B_BELEG_SICHERN.
I need help finding out whether any user exit or BAdI is called after this COMMIT statement, so that I can add code there to close the status of the order in my custom table. Just after this commit the goods issue happens and the VBFA table gets updated with the 'R' records for the goods issue.
Please let me know if anyone has any idea on this. The ultimate goal is to find some place after the goods issue is done to update the status of the order to closed in our custom table.
    Thanks,
    Packia

    Dear Siva,
As informed yesterday, I changed the language from DE to EN to match the other shipping points' settings in table V_TVST; this did not bring the solution.
Let me summarize; I am really desperate here:
    This is only IM related, Not WM.
    Picking lists are not printed for any Shipping Point from this warehouse, this is just a small subsidiary of my customer in Finland.
    Issue is not Aut. PGI.
    VP01SHP has not been configured for any shipping points, still there we do get the PR except for the new shipping point.
    In the deliveries of correct processed shipping points  I do not find any picking output type.
    Item category in new shipping is equal to Item category in already existing shipping points, so no need to config here.
    There is no picking block active.
    PR creation happens once I enter the pick qty in the delivery in VL02N. This is the part that we need to have automated.
    Can you please try to help me out?
    Tnx & regards,
    Chris

  • HT3275 For the past several days, I have been getting this message during backups: Time Machine could not complete the back up.  The backup disk image "Volumes/Data/iMac.sparsebundle" could not be accessed (error-1).

    For the past several days, I have been getting this message during backups:
    Time Machine could not complete the back up.  The backup disk image "Volumes/Data/bhoppy2's iMac.sparsebundle" could not be accessed (error-1).  When I click on the 'help' icon on the message, it reverts to a blank page, and I cannot find anything online regarding the term 'sparsebundle.'  In addition, I cannot access previous backups any longer. 
    Help?

    See C17 in Time Machine Troubleshooting by Time Machine guru Pondini:
    http://pondini.org/TM/Troubleshooting.html

  • Issue with backup NCS via NFS (Cisco Prime NCS 1.2.0)

    Hello,
Has anyone had issues backing up NCS to an externally mounted location (NFS)?
I have Cisco Prime NCS 1.2.0 and tried backing it up to an external resource, but I ran into a free-space issue:
    NCS/admin# backup ncs repository backup_nfs
    % Creating backup with timestamped filename: ncs-130131-0534.tar.gpg
    INFO : Cannot configure the backup directory size settings as the free space available is less than the current database size.
    You do not have enough disk space available in your repository to complete this backup.
    DB size is 25 GB
    Available size is 12 GB
    Please refer to the command reference guide for NCS and look at the /backup-staging-url/ command reference to setup the backup repository on an externally mounted location
      Stage 5 of 7: Building backup file ...
      -- complete.
      Stage 6 of 7: Encrypting backup file ...
      -- complete.
      Stage 7 of 7: Transferring backup file ...
      -- complete.
    I have tried to add additional space and use command backup-staging-url (my configuration: backup-staging-url nfs://server2008:/nfs), but it didn't help me.
    NFS share works perfect. I have checked it via NFS repository:
    repository backup_nfs
      url nfs://server2008:/nfs
    +++++++++++++++++++++++++++++++++++++++
    NCS/admin# show repository backup_nfs
    NCS-130130-1135.tar.gpg
    NCS-130130-1137.tar.gpg
    NCS-130130-1157.tar.gpg
    NCS-130130-1158.tar.gpg
    test-130130-1210.tar.gz
Every time I try to create a backup I receive the error message "You do not have enough disk space available in your repository to complete this backup".
Does anyone know how I can back up the NCS system?
    Thank you

How much space is available on that NFS mount point? From the error message, it looks to me like there is only 12 GB.
The backup-staging-url is just a space used to stage the backup before it is written.

  • RMAN Duplicate DB fails to restore datafile created during backup.

    Database is 9i.
Performing a duplicate database using RMAN on a separate host.
The reason for the failure is that there were datafiles created during the backup. Question: is there a workaround to make the database duplication work?
    Error :
    released channel: aux1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 12/16/2007 14:42:44
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 791 found to restore
    RMAN-06023: no backup or copy of datafile 790 found to restore
    RMAN-06023: no backup or copy of datafile 789 found to restore
    RMAN-06023: no backup or copy of datafile 788 found to restore
    RMAN-06023: no backup or copy of datafile 787 found to restore
    RMAN-06023: no backup or copy of datafile 786 found to restore
    Backup Date is : 23rd Nov to 26th Nov
    Datafile created between backup period are :
    786 /oracle/prod/proddata17/btabd_322/btabd.data322 Nov-25-2007 02:58:38 AM
    787 /oracle/prod/proddata17/btabd_323/btabd.data323 Nov-25-2007 03:06:02 AM
    788 /oracle/prod/proddata17/btabd_324/btabd.data324 Nov-25-2007 03:17:48 AM
    789 /oracle/prod/proddata17/btabd_325/btabd.data325 Nov-25-2007 03:26:50 AM
    790 /oracle/prod/proddata17/btabd_326/btabd.data326 Nov-25-2007 03:32:31 AM
    791 /oracle/prod/proddata17/btabd_327/btabd.data327 Nov-25-2007 03:39:59 AM
    Restore Script :
    rman TARGET dtbackup/dt0dmin@prod CATALOG rman_prd/rman_prd@rcat_db connect auxiliary /
    run {
    set until time = "TO_DATE('11/26/2007 10:30:00','mm/dd/yyyy hh24:mi:ss')";
    allocate auxiliary channel aux1 type 'SBT_TAPE' PARMS="BLKSIZE=262144";
    DUPLICATE TARGET DATABASE TO DUP ;
    }

I think there is no workaround and this is expected behavior, at least up to 10.2. The Oracle documentation says RMAN requires the target database to be in either MOUNT or OPEN state to duplicate it, so RMAN reads the current physical structure of the target database from its control file, and when any file is missing from the backup it raises an error. The only way to resolve this is to back up those datafiles through RMAN, either as a backupset or as a copy. (Oracle could take a hint here and make RMAN duplication possible when a backup is not available; it is a small change: if a backup is not found, start taking one. Already in 11g, RMAN duplication can use a backup from another server.)

  • Move files during backup

What does Time Machine do when I move a file while it is backing up? Where will the file or folder go when backing up: the area I had it in (part 1) or part 2? Please explain, thanks. Also, is it recommended not to use the computer during a backup?

    kevin123456789 wrote:
    what does time machine do when I move a file while it is backing up, where will the file or folder go when backing up, will it be in the area i had it in part 1 or part 2. please explain.
    it depends. at the beginning of every backup TM checks to see what needs to be backed up and does it. at the end of a backup it does another very quick backup to see if anything has changed since the start of the backup. if you move a file after TM already backed it up it will back it up in both places. if you move it before it has a chance to back it up, it will only back it up in the new location.
    thanks is it reccomended not to use a computer during backup?
    no, you can use the computer normally while the backup takes place. it's not supposed to interrupt the work flow. Indeed, given the fact that it runs every hour it would be utterly unreasonable to expect people to stop what they are doing every hour to let TM do its thing.
    Message was edited by: V.K.
