SAN Snapshots

Does anyone know if SAN snapshot technology can be used with Oracle databases?
The 10gR2 Backup and Recovery Advanced User's Guide has a chapter (Ch. 17) that covers user-managed backups, which I am guessing is where SAN snapshot technology would fit in?
What I am trying to ascertain is:
1. Can a SAN snapshot be used to back up a database while it is open and online? Do you need to put the database into backup mode?
2. Is it better to use a SAN snapshot in combination with a hot backup, so you are snapshotting only the backup itself and not the database directly?
3. What is the difference between a SAN snapshot of an open database and a hot backup?
4. What are the common industry SAN snapshot solutions used with Oracle?
thanks,
Jim

1. Can a SAN snapshot be used to back up a database while it is open and online? Do you need to put the database into backup mode?
Yes and yes.
2. Is it better to use a SAN snapshot in combination with a hot backup, so you are snapshotting only the backup itself and not the database directly?
HUH?
3. What is the difference between a SAN snapshot of an open database and a hot backup?
A hot backup will allow a consistent restore to be done.
A SAN snapshot is a waste of effort.
4. What are the common industry SAN snapshot solutions used with Oracle?
It depends.
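To make "yes and yes" concrete: a minimal sketch of wrapping a SAN snapshot in hot backup mode, assuming ARCHIVELOG mode and OS authentication. `SNAPSHOT_CMD` is a placeholder for the array vendor's CLI, and both variables are overridable so the sequence can be dry-run:

```shell
#!/bin/sh
# Sketch only: wrap a SAN snapshot in Oracle hot backup mode.
# Assumes ARCHIVELOG mode and OS authentication. SQLPLUS and SNAPSHOT_CMD
# are overridable so the sequence can be dry-run (e.g. SQLPLUS=cat).
SQLPLUS=${SQLPLUS:-"sqlplus -s / as sysdba"}
SNAPSHOT_CMD=${SNAPSHOT_CMD:-"echo replace-me-with-the-vendor-snapshot-CLI"}

snapshot_backup() {
    # Put all datafiles into backup mode before the snapshot...
    printf 'ALTER DATABASE BEGIN BACKUP;\n' | $SQLPLUS &&
    eval "$SNAPSHOT_CMD" &&
    # ...then end backup mode and archive the current log so the
    # snapshot can be rolled forward to a consistent state later.
    printf 'ALTER DATABASE END BACKUP;\nALTER SYSTEM ARCHIVE LOG CURRENT;\n' | $SQLPLUS
}
```

Without backup mode the snapshot is at best crash-consistent; with BEGIN/END BACKUP plus the archived logs it can be recovered to a consistent state, which is the practical difference asked about in question 3.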

Similar Messages

  • ASM backup strategies with SAN snapshots

    Hello folks,
    In one of our setups we have a database server (11.2) that holds only binaries and config (not the datafiles); all ASM disks are on the SAN. The SAN takes daily snapshots, so the datafiles are backed up. My question is: how do I proceed if the database server fails and needs to be reinstalled? How do I tell the Grid Infrastructure that we already have a diskgroup with disks on the SAN? I found md_backup and md_restore, but I am not sure how or when to use md_restore.
    I have searched for some time, but I am quite confused by the approaches. Please note that RMAN is not an option in this environment.
    Thanks in advance
    Alex

    Hi,
    O.k., that changes things. You should have mentioned at the beginning of your post that you shut down the database before taking the snapshot and that you are not using archivelog mode.
    I would advise also dismounting the diskgroups before taking the snapshots.
    However, when was the last time you tried archivelog mode? 11g improved its performance a lot.
    Note: in this case the snapshot is only a point-in-time view. In case of an error, you will lose data (everything since your last snapshot).
    Now regarding your questions:
    => You will install a new system with GI and database software. You will need a new diskgroup (e.g. INFRA) for the new OCR and voting disks (they should not be contained in the other diskgroups like DATA and FRA, so there is no need to snapshot the old INFRA).
    => Then simply present the LUNs from the DATA and FRA diskgroups to the new server. ASM will be able to mount the diskgroups as soon as all disks are available (and have the right permissions). There is no need for md_backup or md_restore: MD_BACKUP backs up the contents of the ASM disk headers, but since you still have the disks, this metadata is still intact.
    => Reregister the database and the services with the Grid Infrastructure (srvctl add database / srvctl add service etc.). Just make sure to point to the correct spfile (you can find that e.g. with ASMCMD).
    => Start database...
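    A sketch of those steps as commands; the database name orcl and the spfile path are illustrative assumptions, and this helper only prints the sequence rather than running it:

```shell
#!/bin/sh
# Prints (does not execute) the re-registration sequence described above.
# The database name "orcl" and the spfile path are illustrative assumptions.
print_reregistration_steps() {
    db=${1:-orcl}
    cat <<EOF
asmcmd mount DATA
asmcmd mount FRA
srvctl add database -d $db -o \$ORACLE_HOME -p +DATA/$db/spfile$db.ora
srvctl add service -d $db -s ${db}_svc
srvctl start database -d $db
EOF
}
```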
    That is, if your snapshot actually worked. I just had another forum thread where a diskgroup got corrupted due to a snapshot (luckily only FRA).
    And just as a reminder: a snapshot is not a backup. Depending on how the storage performs the snapshot, you should take precautions to move it to separate disks and verify that it is usable.
    Regards
    Sebastian

  • Recover from SAN Snapshot

    Hi all,
    I am trying to combine SAN snapshots with RMAN for recovery purposes. I am copying the archive logs off the storage array.
    I am confused about where the log files should reside. I need to recover the datafiles via snapshots and then apply the archives without any data loss.
    Is this recovery scenario possible? Has anyone accomplished something like this? Do you have any advice?
    Best Regards..

    It depends on your SAN.
    With some, you can just do a startup and it is able to recover just from redo logs.
    With some, you have to go through a database recovery.
    Most SAN vendors have a recommended Oracle recovery template when starting up a replicated database using their toys.
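    To illustrate the difference between the two cases, the SQL*Plus sequences differ roughly as follows. This is a sketch only; which variant applies depends on whether the snapshot is crash-consistent across all volumes:

```shell
#!/bin/sh
# Prints the two recovery variants mentioned above. "crash" relies on the
# online redo logs alone; "media" replays archive logs under a backup
# controlfile. Sketch only -- details vary by SAN and snapshot consistency.
print_recovery_steps() {
    case $1 in
    crash)
        cat <<'EOF'
STARTUP MOUNT;
RECOVER DATABASE;
ALTER DATABASE OPEN;
EOF
        ;;
    media)
        cat <<'EOF'
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
EOF
        ;;
    esac
}
```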

  • Opening SAN volume snapshot

    Hello,
    I have an HP 4300 (formerly LeftHand) SAN. I also have the VSA software running on a separate box, and I do remote snapshot copies to the VSA server. If I wanted to get access to the remote snapshot volume (which ends up being an NSS volume), what's the best way to do this? Should I have a separate test eDirectory environment? I'm reluctant to fire it up on a production box through NSSMU because it may present itself with the same NSS volume information as the production volume.

    Originally Posted by imc
    Could you see any issues mounting it on a "test" tree?
    Should work, although you may have issues with the trustee rights if your test tree doesn't have the same user list/layout as your prod eDir.
    However, I've taken SAN snapshots and mounted them on OTHER file servers without any problem all the time in our production system. The only thing to be careful of as Jesse mentioned is that you don't present that to the SAME server (ie, you can't present the same VOL1 volume to the SAME server, but multiple servers can have VOL1 mounted on them).
    One gotcha is if you are presented with a clustered volume: you'll need to run NSSMU and make it non-shareable if you are mounting the snapshotted one on a non-clustered server.
    --Kevin

  • DPM 2012 Backing up a VM on Server 2012 Hyper-V CSV Host - Not Working with Hardware VSS

    Hi All,
    I'm trying to back up a VM on a 2012 cluster. I can do it using the system VSS provider, but when I try to use the hardware provider (Dell EqualLogic), it doesn't work. DPM will sit for a while trying and then report a retryable VSS error.
    The only error I'm seeing on the Host is the following:
    Event ID 12297: Volume Shadow Copy Service error: The I/O writes cannot be flushed during the shadow copy creation period on volume \\?\Volume{3312155e-569a-42f3-ab3a-baff892a2681}\. The volume index in the shadow copy set is 0. Error details: Open[0x00000000, The operation completed successfully.
    ], Flush[0x80042313, The shadow copy provider timed out while flushing data to the volume being shadow copied. This is probably due to excessive activity on the volume. Try again later when the volume is not being used so heavily.
    ], Release[0x00000000, The operation completed successfully.
    ], OnRun[0x00000000, The operation completed successfully.
    Operation:
    Executing Asynchronous Operation
    Context:
    Current State: DoSnapshotSet
    I don't know where to go from here. There is no activity on the CSV (this is the only VM on it, and both the CSV and the VM were created specifically for testing this issue).
    Does anyone have any ideas? I'm desperate. 
    Update:
    Ok, so I can Take DPM out of the picture. Trying to do a snapshot from the Dell Auto-Snapshot manager, I get the same errors. But I also get a bit more information:
    Started at 3:02:47 PM
    Gathering Information...
    Phase 1: Checking pre-requisites... (3:02:47 PM)
    Phase 2: Initializing Smart Copy Operation (3:02:47 PM)
    Adding components from cluster node SB-BLADE01
    Adding components from cluster node SB-BLADE04
    Adding components from cluster node SB-BLADE02
    Retrieving writer information
    Phase 3: Adding Components and Volumes (3:02:52 PM)
    Adding components to the Smart Copy Set
    Adding volumes to the Smart Copy Set
    Phase 4: Creating Smart Copy (3:02:52 PM)
    Creating Smart Copy Set
    An error occurred:
    An error occurred during phase: Creating Smart Copy
    Exception from HRESULT: 0x80042313.
    Creating Smart Copy Set
    An error occurred:
    An error occurred during phase: Creating Smart Copy
    Exception from HRESULT: 0x80042313.
    An error occurred:
    Writer 'Microsoft Hyper-V VSS Writer' reported an error: 'VSS_WS_FAILED_AT_FREEZE'. Check the application component to verify it is in a valid state for the operation.
    An error occurred:
    One or more errors occurred during the operation. Check the detailed progress updates for details.
    An error occurred:
    Smart Copy creation failed.
    Source: Creating Smart Copy Set
    An error occurred:
    An error occurred during phase: Creating Smart Copy
    Exception from HRESULT: 0x80042313.
    An error occurred:
    Writer 'Microsoft Hyper-V VSS Writer' reported an error: 'VSS_WS_FAILED_AT_FREEZE'. Check the application component to verify it is in a valid state for the operation.
    An error occurred:
    One or more errors occurred during the operation. Check the detailed progress updates for details.
    Error: VSS can no longer flush I/O writes.
    Thanks,
    John

    I had a similar issue in an environment that had previously been working with the Dell HIT configured correctly. When we added a third node to the cluster, I began seeing this problem.
    In my case I had the HIT maximum sessions per volume set to 6 and maximum sessions per volume slice set to 2, and the CSV was using a LUN/volume on the SAN that was split across 2 members.
    When the backup takes place and Dell HIT is configured to use SAN snapshots, the vss-control iSCSI target is used, which in my case exceeded my limits for maximum connections per volume, as I'm using 2 paths per Hyper-V node with MPIO (this is my current theory).
    Once I'd modified these settings, I could back up the VHDs on that CSV again.
    Hope this helps.

  • IOS7:  "all-day, repeating" entry from all-day to a specific time.

    Since upgrading to ios7 (iPhone 5, Verizon) when using the calendar I can't switch an "all-day, repeating" entry from all-day to a specific time. There's no "all-day" toggle switch.  Does anyone know how to correct this?  My friend has the 5s & AT&T and she doesn't have this problem.  Hers shows a toggle switch!


  • DB Back to a Specific Time with Archive Logs (Until cancel or time?)

    I'll try to be as clear as possible with my intended goals and the limitations of the system I'm working with:
    1. We have a test instance of our Oracle DB that I would like to be able to refresh with data from the production instance at any time, without having to shut down the production database or put tablespaces into hot backup mode.
    2. Both systems are HP Itanium boxes with differing numbers of CPUs and RAM. Those differences have been taken into account in the init.ora file for the DB instances.
    3. The test and production instances are using a SAN to hold the following file systems: ora_redo1, ora_redo2, ora_archlog, oradata10g. The test instance is using SAN snapshots (HP EVA series SAN) of the production file systems to pull the data over when needed. The problem is that the only window to do this is when the production system is down for nightly maintenance which is about a 20 minute period. I want to escape this limitation.
    What I've been doing is using the HP SAN to take snapshots of the file systems on the production system mentioned above. I do this while the production DB is up and running. I then import those snapshots into the test system, run an fsck to ensure file system integrity, then start up the Oracle instance as follows:
    startup mount
    Then I run the following query, which I found online, to determine the current redo log:
    select member from v$logfile lf , v$log l where l.status='CURRENT' and lf.group#=l.group#;
    Then I attempt to run a recovery as follows:
    recover database using backup controlfile until cancel;
    When prompted, I enter the path to the first of the current redo logs and hit enter. After waiting, sometimes it says that the recovery completed; other times it stops with an error saying that more files are needed to make the DB consistent.
    I took the above route because doing a recover until time (which is what I really want to do) kept prompting me for the next archive log in sequence that didn't yet exist when I took the SAN snapshot. Here was my recover until time command:
    recover database until time 'yyyy-mm-dd:hh:mm:ss' using backup controlfile;
    What I would like to do is take the SAN snapshot and then recover the database to about a minute before the snapshot using the archive logs. I don't want to use RMAN since that seems to be overkill for this purpose. A simple recovery to a specific point in time seems to be all that is needed and I have archive logs which, I assume, SHOULD help me get there. Even if I have to lose the last hour's worth of transactions I could live with that. But my tests setting the specific time of recovery to even 12 hours earlier still resulted in a prompt for the next, non-existent archivelog.
    I will also note that I even tried copying the next archive log over once it did exist and the recovery would then prompt me for the next archive log! I will admit right now that I really don't know a whole lot about Oracle or DBs, but it's my task to try and make it possible to "refresh" the test DB with the most recent data with no impact on the production DB.
    The reason I don't want to use hot backup mode is that I don't know the DB schema other than that there are probably 58 or more tablespaces. The goal is to use SAN snapshots for their speed instead of having to take RMAN files and copy them to the test instance. I'm sure I'm not the only person who has ever tried this, but most of what I've found online refers to RMAN, hot backup mode, or down time. The first two don't take advantage of SAN snapshots for a quick swap of all the Oracle file systems, and I can't afford downtime other than that window at night. Is there some reason that the recover-to-time didn't work even though I have archive logs?
    One final point. The recover until cancel actually worked a couple of times, but it seems to be sporadic. It likely has something to do with what was happening on the production DB when I created the SAN snapshots. I actually thought I had a solution with recover until cancel last week until it didn't work three times in a row.

    I haven't completely discounted it, but it seems like I would need to back up to files, then restore from files. I don't want to do that. I want a full file-system-level SAN snapshot that I can just drop into place. This does work when the production database is shut down. However, considering that I do have archive logs, shouldn't it be possible to use them to recover to a specific SCN without having to do an RMAN backup at all? It seems that doing an RMAN backup/recovery would just make this whole process a lot longer, since the DB is 160 GB in size (not huge, but the dump would take more time than I would like). With a SAN snapshot, if I can get this to work, I'm looking at about a 15-20 minute window to move the production DB over to test.
    Since you are suggesting that RMAN may be a better approach, I'll provide more details about what I'm trying to do. Essentially this is like trying to recover a DB from a server that had its power plug pulled. I was hoping that Oracle's automatic recovery would do the same thing it would do in that instance, but obviously that doesn't work. What I want to do is bring over all the datafiles, redo logs, and archive logs using the SAN snapshot, then, if possible, use some aspect of Oracle (RMAN if it can do it) to mount the database and recover to a specific time or SCN. However, when I tried using RMAN to do it, I got an error saying that it couldn't restore the datafile because it already existed. Since I don't want to start from scratch and have RMAN rebuild files that I've already taken snapshots of (needless copying of data), I gave up on the RMAN approach. But if you know of a way to use RMAN so that it can recover to a specific incarnation without needing to run a backup first, I am completely open to trying it.
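    For completeness, a sketch of the user-managed point-in-time sequence discussed in this thread; the helper only prints the commands, and the timestamp is an example:

```shell
#!/bin/sh
# Prints a point-in-time recovery sequence for a snapshot-restored database.
# The timestamp format matches the "recover until time" syntax in this thread.
print_pitr_steps() {
    t=${1:-"2013-06-01:03:00:00"}   # example target time, just before the snapshot
    cat <<EOF
STARTUP MOUNT;
SET AUTORECOVERY ON;
RECOVER DATABASE UNTIL TIME '$t' USING BACKUP CONTROLFILE;
ALTER DATABASE OPEN RESETLOGS;
EOF
}
```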

  • Files server migration with deduplication

    Dear All,
    I need to migrate the FTP server from Windows Server 2003 Web Edition to Windows server 2012R2 Std.
    The purpose of this migration is to save disk space (testing shows a dedup result of 80%).
    I'm trying to set up a migration plan, but I have doubts regarding the files/folders copy from the old server to the new one.
    At the moment the current server has a data disk of 170 GB; on the new server I would like to have a disk of 50 GB. How should I perform the data copy?
    Is it possible to perform a first copy with Robocopy (one day before, for example) and then, on the day itself, purge deleted files/folders and copy the new ones? (I don't think it's possible, because the files will not be the same (pointer vs. real file).)
    Also, because the disks are not the same size, I will have to copy the data in several passes. Is it an issue to manually execute the dedup job while copying?
    Does anyone have experience with this kind of migration?
    Regards,
    Vincent

    Are you going physical to physical, virtual to physical, or virtual to virtual? Say you're going virtual to virtual: you can unmount the virtual volume, shift it to the new VM, and set up your FTP server to point to the volume. But the downside to going virtual with dedupe is that you pretty much have to thick-provision your virtual hard disk: dedupe will touch every block in the virtual disk, causing it to fully provision. If you do scheduled SAN snapshots, you have to be cautious about this change. I also recommend stripping short names and disabling 8.3 naming. The beauty is that if you are using domain permissions for the ACLs on the filesystem, you do not have to apply them to the volume; you just have to rebuild the shares, and you can apply Everyone access to those. But if they were local machine ACLs, like on a machine in a workgroup, you would have to rebuild them.
    If it's physical to physical, you have similar options if you utilize a shared SAN for your storage.
    I've done both of these options and downtime was minimal, something like 5 to 15 minutes. That being said, I'd still recommend Robocopy if you can do it.
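    For the short-name cleanup mentioned above, a sketch of the usual Windows commands (printed rather than executed; D: is an example drive letter, and the commands must be run from an elevated prompt on the new server):

```shell
#!/bin/sh
# Prints the Windows fsutil commands for disabling 8.3 name creation and
# stripping existing short names. The drive letter is an example.
print_8dot3_cleanup() {
    drive=${1:-D:}
    cat <<EOF
fsutil 8dot3name set 1
fsutil 8dot3name strip /s /v $drive
EOF
}
```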
     

  • Is User-managed Backup&Recovery Useless Because of RMAN?

    Is user-managed backup&recovery method useless because of RMAN?

    Nice discussion!
    Also, one has to remember that if your database is using ASM, RMAN is the only option for backup/restore/recover (at least in 10g).
    Can SAN Snapshot/SplitMirroring be used to take a backup of database using ASM?
    --MM

  • High CPU activity - Terminal services

    Hi,
    I have an HP DL360 G7 with 2 VMs on Hyper-V 2008 R2. When users connect and open apps, the CPU runs at 100%. I've enabled the BIOS virtualization settings and changed it to run just one VM with 32 GB RAM, but the same symptoms persist.
    The VM is running with 4 virtual processors; any ideas?
    Thanks, Matt.

    Hi Patrick,
    Thanks for responding. Part of the problem was that the VM was sharing the vNIC with the host; after un-ticking this it seemed to improve things. I've also removed an iSCSI device, a DVD device, and a floppy device which were not wanted, and re-installed the Integration Services. I've also deleted some SAN snapshots which were long overdue. All of this has improved things.
    Matt.

  • Can InterConnect support D3L and XML at the same time?

    Hi,
    In the file Adapter.ini, ota.type can only be one of D3L and XML. Does this mean InterConnect can only support either D3L or XML parsing at a time?
    If I have different types of incoming files, I need to install at least two InterConnect adapters, right?
    Thanks,
    Jie

    ASM diskgroup redundancy and SAN redundancy are not the same and work completely differently. You cannot use ASM external and high redundancy at the same time.
    A SAN does not know anything about Oracle database files, and redundancy is usually accomplished by RAID. ASM is not RAID and does not duplicate complete disks. ASM data redundancy is based on disk failure groups and file extents. ASM knows about Oracle database files and automatically adjusts the striping of data, which a SAN cannot. ASM also has the advantage that the storage size can be adjusted dynamically while the storage is online, without having to rebuild your storage devices. Rebuilding or extending a RAID-based SAN solution usually means reinitializing the storage.
    Beware of SAN snapshot and replication features, as they may not provide a complete backup or redundancy if you lose the complete storage.
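    For contrast with RAID, ASM redundancy is declared per diskgroup via failure groups; an illustrative CREATE DISKGROUP statement (disk paths and failgroup names are hypothetical):

```shell
#!/bin/sh
# Prints an illustrative CREATE DISKGROUP statement showing ASM failure
# groups. Disk paths and failgroup names are hypothetical.
print_diskgroup_ddl() {
    cat <<'EOF'
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/rdsk/c1d1', '/dev/rdsk/c1d2'
  FAILGROUP controller2 DISK '/dev/rdsk/c2d1', '/dev/rdsk/c2d2';
EOF
}
```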

  • Snapshot Backups on HP EVA SAN

    Hi everyone,
    We are implementing a new HP EVA SAN for our SAP MaxDB Wintel environment. As part of the SAN setup we will be utilising the EVA's snapshot technology to perform a nightly backup.
    Currently HP Data Protector does not support MaxDB for its "Zero Downtime Backup" (ZDB) concept, so we need to perform LUN snapshots using the EVA's native commands. ZDB would have been nice, as it integrates with SAP and lets the DB/SAP know when a snapshot backup has occurred; however, as I mentioned, this feature is not available for MaxDB (only SAP on Oracle).
    We are aware that SAP supports snapshots on external storage devices as stated in OSS notes 371247 and 616814.
    To perform the snapshot we would do something similar (if not exactly) like note 616814 describes as below:
    To create the split mirror or snapshot, proceed as follows:
                 dbmcli -d <database_name> -u <dbm_user>,<password>
                      util_connect <dbm_user>,<password>
                      util_execute suspend logwriter
                   ==> Create the snapshot on the EVA
                      util_execute resume logwriter
                      util_release
                      exit
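    Scripted, that suspend/snapshot/resume sequence might look roughly like this (a sketch: DBMCLI and EVA_SNAPSHOT_CMD are overridable placeholders, and the one-shot dbmcli command form is an assumption to verify against your version):

```shell
#!/bin/sh
# Sketch of the suspend/snapshot/resume sequence as one script.
# DBMCLI and EVA_SNAPSHOT_CMD are overridable placeholders (e.g. for a
# dry run); the EVA command must be replaced with the real storage CLI.
DBMCLI=${DBMCLI:-dbmcli}
EVA_SNAPSHOT_CMD=${EVA_SNAPSHOT_CMD:-"echo '(create the EVA snapshot here)'"}

snapshot_with_suspended_log() {
    db=$1; user=$2; pass=$3
    $DBMCLI -d "$db" -u "$user,$pass" util_execute suspend logwriter &&
    eval "$EVA_SNAPSHOT_CMD" &&
    $DBMCLI -d "$db" -u "$user,$pass" util_execute resume logwriter
}
```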
    Obviously MaxDB and SAP are unaware that a "backup" has been performed. This poses a couple of issues that I would like to see if anyone has a solution to.
    a. To enable automatic log backup, MaxDB must know that it has first completed a "full" backup. Is it possible to make MaxDB aware that a snapshot backup has been taken of the database, thus allowing us to enable automatic log backup?
    b. SAP also likes to know it has been backed up. EarlyWatch Alert reports start to get a little upset when you don't perform a backup on the system for a while.
    Also, DB12 will mention that the system isn't in a recoverable state, when in fact it is. Are any workarounds available here?
    Cheers
    Shaun

    Hi Shaun,
    interesting thread so far...
    > It would be nice to see HP and SAP(MaxDB) take the snapshot technology one or two steps further, to provide a guaranteed consistent backup, and can be block level verified.  I think HPs ZDB (zero downtime backup eg snapshots) technology for SAP on Oracle using Data Protector does this now?!??!
    Hmm... I guess the keyword here is 'market'. If there is enough market potential visible, I tend to believe that both SAP and HP would happily try to deliver such tight integration.
    I don't know how this ZDB stuff works with Oracle, but how could the HP software possibly know what an Oracle block should look like?
    No, there are just these options to actually check for block consistency in Oracle: use RMAN, use DBV, or use SQL to actually read your data (via EXP, EXPDP, ANALYZE, custom SQL).
    Even worse, you might come across block corruptions that are not really covered by these checks.
    > Data corruption can mean so many things.  If you're talking structure corruption or block corruption, then you do hope that your consistency checks and database backup block checks will bring this to the attention of the DBA.  Hopefully recovery of the DB from tape and rolling forward would resolve this.
    Yes, I was talking about data block corruption. Why? Because there is no reliable way to actually perform a semantic check of your data. None.
    We (SAP) simply rely on the fact that whatever the Updater writes to the database is consistent from an application point of view.
    Having handled far too much remote consulting messages concerning data rescue due to block corruptions I can say: getting all readable data from the corrupt database objects is really the easy part of it.
    The problems begin to get big, once the application developers need to think of reports to check and repair consistency from application level.
    > However if you're talking data corruption as in "crap data" has been loaded into the database, or a rogue ABAP has corrupted several million rows of data, then this becomes a little more tricky.  If the issue is identified immediately, restoring from backup is a feasible option for us.
    > If the issue happened over 48hrs ago, then restoring from a backup is not an option.  We are a 24x7x365 manufacturing operation, shipping goods all around the world.  We produce and ship too much product in a 24hr window for it to be rekeyed (or so the business says) if the data is lost.
    Well in that case you're doomed. Plain and simple. Don't put any effort into getting "tricky", just let never ever run any piece of code that had not passed the whole testfactory. That's really the only chance.
    > We would have to get tricky and do things such as restore a copy of the production database to another server, and extract the original "good" documents from the copy back into the original, or hopefully the rogue ABAP can correct whatever mistake they originally made to the data.
    That's not a recovery plan - that is praying for mercy.
    I know quite a few customer systems that went to this "solution" and had inconsistencies in their system for a long long time afterwards.
    > Look...there are hundreds of corruption scenarios we could talk about, but each issue will have to be evaluated, and the decision to restore or not would be decided based on the issue at hand.
    I totally agree.
    The only thing that must not happen is: open a call conference and talk about what a corruption is in the first place, why it happened, how it could happen at all ... I've spent hours of precious lifetime in such nonsense call confs, only to see that there is no plan for this on the customer side.
    > I would love to think that this is something we could do daily to a sandpit system, but with a 1.7TB production database, our backups take 6hrs, a restore would take about 10hrs, and the consistency check ... well a while.
    We have customers saving multi-TB databases in far less time - it is possible.
    > And what a luxury to be able to do this ... do you actually know of ANY sites that do this?
    Quick Backups? Yes, quite a few. Complete Backup, Restore, Consistency Check cycle? None.
    So why is that? I believe it's because there is no single button for it.
    It's not integrated into the CCMS and/or the database management software.
    It might also be (hopefully) that I simply never hear of these customers. See, as a DB Support Consultant I don't get in touch with "success stories". I see failures and bugs all day.
    To me the correct behaviour would be to actually stop the database once the last verified backup is too old. Just like everybody is used to it, when he hits a LOGFULL /ARCHIVER STUCK situation.
    Until then - I guess I will have a lot more data rescue to do...
    > Had a read  ...  being from New Zealand I could easily relate to the sheep =)
    > That's not what I meant.  Like I said, we are a 24x7x365 system.  We get a maximum of 2hrs downtime for maintenance a month.  Not that we need it these days, as the systems practically run themselves.  What I meant was that between 7am and 7pm are our busiest peak hours, but we have dispatch personnel, warehouse operations, shift supervisors ..etc.. as well as a huge amount of batch running through the "night" (and day).  We try to maintain good dialog response during the core hours, and then try to perform all the "other" stuff around these hours, including backups, opt stats, business batch, large BI extractions ..etc..
    > Are we busy all day and night ... yes ... very.
    Ah ok - got it!
    Especially in such situations I would not try to implement consistency checks on your prod. database.
    Basically, running a CHECK DATA there does not mean anything. Right after a table finishes its check, it can get corrupted even though the check is still running on other tables. So you never really have a guaranteed consistent state in a running database.
    On the other hand, what you really want to know is not: "Are there any corruptions in the database?" but "If there would be any corruptions in the database, could I get my data back?".
    This later question can only be answered by checking the backups.
    > Noted and agreed.  Will do daily backups via MaxDB kernel, and a full verification each week.
    One more customer on the bright side
    > One last question.  If we "restored" from an EVA snapshot, and had the DB logs up to the current point in time, can you tell MaxDB just to roll forward using these logs even though the restore wasn't initiated via MaxDB?
    I don't see a reason why not - if you restore the data and log areas and bring the DB to admin mode, then it uses the last successful savepoint for startup.
    If you then use recover_start to supply more logs, that should work.
    But as always, this is something that needs to be checked on your system.
    That has been a really nice discussion - I hope you don't take my comments as offensive; they really aren't meant that way.
    KR Lars
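
    The roll-forward described above can be illustrated with a toy model (this is not MaxDB internals - a plain dictionary stands in for the data area, and `(sequence, key, value)` tuples stand in for log entries):

    ```python
    # Toy model of point-in-time roll-forward recovery: a snapshot restore
    # brings the data area back to the last savepoint, and recovery then
    # replays every log entry written after that savepoint.

    def roll_forward(savepoint_state, savepoint_seq, log):
        """Apply every log entry with a sequence number above the savepoint."""
        state = dict(savepoint_state)
        for seq, key, value in sorted(log):
            if seq > savepoint_seq:  # entries up to the savepoint are already in the data area
                state[key] = value
        return state

    # Data area as captured by the storage snapshot (consistent as of savepoint 100)
    data_at_savepoint = {"a": 1, "b": 2}
    # The log volume contains entries from both before and after the savepoint
    log_entries = [(99, "a", 0), (101, "b", 3), (102, "c", 4)]

    recovered = roll_forward(data_at_savepoint, 100, log_entries)
    print(recovered)  # {'a': 1, 'b': 3, 'c': 4}
    ```

    The pre-savepoint entry (sequence 99) is skipped because its effect is already in the restored data area; only the two later entries are replayed.
    
    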

  • MaxDB 7.x and its reorganization + SAN cloning/snapshotting

    We are currently in the process of evaluating a new storage solution for all our SAP (= MaxDB) systems.
    Suppose we produce a snapshot of our production system (3.1 TB) and present the result to another server to create a test system. Our observations show that we move between 300 and 600 GB of data daily on production - not only, but also, because of the internal MaxDB page reorganization. This would eventually result in a test system that is as big as the source system within a few days.
    We are currently discussing whether, and if so how, the amount of daily block changes can be reduced; I'm of the opinion that this is not really possible.
    Am I right?
    Markus
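
    A back-of-envelope sketch of the growth described above (a minimal illustration - the 8 KB page size and the 450 GB/day midpoint are assumptions, not measured values):

    ```python
    # Copy-on-write snapshot growth estimate: the clone must keep a private
    # copy of every page changed on the source, so its physical footprint
    # grows by roughly the daily change rate of the production system.

    PAGE_SIZE_KB = 8        # assumed MaxDB page size
    DB_SIZE_GB = 3100       # ~3.1 TB production database
    DAILY_CHANGE_GB = 450   # midpoint of the observed 300-600 GB/day

    changed_pages_per_day = DAILY_CHANGE_GB * 1024 * 1024 // PAGE_SIZE_KB
    days_until_full_copy = DB_SIZE_GB / DAILY_CHANGE_GB

    print(f"{changed_pages_per_day:,} pages copied per day")
    print(f"clone reaches source size after ~{days_until_full_copy:.1f} days")
    ```

    With these numbers the clone's footprint catches up with the 3.1 TB source in under a week, which matches the "as big as the source system in a few days" observation.
    
    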

    > Copy-on-Write in turn is a feature of the physical page management (happening in the converter).
    > This does happen with every change of the page - even if it's an in-place operation seen from the logical-page-angle.
    Ok - got it.
    > > > EnableDataVolumeBalancing        YES
    > > > Another feature that might lead to a higher amount of 'moved' pages is maybe the table clustering. (I haven't checked this, but I assume this!).
    > >
    > > Those two come together: no rebalancing on the volumes --> no more free clusters --> fragmentation --> slower system. There are other parameters influencing the "search for free clusters", yes, but eventually this will lead to a slower system.
    >
    > Nope again.
    > Rebalancing surely helps to free up space for clusters, but you can use the clustering without it.
    Sure - I did not exclude them
    > For the clustered pages a certain area of each data volume is kind of reserved and not considered for non-clustered pages.
    I thought this "static" mode was no longer supported and that clustered pages are now "mixed up" (as of 7.7.x) with other non-clustered pages.
    > > Another discussion was/is the deduplication of backups. We tested this earlier, and there is only one solution supporting MaxDB backup deduplication (Exagrid); other storage vendors are not really aware of MaxDB backups being a "problem" as of today's knowledge. They all think in "Oracle files".
    >
    > I wouldn't call this 'thinking' really.
    > In fact this feature is not that clever and as you see it's also not very flexible.
    Well, reducing a daily "full backup" of 3.1 TB to a daily backup of a few hundred GB is clever enough to save time and backup space, no matter how intelligent it is internally. We are doing this with other (non-SAP) databases (SQL Server) and it works pretty well, saving lots of storage and tapes.
    > As soon as the data 'moves' around it does not work properly anymore.
    That's the reorganization thing again; interestingly enough, Exagrid is clever enough to detect the changes and store only one block instead of two (as with COW) - which is a significant reduction.
    Backing up a 3 TB database is a challenge if you have no real SAN features to assist you.
    Markus
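
    Block-level deduplication of the kind discussed above can be sketched as follows (a hypothetical fixed-block scheme using SHA-256 hashes - real appliances use more sophisticated chunking, and the 8 KB block size is an assumption):

    ```python
    # Sketch of content-addressed dedup: because blocks are identified by the
    # hash of their content rather than their offset, a page that merely moved
    # to a new position (e.g. after MaxDB reorganization) is stored only once.
    import hashlib

    def dedup_store(backup: bytes, store: dict, block_size: int = 8192) -> int:
        """Split a backup stream into fixed blocks; store each unique block once.
        Returns the number of blocks that were new to the store."""
        new_blocks = 0
        for i in range(0, len(backup), block_size):
            chunk = backup[i:i + block_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:
                store[digest] = chunk
                new_blocks += 1
        return new_blocks

    store = {}
    page_a, page_b = b"A" * 8192, b"B" * 8192

    # Day 1: two pages -> two unique blocks stored
    day1 = dedup_store(page_a + page_b, store)
    # Day 2: same pages, but reorganization swapped their positions ->
    # nothing new to store, because the content hashes still match
    day2 = dedup_store(page_b + page_a, store)
    print(day1, day2)  # 2 0
    ```

    An offset-by-offset comparison of the two backups would have stored both pages again; content hashing is what lets the second day's "full" backup shrink to almost nothing.
    
    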

  • Snapshot and backup

    Hello BW Experts,
    Current Scenario:
    There is a daily snapshot of the BW system at night for our Production server. During this they have to stop any jobs / loads running at that time.
    Secondly, they take a backup every weekend. They stop any jobs / loads during that time as well.
    It would be great if you could share your experiences:
    1) Is a snapshot necessary every day?
    2) What is the alternative to a daily snapshot?
    3) What is the best procedure for a daily snapshot?
    4) Is there a procedure that avoids stopping the loads / jobs during the snapshot?
    5) Is a backup necessary every weekend?
    6) What is the alternative to a weekly backup?
    7) What is the best procedure for a weekly backup?
    8) Is there a procedure that avoids stopping the loads / jobs during the backup?
    9) Is there any doc / paper / link / note on snapshots and backups for a BW production system?
    10) What is the typical time taken for a snapshot?
    11) What is the typical time taken for a backup?
    Any docs, please mail them to [email protected]
    Thanks,
    BWer

    Thanks for the valuable info Mimosa.
    In our company, what they call a snapshot is a copy of the production box to a box on the SAN network. The copy is made to disk; the snapshot disks are on RAID 0 and the production disks are on RAID 5. They do this every night.
    In our company a backup is a copy to tape; they do this every weekend.
    It's interesting that they do the 'split mirror on-line backup' daily while the system is available to all users and jobs. I wonder if you have any details of this backup: what is the procedure called, and at what level is the copy done - database level, network functionality, or SAP functionality?
    Suggestions appreciated.
    Thanks,
    BWer

  • How to update Flash Builder 4 beta 2 so that it remains compatible with snapshots?

    I have Flash Builder 4 beta 2 STANDALONE installed on my Mac (MacOSX 10.6) and I just upgraded my Flex SDK to the latest stable snapshot (4.0.0.13875). Now it seems that the visual designer is not very happy with this upgrade.
    And in "Using Gumbo builds with Flex Builder" (http://opensource.adobe.com/wiki/display/flexsdk/Using+Gumbo+Builds+in+Flex+Builder) it is said that "Please note that the release of a milestone build will usually have an associated automatic update to Flex Builder that will include all relevant files."
    The question is how do I get that automatic update? I've tried Help/Software Updates/Find and install.../Updates for existing features, but it didn't find any updates. Any idea how I can revive my Design mode?

    There are no updates for Beta 2.
    You're probably seeing one of two issues with the latest nightly build:
    1) playerglobal.swc change http://forums.adobe.com/thread/559312?tstart=0
    2) Namespace change from xmlns:mx="library://ns.adobe.com/flex/halo" to xmlns:mx="library://ns.adobe.com/flex/mx"
    Jason San Jose
    Quality Engineer, Flash Builder
