Storage Replica versus Robocopy: Fight!

I've used Robocopy for so many years that this blog post really caught my eye. Surely Robocopy could not be beaten at doing file copies? Oh dear, it looks as though we have a new sheriff in town. This test compares both systems under various workloads. [Originally posted by Ned Pyle]

Hi folks, Ned here again. While we designed Storage Replica in Windows Server 2016 for synchronous, zero-data-loss protection, it also offers a tantalizing option: extreme data mover. Today I compare Storage Replica's performance with Robocopy and demonstrate using SR for more than just disaster-planning peace of mind. Oh, and you may accidentally learn about Perfmon data collector sets.

In this corner: Robocopy has been around for decades and is certainly the most advanced file copy utility shipped in Windows. Unlike the...
This topic first appeared in the Spiceworks Community

Hi, 
Since you deleted the existing replication group at step 5, step 6 will not affect the existing DFSR database. 
When you create a new replication group, it will do an initial sync between shares UsersA-C on server1 and the new SAN-mounted drive on the replica site server.
After step 9, I think two replication groups are needed between server1 and the replica site server, to replicate shares UsersA-C and shares UsersD-F. You could set the replica site server as the primary member in the replication group. It will be considered the authoritative member and will win during initial replication. This will overwrite the current replicated folder content on the non-primary member. 
You could try a command to set another server as primary:
Dfsradmin Membership Set /RGName:<RG Name> /RFName:<RF Name> /MemName:<Member Name> /IsPrimary:True
Best Regards,
Mandy

Similar Messages

  • Locking in replicated versus distributed caches

    Hello,
    In the User Guide for Coherence 2.5.0, in section 2.3 Cluster Services Overview it says
    the replicated cache service supports pessimistic locking, yet in section 2.4 Replicated Cache Service it says
    if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock.
    I am trying to decide whether to use a replicated cache or a distributed cache, either of which will be small, where I want the objects to be locked across the whole cluster.
    If not all of the cluster nodes have to agree on a lock in a replicated cluster, doesn't this mean that a replicated cluster does not support pessimistic locking?
    Could you please explain this?
    Thanks,
    Rohan

    Hi Rohan,
    The Replicated cache supports pessimistic locking. The User Guide is discussing the implementation details and how they relate to performance. The Replicated and Distributed cache services differ in performance and scalability characteristics, but both support cluster-wide coherence and locking.
    Jon Purdy
    Tangosol, Inc.

  • Brocade SAN switch training

    Anyone else facing issues with Adobe? We have about 25 users that work on graphics in our company; they use:
    - Illustrator
    - Photoshop
    - InDesign, etc.
    Since Adobe auto-updated their software from the 2014 to the 2015 edition, all of our users are complaining about stability issues, software crashes, and very, very slow programs. I've called Adobe support numerous times in the past couple of days and they keep blaming everything except their own software.
    First call: "It's the hardware, sir!"
    Sure...: i7-4770, 16GB DDR3, 256GB SSD, NVidia Quadro 1GB, built this year, specifically for our editors.
    Second call: "It's the drivers, sir..."
    Checked; everything is up to date.
    Third call: "It's the user accounts... they're all corrupt..."
    Yesterday: a user reports there's an Illustrator update... let's try that...
    Check the what's new section: "Stability fixes, performance...


  • Repair of motherboards (system units)

    Does anyone have, not schematics as such, but a general data sheet for carrying out good motherboard repair?


  • Create Plant or just separate Storage Location

    I am trying to determine where the magical line is when deciding whether to establish a separate address location as a different plant or as a storage location.  My example is not very complex.  It is a satellite production location that produces products used as raw materials by another production facility that assembles and sells the final product.  The satellite location will have a separate cost center and will use activity rates and costing sheets for overhead recovery.  My inclination is to set up the satellite facility as a plant, but the supply chain team sees this as overkill and would rather make it simply a storage location.
    Can someone provide a white paper or a list of positives and negatives of storage location versus plant?  I know some things might involve the following:
    1-If valuation of products is different, or may at some point involve different values by location, a plant is needed.
    2-If products are sold directly from the location, a plant would make more sense.
    3-If inventory is transferred between locations with differing addresses, various tax and bank reporting could necessitate the use of separate plants.
    4-When manufacturing activities exist, a plant is logical.
    5-If all purchasing is performed at a central location, a separate plant may not be required for this purpose.
    My existing dilemma relates to the fact that we use costing sheets by plant. Without a separate plant, using different storage locations forces me to use the Material Origin Group to delineate the extra location if I don't want to use the plant-based overhead rates.  This seems like a violation of best practice and proper use of SAP.

    Hi,
    If you define the satellite production location as a storage location, you cannot do profitability analysis at that unit. My suggestion is to activate an MRP area at the storage location level, so that the satellite production unit is treated as a separate entity under the same plant. This will let you do the supply chain activities (purchasing, sales, production and STO) at the MRP area level, but profitability analysis can still be done only at the plant level.
    This will solve your points 2, 3, and 4. For material valuation (point 1) I think you have to go for split valuation.
    Regards,
    Velmurugan S

  • Configure Storage Replication - where to create Source Log Disk

    Hey guys,
    I tried to create a Scale-Out File Server storage replication. Unfortunately I can't find an option to create the "source log disk", nor could I find anything about the prerequisites for this drive.
    Anyone who found the missing piece?

    Prerequisites and answers for most setup questions can be found here:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/f843291f-6dd8-4a78-be17-ef92262c158d/getting-started-with-windows-volume-replication?forum=WinServerPreview&prof=required
    Ned Pyle [MSFT] | Sr. Program Manager for Storage Replica, DFS Replication, Scale-out File Server, SMB 3, probably some other $%#^#% stuff

  • Windows 10 Volume Replica vs StarWind

    Hi,
    I know Server 10 isn't finished yet, but I am interested in the Storage Replica feature, which I believe does synchronous replication. I was just wondering if someone has more experience with testing this feature and could explain some of the major differences between Microsoft's replication and StarWind's replication. Do we need extra hardware for the MS way, anything special? Can it be run on a 2-node cluster? If it does the same job, what reasons would we have to purchase a third-party product? Can the MS implementation be a virtual SAN, competing against the likes of StarWind and DataCore?
    I would be interested to know the technical details of the differences, and why one is better than the other.
    Cheers
    Steve

    Steve,
    a few disclaimers to start with...
    DISCLAIMER 1: I do work for StarWind Software (heck, I started the company with one more guy years ago and still own a noticeable part of it), so while I'm doing my best to be honest, you cannot believe me 100% :) To make a long story short: NEVER trust anything a vendor says to you, and double-check everything. At least we're publishing step-by-step guides or "cookbooks", and you're welcome to use these to run your own experiments. There's no guarantee that something that did not work for us would not work for you. And vice versa :)
    DISCLAIMER 2: Windows Server 10 is half-baked; it's not even beta, so A LOT of things don't work as expected, performance suffers badly, affecting functionality (see more below), a lot of things are simply locked, and MSFT is not sure whether it will deliver everything promised with GA or not. So it's very premature to build your company's storage strategy on what you see now. Also, a lot of us are under NDA and prefer not to say anything. I'm also under NDA, so the only thing I can tell you is: stay tuned, there are more interesting things coming that you're going to fall in love with :)
    Now Storage Replica Vs. StarWind...
    1) Technical. Storage Replica is mini-filter-based logical volume synchronous replication, very similar to what SteelEye DataKeeper and maybe Double-Take do. StarWind Virtual SAN is exactly what the name says: basically SAN firmware running on the Windows platform, brought one level up to run on a hypervisor host, partially as a user-land service and partially as a kernel-mode driver. Storage Replica copies data blocks coming to one volume over to another volume, while StarWind presents a distributed, high-performance, multi-path LUN. I/O is "bound" to the local node to avoid network transactions, and the TCP stack is bypassed to lower latency. So the LUN only "looks" like iSCSI and actually isn't (something similar to an SMB3 SMB Direct connection that starts life as TCP and then turns to RDMA; StarWind starts as TCP and then does DMA on the local node). With Storage Replica there's a component sitting above the volume and "branching out" writes (if you know about DRBD or HAST, it's very similar). So "by design" Storage Replica and Virtual SAN are TOTALLY different.
    2) Positioning. Storage Replica is disaster recovery software with very high uptime (nearly zero downtime in some scenarios, see below). StarWind is a business continuity solution with an optional DR component (async replication), so it does 99.99% and 99.9999% uptime. While it's possible to achieve SOME of those scenarios with Storage Replica (we did an SoFS cluster and an HA VM), that's not really its goal. 
    3) Features. StarWind does "spoofing", placing huge amounts of a distributed, RAM-based write-back cache on every node; Storage Replica, on the other hand, uses LOG disks (that's why SSD is preferred). StarWind is "all-active" while SR is "active-passive" by design, so I would not expect high IOPS from SR even with GA code. StarWind does other cool things like "flash-friendly" in-line 4KB deduplication (MSFT can do only 32KB-block off-line dedupe, and it does steal IOPS from the storage array, as data is written twice and one extra read is required by the optimization process; on heavily loaded storage you either have fewer IOPS or no dedupe, as the optimization process never kicks in...). StarWind can use RAM not only for write-back cache (AFAIK even with W10, MSFT does not do any read-write CSV cache and cannot use non-flash as a write-back cache <-- double-check this) but also for in-memory storage with the upcoming VDI accelerator project. StarWind uses a log-structured file system, so there's no separate data, only the LOG, while with SR the LOG is re-read and decoded to be put on primary storage. To make a long story short: StarWind kills random 4KB I/Os and eliminates the I/O blender effect, something MSFT cannot do (LSFS is not a panacea; there are scenarios that don't work well with it...). StarWind also offloads snapshots to cheap nodes, saving primary flash for hot data, so-called "inter-node tiering"; NetApp & Nimble can do that, while MSFT moves hot and cold data between flash and spindles on the same node only.
    Etc. etc. etc.
    So... We see SR as a very basic but very important step in the right direction. If you find that SR is "good enough" for you, use it. If not, you're welcome to deploy third-party software. MSFT has for years left huge holes in their product strategy (I have the impression quite a lot of their teams are run by engineers rather than by people who bought similar solutions for real customers), leaving a lot of space for other guys. Remember: good companies create good products, and excellent companies create INFRASTRUCTURE :) MSFT is definitely a company that lets ISVs live and grow. So we're very happy with what MSFT does in general and with Windows 10 "storage" in particular. 
    Now some true value for you. We've experimented with running SR in a number of scenarios. Some:
    1) Failover file server with 2 nodes and no shared storage. Works, but failover is not transparent (probably because performance suffers; I hope that closer to beta no failover-timeout tuning will be needed to get it 100% transparent, as CA SMB3 shares provide). Please see this blog:
    Storage Replica: General-Purpose File Server with NO SHARED STORAGE!!
    http://www.starwindsoftware.com/blog/?p=25
    2) Scale-Out File Server with 2 nodes and no shared storage. Works, but with SR's "active-passive" design the whole SoFS idea is a bit compromised... See:
    Storage Replica: Scale-Out File Server with NO SHARED STORAGE!!
    http://www.starwindsoftware.com/blog/?p=42
    3) Hyper-V cluster with an HA VM and 2 nodes only, again no shared storage. ALSO WORKS. You can e-mail or PM me and I'll give you the post draft; it's not published yet.
    4) Hyper-V cluster with a guest VM cluster. DOES NOT WORK. Probably because of bugs in Windows 10, as a guest VM cluster on normal shared storage DOES NOT WORK either. Again, not published yet, so e-mail me.
    Hope this helped a bit :)
    Good luck and happy clustering!
    Anton
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • How long do photos last in shared photo streams?

    I was wondering how long photos will stay in shared photo streams.  Do these fall under the same rules as regular photo streams, or will they stay there until I delete them?
    I'd love to use this as a photo storage site versus Facebook!

    I finally found an article on the Apple site that says "You can share as many photos as you like with Shared Photo Streams.  They don't count against your iCloud storage and aren't removed from iCloud until you delete them."

  • File Server Failover Cluster without shared disks

    I have two servers that I wish to cluster as my Hyper-V hosts, and also two file servers, each with 10 4TB SATA disks. All I have read about implementing high availability at the storage level involves clustering the file servers (e.g. SoFS), which requires external shared storage that the servers in the cluster can access directly. I do not have external storage and do not have budget for it.
    Is it possible to implement some form of HA with Windows Server 2012 R2 file servers without shared storage? For example, is it possible to cluster the servers and have data on one server mirrored in real time to the other server, such that if one server goes down, the other server will take over processing storage requests using the mirrored data?
    I intend to use the storage to host VMs for a Hyper-V failover cluster and a SQL Server cluster. They will access the shares on the file server through SMB.
    Each file server also has a 144GB SSD; how can I use it to improve performance?

    There are two ways for you to go:
    1) Build a cluster without shared storage using the upcoming version of Windows (yes, finally they have that feature and tons of other cool stuff). We've recently built both a Scale-Out File Server serving a Hyper-V cluster and a standard general-purpose file server cluster with this version. I'll blog the edited content next week (you can drop me a message to get drafts right now), or you can use Dave's blog; he was the first one I know of who built it and posted about it, see:
    Windows Server Technical Preview (Storage Replica)
    http://clusteringformeremortals.com
    The feature you should be interested in is Storage Replica. The official guide is here:
    Storage Replica Guide
    http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
    It will do things like in the picture below:
    Just be aware: the feature is new and the build is a preview (not even beta), so failover does not happen transparently (even with the CA feature of SMB 3.0 enabled). However, I think tuning timeouts and improving I/O performance will fix that. SoFS failover is transparent even right away.
    2) If you cannot wait 9-12 months from now (let's hope MSFT is not going to delay their release) and you're not happy with the very basic functionality MSFT has put there (active-passive design, no RAM cache, requirement for separate storage, system/boot and dedicated log disks where SSD is assumed), you can get more advanced features with third-party software doing things similar to the picture below:
    So it will basically "mirror" some part of your storage (it can even be a directly-accessed file on your only system/boot disk) between hypervisor or plain Windows hosts, creating a fault-tolerant and distributed SAN volume with optimal SMB3/NFS shares.
    For more details see:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san/
    There are other guys who do similar things, but as you want a file server (no virtualization?), most of them are Linux/FreeBSD/Solaris-based and VM-running, so they're out; you need to look for native Windows implementations. Try SteelEye DataKeeper (that's Dave, who blogged about the Storage Replica file server) and DataCore.
    Good luck :) 

  • Windows 2008 R2 Multi-Site (geo) Cluster File Server

    We need to come up with a new HA file server (user drive data) solution complete with DR. It needs to be 2008 R2, cater for about 25TB of data, and be suitable for 500 users (nothing high end on I/O). I don't want to rely on DFS for any form of resilience
    due to its limitations for open files. We have two active-active data centers (a third can be used for file share quorum).
    We could entertain:
    1)
    Site1 - 2 x HP ProLiants with MSA storage, replicating with something like DoubleTake to a third HP Proliant at site 2 for DR.
    2)
    Site1 - 2 x HP ProLiants with local storage and VSA or HP StoreVirtual array (aka LeftHand), using SAN replication to site 2 where we could have a one or two node config of the same setup.
    Ideally I would like all 3/4 nodes in these configurations to be part of the same multi-site cluster to ensure resources like file shares are in sync. With two pieces of storage across this single cluster (either a DoubleTake or SAN replication to local
    disks in DR) will this work? How will the cluster/SAN fail over the storage?
    We do have VMWare 5.0/1 (not 5.5 yet). We don't have Hyper-V yet either. Any thoughts on the above, and possible alternatives welcome. HA failover RTO we'd like in seconds. DR longer, perhaps 30 mins.
    Thanks in advance for any thoughts and guidance.

    For automated failover between sites, the storage replication needs to have a way to script the failover so you can have a custom resource that performs the failover at the SAN level before the disks come online. 
    DoubleTake has GeoCluster which should accomplish this. I'm not sure about how automated Lefthand's solution is for multi-site clusters.
    VMware has Site Recovery Manager, though this is really an assisted failover and not really an automatic site failover solution. It's automated so that you can failover between sites at the push of a button, but this would need to be a planned failover.
    RTO of seconds might be difficult to accomplish as you need to give the storage replication enough time to reverse direction while giving the MS cluster enough time to bring cluster applications online. 
    When planning your multi-site cluster, I'd recommend going with 2 nodes on each site and then use the file share witness quorum on your 3rd site. If you only had one node on the remote site, the primary site would never be able to failover to the remote
    site without manually overriding the quorum as 1 node isn't enough to gain enough votes for quorum. With 2 nodes on each site and a FSW, each site has the opportunity to gain enough votes to maintain quorum should one of the sites go down.
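    The vote arithmetic behind that recommendation can be sketched as follows. This is a minimal model of majority-based quorum with illustrative node counts, not the actual cluster service logic:

```python
# Sketch of majority-vote quorum math for a multi-site cluster.
# Each node contributes one vote; a file share witness (FSW) adds one more.

def has_quorum(votes_present: int, total_votes: int) -> bool:
    """A partition keeps quorum only with a strict majority of all votes."""
    return votes_present > total_votes // 2

# Recommended layout: 2 nodes per site + FSW on a third site = 5 votes.
total = 2 + 2 + 1
# Primary site down: the remote site's 2 nodes plus the FSW hold 3 of 5 votes.
print(has_quorum(2 + 1, total))            # True -> remote site stays online

# Lopsided layout: 2 nodes on site A, 1 node on site B, plus FSW = 4 votes.
total_lopsided = 2 + 1 + 1
# Site A down: the single remote node plus the FSW hold only 2 of 4 votes.
print(has_quorum(1 + 1, total_lopsided))   # False -> manual quorum override needed
```

    The same arithmetic explains why the single-node remote site can never take over automatically: two votes out of four is not a strict majority.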
    Hope this helps.
    Visit my blog about multi-site clustering

  • Moving ASM disks from Host to Host

    Hello all,
    if anyone has tried this, or can point out some sort of documentation on the issue, it would be greatly appreciated;
    I have two hosts with an Oracle database resident on a filesystem that is storage-replicated between two disk groups, and on the destination the control file is used to "import" the new database.
    My question is whether anyone has tried the same with ASM, and whether the following scenario would be possible:
    Shutdown database on source,
    Replicate the disks via Storage,
    Scan for the disks on the destination
    <Tweak the disks so that they can be imported ?>
    Import the disks to the ASM Instance
    Fire up the database on destination
    If anyone has some useful insight it would be most appreciated.
    Regards,
    Francisco

    Hi,
    at the bottom of my last post, I pointed you to some whitepapers on the ASM side, identifying what needs to be done in the case of an ASM diskgroup.
    So yes, this will work with ASM; you simply have to mirror the whole LUNs and this should work.
    However, regarding the other points:
    => Replicating via Oracle is definitely faster than via storage. Just think of it: Oracle only needs to forward the changes to a block (never the whole block) with the help of the redo logs, whereas storage mirroring always has to transfer the whole block (plus the redo logs). So this is roughly double the data that needs to be transferred compared to Oracle, in addition to blocks which Oracle does not need to forward at all (like ASM header/tablespace header information). Hence if you say it is slower, this cannot physically be the case. The only exception is if your network infrastructure is not configured the same way the storage network is (or you have an error in the setup).
    => DR with Oracle Data Guard is way below 5 minutes if you configure it correctly. If you set up ASM mirroring between both nodes and set up a cluster, then you have no downtime at all.
    => For the last point, look at Snapshot Standby. This is exactly what they need, plus it provides DR all the time.
    http://www.oracle.com/technetwork/database/features/availability/twp-dataguard-11gr2-1-131981.pdf
    Here is a case study about this:
    http://www.hitachi.co.jp/products/it/storage-solutions/techsupport/whitepaper/pdf/11gdg_wp_v1_e.pdf
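    The "roughly double" bandwidth argument above is easy to sanity-check with rough numbers. The volumes below are made-up round figures for illustration, assuming the datafile block writes and the redo stream are of a similar order of magnitude:

```python
# Back-of-the-envelope comparison of redo shipping vs. storage mirroring.
MiB = 1024**2

redo_volume = 100 * MiB       # redo generated by the workload (assumed figure)
datafile_writes = 100 * MiB   # dirtied data blocks written, similar order (assumed)

# Data Guard ships only the redo stream describing each change.
dataguard_transfer = redo_volume

# Storage-level replication mirrors every write: the full data blocks
# AND the redo log writes themselves (both hit the mirrored LUNs).
storage_transfer = datafile_writes + redo_volume

print(storage_transfer / dataguard_transfer)  # 2.0 -> "roughly double"
```

    The ratio grows even further when writes touch only a few bytes per block, since storage mirroring still ships the entire block while redo describes just the change.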
    Regards
    Sebastian

  • Ipod unreadable

    I have bought an iPod shuffle to use in my wife's car, and the error message we are getting is "iPod unreadable".  Does anyone know what the problem could be?
    The car is less than three months old and has a USB connector, so I would have thought it would have worked.

    You should refer to the documentation for the car audio system to see what devices are compatible.  The shuffle is not the same as the other "big" iPods (because it connects through the headphone jack, not a full dock connector).
    As an alternative, get a cheap USB flash drive.  Here's one that costs less than $10 and is so small it barely sticks out of the USB port (no dangling cable): 
    http://eshop.macsales.com/item/SanDisk/SDCZ33008G/
    It has 4 times the storage space versus a shuffle, and is much less susceptible to "car conditions" such as extreme heat and cold.  Plus you'll save the shuffle from the wear and tear of constantly plugging and unplugging it.  You can load it with your favorite songs and leave it connected to the car until you want to change the songs.

  • Hyper-V Replication Implementation question

    I have two Hyper-V 2012 servers. I want to set up replication between them, but I wanted to clarify a few things.  My plan is to put 3 VMs on each Hyper-V server and then replicate them to the other server. So Server A has 3 VMs and Server B has 3 VMs. In case of a server failure, the VMs on Server A will fail over to Server B and vice versa. This also applies to the VHDs; I want everything to replicate between the two. And I need to make sure that the process is automatic, so that if storage fails the VMs will fail over. I just want to make sure that Hyper-V Replica will work in this way.
    Vincent Sprague

    I need the storage replication aspect. I currently have the two servers in a cluster and the VMs fail over, but storage is my problem; our shared storage solution is junk and I need to find a way to get around that.
    Vincent Sprague
    1) For Hyper-V Replica you don't need shared storage, as Windows will replicate the source VHDX to a destination VHDX with some minor delay. 
    2) You may also take a look at Storage Replica (part of the upcoming Windows Server 10), as it may do a better job for you because of its synchronous nature. See:
    Storage Replica and Hyper-V
    http://www.starwindsoftware.com/blog/storage-replica-with-microsoft-failover-cluster-and-clustered-hyper-v-vm-role-windows-server-technical-preview/
    Good luck :)
    P.S. Looks like you've already asked similar question before:
    https://social.technet.microsoft.com/Forums/projectserver/en-US/c19b08aa-b395-49e0-9bf7-52981118b820/server-2012-r2-vm-replication?forum=winserverhyperv 

  • How much free space do you need?

    We have a year old Xsan system that we use for video editing. It's relatively small - two stations attached via fiber. We use one side of an Xserve RAID chassis for all of the media, and the other side for metadata.
    How much hard drive space do we need to keep as head room on the media LUN, which is 1.82TB? We seem to hit a 46GB limit (2.5% of the media LUN). Once we're down to 46GB, FCP says it's running out of disk space.
    Does that sound right? When I checked the preferences in FCP, it is set to keep 2047MB (2GB) available, quite a bit less than 46GB. We do have "Limit Capture Now" set to 30 min, but we aren't using Capture Now when the out-of-disk-space error pops up.
    Can anyone provide thoughts, even if it's just affirmation that 46GB is normal head room?
    Also, could anyone point me to any documentation or previous topics on this issue?
    Many thanks,
    Philip
    Xserve G5 Dual 2 GHz   Mac OS X (10.4.9)   Xsan, Xserve RAID

    Wish I could help you here -- I don't know if it's Xsan complaining to FCP about lack of space, or FCP being cranky on its own.
    One thing I will note, though, is that 2.5% free space is WAY too low for storage like Xsan. Typically you will find that performance slows substantially when storage is more than 80% full. And you will have issues expanding Xsan volumes via the "bandwidth expansion" method (adding LUNs to a storage pool, versus adding storage pools to a volume) when you get above about 60% full. So just be aware of this -- filling an Xsan volume is different from filling the local hard disk on a Mac Pro or Power Mac (though the 80% "guideline" exists in most cases, as it's simple physics: platter density is a constant in this world).

  • Pros/Cons of replicating to files versus staging tables

    I am new to GoldenGate and am trying to figure out the pros and cons of replicating to flat files to be processed by an ETL tool versus replicating directly to staging tables. We are using GoldenGate to source data from multiple transaction systems to flat files and then using Informatica to load thousands of flat files into our ODS staging area. I'm trying to figure out if it would be better just to push the data directly to staging tables; I am not sure which is better in terms of recovery, reconciliation, etc. Any advice or thoughts on this would be appreciated.

    Hi,
    My suggestion would be to push the data from multiple source systems directly to staging tables and then populate the target system using an E-LT tool like ODI.
    Oracle Data Integrator can be combined with Oracle GoldenGate (OGG), which provides cross-platform data replication and changed data capture. Oracle GoldenGate works in a similar way to Oracle's asynchronous change data capture but handles greater volumes and works across multiple database platforms.
    Source -> Staging -> Target
    ODI-EE supports all leading data warehousing platforms, including Oracle Database, Teradata, Netezza, and IBM DB2. This is complemented by the Oracle GoldenGate architecture, which decouples source and target systems, enabling heterogeneity of databases as well as operating systems and hardware platforms. Oracle GoldenGate supports a wide range of database versions for Oracle Database, SQL Server, DB2 z/Series and LUW, Sybase ASE, Enscribe, SQL/MP and SQL/MX, and Teradata, running on Linux, Solaris, UNIX, Windows, and HP NonStop platforms, as well as many data warehousing appliances including Oracle Exadata, Teradata, Netezza, and Greenplum. Companies can quickly and easily add new or different database sources and target systems to their configurations by simply adding new Capture and Delivery processes.
    ODI-EE and Oracle GoldenGate combined enable you to rapidly move transactional data between enterprise systems:
    Real-time data - Immediately capture, transform, and deliver transactional data to other systems with subsecond latency. Improve organizational decision-making through enterprise-wide visibility into accurate, up-to-date information.
    Heterogeneity - Utilize heterogeneous databases and packaged or even custom applications to leverage existing IT infrastructure. Use Knowledge Modules to speed the time of implementation.
    Reliability - Deliver all committed records to the target, even in the event of network outages. Move data without requiring system interruption or batch windows. Ensure data consistency and referential integrity across multiple masters, back-up systems, and reporting databases.
    High performance with low impact - Move thousands of transactions per second with negligible impact on source and target systems. Transform data at high performance and efficiency using E-LT. Access critical information in real time without bogging down production systems.
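    For the Source -> Staging -> Target flow, the GoldenGate side could be sketched as a pair of parameter files. This is a minimal, hypothetical example (the names EXTODS, REPODS, the SRC and STG schemas, and the trail path are all placeholders, not from the thread):

    ```
    -- ext_ods.prm: Extract process capturing changes from the source schema
    EXTRACT EXTODS
    USERID gguser, PASSWORD *****
    EXTTRAIL ./dirdat/od
    TABLE SRC.*;

    -- rep_ods.prm: Replicat process applying changes directly into staging tables
    REPLICAT REPODS
    USERID gguser, PASSWORD *****
    ASSUMETARGETDEFS
    MAP SRC.*, TARGET STG.*;
    ```

    With direct delivery to staging tables like this, there are no thousands of flat files to track and reconcile; recovery is handled by GoldenGate checkpoints on the trail rather than by re-processing files.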
    Please refer to below links for more information on configuration of ODI-OGG.
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_gg_integration/odi_gg_integration.htm
    http://www.biblogs.com/2010/03/22/configuring-odi-10136-to-use-oracle-golden-gate-for-changed-data-capture/
    Hope this information helps.
    Thanks & Regards
    SK
