WRT610N Shared Storage

I recently purchased a WRT610N and have been having some problems setting up the USB shared storage feature.  I have a 1.5 TB Seagate drive on which I have created two partitions (I had read elsewhere in the forums that the WRT610N only handles partitions/drives up to 1 TB).  Both partitions are NTFS, the first being 976,561 MB and the second 420,700 MB. Both show up in the "Disk" section of the admin console, and I can create/define a share for the larger of the two partitions without any problems.
The first of my problems comes when I try to create/define a share for the smaller partition.  I can create the share, but the admin console does not save the access privileges that I assign to it: when I go back and look, in both the detail and summary views, the Access rights show as blank.  I do not have this issue with the larger partition, where I can add groups in the Access section and see them again later.
The second problem comes when I try to attach to the larger share from a network client.  I can see the shares if I use Start - Run and type \\192.168.1.1.  If I enter my admin user ID and password, I can see the new share on the WRT610N.  When I double-click on it, I am prompted again for a username and password.  When I re-enter the admin user ID and password, the logon dialog comes right back with "WRT610n\admin" populated in the user ID field, and from there it won't accept the admin password.  There are no error messages.
Help with either problem would be appreciated.

When you select your storage partition and open it and it asks you for a username and password, that prompt is for the credentials of the storage share itself; you may have set a password on the storage drive.
Log in to your router's GUI, click on the Storage tab, and then on the "Administration" sub-tab below it. There you can modify the "admin" rights (for example, change the password), or you can create your own user and password. Then, whenever you open your storage partition and it asks for a username and password, enter those credentials and click OK. This way you will be able to access your storage drive.
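If the Windows credential dialog keeps rejecting the router's credentials, mapping the share from a command prompt can also show which account is actually being used. A minimal sketch, assuming the share was named Share1 in the router's Storage tab (the share name and password are placeholders):
{code}
REM Map the WRT610N share to drive Z: with explicit credentials
net use Z: \\192.168.1.1\Share1 /user:admin yourpassword

REM List current connections to verify the mapping
net use
{code}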

Similar Messages

  • My wife and I share one iTunes account but have separate Apple IDs for our devices - prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign

    My wife and I share one iTunes account but have separate Apple IDs for our devices. Prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign in to iTunes using her Apple ID. I have checked the iTunes ID and password on both devices. Can anyone help?

    Have a look here...
    http://macmost.com/setting-up-multiple-ios-devices-for-messages-and-facetime.html

  • In the old MobileMe storage I had shared storage for my family and all of our devices. How do I break up the 55GB across my Apple accounts now that iCloud is per device, per user?

    In the old MobileMe storage I had shared storage for my family and all of our devices. How do I break up the 55GB across my Apple accounts now that iCloud is per device, per user? Does anyone else have this issue? I also need to sort out the storage from my work computer, where I do not have Safari and cannot download iCloud to my desktop.

    That storage moves to the master account. Since there are no accounts like that in iCloud, the people in your family will get the complimentary 5 GB from Apple, and they will let you know if they need any more. You will not be able to manage storage from a Windows desktop.

  • Is shared storage provided by VirtualBox as good as or better than Openfiler?

    Grid version: 11.2.0.3
    Guest OS: Solaris 10 (64-bit)
    Host OS: Windows 7 (64-bit)
    Hypervisor: VirtualBox 4.1.18
    In the past, I have created a 2-node RAC in a virtual environment (11.2.0.2) in which the shared storage was hosted on OpenFiler.
    Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as Openfiler, I would definitely go for VirtualBox, as Openfiler requires a third VM (Linux) to be created just for hosting storage.
    For pre-RAC testing, I created a VirtualBox VM and created a standalone DB in it. The test below was done on VirtualBox's LOCAL storage (I am yet to learn how to create shared LUNs in VirtualBox; see the sketch after the output below).
    I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
    Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
    SQL> set timing on
    SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
    Tablespace created.
    Elapsed: 00:02:42.47
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $
    $ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    $ df -h /u01/app/hldat1/oradata/hcmbuat
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01
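    For reference, a shareable disk can be created and attached from the command line with VBoxManage in VirtualBox 4.x. A minimal sketch, assuming two VMs named rac1 and rac2, each with a storage controller named "SATA" (the VM names, controller name, path, and size are placeholders):
    {code}
    # Create a fixed-size disk image (shareable disks must be fixed, not dynamically allocated)
    VBoxManage createhd --filename /vbox/shared_asm.vdi --size 10240 --variant Fixed

    # Mark the image as shareable so multiple VMs can attach it simultaneously
    VBoxManage modifyhd /vbox/shared_asm.vdi --type shareable

    # Attach the same image to both nodes
    VBoxManage storageattach rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/shared_asm.vdi
    VBoxManage storageattach rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/shared_asm.vdi
    {code}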

    Well, I once experimented with Openfiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all 3 on a desktop PC: Intel i7 2600K, 16GB memory).
    CPU/memory wasn't a problem, but as all 3 VMs were on a single HDD, performance was awful.
    I didn't really run any benchmarks, but a compressed full database backup with RMAN for an empty database (<1 GB) took something like 15 minutes...
    2 VMs + a VirtualBox shared disk on the same single HDD provided much better performance; I'm still using this kind of setup for my sandbox RAC databases.
    Edit: 6 GB in 2'42" is about 37 MB/sec (6,144 MB / 162 s).
    With the above setup using Openfiler, it was nowhere near this.
    Edit 2: I made a little test.
    Host: Windows 7
    Guests: 2 x Oracle Linux 6.3, 11.2.0.3
    Hypervisor: VirtualBox 4.2
    PC is the same as above
    2 virtual cores + 4GB memory for each VM
    2 VMs + VirtualBox shared storage (a single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
    Created a 4 GB datafile (not enough space for 6 GB):
    {code}SQL> create tablespace test datafile '+DATA' size 4G;
    Tablespace created.
    Elapsed: 00:00:31.88
    {code}
    {code}RMAN> backup as compressed backupset database format '+DATA';
    Starting backup at 02-OCT-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
    input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
    input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
    input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
    input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 02-OCT-12
    {code}
    Now, I don't know much about Openfiler, and maybe I messed something up, but I think this setup is quite good, so I wouldn't use a 3rd VM just for the storage.

  • How to Reorganize CSM200 Shared Storage in Solaris 10 x86 Oracle 10gR2

    I could use some guidance from those who are more experienced in RAC administration in a Solaris environment with ASM. I have a three-node RAC with Oracle 10gR2 instances on top of Solaris 10 x86 where the shared storage is a Sun CSM200 disk array which looks like a single disk to the rest of the world. I'm not very familiar with the CSM200 Common Array Manager but I do have access to use it.
    During initial setup, I followed the Oracle cookbook and defined a storage slice for each of the following: OCR, OCR mirror, three voting disks, and +DATA, for a total of six slices. I brought up the RAC and we've used it for a couple of weeks.
    This is a Dev and QA environment, so it changes pretty fast. The new requirement is to add a +FRA and to add a mount point for a file system on the shared storage, so that all three Oracle instances can refer to the same external table(s).
    However, I've already used all the available slices in the VTOC on the shared logical drive. I'm not sure how to proceed.
    1) Is it necessary to use the CAM to create two logical disks out of the single existing logical disk?
    2) If so, how destructive is that? I don't need to keep the contents of the database, but I do not want to reinstall CRS or ASM or the DB instances.
    3) Is it possible to combine the OCR and its mirror on the same slice, thus freeing a slice for reuse?
    4) Is it possible to combine all three voting disks on the same slice, thus freeing two slices for reuse?
    Edited by: user12006221 on Mar 29, 2011 3:30 PM
    Another question: Under 10.2.0.4, is it possible for the OCR and voting disks to be managed by ASM? I know it would be possible under 11g, but that's not an option as I am trying to match a customer's environment and they aren't going to 11g any time real soon.

    What you see is what happens when the Java runtime running on Solaris 10 x86 tries to load a library which is compiled for SPARC.
    Because of the native parts in SAP GUI for Java, compilations and installers are required for each OS - HW combination.
    The supported platforms can be seen in SAP note 954572. For Solaris only SPARC is currently supported.
    Because of the effort needed for compiling, testing, support, etc., it is necessary to focus on OS/HW combinations widely used on desktop machines, and Solaris 10 on x86 currently does not seem to be one of those.

  • Disk replication for Shared Storage in Weblogic server

    Hi,
    Why do we need disk replication in WebLogic Server for shared storage systems? What is the advantage of it, and how can disk replication be achieved in WebLogic for shared storage that contains the common configurations and software used by a pool of client machines? Please clarify.
    Thanks.

    Hi,
    I am not a middleware expert. However, ACFS (Oracle Cloud File System) is a cluster file system that also provides replication functionality:
    http://www.oracle.com/technetwork/database/index-100339.html
    Maybe you will also find the information you need on the MAA website: www.oracle.com/goto/maa
    Regards
    Sebastian

  • The best option to create shared storage for Oracle 11gR2 RAC on OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5, in a production environment? And could you help me create the shared storage? There is no step for this in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146 GB, 438 GB, and 438 GB of capacity are my storage. Two of the three disks, the 438 GB ones, were configured as RAID 5, and the remaining disk was configured as RAID 0. My storage is a Dell MD3000i, connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.
    Edited by: user12144220 on Aug 10, 2011 1:10 AM
    Edited by: user12144220 on Aug 10, 2011 1:11 AM
    Edited by: user12144220 on Aug 10, 2011 1:13 AM
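    As one possible next step, on OEL 5 shared partitions like these are commonly stamped for ASM with the ASMLib tools before running the installer. A minimal sketch, assuming /dev/sde1, /dev/sdf1, and /dev/sdg1 are the MD3000i partitions shown above (the disk labels are placeholders):
    {code}
    # One-time ASMLib setup on each node (sets the owner/group for ASM devices)
    /etc/init.d/oracleasm configure

    # Stamp each shared partition, from one node only
    /etc/init.d/oracleasm createdisk DATA1 /dev/sde1
    /etc/init.d/oracleasm createdisk DATA2 /dev/sdf1
    /etc/init.d/oracleasm createdisk DATA3 /dev/sdg1

    # On the other node, pick up the new labels
    /etc/init.d/oracleasm scandisks
    /etc/init.d/oracleasm listdisks
    {code}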

  • How can I use all vCenter features such as HA and DRS without shared storage?

    Hi,
    I have 2 ESXi hosts (DL380 G8), but I don't have any shared storage or SAN. I want to use vCenter features like HA and DRS; to use these features, do I have to run VSAN?
    I read that to use VSAN we have to buy AT LEAST ONE SSD disk; is this true?
    Is an SSD disk necessary? At least one SSD disk for each host?
    Can I use all of the vCenter features with VSAN?
    Please, someone help me.

    Hi,
    For VSAN you need at least 3 hosts (4 recommended).
    Each host must have 1 SSD for the read cache/write buffer.
    Do you have any storage at all? How about running a NAS such as "OpenFiler" or "FreeNAS"? That way you can present shared storage to both hosts (I'd use this in a lab only). These NAS OSes also run as virtual machines, so you can turn any disk into shared storage.
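    For example, once the NAS exports an NFS share, it can be mounted as a datastore on each host. A minimal sketch, assuming a NAS at 192.168.1.50 exporting /mnt/vol1 (the address, export path, and datastore name are placeholders):
    {code}
    # Mount the NFS export as a datastore on an ESXi host
    esxcli storage nfs add --host=192.168.1.50 --share=/mnt/vol1 --volume-name=shared01

    # Verify the datastore is visible
    esxcli storage nfs list
    {code}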

  • Access iphoto 08 file on shared storage device from multiple machines

    I recently installed iLife 08 on both an iMac and a MacBook. Previously (iPhoto 06), both machines accessed the iPhoto library on a shared storage device without any problems. After the upgrade, my iMac is able to view the library, but my MacBook (the second machine to be upgraded) no longer has access. 'Sharing' is too slow over the wireless network and isn't a reasonable option.
    Is anyone else experiencing this issue? Any suggestions?

    Actually, neither repairing permissions nor changing them with Get Info worked for me. What did work was deleting the empty iPhoto Library in the home folder of the user who couldn't access the shared library, and putting an alias of the shared library in that user's Pictures folder. Everything then worked as it did prior to upgrading. Thanks.

  • How to Create Shared Storage using VM-Server 2.1 Red Hat Enterprise Linux 5

    Thanks in advance.
    Please describe in sequence how to create shared storage for a two-guest/node Red Hat Enterprise Linux cluster using Oracle VM Server 2.1 on Red Hat Enterprise Linux 5, using the command line or the appropriate interface.
    How to create Shared Storage using Oracle 2.1 VM Server?
    How to configure Network for two node cluster (oracle clusterware)?

    Hi Suresh Kumar,
    Oracle Application Server 10g Release 2, Patch Set 3 (10.1.2.3) is required to be fully certified on OEL 5.x or RHEL 5.x.
    Oracle Application Server 10g Release 2 10.1.2.0.0 or 10.1.2.0.1 versions are not supported with Oracle Enterprise Linux (OEL) 5.0 or Red Hat Enterprise Linux (RHEL) 5.0. It is recommended that version 10.1.2.0.2 be obtained and installed.
    This implies Oracle AS 10.1.2.x is somewhat certified on RHEL 5.x.
    I think it would be better if you get in touch with Oracle Support regarding this.
    Sorry, I am not aware of any document on migration from Sun Solaris to RH Linux 5.2.
    Thanks,
    Sutirtha

  • Oracle RAC with QFS shared storage going down when one disk fails

    Hello,
    I have an Oracle RAC in my testing environment. The configuration follows:
    Nodes: V210
    Shared storage: A5200
    #clrg status
    Group Name           Node Name   Suspended   Status
    rac-framework-rg     host1       No          Online
                         host2       No          Online
    scal-racdg-rg        host1       No          Online
                         host2       No          Online
    scal-racfs-rg        host1       No          Online
                         host2       No          Online
    qfs-meta-rg          host1       No          Online
                         host2       No          Offline
    rac_server_proxy-rg  host1       No          Online
                         host2       No          Online
    #metastat -s racdg
    racdg/d200: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d3s0 0 No No
    racdg/d100: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d2s0 0 No No
    #more /etc/opt/SUNWsamfs/mcf
    racfs 10 ma racfs - shared
    /dev/md/racdg/dsk/d100 11 mm racfs -
    /dev/md/racdg/dsk/d200 12 mr racfs -
    When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), the Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now #clrg status shows the output below.
    Group Name           Node Name   Suspended   Status
    rac-framework-rg     host1       No          Pending online blocked
                         host2       No          Pending online blocked
    scal-racdg-rg        host1       No          Online
                         host2       No          Online
    scal-racfs-rg        host1       No          Online
                         host2       No          Pending online blocked
    qfs-meta-rg          host1       No          Offline
                         host2       No          Offline
    rac_server_proxy-rg  host1       No          Pending online blocked
                         host2       No          Pending online blocked
    CRS is not started on either node. I would like to know if anybody has faced this kind of problem when using QFS on a disk group. When one disk fails, Oracle is not supposed to go offline, since the other disk is still working; my QFS configuration was meant to mirror these two disks!
    Many thanks in advance
    Ushas Symon

    I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability; it relies on the underlying volume manager (VM) or array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice (see the sketch below).
    Tim
    ---
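    For illustration, a mirrored metadevice in Solaris Volume Manager could be layered under the QFS devices roughly like this. This is a sketch only, reusing the thread's d2s0/d3s0 DID slices; the metadevice numbers are placeholders:
    {code}
    # Create one-way submirrors, one on each array disk
    metainit -s racdg d101 1 1 /dev/did/rdsk/d2s0
    metainit -s racdg d102 1 1 /dev/did/rdsk/d3s0

    # Build the mirror from the first submirror, then attach the second
    metainit -s racdg d100 -m d101
    metattach -s racdg d100 d102
    {code}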

  • Pointing existing RAC nodes to a fresh Shared Storage discarding old one

    Hi,
    I have a RAC setup with the primary database on Oracle 10gR2.
    For this setup, there is also a physical standby database (using a Data Guard configuration) with a 30-minute delay.
    Assume that the "shared storage" of the primary DB fails completely.
    In that scenario, my plan is to refresh a "fresh" shared storage device from the physical standby database setup and then "point" the RAC nodes to the new shared storage.
    Is this possible?
    Simply put, how can I refresh the primary database using the standby database?
    Please help with the utilities (RMAN, Data Guard, other non-Oracle products, etc.) that can be used to do this.
    Regards
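    One common approach in this scenario, assuming the standby is intact and the Data Guard broker is configured, is to fail over to the standby and later rebuild the old primary on the new storage. A minimal DGMGRL sketch, where stby is a placeholder database name:
    {code}
    DGMGRL> CONNECT sys@stby
    DGMGRL> FAILOVER TO 'stby';
    {code}
    Without the broker, the same failover can be driven from SQL*Plus on the standby, and RMAN can then be used to build a fresh primary from the surviving database.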

    Is the following shared device configuration fine for 10g RAC on Windows 2003?
    • 1 SCSI drive
    • Two PCI network adapters on each node in the cluster.
    • Storage cables to attach the shared storage device to all computers.
    Regards.

  • How to change permissions on shared storage to install Oracle RAC on vmware

    Hello:
    - I am trying to install Oracle RAC using VMware.
    - But when I try to change permissions for the shared storage, the owner and group of the shared storage do not change.
    - Has anyone else installed Oracle RAC on VMware?
    - I am using Oracle RAC 10.2.0, Solaris 5.10 x86
    Thanks
    Jlem

    I have successfully installed RAC on VMware following this article; maybe you can give it a try:
    Oracle 10g RAC On Linux Using VMware Server
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php

  • Qmaster Compressor and Shared Storage

    I've got Qmaster working on my Compressor encodes and everything is working fine. What I want to know is whether there is a way to cut out the extra copy of data for the encode. All of the machines in my cluster have the same shared storage mounted on the desktop, but my jobs still get copied first and then encoded.
    Is there any way to configure Qmaster, Compressor, and the cluster machines so they all use their local path for source files, and avoid the extra copy of the source files to the controller system?
    Thanks in advance.

    You want to make sure that in Compressor, under Preferences, the Cluster Options field says Never Copy Source to Cluster. Assuming you have all of the correct permissions set and have the shared storage configured properly, it should work. "Should" being the key word.

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSR that appears to suffer severe performance issues when hosted on a cluster as part of a DFS replication group.
    My configuration:
    3 physical machines (blades) within one physical quadrant.
    3 physical machines (blades) hosted within a separate physical quadrant.
    Both quadrants are extremely well connected: local, 10 Gbit/s fibre.
    There is local storage in each quadrant; no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application that can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrant are read-only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise no application fail-over is possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 states the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2 is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating that DFSR does not support Cluster Shared Volumes makes no sense at all after stating that clusters are supported in replication groups, and after a TechNet guide has been provided to set up and configure exactly that. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes? None at all.
    My question: I need some clarification. Is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate/write data between two clusters running a HAFS configuration in a DFS replication group.
    If, for instance, as a test, local/logical storage is mounted on a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data/amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSR configuration, and the replication group configuration; the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Cluster Shared Volume = ??
    Cluster Shared Volume ---> Cluster Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSR and clustering; however, it may point towards why we are seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul
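    As a side note for reproducing measurements like those above, the DFSR PowerShell module that ships with Windows Server 2012 R2 can sample the replication backlog between two members. A minimal sketch, where the group, folder, and computer names are placeholders:
    {code}
    # Count the files still waiting to replicate from CLUSTER-A to CLUSTER-B
    Get-DfsrBacklog -GroupName "LUN1-RG" -FolderName "Data" `
        -SourceComputerName "CLUSTER-A" -DestinationComputerName "CLUSTER-B" |
        Measure-Object | Select-Object -ExpandProperty Count
    {code}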

    Hello Shaon Shan,
    I am also seeing the same scenario at one of my customer sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes; even the data partition drive is part of a CSV.
    It's really confusing whether DFS replication on CSV is supported or not, and what the consequences would be of using it.
    To my knowledge, we have some customers who have been running Hyper-V 2008 R2 with DFS configured on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul
