MaxDB on shared storage (Fibre Channel or iSCSI)

Hello,
Is anyone here running MaxDB on shared storage (Fibre Channel or iSCSI), please?

> But we need more CPUs and RAM for our application! That is what will solve the problem!
So - you don't need a shared database?!
> The problem with this solution is clustering MaxDB; that is why I am asking in this forum.
> I have asked in this forum before about clustering MaxDB, and the answer from "Sushil Suryawan" was:
> How to implement HA on MaxDB?
> 1105291 Tips of setting up HA-MaxDB/liveCache on Sun Cluster
> So I don't understand why it doesn't work on shared storage!?
This is high availability - not "load balancing".
That cluster is basically what we have set up here. One database node is running the database. If that node fails, the other node takes over the database. But they won't run at the same time.
Oracle RAC does what you want: it runs two database servers against ONE database so you can distribute the load. The same is true for DB2; you can create more than one node to act for the same database. This does not work with MaxDB, since the database lacks that functionality.
Back when MaxDB was still ADABAS D, such a distributed-database feature was available, but it was not developed further since there was no demand.
Markus

Similar Messages

  • Can an Xserve2,1 share Xsan storage over Fibre Channel with clients on a different OS version?

    Hi, I want to upgrade 3 Mac Pros linked to an Xserve2,1 (Mac OS X 10.6.8) with Fibre Channel.
    My question is: must the server's operating system be the same as the clients'?
    Thanks!

    It shouldn't matter as long as you are using the same Xsan across all three, although it is a bit of a black art. The best place to ask these questions is http://www.xsanity.com/

  • Plugin for EMC VMAX fibre channel storage

    I am new to Oracle VM and VM Manager. After reading the documentation, I believe that I need a plugin for VM Manager to discover our EMC VMAX Fibre Channel SAN LUNs. As it is now, discovery on its own does not show the LUNs. Where can I get this plugin? I contacted EMC and they know nothing about a plugin.

    There is no plugin for EMC VMAX SANs; it will use the Unmanaged FC Storage Array functionality. If you are not seeing your VMAX LUNs, check that multipathd is started on your Oracle VM Server and that multipath.conf has the correct device {} stanza for the VMAX.
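    As a rough sketch only (the vendor/product strings and path policy below are assumptions; take the exact device {} stanza from EMC's host connectivity guide for your VMAX model and multipath version), the checks on the Oracle VM Server would look something like:
    # confirm the multipath daemon is running and enabled at boot
    service multipathd status
    chkconfig multipathd on
    # list the paths multipath currently sees; the VMAX LUNs should show up here
    multipath -ll
    And an illustrative device {} stanza in /etc/multipath.conf (placeholder values; confirm against EMC's documentation):
    devices {
        device {
            vendor  "EMC"
            product "SYMMETRIX"
            path_grouping_policy multibus
        }
    }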

  • Windows Server 2008 R2 - adding additional new Fibre Channel storage: will I need to reboot after mpclaim?

    I am replacing an MSA 1500 Fibre Channel array with an MSA 2040 on a live system. The new array has been installed physically and connected to the switch fabric.
    Both arrays are connected to a common switch fabric. The new array can see all the physical hosts on the SAN, and I have managed to set up MPIO on the least critical box, but I rebooted that one as I used the -r -i -a options. My question is: will I be able to run mpclaim -n -i -d "HP      MSA 2040 SAN" and not have to reboot afterwards? I have read every piece of documentation, and some of it seems to hint this is possible in 2008 R2 and later, but it is not explicitly clear, as Microsoft haven't updated all their docs to detail the differences.
    I suppose I am after the wisdom of experience.
    Since MPIO has already been installed and is working for a different array, I am hoping there should be no requirement to reboot?
    Your advice greatly appreciated.
    K

    Hi,
    If the server cannot be rebooted at the time the command is run, you could use the -n switch instead. The devices will not be claimed by MPIO until the server reboots.
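    For example, a minimal sketch (the hardware ID must be the exact 8-character vendor plus 16-character product string your array reports, so copy it from the mpclaim -e output rather than typing it by hand):
    rem list the hardware IDs of the storage the server can currently see
    mpclaim -e
    rem claim the MSA 2040 for MPIO without forcing an immediate restart
    mpclaim -n -i -d "HP      MSA 2040 SAN"
    rem check which devices MPIO has claimed
    mpclaim -s -d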
    For more detailed information, please refer to the articles below:
    Microsoft MPIO Command Line Reference (mpclaim) and Server Core Configuration
    http://blogs.msdn.com/b/san/archive/2008/07/27/microsoft-mpio-command-line-reference-mpclaim-and-server-core-configuration.aspx
    Configuring Multipath I/O
    http://sourcedaddy.com/windows-7/configuring-multipath-io.html
    Please Note: Since the website is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
    Best Regards,
    Mandy 

  • Fibre Channel Storage

    I'm planning on using three Mac Pros to capture live footage in ProRes 422 HQ 1080i.
    Can I put a Fibre Channel card in each Mac Pro and network them to each other... that way, when I edit, I can just edit from each drive on one Mac Pro without having to copy them all to one station?

    What you're asking is called arbitrated loop.
    You can start from here:
    http://en.wikipedia.org/wiki/Arbitrated_loop
    My preliminary, kinda lame and super-fast answer is that you'll end up with a messy configuration, which isn't going to be worth the cost and time.
    If you're going to use Fibre Channel, buy a switch and a proper SAN and do that. Otherwise it is possible to use Gigabit Ethernet switches, with a Mac Pro as a host, and work with ProRes.
    If you're going to use the MacBook Pro, absolutely check whether the GigE port supports jumbo frames, because most new iMacs do not.
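    To check jumbo-frame support on a given Mac from Terminal, something like this should do (a sketch; en0 is an assumed interface name, so substitute the port you actually use):
    # show the current MTU on the built-in Ethernet port
    networksetup -getMTU en0
    # raise it to ~9000-byte jumbo frames if the NIC and switch support it
    sudo networksetup -setMTU en0 9000
    # confirm the change took effect
    ifconfig en0 | grep mtu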

  • Clarification on how to use Xserve Raid and Fibre Channel without xsan.

    First let me apologize for not responding earlier to your response, I tend to get busy and then forget to check back here.
    Tod, the answer to your question is No, only one computer is accessing the xserve raid files at any one time and that is via Fibre Channel. However I do have the xserve raids set up as share points via ethernet.
    Maybe I should turn that off and only access the files with the one computer that can connect via fibre channel.
    I never thought of that. I will try that while I await your answer; thanks again.
    Todd Buhmiller
    I have the following setup:
    Xserve: 2 x 2GHz Dual-Core Intel Xeon, 5GB of RAM, running 10.5.8 Leopard Server
    Xserve RAID with firmware version 1.5.1/1.51c on both controllers, and
    QLogic SANbox 5600
    Apple Fibre Channel cards in the Xserve and Mac Pro tower; Apple 2-port 4Gb/s Fibre Channel card
    Mac Pro tower: Quad-Core Intel Xeon 2.8GHz, 16GB of RAM, running Snow Leopard 10.6.4
    Here is the problem.
    The directories for the Xserve RAIDs keep getting corrupt, and I use DiskWarrior to rebuild them. Is there a way to keep the directories from getting corrupt? I am a few pieces of equipment short of being able to build an Xsan, as that is the ultimate goal, but until then I just need the RAIDs to function as storage without having to rebuild the directories all of the time.
    Anybody have any suggestions?
    Thanks
    Todd Buhmiller
    Widescreen Media
    Calgary, Alberta Canada
    Tod Kuykendall replied (Jun 27, 2010) in response to Todd Buhmiller:
    Are multiple computers accessing the same data on the RAID at the same time?
    If so then NO. This is the source of your data corruption, and I'm surprised you were able to get all your data back every time if this is how you've been running your system. Each Fibre Channel host assumes it has full and sole control of every volume it has mounted; no data arbitration is practiced, and data corruption will occur if this assumption is wrong.
    The only way this set-up will work is to use partitions or LUN masks so that each volume is accessed by only one computer at any time. As long as one computer relinquishes control before another mounts it you will dodge arbitration issues, but this is a dangerous game. If you screw up and mount an already mounted volume - and there is no easy way to tell if a volume is mounted - corruption will occur. Sharing data simultaneously at Fibre Channel speeds is what Xsan does, and to do this you need it.
    HTH,
    =Tod
    Intel Xserve, G5 XServes, XRAID, Promise

    +The xserve raids will mount automatically to any computer that I connect the qlogic fc switch to+
    This is the source of the corruption of your data. Any computer that attaches to a drive/partition via Fibre Channel assumes that it alone is in control of the drive, and data corruption is inevitable.
    +Is that the issue, should I disconnect the xserve from the fc switch and leave it connected via ethernet?+
    Short answer: YES. The ethernet connections are fine because the server is controlling the file arbitration through the sharing protocol. Fibre Channel connections assume complete control over the partition, and no arbitration of file access is performed. It's like two people independently trying to drive the same car to different locations.
    Depending on your set-up it is possible for the two machines to see and use different parts of the Xserve RAID storage but they cannot access the same areas without SAN doing the arbitration.
    Hope that's clear,
    =Tod

  • Can I hook up a NAS via Fibre Channel?

    I am building a Linux NAS to be accessed by my Macs. I've purchased fibre cards for all of my macs, an LSI fibre card for my Linux machine, and a McData 4400 fibre switch. I'm trying to configure my fibre network so I can share files from my NAS. I've got the fabric set up, but everything that I read is talking about using SAN software to connect to a SAN, not a NAS. I can't find any information at all on how to do what I want to do, or whether or not it's possible. Can someone please help me figure out how to set up this Fibre Channel network, or at least figure out if it's possible or not? Thank you!
    -Ryan

    Is what you want technically possible? Probably.
    In an earlier era, you could have a network adapter or an image scanner or other devices connected to a host via SCSI; via a storage bus. There was all manner of odd SCSI-connected gear. (Think of SCSI as expensive multi-host USB with big expensive cables and expensive peripheral devices, and you'll have the general idea. And FWIW, the SCSI command set underpins USB mass storage.)
    You would end up writing a whole lot of driver code for the devices and hosts you have, too.
    Which is where folks end up with commercial SAN solutions, or with NAS solutions, and not with using a SAN as a network.
    The various Fibre Channel controller vendors discussed but never seemed to have sorted out the network interface designs for their FC controllers.
    If you want to roll your own storage arrays akin to the [Apple Xsan|http://www.apple.com/xsan] or the HP [MSA|http://h18000.www1.hp.com/storage/diskstorage/msa_diskarrays/sanarrays/index.html] or [EVA|http://h18000.www1.hp.com/storage/diskstorage/eva_diskarrays/evaarrays/index.html] SAN arrays, well, have at it. You'll likely end up needing to write host disk drivers, as well as the firmware within the SAN controller. Networking drivers might be a bit more tricky; I haven't looked at those device interfaces in a while, and you'd need to tie those drivers into the host network stacks. Once you have some or all of that working, then the hosts can see the block storage out on the SAN. If you need sharing, you'll need to sort out a cluster or SAN file system to run atop the block storage or atop the network connection you've built; a file system that can coordinate distributed access.
    Possible? Sure. On a budget? Probably not.

  • Add Fibre Channel LUN to Server 2012 R2

    I'm having difficulty configuring SAN storage attached to a pair of 2012 R2 hosts. I'm building a two-node Hyper-V cluster. Bear in mind this is an operation that I'm very familiar with using VMware. I'm simply trying to get the Windows hosts to see the LUN so that I can configure it as a shared disk in the Hyper-V cluster. This is IBM System x3850 storage. The hosts each have Emulex LPe 12000-M8 HBAs. Also, I'm pretty sure that the disk is visible from the HBAs in the OneCommand utility. <see below>
    I've tried to run the mpclaim -e command and it returns "There is no enterprise storage connected on the system".
    What am I not doing? MPIO? I've tried adding a device using the 8x16 rule "IBM     5639            "
    I can't find any documentation that explains how to do this in Server 2012 R2 ANYWHERE!!!! I'm pulling my hair out and I don't have that much to lose!! :)
    Please help!
    Thank you,
    Roger 

    Again, this is not a Windows issue. I am not familiar with the IBM utility, so I don't know what that is showing. I just know that what is critical to accessing Fibre Channel LUNs is that you have your SAN network zoned to allow the WWPN of your HBA to talk to the WWPN of the Fibre Channel controllers, and then on the controller you need to mask the LUN to expose it to the WWN of your server. If that is not done, Windows will not see anything. Once it is done, Windows sees the disk as if it were a local disk.
    I do see that you have two HBAs, and some of the WWN information seems to be duplicated. Most likely you are trying to configure some form of MPIO. I am assuming that is something provided by the IBM software, but it must be working properly; otherwise, once the zone/mask is correct, you will see each LUN two or more times, depending upon your Fibre configuration.
    . : | : . : | : . tim
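    Once zoning and LUN masking are in place, a quick sanity check from the Windows side looks roughly like this (a sketch; the vendor/product string is illustrative and must be copied exactly, padded to 8 and 16 characters, from the mpclaim -e output):
    rem rescan and list the hardware IDs of storage the host can now see
    mpclaim -e
    rem claim the array for MPIO (example ID only; use the exact string from -e)
    mpclaim -r -i -d "IBM     5639            "
    rem after the reboot, confirm MPIO owns the paths
    mpclaim -s -d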

  • Live Migration : virtual Fibre Channel vSAN

    I can do live migration from one node to another with no error. The problem/question I have is: is live migration really live migration?
    When I do a live migration from the cluster or SCVMM, it saves and starts the virtual machine, which for me is not live migration.
    I have described it in more detail here: http://social.technet.microsoft.com/Forums/en-US/a52ac102-4ea3-491c-a8c5-4cf4dd14768d/synthetic-fibre-channel-hba-live-migration-savestopstart?forum=winserverhyperv
    BlatniS

    Virtual Fibre Channel made sense in pre-R2 times, when there was no shared VHDX and you had to somehow provide fault-tolerant shared storage to a guest VM cluster (spawning an iSCSI target on top of FC was slow and ugly). Now there's no point in putting one into production, so if you have issues just use shared VHDX. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Shared VHDX is much more flexible and has better performance. 
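    For reference, attaching a shared VHDX to two guest-cluster nodes is roughly the following (a sketch, not a full procedure; the VM names and CSV path are placeholders, and the file must sit on a Cluster Shared Volume or an SMB 3.0 share):
    # create the shared data disk on a CSV (path and size are illustrative)
    New-VHD -Path C:\ClusterStorage\Volume1\GuestClusterData.vhdx -Fixed -SizeBytes 100GB
    # attach it to both guest VMs with sharing (persistent reservations) enabled
    Add-VMHardDiskDrive -VMName GuestNode1 -Path C:\ClusterStorage\Volume1\GuestClusterData.vhdx -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName GuestNode2 -Path C:\ClusterStorage\Volume1\GuestClusterData.vhdx -SupportPersistentReservations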
    Good luck!

  • One very basic question on Shared Storage

    For setting up 11GR2 in Sun 5.10 SPARC, our sys admins have allocated block devices in shared storage and they sent us a mail mentioning the disks. This is what their mail looks like
    Storage attached to Node1
    /dev/rdsk/c4ikdzs3
    /dev/rdsk/c4ikdzs4
    Storage attached to Node2
    /dev/rdsk/c5ikdzj2
    /dev/rdsk/c5ikdzj5
    When I logged in to Node1 and looked in /dev/rdsk, I can only see
    /dev/rdsk/c4ikdzs3
    /dev/rdsk/c4ikdzs4
    as they have mentioned. The same goes for Node2.
    But all the raw devices mentioned above should be visible from either node's /dev/rdsk, right? Isn't that the whole point of RAC: shared storage?

    Not exactly, if you are referring to device names.
    Each kernel will do a h/w discovery when booting. When dealing with LUNs via an HBA (or similar), there's absolutely no guarantee that the kernels will detect the LUNs in the same sequence and assign the same scsi device names to them. A LUN can be called device-foo-1 on one server and device-foo-21 on another.
    Also, many HBAs will be dual port and running dual fibre channels. So not only is the same LUN seen as a different scsi device by each kernel, but it is seen more than once. So device-foo-1 and device-foo-33 can be the same physical LUN on server 1.
    To deal with this a logical device name is needed. This will be the same device name on all servers - and it will in turn transparently support the multiple I/O paths to the LUN. This is done by "special driver" software looking at the scsi disk's unique signature - called a WWID or World Wide Name. With that unique signature, the s/w can uniquely recognise a specific LUN, irrespective of which server the s/w runs on.
    This s/w is called Multipath on Linux, Powerpath by EMC and so on. I would expect that you will have something similar on your servers.
    The actual scsi device mapped by the kernel to the LUN is not used. In the case of multipath for example, one will use the relevant +/dev/mpath/mpath<n>+ devices. In the case of Powerpath, these will be +/dev/emcpower<driveletter>+ devices.
    These are the device names that you will use for 11g Grid Infrastructure and Oracle ASM and RAC setup and configuration as shared storage.
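    On Solaris 10, for example, MPxIO exposes those logical names and lets you confirm that both nodes are looking at the same LUN; a sketch (the device name below is a placeholder):
    # list multipathed logical units and the state of each path
    mpathadm list lu
    # show the WWN-based name and the physical paths behind one logical device
    mpathadm show lu /dev/rdsk/c4t600A0B8000EXAMPLE0000d0s2
    # map per-path device names to the MPxIO names (if MPxIO was enabled with stmsboot -e)
    stmsboot -L
    Run the same commands on both nodes: the WWN-based name should match even if the controller numbers differ.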

  • Need docs that explain Fibre Channel setup, getting I/O error on 2540 SAN

    Sun T5220 Host running Solaris 10 5/09 as management host.
    Qlogic 5602 FC Switch
    Sun Storagetek 2540 - one controller tray with 9 300G SAS Hitachi drives. Firmware 7.35.x.
    Sun branded Qlogic QLE2462 HBAs - PCI express, dual port. 3 in the T5220. qlcxxxx drivers for the HBAs.
    Sun Common Array Manager software version 6.5.
    I am a long-time Oracle DBA who has the task of setting up a Fibre Channel SAN. I am not a Solaris sysadmin, but have installed and maintained large databases on Solaris boxes where I had access to a competent sysadmin. I am at a classified site and cannot bring out electronic files with logs, configuration info, etc. to upload. Connecting the T5220 is the 1st box of many. This is my first exposure to HBA's, Fibre Channel, and SAN, so everything I know about it I have read in a manual or from a post somewhere. I understand the big picture and I have the SAN configured with 2 storage pools each with 1 volume in them on RAID5 virtual disks. I can see the LUN 0 on the T5220 server when I do a luxadm probe and when I do a format. I formatted one of the volumes successfully. Now I attempt to issue:
    newfs /dev/rdsk/device_name_from_output_of_luxadm_probe
    I get an immediate I/O error. I could be doing something totally naive or have a larger problem - this is where I get lost and the documentation becomes less detailed.
    What would be great is if anyone knows of a detailed writeup that would match what I'm doing or a good off-the-shelf textbook that covers all of this or anything close. I continue to search for something to bridge my lack of knowledge in this area. I am unclear about the initiators and targets beyond the fundamental definitions. I have used the CAM 6.5 software to define the initiators that it discovered. I have mapped the Sun host into a host group also. I do not know what role the Qlogic 5602 Fibre Channel switch plays with respect to initiators and targets or if it has any role at all. Is it just a "pass through" and the ports on the 5602 do not have to be included? Maybe I don't have the SAN volume available in read/write. I find bits and pieces in blogs and forums, but nothing that puts it all together. I also find that many of the notes on the web are not accurate.
    This all may appear simplistic to someone who works with it a lot and if you know of an obvious reference I should be using, a link or reply would be greatly appreciated as I continue to Google for information.

    Thanks for the reply. I had previously read the CAM 6.5 manual and have all the SAN configuration and mappings. Yesterday I was back at the site and was able to place a UFS filesystem on the exposed SAN LUN which was 0. I've not seen any reference to LUN 0 being a placeholder for the 2540 setup and when I assigned it, I allowed the CAM 6.5 software to choose "Next Available" LUN and it chose 0. LUN 31 on the 2540 is the "Access" LUN that is assigned automatically - perhaps it is taking the place of what you describe as the LUN 0 placeholder.
    I was able to put a new UFS filesystem on LUN 0 (newfs), mount it, and copy data to it. The disk naming convention that Solaris shows for the SAN disks is pretty wild and I usually have to reference a Solaris book on the standard scsi disk name formats. My question/confusion at the moment is that I have 3 Sun branded Qlogic HBA's in the Sun T5220 server - QLE2462 (dual port) with one port on two of the HBAs cabled to the Qlogic 5602 FC switch which is cabled to the A and B controller of the SAN 2540 - there are only 2 cables coming out of the 5220; the 3rd HBA (for future use) has no cables to it. Both ports show up as active and connected on the server down to the SAN and the CAM 6.5 software automatically identified both initiators (ports) on the Sun 5220 when I mapped them. I had previously mapped them to the Sun host, mapped the host to a host_group, virtual disks to volumes, volumes to....etc.; and was able to put data on the exposed volume named dev_vol1 which is a RAID5 virtual disk on the SAN.
    When I use the format command on Solaris, it shows two disks, and I assumed this represented the two ports from the same host 5220. I was able to put a label on one of these disks (dev_vol1), format it, and put data on it as noted above. When I select the other disk in the format menu, it is not formatted, won't allow me to put a label on it (I/O error), and I can go no further from there. The CAM 6.5 docs stop after they get you through the mapping and getting a LUN exposed. I continue on in a Solaris-centric mindset and try to do the normal label, format, newfs, mount routine, and it works for the one "disk" that format finds but not for the other. The format info on both the disks shows them as 1.09 TB, and that is the only volume mapped right now from the SAN, so I know it is the same SAN volume. It does not make sense that I would label it and format it again anyway, but is this what I am supposed to see - two disks (because of 2 ports?) and the ability to access it through only one? I found out by trial and error that I could label, format, and access the one. I did not do it from knowledge or looking at the information presented... I just guessed through it.
    I have not "bound" the 2 ports or HBAs in any way, and that is next on my list because I want to do multipathing and failover - I am just starting to read about that, so I may be using the wrong language. But before I go on to that, I am wondering if I am leaving something undone in the configuration that is going to hamper my success with multipathing, since I cannot do anything with the 2nd "disk" that has been exposed to Solaris from the SAN. I thought that after I labeled, formatted, and put a filesystem on the one "disk" I can write to, the other "disk" that shows up would just be another path to the same data via a 2nd initiator. Just writing that does not sound right, but I am trying to convey what I logically expected to see. Maybe the question should be: why am I seeing that 2nd "disk" in a Solaris format listing at all? I have not rebooted at any time during this process; I can easily do that and will today.
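    Seeing the same 2540 volume as two separate "disks" in format is the usual sign that each HBA port is presenting its own path. Enabling Solaris I/O multipathing (MPxIO) collapses them into a single device; a rough sketch, assuming stock Solaris 10 with the qlc driver (it needs a reconfiguration reboot, so check the 2540 release notes first):
    # enable MPxIO on all supported FC ports; the tool offers to reboot for you
    stmsboot -e
    # after the reboot, both paths should appear as one scsi_vhci device
    mpathadm list lu
    format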

  • Is shared storage provided by VirtualBox as good as or better than Openfiler?

    Grid version : 11.2.0.3
    Guest OS           : Solaris 10 (64-bit )
    Host OS           : Windows 7 (64-bit )
    Hypervisor : Virtual Box 4.1.18
    In the past , I have created 2-node RAC in virtual environment (11.2.0.2) in which the shared storage was hosted in OpenFiler.
    Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as Openfiler, I would definitely go for VirtualBox, as Openfiler requires a third VM (Linux) to be created just for hosting storage.
    For pre-RAC testing, I created a VirtualBox VM and created a stand-alone DB in it. The test below was done on VirtualBox's LOCAL storage (I am yet to learn how to create shared LUNs in VirtualBox).
    I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
    Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
    SQL> set timing on
    SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
    Tablespace created.
    Elapsed: 00:02:42.47
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $
    $ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    $ df -h /u01/app/hldat1/oradata/hcmbuat
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01

    Well, I once experimented with Openfiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all 3 on a desktop PC: Intel i7 2600K, 16GB memory).
    CPU/memory wasn't a problem, but as all 3 VMs were on a single HDD, performance was awful.
    I didn't really run any benchmarks, but a compressed full database backup with RMAN for an empty database (<1 GB) took something like 15 minutes...
    2 VMs + a VirtualBox shared disk on the same single HDD provided much better performance; I am still using this kind of setup for my sandbox RAC databases.
    edit: 6 GB in 2'42" is about 37 MB/sec
    with the above setup using Openfiler, it was nowhere near this
    edit2: made a little test
    host: Windows 7
    guest: 2 x Oracle Linux 6.3, 11.2.0.3
    hypervisor is VirtualBox 4.2
    PC is the same as above
    2 virtual cores + 4GB memory for each VM
    2 VMs + VirtualBox shared storage (single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
    created a 4 GB datafile (not enough space for 6 GB):
    {code}SQL> create tablespace test datafile '+DATA' size 4G;
    Tablespace created.
    Elapsed: 00:00:31.88
    {code}
    {code}RMAN> backup as compressed backupset database format '+DATA';
    Starting backup at 02-OCT-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
    input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
    input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
    input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
    input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 02-OCT-12
    {code}
    Now, I don't know much about Openfiler, and maybe I messed something up, but I think this is quite good, so I wouldn't use a 3rd VM just for the storage.
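    As a quick cross-check of raw sequential write speed outside Oracle (a sketch; the target path is a placeholder, and file-system caching will flatter the result unless the test file is comfortably larger than RAM):
    # time writing a 6 GB file in 1 MB blocks inside the guest
    time dd if=/dev/zero of=/u01/ddtest.bin bs=1048576 count=6144
    # clean up the test file
    rm /u01/ddtest.bin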

  • How to obtain target WWN for a LUN On Windows 2012 R2 Hyper-V with Fibre Channel SAN

    I have a large collection of Hyper-V hosts for which I would like to retrieve the target WWN and LUN ID for the LUNs visible on each host. By target WWN I mean the storage array's WWN.
    I have tried using the root\wmi namespace to retrieve MS_SMHBA_PORT_LUN instances, but I get 0 instances back.
    I get instances of MSFC_FibrePortHBAAttributes back, but those only tell me the local host's HBA WWN, not the target information.
    This information must be stored somewhere, because you can't access the LUN without it, so I hope there is a way to retrieve it. Is there? Or is it write-only during LUN setup?
     

    Hi,
    Have you checked these two postings:
    http://terrytlslau.tls1.cc/2011/08/how-to-find-world-wide-name-wwn-in.html
    http://pkjayan.wordpress.com/2011/08/17/world-wide-name-wwn-for-a-fibre-channel-hba-on-windows-server/
    You need the FCINFO tool, which you can get from:
    http://www.microsoft.com/en-us/download/details.aspx?id=17530
    Cheers
    Andrew
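    Note that the WMI classes and the articles above return the local HBA's WWPNs, not the array's. On 2012 R2 the local side can also be read without WMI; a sketch:
    # list local initiator ports; PortAddress is the HBA port WWPN
    Get-InitiatorPort | Select-Object InstanceName, NodeAddress, PortAddress
    For the target (array) WWPN and LUN IDs, the FCINFO tool linked above, or the storage array's own management software, is still the place to look.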

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it which can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups, and a TechNet guide is provided to set up and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification; is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate/write data between two clusters running a HAFS configuration, in a DFS replication group.
    If, for instance, as a test, local/logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, and replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering; however, it does seem to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also having the same scenario at one of my customer sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is also part of a CSV.
    It's really confusing whether DFS Replication on CSV is supported or not, and what the consequences would be if we use it.
    To my knowledge we have some customers who are using Hyper-V 2008 R2, and DFS has been configured and running fine on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul
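    When comparing the CSV-to-CSV and logical-volume cases, a per-folder backlog count makes the difference measurable. A sketch (the replication group, replicated folder, and member names are placeholders):
    rem count files queued from the sending member to the receiving member
    dfsrdiag backlog /rgname:"RG-LUN1" /rfname:"Folder1" /smem:NODE-A /rmem:NODE-B
    rem overall replication state on a member (available on newer DFSR versions)
    dfsrdiag replicationstate /member:NODE-A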

  • Fibre Channel

    Hi all
    Does anyone have experience with running Fibre Channel in Solaris? I mean SAN storage solutions, configuring HBAs, etc. I'm looking for documentation focused on Solaris. I searched docs.sun.com but found nothing about HBAs (there are only release notes).
    One more question: can I run IP through Sun's HBA? (BTW, it's a QLogic 2200, and on QLogic's site I found that it is possible, but I didn't find out how.)
    thanks a lot
    robbie
    Robert Hecko
    [email protected]
    HT Computers
    Slovakia, Europe

    You'll need to start with the qla.conf and sd.conf files. What type of SAN are you attaching to?
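    As a very rough illustration only (driver names and binding syntax differ between the Sun-supplied and QLogic-supplied driver packages, so treat these entries as placeholders and follow the driver's README), extra LUN entries in /kernel/drv/sd.conf of that era looked like:
    # make the sd driver probe LUNs beyond 0 on the FC target
    name="sd" class="scsi" target=0 lun=1;
    name="sd" class="scsi" target=0 lun=2;
    Persistent target bindings go in the HBA driver's own .conf file (qla2200.conf for that card), and a reconfiguration reboot (reboot -- -r) picks the changes up.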
