ISCSI question

Hi
I'm trying to set up my iMac and MacBook to both connect via iSCSI to the same iSCSI target.
I've successfully installed the GlobalSAN software on both machines, initialised the disk and formatted it as HFS+. I can write files to the volume without problems. However, when I connect the second Mac to the target, changes made on one Mac don't show up on the other (each seems to keep its own view of the file system).
So my questions are as follows:
+Has anyone ever used GlobalSAN with multiple hosts connecting to the same Target?
+If this isn't supported, can anyone else recommend a better iSCSI initiator? (I'm happy to pay a _little_)
Any help would be appreciated.
David

+Has anyone ever used GlobalSAN with multiple hosts connecting to the same Target?
Bad idea. I foresee file corruption on a massive scale.
iSCSI is a way to pretend you have direct access to the physical disk. All file locking and integrity software is NOT running on the system serving up the disk. If you have 2 (or more) systems attempting to write (and allocate storage) on an iSCSI disk, they will NOT coordinate any of their activities, which will result in each system blindly corrupting the work of the other. The file system stored on the iSCSI disk will become hopelessly corrupted.
The 2 common solutions for multiple systems accessing the same disk are:
a) Networked file sharing. This solution uses a protocol such as NFS, SMB/CIFS, AFP, etc. This could be a dedicated Network Attached Storage box, or another Mac, Windows, or Unix/Linux system sharing its files.
b) A clustered file system. This is generally a tightly coupled set of systems that communicate locking information between themselves, so that only one system is ever attempting to write to any given area of the disk, and so that a system attempting to read will not do so while a critical section is being updated. The systems also communicate so that cached data gets flushed when the contents are needed by another system. Digital Equipment Corporation's VAX/VMS systems were one of the first successful clustered file systems.
As a file system developer (my day job), I've worked on both styles, and when I say performing uncoordinated I/O to a physical disk leads to file system corruption, I speak from experience.
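If you want to keep the iSCSI volume, the practical version of option (a) is to let one Mac own the iSCSI session and share the mounted volume to the other over the network (AFP/SMB via System Preferences, or NFS from the command line). A rough sketch of the NFS route, assuming the volume is mounted at /Volumes/iSCSI on the iMac and the MacBook can reach the iMac as imac.local; paths, hostnames and the subnet are placeholders:

    # On the iMac (the only machine logged in to the iSCSI target):
    echo '/Volumes/iSCSI -alldirs -network 192.168.1.0 -mask 255.255.255.0' | sudo tee -a /etc/exports
    sudo nfsd enable          # start the NFS server (use 'sudo nfsd update' if it is already running)
    sudo nfsd checkexports    # sanity-check the exports file

    # On the MacBook:
    mkdir -p ~/iscsi-share
    sudo mount -t nfs -o resvport imac.local:/Volumes/iSCSI ~/iscsi-share

Only the iMac stays connected to the iSCSI target, so all file locking happens on one machine and the corruption problem described above goes away.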

Similar Messages

  • SCSI disk to iSCSI question

    Greetings,
    Perhaps someone here may have an answer to this. The other day someone approached me and asked whether it would be possible to somehow make a SCSI disk array available as iSCSI. It seems that HP has some sort of solution for this, but we did not discuss it all that fully. I think it might be the MSA1510i.
    This led me to wonder: does Cisco have anything like this?
    I know there are other SCSI-to-FC routers available, but I am not sure if they can provide iSCSI. I played with these devices a few years ago and did not rate them all that much, as they seemed a bit flaky.
    Any ideas on this?
    Stephen

    Stephen,
    The MDS 9216i and the MDS IPS and MPS 14+2 modules can do iSCSI. These are the same products that can do FCIP. They act as a gateway between an iSCSI initiator and an FC target. This capability has been in MDS since 2003; it superseded the Cisco SN54xx storage router.
    Dallas
    TAC

  • Cisco UCS iSCSI boot question

    I am having trouble getting this to work.  I don't control the iSCSI side of the equation, so I am just trying to make sure everything is correct on the UCS side.  When we boot we get the "Initialize error 1" message.
    If I attach to the FI via SSH I am able to ping the target IPs.  The SAN administrator says that the UCS has to register itself first (which isn't occurring) before he can give it space.  Everything I have seen suggests the LUNs and mask are created prior to boot... is this required?
    Thanks,
    Joe

    UCSM version
    2.0(2r)
    We have another blade in the other chassis that has ESX installed on the local drive and is using the iSCSI SAN as its datastore, therefore I know I have connectivity to the SAN.  The storage array is an EMC CX4-120.
    adapter 2/1/1 (mcp):1# iscsi_get_config
    vnic iSCSI Configuration:
    vnic_id: 5
              link_state: Up
           Initiator Cfg:
         initiator_state: ISCSI_INITIATOR_READY
    initiator_error_code: ISCSI_BOOT_NIC_NO_ERROR
                    vlan: 0
             dhcp status: false
                     IQN: iqn.1992-04.com.cisco:2500
                 IP Addr: 192.168.0.109
             Subnet Mask: 255.255.255.0
                 Gateway: 192.168.0.2
              Target Cfg:
              Target Idx: 0
                   State: INVALID
              Prev State: ISCSI_TARGET_GET_SESSION_INFO
            Target Error: ISCSI_TARGET_LOGIN_ERROR
                     IQN: iqn.1992-04.com.emc:cx.apm00101701431:a2
                 IP Addr: 192.168.0.2
                    Port: 3260
                Boot Lun: 0
              Ping Stats: Success (19.297ms)
              Target Idx: 1
                   State: INVALID
              Prev State: ISCSI_TARGET_GET_SESSION_INFO
            Target Error: ISCSI_TARGET_LOGIN_ERROR
                     IQN: iqn.1992-04.com.emc:cx.apm00101701431:b2
                 IP Addr: 192.168.0.3
                    Port: 3260
                Boot Lun: 0
              Ping Stats: Success (18.229ms)
    adapter 2/1/1 (mcp):2#
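    One way to rule out array-side registration/masking, independently of UCS, is to borrow the blade's initiator IQN on any Linux box running open-iscsi and see whether the CX4 lets that IQN discover and log in. A rough sketch using the IQN and portal addresses from the output above; this is generic open-iscsi syntax, not the UCS adapter console:
    # Impersonate the UCS initiator so the array sees the same IQN
    echo "InitiatorName=iqn.1992-04.com.cisco:2500" > /etc/iscsi/initiatorname.iscsi
    service iscsid restart                      # or: systemctl restart iscsid
    # Discovery should list both SP ports if the IQN is registered/masked correctly
    iscsiadm -m discovery -t sendtargets -p 192.168.0.2:3260
    # Try an actual login against SP A; a failure here points at array-side configuration
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00101701431:a2 -p 192.168.0.2:3260 --login
    iscsiadm -m session -P 3                    # confirm the session and any LUNs presented
    If the login fails there too, the host still has to be registered (and a LUN masked to it) on the CX4 before the UCS boot will get past ISCSI_TARGET_LOGIN_ERROR.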

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (Previous Versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine and we see no real difference in startup, file copy and processing speed in LoB applications compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding or respond very slowly, and you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. things that happen, or do not happen, at the same time)
    If we look at Resource Monitor on the host, we see that there is often an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime when the backup finished hours ago according to the log files). There are, however, no comparably extensive writes to the backup file that is created on an external hard drive, and this does not seem to happen during all backups (we have manually checked a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase of errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in the VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be something in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it’s okay, but if this is a production environment, it’s not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2. Although Windows Server 2012 has a built-in network teaming feature, I have not found an article declaring that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen that using MPIO suggests using different subnets; is this a requirement for using MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wiring for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • New UCS and VMware setup Questions

    We are currently in the process of migrating our VMware infrastructure from HP to UCS.  We are utilizing the Virtual Connect adapters for the project.  With the migration we also plan on implementing the Cisco Nexus 1000V in our environment.  I have demo equipment set up and have had a chance to install a test environment, but still have a few design questions.
    When implementing the new setup, what is a good base setup for the Virtual Connect adapters with the 1000V?  How many NICs should I dedicate?  Right now I run 6 NICs per server (2 console, 2 virtual machines, and 2 VMotion).  Is this a setup I should continue with going forward?  The only other thing I am looking to implement is another set of NICs for NFS access.  In a previous setup at a different job, we had 10 NICs per server (2 console, 4 virtual machines, 2 VMotion and 2 iSCSI).  Is there any kind of standard for this setup?
    The reason I am asking is that I want to get the most out of my VMware environment, as we will be looking to migrate Tier 1 app servers once we get everything up and running.
    Thanks for the help!

    Tim,
    Migrating from HP Virtual Connect (VC) -> UCS might change your network design slightly, for the better of course.  Not sure if you're using 1G or 10G VC modules, but I'll respond as if you're using 10G modules because this is what UCS will provide. VC modules provide a 10G interface that you can logically chop up into a max of 4 host vNIC interfaces totaling 10G. Though it's handy to divide a single 10G interface into virtual NICs for Service Console, VMotion, iSCSI etc., this creates the opportunity for wasted bandwidth.  The logical NICs VC creates impose a maximum bandwidth limit on the adapter.  For example, if you create a 2Gb interface for your host to use for vMotion, then 2Gb of your 10G pipe is wasted when there are no vMotions taking place!
    UCS & 1000v offer a different solution in terms of bandwidth utilization by means of QoS.  We feel it's more appropriate to specify a "minimum" bandwidth guarantee rather than a hard upper limit, which leads to wasted pipe.  Depending on which UCS blade and mezz card option you have, the # of adapters you can present to the host varies.  B200 blades can support one mezz card (with 2 x 10G interfaces) while the B250 and B440 are full-width blades and support 2 mezz cards.  In terms of mezz cards, there are the Intel/Emulex/QLogic/Broadcom/Cisco VIC options.  In my opinion the M81KR (VIC) is best suited for virtualized environments, as you can present up to 56 virtual interfaces to the host, each having various levels of QoS applied.  When you roll the 1000v into the mix you have a lethal combination, adding some of the new QoS features that automatically match traffic types such as Service Console, iSCSI, VMotion etc.  See this thread for a list/explanation of new features coming in the next version of 1000v due out in a couple weeks https://www.myciscocommunity.com/message/61580#61580
    Before you think about design too much, tell us what blades & adapters you're using and we can offer some suggestions for setting them up in the best configuration for your virtual infrastructure.
    Regards,
    Robert
    BTW - Here's a couple Best Practice Guides with UCS & 1000v that you might find useful.

  • Questions before an internal lab POC (on old underperformant hardware)

    Hello VDI users,
    NB: I initially asked this list of questions to my friends at the
    SunRay-Users mailing list, but then I found this forum as a
    more relevant place.
    It's a bit long and I understand that many questions may have
    already been answered in detail on the forum or VDI wiki. If it's
    not too much a burden - please just reply with a link in this case.
    I'd like this thread to become a reference of sorts to point to our
    management and customers.
    I'm growing an interest to try out a Sun VDI POC in our lab
    with VirtualBox and Sunrays, so I have a number of questions
    popping up. Not all of them are sunray-specific (in fact, most
    are VirtualBox-related), but I humbly hope you won't all flame
    me for that?
    I think I can get most of the answers by experiment, but if
    anyone feels like sharing their experience on these matters
    so I can expect something a priori - you're welcome to do so ;)
    Some questions involve best practices, however. I understand
    that all mileages vary, but perhaps you can warn me (and others)
    about some known-not-working configurations...
    1) VDI core involves a replicated database (such as MySQL)
    for redundant configuration of its working set storage...
    1.1) What is the typical load on this database? Should it
    just be "available", or should it also have a high mark
    in performance?
    For example, we have a number of old Sun Netra servers
    (UltraSPARC-II 450-650MHz) which even have a shared SCSI
    JBOD (Sun StorEdge S1 with up to 3 disks).
    Would these old horses plow the field well? (Some of them
    do run our SRSS/uttsc tasks okay)
    1.2) This idea seems crippled a bit anyway - if the database
    master node goes down, it seems (but I may be wrong) that
    one of the slave DB nodes should be promoted to a master
    status, and then when the master goes up, their statuses
    should be sorted out again.
    Or the master should be made HA in shared-storage cluster.
    (Wonder question) Why didn't they use Sun DSEE or OpenDS
    with built-in multimaster replication instead?
    2) The documentation I've seen refers to a specific version
    of VirtualBox - 2.0.8, as the supported platform for VDI3.
    It was implied that there were specific features in that
    build made for Sun VDI 3 to work with it. Or so I got it.
    2.1) A few versions rolled out since that one, 3.0 is out
    now. Will they work together okay, work but as unsupported
    config, not work at all?
    2.2) If specifically VirtualBox 2.0.8 is to be used, is there
    some secret build available, or the one from Old Versions
    download page will do?
    3) How much a bad idea is it to roll out a POC deployment
    (up to 10 virtual desktop machines) with VirtualBox VMs
    running on the same server which contains their data
    (such as Sun Fire X4500 with snv_114 or newer)?
    3.1) If this is possible at all, and if the VM data is a
    (cloned) ZFS dataset/volume, should a networked protocol
    (iSCSI) be used for VM data access anyway, or is it
    possible (better?) to use local disk access methods?
    3.2) Is it possible to do a POC deployment (forfeiting such
    features as failover, scalability, etc.) on a single
    machine altogether?
    3.3) Is it feasible to extend a single-machine deployment
    to a multiple-machine deployment in the future (that
    is, without reinstalling/reconfiguring from scratch)?
    4) Does VBox RDP server and its VDI interaction with SRSS
    have any specific benefits to native Windows RDP, such
    as responsiveness, bandwidth, features (say, microphone
    input)?
    Am I correct to say that VBox RDP server enables the
    touted 3D acceleration (OpenGL 2.0 and DX8/9), and lets
    connections over RDP to any VM BIOS and OSes, not just
    Windows ones?
    4.1) Does the presence of a graphics accelerator card on
    the VirtualBox server matter for remote use of the VM's
    (such as through a Sun Ray and VDI)?
    4.2) Concerning the microphone input, as the question often
    asked for SRSS+uttsc and replied by RDP protocol limits...
    Is it possible to pass the audio over some virtualized
    device for the virtual machine? Is it (not) implemented
    already? ;)
    5) Are there known DO's and DONT's for VM desktop workloads?
    For example, simply office productivity software users
    and software developers with Java IDEs or ongoing C/C++
    compilations should have different RAM/disk footprints.
    Graphics designers heavy on Adobe Photoshop are another
    breed (which we've seen to crawl miserably in Windows
    RDP regardless of win/mstsc or srss/uttsc clients).
    Can it be predicted that some class of desktops can
    virtualize well and others should remain "physical"?
    NB: I guess this is a double-question - on virtualization
    of remote desktop tasks over X11/RDP/ALP (graphics bound),
    as well as a question on virtualization of whole desktop
    machines (IO/RAM/CPU bound).
    6) Are there any rule-of-thumb values for virtualized
    HDD and networking filesystems (NFS, CIFS) throughput?
    (I've seen the sizing guides on VDI Wiki; anything else
    to consider?)
    For example, the particular users' data (their roaming
    profiles, etc.) should be provisioned off the networked
    storage server, temporary files (browser caches, etc.)
    should only exist in the virtual machine, and home dirs
    with working files may better be served off the network
    share altogether.
    I wonder how well this idea works in real life?
    In particular, how well does a virtualized networked
    or "local" homedir work for typical software compile
    tasks (r/w access to many small files)?
    7) I'm also interested in the scenario of VMs spawned
    from "golden image" and destroyed after logout and/or
    manually (i.e. after "golden image"'s update/patching).
    It would be interesting to enable the cloned machine
    to get an individual hostname, join the Windows domain
    (if applicable), promote the user's login to the VM's
    local Administrators group or assign RBAC profiles or
    sudoer permissions, perhaps download the user's domain
    roaming profile - all prior to the first login on this
    VM...
    Is there a way to pass some specific parameters to the
    VM cloning method (i.e. the user's login name, machine's
    hostname and VM's OS)?
    If not, perhaps there are some best-practice suggestions
    on similar provisioning of cloned hosts during first boot
    (this problem is not as new as VDI, anyway)?
    (See the sketch after this post for one possible approach.)
    8) How great is the overhead (quantitative or subjective)
    of VM desktops overall (if more specific than values in
    sizing guide on Wiki)? I've already asked on HDD/networking
    above. Other aspects involve:
    How much more RAM does a VM-executing process typically
    use than is configured for the VM? In the JavaOne demo
    webinar screenshots I think I've seen a Windows 7 host
    with 512Mb RAM, and a VM process sized about 575Mb.
    The Wiki suggests 1.2 times more. Is this a typical value?
    Are there "hidden costs" in other VBox processes?
    How efficiently is the CPU emulated/provided (if the VBox
    host has the relevant VT-x extensions), especially for
    such CPU-intensive tasks as compilation?
    *) Question from our bookkeeping team:
    Does creating such a POC lab and testing it in office's
    daily work (placing some employees or guests in front of
    virtual desktops instead of real computers or SR Solaris
    desktops) violate some licenses for Sun VDI, VirtualBox,
    Sun Rays, Sun SGD, Solaris, etc? (The SRSS and SSGD are
    licensed; Solaris is, I guess, licensed by the download
    form asking for how many hosts we have).
    Since all of the products involved (sans SGD) don't need
    a proof of license to install and run, and they can be
    downloaded somewhat freely (after quickly clicking thru
    the tomes of license agreements), it's hard for a mere
    admin to reply such questions ;)
    If there are some limits (# of users, connections, VMs,
    CPUs, days of use, whatever) which differentiate a legal
    deployment for demo (or even legal for day-to-day work)
    from a pirated abuse - please let me know.
    //Jim
    Edited by: JimKlimov on Jul 7, 2009 10:59 AM
    Added licensing question
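    On question 7: one low-tech way to pass per-clone parameters is VBoxManage guest properties, which a first-boot script inside the guest can read and use to set the hostname, join the domain, and so on. A rough sketch with hypothetical names; the exact sub-commands depend on the VirtualBox version (clonehd and guestproperty appear in the 2.x/3.x CLI, and disk attachment syntax changed in later releases):
    # Clone the golden image's disk and register a new VM around it
    VBoxManage clonehd golden.vdi desk01.vdi
    VBoxManage createvm --name desk01 --register
    VBoxManage modifyvm desk01 --memory 2048 --nic1 bridged --bridgeadapter1 e1000g0   # host NIC name is a placeholder
    VBoxManage modifyvm desk01 --hda desk01.vdi        # older CLI; newer releases use storageattach
    # Stash per-clone parameters where the guest can read them at first boot
    VBoxManage guestproperty set desk01 /Custom/Hostname desk01
    VBoxManage guestproperty set desk01 /Custom/Domain example.com
    # Inside the guest (requires Guest Additions), a first-boot script can then run:
    #   VBoxControl guestproperty get /Custom/Hostname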


  • Network Questions on 2012 R2 Hyper-V Cluster

    I am going through the setup and configuration of a clustered Windows Server 2012 R2 Hyper-V host. 
    I’ve followed as much documentation as I can find, and the Cluster Validation is passing with flying colors, but I have three questions about the networking setup.
    Here’s an overview as well as a diagram of our configuration:
    We are running two Server 2012 R2 nodes on a Dell VRTX Blade Chassis. 
    We have 4 dual-port 10 GbE Intel NICs installed in the VRTX chassis. 
    We have two Netgear 12-port 10 GbE switches, both uplinked to our network backbone switch.
    Here’s what I’ve done on each 2012 R2 node:
    -Created a NIC team using two 10GBe ports from separate physical cards in the blade chassis.
    -Created a Virtual Switch using this team called “Cluster Switch” with “ManagementOS” specified.
    -Created 3 virtual NICs that connect to this “Cluster Switch”: 
    Management (10.1.10.x), Cluster (172.16.1.x), Live Migration (172.16.2.x)
    -Set up VLAN ID 200 on the Cluster NIC using PowerShell.
    -Set a bandwidth weight on each of the 3 NICs.  Management has 5, Cluster has 40, Live Migration has 20.
    -Set a Default Minimum Bandwidth for the switch at 35 (for the VM traffic.)
    -Created two virtual switches for iSCSI both with 
    “-AllowManagementOS $false” specified.
    -Each of these switches is using a 10GBe port from separate physical cards in the blade chassis.
    -Created a virtual NIC for each of the virtual switches: 
    ISCSI1 (172.16.3.x) and ISCSI2 (172.16.4.x)
    Here’s what I’ve done on the Netgear 10GB switches:
    -Created a LAG using two ports on each switch to connect them together.
    -Currently, I have no traffic going across the LAG as I’m not sure how I should configure it.
    -Spread out the network connections over each Netgear switch so traffic from the virtual switch “Cluster Switch” on each node is connected to both Netgear 10 GB switches.
    -Connected each virtual iSCSI switch from each node to its own port on each Netgear switch.
    First Question:  As I mentioned, the cluster validation wizard thinks everything is great. 
    But what about the traffic the Host and Guest VMs use to communicate with the rest of the corporate network? 
    That traffic is on the same subnet as the Management NIC. 
    Should the Management traffic be on that same corporate subnet, or should it be on its own subnet? 
    If Management is on its own subnet, then how do I manage the cluster from the corporate network? 
    I feel like I’m missing something simple here.
    Second Question:  Do I even need to implement VLANS in this configuration? 
    Since everything is on its own subnet, I don’t see the need.
    Third Question:  I’m confused how the LAG will work between the two 10 Gbe switches when both have separate uplinks to the backbone switch. 
    I see diagrams that show this setup, but I’m not sure how to achieve it without causing a loop.
    Thanks!

    "First Question:  As I mentioned, the cluster validation wizard thinks everything is great. 
    But what about the traffic the Host and Guest VMs use to communicate with the rest of the corporate network? 
    That traffic is on the same subnet as the Management NIC. 
    Should the Management traffic be on that same corporate subnet, or should it be on its own subnet? 
    If Management is on its own subnet, then how do I manage the cluster from the corporate network? 
    I feel like I’m missing something simple here."
    This is an operational question, not a technical question.  You can have all VM and management traffic on the same network if you want.  If you want to isolate the two, you can do that, too.  Generally, recommended
    practice is to create separate networks for host management and VM access, but it is not a strict requirement.
    "Second Question:  Do I even need to implement VLANS in this configuration? 
    Since everything is on its own subnet, I don’t see the need."
    No, you don't need VLANs if separation by IP subnet is sufficient.  VLANs provide a level of security against snooping that simple subnet isolation does not provide.  Again, up to you as to how you want to configure things. 
    I've done it both ways, and it works both ways.
    "Third Question:  I’m confused how the LAG will work between the two 10 Gbe switches when both have separate uplinks to the backbone switch. 
    I see diagrams that show this setup, but I’m not sure how to achieve it without causing a loop."
    This is pretty much outside the bounds of a clustering question.  You might want to take network configuration questions to a networking forum.  Or, you may want to talk with a Netgear specialist.  Different networking
    vendors can accomplish this in different ways.
    .:|:.:|:. tim

  • ISCSI connections for guests: how to set up?

    A couple of questions:
    1. If we wanted to set up iSCSI connections for guests such as SQL servers, what is the best way to handle this? For example, if we had four 10-Gb NICs and wanted to use as few of them as possible, is it common to turn two of the NICs into Virtual Switches
    accessible by the OS, then use these to connect both the host and the SQL guests? Or would the best option be to use two 10-Gb NICs for the Hyper-V Host's iSCSI connections only, and use the other two 10-Gb NICs as virtual switches which are dedicated
    to the SQL server iSCSI connections?
    2. I know MPIO should be used for storage connections instead of teaming; if two NICs are teamed as a virtual switch, however, does this change anything? For example, if a virtual switch is created from a NIC team of two 10-Gb NICs, is it acceptable to create
    an iSCSI connection on a network adapter created on that virtual switch?

    " If we wanted to set up iSCSI connections for guests such as SQL servers, what is the best way to handle this?"
    Don't.   Use VHDX files instead.  A common reason for using iSCSI for SQL was to allow for shared storage in a SQL cluster.  2012 R2 introduces the capability to use shared VHDX files.  It is much easier to set up and will likely give you as good or better performance than iSCSI.
    But, if you insist on setting it up, set it up the same as you would on the host.  Two NICs on different subnets configured with MPIO. (Unless using Dell's software which forces a non-standard configuration of both NICs on the same subnet).  Teamed
    NICs are not supported.  For a purely best practice configuration, yes, it makes sense to have separate NICs for host and guest, but it is not an absolute requirement.
    .:|:.:|:. tim

  • Drive does not appear in ISCSi volumes

    Hi,
    I have configured a Windows Server 2012 R2 OS with 2 disks. The server is a VM. Both disks are SCSI. However, the data disk does not show in the iSCSI setup.
    Screenshot of the part in question is here:
    https://weikingteh.files.wordpress.com/2012/12/image13.png
    Any help appreciated

    Sorry for the delay. However I still cannot confirm the exact status. 
    I assume Drive S is not the data drive. Please open Disk Management to see if your data disk is listed with a drive letter. Also test whether you can access it correctly.
    Also, though you mentioned these 2 disks are SCSI disks, are they physical disks on the host (physical) computer, or are they 2 VHD files created on the host's hard disk and attached in Hyper-V?

  • Some questions for ressources provisionning.

    Hello people.
    I'm currently playing around with LDoms on a new brilliant shiny T5520 and I have some questions about it.
    My actual configuration is the following:
    - 1 T5520 server with 32 GB RAM and an 8-core, 64-thread Niagara T2
    - 4 SAS 143 GB disks on controller 1.
    The disks are mirrored using raidctl into two RAID 1 pairs, c1t0d0 and c1t2d0.
    The primary OS is installed on c1t0d0 and I plan to use ZFS for creating vdisks for LDoms guests on c1t2d0.
    I have access to a netapp but only with iscsi and nfs.
    So, the questions:
    - Does it make sense to use ZFS on hard mirrored disks ?
    - Is it best to add the two disks non mirrored to my ZFS pool and backup snapshots on the netapp ?
    - Do my LDoms need more memory than I planned? Each guest will have between 2 GB and 4 GB for the bigger ones. I will have, at the beginning, 4 or 5 guests for testing / integration, LDAP / DNS services, licence services, a web portal, and maybe, if I can convince my boss, a little SPARC Gentoo Linux, just for fun :)
    Thanks :)
    PS: Forgive my poor English, I'm a French bas***d guy :)

    You mean T5220, great machine anyway :)
    - Hardware RAID is supposed to be faster than ZFS, but ZFS offers more functionality.
    - You need both mirroring and backups; even if ZFS snapshots are nice, they cannot replace backup software.
    - Keep a pool of memory unallocated and ready to increase the RAM size of an LDom (rebooting the LDom will be necessary). And hum... use it to play with Linux for SPARC :-)
    Regards
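    A rough sketch of how the backup and memory points could look in practice, with hypothetical pool, dataset and domain names (the NetApp is assumed to be mounted over NFS at /net/netapp/ldom-backups):
    # Snapshot the ZFS volume backing a guest's virtual disk
    SNAP=ldompool/guest1-disk0@backup-$(date +%Y%m%d)
    zfs snapshot "$SNAP"
    # Stream the snapshot to a file on the NetApp NFS share (an off-box copy, not a replacement for backup software)
    zfs send "$SNAP" | gzip > /net/netapp/ldom-backups/guest1-disk0-$(date +%Y%m%d).zfs.gz
    # Later, grow a guest from the memory you kept unallocated on the T5220
    ldm set-memory 4G guest1    # a reboot of the domain may be required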

  • ISCSI MPIO, how many path do I need to create

    Hi,
    I have a server with 4 NICs connecting to a Dell MD32xx which has 8 NICs.
    My question is how many paths do I need to create under the iSCSI connection.
    Do I need to create a path from each server NIC to each MD32xx NIC, which would make 32 connections (and doesn't make sense)?
    If not, how should I proceed? I've looked at many examples and none seem to cover that kind of situation; they just directly connect the server NICs to the MD32xx NICs instead of going through a switch for redundancy.
    Thanks
    ML

    Please follow the guides and discussions here:
    Windows MPIO Setup for Dual Controller SAN
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3fa0942e-7d07-4396-8f2e-31276e3d6564/windows-mpio-setup-for-dual-controller-san?forum=winserverfiles
    That thread is about an MD3260, but it applies to MD32xx storage units with iSCSI uplinks.

  • Quick question regarding best practice and dedicating NIC's for traffic seperation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT; however, I just wondered if there was a preferred method of achieving this.  What I mean is ...
    -     Is it OK to have everything on one switch but set each respective port group to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal; however, I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen?  Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Windows 7 answer file deployment and iscsi boot

    Hi, I am trying to prepare an image with Windows 7 Enterprise that has been installed, went through "audit" mode and was then shut down with:
    OOBE+Generalize+Shutdown
    So that I can clone this image and the next time it boots, it will use the answer file to customize, join the domain, etc.
    The complication is that I am using iSCSI boot for my image, and working within VMware ESX.
    I can install Windows without any issues, get the drivers properly working, reboot and OOBE on the same machine - no issues.
    The problems come when I clone the VM, and the only part that changes (that I think really matters) is the MAC address of the network card. When the new clone comes up after the OOBE reboot, it hangs for about 10 minutes and then proceeds without joining the domain.
    Using Panther logs and network traces, I saw that the domain join command was timing out and in fact no traffic was being sent to the DC. So the network was not up. The rest of the answer file customization works fine.
    As a test I brought up this new clone (with the new MAC) in audit mode, and Windows reported that it found and installed drivers for a new device - VMXNET3 Driver 2. So in fact it does consider this a new device.
    Even though it iSCSI boots from this new network card, later in the process it's unable to use it until the driver is reinstalled.
    In my answer file I tried with and without the portion below, but it didn't help:
    <settings pass="generalize">
            <component>
                <DoNotCleanUpNonPresentDevices>true</DoNotCleanUpNonPresentDevices>
                <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
            </component>
        </settings>
    I also tried with the E1000 NIC, but couldn't get Windows to boot properly after the CD-ROM installation part.
    So my question - is my only option to use workarounds like post-OOBE scripts for the join etc.? 
    Is it possible to let Windows boot, initiate an extra reboot once the driver is installed, and then allow it to go to the Customize phase?
    thank you!

    Hi,
    This might be caused by the iSCSI boot.
    iSCSI Boot is supported only on Windows Server. Client versions of Windows, such as Windows Vista® or Windows 7, are not supported.
    For detailed information, please check:
    About iSCSI Boot
    Best regards
    Michael Shao
    TechNet Community Support

  • ISCSI device mapping on Solaris 10

    Hi,
    A brief overview of my situation:
    I have 3 Oracle Solaris X86-64 virtual machines that I'm using for testing. I have configured a ZFS storage pool on one of them (named solastorage), that will serve as my iSCSI target. The remaining 2 servers (solarac1 and solarac2) are meant to be my RAC nodes for a test Oracle 11g R2 RAC installation.
    /etc/hosts listing:
    # ZFS iSCSI target
    192.168.247.150 solastorage solastorage.domain.com
    # RAC Public IPs
    192.168.247.131 solarac1 solarac1.domain.com loghost
    192.168.247.132 solarac2 solarac2.domain.com
    A brief overview of the steps carried out at solastorage (after enabling the iSCSI target service):
    zpool create rac_volume mirror c0d1 c1d1
    zfs create -V 0.5g rac_volume/ocr
    zfs create -V 0.5g rac_volume/voting
    zfs set shareiscsi=on rac_volume/ocr
    zfs set shareiscsi=on rac_volume/voting
    On both the RAC servers (of course I've enabled the iSCSI initiator service on both):
    iscsiadm modify discovery -t enable
    iscsiadm add discovery-address 192.168.247.150
    devfsadm -i iscsi
    After that, when I run format on both sides, I can see the following:
    solarac1# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci15ad,1976@10/sd@0,0
    1. c3t15d0 <DEFAULT cyl 509 alt 2 hd 64 sec 32>
    /iscsi/[email protected]%3A02%3A54d1d1d2-7154-ee78-94e6-c3d053ca7ab50001,0
    2. c3t16d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
    /iscsi/[email protected]%3A02%3Af109f049-9f76-6a16-c36a-d42c4d6818fe0001,0
    solarac2# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci15ad,1976@10/sd@0,0
    1. c2t2d0 <DEFAULT cyl 509 alt 2 hd 64 sec 32>
    /iscsi/[email protected]%3A02%3A54d1d1d2-7154-ee78-94e6-c3d053ca7ab50001,0
    2. c2t3d0 <DEFAULT cyl 2045 alt 2 hd 128 sec 32>
    /iscsi/[email protected]%3A02%3Af109f049-9f76-6a16-c36a-d42c4d6818fe0001,0
    solastorage# iscsitadm list target -v | more
    Target: rac_volume/ocr
    iSCSI Name: iqn.1986-03.com.sun:02:54d1d1d2-7154-ee78-94e6-c3d053ca7ab5
    Alias: rac_volume/ocr
    Connections: 2
    Initiator:
    iSCSI Name: iqn.1986-03.com.sun:01:2a95e0f4ffff.4d586ac7
    Alias: solarac1
    Initiator:
    iSCSI Name: iqn.1986-03.com.sun:01:2a95e0f4ffff.4d5b7cef
    Alias: solarac2
    ACL list:
    TPGT list:
    LUN information:
    LUN: 0
    GUID: 600144f04d5b8fca00000c29655dc000
    VID: SUN
    PID: SOLARIS
    Type: disk
    Size: 512M
    Backing store: /dev/zvol/rdsk/rac_volume/ocr
    Status: online
    So, here I can see the same devices on both servers, only they are being recognised with different device names. Without using any 3rd-party software (for example Oracle Cluster), how can I manually map the device names on both these servers so that they are the same?
    Previously, for Oracle 10g Release 2, I was able to use the metainit commands to create a pseudo-name that is the same on both servers. However, as of Oracle 11g R2, devices with the naming format /dev/md/rdsk/.... are no longer valid.
    Does anyone know of a way I can manually re-map these devices to the same device names on the OS level, without needing Oracle Cluster or something similar?
    Thanks in advance,
    NS Selvam
    Edited by: NS Selvam on Feb 16, 2011 1:32 AM
    Edited by: NS Selvam on Feb 16, 2011 1:33 AM
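    Since the /dev/rdsk entries on both nodes are symlinks into /devices/iscsi/... paths that embed the target's IQN (visible in the format output above, with ':' escaped as %3A), one workaround is to build your own stable names keyed on the IQN and point Oracle at those instead of the cXtYdZ names. A rough sketch, with hypothetical link names and an example slice; run it on each node:
    #!/bin/sh
    # Build node-identical names for the iSCSI LUNs under /asmdisks,
    # keyed on the unique part of each target IQN rather than on controller/target numbers.
    mkdir -p /asmdisks
    link_by_iqn() {    # $1 = unique IQN suffix, $2 = friendly name
        dev=`ls -l /dev/rdsk/*s6 2>/dev/null | grep "$1" | awk '{print $9}' | head -1`
        [ -n "$dev" ] && ln -sf "$dev" "/asmdisks/$2"
    }
    link_by_iqn 54d1d1d2-7154-ee78-94e6-c3d053ca7ab5 ocr       # rac_volume/ocr
    link_by_iqn f109f049-9f76-6a16-c36a-d42c4d6818fe voting    # rac_volume/voting
    ls -l /asmdisks    # the same names now exist on solarac1 and solarac2
    Adjust the *s6 glob to whichever slice you actually carved for OCR/voting, and set the ownership Oracle expects on the devices the links point to.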

    Thank you for your response.
    > Setting the "ddi-forceattach" property in a Pseudo driver .conf file will not
    > help. Solaris does not "attach" Pseudo drivers which do not have ".conf"
    > children (even though the Pseudo driver .conf file has the "ddi-forceattach=1"
    > property set). Opening the Pseudo device file will attach the Pseudo driver.
    I'm confused... We have a .conf file, as mentioned, but what makes
    it a "Pseudo driver .conf" rather than just a "driver .conf"?
    > From what I understand of your requirement, the following should be sufficient:
    > 1. Set the property "ddi-forceattach=1" for all physical devices that are
    > required by the Pseudo driver.
    > 2. The application opens the Pseudo device node.
    > Let me know if you have any queries / issues.
    I do have further questions.
    Included below is a version of our .conf file modified to protect the
    names of the guilty.
    As you can see, there is part of it which defines a pseudo device,
    and then a set of properties that apply to all devices. Or that's the
    intention.
    In #1, you said to set the ddi-forceattach property for all "physical
    devices", but how do I do this, if it's not what I'm already doing? And what
    do you mean "required by Pseudo driver"?
    name="foobar" parent="pseudo" instance=1000 FOOBAR_PSEUDO=1;
    ddi-forceattach=1
    FOOBAR_SYM1=1
    FOOBAR_SYM2=2
    FOOBAR_SYM3=3;
    On a Solaris 9 system of mine, recently I believe I have seen multiple cases
    where I've booted, and a physical device has not gotten attached, but if I
    reboot, it will be attached the next time.
    Thanks,
    Nathan

  • Iscsi or SAN NFS+ASM

    hi
    My question may look stupid, but it's a very simple question:
    Do I need iSCSI or a SAN to install Clusterware 11gR2 on OEL 5.4 x86_64?
    regards

    Gagan Arora wrote:
    Try as root on
    node1
    # /etc/init.d/oracleasm createdisk DATA <node2 nfs share>
    #/etc/init.d/oracleasm createdisk FRA <node1 nfs share>
    on node2
    #/etc/init.d/oracleasm scandisks
    #/etc/init.d/oracleasm listdisks
    DATA
    FRA
    if you don't want to use NFS you can create iSCSI targets on each node and create an iSCSI initiator on each node
    I don't want iSCSI.
    Did you mean exactly this?
    [root@rac-1 ~]# mount /dev/sdb10 /u02
    [root@rac-1 ~]# vi /etc/exports
    [root@rac-1 ~]# exports -a
    bash: exports: command not found
    [root@rac-1 ~]# export -a
    [root@rac-1 ~]# srvice nfs restart
    bash: srvice: command not found
    [root@rac-1 ~]# service nfs restart
    Shutting down NFS mountd:                                  [  OK  ]
    Shutting down NFS daemon:                                  [  OK  ]
    Shutting down NFS quotas:                                  [  OK  ]
    Shutting down NFS services:                                [  OK  ]
    Starting NFS services:                                     [  OK  ]
    Starting NFS quotas:                                       [  OK  ]
    Starting NFS daemon:                                       [  OK  ]
    Starting NFS mountd:                                       [  OK  ]
    [root@rac-1 ~]# mount -t nfs rac-1:/u02 /mnt/rac-1/nfs1
    [root@rac-1 ~]# oracleasm listdisks
    VOLUME1
    VOLUME2
    VOLUME3
    VOLUME4
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
    File "/mnt/rac-1/nfs1" is not a block device
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 rac-1:/u02
    Unable to query file "rac-1:/u02": No such file or directory
    [root@rac-1 ~]#
    I got only:
    [root@rac-1 ~]# oracleasm createdisk VOLUME5 /mnt/rac-1/nfs1
    File "/mnt/rac-1/nfs1" is not a block device
    Edited by: you on Feb 22, 2010 5:35 AM
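    For reference, oracleasm createdisk needs a block device, so it cannot consume an NFS path directly. The usual alternative on NFS is to skip ASMLib, pre-allocate zero-filled files on a mount that uses Oracle's recommended NFS options, and point ASM's discovery string at them. A rough sketch with hypothetical paths and sizes:
    # On each node: mount the export with the options Oracle documents for database/ASM files on NFS
    mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
        rac-1:/u02 /mnt/rac-1/nfs1
    # On one node only: pre-allocate zero-filled files to act as ASM disks (sizes are examples)
    dd if=/dev/zero of=/mnt/rac-1/nfs1/asm_data01 bs=1M count=2048
    dd if=/dev/zero of=/mnt/rac-1/nfs1/asm_fra01  bs=1M count=2048
    chown grid:asmadmin /mnt/rac-1/nfs1/asm_*    # adjust to your grid/oracle software owner
    chmod 660 /mnt/rac-1/nfs1/asm_*
    # Then, in the ASM instance, set asm_diskstring='/mnt/rac-1/nfs1/asm_*'
    # and create the disk group with asmca or SQL instead of oracleasm createdisk.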
