OCFS2 vs. NFS

I am currently in the process of an evaluation. We are planning to run an OpenVZ cluster (true hypervisors have too much latency/overhead for the required scenario) with shared storage for all participating nodes.
The OpenVZ containers will be stored on the shared filesystem. The use case is mostly reads (executing one binary in each container, which may also load bulk data into memory from the shared fs), with practically no writes. Since every process will run in its own OpenVZ container directory, there will be no concurrent reads/writes (i.e., locking should not be much of an issue). There might be > 200 client nodes, and each node might have up to 20 OpenVZ containers (though since these use the node's filesystem, this should not matter). We want to use central and cheap storage (no distributed storage).
Since OpenVZ is RHEL based, I am considering Oracle Linux (I assume the OpenVZ RHEL 6.x based kernel will run without issues on Oracle Linux). For the shared filesystem I could use NFS, or OCFS2 on top of iSCSI (Linux software based) over Ethernet. I read a few performance benchmarks in which OCFS2 comes out on top due to a more scalable design. However, I am not sure whether these benchmarks are all that relevant to my use case, since they assume more "normal" filesystem usage. NFS is a lot easier to set up and maintain, but if OCFS2 performs and scales significantly better for my use case I would give it a try.
I would appreciate all input, since I assume there are quite a few people out there who are more knowledgeable with such use cases, and with OCFS2 in particular.
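Not an authoritative recommendation, just a baseline worth benchmarking: for the read-mostly workload described, a plain NFSv3 hard mount with noatime avoids most metadata write traffic. A sketch of an /etc/fstab entry (the server name, export path and mount point are hypothetical):

```
# /etc/fstab on each OpenVZ node - storage host and paths are placeholders
storage01:/export/vz   /vz/shared   nfs   hard,intr,noatime,vers=3,tcp   0 0
```

noatime matters here because otherwise every read of a container binary generates an access-time write back to the shared storage.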

We are using OCFS on a 4-node RHEL cluster to store hundreds of thousands of files, mostly small ones (containing config data of the h/w devices we poll). Around 600GB used in total.
A file listing (the ls command) is a bad idea in such a directory, but this is true of almost any filesystem holding that many files. ;-)
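To put a number on the ls remark: plain `ls` sorts the entire directory in memory before printing anything, while `ls -f` and `find` stream entries unsorted. A small sketch (the temp directory and file count are only for illustration):

```shell
# Create a throwaway directory with many files, then list it without sorting.
demo=$(mktemp -d)
(cd "$demo" && seq 1 1000 | xargs touch)    # 1000 empty files

# `find` (like `ls -f`) streams entries without sorting them first,
# which matters once a directory holds hundreds of thousands of files.
count=$(find "$demo" -maxdepth 1 -type f | wc -l)
echo "listed $count files"
rm -rf "$demo"
```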
No performance issues reported by development. BTW, we are running it using IPoIB (IP over DDR/10Gb InfiniBand - we still need to wire the QDR/40Gb IB switches and move the OCFS heartbeat/interconnect to the faster InfiniBand).
And I disagree with the statement that it is more complex than NFS. It is very simple to configure and use.
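To back up the "simple to configure" point, the whole O2CB cluster layout lives in one file. A minimal two-node /etc/ocfs2/cluster.conf might look like this (the cluster name, node names and addresses are made up for illustration):

```
cluster:
        node_count = 2
        name = ocfs2demo

node:
        ip_port = 7777
        ip_address = 192.168.0.101
        number = 0
        name = node1
        cluster = ocfs2demo

node:
        ip_port = 7777
        ip_address = 192.168.0.102
        number = 1
        name = node2
        cluster = ocfs2demo
```

After copying the file to every node, `service o2cb configure` plus a normal mkfs.ocfs2/mount is essentially the whole setup.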

Similar Messages

  • 11g Grid Control in a cluster?

    Hi All,
    I am trying to find some official doc on how to setup an MAA architecture for 11g GC.
    I have already done the following:
    - installed 11.2.0.1 RAC database
    - installed wls 10.3.2 on both nodes
    - installed OMS on one node
    - added the second OMS instance on the second node
    Now, in WLS console I can see that there are 3 servers created (1 Admin + 2 OMS servers).
    However, there is no cluster setup in WLS.
    I am also planning to use a load balancer before the OMS instances.
Are there some notes on how to set this up? All I could find was stuff about 10g, which is still based on iAS, so that's not of any use.
Also, where can I find notes about how to set up a load balancer for a clustered 11g OMS?
    Regards,
    Pawel.

    Thanks !
    However, I have a problem on deciding how to create the shared filesystem for the loader... In your example there is the following statement:
"The second thing to consider is shared storage, for both the XML file loading and the software library (used for patching and provisioning). So make sure you can add a good shared storage solution to the OMS machines when needed to handle the shared files."
    What is the best way to do that?
    - OCFS2
    - GFS
- NFS (I don't like this solution as it is not HA - if the primary server goes down, so does the shared folder on all attached nodes)
- ACFS (installing a whole Grid Infrastructure with ASM sounds like way too much overhead to accomplish the simple task of sharing a directory between two nodes)
Currently we have FC interfaces installed on both machines and attached to a disk storage array. In other words, we have the same device /dev/raw1 visible from both OMS nodes.
Which approach is best, keeping in mind we want High Availability maintained?

  • Fresh install of Oracle VM Manager Template into Oracle VM Server

    Hi,
    I am trying to install the Oracle VM Manager template and then
    create an Oracle VM Manager client.
    I have already done a 'fresh install' of Oracle VM Server.
    I do not have another machine available to use to contain the
    'Oracle VM Manager', therefore, I am attempting to install the
Oracle VM Manager template directly onto the Oracle VM Server itself and
    run the client.
    I am following the instructions of the Oracle VM Server Users Guide
    in section 4.3.
    When I extract the zip file contents into the
    /OVS/seed_pool directory, I get the following files.
    Deploy_Manager_Template.sh
    OVM_EL5U3_X86_OVM_MANAGER_PVM.tgz
    So far, so good.
    Next, I used 'tar' to help me extract the directories of the .tgz file.
    So far, so good.
    Next, as instructed, I used 'python' and 'print randomMAC()' to create a new MAC address.
    Inside the /OVS/seed_pool/OVM_EL5U3_X86_OVM_MANAGER_PVM/vm.cfg file
    I modified the vif MAC address. I replaced the
    xx:xx:xx with the 'last three' that were generated by the python randomMAC
    function from above.
    vif = [ 'mac=00:16:3E:<my generated numbers>', ]
    So far, so good?
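For what it's worth, 00:16:3E is the OUI prefix reserved for Xen guests, and randomMAC() only randomizes the last three octets. A shell sketch of the same idea (assumes bash for $RANDOM; this is not the guide's actual helper):

```shell
# Generate a random MAC in the Xen-reserved 00:16:3E range,
# mirroring what the python randomMAC() helper produces.
mac=$(printf '00:16:3E:%02X:%02X:%02X' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "vif = [ 'mac=$mac', ]"
```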
    In the Oracle VM Server Users Guide in section 4.3.,
The next step explains that I should run
xm create vm.cfg
    I did this. When I ran this I received back the error.
    Using config file "./vm.cfg"
    Error disk is not accessible.
    When I peek inside my vm.cfg file.
    I see file references starting with the following
    file:/OVS/running_pool/ ...
    Of, course my running_pool directory is empty.
    (Again, this is a fresh install of Oracle VM Server).
My first question is the following:
Sometime in this process, was I supposed to run
the following executable?
./Deploy_Manager_Template.sh
If so, should I have done this earlier?
Was the modification of the vif (adding the MAC address
into the vm.cfg file) something I 'should not have done',
or 'something that could be ignored' because
'./Deploy_Manager_Template.sh' would have done it for me?
My second question is the following:
Would the following process be 'more correct'?
1. Do not modify the vm.cfg file.
2. Run ./Deploy_Manager_Template.sh.
3. In the /OVS/running_pool/ directory, find the
vm.cfg file of interest, then modify the vif with a new MAC address.
4. In /OVS/running_pool/, change my current location to the directory of interest and
run xm create vm.cfg (to start my Oracle VM Manager)?
Any help or ideas would be appreciated.
    Thank you very much,
    AIM

    Hi,
    This is the README file for Oracle VM Manager 2.2.0
    Readme for Media Pack B57738-01
    Oracle VM Templates for Oracle VM Manager 2.2.0 Media Pack v1 for x86 (32 bit)
    =====================================================================
    Template Version 2.3
    Oracle VM Manager Version 2.2.0
    This document contains:
    1. Prerequisites for Oracle VM Manager virtual machine (VM) deployment
    2. Oracle VM Manager Template description
    3. Creating an Oracle VM Manager Virtual Machine from
    Oracle VM Manager Template
    4. Deployment Interview
    5. Known Issues
    For more information on Oracle VM Manager, please refer to
    the "Oracle VM Manager Installation Guide" and the "Oracle VM Manager
    User's Guide" available at:
    http://download.oracle.com/docs/cd/E15458_01/index.htm
    1. Prerequisites
    ================
    - A new install of Oracle VM Server 2.2 that has NOT been managed by another
    Oracle VM Manager. Manager Template 2.2 is intended to be installed on Oracle
    VM 2.2 server. If you have a new Oracle VM 2.1.5 server and want to deploy
    Oracle VM Manager template, please use the Oracle VM 2.1.5 Manager template.
    Note: root access to the server's dom0 is required.
    - It's highly recommended that you upgrade the default agent (ovs-agent-2.3-19)
    to ovs-agent-2.3-27 or later. You can get the latest Oracle VM 2.2 packages
    from Oracle's Unbreakable Linux Network (http://linux.oracle.com).
    Note: Alternate location to get Oracle VM agent 2.3-27 is
    http://oss.oracle.com/oraclevm/server/2.2/RPMS/ovs-agent-2.3-27.noarch.rpm
- A working directory on the Oracle VM Server 2.2 with at least 4GB free space
    for downloading and installing the template. The working directory can be any
    directory on the Oracle VM server except /OVS/running_pool.
    Note: The /root partition of the default Oracle VM server install may not have
enough space to temporarily host the template installation. Please use another
    directory that has sufficient free space.
    - At least 15GB of free space in the cluster root storage repository. For storage
    and repository configuration, please refer to Oracle VM 2.2 Server User Guide:
    http://download.oracle.com/docs/cd/E15458_01/doc.22/e15444/storage.htm
    and
    http://download.oracle.com/docs/cd/E15458_01/doc.22/e15444/repository.htm
    - At least 2GB of free memory on the Oracle VM Server
    - A static IP address for the Oracle VM Manager
    - If enabling HA (high availability) for the Oracle VM Manager,
    mount a clustered OCFS2 or NFS filesystem on /OVS. If ext3 or a
    local OCFS2 filesystem is used, enabling HA will cause the high availability
    prerequisite checking to fail. The Oracle VM Manager configuration
    process will exit without completing the configuration.
    - The Oracle VM Manager will register the first VM that it detects.
    To have Oracle VM Manager be the first VM registered,
    make sure there are no virtual machine images besides the Oracle VM Manager
    virtual machine in the /OVS/running_pool directory on the Oracle VM Server.
    - A desktop or other system with a VNC Viewer installed
The steps below assume that the Oracle VM Server used is not currently,
and was not previously, managed by another Oracle VM Manager. If this is not
the case, the instructions below will ask the user to clean up the Oracle VM Agent DB
before running the Oracle VM Manager.
    2. Oracle VM Manager Template Description
    =========================================
    The Oracle VM Manager Template is distributed as one archive file which
    includes:
    File Version
    OVM_EL5U3_X86_OVM_MANAGER_PVM.tgz 2.3
    Deploy_Manager_Template.sh 2.3
    The OVM_EL5U3_X86_OVM_MANAGER_PVM.tgz archive contains two disk images,
    a VM configuration file and a readme file:
    - Oracle Enterprise Linux 5.3 x86 system disk image
    - Oracle VM Manager 2.2 disk image
    - vm.cfg
    - README
The system image is a JeOS (Just enough OS) installation of Oracle
Enterprise Linux 5.3. It is a smaller-footprint install that contains
only the packages needed by Oracle VM Manager.
    Oracle VM Manager is configured to use Oracle Database 10g
    Express Edition (included).
Deploy_Manager_Template.sh is used to check the prerequisites and
create the virtual machine.
    During the first boot of the Oracle VM Manager virtual machine,
the Oracle VM Manager configuration process will create a server pool
    and import the Oracle VM Manager virtual machine.
    Two OS user accounts are created by default:
    user: root password: ovsroot
    user: oracle password: oracle
    The user 'oracle' belongs to the 'oinstall' and 'dba' groups.
    The default vnc console password is 'oracle'
    3. Creating the Oracle VM Manager virtual machine
    =================================================
    1) Download the Oracle VM Manager Template (V19215-01.zip)
    from http://edelivery.oracle.com/oraclevm
    2) Login to the Oracle VM Server's dom0 as 'root'
    Copy V19215-01.zip to your working directory with at least 4GB free space.
    You can choose any directory on OVM Server except /OVS/running_pool.
    This zip file contains the archive file OVM_EL5U3_X86_OVM_MANAGER_PVM.tgz
    and a deploy script Deploy_Manager_Template.sh
    3) As root, run
    # unzip V19215-01.zip
    4) As 'root', run the deployment script:
    # chmod 755 Deploy_Manager_Template.sh
    # ./Deploy_Manager_Template.sh
    The deployment script Deploy_Manager_Template.sh will complete the following
    tasks:
    a) prerequisite checking
    b) uncompress OVM_EL5U3_X86_OVM_MANAGER_PVM.tgz file to directory
/OVS/running_pool. This directory will contain the following files:
    /OVS/running_pool/OVM_EL5U3_X86_OVM_MANAGER_PVM
    |- System.img (OS image file)
    |- Manager.img (Manager image file)
    |- vm.cfg (VM configuration file)
    |- README (Readme file)
    c) generate and assign new MAC address to the virtual machine
    d) interview the user for VM and VM Manager configuration parameters
    (next section 'Deployment interview' provides the list of questions)
    e) create and boot the virtual machine from the Oracle VM Server
    command line
    f) display the access information for Oracle VM Manager and Oracle VM
    Manager VM
    4. Deployment Interview
    =======================
    The deployment script will prompt a user to enter
    a) Agent password
    The agent password is required for the prerequisites check.
    b) Storage configuration
    Storage Source: NFS address, OCFS2 partition path
The script will automatically detect your cluster root storage repository
if you have configured one; otherwise it prompts you for your storage source
and tries to set it up as the cluster root.
    NOTE: how to manually create your own storage repository in OracleVM server 2.2.x
    1) Register your storage source. Example:
    /opt/ovs-agent-2.3/utils/repos.py -n myhost:/mynfslocation
    /opt/ovs-agent-2.3/utils/repos.py -n /dev/sdb3
    Note that the storage source should have at least 15GB free space.
If the storage source is successfully registered, note down the uuid generated
    by the command above, such as:
    51d4c69b-e439-41ac-8b31-3cc485c993b0 => /dev/sdb3
    2) Mount your storage repository.
    If the agent version is 2.3-27 or higher, execute:
    /opt/ovs-agent-2.3/utils/repos.py -i
    otherwise, complete the following commands:
[1] mkdir -p /var/ovs/mount/$(echo <uuid> | sed s/-//g | tr '[:lower:]' \
'[:upper:]')
where '<uuid>' is the uuid noted down in step 1)
    [2] mount your storage source to the directory made in step [1].
    3) If /OVS exists, delete or move /OVS
    mv /OVS "/OVS.$(date '+%Y%m%d-%H%M%S').orig"
    create a symbolic link from storage repository to /OVS
    ln -nsf /var/ovs/mount/<UUID>/ /OVS
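The sed/tr pipeline in step [1] simply strips the dashes from the uuid and upper-cases it. Using the sample uuid shown in step 1) above, the resulting mount path works out like this:

```shell
# Derive the /var/ovs/mount directory name from a repos.py uuid:
# strip the dashes, then upper-case the hex digits.
uuid="51d4c69b-e439-41ac-8b31-3cc485c993b0"
mountdir="/var/ovs/mount/$(echo "$uuid" | sed 's/-//g' | tr '[:lower:]' '[:upper:]')"
echo "$mountdir"
```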
    c) Network configuration
    Static IP address
    Netmask
    Default Gateway IP address
    DNS Server IP address
    Hostname
    d) Password for database accounts:
'SYS' and 'SYSTEM' (the same password is used for both)
    'OVS'
    'oc4jadmin'
    'admin'
e) Web Service configuration (supported in template version 1.2)
    Web Service password (at least 6 characters)
    Enable HTTPS or not
    f) SMTP server (outgoing mail server SMTP hostname)
    E-mail Address for account 'admin'
    g) Data for the manager services configuration:
    Oracle VM Server Pool Name
    Oracle VM Server login user name
    Oracle VM Server login password
    Note that Oracle VM Manager is critical for managing Oracle VM Server Pools.
    Do not pause, suspend or shutdown this virtual machine! Configuring
    HA is recommended for this virtual machine so that Oracle VM will
    automatically restart the Oracle VM Manager virtual machine if there
    is a server crash.
    5. Known Issues
    ===============
    1) You may see messages on a virtual machine's console similar to these
    when the virtual machine is booting up:
    Fatal: No PCI config space access function found
    rtc: IRQ 8 is not free.
    i8042.c: No controller found.
    These messages can be ignored.
    2) Mail server check fails.
    Bug #7140 in bugzilla.oracle.com
    Oracle VM Manager installer only checks the default SMTP port 25 for the
    mail server. If the SMTP port is not 25, the check fails, and you will
    see the following message:
    Mail server '<mail server hostname>' check failed, want to re-enter it(Y|n)?
You can enter 'n' to skip the mail server check. You will also see the
send-mail check fail with the following message:
Failed to send mail to '<Admin e-mail address>'
want to re-enter the e-mail address(Y|n)?
You can enter 'n' to skip the send-mail check.
    3) OEL VM console may display error messages similar to those below:
    BUG: warning at drivers/xen/fbfront/xenfb.c:143/xenfb_update_screen() (Not
    tainted)
    Call Trace:
    [<ffffffff803aa461>] xenfb_thread+0x135/0x2c5
    [<ffffffff8024874b>] try_to_wake_up+0x365/0x376
    [<ffffffff8029ba6e>] autoremove_wake_function+0x0/0x2e
    [<ffffffff8029b856>] keventd_create_kthread+0x0/0xc4
    [<ffffffff803aa32c>] xenfb_thread+0x0/0x2c5
    [<ffffffff8029b856>] keventd_create_kthread+0x0/0xc4
    [<ffffffff802339c8>] kthread+0xfe/0x132
    [<ffffffff80260b24>] child_rip+0xa/0x12
    [<ffffffff8029b856>] keventd_create_kthread+0x0/0xc4
    [<ffffffff802338ca>] kthread+0x0/0x132
    [<ffffffff80260b1a>] child_rip+0x0/0x12
    This will not cause any problems.
4) If you accidentally power off the Oracle VM Manager virtual machine through
the Oracle VM Manager UI, and restart the virtual machine from the OVM server command
line, then although the Oracle VM Manager virtual machine is running normally,
the virtual machine status in the Manager UI will stay at 'Shutting Down'.
    This is expected, as the virtual machine status sync will only happen when
    the virtual machine status is "Error" or "Powered Off".
    To re-sync the virtual machine status, please complete the following steps:
    1. Log on the Manager UI;
    2. Navigate to the 'Virtual Machines' tab;
    3. Select Oracle VM Manager virtual machine, "OVM_EL5U3_X86_OVM_MANAGER_PVM";
    4. Choose 'Reset' from 'More Actions' dropdown list;
    5. Click 'Go' button, the status will become "Running" after a while.
    5) (Bug 9191053) For OVS agent version 2.3-19, the following High
    Availability scenario will not work.
    "If a Virtual Machine Server fails, all running virtual machines are
    restarted automatically on another available Virtual Machine Server."
    For OVS agent 2.3-19, Oracle VM Manager virtual machine will not be
    automatically restarted on any other available Virtual Machine Server,
    but on the original Virtual Machine Server when it becomes available again.
    To fix the issue, please upgrade OVS agent to 2.3-27 or the latest version.

  • SLES 11 - Filesystem type recommendations (on VMware)

    Hello guys,
    SLES 11 gives you the opportunity to choose from several filesystem types.
    I am planning to setup some test cases for I/O performance measurement on SLES 11 (on VMware).
    We are also running oracle databases on these servers so it's time to choose the right filesystem type for that.
    I have already searched for some performance comparisons and only found this one on Oracle 9i:
    [Document Oracle 9i - Linux Filesystems|http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf]
The basic conditions are:
    Oracle 10gR2 and Oracle 11gR2
    Filesize up to 20 GB
    Async I/O or maybe Direct I/O (need to figure it out) ... Concurrent I/O is not available on Linux AFAIK
    SAN storage (IBM DS8000) on which the filesystems will be located
    SLES 11 on VMware ESX 4
    Is there any suggestion by SAP which filesystem type should be used?
    Any performance measurement by SAP?
    Thanks and Regards
    Stefan

    Hello Markus,
    > ReiserFS works as well if mounted with "-notail":
    Yes, of course. We also set this option from the beginning on SLES 10 and it works.
    The questions would be:
    Which filesystem is faster for big files (up to 20 GB of each data file)?
Which filesystem is faster with async/direct or concurrent I/O?
But as you already pointed out, Oracle chose ext2, ext3 and OCFS2 NAS NFS for their tests ... it's hard to find such a performance comparison with ReiserFS, etc.
    Regards
    Stefan

  • Trying p2v import of virtual machine image, server pool not selectable

I'm trying to import a virtual machine image via p2v from an OEL 5 bare-metal image. I go to Resources, Virtual Machine Images, Import, choose p2v, and then on the General Information page it asks for the Server Pool Name. The name of my one and only server pool is not in the dropdown; the dropdown only contains "Select Server Pool". I tried to fill in the rest and hit Next, but it failed on the Server Pool name.
    How can I get the server pool name to show up in that dropdown?
    There is one server pool, one server, and one vm already on that server.
    thanks,
    Peter

    It's a bug, I had the same problem. I used this and it worked (copying and pasting from an old thread with the solution):
    Workaround:
    VM Manager -> Server Pools -> Select Server Pool Name -> EDIT -> Check the box of “Enable High Availability” -> Apply -> Ignore the “Shared storage is not mounted, or the file system is invalid. The file system should be OCFS2 or NFS.” error message. Leave unchecked the box of ”Enable High Availability” -> Apply again:
    “The server pool updated successfully.”
    Now you can retry the import.

  • Shared Disks For RAC

    Hi,
    I plan to use shared disks to create Oracle RAC using ASM. What options do I have? OCFS2? or any other option?
Can someone point me to a document on how I can use the shared disks for RAC?
    Thanks.

javed555 wrote:
I plan to use shared disks to create Oracle RAC using ASM. What options do I have?
You have two options:
1. Create shared virtual (i.e. file-backed) disks. These files will be stored in /OVS/sharedDisk/ and made available to each guest
    2. Expose physical devices directly to each guest, e.g. an LVM partition or a multipath LUN.
    With both options, the disks show up as devices in the guests and you would then provision them with ASM, exactly the same way as if your RAC nodes were physical.
    OCFS2 or NFS are required to create shared storage for Oracle VM Servers. This is to ensure the /OVS mount point is shared between multiple Oracle VM Servers.
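For option 1, the shared images are then referenced from each RAC guest's vm.cfg. A hedged fragment (the paths and device names are hypothetical; in Xen's disk syntax the 'w!' mode lets the same image be attached writable to multiple guests, which is what the ASM disks need):

```
# Fragment of a RAC guest's vm.cfg - image paths are placeholders
disk = ['file:/OVS/running_pool/rac1/System.img,xvda,w',
        'file:/OVS/sharedDisk/asm_disk1.img,xvdb,w!',
        'file:/OVS/sharedDisk/asm_disk2.img,xvdc,w!']
```

The system disk stays private ('w'), while only the ASM disks are flagged shared.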

  • NFS vs ISCSI for Storage Repositories

    Anyone have any good guidance in using NFS vs ISCSI for larger production deployments of OVM 3?
My testing has been pretty positive with NFS, but other than the documented "it's not as fast as block storage" and the fact that there are no instant clones (no OCFS2), has anyone else weighed the two for OVM? If so, what did you choose and why?
Currently we are testing NFS that's presented from a Solaris HA cluster servicing a ZFS pool (basically mimicking the ZFS 73xx and 74xx appliances), but I don't know how the same setup would perform if the ZFS pool grew to 10TB of running virtual disk images.
    Any feedback?
    Thanks
    Dave

Dave wrote:
Would you personally recommend against using one giant NFS mount to store VM disk images?
I don't recommend against it; it's just most often the slowest possible storage solution in comparison to other mechanisms. NFS cannot take advantage of any of the OCFS2 reflinking, so guests must be fully copied from the template, which is time-consuming. Loop-mounting a disk image on NFS is less efficient than loop-mounting it via iSCSI or directly in the guest. FC SAN is usually the most efficient storage, but bonded 10Gbps interfaces for NFS or iSCSI may now be faster. If you have dual 8Gbps FC HBAs vs dual 1Gbps NICs for NFS/iSCSI, the FC SAN will win.
    Essentially, you have to evaluate what your critical success factors are and then make storage decisions based on that. As you have a majority of Windows guests, you need to present the block devices via Oracle VM, so you need to use either virtual disk images (which are the slowest, but easiest to manage) or FC/iSCSI LUNs presented to the guest (which are much faster, but more difficult to manage).

  • OVM 3.3.1:  NFS storage is not available during repository creation

Hi, I have OVM Manager running on a separate machine managing 3 servers running OVM Server in a server pool. One of the servers also exports an NFS share that all the other machines are able to mount and read/write to. I want to use this NFS share to create an OVM repository, but so far I have been unable to get it to work.
From the first screenshot we can see that the NFS filesystem was successfully added under the Storage tab and refreshed.
    https://www.dropbox.com/s/fyscj2oynud542k/Screenshot%202014-10-11%2013.40.00.png?dl=0
But it is not available when adding a repository, as shown below. What can I do to make it show up here?
    https://www.dropbox.com/s/id1eey08cdbajsg/Screenshot%202014-10-11%2013.40.19.png?dl=0
No luck with the CLI either. Any thoughts?
    OVM> create repository name=myrepo fileSystem="share:/" sharepath=myrepo - Configurable attribute by this name can't be found.
    == NFS file system refreshed via CLI === 
    OVM> refresh fileServer name=share
    Command: refresh fileServer name=share
    Status: Success
    Time: 2014-10-11 13:28:14,811 PDT
    JobId: 1413059293069
    == file system info
    OVM> show fileServer name=share
    Command: show fileServer name=share
    Status: Success
    Time: 2014-10-11 13:28:28,770 PDT
    Data:
      FileSystem 1 = ff5d21be-906d-4388-98a2-08cb9ac59b43  [share]
      FileServer Type = Network
      Storage Plug-in = oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  [Oracle Generic Network File System]
      Access Host = 1.2.3.4
      Admin Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31  [dev1]
      Refresh Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31  [dev1]
      Refresh Server 2 = 44:45:4c:4c:47:00:10:31:80:51:b8:c0:4f:35:48:31  [dev2]
      Refresh Server 3 = 44:45:4c:4c:33:00:10:34:80:38:c4:c0:4f:53:4b:31  [dev3]
      UniformExports = Yes
      Id = 0004fb0000090000fb2cf8ac1968505e  [share]
      Name = share
      Description = NFS exported /dev/sda1 (427GB) on dev1
      Locked = false
    == version details ==
    OVM server:3.3.1-1065
Agent Version: 3.3.1-276.el6.7
Kernel Release: 3.8.13-26.4.2.el6uek.x86_64
    Oracle VM Manager
    Version: 3.3.1.1065
    Build: 20140619_1065

Actually OVM, as with all virtualization servers, is usually only the head of a comprehensive infrastructure. OVM seems quite easy at the start, but I'd suggest that you at least skim through the admin manual to get some understanding of the concepts behind it. OVS usually only provides the CPU horsepower, not the storage, unless you only want a single-server setup. If you plan on having a real multi-server setup, then you will need shared storage.
The shared storage for the server pool, as well as the storage repository, can be served from the same NFS server without issues. If you want a little testbed, then NFS is for you, although it lacks some features that OCFS2 benefits from, like thin provisioning, reflinks and sparse files.
If you want to remove the NFS storage, you'll need to remove any remainders of any OVM object, like storage repositories or server pool filesystems: unpresent the storage repo and delete it afterwards… Also, I hope that you didn't create the NFS export directly on the root of the drive, since OVM wants to remove every file on the NFS export, and on the root of any volume there's the lost+found folder, which OVM, naturally, can't remove. Getting rid of such a storage repo can be a bit daunting…
    Cheers,
    budy

  • Paravirtualized machine hanging (VM Server 2.1, NFS based repository)

    Hi,
    I have a problem with a VM server.
I have local disks that are kind of slow (initially my images were on an OCFS2-based /OVS; after some problems with it we migrated /OVS to ext3), but because of insufficient space we want to use NFS.
    I created an NFS repository with:
    /usr/lib/ovs/ovs-makerepo 172.16.32.51:/lv_raid5_fs1/OVS 1 raid5_nfs
then I created an HVM virtual machine with OEL4U5 (installing from ISO images) - it works relatively fine (it hung just once).
I then tried creating a PVM from the template OVM_EL4U5_X86_PVM_10GB.
    I did that using Oracle VM Manager. The template was created, and after the Power On command the VM started.
    I then wanted to test disk operations performance with simple
    dd if=/dev/zero of=/root/test_prs.dat bs=1048576 count=3000
but it actually hung (the domU hung; the iostat command that I had started continued to work, though - showing that no I/O operations were going on, but iowait was 100%).
Also, xentop in dom0 hung - it didn't refresh for 12 hours.
The whole dom0 doesn't respond to new ssh requests (the existing session with xentop is not closed).
The domU with the PVM allowed me to run some commands in another shell opened via ssh, but then it hung as well.
The NFS server I am using is SLES9 SP3 + online updates (2.6.5-7.276-bigsmp). It is attached to a SCSI storage array.
The exportfs options are "(rw,wdelay,no_root_squash)". The exported filesystem is reiserfs.
The mount options in dom0 are "vers=3,tcp"; I cannot find them all right now because the dom0 is hung.
    the connection between NFS client and server is 1Gbit.
    NFS Server hasn't shown any errors.
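For reference, the export described above would look roughly like this in /etc/exports on the SLES9 server (the client address range is a guess; only the options in parentheses are from the post):

```
# /etc/exports on the SLES9 NFS server - client range is a placeholder
/lv_raid5_fs1/OVS   172.16.32.0/24(rw,wdelay,no_root_squash)
```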
    The last screen from xentop is below:
    xentop - 19:31:19 Xen 3.1.1
    3 domains: 1 running, 2 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
    Mem: 8387988k total, 3825360k used, 4562628k free CPUs: 8 @ 1995MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR
90_soa b- 362 0.1 2097152 25.0 2097152 25.0 2 1 34083 3236 1 0 20981 45823
  VBD BlkBack 768 [ 3: 0] OO: 0 RD: 20981 WR: 45823
Domain-0 -----r 286 1.7 524288 6.3 no limit n/a 8 8 2461308 2509950 0 0 0 0
linux_hv_1 b- 268 2.3 1056644 12.6 1064960 12.7 1 1 0 0 1 0 0 0
  VBD BlkBack 768 [ 3: 0] OO: 0 RD: 0 WR: 0
Now my time is 11:26:00 (so more than 15 hours with no refresh of the screen). We've seen such behaviour previously, but after 20-30 minutes it all started working again.
    What can I do to improve the situation? What could be the problem?
    Please help...
    Regards,
    Mihail Daskalov

    Hi,
1) What does "supported" mean - there are no specific requirements published. As I said, my "filer" is another Linux machine which is exporting a filesystem via NFS.
2) I already tested the NFS using another real machine (not a virtualized one) and it works perfectly. I also tested the NFS mount point from dom0 on the same VM server and it worked...
    3) I have problem with the paravirtualized machine (from template)
    any other suggestions?

  • How to use external table - creating NFS mount -the details involved

    Hi,
We are using Oracle 10.2.0.3 on Solaris 10. I want to use external tables to load huge CSV files into the database. This concept was tested and found to be working fine. But my doubt is this: since ours is a J2EE application, the CSV files have to come from the front end - from the app server. So in this case, how do we move them to the db server?
For my testing I just used putty to transfer the file to the db server, then ran the dos2unix command to strip off the control characters at the end of each line. But since this is to be done from the app server, putty cannot be used. In this case, how can this be done? Are there any risks or security issues involved in this process?
    Regards

    orausern wrote:
    For my testing I just used putty to transfer the file to db server, than ran the dos2unix command to strip off the control character at the end of file. but since this is to be done from the app server, putty can not be used. In this case how can this be done? Are there any risks or security issues involved in this process? Not sure why "putty" cannot be used. This s/w uses the standard telnet and ssh protocols. Why would it not work?
    As for getting the files from the app server to the db server. There are a number of options.
    You can look at it from an o/s replication level. The command rdist is common on most (if not all) Unix/Linux flavours and used for remote distribution and sync'ing of files and directories. It also supports scp as the underlying protocol (instead of the older rcp protocol).
    You can use file sharing - the typical Unix approach would be to use NFS. Samba is also an option if NTLM (Windows) is already used in the organisation and you want to hook this into your existing security infrastructure (e.g. using Microsoft's Active Directory).
    You can use a cluster file system - a file system that resides on shared storage and can be used by both app and db servers as a mounted/cooked file system. Cluster file systems like ACFS, OCFS2 and GFS exist for Linux.
    You can go for a pull method - where the db server, on client instruction (which provides the file details), connects to the app server (using scp/sftp/ftp), copies that file from the app server, and then proceeds to load it. You can even add a compression feature to this - so that the db server copies a zipped file from the app server and then unzips it for loading.
    Security issues. Well, if the internals are not exposed then security will not be a problem. For example, defining a trusted connection between the app server and db server - so the client instruction does not have to contain any authentication data. Letting the client instruction specify only the filename, and having the internal code use a standard and fixed directory structure. That way the client cannot instruct that something like +/etc/shadow+ be copied from the app server and loaded into the db server as a data file. Etc.
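The pull method above can be sketched in shell. The host and paths in the scp step are hypothetical (it is shown as a comment since it depends on a trusted key setup you would configure yourself); the carriage-return cleanup is what dos2unix does, demonstrated here on a scratch file so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch of the "pull" method (hypothetical host and paths - adjust them).
# Step 1 would copy the file from the app server over a trusted ssh key:
#   scp appuser@appserver:/data/out/load.csv /u01/loads/load.csv
# Step 2 strips DOS carriage returns, which is what dos2unix does.
SRC="$(mktemp)"; DST="$(mktemp)"
printf 'col1,col2\r\nval1,val2\r\n' > "$SRC"   # CSV with CRLF line endings
tr -d '\r' < "$SRC" > "$DST"                   # dos2unix equivalent
wc -c < "$SRC"    # 22 bytes, including the two CRs
wc -c < "$DST"    # 20 bytes after stripping them
```

The external table would then point at the cleaned file in the fixed directory the internal code controls, so the client never supplies a path.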

  • Cluster with shared domain folder using OCFS2 and the domain /tmp folder

    I've setup a cluster with two machines and one managed server on each machine. One of the machines also hosts the admin server.
    Initially, both managed servers were running on the same machine. Everything worked well. When I move to having two machines, I didn't want to have to independently maintain the domain folder on each machine.
    So I decided to setup a shared drive between the machines and use OCFS2 to share the domain folder between the two machines.
    Everything started up fine, but when I deployed software to the server I would frequently (but not always) get this error:
    [wldeploy] Target state: deploy failed on Cluster cluster-01
    [wldeploy] weblogic.management.DeploymentException: [Deployer:149189]Attempt to operate 'deploy' on null BasicDeploymentMBean for deployment mylibrary#[email protected]. Operation can not be performed until server is restarted.
    After looking around a bit on google I saw this issue:
    http://download.oracle.com/docs/cd/E11035_01/wls100/issues/known_resolved.html#CR279281
    which says
    Some OS and NFS combinations result in deployment failures or configuration updates with an exception like:
    weblogic.management.DeploymentException: Attempt to operate 'distribute' on null BasicDeploymentMBean
    Workaround or Solution:
    * Run statd() and lockd() processes on every NFS client that accesses a remote NFS volume.
    * If multiple servers that share the same domain root are started with different user Ids of same group, set the correct "umask" for the server processes so that a file created by one server can be opened for read/write by other servers without security exceptions.
    Unfortunately, I wasn't running NFS, so the workaround didn't help me.
    Through dumb luck, I tried this:
    I replaced the domainroot/tmp folder with a symbolic link to a folder on each machine's local filesystem. Following that change my deployments work every time.
    Does anyone know why this change worked? What is the purpose of the /tmp folder in a domain folder?
    Are there any switches I could turn on to get more information about the exact error that causes the deployment mbean to come back null?
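The symlink workaround described above can be sketched in shell. The paths here are stand-ins created with mktemp so the sketch runs anywhere; on a real cluster DOMAIN_ROOT would be the OCFS2-shared domain folder and LOCAL_TMP a directory on each machine's local disk:

```shell
#!/bin/sh
# Sketch of replacing the shared domain tmp folder with a node-local one.
# DOMAIN_ROOT / LOCAL_TMP are illustrative placeholders.
DOMAIN_ROOT="$(mktemp -d)"   # stands in for the OCFS2-shared domain root
LOCAL_TMP="$(mktemp -d)"     # stands in for the machine-local filesystem
mkdir -p "$DOMAIN_ROOT/tmp"
mv "$DOMAIN_ROOT/tmp" "$DOMAIN_ROOT/tmp.bak"  # keep the old shared copy
ln -s "$LOCAL_TMP" "$DOMAIN_ROOT/tmp"         # tmp now lives on local disk
```

Each machine in the cluster would run the symlink step against its own local directory, so per-server scratch files no longer contend on the shared volume.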

    It may not be the same situation, but I had this "deployer:149189" error and it turned out to be mixing WLS 10.0 and 10.3 installations together in the same install directory and not being scrupulous about environment setup. I think the problem was caused by accidentally running a 10.0 domain with Java 1.6.
    Eoin.

  • ASM on RAW or OCFS2

    We have a 2-node RAC cluster using ASM that has a couple diskgroups (DATA and FRA) on RAW devices. With our current backup methodology, we use RMAN to backup simultaneously to FRA and /u02/backup (cooked filesystem on node 1 for backups) from where netbackup picks it up and tapes them. The team is a bit concerned with the learning curve involved with RAW and also the maintenance complexities involved in db cloning etc (eg. recently we were asked to clone this RAC database to a non-RAC database on a different host).
    One thought inside the team is to do away with RAW and put ASM on an OCFS2 filesystem (in which case we won't have to maintain a separate /u02/backup at all, plus there is no learning curve to manage RAW). However, we do acknowledge that by doing so we won't be able to reap the benefits of RAW long-term (when the usage of our RAC instances goes up). Also, I believe Oracle suggests ASM on RAW (I could be wrong, but that is what I generally see people talking about).
    Any suggestions/advice for or against having ASM created on OCFS2 (or even NFS etc.)?
    In case that helps, the servers are Dell PE with RHEL4 and Oracle 10.2.0.3. Our duties are well defined between the storage group, Linux group and DBAs.
    Thank you,
    - Ravi

    Dan,
    There are some things about ASM that make it easier than a FS, but there are others that are more difficult; there is definitely a tradeoff. For the DBA who is coming from a background that is light on hardware, the things that ASM does best are "black box", tasks that a sysadmin or an EMC junkie normally do. The "simple" things a normal DBA would do (copy files, list files, check sizes) are now taken through another layer (whether you go asmcmd or a query against the ASM instance, or RMAN). Kirk McGowan briefly talked about how the job role of the DBA has changed with the new technology:
    http://blogs.oracle.com/kmcgowan/2007/06/27#a12
    Let's look at two "simple" things I have come across so far that I would like to see improved. First is resolving archivelog gaps:
    Easiest way to fill gap sequence in standby archivelog with RAW ASM
    Yes, we all know dataguard can do this. But this is not a thread about dataguard (I am more than willing to talk about it in another thread or privately). With ASM on Raw (from now on, I will just say ASM and assume Raw), you have to use RMAN. I have no problem saying that all of us should become better at RMAN (truly), but it bothers me that I cannot login to my primary host and scp a group of logs from the archive destination to the archived destination on my standby host. Unless of course you put your archive destination on a cooked FS. But then we go back to the beginning of this thread.
    Another "simple" task is monitoring space usage. ASM has a limited version of 'du' that could stand a lot of improvement. Of course, there is sqlplus, and you can run a nice hierarchy query against one of the v$asm views. But 'du -sk /u0?/oradata/*' is so much simpler than either approach.
    Which leads me to ask myself whether or not we are approaching disk monitoring from a completely wrong angle. What does the 'A' stand for in ASM? grin
    There is a lot that ASM can do. And I have no doubt that, due to my lack of experience with ASM, I am simply "not getting it" in some cases.
    "While it may seem painful in the midst of it, the best way to overcome that learning curve is to diagnose problems in a very hands-on manner." - Kirk McGowan

  • Migrating EBS from NFS to OCFS partition

    Hi,
    Environment : 2 node RAC with PCP and 2 node (non rac) web and forms
    Oracle database 10.2.0.4 and EBS: 12.0.6 OS: OEL 4
    We currently have our concurrent processing part on the /erpp partition, which is NFS.
    Now we wish to move it to the OCFS partition /erpp1, which is shared across the RAC nodes.
    Please suggest how to re-configure the Concurrent Processing part of Oracle Applications if we move to the /erpp1 partition.
    This is urgent....
    Help is appreciated.

    We currently have our concurrent processing part on /erpp partition which is NFS
    now we wish to move it to OCFS partition /erpp1 which shared across rac nodes
    Please suggest how to re-configure Concurrent Processing part of oracle Applications if we move to /erpp1 partition
    Choosing a Shared File System for Oracle E-Business Suite
    https://blogs.oracle.com/stevenChan/entry/choosing_an_ebs_shared_file_system
    OCFS2 for Linux Certified for E-Business Suite Release 12 Application Tiers
    https://blogs.oracle.com/stevenChan/entry/ocfs2_linux_certified_ebs12
    Certified Oracle RAC Scenarios for Oracle E-Business Suite Cloning
    https://blogs.oracle.com/stevenChan/entry/certified_rac_scenarios_for_ebs_cloning
    Sharing The Application Tier File System in Oracle E-Business Suite Release 12 [ID 384248.1]
    Running Script Apreconb.pls Fails With Error 'O/S Message: Invalid argument' On OCFS2 File System [ID 1264418.1]
    Thanks,
    Hussein

  • Cannot create repository OVM 3.3.1, have shared physical disk with OCFS2

    I installed OVM 3.3.1 servers on HP blades and gave a 20GB LUN from an FC array to each.
    I gave one 600GB LUN from FC to both.
    Using OVM 3.3 Manager I created a server pool of the two VM Servers.
    In the Storage tab, under SAN servers - Unmanaged FC Storage Arrays - FibreChannel Volume group, I can see
    - both 20GB disks on the OVM servers
    - the 600GB disk (shown for 1 server only, but "display Servers Using this disk" shows both)
    I checked that ovs-agent is running on both OVM servers (I restarted it on both to be sure).
    There is an OCFS2 filesystem created on the 600GB disk.
    When I open the Repositories tab in OVM Manager I am not able to create a repository.
    Thanks for any help.
    Jiri Rohlicek
    ============================================================================
    OVM server 1 - ovma1
    [root@ovma1 ~]# mount
    /dev/mapper/360002ac0000000000000006500005340p2 on / type ext4 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    tmpfs on /dev/shm type tmpfs (rw)
    /dev/mapper/360002ac0000000000000006500005340p1 on /boot type ext4 (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    xenfs on /proc/xen type xenfs (rw)
    none on /var/lib/xenstored type tmpfs (rw)
    nodev on /sys/kernel/debug type debugfs (rw)
    configfs on /sys/kernel/config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/mapper/360002ac0000000000000006600005340 on /poolfsmnt/0004fb00000500004d14aab4675db242 type ocfs2 (rw,_netdev,heartbeat=global)
    [root@ovma1 ~]# service ovs-agent status
    log server (pid 3402) is running...
    notificationserver server (pid 3418) is running...
    remaster server (pid 3425) is running...
    monitor server (pid 3427) is running...
    ha server (pid 3429) is running...
    stats server (pid 3430) is running...
    xmlrpc server (pid 3432) is running...
    OVM server 2 - ovma2
    [root@ovma2 ~]# mount
    /dev/mapper/360002ac0000000000000006400005340p2 on / type ext4 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    tmpfs on /dev/shm type tmpfs (rw)
    /dev/mapper/360002ac0000000000000006400005340p1 on /boot type ext4 (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    xenfs on /proc/xen type xenfs (rw)
    none on /var/lib/xenstored type tmpfs (rw)
    nodev on /sys/kernel/debug type debugfs (rw)
    configfs on /sys/kernel/config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/mapper/360002ac0000000000000006600005340 on /poolfsmnt/0004fb00000500004d14aab4675db242 type ocfs2 (rw,_netdev,heartbeat=global)
    [root@ovma2 ~]#  service ovs-agent status
    log server (pid 3308) is running...
    notificationserver server (pid 3323) is running...
    remaster server (pid 3330) is running...
    monitor server (pid 3332) is running...
    ha server (pid 3334) is running...
    stats server (pid 3336) is running...
    xmlrpc server (pid 3337) is running...

    Hi,
    OVMM will refuse to create a storage repository on any disk that already contains an OCFS2 file system. Either you got something mixed up or you have an old OCFS2 LUN. Either way, since creating a SR requires wiping the LUN, respectively formatting it with OCFS2, there mustn't already be an OCFS2 volume on it.
    Cheers,
    budy
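If it is indeed a stale OCFS2 LUN, the usual fix is to zero the start of the device so the old superblock disappears before OVMM formats it. A hedged sketch below, demonstrated on a scratch file rather than the real /dev/mapper path (substituting the actual device is destructive, so triple-check the path first):

```shell
#!/bin/sh
# Sketch: zero the first few MB to destroy an old filesystem signature.
# DEV here is a scratch file; on a real system it would be the 600GB LUN,
# e.g. the /dev/mapper/360002ac... device - wiping it destroys all data!
DEV="$(mktemp)"
dd if=/dev/urandom of="$DEV" bs=1M count=4 2>/dev/null            # fake old data
dd if=/dev/zero of="$DEV" bs=1M count=4 conv=notrunc 2>/dev/null  # the wipe
# verify: no non-zero bytes remain in the wiped region
[ "$(tr -d '\0' < "$DEV" | wc -c)" -eq 0 ] && echo "signature wiped"
```

After a wipe like this, re-scan the storage array in OVM Manager and let it format the LUN itself when creating the repository.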

  • OCFS2 on SUSE 10

    I am trying to use OCFS2 on SUSE 10.
    When I try to mount a file system with the "-o datavolume" option, I get an invalid argument error. It mounts fine without the option.
    The exact command is: "mount -t ocfs2 -o datavolume /dev/sda1 /u02/oradata/crsdata"
    When I try to create Oracle cluster registry files on a file system mounted without the
    datavolume option, I get the following error:
    2006-02-28 20:49:25.604: [  OCROSD][1451766624]utstoragetype: /u02/oradata/crsdata/ocr_data_0.dat is on FS type 1952539503. Not supported.
    2006-02-28 20:49:25.604: [  OCROSD][1451766624]utopen:6'': OCR location /u02/oradata/crsdata/ocr_data_0.dat configured is not valid storage type. Return code [37].
    I am aware that Oracle 10gR2 is not certified against SUSE 10. I have the following questions:
    1. Shouldn't the datavolume mount option work anyway, as the ocfs2 modules came with SUSE 10?
    2. Is the second error related to the first one?
    3. When will Oracle 10gR2 be certified against SUSE 10?
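For reference, the datavolume option is normally combined with nointr on OCFS2 volumes that hold Oracle datafiles, the OCR, or voting files. A sketch of what the corresponding fstab entry might look like, reusing the device and mount point already given in this thread as examples:

```
# /etc/fstab entry (example device and mount point from the thread):
/dev/sda1  /u02/oradata/crsdata  ocfs2  _netdev,datavolume,nointr  0 0

# or mounted manually:
#   mount -t ocfs2 -o datavolume,nointr /dev/sda1 /u02/oradata/crsdata
```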

    Hi Peter,
    I don't think you need to keep /sapmnt on OCFS2. The reason a file system needs to be clustered in a RAC environment is that data stored in the cache of one Oracle instance must be accessible by any other instance, by transferring it across the private network while preserving data integrity and cache coherency (transmitting locking and other synchronization information across the cluster nodes).
    As this applies to redo files, datafiles and control files only, you should be fine with an NFS mount of /sapmnt shared across the nodes, without OCFS2.
    -SV
