ZFS and NFS

I'm trying to share a ZFS file system via NFS to several hosts. I've looked at the official Oracle documentation and it seems to be lacking (or I'm looking in the wrong place).
I can do a zfs set sharenfs=rw data/set and use the mount command on the NFS clients, and that works fine, but the volume is mounted read-only. How do I mount it read-write?
Also, I thought the 'zfs' command had a mount option for ZFS/NFS volumes; is that no longer the case (or perhaps it never was)?
Some Google results mention /etc/zfs/exports being updated whenever a sharenfs option is changed, but that file isn't being created for me. Has it been deprecated?
Any help appreciated, thanks!
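
For reference, the basic setup being asked about looks roughly like this; data/set, server and the client mountpoint are placeholders:
On the Solaris server:
zfs set sharenfs=rw data/set
zfs get sharenfs data/set
share
On the client:
mount -o rw server:/data/set /mnt/data
As far as I know, zfs mount only mounts ZFS datasets locally on the server; there is no zfs subcommand for mounting the share on an NFS client, so the plain mount command is the normal way. On Solaris the active shares show up in /etc/dfs/sharetab (listed by the share command) rather than /etc/zfs/exports, if I remember correctly.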

Nik,
You are correct, the file system is mounted rw after all. The ownership of the mount point on the client shows up as nobody, so I su'd to nobody and tried to create files, but couldn't. I can as root, though.
I've changed the ownership of the NFS file system on the server to oracle:dba. I have an oracle user and dba group on the client with the same uid/gid, but the ownership of the files still shows as nobody. How do I make it show up as oracle?
Also, is the mount command the correct way to mount ZFS NFS volumes?
Thanks!
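
(Not an answer from the original thread, but the usual suspects when a matching uid/gid still shows up as nobody are root squashing, for files created as root, and a mismatched NFSv4 identity-mapping domain. A quick check, assuming Solaris 10 on both ends; the mount path is a placeholder:)
grep NFSMAPID_DOMAIN /etc/default/nfs
Run that on both client and server; the domains must match for NFSv4 to map owners to names instead of nobody. As a cross-check, mounting with NFSv3 bypasses name mapping entirely:
mount -o vers=3 server:/data/set /mnt/data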

Similar Messages

  • Slow ZFS-share performance (both CIFS and NFS)

    Hello,
    After upgrading my OpenSolaris file server (newest version) to Solaris 11 Express, the read (and write) performance on my CIFS and NFS shares dropped from 40-60 MB/s to a few kB/s. I upgraded the ZFS filesystems to the most recent version as well.
    dmesg and /var/log/syslog don't list anything abnormal as far as I can see. I'm not running any scrubs on the zpools, and they are listed as online. top doesn't reveal any process utilizing the CPU more than 0.07%.
    The problem is probably not at the client side, as the clients are 100% untouched when it comes to configuration.
    Where should I start looking for errors (logs etc.)? Any recommended diagnostic tools?
    Best regards,
    KL

    Hi!
    Check the link speed:
    dladm show-dev
    Check for collisions and bad network packets:
    netstat -ia
    netstat -ia 2 10 (while a file is being transferred)
    Check for lost packets:
    ping -s <client IP> (wait a minute or more)
    Check for retransmits and response latency:
    snoop -P -td <client IP> (while a file is being transferred)
    Try replacing the network cable.
    Regards.

  • Cache Flushes Solaris10 StorageTek D280 and NFS and ZFS

    I am getting complaints from users who are connected via NFS to a Sun Solaris 10 server.
    The server is connected via fibre to a StorageTek D280.
    Performance on the server itself is okay.
    However, on the clients connected via NFS, the performance is poor.
    I found this document and want to try disabling the cache flushes on the server:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
    However, I would rather have the StorageTek D280 behave as a proper ZFS storage device than tweak the operating system.
    But I cannot find any document on how to configure the cache flush behavior on this device.
    Does anyone know how to set up this StorageTek D280 box correctly so that it ignores the cache flush commands generated for the NFS traffic?
    Kind regards,

    806760 wrote:
    Thanks for the response.
    I don't know how the D280 has been set up internally. It should be using RAID 5; that's about the only thing I know about it.
    It is under the control of an ICT department.
    However, if the D280 is poorly configured, would the effect only show up on the NFS clients connected to the Solaris server?
    I have ruled out the network configuration. This is a 1Gb connection, and for diagnosis I tried a different switch as well as a direct connection.
    But it did not change the poor performance of the client using NFS.
    As a test, I just extract a tar file containing a large number of empty files.
    This goes over 25 times slower on the clients than on the server.
    I have installed about 8 of these systems, but none performs this badly.
    Since everything on all systems is about the same configuration, the only things which are out of my control are the network and the SAN.
    I tried to test the network, but I don't see any problems with it.
    So in my mind, the only thing left would be the SAN device.
    Searching on this topic, I found some explanations about ZFS with NFS performing poorly because NFS keeps committing regular synchronous writes (NFS commit). However, I don't want to do that.
    I also cannot find any description of how to configure a D280.
    It would be nice if you could provide some settings which have to be set in a D280.
    The configuration is two cluster nodes and two clients.
    The cluster nodes' main task is to provide the NFS shares.
    The clients and servers are in one 19" rack.
    The SAN, I don't know where it is.
    It has a 2Gb fibre coupling. (On the server side there are 4Gb Emulex HBAs installed.)
    Kind regards,

    If a tar file extracts 25 times faster on the server than it does over the network, yet both times the data is being written to the SAN LUNs on the D280, the problem is the network.
    That tar file extracts slower across the network for two reasons: bandwidth and latency.
    There's only so much data you can stuff through a gigE network. Your single 1 gigE link can handle about 100 MB/sec read and 100 MB/sec write combined, total, for all users. That may be part of your performance problem, because the LUN layout of that D280 would have to be really, REALLY bad for it to be unable to handle that relatively small amount of IO.
    You CAN test the performance of the LUNs being presented to your server - just use your favorite benchmarking tool to do various reads from the "/dev/rdsk/...." device files that make up your filesystem(s). Just make doggone sure you ONLY do reads - if you write to those LUNs your filesystem(s) will be corrupted. Something like "dd if=/dev/rdsk/... of=/dev/null bs=1024k count=10000" will tell you how fast that one LUN can stream data - but it won't tell you how many IO ops/sec the LUN can support, as you'd need to do random small reads for that. Any halfway-decently configured D280 LUN should be able to stream data at a constant 200 MB/sec while you're reading from it.
    And even if the bandwidth were much higher, you still have to deal with the additional latency of doing all communications across your network. No matter how fat the pipe is, it still takes more time to send data across the network and wait for a reply. What do your ping times look like between client and server? Even with that added latency, there are some things you can do on your hosts: increase your TCP buffer sizes, mount the filesystems on your Linux clients with the "rsize=32768,wsize=32768,intr,noatime" options, and maybe use NFSv3 instead of NFSv4 - make sure you change both the server and client settings. And work with your network admins to get jumbo frames enabled. Moving more data per packet is a good way to address latency because you wind up waiting for a response far fewer times.
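    For reference, on a Linux client those suggestions would end up looking something like this (server name and paths are placeholders; vers=3 forces NFSv3 as suggested above):
    mount -t nfs -o rsize=32768,wsize=32768,intr,noatime,vers=3 server:/export/data /mnt/data
    or as a line in /etc/fstab:
    server:/export/data  /mnt/data  nfs  rsize=32768,wsize=32768,intr,noatime,vers=3  0 0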

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP as well as used an rsize and wsize that doubled what I had above. It didn't help.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be down to server settings that could not be changed because it is an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).
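    (Side note, not from the original posts: bs=1048576k count=4 means each run writes 4 GiB, so the timings above work out to roughly 4096 MiB / 148.98 s ≈ 27 MB/s for RAIDZ2, ≈ 33 MB/s for RAIDZ, ≈ 35 MB/s for the two-mirror pool, and ≈ 36 MB/s with compression on top.)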

  • iPad mini (1st gen, 64 GB, Wi-Fi only): some apps lag after a few seconds of play

    I have an iPad mini 1st gen, 64 GB (Wi-Fi only), and I have a problem with some of my apps. They start to lag after you play them for a few seconds. The apps that lag are Call of Duty: Strike Team, GTA: San Andreas and NFS Most Wanted. Please help me.

    I'm going to guess videos buffer for a while also...
    Two possibilities: either you need to close apps on the iPad, or your internet speed is not able to handle the demands of a high-speed device.
    To close apps: double-click the home button, then swipe up to close the apps.
    To solve an internet problem, contact your provider. Basic internet is not enough for video games and movies. Your router may also be old and slow.

  • How to set userlevel permission for GFS and NFS

    Hi,
    how do I set user-level permissions for GFS and NFS?
    Regards

    hi
    http://www.redhat.com/docs/manuals/enterprise/
    AND
    http://en.tldp.org/
    probably your best bet.
    regards
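    (Not from the original reply, but on the NFS side user-level access in practice comes down to the export options plus ordinary Unix ownership and mode bits on the shared directory. A minimal sketch on a Linux server, with placeholder paths, users and network:)
    Add the export to /etc/exports:
    /export/data 192.168.1.0/24(rw,root_squash)
    Then set ownership and permissions on the directory and re-export:
    chown -R appuser:appgroup /export/data
    chmod -R 750 /export/data
    exportfs -ra
    Clients that mount the share are then subject to the normal Unix permission checks, with root mapped to an anonymous user because of root_squash.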

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server using the EBS software that is running ZFS file systems in both the global zone and sub-zones? (NB: the server's system file store - /, /usr, /var - is UFS for all zones.)
    Edited by: neilnewman on Apr 3, 2008 6:42 AM

  • 7110: CIFS and NFS permission woes. I'm tired and I want to go home.

    OK, here's the dealio...
    I have a share exported via CIFS and NFS from our 7110 array running 2010.02.09.2.1,1-1.18.
    I have AD configured for CIFS authentication.
    I have a UNIX desktop, so I use SMB to authenticate via AD and talk to the CIFS share on the array.
    I have the NFS share mounted using vers 3 on Solaris 10.
    Now, the problem..........
    PERMISSIONS!!!
    Here’s what I want to do,
    Create a file or folder over CIFS and preserve the username on the NFS side.
    For example, I log in as myself via AD and, bam, I'm on the array.
    Create a file.
    Check the ownership of the file on the NFS mount and it has become a series of numbers, which I assume are taken from my Windows SID. As Solaris can't relate my SID to a UNIX username, I'm left out in the dark.
    So I then tried to set up some rule-based identity mapping so my Windows login would be converted to my UNIX username. No luck; still a series of numbers listed against my files.
    I could work around this if I could chown, but I can't even do that, as it says chown: filename: Not owner.
    What gives? How do I keep my username from CIFS to NFS? HELP!!!!

    Did you have any joy with this?
    I have never been able to determine a consistent configuration for NFS/CIFS sharing on a 7310. Ended up opening access to all on the NFS side (v4) and the CIFS just worked out of the box.
    I am using ID Mapping, with IDMU first, then rule based mapping next. The box picks up the correct UID/GID from AD but doesn't always inherit the user & group for the NFS side.
    Chris
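    (For reference, not from the thread: on a plain Solaris host the rule-based mapping being described is managed with the idmap command, and the 7000-series appliance exposes the same thing through its Identity Mapping service. The names below are placeholders:)
    idmap add winuser:jsmith@EXAMPLE.COM unixuser:jsmith
    idmap add wingroup:staff@EXAMPLE.COM unixgroup:staff
    idmap list
    Once the mapping rules are in place, new files created over CIFS should show the mapped UNIX owner on the NFS side; existing files keep whatever owner they were created with.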

  • Deadlocking issue with sshfs and nfs

    Okay, I've used both sshfs and nfs for remotely accessing the home partition on my fileserver, but I have been having a problem where the networking on the server suddenly cuts out.  Any processes that are accessing the folder I mounted nfs/sshfs with become deadlocked.  Any processes that try access my home directory, where the remote folder sits, are also deadlocked.  I cannot get into the machine with ssh.  I have to manually reboot it in order to get any networking at all.
    I also have to force-kill any known processes that are accessing the remote folder, and if I don't know what they are, I have to forcibly unmount it. This issue has been occurring with this specific fileserver since I got it. It is running Arch Linux i686, but it had the same problem with the server editions of both Fedora and Ubuntu.
    I don't know where to begin with fixing this problem, nor do I know how to diagnose it.

    Consider the "soft" mount option for NFS.
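    (A minimal sketch of what that might look like in /etc/fstab, with a placeholder host and paths; soft makes the client return I/O errors instead of hanging forever when the server disappears, and timeo/retrans bound how long it retries:)
    fileserver:/export/home  /mnt/home  nfs  soft,intr,timeo=50,retrans=3  0 0
    The trade-off is that applications can see I/O errors on a flaky network, so soft is best reserved for data you can afford to re-read.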

  • [SOLVED] Netbooting with PXE, TFTP and NFS / Numerous errors

    Greetings all, hope you can help me out.
    Been given a task by my company of making a network-bootable ICA client (with X and Firefox, with the Citrix ICA client installed) as small as possible to minimize network traffic (as 440 workstations would be downloading the end product simultaneously, so it'd beat ten bells of proverbial out of the core and edge switches for a little while). I discovered two options. One is to integrate everything inside a cloop image directly inside the INITRD. I have stacks of working INITRDs with their matched kernels, yet this being my first dabble into extracting an INITRD, my faffing with CPIO has resulted in me nuking my base layout (thank god for snapshotting in VMware Workstation!) 4 times, and either getting "Premature end of file" or a copious number of lines stating "cpio: Malformed Number: <strange characters>", finally ending with "Premature end of file". As a result I went in search of another option, which would be booting off an NFS share. I followed the guide:
    http://wiki.archlinux.org/index.php/Dis … t_NFS_root
    ...in order to set up a network booted install of Arch and hit a few snags along the way, probably a result of using multiple operating systems for the TFTP and NFS server as opposed to using what the guide recommends, but I'm not sure as these seem solvable, although I don't know how right now.
    The set up:
    DHCP is provided by a Microsoft Windows Server 2003 VM (AD Integrated) on 172.16.10.17 on a box called "Rex".
    TFTP is provided by another Windows Server 2003 VM by "TFTPd32" which is a free download. This is located on 172.16.10.158 on a box called "Terra".
    The NFS store is provided by OpenFiler 2.3 which is a specialized version of rPath Linux designed specifically for turning boxes in to dedicated NAS stores. This is located on 172.16.10.6, and is called "frcnet-nas-1".
    The problem:
    DHCP is correctly configured with a Boot Host Name (Which is 172.16.10.158) and a boot file name of "pxelinux.0". This is confirmed as working.
    The client gets the kernel and INITRD from TFTP and boots up fine until it hits "Waiting for devices to settle...", at which point it echoes "Root device /dev/nfs doesn't exist, attempting to create it...", which it seems to do fine. It then passes control over to kinit and echoes "INIT: version 2.86 booting" and the archlinux header, and immediately after that it prints:
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    /bin/mknod: '/dev/null': File exists
    /bin/mknod: '/dev/zero': File exists
    /bin/mknod: '/dev/console': File exists
    /bin/mkdir: cannot create directory '/dev/pts': File exists
    /bin/mkdir: cannot create directory '/dev/shm': File exists
    /bin/grep: /proc/cmdline: No such file or directory
    /etc/rc.sysinit: line 72: /proc/sys/kernel/hotplug: No such file or directory
    :: Using static /dev filesystem [DONE]
    :: Mounting Root Read-only [FAIL]
    :: Checking Filesystems [BUSY]
    /bin/grep: /proc/cmdline: No such file or directory
    :: Mounting Local Filesystems
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    [DONE]
    :: Activating Swap [DONE]
    :: Configuring System Clock [DONE]
    :: Removing Leftover Files [DONE]
    :: Setting Hostname: myhost [DONE]
    :: Updating Module Dependencies [DONE]
    :: Setting Locale: en_US.utf8 [DONE]
    :: Setting Consoles to UTF-8 mode[BUSY]
    /etc/rc.sysinit: line 362: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 362: /dev/vc/1: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/1: No such file or directory
    ... all the way down to vc/63 ...
    :: Loading Keyboard Map: us [DONE]
    INIT: Entering runlevel: 3
    :: Starting Syslog-NG [DONE]
    Error opening file for reading; filename='/proc/kmsg', error='No such file or directory (2)'
    Error initializing source driver; source='src'
    :: Starting Network...
    Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    eth0: dhcpcd 4.0.3 starting
    eth0: broadcasting inform for 172.16.10.154
    eth0: received approval for 172.16.10.154
    eth0: write_lease: Permission denied
    :: Mounting Network Filesystems
    mount: only root can do that
    [FAIL]
    :: Starting Cron Daemon [DONE]
    ...and, nothing after that, it just stops. Kernel doesn't panic, and hitting ctrl+alt+delete does what you'd expect, a clean shutdown minus a few errors about filesystems not being mounted. It seems /proc isn't getting mounted because init apparently doesn't have the appropriate permissions, and /proc not being mounted causes a whole string of other issues. Thing is, proc gets created at boot time as it contains kernel specific information about the system and the kernel's capabilities, right? Why can't it create it? How come init doesn't have the same privileges as root as it usually would, and how would I go about fixing it?
    I admit, while I'm fairly competent in Linux, this one has me stumped. Anyone have any ideas?
    Last edited by PinkFloydYoshi (2008-11-22 12:29:01)

    The idea behind the Windows DHCP and TFTP is that we'd be using an existing server and a NetApp box with an NFS license to serve everything off. I would have loved to build a new server that is completely Linux, but neither my boss nor the other technician has ever used Linux, so if I left for any reason they'd be stuck if they ever ran into trouble, which is why I've struggled to get Linux to penetrate our all-Windows infrastructure.
    During my hunting around on Google I found a lot of information on making my own initrd, and a lot of it using all manner of switches. I can make them fine, but I figure that I would need to look at extracting the current working one first, adding X, Firefox and the ICA client to it, then compressing it again. Cloop came about when I was looking at DSL's internals. The smaller the initrd, the better, so utilizing this could possibly be a plus too.
    The reason I'm doing this with Arch Linux is that I know Arch's internals quite well (and pacman is just wondrous, which is more than I can say for yum), so if I run into a small problem I'm more likely to fix it without consulting Google. Fair enough though, the NFS booting method is giving me issues I never thought were possible. Ahh, sod's law strikes again.
    Addendum: I've noticed something which struck me as odd. Files in the NFS share are somehow owned by 96:scanner instead of root:root. Upon attempting to change this, it tells me "Operation Not Permitted". Further prodding has led me to believe it's an Openfiler thing, where GID/UID 96 on the OpenFiler box is "ofgroup"/"ofguest". Chowning / to root:root puts the NFS boot right and gives me a prompt, however I cannot log in as root. I've also discovered that chrooting into the base from my Arch workstation and creating a directory makes the directory owned by ofgroup:ofguest again, so it's an Openfiler thing after all this time. Prodding further.
    Addendum two: for anyone using Openfiler out there, when you allow guest access to the NFS share, be sure to set the Anonymous GID and Anonymous UID to 0. By default they are 96, and as a result you get the errors I experienced when trying to boot. This is insecure, so you should add some sort of network/host/IP-range restriction. Because the root filesystem ends up with 96:96 as the owner of everything after you install the base layout using pacman (and of any changes you make afterward), init and root no longer have the appropriate permissions; user 96:96 (which is "scanner" in Arch Linux) has them instead, and init would need to run as "scanner" in order to boot completely.
    The solution is to set the Anon GID and Anon UID to 0, chown the entire diskless root filesystem to root, then use a Linux desktop to mount the diskless root filesystem, mount /proc and /sys, bind-mount /dev, and chroot into the diskless root filesystem. At this point, to clear up any problems with bad passwords, use passwd to change your password. Exit the chroot environment, then unmount the diskless proc, sys and dev. Boot up via the network and use your chosen password to log in as root. Then start clearing up permissions from the en-masse filesystem chown, and you should have a usable diskless root.
    I'll experiment further, clear up some of the remaining permission errors that occurred during boot, and report on my progress in fixing it. Didn't like the idea of chowning the entire share as root. :S
    Last edited by PinkFloydYoshi (2008-11-21 19:28:15)
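    (A rough sketch of the repair sequence described above, run from a Linux desktop; /srv/diskless is a placeholder for wherever the diskless root is mounted:)
    Undo the en-masse 96:96 ownership:
    chown -R root:root /srv/diskless
    Mount proc, sys and bind dev, then chroot in and reset the root password:
    mount -t proc proc /srv/diskless/proc
    mount -t sysfs sys /srv/diskless/sys
    mount --bind /dev /srv/diskless/dev
    chroot /srv/diskless /bin/bash
    passwd
    exit
    Clean up the mounts afterwards:
    umount /srv/diskless/dev /srv/diskless/sys /srv/diskless/proc
    After that, boot over the network, log in as root with the new password, and fix up the individual file permissions that the blanket chown flattened.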

  • Doubt About FTP And NFS

    Hi Experts,
    1. What is the difference between FTP and NFS (as transport protocols)?
    2. When would we use FTP and when NFS? In which cases?
    Please let me know in detail.
    Regards
    Khanna

    Hi, thanks for your quick reply.
    As you said:
    >>>> that the client's system is across your network and the client is not ready to send you the file. At that time you have to use FTP.
    This is OK.
    Question: and for this, do we need to be on a VPN, or is that not necessary?
    >>>> In scenarios where the XI system can store the file on its own server (e.g. cases where the organization has XI in place and doesn't want to add an extra FTP server, they can place the file directly on the XI file system). In these cases NFS is used.
    In this case you need to put the file on the XI server; from where would you get the file to place it on the server (via the internet, by hand, or some other way)?
    Please let me know all the details.
    Regards
    Khanna

  • Systemd and nfs-common ?

    My nfs-client setup takes 62s to mount an NFSv4 share. I've tried to solve this by creating my own systemd nfs-common.service, but it fails: it doesn't see rpcbind as started, even though it is. I've googled high and low for a systemd *.service that works, but haven't found any nfs-common.service file at all.
    I have ExecStart=/etc/rc.d/nfs-common start in my .service file and it is enabled, but the status message is:
    nfs-common.service - nfs-common
    Loaded: loaded (/etc/systemd/system/nfs-common.service; enabled)
    Active: failed (Result: exit-code) since Sat, 19 May 2012 12:25:43 +0200; 10min ago
    Process: 1911 ExecStart=/etc/rc.d/nfs-common start (code=exited, status=1/FAILURE)
    CGroup: name=systemd:/system/nfs-common.service
    sudo /etc/rc.d/nfs-common start gives:
    Start rpcbind first. [FAIL]
    Same thing in the journal (systemd-journalctl | grep -i nfs):
    May 19 12:14:09 mediadatorn kernel: RPC: Registered tcp NFSv4.1 backchannel...e.
    May 19 12:14:09 mediadatorn kernel: FS-Cache: Netfs 'nfs' registered for caching
    May 19 12:14:10 mediadatorn[103]: Inserted module 'nfs'
    May 19 12:14:11 mediadatorn nfs-common[227]: Start rpcbind first. [FAIL]
    All this came about because I started trying to optimize systemd. Before today, when I tried to disable all legacy Arch units, I had the NFS client running OK. I did this after reading falconindy's tip in the big systemd thread. By the way, that thread is so big it has become nearly useless.
    Boot up times:
    systemd-analyze
    Startup finished in 13269ms (kernel) + 94810ms (userspace) = 108080ms
    systemd-analyze blame
    62034ms mnt-SERVER_NYTT.mount
    6764ms console-kit-daemon.service
    2787ms systemd-modules-load.service
    1838ms home.mount
    867ms syslog-ng.service
    552ms rpc-statd.service
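    (Not from the thread, but the symptom - the wrapper unit not seeing rpcbind as started - usually means the unit has no ordering dependency on it. A minimal sketch of what the unit might need, keeping the rc script path from the post; the ordering and install directives are my assumption:)
    [Unit]
    Description=nfs-common (rc script wrapper)
    Requires=rpcbind.service
    After=rpcbind.service network.target
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/nfs-common start
    ExecStop=/etc/rc.d/nfs-common stop
    [Install]
    WantedBy=multi-user.target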

    swanson wrote:
    I followed this
    falconindy wrote:
    arch-daemons.target. Sadly, it's going to be a little longer than I wanted until that unit disappears. That said, the intended usage of that target was never meant to be more than:
    1) Install systemd, initscripts-systemd, systemd-arch-units (arch-daemons.target is enabled)
    2) reboot
    3) copy /run/systemd/generator/arch-daemons.target.wants/*.service to /etc/systemd/system for anything where a native unit file doesn't exist
    4) kill -1 1 (refreshes systemd's unit database)
    5) systemctl enable *.service (for the services you copied over in step 3)
    6) disable arch-daemons.target
    7) ????
    8) profit.
    (finally found it)
    Right now I'm trying to reverse it and start over.
    I tried this, and immediately didn't get it to mount. I restarted rpcbind (supposedly already running) and nfs-common and I think it still stalled on mounting, but now that I've been away from it for a while, it finally did mount.
    Last edited by nomorewindows (2012-06-24 00:09:47)

  • Software to configure CIFS and NFS on AWS?

    Does anybody have experience with software that will configure CIFS and NFS on Amazon Web Services (AWS)?

    I suggest that you post this question in an AWS community forum available there : https://forums.aws.amazon.com/index.jspa

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, I was called in to meet my first. The database was experiencing heavy IO problems, both from undersized IOPS capability and from a lack of performance on the backups - the reading part of them. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small, and varying wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
    1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it's a datafile that gets random writes and is full-scanned to see the impact of the random writes on the multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
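    For anyone who wants to reproduce this without the C program, a rough bash/ksh approximation of the same loop (the pool path, file name and iteration counts are placeholders; a 10 GiB file holds 1310720 blocks of 8 KiB):
    # create the 10 GiB test file once (1310720 blocks of 8 KiB)
    dd if=/dev/zero of=/pool/test/datafile bs=8k count=1310720
    while true; do
      # step 1: 1000 random 8 KiB writes at 8 KiB boundaries, in place
      n=0
      while [ $n -lt 1000 ]; do
        blk=$(( (RANDOM * 32768 + RANDOM) % 1310720 ))
        dd if=/dev/urandom of=/pool/test/datafile bs=8k count=1 seek=$blk conv=notrunc 2>/dev/null
        n=$((n + 1))
      done
      # step 2: time a full sequential read in 1 MiB IOs
      time dd if=/pool/test/datafile of=/dev/null bs=1024k
    done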
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Whereas on EXT3, on which I ran the same tests (on the exact same storage), the read timings were stable (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and went up to 35 ms for one 128*8 KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and could not be fragmented any further.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you all this means downtime.
    On the production server this issue has gotten so bad that migrating to a new, different filesystem by copying the files will take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans; or perhaps unlucky, because then this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
    The question is, of course, am I missing something here? Who else has seen this behavior?

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact, and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    Certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This fact is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes - it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
    "Is the whole database in the same pool?"
    As in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    It's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that. We never surpassed 55% pool usage.
    Thanks!
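    (Not from the thread, but the "observe the average block size during a scanning read" check mentioned above can be done with a plain sequential read and iostat; the file path is a placeholder:)
    dd if=/pool/data/datafile of=/dev/null bs=1024k &
    iostat -xn 5
    The average read size per IO is kr/s divided by r/s; on a badly fragmented file it drops well below the 1024 KiB the application is asking for.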

  • LDAP and NFS mounts/setup OSX Lion iMac with Mac Mini Lion Server

    Hello all,
    I have a local account on my iMac (Lion), and I also have a Mac Mini (Lion Server) and I want to use LDAP and NFS to mount the /Users directory, but am having trouble.
    We have a comination of Linux (Ubuntu), Windows 7 and Macs on this network using LDAP and NFS, except the windows computers.
    We have created users in Workgroup Manager on the server, and we have it working on a few Macs already, but I wasn't there to see that process.
    Is there a way to keep my local account separate, and still have NFS access to /Users on the server and LDAP for authentication?
    Thanks,
    -Matt

    It would make a great server. Bonus over Apple TV for example is that you have access via both wired ethernet and wireless. Plus if you load tools from XBMC, Firecore and others you have a significant media server. Cost is right too.
    Many people are doing this - google mac mini media server or other for more info.
    Total downside to any Windows-based system: dealing with constant anti-virus, major security hassles, lack of true media integration, and it's a PITA to update, etc.
    You should be aware that Lion Server is not ready for prime time - it still has significant issues if you are migrating from Snow Leopard 10.6.8. If you buy an Apple-fresh Lion Server Mac mini you should have no problems.
    You'll probably be pleased.
