NFS performance with Solaris 10

Hello,
We have been playing with one of the x4200s running s10u2 (or snv_50, for that matter) and are getting terrible NFS performance numbers. Initially we suspected it was just the ZFS filesystem on the back end (which it partly was; zil_disable made it a lot better), but even after exploring a little I am still getting terrible numbers for NFS backed by UFS. Using afio to extract an archive onto the disk gives:
Local:
afio: 432m+131k+843 bytes read in 263 seconds. The operation was successful.
Remote:
afio: 432m+131k+843 bytes read in 1670 seconds. The operation was
successful.
I have raised the ncsize to 1000000, and upped the server threads to 1024.
The same thing on a Linux box (ext3) turns in local times of 100 seconds and remote times of 180 seconds. The gap between the local and remote numbers is just crazy, and the gap with ZFS is way worse:
Local zfs:
afio: 432m+131k+843 bytes read in 137 seconds. The operation was successful.
NFS -> ZFS:
afio: 432m+131k+843 bytes read in 2428 seconds. The operation was
successful.
I have started looking into dtrace for tracking the problem, but don't have much to report yet.
Any suggestions appreciated.
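For reference, a minimal sketch of where those tunables live on a stock Solaris 10 box (the values are just the ones mentioned above, not a recommendation). In /etc/system, effective after a reboot:
set ncsize=1000000
set zfs:zil_disable=1
In /etc/default/nfs:
NFSD_SERVERS=1024
Then restart the NFS server so the new thread count takes effect:
# svcadm restart svc:/network/nfs/server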

Ask this on the Solaris Forum, not the Java Networking forum.
Edit: typo

Similar Messages

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP, as well as doubling the rsize and wsize above. Neither helped.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could have been down to server-side settings that could not be changed because it is an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.
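    (In case it helps anyone reproduce this: the same options can be applied to a single mount by hand instead of via /etc/nfs.conf; the server name and export path below are placeholders.)
    sudo mkdir -p /Volumes/test
    sudo mount -t nfs -o rsize=32768,wsize=32768,vers=3 nas:/raid/test /Volumes/test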

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).
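    (If anyone repeats these tests, it is worth watching the pool while dd runs and checking what compression actually achieved afterwards; both are standard ZFS commands.)
    # zpool iostat -v vault 5
    # zfs get compressratio vault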

  • Solaris 10 NFS performance on Linux running ws3 update 3

    Hope someone can help me sort out this problem.
    Dear Support.
    We have a Solaris/SPARC file server running Solaris 10. The Solaris machine acts as an NFS file server. We encounter very poor NFS performance when copying files to and from a filesystem via Linux NFS.
    I have set up a very simple test scenario: I created a tar file of around 3 GB. The file is sitting on a SAN system, on a Solaris 10 UFS filesystem.
    Sun E240, Solaris 8 NFS client, GB interface, copy to and from the same disk via NFS:
    time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
    real 2m18.91s
    user 0m0.11s
    sys 0m29.72s
    IBM/AMD 64-bit Linux WS3 U5, GB interface, copy to and from the same disk via NFS:
    time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
    real 6m24.670s
    user 0m0.130s
    sys 0m21.860s
    I have also run the test on other Linux boxes with similar results.
    The funny part is that I can reproduce the performance problem on other Sun systems too, among them a Sun Blade 2000 with 8 GB RAM.
    Let me wrap up:
    Always bad NFS performance between a Solaris NFS server and a Linux client.
    Not always bad performance between a Solaris server and Solaris clients.

    It's been a while since I was doing linux->solaris nfs, so bear with me as I clear out the cobwebs.
    First things to check: Mount options for the nfs mount to the server.
    Which version of NFS are you using (v2, v3)? Solaris uses version 3 mounts by default.
    What's your wsize and rsize for reads and writes?
    I believe linux is limited to using 8k r/w block sizes. Solaris will let you use r/wsize up to 32k in nfsv3, which would really help with larger data transfers.
    nfsv3 has a number of performance enhancements over v2, so give that a shot with a larger block size.
    nfsvers=3,wsize=8192,rsize=8192,nolock,intr
    Also experiment with your locking options, that might help some.
    NFS has always been a fairly weak point for linux.
    A few links for reference:
    http://www.scd.ucar.edu/hps/TECH/LINUX/linux.html
    http://nfs.sourceforge.net/
    Cheers && good luck,
    fptt.
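    (A concrete example of that suggestion on the Linux client, as a one-off mount; the server name is a placeholder and the options are the ones quoted above.)
    mount -o nfsvers=3,wsize=8192,rsize=8192,nolock,intr sunserver:/seis/seis600_new /seis/seis600_new
    grep seis600 /proc/mounts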

  • Solaris 10 NFS client with FreeBSD server

    Hello,
    I have an issue with Solaris 10 and a FreeBSD server (currently running 5.4-STABLE). This server exports several NFS shares (3 TB each) to various machines including Linux (which works fine) and Solaris (which worked fine on 9 but not on 10).
    The latest progress was that if you link .sunw in the /home/$USER
    directory to /tmp on Solaris 10, you can then use applications and ssh out from the server. It looks to me like a pure Solaris issue, but I have still failed to find a way to fix it.
    The mount options I currently have on Solaris are:
    rw,bg,nosuid,nfsver=3,tcp,intr,-w=32768,-r=32768
    I also modified /etc/default/nfs to force NFSv3, thinking it might have been the issue: NFS_CLIENT_VERSMAX=3
    I have searched forums and search engines hoping to find an answer but have failed to find anything yet. As far as I can tell, it is a Solaris 10 issue, but I have no idea how to fix it.
    Thanks for any help,
    Steph

    I also have this problem with FreeBSD 6.2 and Solaris 10 (was OK with Solaris 7 and 9).
    The file system shared is about 30GB, and is FreeBSD on Sparc64.
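    (One quick check on the Solaris 10 client: nfsstat -m prints the options that were actually negotiated for each NFS mount, which confirms whether the vers=3 forcing took effect.)
    # nfsstat -m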

  • Experience with Solaris 10 Memory Management

    Dear Forum!
    Using Note 724713 we installed a couple of R3 systems on Solaris 10. Regardless of which R3 version is used, we observe many more memory problems than with prior Solaris releases or other operating systems like AIX.
    For instance 1: We moved three R3 systems from a server with Solaris 8 to a server with Solaris 10. We didn't change the memory parameters of R3 or the databases (Oracle). The Solaris 10 server has the same amount of memory (8 GB) as the Solaris 8 server. Now, on the Solaris 10 server we can only start two R3 systems; the third one doesn't start because no memory can be allocated. The memory parameters of all components slightly overcommit the main memory. Because the systems are not productive, they mostly idle, and performance loss due to paging and even swapping didn't worry us. Not only did the third system not start, the whole OS was more or less frozen and no more work was possible, not even on the other two R3 systems which were already started.
    For instance 2:
    On another server with 16 GB of memory we started an installation of an SAP system, and sapinst crashed due to lack of memory. Admittedly, here the main memory was also a bit overcommitted, but this is not unusual here and I have never seen sapinst crash because of that.
    In both cases swap space was adequate.
    Do you have similar experiences? Or any tips to solve the problem?
    Thanks
    Andreas

    This is strange. Although /etc/user_attr shows
    adm::::profiles=Log Management
    lp::::profiles=Printer Management
    postgres::::type=role;profiles=Postgres Administration,All
    root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no;min_label=admin_low;clearance=admin_high
    qp7adm::::project=QP7
    oraqp7::::project=QP7
    sapadm::::project=QP7
    smdadm::::project=QP7
    the process list shows that the processes of the <sid>adm user run in project "system"; only the oracle user's project is OK:
    root@pi7q #  ps -eo user,pid,ppid,project,args
        USER   PID  PPID  PROJECT COMMAND
      oraqp7  1783  3652      QP7 ora_q001_QP7
      oraqp7   125  3652      QP7 ora_mmon_QP7
      qp7adm  1884  1862   system dw.sapQP7_DVEBMGS00 pf=/usr/sap/QP7/SYS/profile/QP7_DVEBMGS00_pi7q
      oraqp7 20464  3652      QP7 oracleQP7 (LOCAL=NO)
      daemon  4416  3652   system /usr/lib/nfs/nfs4cbd
      qp7adm  1870  1862   system icman -attach pf=/usr/sap/QP7/SYS/profile/QP7_DVEBMGS00_pi7q
      oraqp7 20462  3652      QP7 oracleQP7 (LOCAL=NO)
        root 14658 25510 user.root bash
      oraqp7   105  3652      QP7 ora_dbw1_QP7
      oraqp7  2980  3652      QP7 oracleQP7 (LOCAL=NO)
        root  5819  3652   system ./tGAgent.d -d
      oraqp7  2544  3652      QP7 oracleQP7 (LOCAL=NO)
      smdadm  5639  3652   system /usr/sap/SMD/SMDA02/exe/sapstartsrv pf=/usr/sap/SMD/SYS/profile/SMD_SMDA02_pi7q
      qp7adm  1900  1862   system dw.sapQP7_DVEBMGS00 pf=/usr/sap/QP7/SYS/profile/QP7_DVEBMGS00_pi7q
        root  4443  3668   system /usr/lib/saf/sac -t 300
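    (A minimal sketch of the standard project commands for checking this; the command to launch is a placeholder. Note that already-running processes keep the project they were started in, so a fresh login is needed after changing /etc/user_attr.)
    # id -p qp7adm
    # projects -l QP7
    # newtask -p QP7 <command>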

  • NFS Performance

    I have 2 questions about NFS on 10.4.
    Client:
    Has NFS performance improved on the client side? Last time I tested was 10.3, and the sustained throughput was about 10-12 MB/s on a gig connection. This was from a Sun NFS server.
    Server:
    Has the performance improved? I am thinking about doing an Xsan NFS re-share to 50+ Linux machines in a compute farm. Will this work out well?
    I'm interested to hear from anybody doing heavy NFS serving.
    Thanks,
    David

    The client is somewhat lacking.
    On one test here (XServe G5 client talking to XServe RAID 5 array connected to XServe G5 NFS server) I get around 40MB/sec copying a file to the RAID over a gigabit ethernet network.
    By comparison, a Solaris machine talking to the same server gets almost 80MB/sec.
    So it sounds like it's improved some from when you last tested, but maybe not by as much as you'd like.
    Note that these tests were done on a single active client (or maybe some minor background traffic going on at the same time).
    As for the server side, I don't know quite where that tops out. A quick test here shows little difference in times even when multiple clients are writing to the RAID at the same time. The server might be able to keep up with the RAID speed.
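    (For a rough comparison with the numbers elsewhere in this thread, the same kind of dd test works on a 10.4 client; the mount point is an example.)
    time dd if=/dev/zero of=/Volumes/nfs_share/testfile bs=1m count=1024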

  • How to umount a busy NFS mount in solaris 2.6?

    How can I umount a busy NFS mount in Solaris 2.6 without rebooting the machine? I've tried to check what processes might hold the mount with 'fuser -c /home/user', but it reports nothing.
    I wish I had solaris 8 so I could use 'umount -f'...
    Can anyone help me?

    If your NFS mount is under autofs's control, you will run into issues. Give this a try:
    # /etc/init.d/autofs stop
    # fuser -ck /home/user
    # umount /home/user

  • [SOLVED] SGA_MAX_SIZE pre-allocated with Solaris 10?

    Hi all,
    I'm about to build a new production database to migrate an existing 8.1.7 database to 10.2.0.3. I'm in the enviable position of having a good chunk of memory to play with on the new system (compared with the existing one), so I was looking at a suitable size for the SGA... when something pinged in my memory about SGA_MAX_SIZE and OS memory allocation: some platforms will allocate the entire amount of SGA_MAX_SIZE rather than just SGA_TARGET.
    So I did a little test. Using Solaris 10 and Oracle 10.2.0.3, I created a basic database with SGA_MAX_SIZE set to 400MB and SGA_TARGET set to 280MB.
    $ sqlplus
    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jan 30 18:31:21 2008
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    Enter user-name: / as sysdba
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> show parameter sga
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 400M
    sga_target                           big integer 280M
    So I was expecting to see the OS pre-allocate 280MB of memory, but when I checked, the segment is actually the 400MB (i.e. SGA_MAX_SIZE) (my database owner is 'ora10g'):
    $ ipcs -a
    IPC status from <running system> as of Wed Jan 30 18:31:36 GMT 2008
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP CBYTES  QNUM QBYTES LSPID LRPID   STIME    RTIME    CTIME
    Message Queues:
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NATTCH      SEGSZ  CPID  LPID   ATIME    DTIME    CTIME
    Shared Memory:
    m         22   0x2394e4   rw-r---   ora10g   10gdba   ora10g   10gdba     20  419438592  2386  2542 18:31:22 18:31:28 18:28:18
    T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NSEMS   OTIME    CTIME
    Semaphores:
    s         23   0x89a070e8 ra-r---   ora10g   10gdba   ora10g   10gdba   154 18:31:31 18:28:18
    $
    I wasn't sure whether Solaris 10 was one of the OSs with truly dynamic memory for the SGA but had hoped it was... this seems to say differently. Really I'm just after some confirmation that I'm reading this correctly.
    Thanks.
    Joseph
    Message was edited by:
    Joseph Crofts
    Edited for clarity

    I don't want to get bogged down in too many details, as the links provided in previous posts have many details of SGA tests and the results of what happened. I just want to add a bit of explanation about the Oracle SGA and shared memory on UNIX and Solaris in particular.
    As you know Oracle's SGA is generally a single segment of shared memory. Historically this was 'normal' memory and could be paged out to the swap device. So a 500 MB SGA on a 1 GB physical memory system, would allocate 500 MB from the swap device for paging purposes, but might not use 500 MB of physical memory i.e. free memory might not decrease by 500 MB. How much physical memory depended on what pages in the SGA were accessed, and how frequently.
    At some point some people realised that this paging of the SGA was actually slowing performance of Oracle, as now some 'memory' accesses by Oracle could actually cause 'disk' accesses by paging in saved pages from the swap device. So some operating systems introduced a 'lock' option when creating a shared memory segment (shmat system call if memory serves me). And this was often enabled by a corresponding Oracle initialisation parameter, such as lock_sga.
    Now a 'locked' SGA did use up the full physical memory, and was guaranteed not to be paged out to disk. So Oracle SGA access was now always at memory speed, and consistent.
    Some operating systems took advantage of this 'lock' flag to shared memory segment creation to implement some other performance optimisations. One is not to allocate paging storage from swap space anyway, as it cannot be used by this shared memory segment. Another is to share the secondary page tables within the virtual memory sub-system for this segment over all processes attached to it i.e. one shared page table for the segment, not one page table per process. This can lead to massive memory savings on large SGAs with many attached shadow server processes. Another optimisation on this non-paged, contiguous memory segment is to use large memory pages instead of standard small ones. On Solaris instead of one page entry covering 8 KB of physical memory, it covers 8 MB of physical memory. This reduces the size of the virtual memory page table by a factor of 1,000 - another major memory saving.
    These were some of the optimisations that the original Red Hat Enterprise Linux had to introduce, to play catch up with Solaris, and to not waste memory on large page tables.
    Due to these extra optimisations, Solaris chose to call this 'locking' of shared memory segments 'intimate shared memory', or ISM for short. And I think there was a corresponding Oracle parameter 'use_ism'. This is now the default setting in Oracle ports to Solaris.
    As a result, this is why when Oracle grabs its shared memory segment up front (SGA_MAX_SIZE), it results in that amount of real physical memory being allocated and used.
    With Oracle 9i and 10g when Oracle introduced the SGA_TARGET and other settings and could dynamically resize the SGA, this messed things up for Solaris. Because the shared memory segment was 'Intimate' by default, and was not backed up by paging space on the swap device, it could never shrink in size, or release memory as it could not be paged out.
    Eventually Sun wrote a work around for this problem, and called it Dynamic Intimate Shared Memory (DISM). This is not on by default in Oracle, hence you are seeing all your shared memory segments using the same amount of physical memory. DISM allows the 'lock' flag to be turned on and off on a shared memory segment, and to be done over various memory sizes.
    I am not sure of the details, and so am beginning to get vague here. But I remember that this was a workaround on Sun's part to still get the benefits of ISM and the memory savings from large virtual memory pages and shared secondary page tables, while allowing Oracle to manage the SGA size dynamically and be able to release memory back for use by other things. I'm not sure if DISM allows Oracle to mark memory areas as pageable or locked, or whether it allows Oracle to really grow and shrink the size of a single shared memory segment. I presumed it added yet more flags to the various shared memory system calls.
    Although DISM should work on normal, single Solaris systems, as you know it is not enabled by default, and requires a special initialisation parameter. Also be aware that there are issues with DISM on high end Solaris systems that support Domains (F15K, F25K, etc.) and in Solaris Zones or Containers. Domains have problems when you want to dynamically remove a CPU/Memory board from the system, and the allocations of memory on that board must be reallocated to other memory boards. This can break the rule that a locked shared memory segment must occupy contiguous physical memory. It took Sun another couple of releases of Solaris (or patches or quarterly releases) before they got DISM to work properly in a system with domains.
    I hope I am not trying to teach my granny to suck eggs, if you know what I mean. I just thought I'd provide a bit more background details.
    John
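    (One way to see which flavour an instance actually got is to look at the SGA segment in a background process's address space; a sketch, with the SID and pid as placeholders. ISM/DISM mappings are labelled as such in the pmap output, and the page-size column shows whether large pages are in use.)
    $ pgrep -f ora_pmon_<SID>
    $ pmap -xs <pmon pid> | grep -i shmid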

  • Oracle Database Performance With Semantic

    Hello,
    Is there a Developer's Guide for Semantic that specifically talks about database performance with the Semantic network/tables/indexes? We are having issues with performance the larger the semantic network becomes.
    Any help or pointers would be appreciated.
    Thanks
    -MichaelB

    Matt,
    Thanks for your response. Here are the answers to the questions about our setup/environment.
    1) Are you querying multiple models and/or a model + entailment? If so, are you using a virtual model and using the ALLOW_DUP=T query option?
    A single model, no entailments. We attempted to use multiple models, and a virtual model (with ALLOW_DUP=T), however the UNION ALL in the explain plan made the query duration unacceptable.
    2) Are you using named graphs?
    No named graphs.
    3) How many triples are you querying?
    Approximately 85 million.
    4) What semantic network and/or datatype indexes have been created?
    We have PCSGM, PSCGM, PSCM, PCSM, CPSM, and SCM.
    5) What is your hardware setup (number and type of disks, RAM, processor, etc.)?
    We are running the 11.2.0.3 database on a Sun Solaris T2000, we have ASM managing our disks from RAID5, I believe currently we have two Disk Groups with the indexes in one and the data tables in the other. We have 32 GB of memory, and 32 CPUs. However, it is not the only thing running on the machine.
    6) How much memory have you allocated to the database (pga, sga, memory_target, etc.)?
    We have the memory_target set to 9GB, the db_cache_size set to 2GB, and the db_keep_cache_size set to 4.5GB. `pga_aggregate_target` is set to 0 (auto), as is `sga_target`.
    (Since my initial request, we pinned the RDF_VALUE$ (~2.5GB) and C_PK_VID (~1.7GB) objects in the KEEP buffer cache, which drastically improved performance)
    7) Are you using parallel query execution?
    Yes, some of the more complex queries we run with the parallel hint set to 8.
    8) Have you tried dynamic sampling?
    Yes. We have ODS set to 3 for our more complex queries; we have not experimented with this much to see if there is a performance gain from changing this value.
    Thanks again,
    -Michael
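    (For anyone else hitting this: the KEEP-pool pinning mentioned above is just a storage-clause change plus a big enough db_keep_cache_size; a sketch, assuming the two objects live in the MDSYS schema and that C_PK_VID is an index; verify owner and object type in your own data dictionary first.)
    $ sqlplus / as sysdba
    SQL> ALTER TABLE MDSYS.RDF_VALUE$ STORAGE (BUFFER_POOL KEEP);
    SQL> ALTER INDEX MDSYS.C_PK_VID STORAGE (BUFFER_POOL KEEP);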

  • Confused about ZFS filesystems created with Solaris 11 Zone

    Hello.
    Installing a blank zone in Solaris 10 with "zonepath=/export/zones/TESTvm01" just creates one ZFS filesystem:
    zfs list
    ...
    rzpool/export/zones/TESTvm01 4.62G 31.3G 4.62G /export/zones/TESTvm01
    Doing the same steps with Solaris 11 creates more filesystems:
    zfs list
    ...
    rpool/export/zones/TESTvm05 335M 156G 32K /export/zones/TESTvm05
    rpool/export/zones/TESTvm05/rpool 335M 156G 31K /rpool
    rpool/export/zones/TESTvm05/rpool/ROOT 335M 156G 31K legacy
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris 335M 156G 310M /export/zones/TESTvm05/root
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris/var 24.4M 156G 23.5M /export/zones/TESTvm05/root/var
    rpool/export/zones/TESTvm05/rpool/export 62K 156G 31K /export
    rpool/export/zones/TESTvm05/rpool/export/home 31K 156G 31K /export/home
    I don't understand why Solaris 11 is doing that. Just one FS (like in Solaris 10) would be better for my setup. I want to configure all created volumes myself.
    Is it possible to deactivate this automatic "feature"?

    There are several reasons that it works like this, all guided by the simple idea "everything in a zone should work exactly like it does in the global zone, unless that is impractical." By having this layout we get:
    * The same zfs administrative practices within a zone that are found in the global zone. This allows, for example, compression, encryption, etc. of parts of the zone.
    * beadm(1M) and pkg(1) are able to create boot environments within the zone, thus making it easy to keep the global zone software in sync with non-global zone software as the system is updated (equivalent of patching in Solaris 10). Note that when Solaris 11 updates the kernel, core libraries, and perhaps other things, a new boot environment is automatically created (for the global zone and each zone) and the updates are done to the new boot environment(s). Thus, you get the benefits that Live Upgrade offered without the severe headaches that sometimes come with Live Upgrade.
    * The ability to have a separate /var file system. This is required by policies at some large customers, such as the US Department of Defense via the DISA STIG.
    * The ability to perform a p2v of a global zone into a zone (see solaris(5) for examples) without losing the dataset hierarchy or properties (e.g. compression, etc.) set on datasets in that hierarchy.
    When this dataset hierarchy is combined with the fact that the ZFS namespace is virtualized in a zone (a feature called "dataset aliasing"), you see the same hierarchy in the zone that you would see in the global zone. Thus, you don't have confusing output from df saying that / is mounted on / and such.
    Because there is integration between pkg, beadm, zones, and zfs, there is no way to disable this behavior. You can remove and optionally replace /export with something else if you wish.
    If your goal is to prevent zone administrators from altering the dataset hierarchy, you may be able to accomplish this with immutable zones (see zones admin guide or file-mac-profile in zonecfg(1M)). This will have other effects as well, such as making all or most of the zone unwritable. If needed, you can add fs or dataset resources which will not be subject to file-mac-profile and as such will be writable.
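    (A minimal zonecfg sketch of the immutable-zone option mentioned above, using the zone name from the question; the zone has to be rebooted for the profile to take effect.)
    # zonecfg -z TESTvm05
    zonecfg:TESTvm05> set file-mac-profile=fixed-configuration
    zonecfg:TESTvm05> commit
    zonecfg:TESTvm05> exit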

  • Monitoring services in zones with Solaris container Manager

    I need to know how to manage Solaris services (SMF) in a sparse zone with Solaris Container Manager.
    I have gone through all the documentation and have not found any clue.
    I installed the Sun Management Center (SMC) server on a server box and the agents on others. I can manage the SMF services of the global zone by drilling down via the console GUI. But to access Container Manager I have to go via an https connection, and drilling down on the zone did not reveal that SMF can be monitored.
    Please if you have any idea, share it with me.

    Hi,
    check these:
    - version of the web server; the latest is 3.1
    # smcwebserver -V
    - check webconsole is started and running
    # smcwebserver status
    Sun Java(TM) Web Console is running
    # svcs webconsole
    STATE STIME FMRI
    online 19:38:06 svc:/system/webconsole:console
    # svcs -pl webconsole
    fmri svc:/system/webconsole:console
    name java web console
    enabled true
    state online
    next_state none
    state_time Wed Feb 10 19:38:06 2010
    logfile /var/svc/log/system-webconsole:console.log
    restarter svc:/system/svc/restarter:default
    contract_id 64
    dependency require_all/none svc:/milestone/network (online)
    dependency require_all/refresh svc:/milestone/name-services (online)
    dependency require_all/none svc:/system/filesystem/local (online)
    dependency optional_all/none svc:/system/filesystem/autofs (online) svc:/network/nfs/client (online)
    dependency require_all/none svc:/system/system-log (online)
    process 843 /usr/java/bin/java -server -Xmx128m -XX:+UseParallelGC -XX:ParallelGCThreads=4
    - check port 6789 is listen mode
    # netstat -an | grep 6789
    *.6789 *.* 0 0 49152 0 LISTEN
    if the output shows
    localhost.6789 *.* 0 0 49152 0 LISTEN
    then do these:
    - check that the tcp_listen of webconsole service is true, default is false
    # svccfg -s webconsole listprop options/tcp_listen
    options/tcp_listen boolean false
    # svcadm disable svc:/system/webconsole:console
    # svccfg -s webconsole setprop options/tcp_listen=true
    # svccfg -s webconsole listprop options/tcp_listen
    options/tcp_listen boolean true
    # svcadm enable svc:/system/webconsole:console
    Regards

  • NFS Errors on Solaris 2.6 and 7 Systems running as a clear case client

    I read in a Rational document that when Sun fixed defect #4271267 they introduced a problem that causes EAGAIN to be returned to fsync() and close() system calls. The EAGAIN defect is being tracked by Sun as defect #4349744, and it is a problem for NFS clients running Solaris 2.6, 7, and 8.
    Sun has released a patch for this on Solaris 8 (108727-06 or later). Does anyone know which patches (patch ID numbers) need to be installed on Solaris 2.6 and Solaris 7 systems for this problem, when ClearCase views or VOBs are on a NetApp filer with Solaris servers?
    I am getting NFS errors and am not able to view the files when I try accessing the VOBs from Solaris 2.6 and 7 systems. The ClearCase server is on Solaris 8 and the VOBs are stored on a NetApp filer.
    TIA
    Regards
    Saravanan.C.S

    Hello:
    I have the same problem with only an Adaptec AHA-2940, but it is bigger... I can't install Solaris at all because of this problem.
    I am sure that it isn't a hardware problem because I have 3 IBM PC Server 315 machines and I have the same problem with all of them.
    I would be very grateful for some help.
    Hello,
    I have an Intel box running Solaris 2.6 with Oracle 8i.
    There are 2 SCSI cards. The first one has the following devices: [0 = Seagate 9.5 gig drive] [4 = Seagate tape drive] [6 = TEAC CD-ROM] [7 = Adaptec 2940 SCSI card].
    The second has the following devices: [0 = Seagate 9.5 gig drive] [1 = Seagate 9.5 gig drive] [7 = Adaptec 2040 SCSI card].
    Solaris is partitioned as follows: 1 root drive, 1 /opt drive, and 1 /backup drive.
    My problem is that periodically we get transport errors:
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: abort request, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: abort device, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: reset target, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4 (adp1):
    Dec 2 08:15:45 HAFC unix: timeout: early timeout, target=1 lun=0
    Dec 2 08:15:45 HAFC unix: WARNING: /pci@0,0/pci9004,7861@4/cmdk@1,0 (Disk8):
    Dec 2 08:15:45 HAFC unix: SCSI transport failed: reason 'incomplete': retrying command
    Like this. When they come, it is by the thousands, sometimes locking up the system. Many times the errors indicate a disk and give a block error or two. If I replace the disk, the problem usually goes away. Sometimes replacing the cable clears the problem. Sometimes this happens when there is no real activity on the server. I have applied patch 111031-01, which was supposed to fix this problem, but it hasn't. Unless I have bad hardware. Is that possible?

  • Performance with Dedup on HP ProLiant DL380p Gen8

    Hi all,
    It is not that I haven't been warned. It is just that I simply do not understand why write performance on the newly created pool is so horrible...
    Hopefully I'll get some more advice here. Some basic figures:
    The machine is a HP ProLiant DL380p Gen8 with two Intel Xeon E5-2665 CPUs and 128GB Ram.
    The storage-pool is made out of 14 900GB SAS 10k disks on two HP H221 SAS HBAs in two HP D2700 storage enclosures.
    The System is Solaris 11.1
    root@server12:~# zpool status -D datenhalde
    pool: datenhalde
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    datenhalde ONLINE 0 0 0
      mirror-0 ONLINE 0 0 0
        c11t5000C5005EE0F5D5d0 ONLINE 0 0 0
        c12t5000C5005EDBBB95d0 ONLINE 0 0 0
      mirror-1 ONLINE 0 0 0
        c11t5000C5005EE20251d0 ONLINE 0 0 0
        c12t5000C5005ED658F1d0 ONLINE 0 0 0
      mirror-2 ONLINE 0 0 0
        c11t5000C5005ED80439d0 ONLINE 0 0 0
        c12t5000C5005EDB23F1d0 ONLINE 0 0 0
      mirror-3 ONLINE 0 0 0
        c11t5000C5005EDA2315d0 ONLINE 0 0 0
        c12t5000C5005ED6E049d0 ONLINE 0 0 0
      mirror-4 ONLINE 0 0 0
        c11t5000C5005EDBB289d0 ONLINE 0 0 0
        c12t5000C5005EDB9479d0 ONLINE 0 0 0
      mirror-5 ONLINE 0 0 0
        c11t5000C5005EDD8385d0 ONLINE 0 0 0
        c12t5000C5005ED72855d0 ONLINE 0 0 0
      mirror-6 ONLINE 0 0 0
        c11t5000C5005ED8759Dd0 ONLINE 0 0 0
        c12t5000C5005EE3AB59d0 ONLINE 0 0 0
    spares
      c11t5000C5005ED6CEADd0 AVAIL
      c12t5000C5005EDA2CD5d0 AVAIL
    errors: No known data errors
    DDT entries 5354008, size 292 on disk, 152 in core
    bucket allocated referenced
    refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE
    1 3,22M 411G 411G 411G 3,22M 411G 411G 411G
    2 1,28M 163G 163G 163G 2,93M 374G 374G 374G
    4 440K 54,9G 54,9G 54,9G 2,12M 271G 271G 271G
    8 140K 17,5G 17,5G 17,5G 1,39M 177G 177G 177G
    16 36,1K 4,50G 4,50G 4,50G 689K 85,9G 85,9G 85,9G
    32 6,26K 798M 798M 798M 277K 34,4G 34,4G 34,4G
    64 1,92K 244M 244M 244M 136K 16,9G 16,9G 16,9G
    128 56 6,52M 6,52M 6,52M 10,5K 1,23G 1,23G 1,23G
    256 222 27,5M 27,5M 27,5M 71,0K 8,80G 8,80G 8,80G
    512 2 256K 256K 256K 1,38K 177M 177M 177M
    1K 4 384K 384K 384K 6,00K 612M 612M 612M
    4K 1 512 512 512 4,91K 2,45M 2,45M 2,45M
    16K 1 128K 128K 128K 24,9K 3,11G 3,11G 3,11G
    512K 1 128K 128K 128K 599K 74,9G 74,9G 74,9G
    Total 5,11M 652G 652G 652G 11,4M 1,43T 1,43T 1,43T
    root@server12:~# zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    datenhalde 5,69T 662G 5,04T 11% 2.22x ONLINE -
    root@server12:~# ./arc_summery.pl
    System Memory:
    Physical RAM: 131021 MB
    Free Memory : 18102 MB
    LotsFree: 2047 MB
    ZFS Tunables (/etc/system):
    ARC Size:
    Current Size: 101886 MB (arcsize)
    Target Size (Adaptive): 103252 MB (c)
    Min Size (Hard Limit): 64 MB (zfs_arc_min)
    Max Size (Hard Limit): 129997 MB (zfs_arc_max)
    ARC Size Breakdown:
    Most Recently Used Cache Size: 100% 103252 MB (p)
    Most Frequently Used Cache Size: 0% 0 MB (c-p)
    ARC Efficency:
    Cache Access Total: 124583164
    Cache Hit Ratio: 70% 87975485 [Defined State for buffer]
    Cache Miss Ratio: 29% 36607679 [Undefined State for Buffer]
    REAL Hit Ratio: 103% 128741192 [MRU/MFU Hits Only]
    Data Demand Efficiency: 91%
    Data Prefetch Efficiency: 29%
    CACHE HITS BY CACHE LIST:
    Anon: --% Counter Rolled.
    Most Recently Used: 74% 65231813 (mru) [ Return Customer ]
    Most Frequently Used: 72% 63509379 (mfu) [ Frequent Customer ]
    Most Recently Used Ghost: 0% 0 (mru_ghost) [ Return Customer Evicted, Now Back ]
    Most Frequently Used Ghost: 0% 0 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
    CACHE HITS BY DATA TYPE:
    Demand Data: 15% 13467569
    Prefetch Data: 4% 3555720
    Demand Metadata: 80% 70648029
    Prefetch Metadata: 0% 304167
    CACHE MISSES BY DATA TYPE:
    Demand Data: 3% 1281154
    Prefetch Data: 23% 8429373
    Demand Metadata: 73% 26879797
    Prefetch Metadata: 0% 17355
    root@server12:~# echo "::arc" | mdb -k
    hits = 88823429
    misses = 37306983
    demand_data_hits = 13492752
    demand_data_misses = 1281335
    demand_metadata_hits = 71470790
    demand_metadata_misses = 27578897
    prefetch_data_hits = 3555720
    prefetch_data_misses = 8429373
    prefetch_metadata_hits = 304167
    prefetch_metadata_misses = 17378
    mru_hits = 66467881
    mru_ghost_hits = 0
    mfu_hits = 64253247
    mfu_ghost_hits = 0
    deleted = 41770876
    mutex_miss = 172782
    hash_elements = 18446744073676992500
    hash_elements_max = 18446744073709551615
    hash_collisions = 12375174
    hash_chains = 18446744073698514699
    hash_chain_max = 9
    p = 103252 MB
    c = 103252 MB
    c_min = 64 MB
    c_max = 129997 MB
    size = 102059 MB
    buf_size = 481 MB
    data_size = 100652 MB
    other_size = 924 MB
    l2_hits = 0
    l2_misses = 28860232
    l2_feeds = 0
    l2_rw_clash = 0
    l2_read_bytes = 0 MB
    l2_write_bytes = 0 MB
    l2_writes_sent = 0
    l2_writes_done = 0
    l2_writes_error = 0
    l2_writes_hdr_miss = 0
    l2_evict_lock_retry = 0
    l2_evict_reading = 0
    l2_abort_lowmem = 0
    l2_cksum_bad = 0
    l2_io_error = 0
    l2_hdr_size = 0 MB
    memory_throttle_count = 0
    meta_used = 1406 MB
    meta_max = 1406 MB
    meta_limit = 0 MB
    arc_no_grow = 1
    arc_tempreserve = 0 MB
    root@server12:~#
    The write-performance is really really slow:
    read/write within this pool:
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=Test2.tif
    1885030+1 records in
    1885030+1 records out
    965135496 bytes (965 MB) copied, 145,923 s, 6,6 MB/s
    read from this pool and write to the root-pool:
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=/tmp/Test2.tif
    1885030+1 records in
    1885030+1 records out
    965135496 bytes (965 MB) copied, 9,51183 s, 101 MB/s
    root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=FS2013_Fashionation_Beach_06.tif of=FS2013_Test.tif
    I just do not get this. Why is it that slow? Am I missing any tunable parameters? From the above figures the DDT should use 5354008*152 = 776 MB in RAM. That should fit easily.
    Sorry for the longish post, but I really need some help here, because the real data, with a much higher dedup ratio, still has to be copied to that pool.
    Compression is no real alternative, because most of the data will be compressed images and I don't expect to see great compression ratios.
    TIA and kind regards,
    Tom
    Edited by: vigtom on 16.04.2013 07:51

    Hi Cindy,
    thanks for answering :)
    Isn't the tunable parameter "arc_meta_limit" obsolete in Solaris 11?
    Before Solaris 11 you could tune arc_meta_limit by setting something reasonable in /etc/system with "set zfs:zfs_arc_meta_limit=...." which - at boot - is copied into arc_c_max overriding the default setting.
    On this Solaris 11.1 system c_max is already maxed out ("kstat -p zfs:0:arcstats:c_max -> zfs:0:arcstats:c_max 136312127488") without any tuning. This is also reflected by the parameter "meta_limit = 0". Am I missing something here?
    When looking at the output of "echo "::arc" | mdb -k" I see the values of "meta_used", "meta_max" and "meta_limit". I understand these as "memory used for metadata right now", "max memory used for metadata in the past" and "theoretical limit of memory used for metadata", with a value of "0" meaning "unlimited". Right?
    What exactly is "arc_no_grow = 1" saying here?
    Sorry for maybe asking some silly questions. This is all a bit frustrating ;)
    When I disable dedup on the pool, write performance increases almost instantly. I did not test it long enough to get real figures. I'll probably do that (eventually even with Solaris 10) tomorrow.
    Would Oracle be willing to help me out under a support plan when running Solaris 11.1 on a machine which is certified for Solars 10 only?
    Thanks again and kind regards,
    Tom
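    (Two standard checks that may help here: zdb can print the DDT histogram and per-entry sizes that the 776 MB estimate is based on, and dedup can be switched off for new writes without destroying the pool; blocks already written stay deduplicated until they are rewritten.)
    # zdb -DD datenhalde
    # zfs set dedup=off datenhalde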

  • Horrible performance with JDK 1.3.1!

    There is something very weird going on with the Solaris (SPARC) JDK 1.3.1 HotSpot Server JVM. We have an existing application which gets a large result set back from the database, iterates over it and populates two int[] arrays.
    The operation iterates through a 200,000-row result set, which has three integer columns.
    Here are the performance numbers for various jdk (hotspot 2.0) combinations:
    Using Solaris JDK1.2.2_05a with the jit: 3 seconds
    Using Solaris JDK1.3.0 -server: <3 seconds
    Using Windows JDK1.3.1 -server: <3 seconds
    Using Solaris JDK1.3.1 -client: 7 seconds
    Using Solaris JDK1.3.1 -server: 3 MINUTES!
    As you can see, the Solaris 1.3.1 -server has horrible performance, 60x worse than JDK 1.3.0 -server and 1.2.2_05a with the JIT. I thought it was a problem with 1.3.1 on Solaris, so I tried the 1.3.1 -client, and while the performance was much better than 3 minutes, it was still slower than the others (which I expected, since -client is meant for client-side apps). I have no idea why this is happening. Below are the details of the problem.
    Oracle 8.1.7 on solaris 2.7
    Solaris 2.6
    Oracle Thin JDBC driver for 8.1.7 (classes12.zip)
    Code:
    String strQuery = "select entity_id, entity_child, period_id" +
    " FROM entity_map" +
    " WHERE entity_id >=0" +
    " AND period_id in (" + sEmptyPeriodIds + ")" +
    " ORDER BY period_id, entity_id ";
    boolean bAddedChildren = false;
    int intEntityId;
    int intChildId;
    int intPeriodId;
    // objDatabase just wraps creation and execution of the result set
    rs = objDatabase.executeSQLAndReturnResultSet( strQuery );
    // Timing starts here
    while ( rs.next() ) {
        intEntityId = rs.getInt( 1 );
        intChildId = rs.getInt( 2 );
        intPeriodId = rs.getInt( 3 );
        // embo.addEntityMap( intPeriodId, intEntityId, intChildId );
        bAddedChildren = true;
    }
    // Timing ends here
    If anyone has had similar problems, I'd love to hear about it.
    Something is really, really wrong with how 1.3.1 -server is optimizing the Oracle JDBC code. The problem is that this is a black box, with no source available. Doesn't Oracle test new versions of Sun JVMs when they come out??
    Thanks,
    Darren

    Darren,
    Good luck trying to get any support for JDK 1.3.x with the Oracle drivers. Oracle doesn't support JDK 1.3.x yet. We've had other problems with the Oracle 8.1.7 drivers. Have you tried running the same benchmarks using the 8.1.6 or 8.1.6.2 drivers? I would be interested to find out whether the performance problems are driver-related or just the JVM.
    -Peter
    See http://technet.oracle.com:89/ubb/Forum8/HTML/003853.html

  • Mysql performance on solaris

    We are planning to run a MySQL database on a Sun SPARC box with Solaris 9, but compared to a small Linux box the performance seems to be bad. Our test is with only one user and no additional load on the machines. The Linux box is about 5 times faster (response time) than the Sun box. My questions are:
    - will it get better on Solaris 10?
    - can I tune MySQL or Solaris for better performance?

