NFS Performance

I have 2 questions about NFS on 10.4.
Client:
Has NFS performance improved on the client side? The last time I tested was on 10.3, and sustained throughput was about 10-12 MB/s over a gigabit connection, from a Sun NFS server.
Server:
Has the server performance improved? I am thinking about doing an Xsan NFS re-share to 50+ Linux machines in a compute farm. Will this work out well?
I'm interested to hear from anybody doing heavy NFS serving.
Thanks,
David

The client is somewhat lacking.
On one test here (XServe G5 client talking to XServe RAID 5 array connected to XServe G5 NFS server) I get around 40MB/sec copying a file to the RAID over a gigabit ethernet network.
By comparison, a Solaris machine talking to the same server gets almost 80MB/sec.
So it sounds like it's improved some from when you last tested, but maybe not by as much as you'd like.
Note that these tests were done with a single active client (possibly with some minor background traffic at the same time).
As for the server side, I don't know quite where that tops out. A quick test here shows little difference in times even when multiple clients are writing to the RAID at the same time. The server might be able to keep up with the RAID speed.
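For reference, one simple way to measure this kind of sustained write number is to time a large sequential write to the NFS mount with dd (the mount path below is a placeholder):
time dd if=/dev/zero of=/Volumes/nfsmount/testfile bs=1m count=4096
Dividing the 4096 MB written by the elapsed (real) time gives the approximate throughput in MB/s.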

Similar Messages

  • Solaris 10 NFS performance on Linux running ws3 update 3

    Hope someone can help me sort out this problem.
    Dear Support.
    We have a Solaris/SPARC file server running Solaris 10 that acts as an NFS file server. We encounter very poor NFS performance when copying files to and from a filesystem over NFS from Linux clients.
    I have set up a very simple test scenario: a tar file of around 3 GB, sitting on a SAN system and created on a Solaris 10 UFS filesystem.
    Sun E240 running Solaris 8, gigabit interface, copying to and from the same disk via NFS:
    time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
    real 2m18.91s
    user 0m0.11s
    sys 0m29.72s
    IBM AMD64 Linux WS3 U5, gigabit interface, copying to and from the same disk via NFS:
    time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
    real 6m24.670s
    user 0m0.130s
    sys 0m21.860s
    I also ran the test on other Linux boxes with similar results.
    The funny part of this is that I can reproduce the performance problem on other Sun systems, among them a Sun Blade 2000 with 8 GB of RAM.
    Let me wrap up:
    NFS performance is always bad between the Solaris NFS server and a Linux client.
    Performance is not always bad between the Solaris server and Solaris clients.

    It's been a while since I was doing linux->solaris nfs, so bear with me as I clear out the cobwebs.
    First things to check: Mount options for the nfs mount to the server.
    Which version of NFS are you using (v2, v3)? Solaris uses version 3 mounts by default.
    What are your wsize and rsize for writes and reads?
    I believe Linux is limited to 8k read/write block sizes. Solaris will let you use rsize/wsize up to 32k with NFSv3, which really helps with larger data transfers.
    NFSv3 has a number of performance enhancements over v2, so give that a shot with a larger block size, e.g.:
    nfsvers=3,wsize=8192,rsize=8192,nolock,intr
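    On the Linux client those options would go on the mount command line or in fstab, along these lines (the server name below is a placeholder):
    mount -t nfs -o nfsvers=3,rsize=8192,wsize=8192,nolock,intr sunserver:/seis/seis600_new /seis/seis600_new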
    Also experiment with your locking options, that might help some.
    NFS has always been a fairly weak point for Linux.
    A few links for reference:
    http://www.scd.ucar.edu/hps/TECH/LINUX/linux.html
    http://nfs.sourceforge.net/
    Cheers && good luck,
    fptt.

  • NFS performance with Solaris 10

    Hello,
    We have been playing with one of the x4200s running s10u2, or snv_50 for that matter, and are getting terrible numbers from the NFS performance. Initially, we suspected it was just the ZFS filesystem on the back (which it was, though zil_disable made it a lot better), but even after exploring a little I am getting terrible numbers for NFS backed by UFS. Using afio to unafio a file on the disk gives:
    Local:
    afio: 432m+131k+843 bytes read in 263 seconds. The operation was successful.
    Remote:
    afio: 432m+131k+843 bytes read in 1670 seconds. The operation was successful.
    I have raised the ncsize to 1000000, and upped the server threads to 1024.
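    For reference, a sketch of where those tunables typically live on Solaris 10 (values as above; check the exact syntax for your release):
    /etc/system (requires a reboot):
    set ncsize=1000000
    /etc/default/nfs:
    NFSD_SERVERS=1024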
    The same thing on a linux box(ext3) turns in local times of 100 seconds and remote at 180 seconds. The differences in the local and remote numbers are just crazy. The difference in the ZFS is way worse:
    Local zfs:
    afio: 432m+131k+843 bytes read in 137 seconds. The operation was successful.
    NFS -> ZFS:
    afio: 432m+131k+843 bytes read in 2428 seconds. The operation was successful.
    I have started looking into dtrace for tracking the problem, but don't have much to report yet.
    Any suggestions appreciated.

    Ask this on the Solaris Forum, not the Java Networking forum.

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
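    Since bs=1048576k count=4 writes 4 GiB, those times work out to roughly:
    iSCSI ~31 MB/s
    AFP ~29 MB/s
    SMB ~26 MB/s
    NFSv3 (w/o tuning) ~9 MB/s
    NFSv3 (w/tuning) ~14 MB/s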
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP as well as used an rsize and wsize that doubled what I had above. It didn't help.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be down to server settings that could not be changed because it is a closed appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS has an advantage.

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).
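    In throughput terms (4 GiB written per run) that is roughly 27 MB/s for RAIDZ2, 33 MB/s for RAIDZ, 35 MB/s for the two mirrors, and 36 MB/s with compression enabled, versus roughly 31 MB/s against the Thecus iSCSI target.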

  • Help needed w/ NFS performance

    Hi.
    I am using my MacBook running Snow Leopard as an NFS client. The NFS server resides on my home Linux box (running CentOS 4, if it matters). I am able to auto-mount an NFS share using Disk Utility, and I can see and read its contents. However, the performance (i.e. throughput) is utterly miserable.
    Watching a WMV file on the NFS share via VLC Player, for instance, is just impossible. The video gets choppy every few seconds, and the audio gets cut off equally often.
    Interestingly, connecting to the same share using Samba does not exhibit the same problem. The performance is quite acceptable here.
    I am using the following advanced mount options: ro, nolock, locallock, -P
    What am I doing wrong here? Any help would be appreciated.
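    For reference, a sketch of the kind of client-side tuning that can be tried in Snow Leopard's /etc/nfs.conf (values are illustrative only and worth checking against the nfs.conf man page):
    nfs.client.allow_async = 1
    nfs.client.mount.options = ro,vers=3,tcp,rsize=32768,wsize=32768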


  • Sharing DVD over NFS - performance question

    Hi,
    I'm trying to share the DVD over NFS.
    I have updated /etc/rmmount.conf with the following line:
    share cdrom* -o ro,anon=0
    Everything works well, except the reading speed, which is quite slow.
    Copying a ~200MB file from DVD over NFS takes ~12 minutes.
    Copying the same file from DVD to the local filesystem takes ~2 minutes,
    and copying this file from the hard drive over NFS takes ~20 seconds.
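    In rough throughput terms that is about 0.3 MB/s for DVD over NFS, about 1.7 MB/s for DVD to local disk, and about 10 MB/s for hard drive over NFS.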
    This makes me believe that the issue is in the interaction between NFS and the DVD-ROM; for (hypothetical) example,
    the disc cannot spin up to its maximum speed because of the random access pattern.
    Has anyone tried sharing the DVDs with good performance? Should I read up and tweak some NFS parameters or abandon the idea of achieving good performance?
    I'm using Solaris 10 Update 7.
    Thanks in advance for any suggestions.

    Thanks for the suggestion.
    I'm not sure it's applicable to my issue. From what I've read, it may help if the same data is read over and over,
    but not with the read speed of freshly inserted DVDs that are not yet cached (which is the usage pattern I'm trying to optimize for).

  • NFS performance over ASM?

    On NetApp storage, which is better in terms of performance for a DW: 11gR2 RAC on Linux using ASM on NFS, or 11gR2 RAC on Linux using NFS without ASM? The DB is going to be around 20 TB. Appreciate your reply.

    Hi;
    Please check the links below, which could be helpful for your issue:
    Using NFS with ASM…
    http://www.oracle-base.com/blog/2010/05/04/using-nfs-with-asm/
    Oracle NetApp white paper..
    http://www.oracle.com/technology/products/database/asm/pdf/netapp_asm3329.pdf
    http://kevinclosson.wordpress.com/kevin-closson-index/cfs-nfs-asm-topics/
    Regards,
    Helios

  • Zfs/nfs poor performance compared to zfs/smb

    I set up ZFS with sharenfs on my Solaris box and I'm getting very poor performance with NFS (about 1-3 MB/s).
    When I share the same ZFS volume with sharesmb, performance is very good (about 40-50 MB/s) for a single-disk ZFS volume on a gigabit network.
    Both tests were made from a Mac OS X 10.5.2 client, transferring a single ISO file (about 4 GB).
    I also mounted my Mac client over NFS from the Solaris box (using the Apple NFS share) and copied the file to the ZFS volume; NFS performance was very good in this case (45-50 MB/s).
    Why is my Solaris ZFS/NFS sharing so slow compared to Mac NFS sharing or Solaris ZFS/SMB sharing?

    I'm guessing you're talking about write performance, i.e. you're copying the ISO from a local drive to the ZFS/NFS share.
    The write performance of NFS/ZFS has been known to suck for a long time.
    First, make sure you're running the latest Solaris version (10U4) or a recent recommended patch set.
    If not, upgrade and see if that helps.
    Unfortunately, the Solaris developers have a (some say overly) strict interpretation of the coherence requirements of the NFS spec and what it implies about caching data in memory and flushing to disk.
    This is discussed a little here and here.
    Their position (as I understand it) is that their implementation provides the best performance that can be obtained while adhering to the relevant standards, and that the fact that every other NFS implementation performs far better just means that everyone else is non-conforming.
    Anyway, maybe you can try the solution that the "best practices" guide suggests, i.e. using a local drive as an intent log.
    If that doesn't work, try setting zil_disable (add 'set zfs:zil_disable=1' to /etc/system).
    You will have to weigh the significance of the purported reliability issues of that solution for yourself.
    Basically, what it comes down to is that if your NFS server crashes and the client doesn't, but keeps cheerfully writing away, then some of what it wrote may be lost, possibly silently and possibly a part in the middle.
    One other thing to check, the storage isn't coming off a SAN is it?
    If it is, you can try "set zfs:zfs_nocacheflush=1" instead of zil_disable. It's a less aggressive variant that can reduce issues with SAN storage.
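    For reference, a sketch of what those /etc/system entries look like (comments in /etc/system start with '*', a reboot is needed for the change to take effect, and the reliability caveats above apply):
    * disable the ZFS intent log entirely (aggressive)
    set zfs:zil_disable=1
    * or, for SAN-backed storage, only suppress cache-flush requests
    set zfs:zfs_nocacheflush=1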

  • NFS cache

    Hi all !
    I have a question concerning the NFS cache.
    I'm using the NFS service to test the network over both gigabit and Fast Ethernet: the NFS client copies two different small files (file1, file2) to the NFS server many times (over 10,000), as file1_0, file1_1, ... file2_0, file2_1, etc.
    Although the two files are renamed on the NFS server, I'm wondering whether they end up being served from the NFS server cache. As I want to test network performance, I want to be sure the files are copied over the wire each time and not taken from the NFS cache.
    The NFS client runs Solaris 7 and the server Solaris 9.
    Thanks in advance for any help.
    Rgds,
    Sabrina

    Hi,
    We have also tested NFS performance in a similar fashion.
    Since you are writing to different files, each of these will be counted separately as NFS writes.
    All those newly written files can then be read back, which will be counted separately as NFS reads.
    cp <local file> <remote_file_n> (n = 0,1,2,....)
    cat <remote_file_n> > /dev/null
    Each cp command will add to the NFS Writes and each cat
    will add to the NFS Reads.
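    As a minimal sketch of the test loop described above (paths are placeholders; plain Bourne shell so it runs on older Solaris):
    n=0
    while [ $n -lt 10000 ]; do
        cp /var/tmp/file1 /mnt/nfs/file1_$n    # counted as NFS writes
        cp /var/tmp/file2 /mnt/nfs/file2_$n
        cat /mnt/nfs/file1_$n > /dev/null      # counted as NFS reads
        n=`expr $n + 1`
    done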
    Thanks.
    Regards,
    Rams.

  • Configuring NAS as local storage

    This is not a specific question related directly to Oracle, but I'm hoping somebody can answer or point me in the proper direction. I'm working on setting up a Disaster Recovery system for my Oracle environment. Right now my DR system is such:
    HP Proliant DL 385 (G5): 64-bit running Oracle Enterprise Linux 5 Update 2 and 10.2.0.4.0
    IoMega StorPro Center NAS: Mounted as NFS, holds all database related files (control, redo and .dbf files)
    I have everything working, but the NAS is hooked up to the network, and thus my environment requires network connectivity, which I obviously can't count on during a disaster. Is there any way to configure the NAS as local storage so that when I do not have network connectivity I can still access the files on it?
    The vendor (IoMega) was of very little help. They tell me that I can plug the NAS directly into one of the NIC cards and "discover" the NAS that way. The problem is that the discovery agent does not run on Linux and they could not tell me how to get around this.
    Anybody have some experience hooking up a NAS unit as local storage instead of NFS? I'm trying to put on my SA/Network/Storage hats as best as possible, but I have very little experience trying things like this.

    I'm thinking out loud, so bear with me.
    An NFS mount point provides an important feature in a clustered environment: file system access serialization. Frequently the underlying NAS file system has been formatted with EXT3 or some other non-cluster-aware file system; NFS performs the important locking and serialization that keeps it from being corrupted in a cluster. Please keep this in mind when designing a disaster recovery solution.
    What do you mean by "hooked to the network"? Do you mean you are using the public Internet or a corporate network?
    Are they suggesting that you establish a private, direct connection to the NAS?
    Find out how the NAS gets its network address. If it's using DHCP you will need to set up a local DHCP server and have it listen only on the NIC(s) where the NAS is plugged in. Be sure the client NICs have addresses on the same network as the NAS unit.
    Bring up networking on the NAS NIC devices.
    The bit about "discovering" the NAS file systems has me puzzled.
    Once you figure that out, mount the NAS file systems somewhere on your system, but NOT IN THEIR PRODUCTION locations.
    Now, set up your local machine as an NFS server. Publish the mount points as NFS exports, and then have your applications use these NFS mountpoints.
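    A very rough sketch of that last idea on a Linux host (addresses and paths are placeholders, and note that re-exporting an NFS-mounted filesystem is not supported by every NFS server implementation, so treat this as illustrative only):
    mount -t nfs 192.168.10.5:/share /mnt/nas_dr          # mount the NAS outside the production paths
    echo '/mnt/nas_dr 10.0.0.0/24(rw,sync,fsid=1)' >> /etc/exports
    exportfs -ra                                          # publish the re-export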

  • DNFS with ASM over dNFS with file system - advantages and disadvantages.

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be across these nodes.
    For storage we have 2 options - dNFS with ASM and dNFS without ASM.
    The advantages of ASM are well known --
    1. Easier administration for the DBA, as through this 'layer' we know the storage very well.
    2. Automatic rebalancing and dynamic reconfiguration.
    3. Striping and mirroring (though we are not using this option in our environment; external redundancy is provided at the storage level).
    4. Less (or no) dependency on the storage admin for DB file related tasks.
    5. Oracle also recommends using ASM rather than file system storage.
    Advantages of dNFS (Direct NFS) ---
    1. Oracle bypasses the OS layer and connects directly to the storage.
    2. Better performance, as user data does not need to pass through the OS kernel.
    3. It load-balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now if we combine these two options, how will that configuration fare in terms of administration, manageability, performance, and downtime in case of a future migration?
    I have collected some points.
    In favor of 'NOT' HAVING ASM--
    1. ASM is an extra layer on top of the storage, so if using dNFS this layer should be removed as there are no performance benefits.
    2. Store the data in a file system rather than ASM.
    3. Striping will be provided at the storage level (not entirely sure about this).
    4. External redundancy is being used at the storage level, so it is better to remove ASM.
    Points for 'HAVING' ASM with dNFS --
    1. If we remove ASM then the DBA has little or no control over the storage; he can't even see how much free space is left at the physical level.
    2. The striping option is there to gain performance benefits.
    3. Multiplexing has benefits over mirroring when it comes to recovery.
    (e.g. suppose one database is created with only one control file because external mirroring is in place at the storage level, another database is created with two copies multiplexed at the Oracle level, and an rm command is issued to remove that file; there will definitely be a difference in how quickly the file can be restored.)
    4. We are now familiar and comfortable with ASM.
    I have checked MOS also but could not come to any conclusion, Oracle says --
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment. " ------How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM, but if this turns out to be a wrong design in the future, I want to make sure it is corrected at the outset.
    Regards,
    Hemant

    I agree, having ASM on NFS is going to give little benefit whilst adding complexity. The NAS will carry out mirroring and striping in hardware, whereas ASM does it in software.
    I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this MetaLink note:
    Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
    Most of the fixes have been rolled into 11gR2 and I'm not sure what the state of play is on 12c.
    Hope this helps
    ZedDBA
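    For reference, a minimal sketch of how Direct NFS is typically enabled and pointed at an NFS filer on 11gR2 (the filer name, paths and export below are placeholders; check the documentation for your exact release):
    cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on    # relink with the Direct NFS ODM library
    Then describe the filer in $ORACLE_HOME/dbs/oranfstab:
    server: mynas
    path: 192.168.1.10
    export: /vol/oradata  mount: /u02/oradata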

  • Sun 7110 + Esx 4.1

    Hi ,
    We're setting up a VMware HA cluster to serve some virtual machines from one Sun 7110 (4.2 TB), but iSCSI and NFS performance is really, really slow. We can't go into production with such a slow system. We have tried both NFS and iSCSI, getting throughput inside the Linux VMs ranging from 14 MB/s up to 100 MB/s, while a simple $6k SAS external storage unit reaches 220 MB/s of write throughput inside a VM.
    Why does Oracle sell Sun 7110s for virtualization, as written in www.techdirt.com/iti/resources/midsize_vmware_intel.pdf (written by Oracle)?

    Hi Scott,
    Welcome to the VMware Community. I saw the error message posted in the previous thread; I suspect it is an issue with the build version. Can you share the build version of your ESX host?
    Try re-scanning the storage adapter:
    VMware KB: Performing a rescan of the storage on an ESX/ESXi host
    Based on the error message i found few KB below
    VMware KB: Booting the ESX host fails with this message in the console: Restoring S/W iSCSI volumes
    VMware KB: VMware ESX 4.1 Patch ESX410-201104406-BG: Updates mptsas, mptspi device drivers

  • FTP performance over NFS 3

    Hi Guys,
    I have an NFS mount from one Solaris host to another. When I FTP to the server with the NFS mount, I get disconnected during a "cd" into one particular directory. There are 500,000+ files in that directory, totalling 50 GB.
    My NFS mount statistics are below. I would like to know whether there is any way to cache the file count information on the NFS client side, or any other way to improve performance and resolve the FTP issue.
    ---cut---
    Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=1048576,wsize=32768,retrans=5,timeo=600
    Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60
    ---cut---
    Thanks
    Rahul


  • Slow ZFS-share performance (both CIFS and NFS)

    Hello,
    After upgrading my OpenSolaris file server (newest version) to Solaris 11 Express, the read (and write) performance on my CIFS and NFS shares dropped from 40-60 MB/s to a few kB/s. I upgraded the ZFS filesystems to the most recent version as well.
    dmesg and /var/log/syslog don't list anything abnormal as far as I can see. I'm not running any scrubs on the zpools, and they are listed as online. top doesn't reveal any process using more than 0.07% CPU.
    The problem is probably not at the client side, as the clients are 100% untouched when it comes to configuration.
    Where should I start looking for errors (logs etc.)? Any recommended diagnostic tools?
    Best regards,
    KL

    Hi!
    Check the link speed:
    dladm show-dev
    Check for collisions and bad network packets:
    netstat -ia
    netstat -ia 2 10 (while a file is being transferred)
    Check for lost packets:
    ping -s <IP client> (wait at least a minute)
    Check for retransmits and response latency:
    snoop -P -td <IP client> (while a file is being transferred)
    Try replacing the network cable.
    Regards.

  • Sparc DS5.2p4 NFS netgroup performance problem

    We recently setup our NFS server as an LDAP client. We use netgroups to provide a list of clients for each shared FS. Since moving to LDAP (from NIS+) the performance has been abysmal. I've created all the indices, VLV and regular, per the Sun instructions.
    I've always known that netgroups in LDAP were poorly handled from a client point of view. I even made my own access mechanism for users because netgroups for user access were slow. Today, I did some searching on Sunsolve and found Bug ID 4734259. Here's an excerpt:
    The comment about these lookups being done in clusters may have
    been true back in the old days.  But now the in-kernel NFS code
    asks mountd questions like this all the time rather than only
    at mount time.
    Bug4176752 is (partly) about the fact that nscd does not cache netgroups.
    Now with LDAP in the nsswitch.conf, caching these things becomes
    more important.  Here we find mountd has a cache, but it keeps it
    for a very short period.  That period was long enough initially,
    but now that the kernel NFS code checks this info at access time
    instead of mount time, the cache timeout should be longer, if not configurable.
    [email protected] 2003-03-14
    Sun has known about this for TWO YEARS and has not addressed the problem!!! At the same time, they're pushing LDAP as the be-all naming service. To put this in perspective, our NIS+ server was running on a V120. The LDAP server is running on a 3800 (4x750MHz) and it gets routinely pegged, with the slapd process taking 70% of the CPU.
    Also, one of our NFS servers is under cluster control and it doesn't even seem to understand the LDAP-based netgroups. We had to modify nsswitch.conf to check NIS+.
    Has anyone else encountered performance issues with netgroups in LDAP and NFS?
    In the near future, I'll be rebuilding the VLV indices. I'm hoping that will correct our problems.
    Thanks,
    Roger S.

    Thanks.
    I think that may be one of the issues. But looking at the ldd output, many more libraries are being pulled in for a simple command on Solaris 10 (production env) than on Solaris 9 (test env).
    Production Server:
    Prompt> ldd /usr/bin/ls
    libsec.so.1 => /lib/libsec.so.1
    libc.so.1 => /lib/libc.so.1
    libavl.so.1 => /lib/libavl.so.1
    libm.so.2 => /lib/libm.so.2
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr.so.1
    Test Server:
    prompt> ldd /usr/bin/ls
    libc.so.1 => /usr/lib/libc.so.1
    libdl.so.1 => /usr/lib/libdl.so.1
    /usr/platform/SUNW,Sun-Fire-V440/lib/libc_psr.so.1
    On Solaris 10, I can see that two more libraries are pulled in just to run the ls command itself.
    I ran truss on the program (in my original post) and observed that the time is spent after the fork system call, before it returns; on the test environment the same step does not take that long.
    Does this mean Solaris 10 (production env) is doing something extra compared to the test environment while forking the child process?
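    As a sketch, that per-process timing can be gathered with truss timestamps (the program name is a placeholder):
    truss -f -d -o /tmp/truss.out ./myprogram
    Here -f follows forked children and -d stamps each line with a timestamp, so the gap between fork and its return shows where the time is going.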
    Regards,
    Aminul Haque
