Large Swapping in Solaris 10

We have a SUN SPARC T4 machine (256 GB of RAM, 2 sockets, 16 cores, 120 vCPUs) in a cluster, hosting the Infor LN ERP.
There is a huge amount of swapping on this server. The processes use about 800 MB, but the swap reported is three times that; almost all the processes are swapping, and all belong to the same application.
Any clue?
prstat -t shows
NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    10 bsp      2892M  918M   0.4%   1:36:23 1.0%
   102 root     2648M  459M   0.2%   4:58:59 0.0%
     3 mpatanen 2134M  163M   0.1%   0:00:03 0.0%
     1 dtownsen 2189M  218M   0.1%   0:00:14 0.0%
     2 rrrrrr    2147M  176M   0.1%   0:00:16 0.0%
     3 svacinfo 2315M  345M   0.1%   0:00:30 0.0%
     1 lp       1360K 4520K   0.0%   0:00:00 0.0%
     1 z0weeeee 2085M  113M   0.0%   0:00:01 0.0%
     1 nobody   1200K 3544K   0.0%   0:00:00 0.0%
     1 uracherl 2117M  146M   0.1%   0:00:02 0.0%
     1 noaccess  157M  144M   0.1%   0:16:06 0.0%
     2 ion      2081M  110M   0.0%   0:00:02 0.0%
     1 smmsp    3088K   11M   0.0%   0:00:10 0.0%
     6 daemon   8368K   11M   0.0%   0:01:05 0.0%
Edited by: Maran Viswarayar on Sep 18, 2012 9:42 PM

Maran Viswarayar wrote:
[quoted message repeated above]

Post the formatted results of the OS command below:
vmstat 6 10
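One thing worth checking before vmstat: in prstat, the SWAP column is *reserved* virtual swap (heap and anon reservations), not pages actually swapped out, which is why it can dwarf RSS. A small sketch (the file name passed in is a placeholder for a saved capture) that totals the reservations:

```shell
# Sum the SWAP column of a saved "prstat -t" capture, normalising K/M
# suffixes to megabytes. SWAP here is reserved virtual swap, so a large
# total does not by itself mean the box is paging.
total_swap_mb() {
  awk 'NR > 1 {
    v = $3                    # SWAP column, e.g. "2892M" or "1360K"
    n = v + 0                 # numeric prefix of the value
    if (v ~ /K$/) n /= 1024   # kilobytes -> megabytes
    sum += n
  } END { printf "%d", sum }' "$1"
}
```

Running this over the table above gives roughly 20 GB of reservations; actual page-out pressure shows up in the sr (scan rate) column of `vmstat 6 10` or in `vmstat -p`.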

Similar Messages

  • Large Disks On Solaris Intel 8EA

    I am trying to set up a large disk (a 27 or 40 GB Maxtor) using
    Solaris 8EA on a Pentium Pro machine. The disk partitions fine,
    and I am able to set up multiple filesystems of 8 or 9 GB
    easily. But when I try to access these filesystems
    online, I get several disk errors and timeouts.
    I have tried several configurations for the disk. Has anyone
    faced the same problem? Is this fixed in the final edition
    of Solaris 8?
    Thanks,
    Harshan.

    Actually the hardware is fine. I tried testing the hardware with
    different operating systems. I think Solaris 8 has a bug in dealing with large SCSI and IDE drives.

  • Using large pages on Solaris 10

    Hello,
    I've had some problems using large pages (16k, 512k, 4M) on two PrimePower 650 systems. I've installed the most recent kernel, 127111-05.
    The pagesize -a command reports 4 page sizes (8k, 16k, 512k, 4M). Even if I try the old manual method, using LD_PRELOAD=mpss.so.1 and an mpss.conf file to force large pages, pmap -sx <pid> shows only 8k for stack, heap, and anon segments; only shared memory uses 4M DISM segments. I don't receive any error message. Two other PrimePower systems with the same kernel release work as expected.
    What can I do for further troubleshooting? I've tried different kernel settings, all without effect.
    Best regards
    JCJ

    This problem is now (partially) solved by the Fujitsu-Siemens edition of kernel patch 127111-08. The behaviour is now like Solaris 9: large pages must be forced with LD_PRELOAD=mpss.so.1 and still don't work out of the box for this kind of CPU (Sparc GP64 V only). All available page sizes (8k, 64k, 512k and 4M) can now be used by configuring /etc/mpss.conf. Unfortunately, out-of-the-box large pages do not work with this kind of CPU and the current kernel patch. This is a pity because on large systems with a lot of memory and many large processes there may still be a lot of TLB misses. So I will keep waiting and testing as new kernel patches become available.
    JCJ
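    For reference, the manual method mentioned above preloads mpss.so.1 and points the MPSSCFGFILE environment variable at a configuration file; the entry format below is a sketch from memory of the mpss.so.1(1) man page (exec-name:heap-size:stack-size), so verify it against your system's man page before relying on it:

    ```
    # Hypothetical /etc/mpss.conf entries: request a 4M heap page size and
    # a 64K stack page size for the named executables.
    oracle:4M:64K
    myapp:4M:64K
    ```

    The library is then activated per process, e.g. `LD_PRELOAD=mpss.so.1 MPSSCFGFILE=/etc/mpss.conf ./myapp` (or with MPSSHEAP/MPSSSTACK directly), and the result is checked with `pmap -sx <pid>`, whose Pgsz column should show 4M for the heap.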

  • How to Access Space on a large Hard Drive (Solaris 10 Running)

    Good morning everyone,
    I have looked back a year on this forum and on hardware forums and I don't see this topic covered.
    Recently I installed Solaris 10 on a Sun Blade 150. Not being very smart, I accepted the default settings. The computer only sees about 1/3 of the hard drive. Someone mentioned that Zones would let me see and use the rest of the hard drive, and that I could reinstall Solaris 10. I would rather not do that, because I have done it three times already to solve other issues, and the last effort required a lot of work to be able to see the Internet again.
    So what could I do to gain access to the rest of my hard drive?
    Thank you,
    Fred Strickland

    Good afternoon Sparcy and others,
    I am not home in front of my Sun Blade 150, but I have been doing some reading. It is not clear to me if the provided information works for a client or home networked computer. For example, I am reading the System Administration Guide: Devices and File Systems, Chapter 10, page 205 and following. I am unclear if the partitioning command is able to see the entire hard drive or not. And I am unclear if using the command would destroy what I have already placed on the hard drive.
    I tried to read and understand the following links, but I have come away more confused than before I started. I am not even sure if Zones will be the final answer.
    Help!
    Thank you,
    Fred
    http://forum.java.sun.com/thread.jspa?threadID=5097926&messageID=9336522
    http://forum.java.sun.com/thread.jspa?threadID=5112629&messageID=9382927
    http://www.utahsysadmin.com/2008/02/07/adding-a-hard-drive-to-solaris-10/
    http://unixadmintalk.com/f4/partition-order-solaris-10-a-23738/
    http://www.partnersforever.net/computer_help/viewtopic9193.html
    http://www.codefund.net/512/solaris-10-1106-install-how-to-slicepartition-mounted-disks-5120037.shtm
    http://www.hccfl.edu/pollock/AUnix1/SolarisPartitioning.htm
    http://books.google.com/books?id=mllDyJZkGMQC&pg=PA50&lpg=PA50&dq=solaris+10+partition&source=web&ots=bj7e5UiDZk&sig=xdhAK4o7mFzL_fMaucvAJtmvoPs#PPR22,M1
    http://networking.ittoolbox.com/groups/technical-functional/sun-server-l/how-to-alter-the-root-partition-size-1678414
    http://www.blastwave.org/docs/step-063.html
    http://vegdave.wordpress.com/category/partition/

  • Any way to increase the default Heap size for all Java VMs in Solaris 8

    Hello,
    I have a Java product that deals with large databases under Solaris 8. It is a jar file, started by a cron job every night. Some nights it will fail because it runs out of heap memory, depending on the number of records it has to deal with. I know that I could increase the Java VM heap size with a "java -jar -mx YY JARFILE" command, but I have other Java products showing the same behavior, and I would like to correct them all in one shot if possible.
    What I would like to do is find a system or configuration parameter that forces all Java VMs to use a larger MAX Heap size than the default 16M specified in the Man page for Java. Is there a way to accomplish that?
    TIA
    Maizo

    You could always download the source and modify it.
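    Another option, not mentioned in the thread (and only available on JVMs 5.0 and later, so it may not apply to older Solaris 8 installs): every launching JVM reads the JAVA_TOOL_OPTIONS environment variable, so exporting a larger -Xmx there raises the default maximum heap for all java invocations in that environment.

    ```shell
    # Exporting JAVA_TOOL_OPTIONS raises the default max heap for every java
    # process subsequently started from this environment; the JVM prints a
    # "Picked up JAVA_TOOL_OPTIONS" notice on stderr when it honours it.
    export JAVA_TOOL_OPTIONS="-Xmx512m"
    ```

    Setting it at the top of the cron wrapper script would let each nightly `java -jar ...` invocation inherit it without touching any jar.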

  • Need docs that explain Fibre Channel setup, getting I/O error on 2540 SAN

    Sun T5220 Host running Solaris 10 5/09 as management host.
    Qlogic 5602 FC Switch
    Sun Storagetek 2540 - one controller tray with 9 300G SAS Hitachi drives. Firmware 7.35.x.
    Sun branded Qlogic QLE2462 HBAs - PCI express, dual port. 3 in the T5220. qlcxxxx drivers for the HBAs.
    Sun Common Array Manager software version 6.5.
    I am a long-time Oracle DBA who has the task of setting up a Fibre Channel SAN. I am not a Solaris sysadmin, but have installed and maintained large databases on Solaris boxes where I had access to a competent sysadmin. I am at a classified site and cannot bring out electronic files with logs, configuration info, etc. to upload. Connecting the T5220 is the 1st box of many. This is my first exposure to HBA's, Fibre Channel, and SAN, so everything I know about it I have read in a manual or from a post somewhere. I understand the big picture and I have the SAN configured with 2 storage pools each with 1 volume in them on RAID5 virtual disks. I can see the LUN 0 on the T5220 server when I do a luxadm probe and when I do a format. I formatted one of the volumes successfully. Now I attempt to issue:
    newfs /dev/rdsk/device_name_from_output_of_luxadm_probe
    I get an immediate I/O error. I could be doing something totally naive or have a larger problem - this is where I get lost and the documentation becomes less detailed.
    What would be great is if anyone knows of a detailed writeup that would match what I'm doing or a good off-the-shelf textbook that covers all of this or anything close. I continue to search for something to bridge my lack of knowledge in this area. I am unclear about the initiators and targets beyond the fundamental definitions. I have used the CAM 6.5 software to define the initiators that it discovered. I have mapped the Sun host into a host group also. I do not know what role the Qlogic 5602 Fibre Channel switch plays with respect to initiators and targets or if it has any role at all. Is it just a "pass through" and the ports on the 5602 do not have to be included? Maybe I don't have the SAN volume available in read/write. I find bits and pieces in blogs and forums, but nothing that puts it all together. I also find that many of the notes on the web are not accurate.
    This all may appear simplistic to someone who works with it a lot and if you know of an obvious reference I should be using, a link or reply would be greatly appreciated as I continue to Google for information.

    Thanks for the reply. I had previously read the CAM 6.5 manual and have all the SAN configuration and mappings. Yesterday I was back at the site and was able to place a UFS filesystem on the exposed SAN LUN which was 0. I've not seen any reference to LUN 0 being a placeholder for the 2540 setup and when I assigned it, I allowed the CAM 6.5 software to choose "Next Available" LUN and it chose 0. LUN 31 on the 2540 is the "Access" LUN that is assigned automatically - perhaps it is taking the place of what you describe as the LUN 0 placeholder.
    I was able to put a new UFS filesystem on LUN 0 (newfs), mount it, and copy data to it. The disk naming convention that Solaris shows for the SAN disks is pretty wild and I usually have to reference a Solaris book on the standard scsi disk name formats. My question/confusion at the moment is that I have 3 Sun branded Qlogic HBA's in the Sun T5220 server - QLE2462 (dual port) with one port on two of the HBAs cabled to the Qlogic 5602 FC switch which is cabled to the A and B controller of the SAN 2540 - there are only 2 cables coming out of the 5220; the 3rd HBA (for future use) has no cables to it. Both ports show up as active and connected on the server down to the SAN and the CAM 6.5 software automatically identified both initiators (ports) on the Sun 5220 when I mapped them. I had previously mapped them to the Sun host, mapped the host to a host_group, virtual disks to volumes, volumes to....etc.; and was able to put data on the exposed volume named dev_vol1 which is a RAID5 virtual disk on the SAN.
    When I use the format command on Solaris, it shows two disks, and I assumed this represented the two ports from the same host 5220. I was able to put a label on one of these disks (dev_vol1), format it, and put data on it as noted above. When I select the other disk in the format menu, it is not formatted, won't allow me to put a label on it (I/O error), and I can go no further from there. The CAM 6.5 docs stop after they get you through the mapping and getting a LUN exposed. I continue on in a Solaris-centric mindset and try the normal label, format, newfs, mount routine; it works for the one "disk" that format finds but not for the other. The format info on both disks shows them as 1.09 TB, and that is the only volume mapped right now from the SAN, so I know it is the same SAN volume. It does not make sense that I would label and format it again anyway, but is this what I am supposed to see: two disks (because of 2 ports?) and the ability to access the volume through one? I found out by trial and error that I could label, format, and access the one. I did not do it from knowledge or from looking at the information presented....I just guessed through it.
    I have not "bound" the 2 HBAs in any way, and that is next on my list because I want to do multipathing and failover. I am just starting to read about that, so I may be using the wrong language. But before I go on to that, I am wondering whether I am leaving something undone in the configuration that is going to hamper my success with multipathing, since I cannot do anything with the 2nd "disk" that has been exposed to Solaris from the SAN. I thought that after I labeled, formatted, and put a filesystem on the one "disk" I can write to, the other "disk" that shows up would just be another path to the same data via a 2nd initiator. Even writing that does not sound right, but I am trying to convey what I logically expected to see. Maybe the question should be: why am I seeing that 2nd "disk" in a Solaris format listing at all? I have not rebooted at any time during this process; I can easily do that and will today.

  • Patching 8, 9 & 10 without internet connection

    Hi
    I'm looking to automate and manage Solaris patching for a large number of Solaris systems. The OS on these systems ranges across 8, 9 and 10.
    The systems sit within a secure domain and have absolutely no connection to the internet ... there is a physical air gap between the network and the internet.
    What I would like to do is build a central patch manager server that I can populate manually with patches (via DVD, for example). Clients would then interrogate this system for patches, install those appropriate, and reboot (if required) at a predefined time.
    Is this possible with the latest supported tools?
    If it is, can I just populate my patch server with patch clusters off SunSolve, or do they have to be delivered in some other form?
    This could be the difference between my high level management staying with SUN or moving to HP entirely.
    Many thanks
    Andy

    Hi Andy,
    A draft solution is as follows:
    Have one server working as an NFS server. This will be your patch server.
    All other servers will be NFS client.
    On NFS server:
    - share via NFS a directory, let's say /export/patch.
    - periodically (you decide the frequency), load Solaris 8, 9 & 10 recommended patch clusters on /export/patch.
    - write a script, let's say script1, that contains all the intelligence to patch a server. Store script1 in /export/patch.
    On NFS client:
    - create a script that does the following:
    . mount /export/patch from NFS server in /mnt
    . run /mnt/script1
    . umount /mnt
    . reboot the server (if you are applying recommended patch cluster it is guaranteed that you will need to reboot the servers).
    - make the script a cronjob to run every saturday (or any other frequency you like)
    Another solution, the one I have in my company is
    - All servers are behind corporate firewall.
    - one server has sunUC proxy server installed. This is the only server allowed to connect to Internet.
    - all other servers have sunUC client installed.
    The patching on all servers is done the way I described above, except the servers get their patches via the sunUC proxy server. No need to load manually Solaris patch clusters.
    Hope this helps.
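    The NFS-client steps above can be sketched as a shell function; the host name patchserver, the mount point, and script1 are the placeholder names from the post, and nothing runs until the function is invoked:

    ```shell
    # Mount the patch share, run the central patch script, unmount, and
    # reboot only if patching succeeded. All names are placeholders.
    apply_patches() {
      mount patchserver:/export/patch /mnt || return 1
      /mnt/script1
      status=$?
      umount /mnt
      [ "$status" -eq 0 ] && /usr/sbin/reboot
    }
    ```

    A cron entry such as `0 2 * * 6 /usr/local/bin/apply_patches.sh` would run it every Saturday at 02:00, matching the frequency suggested above.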

  • How to reconcile services?

    To support a large number of Solaris 10 machines, I need a way
    to establish an initial configuration of services, and to modify that
    configuration from time to time. Taking the inetd restarter as an
    example, I used to keep copies of /etc/inet/inetd.conf in a central
    location, installing them on each machine in a Jumpstart finish
    script. Commenting out lines in the file would disable those
    services.
    This no longer works in build 69. It's possible to convert an
    inetd.conf file into services that run under the inetd restarter,
    but commented out lines don't disable the corresponding
    service. What I'm thinking of for Solaris 10, is to run `inetadm'
    to get a file containing the current set of inetd services. The
    output shows the enabled/disabled status in the first column.
    Then, I'd create a second file, starting with a copy of the first one,
    but with the first column modified to show the desired status.
    A script could then compare the two files and issue `inetadm'
    commands to change the status of services to match the desired
    status.
    Is this a reasonable approach? Does anything like this facility
    already exist, or is planned? I'd also need to do the same thing
    for services that run under the default restarter. The objective
    is to be able to specify a set of enabled services for each
    machine, and ensure that the machine is running with only those
    services enabled. We have a procedure now that makes those
    types of adjustments periodically , or on request.
    I also want to take properties into account, particularly for inetd
    services. I notice that inetd no longer looks at /etc/default/inetd.
    I've been enabling TCP wrappers from that file. Now, I have to do
    it by changing properties of inetd or of services that run under it.
    There is also the issue of locally-installed services. I've taken the
    approach of writing manifests for them, and installing them in the
    Jumpstart finish script. Is this a reasonable approach? I still have
    the problem of enabling or disabling those services, as required.

    Are there any services that should not be disabled initially, but can be disabled later?
    Services ending with "-upgrade" tend to disable themselves after running on first boot. Depending on whether you need SVM or not, you may be able to disable the various "meta" services.
    I'm also going to install manifests for locally-installed services into /var/svc/manifest/site in the Jumpstart finish script. I have one that ensures that a specific NFS mount has succeeded, for example. Is there a similar place for locally-installed methods? I've been using /lib/svc/method.
    We haven't reserved a location for site methods, as they can be placed anywhere in the filesystem. (I'll think about making a site directory in /lib/svc.) If you name your methods with some unique prefix, you'll be fine. (At home, where I play the role of hobbyist admin, I put such methods in /etc/[my_domainname] or in /opt/[my_domainname], depending on when they're needed in boot.)
    - Stephen
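    The compare-and-issue step described in the question can be sketched with awk. The assumed input format is the three-column inetadm listing (ENABLED STATE FMRI); the sketch prints the inetadm commands rather than executing them, so the changes can be reviewed first:

    ```shell
    # Compare a captured inetadm listing ($1) against a desired copy ($2)
    # and print the inetadm enable/disable commands needed to converge.
    # Assumes three whitespace-separated columns: ENABLED STATE FMRI.
    reconcile() {
      awk '$1 != "enabled" && $1 != "disabled" { next }  # skip header lines
           NR == FNR { current[$3] = $1; next }          # remember status
           ($3 in current) && current[$3] != $1 {
             print ($1 == "enabled" ? "inetadm -e " : "inetadm -d ") $3
           }' "$1" "$2"
    }
    ```

    Piping the output through `sh` (after inspection) would apply it; services under the default restarter would need the analogous svcadm enable/disable treatment.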

  • U10 hangs at S8 Upgrade

    My problem is that I want to upgrade a Solaris 7 Desktop to Solaris 8 Server. Everything runs fine until it tests the
    install profile. This test goes to 100% and then nothing happens. I waited an hour, but nothing changed.
    I can minimize, maximize, and move the window, which shows me that Web Start is OK. I don't get an error message or anything; it simply hangs.
    What's the problem?

    Solaris 8 has quite a different filesystem directory structure from Solaris 7,
    just as Solaris 9 has quite a different structure from Solaris 6.
    As far as I remember, I usually set the partition for the root (/) slice to about
    500 MB for Solaris 6 and Solaris 7. This was the common practice at a time
    when a 5 GB hard disk was considered large. When Solaris 8 and 9 came out,
    larger disks became common, like the 9/18/36 GB models. Both of these OS releases also need
    more than 1 or 1.5 GB of space to install successfully. I am pretty sure that your
    old OS does not have that much space in its root filesystem.
    If I were you, I would install the latest OS fresh. Upgrading also means that
    all the patches you had installed on the old OS are inapplicable to the new one,
    so you get another round of problems. Then there are your applications: if they
    are not compatible with the new OS, your production is down.
    Hope this helps.

  • Having trouble with including JNI library in netbeans

    [Sorry, a newbie question - I am more used to writing large amounts of Solaris kernel code, and am writing a JNI wrapper for the first time]
    Guys,
    I have a C library that needs to hook into the Java bean. I created a JNI library wrapper around the C library using swig/javah
    and that went fine (using some of the examples/ref in this forum). The JNI library contains classes like SysInfoJNI created
    by swig.
    class SysInfoJNI {
        public final static native int NAME_LEN_get();
        public final static native int NAME_LEN_set();
    }
    I quickly wrote a standalone test program where I did
    public class test {
        // Load the JNI library
        static {
            System.loadLibrary("SysInfo");
        }
        public static void main(String[] args) {
            SysInfoJNI siJNI = new SysInfoJNI();
        }
    }
    I can compile and run this on command line doing
    javac *.java
    java -Djava.library.path=./ Sysinfotest
    and it all works fine. Now I move this code to netbeans since I need to add some more code
    and create a java->web project and pretty much transfer the code there. I add the libSysinfo.so
    using the property->library->create option, but NetBeans still can't see the symbols coming
    out of the Sysinfo JNI library. I suspect it wants the class definitions. My question is how do I
    supply those? swig didn't really create a header file, but it did create a Sysinfo.java file. Do I
    just copy Sysinfo.java into the src directory (ugly?), or is there a convention for keeping the
    *.java files in some standard place (like /usr/include for C headers), and then what would be
    the method to include SysInfo.java in my NetBeans src so that the symbols can be found?
    Thanks,
    Sunay

    OK, seems like I had some kind of mismatch between the library & classes.
    The loadLibrary call only had the name, and I had the library in /usr/lib. After running
    truss -f -t open /usr/tomcat6/bin/startup.sh >& tomcat.out
    I could see that Tomcat was finding the library but still reporting a link error.
    Recompiling the library and classes again, and then placing the library
    in /usr/lib and the classes in the right directory, resolved the issue.
    So I guess the answer to my original question is that classes need to
    be delivered as a jar along with the rest of the distribution while the
    library can be delivered in /usr/lib as part of the system.
    Appreciate the help and quick responses.
    Sunay

  • 1969 clock bug

    I would like to know why Mac OS X goes back to 1969 if you have a problem with shutdown or the clock battery.
    Why 1969? We are in 2009; that is 40 years ago. Why don't the Mac OS X updates change that clock date to 2009?
    This 1969 date bug causes a lot of problems with iCal, mDNSResponder, and I am sure more programs that sometimes you don't know about.
    For example, the mDNSResponder creation date is 31 December 1969.
    Could anyone please explain to me why all computers return to that date? Is that the date the first Apple was released?

    If for any reason your internal battery stops working, or you have a power failure, then your computer will start having problems. Is this not a BUG?
    No, it isn't a bug. This is exactly how it is designed to work. The date is simply the Unix epoch, 1 January 1970 00:00 UTC, which displays as 31 December 1969 in timezones west of Greenwich. Maybe you don't like the design, but that doesn't make it a bug.
    If you still think it is a bug, please explain how the system should be changed to act differently. How will your system maintain time without any power source or access to external systems. What do you want the system to do under these circumstances?
    Maybe it should use some solid state device to remember what the last time was. That'll work real well - turn off your machine for a day and your clock loses a day - maybe not enough of a difference for you to notice, so you carry on for a week or more with all your dates off by one day, then you repeat - maybe you go home for the weekend. Now you're two or three days off. You are progressively degenerating. How is this better by an all out approach of resetting the clock to 0 - a situation the OS can detect and will warn you about the next time you boot.
    The mac computers are the best and they have to fix any problems like the 1969 BUG.
    It's still not a bug, but even so, find me ANY other system that has the capability to magically maintain date and time settings with no power source. No Windows system can do this. No Linux system can do it. No Solaris, AIX, or HP/UX system can do it, either.
    Do you know that NASA is predicting a large increase in solar flares by 2012? This increased solar activity can disrupt power distribution and communications. In 1989, 6 million people lost power when a huge increase in solar activity caused a surge. All indications are that the next solar activity peak will be worse.
    So what? That's completely irrelevant because 99.99% of all systems have a functional battery that will maintain the clock when the power is out. Replace the battery in your system and you can sleep well at night, too.

  • Large page sizes on Solaris 9

    I am trying (and failing) to utilize large page sizes on a Solaris 9 machine.
    # uname -a
    SunOS machinename.lucent.com 5.9 Generic_112233-11 sun4u sparc SUNW,Sun-Blade-1000
    I am using as my reference "Supporting Multiple Page Sizes in the Solaris Operating System" http://www.sun.com/blueprints/0304/817-6242.pdf
    and
    "Taming Your Emu to Improve Application Performance (February 2004)"
    http://www.sun.com/blueprints/0204/817-5489.pdf
    The machine claims it supports 4M page sizes:
    # pagesize -a
    8192
    65536
    524288
    4194304
    I've written a very simple program:
    #include <stdlib.h>

    int main(void)
    {
        int sz = 10 * 1024 * 1024;
        int *x = malloc(sz);
        print_info((void **)&x, 1);   /* poster's helper, not shown */
        while (1) {
            int i = 0;
            while (i < (sz / sizeof(int)))
                x[i++]++;
        }
    }
    I run it specifying a 4M heap size:
    # ppgsz -o heap=4M ./malloc_and_sleep
    address 0x21260 is backed by physical page 0x300f5260 of size 8192
    pmap also shows it has an 8K page:
    pmap -sx `pgrep malloc` | more
    10394: ./malloc_and_sleep
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- malloc_and_sleep
    00020000 8 8 8 - 8K rwx-- malloc_and_sleep
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 6288 6288 6288 - 8K rwx-- [ heap ]
    (The last 2 lines above show about 10M of heap, with a pgsz of 8K.)
    I'm running this as root.
    In addition to the ppgsz approach, I have also tried using memcntl and mmap'ing ANON memory (and others). Memcntl gives an error for 2MB page sizes, but reports success with a 4MB page size - but still, pmap reports the memcntl'd memory as using an 8K page size.
    Here's the output from sysinfo:
    General Information
    Host Name is machinename.lucent.com
    Host Aliases is loghost
    Host Address(es) is xxxxxxxx
    Host ID is xxxxxxxxx
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Manufacturer is Sun (Sun Microsystems)
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    System Model is Blade 1000
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    ROM Version is OBP 4.10.11 2003/09/25 11:53
    Number of CPUs is 2
    CPU Type is sparc
    App Architecture is sparc
    Kernel Architecture is sun4u
    OS Name is SunOS
    OS Version is 5.9
    Kernel Version is SunOS Release 5.9 Version Generic_112233-11 [UNIX(R) System V Release 4.0]
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Kernel Information
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    SysConf Information
    Max combined size of argv[] and envp[] is 1048320
    Max processes allowed to any UID is 29995
    Clock ticks per second is 100
    Max simultaneous groups per user is 16
    Max open files per process is 256
    System memory page size is 8192
    Job control supported is TRUE
    Savid ids (seteuid()) supported is TRUE
    Version of POSIX.1 standard supported is 199506
    Version of the X/Open standard supported is 3
    Max log name is 8
    Max password length is 8
    Number of processors (CPUs) configured is 2
    Number of processors (CPUs) online is 2
    Total number of pages of physical memory is 262144
    Number of pages of physical memory not currently in use is 4368
    Max number of I/O operations in single list I/O call is 4096
    Max amount a process can decrease its async I/O priority level is 0
    Max number of timer expiration overruns is 2147483647
    Max number of open message queue descriptors per process is 32
    Max number of message priorities supported is 32
    Max number of realtime signals is 8
    Max number of semaphores per process is 2147483647
    Max value a semaphore may have is 2147483647
    Max number of queued signals per process is 32
    Max number of timers per process is 32
    Supports asyncronous I/O is TRUE
    Supports File Synchronization is TRUE
    Supports memory mapped files is TRUE
    Supports process memory locking is TRUE
    Supports range memory locking is TRUE
    Supports memory protection is TRUE
    Supports message passing is TRUE
    Supports process scheduling is TRUE
    Supports realtime signals is TRUE
    Supports semaphores is TRUE
    Supports shared memory objects is TRUE
    Supports syncronized I/O is TRUE
    Supports timers is TRUE
    /opt/default/bin/sysinfo: /dev/ksyms is not a 32-bit kernel namelist
    Device Information
    SUNW,Sun-Blade-1000
    cpu0 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    cpu1 is a "900 MHz SUNW,UltraSPARC-III+" CPU
    Does anyone have any idea as to what the problem might be?
    Thanks in advance.
    Mike

    I ran your program on Solaris 10 (yet to be released) and it works.
    Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
    00010000 8 8 - - 8K r-x-- mm
    00020000 8 8 8 - 8K rwx-- mm
    00022000 3960 3960 3960 - 8K rwx-- [ heap ]
    00400000 8192 8192 8192 - 4M rwx-- [ heap ]
    I think you don't have this patch for Solaris 9:
    i386: 114433-03
    sparc: 113471-04
    Let me know if you encounter problems even after installing this patch.
    Saurabh Mishra

  • Insertion too large on Solaris

    Hi
    I am trying to insert some rows into a table, but I am having trouble.
    My client system is Solaris 2.8 and my database is Oracle 10.
    My table has a VARCHAR2(180) field, but when I try to insert a value with accents, like 'é', I get an error:
    ERROR at line 4:
    ORA-01401: inserted value too large for column
    So I looked at the charsets.
    On the client system:
    NFI li6bad00@bt1sssrd:/usr/users/li6bad00/BASE_LIGIS/sql/data> env | grep NLS
    NLSPATH=/u01/app/tuxedo/650/locale/C/%N
    NLS_LANG=FRENCH_FRANCE.WE8ISO8859P1
    NLS_DATE_FORMAT=DD/MM/YYYY
    ORA_NLS32=/u01/app/oracle/product/10204/ocommon/nls/admin/data
    and my database is also in ISO 8859-1.
    Do you have an idea?
    Thanks

    A few details about the environment:
    PARAMETER VALUE
    NLS_LANGUAGE FRENCH
    NLS_TERRITORY FRANCE
    NLS_CURRENCY ?
    NLS_ISO_CURRENCY FRANCE
    NLS_NUMERIC_CHARACTERS ,.
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD/MM/RRRR
    NLS_DATE_LANGUAGE FRENCH
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_SORT FRENCH
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY ?
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_COMP BINARY
On the client:
    NLSPATH=/u01/app/tuxedo/650/locale/C/%N
    NLS_LANG=FRENCH_FRANCE.WE8ISO8859P1
    NLS_DATE_FORMAT=DD/MM/YYYY
    ORA_NLS32=/u01/app/oracle/product/10204/ocommon/nls/admin/data
    NLS_NCHAR_CHARACTERSET=AL16UTF16
    LC_ALL=fr_FR.ISO8859-1
    LANG=fr_FR.ISO8859-1
    LC_CTYPE="fr_FR.ISO8859-1"
    LC_NUMERIC="fr_FR.ISO8859-1"
    LC_TIME="fr_FR.ISO8859-1"
    LC_COLLATE="fr_FR.ISO8859-1"
    LC_MONETARY="fr_FR.ISO8859-1"
    LC_MESSAGES="fr_FR.ISO8859-1"
    LC_ALL=fr_FR.ISO8859-1
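One thing worth ruling out (a hedged diagnostic sketch, not a confirmed cause): if the terminal or input file is actually UTF-8 while NLS_LANG claims ISO-8859-1, each accented character arrives as two bytes, Oracle counts those bytes against the column's byte length, and ORA-01401 can appear even though the string looks short enough:

```shell
# What encoding does the shell session really use?
# It should report ISO8859-1 to match NLS_LANG above.
locale charmap
# Inspect the raw bytes of 'é': one byte (0351) under ISO-8859-1,
# two bytes (0303 0251) if the input is actually UTF-8.
printf 'é' | od -An -to1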

  • Solaris hangs when Oracle SGA is too large

    I am an Oracle DBA working with preconfigured Solaris servers. Our Unix admin is outsourced, so we don't have a lot of good support here. When I set the instance SGA too large, the server hangs and requires a hard reboot. The error seems to be due to insufficient memory or swap. Is this normal behaviour for Solaris 10? Shouldn't the process abend before the server does?

    I've managed to duplicate this behaviour on two separate boxes. The first server is quite small, with only 1 GB of physical memory, and the SGA target was set too high; Oracle was also competing with other workloads on this box. Still, it surprised me that the server actually hung. I would have expected something like "shared memory realm does not exist" if there was not enough memory.
    The second box has 8 GB of physical memory and 4 GB of swap. I set up the Oracle /etc/project parameters for this one; here are the kernel settings:
    shminfo_shmmni = 256
    seminfo_semmni = 128
    shminfo_shmmax = 4G
    seminfo_semmsl = 256
    In this case Solaris hung when I started the database with a 4 GB SGA_MAX_SIZE; I don't think SGA_TARGET was set. I think the server ran out of swap, but I'm still confused because it seems like physical memory should have been available. I also don't understand why the Oracle instance would have been using swap at all, since Oracle locks physical memory using ISM or DISM.
    In both cases the OS hung and could only be revived with a hard reboot.
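For what it's worth, on Solaris 10 the shminfo_*/seminfo_* tunables are largely superseded by per-project resource controls, so the shared-memory cap is normally set in /etc/project rather than /etc/system. A sketch of capping it properly (the "oracle" user and project name are assumptions for illustration):

```shell
# Create a project for the oracle user and cap its System V shared
# memory at 4 GB; an SGA request above this cap fails at instance
# startup instead of exhausting memory and swap.
projadd -U oracle -K "project.max-shm-memory=(privileged,4G,deny)" user.oracle
# Inspect the effective limit for that project:
prctl -n project.max-shm-memory -i project user.oracle
```

Note also that, as far as I understand, DISM segments (unlike ISM) are not locked in physical memory and require backing swap reservation, which may explain why an oversized SGA eats into swap.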

  • Migrating Solaris 10 to a larger hard disk

    Hi all,
    I'm not sure if "migrating" is the right word. Here is the situation: we have a blade server with only two local HDD slots, both already occupied. The disks are mirrored using hardware RAID. We recently ran out of disk space and want to increase capacity, but external storage is not an option here. Management bought two larger hard disks and asked whether we can migrate/clone the running production OS onto them. The filesystem is UFS and it is an x86 machine. Please advise whether this can be done using ufsdump. If yes, what about the partition sizes? Will they be expanded? Thanks very much.

    I checked with the hardware engineer and they confirmed that the RAID members can be replaced and resynced one at a time without breaking the mirror: we can pull hdd2 while hdd1 stays in the server in RAID 1, insert hdd3 (the new larger disk), and let it sync, then repeat the steps with hdd1 > hdd4. I also found that the growfs command can expand a mountpoint. Can anyone share any foreseeable concerns, and is this workable?
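If the RAID-resync route falls through, a ufsdump/ufsrestore clone is the classic alternative. A rough sketch, with all device names assumed for illustration, run from single-user mode or boot media after partitioning the new disk with format(1M):

```shell
# Rebuild and copy one filesystem (repeat per slice):
newfs /dev/rdsk/c1t1d0s0                 # create UFS on the new, larger slice
mount /dev/dsk/c1t1d0s0 /mnt
ufsdump 0f - /dev/rdsk/c1t0d0s0 | (cd /mnt && ufsrestore rf -)
# On x86, reinstall the boot blocks on the new disk:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# If the hardware RAID is instead resynced onto the larger disks, the
# extra space appears after repartitioning, and growfs can extend a
# filesystem while it is mounted:
growfs -M /export /dev/rdsk/c1t0d0s7
```

Since the new slices are created at their full size, ufsdump/ufsrestore sidesteps the partition-growth question entirely; growfs is only needed for the in-place RAID route.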
