11 GB Limitation in Solaris 8?

I am new to Solaris. I noticed that I cannot set up the OS to use all of my 43 GB Maxtor drive; it only uses 11 GB. Is this a fundamental limitation? Can the rest of the disk be set up via the GUI?
Any help or recommended patches (yes, I have looked) would be greatly appreciated!
Jake

http://sunsolve.sun.com/pub-cgi/show.pl?target=patches/xprod-Solaris_x86IntelDrivers&nav=pub-patches
and the ATA driver patch: 110202-01
turby

Similar Messages

  • Java classpath limitation in solaris 8?

    Hi,
    I'd like to find out if there is a limitation on the classpath length in Solaris 8. We've noticed that as long as we keep the length under 1000 characters, we are able to run programs. Is there a workaround for this? Thanks!

    Hi!
    In your classpath... aren't you using $JAR instead of $JAX?
    In the line:
    CLSPATH="../:.:$JAR/jaxb-api.jar:$JAR/jaxb-impl.jar:$JAR/jaxb-libs.jar:$JWS/relaxngDatatype.jar:$JWS/namespace.jar"

  • Solaris 2.6 i386 on HP NetServer E800 - problem in detecting SCSI disk

    Hi people,
    We have an HP NetServer E800 with a 9 GB SCSI drive attached to the onboard U2W SCSI controller.
    The Solaris 2.6 boot disk doesn't seem to detect the 9 GB drive; we assume the SCSI controller itself is not being detected at all. I found an article that explains how to use a hard drive that is > 8 GB, but our problem is that Solaris cannot see the disk drive at all.
    Our current assumption is that this is due to a limitation of Solaris 2.6 for i386.
    Any thoughts would be much appreciated.
    Thanks.

    Hi Asgorath,
    Unfortunately I do not have an answer for you, but I am experiencing the identical problem on an HP Proliant BL25p Blade Server. Here is my hardware config:
    CPU - Dual Opteron 2.6 Single Core
    Memory - 16 GB
    Controller - HP Smart Array 6i
    Logical drives - 1 (2x72 GB RAID 1+0)
    I have tried the same things as you and I still get intermittent boots. Powering on and off does not always work, as the system still hangs. It's totally hit or miss. Once it does boot it seems to work fine. I have also installed the latest Solaris 10 for x86 Patch Cluster in hopes that some of the kernel patches would fix the problem. So far no luck, as the system continues to hang...HARD!! HP was out to my site today and the engineer is going back to the group that qualified Solaris 10 on the HP Blades to see if they have any input. If I receive anything back I will post it here. I know this post is late; if you have resolved it, please post your fix.

    Hi doc42755,
    I have not come up with a solution as of yet; if I do come up with one, I will most certainly post it here.
    However, if you find the solution, could you let us know in this thread as well.
    Thanks
    Asgaroth

  • Solaris 8 and SunFire v445

    Hello,
    I need information about compatibility and limitations for Solaris 8
    running on a SunFire v445.
    Looking at the v445 specifications, I found that it only supports Solaris 9 & 10.
    Is that correct?
    Thanks

    Solaris 8 on the V445 platform ?
    ... not going to happen.
    Here's a recent discussion from the Hardware Forum.
    http://forum.sun.com/jive/thread.jspa?threadID=108466
    (Recent = only four weeks ago)
    In particular, see the 11-Oct-06 response in that thread, from contributor Maalatft.

  • Solaris 10 5/08 x86 - Pseudocolor visual not available in xdpyinfo

    Hello,
    I am running Solaris 10 5/08 x86 (64-bit) on a Dell Precision 690 workstation, as a guest under VMware Workstation 6.0.4 on Windows XP. I have an NVIDIA Quadro FX 3500 video card. I am not sure if this is really a VMware issue or one that can be solved from within Solaris.
    I need to run "Xnest -depth 8 -class Pseudocolor", but my video setup, using the Xorg server (Xsun is not supported by VMware Tools), does not support PseudoColor visuals. Here is my xdpyinfo output:
    xdpyinfo
    name of display: :0.0
    version number: 11.0
    vendor string: Sun Microsystems, Inc.
    vendor release number: 10300000
    maximum request size: 16777212 bytes
    motion buffer size: 256
    bitmap unit, bit order, padding: 32, LSBFirst, 32
    image byte order: LSBFirst
    number of supported pixmap formats: 7
    supported pixmap formats:
    depth 1, bits_per_pixel 1, scanline_pad 32
    depth 4, bits_per_pixel 8, scanline_pad 32
    depth 8, bits_per_pixel 8, scanline_pad 32
    depth 15, bits_per_pixel 16, scanline_pad 32
    depth 16, bits_per_pixel 16, scanline_pad 32
    depth 24, bits_per_pixel 32, scanline_pad 32
    depth 32, bits_per_pixel 32, scanline_pad 32
    keycode range: minimum 8, maximum 255
    focus: window 0x1e0000a, revert to Parent
    number of extensions: 37
    BIG-REQUESTS
    DAMAGE
    DEC-XTRAP
    DOUBLE-BUFFER
    DPMS
    Extended-Visual-Information
    GLX
    MIT-SCREEN-SAVER
    MIT-SHM
    MIT-SUNDRY-NONSTANDARD
    RANDR
    RECORD
    RENDER
    SECURITY
    SGI-GLX
    SHAPE
    ST
    SYNC
    SolarisIA
    TOG-CUP
    VMWARE_CTRL
    X-Resource
    XAccessControlExtension
    XC-APPGROUP
    XC-MISC
    XEVIE
    XFIXES
    XFree86-Bigfont
    XFree86-DGA
    XFree86-Misc
    XFree86-VidModeExtension
    XINERAMA
    XINERAMA
    XInputExtension
    XKEYBOARD
    XTEST
    XVideo
    default screen number: 0
    number of screens: 1
    screen #0:
    dimensions: 1600x1200 pixels (542x406 millimeters)
    resolution: 75x75 dots per inch
    depths (7): 24, 1, 4, 8, 15, 16, 32
    root window id: 0x3e
    depth of root window: 24 planes
    number of colormaps: minimum 1, maximum 1
    default colormap: 0x20
    default number of colormap cells: 256
    preallocated pixels: black 0, white 16777215
    options: backing-store NO, save-unders NO
    largest cursor: 32x32
    current input event mask: 0xfa2033
    KeyPressMask KeyReleaseMask EnterWindowMask
    LeaveWindowMask ButtonMotionMask StructureNotifyMask
    SubstructureNotifyMask SubstructureRedirectMask FocusChangeMask
    PropertyChangeMask ColormapChangeMask
    number of visuals: 4
    default visual id: 0x22
    visual:
    visual id: 0x22
    class: TrueColor
    depth: 24 planes
    available colormap entries: 256 per subfield
    red, green, blue masks: 0xff0000, 0xff00, 0xff
    significant bits in color specification: 8 bits
    visual:
    visual id: 0x23
    class: TrueColor
    depth: 24 planes
    available colormap entries: 256 per subfield
    red, green, blue masks: 0xff0000, 0xff00, 0xff
    significant bits in color specification: 8 bits
    visual:
    visual id: 0x24
    class: TrueColor
    depth: 24 planes
    available colormap entries: 256 per subfield
    red, green, blue masks: 0xff0000, 0xff00, 0xff
    significant bits in color specification: 8 bits
    visual:
    visual id: 0x25
    class: TrueColor
    depth: 24 planes
    available colormap entries: 256 per subfield
    red, green, blue masks: 0xff0000, 0xff00, 0xff
    significant bits in color specification: 8 bits
    Notice that only 4 TrueColor visuals are supported. Does anyone know how I can change my setup so that my Xserver supports a Pseudocolor visual? VMware Tools is very limited for Solaris, and it set up the Xserver for me.
    Any help or advice anyone could provide would be greatly appreciated.
    Thanks...

    I installed the latest NVIDIA driver and ran nvidia-xconfig, which generated a new xorg.conf file. Option "CIOverlay" is in the new file, which should enable PseudoColor visuals. However, after rebooting with "reboot -- -r", Xorg won't start. The X errors say:
    No devices detected
    Fatal server error:
    no screens found
    XIO: fatal IO error 146 (Connection refused) on X server ":0.0"
    error (pid 714): Server for display :0 can't be started
    /var/log/Xorg.0.log appears to show that /usr/X11/lib/modules/drivers//nvidia_drv.so (compiled for 4.0.2, module version 1.0.0) is loaded:
    (II) NVIDIA dlloader X driver 173.14.09
    (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
    (II) Primary Device is: PCI 00:0f:0
    (EE) No devices detected
    So Xorg can't find my video devices. This page: http://us.download.nvidia.com/solaris/173.14.09/README/appendix-c.html does show that the Quadro FX 3500 is supported (thank heavens).
    When I run /usr/X11/bin/scanpci I get:
    pci bus 0x0000 cardnum 0x00 function 0x00: vendor 0x8086 device 0x7190
    Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge
    pci bus 0x0000 cardnum 0x01 function 0x00: vendor 0x8086 device 0x7191
    Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge
    pci bus 0x0000 cardnum 0x07 function 0x00: vendor 0x8086 device 0x7110
    Intel Corporation 82371AB/EB/MB PIIX4 ISA
    pci bus 0x0000 cardnum 0x07 function 0x01: vendor 0x8086 device 0x7111
    Intel Corporation 82371AB/EB/MB PIIX4 IDE
    pci bus 0x0000 cardnum 0x07 function 0x03: vendor 0x8086 device 0x7113
    Intel Corporation 82371AB/EB/MB PIIX4 ACPI
    pci bus 0x0000 cardnum 0x0f function 0x00: vendor 0x15ad device 0x0405
    VMware Inc Abstract SVGA II Adapter
    pci bus 0x0000 cardnum 0x10 function 0x00: vendor 0x1000 device 0x0030
    LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI
    pci bus 0x0000 cardnum 0x11 function 0x00: vendor 0x15ad device 0x0790
    VMware Inc Device unknown
    pci bus 0x0002 cardnum 0x00 function 0x00: vendor 0x8086 device 0x100f
    Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
    pci bus 0x0002 cardnum 0x01 function 0x00: vendor 0x1274 device 0x1371
    Ensoniq ES1371 [AudioPCI-97]
    Only VMware devices are seen, and there is nothing in /dev/fbs except "text-0 -> ../../devices/pci@0,0/display@f:text-0"
    There is a /dev/nvidia0 and it should be symlinked to /dev/fbs/nvidia0, but the link is broken. Can Solaris 10 use the installed Solaris NVIDIA drivers in a VMware workstation environment? The only way I can get Solaris to work is to use the vmware video driver, which does not support pseudocolor visuals.

  • Installing Solaris 10 from a SCSI DVD drive and a Tekram DC-390U adapter

    I'm trying to install Solaris x86 from a SCSI DVD drive with a Tekram DC-390U adapter (DVD version downloaded from the Sun website).
    When the computer boots from the DVD, the Solaris Configuration Assistant runs and asks me to choose the device I want to boot from. The problem is that the list only contains my hard drive and a CD drive, but not the DVD drive.
    I guess Solaris doesn't include a driver for that specific SCSI adapter, which I find quite surprising as any old Linux or *BSD works fine with it. The card is also listed in the device list Solaris finds while probing the system.
    I tried to download drivers from the Tekram website, but they are limited to Solaris 8. I didn't try them because it takes me so much time to get a floppy drive up and running...
    PS: Installing on a VirtualPC 2004 guest works fine, but it's so slow...

    I downloaded the latest version of Solaris 10, and that solved the problem of the continual reboots. Now the keyboard doesn't work, but that's a different problem.

  • Maximum number of threads that can be created in Solaris

    Dear All,
    This is Amarnath.E. I'm working on a high-end server program, and as per my requirements I have to create many threads in my program.
    1. Is there any limitation in Solaris on the number of threads that can be created?
    2. If so, how do I increase the number of threads that can be created?
    3. Does the number of threads that can be created vary based on the system configuration?
    Thanks in advance,
    Amarnath.E

    Hello there,
    I believe the previous answer is correct: there is no specific kernel limit to be set. You will eventually run out of virtual address space (after roughly 3000 threads). Out of the 4 GB virtual address space, everything from the kernel base address (i.e., 0xF0000000) upward is reserved for the kernel (except on Ultra boxes, where you can practically get almost 4 GB each for the kernel and the process's virtual address spaces). Thus you have about 3.75 GB on SPARC (3.5 GB on x86, where the kernel base is 0xE0000000). Reducing the thread stack size is not generally recommended (be sure the new stack size can handle the stack growth each thread needs), but if you are sure it is safe for your purpose, it can be done as an argument to thread creation (see the sketch after this message). Please also see the thr_create manpage.
    Hope this helps.
    hae
    Sun Developer Technical Support
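    A minimal sketch of passing an explicit stack size to thr_create(), as the reply above suggests; the 64 KB stack size and the trivial worker function are illustrative assumptions, not values from this thread. Compile and link with -lthread.

        /* Illustrative only: shrinking the per-thread stack so that more
         * threads fit into the process's virtual address space. */
        #include <thread.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            /* keep per-thread stack usage small if you shrink the stack */
            return arg;
        }

        int main(void)
        {
            thread_t tid;
            /* NULL stack base lets the library allocate the stack; 64 KB is
             * an assumed size -- it must cover the deepest call chain the
             * thread will ever make, plus signal-handling overhead. */
            int rc = thr_create(NULL, 64 * 1024, worker, NULL, 0, &tid);
            if (rc != 0) {
                fprintf(stderr, "thr_create failed: %d\n", rc);
                return 1;
            }
            (void) thr_join(tid, NULL, NULL);
            return 0;
        }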

  • Iozone website benchmark results on Xserve RAID

    Hi,
    I'd like to try and find out how the Apple RAID was configured for the
    iozone benchmark. The results of the benchmark are here:
    http://www.iozone.org/src/current/Xserver.xls
    I asked the question directly to iozone and the answer was that the
    benchmark was run in the Apple booth at some tradeshow/conference.
    The booth guys let the iozone guys run the benchmark there.
    The write results in the iozone benchmark are about 2.5 times better than
    the write results we are getting so I'd like to try and figure out how it was
    configured.
    I'm going to be using the Apple RAID in an intensive write database
    application running MySQL. Here's some info on my setup:
    My system is a Sun v40 with four Opterons and 8 GB of RAM. I'm running
    Solaris 10 release 03/05.
    The Apple RAID has all 14 disks and is Fibre Channel connected. The disks
    are set up so that the mirroring is done in the Apple RAID. This produces
    a bunch of LUNs. These are all striped together at the OS level, so
    the setup is RAID 10. Note that this version of the OS only supports
    a LUN size of 2TB max.
    Thanks for any help,
    Mike
      Mac OS X (10.4.6)   Xserve RAID

    The last tab in the spreadsheet tells you how the RAID was configured, namely RAID 5 128 stripes.
    From what I recall when I ran iozone against one of my XServe RAIDs, their figures came out a little higher than mine, but not dramatically. I'll see if I can find the data dumps for comparison.
    In the meantime I would look at how you're configuring the RAID. Publishing a series of mirrors and using striping at the host level seems less than ideal. You're forcing the XServe RAID to write the data twice on each controller, as well as requiring the OS to manage which LUNs it's writing to.
    (remember, RAID 1 write performance is lower than other RAID levels)
    You would be better off running either RAID 0+1 (striping on the XServe RAID with each side mirrored by the server), or RAID 5, leaving everything up to the XServe RAID - the XServe RAID's performance at RAID 5 is not significantly lower than RAID 0 and it eliminates any overhead on the server side.
    If it wasn't for the volume size limitation in Solaris I would recommend RAID 50 over 10 (RAID 5 on the XServe RAID, striped on the host) but that would likely exceed the 2TB volume limit.
    Other things to check are the write caches on the drive (use only if you're in a stable power environment).

  • Fsbtodb macro in ufs_fs.h does not return correct disk address

    I'm using fsbtodb to translate a file's inode block address to a file system disk address.
    What I've observed is that fsbtodb returns the correct disk address for all files if the file system size is < 1 TB.
    But if the UFS file system is larger than 1 TB, then for some files the fsbtodb macro does not return the correct value; it returns a negative value.
    Is this a known issue, and has it been resolved in newer versions?
    Thanks in advance,
    dhd

    > returns correct disk address for all the files if file system size < 1 TB
    > if ufs file system size is greater than 1 TB then for some files, the macro fsbtodb does not return correct value, it returns a negative value
    I seem to (very) vaguely recall that you shouldn't be surprised at this example of a functional file-size limitation (see the sketch after this message for why the value can go negative).
    Solaris 9 first shipped in May 2002, and though it was the first release of that OS to have extended file attributes, I do not think the developers intended the OS to use raw filesystems larger than 1 TB natively.
    That operating environment is just too old to do exactly as you hope.
    Perhaps others can describe this at greater length.
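    A minimal, self-contained sketch of the arithmetic that could produce the negative values described above, assuming the classic fsbtodb-style definition (a left shift of the fragment number by fs_fsbtodb) and a signed 32-bit disk address; the 8 KB fragment size is an illustrative assumption, not taken from this thread.

        /* Illustrative only: a signed 32-bit disk address wraps negative once
         * the byte offset crosses 1 TB (2^31 sectors of 512 bytes). */
        #include <stdio.h>
        #include <stdint.h>

        typedef int32_t daddr32_t;           /* old-style signed disk address   */

        #define DEV_BSIZE   512
        #define FRAG_SIZE   8192             /* assumed UFS fragment size       */
        #define FS_FSBTODB  4                /* log2(FRAG_SIZE / DEV_BSIZE)     */

        /* classic fsbtodb-style translation: fragment number -> sector number */
        #define FSBTODB32(b)  ((daddr32_t)((b) << FS_FSBTODB))

        int main(void)
        {
            /* a fragment just past the 1 TB boundary (2^31 sectors) */
            int64_t frag = ((int64_t)1 << 31) / (FRAG_SIZE / DEV_BSIZE);

            printf("64-bit sector number: %lld\n", (long long)(frag << FS_FSBTODB));
            printf("32-bit sector number: %d\n", FSBTODB32(frag));  /* wraps negative */
            return 0;
        }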

  • How to migrate from a standard store setup to a split store (msg - idx) setup

    How can I migrate from a standard store setup to a split setup as described in
    https://wikis.oracle.com/display/CommSuite/Best+Practices+for+Messaging+Server+and+ZFS ?
    Can a 'reconstruct' run do the migration, or do I have to do an
    imsbackup / imsrestore?

    If your new setup would use the same filesystem layout as the old one (i.e. directory paths to the files would be the same when your migration is complete) you can just copy the existing store into the new structure, rename the old store directory into some other name, and mount the new hierarchy instead of it (zfs set mountpoint=...). The CommSuite Wiki also includes pages on more complex migrations, such as splitting the user populace into several stores (on different storage) and/or separate mailhosts. That generally requires that you lock the user in LDAP (perhaps deferring his incoming mail for later processing into the new location), migrate his mailbox, rewrite the pointers from LDAP, reenable account. The devil is in the details, for both methods. For the latter, see Wiki; for the former I'll elaborate a bit here
    1) To avoid any surprises, you should stop the messaging services before making the filesystem switch, finalize the data migration (probably with prepared data already mostly correct in the new hierarchy before you shut down the server, just resyncing the recent changes into the new structure), make the switch, and re-enable the server. If this is a lightly-used server which can tolerate some downtime - good for you. If it is a production server, you should schedule some time when it is not heavily used so you can shut it down, and try to be fast - so perhaps practice on a test system or a clone first.
    I'd strongly recommend taking this adventure in small reversible steps, using snapshots and backups, and renaming old files and directories instead of removing them - until you're sure it all works, at least.
    2) If your current setup already includes a message store on ZFS, and it is large enough for size to be a problem, you can save some time and space by tricks that lead to direct re-use of existing files as if they are the dataset with a prepopulated message store.
    * If this is a single dataset with lots of irrelevant data (i.e. one dataset for the messaging local zone root with everything in it, from OS to mailboxes) you can try zfs-cloning a snapshot of the existing filesystem and moving the message files to that clone's root (eradicating all irrelevant directories and files on the clone). Likewise, you'd remove the mailbox files on the original system (when the time is right, and after sync-ing).
    * If this is already a dedicated store dataset which contains the directories like dbdata/, mboxlist/, partition/ and session/, and which you want to split further to store just some files (indices, databases) separately, you might find it easier to just make new filesystem datasets with proper recordsizes and relocate these files there, and move the partition/primary to the remaining dataset's root, as above. In our setups, the other directories only take up a few megabytes and are not worth the hassle of cloning - which you can also do for larger setups (i.e. make 4 clones and make different data at each one's root). Either way, when you're done, you can and should make sure that these datasets can mount properly into the hierarchy, yielding the pathnames you need.
    3) You might also look into separating the various log-file directories into datasets, perhaps with gzip-9 compression. In fact, to reduce needed IOPS and disk space at expense of available CPU-time, you might use lightweight compression (lzjb) on all messaging data, and gzip on WORM data sets - local zone, but not global OS, roots; logs; etc. Structured databases might better be left without compression, especially if you use reduced record sizes - they might just not compress enough to make a difference, just burning CPU cycles. Though you could look into "zle" compression which would eliminate strings of null bytes only - there's lots of these in fresh database files.
    4) If you need to recompress the data as suggested in point (3), or if you migrate from some other storage to ZFS, rsync may be your friend (at least, if your systems don't rely on ZFS/NFSv4 ACLs - in that case you're limited to Solaris tar or cpio, or perhaps to very recent rsync versions which claim ACL support). Namely, I'd suggest "rsync -acvPHK --delete-after $SRC/ $DST/" with maybe some more flags added for your needs. This would retain the hardlink structure which Messaging server uses a lot, and with "-c" it verifies file contents to make sure you've copied everything over (i.e. if a file changes without touching the timestamp).
    Also, if you were busy preparing the new data hierarchy with a running server, you'd need to rsync old data to new while the services are down. Note that reading and comparing the two structures can take considerable time - translating to downtime for the services.
    Note that if you migrate from ZFS to ZFS (splitting as described in (2)), you might benefit from "zfs diff" if your ZFS version supports it - this *should* report all objects that changed since the named snapshot, and you can try to parse and feed this to rsync or some other migration tool.
    Hope this helps and you don't nuke your system,
    //Jim Klimov

  • Ultra 10's Question

    So, I recently liberated nine Sun Ultra 10s from my university's trash heap before they were thrown away. I would like to use them for doing some grid computing, but unfortunately, not many grid computing projects are compiled for SPARC. However, they do all have cards with x86 coprocessors on them.
    How can I use these coprocessors as well as the main processor to do this computing? What kind of software do I need to use the cards? Do I need a specific Operating System to be able to interface with the cards? Can I virtualize the main SPARC processors so that X86 code will run on them? If so, how?

    It depends on which SunPCi card you have... the I, II and III, I think, are limited to Solaris 9 and below (and XP). You will have to search the hardware forum and the Sun site to find more...
    The first three are limited in RAM and CPU power... so, try... too bad you are trying it with a U10 - not much horsepower there either, and IDE to boot.
    good luck!
    haroldkarl

  • Virtual to physical address

    Greetings. We are developing a network driver for Solaris and face the problem of flushing a buffer from/to a user area to/from a kernel area, outside a process context. The main question is: how, from within a kernel module, can we retrieve the physical address corresponding to a virtual address of an arbitrary process? Our targets are limited to Solaris 7 and 8. Thanks in advance for any help you'll provide.
    Pierre.

    Please refer to hat_getkpfnum(9F) to do this from within a kernel module.
    Otherwise, libkvm provides the following function:
    extern uint64_t kvm_physaddr(kvm_t *, struct as *, uintptr_t);
    Refer to the man page for libkvm(3LIB). A user-space sketch follows below.
    crash(1M) provides the function "vtop" to carry out this translation.
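    A minimal user-space sketch of the libkvm route mentioned above, assuming the kvm_physaddr() prototype quoted in the reply and assuming kvm_getproc() is used to locate the target process's address space; the command-line PID and virtual address are placeholders, and this illustrates the user-space path rather than the in-driver hat_getkpfnum(9F) approach. Link with -lkvm.

        /* Illustrative only: translate a virtual address of another process to
         * a physical address from user space via libkvm (cc ... -lkvm). */
        #include <kvm.h>
        #include <sys/types.h>
        #include <sys/proc.h>
        #include <fcntl.h>
        #include <inttypes.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct as;   /* kernel address-space structure, used opaquely here */

        /* prototype as quoted in the reply above (may already be in <kvm.h>) */
        extern uint64_t kvm_physaddr(kvm_t *, struct as *, uintptr_t);

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: vtop <pid> <vaddr>\n");
                return 1;
            }
            pid_t     pid  = (pid_t)atoi(argv[1]);
            uintptr_t vadr = (uintptr_t)strtoul(argv[2], NULL, 0);

            /* open the running kernel; requires sufficient privileges */
            kvm_t *kd = kvm_open(NULL, NULL, NULL, O_RDONLY, "vtop");
            if (kd == NULL)
                return 1;

            struct proc *p = kvm_getproc(kd, pid);   /* locate the process */
            if (p == NULL) {
                (void) kvm_close(kd);
                return 1;
            }

            uint64_t paddr = kvm_physaddr(kd, p->p_as, vadr);
            printf("0x%lx -> physical 0x%llx\n",
                   (unsigned long)vadr, (unsigned long long)paddr);

            (void) kvm_close(kd);
            return 0;
        }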

  • Multiple versions of  Sunone on 1 box

    Please help me. We are in the process of migrating from SunOne 7 to SJAS 8.x, that is, SunOne App Server version 8. Since we are supporting both versions during the transition period, I would like to have both servers installed on our integration box, which is a Sun SPARC machine running Solaris 10. But the problem is that once I install version 8, the SunOne 7 installation will not work, as it complains about a host of packages already existing on the machine, for example SUNWant. I have root privileges and I am trying to install both as root.
    Is this a limitation of Solaris? I have successfully installed these versions on the same box with Windows XP on it.
    Thanks
    Sunil

    It is very easy to have several versions of App Server 7 and 8 on a single box. All you have to do is use a package-based install for one and a file-based install for the other, or file-based installs for both appserver versions.

  • Limitation on LD_LIBRARY_PATH on Solaris box

    Hi All,
    I am using Tuxedo v8.0 on a Solaris 2.8 box. This Tuxedo server brings other servers up after the "tmboot -y" command is issued. I see the following message if LD_LIBRARY_PATH is too long:
    "CMDTUX_CAT:819: INFO: Process id=1281 Assume started (pipe)."
    Is there any limitation that LD_LIBRARY_PATH should not be more than some predefined number of characters?
    Thanks in advance.
    -Pijush

    As Wayne points out, there are some temporary variables of size 2048 used to manipulate LD_LIBRARY_PATH in tmsyncproc(), the function where the problem is occurring. A long value of LD_LIBRARY_PATH can overwrite the values in these temporary variables. If you keep LD_LIBRARY_PATH shorter than this, you should be fine. (A sketch of this failure mode appears at the end of this thread.)
    <Pijush Koley> wrote in message news:[email protected]...
    Thanks for the reply.
    You are right. I received one core file at $APPDIR. But the strange thing is I got the core from the "tmboot" binary. Here is the back trace which I received when LD_LIBRARY_PATH is too long:
    ===========================================
    user1@TNUTF8 /proj1/appdir> file core
    core: ELF 64-bit MSB core file SPARCV9 Version 1, from 'tmboot'
    user1@TNUTF8 /proj1/appdir> dbx /proj1/3p/tuxedo8.0/bin/tmboot core
    Reading tmboot
    core file header read successfully
    Reading ld.so.1
    Reading libm.so.1
    Reading libgpnet.so.71
    Reading libtux.so.71
    Reading libbuft.so.71
    Reading libfml.so.71
    Reading libfml32.so.71
    Reading libengine.so.71
    Reading libpthread.so.1
    Reading librt.so.1
    Reading libsocket.so.1
    Reading libnsl.so.1
    Reading libthread.so.1
    Reading libc.so.1
    Reading libaio.so.1
    Reading libdl.so.1
    Reading libmp.so.2
    Reading libc_psr.so.1
    Reading en_US.ISO8859-1.so.2
    Reading registry.so
    detected a multithreaded program
    t@1 (l@1) terminated by signal SEGV (no mapping at the fault address)
    0x0000000100006430: __do_misaligned_ldst_instr+0x01d4: ldx [%g4 + 0x8], %o0
    dbx: warning: invalid frame pointer
    (/opt/SUNWspro/bin/../WS6U2/bin/sparcv9/dbx) where
    current thread: t@1
    =>[1] __do_misaligned_ldst_instr(0xffffffff7fff4f90, 0xffffffff7fff5050, 0xd25c2000, 0x2f33702f726f7365, 0x1, 0xb), at 0x100006430
    [2] __misalign_trap_handler(0x7474652f6c69623a, 0xffffffff7fffe20c, 0x0, 0x100598020, 0x0, 0x100126c40), at 0x100007680
    [3] tmsyncproc(0xffffffff7ecacea0, 0x100136f58, 0xffffffff7fff6bd8, 0x0, 0x1, 0xffffffff7fff6bc0), at 0xffffffff7eb2e0d4
    (/opt/SUNWspro/bin/../WS6U2/bin/sparcv9/dbx) exit
    ================================
    But I did not receive any error when LD_LIBRARY_PATH is not too long.
    Any pointers?
    Thanks in advance.
    -Pijush
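    A minimal sketch of the failure mode described in the reply above, assuming (as that reply does) a fixed 2048-byte temporary buffer and an unchecked copy of LD_LIBRARY_PATH; this illustrates the general bug class, not Tuxedo's actual tmsyncproc() code.

        /* Illustrative only: copying an over-long LD_LIBRARY_PATH into a fixed
         * 2048-byte temporary writes past the buffer and corrupts the stack. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static void sync_proc_like(void)
        {
            char tmp[2048];                      /* fixed-size temporary */
            const char *p = getenv("LD_LIBRARY_PATH");

            if (p == NULL)
                return;

            /* Unsafe: no length check, so a value longer than 2047 bytes
             * overflows tmp -- the kind of crash seen in the backtrace above. */
            strcpy(tmp, p);

            /* Safer alternative: truncate (or reject) over-long values:
             * (void) snprintf(tmp, sizeof (tmp), "%s", p);                    */

            printf("copied %lu bytes\n", (unsigned long)strlen(tmp));
        }

        int main(void)
        {
            sync_proc_like();
            return 0;
        }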

  • Thread number limitation on Sun One Web Server 6.1 on Solaris 9

    Hi.
    I am testing my servlet on Web Server 6.1 on Solaris 9 (SPARC). For a performance check, I am logging the start and end of the HTTPServlet.doPost() method by calling GenericServlet.log().
    When I send more than two requests (each takes a long time) simultaneously from a browser, my servlet logs something like:
    doPost start
    doPost start
    doPost end
    doPost start
    doPost end
    doPost start
    That is, two requests are processed concurrently by threads while the other requests wait; after each running thread ends, a waiting request is processed, one by one.
    I think there is some limit on threads or connections, so I checked magnus.conf, but RqThrottle is set to 128 and I cannot find any thread-count settings.
    My magnus.conf is as follows.
    # The NetsiteRoot, ServerName, and ServerID directives are DEPRECATED.
    # They will not be supported in future releases of the Web Server.
    NetsiteRoot /export/home0/SUNWwbsvr
    ServerName test03
    ServerID https-test03
    RqThrottle 128
    DNS off
    Security off
    PidLog /export/home0/SUNWwbsvr/https-test03/logs/pid
    User webservd
    StackSize 131072
    TempDir /tmp/https-test03-8ac62f09
    UseNativePoll off
    PostThreadsEarly on
    KernelThreads off
    Init fn=flex-init access="$accesslog" format.access="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length%"
    Init fn="load-modules" shlib="/export/home0/SUNWwbsvr/bin/https/lib/libj2eeplugin.so" shlib_flags="(global|now)"Why web server do not process more than two requests concurrent? Which server configuration should I check?
    Thanks in advance.

    I don't think I ever ran into that kind of a limit. Does the servlet use database connections (maybe the connection pool is empty) or other critical sections / large synchronized blocks?
    Try a minimal servlet that takes a while to execute:
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            log("sleep starting " + Thread.currentThread().getName());
            try {
                Thread.sleep(30000);   // hold this request's thread for 30 seconds
            } catch (InterruptedException e) { }
            log("sleep done " + Thread.currentThread().getName());
            response.getOutputStream().println("good morning");
        }
