DiskSuite Solaris 10 Configuration

Due to space constraints I would like to break the mirrors and use the disks for additional space. Does anyone have recommendations? This is a test host only; I'm not concerned about having these filesystems mirrored.
d14 -m d12 d13 1                    /usr/prmsbackup
d12 1 2 c1t10d0s3 c1t11d0s3 -i 64b
d13 1 2 c1t12d0s3 c1t13d0s3 -i 64b
d0 -m d1 d2 1                         /usr/prms
d1 1 2 c1t3d0s4 c1t2d0s4 -i 64b
d2 1 2 c1t8d0s4 c1t9d0s4 -i 64b
Thanks!
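
For reference, a minimal sketch of how such a mirror is typically broken with Solaris Volume Manager (metadevice names taken from the listing above; this removes redundancy, so verify backups first):
metadetach d14 d13        # detach submirror d13 from mirror d14
metaclear d13             # delete d13, freeing c1t12d0s3 and c1t13d0s3
metadetach d0 d2          # likewise for the /usr/prms mirror
metaclear d2
# the freed slices can then be reused, e.g. metainit a new concat/stripe,
# or metattach them to the remaining submirrors and growfs the file systems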

Did the Solaris 10 installer recognize a supported network device?
If so, the installer should have prompted you for the NIC's configuration,
and when you boot Solaris from the hard disk, the command "ifconfig -a"
should list a network interface in addition to the loopback interface "lo0".
(In a next step we can then try to configure the NIC to access external systems.)
Otherwise, if all that "ifconfig -a" lists is the loopback interface "lo0",
you have to find a third-party NIC device driver (or perhaps modify / extend
the standard Solaris PCI device -> device driver bindings) to be able to
access the NIC. The command /usr/X11/bin/scanpci can help identify the PCI
cards present in the system. Once you know what kind of Ethernet PCI hardware
is present, you can start looking for a Solaris x86
NIC device driver (e.g. here: http://homepage2.nifty.com/mrym3/taiyodo/eng/index.htm ).
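
A minimal sketch of the checks described above (the driver name "rtls" and the IP address are illustrative assumptions, not taken from this thread):
ifconfig -a                          # only lo0 listed means no NIC driver is attached
/usr/X11/bin/scanpci                 # identify the PCI Ethernet hardware
# after installing a matching driver package, plumb and configure the interface:
ifconfig rtls0 plumb
ifconfig rtls0 192.168.1.20 netmask 255.255.255.0 up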

Similar Messages

  • Oracle RAC on Solaris Configuration Issue

    Hi,
    We are trying to install Oracle RAC 10g R2 on Solaris 10.
    Following are the products:
    1=> Solaris 10 OS
    2=> Sun Cluster 3.1
    3=> Veritas Volume Manager
    4=> Veritas Cluster File System
    Can I deploy Oracle RAC using the software listed above?
    We have not purchased any Cluster Volume Manager here.
    Is CVM required to install Oracle RAC?
    Is there any alternative where we can install Oracle RAC without using CVM, for example using raw devices, where the volume manager manages storage from a single node, rather than buying CVM, which allows the storage to be managed from multiple nodes?
    Also, I would like to highlight that the suggestion to use ASM has been ruled out, so ASM will not be used.
    Can anyone suggest a solution to the above problem?

    Well, my impression is that you don't really know what your requirements are and you are trying to fit the technologies together somehow. Worse yet, there is no clear understanding of how those technologies fit together, and judging from what you relay of what your vendor is telling you (which vendor?), they don't have a clear idea either.
    I would also suggest reviewing your decision to use Sun Network Data Replicator for the DR site. Consider an Oracle physical standby database instead - it's a more flexible solution and doesn't limit your choice of storage stack.
    1) Does Oracle RAC 10gR2 require any Veritas Cluster Volume Manager?
    The words "any" and "Veritas" contradict each other in your question. But the answer is no. RAC requires shared storage, which can generally be one of the following:
    - raw devices, with or without some kind of cluster volume manager
    - a cluster file system
    - NAS storage (NFS mounted)
    - ASM, with raw devices for the CRS files (OCR and voting disks)
    2) Does Oracle RAC 10g R2 require any Veritas Cluster File System, or can it sit on a normal Veritas File System?
    Again, Oracle database files MUST reside on shared storage, and a non-clustered file system is not an option.
    3) Is there any solution where we can use Sun Cluster with Veritas components to configure RAC?
    If I recall correctly, the Sun Cluster license includes some components of the Veritas storage stack, so you might be all set. You should turn to the documentation at that stage and see exactly what you have licensed and whether your stack allows shared storage. You might want to have a look at http://www.sun.com/software/whitepapers/solaris10/solaris_cluster.pdf
    and the Oracle Certification Matrix on Metalink.

  • How to configure Solaris so that third-party software can use Netscape

    Hi,
    I don't know if this is a Solaris configuration issue.
    I have Storage Foundation installed on Solaris 10 SPARC, and I am using the VEA GUI application to manage storage. I intentionally performed some actions so that the error window appears with a link.
    The link points to the Veritas web page.
    After clicking on this link the error appears:
    "Error attempting to launch web browser.:
    netscape: not found"
    Please tell me whether this issue requires some change in Solaris, or whether Storage Foundation needs to be configured properly.
    The internet connection works fine when running Netscape from CDE.
    Kind regards,
    Daniel

    I don't think it has anything to do with the third-party software. When you click on a link or document, the OS tries to access it using the associated application. In your case, clicking on a hyperlink tries to load the default web browser, which on your system is Netscape.
    If, say, you installed Firefox and configured it to be the default web browser, then Solaris would launch it instead.
    Try manually launching Netscape and, in its preferences, setting it as the default web browser. See how you go with that.
    Cheers,
    Erick Ramirez
    Melbourne, Australia
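
    A different hedged workaround, not from Erick's reply: if VEA simply shells out to a binary literally named "netscape", you can point that name at whatever browser is installed (the Firefox path below is an assumption; adjust to your system):
    ln -s /usr/sfw/bin/firefox /usr/bin/netscape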

  • Clustering Solaris 10 (SPARC) with QFS 4.3

    I have searched to no avail for a solution to my error. The error is bolded and italicized in the information below. I would appreciate any assistance!
    System
    - Dual Sun-Fire-280R with external dual ported SCSI-3 disk arrays.
    - Solaris 10 Update 1 with the latest patch set (as of 5/2/06)
    - Clustering from Java Enterprise System 2005Q4 - SPARC
    - StorEdge_QFS_4.3
    The root/boot disk is not mirrored - I don't want to introduce another level of complication at this point.
    I followed an example in one of the docs, "HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager", for setting up an HA QFS file system.
    The following is additional information:
    hosts file for PREFERRED - NOTE: the Secondary has the same entries, but the PREF and SEC loghost entries are switched.
    # Internet host table
    127.0.0.1 localhost
    XXX.xxx.xxx.11 PREFFERED loghost
    XXX.xxx.xxx.10 SECONDARY
    XXX.xxx.xxx.205 SECONDARY-test
    XXX.xxx.xxx.206 PREFERRED-test
    XXX.xxx.xxx.207 VIRTUAL
    Please NOTE I only have one NIC port to the public net.
    ifconfig results from the PREFERRED for the interconnects only
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:70:15
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:9b:bc:f9
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:1
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:7015/10
    ether 0:3:ba:18:70:15
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:fe9b:bcf9/10
    ether 8:0:20:9b:bc:f9
    PLEASE NOTE: I did disable IPv6 during the Solaris installation, and I have modified the defaults to use NFSv3.
    ifconfig results from the SECONDARY for the interconnects only
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:86:fe
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:ac:97:9f
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.2 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:2
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:86fe/10
    ether 0:3:ba:18:86:fe
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:feac:979f/10
    ether 8:0:20:ac:97:9f
    Again - I disabled IPv6 at install time.
    I followed all instructions and below are the final scrgadm command sequences:
    scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"
    scrgadm -a -t SUNW.HAStoragePlus
    scrgadm -a -t SUNW.nfs
    scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs
    scrgadm -a -L -g nfs-rg -l VIRTUAL_HOSTNAME
    scrgadm -c -g nfs-rg -h PREFERRED_HOST,SECONDARY_HOST
    scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/qfsnfs1 -x FilesystemCheckCommand=/bin/true
    scswitch -Z -g nfs-rg
    scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y Resource_dependencies=qfsnfs1-res
    PREFERRED_HOST - Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs1-res are invalid.
    VALIDATE on resource nfs1-res, resource group nfs-rg, exited with non-zero exit status.
    Validation of resource nfs1-res in resource group nfs-rg on node PREFERRED_HOST failed.
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
    And finally, the results of the scstat command (same for both hosts):
    (root)[503]# scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: PREF Online
    Cluster node: SEC Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: PREF:hme0 SEC:hme0 Path online
    Transport path: PREF:eri0 SEC:eri0 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: PREF 1 1 Online
    Node votes: SEC 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: nfs1dg PREF SEC
    Device group servers: nfsdg PREF SEC
    -- Device Group Status --
    Device Group Status
    Device group status: nfs1dg Online
    Device group status: nfsdg Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: nfs-rg VIRTUAL qfsnfs1-res
    -- Resource Groups --
    Group Name Node Name State
    Group: nfs-rg PREF Online
    Group: nfs-rg SEC Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: VIRTUAL PREF Online Online - LogicalHostname online.
    Resource: VIRTUAL SEC Offline Offline - LogicalHostname offline.
    Resource: qfsnfs1-res PREF Online Online
    Resource: qfsnfs1-res SEC Offline Offline
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: PREF ipmp1 Online ce0 Online
    IPMP Group: SEC ipmp1 Online ce0 Online
    Also, the system will not fail over.

    Good Morning Tim:
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
    Below are the contents of vfstab for the Preferred host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d2s3 /dev/did/rdsk/d2s3 /global/.devices/node@1 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are the contents of vfstab for the Secondary host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d20s3 /dev/did/rdsk/d20s3 /global/.devices/node@2 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are contents of /var/adm/messages from scswitch -Z -g nfs-rg through the offending scrgadm command:
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource qfsnfs1-res status on node PREFFERED_HOST change to R_FM_ONLINE
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource qfsnfs1-res status msg on node PREFFERED_HOST change to <>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_MON_STARTING
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_PENDING_ON_STARTED
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_monitor_start> for resource <qfsnfs1-res>, resource group <nfs-rg>, timeout <90> seconds
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_monitor_start> completed successfully for resource <qfsnfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <90 seconds>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <merater>, resource group <nfs-rg>, time used: 0% of timeout <300 seconds>
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource merater state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_ONLINE
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_validate> for resource <nfs1-res>, resource group <nfs-rg>, timeout <300> seconds
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 638868 daemon.error] /global/qfsnfs1 does not exist or is not mounted.
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 792295 daemon.error] Some shared paths in file /global/nfs/admin/SUNW.nfs/dfstab.nfs1-res are invalid.
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <nfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <300, seconds>
    If there is anything else that might help, please let me know. I am currently considering tearing the cluster down and rebuilding it to test with a UFS filesystem, to see if the problem might be with QFS.
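
    A quick sanity sketch using only paths that appear in the messages above: confirm that the QFS file system is actually mounted on the node running VALIDATE, and note that the error message reports the dfstab under /global/nfs/admin/SUNW.nfs while the file shown lives under /global/nfs/SUNW.nfs:
    df -k /global/qfsnfs1
    ls -l /global/nfs/SUNW.nfs/dfstab.nfs1-res
    ls -l /global/nfs/admin/SUNW.nfs/dfstab.nfs1-res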

  • IP changing problem in Solaris 10 (Sun Fire V490)

    I am a new user of Sun Solaris. Yesterday I configured the file /etc/inet/hosts but missed configuring the files /etc/inet/netmasks and /etc/inet/ipnodes, because of which the server no longer lets me log in. What is another way to get into the server console? I am entering the console in normal mode; should I try logging in using diagnostic mode, or is there any other way to log in and modify the files? I also need training videos on Solaris configuration.
    Help in this regard will be highly appreciated.
    Regards
    Muhammad Ali.

  • Installing Solaris 10 from a SCSI DVD drive and a Tekram DC-390U adapter

    I'm trying to install Solaris x86 from a SCSI DVD drive with a Tekram DC-390U adapter (DVD version downloaded from the Sun website).
    When the computer boots from the DVD, the Solaris Configuration Assistant runs and asks me to choose the device I want to boot from. The problem is the list only contains my hard drive and a CD drive, but not the DVD drive.
    I guess Solaris doesn't include any driver for that specific SCSI adapter, which I find quite surprising as any old Linux or *BSD works fine with it. The card is also listed in the device list Solaris found while probing the system.
    I tried to download drivers from the Tekram website, but they are limited to Solaris 8. I didn't try them because it takes me so much time to get a floppy drive up and running...
    PS: Installing on a VirtualPC 2004 guest works fine, but it's so slow...

    I downloaded the latest version of Solaris 10, and that solved the problem of the continual reboots. Now the keyboard doesn't work, but that's a different problem.

  • Solaris 10 JET install and ZFS

    Hi - so, following on from "Solaris Volume Manager or Hardware RAID?", I'm trying to get my client templates switched to ZFS but it's failing with:
    sudo ./make_client -f build1.zfs
    Gathering network information..
    Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
    Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
    Solaris: client_prevalidate
    Clean up /etc/ethers
    Solaris: client_build
    Creating sysidcfg
    WARNING: no base_config_sysidcfg_timeserver specified using JumpStart server
    Creating profile
    Adding base_config specifics to client configuration
    Adding zones specifics to client configuration
    ZONES: Using JumpStart server @ xxx.14.80.199 for zones
    Adding sbd specifics to client configuration
    SBD: Setting Secure By Default to limited_net
    Adding jass specifics to client configuration
    Solaris: Configuring JumpStart boot for build1.zfs
    Solaris: Configure bootparams build
    Starting SMF services for JumpStart
    Adding Ethernet number for build1 to /etc/ethers
    cleaning up preexisting install client "build1"
    removing build1 from bootparams
    removing /tftpboot/inetboot.SUN4V.Solaris_10-1
    svcprop: Pattern 'network/tftp/udp6:default/:properties/restarter/state' doesn't match any entities
    enabling network/tftp/udp6 service
    svcadm: Pattern 'network/tftp/udp6' doesn't match any instances
    updating /etc/bootparams
    copying boot file to /tftpboot/inetboot.SUN4V.Solaris_10-1
    Force bootparams terminal type
    -Restart bootparamd
    Running '/opt/SUNWjet/bin/check_client build1.zfs'
    Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
    Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
    Checking product base_config/solaris
    Checking product custom
    Checking product zones
    Product sbd does not support 'check_client'
    Checking product jass
    Checking product zfs
    WARNING: ZFS: ZFS module selected, but not configured to to anything.
    Check of client build1.zfs
    -> Passed....
    So what is "WARNING: ZFS: ZFS module selected, but not configured to to anything." referring to? I've amended my template and commented out all references to UFS so I now have this:
    base_config_profile_zfs_disk="slot0.s0 slot1.s0"
    base_config_profile_zfs_pool="rpool"
    base_config_profile_zfs_be="BE1"
    base_config_profile_zfs_size="auto"
    base_config_profile_zfs_swap="65536"
    base_config_profile_zfs_dump="auto"
    base_config_profile_zfs_compress=""
    base_config_profile_zfs_var="65536"
    I see there is a zfs.conf file at /opt/SUNWjet/Products/zfs/zfs.conf - do I need to edit that as well?
    Thanks - J.

    Hi Julian,
    You MUST create /var as part of the installation in base_config, as stuff gets put there really early during the install.
    The ZFS module allows you to create additional filesystems/volumes in the rpool, but does not let you modify the properties of existing datasets/volumes.
    So,
    you still need
    base_config_profile_zfs_var="yes" if you want a /var dataset.
    /export and /export/home are created by default as part of the installation. You can't modify that as part of the install.
    Your zones dataset seems to be fine and as expected; however, zfs_rpool_filesys needs to list ALL the filesystems you want to create. It should read zfs_rpool_filesys="logs zones". This makes JET look for variables of the form zfs_rpool_filesys_logs and zfs_rpool_filesys_zones. (Only the last occurrence of a variable is picked up, in your case the zones entry. Remember, the template is a simple name=value set of variables: if you repeat the "name" part, it simply overwrites the value.)
    So you really want:
    zfs_rpool_filesys="logs zones"
    zfs_rpool_filesys_logs="mountpoint=/logs quota=32g"
    zfs_rpool_filesys_zones="mountpoint=/zones quota=200g reservation=200g"
    (incidentally, you don't need to put zfs_pools="rpool" as JET assumes this automatically.)
    So, if you want to alter the properties of /var and /export, the syntax you used would work, if the module was set up to allow you to do that. (It does not currently do it, but I may update it in the future to allow it).
    (Send me a direct e-mail and I can send you an updated script which should then work as expected, check my profile and you should be able to guess my e-mail address)
    Alternatively, I'd suggest writing a simple script and sticking it into the /opt/SUNWjet/Clients/<clientname> directory with the following lines in it:
    varexportquotas:
    #!/bin/sh
    zfs set quota=24g rpool/export
    zfs set quota=24g rpool/ROOT/10/var
    and then running it in custom_scripts_1="varexportquotas"
    (Or you could simply type the above commands the first time you log in after the build. :-) )
    Mike
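
    Pulling Mike's points together, a hedged sketch of what the relevant template entries might end up looking like (variable names are the ones used in this thread; sizes and quotas are illustrative):
    base_config_profile_zfs_disk="slot0.s0 slot1.s0"
    base_config_profile_zfs_pool="rpool"
    base_config_profile_zfs_var="yes"
    zfs_rpool_filesys="logs zones"
    zfs_rpool_filesys_logs="mountpoint=/logs quota=32g"
    zfs_rpool_filesys_zones="mountpoint=/zones quota=200g reservation=200g"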

  • SolMan 4.0 Solaris / Oracle install: Unable to create account user

    Hi!
    I am getting this error during step 2 (Create users for SAP System) of the Solution Manager 4.0 installation. The error occurs when SAPinst is trying to create the account "orasol" for system "SOL".
    INFO[E] 2007-02-14 13:12:51
    FSH-00006 Return value of function getpwnam(orasol) is NULL
    ERROR 2007-02-14 13:12:51
    FSL-01002 Unable to create account user="soladm". UX: /usr/sbin/useradd: ERROR: Unable to create the home directory: No such file or directory. (return code 12)
    ERROR 2007-02-14 13:12:51
    MUT-03025 Caught ESyException in Modulecall: ESAPinstException: error text undefined.
    Please let me know how I can fix this error!
    Regards,
    Thomas

    Hello Thomas,
    It looks like a Solaris issue. Check if the home directory
    /home is reserved in the Solaris configuration.
    To check whether this fixes the problem, please perform the following steps:
    1. modify /etc/auto_master file to change the entry for '/home' to say
       '/autohome' instead
    2. reboot
    3. rmdir /home
    4. ln -s /export/home /home (make sure /export/home exists)
    Also you could create soladm manually in advance.
    Hope this helps,
    Dolores
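
    A hedged sketch of those steps as commands (the auto_master entry shown is the stock Solaris one; adjust to your file):
    # in /etc/auto_master, change the line
    #   /home       auto_home   -nobrowse
    # to
    #   /autohome   auto_home   -nobrowse
    init 6                       # reboot
    rmdir /home
    ln -s /export/home /home     # make sure /export/home exists first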

  • Solaris JVM memory footprint

    WLS8.1sp4 running on Solaris configured with:
    -Xms1024m -Xmx1024m -XX:NewSize=256m
    -XX:MaxNewSize=256m -XX:PermSize=256m
    -XX:MaxPermSize=256m -XX:SurvivorRatio=3
    -verbosegc -XX:+PrintGCDetails
    Monitoring the Java process with prstat shows that memory consumption is constantly growing:
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    26607 acms 1505M 446M run 0 10 0:02.51 9.3% java/66
    26607 acms 1529M 487M run 28 10 0:02.39 18% java/66
    26607 acms 1560M 542M run 0 2 0:04.09 16% java/66
    26607 acms 1604M 620M sleep 20 10 0:00.31 11% java/66
    26607 acms 1645M 687M sleep 20 10 0:00.31 13% java/66
    26607 acms 1687M 752M run 28 10 0:04.34 20% java/66
    26607 acms 1719M 811M run 1 10 0:03.26 17% java/66
    26607 acms 1764M 880M run 19 10 0:03.12 19% java/66
    26607 acms 1803M 960M sleep 20 10 0:00.31 20% java/66
    26607 acms 1857M 1038M run 28 10 0:01.07 19% java/66
    26607 acms 1879M 1090M run 28 10 0:01.43 15% java/66
    26607 acms 1889M 1248M run 20 10 0:01.41 50% java/65
    26607 acms 1924M 1440M sleep 20 10 0:00.31 24% java/65
    The application uses the BEA XA OCI drivers to connect to Oracle 8.1.5 (I know that it's old and OCI should be replaced with the thin driver). Is there any way to prove that the memory growth is in native OCI code?

    Sorry, this newsgroup is about JRockit (which isn't available for solaris - yet).
    /Staffan
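
    Since the original question went unanswered here, a hedged diagnostic sketch (not from this thread): compare the process address space over time with pmap and see whether the growth is outside the fixed 1 GB Java heap, i.e. in native allocations such as the OCI client libraries:
    pmap -x 26607 | tail -20     # PID taken from the output above; repeat at intervals
    # steadily growing anon/heap segments beyond the JVM's -Xmx region point to a native leak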

  • Low CPU utilization on Solaris

    Hi all.
    We've recently been performance tuning our Java application running inside of an application server with Java 1.3.1 HotSpot -server. We've begun to notice some odd trends and were curious if anyone else out there has seen similar things.
    Performance numbers show that our server runs twice as fast on Intel with Win2K as on an Ultra60 with Solaris 2.8.
    Here's the hardware information:
    Intel -> 2 processors (32-bit) at 867 MHz and 2 GB RAM
    Solaris -> 2 processors (64-bit) at 450 MHz and 2 GB RAM
    Throughput for most use cases at a low number of threads is twice as fast on Intel. The only exception is some of our use cases that are heavily dependent on a stored procedure, which runs twice as fast on Solaris. The database (Oracle 8i) and the app server run on the same machine in these tests.
    There should be minor (or no) network traffic. GC does not seem to be an issue. We set the max heap at 1024 MB. We tried the various Solaris threading models as recommended, but they have accomplished little. It is possible our Solaris machine is not configured properly in some way.
    My question (after all that...) is whether this seems normal to anyone. Should throughput be higher, since the processors are faster on the Wintel box? Does the fact that the Solaris processors are 64-bit have any benefit?
    We have also run the HeapTest recommended on this site on both machines. We found that the memory test performs twice as fast on Solaris, but the CPU test performs four times slower on Solaris. The "joint" test performs twice as slow on Solaris. Does this imply bad things about our Solaris configuration, or is this a normal result?
    Another big difference between Solaris and Win2K in these runs is that CPU utilization is low on Solaris (20-30%) while it is much higher on Win2K (60-70%) [both machines are 2-processor and the tests are "primarily" single-threaded at this stage]. I would expect the Solaris CPU utilization to be around 50% as well. Any ideas why it isn't?

    Hi,
    I recently went down this path and wound up coming to the realization that the CPUs are almost neck and neck per cycle when running my Java app. Let me qualify this a little more: a 400 MHz SPARC II CPU vs. a 500 MHz Intel CPU under similar load, running the same test, gave me similar results. It wasn't as huge a difference in performance as I was expecting.
    My theory is that, given the scalability of the SPARC architecture, more chips == more performance with less hardware, whereas the Wintel boxes are cheaper, but in order to get scaling the underlying hardware comes into question (how many Wintel boxes to cluster, co-locate, manage, etc.).
    From what little I've found out when running tests against our Solaris 8 (E-250) 400 MHz UltraSPARC II boxes, it appears that CPU performance in a lightly threaded environment is almost 1 cycle / 1 cycle (SPARC to Intel). I don't think the 64-bit SPARC architecture will buy you anything for Java 1.3.1, but if your application has some huge memory requirements, then using 1.4.0 (when BEA supports it) should be beneficial (check out http://java.sun.com/j2se/1.4/performance.guide.html).
    If your application is running only a few threads, tying the threads to the LWP kernel processes probably won't gain you much. I noticed that it decreased performance for a test with only a few threads.
    I can't give you a good reason as to why your Solaris CPU utilization is so low; you may want to try getting a copy of JProbe and profiling WebLogic and your application to see where your bottlenecks are. I was able to do this with our product and found some nasty little performance bugs, but even with that our CPU utilization was around 98% on a single and 50% on a dual.
    Also, take a look at iostat / vmstat and see if your system is bottlenecking on I/O operations. I kept a background vmstat process logging to a file and then looked at it after my test, and saw that my CPU was constantly pegged out (doing a lot of context switching) but that it wasn't doing a whole lot of page faults (it had enough memory).
    If you're doing a lot of serialization, that could explain slow performance as well.
    I did follow a suggestion on this board of running my test several times with the optimizer (-server), and it boosted performance on each iteration until a plateau on or about the 3rd test.
    If you're running Oracle or another RDBMS on your Solaris machine, you should see a pretty decent performance benchmark against NT, as these types of applications are more geared toward the SPARC architecture. From what I've seen, running Oracle on Solaris is pretty darn fast compared to Intel.
    I know that I tried a lot of different tweaks on my Solaris configuration (TCP buffer sizes, /etc/system parameters for file descriptors, etc.). I even got to the point where I wanted to see how WebLogic was handling the Nagle algorithm as far as its POSIX muxer was concerned, and ran a little test to see how they were setting the sockets (setTcpNoDelay(boolean) on java.net.Socket). They're disabling the Nagle algorithm, so that wasn't an issue, sigh. My best advice would be to profile your application and see where the bottlenecks are; you might be able to increase performance, but I'm not too sure. I also checked out www.spec.org and saw some of their benchmarks that coincide with our findings.
    Best of luck to you and I hope this helps :)
    Andy
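
    A minimal sketch of the background vmstat logging Andy describes (the interval is arbitrary):
    vmstat 5 > /tmp/vmstat.log 2>&1 &
    # ... run the load test ...
    kill %1                      # stop the logger, then inspect /tmp/vmstat.log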

  • How to configure IP address if an "unable to login" error occurs through Hyper

    I am a new user of Sun Solaris 10. On my Sun Fire V490 server everything is configured fine and there is no fault in booting. But I configured the file /etc/inet/hosts in Solaris 10 and missed configuring the files /etc/inet/netmasks and /etc/inet/ipnodes, because of which the server no longer allows me to log in; an "unable to login" error occurs. What is another way to get into the server console? I am entering the console in normal mode; should I try logging in using diagnostic mode, or is there any other way to log in and modify the files? I also need training videos on Solaris configuration.
    Help in this regard will be highly appreciated.

    You should try to boot into single-user mode:
    1) Start the server; when the boot process begins (after the memory checks), press STOP + A on the Sun keyboard. You should land at the "ok" prompt (OBP).
    2) At the ok prompt, type: "boot -s"
    and tell us what happened.
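
    A hedged sketch of what to do once single-user mode comes up (file names as discussed in the question; the network values are illustrative):
    echo "192.168.1.0  255.255.255.0" >> /etc/inet/netmasks
    vi /etc/inet/hosts /etc/inet/ipnodes     # fix the remaining entries
    exit                                     # continue the boot to multi-user mode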

  • Boot into Solaris without keyboard

    May I know how I can successfully boot into multi-session mode on x86 without having my computer connected to a keyboard?
    I have already tuned the BIOS settings. The remaining setting to tune should be in the Solaris configuration.
    Thanks!

    Ok, I figured it out.
    The bluetooth settings in system preferences were set to search for a keyboard.
    So I unchecked the "Open Bluetooth Setup Assistant at startup when no input device is present" setting.
    Matt
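
    A hedged Solaris-side note, not from this thread's reply: on Solaris x86 the system console can be redirected to a serial port, so the machine boots and is manageable with no keyboard or monitor attached:
    eeprom console=ttya          # persist the console redirection
    eeprom console               # verify the setting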

  • SPARC box

    Hi,
    I have a Sun V240 box with 2 x 1.5 GHz CPUs and 4 x 73 GB hard drives.
    I am going to run Oracle 9i Database Server Enterprise Edition and BEA WebLogic Server on it.
    I need some advice on a few issues before proceeding with the installation of Solaris 9:
    - Mirroring the disks. Do you recommend DiskSuite?
    - File system, partitions - anything special here?
    Thanks in advance

    Hi,
    With Sol 9 and Oracle running on internal disks, I would presume this is going to be a test/dev system; otherwise I recommend going for a couple of external arrays on separate controllers.
    However,
    Sol 9 on c1t0d0 mirrored to c1t1d0 with something like the following layout:
    c1t0d0s0 / 24GB (24576MB)
    c1t0d0s1 swap 8GB (8192MB) or double the amount of memory.
    c1t0d0s3 /var 8GB (8192MB) depends on how many logs you have 8GB is loads.
    c1t0d0s4 /apps (remaining minus slice 7 space) Use this for applications but not database.
    c1t0d0s7 /metadb (128MB) Use this for the metadb.
    At build time configure it as /metadb, then unmount it, rmdir the /metadb directory, and take the /metadb line out of /etc/vfstab.
    Install SUNWCXall if you require it, depends on what you are going to be doing with the system. Test it with different builds if you want.
    # prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/x
    # fmthard -s /tmp/x /dev/rdsk/c1t1d0s2
    This will replicate the disk layouts after install.
    Then mirror c1t0d0 with c1t1d0 through DiskSuite / Solaris Volume Manager.
    Now use c1t2d0 mirrored with c1t3d0 for the DataBase (73GB) /u01 or split it up into other filesystems as required.
    When configuring these 2 disks through format, configure 128MB for metadbs just like on the first two disks.
    So you should have metadbs on all four disks.
    Set logging on the required filesystems as you see fit; best to do some testing on this for performance and recovery times, etc.
    HTH
    Tom
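
    A hedged sketch of the SVM mirroring steps Tom describes (slice numbers follow his layout; device names are from his example):
    metadb -a -f -c 3 c1t0d0s7 c1t1d0s7     # state database replicas on both disks
    metainit -f d10 1 1 c1t0d0s0            # submirror on the boot disk's root slice
    metainit d20 1 1 c1t1d0s0               # submirror on the second disk
    metainit d0 -m d10                      # one-way mirror
    metaroot d0                             # update /etc/vfstab and /etc/system for root
    # reboot, then attach the second submirror:
    metattach d0 d20
    # repeat the pattern for swap, /var and /apps, and for the c1t2d0/c1t3d0 database mirror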

  • Unable to create node /home/<SID>adm with type directory

    I am currently experiencing a problem when trying to install a Java Stack onto my new SCM 7.0  EHP1 server.
    I receive the error:
    System call failed.  Error 89......mkdir..../home/djdadm....
    (djd is the SID of my new Java AS)
    This is followed by an error:
    Unable to create node /home/djdadm with type directory
    We have Solaris v10 (SPARC), and the ABAP stack has already been installed successfully.
    The user djdadm was created manually on solaris before starting the installation (assigned to groups sapsys, dba, sapinst + oper) and the home directory "/usr/sap/DJD/home" was specified. 
    Note the difference between the home directory it is trying to create and the one we set against the new user djdadm.
    This error occurs quite early on while SAPinst is performing its tasks (it stops at the "Create users for SAP system" stage).
    I was experiencing a problem where SAPinst crashed completely for a while, so I've downloaded the latest version of SAPinst today. Also, I've set a new TEMP folder and am running SAPinst from there, but I am stuck at the problem mentioned above.
    Any help would be much appreciated. 
    Alistair Crawshaw

    Hello,
    It looks like a Solaris issue. Check if the home directory /home is reserved in the Solaris configuration.                                                                               
    You may try this in that case:       
    1. modify /etc/auto_master file to change the entry for '/home' to say 
       '/autohome' instead                                                  
    2. reboot                                                              
    3. rmdir /home                                                         
    4. ln -s /export/home /home (make sure /export/home exists)       
    Regards,
    Désiré

  • Problems with maintaining session state using mod_ose

    I am using 9i AS (version 1.0.2.0) and Oracle 8i (version 8.1.5).
    Because I need to maintain client sessions for my application, I have activated mod_ose (for stateful sessions). As suggested in the Oracle documentation, I created a package containing variables whose state needs to be maintained per client. The state is maintained by the app server, but it is not maintained properly: all new clients that connect to the app server share the same session variable values. If I change a package variable value from one client, other clients take the same value.
    Any solution?
    Suggestions are most welcome.
    Thanks in advance.

    We installed Oracle9i AS on Solaris, configured OSE, and the problem is not occurring now. Not sure whether it's Solaris that made the difference or whether 9i AS had some problem on the NT machine earlier.
    Regards,
    Sharad

Maybe you are looking for

  • Printed Documentation Research

    I would like to put some proposals to Adobe to let them know well or otherwise the current workflow for producing a printed document works for you. Also what do you particularly like or dislike, things you want to do but cannot. I would welcome your

  • How to set Internet connection in solaris for DataOne Service in India?

    I am using a P4 system with an Intel D945GTP board and I have an internet connection from BSNL. I would like to connect Solaris Express Developer Edition 2/07 to the internet. Help me! Bye, P.Sathish Kumar

  • Query on Master CHM

    Hi All, Greets! I have a query on RoboHelp. I have a master chm, which is linked to couple of individual CHMs. I have a requirement to now place all the individuals chms in a sub-folder,keep the master chm outside the sub- folder and deliver it to th

  • Home button icon in notification center

    If you own an iPod Touch long enough and you have used it very often, you might notice that the Home Button becomes unresponsive. I can read that on the Internet that people are annoyed about this, so I am not the only one! To get around using the bu

  • "404: No group with that name (wikigroupname) hosted on this server"

    I am running 10.6.8 as our wiki server. After rebooting the server, one of the wiki group seems to be not accessible. When try to access the web page it gives error "404: No group with that name (wikigroupname) hosted on this server", though it still