Sparc T5-2 servers

Hi All,
How well does the new Sun SPARC T5-2 handle single-threaded jobs in an Oracle database?
Any documents or links would be of great help.
Thanks.

See the discussion in the companion thread
Solaris Studio 12.3 CC compiler returns no error without compiling

Similar Messages

  • Sun Servers for Personal Use

    I recently came into possession of a Sun Enterprise 450 Ultra, a Sun Enterprise 3500, and two StorEdge 5200s. The company I work for (a straight-by-the-books Windows enterprise) recently took over a competitor that made use of Sun/Solaris, so the machines were getting thrown away. My conscience wouldn't allow this. I would like to assimilate at least one of the servers into my home network, but I don't even know where to start. My ultimate goal would be to have a server running UltraLinux (the SPARC Linux port) that I can use as a fileserver/VPN router, with one of the SANs attached to the fileserver. Would either server necessarily be better than the other? They seem pretty similar to me, but I'm completely in the dark looking at these things. I'm open to ideas and suggestions.

    Excerpt from Sun:
    "Get the Solaris Enterprise System
    Download a complete enterprise-class solution (Solaris 10, Java Enterprise System, development tools and N1 management software) at no cost, no kidding. Already downloaded? See My Sun Connection."
    Here's the link:
    http://www.sun.com/software/solaris/get.jsp
    To my humble understanding, you can download Solaris 8, 9 and/or 10 for free along with a whole host of other Sun software - ALL FREE - and not have to worry about Fiber Channel support or anything else not 'supported' or capable of running under any linux variant. I just started downloading Solaris 10 06/06, free of charge. It's a huge download that requires a DVD burner.
    I believe Sun is doing this to counter the competition from the Linux crowd. While I have been able to get Linux and various BSD flavors to run on various Sun SPARC workstations and servers, I've never run into the Fibre Channel problem, so I can only believe what has been previously posted.
    If you are set on running Linux, I believe the E450 would be your best bet as it is PCI and SCSI based, as opposed to the E3500 having all Fibre Channel hard drives. The CD and tape (if one exists) are SCSI in the E3500, but the backplane into which all internal hard drives plug is Fibre Channel.
    If you want to look at hardware for your systems have a look at the Sun System Handbook found here: (Click on 'EOL Systems')
    http://sunsolve.sun.com/handbook_pub/
    The Ultra Enterprise 450 page:
    http://sunsolve.sun.com/handbook_pub/Systems/E450/E450.html
    There are similar pages for the E3500 and for the A5200 Storage Arrays. In the handbook there are links to purchase some components, which you should feel free to use if you find you need them. However, I would suggest searching eBay or other auction sites before purchasing new parts, as you could save enough to purchase that new dual-core PC I've been hearing so much about.
    Now for getting started on installations... Sun's support forum, BigAdmin, is located at:
    http://www.sun.com/bigadmin/collections/installation.html
    BigAdmin has quite a few articles that should help you through the installation process. Or do as I have always done: Google it, it will come. There's also information there to help set up various other configurations.
    HTH,
    //ericS

  • Oracle Servers Roadmap & Support Life

    Where can I get information on when SPARC T4-1 servers were launched, how long the product will continue to be available, and when support ends?

    sudhirkakkat wrote:
    Sergio, is this PDF viewable only by Oracle employees? I am saving the PDF and it comes out as a blank document. I have saved it 3 times now and the same thing happens. Please help.
    The original link
    http://www.oracle.com/us/support/library/lifetime-support-hardware-os-337182.pdf
    works for me.
    I had recently upgraded my Firefox browser to a newer version and many PDFs failed to display properly until I changed the Firefox MIME action. Instead of using the default plugin action to display the document in a browser window, I now have Firefox default to downloading the file and auto-launching my PDF viewer.
    I've done this to two different Windows Desktops:
    WinXP and Adobe Reader on one and Win7 with Foxit Reader on another.
    Both were giving me PDF issues after a browser upgrade and both are now successfully giving me viewable documents.
    In Firefox -->
    Tools --> Options --> Applications
    Change the choice of action from plugin to the application.

  • BIND 9.2.4 Slow on Solaris 10 01/06

    Hi There,
    Have an issue with 2 x Solaris 10 (SPARC) external DNS servers that we put in. The servers are very quick to resolve local zone files and any cached queries. When I'm requesting a new Internet DNS record that is not in the cache, it can take 5-6 seconds for the query to come back.
    I've been doing some reading and other people have had similar issues with IPv4/IPv6 queries. When looking at my BIND debug logs I can see that requests go out for AAAA records. The servers are not running IPv6 themselves.
    Is there any way to disable IPv6 in BIND 9.2.4, or has anyone come across this problem before and it's something completely different?
    Thanks

    We only have one search domain that is used internally; the external DNS servers host about 25 zones that they are authoritative for. The 2 external servers are also used by the 2 internal servers to handle Internet resolution. It's only slow when requesting FQDNs from the Internet that are not in the cache. If I clear the BIND cache and look up, say, www.microsoft.com, it takes 5-6 seconds to resolve. Next time it's instant.
    Here is most of the named.conf (I cut out some of the hosted zones to limit the length):
    acl bogusnets { 0.0.0.0/8; 2.0.0.0/8; 224.0.0.0/3; };
    # Merged the two "local" acls -- BIND will not accept the same acl name defined twice.
    acl local { 172.19.220.0/32; 172.22.280.0/32; 127.0.0.1/8; };
    options {
         directory "/var/zones";
         allow-recursion { local; };
         allow-transfer { 172.19.82.17; 172.19.220.4; 172.19.280.5; };
         blackhole { bogusnets; };
    };
    logging {
         channel default_log {
              file "/var/logs/default.log" versions 7 size 10m;
              print-category yes;
              print-severity yes;
              print-time yes;
         };
         channel query_log {
              file "/var/logs/query.log" versions 7 size 10m;
              print-category yes;
              print-severity yes;
              print-time yes;
         };
         channel network_log {
              file "/var/logs/network.log" versions 7 size 10m;
              print-category yes;
              print-severity yes;
              print-time yes;
         };
         category default { default_log; };
         category queries { query_log; };
         category network { network_log; };
         category lame-servers { null; };
    };
    zone "." {
         type hint;
         file "named.cache";
    };
    zone "0.0.127.IN-ADDR.ARPA" {
         type master;
         file "master/db.reverse.127.0.0";
    };
    # Reverse lookups for 172.20
    zone "20.172.IN-ADDR.ARPA" {
         type master;
         file "master/db.reverse.172.20";
    };
    # Reverse lookups for 172.19
    zone "191.150.IN-ADDR.ARPA" {
         type master;
         file "master/db.reverse.172.19";
    };
    zone "205.155.IN-ADDR.ARPA" {
         type master;
         file "master/db.reverse.172.20";
    };
    zone "example.com" {
         type master;
         file "master/db.example.com";
    };
    zone "abc.int" {
         type stub;
         file "slave/db.abc.int";
         allow-query { internal; local; };
         masters { 172.20.7.120; 172.20.7.121; };
    };
    zone "testing.com" {
         type stub;
         file "slave/db.testing.com";
         allow-query { internal; local; };
         masters { 172.20.8.161; 172.20.7.119; };
    };
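    On the IPv6 side, two things worth trying -- a sketch only, since the -4 flag appeared in later BIND 9 releases and may not exist in 9.2.4 (check named -h first):
         # Run named with IPv4 transport only (BIND 9.3+; verify availability in 9.2.4)
         named -4
         # And make sure named.conf configures no IPv6 listeners:
         options {
              listen-on-v6 { none; };
         };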

  • Oracle VM Server 2.0 for Sun Fire v240?

    Hi all,
    I am searching and trying all day long to install Oracle VM Server on SPARC 64 Sun Fire v240.
    I still cannot find any information on whether it is supported or not.
    I have Oracle Solaris 10 5.10 Generic_142909-17 sun4u sparc
    the CPUs are UltraSPARC-IIIi (portid 0 impl 0x16 ver 0x34 clock 1503 MHz).
    I installed OVM_Server_SPARC-2_0 from https://edelivery.oracle.com/
    Anyway, the ldmconfig command reports that this is not supported by my platform!
    The release notes for Logical Domains Manager 1.2, 1.3 and VM Server 2.0 list the supported hardware, and I cannot see the V240, but I do see:
    Supported Platforms:
    Sun Fire and SPARC Enterprise T1000 Servers
    Sun Fire and SPARC Enterprise T2000 Servers
    Presumably this "Sun Fire" is not the Sun Fire V240 but the Sun Fire T1000?!
    What do you think - is my server supported by any of these versions? Is virtualization possible?
    Also, I cannot download version 1.3 because it has moved to the My Oracle Support section - but wasn't it free?
    Need your help and support :)
    Thanks a lot.

    A T1000 server does not show V240.
    uname -a output on a T1000 is below:
    bash-3.00# uname -a
    SunOS xxxx 5.10 Generic_142909-17 sun4v sparc SUNW,SPARC-Enterprise-T1000
    Thanks,
    Sudhir
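    A quick way to check on any box is the machine hardware class: Logical Domains / Oracle VM Server for SPARC needs a sun4v (CMT) platform, and the UltraSPARC-IIIi based V240 is sun4u:
         # Prints "sun4u" on a V240 (no LDoms); "sun4v" on a T1000/T2000 (LDoms capable)
         uname -m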

  • SNMP traps from Solaris 10

    Hi,
    I'm trying very hard to set up a Sun Fire V245 to send SNMP traps when certain hardware- or software-related events occur. I've been looking at sma_snmp (net-snmp) and the Fault Management Daemon (SUNWfmd) but they seem to be very limited in their capabilities. I have managed to get some traps sent for filesystem fill-ups and high load averages, but that is about it.
    Most of all I would like the system to send traps when there is a HW failure such as a faulty FRU or if there are disk failures.
    If anyone can point me to some documentation about this, I would be most grateful.
    /Mikael

    Mikael,
    I struggled through the same thing with a Netra 240 recently. The Sun docs are garbage when it comes to this. I opened a ticket with Sun and after 3 days and 6 hours on the phone I finally got hold of someone who knew how to spell SNMP. Yes, it was that bad!
    Here's the scoop. In Solaris 10 you run Net-SNMP, a.k.a. SMA, snmpd. The old snmpdx is obsolete and you shouldn't configure it at all.
    Now to get the hardware related traps for the Sunfire and Netra series servers... (what you are really looking for).
    You have to load and configure an additional SNMP daemon for the hardware specific traps.
    (The first doc is rather old, the last one 819-7978-12 is pretty new and is somewhat more relevant.)
    Sun SNMP Management Agent for Sun Fire and Netra Systems: Sun Doc Number 817-2559-13
    Sun SNMP Management Agent Addendum for the Netra 240 Server: Sun Doc Number 817-6238-10
    Sun SNMP Management Agent Administration Guide for Sun Blade/Sun Fire/Sun SPARC Enterprise/Netra Servers: Sun Doc Number 819-7978-12
    And finally the SMA/net-snmp/snmpd guide for the standard Solaris-related traps:
    Solaris System Management Agent Administration Guide: Sun Doc Number 817-3000-11
    There are problems with all of the above documents. None of the Netra/Sunfire docs specifically talk about Solaris 10 so read them with caution. They also talk about configuring and running snmpdx and never reference SMA/net-snmp. This is odd because the instructions I got from Sun (finally) were not to run snmpdx, only to run sma/snmpd and additionally run the sunfire/netra snmpd agent.
    The SMA document (817-3000-11) has an undocumented bug, which Sun knows about and is working on but will not reveal to the public. In the section titled "Migration From the Sun Fire Management Agent" it references a script called masfcnv that converts the sunfire/netra specific snmp config and daemon to work with and through SMA. Since they all use the same ports (161/162) there is some conflict, and the masfcnv script is meant to resolve this by making sma/snmpd a proxy agent for requests toward the sunfire/netra specific hardware daemon.
    The problem is the masfcnv script doesn't work properly. In fact, if you run the script you will destroy your other snmp configurations and may have to uninstall and reinstall the packages to clean everything up. This script hasn't ever worked and Sun is working on a fix but they neglect to mention this in the document which is IMO gross negligence and is a reflection of Sun's overall state of affairs (but that's another ranting thread).
    So what you must do is configure SMA/net-snmp (or whatever you want to call it), and also configure the sunfire/netra specific snmp (after downloading and installing that package).
    Since traps are sent to the remote trapsink using destination port 162, both net-snmp and the netra specific snmp daemons can co-exist here (port 162 is not an open listening port on the machine).
    Port 161 is used for receiving SNMP Get requests and can only be bound to one daemon at a time. So either it is used by net-snmp or the netra snmp daemon, but not both. Since my boxes have not been fully integrated yet, I haven't figured out which daemon port 161 is bound to. At any rate, in my application the customer is only interested in receiving traps, so the outcome here isn't that important.
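    If it helps anyone, one way to find which daemon actually holds UDP port 161 on Solaris 10 (which has no netstat -p) is to scan processes with pfiles -- an illustrative sketch, run as root:
         # pfiles prints a "port: 161" line for any socket bound to that port
         for p in /proc/[0-9]*; do
              pfiles $p 2>/dev/null | grep "port: 161$" >/dev/null && echo "$p has port 161"
         done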
    I realize this isn't complete but I'm no expert here and haven't worked through all the test scenarios on a fully configured system. Hopefully though this will help clear some of the confusion propagated through Sun's stupid documents. Good luck!
    /Frank
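    As a rough starting point for the trap side, a minimal SMA configuration might look like the sketch below -- the community string and trap destination are placeholders for your site's values:
         # /etc/sma/snmp/snmpd.conf -- minimal sketch, not a complete config
         rocommunity public
         trapsink nms.example.com public
    Then restart the agent with: svcadm restart svc:/application/management/sma:default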

  • Oracle DB Can't Survive ZFS SA Controller Failover

    We are running two new Sparc T4-1 servers against a ZFS SA with two heads and a single DE2-24P disk shelf. It is configured with a single pool for all the storage. Our servers are clustered with VCS as an active/passive pair, so only one server accesses storage at a time. The active server runs the Oracle Enterprise DB version 12c, using dNFS to connect to the shares. Before deployment, we are testing out various failure scenarios, and I was disheartened to see that the Oracle DB doesn't handle a controller failover very well. Here's how I tested:
    My DBA kicked off a large DB import job to provide some load.
    I logged in to the secondary head, and issued "takeover" on the "cluster" page.
    My DBA monitored the DB alert log, and reported everything looking fine.
    When the primary head was back online, I logged in to it, and issued "takeover" on the "cluster" page.
    This time things didn't go so well. We logged the following:
    Errors in file /u04/app/oracle/diag/rdbms/aasc/aasc/trace/aasc_arc2_1296.trc:
    ORA-17516: dNFS asynchronous I/O failure
    (the same ORA-17516 line repeated another 23 times)
    ARCH: Archival stopped, error occurred. Will continue retrying
    Tue Aug 12 14:25:14 2014
    ORACLE Instance aasc - Archival Error
    Tue Aug 12 14:25:14 2014
    ORA-16038: log 15 sequence# 339 cannot be archived
    ORA-19510: failed to set size of  blocks for file "" (block size=)
    12-AUG-14 14:32:03.424: ORA-02374: conversion error loading table "ARCHIVED"."CR_PHOTO"
    12-AUG-14 14:32:03.424: ORA-00600: internal error code, arguments: [kpudpcs_ccs-1], [], [], [], [], [], [], [], [], [], [], []
    12-AUG-14 14:32:03.424: ORA-02372: data for row: INVID : ''
    12-AUG-14 14:32:03.513: ORA-31693: Table data object "ARCHIVED"."CR_PHOTO" failed to load/unload and is being skipped due to error:
    ORA-02354: error in exporting/importing data
    ORA-00600: internal error code, arguments: [kpudpcs_ccs-1], [], [], [], [], [], [], [], [], [], [], []
    My DBA said that this was a very risky outcome, and that we certainly wouldn't want this to happen to a live production instance.
    I would have hoped that the second controller failover would have been invisible to the Oracle instance. What am I missing?
    Thanks.
    Don

    Your FRA (fast recovery area) filled up.
    You are getting: ORA-16038: log 15 sequence# 339 cannot be archived.
    This means that there is no more space in the FRA.
    You need to clean up the FRA. Here are some steps:
    SQL> alter system set db_recovery_file_dest_size=18G;
    http://oraclenutsandbolts.net/knowledge-base/oracle-data-guard/65-oracle-dataguard-and-oracle-standby-errors
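    If the archive destination itself is full rather than just the quota, space can be reclaimed with RMAN -- a hedged sketch (only delete logs that are already backed up; the timeframe is an example):
         RMAN> crosscheck archivelog all;
         RMAN> delete archivelog all completed before 'sysdate-1';
    Raising db_recovery_file_dest_size, as above, only helps while the underlying filesystem still has free space.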

  • Hello Mr. otn i would like to have a Win-NT/2000-Discussion-group too

    I am not an NT fan, but many customers run their DB under NT, and if it's stable, why not?
    Is there a possibility???
    Thanks

    Originally posted by Karl Reitschuster ([email protected]):
    I am not an NT fan, but many customers run their DB under NT, and if it's stable, why not?
    Is there a possibility???
    I would also like to see a group specific to Windows 2000. We have installed several Solaris SPARC and Intel servers running Oracle in the last few months, but it seems that the next installs are going to be Win2000. I am having more trouble (and Oracle has given me lots of trouble) with Win2000 than all the others combined. It's different!!

  • Essbase 11.1.2 dimension build slow

    Hi All,
    I wonder if anyone has had similar problems or ideas of how to solve the problem I have.
    We've recently migrated to Essbase 11.1.2 running on some big SPARC 64 bit servers and everything is going well.
    We have a problem with dimension builds though. They work, just very slowly, sometimes taking over a minute to build a dimension with just 10-20 members in it. With 22 dimensions to build it is taking over 20 minutes to build our dimensions - much more time than loading the gigabytes of data afterwards - and this is making the overnight process slower.
    The model was migrated from 11.1.1.2 and we converted the outline. The dimension builds on the old server took 4 minutes. The old rules files are still used, but as a test I tried creating a new rules file in the same application to load the metadata from a text file instead of an Oracle SQL server, and the same problem exists.
    We use deferred restructure on an ASO model but the restructure is fine - just the 'BuildDimFile' session runs for over a minute and then starts again for over a minute for the next dim build. The number of members seems to make no difference, be it 10 or so to 60,000.
    Has anyone got any ideas why or seen similar? Or should I really move out of the dark ages and learn Essbase Studio!
    Thanks for your help,
    Dan

    It seems to be related to the number of dimensions.
    I tried creating a basic outline, two dimensions and loaded with a basic rules file. Took 8 seconds in total, no problem.
    I then tried copying in one of the existing rules files that builds a dimension with four members. Still no problem.
    I then went to the existing outline and deleted all but two dimensions and tried rebuilding with the rules file - it is still quick. I then added the dimensions back, by typing them, all as stored with no members underneath and it suddenly jumps to 40 seconds (of which 30 sec for the dimension build time, 10 sec for the restructure) using the same rules file and data. I'd expect the restructure time to go up, but not the build time.
    Possibly unrelated - EAS often produces an error message when loading an outline stating that I have -1536 MB memory free and the outline is 0 MB, do I still wish to load?
    Dan

  • Internet access from Solaris 10 server

    Hi,
    We have SPARC T4 servers running Solaris 10. I need one of the servers to be able to reach the Internet; however, I am not able to even ping 8.8.8.8.
    I am able to browse the Internet using Firefox after I changed the proxy settings in the browser.
    Can someone help with the steps to configure Internet access? I am installing Ops Center and it needs connectivity to reach My Oracle Support over the Internet.
    thanks
    -Muneer

    If you are able to browse with Firefox on your server then Internet access works.
    If you have trouble with Ops Center, you should certainly configure the HTTP_PROXY/HTTPS_PROXY variables, and in that case I suggest posting your question in the Ops Center community: Oracle Enterprise Manager Ops Center (MOSC)
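    For command-line tools and Ops Center itself, that typically means exporting the same proxy Firefox uses -- a sketch with a placeholder host and port:
         # Replace proxy.example.com:8080 with your site's actual proxy
         export HTTP_PROXY=http://proxy.example.com:8080
         export HTTPS_PROXY=http://proxy.example.com:8080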

  • Alternate Boot Environment disaster

    Hi - hoping someone can help me with a small disaster I've just had in trying to patch one of my SPARC T4-1 servers running zones using the patch ABE method. The patching appeared to work perfectly well, I ran the following commands:
    sudo su -
    # Halt the non-global zones before creating the alternate BE
    zlogin tdukihstestz01 shutdown -y -g0 -i 0
    zlogin tdukihstestz02 shutdown -y -g0 -i 0
    zlogin tdukbackupz01 shutdown -y -g0 -i 0
    # Create the alternate boot environment
    lucreate -n CPU_2013-01
    # Mount the patch bundle over NFS and apply it to the ABE
    mkdir /tdukwbadm01
    mount -F nfs tdukwbadm01:/export/jumpstart/Patches/Solaris10/10_Recommended_CPU_2013-01 /tdukwbadm01/
    cd /tdukwbadm01/
    ./installpatchset --apply-prereq --s10patchset
    nohup ./installpatchset -B CPU_2013-01 --s10patchset
    # Activate the patched BE and reboot into it
    luactivate CPU_2013-01
    lustatus
    init 6
    However when the server came back up only 1 zone would start - tdukbackupz01.
    The other two zones were in the installed state although they are set to autoboot. The ONLY difference between the zones is that for the two that won't start I had added a "fs" by doing this:
    zonepath: /export/zones/tdukihstestz01
    fs:
    special: /export/zones/tdukihstestz01/logs
    So I actually made /logs a folder under the zonepath - and it appears that after patching the ABE this doesn't exist, so the zone won't start. In fact /export/zones/tdukihstestz01-CPU_2013-01/ is completely empty now. So I can only assume that having /logs inside the zone's file system caused this problem.
    So after a bit of manual intervention I have my zones running again - basically I edited the zones xml files and the index file in /etc/zones and removed the references to CPU_2013-01 which has done the trick.
    However my ZFS looks a bit of a mess. It now looks like this:
    root@tdukunxtest01:~ 503$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    archives 42.8G 504G 42.8G /archives
    rpool 126G 421G 106K /rpool
    rpool/ROOT 5.48G 421G 31K legacy
    rpool/ROOT/CPU_2013-01 5.38G 421G 3.60G /
    rpool/ROOT/CPU_2013-01@CPU_2013-01 592M - 3.60G -
    rpool/ROOT/CPU_2013-01/var 1.21G 421G 1.19G /var
    rpool/ROOT/CPU_2013-01/var@CPU_2013-01 14.4M - 659M -
    rpool/ROOT/Solaris10 96.9M 421G 3.60G /.alt.Solaris10
    rpool/ROOT/Solaris10/var 22.2M 421G 671M /.alt.Solaris10/var
    rpool/dump 32.0G 421G 32.0G -
    rpool/export 17.9G 421G 35K /export
    rpool/export/home 1.01G 31.0G 1.01G /export/home
    rpool/export/zones 16.9G 421G 35K /export/zones
    rpool/export/zones/tdukbackupz01 41.8M 421G 3.14G /export/zones/tdukbackupz01
    rpool/export/zones/tdukbackupz01-Solaris10 3.14G 96.9G 3.13G /export/zones/tdukbackupz01-Solaris10
    rpool/export/zones/tdukbackupz01-Solaris10@CPU_2013-01 1.80M - 3.13G -
    rpool/export/zones/tdukihstestz01 43.3M 421G 10.1G /export/zones/tdukihstestz01
    rpool/export/zones/tdukihstestz01-Solaris10 10.2G 21.8G 10.2G /export/zones/tdukihstestz01-Solaris10
    rpool/export/zones/tdukihstestz01-Solaris10@CPU_2013-01 2.28M - 10.2G -
    rpool/export/zones/tdukihstestz02 35.3M 421G 3.37G /export/zones/tdukihstestz02
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01 1.66M - 3.40G -
    rpool/logs 5.10G 26.9G 5.10G /logs
    rpool/swap 66.0G 423G 64.0G -
    Whereas previously it look more like this:
    NAME USED AVAIL REFER MOUNTPOINT
    archives 42.8G 504G 42.8G /archives
    rpool 126G 421G 106K /rpool
    rpool/ROOT 5.48G 421G 31K legacy
    rpool/dump 32.0G 421G 32.0G -
    rpool/export 17.9G 421G 35K /export
    rpool/export/home 1.01G 31.0G 1.01G /export/home
    rpool/export/zones 16.9G 421G 35K /export/zones
    rpool/export/zones/tdukbackupz01 41.8M 421G 3.14G /export/zones/tdukbackupz01
    rpool/export/zones/tdukihstestz01 43.3M 421G 10.1G /export/zones/tdukihstestz01
    rpool/export/zones/tdukihstestz02 35.3M 421G 3.37G /export/zones/tdukihstestz02
    rpool/logs 5.10G 26.9G 5.10G /logs
    rpool/swap 66.0G 423G 64.0G -
    Does anyone know how to fix my file system mess? And is having a non-global zone's /logs inside the actual zone's zonepath a bad idea? It would appear it is.
    Thanks - Julian.

    Ok, got a little further with this. I now think I can track the start of my problems to defining a filesystem within a non-global zone that was actually inside the zonepath itself - having looked at the Solaris zones documentation, there's nothing to stop you doing this, it's just a bad idea. So I've amended ALL my non-global zones to NOT do this anymore, and checked - the amended definition now looks like the sketch below.
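    For reference, the amended layout keeps the backing directory outside the zonepath -- the path here is illustrative, not my real one:
         zonecfg -z tdukihstestz01
         zonecfg:tdukihstestz01> add fs
         zonecfg:tdukihstestz01:fs> set dir=/logs
         zonecfg:tdukihstestz01:fs> set special=/export/zonelogs/tdukihstestz01
         zonecfg:tdukihstestz01:fs> set type=lofs
         zonecfg:tdukihstestz01:fs> end
         zonecfg:tdukihstestz01> commit
         zonecfg:tdukihstestz01> exit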
    Taking a single non-global zone I can see that ZFS did the following when I ran the lucreate command:
    2013-02-17.07:39:58 zfs snapshot rpool/export/zones/tdukihstestz01@CPU_2013-01
    2013-02-17.07:39:58 zfs clone rpool/export/zones/tdukihstestz01@CPU_2013-01 rpool/export/zones/tdukihstestz01-CPU_2013-01
    2013-02-17.07:39:58 zfs set zoned=off rpool/export/zones/tdukihstestz01-CPU_2013-01
    So a snapshot/clone was taken. There is then a series of zfs canmount=on and zfs canmount=off commands seen against rpool/export/zones/tdukihstestz01-CPU_2013-01 - I'm not entirely sure what these are for; well, I know what the command does, just not why it's doing it.
    The patch process finished at 08:46 and I rebooted the server with an init 6 a little time after this. I then see a few more canmount commands and then:
    2013-02-17.08:49:22 zfs rename rpool/export/zones/tdukihstestz01 rpool/export/zones/tdukihstestz01-Solaris10
    And then a load more canmount commands against rpool/export/zones/tdukihstestz01-Solaris10 but also the following is shown:
    2013-02-17.08:54:31 zfs rename rpool/export/zones/tdukihstestz01-CPU_2013-01 rpool/export/zones/tdukihstestz01
    Now my memory is a little fuzzy over what happened next, but the failure of the non-global zone to boot was because <zonepath>/logs/ did not exist - and this takes me back to my point above about defining a file system within the <zonepath>. When I tried to start the zone tdukihstestz01 it complained that /logs did not exist. It did exist in the zone on the old Boot Environment but NOT the new one. And when I actually created these zones several months ago, I remember I had to manually create these BEFORE I ran the initial sudo zoneadm -z tdukihstestz01 boot command.
    So basically I'm 99.9% sure that I know what I did wrong to cause this for the non-global zones, and I can only assume this has had a knock-on effect on the root environment. To fix a non-global zone I ran the following commands earlier today:
    zfs list |grep tdukihstestz02
    rpool/export/zones/tdukihstestz02 81.1M 421G 3.41G /export/zones/tdukihstestz02 <-- clone
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01 1.66M - 3.40G - <-- snapshot
    zlogin tdukihstestz02
    init 5
    zfs destroy -R rpool/export/zones/tdukihstestz02-Solaris10@CPU_2013-01
    zfs list |grep tdukihstestz02
    rpool/export/zones/tdukihstestz02-Solaris10 3.40G 28.6G 3.40G /export/zones/tdukihstestz02-Solaris10
    zfs rename rpool/export/zones/tdukihstestz02-Solaris10 rpool/export/zones/tdukihstestz02
    zfs set canmount=on rpool/export/zones/tdukihstestz02
    zfs mount rpool/export/zones/tdukihstestz02
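    A note for anyone untangling a similar clone/origin mess: if you want to keep the clone rather than the origin, zfs promote swaps the dependency so the origin can then be destroyed - a sketch using the dataset names above:
         zfs promote rpool/export/zones/tdukihstestz02
         zfs destroy rpool/export/zones/tdukihstestz02-Solaris10
    In my case the clone was the broken one, so reverting to the origin as above was the right call.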
    I also see that the 81.1M of space used in rpool/export/zones/tdukihstestz01 must refer to changes between the original file system and the clone ... I think. These will only have been log files so I'm not too bothered ... again I think, well actually hope.
    So I'm sort of almost sorted; there is the small matter of the root file system - which tbh I won't be so gung-ho in my approach to fixing. But again, if anyone has any ideas on this I'd love to hear them.
    Thanks - Julian.

  • GF 3.1.1 Admin Console Freeze -- A long-running process has been detected

    I installed GlassFish 3.1.1 on both SPARC and Intel servers on the Solaris 10 & 11 platforms. All my servers have the same symptom when I access the admin console: the console freezes and pops up a message "A long-running process has been detected". There is nothing I can do at this point. I've tried accessing from different PCs with different browsers (IE, Firefox 5, Chrome and Safari), but nothing works, including clearing the cache. I also tried uninstalling and installing again (with and without updatetool) many times. Any idea how I can stop the popup message? Am I the only one who has this problem? I'm using JDK 1.7 (GF 3.1.1 supports Java 7). I need help!!!!
    Thanks,
    - Johnny

    The only fix I know of for this issue is to reboot the host.
    If there is another fix someone please let me know as I have users who run into this often.
    Thanks
    Bonnie.Partridge
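    One thing that might be worth trying before a full host reboot is restarting just the DAS - a sketch, assuming the default domain name:
         asadmin restart-domain domain1
    No idea whether it clears this particular popup, but it's quicker than rebooting the box.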

  • Solaris 10 SMA is dumping core. Have you installed net-snmp?

    It's only happening on a few of the many SPARC Solaris 10 servers I support. Oracle no longer supports SMA and has suggested downloading and installing "publicly available 'net-snmp' software". I'm reluctant to download and install open source; the review and approval process in my environment is onerous.
    Since net-snmp is bundled with Solaris 11, I'm wondering if it is possible to install Solaris 11 packages on a Solaris 10 server (net-snmp in particular). If so, perhaps someone can provide tips?

    The package formats are not the same between these two releases of Solaris. You could still try extracting the files and installing them on your Solaris 10 machine, but this is highly inadvisable and of course unsupported. And I'm not sure that it would work.
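    To illustrate the format gap (the package and file names here are only placeholders):
         pkgadd -d net-snmp-5.x-sol10-sparc.pkg    # Solaris 10: SVR4 package format
         pkg install net-snmp                      # Solaris 11: IPS -- no such client exists on Solaris 10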

  • Solaris 10-to-11 Installation/Upgrade

    We are running Solaris 10 SPARC on our servers and we want to upgrade to Solaris 11. Now the Oracle Solaris 11 11/11 Media Pack v1 has multiple disks:
    Oracle Solaris 11 Automated Installer
    Oracle Solaris 11 IPS Repository
    Oracle Solaris 11 Text Installer
    We have read up about these installers, and now we need to know exactly which installer DVD to use to perform this installation.

    There is no upgrade between Solaris 10 and Solaris 11, only a fresh install.
    If you are doing an interactive installation use the Text Installer; otherwise set up an AI server.
    See the installation guide here: http://docs.oracle.com/cd/E23824_01/html/E21798/index.html
    If you have been using JumpStart on Solaris 10, there is a JumpStart-to-AI transition guide here:
    http://docs.oracle.com/cd/E23824_01/html/E21799/index.html
    Darren J Moffat

  • Cluster Without StorEdge

    Hi .,
    I am new to clustering.
    We have two Sun SPARC 490 R servers with Solaris 10 installed,
    which we are planning to configure as a two-node cluster.
    Can anybody tell me whether it is possible to install the two-node cluster without any StorEdge?
    Thanks & Regards
    Suseendran .A

    Please check docs.sun.com as Hartmut suggested. It is all documented there.
    You can install Sun Cluster software for free on as many systems as you like. However, as soon as you need support, i.e. you are running on a production cluster, you will need to buy the licenses and get a support contract. You will also need to have your installation validated and it would be a good idea to go on the training course too.
    However, if you are just a developer, then it is free to use (as you probably don't need any support).
    Tim
    ---
