A1000 on solaris 10 as a JBOD
I want to upgrade a system from Solaris 9 to Solaris 10, but RAID Manager is not supported on Solaris 10.
Is there any way I can use an A1000 as a JBOD on Solaris 10?
So, since the A1000 cannot be used with Solaris 10, I'll stick with Solaris 9, as I have no hardware budget (unless of course there is a way of changing the controller on the A1000 :-) ).
Similar Messages
-
While I've found evidence that people have gotten RM6 (Raid Manager 6.22.1) to work on Solaris 10 which has been upgraded from a RM6-supported version of Solaris, I have not found any evidence of a fresh RM6 install on Solaris 10.
Yes, I know this array was EOLed in 2004, but the thing is still running, so I'd like to use it. I just need to reset the battery age. Does anyone know how to either install RM6 on Solaris 10, or have another way of resetting the battery age on the A1000?
-- M

To answer my own question: the RM6 software does work on Solaris 9 and 10, even though the install of the main package fails because "Solaris 10 is unsupported". After the install you need to remove 2 of the 3 forceloads in /etc/system (leave the sd forceload and remove the other 2) to get rid of some warnings at boot. But despite the driver removal and the incomplete install, everything works fine.
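A minimal sketch of that /etc/system edit, demonstrated on a scratch copy (the rdriver/rdnexus module names are my assumption, inferred from the errors quoted elsewhere in this thread; on the real box, back up and edit /etc/system itself, then reboot):

```shell
# Work on a scratch copy; '*' is the comment character in /etc/system.
SYS=/tmp/system.demo
cat > "$SYS" <<'EOF'
forceload: drv/rdriver
forceload: drv/rdnexus
forceload: drv/sd
EOF
# Comment out the two RM6 driver forceloads, keeping the sd one.
sed -e '/rdriver/s/^/* /' -e '/rdnexus/s/^/* /' "$SYS" > "$SYS.new" &&
  mv "$SYS.new" "$SYS"
cat "$SYS"
```

Solaris sed has no -i flag, hence the write-to-temp-and-move step.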
-- M
-
Can an A1000 be used as a JBOD and, if so, how do you go about it?
A1000 is a hardware RAID. You must use Raid Manager to configure/access the disks "behind" the RAID controller module. If you go to:
http://sunsolve.Sun.COM/handbook_pub/Systems/A1000/docs.html
the 805-4749 PDF shows you how to convert a D1000 (JBOD) into an A1000.
It involves swapping out the whole controller assembly, or, most of the "guts".
I imagine this process could be followed in reverse, assuming you already have the D1000 "guts".
But, if you already have a D1000, then you already have a JBOD.
You would have to purchase/find a 375-0008 Differential SCSI controller (the guts for a D1000) to accomplish this.
http://sunsolve.Sun.COM/handbook_pub/Systems/D1000/components.html
Good Luck,
John -
Hello!
Is the version Solaris 10 supported in RAID manager 6.22?
Thanks

Since that post, I've learned that others have had success using RM6 with an A1000 on Solaris 10. I don't have one to test myself, and I've seen other posts from users reporting problems, but I hope that it does in fact work.
See also this usenet thread:
http://groups.google.com/group/comp.unix.solaris/browse_thread/thread/78cc3db9a19d1fac/43ac969ddb4894e1
Darren -
We have loaded Solaris 10 (3/05) on our server, a Sun Fire 280R with A1000 storage connected via SCSI cable through an add-on SCSI card. Our A1000 has only one controller.
After that we loaded the Sun StorEdge RAID Manager 6.22 software to configure the A1000, and created slices using RAID 5 via the RM6 utility. While rebooting the server we get the following two errors, which keep scrolling on screen for about 10 minutes, though we are able to access the A1000:
1. Warning : mod_load : cannot load module 'rdriver'
2. /kernel/drv/spark9/rdriver:undefined symbol 'dev_get_dev_info'
Is there any solution to these errors? Is there any patch / upgrade / firmware for them?
Would it be recommended to upgrade to Solaris 10 or continue with Solaris 9? We are using this as a database server with Oracle 10g.

FYI, I think Sun discontinued support for the A1000 h/w in Solaris 10... it should be documented.
I only mention this in case you want to have Sun support help you... if it works fine, I generally wouldn't worry. But since it is a production system, I might have second thoughts about using Solaris 10 with unsupported h/w.
My $.02, YMMV.
David Strom -
hello all
i'm trying to install A1000 on E250 Solaris 8.
Yes, i read some threads related to A1000 on this forum.
Some messages say that the A1000 is recognized, but nothing works.
Here are the messages:
iostat -En
sd147 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: Symbios Product: StorEDGE A1000 Revision: 0301 Serial No: 1T02596846
Size: 180.72GB <180718141440 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
but format doesn't show the devices, and neither does lad
probe-scsi-all doesn't show anything related to the A1000
and when i try boot -r, the kernel crashes:
panic[cpu1]/thread=3000730e020: BAD TRAP: type=31 rp=2a100324ff0 addr=300078c5b48 mmu_fsr=0
devfsadm: trap type = 0x31
addr=0x300078c5b48
pid=59, pc=0x1027b914, sp=0x2a100324891, tstate=0x4480001606, context=0x1ff3
i'm on solaris 8, kernel patch 117350-43
There are no devices (controllers) in the system; nvutil terminated.
There are no devices (controllers) in the system.
fwutil failed!
Array Monitor initiated
RDAC daemons initiated
Dec 19 15:54:11 serengheti /usr/lib/osa/bin/arraymon: No RAID devices found to check.
Dec 19 15:54:11 serengheti rdriver: ID[RAIDarray.rdaemon.1001] RDAC Resolution Daemon locked in memory
What can i do?
Thanks in advance for help,

Hello,
iostat -En
sd147 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: Symbios Product: StorEDGE A1000 Revision: 0301 Serial No: 1T02596846
Size: 180.72GB <180718141440 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
...probe-scsi-all don't show something related to A1000
iostat works at the operating-system level, so the A1000 must have been detected at the ok-prompt; otherwise the device tree wouldn't have been built properly.
Even an A1000 without disks should be detected at the ok-prompt: the integrated RAID controller is itself a device. The installed disks only appear in probe-scsi-all after being configured with RAID Manager (the configured LUNs are displayed, not the individual disks).
Please check the cable and the HVD-SCSI terminator at the rear of the A1000.
After the A1000 is detected at the ok-prompt advance to the next step.
Check if the 4 SUNWosa... packages that ssteimann mentioned are installed.
The version of RaidManager must match the installed A1000 firmware. If you can login to SunSolve there is a firmware matrix (InfoDoc 43483).
Maybe you should remove these packages and re-install them.
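A quick way to check for those packages (names expanded from the SUNWosafw/osar/osau/osamn list mentioned above; pkginfo -q exits 0 when a package is installed, so on any other system every package simply reports as missing):

```shell
# Count how many of the RM6-related packages are absent.
missing=0
for p in SUNWosafw SUNWosar SUNWosau SUNWosamn; do
  if pkginfo -q "$p" 2>/dev/null; then
    echo "$p: installed"
  else
    echo "$p: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing: $missing"
```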
Merry Christmas !
Michael
Any updates ?
-
Performance in Sun Java Communication Suite 5 between Solaris 9 and 10
Does anybody know which is the best operating system, performance-wise, on which to deploy the Sun Java Communication Suite?
I have an old Sun Fire 280R with two 750 MHz processors, 3 GB RAM, and an A1000 storage array.
Thanks a lot,
Andres.

AYacopino wrote:
Somebody knows which is the best Operation System to deploy the Sun Java Communication Suite? in performance terms?

Solaris 10 by far, for several reasons:
-> improved overall performance (kernel/networking level)
-> ZFS for storage
-> dtrace for debugging
-> zones for the various components of the communication suite (http://www.sun.com/blueprints/0806/819-7663.html)
I have and old Sun Fire 280R with two 750 Mhz Processors, 3 GB Ram, and an A1000 Storage.

I'm not sure how many users you are planning to provide access to, but your RAM is going to be the bottleneck.
Regards,
Shane. -
Sun Ultra 10 with SunStorEDGE A1000/D1000 unable to setup
I realise this covers old ground, but I have not been able to trace a suitable resolution and would be grateful for assistance.
I have a Sun Ultra 10 with 256 MB RAM and a 440 MHz CPU. It has a PCI Adaptec 2944UW HVD SCSI interface card. The storage system is a tray of eight 72 GB SCSI disks housed in a D1000 case. NetBSD reported it as a StorEdge A1000 at sd0, having 16 targets and 8 LUNs. However, NetBSD did not have a readily available controller driver to set up the RAID system, so I installed an alternative. I presume the case is a reused old case housing an A1000 unit.
I tried to format the drives directly, but they are not listed as devices (such as ct1sd1 etc.), so this is not possible.
I installed Sun Solaris 10 but could not communicate with the array; it reported an inability to use rdriver and rdnexus. After reading a number of reports that the driver hooks had been removed from the kernel in Solaris 10, I installed Solaris 9.
On both occasions I installed a number of RAID Manager 6.1.1 control packs, namely SUNWosafw, osar, osau, osamn and vtsse direct from my CD here.
I commanded dr_hotadd.sh to take up the challenge but it failed to communicate.
I added a number of Rdac amendments to rparams in /etc/osa but this did not resolve the issue.
I have tried to call probe-scsi-all after doing a reset-all from the OBP but this fails to report anything. I presume this is because the unit is connected by the interface card rather than a direct scsi disk.
dmesg | grep scsi reports:
unknown scsi sd0 at uata0 target 2 lun 0
I would be very grateful for guidance on resolving this connection so that I can create a suitable filestore.
Thanks
cblackfoot wrote:
I realise this covers old ground but I have not been able to trace a suitable resolution and would be grateful for assistance.
I have a Sun Ultra 10 with 256MB RAM and a 440MHz cpu. It has a pci Adaptec 2944UW HVD SCSI interface card. The storage system is a tray of eight scsi 72GB disks housed in a D1000 case. NetBSD reported it as an A1000 SunStorEDGE at sd0 having 16 targets and 8 luns. However NetBSD did not have a readily available controller to setup the raid system so I installed an alternative. I presume the case is a reused old case housing an A1000 unit.

If it's really an A1000, you won't have access to the drives; you'll only see the LUNs exposed by the RAID controller. Do you have a dial on the rear with a SCSI address? That will be the address the A1000 responds on. In addition, the A1000 has only one pair of SCSI ports. The D1000 has two pairs and doesn't have the SCSI address dial (because each of the disks in the chassis responds on its own address); instead it has DIP switches to change the addressing behavior.
I tried to format the drives directly but they are not lists as devices such as ct1sd1 etc. and so this is not possible.
I have installed Sun Solaris 10 but could not communicate and reported inability to use rdriver and rdnexus and after reading a number of reports that driver hooks had been removed from the kernel in Solaris 10 I have installed Solaris 9.

That's fine; you can ignore those. Neither the A1000 nor the D1000 requires those drivers. You should still be able to use 'rm6' or the CLI tools to interact with an A1000.
On both occasions I installed a number of RAID Manager 6.1.1 control packs, namely SUNWosafw, osar, osau, osamn and vtsse direct from my CD here.
I commanded dr_hotadd.sh to take up the challenge but it failed to communicate.
I added a number of Rdac amendments to rparams in /etc/osa but this did not resolve the issue.
I have tried to call probe-scsi-all after doing a reset-all from the OBP but this fails to report anything. I presume this is because the unit is connected by the interface card rather than a direct scsi disk.

If the controller doesn't support the OBP environment, then yes, you won't see anything there.
dmesg | grep scsi reports:
unknown scsi sd0 at uata0 target 2 lun 0

So that might be a single LUN from an A1000 controller. If the selector switch on the back is set to '2', that makes it even more likely.
Darren -
Hello,
Just live-upgraded Solaris 8 to the latest Solaris 10 on a V440/SPARC. After luactivate, I am trying to boot into Solaris 10, but the system is throwing the messages below on the console:
Configuring devices.
WARNING: /pci@1f,700000/scsi@2,1 (mpt1):
hard reset failed
WARNING: /pci@1f,700000/scsi@2,1 (mpt1):
mpt_restart_ioc failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
hard reset failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
mpt restart ioc failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
WARNING: /pci@1f,700000/scsi@2 (mpt0):
mpt restart ioc failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
WARNING: /pci@1f,700000/scsi@2 (mpt0):
mpt restart ioc failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
WARNING: /pci@1f,700000/scsi@2 (mpt0):
mpt restart ioc failed
WARNING: /pci@1f,700000/scsi@2 (mpt0):
firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
And it continues to do so. Any idea how this can be fixed?

OK, as I was booting into Solaris 8, it also gave me mpt errors, and I suddenly remembered a change I had made recently without rebooting Solaris 8 afterwards: I had connected the A1000 to a SCSI port on the host (not an external SCSI adapter). I removed the A1000 cable from the SCSI port and Solaris 8 came up.
I thought I'd try booting Solaris 10 again; the earlier errors are now gone, but I see the following warnings:
Loading smf(5) service descriptions: 1/187
WARNING: svccfg import /var/svc/manifest/application/management/wbem.xml failed
2/187
WARNING: svccfg import /var/svc/manifest/system/metainit.xml failed
172/187
WARNING: svccfg import /var/svc/manifest/system/power.xml failed
173/187
WARNING: svccfg import /var/svc/manifest/system/postrun.xml failed
174/187
WARNING: svccfg import /var/svc/manifest/system/resource-mgmt.xml failed
175/187
WARNING: svccfg import /var/svc/manifest/system/zones.xml failed
176/187
WARNING: svccfg import /var/svc/manifest/system/poold.xml failed
177/187
WARNING: svccfg import /var/svc/manifest/system/pools.xml failed
178/187
WARNING: svccfg import /var/svc/manifest/system/picl.xml failed
179/187
WARNING: svccfg import /var/svc/manifest/system/installupdates.xml failed
180/187
WARNING: svccfg import /var/svc/manifest/system/labeld.xml failed
181/187
WARNING: svccfg import /var/svc/manifest/system/tsol-zones.xml failed
182/187
WARNING: svccfg import /var/svc/manifest/system/iscsi_target.xml failed
183/187
WARNING: svccfg import /var/svc/manifest/system/cvc.xml failed
184/187
WARNING: svccfg import /var/svc/manifest/system/rcap.xml failed
185/187
WARNING: svccfg import /var/svc/manifest/system/fpsd.xml failed
186/187
WARNING: svccfg import /var/svc/manifest/system/br.xml failed
187/187
WARNING: svccfg import /var/svc/manifest/system/sar.xml failed
svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
WARNING: svccfg apply /var/svc/profile/generic.xml failed
WARNING: svccfg apply /var/svc/profile/platform.xml failed
Requesting System Maintenance Mode
(See /lib/svc/share/README for more information.)
Console login service(s) cannot run
Reading ZFS config: *
Root password for system maintenance (control-d to bypass):done.
Login incorrect
Root password for system maintenance (control-d to bypass):
I can connect the A1000 to another machine or an external card and deal with it later; however, how do I get the system out of this state? -
Solaris 10 on an HP DL180 G6 server with P410 Raid Controller.
This is an informational post to try and save people some of the pain that my colleague Joe and I have just gone through to get Solaris 10 installed!
The DL180 G6 is an interesting box. It uses the latest Nehalem processors, and can support up to 12 x 3.5" old school high density 15k RPM drives. This is pretty unique in the industry. However, here is what you need to know -
- The P410 raid controller must have memory if you want to present all 12 drives down as JBOD to solaris to be managed via ZFS. i.e. order the P410/256. If you order the P410/ZM you will only be able to create a maximum of 2 arrays which will be seen as disks by Solaris.
- The P410 requires a driver that doesn't exist on the DL180 G6 driver page at HP. If you look under the DL585 G6 page you will find the solaris driver you need. [HP DL585 G6 storage driver page|http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=15351&prodSeriesId=3949980&prodNameId=3949981&swEnvOID=2023&swLang=13&mode=2&taskId=135&swItem=MTX-f490e7c0f736457c92229f7e8c]
- The network card is a NC362i. This is Intel-based and covered by this Solaris patch - [patch ID = 138175-02|http://sunsolve.sun.com/search/document.do?assetkey=1-21-138175-02-1]. The latest Solaris 10 release (u7) doesn't contain the drivers for this card, so you will have to add this patch separately. Once it is added and a reconfiguration reboot has been done, the device can be plumbed as igb0.
Hopefully this helps someone out!
Hi, try this out - it helped me some time ago:
http://forum.java.sun.com/thread.jspa?threadID=5081926
HTH, -
Installing PostgreSQL on Solaris 10
I have downloaded the postgresql package from
www.postgresql.org/download/bittorent
I have unzipped the files but don't know how to continue with the installation.

Here is some documentation to get you started... it's available online.
Author: Chris Drawater
Date: May 2005
Version: 1.2
PostgreSQL 8.0.02 for J2EE applications on Solaris 10
Abstract
Advance planning enables PostgreSQL 8 and its associated JDBC driver to be quickly deployed in a
basic but resilient and IO efficient manner.
Minimal change is required to switch JDBC applications from Oracle to PostgreSQL.
Document Status
This document is Copyright © 2005 by Chris Drawater.
This document is freely distributable under the license terms of the GNU Free Documentation License
(http://www.gnu.org/copyleft/fdl.html). It is provided for educational purposes only and is NOT
supported.
Introduction
This paper documents how to deploy PostgreSQL 8 and its associated JDBC driver in a basic but both
resilient and IO efficient manner. Guidance for switching from Oracle to PostgreSQL is also provided.
It is based upon experience with the following configurations =>
PostgreSQL 8.0.2 on Solaris 10
PostgreSQL JDBC driver on Windows 2000
using the PostgreSQL distributions =>
postgresql-base-8.0.2.tar.gz
postgresql-8.0-311.jdbc3.jar
© Chris Drawater, 2005
PostgreSQL 8.0.2 on Solaris, v1.2
p1/10
Background for Oracle DBAs
For DBAs coming from an Oracle background, PostgreSQL has a number of familiar concepts including
Checkpoints
Tablespaces
MVCC concurrency model
Write ahead log (WAL)+ PITR
Background DB writer
Statistics based optimizer
Recovery = Backup + archived WALs + current WALs
However, whereas 1 Oracle instance (set of processes) services 1 physical database, PostgreSQL differs in that:
1 PostgreSQL 'cluster' services n * physical DBs
1 cluster has tablespaces (accessible to all DBs)
1 cluster = 1 PostgreSQL instance = set of server processes etc (for all DBs) + 1 tuning config + 1 WAL
User accts are cluster-wide by default
There is no undo or BI file - so to support MVCC, the 'consistent read' data is held in the tables themselves and, once obsolete, needs to be cleansed out using the 'vacuum' utility.
The basic PostgreSQL deployment guidelines for Oracle aware DBAs are to =>
Create only 1 DB per cluster
Have 1 superuser per cluster
Let only the superuser create the database
Have one user to create/own the DB objects + n* endusers with appropriate read/write access
Use only ANSI SQL datatypes and DDL.
Wherever possible avoid DB specific SQL extensions to ensure cross-database portability
IO distribution & disc layouts
It is far better to start out with good disc layouts than to retro-fix a production database.
As with any DBMS, for resilience the recovery components (eg. backups, WAL, archived WAL logs) should be kept on devices separate from the actual data.
So the basic rules for resilience are as follows.
For non disc array or JBOD systems =>
keep recovery components separate from data on dedicated discs etc
keep WAL and data on separate disc controllers
mirror WAL across discs ( preferably across controllers) for protection against WAL spindle loss
For SAN based disc arrays (eg HP XP12000) =>
keep recovery components separate from data on dedicated LUNs etc
use Host Adapter Multipathing drivers (such as mpxio) with 2 or more HBAs for access to SAN .
Deploy application data on mirrored/striped (ie RAID 1+0) or write-cache fronted RAID 5 storage.
The WAL log IO should be configured to be osync for resilience (see basic tuning in later section).
Ensure that every PostgreSQL component on disc is resilient (duplexed) !
Recovery can be very stressful...
Moving onto IO performance, it is worth noting that WAL IO and general data IO access have different IO
characteristics.
WAL = sequential access (write mostly)
Data = sequential scan, random access write/read
The basic rules for good IO performance =>
use tablespaces to distribute data, and thus IO, across spindles or disc array LUNs
keep WAL on dedicated spindles/LUNs (mirror/stripe in preference to RAID 5)
keep WAL and arch WAL on separate spindles to reduce IO on the WAL spindles
RAID or stripe data across discs/LUNs in 1 Mb chunks/units if unsure as to what chunk size to use.
For manageability, keep the software distr and binaries separate from the database objects.
Likewise, keep the system catalogs and non-application data separate from the application specific data.
5 distinct storage requirements can be identified =>
Software tree (Binaries, Source, distr)
Shared PG sys data
WAL logs
Arch WAL logs
Application data
For the purposes of this document, the following minimal set of FS are suggested =>
/opt/postgresql/8.0.2            # default 4 Gb for software tree
/var/opt/postgresql              # default 100 Mb
/var/opt/postgresql/CLUST/sys    # default size 1 Gb for shared sys data
/var/opt/postgresql/CLUST/wal    # WAL location, mirrored/striped
/var/opt/postgresql/CLUST/archwal # archived WALs
/var/opt/postgresql/CLUST/data   # application data + DB sys catalogs, RAID 5
where CLUST is your chosen name for the Postgres DB cluster
For enhanced IO distribution, a number of .../data FS (eg data01, data02 etc) could be deployed.
Pre-requisites !
The GNU compiler and make software utilities (available on the Solaris 10 installation CDs) =>
gcc (compiler) ( $ gcc --version => 3.4.3 )
gmake (GNU make)
are required and should be found in /usr/sfw/bin
Create the Unix acct postgres in group dba, with a home directory of say /export/home/postgresql, using the $ useradd utility, or hack /etc/group then /etc/passwd, then run pwconv and then passwd postgres
Assuming the following FS have been created =>
/opt/postgresql/8.0.2         # default 4 Gb for the PostgreSQL software tree
/var/opt/postgresql           # default 100 Mb
create directories
/opt/postgresql/8.0.2/source  # source code
/opt/postgresql/8.0.2/distr   # downloaded distribution
all owned by user postgres:dba with 700 permissions
To ensure, there are enough IPC resources to use PostgreSQL, edit /etc/system and add the following lines
=>
set shmsys:shminfo_shmmax=1300000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=20
set semsys:seminfo_semmns=800
set semsys:seminfo_semmni=70
set semsys:seminfo_semmsl=270 # defaults to 25
set rlim_fd_cur=1024          # per process file descriptor soft limit
set rlim_fd_max=4096          # per process file descriptor hard limit
Then on the console (logged in as root) =>
$ init 0
{a} ok boot -r
Download Source
Download the source codes from http://www.postgresql.org (and if downloaded via Windows, remember
to ftp in binary mode) =>
Distributions often available include =>
postgresql-XXX.tar.gz => full source distribution.
postgresql-base-XXX.tar.gz => Server and the essential client interfaces
postgresql-opt-XXX.tar.gz => C++, JDBC, ODBC, Perl, Python, and Tcl interfaces, as well as multibyte
support
postgresql-docs-XXX.tar.gz => html docs
postgresql-test-XXX.tar.gz => regression test
For a working, basic PostgreSQL installation supporting JDBC applications, simply use the 'base' distribution.
Create Binaries
Unpack Source =>
$ cd /opt/postgresql/8.0.2/distr
$ gunzip postgresql-base-8.0.2.tar.gz
$ cd /opt/postgresql/8.0.2/source
$ tar -xvof /opt/postgresql/8.0.2/distr/postgresql-base-8.0.2.tar
Set Unix environment =>
TMPDIR=/tmp
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
export PATH TMPDIR
Configure the build options =>
$ cd /opt/postgresql/8.0.2/source/postgresql-8.0.2
$ ./configure --prefix=/opt/postgresql/8.0.2 --with-pgport=5432 --without-readline CC=/usr/sfw/bin/gcc
Note => the --enable-thread-safety option failed
And build =>
$ gmake
$ gmake install
On an Ultra 5 workstation, this gives 32 bit executables
Setup Unix environment
Add to environment =>
LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
PATH=/opt/postgresql/8.0.2/bin:$PATH
export PATH LD_LIBRARY_PATH
Create Database (Catalog) Cluster
Add to Unix environment =>
PGDATA=/var/opt/postgresql/CLUST/sys  # PG sys data, used by all DBs
export PGDATA
Assuming the following FS has been created =>
/var/opt/postgresql/CLUST/sys         # default size 1 Gb
where CLUST is your chosen name for the Postgres DB cluster,
initialize the database storage area and create the shared catalogs and template database template1 =>
$ initdb -E UNICODE -A password -W
# DBs default to the Unicode char set, users have basic passwords, prompt for the superuser password
Startup, Shutdown and basic tuning of servers
Check servers start/shutdown =>
$ pg_ctl start -l /tmp/logfile
$ pg_ctl stop
Next, tune the PostgreSQL instance by editing the configuration file $PGDATA/postgresql.conf .
First take a safety copy =>
$ cd $PGDATA
$ cp postgresql.conf postgresql.conf.orig
then make the following (or similar changes) to postgresql.conf =>
# listener
listen_addresses = 'localhost'
port = 5432
# data buffer cache
shared_buffers = 10000
# each 8Kb so depends upon memory available
#checkpoints
checkpoint_segments = 3
# default
checkpoint_timeout = 300
# default
checkpoint_warning = 30
# default - logs warning if ckpt interval < 30s
# log related
fsync = true
# resilience
wal_sync_method = open_sync
# resilience
commit_delay = 10
# group commit if works
archive_command = 'cp "%p" /var/opt/postgresql/CLUST/archwal/"%f"'
# server error log
log_line_prefix = '%t :'
# timestamp
log_min_duration_statement = 1000
# log any SQL taking more than 1000ms
log_min_messages = info
#transaction/locks
default_transaction_isolation = 'read committed'
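As a rough sanity check on memory use: shared_buffers above is counted in 8 Kb pages, so 10000 buffers works out to roughly 78 Mb of data buffer cache:

```shell
# 10000 buffers * 8 Kb each, expressed in Mb (integer arithmetic)
echo $((10000 * 8 / 1024))   # prints 78
```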
Restart the servers =>
$ pg_ctl start -l /tmp/logfile
Create the Database
This requires the FS =>
/var/opt/postgresql/CLUST/wal     # WAL location
/var/opt/postgresql/CLUST/archwal # archived WALs
/var/opt/postgresql/CLUST/data    # application data + DB sys catalogs
plus maybe also =>
/var/opt/postgresql/CLUST/backup  # optional, for data and config files etc as a staging area for tape
Create the cluster-wide tablespaces (in this example, a single tablespace named 'appdata') =>
$ psql template1
template1=# CREATE TABLESPACE appdata LOCATION '/var/opt/postgresql/CLUST/data';
template1=# SELECT spcname FROM pg_tablespace;
  spcname
------------
 pg_default
 pg_global
 appdata
(3 rows)
and add to the server config =>
default_tablespace = 'appdata'
Next, create the database itself (eg name = db9, unicode char set) =>
$ createdb -D appdata -E UNICODE -e db9  # appdata = default TABLESPACE
$ createlang -d db9 plpgsql              # install the 'Oracle PL/SQL like' language
WAL logs are stored in the directory pg_xlog under the data directory. Shut the server down & move the
directory pg_xlog to /var/opt/postgresql/CLUST/wal and create a symbolic link from the original location in
the main data directory to the new path.
$ pg_ctl stop
$ cd $PGDATA
$ mv pg_xlog /var/opt/postgresql/CLUST/wal
$ ls /var/opt/postgresql/CLUST/wal
$ ln -s /var/opt/postgresql/CLUST/wal/pg_xlog $PGDATA/pg_xlog
# soft link as across FS
$ pg_ctl start -l /tmp/logfile
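The relocate-and-symlink pattern above can be rehearsed on throwaway directories before touching the live $PGDATA (the /tmp paths here are demo stand-ins, not the real locations):

```shell
# Rehearse the pg_xlog move on scratch directories.
DATA=/tmp/pgdata.demo
WAL=/tmp/wal.demo
rm -rf "$DATA" "$WAL"
mkdir -p "$DATA/pg_xlog" "$WAL"
mv "$DATA/pg_xlog" "$WAL"/
ln -s "$WAL/pg_xlog" "$DATA/pg_xlog"   # soft link, as it crosses FS
ls -ld "$DATA/pg_xlog"
```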
Assuming all is now working OK, shut down PostgreSQL and back up all the PostgreSQL-related FS above... just in case!
User Accounts
Create 1 * power user to create/own/control the tables (using psql) =>
$ psql template1
create user cxd with password 'abc';
grant create on tablespace appdata to cxd;
Do not create any more superusers or users that can create databases!
Now create n * enduser accts to work against the data =>
$ psql template1
CREATE GROUP endusers;
create user enduser1 with password 'xyz';
ALTER GROUP endusers ADD USER enduser1;
$ psql db9 cxd
grant select on <table> to group endusers;
JDBC driver
A pure Java (Type 4) JDBC driver implementation can be downloaded from
http://jdbc.postgresql.org/
Assuming the use of the SDK 1.4 or 1.5, download
postgresql-8.0-311.jdbc3.jar
and include this in your application CLASSPATH.
(If moving JAR files between different hardware types, always ftp in BIN mode).
Configure PostgreSQL to accept JDBC Connections
To allow the postmaster listener to accept TCP/IP connections from client nodes running the JDBC
applications, edit the server configuration file and change
listen_addresses = '*'
# * = any IP interface
Alternatively, this parameter can specify only selected IP interfaces ( see documentation).
In addition, the client authentication file will need to be edited to allow access to our database server.
First take a backup of the file =>
$ cp pg_hba.conf pg_hba.conf.orig
Add the following line =>
host    db9    cxd    0.0.0.0/0    password
where, for this example: database db9, user cxd, auth method password
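The same edit can be rehearsed on a scratch copy first (db9/cxd are the example names from above; note that 0.0.0.0/0 opens access from any address, so narrow it for production):

```shell
# Shown on a scratch file; the real pg_hba.conf lives in $PGDATA.
HBA=/tmp/pg_hba.demo
: > "$HBA"                             # start from an empty demo file
cp "$HBA" "$HBA.orig"                  # safety copy, as in the text
printf 'host\tdb9\tcxd\t0.0.0.0/0\tpassword\n' >> "$HBA"
grep -c '^host' "$HBA"                 # prints 1
```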
Switching JDBC applications from Oracle to PostgreSQL
The URL used to connect to the PostgreSQL server should be of the form
jdbc:postgresql://host:port/database
If used, replace the line (used to load the JDBC driver)
Class.forName ("oracle.jdbc.driver.OracleDriver");
with
Class.forName("org.postgresql.Driver");
Remove any Oracle JDBC extensions, such as
((OracleConnection)con2).setDefaultRowPrefetch(50);
Instead, the row pre-fetch must be specified at an individual Statement level =>
eg.
PreparedStatement pi = con1.prepareStatement("select ...");
pi.setFetchSize(50);
If not set, the default fetch size = 0;
Likewise, any non ANSI SQL extensions will need changing.
For example sequence numbers
Oracle => online_id.nextval
should be replaced by
PostgreSQL => nextval('online_id')
Oracle 'hints' embedded within SQL statements are ignored by PostgreSQL.
Now test your application!
Concluding Remarks
At this stage, you should now have a working PostgreSQL database fronted by a JDBC-based application, and the foundations will have been laid for:
A reasonable level of resilience (recoverability)
A good starting IO distribution
The next step is to tune the system under load... and that's another doc...
Chris Drawater has been working with RDBMSs since 1987 and the JDBC API since late 1996, and can be contacted at [email protected] or [email protected].
Appendix 1 - Example .profile
TMPDIR=/tmp
export TMPDIR
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
export PATH
# PostgreSQL 802 runtime
LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
PATH=/opt/postgresql/8.0.2/bin:$PATH
export PATH LD_LIBRARY_PATH
PGDATA=/var/opt/postgresql/CLUST/sys
export PGDATA
-
Solaris 9 and WebSphere Commerce Suite 5.6
We're upgrading our servers from Solaris 8 to 9. Does anyone know of any issues that may pop up with our WebSphere Commerce Suite 5.6, service pack 5 during or after the upgrade?
AYacopino wrote:
Does anybody know which is the best operating system, in performance terms, on which to deploy the Sun Java Communication Suite?
Solaris 10 by far, for several reasons:
-> improved overall performance (kernel/networking level)
-> ZFS for storage
-> dtrace for debugging
-> zones for the various components of the communication suite (http://www.sun.com/blueprints/0806/819-7663.html)
I have an old Sun Fire 280R with two 750 MHz processors, 3 GB RAM, and an A1000 storage array.
I'm not sure how many users you are planning to provide access to, but your RAM is going to be the bottleneck.
Regards,
Shane. -
StorEdge A1000 + PC (with Mylex DAC960 RAID controller)
PC = PIII / 800EB / 256MB / 20GB
RAID card = Mylex DAC960
StorEdge = A1000 with 3 x 18GB drives loaded.
When the Mylex card scans for new SCSI devices, the StorEdge A1000 cannot be detected/probed. I tried this with an HP SCSI box and was able to probe its external devices, but for the Sun StorEdge A1000 it was not successful.
Any advice on how I can probe/detect this external device (StorEdge A1000)?
I am also planning to install Solaris 10 (x86) and hopefully Solaris could detect the external storage (series of drives) at the application level. (Fingers crossed.)
Over to you, gurus.
Petalio
Can you describe the features and specifications of that SCSI card?
The A1000 array has a High-Voltage-Differential interface.
(See its link in the Sun System Handbook)
HVD is not common in the PeeCee universe.
The array already has a RAID controller in its chassis,
and will not work with a RAID controller SCSI card.
Any attempt to use an LVD card or a single-ended card will just not work, either.
It would be invisible to the SCSI chain.
... then, additionally, you're going to need some sort of RAID control software
to administer the A1000 and its internal RAID controller.
if you do eventually get a compatible HBA, you also need to be aware
that functional support for the array was specifically dropped from Solaris 10.
You'd need to run Sol8 or Sol9 with RM6 software, and I cannot remember
whether RM6 was ever ported to x86 Solaris.
I fear you're just going to be out of luck,
and may need to get rid of the array (e.g. Ebay ?). -
I have an A1000 and have been all over the Sun and SunSolve sites looking for Raid Manager 6.2 or higher, but the download link redirects me to a new storage hardware page. Does anyone have any idea where I can get this software for Solaris 9?
http://javashoplm.sun.com/ECom/docs/Welcome.jsp?StoreId=8&PartDetailId=RaidMgr-6.22-SP-G-F&TransactionId=Try
-
SUN Cluster 2.2 behaviour when removing SCSI from A1000
We're running a SUN Cluster 2.2 , 2 node cluster on a SUN E5500 running Solaris 5.8 and Veritas VM.
A1000 boxes are cross-connected over SCSI to each node, and a D1000 is dual-attached per node.
What cluster behaviour would one expect if one node crashes, the crashed node is powered off, and the SCSI cable from the A1000 to the surviving node is removed? Would the surviving node continue to run properly?
Thanks
There is a potential that the surviving node could panic when termination is lost as the cable is disconnected.