Solaris 10 upgrade with mirrored OS (meta-device) partition

I am going to upgrade my host from Solaris 10 5/08 (U5) to Solaris 10 10/09 (U8). The installation media is a CD-ROM.
On my host, I used Solaris Volume Manager (SVM) to mirror /, /var, and swap.
My question is: before the upgrade, is it enough to break one side of each mirror, or do I need to convert the file systems back to the underlying physical devices?
How should I proceed?
Thanks.

chewr wrote:
Firstly, thanks for your answer. I hope it will work as you said.
But I am thinking that the OS upgrade process will boot from the CD-ROM, and that it will only look for physical devices to do the upgrade. That's why I am concerned about whether the OS booted from the CD-ROM will be able to see the metadevices.

Solaris 10 boot media should be able to see and recognize the metadevices. Previous versions could not.

On the other hand, if I need to remove all metadevices for the upgrade, will the data be safe and intact on the physical devices while the OS is booted from the CD-ROM?

Safe? If you do reconfigure the system not to use any SVM devices for the OS, then yes, the data is still there. I'm not sure what you're asking, or how the data might be at risk.
Darren
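
For reference, here is a minimal sketch of the "break one side of the mirrors" approach. The metadevice names (d10/d12 for /, d20/d22 for /var, d30/d32 for swap) are placeholders; check your own layout with "metastat -p" before detaching anything.

# record the current SVM layout so the mirrors can be rebuilt if needed
metastat -p > /var/tmp/metastat-p.before-upgrade

# detach the second submirror of each mirror; the detached halves keep
# an untouched pre-upgrade copy of /, /var and swap
metadetach d10 d12
metadetach d20 d22
metadetach d30 d32

# after the upgrade has been verified, reattach and let the resync run
metattach d10 d12
metattach d20 d22
metattach d30 d32

Since the Solaris 10 media can see the metadevices, the upgrade itself runs against the still-active mirror halves; the detached submirrors are only a fall-back copy.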

Similar Messages

  • Disk corruption in Solaris 10 with mirrors

    I have had 5 different Sun SPARC servers in the past couple of months come up with badly corrupted file systems. They are all mirrored metadisks. One thing I have looked into is the UFS logging option, but from what I read it doesn't need to be specified in /etc/vfstab because it is on by default in Solaris 10. Is this true?
    Also, is there a problem with shutting down or rebooting the OS with the init commands? Should I be using shutdown instead? Does init 0, 5 or 6 cleanly shut down the system?
    Thanks

    Hello,
    init 0 is a nice command which gracefully shuts down all the services on the system, including the OS, but the machine stays powered on unless you issue a power-off from the RSC.
    In the case of mirrored metadisks, correct vfstab entries can prevent a lot of file-system-related errors, because the "device to fsck" and fsck pass fields force a file system check of those metadevices on every reboot.
    Please have a look at the vfstab of my test server, where I have mirrored disks.
    bash-3.00# more /etc/vfstab
    #device           device            mount             FS     fsck  mount    mount
    #to mount         to fsck           point             type   pass  at boot  options
    fd                -                 /dev/fd           fd     -     no       -
    /proc             -                 /proc             proc   -     no       -
    /dev/md/dsk/d1    -                 -                 swap   -     no       -
    /dev/md/dsk/d0    /dev/md/rdsk/d0   /                 ufs    1     no       -
    /dev/md/dsk/d2    /dev/md/rdsk/d2   /var              ufs    1     no       -
    /devices          -                 /devices          devfs  -     no       -
    ctfs              -                 /system/contract  ctfs   -     no       -
    objfs             -                 /system/object    objfs  -     no       -
    swap              -                 /tmp              tmpfs  -     yes      -
    bash-3.00#
    Please run "iostat -En" and check for errors so that you get a better understanding.
    Thanks,
    Sal.
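
    As for the UFS logging question: logging is the default for UFS in Solaris 10, so nothing needs to be added to /etc/vfstab unless you want to turn it off. A quick way to confirm it on a running system:

    # "mount" lists the active options for each file system; the string
    # "logging" in the option list means UFS logging is enabled
    mount | grep logging
    # or check one file system explicitly, e.g. root
    mount | grep '^/ on'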

  • Solaris upgrade with Veritas Volume Manager

    I have to upgrade Solaris from 7 to 8 on a server which has Veritas Volume Manager (VxVM). I have already found the necessary information for the upgrade itself, but I have not found any answer to my specific problem.
    It is quite possible that after VxVM was installed, some of the file systems like /, /opt, /usr or /var were increased, or that additional file systems were added on the rootdisk. This will prevent upgrade_start from running correctly. One way to overcome this problem is to boot the machine in single-user mode, create all the root file systems as standard UFS file systems on a free disk of the same size as the VxVM file systems, and then use ufsdump to copy all the data over (see the sketch after this thread). When that is done, install a bootblock on the new disk and boot the machine from this disk. Then start the Solaris upgrade. The problem is that if no free disk is available, I clearly have a problem.
    Does anyone have experience with this, or a better plan?
    Should I completely de-install VxVM, or is it enough to convert all root file systems back to plain UFS file systems?
    Please reply to me individually at [email protected]
    Thanks,
    Bruno

    There are more things that you need to do.
    Read the Veritas install guide -- it has a pretty good section on what needs to be done.
    http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/VERITAS_Volume_Manager/
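
    For what it's worth, a rough sketch of the ufsdump-based copy described in the question, assuming a spare disk c1t1d0 whose slices have already been sized to match (all device names here are placeholders):

    # create a plain UFS file system on the spare root slice and copy / onto it
    newfs /dev/rdsk/c1t1d0s0
    mount /dev/dsk/c1t1d0s0 /mnt
    ufsdump 0f - /dev/rdsk/c0t0d0s0 | (cd /mnt && ufsrestore rf -)
    # repeat for /usr, /var and /opt on their own slices

    # make the copy bootable
    installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0

    # adjust /mnt/etc/vfstab (and /mnt/etc/system, to drop the VxVM rootdev
    # entries) so the copy boots from plain slices, then boot from it and
    # run the Solaris upgrade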

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my two servers. Both of them have non-global zones (NGZ) installed, and those NGZ are on a different zpool, not on the rpool, and are also on an external disk.
    I have installed all the latest patches required for LU to work properly, but when I perform an <lucreate> I start having problems. (new_s10BE is the new BE I am creating.)
    On my 1st Server:
    I have a global zone and 1 NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd Server:
    I have a global zone and 10 NGZ. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
    (This happens for all of the running NGZ.)
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone  to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out ASAP!

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). While the "critical" partitions should be copied to a second system disk, the mirrored partitions should stay shared.
    Here is a script with lucreate.
    #!/bin/sh
    # create a new boot environment on disk c2t0d0, logging to ${Logdir}
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
    echo "${Logdir} does not exist"
    exit
    fi
    /usr/sbin/lucreate \
    -l ${Logdir}/$0.log \
    -o ${Logdir}/$0.error \
    -m /:/dev/dsk/c2t0d0s0:ufs \
    -m /var:/dev/dsk/c2t0d0s3:ufs \
    -m /opt:/dev/dsk/c2t0d0s4:ufs \
    -m -:/dev/dsk/c2t0d0s1:swap \
    -n disk0
    And here is the output
    root@ahbgbld800x:/usr/local/LUscripts >./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped
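
    Regarding the "dataset is on top of ZFS pool, unable to clone" error from the first post: lucreate wants each zone root to live in its own ZFS dataset rather than directly in the pool's top-level dataset. A hedged sketch of one way to get there, using the zone1 name from the post; the pool/dataset names and the need to halt the zone first are assumptions to verify against your setup:

    # give the zone its own dataset and move the zone root onto it
    zoneadm -z zone1 halt
    zfs create zoneroots/zone1root        # hypothetical dataset name
    zoneadm -z zone1 move /zoneroots/zone1root
    zoneadm -z zone1 boot

    # once every NGZ has a dedicated dataset, retry the lucreate
    lucreate -n new_s10BE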

  • Upgrade to Solaris 8 with Oracle 8.0.6

    Hi forum,
    on a SPARC server with Oracle 8.0.6 we want to upgrade from Solaris 7 to Solaris 8. Do we need to upgrade Oracle first, or is Oracle 8.0.6 supported on Solaris 8?
    Thanks
    Hans-Peter

    > Our organisation has been running SAP R/3 4.6C on Solaris 8 with Oracle 8i database. We are considering an upgrade to SAP ECC 6.0.
    Oh... you are aware of the fact that Oracle 8 as well as Solaris 8 is out of support?
    I would do it like this (if you need/want to stay on the same hardware):
    - upgrade Oracle 8 (I hope your run 8.1.7.4) to Oracle 9.2.0.8
    - upgrade Solaris 8 to Solaris 10
    - upgrade Oracle 9.2.0.8 to Oracle 10.2.0.4 + latest interim patches
    - use kernel 46D_EX2
    - upgrade the 4.6c to ERP 6.0
    Be aware that, if you plan to use Java applications (such as the Enterprise Portal or other NetWeaver-related applications), the ERP 6.0 system must be converted to Unicode.
    Markus

  • Upgrade from solaris 8 to solaris 9 with sun solstice disksuite

    Hi,
    I have to upgrade a Solaris 8 system with Solstice DiskSuite to the Solaris 9 OS. Please let me know the steps for the upgrade.
    Regards
    chesun

    Yep!
    See
    http://docs.sun.com/db/doc/806-5205/6je7vd5rf?a=view
    Lee
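
    In short: as noted in the first thread above, only the Solaris 10 media can see metadevices, so for a Solaris 9 upgrade the usual approach is to take the OS file systems off the DiskSuite metadevices, upgrade, and re-mirror afterwards. A rough sketch, assuming root is mirror d0 on physical slice c0t0d0s0; your names will differ, so check "metastat -p" first:

    # point the system back at the physical root slice (updates /etc/vfstab
    # and /etc/system)
    metaroot /dev/dsk/c0t0d0s0

    # edit /etc/vfstab by hand for the other mirrored file systems (/var,
    # swap, ...), replacing /dev/md/dsk/dN with the underlying slices, then
    # reboot onto the plain slices
    init 6

    # tear down the root mirror (repeat for the other mirrors), then run
    # the Solaris 9 upgrade from the install media
    metaclear -r d0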

  • Removing file system from meta devices in solaris 10

    hi,
    I have created a file system on a metadevice in Solaris 10 using the command below:
    newfs /dev/md/rdsk/d110
    Now I want to remove the file system to make the metadevice free. What command is required to remove the file system?
    Regards
    Zeeshan

    Thanks for your response. Actually, I performed the steps below to release the space from mount point /u05, which I want to turn into a raw device so that I can use it for ASM. So my question is: how can I un-format the file system so that the device can be used as a raw device?
    umount /u05
    metaclear d110
    Now I have the two devices below, 500 GB each, which I want to use as raw devices so that I can allocate them to ASM (Automatic Storage Management). How can we make them raw devices?
    /dev/dsk/emcpower17a
    /dev/dsk/emcpower17a
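
    There is no separate "unformat" step: once the metadevice has been cleared, ASM only needs the character (raw) device and will write its own headers over whatever UFS data is left. If you want to wipe the old file system signature explicitly, something like the following is common practice (device name copied from your post; the dd is destructive, and the ownership values assume a typical oracle:dba installation):

    # optional: zero the start of the device to erase the old UFS superblock
    dd if=/dev/zero of=/dev/rdsk/emcpower17a bs=1024k count=100

    # let the Oracle software own the raw device before adding it to ASM
    chown oracle:dba /dev/rdsk/emcpower17a
    chmod 660 /dev/rdsk/emcpower17a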

  • Hdd upgrade with multiple partitions

    I currently have a 250 GB HDD with Lion and a Boot Camp partition running Windows 7.
    I would like to upgrade my HDD because I am running out of room.
    I understand that I could most likely back up the entire drive to an external drive, then install the new HDD and restore the backed-up data. That should put both partitions onto the new HDD the way they were. At least that is what I think.
    But I would like a bigger partition for Windows, and I am unsure how to do this correctly. I don't know if restoring the backed-up data to the new drive, deleting the Boot Camp partition, then creating a new partition and reinstalling my same Windows would work. Would I even be able to install Windows a second time on the same computer without having to buy another key?
    Final thoughts:
         1. Will backing up the entire drive to an external drive, then installing the new HDD and restoring the backed-up data put both partitions onto the new HDD the way they were?
              a. Will the OS X partition get all the new space?
         2. How do I get a bigger Windows partition on the new HDD, while keeping all of my existing Windows content?
    Thank You for the help

    Use Windows software to backup your Windows system. When you setup your new Windows partition on the new hard drive reinstall Windows then restore your backup from the backup drive you use for Windows. You don't need another key as long as you don't have the same key active on more than one computer.
    Similarly, for OS X backup your current system to an OS X backup drive. See the following:
    How to replace or upgrade a drive in a laptop
    Step One: Repair the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger, Leopard or Snow Leopard.) After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry,) click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit DU and return to the installer.
    If DU reports errors it cannot fix, then you will need Disk Warrior and/or Tech Tool Pro to repair the drive. If you don't have either of them or if neither of them can fix the drive, then you will need to reformat the drive and reinstall OS X.
    Step Two: Remove the old drive and install the new drive.  Place the old drive in an external USB enclosure.  You can buy one at OWC who is also a good vendor for drives.
    Step Three: Boot from the external drive.  Restart the computer and after the chime press and hold down the OPTION key until the boot manager appears.  Select the icon for the external drive then click on the downward pointing arrow button.
    Step Four: New Hard Drive Preparation
    1. Open Disk Utility in your Utilities folder.
    2. After DU loads select your new hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area.  If it does not say "Verified" then the drive is failing or has failed and will need replacing.  Otherwise, click on the Partition tab in the DU main window.
    3. Under the Volume Scheme heading set the number of partitions from the drop down menu to one. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, set the partition scheme to GUID  then click on the OK button. Click on the Partition button and wait until the process has completed.
    4. Select the volume you just created (this is the sub-entry under the drive entry) from the left side list. Click on the Erase tab in the DU main window.
    5. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, check the button for Zero Data and click on OK to return to the Erase window.
    6. Click on the Erase button. The format process can take up to several hours depending upon the drive size.
    Step Five: Clone the old drive to the new drive
    1. Open Disk Utility from the Utilities folder.
    2. Select the destination volume from the left side list.
    3. Click on the Restore tab in the DU main window.
    4. Check the box labeled Erase destination.
    5. Select the destination volume from the left side list and drag it to the Destination entry field.
    6. Select the source volume from the left side list and drag it to the Source entry field.
    7. Double-check you got it right, then click on the Restore button.
    Destination means the new internal drive. Source means the old external drive.
    Step Six: Open the Startup Disk preferences and select the new internal volume.  Click on the Restart button.  You should boot from the new drive.  Eject the external drive and disconnect it from the computer.

  • Is newfs needed when creating mirror from existing ufs partition

    I have the root drive of my SPARC Solaris 10 server mirrored with SVM. In the same machine I have another single-partition disk that was mounted from /dev/dsk/c0t0d0s6 to /home1, and I wanted to mirror it to another disk. This happened without problem.
    While the resync was happening I realized that I did not do a newfs on the target mirror slice (/dev/dsk/c1t2d0s6) before doing the metattach. I did not have a problem with reboots or anything after the resync was done. Will the mirror create a UFS file system on the target disk? If the source /home1 disk failed, would I be able to mount the target disk as a regular /dev/dsk partition? Or if I wanted to metaoffline the target half of the /home1 mirror to do a backup, would it be recognized as a UFS file system?
    I'm thinking I should metadetach the mirror, do a newfs on the target s6 partition and then metattach again.
    Any ideas would be appreciated.

    You're getting a little confused about the difference between a partition/slice and a filesystem.
    Partitioning is the act of breaking the disk up into "slices". You have been referring to c0t0d0s6, which is a slice.
    Partitioning is done with the format program.
    The filesystem is the data that goes onto the slice. This is what newfs does: it creates a filesystem on a slice. So all it is doing is writing a particular pattern of data onto a chunk of disk.
    So mirroring filesystems doesn't require a newfs, since the correct pattern of data already exists. You just need to copy it onto the new slice.
    But the mirroring doesn't do the partitioning, i.e. create the slice. You have to do that with format. It does, however, copy the filesystem.
    So, assuming you created a new slice of the correct size, you should be able to mirror the existing slice to the new slice, then mount the virtual device representing the mirror.
    You do have to always use the mirror through the virtual /dev/md device, not the raw slices.
    The raw slices making up the mirror are perfectly ordinary UFS file systems, so yes, they will be recognised.
    But if you modify one directly instead of through the virtual device, your changes won't be mirrored to the other.
    And yes, if a disk fails you can use the other slice directly if necessary, but normally you would just keep using the metadevice.
    If you metaoffline a disk, you could then back it up.
    But backups are normally more conveniently done with fssnap. That way you don't have to split mirrors and join them again.
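
    A condensed sketch of the usual "mirror an already-populated slice" procedure, using the slice names from the post and made-up metadevice numbers (d60/d61/d62):

    # wrap the existing data slice and the new empty slice in simple concats
    metainit -f d61 1 1 c0t0d0s6    # -f because the slice is currently mounted
    metainit d62 1 1 c1t2d0s6

    # build a one-way mirror on top of the existing data
    metainit d60 -m d61

    # change /etc/vfstab to mount /dev/md/dsk/d60 on /home1 and remount,
    # then attach the second half; the resync copies the file system
    # block for block, so no newfs on the target is needed
    metattach d60 d62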

  • Meta device open error

    I have configured a mirror of the root hard disk on an Intel machine running Solaris 2.9.
    I configured RAID 5 (d50) on a single disk across three partitions (s0, s1 and s3).
    My RAID 5 hard disk is faulty. I want to clear the d50 configuration from the system. I ran the command
    metaclear -f d50
    and it gave me the error "metadevice open error".
    How do I clear the configuration?
    Thanks in advance.

    from another doc:
    unmount the filesystem
    metadetach mirror submirror
    metaclear -r mirror
    edit the vfstab to change the mount entry back to the physical device
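
    Expanding those steps into commands for the RAID-5 case in the question (d50 comes from the post; the mount point and the state of your metadb replicas are things to check first, not givens):

    # check the device state and the state database replicas
    metastat d50
    metadb -i

    # unmount anything that lives on the metadevice, then force-clear it
    umount /dev/md/dsk/d50
    metaclear -f d50

    # finally remove or comment out the matching /etc/vfstab entry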

  • Mirror disk slice or partition problem

    Hi ,
    I recently installed S11 from the GUI installer.
    On a 1 TB disk I installed S11 in a 16 GB partition or slice (I don't know exactly what the GUI installer creates).
    Later I added another 16 GB partition plus an 880 GB partition with the format->fdisk tool.
    The 880 GB partition is used for a pool called home2p; the extra 16 GB partition is a spare.
    All on disk c0t0d0.
    Later I added another 1 TB disk to make a mirror.
    I formatted the disk with format->fdisk and used the same partition numbers as on c0t0d0.
    My aim is a fault-tolerant mirror.
    But I ended up with this strange situation:
    zpool status rpool
      pool: rpool
     state: ONLINE
      scan: resilvered 246K in 0h0m with 0 errors on Tue Jun 12 17:08:05 2012
    config:
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c3t0d0s0  ONLINE       0     0     0
                c3t2d0p0  ONLINE       0     0     0
    zpool status home2p
      pool: home2p
     state: ONLINE
      scan: resilvered 144G in 0h40m with 0 errors on Tue Jun 12 16:53:36 2012
    config:
            NAME          STATE     READ WRITE CKSUM
            home2p        ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c3t0d0p3  ONLINE       0     0     0
                c3t2d0p3  ONLINE       0     0     0
    The rpool consists of c3t0d0s0 and c3t2d0p0!
    When I look with fdisk, the partition numbers are equal (offsets and sizes).
    It looks like the c3t0d0s0 created by the GUI installer is different from what I created with format->fdisk.
    So something seems to be wrong here. Can this still be corrected, or do I have to reinstall everything?
    I also need a GRUB loader on c3t2d0p0 to boot from the second disk. How do I do that, if it is possible at all in this situation?

    It's not clear from your post whether you're aware that, for x86, Solaris numbers the MBR partition table entries as devices c?t?d?p0 (whole disk) and then p1-p4 (the partitions). See 'ls -lrt /dev/[r]dsk/c?t?d?p?', etc.
    Any partition(s) allocated to Solaris will be further sliced using a VTOC. Hence I'd expect your c3t0d0p0 to have a VTOC (therefore the installer used 'c3t0d0s0' for the rpool), i.e. /dev/[r]dsk/c?t?d?s?, etc.
    I expect you need to create the matching partition on the added device partition (c3t2d0p0) which is in the rpool, and then create a VTOC with the 'fmthard' command (search for prtvtoc and fmthard to find examples).
    Thereafter the 'c3t2d0s0' device should be available to be added to the rpool mirror.
    Use 'iostat -En' to identify the disks, then the command 'fdisk -W - /dev/rdsk/c3t?d0p0' to show the disk partition table allocations, and 'prtvtoc /dev/rdsk/c3t?d0p?' to establish the VTOC allocations. Note that the ? should be substituted appropriately with your device numbering.
    When your devices are set up correctly you can use the expected "s" device names with 'zpool attach'. Then take a look at this blog entry, http://www.c0t0d0s0.org/archives/7394-Migrating-your-notebook-from-a-smaller-to-a-larger-disk.html, which covers the 'zpool split' and GRUB components (for really neat, simple and quick disaster recovery).
    Note that 'gparted' is also available with S11 to check the layout of the x86 partition table (to confirm what fdisk is reporting).
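
    Pulling that together into commands (device names taken from the zpool output above; this assumes the Solaris partition on the second disk really does start at the same offset, and uses the stock Solaris GRUB paths):

    # copy the VTOC of the first disk's Solaris partition to the second disk
    prtvtoc /dev/rdsk/c3t0d0s2 | fmthard -s - /dev/rdsk/c3t2d0s2

    # swap the whole-partition device in rpool for the matching slice
    zpool detach rpool c3t2d0p0
    zpool attach rpool c3t0d0s0 c3t2d0s0

    # once the resilver finishes, put GRUB on the second mirror half
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t2d0s0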

  • Solaris Upgrade 9u8 -- 10u9

    Hi all
    I need to study the upgrade from Solaris 9u8 to 10u9, as well as the upgrade from Solaris Cluster 3.1u4 to 3.3u1 and VxVM 4.1 to 6.0.1.
    I plan to perform a standard upgrade, but do I have to perform an intermediate step for the cluster from 3.1 to 3.2 and then to 3.3, as described here:
    Minimum Oracle Solaris Cluster software version - Oracle Solaris Cluster 3.3 software supports the following direct upgrade paths:
    SPARC: From version 3.1 8/05 through version 3.2 11/09 - use the standard, dual-partition, or live upgrade method.
    From version 3.2, including update releases, through version 3.2 11/09 - use the standard, dual-partition, or live upgrade method.
    On version 3.3 to an Oracle Solaris Cluster 3.3 update release with no Oracle Solaris upgrade except to an Oracle Solaris update release, or to upgrade only Oracle Solaris to an update release - you can also use the rolling upgrade method.
    Or is it possible to upgrade from 3.1 to 3.3 directly?
    Thanks for your support in helping me define the right order of procedures.
    Regards
    Support Solaris

    No, that's not really the recommended way.
    If you have two disks, have a look at Live Upgrade. That way you can upgrade one of your disks on the fly and later reboot to activate it.
    Better yet, if you have ZFS you don't even have to do that. If you aren't using ZFS as your root filesystem, you can break your mirror and upgrade one of the disks to ZFS using Live Upgrade as well. Then you will never have to worry about breaking mirrors to do upgrades again.
    .7/M.
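
    If you do end up with a ZFS root, the Live Upgrade part of that advice is short. A sketch with a made-up BE name, assuming the Solaris 10 media is available under /cdrom/cdrom0 (the cluster software upgrade itself still has to follow the supported Sun Cluster path):

    # create an alternate boot environment (a ZFS clone of the running one)
    lucreate -n s10u9

    # upgrade the inactive BE from the installation media
    luupgrade -u -n s10u9 -s /cdrom/cdrom0

    # activate it and reboot into the new release
    luactivate s10u9
    init 6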

  • Meta device question

    How can I find out how much space I have left on a metadevice? I am using Solaris 9.

    Hi.
    You have mirror d10 configured with only one submirror, d11.
    What is the reason for configuring a mirror without a real second submirror? You only lose performance.
    After some error, the volume manager lost access to device c3txxxxxxxxxxxx037F0000d0s0 and marked the device as needing maintenance.
    Because it was the last working submirror in the mirror configuration, the device is marked "Last erred".
    This flag is used in dual-submirror configurations to identify the device holding the latest data when both submirrors have failed.
    What you can do at this point, at your choice:
    1. Ignore the problem, because clearing the error does not increase the availability of the device.
    2. Stop the application and unmount device d10:
       metaclear d10
       metaclear d11
    Change /dev/md/dsk/d10 (/dev/md/rdsk/d10) to /dev/dsk/c3txxxxxxxxxxxx037F0000d0s0 in /etc/vfstab.
    Remount the file system and start the application.
    After this you will use the device directly instead of going through the mirror.
    3. Recreate the mirror/submirror:
       metaclear d10
       metaclear d11
       metainit d11 1 1 c3txxxxxxxxxxxx037F0000d0s0
       metainit d10 -m d11
    Remount the file system and start the application.
    I wrote the commands with default values for some parameters. You can see the current mirror-specific parameters with:
       metastat -p d10
       metastat -p d11
    Of course you need to do this before the metaclear; you can use the output of these commands to recreate the devices.
    To troubleshoot what happened with the device and why d11 changed its status to "Needs maintenance", analyze the /var/adm/messages* files and look for all error/warning messages about the disk device.
    Regards.
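
    Coming back to the original question: free space is a property of the file system sitting on the metadevice, not of the metadevice itself, so df against the mount point is normally what you want; metastat reports the total size of the device. For example (d10 and the mount point are placeholders):

    # free space in the file system that lives on the metadevice
    df -k /export/data

    # total size of the metadevice itself
    metastat d10 | grep -i size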

  • Failed to install Solaris 10 with Jump Start

    Hi all:
    When installing Solaris 10 with the Jump Start method, I ran into the error below:
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    {3} ok boot net -install
    SC Alert: Host System has Reset
    Probing system devices
    Probing memory
    Probing I/O buses
    screen not found.
    Rebooting with command: boot net -install
    Use is subject to license terms.
    whoami: no domain name
    Hardware watchdog enabled
    WARNING: exec(nstall) failed (file not found).
    (Could not start init) Program terminated
    {1} ok
    Thanks a lot!

    That's because you wrote "-install", which gets parsed as "-i nstall", so your system tries to execute a kernel named "nstall".
    The proper syntax is "boot net - install", with spaces on both sides of the "-".
    .7/M.

  • On Sun fire v490 - Solaris 10 with Oracle 8.1.7.4 & Sybase 12.0

    Hi,
    We are going to upgrade our server to this configuration:
    Sun Fire V490, 2 x 1.05 GHz UltraSPARC IV CPUs
    8096 MB RAM, 2 x 73 GB local disks
    2 x FC 2 GB Sun/QLogic HBAs
    DAT72
    One machine will run Sun Solaris 10 with Oracle DB 8.1.7.4, and the second will run Sun Solaris 10 with Sybase DB 12.0.0.6.
    Now our question is: the Sun Fire has hyper-threaded CPUs; will the OS and the databases (Oracle and Sybase) view the proposed system as a true 4-CPU platform? Will parameters used to tune the database, such as Sybase max online engines, still operate in the same manner as before?
    Our old machine configuration was: Sun E450, 4 x 400 MHz CPUs, 1024 MB RAM, 2 x 18 GB and 8 x 36 GB disks.

    Questions on Oracle and Sybase should be directed to a database forum, this forum is for Sun hardware support.
    Here is a link to a DB forum I look at from time to time:
    http://www.dbforums.com/index.php
    The topic of tuning Oracle or Solaris is way beyond the scope of this forum. I have attempted to go into it before but didn't get any feedback, and I would only like to spend lots of time on it if I were being paid! On the memory side, keep in mind that 64-bit Oracle 9i can address a maximum of 2^64 bytes (16777216 TB) of memory; prior to that the DBA had to define memory parameters in init.ora. To be honest, the last time I worked with an Oracle 8 database I permanently shut down an HP K-class server that had been migrated to Oracle 9i on Solaris by an Oracle consultant, and I can't remember all the tuning tricks, etc.
