MPxIO in Solaris

Hi,
What is MPxIO?
How do I disable MPxIO in Solaris?

Hi,
What is MPxIO? MPxIO is the multipathing software built into Solaris (Sun StorEdge Traffic Manager / Solaris I/O Multipathing); it manages the multiple paths to LUNs on disk arrays attached over the SAN, presenting a single device node per LUN.
How do I disable MPxIO in Solaris? Set mpxio-disable="yes" in the file /kernel/drv/fp.conf and perform a reconfigure reboot; on Solaris 10 the supported method is stmsboot -d. A sketch of both approaches is shown below.
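A minimal sketch (verify against the release notes for your Solaris version; the reboot step is required for the fp.conf change to take effect):

# Solaris 8/9 (SAN Foundation, fp driver): edit /kernel/drv/fp.conf and set
mpxio-disable="yes";
# then do a reconfigure reboot
# reboot -- -r

# Solaris 10 and later: use stmsboot rather than editing fp.conf by hand
# stmsboot -d    disable MPxIO (prompts for a reboot and updates vfstab)
# stmsboot -e    re-enable MPxIO
# stmsboot -L    list non-MPxIO to MPxIO device name mappings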
Regards,
X A H E E R

Similar Messages

  • Adding eSAN storage to a Solaris 9 box, with MPXIO and Qlogic HBAs

I recently added a few SAN drives to a Solaris 9 box and enabled MPxIO. I noticed that after you install the qlc drivers and do a reconfigure boot you see two additional devices besides the disks. These additional devices are very small and present themselves as disks. See the example below:
    12. c5t50060482D5304D88d0 <EMC-SYMMETRIX-5772 cyl 4 alt 2 hd 15 sec 128>
    /pci@8,700000/fibre-channel@4,1/fp@0,0/ssd@w50060482d5304d88,0
    13. c7t50060482D5304D87d0 <EMC-SYMMETRIX-5772 cyl 4 alt 2 hd 15 sec 128>
    /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0
    14. c8t60060480000190103862533032384632d0 <EMC-SYMMETRIX-5772 cyl 37178 alt 2 hd 60 sec 128>
    /scsi_vhci/ssd@g60060480000190103862533032384632
    15. c8t60060480000190103862533032384541d0 <EMC-SYMMETRIX-5772 cyl 37178 alt 2 hd 60 sec 128>
    /scsi_vhci/ssd@g60060480000190103862533032384541
Notice the difference in the number of cylinders. I have a couple of questions about these devices:
1. What are they? The SAN storage is EMC Symmetrix.
2. I see the following errors in /var/adm/messages any time a general disk-access command such as format is run. Should I be concerned?
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@4,1/fp@0,0/ssd@w50060482d5304d88,0 (ssd21):
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0 (ssd28):
    Feb 4 13:05:35 Corrupt label; wrong magic number
    Feb 4 13:05:35 scsi: WARNING: /pci@8,700000/fibre-channel@5,1/fp@0,0/ssd@w50060482d5304d87,0 (ssd28):
    Feb 4 13:05:35 Corrupt label; wrong magic number

Those small devices are gatekeepers from the EMC Symmetrix; seeing them (along with the corrupt-label warnings against them) is normal.
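If it helps, a quick way to spot them from the host is to filter the format listing for the tiny 4-cylinder entries shown above (a sketch; adjust the pattern to whatever geometry your gatekeepers report):

# list all disks non-interactively and keep only the gatekeeper-sized entries
echo | format 2>/dev/null | grep "cyl 4 "

Gatekeepers should never be labeled or used for data; they are reserved for the array's management traffic.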

  • Solaris 8 SFS 4.4.13 mpxio question

    I have a 25k domain running Sol 8 02/04 with two qlc 2340 HBAs attached to two cisco 9509s getting a single lun from a netapp fas.
I installed the OS, added the latest recommended patch cluster, then installed SFS 4.4.13. The NetApp LUN is presented to each of the HBA paths, with the hope of using MPxIO across all 4 HBA connections. I can see all the devices at this point, but I cannot get them under Traffic Manager control; they just have the typical dev paths and not the vhci paths you would expect.
I ran a cfgadm -c configure on them and luxadm comes back looking good. I am concerned that I might not have the correct entries in my /kernel/drv/scsi_vhci.conf. I am not booting from this LUN; the system has a separate locally attached S3100.
    So, in a nutshell, I have 4 paths ( 8 with the dual paths on the netapps pri/failover ) to a single lun, and want to bind them into one virtual path. Has anyone run in a similar config, and does anyone have an example of the files I need to edit that is specific to a "NETAPP LUN" device (section of vhci.conf etc) as reported by luxadm inquiry?
    tia
    Edited by: mpulliam on Dec 16, 2009 2:13 PM
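For reference, older SFS / Traffic Manager setups typically claimed third-party arrays through /kernel/drv/scsi_vhci.conf. A sketch of the kind of entry used for NetApp (treat this as an assumption to verify against the SFS 4.4.x and NetApp host-attach documentation; the vendor/product string must match luxadm inquiry exactly, with the vendor ID padded to 8 characters):

device-type-scsi-options-list =
        "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;

MPxIO must also be enabled globally (mpxio-disable="no" in /kernel/drv/fp.conf), followed by a reconfigure reboot, before the paths move under /scsi_vhci.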

    Update:
I shut down the domain, pulled the boot disk, and replaced it with a clean drive. I built the system with Solaris 10 05/08, then patched with the latest recommended patch cluster. My HBA is a Qlogic 2342 connecting to Cisco 9509 switches connected to a NetApp FAS box (not 100% sure on the model at the moment, but very high end).
    I configured the same lun I had on the solaris 8 box then mounted it up. All my data files are intact so I began some testing to see if the performance was any different.
I ran several timed sequential 10 GB writes and hit a maximum throughput of about 170 MB/s on the LUN. This is rather sad really, as it's not even hitting 2Gb FC speeds. I really punished the system with several parallel 10 GB writes, 10 GB reads, and 4k writes and reads. I never really achieved more than about 15K IOPS to the LUN, and never exceeded a peak of 190 MB/s. Under the heavy load the device queue really stacked up. The filesystem on the LUN is UFS.
    We're going to try some different lun setups on the netapp to increase spindle count, but I'm really not sure it will help in any way.
Does anyone have any experience with a similar setup? Perhaps changing settings in the Qlogic driver conf, such as frame size? Any ideas on how to pin down the performance culprit (server, switch, storage)?
    tia
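A few low-effort measurements that usually help isolate the culprit (a sketch; the device name is a placeholder for one of the MPxIO LUN paths):

# per-device service times, queue depth and throughput while the test runs
iostat -xnz 5
# confirm the negotiated link speed on each HBA port
fcinfo hba-port
# raw sequential read from the character device takes UFS out of the picture
time dd if=/dev/rdsk/c8t<WWN>d0s2 of=/dev/null bs=1024k count=10240

If the raw-device numbers match the UFS numbers, look at the array/switch side; if raw is much faster, look at filesystem and application I/O sizes.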

  • MPxIO vs PowerPath for EMC SAN - Solaris 10

I'm just wondering what everyone's experience has been with either option. We are in the process of upgrading to a T5120 server using Solaris 10 and Oracle 9i (we can't upgrade yet due to limitations in our current blood bank software) and connecting to an EMC SAN. Our vendor says that his preference is PowerPath because he's heard of systems hanging when the MPxIO option is used. Any ideas?

    I'm not sure what kind of commentary you are looking for.
    We use Powerpath on Solaris 10 (x86 now, but in the past on T5120s) for multipathing to LUNs on EMC VNX and CX4 SANs. We followed the EMC Powerpath setup guide for Solaris 10 exactly.
After disabling MPxIO, Powerpath was able to take control. Some older versions of Powerpath had a bug where the pseudo names for LUNs would change, causing some headaches.
We keep a mapping of LUN to pseudo name to Solaris dsk labels.
If you use ZFS, I have found a 1:1 match between the zfs_max_vdev_pending value and the max queued-IOs value shown in Powerpath.
When we've had to fail/trespass LUNs over to other SPs on the EMC SANs, Powerpath has handled this elegantly, with the expected warnings in /var/adm/messages.
I recall that we had to explicitly set the Powerpath options for CLARiiON and VNX to managed and the policy to claropt.
When adding LUNs, there is a routine to go through with cfgadm, devfsadm and the Powerpath utilities to see all paths. We use Qlogic HBAs and the qlc leadville driver.
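For anyone looking for that add-LUN routine, a rough sketch of the usual sequence (c2/c3 are example controller IDs; confirm the exact steps against the EMC host connectivity guide):

# rescan the fabric-attached controllers and rebuild /dev links
cfgadm -c configure c2 c3
devfsadm -c disk
# have PowerPath claim the new paths and verify the pseudo devices
powermt config
powermt display dev=all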

Kernel warnings in Solaris 10

    Dear Folks,
We have a Solaris 10 box connected to an HP EVA8000 SAN through dual HBAs with MPxIO enabled. For quite some time we have been receiving the warning messages below. Could anybody explain the reason for these warnings, or suggest a way to avoid them? Many thanks in advance.
    /var/adm/messages (scanned at Tue May 8 08:46:12 AST 2007)
    May 8 08:44:33 mx-jes-11 fp: [ID 517869 kern.info] NOTICE: fp(0): PLOGI to 11400 failed state=Packet Transport error, reason=No Connection
    /var/adm/messages (scanned at Tue May 8 08:41:12 AST 2007)
    May 8 08:36:22 mx-jes-11 fp: [ID 517869 kern.info] NOTICE: fp(0): PLOGI to 11400 failed state=Packet Transport error, reason=No Connection
    /var/adm/messages (scanned at Tue May 8 08:46:12 AST 2007)
    May 8 08:42:38 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::GPN_ID for D_ID=11400 failed
    May 8 08:42:38 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::N_x Port with D_ID=11400, PWWN=10000000c94b138c disappeared from fabric
    May 8 08:44:33 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::N_x Port with D_ID=11400, PWWN=10000000c94b138c reappeared in fabric
    May 8 08:44:33 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::PLOGI to 11400 failed. state=e reason=5.
    May 8 08:44:33 mx-jes-11 scsi: [ID 243001 kern.warning] WARNING: /pci@8,700000/QLGC,qla@5/fp@0,0 (fcp0):
    /var/adm/messages (scanned at Tue May 8 08:41:12 AST 2007)
    May 8 08:36:22 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::N_x Port with D_ID=11400, PWWN=10000000c94b138c reappeared in fabric
    May 8 08:36:22 mx-jes-11 fctl: [ID 517869 kern.warning] WARNING: fp(0)::PLOGI to 11400 failed. state=e reason=5.
    May 8 08:36:22 mx-jes-11 scsi: [ID 243001 kern.warning] WARNING: /pci@8,700000/QLGC,qla@5/fp@0,0 (fcp0):

    Emmalleres wrote:
You need to download patch 119130-17 for Solaris 10 (SPARC), or the corresponding patch for Solaris 9 or the Intel platform.
This will resolve the issue.
This thread was originally posted more than two years ago. The O.P. has never returned to update the thread, though they have been active in the forums since that time (click their username).
    The suggested patch would be of value only if the system was installed with Solaris 10 Update 2 or older and had never been patched. Rev-17 of patch 119130 is/was from 2006, which can be seen by reading the README for it.
    119130-19 was from May 2006
http://sunsolve.sun.com/search/document.do?assetkey=1-21-119130-19-1
    119130-16 was from Feb 2006
    The only information was the excerpt provided in the initial post. It still appears to have been an issue with the storage peripheral.
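For anyone hitting this now, a quick way to check whether that fp/fctl patch (or a later revision) is already on the system (a sketch):

# list installed patches and look for the 119130 series
showrev -p | grep 119130
# the running kernel patch level is also visible via
uname -v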

  • Can't use new LUN from IBM SAN on Sol10 box w/ MPxIO.

    I've got a T2000 running Solaris 10 06/06 hooked up to an IBM DS4300 via a dual-port Emulex L10000 HBA. Overall, it works fine... we've got a number of LUNs mounted, and MPxIO detects and "mpxio-izes" the device nodes. However, we've tried to add a new LUN, LUN 12, and Solaris simply won't detect it.
    cfgadm sees the LUN:
    # cfgadm -al -o show_SCSI_LUN
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c2                             fc-fabric    connected    configured   unknown
    c2::10000000c955c589           unknown      connected    unconfigured unknown
    c2::200400a0b817c7b1,0         disk         connected    configured   unknown
    [... Other LUNS...]
    c2::200400a0b817c7b1,11        disk         connected    configured   unknown
    c2::200400a0b817c7b1,12        disk         connected    configured   unusable   <-----
    c2::200400a0b817c7b1,31        disk         connected    unconfigured unknown
    c2::200500a0b817c7b1,0         disk         connected    configured   unknown
    [...Other LUNs...]
    c2::200500a0b817c7b1,11        disk         connected    configured   unknown
    c2::200500a0b817c7b1,31        disk         connected    unconfigured unknown
    c3                             fc-fabric    connected    configured   unknown
    c3::10000000c955c58a           unknown      connected    unconfigured unknown
    c3::200400a0b817c7b1,0         disk         connected    configured   unknown
    [...Other LUNs...]
    c3::200400a0b817c7b1,11        disk         connected    configured   unknown
    c3::200400a0b817c7b1,12        disk         connected    configured   unusable   <----
    c3::200400a0b817c7b1,31        disk         connected    configured   unknown
    c3::200500a0b817c7b1,0         disk         connected    configured   unknown
    [...Other LUNs...]
    c3::200500a0b817c7b1,11        disk         connected    configured   unknown
c3::200500a0b817c7b1,31        disk         connected    configured   unknown
I've cut out LUNs 1 through 10 for brevity, but there are no holes in the sequence. Each LUN can be seen a total of four times... once for each controller on each port on the HBA. All except LUN 12, which only shows up twice, both associated with one controller. Also the disk is marked "unusable" while the working ones are "unknown". I don't know how cfgadm makes this determination. LUN 31 is actually some odd 16 MB virtual device within the SAN that looks like a disk, but isn't usable.
    We've scheduled a reboot -r for tomorrow morning, just because it's the least-effort and quickest maybe-fix, but it doesn't seem like it should be necessary. The machine is slightly behind on its patches (I just got the okay to put a patch run on the train...) but isn't ancient. I wish I had mpathadm, but this release doesn't have it and upgrading to the newest Solaris isn't feasible at the moment.
    Any suggestions for non-reboot things to try?
    The format command will not create device nodes for LUN 12, nor will devfsadm do anything.
    Anyone have any suggestions where to start? There's nothing obviously different about LUN 12 compared to the rest in the SAN management interface.

    We eventually discovered the problem: The LUN was mapped to the host, not the host group.
    Why that matters is still under investigation... it should have worked fine. There are a number of other LUNs mapped to the host, and those work fine. We discovered this problem when we discovered some other LUNs that were in the host group as well, and ceased working when moved to the host. None of the LUNs are visibly different from the rest in configuration... something odd is going on, but Solaris itself isn't to blame.
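For reference, the non-reboot rescan steps that are usually worth trying in this situation, before digging into array-side mapping (a sketch; c2/c3 are the controllers from the cfgadm output above):

# force a rescan of the fabric controllers
cfgadm -c configure c2 c3
# rebuild /dev links and clean up dangling ones
devfsadm -Cv
# confirm the FC stack sees the LUN and whether a vhci node now exists
luxadm probe
echo | format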

  • Solaris 11.1 Comstar FC target

    Hello,
    I have a problem with the comstar as a FC target.
    New install of Solaris 11.1
    HBA is an Emulex LPe11002
    Brocade 5100B switches
    2x 10x 3TB NL-SAS disks in raidz2 in pool
It all works, but the speed to the LUN is unusably slow.
iSCSI works and I am able to hit the maximum of the network, so there is no problem with access to the disks.
    HBA info
    HBA Port WWN: 10000000c98e9712
            Port Mode: Target
            Port ID: 12000
            OS Device Name: Not Applicable
            Manufacturer: Emulex
            Model: LPe11002-E
            Firmware Version: 2.80a4 (Z3F2.80A4)
            FCode/BIOS Version: none
            Serial Number: VM92923844
            Driver Name: emlxs
            Driver Version: 2.70i (2012.02.10.12.00)
            Type: F-port
            State: online
            Supported Speeds: 1Gb 2Gb 4Gb
            Current Speed: 4Gb
            Node WWN: 20000000c98e9712
    HBA Port WWN: 10000000c98e9713
            Port Mode: Target
            Port ID: 22000
            OS Device Name: Not Applicable
            Manufacturer: Emulex
            Model: LPe11002-E
            Firmware Version: 2.80a4 (Z3F2.80A4)
            FCode/BIOS Version: none
            Serial Number: VM92923844
            Driver Name: emlxs
            Driver Version: 2.70i (2012.02.10.12.00)
            Type: F-port
            State: online
            Supported Speeds: 1Gb 2Gb 4Gb
            Current Speed: 4Gb
            Node WWN: 20000000c98e9713
    iostat 2 sec apart:
    pool        alloc   free   read  write   read  write
    dipool    44.1M  54.5T      0     19  1.01K   134K
    dipool    44.1M  54.5T      0      2      0   196K
    dipool    45.0M  54.5T      0     50      0   210K
    dipool    45.0M  54.5T      0      0      0  64.0K
    dipool    45.8M  54.5T      0     50      0   274K
    dipool    45.8M  54.5T      0      0      0  64.0K
    dipool    45.8M  54.5T      0      0      0      0
    dipool    45.0M  54.5T      0     35      0   125K
    dipool    45.0M  54.5T      0      0      0  64.0K
    dipool    44.5M  54.5T      0     34      0  61.0K
    dipool    44.5M  54.5T      0      0      0  64.0K
    dipool    44.5M  54.5T      0      0      0  64.0K
    dipool    44.6M  54.5T      0     34      0  61.0K
    dipool    44.6M  54.5T      0      0      0  64.0K
I also tried OpenIndiana; the speed was good, but the link would die, and capturing stmf debug output shows the following when using the Emulex:
    FROM STMF:210406652: abort_task_offline called for LPORT: lport abort timed out, 1000's of them
    Jun  7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730: Link reset. (Disabling link...)
    Jun  7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710: Link down.
    Jun  7 14:04:41 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720: Link up. (4Gb, fabric, target)
    Jun  7 14:04:41 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid 22000, topology Fabric Pt-to-Pt,speed 4G
    Jun  7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730: Link reset. (Disabling link...)
    Jun  7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710: Link down.
    Jun  7 14:12:40 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720: Link up. (4Gb, fabric, target)
    Jun  7 14:12:40 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid 22000, topology Fabric Pt-to-Pt,speed 4G
I also tried a Qlogic QLE2460-SUN, and that has the same problem in both OI and Solaris: ultra slow.
    HBA Port WWN: 2100001b3280b
            Port Mode: Target
            Port ID: 12000
            OS Device Name: Not Applicable
            Manufacturer: QLogic Corp.
            Model: QLE2460
            Firmware Version: 5.2.1
            FCode/BIOS Version: N/A
            Serial Number: not available
            Driver Name: COMSTAR QLT
            Driver Version: 20100505-1.05
            Type: F-port
            State: online
            Supported Speeds: 1Gb 2Gb 4Gb
            Current Speed: 4Gb
            Node WWN: 2000001b3280b
It seems no one is using Solaris as an FC target anymore, and since we do not have 10GbE in our lab and some systems cannot communicate via IP with others, FC is the only form of backup.
Can someone please let me know if they are using Solaris as an FC target, and perhaps give some pointers? In the example above I am trying to clone, using VMware, from a LUN on an EMC array to the Solaris node. As I mentioned, the speed is good in OI, but then there seems to be a driver issue.
Cloning in OI from the EMC LUN to the backup server:
1 sec apart.
         alloc   free   read  write   read  write
         -----  -----  -----  -----  -----  -----
          309G  54.2T     81     48   452K  1.34M
          309G  54.2T      0  8.17K      0   258M
          310G  54.2T      0  16.3K      0   510M
          310G  54.2T      0      0      0      0
          310G  54.2T      0      0      0      0
          310G  54.2T      0      0      0      0
          310G  54.2T      0  10.1K      0   320M
          311G  54.2T      0  26.1K      0   820M
          311G  54.2T      0      0      0      0
          311G  54.2T      0      0      0      0
          311G  54.2T      0      0      0      0
          311G  54.2T      0  10.6K      0   333M
          313G  54.2T      0  27.4K      0   860M
          313G  54.2T      0      0      0      0
          313G  54.2T      0      0      0      0
          313G  54.2T      0      0      0      0
          313G  54.2T      0  9.69K      0   305M
          314G  54.2T      0  10.8K      0   337M
We have tons of other devices connected to the Brocade 5100B switches. I tried connecting the system to two different switches individually, with the same result. We are basically a 100% Emulex shop and I only have the one QLT card.
    I have now tried a brand new Emulex LPe11002 card in a different PCI-E slot, new cable and different FC switch.
I have similar problems with OpenIndiana, and no problems with any of the EMC VNX/CX/Data Domain boxes connected to the same switches, or with any of the hosts connected to them as the targets, using the same LPe10000/LPe11002/LPe12002 cards.
    Any help/pointers would be greatly appreciated.
    Thanks,

Accidentally found this. Some comments. Unfortunately, we run Solaris FC as initiators/clients to SAN target LUNs on NetApp and HP.
1. There were patches in Sol 10 for FC cards covering bugs, driver and firmware upgrades, and fcode/BIOS (which I believe is for SAN boot only).
I read somewhere that these patches for Sun/Oracle-branded FC cards would not be released under Sol 11; I have not looked.
Sample patches for Qlogic on Sol 10 are 114874-07 for fcode/BIOS and 149175-03 for everything else. Unfortunately we're mostly Qlogic and only have a couple of Emulex cards in Linux systems. So is 11.1 really supporting these FC cards now, or is the user responsible for downloading drivers and firmware from the vendors and installing them?
2. I have heard that when ZFS gets to around 80% capacity, I/O performance can suffer. This may have been fixed; we have been avoiding it with quotas.
And of course, if you are looking for sustained speed, don't turn on compression.
    3.  Do you have/need the sol 11.1 multi-path package when Solaris has the targets/LUNs?  Are you configured for MPxIO?
    pkg info system/storage/multipath-utilities
4. Do you need any kernel changes to /etc/system for performance? (ssd is the FC disk driver on SPARC; sd is used on x86.)
set ssd:ssd_max_throttle=64    (SPARC, ssd driver)
set sd:sd_max_throttle=64      (x86, sd driver)
set maxphys=1048576
set ssd:ssd_io_time=60
    5.   Do you need to worry about 4K alignment from client side?
    These are all things I worry about but Solaris is an initiator in our environment along with every other platform.
    This is old and hopefully resolved by this time!
where (s)sd_max_throttle = 256 / (number of LUNs)
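For example, applying that rule of thumb on a SPARC host with 8 LUNs behind the ssd driver would give 256 / 8 = 32 (a sketch; the driver name and LUN count are assumptions for illustration):

* /etc/system tuning sketch: 256 / 8 LUNs = 32
set ssd:ssd_max_throttle=32
set maxphys=1048576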

  • Latest round of patches on fabric booted system causes Solaris 10 to hang

I have a fairly stock install of Solaris 10 6/06 on a T2000 which uses an Emulex HBA to boot from Xiotech SAN-attached disks.
    I installed the following patches:
    118712 14 < 16 R-- 24 SunOS 5.10: Sun XVR-100 Graphics Accelerator Patch
    120050 05 < 06 RS- 28 SunOS 5.10: usermod patch
    120222 16 < 17 R-- 19 SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
    120629 02 < 08 R-- 24 SunOS 5.10: libpool patch
    120824 08 < 09 R-- 39 SunOS 5.10: SunBlade T6300 & Sun Fire (T1000, T2000) platform patc
    121118 11 < 12 R-- 21 SunOS 5.10: Sun Update Connection System Client 1.0.9
    122660 08 < 09 R-- 17 SunOS 5.10: zones patch
    124258 03 < 05 RS- 19 SunOS 5.10: ufs and nfs driver patch
    124327 -- < 04 R-- 34 SunOS 5.10: libpcp patch
    120222 16 < 17 R-- 19 SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
When I rebooted, the system would no longer boot up.
    If I do a {ok} boot -m milestone=none and attempt to start all the services by hand, I see:
    svc:/platform/sun4u/mpxio-upgrade:default (Multipath upgrade)
    State: offline since June 4, 2007 4:05:58 PM CDT
    Reason: Start method is running.
    See: http://sun.com/msg/SMF-8000-C4
    See: man -M /usr/share/man -s 1M stmsboot
    See: /etc/svc/volatile/platform-sun4u-mpxio-upgrade:default.log
    It appears the mpxio-upgrade script is failing to start.
If I run /lib/svc/method/mpxio-upgrade by hand, the script hangs and cannot be killed. Since I am on the console, the only way to recover is to send-brk and reboot. I truss'ed it and the last device it was trying to read is:
    82: open("/devices/pseudo/devinfo@0:devinfo", O_RDONLY) = 5
    82: ioctl(5, 0xDF82, 0x00000000) = 57311
    This is the second time this has happened in the last 2 months. The first time the problem was resolved with a new kernel patch. However Sun could not tell me what the exact problem was.
    Has anyone else run into SAN/Fabric booted servers failing to boot after various patches?

We have discovered the problem. It has nothing to do with SAN booting at all. It has to do with having a device plugged into the serial port. I was using this system as a serial console for another device in the same rack. When Sun kernel engineering asked me to unplug the serial cable and reboot I was skeptical, but it worked.
    Sun has filed this as a bug. The only work around right now is to make sure you have nothing plugged into the serial port.

  • How do I map Hitachi SAN LUNs to Solaris 10 and Oracle 10g ASM?

    Hi all,
I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC clusterware for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses /dev/rdsk/CwTxDySz naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

Yes, that is correct. However, the Solaris 10 MPxIO multipathing software that we are using with the Hitachi SAN presents an extra layer of complexity and issues with the ASM configuration. This means that ASM may get confused when it attempts to find the new LUNs from the Hitachi SAN at the Solaris OS level. Oracle Metalink note 396015.1 states this issue.
    So my question is this: how to configure the ASM instance initialization parameter asm_diskstring to recognize the new Hitachi LUNs presented to the Solaris 10 host?
    Lets say that I have the following new LUNs:
    /dev/rdsk/c7t1d1s6
    /dev/rdsk/c7t1d2s6
    /dev/rdsk/c7t1d3s6
    /dev/rdsk/c7t1d4s6
Would I set the ASM initialization parameter asm_diskstring to /dev/rdsk/c7t1d*s6 so that the ASM instance recognizes my new Hitachi LUNs? Solaris needs to map these LUNs to pseudo devices in the Solaris OS for ASM to recognize the new disks.
    How would I set this up in Solaris 10 with Sun multipathing (MPxIO) and Oracle 10g RAC ASM?
    I want to get this right to avoid the dreaded ORA-15072 errors when creating a diskgroup with external redundancy for the Oracle 10g RAC ASM installation process.
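For reference, a minimal sketch of the approach usually taken with MPxIO devices and 10g ASM, assuming the LUNs really are the c7t1d*s6 slices listed above (device names, owner and group are placeholders to adapt):

# give the Oracle/ASM owner access to the raw MPxIO slices on every node
chown oracle:dba /dev/rdsk/c7t1d[1-4]s6
chmod 660 /dev/rdsk/c7t1d[1-4]s6

# then point the ASM instance at them, e.g. in the ASM init/spfile:
# asm_diskstring='/dev/rdsk/c7t1d*s6'

Because /dev/rdsk entries are links into /devices, verify that the ownership and permissions survive a reconfigure reboot on both RAC nodes before creating the diskgroup.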

Installing PostgreSQL in Solaris 10

    I have downloaded the postgresql package from
    www.postgresql.org/download/bittorent
I have unzipped the files. I don't know how to continue with the installation.

Here is some documentation to get you started. It is available online.
Author  : Chris Drawater
Date    : May 2005
Version : 1.2
    PostgreSQL 8.0.02 for J2EE applications on Solaris 10
    Abstract
    Advance planning enables PostgreSQL 8 and its associated JDBC driver to be quickly deployed in a
    basic but resilient and IO efficient manner.
    Minimal change is required to switch JDBC applications from Oracle to PostgreSQL.
    Document Status
This document is Copyright © 2005 by Chris Drawater.
    This document is freely distributable under the license terms of the GNU Free Documentation License
    (http://www.gnu.org/copyleft/fdl.html). It is provided for educational purposes only and is NOT
    supported.
    Introduction
    This paper documents how to deploy PostgreSQL 8 and its associated JDBC driver in a basic but both
    resilient and IO efficient manner. Guidance for switching from Oracle to PostgreSQL is also provided.
    It is based upon experience with the following configurations =>
    PostgreSQL 8.0.2 on Solaris 10
    PostgreSQL JDBC driver on Windows 2000
    using the PostgreSQL distributions =>
    postgresql-base-8.0.2.tar.gz
    postgresql-8.0-311.jdbc3.jar
    Background for Oracle DBAs
    For DBAs coming from an Oracle background, PostgreSQL has a number of familiar concepts including
    Checkpoints
    Tablespaces
    MVCC concurrency model
    Write ahead log (WAL)+ PITR
    Background DB writer
    Statistics based optimizer
    Recovery = Backup + archived WALs + current WALs
However, whereas 1 Oracle instance (set of processes) services 1 physical database, PostgreSQL differs in that
1 PostgreSQL "cluster" services n * physical DBs
1 cluster has tablespaces (accessible to all DBs)
1 cluster = 1 PostgreSQL instance = set of server processes etc (for all DBs) + 1 tuning config + 1 WAL
User accts are cluster wide by default
There is no undo or BI file, so to support MVCC the "consistent read" data is held in the tables themselves and, once obsolete, needs to be cleansed out using the "vacuum" utility (see the vacuumdb example after the guidelines below).
    The basic PostgreSQL deployment guidelines for Oracle aware DBAs are to =>
    Create only 1 DB per cluster
    Have 1 superuser per cluster
    Let only the superuser create the database
    Have one user to create/own the DB objects + n* endusers with appropriate read/write access
    Use only ANSI SQL datatypes and DDL.
    Wherever possible avoid DB specific SQL extensions to ensure cross-database portability
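As a concrete example of that cleanup step, the bundled vacuumdb wrapper can be scheduled (a sketch; run as the cluster superuser, e.g. from cron):

$ vacuumdb --all --analyze    # reclaim obsolete row versions and refresh optimizer statistics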
    IO distribution & disc layouts
It is far better to start out with good disc layouts rather than retro-fix a production database.
As with any DBMS, for resilience, the recovery components (eg. backups, WAL, archived WAL logs) should be kept on devices separate from the actual data.
    So the basic rules for resilience are as follows.
    For non disc array or JBOD systems =>
    keep recovery components separate from data on dedicated discs etc
    keep WAL and data on separate disc controllers
    mirror WAL across discs ( preferably across controllers) for protection against WAL spindle loss
    For SAN based disc arrays (eg HP XP12000) =>
    keep recovery components separate from data on dedicated LUNs etc
    use Host Adapter Multipathing drivers (such as mpxio) with 2 or more HBAs for access to SAN .
    Deploy application data on mirrored/striped (ie RAID 1+0) or write-cache fronted RAID 5 storage.
    The WAL log IO should be configured to be osync for resilience (see basic tuning in later section).
    Ensure that every PostgreSQL component on disc is resilient (duplexed) !
Recovery can be very stressful...
    Moving onto IO performance, it is worth noting that WAL IO and general data IO access have different IO
    characteristics.
    WAL sequential access (write mostly)
    Data sequential scan, random access write/read
The basic rules for good IO performance are as follows:
    use tablespaces to distribute data and thus IO across spindles or disc array LUNs
    keep WAL on dedicated spindles/LUNs (mirror/stripe in preference to RAID 5)
    keep WAL and arch WAL on separate spindles to reduce IO on WAL spindles.
RAID or stripe data across discs/LUNs in 1 MB chunks/units if unsure what chunk size to use.
    For manageability, keep the software distr and binaries separate from the database objects.
    Likewise, keep the system catalogs and non-application data separate from the application specific data.
    5 distinct storage requirements can be identified =>
    Software tree (Binaries, Source, distr)
    Shared PG sys data
    WAL logs
    Arch WAL logs
    Application data
For the purposes of this document, the following minimal set of FS are suggested =>
/opt/postgresql/8.0.2              # default 4 Gb for software tree
/var/opt/postgresql                # default 100 Mb
/var/opt/postgresql/CLUST/sys      # default size 1 Gb for shared sys data
/var/opt/postgresql/CLUST/wal      # WAL location, mirrored/striped
/var/opt/postgresql/CLUST/archwal  # archived WALs
/var/opt/postgresql/CLUST/data     # application data + DB sys catalogs, RAID 5
where CLUST is your chosen name for the Postgres DB cluster.
For enhanced IO distribution, a number of .../data FS (eg data01, data02 etc) could be deployed.
    Pre-requisites !
    The GNU compiler and make software utilities (available on the Solaris 10 installation CDs) =>
    gcc (compiler) ( $ gcc --version => 3.4.3 )
    gmake (GNU make)
    are required and should be found in
    /usr/sfw/bin
Create the Unix acct postgres in group dba, with a home directory of say /export/home/postgresql, using the useradd utility, or hack /etc/group then /etc/passwd, then run pwconv and then passwd postgres.
    Assuming the following FS have been created =>
/opt/postgresql/8.0.2          # default 4 Gb for the PostgreSQL software tree
/var/opt/postgresql            # default 100 Mb
create directories
/opt/postgresql/8.0.2/source   # source code
/opt/postgresql/8.0.2/distr    # downloaded distribution
all owned by user postgres:dba with 700 permissions
    To ensure, there are enough IPC resources to use PostgreSQL, edit /etc/system and add the following lines
    =>
    set shmsys:shminfo_shmmax=1300000000
    set shmsys:shminfo_shmmin=1
    set shmsys:shminfo_shmmni=200
    set shmsys:shminfo_shmseg=20
    set semsys:seminfo_semmns=800
    set semsys:seminfo_semmni=70
set semsys:seminfo_semmsl=270    # defaults to 25
set rlim_fd_cur=1024             # per process file descriptor soft limit
set rlim_fd_max=4096             # per process file descriptor hard limit
Then on the console (log in as root) =>
$ init 0
ok boot -r
    Download Source
    Download the source codes from http://www.postgresql.org (and if downloaded via Windows, remember
    to ftp in binary mode) =>
    Distributions often available include =>
    postgresql-XXX.tar.gz => full source distribution.
    postgresql-base-XXX.tar.gz => Server and the essential client interfaces
    postgresql-opt-XXX.tar.gz => C++, JDBC, ODBC, Perl, Python, and Tcl interfaces, as well as multibyte
    support
    postgresql-docs-XXX.tar.gz => html docs
    postgresql-test-XXX.tar.gz => regression test
For a working, basic PostgreSQL installation supporting JDBC applications, simply use the "base" distribution.
    Create Binaries
    Unpack Source =>
    $ cd /opt/postgresql/8.0.2/distr
    $ gunzip postgresql-base-8.0.2.tar.gz
    $ cd /opt/postgresql/8.0.2/source
    $ tar -xvof /opt/postgresql/8.0.2/distr/postgresql-base-8.0.2.tar
    Set Unix environment =>
    TMPDIR=/tmp
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
    export PATH TMPDIR
    Configure the build options =>
    $ cd /opt/postgresql/8.0.2/source/postgresql-8.0.2
$ ./configure --prefix=/opt/postgresql/8.0.2 --with-pgport=5432 --without-readline CC=/usr/sfw/bin/gcc
Note => the --enable-thread-safety option failed
    And build =>
    $ gmake
    $ gmake install
    On an Ultra 5 workstation, this gives 32 bit executables
    Setup Unix environment
    Add to environment =>
    LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
    PATH=/opt/postgresql/8.0.2/bin:$PATH
    export PATH LD_LIBRARY_PATH
    Create Database(Catalog) Cluster
    Add to Unix environment =>
    PGDATA=/var/opt/postgresql/CLUST/sys
    # PG sys data , used by all DBs
    export PGDATA
    Assuming the following FS has been created =>
    /var/opt/postgresql/CLUST/sys
    # default size 1Gb
    where CLUST is your chosen name for the Postgres DB cluster,
    initialize database storage area, create shared catalogs and template database template1 =>
$ initdb -E UNICODE -A password -W
# DBs have default Unicode char set, use basic passwords, prompt for the super user password
    Startup, Shutdown and basic tuning of servers
    Check servers start/shutdown =>
    $ pg_ctl start -l /tmp/logfile
    $ pg_ctl stop
    Next, tune the PostgreSQL instance by editing the configuration file $PGDATA/postgresql.conf .
    First take a safety copy =>
    $ cd $PGDATA
    $ cp postgresql.conf postgresql.conf.orig
    then make the following (or similar changes) to postgresql.conf =>
    # listener
    listen_addresses = 'localhost'
    port = 5432
    # data buffer cache
    shared_buffers = 10000
    # each 8Kb so depends upon memory available
    #checkpoints
    checkpoint_segments = 3
    # default
    checkpoint_timeout = 300
    # default
    checkpoint_warning = 30
# default - logs warning if ckpt interval < 30s
    # log related
    fsync = true
    # resilience
    wal_sync_method = open_sync
    # resilience
    commit_delay = 10
    # group commit if works
    archive_command = 'cp "%p" /var/opt/postgresql/CLUST/archwal/"%f"'
    # server error log
    log_line_prefix = '%t :'
    # timestamp
    log_min_duration_statement = 1000
    # log any SQL taking more than 1000ms
    log_min_messages = info
    #transaction/locks
    default_transaction_isolation = 'read committed'
    Restart the servers =>
    $ pg_ctl start -l /tmp/logfile
    Create the Database
    This requires the FS =>
    /var/opt/postgresql/CLUST/wal
    # WAL location
    /var/opt/postgresql/CLUST/archwal
    # archived WALs
    /var/opt/postgresql/CLUST/data
    # application data + DB sys catalogs
    plus maybe also =>
    /var/opt/postgresql/CLUST/backup
    # optional for data and config files etc as staging
    area for tape
Create the clusterwide tablespaces (in this example, a single tablespace named "appdata") =>
    $ psql template1
    template1=# CREATE TABLESPACE appdata LOCATION '/var/opt/postgresql/CLUST/data';
    template1=# SELECT spcname FROM pg_tablespace;
    spcname
    pg_default
    pg_global
    appdata
    (3 rows)
    and add to the server config =>
    default_tablespace = 'appdata'
    Next, create the database itself (eg name = db9, unicode char set) =>
    $ createdb -D appdata -E UNICODE -e db9
    # appdata = default TABLESPACE
    $ createlang -d db9 plpgsql
    # install 'Oracle PL/SQL like' language
    WAL logs are stored in the directory pg_xlog under the data directory. Shut the server down & move the
    directory pg_xlog to /var/opt/postgresql/CLUST/wal and create a symbolic link from the original location in
    the main data directory to the new path.
    $ pg_ctl stop
    $ cd $PGDATA
    $ mv pg_xlog /var/opt/postgresql/CLUST/wal
    $ ls /var/opt/postgresql/CLUST/wal
    $ ln -s /var/opt/postgresql/CLUST/wal/pg_xlog $PGDATA/pg_xlog
    # soft link as across FS
    $ pg_ctl start -l /tmp/logfile
Assuming all is now working OK, shut down PostgreSQL and back up all the PostgreSQL-related FS above... just in case!
    User Accounts
    Create 1 * power user to create/own/control the tables (using psql) =>
$ psql template1
    create user cxd with password 'abc';
    grant create on tablespace appdata to cxd;
    Do not create any more superusers or users that can create databases!
    Now create n* enduser accts to work against the data =>
$ psql template1
    CREATE GROUP endusers;
    create user enduser1 with password 'xyz';
    ALTER GROUP endusers ADD USER enduser1;
    $ psql db9 cxd
grant select on <table> to group endusers;
    JDBC driver
    A pure Java (Type 4) JDBC driver implementation can be downloaded from
    http://jdbc.postgresql.org/
    Assuming the use of the SDK 1.4 or 1.5, download
    postgresql-8.0-311.jdbc3.jar
    and include this in your application CLASSPATH.
    (If moving JAR files between different hardware types, always ftp in BIN mode).
    Configure PostgreSQL to accept JDBC Connections
    To allow the postmaster listener to accept TCP/IP connections from client nodes running the JDBC
    applications, edit the server configuration file and change
    listen_addresses = '*'
    # * = any IP interface
    Alternatively, this parameter can specify only selected IP interfaces ( see documentation).
In addition, the client authentication file will need to be edited to allow access to our database server.
    First take a backup of the file =>
    $ cp pg_hba.conf pg_hba.conf.orig
    Add the following line =>
host    db9    cxd    0.0.0.0/0    password
where, for this example, database db9, user cxd, auth password
    Switching JDBC applications from Oracle to PostgreSQL
    The URL used to connect to the PostgreSQL server should be of the form
    jdbc:postgresql://host:port/database
    If used, replace the line (used to load the JDBC driver)
    Class.forName ("oracle.jdbc.driver.OracleDriver");
    with
    Class.forName("org.postgresql.Driver");
    Remove any Oracle JDBC extensions, such as
    ((OracleConnection)con2).setDefaultRowPrefetch(50);
    Instead, the row pre-fetch must be specified at an individual Statement level =>
    eg.
PreparedStatement pi = con1.prepareStatement("select ...");
    pi.setFetchSize(50);
    If not set, the default fetch size = 0;
    Likewise, any non ANSI SQL extensions will need changing.
    For example sequence numbers
    Oracle => online_id.nextval
    should be replaced by
    PostgreSQL => nextval('online_id')
Oracle "hints" embedded within SQL statements are ignored by PostgreSQL.
    Now test your application!
    Concluding Remarks
    At this stage, you should now have a working PostgreSQL database fronted by a JDBC based application,
and the foundations will have been laid for:
A reasonable level of resilience (recoverability)
A good starting IO distribution
The next step is to tune the system under load... and that's another doc...
    Chris Drawater has been working with RDBMSs since 1987 and the JDBC API since late 1996, and can
    be contacted at [email protected] or [email protected] .
Appendix 1 - Example .profile
    TMPDIR=/tmp
    export TMPDIR
PATH=/usr/bin:/usr/ucb:/etc:.:/usr/sfw/bin:/usr/local/bin:/usr/ccs/bin:$PATH
    export PATH
    # PostgreSQL 802 runtime
    LD_LIBRARY_PATH=/opt/postgresql/8.0.2/lib
    PATH=/opt/postgresql/8.0.2/bin:$PATH
    export PATH LD_LIBRARY_PATH
    PGDATA=/var/opt/postgresql/CLUST/sys
    export PGDATA

  • ASM Disks on Solaris 10

    All,
I have presented RAID storage LUNs to a 2-node cluster with multipathing enabled via standard Solaris MPxIO. I have also created special files using mknod instead of using the standard SCSI names. I am wondering whether this is sufficient to retain device names across server reboots on both nodes?
    Thanks
    Stalin

Using mknod is a good way to handle this when the major and minor numbers of a device differ between the nodes, and that is enough.
    http://dbaforums.org/oracle/lofiversion/index.php?t9102.html
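A rough sketch of how such a node is usually created (the device path and the major/minor numbers below are placeholder example values; read the real ones with ls -lL):

# show the major,minor of the underlying MPxIO character device
ls -lL /dev/rdsk/c3t<WWN>d0s6
# crw-r-----  1 root sys  118, 1234 ...   <- 118 = major, 1234 = minor (examples)

# create a stable-named raw node with the same major/minor on each node
mknod /dev/asmdisk01 c 118 1234
chown oracle:dba /dev/asmdisk01
chmod 660 /dev/asmdisk01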

  • Solaris 10 SAN Boot issue

During Solaris 10 installation on a SAN-based disk, the Sun Fire V10 would not reboot after the first installation CD; it does a memory dump...

I really don't like SAN boot for the operating system; you have too many points of failure for basic operation.
Anyway, during installation, ensure that you access the disk through just one path (using zoning, for example). After you enable and configure MPxIO, you can enable the other paths.
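On Solaris 10 the enable step after installation is usually done with stmsboot, which also rewrites /etc/vfstab and the dump configuration before the reboot (a sketch):

# enable MPxIO on the FC paths and accept the prompted reboot
stmsboot -e
# after the reboot, confirm the boot device has moved under /scsi_vhci
stmsboot -L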

MPxIO not working

On an Ultra 60 with 2x X6729A HBAs and a 5200.
All that I get is the devices appearing twice, pointing at the same disk, like c2t0d0 and c3t0d0; if I format and label one, the other is formatted and labeled as well.
Everything looks like it is working: correct driver and packages; tried with both Solaris 10 and Solaris 9, same behaviour.
The only thing that differs from the MPxIO documentation is that in Solaris 10 the driver appears to be "iscsi".
If I try the command
# mpathadm list initiator-port
and subsequently show, it says the device is "iscsi" and not "fibre channel".
This is the only difference so far.
luxadm -e port shows both ports connected (and, from the format command, also working).
luxadm probe, display, etc.: everything fine, the storage is there.
cfgadm -l always shows port c1 unconfigured; trying to configure it, even with -f, does not work.
    Can anyone help me ?
    thanks
    Max

P.S. I forgot to say: fcinfo hba-port says there is no port.
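A few checks that usually narrow this kind of thing down (a sketch; the exact driver names depend on the HBA model):

# which driver actually claimed the HBA? MPxIO/Traffic Manager needs the
# leadville stack (fp with qlc or emlxs), and fcinfo only talks to that stack
prtconf -D | grep -i fibre
fcinfo hba-port
# on Solaris 10, MPxIO for FC is enabled globally with stmsboot (reboot required)
stmsboot -e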

  • Solaris 10 mpio and Emulex LP11002 HBA

    Greetings,
Has anyone tried to use an Emulex LP11002 HBA with the Solaris 10 MPxIO (mpio) driver?
    Kindly advise.
    Thanks and Regards

    Hi All,
What I observed is a SAN message just before a sudden reboot (periodically the two nodes of the Oracle RAC reboot). It says "disappeared from fabric".
    The configuration is :
    2 T2000 with 2 Emulex dual on each.
    Two Qlogic 5200 Switches
    1 Stk 6140
    Mpxio enabled.
Everything is updated to the latest version: SC, OBP, Emulex driver, ...
    Jul 29 13:48:04 std01b fctl: [ID 517869 kern.warning] WARNING: fp(1)::GPN_ID for D_ID=10500 failed
    Jul 29 13:48:04 std01b fctl: [ID 517869 kern.warning] WARNING: fp(1)::N_x Port with D_ID=10500, PWWN=10000000c95de248 disappeared from fabric
    Jul 29 13:48:04 std01b fctl: [ID 517869 kern.warning] WARNING: fp(4)::GPN_ID for D_ID=10400 failed
    Jul 29 13:48:04 std01b fctl: [ID 517869 kern.warning] WARNING: fp(4)::N_x Port with D_ID=10400, PWWN=10000000c95de2b4 disappeared from fabric
    Jul 29 13:48:04 std01b e1000g: [ID 801593 kern.notice] NOTICE: pciex8086,105e - e1000g[2] : Adapter copper link is down.
    Jul 29 13:50:53 std01b genunix: [ID 540533 kern.notice] ^MSunOS Release 5.10 Version Generic_125100-07 64-bit
    Jul 29 13:50:53 std01b genunix: [ID 172907 kern.notice] Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    Jul 29 13:50:53 std01b Use is subject to license terms.
    Jul 29 13:50:53 std01b genunix: [ID 678236 kern.info] Ethernet address = 0:14:4f:6f:21:28
    Regards.

  • Mpxio or Powerpath -- with EMC SANs?

    We have some new T3-1B's running on Solaris 10.
    We normally use PowerPath, but I'm trying to see if we can just use mpxio and be done. However, the issue I'm running into is the following.
1. I don't have any way of determining which LUN is which -- in the CX500 I use the LUN name to determine what the disk needs to be assigned to, but on the host I see only the WWN. With Oracle RAC, it's extremely important to make sure the disks are correct on each node.
    2. When adding new disks to the host, the new LUN does not appear on the host even after luxadm probe, and repeated devfsadm. Am I missing anything else? Any help would be appreciated.

    I'm not sure what kind of commentary you are looking for.
    We use Powerpath on Solaris 10 (x86 now, but in the past on T5120s) for multipathing to LUNs on EMC VNX and CX4 SANs. We followed the EMC Powerpath setup guide for Solaris 10 exactly.
After disabling MPxIO, Powerpath was able to take control. Some older versions of Powerpath had a bug where the pseudo names for LUNs would change, causing some headaches.
We keep a mapping of LUN to pseudo name to Solaris dsk labels.
If you use ZFS, I have found a 1:1 match between the zfs_max_vdev_pending value and the max queued-IOs value shown in Powerpath.
When we've had to fail/trespass LUNs over to other SPs on the EMC SANs, Powerpath has handled this elegantly, with the expected warnings in /var/adm/messages.
I recall that we had to explicitly set the Powerpath options for CLARiiON and VNX to managed and the policy to claropt.
When adding LUNs, there is a routine to go through with cfgadm, devfsadm and the Powerpath utilities to see all paths. We use Qlogic HBAs and the qlc leadville driver.
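Regarding the two original questions, when staying with MPxIO only, a rough sketch of what usually works (the controller IDs and device name are placeholders):

# rescan the fabric controllers and rebuild /dev links after presenting a new LUN
cfgadm -al -o show_SCSI_LUN
cfgadm -c configure c2 c3
devfsadm -Cv
# identify a LUN: the t<WWN> portion of the MPxIO device name and the luxadm
# output carry the serial/WWN that the CX reports for that LUN
luxadm display /dev/rdsk/c4t<WWN>d0s2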
