Multi-terabyte LUNs in Solaris 9 (64-bit)

According to the Solaris Volume Manager documentation, recent versions of Solaris 9 now support multi-terabyte disks. There are references to the automatic use of EFI disk labels for LUNs larger than 1 TB, but no explanation beyond that.
I am planning to set up a large volume for backup-to-disk on a Sun Fire V240. Our SAN vendor (EMC) recommends implementing this as a striped meta-LUN spanning multiple RAID 3 groups and discourages using a volume manager to achieve the same striping. Not wanting to deviate from best practice where avoidable, I have therefore configured a 1.3 TB meta-LUN (which will grow to ~3 TB once I get this working) on the SAN and presented it to my Solaris box as a single logical unit.
Unfortunately, format does not understand the geometry at all. During the probe it reports that nsect is being adjusted from 178 to 128, and in the disk list the new volume shows up as unknown with a negative size. To me this suggests a counter is overflowing, wrapping a positive integer into a negative one.
Also, since this is a logical disk I cannot easily determine the real geometry parameters I would need if I were to create my own format.dat entry - that information is not readily available from the management tools provided by EMC (none that I am aware of, anyway).
Do any of you, fellow admins, know of a way to resolve this?
I could do this the hard way, presenting multiple smaller LUNs and chaining them together in a single SVM volume, but I would be giving up a lot of flexibility and taking on extra complexity that I would rather avoid if possible. And the documentation from Sun clearly states that on a 64-bit kernel, Solaris 9 revision 03/04 and later do indeed support single LUNs of multiple terabytes. Documentation bug?

Okay, classic case of asking and then finding the answer yourself.
It turns out the documentation is indeed correct, but a cfgadm -c configure <controller id> ; devfsadm -C is insufficient for this particular task. Just for the hell of it I did a reconfiguration reboot - lo and behold, the disks showed up straight away in format with the correct geometry and size. Quite possibly some necessary magic happens during initialization of the ssd driver that doesn't happen during a DR operation, but it's also possible I'm just missing something ;)
Anyway, other people might benefit from this, so here you go: Solaris 9 does indeed support single disks of multi-terabyte sizes, but you may need to perform a reboot -- -r (or touch /reconfigure and then reboot) before format can understand their geometry.
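For reference, here is the sequence that ended up working for me; a minimal sketch, assuming the fabric controller shows up as c2 in cfgadm -al (substitute your own controller ID):
    # dynamic reconfiguration alone was not enough in my case
    cfgadm -c configure c2
    devfsadm -C
    # force a reconfiguration reboot so the ssd driver re-probes the large LUN
    touch /reconfigure
    reboot
    # (equivalent: reboot -- -r)
    # after the reboot the LUN showed up in format with sane geometry and an EFI label
    format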
Cheers!

Similar Messages

  • Multi-initiated LUNs on Sun StorEdge 3310

    I recently received two Sun V20Zs and a StorEdge 3310, and I have been asked to cluster these two machines with the 3310 as shared storage. The 3310 came with two RAID controllers.
    My question is: can I create multi-initiated LUNs and have these LUNs mapped to both V20Zs? The reason for the multi-initiated LUNs is to facilitate failover in case one of the V20Zs becomes unavailable.
    V20Zs are running Fedora Core 3.
    Thanks for your help.

    A good starting point when hunting for a problem is the output of iostat -En.
    It reports any kind of error seen on a particular disk since the last reboot. If you are seeing high error counts, you can move on to running a file system check - fsck - on the drive.
    If you have any kind of RAID set up, check whether any issues are being reported. Depending on what you use to manage the RAID - Veritas Volume Manager or Solstice DiskSuite - it will almost certainly tell you if there is an issue. A vxprint for Veritas or a metastat for DiskSuite will show you what you have set up and whether there are any recognized disk problems.
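    A minimal sketch of those checks, with example device and metadevice names (adjust to your own configuration):
    # per-device error counters accumulated since the last boot
    iostat -En
    # file system check on an unmounted UFS slice (example device)
    fsck -F ufs /dev/rdsk/c1t0d0s6
    # SVM / DiskSuite view of metadevices and their state
    metastat
    # Veritas Volume Manager view of disk groups, volumes and plexes
    vxprint -ht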

  • Cannot label a 6 terabyte LUN on a Solaris 8 host

    Hello,
    I have a 6 terabyte LUN that I cannot label. I get the following error:
    Warning: error writing VTOC.
    This is an emcpower device. The OS is Solaris 8 (117350-26).
    -- Milo A.

    Been there, tried that... Solaris 8 will only support LUNs up to 1 TB. You must then use software (SVM, SAM-FS, Veritas, etc.) to stripe several sub-1 TB LUNs together to make your 6 TB volume.
    It is my understanding that, prior to ZFS (newly released), Solaris 9 and 10, properly patched, will support a single LUN of up to 2 TB.
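    For illustration, a rough SVM sketch of that approach, assuming six ~1 TB LUNs have already been presented and labeled (all device names below are made up):
    # state database replicas are required before any metadevice can be built
    metadb -a -f -c 2 c2t0d0s7 c2t1d0s7
    # stripe the six LUNs into one ~6 TB metadevice
    metainit d100 1 6 c3t0d0s2 c3t0d1s2 c3t0d2s2 c3t0d3s2 c3t0d4s2 c3t0d5s2 -i 512k
    # note: on Solaris 8 a UFS file system still cannot exceed 1 TB, so the metadevice
    # would need VxFS/SAM-FS or raw access; on a properly patched Solaris 9 a
    # multi-terabyte UFS can be created with: newfs -T /dev/md/rdsk/d100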

  • Mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi

    Hi, guys!
    1. I have a big FS (8 TB) on UFS which contains a lot of small files, ~64 B to 1 MB.
    -bash-3.00# df -h /mnt
    Filesystem Size Used Avail Use% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8.0T 4.3T 3.7T 54% /mnt
    2. But today I noticed errors like this in dmesg: "ufs: [ID 682040 kern.notice] NOTICE: /mnt: out of inodes"
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 8753020 4 100% /mnt
    3. So I decided to remake the file system with new parameters:
    -bash-3.00# mkfs -m /dev/rdsk/c10t600000E00D000000000201A400020000d0s0
    mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=8192,cgsize=143,free=1,rps=1,nbpi=997778,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=y /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    -bash-3.00#
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=f /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    4. I got warnings about the nbpi threshold:
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=n /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi reset to default 1048576
    Warning: 2128 sector(s) in last cylinder unallocated
    /dev/rdsk/c10t600000E00D000000000201A400020000d0s0: 17165172656 sectors in 2793811 cylinders of 48 tracks, 128 sectors
    8381432.0MB in 19538 cyl groups (143 c/g, 429.00MB/g, 448 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    32, 878752, 1757472, 2636192, 3514912, 4393632, 5272352, 6151072, 7029792,
    7908512,
    Initializing cylinder groups:
    super-block backups for last 10 cylinder groups at:
    17157145632, 17158024352, 17158903072, 17159781792, 17160660512, 17161539232,
    17162417952, 17163296672, 17164175392, 17165054112
    5. And my inode count didn't change:
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 4 8753020 1% /mnt
    I found http://wesunsolve.net/bugid.php/id/6595253, which is a bug in mkfs with no workaround. Is ZFS what I need now?

    Well, to fix the bug you referred to you can apply patch 141444-01 or 141445-01.
    However, that bug is only about a misleading error message from mkfs; fixing it will not solve your problem as such.
    It seems that the minimum value for nbpi on a multi-terabyte UFS file system is 1048576, so you won't be able to create a file system with more inodes than that allows.
    The things to try would be to either split the space into two smaller UFS file systems, or go with ZFS, which is the future anyway ;-) (a rough sketch follows below)
    .7/M.
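    For illustration, a minimal sketch of the ZFS route suggested above, reusing the same device (pool and file system names are invented); ZFS allocates inodes dynamically, so the out-of-inodes problem goes away:
    # build a pool on the whole LUN (this destroys the existing UFS data)
    zpool create datapool c10t600000E00D000000000201A400020000d0
    # create a file system and mount it where the old UFS lived
    zfs create -o mountpoint=/mnt datapool/backup
    zfs list datapool/backup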

  • Unable to configure LUN in Solaris 10 (T3-1)

    Hi everyone,
    While trying to configure a LUN on Solaris 10u9, I'm running into a problem getting it configured and available in 'format'. These are the things I tried:
    -bash-3.00# cfgadm -o show_FCP_dev -al c5
    Ap_Id Type Receptacle Occupant Condition
    c5 fc-fabric connected configured unknown
    c5::200400a0b819d8c0,0 disk connected configured unknown
    c5::201400a0b8481acc,0 disk connected configured unknown
    c5::202500a0b8481acc,0 disk connected configured unknown
    c5::20340080e52e4bc4 disk connected unconfigured unknown
    c5::20350080e52e4bc4 disk connected unconfigured unknown
    And if I try to configure it:
    -bash-3.00# cfgadm -c configure c5::20340080e52e4bc4
    cfgadm: Library error: failed to create device node: 20340080e52e4bc4: Invalid argument
    Does anyone have an idea how to fix this?

    Apparently it was a faulty configuration on the SAN side; once that was fixed I could see the LUN in Solaris without problems.

  • How do I map Hitachi SAN LUNs to Solaris 10 and Oracle 10g ASM?

    Hi all,
    I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, running Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC Clusterware
    for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 file system.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses the /dev/rdsk/cWtXdYsZ naming convention at the OS level for disks. However, how would I map this to the Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

    Yes, that is correct; however, the Solaris 10 MPxIO multipathing software we are using with the Hitachi SAN adds an extra layer of complexity and some issues to the ASM configuration, which means ASM may get confused when it attempts to find the new LUNs from the Hitachi SAN at the Solaris OS level. Oracle Metalink note 396015.1 describes this issue.
    So my question is this: how to configure the ASM instance initialization parameter asm_diskstring to recognize the new Hitachi LUNs presented to the Solaris 10 host?
    Lets say that I have the following new LUNs:
    /dev/rdsk/c7t1d1s6
    /dev/rdsk/c7t1d2s6
    /dev/rdsk/c7t1d3s6
    /dev/rdsk/c7t1d4s6
    Would setting the ASM initialization parameter asm_diskstring to /dev/rdsk/c7t1d*s6
    be the correct approach, so that the ASM instance recognizes my new Hitachi LUNs? Solaris needs to map these LUNs to pseudo devices in the Solaris OS for ASM to recognize the new disks.
    How would I set this up in Solaris 10 with Sun multipathing (MPxIO) and Oracle 10g RAC ASM?
    I want to get this right to avoid the dreaded ORA-15072 errors when creating a diskgroup with external redundancy for the Oracle 10g RAC ASM installation process.
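    Not a definitive answer, but a minimal sketch of the usual pattern on Solaris (device names carried over from above; the slice, ownership, and disk group name are assumptions to adapt):
    # the candidate slices must be owned by the oracle user
    chown oracle:dba /dev/rdsk/c7t1d*s6
    chmod 660 /dev/rdsk/c7t1d*s6
    # point ASM at the devices and create the disk group (assumes the ASM instance uses an spfile)
    sqlplus / as sysdba <<'EOF'
    ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/c7t1d*s6' SCOPE=BOTH;
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c7t1d1s6', '/dev/rdsk/c7t1d2s6',
           '/dev/rdsk/c7t1d3s6', '/dev/rdsk/c7t1d4s6';
    EOF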

  • Oracle 11.2 installation on Solaris 64-bit

    Hi
    I am trying to install Oracle 11.2 on my laptop in a Solaris 10 (64-bit) VMware machine; the laptop OS itself is XP 32-bit.
    But every time I run runInstaller it gives errors. I am installing Oracle 11g on 64-bit because Oracle 11g is not available for a 32-bit OS. Please provide me a solution.
    Thanks

    Hi;
    Yes, correct.
    Download the 32-bit Linux + 32-bit Oracle setup for your installation.
    PS: somehow I can't type a message! When I post a message and then press Save Message it gives an IE error! Anyone else facing this issue?
    Regards
    Helios
    Edited by: Helios- Gunes EROL on Mar 22, 2011 2:59 PM

  • Unable to detect LUNs in Solaris 9

    Hi all,
    We added an HBA card to one of the Solaris 9 servers (Sun Fire V240), configured zoning, and presented the LUNs to the server from an EVA6100. But the LUNs are not visible on the OS side. We verified the link, the zone config, and the storage config; all looks fine.
    All required packages and patches have been applied on the server.
    Storage Details:
    Storage Model: EVA 6100
    Server Details:
    OS Version: Sun Solaris 9
    HBA Model: SG-XPCI2FC-EM4-Z - Sun StorageTek PCI-X Enterprise 4Gb Dual Channel FC Emulex Host Bus Adapter, includes standard and low profile brackets. RoHS 6 compliant
    The LUNs are visible at the OK prompt but not in the OS.
    Please let me know if there are additional parameters I need to set.
    Regards,
    parkar
    UAE
    Edited by: parkar on Jul 16, 2012 10:23 PM

    Hi,
    We are still unable to detect the LUNs in the OS.
    We have installed the recommended packages/patches from Oracle as shown below, but the LUNs are still not visible. Kindly check and let us know if any other parameters need to be configured on the Solaris side.
    dvsrv19:/export/home/osteam/SAN_Patches/p10297699_440_Generic/SAN_S9_4.4.15_install_it#./install_it
    Logfile /var/tmp/install_it_Sun_StorEdge_SAN.log : created on Mon Aug 6 12:31:30 GMT 2012
    This routine installs the packages and patches that
    make up Sun StorEdge SAN on Solaris 9 (only).
    Would you like to continue with the installation?
    [y,n,?] y
    Verifying system...
    Checking for incompatible patches : Done
    Begin installation of SAN software
    Installing StorEdge SAN packages -
    Package SUNWsan : Installed Previously.
    Package SUNWcfpl : Installed Previously.
    Package SUNWcfplx : Installed Previously.
    Package SUNWcfclr : Installed Previously.
    Package SUNWcfcl : Installed Previously.
    Package SUNWcfclx : Installed Previously.
    Package SUNWfchbr : Installed Previously.
    Package SUNWfchba : Installed Previously.
    Package SUNWfchbx : Installed Previously.
    Package SUNWfcsm : Installed Previously.
    Package SUNWfcsmx : Installed Previously.
    Package SUNWmdiu : Installed Previously.
    Package SUNWqlc : Installed Successfully.
    Package SUNWqlcx : Installed Successfully.
    Package SUNWjfca : Installed Previously.
    Package SUNWjfcax : Installed Previously.
    Package SUNWjfcau : Installed Previously.
    Package SUNWjfcaux : Installed Previously.
    Package SUNWemlxs : Installed Previously.
    Package SUNWemlxsx : Installed Previously.
    Package SUNWemlxu : Installed Previously.
    Package SUNWemlxux : Installed Previously.
    StorEdge SAN packages installation completed.
    Installing StorEdge SAN patches and required patches -
    Patch 111847-08 : Installed Successfully.
    Patch 113046-01 : Installed Previously.
    Patch 113049-01 : Installed Previously.
    Patch 113039-21 : Later version installed 113039-25.
    Patch 113040-26 : Later version installed 113040-33.
    Patch 113041-14 : Installed Previously.
    Patch 113042-19 : Installed Successfully.
    Patch 113043-15 : Installed Previously.
    Patch 113044-07 : Installed Successfully.
    Patch 114476-09 : Later version installed 114476-10.
    Patch 114477-04 : Installed Successfully.
    Patch 114478-08 : Installed Successfully.
    Patch 114878-10 : Installed Successfully.
    Patch 119914-14 : Later version installed 119914-15.
    Patch installation completed.
    Installation of Sun StorEdge SAN completed Successfully
    Please reboot your system.
    dvsrv19:/export/home/osteam/SAN_Patches/p10297699_440_Generic/SAN_S9_4.4.15_install_it#
    Regards,
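    No definitive fix was posted in this thread; for what it's worth, a rough sketch of the usual checks after the SAN packages are installed and the box has been rebooted (c3 is just a placeholder controller):
    # confirm the HBA ports see the fabric and list the remote ports/LUNs
    luxadm -e port
    cfgadm -al -o show_FCP_dev
    # configure the fabric devices on the relevant controller and rebuild /dev
    cfgadm -c configure c3
    devfsadm -C
    # if the LUNs still do not appear, try a reconfiguration reboot
    reboot -- -r
    format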

  • Removing LUNs from Solaris 10

    Hi all, I have a 2-node Solaris 10 11/06 cluster running Sun Cluster 3.2. Its shared storage is supplied by an EMC CLARiiON SAN. Some of the LUNs used by the cluster have recently been replaced. I now have multiple LUNs on this cluster that are no longer needed. But there's a problem.
    Last week I added, then removed, a new LUN on the cluster, and one of the nodes rebooted. Sun Microsystems analysed the problem and identified the reboot as a known issue (Bug ID 6518348):
    On Solaris 10 Systems, any SAN interruptions causing a path to SAN devices to be temporarily offlined may cause the system to panic. The SAN interruption may be due to regular SAN switch and/or array maintenance, or as simple as a fibre cable disconnect.
    I need to remove the unused LUNs as the SAN that they live on is being moved.
    Is there any way I can safely remove the unused LUNs without causing or requiring server reboots?
    Thanks in advance,
    Stewart

    Hi Stewart,
    Usually devfsadm -Cv (-C = cleanup, -v = verbose) does a good job.
    If that doesn't work, you can use cfgadm -c unconfigure/remove <LUN id>, for example:
    cfgadm -c unconfigure c2::50060161106023dd; cfgadm -c remove c2::50060161106023dd
    Marco

  • Maximum number of LUNs in Solaris 10 and 9

    Hi,
    Please let me know the maximum number of LUNs that can be added to a Solaris 10 and a Solaris 9 OS.
    I am using both QLogic and Emulex fibre cards.

    user5782900 wrote:
    Hmmm, it's an IBM ThinkCentre A50-8175 KAA.
    I don't know whether the system has a driver for the Ethernet chip or not.
    If there are no prompts, then the OS has no knowledge of whatever chipset you might have.
    You will need to figure that out on your own.
    Once you know the actual chipset, you can review the Hardware Compatibility List (HCL)
    http://www.sun.com/bigadmin/hcl/
    and then determine whether you have to manually install a third party driver for what might be there,
    or whether you have to purchase and install another NIC that is usable by Solaris.

  • WL6.0 Multi developer install on Solaris

    Hi,
    I have installed WL6.0 on a Solaris machine in a common directory, logged in as root.
    Everything works fine when I run the WL server as root.
    But, I would like to run the WL server as an ordinary user from my home
    directory. So, I copy the config directory to my area, and then start the WL
    server on a different port.
    The EJBs deploy fine, but I am unable to create JDBC connection pools and
    install startup classes. There is obviously something wrong with starting WL
    from my local area.
    My question is: what are the WL files I need to copy over to my home directory to successfully start the WL server from there?
    Thanks,
    Karthik

    s_n_i_dba wrote:
    So based on Metalink note 207303.1, I have to install the 10.1.0.5 Oracle client on this machine, which will work with an 11.2.0.2 Oracle database. I can find the 10.1.0.5 patchset in MOS, but I'm not able to find the base release software on OTN or the edelivery site.
    A 10.1.0.5 client is available as Instant Client:
    http://www.oracle.com/technetwork/topics/sol64soft-085649.html
    With IC there are no "admin" tools etc. like in the "full" install, and not all application interfaces/APIs are supported, so depending on your requirements it may not be usable (it's one of the very first releases of Instant Client; the current ones are more developed).
    Edited by: orafad on Apr 15, 2012 10:19 AM

  • More than 1 million files on multi-terabyte UFS file systems

    How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.

    Thanks, you are right on. According to official Sun channels:
    Paula Van Wie wrote:
    Hi Ron,
    This is what I've found out.
    No, there is no way around the limitation. I would suggest an alternate
    file system if possible; ZFS would give them the most space available,
    as inodes are no longer used.
    As the customer noted, if the inode limits were increased significantly
    and an fsck were required, there is the possibility that the fsck could
    take days or weeks to complete. So, in order to avoid angry customers
    having to wait a day or two for fsck to finish, the limit was imposed.
    And so far I've heard that there should not be corruption using ZFS and
    RAID.
    Paula

  • Index-building strategy for multi-terabyte database

    Running 11g.
    We have about 17 million XML files to load into a brand new database. They are to be indexed with a context index. After the 17 million records are imported, there will be future imports of approximately 30,000 records every two weeks, loaded in batch.
    1. What is the best way to load the 17 million? Should they be loaded in chunks, say 1 M at a time, and then indexed? Or load them all, then index them all at once? (Based on preliminary tests the initial load will take 9 days and the indexing will take a little under 7 days.)
    I vote for doing it in chunks, since the developers will want access to the system periodically during the data load and I want to be able to start and stop the process without causing a catastrophe.
    But I have read that this can introduce fragmentation into the index. Is this really something to worry about, given that my chunks are so large? Would there be any real benefit from doing the entire index operation in one go?
    2. After each of the bi-weekly 30,000-record imports, we will run CTX_DDL.SYNC_INDEX which I estimate will take about 20 minutes to run. Will this cause fragmentation over time? I have read that it is advisable to perform a full index rebuild occasionally but obviously we won't want to do that; that would take days and days. I guess the alternative is to run OPTIMIZE_INDEX
    http://download.oracle.com/docs/cd/B28359_01/text.111/b28304/cddlpkg.htm#i998200
    ...any advice on how often to run this, considering the hugeness of my dataset? Fortunately the data will be read-only (apart from the bi-weekly load), so I'm thinking that there won't be very much fragmentation occurring at all. Am I correct?

    There are two types of fragmentation, one during index creation and one during additional loads. The first can be minimised with some tweaking; the second should not be a big issue because there are only 26 additional loads per year.
    1)
    You will not have any issues loading the XML and indexing it with Oracle Text as long as you use sensible partitioning. Index the whole dataset in one go; the parallel clause works very well during indexing. Have a look at the initial memory allocation for every indexing thread that is created; I found the default values in 10g and 9i far too small. When you use 100 partitions it will create 100 Oracle Text indexes, nothing to be scared of. The more partitions you use, the less index memory is required for each thread, but it adds to fragmentation.
    You can reduce the initial indexing time and fragmentation in various ways:
    Use parallel indexing and partitioning; I used 6 threads over 25 partitions and it reduced the time to index 8 million XML documents from 2 days to less than 12 hours (XML documents stored as CLOBs, using a user_datastore).
    Tweak the indexing memory to your requirements; somehow indexing is more memory-bound than CPU-bound. The more memory you use, the less fragmentation occurs and the earlier it will finish.
    Use a representative list of stop words. For this I usually query the DR$<index_name>$I token tables directly (query is from memory):
    SELECT token_text, SUM(token_count) total
    FROM DR$<index_name>$I
    WHERE UPPER(token_text) = token_text   -- filters out the XML tag tokens
    GROUP BY token_text
    ORDER BY total DESC;
    Have a look at the top entries and see whether you can add them to the stop word list; they will cause trouble later on when querying the index.
    2) We found an index rebuild (drop/create) useful for the following scenarios, but this is more a feeling than solid science:
    1 million XML records, daily loads of around 1 thousand records with many updates => quarterly rebuilds
    8 million XML records, bi-weekly loads of around 20 thousand records with very few updates => once a year or so.
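    Following up on the SYNC_INDEX / OPTIMIZE_INDEX question above, a minimal maintenance sketch for the bi-weekly loads (index name, credentials and memory figure are placeholders):
    # run as the schema that owns the context index
    sqlplus text_owner/secret <<'EOF'
    -- bring the newly loaded rows into the index after each batch load
    EXEC ctx_ddl.sync_index(idx_name => 'MY_XML_CTX_IDX', memory => '500M');
    -- defragment periodically instead of a full rebuild (maxtime is in minutes)
    EXEC ctx_ddl.optimize_index(idx_name => 'MY_XML_CTX_IDX', optlevel => 'FULL', maxtime => 120);
    EOF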

  • Installation of Sybase GW in a separate home - 11gR1 on Sun Solaris 64-bit

    Hi all
    I have a new installation of 11.1.0.6 database & listener (port 1521) patched to 11.1.0.7 with the most recent patchset and PSU.
    Now I need to install Gateways 11.1.0.6 in a separate home. I have jotted down the following:
    1. Install Sybase Gateway under new GW_ORACLE_HOME
    2. Create new Gateway Listener (new port 1522) using netca from GW_ORACLE_HOME
    3. Make modification to listener.ora under GW_ORACLE_HOME
    4. Restart Gateway Listener
    5. Configure Oracle Database for Gateway Access by creating tnsnames.ora entry under ORACLE_HOME
    6. Create database link
    question A) Am I missing any steps or is this all I need to do?
    question B) Do I need to run the 11.1.0.7 patchset on GW_ORACLE_HOME as well?

    You need to modify the gateway init file so it points to the Sybase database you want to access. OUI only configures the connect info; if you need other settings you need to add them yourself.
    Regarding the 11.1.0.7 patchset: I strongly recommend applying the latest patchset, as it contains fixes for many known issues.
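    Not authoritative, but a rough sketch of the pieces involved (the SID name dg4sybs, paths, host, port and database are all assumptions to adapt):
    # gateway init file (initdg4sybs.ora in the gateway home's admin directory)
    HS_FDS_CONNECT_INFO=sybhost:5000/sybdb
    HS_FDS_TRACE_LEVEL=OFF
    # SID_LIST entry for the gateway listener on port 1522 (listener.ora in GW_ORACLE_HOME)
    (SID_DESC =
      (SID_NAME = dg4sybs)
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/gateway)
      (PROGRAM = dg4sybs)
    )
    # tnsnames.ora entry in the database ORACLE_HOME, then the database link
    dg4sybs =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = gwhost)(PORT = 1522))
        (CONNECT_DATA = (SID = dg4sybs))
        (HS = OK)
      )
    SQL> CREATE DATABASE LINK sybase_link CONNECT TO "syb_user" IDENTIFIED BY "syb_pwd" USING 'dg4sybs';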

  • What is the limit on the size of a disk under Solaris 8?

    Hello,
    I have a problem when I try to run the format command to label a 1 TB disk under Solaris 8.
    # format.......
    114. c9t14d3 <DGC-RAID3-0322 cyl 32766 alt 2 hd 1 sec 0>
    /pci@8,700000/lpfc@4/sd@e,3
    Specify disk (enter its number)[115]: 114
    selecting c9t14d3
    [disk formatted]
    Disk not labeled. Label it now? y
    Warning: error writing VTOC.
    Warning: no backup labels
    Write label failed
    format> print
    PARTITION MENU:
    0 - change `0' partition
    1 - change `1' partition
    2 - change `2' partition
    3 - change `3' partition
    4 - change `4' partition
    5 - change `5' partition
    6 - change `6' partition
    7 - change `7' partition
    select - select a predefined table
    modify - modify a predefined partition table
    name - name the current table
    print - display the current table
    label - write partition map and label to the disk
    !<cmd> - execute <cmd>, then return
    quit
    partition> p
    Current partition table (default):
    Total disk cylinders available: 32766 + 2 (reserved cylinders)
    Arithmetic Exception - core dumped

    I think maybe if you split it into two LUNs, you can stitch them back together with SVM. But even that's not certain.
    Depends on what you're going to do with the device at that point. UFS on Solaris 8 also does not support volumes of 1 TB or larger, so you'd have to use it as a raw slice or put a file system on it that does support larger sizes.
    You need later versions of Solaris 9 to get multi-terabyte UFS support (a separate issue from multi-terabyte LUN support).
    Darren
