Disks on Solaris
Hi
I have a Sun Fire V240 with 4 disks.
On disk0, I have Solaris 10, pre-installed by Sun.
On disk1, I have Solaris 9.
When I boot from disk0 and run the format command, I get these messages:
Unknown controller 'MD21' - /etc/format.dat (15)
Unknown controller 'MD21' - /etc/format.dat (20)
Unknown controller 'MD21' - /etc/format.dat (25)
Unknown controller 'MD21' - /etc/format.dat (151)
Unknown controller 'MD21' - /etc/format.dat (155)
Unknown controller 'MD21' - /etc/format.dat (159)
Unknown controller 'MD21' - /etc/format.dat (163)
Unknown controller 'MD21' - /etc/format.dat (167)
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> system
/pci@1c,600000/scsi@2/sd@0,0
1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> miroir
/pci@1c,600000/scsi@2/sd@1,0
2. c0t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@2,0
3. c0t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number):
When I boot from disk1 and run the format command, I get this:
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> system
/pci@1c,600000/scsi@2/sd@0,0
1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424> miroir
/pci@1c,600000/scsi@2/sd@1,0
2. c1t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@2,0
3. c1t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number):
Is it normal that I see c0tXdYsZ device names when booted from disk0 and c1tXdYsZ when booted from disk1?
Thanks for your help.
AL
Neither one is "wrong", if that's what you're asking.
Controllers are assigned unused numbers as they are discovered, and the assignments are recorded as links in /dev/cfg. Any particular installation may therefore use a different controller number for the same piece of hardware; this normally reflects differences in the two installations' histories.
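The controller-number-to-hardware mapping can be inspected directly through those links; for example (output illustrative, paths taken from the format listing above):

```shell
# Each cN controller number is a symlink into the physical device tree.
# Comparing this listing between the two installations shows why one
# assigned c0 and the other c1 to the same SCSI controller.
ls -l /dev/cfg
# e.g.  c0 -> ../../devices/pci@1c,600000/scsi@2:scsi
```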
Darren
Similar Messages
-
Download boot disk of Solaris 2.5.1
Hi, all.
Where can I download a boot disk of Solaris 2.5.1?
Thanks
What do you mean? An ISO image of the boot CD-ROM? Or do you want to copy a boot disk from one system to another? I don't know about the former; my guess is that unless you are booting from already existing media in a CD-ROM drive on a system in your LAN (which is doable), the answer is "no".
The latter idea, using dd to copy one disk to another, is doable with caveats: the disks must be of the same type and the hardware configurations of the two systems must match. There are documents on SunSolve covering the details. There are many risks, and even if you get a copy, it might fail to boot. -
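The dd approach described above might be sketched as follows (device names are hypothetical, both disks must be identical models, and the copy is destructive to the target):

```shell
# Copy the entire source disk (slice 2 = whole disk) to an identical disk.
# DESTRUCTIVE to the target -- verify both device names with format first.
dd if=/dev/rdsk/c0t0d0s2 of=/dev/rdsk/c0t1d0s2 bs=1024k

# Install the boot block on the copy so it can actually boot (SPARC/UFS)
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
```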
Problem encountered installing new disk on Solaris VMware
Hi Guys,
I'm making my first attempt at creating a new disk on a Solaris 10 filesystem, but I'm having trouble mounting it. I've been following instructions found via Google searches, but now I'm stuck and could really use some expert advice.
Details:
Host OS: Windows Vista
Guest OS: Solaris 10 x64, UFS filesystem
VMware Workstation: version 6.0
Steps taken:
1) Shut down the VM; Edit the VMware configuration: Select Virtual Machines -> Settings; Added new hard disk device of 20GB (SCSI:0:0)
2) Booted Solaris VM
3)
*# format*
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
1. c2t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci1000,30@10/sd@0,0
Specify disk (enter its number): 1
selecting c2t0d0
[disk formatted]
4)
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
5)
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 2604c
partition> p
Current partition table (unnamed):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 3 - 2606 19.95GB (2604/0/0) 41833260
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0
partition> label
Ready to label disk, continue? y
partition> q
format> q
6)
*# newfs /dev/dsk/c0d0s2*
newfs: construct a new file system /dev/rdsk/c0d0s2: (y/n)? y
Warning: inode blocks/cyl group (431) >= data blocks (246) in last
cylinder group. This implies 3950 sector(s) cannot be allocated.
/dev/rdsk/c0d0s2: 41877504 sectors in 6816 cylinders of 48 tracks, 128 sectors
20448.0MB in 426 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
super-block backups for last 10 cylinder groups at:
40898592, 40997024, 41095456, 41193888, 41292320, 41390752, 41489184,
41587616, 41686048, 41784480
7)
*# mountall*
mount: /tmp is already mounted or swap is busy
mount: /dev/dsk/c0d0s7 is already mounted or /export/home is busy
8)
*# df -h*
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 6.9G 5.6G 1.2G 83% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 1.1G 968K 1.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
fd 0K 0K 0K 0% /dev/fd
swap 1.1G 40K 1.1G 1% /tmp
swap 1.1G 28K 1.1G 1% /var/run
/dev/dsk/c0d0s7 12G 1.5G 11G 13% /export/home
Where am I going wrong? I don't see a mount for the new 20GB disk I created.
Please advise/help.
Thanks!
Thanks for your response, but still no luck:
# vi vfstab
"vfstab" 13 lines, 457 characters
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0d0s1 - - swap - no -
/dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
*/dev/dsk/c0d0s2 /dev/rdsk/c0d0s2 /u01 ufs 1 yes -*
/dev/dsk/c0d0s7 /dev/rdsk/c0d0s7 /export/home ufs 2 yes
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
# mountall
/dev/rdsk/c0d0s2 is clean
mount: Nonexistent mount point: /u01
mount: /tmp is already mounted or swap is busy
mount: /dev/dsk/c0d0s7 is already mounted or /export/home is busy -
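For what it's worth, the transcript above points at two likely problems: newfs was run against c0d0s2 (slice 2 of the boot disk) instead of the newly partitioned disk c2t0d0, and the /u01 mount point was never created. A hedged sketch of the fix, assuming the slice-0 layout created in step 5:

```shell
# Create the mount point that the vfstab entry references
mkdir /u01

# Build the filesystem on the NEW disk's slice 0, not on the boot disk
newfs /dev/rdsk/c2t0d0s0

# The vfstab line should then read (replacing the c0d0s2 entry):
#   /dev/dsk/c2t0d0s0  /dev/rdsk/c2t0d0s0  /u01  ufs  2  yes  -

# Mount everything listed in /etc/vfstab and confirm
mountall
df -h /u01
```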
Need to format the old ASM disks on Solaris 10
Hello Gurus,
We uninstalled ASM on Solaris, but while installing ASM again it says the mount point is already used by another instance. However, no database or ASM instance is running (this is a new server), so we need to use the dd command, or reformat the raw devices that already exist and were used by the old ASM instance. Here is the confusion...
There are 6 LUNs presented to this host for ASM; they are not used by anyone...
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
2. c2t60050768018E82BE98000000000007B2d0 <IBM-2145-0000-150.00GB>
/scsi_vhci/ssd@g60050768018e82be98000000000007b2
3. c2t60050768018E82BE98000000000007B3d0 <IBM-2145-0000 cyl 44798 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g60050768018e82be98000000000007b3
4. c2t60050768018E82BE98000000000007B4d0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g60050768018e82be98000000000007b4
5. c2t60050768018E82BE98000000000007B5d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b5
6. c2t60050768018E82BE98000000000007B6d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b6
7. c2t60050768018E82BE98000000000007B7d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b7
But the thing is, when we list the raw devices with ls -ltr in /dev/rdsk, all the disks are owned by root and others, not by oracle:dba or oracle:oinstall.
root@b2dslbmom3dbb3301 [dev/rdsk]
# ls -ltr
total 144
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:h,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:h,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:g,raw
lrwxrwxrwx 1 root root 68 Jun 13 15:34 c2t60050768018E82BE98000000000007B2d0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:wd,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:47 c2t60050768018E82BE98000000000007B3d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:48 c2t60050768018E82BE98000000000007B4d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:49 c2t60050768018E82BE98000000000007B5d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:51 c2t60050768018E82BE98000000000007B6d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:53 c2t60050768018E82BE98000000000007B7d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:h,raw
So we need to know where the raw devices for Oracle are located, so that we can use the dd command to remove the old ASM header from each raw device before starting a fresh installation.
But when we use a command left behind by the Unix person who no longer works here, we are able to see the following information:
root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
We also have the mknod information, with the major and minor numbers used to create the softlinks for the ASM raw devices:
cd /dev/oraasm
/usr/sbin/mknod asm_disk_03 c 118 232
/usr/sbin/mknod asm_disk_02 c 118 224
/usr/sbin/mknod asm_disk_01 c 118 216
/usr/sbin/mknod asm_ocrvote_03 c 118 208
/usr/sbin/mknod asm_ocrvote_02 c 118 200
/usr/sbin/mknod asm_ocrvote_01 c 118 192
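To clear old ASM disk headers before reinstalling, the usual approach is to zero the start of each raw device with dd. A hedged sketch, assuming the /dev/oraasm nodes created by the mknod commands above (this is destructive; verify that each node maps to the intended LUN first):

```shell
# Zero the first 100 MB of each ASM device to erase old ASM metadata.
# DESTRUCTIVE: confirm major/minor numbers against the intended LUNs
# before running this.
for d in asm_disk_01 asm_disk_02 asm_disk_03 \
         asm_ocrvote_01 asm_ocrvote_02 asm_ocrvote_03; do
    dd if=/dev/zero of=/dev/oraasm/$d bs=1024k count=100
done
```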
But the final thing is that we need to find out where the above configuration is located on the host; I think this method of presenting raw devices is different from the normal method on Solaris.
Please help me to proceed with my installation... thanks in advance.
I am really confused by the following command: where is the Oracle raw-device information coming from, since there is no such info in /dev/rdsk? (The OS is Solaris 10.)
root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
Please help....
Hi Winner,
For your issue I suggest closing your thread here (changing its status to answered) and moving it to Forum Home » Grid Computing » Automatic Storage Management, where you will get a quicker response.
Regards,
Helios -
Large Disks On Solaris Intel 8EA
I am trying to set up a large disk (27 or 40 GB Maxtor) using Solaris 8 EA on a Pentium Pro machine. The disk partitions fine, and I am able to set up multiple filesystems of 8 or 9 GB easily. But when I try to access these filesystems I get several disk errors and timeouts.
I have tried several configurations for the disk. Has anyone faced the same problem? Is this fixed in the final edition of Solaris 8?
Thanks,
Harshan.
Actually the hardware is fine. I tried testing it with different operating systems. I think Solaris 8 has a bug in dealing with large SCSI and IDE drives. -
Using a 160GB Disk with Solaris 8
Hello all:
I am using an Ultra5 system with OBP 3.25.3, Solaris 8 [10/01]. I have a 160GB Seagate ST3160812A disk drive that I am trying to use as a second drive (primary slave).
I have installed the drive on the primary channel as a slave and performed boot -r. The /dev and /devices directories correctly setup device links (c0t1d0) for the drive.
Assuming all was good I tried to create a new file system on slice 0 with "newfs /dev/rdsk/c0t1d0s0". This responded with
/dev/rdsk/c0t1d0s0: I/O error
I then used the "format" command to see what was going on. I can see the drive in the 'format' command:
#format
AVAILABLE DISK SELECTIONS
0. c0t0d0 <ST320420A cyl 39533 alt 2 hd 16 sec 63>...
1. c0t1d0 <ST3160812A cyl 255 alt 2 hd 16 sec 255>...
Specify disk (enter...) : 1
selecting c0t1d0
(disk formatted, no defect list found)
format> quit
As can be seen the 160GB drive has parameters that are all wrong for this size disk. The #format->verify command output for slice 2 (backup) shows the following:
ascii name = <ST3160812A cyl 255 alt 2 hd 16 sec 255>
pcyl = 257
ncyl = 255
acyl = 2
nhead = 16
nsect = 255
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 :: :: ::
2 backup wu 0 - 254 508.01MB (255/0/0) 1040400
3 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
I recalled that at boot-up the system showed a 'wrong magic number' message on the new drive. I tried to re-label the drive (format->label). The 'magic number' message went away on the next reboot but the 'format->verify' still showed the same thing.
I have tried several approaches to fix this:
1. I have read several posts and on-line texts that indicate the 137GB limit associated with LBA addressing for the Ultra5 IDE controllers. I have also read the post "How can I use my 160G Disk?"
(http://forum.java.sun.com/thread.jspa?forumID=829&threadID=5065871)
which provided great information and described using the "format" command to configure the disk with 65535 cylinders.
When I use the "format->type" command I get the following:
#format
format> type
AVAILABLE DRIVE TYPES:
0. ST320420A
1. ST3160812A
2. other
According to the "How can I use..." post, I select [2] other and set parameters. When I get to the first parameter to set the cylinders to 65535 I get the following:
Enter number of data cylinders: 65535
'65535' is out of range
a. Using the #format->type {other} function to change the # of cylinders is not accepted by the OS. In fact any value over 32000 cylinders is rejected by the command.
b. Just in case I also tried using the [1] disk type. I get the message:
"disk formatted, no defect list found"
So the suggestion in this post did not work in my case.
2. Next I tried referring to the post "Sun Ultra 5 Hard Drive"
(http://forum.java.sun.com/thread.jspa?forumID=860&threadID=5068466)
which suggested using the dd command to 'nuke' the disk label and re-label with format. So using the command:
dd if=/dev/zero of=/dev/rdsk/c0t1d0s2 bs=1b count=16
the command is processed properly but when I perform "format->verify"
the label is unchanged even after I re-execute the "format->label" command. Rebooting the system did not change it either.
3. I also attempted to use the 'format->part->modify' command to change the partition. When the command gets to the 'Free Hog Partition[6]" prompt I get a 'warning: no space available from Free Hog partition" message.
4. I also tried using the 'format->part' command to directly modify the specific '2' partition from the partition menu.
format>part
partition> 2
Part Tag Flag Cylinders Size Blocks
2 backup wu 0 - 254 508.01MB (255/0/0) 1040400
Enter partition id tag[backup]; <return>
Enter partition permission flags [wu]: <return>
Enter new starting cylinder cyl[0]: <return>
Enter partition size [1040400b, 255c, 508.01mb, 0.50gb]: 100g
'100.00gb is out of range'
^D
For some reason even this command limits the partition size.
5. My last ditch attempt was to use fdisk -S <geom_file> to set the label geometry with the following parameters in the file:
pcyl: 65535
ncyl: 65535
acyl: 2
bcyl: 0
nheads: 16
nsectors: 255
secsize: 512
When I run this command the command core dumps.
I'm somewhat lost as to what is preventing me from even using the suggestions others have found useful.
Has anyone successfully used a 160GB Seagate (ST3160812A or other 160GB model) drive with the following?
(as defined by #showrev and #prtconf)
system: Ultra5
OS: Solaris 8 [10/01 assumed based on kernel date]
Kernel: SunOS 5.8 Generic 108528-13 December 2001
OBP 3.25.3 2000/06/29 14:12
kernel architecture: sun4u
My hardware guy has tried numerous times to get a 120GB drive (which some posts indicated should work without any problems) but they are very hard to find. If I can get the 160GB drive to work (even with 137GB) I'll be happy.
Thanks for any information.
elbowz
Your version of the OBP, as well as the Solaris kernel, is not the most up to date. If the system is stable without the new disk, I would upgrade the OBP to the latest version (3.31). It may also be worthwhile to patch the kernel with the Recommended patch cluster (if you have enough disk space).
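Before patching, the current OBP and kernel levels can be confirmed with standard Solaris commands (a quick sketch):

```shell
# Show the OpenBoot PROM version (SPARC)
prtconf -V

# Show the running kernel revision
uname -a

# List installed patches to compare against the Recommended cluster
showrev -p | tail
```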
-
Disk Suite/ Solaris 8 Upgrade Problems
Hello,
I am trying to upgrade from Solaris Sparc 7 to 8 and I have Sun Disk Suite mirroring the boot device. When I try to upgrade the installation fails. Is there a way to upgrade from 7 to 8 without breaking the mirror and if not is there a utility that can remove the metadb info and rebuild them after the upgrade.
Thanks,
Bill Bradley
Mounting Sol 8 SPARC disk on Solaris 10 x86
I have been having a very difficult time mounting a Solaris 8 (SPARC) SCSI disk on a Solaris 10 x86 box. The disk is recognized (it shows up in format as c2t2d0), but I cannot mount it at all.
The individual slices will mount under another Sol 8 machine. I can also boot the x86 machine with System Rescue CD, and it will mount the individual SunOS slices as UFS with the -o ro option.
I have looked extensively on the web for a solution, with no luck so far. I am clearly missing something very basic. I'd appreciate any pointers.
Thanks!
Tom
UFS on Solaris is endian-specific: Solaris does not support mounting a big-endian UFS filesystem (SPARC) on a little-endian host (x86). I don't know of anyone working on a converter.
The other problem you'll run into is that x86 drivers expect an MBR label on the disk with a VTOC inside a Solaris partition, while SPARC disks usually have only the VTOC label.
See also this thread:
[UFS-discuss Endian|http://mail.opensolaris.org/pipermail/ufs-discuss/2007-April/000857.html]
Darren -
Accessing floppy disk in Solaris 10 u5 VirtualBox Virtual Machine
Hi,
I cannot access my host floppy drive from my Solaris 10 u5 x86 64-bit VM on a Windows XP host using VirtualBox 2.1.0. When I select "Mount Floppy A:" from the VM window, the floppy is apparently mounted, as I can see disk activity. However, no floppy icon appears on my Desktop, there is no /floppy, and I can't otherwise find the "mounted" drive.
I can access CD-ROMs using my VM with no problem: a CD-ROM icon appears on my Desktop.
Can anyone else using Solaris 10 as a guest on a Windows host access their floppy drives?
Any help would be greatly appreciated. I have asked this question in the VirtualBox forums, but evidently only a couple of users run Solaris guests under Sun VirtualBox, which I find remarkable.
Thanks.
I finally got the Solaris 10 VM to recognize the floppy: ACPI must be enabled for the guest. See:
http://forums.virtualbox.org/viewtopic.php?f=4&t=13599 -
Partitioning a disk in Solaris 10
Hi,
I am new to Solaris 10, and I want to create a partition on a disk that already holds very important data (which is backed up). I have read the documentation from Sun, and it seems the best way is with the format command and then the partition menu.
But I have seen that it is also possible to do it in the Solaris Management Console.
What do you recommend?
And after partitioning, I have to mount the file systems again, don't I?
I would appreciate all the help.
Best regards,
Juan
I don't think you can create a Solaris partition with the format command from either DOS or Windows.
Here is how you create a Solaris partition, as far as I know anyway:
1. Use Ranish Partition Manager to create the Solaris partition beforehand, so that the installation won't create x86boot.
2. Or leave some free space available and then run the installation; the process will let you create the Solaris partition along with x86boot.
3. If you don't leave free space beforehand, you will have to install Solaris over an existing partition, which means whatever operating system currently occupies it will be erased.
-van. -
Removing a LUN/disk in Solaris 10
What is the Correct(TM) way to remove a disk/LUN on a Solaris 10/SPARC box, with multipathing? Can this method be followed if the LUN was removed externally already (SAN)?
I have a server in this situation - LUN removed - which has 'phantom' entries in format and still shows up in cfgadm (as unusable). I have tried "cfgadm -c unconfigure <device>". The system still seems to be running fine, but it does not seem to be an ideal situation. And I need to shuffle around more LUNs.
Thanks.
PS I would like to be able to remove the LUN/disk (properly) without a reboot, since this is a 24x7 environment.
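For what it's worth, the sequence that normally cleans this up on Solaris 10 with MPxIO, without a reboot, is to unconfigure the unusable LUN and then prune the stale device links. A hedged outline; the attachment-point ID below is an invented example:

```shell
# List attachment points with their SCSI LUNs; removed SAN LUNs show as "unusable":
cfgadm -al -o show_SCSI_LUN
# Unconfigure the unusable LUN (example ap_id -- substitute your own):
cfgadm -o unusable_SCSI_LUN -c unconfigure c2::50060e8004274d50
# Remove the dangling /dev and /devices entries so format stops listing phantoms:
devfsadm -Cv
```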
Edited by: DCSMidrange on Jul 20, 2008 6:18 PM
Error? No error, the machine froze and eventually rebooted.
Here are the relevant lines from /var/adm/messages. Looks like it started to work and then bit off more than it could chew?
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0 (fp2) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0 (fp0) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@780/pci@0/pci@8/SUNW,qlc@0/fp@0,0 (fp1) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@780/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0 (fp3) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0 (fp4) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker unix: [ID 836849 kern.notice]
Jul 22 11:19:26 dcsnetworker ^Mpanic[cpu12]/thread=2a101789cc0:
Jul 22 11:19:26 dcsnetworker unix: [ID 799565 kern.notice] BAD TRAP: type=30 rp=2a101788d00 addr=736440672000 mmu_fsr=9
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker unix: [ID 839527 kern.notice] sched:
Jul 22 11:19:26 dcsnetworker unix: [ID 756718 kern.notice] data access exception:
Jul 22 11:19:26 dcsnetworker unix: [ID 901159 kern.notice] MMU sfsr=9:
Jul 22 11:19:26 dcsnetworker unix: [ID 820745 kern.notice] Data or instruction address out of range
Jul 22 11:19:26 dcsnetworker unix: [ID 162203 kern.notice] context 0x0
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker unix: [ID 101969 kern.notice] pid=0, pc=0x11c2460, sp=0x2a1017885a1, tstate=0x80001605, context=0x0
Jul 22 11:19:26 dcsnetworker unix: [ID 743441 kern.notice] g1-g7: 120a800, 1, 600051bbc70, 0, 78, 0, 2a101789cc0
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788a20 unix:die+9c (30, 2a101788d00, 736440672000, 9, 2a101788ae0, ffff)
Jul 22 11:19:26 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000030 0000000000000030 0000000000000000
Jul 22 11:19:26 dcsnetworker %l4-7: 0000000000000000 0000000001062e74 000000000000000b 0000000001080c00
Jul 22 11:19:26 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788b00 unix:trap+754 (2a101788d00, 10000, 0, 9, 300015e2000, 2a101789cc0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 000000000183d180 0000000000000030 0000000000000000
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000000 0000000001062e74 000000000000000b 0000000000010200
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788c50 unix:ktl0+64 (2f73736440673630, 0, 60005385d40, 181a7c8, 0, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000300015e2000 0000000000000060 0000000080001605 000000000101dd08
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 000000000000000b 000002a101788d00
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788da0 unix:mutex_vector_enter+478 (18b82b0, 2f737364406737e0, 0, 181a7c8, 2f73736440673631, 2f73736440673630)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000000 0000000000000000 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 0000000001062e74 0000000000000080
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788e50 genunix:turnstile_block+1b8 (2f73736440673630, 0, 60005385d40, 181a7c8, 0, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000000 0000000000000000 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 0000000001062e74 0000000000000080
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788f00 unix:mutex_vector_enter+478 (300015e2000, 300015e2000, 60005385d40, 2f73736440673630, 2f73736440673630, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000001 0000000000000000 0000000000000000 0000000001846c20
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 2f73736440673631 fffbb833b2219ac4 000000000181a400
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788fc0 scsi_vhci:vhci_commoncap+b0 (ffffffffffffffff, 13, 0, 1, 1, 70491520)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005372a38 0000000000000000 0000000070495c80 000000000000000a
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 0000060005385d40 0000060005385980 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789070 ssd:ssd_unit_detach+464 (60005352a10, 60005351b00, 0, 70491530, 1, 3)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005372a38 0000000000000000 0000000070495c80 000000000000000a
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 00000300008e54d8 0000000000000000 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789120 genunix:devi_detach+a4 (60005352a10, 0, 40001, 0, 7b65ef58, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017891f0 genunix:detach_node+64 (60005352a10, 40001, 0, 50020000, 40001, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017892a0 genunix:i_ndi_unconfig_node+144 (60005352a10, 12c, 40001, 10cc844, 14, 185d790)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789350 genunix:i_ddi_detachchild+14 (60005352a10, 40001, 0, ffffffffffffffff, 300008e54d8, 60005385d40)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789400 genunix:devi_detach_node+64 (60005352a10, 40001, 40001, 2a101789578, 80000, 40001)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017894c0 genunix:ndi_devi_offline+188 (60005352a10, 0, 0, ffffffffffffffff, 300008e54d8, 60005385d40)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789580 genunix:___const_seg_900000101+15af4 (300008e54d8, 60005352a10, 0, 30002202bc8, 300015e2000, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000300008f5820
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000020 0000000000000003 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789630 genunix:___const_seg_900000101+15f38 (300008f57c0, 60005377500, 0, 4, 0, 300008e54d8)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000300008f5820
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000020 0000000000000003 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017896e0 genunix:mdi_pi_free+254 (600065c3680, 18cc058, 0, 300008f5820, 60005377540, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000000000fffff
Jul 22 11:19:27 dcsnetworker %l4-7: 00000000000ffc00 0000000000000000 0000000000000040 0000000001259b24
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789790 fcp:ssfcp_offline_child+198 (600064b8718, 600065c3680, 0, 0, 1, 2a10178994c)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000700b8c08 0000000000000000 000006000647e698 0000060005352a10
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000001 00000000700b8c00 0000000000000000 000006000534e800
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789860 fcp:ssfcp_trigger_lun+2f4 (600064b8718, 0, 60005352a10, 2, 2, 2a10178994c)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000600065c3680 0000000000000001 0000000000000002 0000000000000001
Jul 22 11:19:27 dcsnetworker %l4-7: 000006000534e800 00000600053d4e00 00000000700b8c08 00000300008e54d8
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789950 fcp:ssfcp_hp_task+64 (6002b1a5758, 1, 600064b8718, 2, 2, 2)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000010000 00000600052f2f70 00000600052f2f78 00000600052f2f7a
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060001414e48 0000000000000002 0000000000000000 000006000534e800
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789a00 genunix:taskq_thread+1a4 (600052f2fa0, 600052f2f48, 10001, 447cc4dce7954, 2a101789aca, 2a101789ac8)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000010000 00000600052f2f70 00000600052f2f78 00000600052f2f7a
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060001414e48 0000000000000002 0000000000000000 00000600052f2f68
Jul 22 11:19:27 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:27 dcsnetworker genunix: [ID 672855 kern.notice] syncing file systems...
Jul 22 11:19:27 dcsnetworker genunix: [ID 733762 kern.notice] 2
Btw, drifting a little from the original topic, but does anyone know how to reset the SC on a T2000 without power cycling the machine? Is there any way of doing a reset remotely (there are no scadm functions on the CMT architecture, that I know of)? Barring a remote reset, should I be able to connect to the console using the local serial mgmt port as opposed to the usual net mgmt port?
Edited by: DCSMidrange on Jul 22, 2008 3:29 PM
Edited by: DCSMidrange on Jul 22, 2008 3:30 PM -
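On the side question: ALOM CMT on the T2000 has a resetsc command that reboots the system controller without touching the running host, and the serial MGT port presents the same sc> prompt as the net MGT port. A hedged sketch (these are ALOM console commands, not Solaris shell commands):

```shell
# At the ALOM CMT prompt, reached via either the NET MGT or the SERIAL MGT port:
#   sc> resetsc -y     # restart the system controller; the host keeps running
# If the SC is wedged enough that NET MGT is dead, the SERIAL MGT port
# usually still answers, so no power cycle should be needed.
```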
How to partition disk in Solaris 11
Dear All,
Solaris newbie question: I started the live CD of Solaris 11 11/11 in VBox 4.1.4 (the host is 64-bit Ubuntu 10.04.3 with 8 GB of RAM) and hit the 'Install' button.
Question: how do I partition my 30 GB VBox disk so that I have 4 GB of swap, 8 GB for the OS, and the rest as /u01/opt for a future Oracle DB installation?
The GUI allows 'Solaris2' and 'Extended' ...
Thanks, Peter
Hi Bob,
thanks for the hint, I'm installing right now and checked the 'whole disk' setting.
Would you mind letting me know the ZFS commands to change the partitions, especially swap (4 GB) and '/u01/opt' with all the remaining storage?
I wonder if the VBox guest additions in 4.1.4 will work with Solaris 11 ...
Kind regards, Peter -
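Since the thread never got to the actual commands: with a whole-disk install, swap is a ZFS volume and /u01/opt would just be a dataset, so no repartitioning is involved. A hedged sketch, assuming the default rpool layout (the dataset name u01opt is made up):

```shell
# Swap is a zvol; resize it in place to 4 GB:
zfs set volsize=4G rpool/swap
# Create a dataset for the Oracle tree; datasets share the pool's free space,
# so "all remaining storage" needs no explicit size:
zfs create -o mountpoint=/u01/opt rpool/u01opt
# Optionally cap it:
#   zfs set quota=18G rpool/u01opt
```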
How to create Shared RAW disk on solaris 10?
Hi,
I have built two Solaris machines using VMware. I am going to configure RAC. How do I create shared raw disks for OCR, the voting disk, and ASM?
OS version = 5.10 32 bit
Thanks
Use openfiler (a software-based SAN) if you do not have a shared disk storage system. It is an appliance-style package that behaves like SAN storage; it can be installed on a VM as a guest machine, and the rest of the configuration remains the same for your storage and RAC deployment.
http://www.openfiler.com/community/download
Alternatively, you can follow the method below:
http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/ -
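If you go the openfiler route, the Solaris 10 side is a few iscsiadm calls; the exported LUNs then show up as ordinary disks to carve into raw slices for OCR, the voting disk, and ASM. A hedged sketch; the IQN and IP address below are invented examples:

```shell
# Point the Solaris iSCSI initiator at the openfiler target (static discovery):
iscsiadm add static-config iqn.2006-01.com.openfiler:racdisks,192.168.56.50:3260
iscsiadm modify discovery --static enable
# Build the device nodes, then slice the new cXtYdZ disks with format:
devfsadm -i iscsi
format
```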
Adding a 1TB usb disk to Solaris 10
I'm having a few problems trying to add a 1 TB drive to Solaris 10 on a Sun Blade 2000.
It seems to find it OK when I attach it; cfgadm shows it OK:
usb0/4 usb-storage connected configured ok
iostat -En shows it as two 461 GB drives; not a problem really, as long as I can mount them:
c4t0d0 Soft Errors: 2 Hard Errors: 0 Transport Errors: 1
Vendor: SAMSUNG Product: HD501LJ Revision: CR10 Serial No:
Size: 461.16GB <461158548480 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
c5t0d0 Soft Errors: 2 Hard Errors: 0 Transport Errors: 7
Vendor: SAMSUNG Product: HD501LJ Revision: CR10 Serial No:
Size: 461.16GB <461158548480 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
The connect looks OK in /var/adm/messages, but then it gets a SCSI error:
Jul 25 15:12:27 icmp5-1 usba: [ID 912658 kern.info] USB 2.0 device (usb7ab,fcd2) operating at full speed (USB 1.x) on USB 1.10 root hub: storage@4, scsa2usb1 at bus address 2
Jul 25 15:12:27 icmp5-1 usba: [ID 349649 kern.info] Freecom DataTank 0000000000000001
Jul 25 15:12:27 icmp5-1 genunix: [ID 936769 kern.info] scsa2usb1 is /pci@8,700000/usb@5,3/storage@4
Jul 25 15:12:27 icmp5-1 genunix: [ID 408114 kern.info] /pci@8,700000/usb@5,3/storage@4 (scsa2usb1) online
Jul 25 15:12:27 icmp5-1 scsi: [ID 193665 kern.info] sd1 at scsa2usb1: target 0 lun 0
Jul 25 15:12:27 icmp5-1 genunix: [ID 936769 kern.info] sd1 is /pci@8,700000/usb@5,3/storage@4/disk@0,0
Jul 25 15:12:27 icmp5-1 genunix: [ID 408114 kern.info] /pci@8,700000/usb@5,3/storage@4/disk@0,0 (sd1) online
Jul 25 16:06:10 icmp5-1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/usb@5,3/storage@4/disk@0,0 (sd1):
Jul 25 16:06:10 icmp5-1 SCSI transport failed: reason 'timeout': giving up
How do I get Solaris to mount it? I've tried devfsadm, and I've tried mount /dev/dsk/c4t0d0s2, without any joy.
I found out that ZFS is the perfect solution for that scenario. I have a LaCie 1 TB drive; I could get UFS to work, but the drive only showed up as about 500 GB and not 1 TB, a similar situation to yours. I tried every possible way I could think of and found that ZFS is way better and easier.
Try format -e and see if it comes out as a single disk or still as multiple disks.
And check the following site for more configuration of zpool and zfs, and also look at the Solaris 10 docs.
http://blogs.sun.com/migi/entry/playing_with_zfs_filesystem
Hope you got your problem solved.
zp -
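As a cross-check on the "two drives" mystery: the two LUN sizes from iostat add up to the enclosure's marketed capacity, which fits the theory that the DataTank exposes its two internal Samsung HD501LJ drives as separate targets. The arithmetic is portable shell; the zpool command in the comment is the hedged ZFS route suggested above (the pool name is made up):

```shell
#!/bin/sh
# Each LUN as reported by iostat -En:
bytes_per_lun=461158548480
# Decimal gigabytes, the way drive vendors count:
gb_per_lun=$((bytes_per_lun / 1000 / 1000 / 1000))
total_gb=$((2 * gb_per_lun))
echo "two LUNs of ${gb_per_lun} GB = ${total_gb} GB"   # ~922 GB, i.e. the marketed "1 TB"

# Striping both LUNs into one pool would look something like:
#   zpool create usbtank c4t0d0 c5t0d0
```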
Fdisk and format show different cylinders; can't use whole disk for Solaris?
Solaris 10 5/09 s10x_u7wos_08 x86
format does not show enough cylinders, so we can't use the whole RAID-5 (4 x 146 GB) volume.
How can we fix it? We need the full RAID-5 volume for our Solaris.
format output:
AVAILABLE DISK SELECTIONS:
0. c2t0d0 <DEFAULT cyl 17817 alt 2 hd 255 sec 63>
/pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
Specify disk (enter its number):
fdisk output:
Total disk size is 53502 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 17819 17819 33
arcconf getconfig output:
bash-3.00# ./arcconf getconfig 1
Controllers found: 1
Controller information
Controller Status : Optimal
Channel description : SAS/SATA
Controller Model : Sun STK RAID INT
Controller Serial Number : 00919AA0670
Physical Slot : 0
Temperature : 71 C/ 159 F (Normal)
Installed memory : 256 MB
Copyback : Disabled
Background consistency check : Disabled
Automatic Failover : Enabled
Global task priority : High
Defunct disk drive count : 0
Logical devices/Failed/Degraded : 1/0/0
Controller Version Information
BIOS : 5.2-0 (16732)
Firmware : 5.2-0 (16732)
Driver : 2.2-2 (1)
Boot Flash : 5.2-0 (16732)
Controller Battery Information
Status : Optimal
Over temperature : No
Capacity remaining : 99 percent
Time remaining (at current draw) : 3 days, 1 hours, 11 minutes
Logical device information
Logical device number 0
Logical device name : v
RAID level : 5
Status of logical device : Optimal
Size : 419690 MB
Stripe-unit size : 256 KB
Read-cache mode : Enabled
Write-cache mode : Enabled (write-back)
Write-cache setting : Enabled (write-back)
Partitioned : Yes
Protected by Hot-Spare : No
Bootable : Yes
Failed stripes : No
Logical device segment information
Segment 0 : Present (0,0) 000849E5RY9A P4X5RY9A
Segment 1 : Present (0,1) 000849E4TX4A P4X4TX4A
Segment 2 : Present (0,2) 000849E56KAA P4X56KAA
Segment 3 : Present (0,3) 000849E5S0GA P4X5S0GA
Physical Device information
Device #0 - 3
Device is a Hard drive
State : Online
Supported : Yes
Transfer Speed : SAS 3.0 Gb/s
Reported Channel,Device : 0,0
Reported Location : Enclosure 0, Slot 0
Reported ESD : 2,0
Vendor : HITACHI
Model : H101414SCSUN146G
Firmware : SA25
Serial number : 000849E5RY9A P4X5RY9A
World-wide name : 5000CCA0007B2BFF
Size : 140009 MB
Write Cache : Disabled (write-through)
FRU : None
S.M.A.R.T. : No
Device #4
Device is an Enclosure services device
Reported Channel,Device : 2,0
Enclosure ID : 0
Type : SES2
Vendor : ADAPTEC
Model : Virtual SGPIO
Firmware : 0001
Status of Enclosure services device
Temperature : Normal
Device #5
Device is an Enclosure services device
Reported Channel,Device : 2,1
Enclosure ID : 1
Type : SES2
Vendor : ADAPTEC
Model : Virtual SGPIO
Firmware : 0001
Status of Enclosure services device
Temperature : Normal
astra666 wrote:
Hi Smart71,
I know it sounds complicated, but it really isn't.
You need to select the option to manually edit the disk label
when you install solaris.
Your problem is that you have assigned 73 GB for your / (root)
and 63 GB for your swap.
That's not the problem that I see. The problem is that the Solaris partition (and therefore the Solaris VTOC label inside) is only 136 GB, but the actual underlying storage is 409 GB. So the entire partition and the VTOC have to be rewritten.
You only have 2 options. Either re-install Solaris, of re-partition the disk.
If you don't have a spare disk to copy root to and you want to use
the whole disk, with just one / root partition, you will need to shrink /swap
create another slice to copy root to temporarily, and then re-partition your disk.
Agreed, reinstalling is easiest. But moving slices around will not let you change the VTOC size.
eg:
boot from CD into single user mode.
partition the disk so that eg:
swap is 3 GB. eg: ( from cylinder 1 to cylinder 400 )
partition 3 is 7 GB ( from cylinder 401 to cylinder 1318 )
( if 7 GB is enough to hold all your / data )
write the disk label out
newfs partition 3
mount / on /a
mount partition 3 on /mnt
Copy everything from /a to /mnt eg:
rsync -vaxH /a /mnt
or
find /a -xdev -print | cpio -pdam /mnt
umount /a and /mnt
re-partition the disk so that / takes all the free space.
If that were your goal, you wouldn't have to move anything. Just repartition and use 'growfs' on the root filesystem directly (while booted from CD).
eg ( starting at cylinder 1319 to cylinder 17816 )
write disk label out
newfs part 0 for /
mount / on /a and part 3 on /mnt
copy everything back from /mnt to /a
Check /a to make sure everything is there.
umount /a and /mnt
re partition the disk to make /swap 10 GB
( starting at cylinder 1 to cylinder 1318 )
write disk label out.
And that's it.
The disk you are working on is 17816 cylinders in size.
yet down the bottom you say that :
Total disk size is 53502 cylinders
That tells me there are 4 disks in the raid 5 set, and that
you have probably been presented with 3 x 136 GB disks.
That means that you should have 2 x 136 GB disks free.
Use one of these to copy your /root to. and then
re-partition your first disk and copy data back.
Or partition another disk correctly, write a boot block to it
Copy your root data to it, and boot from it.
The entire disk is presented to the machine (as seen in the fdisk output), but the Solaris partition is only using a smaller portion of it. You can't create a second Solaris partition and use it at the same time, either. Only one Solaris partition can be used at a time.
Darren -
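The two cylinder counts in the fdisk output translate directly into the two sizes being discussed above. A portable shell check of the arithmetic (geometry taken from the fdisk output: 16065 blocks of 512 bytes per cylinder):

```shell
#!/bin/sh
blocks_per_cyl=16065
bytes_per_block=512

# The Solaris fdisk partition spans 17819 cylinders:
part_gib=$((17819 * blocks_per_cyl * bytes_per_block / 1024 / 1024 / 1024))
# The whole RAID-5 volume is 53502 cylinders:
disk_gib=$((53502 * blocks_per_cyl * bytes_per_block / 1024 / 1024 / 1024))

echo "Solaris partition: ${part_gib} GiB"   # ~136 GiB -- all format/the VTOC can see
echo "whole volume:      ${disk_gib} GiB"   # ~409 GiB -- what arcconf reports
```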
What is the limit of the size of a disk under Solaris 8?
Hello,
I have a problem when I try to run the format command to label a 1 TB disk under Solaris 8.
# format.......
114. c9t14d3 <DGC-RAID3-0322 cyl 32766 alt 2 hd 1 sec 0>
/pci@8,700000/lpfc@4/sd@e,3
Specify disk (enter its number)[115]: 114
selecting c9t14d3
[disk formatted]
Disk not labeled. Label it now? y
Warning: error writing VTOC.
Warning: no backup labels
Write label failed
format> print
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (default):
Total disk cylinders available: 32766 + 2 (reserved cylinders)
Arithmetic Exception - core dumped
I think maybe if you split it into two LUNs, you can stitch them back together with SVM. But even that's not certain.
Depends on what you're going to do with the device at that point. UFS on Solaris also does not support volumes of 1 TB or larger, so you'd have to use it as a raw slice or with a filesystem that does support larger sizes.
You need later versions of Solaris 9 to get multi-terabyte UFS support (a separate issue from multi-terabyte LUN support).
Darren
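One plausible reading of that core dump (an interpretation, not a quote from format's source): the fabricated label geometry 'hd 1 sec 0' gives zero blocks per cylinder, and format's partition code then divides by it, because a 1 TB LUN doesn't fit the geometry fields of a Solaris 8 VTOC. The degenerate arithmetic is easy to see in portable shell:

```shell
#!/bin/sh
# Geometry exactly as format printed it for the 1 TB LUN:
cyl=32766; hd=1; sec=0

blocks_per_cyl=$((hd * sec))    # 1 * 0 = 0
if [ "$blocks_per_cyl" -eq 0 ]; then
    # Any cylinders-to-blocks conversion now divides by zero --
    # the same class of fault as format's "Arithmetic Exception".
    echo "degenerate geometry: ${cyl} cylinders of 0 blocks"
else
    echo "disk holds $((cyl * blocks_per_cyl)) blocks"
fi
```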