Do I format a 6140 snapshot volume in Solaris 10?
I have created a snapshot on a StorageTek 6140 and mapped it to the host, for the purpose of using the snapshot to back up the database (base) volume.
On the host, in format, I can see the snapshot disk. Do I label and partition it? Or do I just mount it, and if so which slice?
I want to create a metadevice so as not to have to use the full wwn in the mount command, so I have:
metainit d999 1 1 /dev/dsk/c6t600A0B800029BA4E000005AE47534D9Cd0s2
mount /dev/md/dsk/d999 /mdata2_snapshot
and this all seems to work ok.
But in format, if I select the disk it asks if I would like to label it now. Not sure whether to do this or not.
Would be very grateful for any advice. I can't find anything about this in the documentation.
Many thanks
Diana
No, you should not format or repartition the volume, as it will have the same characteristics as the original primary volume.
You use it in exactly the same way as you used the original volume, with the usual caveats about metasets etc.
If you're using slice 2 of the original volume, your command should work. If not, then you should change the partition number to reflect the primary volume.
Remember, however, that if you delete and recreate the snapshot, it will have a different WWN, so you'll also have to remove the metadevice for the snapshot before removing the snapshot volume.
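To make that refresh cycle concrete, here is a dry-run sketch (the WWN-based device path and metadevice d999 come from the question; the function only prints the commands rather than running them):

```shell
#!/bin/sh
# Dry-run: print the commands needed to remount after the 6140 snapshot is
# deleted and recreated (and so has a new WWN). Nothing is executed.
refresh_snapshot_mount() {
    snapdev=$1   # the recreated snapshot's device, e.g. /dev/dsk/c6t<newWWN>d0s2
    mnt=$2       # mount point, e.g. /mdata2_snapshot
    echo "umount $mnt"                   # stop using the stale snapshot
    echo "metaclear d999"                # drop the metadevice before the old volume goes
    echo "# ...delete and recreate the snapshot on the array here..."
    echo "metainit d999 1 1 $snapdev"    # rebuild the metadevice on the new WWN
    echo "mount /dev/md/dsk/d999 $mnt"
}

refresh_snapshot_mount /dev/dsk/c6t600A0B800029BA4E000005AE47534D9Cd0s2 /mdata2_snapshot
```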
HTH,
--A.
Similar Messages
-
Windows NTFS and 6140 Snapshot
Hi,
I would like to know if anyone is using the 6140 snapshot process with a Windows NTFS mounted volume. We are looking at connecting a Windows system to the 6140 and then using the 6140 snapshot volume, mounted on a backup server, to write the backups to tape. The question we have: when a snapshot is issued, what has to happen on the Windows system? We assume it is much like having a drive fail or disappear and then another drive put in its place. I've figured out how to do this with our Unix systems (unmount the volume, re-snap the snapshot, remount the volume), but I'm not sure what to do in the Windows realm. Does anyone else do this or know how to do it?
Thanks.
From discussions with some Windows sysadmins, the way to unmount and remount a volume on a Windows server is to deallocate and then reallocate its drive letter.
I've only ever seen it done from the GUI: Control Panel -> Administrative Tools -> Computer Management -> Storage
I'm sure there must be some way to script this but I'm afraid I've no idea about how you'd do this.
You're absolutely right to worry about this: the OS assumes it has COMPLETE control of a volume, and that the filesystem cache is up to date and correct. Obviously, if the array is effectively changing data underneath it, that assumption no longer holds.
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.scripting/2005-07/msg00386.html
may give you a starting point to script this for your backup server...
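As a hedged starting point, the drive-letter shuffle can be scripted with diskpart's scripting mode (run on the backup server as `diskpart /s <file>`). The volume number and drive letter below are placeholders you would look up in Disk Management or diskpart's `list volume` first; this shell sketch just generates the two script files:

```shell
#!/bin/sh
# Sketch: generate the two diskpart scripts that deallocate and reallocate
# a drive letter around the re-snap. Volume number 1 and letter E are
# placeholders - check "list volume" on the backup server first.
cat > /tmp/unmap.txt <<'EOF'
select volume 1
remove letter=E
EOF
cat > /tmp/remap.txt <<'EOF'
select volume 1
assign letter=E
EOF
```

Run `diskpart /s /tmp/unmap.txt` before deleting the snapshot and `diskpart /s /tmp/remap.txt` after recreating it, mirroring the unmount/re-snap/remount cycle on the Unix side.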
HTH,
--A. -
Hi all,
at a customer's site I have a problem with a fresh installation of Backup Exec 2014. Every backup (full or incremental) always reports the following error: "An unexpected error occurred when cleaning up snapshot volumes. Confirm that all snapped volumes are correctly re-synchronized with the original volumes."
It's not a Backup Exec problem itself; backups using "Windows Server Backup" fail with the same error.
On this site I have three servers; the error is only generated for one of them. Here’s a short overview:
Server1: Windows Server 2012 R2, latest patch level, physical machine, domain controller and file server. Backup Exec is installed on this machine; backup is written directly to a SAS tape loader. The error is generated on this server.
Server2: Windows Server 2008 R2, latest patch level, virtual machine running on Citrix XenServer 6.2. Used for Remote Desktop Services; no errors on this server.
Server3: Windows Server 2012 R2, latest patch level, virtual machine, database server with some SQL instances; no errors on this server.
As I said, the error is reported only on Server1, no matter whether it is a full or an incremental backup. During the backup I found the following errors in the event log (translated from a German system):
Event ID: 12293
Volume Shadow Copy Service error: error calling a routine on shadow copy provider "{89300202-3cec-4981-9171-19f59559e0f2}". Routine details: error calling Query() [0x80042302] [hr = 0x80042302, unexpected component error of the Volume Shadow Copy Service].
Process:
Volume Shadow Copy polling
Volume Shadow Copy delete
Context:
Execution context: Coordinator
And
Event ID: 8193
Volume Shadow Copy Service error: unexpected error calling routine "IVssCoordinator::Query". hr = 0x8004230f, unexpected error of the Volume Shadow Copy Service provider.
Process:
Volume Shadow Copy delete
Context:
Execution context: Coordinator
There are some articles about this error in the knowledge base and on the web, but they either do not help or do not apply to my environment, for example:
http://www.symantec.com/business/support/index?page=content&id=TECH38338&actp=search&viewlocale=en_US&searchid=1423724381707
What I already have tried:
Disabled antivirus during the whole backup
Installed the latest Service Pack for Backup Exec
Rebooted the server
vssadmin list writers does not show any errors
Consulted eventid.net for other possible solutions
No limits set for vaa
Any more ideas from you guys?
Best regards
Hi Shaon,
vssadmin list providers gave the following output:
vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2013 Microsoft Corp.
Provider name: "Microsoft File Share Shadow Copy provider"
Provider type: File share
Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}
Version: 1.0.0.1
Provider name: "Microsoft Software Shadow Copy provider 1.0"
Provider type: System
Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7
Unfortunately, there's no Symantec VSS provider listed.
Best regards,
Christoph -
Local snapshots volume missing
I've been running OS X Mountain Lion since it came out, and Lion before that. I've always had a Time Machine drive and local snapshots has worked fine... up until recently.
I noticed, in Console, the following lines repeating over and over:
mtmfs[34579]: MTM fs Mount server failed to start because of error -1
mtmfs[34579]: MTM fs Mount server retrying ...
mtmfs[34579]: MTM fs Mount server failed to start because of error -1
mtmfs[34579]: MTM fs Mount server retrying ...
mtmfs[34579]: MTM fs Mount server failed to start because of error -1
(ad nauseam)
and it will throttle, then start back again. I've had to disable local snapshots in order to make it stop. My local snapshots volume also appears to be missing from /Volumes, so I think it's safe to say that my snapshots volume has gone AWOL and the error is happening because the system cannot find it.
Does anyone know how to resolve the issue? I've tried disabling and re-enabling Time Machine, but it doesn't recreate the volume and the error just keeps coming back. I'm sure I could reinstall Mountain Lion from scratch, but I'd much prefer not to. That's a lot of cat video hours gone right there.
Please read this whole message before doing anything.
This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
The purpose of the test is to determine whether the problem is caused by third-party software that loads automatically at startup or login, or by a peripheral device.
Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards. Boot in safe mode and log in to the account with the problem. Note: If FileVault is enabled, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Post for further instructions.
Safe mode is much slower to boot and run than normal, and some things won’t work at all, including wireless networking on certain Macs. The next normal boot may also be somewhat slow.
The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you've forgotten the password, you will need to reset it before you begin.
Test while in safe mode. Same problem?
After testing, reboot as usual (i.e., not in safe mode) and verify that you still have the problem. Post the results of the test. -
Create a GPT partition table and format with a large volume (solved)
Hello,
I'm having trouble creating a GPT partition table for a large volume (~6 TB). It is a hardware RAID 5 of 3 hard disk drives of 3 TB each (hence the resulting 6 TB volume).
I tried creating a GPT partition table with gdisk, but it just fails, stopping here (I've let it run for about 3 hours):
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/md126.
I also tried with parted and got the same result. Having no luck, I created a GPT partition table from Windows 7 with 2 NTFS partitions (15 GB, and the rest of the space for the other), and it worked just fine. I then tried to format the 15 GB partition as ext4 but, as with gdisk, mkfs.ext4 just never finishes.
Some information:
fdisk -l
Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd9a6c0f5
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 104861695 52429824 83 Linux
/dev/sda2 104861696 466567167 180852736 83 Linux
/dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x5ffb31fc
Device Boot Start End Blocks Id System
/dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/md126p1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
gdisk -l on my RAID volume (/dev/md126):
GPT fdisk (gdisk) version 0.8.7
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/md126: 11720982528 sectors, 5.5 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 11720982494
Partitions will be aligned on 8-sector boundaries
Total free space is 6077 sectors (3.0 MiB)
Number Start (sector) End (sector) Size Code Name
1 34 262177 128.0 MiB 0C01 Microsoft reserved part
2 264192 33032191 15.6 GiB 0700 Basic data partition
3 33032192 11720978431 5.4 TiB 0700 Basic data partition
To make things clear: sda is an SSD on which Archlinux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), sde is a hard disk drive having Windows 7 installed on it. My goal with the 15G partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
So if anyone has any suggestion that would help me out with this, I'd be glad to read.
Cheers
Last edited by Rolinh (2013-08-16 11:16:21)
Well, I finally decided to use software RAID, as I will not share this partition with Windows anyway and it seems a better choice than fake RAID.
Therefore, I used the mdadm utility to create my RAID 5:
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
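As a sanity check on those mkfs.ext4 numbers: stride is the md chunk size expressed in filesystem blocks, and stripe-width is stride times the number of data disks. The values above (32 and 64) imply a 128 KiB chunk with 4 KiB ext4 blocks on a 3-disk RAID 5 (2 data disks); the actual chunk size should be verified with `mdadm --detail /dev/md0`, since other mdadm versions default to different chunks:

```shell
#!/bin/sh
# Derive ext4 stride/stripe-width from the RAID geometry.
# Assumptions: 128 KiB md chunk, 4 KiB ext4 block, 3-disk RAID 5.
chunk_kib=128
block_kib=4
data_disks=2                             # RAID 5 with 3 drives: 2 data chunks per stripe

stride=$((chunk_kib / block_kib))        # filesystem blocks per RAID chunk
stripe_width=$((stride * data_disks))    # blocks per full data stripe

echo "stride=$stride stripe-width=$stripe_width"
```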
It works like a charm. -
6140 online volume and virtual disk expansion
Hi,
I have a 6140 configured. I have created some volumes that are being used as Oracle raw devices on Solaris 10. I have two questions:
1. Can I expand disk space on the virtual disk online, without any data loss?
2. Can I expand the volume with the added space online, without any data loss? (The volume is presented to the Solaris 10 OS and is being used as a raw device by Oracle ASM.)
Thanks and regards
Ushas Symon
Your configuration is dependent on the application's needs.
Personally, on my 14 shelves of 6140 storage, I do not want to suffer the penalty of RAID 1 mirroring (it cuts the available storage in half); today's controllers are very reliable, as are disk drives, so I use RAID 5, which offers striping among the disks, more available storage, and excellent performance. In three years we haven't lost a single bit, and we are capable of ~400K TPS.
Add zfs volume to Solaris 8 branded zone
Hi,
I need to add a zfs volume to a Solaris 8 branded zone.
Basically, I've created the zvol and added the following to the zone configuration:
# zonecfg -z test
zonecfg:test> add device
zonecfg:test:device> set match=/dev/zvol/dsk/sol8/vol
zonecfg:test:device> end
When I boot the zone it comes up OK, but I am unable to see the device - nothing in format, /dev/dsk, etc.
I've also tried setting match to the raw device, to no avail.
Basically, I have numerous zvols to add and don't really want a load of mount points in the global zone lofs-mounted back into the local zone.
Any ideas please??
Thanks...
Thanks, but that's why I created ZFS volumes and newfs'ed them to create UFS filesystems, and presented those to the zone.
In the end I just created a script in /etc/rc2.d and mounted the filesystems there. -
Reports in PDF format does not show up on solaris box
Hi,
I am trying to generate a report in PDF format.
It shows a blank page.
But I can see that the cache file is generated for the report.
This works fine on Windows 2000.
I have fixed the IE Adobe issue.
Is there anything special I need to do while generating this output?
Your help will be highly appreciated.
Thanks in advance,
shailesh
I did all the config as mentioned on the Adobe site. It seems to me to be a dataset problem.
I am running 2 reports: one with more data and one with less data.
The one with more data comes up properly in the browser, but the one with less data shows a blank page.
shailesh
> Originally posted by david Mille ([email protected]):
> Hi,
> Did you install the nppdf32.dll of Acrobat Reader in your browser's plugins directory?
Need to format the old ASM disks on Solaris 10.
Hello Gurus,
we uninstalled ASM on Solaris, but while installing ASM again it says the mount point is already used by another instance. However, there is no DB or ASM running (this is a new server), so we need to use the dd command to reformat the raw devices that already exist and were used by the old ASM instance. Here is the confusion...
There are 6 LUNs presented to this host for ASM; they are not used by anyone...
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
2. c2t60050768018E82BE98000000000007B2d0 <IBM-2145-0000-150.00GB>
/scsi_vhci/ssd@g60050768018e82be98000000000007b2
3. c2t60050768018E82BE98000000000007B3d0 <IBM-2145-0000 cyl 44798 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g60050768018e82be98000000000007b3
4. c2t60050768018E82BE98000000000007B4d0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
/scsi_vhci/ssd@g60050768018e82be98000000000007b4
5. c2t60050768018E82BE98000000000007B5d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b5
6. c2t60050768018E82BE98000000000007B6d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b6
7. c2t60050768018E82BE98000000000007B7d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
/scsi_vhci/ssd@g60050768018e82be98000000000007b7
but the thing is, when we list the raw devices with ls -ltr in /dev/rdsk, all disks are owned by root, not by oracle (dba/oinstall).
root@b2dslbmom3dbb3301 [dev/rdsk]
# ls -ltr
total 144
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:h,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:h,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:a,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:b,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:c,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:d,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:e,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:f,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:g,raw
lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:g,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:b,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:c,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:d,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:e,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:f,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:g,raw
lrwxrwxrwx 1 root root 68 Jun 13 15:34 c2t60050768018E82BE98000000000007B2d0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:wd,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:47 c2t60050768018E82BE98000000000007B3d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:48 c2t60050768018E82BE98000000000007B4d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:49 c2t60050768018E82BE98000000000007B5d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:51 c2t60050768018E82BE98000000000007B6d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:h,raw
lrwxrwxrwx 1 root root 67 Jun 13 15:53 c2t60050768018E82BE98000000000007B7d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:h,raw
so we need to know where the raw devices for Oracle are located, to run the dd command and remove the old ASM header on the raw devices in order to start a fresh installation.
But when we use the command given to us by the Unix person (who no longer works here), we are able to see the following information:
root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
We also have the mknod information, with the minor and major numbers used for making the soft links from the raw devices to ASM:
cd /dev/oraasm
/usr/sbin/mknod asm_disk_03 c 118 232
/usr/sbin/mknod asm_disk_02 c 118 224
/usr/sbin/mknod asm_disk_01 c 118 216
/usr/sbin/mknod asm_ocrvote_03 c 118 208
/usr/sbin/mknod asm_ocrvote_02 c 118 200
/usr/sbin/mknod asm_ocrvote_01 c 118 192
But the final thing is, we need to find out where the above configuration is located on the host; I think this raw-device presentation method is different from the normal method on Solaris.
Please help me proceed with my installation. Thanks in advance.
I am really confused about where the following command gets the Oracle raw device information from, since there is no such info in /dev/rdsk (OS is Solaris 10):
root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
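Once the right device paths are confirmed, clearing an old ASM header is a dd of zeros over the start of each slice. A hedged sketch, demonstrated here on a scratch file rather than a real /dev/rdsk path, since dd is destructive (the 10 MiB count is a generous example; ASM's disk header lives in the first blocks, so verify the device list before running this against real LUNs):

```shell
#!/bin/sh
# Clear an old ASM disk header by zeroing the start of the device.
# DESTRUCTIVE on a real device - demonstrated on a scratch file standing in
# for e.g. /dev/rdsk/c2t600...d0s0. Verify the device list first!
wipe_asm_header() {
    dd if=/dev/zero of="$1" bs=1024k count=10 conv=notrunc 2>/dev/null
}

# Demo: fill a scratch file with noise, then wipe its header region.
dd if=/dev/urandom of=/tmp/fake_asm_disk bs=1024k count=10 2>/dev/null
wipe_asm_header /tmp/fake_asm_disk
```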
Please help...
Hi Winner,
For your issue, I suggest closing your thread here (changing the thread status to answered) and moving it to Forum Home » Grid Computing » Automatic Storage Management, where you can get a quicker response.
Regards
Helios -
Mount NTFS volumes in Solaris 10 update 8
Hi All,
I have a requirement to mount a Windows share on two different Solaris 10 update 8 hosts. Below are the steps I followed, but they failed.
Please suggest a procedure and solution .
1. Created a network share named "csv_source" on a Windows 2008 server. The share gives everyone full access.
2. Enabled the Samba service on the Solaris server.
3. Created a directory on the Solaris server with mkdir /data.
4. Used the "mount" command to mount the volume, but it failed.
Can you boot from DVD and do an "upgrade"?
-
Increase filesystem volume on solaris
Dear friends,
Please let me know how I can increase the filesystem size, i.e. sapdataX.
RAID 5
sapdata1 60 GB
sapdata2 60 GB
sapdata3 60 GB
sapdata4 60 GB
Now I have to increase it to:
RAID 5
sapdata1 100 GB
sapdata2 100 GB
sapdata3 100 GB
sapdata4 100 GB
How will I add a new HDD, and how will I allocate it to these file systems?
Rgds
DK
> How do I add a new HDD to the storage?
We don't know since there are many different storage devices available. I suggest you talk to your SAN administrator.
> How will I assign these spaces to X?
> How will I assign them to A?
> How will I add them into Oracle / the filesystem?
UFS is not able to grow if you don't use a volume manager on top.
Read http://docs.sun.com/app/docs/doc/817-5093/fsoverview-38559?l=de&a=view and/or talk to your Solaris administrator.
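If the sapdata filesystems sit on Solaris Volume Manager metadevices, the online grow that a volume manager makes possible might look like this dry-run sketch (metadevice d10, the new slice, and the mount point are hypothetical examples; the script only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch (prints, does not run): concatenate a new slice onto the
# metadevice holding a sapdata filesystem, then grow the mounted UFS online.
# d10, the slice, and the mount point are hypothetical.
MD=d10
NEWSLICE=/dev/dsk/c1t2d0s0
MNT=/oracle/sapdata1

grow_cmds() {
    echo "metattach $MD $NEWSLICE"           # add the new space to the metadevice
    echo "growfs -M $MNT /dev/md/rdsk/$MD"   # expand UFS onto it while mounted
}
grow_cmds
```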
Markus -
Mount FAT volume in Solaris-HOWTO
Can I mount a FAT partition in Solaris 8 the way I do under Linux? How?
I've got a multi-boot system.

Solaris' pcfs can do that. See "man mount_pcfs" for
examples. -
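A minimal sketch of the pcfs mount the reply points at (the device name is a placeholder; check format/prtvtoc for the actual disk holding the FAT partition):

```shell
# Mount the first DOS primary partition; the ":c" suffix selects it.
# See mount_pcfs(1M) for the full logical-drive syntax.
mount -F pcfs /dev/dsk/c0d0p0:c /mnt/dos
```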
Maximum Disk-Volume Capacity Solaris 8 supports (SPARC) ?
Any one knows what is the maximum capacity that Solaris 8 on SPARC supports for storage-set ( Raid set, Stripe Set ...)
Can Solaris 8 supports file system up to 2 Terabyes ?
ThanksYup.
EOL in November 2002
Which means EOSL November 2007
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/U10/U10
Unless you've somehow maintained a service contract on that system,
by specific system serial number,
your best bet is to haunt an online auction site (such as Ebay),
and get a replacement cpu or cpus for shelf stock.
Your current OBP patch level is only down two patch levels,
but your kernel patch level is essentially "never patched".
If it had been patched more regularly, it might have caught the issue a lot sooner.
Expect to replace that cpu. -
Unable to create logical volume snapshot
Hi All,
Please note that the following error is reported when a snapshot is created on our Oracle Linux Server release 5.11 box:
[root@hawaii901 tmp]# lvcreate --size 500m --name snap /dev/ora_vg/vol02
Volume group name expected (no slash)
Run `lvcreate --help' for more information.
Any clue on this issue?
Thanks & Regards

LVM snapshot is a general-purpose solution that was not designed with the Oracle database in mind. It can be used, for example, to quickly create a snapshot prior to a system upgrade; then, if you are satisfied with the result, you delete the snapshot.
There is probably a common misconception here. LVM snapshots, like all COW snapshots I'm aware of, allow you to create a backup of a live filesystem while changes are written to the snapshot volume. Hence it's called "copy on write", or copy on change if you like. This is necessary for system integrity: it lets you make a complete backup of all data in an LVM volume at a certain point in time while changes continue to happen during the filesystem backup. A snapshot is no substitute for a disaster-recovery backup in case you lose your storage media. A snapshot only takes seconds and initially does not copy or back up any data unless data changes. It is therefore important to delete the snapshot once it is no longer required, in order to prevent duplication of data and restore filesystem performance.
If snapshot or COW technology suits your purpose, you may actually want to look into the BTRFS filesystem, which is more modern, employs the idea of subvolumes, and is much more efficient than LVM. -
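As for the error itself: without the --snapshot flag, lvcreate expects a volume group name (no slash) as its final argument rather than a logical-volume path, which is exactly the message reported. A corrected invocation, using the names from the original post:

```shell
# Create a 500 MB snapshot of vol02 in volume group ora_vg:
lvcreate --size 500m --snapshot --name snap /dev/ora_vg/vol02
# Remove the snapshot once the backup is done, to avoid COW overhead:
lvremove /dev/ora_vg/snap
```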
Hello,
I have an HP 4300 (formerly LeftHand) SAN. I also have the VSA software running on a separate box. I do remote snapshot copies to the VSA server. If I wanted to get access to the remote snapshot volume (which ends up being an NSS volume), what's the best way to do this? Should I have a separate test eDirectory environment? I'm reluctant to fire it up on a production box through NSSMU because it may present itself with the same NSS volume information as the production volume.

Originally Posted by imc
Could you see any issues mounting it on a "test" tree?
Should work, although you may have issues with the trustee rights if your test tree doesn't have the same user list/layout as your prod eDir.
However, I've taken SAN snapshots and mounted them on OTHER file servers without any problem all the time in our production system. The only thing to be careful of as Jesse mentioned is that you don't present that to the SAME server (ie, you can't present the same VOL1 volume to the SAME server, but multiple servers can have VOL1 mounted on them).
One gotcha: if you are presented with a clustered volume, you'll need to run NSSMU and mark it non-shareable before mounting the snapshotted one on a non-clustered server.
--Kevin