Cannot mount file system
Hi,
I have had great advice on this board, and thought maybe someone could help me out. I am following the instructions for installing Enterprise Linux on VMware Server. I have just about completed the install, but have run into the following error message at this step of the instructions:
Format the file system. Before proceeding with formatting and mounting the file system, verify that O2CB is online on both nodes; O2CB heartbeat is currently inactive because the file system is not mounted.
# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Checking O2CB heartbeat: Not active
You are only required to format the file system on one node. As the root user on rac1, execute
# ocfs2console
1. OCFS2 Console: Select Tasks, Format.
2. Format:
* Available devices: /dev/sdb1
* Volume label: oracle
* Cluster size: Auto
* Number of node slots: 4
* Block size: Auto
3. OCFS2 Console: CTRL-Q to quit.
Mount the file system. To mount the file system, execute the command below on both nodes.
# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
On rac1, the command to mount the file system works great, no errors. But on rac2, I get the following error:
[root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
Does anyone have an idea? I have followed the instructions to the letter, and this is the first error that has me stopped.
Thanks,
Zach
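As an aside, "Bad magic number in superblock" from ocfs2_hb_ctl usually means the failing node is not reading the same on-disk volume that was formatted on rac1 (a stale view of the partition table is a common cause). A minimal sketch of the cross-node check follows; the device path is taken from the thread, but the UUID values are placeholders:

```shell
# On each node you would capture the UUID of the shared volume, e.g.:
#   uuid=$(blkid -s UUID -o value /dev/sdb1)
# Placeholder values stand in for the two nodes' results here:
uuid_rac1="8cb7a5ad-0000-0000-0000-000000000000"
uuid_rac2="8cb7a5ad-0000-0000-0000-000000000000"

if [ "$uuid_rac1" = "$uuid_rac2" ]; then
    echo "both nodes see the same ocfs2 volume"
else
    # rac2 likely has a stale view of the disk; re-read the partition table
    echo "UUID mismatch: re-read the partition table on rac2 and retry" >&2
fi
```

On real nodes, `partprobe /dev/sdb` followed by a re-run of the mount would be the assumed follow-up, not a guaranteed fix.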
Hello -
I installed and configured OCFS2 on one of the nodes in my Linux cluster. However, when I run this command on the first node (there are 2 nodes in my cluster):
/etc/init.d/o2cb status
I see the following heartbeat error:
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster prycluster: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
Any ideas why heartbeat would show as inactive? Also, when I try to bring the cluster file system online on my other node, I get this error:
Starting O2CB cluster prycluster: Failed
Cluster prycluster created
Node jtcperfloradb01 added
o2cb_ctl: Internal logic failure while adding node jtcperfloradb02
Stopping O2CB cluster prycluster: OK
Any help would be greatly appreciated!
Thanks!
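One common cause of `o2cb_ctl: Internal logic failure` when adding the second node is an /etc/ocfs2/cluster.conf that differs between the nodes, or a node name that does not match that machine's hostname. For reference, a two-node cluster.conf for this cluster might look roughly like the following sketch; the IP addresses and port are made up:

```
cluster:
        node_count = 2
        name = prycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = jtcperfloradb01
        cluster = prycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = jtcperfloradb02
        cluster = prycluster
```

The file should be identical on both nodes, and the stanza bodies are indented with tabs. Also note that `Checking O2CB heartbeat: Not active` is expected until an OCFS2 volume is actually mounted, as the instructions quoted above point out.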
Similar Messages
-
Lucreate - Cannot make file systems for boot environment
Hello!
I'm trying to use Live Upgrade to upgrade one of "my" SPARC servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in [Infodoc 72099|http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1] and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 SPARC DVD ISO. I then did:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <d100> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <S10U6_20081207>.
Source boot environment is <d100>.
Creating boot environment <S10U6_20081207>.
Creating file systems on boot environment <S10U6_20081207>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
Mounting file systems for boot environment <S10U6_20081207>.
Calculating required sizes of file systems for boot environment <S10U6_20081207>.
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
So the problem is:
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
Well - why's that?
I can do a "newfs /dev/md/dsk/d200" just fine.
When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
/bin/nawk: can't open file /etc/lu/ICF.2
source line number 1
Boot environment <S10U6_20081207> deleted.
I get this error consistently (I have run the lucreate many times now).
lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
Would anyone maybe have an idea about what's broken there?
--($ ~)-- LC_ALL=C metastat
d200: Mirror
Submirror 0: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d20: Submirror of d200
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes
d100: Mirror
Submirror 0: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d10: Submirror of d100
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes
d201: Mirror
Submirror 0: d21
State: Okay
Submirror 1: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2097414 blocks (1.0 GB)
d21: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes
d11: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Okay Yes
hsp001: is empty
Device Relocation Information:
Device Reloc Device ID
c1t1d0 Yes id1,sd@THITACHI_DK32EJ-36NC_____434N5641
c1t0d0 Yes id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
--($ ~)-- /bin/df -k | grep md
/dev/md/dsk/d100 15490539 10772770 4562864 71% /
Thanks,
Michael
Hello.
(sys01)root# devfsadm -Cv
(sys01)root#
To be on the safe side, I even rebooted after having run devfsadm.
--($ ~)-- sudo env LC_ALL=C LANG=C lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
d100 yes yes yes no -
--($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
boot environment name: d100
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/md/dsk/d100 ufs 16106660352 / logging
/dev/md/dsk/d201 swap 1073875968 - -
In the rebooted system, I re-did the original lucreate:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Copying.
Excellent! It now works!
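As an aside, the sizes in the listings above are mutually consistent: metastat reports d100 as 31458321 disk blocks, and with 512-byte disk blocks that is exactly the 16106660352 bytes lufslist shows for d100:

```shell
# Solaris disk blocks are 512 bytes; convert the metastat block count to bytes.
blocks=31458321
bytes=$((blocks * 512))
echo "$bytes"   # 16106660352, matching the lufslist size for d100
```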
Thanks a lot,
Michael -
RMAN to Disk (shared mounted file system) then secure backup to Tape
Hi
This is a little away from the norm and I am new to Oracle Secure Backup.
We have several databases on physically separate servers; all back up to a central disk (a ZFS shared file system).
We have a media server also with the same mounted file-system so it can see all of the RMAN backups there.
Secure backup is installed on the media server and is configured as such.
The question I have is this: I need to back up to tape the file system where all the RMAN backups live. I have configured the data set, but I get file permission errors for each of the RMAN backup files in the directory.
I have tried to change these permissions but to no avail (assuming it is just a read/write access change that is needed, but this may be a problem in the long run). What is the general process for backing up already-created RMAN backups sitting in a shared area? I know disk-then-tape backups are not the norm, but can this be done? I would have installed the Secure Backup client on each server and managed the whole backup through Secure Backup, but this is not possible; I must go to tape from the file system. Any advice and guidance would be much appreciated.
Kind regards
Vicky
Edited by: user10090654 on Oct 4, 2011 4:50 AM
You can easily accomplish a very streamlined D2D2T strategy! RMAN backs up to disk... then you back up that disk to tape via RMAN and OSB. Upon restore, RMAN will restore from the best media, disk or tape, based on where the files are located.
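In RMAN terms, the suggested D2D2T flow can be sketched roughly as below. This is an outline only: the disk path and channel name are made up, and it assumes OSB is already configured as the SBT media manager on the media server:

```
RUN {
  # stage 1: each database server backs up to the shared disk location
  BACKUP DATABASE FORMAT '/backup/zfs_share/%U';
}

RUN {
  # stage 2: copy the existing disk backup sets to tape through OSB
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
  BACKUP BACKUPSET ALL;
}
```

The permission errors described above would still need fixing at the OS level: whichever user runs the tape stage must be able to read the backup pieces on the share.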
Donna -
Zfs destroy DOES NOT CHECK NFS mount file-systems
I asked this question on Twitter once and the answer was a good one, but I did some checking today and was surprised!!
# zfs destroy mypool/home/andrew
The above command will destroy this file system, no questions asked, but if the file system is mounted you will get back "Device busy", and if you have snapshots then they will be protected as well:
server# zfs destroy mypool/home/andrew
cannot unmount 'tank/home/andrew'
server# zfs destroy dpool/staff/margaret
cannot destroy 'dpool/staff/margaret': filesystem has children
use '-r' to destroy the following datasets:
dpool/staff/margaret@Wed18
dpool/staff/margaret@Wed22
BUT?
server# zfs destroy dpool/staff/margaret@Wed18
server# zfs destroy dpool/staff/margaret@Wed22
NFSclient# cd /home/margaret
NFSclient# ls -l
drwx------+ 2 margaret staff 2 Aug 29 17:06 Mail
lrwxrwxrwx 1 margaret staff 4 Aug 29 17:06 mail -> Mail
drwx--x--x+ 2 margaret staff 2 Aug 29 17:06 public_www
server# zfs destroy dpool/staff/margaret
server#
GONE!!!
I will file a bug report to see what Oracle say!
Comments?
I think there should be a hold/protect feature for file systems:
# zfs hold dpool/staff/margaret
Andrew
The CR is already filed:
6947584 zfs destroy should be recoverable or prevented
The zfs.1m man page, which covers the mounted case, and the ZFS admin guide are pretty clear about the current zfs destroy behavior.
http://docs.oracle.com/cd/E23824_01/html/821-1448/gamnq.html#gammq
Caution - No confirmation prompt appears with the destroy subcommand. Use it with extreme caution.
zfs destroy [-rRf] filesystem|volume
Destroys the given dataset. By default, the command
unshares any file systems that are currently shared,
unmounts any file systems that are currently mounted,
and refuses to destroy a dataset that has active depen-
dents (children or clones).
I'm sorry that you were surprised.
Accidents happen too, like destroying the wrong file system, so always have good backups.
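Since zfs destroy prompts for nothing, one defensive pattern is a tiny wrapper that demands confirmation before delegating. This is purely illustrative and not an existing ZFS feature; `zfs` is stubbed out below so the sketch runs anywhere:

```shell
# Stub standing in for the real command, for demonstration only.
zfs() { echo "zfs $*"; }

# Refuse to destroy unless the caller answers "y" on stdin.
safe_zfs_destroy() {
    ds=$1
    printf 'Really destroy %s? [y/N] ' "$ds"
    read -r ans
    if [ "$ans" = "y" ]; then
        zfs destroy "$ds"
    else
        echo "aborted: $ds left intact"
    fi
}

echo n | safe_zfs_destroy dpool/staff/margaret   # declines, prints the abort message
```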
Thanks, Cindy -
Limitations: Mounting File systems
Hi,
We have a requirement to communicate with 30-odd application systems through XI.
Mostly they are File-to-SAP scenarios or vice versa.
Instead of using the FTP transport protocol, we are planning to use NFS, as FTP is heavy on performance.
Is there any limitation on the maximum number of file systems mounted on the XI (Unix) server?
Also, please suggest the best practice (NFS or FTP) for these kinds of scenarios, where the volume of data and the number of interfaces are very high.
Best Regards,
Satish
"I understand the use of /etc/fstab is now deprecated."...
This is true - but they have been saying that since at least 10.3. Well, "/private/etc/fstab" still works fine in 10.5.1, so I don't imagine there will be a problem continuing to use it for the time being.
In Leopard, it looks like the information is automatically imported into "DirectoryService" under "/mounts" so it might also be possible to configure the mountpoints from there... -
Want to be able to mount file systems in my computers at home
Hi,
I have 2 Apple computers, a Mac Pro and an iBook, and I want to be able to mount the file system from one on the other to transfer files. What software do I need to do this?
I suppose I could use an FTP server just to transfer files; is there a free one?
Or if I actually want to mount the file system , what do I need?
thanks
-Malena
As sig said, FireWire is a really good way to do this. It is one of the easiest, fastest and most painless ways to do this.
If it is not so convenient for you to tether up one computer to the other via firewire, if the computers are on the same home network, enable personal file sharing (Sys Prefs > Sharing > Services) in at least one of them. Then you should see it on the other computer when you do ⇧⌘k in the Finder. Double-click on it and its icon should mount on your desktop. Open it, then drag and drop the files. If on the same home network and you enable ftp or remote login (Sys Prefs > Sharing > Services again), you can do it from the command line, no extra software required.
If they are not on the same network, e.g., one is at work and the other is at home, this can still be done fairly easily and fairly painlessly in a hack-proof manner by tunneling your file sharing port through secure shell. If you want to do it that way, that's a bit more involved to initially set up, but really not difficult to do. Post back if you want help doing that. -
Won't mount, file system not recognised
I'm trying to download firmware from Netgear's website but when I try to mount the files they say the file system is not recognised.
I'm trying to get hold of Netgear as well but I thought someone here might have some suggestions too while I wait for a response from Netgear.
The two files I've tried so far are:
http://kbserver.netgear.com/release_notes/D102713.asp
ftp://downloads.netgear.com/files/dg834v2_1022.zip
http://kbserver.netgear.com/release_notes/D103044.asp
ftp://downloads.netgear.com/files/dg8343_0132.zip
My mistake. The file I was trying to mount isn't intended to be mounted. You're actually meant to upload this .img file directly to the modem.
-
Creation of oracle directory structure on mounted file system in linux
Hi,
I need to use the Data Pump utility, for which a directory object is required. Since my dump files are stored on another system, I usually mount that file system. Can I create the Oracle directory object on a mounted file system in Linux? Please advise urgently. Thanks in advance.
Yes you can, why not?
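A directory object is just metadata pointing at an OS path, so it can name a mounted path. A sketch, with hypothetical directory, path and user names:

```
-- Names and path are hypothetical; run as a DBA.
CREATE OR REPLACE DIRECTORY dpump_dir AS '/mnt/remote_dumps';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO app_user;
```

The oracle OS user still needs read/write permission on the mounted path, and for an NFS mount the mount options must allow the database server to write there.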
-
ZFS file system mount in solaris 11
Create a ZFS file system for the package repository in the root pool:
# zfs create rpool/export/repoSolaris11
# zfs list
The atime property controls whether the access time for files is updated when the files are read.
Turning this property off avoids producing write traffic when reading files.
# zfs set atime=off rpool/export/repoSolaris11
Create the required pkg repository infrastructure so that you can copy the repository
# pkgrepo create /export/repoSolaris11
# cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > \
sol-11-1111-repo-full.iso
# mount -F hsfs /export/repoSolaris11/sol-11-1111-repo-full.iso /mnt
# ls /mnt
# df -k /mnt
Using the tar command as shown in the following example can be a faster way to move the
repository from the mounted file system to the repository ZFS file system.
# cd /mnt/repo; tar cf - . | (cd /export/repoSolaris11; tar xfp -)
# cd /export/repoSolaris11
# ls /export/repoSolaris11
pkg5.repository README
publisher sol-11-1111-repo-full.iso
# df -k /export/repoSolaris11
# umount /mnt
# pkgrepo -s /export/repoSolaris11 refresh
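The tar pipe in the repository copy above is a generic copy-a-tree-preserving-permissions idiom. A self-contained demonstration against throwaway temp directories (all paths made up) looks like this:

```shell
# Build a small source tree to copy.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/publisher"
echo "sample" > "$src/publisher/file.txt"

# Same pattern as the repository copy: write a tar stream from one directory
# and unpack it in another, preserving permissions (p), with no temp archive.
(cd "$src" && tar cf - .) | (cd "$dst" && tar xpf -)

content=$(cat "$dst/publisher/file.txt")
echo "$content"
```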
=============================================
# zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
==============================================
I am trying to reconfigure the package repository with the above steps. When I reached the step below:
# zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
it created the mount point but did not mount it, giving the error message:
cannot mount, directory not empty
When I restarted the box, it threw up the service administration screen with an error message that it was not able to mount all mount points. Please advise. Thanks in advance.
Hi.
Don't confuse the contents of the directory used as a mount point with what you see after the file system is mounted; ZFS refuses to mount onto a directory that is not empty.
Once the ZFS file system is mounted there, you see the contents of that ZFS file system, not of the underlying directory.
As a check, you can unmount any other ZFS file system and see that its mount-point directory is empty.
Regards. -
Dfc: Display file system space usage using graph and colors
Hi all,
I wrote a little tool, somewhat similar to df(1) which I named dfc.
To present it, nothing better than a screenshot (because of colors):
And there are a few options available (as of version 3.0.0):
Usage: dfc [OPTIONS(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE]
[-u UNIT]
Available options:
-a print all mounted filesystem
-b do not show the graph bar
-c choose color mode. Read the manpage
for details
-d show used size
-e export to specified format. Read the manpage
for details
-f disable auto-adjust mode (force display)
-h print this message
-i info about inodes
-l only show information about locally mounted
file systems
-m use metric (SI unit)
-n do not print header
-o show mount flags
-p filter by file system name. Read the manpage
for details
-q sort the output. Read the manpage
for details
-s sum the total usage
-t filter by file system type. Read the manpage
for details
-T show filesystem type
-u choose the unit in which
to show the values. Read the manpage
for details
-v print program version
-w use a wider bar
-W wide filename (un truncate)
If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
(it is also available on the archlinuxfr repository for those who have it enabled).
For further explanations, there is a manpage or the wiki on the official website.
Here is the official website: http://projects.gw-computing.net/projects/dfc
If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
Cheers,
Rolinh
Last edited by Rolinh (2012-05-31 00:36:48)
bencahill wrote: There were the decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
I know I changed the options a lot with the 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type, so someone using the original df would not be surprised.
I'm sorry for the inconvenience. There should not be any more changes like this in the future, though; I thought it was needed (especially because of the unit options).
bencahill wrote:
Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
Two suggestions:
1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
Ok, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (default), because I do not use long options, but I think this would be OK, right?
bencahill wrote:2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
This was already planned for the 2.2.0 release.
bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
This is what I would have suggested.
bencahill wrote:By the way, awesome software.
Thanks! I'm glad you like it!
bencahill wrote:P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... so it's actually better that you posted your requests here! -
How to increase the external disk size used for an existing file system, Solaris 10
Configuration:
Server: Sun T5220
OS: Solaris 10 5/08 s10s_u5wos_10 SPARC
Storage: EMC AX4-5
EMC PowerPath: PowerPath Version 5.2
I have the following scenario:
In the AX4-5 storage array, I created two LUNs in RAID Group 10, with RAID type 1/0:
LUN 101: 20Gb
LUN 102: 10Gb
Both LUNs were added to a Storage Group (SG1) that includes two servers (Server1 and Server2); both servers run Solaris 10 5/08 and PowerPath.
The servers detect both LUNs across two paths. With PowerPath, a virtual path (emcpower0a, emcpower1a) was created to access each LUN respectively.
We have mounted two file systems: /home/test1 over emcpower0a -> LUN 101 and /home/test2 over emcpower1a -> LUN 102.
Filesystem size used avail capacity Mounted on
/dev/dsk/emcpower0a 20G 919M 19G 5% /home/test1
/dev/dsk/emcpower1a 10G 9G 1G 90% /home/test2
I want to increase the space in the file system /home/test2, without losing the information I have stored, and using the same LUN, LUN 102. To do this I started with the following steps:
1- Create a new LUN, LUN 103, with 15 Gb in RAID Group 10. Result: OK, I have available space in RAID Group 10.
2- Add LUN 103 to LUN 102, using the concatenation option. Result: OK. This action creates a new metaLUN with the same characteristics as LUN 102, but with a new size of 25 Gb.
After doing these actions, I want to know how Solaris recognizes this new space. What do I need to do to increase the size of file system /home/test2 to 25 Gb? Is that possible?
I reviewed the description of each disk using the format command, and the disks do not show any change.
Could anyone help me? If you need more detail, please do not hesitate to contact me.
Thanks in advance.
Robert, thanks a lot for your know-how. You helped me clarify some doubts. To complete your answer, I will add two more details, based on my experience.
After executing type -> auto configure and label, the disk was created with different partitions like root, swap and usr, like this:
Volume name = < >
ascii name = <DGC-RAID10-0223 cyl 49150 alt 2 hd 32 sec 12>
pcyl = 49152
ncyl = 49150
acyl = 2
nhead = 32
nsect = 12
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 682 128.06MB (683/0/0) 262272
1 swap wu 683 - 1365 128.06MB (683/0/0) 262272
2 backup wu 0 - 49149 9.00GB (49150/0/0) 18873600
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 1366 - 49149 8.75GB (47784/0/0) 18349056
7 unassigned wm 0 0 (0/0/0) 0
This was not convenient, because all the stored information appeared in the first slice, and growfs does not work if a root or swap slice appears on the disk. For that reason it was necessary to recreate every slice manually with the partition command. The final result was this:
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 - 49149 9.00GB (49150/0/0) 18873600
1 unassigned wu 0 0 (0/0/0) 0
2 backup wu 0 - 49149 9.00GB (49150/0/0) 18873600
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
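The block counts in these labels follow directly from the geometry shown above (nhead=32, nsect=12): blocks = cylinders × heads × sectors per track, at 512 bytes per block. For the original 683-cylinder slice 0:

```shell
# Geometry from the disk label above: 32 heads, 12 sectors per track.
cyls=683; heads=32; sects=12
blocks=$((cyls * heads * sects))
echo "$blocks blocks"              # 262272, as the label shows for slice 0
mb=$((blocks * 512 / 1024 / 1024))
echo "~${mb} MB"                   # the label's 128.06MB, truncated to whole MB
```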
Now, execute the label command again to save the new information.
Mount the file system and execute:
growfs -M mount_point raw_device
to expand the file system to 9 Gb. -
Dears
Can we expand a mounted file system's size on Solaris 10, given that I'm not using SVM?
Thanks
And you're asking this question in the Solaris 9 forum?
The answer is yes in both cases, though. 'growfs' will expand a mounted UFS filesystem.
It doesn't help you create the space necessary to expand it, though. That can be very easy or very difficult depending on how your storage is allocated.
Darren -
Sun Cluster 3.2 - Global File Systems
Sun Cluster has a Global Filesystem (GFS) that supports read-only access throughout the cluster. However, only one node has write access.
In Linux, a GFS file system can be mounted by multiple nodes for simultaneous READ/WRITE access. Shouldn't this be the same for Solaris as well?
From the documentation that I have read,
"The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
The GFS is also known as Cluster File System or Proxy File system.
Our client believes that they can have their application "scaled" and that all nodes in the cluster can write to the globally mounted file system. My belief was that the only way this can occur is when the application has failed over, and the "write" would then come from the "primary" node mastering the application at that time. Any input or clarification would be greatly appreciated. Thanks in advance.
Ryan
Thank you very much, this helped :)
And how seamless is remounting of the block device LUN if one server dies?
Should some clustered services (FS clients such as app servers) be restarted
in case when the master node changes due to failover? Or is it truly seamless
as in a bit of latency added for duration of mounting the block device on another
node, with no fatal interruptions sent to the clients?
And, is it true that this solution is gratis, i.e. may legally be used for free
unless the customer wants support from Sun (authorized partners)? ;)
//Jim
Edited by: JimKlimov on Aug 19, 2009 4:16 PM -
File system used space monitoring rule
All, I'm trying to change the way OC monitors disk space usage. I don't want it to report on NFS file systems, except on the server from which they are shared.
The monitored attribute is FileSystemUsages.name=*.usedSpacePercentage.
I'd like it to only report on ufs and zfs file systems. I've tried to create new rules using the attributes:
FileSystemUsages.type=ufs.usedSpacePercentage
FileSystemUsages.type=UFS.usedSpacePercentage
FileSystemUsages.type=zfs.usedSpacePercentage
FileSystemUsages.type=ZFS.usedSpacePercentage
But I don't get any alerts generated on system which I know violate the thresholds I've specified.
Has anybody successfully set up rules like these? Am I on the right track? Do ufs/UFS/zfs/ZFS need to be single- or double-quoted? Documentation with various examples is non-existent as far as I can tell.
Any help is greatly appreciated
Tim
Did you get any answers to this question? It seems like OEM 12c has file system space usage monitoring set up for NFS-mounted file systems; however, I could not find a place to specify thresholds for those NFS-mounted file systems, except for the root file system. Does anybody know how to set up thresholds for NFS-mounted file systems (NetApp storage)? Thank you very much.
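Outside of OC/OEM rule syntax, the intended "only ufs and zfs" selection is easy to express over a mount table. This is a local illustration with sample data standing in for real df output, not Ops Center configuration:

```shell
# Sample "mount point / fstype / use%" table; keep only ufs and zfs rows.
filtered=$(cat <<'EOF' | awk '$2 == "ufs" || $2 == "zfs" { print $1 }'
/            ufs  71
/export      zfs  40
/net/backup  nfs  90
EOF
)
echo "$filtered"   # the nfs row is dropped
```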
-
Hi all,
Where do I find a description of the file systems below?
I want to know what they are being used for.
MOUNT POINT TYPE DEVICE SIZE INUSE FREE USE%
/sw internal /dev/md0 991MB 793MB 198MB 80%
/swstore internal /dev/md1 991MB 718MB 273MB 72%
/state internal /dev/md2 5951MB 195MB 5756MB 3%
/local/local1 SYSFS /dev/md4 14878MB 483MB 14395MB 3%
/vbspace GUEST /dev/data1/vbsp 230128MB 128MB 230000MB 0%
.../local1/spool PRINTSPOOL /dev/data1/spool 991MB 32MB 959MB 3%
/obj1 CONTENT /dev/data1/obj 121015MB 355MB 120660MB 0%
/dre1 CONTENT /dev/data1/dre 79354MB 53802MB 25552MB 67%
/ackq1 internal /dev/data1/ackq 1189MB 0MB 1189MB 0%
/plz1 internal /dev/data1/plz 2379MB 26MB 2353MB 1%
Jan
Hi Michael,
I manage a WAAS network with a WAAS core WAVE-7541 in the data center, a Central Manager installed on a WAVE-594 running version 4.4.3, and remote WAAS modules (SM-SRE-910, version 4.4.3.4) integrated in Cisco 2951/K9 routers.
On the remote module I have:
#show disk detail
Physical disk information:
disk00: Present 22DCP07VT (h00 c00 i00 l00 - Int DAS-SATA)
476937MB(465.8GB)
disk01: Present 22HZT0SUT (h01 c00 i00 l00 - Int DAS-SATA)
476937MB(465.8GB)
Mounted file systems:
MOUNT POINT TYPE DEVICE SIZE INUSE FREE USE%
/sw internal /dev/md0 991MB 698MB 293MB 70%
/swstore internal /dev/md1 991MB 304MB 687MB 30%
/state internal /dev/md2 3967MB 127MB 3840MB 3%
/local/local1 SYSFS /dev/md4 14878MB 276MB 14602MB 1%
.../local1/spool PRINTSPOOL /dev/data1/spool 991MB 32MB 959MB 3%
/obj1 CONTENT /dev/data1/obj 121015MB 11712MB 109303MB 9%
/dre1 CONTENT /dev/data1/dre 119031MB 116977MB 2054MB 98%
/ackq1 internal /dev/data1/ackq 1189MB 0MB 1189MB 0%
/plz1 internal /dev/data1/plz 2379MB 1MB 2378MB 0%
Software RAID devices:
DEVICE NAME TYPE STATUS PHYSICAL DEVICES AND STATUS
/dev/md0 RAID-1 NORMAL OPERATION disk00/00[GOOD] disk01/00[GOOD]
/dev/md1 RAID-1 NORMAL OPERATION disk00/01[GOOD] disk01/01[GOOD]
/dev/md2 RAID-1 NORMAL OPERATION disk00/02[GOOD] disk01/02[GOOD]
/dev/md3 RAID-1 NORMAL OPERATION disk00/03[GOOD] disk01/03[GOOD]
/dev/md4 RAID-1 NORMAL OPERATION disk00/04[GOOD] disk01/04[GOOD]
/dev/md5 RAID-1 NORMAL OPERATION disk00/05[GOOD] disk01/05[GOOD]
Disk encryption feature is disabled.
I would like details on the size, use and contents of the following partitions:
/obj1, which is the CIFS object cache, and /dre1, which is used for the DRE byte-level cache, as you indicated.
In particular, I don't understand why, when I connect to the device through the web interface, under the tab CifsAo -> Monitoring -> Cache, I see:
Maximum cache disk size: 95.75391 GB
Is this value contained in the 121015 MB of /obj1, right?
In fact, I have configured in the Central Manager a Preposition Directive with Total Size as % of Cache Volume = 20.
What is the remaining content in /obj1?
And is the redundancy library that the WAAS devices access to compress traffic contained in /dre1, as well as in /plz1?
Please, can you clarify this for me?
Thanks a lot in advance