Unable to do expdp on an NFS mount point in Solaris, Oracle DB 10g
Dear folks,
I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
===============
expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
ORA-27040: file create error, unable to create file
SVR4 Error: 122: Operation not supported on transport endpoint
I have mounted like this:
mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
NFS=172.20.2.204:/exthdd
Hi Peter,
Thanks for your reply. Please see below. I am able to touch files on the mount; when exporting, the log file is also created, but it contains the same error message I showed in my previous post.
# su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have new mail.
oracle 201> touch /backup_db/dumps/u2dw.dmp.test
oracle 202>
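A point worth checking (a hypothetical probe, not an Oracle-documented test): touch only proves that a plain create works. Data Pump's file creation - like the java.io.File.createNewFile() case in a later thread on this page - can go through an exclusive create, which on NFSv3 depends on the lock/status daemons that touch never exercises. In bash, `set -C` (noclobber) makes `>` use an exclusive create, so the same mount can be probed both ways:

```shell
# Probe a directory with both a plain create and an exclusive create.
# Point DUMP_DIR at /backup_db/dumps on the real host; it defaults to a
# temporary directory so the script is safe to run anywhere.
DUMP_DIR="${DUMP_DIR:-$(mktemp -d)}"
f="$DUMP_DIR/u2dw.probe"

touch "$f" && echo "plain create: ok"
rm -f "$f"

# 'set -C' (noclobber) makes '>' an exclusive create (O_CREAT|O_EXCL).
if ( set -C; : > "$f" ) 2>/dev/null; then
    echo "exclusive create: ok"
else
    echo "exclusive create: FAILED"
fi
rm -f "$f"
```

If the exclusive create fails while touch succeeds, the NFS lock services (lockd/statd on both ends) are worth a look before blaming the mount options.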
Similar Messages
-
Expdp fails to create .dmp files on an NFS mount point in Solaris 10 / Oracle 10g
given read, write grants to public as well as to the specific user
782011 wrote:
Hi sb92075,
Thanks for your reply. Please see below. I am able to touch files on the mount; when exporting, the log file is also created, but it contains the same error message I showed in my previous post.
# su - oracle
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have new mail.
oracle 201> touch /backup_db/dumps/u2dw.dmp.test
oracle 202>
I contend that Oracle is too dumb to lie and does not mis-report reality:
27040, 00000, "file create error, unable to create file"
// *Cause: create system call returned an error, unable to create file
// *Action: verify filename, and permissions -
Unable to open and save across mount point
Why do these forums appear so neglected?
Here is my ongoing question/problem:
General input/output error while accessing /home/MyDir/mountedDir/SomeDire/TheFileName
I experience a "feature" wherein I cannot open and save documents across an NFS mount point from a Linux client.
This is an error rcv'd on Linux, GNOME 2.8.*, 2.6.10-gentoo-r6
The mount is accomplished via an entry in the fstab as shown:
server-hostname:/export/mydir /home/mydir/mountdir nfs tcp,user,rw,rsize=32768 0 0
server-hostname is a Solaris OS.
Sounds like you are missing some of the required plugins - possibly an updater failed, or someone moved/deleted the wrong directory.
Yes, you'll need to reinstall to restore the missing plugins. -
NFS mount point does not allow file creation via java.io.File
Folks,
I have mounted an nfs drive to iFS on a Solaris server:
mount -F nfs nfs://server:port/ifsfolder /unixfolder
I can mkdir and touch files no problem. They appear in iFS as I'd expect. However, if I write to the NFS mount from a JVM using java.io.File, I encounter the following problems:
Only directories are created, unless I include the user that started the JVM in the oinstall Unix group along with the oracle user, because it is the oracle user that writes to iFS, not the user creating the files!
I'm trying to create several files in a single directory via java.io.File, but only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mount point works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK! So why not via a Unix NFS mapping?
many thanks in advance.
Hi Diep,
I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was to not create the file via java.io.File.createNewFile() before adding content via an OutputStream; if file creation is left until the content is added, as shown below, the problem is resolved.
Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS (at the operating-system level, rather than adding a folder path relationship via the Java API)?
many thanks in advance.
public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
{
    File file = new File(p_absolutePath);
    // Oracle TAR Number: 3308936.995
    // Uncommenting the line below causes the failure:
    //   java.io.IOException: Operation not supported on transport endpoint
    //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
    //     at java.io.File.createNewFile(File.java:828)
    //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
    //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
    //file.createNewFile();

    // Let the FileOutputStream create the file (a plain create, with no
    // exclusive-create call), which avoids the NFS failure.
    FileOutputStream fos = new FileOutputStream(file);
    byte[] buffer = new byte[1024];
    int noOfBytesRead = 0;
    while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
    {
        fos.write(buffer, 0, noOfBytesRead);
    }
    p_inputStream.close();
    fos.flush();
    fos.close();
} -
NFS4: Problem mounting NFS mount onto a Solaris 10 Client
Hi,
I am having problems mounting an NFS mount point from a Linux server onto a Solaris 10 client.
In the following
=My server IP ..*.120
=Client IP ..*.100
Commands run on Client:
==================
# mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
nfs mount: retrying: /scratch/pvfs2
nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
# mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
# rpcinfo -p
program vers proto port service
100000 4 tcp 111 rpcbind
100000 3 tcp 111 rpcbind
100000 2 tcp 111 rpcbind
100000 4 udp 111 rpcbind
100000 3 udp 111 rpcbind
100000 2 udp 111 rpcbind
1073741824 1 tcp 36084
100024 1 udp 42835 status
100024 1 tcp 36086 status
100133 1 udp 42835
100133 1 tcp 36086
100001 2 udp 42836 rstatd
100001 3 udp 42836 rstatd
100001 4 udp 42836 rstatd
100002 2 tcp 36087 rusersd
100002 3 tcp 36087 rusersd
100002 2 udp 42838 rusersd
100002 3 udp 42838 rusersd
100011 1 udp 42840 rquotad
100021 1 udp 4045 nlockmgr
100021 2 udp 4045 nlockmgr
100021 3 udp 4045 nlockmgr
100021 4 udp 4045 nlockmgr
100021 1 tcp 4045 nlockmgr
100021 2 tcp 4045 nlockmgr
100021 3 tcp 4045 nlockmgr
100021 4 tcp 4045 nlockmgr
# showmount -e 172.25.30.120 (Server)
showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
Commands on Server:
================
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100021 1 tcp 49927 nlockmgr
100021 3 tcp 49927 nlockmgr
100021 4 tcp 49927 nlockmgr
100021 1 udp 32772 nlockmgr
100021 3 udp 32772 nlockmgr
100021 4 udp 32772 nlockmgr
100011 1 udp 796 rquotad
100011 2 udp 796 rquotad
100011 1 tcp 799 rquotad
100011 2 tcp 799 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100005 1 udp 809 mountd
100005 1 tcp 812 mountd
100005 2 udp 809 mountd
100005 2 tcp 812 mountd
100005 3 udp 809 mountd
100005 3 tcp 812 mountd
100024 1 udp 854 status
100024 1 tcp 857 status
# showmount -e 172.25.30.120
Export list for 172.25.30.120:
/scratch/nfs 172.25.30.100,172.25.24.0/4
/scratch/pvfs2 172.25.30.100,172.25.24.0/4
Thank you, ~al
I also tried to run snoop on the client and wireshark on the server, and the following is what I see:
On Server, upon issuing the mount command on the client:
# tshark -i eth1
Running as user "root" and group "root". This could be dangerous.
Capturing on eth1
0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
On Client, upon issuing the mount command:
# snoop -d bge1
Using device /dev/bge1 (promiscuous mode)
? -> * ETHER Type=9000 (Loopback), size = 60 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> * ETHER Type=9000 (Loopback), size = 60 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
Also I see the following on Client:
# rpcinfo -p pvfs2-io-0-3
rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
When I try the above rpcinfo command, the client snoop and server wireshark (ethereal) outputs are as follows:
Client # snoop -d bge1
Using device /dev/bge1 (promiscuous mode)
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> * ETHER Type=9000 (Loopback), size = 60 bytes
pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
? -> * ETHER Type=9000 (Loopback), size = 60 bytes
Server # tshark -i eth1
Running as user "root" and group "root". This could be dangerous.
Capturing on eth1
0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
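Every capture above shows the same signature: the client's GETPORT/SYN to port 111 is answered with ICMP "port unreachable" from the server, even though rpcinfo run locally on the server lists the portmapper. That pattern usually means a packet filter on the Linux server (e.g. iptables; `iptables -L -n` on the server would confirm) is rejecting rpcbind traffic. A small reachability probe, with the addresses from this thread assumed, can be run from the client (requires bash and the `timeout` utility):

```shell
# Probe whether a TCP port answers, using bash's /dev/tcp redirection.
# "closed/filtered" from the client, while the service is registered
# locally on the server, points at a firewall between the hosts.
probe() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed/filtered"
    fi
}

probe 172.25.30.120 111     # rpcbind/portmapper on the Linux server
probe 172.25.30.120 2049    # nfsd itself
```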
~thank you for any help you can provide!!! -
Finding what data is "under" an NFS mount point
Hello all,
Regarding NFS share mounts: every now and then our mounts drop out, which is easily fixable. The issue we have is that if it's not caught in time, data is written to the physical target location as opposed to the network share. I.e., Server1 mounts Storage1:/shares to /mnt/nfs/shareX. If the mount drops, anything targeting Server1's /mnt/nfs/shareX will write to the local version of Server1's /mnt/nfs/shareX (it's auto-created).
Then, if the mount is recreated, anything targeting Server1 will obviously be writing to the mounted NFS share again.
Hopefully I have written this in a semi-understandable way.
My question is: while those NFS shares are mounted, is it possible to see what data is written on the underlying, physical /mnt/nfs on Server1? Or do I need to umount them to see what data is there? If any other clarification is needed, please...
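Two things can help here. On Linux, the underlying directory can be inspected without unmounting by bind-mounting the parent filesystem elsewhere (`mount --bind / /mnt/inspect`, then look under /mnt/inspect/mnt/nfs/shareX). And the silent-local-write problem can be caught up front by checking that the target really is a mount point before writing; a sketch using device numbers (GNU stat syntax, hypothetical share path):

```shell
# A directory is a mount point when its device number differs from its
# parent's (mountpoint(1) from util-linux does the same check).
is_mounted() {
    local dir=$1
    [ "$(stat -c %d "$dir")" != "$(stat -c %d "$dir/..")" ]
}

if is_mounted /mnt/nfs/shareX; then
    echo "shareX is mounted; safe to write"
else
    echo "shareX is NOT mounted; writes would land on the local disk"
fi
```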
This topic first appeared in the Spiceworks Community. -
32 bit expdp.exe on 64 bit installation of Oracle DB 10g
Hello,
This may sound a bit strange, but here goes. I have the 64-bit version of Oracle 10g Release 2 (10.2.0.4) installed on Windows Server 2008 64-bit. I have found that when accessing the server through my telnet program I cannot run expdp.exe. I always get this error: "The system cannot execute the specified program".
After a bit of troubleshooting I have found that the problem is that the telnet program I am using cannot launch the 64-bit version of expdp.exe. If I load the 32-bit version of Oracle 10g on a 64-bit server, I can run expdp.exe in the telnet session.
So my question is: does anyone know how to load the 32-bit version of expdp.exe in a 64-bit version of Oracle 10g, to replace the 64-bit version of expdp.exe? Or is this not possible at all?
Quinton
I find it hard to prove that this is a problem with telnet; when you telnet, you are on the server, and if the server is 64-bit you should be able to run the command.
If you want to install the 32-bit version of expdp, you can do a separate Oracle client installation on the server with a different ORACLE_HOME, patched to the same level as the server.
Still, I question whether the root cause is telnet. Did you try another method of connecting to the server? Telnet is not the safest connection per se; most production servers will have it disabled anyway.
ZFS 7320c and T4-2 server mount points for NFS
Hi All,
We have an Oracle ZFS 7320c and T4-2 servers. Apart from the on-board 1 GB Ethernet, we also have a 10 Gbe connectivity between the servers and the storage
configured as 10.0.0.0/16 network.
We have created a few NFS shares but are unable to mount them automatically after a reboot inside Oracle VM Server for SPARC guest domains.
The following document helped us in configuration:
Configure and Mount NFS shares from SUN ZFS Storage 7320 for SPARC SuperCluster [ID 1503867.1]
However, we can manually mount the file systems after reaching run level 3.
The NFS mount points are /orabackup and /stage and the entries in /etc/vfstab are as follows:
10.0.0.50:/export/orabackup - /orabackup nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
10.0.0.50:/export/stage - /stage nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
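The vfstab entries themselves look right: field 6 ("mount at boot") is yes. On Solaris, NFS entries in vfstab are mounted by the svc:/network/nfs/client SMF service rather than by the boot-time mountall pass alone, so that service must be enabled. A quick check of the entries (a sketch run against the two lines above):

```shell
# List NFS entries in a vfstab and whether their 'mount at boot' column is
# set. vfstab field order: device, fsck device, mount point, fstype,
# fsck pass, mount at boot, options.
check_vfstab() {
    awk '$4 == "nfs" { print $3, ($6 == "yes" ? "mounts at boot" : "NOT at boot") }' "$1"
}

cat > /tmp/vfstab.demo <<'EOF'
10.0.0.50:/export/orabackup - /orabackup nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
10.0.0.50:/export/stage - /stage nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
EOF

check_vfstab /tmp/vfstab.demo
# prints: /orabackup mounts at boot
#         /stage mounts at boot
```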
On the ZFS storage, the following are the properties for shares:
zfsctrl1:shares> select nfs_prj1
zfsctrl1:shares nfs_prj1> show
Properties:
aclinherit = restricted
aclmode = discard
atime = true
checksum = fletcher4
compression = off
dedup = false
compressratio = 100
copies = 1
creation = Sun Jan 27 2013 11:17:17 GMT+0000 (UTC)
logbias = latency
mountpoint = /export
quota = 0
readonly = false
recordsize = 128K
reservation = 0
rstchown = true
secondarycache = all
nbmand = false
sharesmb = off
sharenfs = on
snapdir = hidden
vscan = false
sharedav = off
shareftp = off
sharesftp = off
sharetftp =
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1
default_group = other
default_permissions = 700
default_sparse = false
default_user = nobody
default_volblocksize = 8K
default_volsize = 0
exported = true
nodestroy = false
space_data = 43.2G
space_unused_res = 0
space_unused_res_shares = 0
space_snapshots = 0
space_available = 3.97T
space_total = 43.2G
origin =
Shares:
Filesystems:
NAME SIZE MOUNTPOINT
orabackup 31K /export/orabackup
stage 43.2G /export/stage
Children:
groups => View per-group usage and manage group quotas
replication => Manage remote replication
snapshots => Manage snapshots
users => View per-user usage and manage user quotas
zfsctrl1:shares nfs_prj1> select orabackup
zfsctrl1:shares nfs_prj1/orabackup> show
Properties:
aclinherit = restricted (inherited)
aclmode = discard (inherited)
atime = true (inherited)
casesensitivity = mixed
checksum = fletcher4 (inherited)
compression = off (inherited)
dedup = false (inherited)
compressratio = 100
copies = 1 (inherited)
creation = Sun Jan 27 2013 11:17:46 GMT+0000 (UTC)
logbias = latency (inherited)
mountpoint = /export/orabackup (inherited)
normalization = none
quota = 200G
quota_snap = true
readonly = false (inherited)
recordsize = 128K (inherited)
reservation = 0
reservation_snap = true
rstchown = true (inherited)
secondarycache = all (inherited)
shadow = none
nbmand = false (inherited)
sharesmb = off (inherited)
sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.13/16:@10.0.0.200/16:@10.0.0.203/16
snapdir = hidden (inherited)
utf8only = true
vscan = false (inherited)
sharedav = off (inherited)
shareftp = off (inherited)
sharesftp = off (inherited)
sharetftp = (inherited)
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1/orabackup
exported = true (inherited)
nodestroy = false
space_data = 31K
space_unused_res = 0
space_snapshots = 0
space_available = 200G
space_total = 31K
root_group = other
root_permissions = 700
root_user = nobody
origin =
zfsctrl1:shares nfs_prj1> select stage
zfsctrl1:shares nfs_prj1/stage> show
Properties:
aclinherit = restricted (inherited)
aclmode = discard (inherited)
atime = true (inherited)
casesensitivity = mixed
checksum = fletcher4 (inherited)
compression = off (inherited)
dedup = false (inherited)
compressratio = 100
copies = 1 (inherited)
creation = Tue Feb 12 2013 11:28:27 GMT+0000 (UTC)
logbias = latency (inherited)
mountpoint = /export/stage (inherited)
normalization = none
quota = 100G
quota_snap = true
readonly = false (inherited)
recordsize = 128K (inherited)
reservation = 0
reservation_snap = true
rstchown = true (inherited)
secondarycache = all (inherited)
shadow = none
nbmand = false (inherited)
sharesmb = off (inherited)
sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.203/16:@10.0.0.200/16
snapdir = hidden (inherited)
utf8only = true
vscan = false (inherited)
sharedav = off (inherited)
shareftp = off (inherited)
sharesftp = off (inherited)
sharetftp = (inherited)
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1/stage
exported = true (inherited)
nodestroy = false
space_data = 43.2G
space_unused_res = 0
space_snapshots = 0
space_available = 56.8G
space_total = 43.2G
root_group = root
root_permissions = 755
root_user = root
origin =
Can anybody please help?
Regards.
Try this:
svcadm enable nfs/client
cheers
bjoern -
Nfs mount created with Netinfo not shown by Directory Utility in Leopard
On TIger I used to mount dynamically a few directories using NFS.
To do so, I used NetInfo.
I have upgraded to Leopard and the mounted directories
are still working, although Netinfo is not present anymore.
I was expecting to see these mount points and
modify them using Directory Utility, which has substituted Netinfo.
But they are not even shown in the Mount panel of Directory Utility.
Is there a way to see and modify NFS mount points previously created by NetInfo with the new Directory Utility?
Thank you very much! I was able to recreate the static automount that I had previously. I just had to create the "mounts" directory in /var/db/dslocal/nodes/Default/ and then save the following text as a .plist file within "mounts".
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>dir</key>
<array>
<string>/Network/Backups</string>
</array>
<key>generateduid</key>
<array>
<string>0000000-0000-0000-0000-000000000000</string>
</array>
<key>name</key>
<array>
<string>server:/Backups</string>
</array>
<key>opts</key>
<array>
<string>url==afp://;AUTH=NO%20USER%[email protected]/Backups</string>
</array>
<key>vfstype</key>
<array>
<string>url</string>
</array>
</dict>
</plist>
I don't think the specific name of the .plist file matters, or the value for "generateduid". I'm listing all this info assuming that someone out there might care.
I assume this would work for SMB shares also... if SMB worked, which it hasn't on my system since I installed Leopard. -
Where does Disk Utility define NFS mounts?
Hi, I used to use Disk Utility to define a NFS mount point for my Drobo, but then I sold the Drobo and deleted the mount point from Disk Utility. However, my system.log file shows that rpc.statd is trying to find the Drobo once every hour still. I double checked and there is nothing listed in auto_master, so the only place I can think of that Disk Utility defines the mounts within is Directory Services but I can't find where. Does anyone know where Disk Utility defines NFS mounts and how I can clear it out?
-
OFA for 11g.. Are multiple mount points (ie 2 or more) still recommended
Folks
Client has a Clariion SAN (the latest generation). They are pushing strongly to provide 2 mount points for all Oracle databases as a space-saving initiative. Is performance affected with this configuration, given the improvements in spindle speeds and controllers?
Is going to 4 mount points (2 for Oracle and 2 for volume data and indexing) that much of a difference?
Peter Johnson
Oracle ACS -
Mount point getting decreased frequently
Hi,
The free space on the database mount point in Oracle E-Business Suite drops every two days.
I cross-checked the trace files; they are generated normally, and I also store the archive log files on a separate mount point.
Kindly, can anyone guide me on this issue?
Thanks & Regards
Kesav
Kesavan G wrote:
Hi,
The mount point database in oracle E-Buiseness suite get reduced two days once.
I cross checked with trace file, it generated normally & also i stored the archive log files in separate mount point.
Kindly any one guide me for that issue.
Thanks & Regards
Kesav
Hi,
What is the operating system, EBS version.
Do you have any scheduled scripts in crontab/scheduled tasks to delete trace and old files?
Do you store only database-related files, or application-related files too? If you store application-related files on the same mount point, check whether you have scheduled the "Purge Concurrent Request and/or Manager Data" program.
Any RMAN backups in this mount point?
Thanks -
Free Space on Oracle Mount Points
Hi,
In my production environment, which is on ECC 5.0 on HP-UX with an Oracle DB, the mount point for the Oracle DB files has reached 90%+ usage.
What is the amount of free space that should be available on these mount points at any point of time. Will there be a performance issue in the above case.
Inputs on this will be appreciated.
Regards
Alfred D'Souza
Hello Alfred,
> What is the amount of free space that should be available on these mount points at any point of time. Will there be a performance issue in the above case.
Personally, I try to keep 4 GB of free space for each sapdata (for big systems); for smaller systems I keep only 1 GB.
Performance issues depend on your storage. If you have SAN storage with caches, performance will not be affected; but if you have local disks, it depends on the data striping and the I/O-intensive actions.
But in general I'd say performance is not affected by the free space; it is affected only by the I/O actions and how the data is distributed over all the sapdatas.
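The threshold advice above is easy to automate; a minimal sketch (assumed 90% threshold and a placeholder mount-point list; `df -P` gives stable single-line output per filesystem):

```shell
# Warn when a mount point's usage crosses a threshold.
usage_pct() {
    df -P "$1" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }'
}

THRESHOLD=90
for mp in /; do              # replace with the sapdata mount points
    pct=$(usage_pct "$mp")
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARN: $mp at ${pct}% (>= ${THRESHOLD}%)"
    else
        echo "OK: $mp at ${pct}%"
    fi
done
```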
Regards
Stefan -
Can I read solaris mount point size through oracle?
I believe it can be done. You can create a package that performs OS function calls to achieve this.
hare krishna
Alok -
Accessing NFS mounted share in Finder no longer works in 10.5.3+
I have setup an automounted NFS share previously with Leopard against a RHEL 5 server at the office. I had to go through a few loops to punch a hole through the appfirewall to get the share accessible in the Finder.
A few months later when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4 the NFS mount no longer works. I have investigated it today and I can't get it to run even with the appfirewall disabled.
I've been doing some troubleshooting, and the interaction between statd, lockd and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
This may be a bit obscure, and I'm not exactly an expert of NFS, so I wonder if someone else stumbled across this, and can point me in the right direction?
Johan
Sorry for my late response, but I have finally gotten around to some trial and error. I can mount the share using mount_nfs (but I need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
$ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
mount_nfs: /Users/yyyy/xxxx-home: Permission denied
$ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
... and firewall wide open.
I guess that the Finder somehow triggers file locking over NFS.