RPC Problem while mounting NFS File System

Hi,
We have two servers. ca1 and bench
We recently moved bench across the firewall into the DMZ, so ca1 and bench are now on opposite sides of the firewall. We have made sure that ports 111, 2049 and 4045, along with the default telnet and ftp ports, are opened on the firewall to provide access.
We can telnet and ftp from bench to ca1.
When we try to NFS-mount a ca1 filesystem on bench using the following command:
bench:>mount ca1:/u07/export /ca1/u07/export
we get the following error:
nfs mount: ca1: : RPC: Timed out
This has been happening since we moved the bench server across the firewall.
What could be the problem?
Cheers,
MS

Hi,
Thanks for the help.
It seems that the NFS services are already started on the server.
Here are the results :
# ps -ef | grep nfsd
root 366 1 0 Jul 12 ? 0:00 /usr/lib/nfs/nfsd -a 16
root 25791 25789 0 16:16:45 pts/5 0:00 grep nfsd
# ps -ef | grep mountd
root 218 1 0 Jul 12 ? 0:05 /usr/lib/autofs/automountd
root 364 1 0 Jul 12 ? 0:00 /usr/lib/nfs/mountd
root 25793 25789 0 16:16:53 pts/5 0:00 grep mountd
# ps -ef | grep statd
root 201 1 0 Jul 12 ? 0:00 /usr/lib/nfs/statd
root 25797 25789 0 16:16:59 pts/5 0:00 grep statd
# ps -ef | grep lockd
root 203 1 0 Jul 12 ? 0:00 /usr/lib/nfs/lockd
root 25799 25789 0 16:17:07 pts/5 0:00 grep lockd
Please let me know what else can be done. This problem started after we moved these machines to opposite sides of the firewall.
Cheers,
MS
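Since telnet and ftp get through, the firewall itself is passing the fixed ports; but on Solaris the mountd and statd daemons shown above typically register on dynamic ports with the portmapper, which a rule set limited to 111/2049/4045 does not cover. A minimal sketch of the check (the rpcinfo output below is illustrative sample data, not from ca1; run `rpcinfo -p ca1` from bench to get the real list):

```shell
# Sample of what `rpcinfo -p ca1` might print (illustrative values only;
# mountd and status often land on dynamic ports above 32768)
cat <<'EOF' > /tmp/rpcinfo.out
   program vers proto   port  service
    100000    4   tcp    111  rpcbind
    100003    3   udp   2049  nfs
    100005    1   udp  32781  mountd
    100021    1   udp   4045  nlockmgr
    100024    1   udp  32777  status
EOF

# Every port/service pair listed here must be reachable through the
# firewall for the mount to succeed, not just 111, 2049 and 4045:
awk 'NR > 1 { print $4, $5 }' /tmp/rpcinfo.out | sort -un
```

If mountd shows up on a dynamic port like this, the firewall rules for the three fixed ports cannot be sufficient; either open the server's RPC dynamic range or consult your platform documentation for a way to pin mountd/statd to fixed ports.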

Similar Messages

  • ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

    Hi,
I am getting the following error while taking an RMAN backup.
    ORA-01580: error creating control backup file /backup/snapcf_TST.f
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    DB Version:10.2.0.4.0
While backing up datafiles I do not get any errors; I am using the same mount points for both data and control files.

    [oracle@localhost dbs]$ oerr ora 27054
    27054, 00000, "NFS file system where the file is created or resides is not mounted with correct options"
    // *Cause:  The file was on an NFS partition and either reading the mount tab
    //          file failed or the partition was not mounted with the correct
    //          mount option.
    // *Action: Make sure mount tab file has read access for Oracle user and
    //          the NFS partition where the file resides is mounted correctly.
    //          For the list of mount options to use refer to your platform
    //          specific documentation.

  • A problem while implementing a file-to-file scenario

    hi all:
    There is a problem while implementing a file-to-file scenario: the file could not be sent or received.
    When I try to check the sender file adapter status via
    Runtime Workbench -> Component Monitoring -> Communication Channel Monitoring,
    I found that the Adapter Engine has an error status with a red light. Is that why the file couldn't be sent?
    Could you please tell me how to make the Adapter Engine available?
    Thank you very much!

    Hi Sony
    Error getting JPR configuration from SLD. Exception: No entity of class SAP_BusinessSystem for EC1.SystemHome.cnbjw3501 found
    No access to get JPR configuration
    Along with what the experts suggested:
    What type of business system is this? Standalone Java?
    JPR can have a problem if the business system is not of type ABAP or Java; if this system does not have an SAP technical system in the landscape, create one of type Java.
    Thanks
    Gaurav

  • Problem while reading XML file from application server (AL11)

    Hi Experts
    I am facing a problem while reading an XML file from the application server using OPEN DATASET.
    OPEN DATASET v_dsn IN BINARY MODE FOR INPUT.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
    DO.
      READ DATASET v_dsn INTO v_rec.
      IF sy-subrc <> 0.
        EXIT. " end of file reached
      ENDIF.
      " process/append v_rec here
    ENDDO.
    CLOSE DATASET v_dsn.
    The XML file contains the details from an IDoc. The expected output is an XML file giving all the segment details in a single page, sent to the user in Lotus Notes as an attachment. But in the present output, after opening the attachment I get a single XML file which contains most of the segments; at the bottom it gives the error below.
    - <E1EDT13 SEGMENT="1">
      <QUALF>001</QUALF>
      <NTANF>20110803</NTANF>
      <NTANZ>080000</NTANZ>
      <NTEND>20110803<The XML page cannot be displayed
    Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
    Invalid at the top level of the document. Error processing resource 'file:///C:/TEMP/notesD52F4D/SHPORD_0080005842.xml'.
    /SPAN></NTEND>
      <NTENZ>000000</NTENZ>
    For all the XML files it gives this error at the bottom, but if we open the source code and save it again without changing anything, the file renders without any error.
    Could anyone help to solve this issue?

    Hi Oliver
    Thanks for your reply.
    See the latest output:
    - <E1EDT13 SEGMENT="1">
      <QUALF>003</QUALF>
      <NTANF>20110803</NTANF>
      <NTANZ>080000</NTANZ>
      <NTEND>20110803</NTEND>
      <NTENZ>000000</NTENZ>
      <ISDD>00000000</ISDD>
      <ISDZ>000000</ISDZ>
      <IEDD>00000000</IEDD>
      <IEDZ>000000</IEDZ>
      </E1EDT13>
    - <E1EDT13 SEGMENT="1">
      <QUALF>001</QUALF>
      <NTANF>20110803</NTANF>
      <NTANZ>080000</NTANZ>
      <NTEND>20110803<The XML page cannot be displayed
    Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
    Invalid at the top level of the document. Error processing resource 'file:///C:/TEMP/notesD52F4D/~1922011.xml'.
    /SPAN></NTEND>
      <NTENZ>000000</NTENZ>
    E1EDT13 with QUALF 003 and E1EDT13 with QUALF 001 have almost the same segment data, but the segment with QUALF 003 populates all its data properly, while the one with QUALF 001 is cut off partway through.

  • ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

    Linux Red Hat 4
    10.2.0.4
    We have had this issue since someone unplugged the SAN on Monday and the servers were rebooted. We are trying to do a duplicate to the reporting server from production. The error returned once the backup sets are being copied is:
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    I have edited fstab in the way that Doc 781349.1 suggests. I have unmounted the mount point and remounted. The mount is set up this way:
    /backup/PROD on /backup/PROD type nfs (rw,bg,hard,rsize=32768,wsize=32768,nfsvers=3,nointr,timeo=600,proto=tcp,addr=172.16.2.92)
    I have checked fstab and it has write permissions for the oracle user with which I run the duplicate command.
    I am able to browse, and create files on the /backup/PROD area.
    Edited by: user11981168 on May 3, 2010 1:02 PM

    It is my understanding that the NFS mounting options necessary to support Oracle vary based on the hardware/OS in use.
    I have the following recorded for a NetApp appliance:
    hard,bg,rw,vers=3,proto=tcp,intr,rsize=32768,wsize=32768,forcedirectio
    and this for a Linux client:
    rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0
    I suggest you review your platform-specific documentation and any record you have from the original setup.
    You might just want to open an SR where you provide the specific hardware info and see if support can help, after making another search for documents of interest.
    HTH -- Mark D Powell --
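    For reference, the Linux-client option string quoted above would sit in /etc/fstab roughly like this (the server name is a placeholder, not from the thread; confirm the authoritative option list against your platform documentation):

```
# /etc/fstab - hypothetical NFS server name; options are the Linux-client
# set quoted in the reply above
nas01:/backup/PROD  /backup/PROD  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0  0 0
```

    After editing, unmount and remount the path, then check the active options with `mount | grep /backup/PROD` before retrying the duplicate; ORA-27054 is raised from what Oracle reads in the mount table, not from fstab itself.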

  • Problem while downloading the file from Application Server

    Dear Experts,
    I am facing a problem while downloading a file from the application server.
    We built an automatic function that creates an IDoc when an invoice is saved, and this IDoc is written to the application server.
    I run transaction AL11, select the record, and from the menu --> List, download it in TXT format.
    But for some segments the record is long, so the last 3 to 4 field values do not appear in the file. I am also unable to view those values in the file before downloading, but I can see them in the IDoc.
    Please help me to solve this issue.
    Thanks & Regards,
    Srini

    but our users will use transaction AL11 and they will download from there
    Educate the user. On a serious note, tell them this is not how data from the application server should be downloaded. You can ask them to talk to the Basis team to provide access to the application-server folder where the file is stored.
    I can set the variant and run this in background, but the file name always changes, since we use a timestamp in the file name.
    You can't automate this process by scheduling it in background mode, because in background mode you can't download the file to the presentation server.
    Hope I'm clear.
    BR,
    Suhas

  • Hello, I am Rajesh; I am having a problem while converting other files into PDF format. Please help me

    I am having a problem while converting other files into Acrobat (PDF) format. Please help me.

    Hello Rajesh,
    I'm sorry to hear you're having trouble. Are you using Acrobat.com to convert your files to PDF? Please let me know where you are having trouble and I will do my best to help you convert your files. For your reference, here is a list of filetypes that can be converted to PDF online with Acrobat.com:
    http://kb2.adobe.com/cps/524/cpsid_52456.html#Create%20PDF
    Best,
    Rebecca

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
    I have a file which is around 72 MB, with more than 100,000 records. XI is able to pick up the file if it has 30,000 records. If the file has more than 30,000 records, XI picks it up (and deletes it once picked) but I don't see any information under SXMB_MONI: no error, no success, no processing. It simply picks up and ignores the file. If I process these records separately, it works.
    How do I process this file? Why is it simply ignoring the file? How can I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
    XI picks up the file subject to a maximum processing limit as well as the memory and resource consumption of the XI server.
    Processing a 72 MB file is on the higher side. It increases the memory utilization of the XI server, which may fail to process it at the peak.
    You should divide the file into small chunks and allow multiple instances to run. It will be faster and will not create any problems.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File limit -- please refer to SAP Note 821267, chapter 14
    Thanks
    swarup
    Edited by: Swarup Sawant on Jun 26, 2008 7:02 AM
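    The chunking Swarup describes can also be done outside XI, before the file adapter polls the directory. A minimal sketch using GNU split (the file name is made up and the 30,000-record threshold is taken from this thread; the real limit depends on your server sizing):

```shell
# Stand-in for the 72 MB source file: 100,000 one-line records
seq 1 100000 > /tmp/big_input.txt

# Cut it into chunks of at most 30,000 records each, so every chunk
# stays within the size XI was observed to handle
split -l 30000 -d /tmp/big_input.txt /tmp/big_input.part_

# Three chunks of 30,000 lines plus one of 10,000 remain
ls /tmp/big_input.part_*
```

    The chunks can then be dropped into the adapter's source directory one at a time (or in parallel, if multiple channel instances are configured).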

  • Windows 7 problem with EFS (Encrypting file system)

    Hello colleagues!
    I think this topic is related more to security.
    The problem is a bit unusual and Google doesn't give me clear answers, but maybe someone has come across
    a similar problem...
    In general, I suspect that my problem is EFS (Encrypting File System), i.e.
    the service that automatically encrypts files using a digital certificate.
    A little background:
    - On the work PC it was necessary to reinstall the system. According to corporate rules, all our content is encrypted using EFS (encrypted files are highlighted in green, as you know).
    I copied all the data to a portable drive and also copied the certificates (certmgr.msc).
    The system was reinstalled. The only change: it was x32 and became x64.
    - It was necessary to free up some space on the hard drive (the file system on it is NTFS), so I temporarily copied all the files to my home PC. My work certificate was installed on it too, because I sometimes work from home.
    When the work PC was repaired, I moved all the files back the same way (i.e. onto the portable HDD, then to the work PC).
    All files were working, but when I needed my MS Word documents, it became clear that something was wrong.
    When I opened a document, it gave me a window with weird symbols and prompted me to select the encoding... of course, no encoding fit.
    I started going through all the documents; it appears that some of them were working and some were not. All the old documents were working, i.e. the ones created before deploying EFS (newly created or copied files are encrypted immediately).
    So now I am sure that the documents were somehow re-encrypted. At least on the portable hard drive they do not look encrypted (not highlighted in green), but they still do not open.
    All the documents that had been encrypted have completely stopped opening (all file types: not just MS Word, but also PDFs, presentations, charts, and even pictures).
    Has anyone faced something similar?
    How can I get it all back? I have no possibility of restoring those documents from other sources.

    Try copying those corrupted files, make sure you are using the same certificate, then re-encrypt the copies and try to decrypt them.
    If possible, use the same account you used for encryption.

  • Starz PLAY "Windows Media Player encountered a problem while playing this file" error!?!

    My laptop running Windows 7 is having the error, " Windows Media Player encountered a problem while playing the file." when trying to play downloaded movies whereas the "live Starz TV" works fine. My netbook is working properly though, anyone seen this before?

    If you're still having the issue, make sure you are running all the latest video/audio codecs for Windows Media Player and Windows 7, and also take a moment to check this article from Microsoft:
    http://www.microsoft.com/windows/windowsmedia/player/10/errors.aspx
    If you find an answer, let us know what fixed it!

  • Windows Media Player encountered a problem while playing the file...

    I am running WMP 11.0.5721.5280. I am running XP SP3 with every update that has come out. I have 4 GB RAM and my processor is a Pentium 4, 2.6GHz.
    I have a large collection of music files ripped from my CD collection. Almost every time I want to play a music file I get a message: "Windows Media Player encountered a problem while playing the file. For additional assistance, click Web Help." When this
    error box is displayed, a red 'X' is placed next to the file in WMP.
    When I click the "Web Help" button, I am taken to an MS website (http://windows.microsoft.com/en-us/windows7/c00d11b1) which tells me that I should go to the sound-device website and make certain I have the latest driver. None of the advice
    given applies to me. My sound device works fine! I can play CDs, DVDs (movies), and MP3 files. Even when I use RealPlayer, the same thing happens, but not as frequently.
    Can anyone offer a solution to this problem?

    They'll help you over here:
    Windows XP forums on Microsoft Answers
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • My Media Player is not working, no sound; it keeps saying it encountered a problem while playing the file

    HP Pavilion g series (g6-1b50us Notebook PC), windows 7, 64-bit.
    My Media Player keeps saying it encountered a problem while playing a file, and little red x's appear beside song titles. They played before. Please help.

    Hi,
    This may happen if the files were moved to a different folder or a different location.
    Try this:
    http://answers.microsoft.com/en-us/windows/forum/windows_7-sound/red-x-symbol-next-to-each-song-whil...
    Note:
    If you have HP Support Assistant installed on the computer (the blue question mark), open it ==> complete all pending updates & tune-ups ==> restart and check. It may solve your problem.
    Although I am an HP employee, I am speaking for myself and not for HP.
    **Click on “Kudos” Star if you think this reply helped** Or Mark it as "Solved" if issue got fixed.

  • Problems mounting global file system

    Hello all.
    I have setup a Cluster using two Ultra10 machines called medusa & ultra10 (not very original I know) using Sun Cluster 3.1 with a Cluster patch bundle installed.
    When one of the Ultra10 machines boots it complains about being unable to mount the global file system, and for some reason tries to mount the node@1 file system when it is actually node 2.
    On booting, I receive this message on the machine ultra10:
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance): resuming boot
    If I use control D to continue then the following happens:
    ultra10:
    ultra10:/ $ cat /etc/cluster/nodeid
    2
    ultra10:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
    ultra10:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    medusa:
    medusa:/ $ cat /etc/cluster/nodeid
    1
    medusa:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@1 ufs 2 no global
    medusa:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    Does anyone have any idea why the machine called ultra10 of node ID 2 is trying to mount the node ID 1 global file system when the correct entry is within the /etc/vfstab file?
    Many thanks for any assistance.

    Hmm, so for argument's sake, if I tried to mount both /dev/md/dsk/d50 devices to the same point in the filesystem for both nodes, it would mount OK?
    I assumed the problem was that the devices being used have the same name, which confuses the Solaris OS when both nodes try to mount them. Maybe some examples will help...
    My cluster consists of two nodes, Helene and Dione. There is fibre-attached storage used for quorum, and website content. The output from scdidadm -L is:
    1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
    3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
    5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
    This allows me to have identical entries in both host's /etc/vfstab files. There are also shared devices under /dev/global that can be accessed by both nodes. But the RAID devices are not referenced by anything from these directories (i.e. there's no /dev/global/md/dsk/50). I just thought it would make sense to have the option of global meta devices, but maybe that's just me!
    Thanks again Tim! :D
    Pete
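    The mismatch being chased above can be checked mechanically: the node's cluster ID should agree with the node@N entry it actually mounts. A sketch of that comparison, using sample files in /tmp in place of the real /etc/cluster/nodeid and /etc/vfstab (values copied from the ultra10 output earlier in the thread):

```shell
# Stand-ins for the real files on the node "ultra10"
echo 2 > /tmp/nodeid
cat <<'EOF' > /tmp/vfstab
/dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
EOF

nodeid=$(cat /tmp/nodeid)
# The global-devices entry in vfstab should name node@<this node's ID>.
# If vfstab is correct (as here) but df still shows node@1 mounted, the
# problem is at mount time, not in the configuration file.
if grep -q "node@${nodeid}" /tmp/vfstab; then
  echo "vfstab agrees with node ID ${nodeid}"
else
  echo "vfstab does not match node ID ${nodeid}" >&2
fi
```

    Running the same comparison against the real files on each node would confirm whether the wrong mount comes from vfstab or from the cluster framework.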

  • Horrible problems while mounting BTRFS RAID1 array (on a single HDD)

    Since I made my 2TB HDD (a Samsung M9T, which I bought precisely to make it a btrfs RAID1 array for data security) into a 2 x 1TB btrfs RAID1 array, I keep having horrible mounting problems.
    There are two subvolumes on that array:
    @data
    @sys_backup
    fstab entries for them look like so:
    UUID=a0555768-a37f-4bd8-95e5-66ebd6a09c75 /mnt/Disk_D btrfs device=/dev/mapper/2TB-1TB_RAID1a,device=/dev/mapper/2TB-1TB_RAID1b,rw,noatime,compress=lzo,autodefrag,space_cache,nofail,commit=180,subvol=@data 0 0
    UUID=a0555768-a37f-4bd8-95e5-66ebd6a09c75 /mnt/arch_backup btrfs device=/dev/mapper/2TB-1TB_RAID1a,device=/dev/mapper/2TB-1TB_RAID1b,rw,noatime,compress=zlib,autodefrag,space_cache,nofail,x-systemd.automount,commit=250,subvol=@sys_backup 0 0
    Now, I rarely access @sys_backup, but I have never found it unmounted. @data, on the other hand, hardly ever mounts.
    Most of the time after boot, when I try to access the /mnt/Disk_D mountpoint, it either
    A - turns out empty, and running the "mount" command confirms there's nothing mounted there. Trying to mount it manually has absolutely no effect, not even an error message.
    or
    B - every program used to access that mountpoint (terminal, file manager, conky displaying free space or anything else) freezes and I have to kill it. Even typing "/mnt/D" and pressing TAB for autocompletion crashes the terminal.
    I then need to reboot anywhere from 1 to 15 more times to get it to mount properly.
    I have had this problem since I started using RAID1 three months ago, on kernels 3.17, 3.18 and 3.19.
    Other things that made no difference include:
    ➤ adding/removing the "device=" option
    ➤ adding/removing the x-systemd.automount option
    ➤ changing UUIDs to /dev/mapper/... entries
    ➤ completing a scrub on that partition
    ➤ running a dry-run btrfs check (no errors indicated)
    This is horrible. Is there any solution to it, or is this just the current state of btrfs? And if it is the latter, what could be implemented other than btrfs RAID1?
    Last edited by Lockheed (2015-02-01 07:27:35)

    firecat53 wrote:There's nothing in that linked article about doing a RAID 1 on a single disk.
    The whole discussion I linked is about doing a RAID 1 on a single disk. And the article linked from that discussion is only to prove the point of btrfs self-healing.
    firecat53 wrote:If you think about it, you are going to absolutely kill your read/write times by having the drive head trying to keep up a RAID 1 mirror on two partitions of the same drive.
    The read performance is unchanged. The write performance is 50% of the standard partition, but since it is a data drive, that is not a serious issue. Certainly less serious than data security.
    firecat53 wrote:My guess is that's why you are having issues.
    Maybe so, but I don't see a reason to assume it, and even if it is so, I don't think those issues should be there.
    firecat53 wrote:You're going to be much better off buying two 1TB drives and using those in RAID 1!!  Or just a single 2TB btrfs drive....with a UPS and another drive as a backup drive.
    I can't do that because it is a laptop SSD with system + HDD with data.
    firecat53 wrote:I've been using btrfs on my laptop SSD and on my server for almost a year now with only one problem (corruption from a power loss to the server before I got the UPS).
    I have been using btrfs on my laptop SSD + HDD and on my RPi server for over a year, and btrfs check was detecting minor issues every few months.
    I now run SSD (non-RAID) + HDD (2 partitions in RAID1) on the laptop, and SD (2 partitions in RAID1) + HDD (2 partitions in RAID1) on the RPi (which, by the way, has no such problems mounting either the SD or the HDD in RAID1), and I have not yet had any error from btrfs check.

  • Can't mount OCFS2 file system after Public IP modify

    Guys,
    We have an environment with 2 nodes running a RAC database, version 10.2.0.1. We needed to modify the Public IP and VIP of the Oracle CRS, so we did the following steps:
    - Alter the VIP
    srvctl modify nodeapps -n node1 -A 192.168.1.101/255.255.255.0/eth0
    srvctl modify nodeapps -n node2 -A 192.168.1.102/255.255.255.0/eth0
    - Alter the Public IP
    oifcfg delif -global eth0
    oifcfg setif -global eth0/192.168.1.0:public
    - Alter the IPs of the network interfaces
    - Update /etc/hosts
    When we start the Oracle CRS, the components start OK. But when we reboot the second node, the OCFS2 file system doesn't mount. The following errors occur:
    SCSI device sde: 4194304 512-byte hdwr sectors (2147 MB)
    sde: cache data unavailable
    sde: assuming drive cache: write through
    sde: sde1
    parport0: PC-style at 0x378 [PCSPP,TRISTATE]
    lp0: using parport0 (polling).
    lp0: console ready
    mtrr: your processor doesn't support write-combining
    (2746,0):o2net_start_connect:1389 ERROR: bind failed with -99 at address 192.168.2.132
    (2746,0):o2net_start_connect:1420 connect attempt to node rac1 (num 0) at 192.168.2.131:7777 failed with errno -99
    (2746,0):o2net_connect_expired:1444 ERROR: no connection established with node 0 after 10 seconds, giving up and returning errors.
    (5457,0):dlm_request_join:786 ERROR: status = -107
    (5457,0):dlm_try_to_join_domain:934 ERROR: status = -107
    (5457,0):dlm_join_domain:1186 ERROR: status = -107
    (5457,0):dlm_register_domain:1379 ERROR: status = -107
    (5457,0):ocfs2_dlm_init:2007 ERROR: status = -107
    (5457,0):ocfs2_mount_volume:1062 ERROR: status = -107
    ocfs2: Unmounting device (8,17) on (node 1)
    When we run the command to force the mount, the following errors occur:
    # mount -a
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /ocfs2. Check 'dmesg' for more information on this error.
    What is happening is that OCFS2 is trying to connect to the old public IP. My question is: how do I change the public IP in OCFS2?
    regards,
    Eduardo P Niel
    OCP Oracle

    Hi, that is correct. You may want to check the /etc/ocfs2/cluster.conf file; maybe the configuration is wrong. You can also check the /etc/hosts file to verify that the host names are defined correctly.
    Luck
    Have a good day.
    Regards,
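    For reference, the node addresses OCFS2 dials are hard-coded in /etc/ocfs2/cluster.conf; after an IP change the file has to be edited identically on every node while the o2cb cluster is offline, then o2cb restarted. A sketch of what the updated file might look like (node names, cluster name and the exact addresses are assumptions based on the thread, not confirmed values):

```
# /etc/ocfs2/cluster.conf - hypothetical layout for the two RAC nodes
# after the move to the 192.168.1.0 public network
cluster:
        node_count = 2
        name = ocfs2cluster

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = rac1
        cluster = ocfs2cluster

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = rac2
        cluster = ocfs2cluster
```

    With the file updated on both nodes and o2cb restarted, the dmesg errors about connecting to the old 192.168.2.x addresses should stop and `mount -a` should succeed.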
