Logging NFS file access?

I am trying to find a relatively automated way to collect statistical information about usage patterns of my NFS exports.
For example, let's say I am exporting
/exports/group1
which includes
/exports/group1/alpha
/exports/group1/bravo
/exports/group1/charlie
/exports/group1/delta
/exports/group1/echo
and also
/exports/group2
which also includes a subdirectory structure similar to the above.
I am pretty sure that (for example), files in /exports/group1/bravo and /exports/group2/delta are being accessed almost constantly, while things in /exports/group1/echo and /exports/group2/alpha might only be accessed once a month.
BUT, I would like for the server to collect this information and log it for me.  I would like to be able to assess usage patterns over time.  I'm reasonably sure this is doable, but I don't know where to start. 
Helpful hints appreciated.

The only thing I can suggest to preclude this from ever becoming an issue in the future is to store all of your critical data in an encrypted disk image or use FileVault (which might be overkill, since it encrypts the entire home directory), have a test admin account for service personnel, and disable autologin.

Similar Messages

  • NFS File Access problem

    Hello,
    I am having problems trying to "tail" an existing file.
    When the file is being written into, I can tail it without any problem.
    The problem arises when the file is already complete and I try to open it.
    I tried to make a small demo program but for some reason I am unable to get the demo program to give the same behaviour.
    Below is the class in which it all goes wrong.
    It basically opens the file using RandomAccessFile.
    When I try to retrieve the length of the file a bit further on, the ASCII file I am viewing has already been renamed to something like .nfs01231353434.
    But all gets displayed ok.
    When I then close the text pane in which this tail class is logging, the file itself is deleted.
    As this has something to do with NFS here is the setup :
    The java jar file is located on a remote solaris disk, so is the ASCII file I am trying to view.
    The local machine where I am running my application is Red Hat Linux 3.2.3-52.
    Apologies if this information is kinda vague, but as I am unable to supply a demo program, I don't know how else to explain my problem.
    The class that does the "tailing"
    package com.alcatel.tamtam.main;
    import java.io.*;
    import java.util.*;
    public class Usr_LogFileTrailer extends Thread
    {
       /** How frequently to check for file changes; defaults to 5 seconds */
       private long sampleInterval = 5000;

       /** Number of lines in a row we output, otherwise problems with large files */
       private int lineBuffer = 250;

       /** The log file to tail */
       private File logfile;

       /** Defines whether the log file tailer should include the entire contents
        *  of the existing log file, or tail from the end of the file when the
        *  tailer starts */
       private boolean startAtBeginning = false;

       /** Is the tailer currently tailing? */
       private boolean tailing = false;

       /** Is the thread suspended or not? */
       private boolean threadSuspended = true;

       /** File pointer where the thread last logged a line */
       private long filePointer = 0;

       /** Set of listeners */
       private Set listeners = new HashSet();

       /** Creates a new log file tailer that tails an existing file and checks
        *  the file for updates every 5000 ms */
       public Usr_LogFileTrailer( File file )
       {
          this.logfile = file;
       }

       /**
        * Creates a new log file tailer
        * @param file             The file to tail
        * @param sampleInterval   How often to check for updates to the log file (default = 5000 ms)
        * @param startAtBeginning Should the tailer process the entire file and then
        *                         continue tailing (true), or simply start tailing
        *                         from the end of the file (false)
        */
       public Usr_LogFileTrailer( File file, long sampleInterval, boolean startAtBeginning )
       {
          this.logfile = file;
          this.sampleInterval = sampleInterval;
          this.startAtBeginning = startAtBeginning;
          setPriority(Thread.MIN_PRIORITY);
       }

       public void addLogFileTailerListener( Usr_LogFileTrailerListener l )
       {
          this.listeners.add( l );
       }

       public void removeLogFileTailerListener( Usr_LogFileTrailerListener l )
       {
          this.listeners.remove( l );
       }

       /** Methods to trigger our event listeners */
       protected void fireNewLogFileLine( String line )
       {
          for( Iterator i = this.listeners.iterator(); i.hasNext(); )
          {
             Usr_LogFileTrailerListener l = ( Usr_LogFileTrailerListener )i.next();
             l.newLogFileLine( line );
          }
       }

       public void stopTailing()
       {
          this.tailing = false;
       }

       public void restart()
       {
          filePointer = 0;
       }

       public synchronized void setSuspended( boolean threadSuspended )
       {
          this.threadSuspended = threadSuspended;
          if ( ! threadSuspended ) notify();
       }

       public void run()
       {
          try
          {
             while ( ! logfile.exists() )
             {
                synchronized(this)
                {
                   while ( threadSuspended ) wait();
                }
                Thread.sleep(1000);
                File parentDir = logfile.getParentFile();
                if ( parentDir.exists() && parentDir.isDirectory() )
                {
                   File[] parentFiles = parentDir.listFiles();
                   for ( File parentFile : parentFiles )
                   {
                      if ( parentFile.getName().equals(logfile.getName()) ||
                           parentFile.getName().startsWith(logfile.getName() + "_child") )
                      {
                         logfile = parentFile;
                         break;
                      }
                   }
                }
             }
          }
          catch( InterruptedException iEx )
          {
             iEx.printStackTrace();
          }

          // Determine start point
          if( this.startAtBeginning )
          {
             filePointer = 0;
          }

          try
          {
             // Start tailing
             this.tailing = true;
             RandomAccessFile file = new RandomAccessFile( logfile, "r" );
             while( this.tailing )
             {
                synchronized(this)
                {
                   while ( threadSuspended ) wait();
                }
                try
                {
                   // Compare the length of the file to the file pointer
                   long fileLength = this.logfile.length();
                   if( fileLength < filePointer )
                   {
                      // Log file must have been rotated or deleted;
                      // reopen the file and reset the file pointer
                      file = new RandomAccessFile( logfile, "r" );
                      filePointer = 0;
                   }
                   if( fileLength > filePointer )
                   {
                      // There is data to read
                      file.seek( filePointer );
                      String line = file.readLine();
                      int lineCount = 0;
                      while( line != null && lineCount < lineBuffer )
                      {
                         this.fireNewLogFileLine( line );
                         line = file.readLine();
                         lineCount++;
                      }
                      filePointer = file.getFilePointer();
                      this.fireFlushLogging();
                   }
                   // Sleep for the specified interval
                   sleep( this.sampleInterval );
                }
                catch( Exception e )
                {
                   e.printStackTrace();
                }
             }
             // Close the file that we are tailing
             file.close();
          }
          catch( Exception e )
          {
             e.printStackTrace();
          }
       }
    }

    Hi,
    Below is my NFS mount statement on database server and application server. BTW, the directory DocOutput has permission of 777.
    fstab on the Database Server
    appserver:/database/oracle/app/prod/DocOutput /DocOutput nfs rw,hard,retry=20 0 0
    exports file in the Application Server
    /database/oracle/app/prod/DocOutput -anon=105,rw=dbserver,access=dbserver

  • NFS File adapter doesn't generate log file

    Hello!
    We have a problem with a File adapter: when the adapter picks up a file, it does not archive the file into the archive directory.
    We have:
    Processing mode: Archive.
    Archive directory: /XIcom/INT181_GECAT/LOG
    This directory was created correctly.
    Can someone help me? Thanks.
    Best regards.

    Hi,
    >>>NFS File adapter doesn't generate log file
    Do you mean the processed files are not archived only when the File adapter is set to the NFS file system?
    Did you try the same thing with the File adapter set to FTP?
    If you face the same issue with the File adapter in FTP mode as well, then there is some issue with access to the folders.
    Please check this...
    Regards,
    Nanda

  • How to view file access log (AFP) on Mac OS X Lion (10.7) server?

    I want to track the files being uploaded, downloaded, or deleted on the server's shares by different users. There must be some log where Mac OS writes an entry for every file access (upload / download / rename / delete) for each user. How can I access it? How can I filter the actions by user account? Do I have to activate such logging first, or is it enabled in the standard setup of Mac OS X Lion Server?

    try:
    sudo serveradmin settings afp:activityLog=yes
    then the log entries should be at
    /Library/Logs/AppleFileService/AppleFileServiceAccess.log

  • Log file (access.log) of the internal ITS

    Hello,
    does anybody know how to access the log files of the internal ITS? In particular I'm looking for the log file access.log, which for the external ITS was accessible over the ITS admin page http://<servername>/scripts/wgate/admin/!
    The log file logged all users and the transactions they accessed over time, in the format
    2006/10/21 18:39:16.093, 0 #197349: IP ???.???.???.???, -its_ping
    Thanks in advance,
    Kai Mattern

    hi,
    go through these links, I hope they'll help you solve your problem.
    http://www.hp.com/hpbooks/prentice/chapters/0130280844.pdf
    http://help.sap.com/saphelp_46c/helpdata/en/5d/ca5237943a1e64e10000009b38f8cf/content.htm
    thanks
    mrutyun^
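    If you do locate an access.log in the format Kai shows, pulling per-user, per-transaction counts out of it is straightforward. A small parser sketch; the field meanings (counter, session id) are assumptions inferred from the single sample line above:

```python
import re

# Assumed layout, inferred from:
#   2006/10/21 18:39:16.093, 0 #197349: IP ???.???.???.???, -its_ping
LINE = re.compile(
    r"(?P<ts>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d+),\s*"
    r"(?P<n>\d+)\s*#(?P<session>\d+):\s*IP (?P<ip>\S+),\s*-(?P<tcode>\S+)"
)

def parse_its_line(line):
    """Return a dict of fields for one access.log line, or None."""
    m = LINE.match(line)
    return m.groupdict() if m else None
```

Feeding each line through `parse_its_line` and counting by `tcode` or `ip` gives the usage-over-time view the external ITS admin page used to provide.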

  • Can't have ASM mark an NFS file as an ASM disk: "is not a block device"

    Hello,
    I’m trying to experiment with ASM for learning purposes. Because I don’t have access to a SAN, I am trying to use NFS files, but I can’t manage to have ASM mark those files as ASM disks.
    [root@localhost /]# /etc/init.d/oracleasm createdisk ASM_DISK_1 /mnt/asm_dsks/dg1/disk1
    Marking disk "ASM_DISK_1" as an ASM disk: [FAILED]
    The oracleasm log says: File "/mnt/asm_dsks/dg1/disk1" is not a block device
    OK, more context now:
    I am trying to install ASM on a RHEL5 virtual machine (on vmware).
    [root@localhost /]# uname -rm
    2.6.18-8.el5 x86_64
    I followed this document:
    http://www.oracle.com/technology/pub/articles/smiley-11gr1-install.html until I got stuck at the following command:
    /etc/init.d/oracleasm createdisk ...
    Now, the NFS filesystem comes from a Solaris 10 system (the only one that's available) running on a physical sun box (this one is not a virtual system).
    I have tried many combinations. I tried creating the files on the Linux VM using dd, as root and as oracle. I tried creating them on the Solaris side using mkfile... no matter what I try, I always get the same issue.
    I tried to follow this document: Creating Files on a NAS Device for Use with ASM (http://download.oracle.com/docs/html/B10811_05/app_nas.htm#BCFHCIEC)
    But nothing seems to work.
    Any idea, recommendations?
    Thanks,
    Laurent.

    Hi buddy,
    I guess the Metalink note 731775.1 should help you.
    In fact the procedure is:
    - Create the disk devices on your NFS directory (using dd)
    - Adjust the permissions on those files (in this case, oracle:dba)
    - Adjust the ASM_DISKSTRING at the ASM instance, setting the NFS directory in the discovery path
    - Verify that they are available in the v$asm_disk view
    - Create the diskgroup using the NFS disks that you have created.
    Hope it helps,
    Cerreia

  • Logging all file opens immediately at systemd boot?

    When I start my stage-2 systemd boot, I want to log every file that is opened for reading or writing (to /var/log/accessed.log). Am I reinventing the wheel if I write this, or is there already a standard service that does this?
    My plan is to use the fanotify event framework to write a file logger, presumably with a service file like:
    [Unit]
    Description=fanotify-logger
    DefaultDependencies=no
    After=local-fs.target
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/fanotify-all / /var/log/accessed.log
    Is there a best-practices recommendation for where local sysadmins should install such services (e.g., which directory, what steps)? And does my logger need to know how to shut down, or will systemd shut it down by itself?
    /iaw
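    The fanotify side of such a logger can be sketched in a few dozen lines. The following is only a sketch under stated assumptions: the constants are copied from <linux/fanotify.h> on x86-64, the struct layout matches current kernels, it needs CAP_SYS_ADMIN to run, and it does no rate limiting at all (a real service would need some):

```python
import ctypes
import os
import struct

# struct fanotify_event_metadata from <linux/fanotify.h>:
#   __u32 event_len; __u8 vers; __u8 reserved; __u16 metadata_len;
#   __u64 mask; __s32 fd; __s32 pid;   -- 24 bytes
META = struct.Struct("<IBBHQii")

# Constants copied from <linux/fanotify.h> / <fcntl.h>; verify against
# your kernel headers.
FAN_OPEN, FAN_MODIFY = 0x20, 0x02
FAN_MARK_ADD, FAN_MARK_MOUNT = 0x01, 0x10
AT_FDCWD = -100

def parse_events(buf):
    """Yield (mask, fd, pid) tuples from a raw fanotify read() buffer."""
    off = 0
    while off + META.size <= len(buf):
        event_len, _vers, _res, _mlen, mask, fd, pid = META.unpack_from(buf, off)
        yield mask, fd, pid
        off += event_len

def main(logpath="/var/log/accessed.log"):
    # Requires CAP_SYS_ADMIN; glibc wraps both syscalls.
    libc = ctypes.CDLL(None, use_errno=True)
    notify_fd = libc.fanotify_init(0, os.O_RDONLY)  # FAN_CLASS_NOTIF == 0
    libc.fanotify_mark(notify_fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
                       FAN_OPEN | FAN_MODIFY, AT_FDCWD, b"/")
    with open(logpath, "a") as log:
        while True:
            for mask, fd, pid in parse_events(os.read(notify_fd, 4096)):
                if fd >= 0:
                    # The event carries an open fd; resolve it to a path.
                    path = os.readlink(f"/proc/self/fd/{fd}")
                    log.write(f"{pid} {mask:#x} {path}\n")
                    os.close(fd)
```

On shutdown there is nothing to clean up beyond closing the fds, so letting systemd SIGTERM the process is fine; `Type=simple` would suit this loop better than `Type=oneshot`.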


  • Read-only file access from network volume

    I get a read-only file access problem on a network volume while sharing a drive from Snow Leopard to a Tiger install. Most files open fine, but *.fp7 (FileMaker) and *.xls (Excel) files won't open, failing with a read-only error.
    As described in the last post of http://discussions.apple.com/thread.jspa?threadID=1406977 the client had the same share name as the server. Renaming it resolved the error!
    Thanks!

    right then, as it looks like I'm talking to myself....
    I have just wiped clean the Macbook Pro.
    I installed Leopard from scratch, then installed Office 2008.
    Logged back onto the network share, and the read-only error came up again, ONLY in Excel.
    bugger.
    Did the same thing with my Macbook and all is fine.
    Copy the file to the local hard drive, opens ok.
    I then copied the file to another Mac on the network.
    mmmm, opens fine.
    what's the difference?
    mmmm, the machine it opens fine from is running 10.4,
    while the machine which hosts all the data is running 10.3.9.
    Could this be the problem?
    Just done a software update check on the 10.3 machine and there are some security updates that need doing.
    Going to run that now and see what happens, otherwise I think the iMac running 10.3.9 is going to need to come up to 10.4 and fingers crossed this will solve it.

  • ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

    Hi,
    I am getting the following error while taking an RMAN backup.
    ORA-01580: error creating control backup file /backup/snapcf_TST.f
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    DB Version:10.2.0.4.0
    While backing up datafiles I am not getting any errors, and I am using the same mount points for both (data & control files).

    [oracle@localhost dbs]$ oerr ora 27054
    27054, 00000, "NFS file system where the file is created or resides is not mounted with correct options"
    // *Cause:  The file was on an NFS partition and either reading the mount tab
    //          file failed or the partition was not mounted with the correct
    //          mount option.
    // *Action: Make sure mount tab file has read access for Oracle user and
    //          the NFS partition where the file resides is mounted correctly.
    //          For the list of mount options to use refer to your platform
    //          specific documentation.
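    For RMAN pieces and controlfile snapshots on Linux, the commonly cited fstab entry looks like the line below. The hostname and paths are placeholders, and the exact option list depends on platform and file type, so verify against MOS note 359515.1 ("Mount Options for Oracle files ... with NFS") before relying on it:

```
appserver:/backup  /backup  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0  0
```

The option most often missing in cases like this one is actimeo=0 (or equivalent noac), which is why datafile I/O can succeed while the controlfile snapshot fails.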

  • Sql agent job getting file access denied error

    I'm not sure if this question belongs in this forum. Please move it if you want to.
    Here is my question. I have an SSIS package that runs into an error at the File System Task when trying to move a file. The package is deployed to the catalog, and I am running it using the stored procedure
    [SSISDB].[catalog].[start_execution] @execution_id
    When I execute this stored proc in Management Studio while logged in as a sysadmin, everything works fine. But when I call the same T-SQL in a SQL Agent job, I get a file access denied error. This has something to do with the identity used
    to run the package, and I am not sure how to track that down. Any help would be appreciated.
    I've checked the Windows permissions on both the id that runs the SQL Agent and the SQL SSIS service. Both seem to have the right Windows permissions.

    Please see:
    http://support.microsoft.com/kb/918760

  • EM Application Log and Web Access Log growing too large on Redwood Server

    Hi,
    We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated, and I need to know what they are and whether they can be purged or reduced in size.
    They have existed since the system was installed; I have tried to open them but they are too large. I have also tried taking the cluster group offline to see if the files stop being updated, but they continue to be updated.
    Please could anyone shed any light on this and what can be done to resolve it?
    Thanks in advance for any help.
    Jason

    Hi David,
    The file names are:
    em-application.log and web access.log
    The File path is:
    D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
    Redwood/CPS version is 6.0.2.7
    Thanks for your help.
    Kind Regards,
    Jason

  • Auditing all users file access - too much information

    Hi, I have enabled a GPO with the following: Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration\Audit Policies\Object Access -> Audit File System -
    Success, on a file server.
    After that, I enabled auditing of successful Create files/Create folders on a folder for the built-in group Everyone.
    That part works fine; I can see when users create files in the folders. But I also get extreme amounts of other events logged in the Security log, and everything comes from the backup agent running on the server (NetBackup in this case).
    How come a backup agent creates events like this? It makes filtering much harder afterwards. The business requirement is to audit everyone who adds files to a specific folder, not all the rest of the server. The server is Win2008 R2.
    Example:
    An attempt was made to access an object.
    Subject:
    Security ID: SYSTEM
    Account Name: FILESERVER01$
    Account Domain: MYDOMAIN
    Logon ID: 0x3e7
    Object:
    Object Server: Security
    Object Type: File
    Object Name: \Device\HarddiskVolumeShadowCopy58\Windows\winsxs\amd64_microsoft-windows-audio-audiocore_31bf3856ad364e35_6.1.7601.18619_none_d4cab625fb3adf96\audiosrv.dll
    Handle ID: 0x3c4
    Process Information:
    Process ID: 0x1048
    Process Name: C:\Program Files\VERITAS\NetBackup\bin\bpbkar32.exe
    Access Request Information:
    Accesses: WriteAttributes

    Hi Steve,
    I feel your pain; I turned on logging on a file server and found the security log filling 4 GB in a couple of hours. I think the key is being very selective about what you audit. I found this article useful, and it had some PowerShell and ideas for helping
    make sense of the information overload - http://blogs.technet.com/b/mspfe/archive/2013/08/27/auditing-file-access-on-file-servers.aspx
    In my opinion, though, you really need a third-party solution to make this viable. Two I've looked at are
    Netwrix File Server Auditor and
    FileAudit, which seem very similar in functionality and ease of use. These basically read in the event log to provide long-term archiving and reporting on it.
    Good luck,
    Tim
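    As a stopgap while the audit SACL is being narrowed, the backup-agent noise Steve describes can also be filtered after export. A rough sketch over event-text blocks (it assumes the "Process Name:" field layout shown in the example event above):

```python
def drop_process(events, needle="bpbkar32.exe"):
    """Filter event-log text blocks (e.g. from a wevtutil or PowerShell
    export), dropping any event whose 'Process Name:' line mentions the
    backup agent. Post-filtering like this is a workaround; the cleaner
    fix is scoping the audit SACL itself."""
    kept = []
    for ev in events:
        proc = next((l for l in ev.splitlines() if "Process Name:" in l), "")
        if needle.lower() not in proc.lower():
            kept.append(ev)
    return kept
```

The same idea expressed as a `Where-Object` filter in PowerShell works directly against `Get-WinEvent` output if you prefer to stay on the Windows side.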

  • Oblix_OBWebGate_AuthnAndAuthz: ..- Unable to read log configuration file.

    Hi All,
    I have a painful problem with Oracle 10g WebGate. I am using Oracle_Access_Manager10_1_4_3_0_linux64_APACHE22_WebGate to protect an Apache resource. I have a problem with a custom authentication plugin which is still not resolved (https://forums.oracle.com/thread/2549716).
    Now I have applied patch BP09 (Oracle_Access_Manager10_1_4_3_0_BP09_Patch_linux64_APACHE22_WebGate) for WebGate, and now I have a new issue: I am unable to get the logon page of OAM when I try to access my Apache resource.
    The Error Message in the browser is:
    Internal Server Error
    The server encountered an internal error or misconfiguration and was unable to complete your request.
    Please contact the server administrator, root@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.
    More information about this error may be available in the server error log.
    Apache/2.2.3 (Oracle) Server at apache.tigerit.com Port 80
    In the error log file of  Apache I have found
    [Sun Jun 23 11:53:17 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    [Sun Jun 23 11:53:17 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    [Sun Jun 23 11:53:20 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    [Sun Jun 23 11:53:35 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    [Sun Jun 23 11:53:35 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    [Sun Jun 23 11:53:35 2013] [error] [client 192.168.1.156] Oblix_OBWebGate_AuthnAndAuthz:  Error: /opt/netpoint/webgate/access/oblix/config/oblog_config_wg.xml - Unable to read log configuration file.
    Can Anyone help me regarding this issue...
    Thanks
    Tamim Khan

    Hi Colin,
    Thanks for your help. You are right; I have changed the permissions of /opt/netpoint/webgate/access/oblix/lib/libxmlengine.so.
    Now I am able to access the logon page of the Access Manager.
    But, as I mentioned in (https://forums.oracle.com/thread/2549716), the problem is that my ObSSOCookie value is still loggedcontinue.
    My authentication returns ExecutionStatus.SUCCESS from the Java code, and yet I am unable to log in to the application. The thing is, I can't manipulate the ObSSOCookie from Java code.
    I applied Patch 12363955 to resolve this issue, but it did not resolve it.
    Do you have any idea how to resolve this issue?
    Thanks
    Tamim Khan

  • Audit file access

    I want to enable file and folder access auditing on a Windows 2008 server. I need the audit log to record all file activity by user, such as read, copy, create, rename, delete.
    Is there a way to see whether a user accessed a specific file?
    Thanks

    Hey, please have a look at these links for reference.
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/b18ca99b-db07-4e2e-8f13-67d58a4d1c63/windows-2008-server-files-access-real-time-monitoring
    Moreover, you can start from the several links here also:
    http://technet.microsoft.com/en-us/library/dd408940%28v...
    http://technet.microsoft.com/en-us/sysinternals/bb89664...
    http://technet.microsoft.com/en-us/library/cc721946.asp...
    The other option is to opt for a third-party tool such as Lepide Auditor for File Server, a file server monitoring tool that would help if you need real-time monitoring. Test the tool from the link below.
    http://www.lepide.com/file-server-audit/
    Thanks.

  • Wanted: Simple, Straightforward Logging of File Opens and File Closes Per This Specification -- Is Windows Capable of This?

    What is needed is for Windows to log every attempt to open any file on the system.  The log shall contain a timestamp, name of file, the type of access required
    (read only, write only, read and write, exclusive use, non-exclusive use), and name of the process or service that wants the file open.  Also there must be a record of how the operating system disposed of the request.  If the open is successful,
    say so. If not, say so, and why.  We had this info on the mainframe in 1972.  It would be useful to log file close events, as well.  The close event will disclose what the program did to the file.  For example, did the program write into
    the file?  Did the program read from file?  Did the program truncate the file and write?  Did the program extend the file?  Did the program change the name of the file?   Did the program change any file attributes and, if so,
    which ones?  A file can have multiple streams.  Disclose which streams were affected.
    There is a Security Auditing feature in Windows that doesn't meet this specification.  So that is not the answer.   What is the answer? 
    MARK D ROCKMAN

    I have downloaded Process Monitor and tried it on my lab computer. It certainly is comprehensive in its output. I'm going to try it on the production machines in hopes of catching clues as to who does what to whom in the file system that is causing
    3rd-party software to reboot the computer. The author of the troublesome program claims he must reboot the computer at the drop of a hat: for example, some file he must open right now is "locked" by some other program - not his program, mind you, some
    other program. Okay. So what else is running on the production system that may be doing this? Prove that some other program is doing this. The fact that we must log all file system activity up to the moment of reboot poses a special issue.
    Will the Process Monitor log lose any file system events because it cannot properly close the log as the system is being rebooted?  It is interesting the Federal Government is fine with Microsoft delivering an operating system that has no comprehensive
    file access logging capability.  Process Monitor may do it.  But one cannot run that behemoth 24/7/365.  (I hear you saying "Oh.  But we have Security Audits."  A CISSP may be impressed with that one.)
    MARK D ROCKMAN
