Maown - file system monitor for shared group directories

Maown Info Page
I needed a way to manage ownership and permissions of files in a shared directory. ACLs and "chmod g+s" alone were not enough, so I wrote maown.
Maown is a file system monitor written in C. It uses inotify to recursively watch a directory tree for file creation and attribute modification. It automatically chowns files to user:group and adjusts group permissions to match user permissions.
The package includes a daemon with a simple configuration file. Each line in the configuration file specifies a user, a group and a list of directories to monitor:
<user> <group> <directory> [<directory>...]
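For example, a single line like the following (hypothetical user, group and paths) keeps everything under two shared trees owned by media:media:
media media /srv/media /srv/incoming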
Last edited by Xyne (2012-05-21 02:35:24)

Maown has been replaced with Autochown.

Similar Messages

  • Unable to get the file system information for: \\****servername\E$\; error = 64 Unable to distribute content to DP

    One of our DPs has stopped loading content.
    I've researched this for quite a bit and cannot find a clear-cut reason for it. This server only has a DP role; I verified sharing permissions and all looked good. This DP has been running just fine for the last year or so, and all of a sudden it will no longer load
    packages. The disk drive is still present and I can still reach the hidden share \\servername.com\E$.
    Verified that the SMSSIG$ folder is there and the last entry is from 4/23/2015.
    SCCM 2012 R2
    OS 2008 R2 Standard
    Any help is greatly appreciated!
    Here's a snippet from the distmgr.log:
    Start updating the package on server ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\...
    Attempting to add or update a package on a distribution point.
    Will wait for 1 threads to end.
    Thread Handle = 0000000000001E48
    STATMSG: ID=2342 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=***.com SITE=1AB PID=2472 TID=8252 GMTDATE=Thu Apr 30 19:12:01.972 2015 ISTR0="SYSMGMT Source" ISTR1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="CAS00087" AID1=404 AVAL1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    SMS_DISTRIBUTION_MANAGER 4/30/2015 2:12:01 PM
    8252 (0x203C)
    The current user context will be used for connecting to ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\.
    Successfully made a network connection to \\*****.com\ADMIN$.
    Ignoring drive \\*****.com\C$\.  File \\*****.com\C$\NO_SMS_ON_DRIVE.SMS exists.
    Unable to get the file system information for: \\*****.com\E$\; error = 64.
    Failed to find a valid drive on the distribution point ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\
    Cannot find or create the signature share.
    STATMSG: ID=2324 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=sccmprdpr1sec2.mmm.com SITE=1AB PID=2472 TID=8252 GMTDATE=Thu Apr 30 19:12:55.206 2015 ISTR0="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    ISTR1="CAS00087" ISTR2="" ISTR3="30" ISTR4="94" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="CAS00087" AID1=404 AVAL1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    Error occurred. Performing error cleanup prior to returning.
    Cancelling network connection to \\*****.com\ADMIN$.

    Error 64 is being returned, which is simply "the network name is no longer available".
    There can be a number of reasons for this, from SMB compatibility issues (2003 servers won't support SMB2) to a mismatch between the expected and actual computer names of the boxes (it tries to authenticate with server.tld.com when the actual name is srv-01.tld.com and
    you just put a CNAME in). I'd start from the top: try opening said share from the primary site server, as that's the box doing the work. Verify the IP and computer name are legit and that no one has played ACL games between the two systems (remember,
    RPC only initiates/listens on port 135, but established connections are up in the dynamic port range).
    At the end of the day it's an issue "underneath" SCCM, and not an SCCM problem specifically.

  • System Monitoring for Satellite System in Solution Manager

    Hi,
    I configured the satellite system in Solution Manager through SMSY, and I created a solution in DSWP. I want to configure system monitoring through the CCMS agent. How do I install the CCMS agent, and how do I configure system monitoring for the satellite system in Solution Manager? Please guide me.
    Thank you

    Hi,
    In Solution Manager I did the steps below:
    1) Activated background dispatching in RZ21 (client 000).
    2) Created the CSMREG user (in RZ21).
    3) Generated the configuration file for the agent (CSMCONF).
    I downloaded the ccmagent_35-20001346.sar file.
    What are the steps I have to do in the satellite system?
    Thank you

  • System monitor for awesome

    I'm currently using awesome, and it's great except for one thing: there is no system monitor, like gnome-system-monitor for example. I want something that doesn't depend on GNOME and displays remaining laptop battery time, as well as mounted file systems, RAM/CPU/network usage and a list of processes.

    Here's what I'm using with dwm - probably good for dzen as well:
    dwm-mystats.sh
    #!/bin/zsh
    # vim:ft=zsh ts=4
    # (c) 2007 by Robert Manea
    # Each helper prints one field of the status line; 'dbar' draws small
    # textual gauges and 'mesure' samples network throughput.

    # Date format as recognized by standard system 'date'
    SDATE_FORMAT='%Y-%m-%d %H:%M'

    sdate() {
        date +${SDATE_FORMAT}
    }

    sdfree() {
        print -n `df -h / /home` | awk '{verf1=$11; part1=$13; verf2=$17; part2=$19}; END {print "["part1"] -"verf1" [~] -"verf2}'
    }

    smemused() {
        awk '/MemTotal/ {t=$2}; /MemFree/ {f=$2}; END {print t-f " " t}' /proc/meminfo | dbar -w 6 -s '='
    }

    swapused() {
        awk '/SwapTotal/ {t=$2}; /SwapFree/ {f=$2}; END {print t-f " " t}' /proc/meminfo | dbar -w 6 -s '='
    }

    sload() {
        print -n ${(@)$(</proc/loadavg)[1,3]}
    }

    scpu() {
        vmstat | awk '/0/ {print 100-$15}' | dbar -w 6 -s '='
    }

    snetabs() {
        local iface
        iface=`ifconfig | grep "Ethernet" | awk '{print $1}'`
        if [ $iface != "" ] ; then
            print -n "$iface: `mesure -l -i $iface -avK -c 2`||"
        fi
    }

    snetbar() {
        mesure -aK -c 2 -l -i `ifconfig | grep "Ethernet" | awk '{print $1}'` | dbar -nonl -w 6 -s '=' -min 0 -max 270
    }

    sac() {
        case `awk '{print $2}' /proc/acpi/ac_adapter/AC/state` in
            on-line)
                print -n "yep"
                ;;
            *)
                print -n "no"
                ;;
        esac
    }

    sbatt() {
        local STATE
        STATE=`cat /proc/acpi/battery/BAT0/state /proc/acpi/battery/BAT0/info`
        case `echo $STATE | awk '/present:/ {print $2}' - | tail -n 1` in
            no)
                print -n "no"
                ;;
            *)
                echo $STATE | awk '/design capacity:/ {f=$3}; /remaining capacity:/ {r=$3}; END {print 100*r/f"%"}' -
                ;;
        esac
    }

    sgov() {
        < /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    }

    sfreq() {
        awk '{print $1/1000}' /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    }

    stemp() {
        awk '{print $2}' /proc/acpi/thermal_zone/THRC/temperature
    }

    supdates() {
        grep pkg.tar.gz /var/log/updates.log | wc -l
    }

    # Main
    while true; do
        upd=$(supdates)
        if [ $upd = 0 ] ; then
            print " AC: $(sac) | Batt: $(sbatt) || $(sdfree) || CPU:$(scpu) ($(sload)) | $(sgov) / $(sfreq) MHz -> $(stemp)° || $(sdate)"
        else
            print " AC: $(sac) | Batt: $(sbatt) || $(sdfree) || CPU:$(scpu) ($(sload)) | $(sgov) / $(sfreq) MHz -> $(stemp)° || $upd UPD. || $(sdate)"
        fi
        # change polling frequency in AC/BATT mode
        case $(sac) in
            yep)
                sleep 8
                ;;
            no)
                sleep 1m
                ;;
        esac
    done
    You can play around a bit with the functions that get called/displayed in the main loop. It depends on zsh and makes use of this cronjob (in /etc/cron.hourly/pacsync):
    #!/bin/dash
    pacman -Syup --noprogressbar > /var/log/updates.log
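    If it helps, here's one hypothetical way to wire the script up from ~/.xinitrc: stock dwm displays the root window name in its bar, so each status line can be pushed there with xsetroot (the script name and location are whatever you saved them as):
    dwm-mystats.sh | while read -r line; do
        xsetroot -name "$line"
    done &
    exec dwm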

  • Linux Cluster File System partitions for 10g RAC

    Hi Friends,
    I plan to install a 2-node Oracle 10g RAC on RHEL, and I plan to use the Linux file system itself for the OCR, voting disk and datafiles (no OCFS2/raw/ASM).
    I have SAN storage.
    I would like to know how to create shared/cluster partitions for the OCR, voting disk and datafiles (common storage on the SAN).
    Do I need to install a Linux cluster file system to create these shared partitions (as we have Sun Cluster on Solaris)?
    If so, let me know which versions are supported and provide the necessary note/link.
    Regards,
    DB

    Hi,
    The link below may be useful to you:
    ORACLE-BASE - Oracle 10g RAC On Linux Using NFS
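    As a rough sketch of the approach that article describes, the shared files live on an NFS server and every RAC node mounts them with Oracle-friendly options; the host name and paths below are made up, and the article itself has the authoritative settings:
    # /etc/fstab entry, identical on both nodes
    nas1:/shared/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0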

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using the BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkeley DB data size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access),
    e.g.
         // Read all jdb files in the directory; a shell is needed to expand the glob and handle the redirection
         p = Runtime.getRuntime().exec(new String[] {"sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1"});
    Our application checks if new data is available every 15 minutes. If new data is available, it clears all old references and loads the new data, along with cat *.jdb > /dev/null.
    I would like to know if something like this can be done to improve BDB read performance; if not, is there a better method to warm up the file system cache?
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then it was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector, and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (the leaf node, or LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.
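    For what it's worth, the periodic warm-up described above can be reduced to a small shell loop (a sketch; the environment path and the 5-minute interval are assumptions):
    #!/bin/sh
    # re-read the .jdb files regularly so they stay in the file system cache
    while true; do
        cat /data/bdb-env/*.jdb > /dev/null 2>&1
        sleep 300
    done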

  • Shared file system recommended for OCR and voting disk in 10g R2

    Dear Friends,
    For Oracle 10g R2 (10.2.0.5) 64-bit, which shared file system is recommended for the OCR and voting disk (NFS / raw devices / OCFS2)?
    For datafiles and the FRA I plan to use ASM.
    Regards,
    DB

    Hi,
    If you're using Standard Edition then you have no choice but raw devices:
    http://docs.oracle.com/cd/B19306_01/license.102/b14199/options.htm#CJAHAGJE
    For OCFS2 you need to take extra care:
    Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to Avoid Unnecessary Node Fencing, Panic and Reboot [ID 395878.1]

  • Time Machine for shared group folders

    Hi,
    I would like to set up time machine for a shared group folder on the server. I would like users to be able to use time machine from clients to see backup history of files in the group folder. Is this possible? How?
    Gregor

    "...would like users to be able to use time machine from clients to see backup history of files in the group folder. Is this possible? How?"
    Yes. This is possible, but it isn't touted as one of the features that sells Mac OS X Server. The advertised Time Machine server solution is to host an AFP share point on the Mac OS X Server system which clients then use as a Time Machine backup destination, similar to Time Capsule.
    But, what I think you're going for is something like this: You have a share point where multiple users can read and write, and you'd like the server to be able to backup that share point via Time Machine, but also allow its clients to connect and see those backups. This is kind of like "Time Machine for a sharepoint." Again, this isn't really touted, but it's possible. Here is an example which shows you how.
    For starters, let's say that your server's disks are well-organized. You have one volume for the server's operating system (Boot), one volume for the share points (Data), and one volume for backup (Backup). Other scenarios could work, but this configuration will make this how-to easier to understand. Let's also agree that your server's hostname is junglecat.amazon.private.
    Let's say that the share point (group folder) that we want backed-up is at /Volumes/Data/Group-Folder on the server. Let's also agree that your group members already have access to this folder - e.g. it is already a share point with an ACL entry that grants the group members read and write access.
    You'll need to configure Time Machine on the server itself first, just as if it were a client. Open System Preferences, and use the Time Machine pane to choose the Backup volume as the server's backup destination. You'll want to make sure that only the contents of the Data disk (the disk housing the Group-Folder) are backed up. Skip anything on your boot disk, as you're using Time Machine to back up just the share points. Let the server at least start its first backup, which will create the following directory:
    /Volumes/Backup/Backups.backupdb/junglecat.amazon.private
    ...where the server's Time Machine backups will be stored.
    Now use Server Admin's File Sharing section to make the /Volumes/Backup disk a share point, with these properties: Share the folder via AFP, and enable it as a Time Machine backup destination. (Your clients will not really be backing up to the folder, but they will be needing to see it as a place that Time Machine can keep a backup. This creates the .com.apple.timemachine.supported file on the share point.)
    It's also a good idea to disable guest access and add an ACL deny rule to the share point (/Volumes/Backup) which denies delete and delete_child for everyone:
    chmod +a "everyone deny delete,delete_child" /Volumes/Backup
    This prevents people from storing other data next to the Backups.backupdb folder. The Backups.backupdb folder already contains an ACL deny rule that limits its access to read-only.
    Back on a client, mount the newly-shared Backup share point. The client's Time Machine may ask permission to use this as a backup location (because it was defined as such), but don't let it. The important thing is that the client recognized it as a backup location.
    To browse the server's Time Machine history, the client must right-click on the Time Machine icon in the Dock (or use the Time Machine menu extra) and choose "Browse Other Time Machine Disk," where the user can choose the server's backup volume. Now the client can go back in time on the server, star-field and all. If the user needs to restore a file, Time Machine will ask him/her where to save the restored copy; it will not overwrite the copy on the Group-Folder share point.
    Hope this helps!
    --Gerrit

  • Using old file system backup for Cloning

    I took an offline backup of Oracle 11i (11.5.10.2) 15 days ago. Before taking the backup of the file system, I verified that all the latest Rapid Clone patches were applied. No changes or patch work in the APPL_TOP or DB have been done since that backup. Now I need to clone this instance; how can I use this backup for cloning?
    Rapid Clone scripts create and generate some files/directories, so I am not sure whether my old backup of the file system will work or not. What is the best way to use an old backup for cloning, and what are the files and directories, in addition to the old backup of the file system, that I need to copy to the target system?
    Thanks for reviewing and suggestions.
    Samar

    Samar,
    If you have run preclone before backing it up, your backup should be valid for cloning.
    Section 2.1 in the cloning doc has to be in the backup.
    These docs should clear up your doubts on cloning:
    Cloning Oracle Applications Release 11i with Rapid Clone
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1
    FAQ: Cloning Oracle Applications Release 11i
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216664.1
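    For reference, running preclone on an 11i system typically looks like this (a sketch; the context-specific directory names are placeholders, and the documents above are the authoritative source):
    # database tier, as the oracle user:
    cd $ORACLE_HOME/appsutil/scripts/<context_name>
    perl adpreclone.pl dbTier
    # application tier, as the applmgr user:
    cd $COMMON_TOP/admin/scripts/<context_name>
    perl adpreclone.pl appsTier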

  • File permissions, inherit for shared user directory

    I'm trying to create a public share where users can read each other's files, but not write to them. That's no problem, but I'm also trying to create a public directory as well, where the users can create a folder inside another user's directory.
    For example:
    A new group is created called localnet (the name reflects a workgroup tied to Windows computers), and two users (user1 and user2) are made part of this group, so that only the group 'localnet' can cross-access each other's shares inside the /public folder only.
    A directory is created to reflect the new group, the two user dirs are added, and a public dir is added as well:
    mkdir /usr/share/localnet
    mkdir /usr/share/localnet/user1
    mkdir /usr/share/localnet/user2
    mkdir /usr/share/localnet/public
    Now to set the permissions: each user's dir is chowned to that user, and the public dir is chowned to root, all of them attached to the localnet group:
    /usr/share/localnet/user1 user1:localnet drwxr-x--- (chmod 750)
    /usr/share/localnet/user2 user2:localnet drwxr-x--- (chmod 750)
    /usr/share/localnet/public root:localnet drwxrwx--- (chmod 770)
    All is fine, but when user1 creates a dir in public, say for example /public/pictures, it now inherits a different set of permissions outside the boundaries of this setup: it inherited (drwxr-xr-x user1:users), which looks familiar; it's the way Linux addresses users' files. So now user2 tries to place pictures inside, say into /public/pictures/roll1, but this user is denied permission because user1 owns the dir. The permissions I need it to inherit are what I set on the parent, and that is (drwxrwx--- anyuser:localnet).
    What I'm trying to figure out is what command I can run to make the subdirectories of /public inherit the parent's permissions.
    EDIT: another thought, should I be doing this inside the "home" directory instead (/home/public)? Makes more sense to me; would it act any differently there?
    Last edited by wolfdogg (2011-12-06 10:35:12)

    I ran
    chmod -R g+w,g-t,o+t /usr/share/localnet/public
    The directory /usr/share/localnet/public/random_by_user1 now reads
    drwxrwsr-t user1:localnet
    (user2 was able to make a dir inside of this one)
    Although that worked, and user2 is now able to make a directory inside of a directory that user1 has made, it's not inheriting like this. So even though the recursive chmod set them correctly, the new ones are coming up like this:
    drwxr-sr-x user1:localnet
    (user2 was NOT able to make a dir inside of this one, obviously)
    without this elevated permission set.
    So we have it right when I chmod the files, but how do I get them to inherit this natively? That's a sticky bit, right? How do I make the 'rws' in group sticky, but with no delete permission for a non-owner? I think that's the question that should have been stated at the beginning of the post, now that I'm figuring this out.
    Edit: I just realized something: my poor choice of example might lead you to believe that the users' personal directories are inside /public; they are actually not. So in my examples, such as the one above where I referenced a new test directory as "/usr/share/localnet/public/user1", let me rephrase that so as not to confuse: "/usr/share/localnet/public/random_by_user1".
    The user directories are actually outside of public, e.g. /usr/share/localnet/someuser, whereas the public shares are inside /usr/share/localnet/public. I'm sure I made that clear, but at this point, confusion might lead to chaos. lol.
    Also, I don't want everyone else to have permission, only owner and group, so no need for o+t, correct? Remember, it's only public for the group, not for everyone.
    Last edited by wolfdogg (2011-12-06 11:55:02)
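    For anyone landing here later: the usual way to get this inheritance natively is the setgid bit on the directory combined with a default ACL; the sticky bit handles the no-delete-for-non-owners part. A minimal sketch against the paths above (setfacl comes from the acl package; adjust the group permissions to taste):
    chmod 2770 /usr/share/localnet/public                       # setgid: new entries inherit the localnet group
    chmod +t /usr/share/localnet/public                         # sticky: only an entry's owner may delete it
    setfacl -R -m g:localnet:rwx /usr/share/localnet/public     # fix up existing entries
    setfacl -R -d -m g:localnet:rwx /usr/share/localnet/public  # default ACL: new entries get group rwx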

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8i OPS or 9i RAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use HAStoragePlus (HASP) and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs, or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using RMAN, you'd have to back up the redo logs and archive log files on both nodes, if memory serves me correctly...

  • File system management when sharing photos/music with other users in Mac OS

    I'm a new convert from the Windows world. I'm trying to set up multiple accounts for each user within my household. However, I'd like us all to share the same set of folders for photos within iPhoto and music in iTunes. Is it best to just use the "Shared" folder instead of keeping them in the standard home directory and trying to share those folders with other users? What is the best approach to this? I would imagine most people do this for managing their music, if not photos as well. Just trying to learn best practice on file system management before I copy over gigs of files from my old PC.
    Thanks much,
    champlir

    champlir
    Welcome to the Apple Discussions. This might help with iPhoto.
    There are two ways to share, depending on what you mean by 'share'.
    If you want the other user to be able to see the pics, but not add to, change or alter your library, then enable Sharing in your iPhoto (Preferences -> Sharing), leave iPhoto running and use Fast User Switching to open the other account. In that account, enable 'Look For Shared Libraries'. Your Library will appear in the other source pane.
    Remember iPhoto must be running in both accounts for this to work.
    If you want the other user to have the same access to the library as you: to be able to add, edit, organise, keyword etc. then:
    Quit iPhoto in both accounts
    Move the iPhoto Library Folder to an external HD set to ignore permissions. You could also use a Disk Image or even partition your Hard Disk.
    In each account in turn: Hold down the option (or alt) key and launch iPhoto. From the resulting dialogue, select 'Choose Library' and navigate to the new library location. From that point on, this will be the default library location. Both accounts will have full access to the library, in fact, both accounts will 'own' it.
    However, there is a catch with this system, and it is a significant one. iPhoto is not a multi-user app; it does not have the code to negotiate two users simultaneously writing to the database, and trying will cause database corruption. So only one user at a time, and back up, back up, back up.
    Finally: If you're comfortable in the Terminal and understand file permissions, ACLs etc., some folks have reported success using the process outlined here. (Note this page refers to 10.4, but it should also work on 10.5.) If you're not comfortable with the Terminal, and don't know an ACL from the ACLU, then you're best doing something else... Oh, and the warning about simultaneous users still applies.
    Regards
    TD

  • [Solved] System Monitoring for SSH

    Hi all,
    I'm on the lookout for a system monitoring tool that doesn't require X/KDE/GNOME etc. - ideally something like htop but with the functionality of something like conky, i.e. it can also show temperatures (from lm-sensors etc.).
    I'm running a basic no-frills storage box that hosts Subsonic, and via SSH I would like to have a tool I can quickly call upon to give me live feedback on my CPU usage etc. as well as temps. A colourful interface would be nice too
    Any recommendations?
    Thanks in advance!
    Last edited by killercow (2011-07-29 18:25:53)

    Thanks for your recommendations:
    graysky: I wanted to avoid running a webserver, so I gave Monitorix a miss - though it looks like a brilliant monitoring tool.
    ChoK: thanks for putting me back on this path. I had an initial look and had kind of given up on conky being able to do this (especially since it explicitly advertises itself as a program that requires X) - however!!...
    Found these options for conky.conf:
    out_to_console
        Print text to stdout.
    out_to_ncurses
        Print text in the console, but use ncurses so that conky can print the text of a new update over the old text. (In the future this will provide more useful things)
    out_to_stderr
        Print text to stderr.
    out_to_x
    When set to no, there will be no output in X (useful when you also use things like out_to_console). If you set it to no, make sure that it's placed before all other X-related settings (take the first line of your config file to be sure). Default value is yes.
    Added
    out_to_ncurses yes
    out_to_x no
    Commented out (to get rid of the little warning message that appears otherwise):
    #out_to_console no
    ...and voilà! Conky à la console! Thanks a million! Time to play with my conky.conf file further
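    In case it saves someone a search, a minimal console-only conky.conf along those lines might look like this (pre-1.10 syntax; the status variables shown are just examples):
    out_to_ncurses yes
    out_to_x no
    update_interval 2

    TEXT
    CPU: ${cpu}%  RAM: $mem / $memmax  Load: $loadavg  Temp: ${acpitemp}C  ${time %H:%M:%S}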

  • Creating File System Repository for a remote system

    Hi Experts,
    My requirement is that I need to create a KM repository in EP from which I need to upload documents into the BW system. Is a File System Repository the right type of repository to create for this purpose?
    If yes, then in what format do I have to specify the value of the Root Directory property of the repository? I have a folder /data/docs created in the BW system into which I want to upload documents using this repository. But since this folder is located on the BW system, which is a remote system for EP, I am not sure how I have to enter the path to this folder.
    Can anyone give me any hints on this?
    Warm Regards,
    Saurabh

    Hello Saurabh,
    I don't think an FS repository is what you are looking for in this scenario; you could instead use a BI Repository Manager. For more information see:
    http://help.sap.com/saphelp_nw70/helpdata/en/43/c1475079642ec5e10000000a11466f/frameset.htm
    Kind regards,
    Lorcan.

  • Solution Manager System Monitoring for non SAP system

    Dear Support,
    I have configured the CCMS agent (SAPCCMSR) for a non-SAP system (Windows 2008 R2).
    The system information (e.g. CPU, memory, disks etc.) is already shown in RZ20 of the central monitoring system (e.g. Solution Manager).
    What do I have to configure to display this information in the Alert Inbox tab of the System Monitoring work center?
    Best regards,
    Fan Hung

    Hi,
    Have you defined and added the corresponding logical systems for the respective satellite systems?
    Here is the sequence of steps you have to do:
    in SMSY:
    1) Define server, DB, Systems
    2) Generate READ, TMW, TRUSTED RFCs to the satellite systems
    3) Define logical systems and assign the satellite system to them
    in Solution_Manager:
    1) Define a new solution
    2) Add the logical system to the solution
    If these steps are successful, you should be able to see the satellite system in System Monitoring in Solution Manager.
    Have you already done these steps?
    After performing these steps:
    Solution_Manager>choose relevant solution>operations setup>Solution Monitoring>System monitoring>Activate monitoring> choose the system and activate monitoring.
    Your system will then appear.
    Does this help in any way?
