My Home File Server Woes

Hey guys, I know I have been asking a lot of questions lately, but I am trying to get this server running. I got my G3 B&W running again and installed a slave 80 GB hard drive formatted Mac OS X Extended. I want to do Time Machine backups from my MacBook over the network to this disk, but I can't get it working: Time Machine does not find the disk. Any ideas? The G3 is running OS X 10.2; I can put Linux on it, or go back to OS 8 or 9 (I have the disks for those). Please help.
Daniel

Time Machine only backs up to network volumes that are served from a Leopard machine. There is a workaround: http://vowe.net/archives/008940.html
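The linked workaround boils down to telling Time Machine to show unsupported network volumes. This is the well-known hidden preference, not an Apple-supported setting, and it is run on the MacBook (the Leopard client), not on the G3:

```shell
# Run in Terminal on the Leopard MacBook; unsupported, use at your own risk.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```

After this, mount the network share in the Finder first, then pick it in Time Machine's "Change Disk..." dialog.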

Similar Messages

  • Using Time Capsule as a File Server

    In my current set-up, I need to use my 3TB Time Capsule as a router, a Time Machine destination for a couple of Macs, and as a file server (about 1TB in size).
    The file server files will not be on any of the Macs so need to be backed up separately to a 1TB USB drive.
    Apple has confirmed that the TC can be partitioned easily and, although unsupported, you can use third party apps to back up the file server to an external drive.
    Does anyone have any real experience of doing this?
    I have the option to use a different router and to sort out the Time Machine backups without needing the TC, which means the TC would be used only as a file server.  Would you recommend another product over the TC if you only needed a home file server?
    Any help much appreciated!

    To answer your question, yes but I would not use two Time Capsules. Just purchase one Time Capsule and then one external hard drive (cheaper than a Time Capsule) and connect the hard drive to the Time Capsule via USB. You can use the Airport Utility to archive the contents of your Time Capsule to the external hard drive using the Archive feature (http://support.apple.com/kb/HT1281).
    In fact, it may be worth buying two external hard drives so you can keep one offsite in case of disaster, theft, etc. You then rotate the external hard drives weekly (or whatever you choose) so you always have a current backup offsite.

  • How to preview graphic files on a home web server setup?

    I apologize if this question is not asked in the most technical way, but here goes:
    I've been reading online about how to create a home web server or FTP server on a Mac. (I plan to do this with my G5, running OS X 10.6.4.)
    I would like to have a library of folders containing all of the graphic files that I've collected over the years, in order to have access to them (if needed) when I'm working remotely.
    I want to be able to easily view all of these graphic files like I can with Adobe Bridge (EPS, PSD, AI, GIF, JPG, PNG, etc.).
    Is there software I can use that will let me view a gallery of thumbnails of all of these file types when I am away from home and logging into the FTP or web server?
    I know software such as Transmission FTP will let me look at one image at a time, but I'd like to view an entire gallery of thumbnails at once for quick reference.
    Thanks in advance.

    Hi johnny-griswold;
    The G5 was called a Power Mac and used a PowerPC processor, so it cannot run 10.6.4.
    The Mac Pro is an Intel-based Mac and is capable of running 10.6.4.
    You might want to drop your references to the G5, because they are only confusing things.
    Allan
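    On the gallery question itself: once the graphics folder is reachable through the web server, a short script can write a static thumbnail index into it. A minimal sketch in Python (the function name and output file are assumptions; only browser-displayable formats are included, since EPS/PSD/AI would need converting to something like PNG first):

```python
import html
from pathlib import Path

# Only formats a browser can render directly; EPS/PSD/AI would need
# converting (e.g. to PNG) before they could appear in the gallery.
IMAGE_EXTS = {".gif", ".jpg", ".jpeg", ".png"}

def build_gallery(folder, out_file="index.html"):
    """Write a static HTML thumbnail index for the images in `folder`
    and return the number of images indexed."""
    folder = Path(folder)
    rows = []
    for p in sorted(folder.iterdir()):
        if p.suffix.lower() in IMAGE_EXTS:
            name = html.escape(p.name)
            # The browser scales the full image down via CSS; true
            # pre-scaled thumbnails would need an imaging library.
            rows.append(
                f'<a href="{name}"><img src="{name}" alt="{name}" '
                f'style="max-width:160px;max-height:160px"></a>'
            )
    page = "<!DOCTYPE html><title>Gallery</title>\n" + "\n".join(rows)
    (folder / out_file).write_text(page)
    return len(rows)
```

    Pointing the web server's document root at that folder then gives a browsable gallery from anywhere; real pre-scaled thumbnails would need an imaging library such as Pillow.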

  • How to mirror file server?

    I run a small office network with Mac OS X Server 10.4.x doing various services including AFP. My wife and I also work from home and we would like to be able to access remotely various files which are held on the server.
    We can do this over the internet, but the connections are not too quick. Ideally I would like to be able to run a mirrored copy of the file server on a little server at home, so that we can access the files locally and then have the servers keep each other up to date across the internet.
    Is this possible?
    TIA
    James

    Can anybody help me with this?
    TIA

  • Create a new web application, how shall I update the file server.xml

    Hi,
    I will create a new web application named newApp. Then I create a file structure as follows:
    - <server-root>/newApp
    - <server-root>/newApp/WEB-INF
    - <server-root>/newApp/WEB-INF/classes
    Then I must tell the server that I have created a new web application, which means updating my server.xml file. How shall I do this, and where in the file shall I type in the new information?
    I use Windows XP Pro and Tomcat 4.1.27.
    My server.xml file looks like below:
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN" debug="0">
    <!-- Comment these entries out to disable JMX MBeans support -->
    <!-- You may also configure custom components (e.g. Valves/Realms) by
    including your own mbean-descriptor file(s), and setting the
    "descriptors" attribute to point to a ';' seperated list of paths
    (in the ClassLoader sense) of files to add to the default list.
    e.g. descriptors="/com/myfirm/mypackage/mbean-descriptor.xml"
    -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
    debug="0"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
    debug="0"/>
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved">
    </Resource>
    <ResourceParams name="UserDatabase">
    <parameter>
    <name>factory</name>
    <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
    </parameter>
    <parameter>
    <name>pathname</name>
    <value>conf/tomcat-users.xml</value>
    </parameter>
    </ResourceParams>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Tomcat-Standalone">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 4.0 documentation bundle for more detailed
    instructions):
    * Download and install JSSE 1.0.2 or later, and put the JAR files
    into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8080 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8080" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="100" debug="0" connectionTimeout="20000"
    useURIValidationHack="false" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to -1 -->
    <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" scheme="https" secure="true"
    useURIValidationHack="false" disableUploadTimeout="true">
    <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
    clientAuth="false" protocol="TLS" />
    </Connector>
    -->
    <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8009" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" connectionTimeout="0"
    useURIValidationHack="false"
    protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <!--
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
    port="8009" minProcessors="5" maxProcessors="75"
    acceptCount="10" debug="0"/>
    -->
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8082" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" connectionTimeout="20000"
    proxyPort="80" useURIValidationHack="false"
    disableUploadTimeout="true" />
    -->
    <!-- Define a non-SSL legacy HTTP/1.1 Test Connector on port 8083 -->
    <!--
    <Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8083" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- Define a non-SSL HTTP/1.0 Test Connector on port 8084 -->
    <!--
    <Connector className="org.apache.catalina.connector.http10.HttpConnector"
    port="8084" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
    <Engine name="Standalone" defaultHost="localhost" debug="0" jmvRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Standalone" defaultHost="localhost" debug="0">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.3
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    debug="0" resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host -->
    <Host name="localhost" debug="0" appBase="webapps"
    unpackWARs="true" autoDeploy="true">
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn"
    debug="0"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Logger shared by all Contexts related to this virtual host. By
    default (when using FileLogger), log files are created in the "logs"
    directory relative to $CATALINA_HOME. If you wish, you can specify
    a different directory with the "directory" attribute. Specify either a
    relative (to $CATALINA_HOME) or absolute path to the desired
    directory.-->
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
    timestamp="true"/>
    <!-- Define properties for each web application. This is only needed
    if you want to set non-default properties, or have web application
    document roots in places other than the virtual host's appBase
    directory. -->
         <DefaultContext reloadable="true"/>
    <!-- Tomcat Root Context -->
    <Context path="" docBase="ROOT" debug="0"/>
    <!-- Tomcat Examples Context -->
    <Context path="/examples" docBase="examples" debug="0"
    reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_examples_log." suffix=".txt"
    timestamp="true"/>
    <Ejb name="ejb/EmplRecord" type="Entity"
    home="com.wombat.empl.EmployeeRecordHome"
    remote="com.wombat.empl.EmployeeRecord"/>
    <!-- If you wanted the examples app to be able to edit the
    user database, you would uncomment the following entry.
    Of course, you would want to enable security on the
    application as well, so this is not done by default!
    The database object could be accessed like this:
    Context initCtx = new InitialContext();
    Context envCtx = (Context) initCtx.lookup("java:comp/env");
    UserDatabase database =
    (UserDatabase) envCtx.lookup("userDatabase");
    -->
    <!--
    <ResourceLink name="userDatabase" global="UserDatabase"
    type="org.apache.catalina.UserDatabase"/>
    -->
    <!-- PersistentManager: Uncomment the section below to test Persistent
    Sessions.
    saveOnRestart: If true, all active sessions will be saved
    to the Store when Catalina is shutdown, regardless of
    other settings. All Sessions found in the Store will be
    loaded on startup. Sessions past their expiration are
    ignored in both cases.
    maxActiveSessions: If 0 or greater, having too many active
    sessions will result in some being swapped out. minIdleSwap
    limits this. -1 or 0 means unlimited sessions are allowed.
    If it is not possible to swap sessions new sessions will
    be rejected.
    This avoids thrashing when the site is highly active.
    minIdleSwap: Sessions must be idle for at least this long
    (in seconds) before they will be swapped out due to
    activity.
    0 means sessions will almost always be swapped out after
    use - this will be noticeably slow for your users.
    maxIdleSwap: Sessions will be swapped out if idle for this
    long (in seconds). If minIdleSwap is higher, then it will
    override this. This isn't exact: it is checked periodically.
    -1 means sessions won't be swapped out for this reason,
    although they may be swapped out for maxActiveSessions.
    If set to >= 0, guarantees that all sessions found in the
    Store will be loaded on startup.
    maxIdleBackup: Sessions will be backed up (saved to the Store,
    but left in active memory) if idle for this long (in seconds),
    and all sessions found in the Store will be loaded on startup.
    If set to -1 sessions will not be backed up, 0 means they
    should be backed up shortly after being used.
    To clear sessions from the Store, set maxActiveSessions, maxIdleSwap,
    and minIdleBackup all to -1, saveOnRestart to false, then restart
    Catalina.
    -->
    <!--
    <Manager className="org.apache.catalina.session.PersistentManager"
    debug="0"
    saveOnRestart="true"
    maxActiveSessions="-1"
    minIdleSwap="-1"
    maxIdleSwap="-1"
    maxIdleBackup="-1">
    <Store className="org.apache.catalina.session.FileStore"/>
    </Manager>
    -->
    <Environment name="maxExemptions" type="java.lang.Integer"
    value="15"/>
    <Parameter name="context.param.name" value="context.param.value"
    override="false"/>
    <Resource name="jdbc/EmployeeAppDb" auth="SERVLET"
    type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/EmployeeAppDb">
    <parameter><name>username</name><value>sa</value></parameter>
    <parameter><name>password</name><value></value></parameter>
    <parameter><name>driverClassName</name>
    <value>org.hsql.jdbcDriver</value></parameter>
    <parameter><name>url</name>
    <value>jdbc:HypersonicSQL:database</value></parameter>
    </ResourceParams>
    <Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"/>
    <ResourceParams name="mail/Session">
    <parameter>
    <name>mail.smtp.host</name>
    <value>localhost</value>
    </parameter>
    </ResourceParams>
    <ResourceLink name="linkToGlobalResource"
    global="simpleValue"
    type="java.lang.Integer"/>
    </Context>
    </Host>
    </Engine>
    </Service>
    </Server>

    To use servlets you do indeed have to update your web.xml... well, I'm not sure this is relevant to your case anyway.
    You have to add a <servlet> element to that file.
    Something like this:
    <servlet>
      <servlet-name>blabla</servlet-name>
      <servlet-class>blablapackage.Blablaclass</servlet-class>
      <init-param>...</init-param>
    </servlet>
    Now this may not solve your problem. Make sure you refer to your servlets using their fully qualified names. By the way, just to be sure, what is your definition of "servlet"? (I mean: any Java class, or only javax.servlet.Servlet?)
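    For the server.xml side of the original question, the usual approach in Tomcat 4.x is to add a <Context> element inside the <Host> section, alongside the existing ROOT and examples contexts. A sketch (attribute values are illustrative, based on the layout described above):

```xml
<!-- Add inside <Host name="localhost" appBase="webapps" ...>.
     docBase is resolved relative to appBase, so either place newApp
     under webapps or give docBase as an absolute path. -->
<Context path="/newApp" docBase="newApp" debug="0" reloadable="true"/>
```

    With autoDeploy="true" on the Host, dropping the newApp directory under webapps may be picked up without any server.xml edit at all; restart Tomcat after changing server.xml either way.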

  • Decommissioned a file server, but every mobile account in the place is still trying to connect to it on login!

    A couple of months ago we decommissioned the 10.4.11 Xserve that was serving as our LDAP server and home directory server for mobile accounts.  We migrated all of that to a newer 10.6.8 Xserve.  It was a fairly rough migration, but we've pretty much sorted it out except for one last annoyance: when you look at System Preferences > Accounts > Login Items for all of our mobile accounts, every single client is still trying to mount an AFP share on the old server.  Logging in takes FOREVER because the connection needs to time out, so now my users are no longer logging out and in as often as they should, and their Home Syncs are getting stale.
    When you go to the client's Preferences, the line referencing the old server share is still there, but the minus sign is greyed out so the item cannot be deleted.
    The Kind is listed as "unknown" and there is a grey warning triangle next to it.
    This is clearly some sort of template/preference that is hardcoded to the old name, and whatever file this is got moved to the new server (which has a different name and a different numeric IP address), because even the BRAND NEW users that I have created since "pdc04.hgbc.com" disappeared are trying to log in to the non-existent share on the non-existent server, too!
    I have tried running grep on the entire disk on one of the clients looking for the string "pdc04", and updated everything that I found using vi directly on the files.  I have tried running grep on select directory trees on the new file server looking for pdc04.  In my grep on the client, I found the string in
         /Library/Managed Preferences/user/loginwindow.plist
         /Library/Managed Preferences/user/complete.plist
    but searching all of the loginwindow.plist and complete.plist files on the new server comes up with nothing.
    Does anyone have any idea where the template or preference or plist is on the server so that I can delete or update the file with the new host name?
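    One thing worth checking while grepping: managed preferences are often stored as binary plists, which a plain grep can silently mismatch; converting each one to XML first is more reliable. A sketch, with paths taken from the post (run as root on a client, adjust the glob for your layout):

```shell
# Convert each managed-preference plist to XML on stdout and search it
# for the old server's name; print the path of any file that matches.
for f in "/Library/Managed Preferences"/*/*.plist; do
  plutil -convert xml1 -o - "$f" | grep -q "pdc04" && echo "$f"
done
```
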

    I think that Grant is on the right track -- but the problem is that whatever file that pdc04's Server Manager wrote into is not available to pdc's Server Manager to edit or even display.
    We upgraded mostly by turning off the G5 10.4 xserve and unplugging the fiber-channel RAID (with user accounts on it) and plugging the RAID into a fiber-channel card on the new (to us) Nehalem 10.6 xserve, which we did after using Migration Assistant between the machines.  Then we had various and sundry problems, and we ended up moving all of the mobile account directories to the internal RAID on the new server.
    Clearly there is a file somewhere that acts as a template for mobile accounts; it refers to the old machine, but it's been moved to the new machine.
    Only two of the mobile accounts have directories in the /Library/Managed Preferences folder.  One of them, ironically, is mine, and my account hasn't worked right since we went to the new server.

  • One USB HD with Multiple AEBS, Move from NAS/File Server

    Okay, to explain my question I'll describe what I want to do. If this is possible I would buy two AirPort Extremes and one USB HD. I would then use the USB HD to do Time Machine backups to an AirDisk. I would like to then be able to take the USB HD with me when I go to work and do AirDisk backups there, having connected the same USB drive to a different TC or AEBS, and afterwards, obviously, bring it home and continue doing backups there. Is this at all possible? Has anyone tried? Can I at least browse my backups after having connected them to a different AEBS?
    Another question is, up until now i have been using an "unsupported" backup to my Linux File Server over netatalk. I did this, mainly because i have raid, and my data can't really be lost this way. Is there anyway to move this backup to an Airdisk? I read something here about different formats being used to save data on different types of backups. The share is on a partition that is formated ext3, if that helps at all. Oh and the backup itself is a sparseimage.
    Thanks for your replies,
    Dmitry

    Okay, to explain my question I'll describe what I want to do. If this is possible I would buy two AirPort Extremes and one USB HD. I would then use the USB HD to do Time Machine backups to an AirDisk. I would like to then be able to take the USB HD with me when I go to work and do AirDisk backups there, having connected the same USB drive to a different TC or AEBS, and afterwards, obviously, bring it home and continue doing backups there. Is this at all possible?
    It should work fine, but you may run out of space quickly if you back up multiple computers to the disk.

  • How can I use Airport Extreme just as a file server - no wifi

    I'm setting up a new Apple environment for an invalid music buff.  Here are the components, all newly purchased.
    MacBook Pro
    Airport Extreme
    Airport Express
    I was handed a G-Drive with a large iTunes file collection on it.
    The user will have Comcast as his network provider and he already has a Comcast wireless router, so none of the Apple devices will need to be used as a wireless router or base station (assuming I understand what Apple means by a "base station").
    The Airport Express has the primary function of acting as an AirPlay device, feeding an analog music signal to the user's hi-end audio system (Denon integrated amplifier, Thiel Loudspeakers).
    The Airport Extreme has the sole function of acting as a network file share for the G-Drive.
    The MacBook Pro will operate iTunes and will get its music files from the network shared G-Drive and send it to the AirPlay device, which will feed the music signal to the hi-end audio system.
    I am able to get the Airport Express to simply act as a wireless client on the Comcast provided wireless network.  The MacBook Pro is also on the network and can see the Airport Express as an AirPlay device and send music to the audio system.
    What I'm having problems with is getting the Airport Extreme to simply act as a file server on this LAN.  It can either work in bridged mode and just be a wireless network client, or I can readily run an ethernet cable from the Airport Extreme to the Airport Express, or even to the Comcast router. 
    I tried to connect the MacBook Pro to the Airport Extreme with an ethernet cable and get it to "join an existing network", but it refused to recognize the Comcast provided network.  All it could see were some networks in homes nearby.
    I was hoping that if I just ran a network cable between the Airport Extreme and the Airport Express, that it would just get a network address over DHCP through the Airport Express.  Nope.
    So I'm stuck.   How do I get the Airport Extreme to simply act as a file server for the G-Drive on my LAN?

    +Can I use the Airport Extreme base station as a wired router, with wireless disabled for the time being?+
    Sure, but it will probably take just as much time (a few minutes) to turn the wireless off as it would to simply change the name of the default wireless network to your personal choice.
    The AirPort Extreme is pre-configured to create a wireless network when it is hooked up to a modem. All you need do is assign a name to this network and establish a password. If you don't want to do this, you can turn the wireless off and use the device as a wired only router.
    AirPort Utility, the application that is used to setup the AirPort Extreme has a simple guided step by step process for you to configure the device the way you want.

  • Setting up permissions on a central file server...

    I am setting up a central file server in a small network environment where the users will share a drive and jobs on the drive. The problem I am having is that if I set up separate users and one group that they all belong to, when someone creates a job in their home directory and then copies it to the server, the rest of the users can access it as read-only. I need them to be able to read and write to each other's folders. Any solution other than creating one user account that they all share (since that kind of defeats the purpose)?
    Thanks in advance,
    Larry

    The solution to your problem depends on whether you want to use ACLs or not.
    If you are managing a server, you should crack open the manuals; the answer to your question lies within those pages. To point you in the right direction:
    If ACLs are NOT enabled for the volume: when you click the Share Point in Workgroup Manager, then click the Protocols tab, you will see a check box for "Inherit Permissions from Parent". This is what you want.
    If this option is greyed out, then you DO have ACLs enabled for the volume.
    The explanations for this are not short; managing a server requires reading, research and frustration.
    ACLs work more reliably than the POSIX 'inherit' permissions option does.
    That said....
    A user of this forum put together an excellent guide to ACLs. Here's the link:
    http://discussions.apple.com/thread.jspa?messageID=648307&#648307
    Jeff
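    As a concrete illustration of the ACL route, an inherited access-control entry can also be set from the command line on the server. A sketch; the "editors" group name and share path are hypothetical, and the exact permission list you want depends on your workflow:

```shell
# Give the (hypothetical) "editors" group full directory rights on the
# share; file_inherit/directory_inherit make items created later pick
# up the same entry automatically.
sudo chmod +a "group:editors allow list,search,add_file,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" "/Volumes/Data/SharedJobs"
```

    Inherited ACEs only apply to items created after the entry is set; existing files would still need their permissions adjusted once.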

  • How to integrate a file server with Portal?

    Hi all,
    Can anyone tell me how to integrate the File Server with the Portal Server?
    In my portal server, under the Home tab, I have a sub-tab (second-level navigation) for the file server. When I click this tab it throws an error:
    System Error
    An exception occurred during the program execution. Below you will find technical information pertaining to this exception that you might want to forward to your system administrator.
    Exception Class  :: class com.sapportals.wcm.repository.NotSupportedException
    Exception Message  :: Not Implemented
    These are the steps that I have taken:
    > Created an HTTP System.
    > Created a WebDAV Repository.
    > Created a Cache.
    > Created a KM WebDAV System.
    > Created an iView, in which I have to specify the <b>Path to Initially Displayed Folder</b>.
    I have specified the folder on the file server that I want to display, but I get the same error; in fact, I don't see that folder in KM at all. When I specify a folder that is present in KM, it works fine.
    So my reading of the situation is that the file server is not integrated properly; if it were, I would be able to see the file server's folder in KM. Am I correct?
    Please guide me to integrate the file server properly.
    Points will be given for any help.
    Regards
    Vinit

    Hi Vinit,
    if you want a Windows file server integrated into the Portal, please do not use a WebDAV Repository; use the File System Repository instead. For that, you need to configure the corresponding Repository Manager. Please refer to this documentation:
    <a href="http://help.sap.com/saphelp_nw70/helpdata/en/ed/b334ea02a2704388d1d2fc3e4298ad/frameset.htm">Integrating Documents from a Windows System into KM</a>
    HTH,
    Carsten

  • Backup solutions w/RAID or redundancy (NAS, RAID, DIY File server)

    Hi all, I need a place to bounce my ideas off of. Here goes:
    I have been doing a lot of reading, since I was considering adding an NAS solution for my home network. My data consists mainly of videos (TVs and movies) and pictures (many many years worth).
    Anyways, out-of-the-box solutions seemed a bit too pricey, and the RAID not that spectacular unless you're willing to spend, so I began looking at building my own file server with a hardware/software RAID solution. That was a bit better bang for the buck, but I still had one nagging concern.
    I've played around with RAID before, and I realized that mirroring (the only RAID option I was really considering) relies on the RAID controller. I couldn't just take a hard drive, remove it physically from the array, and have my information accessible by plugging it into another computer.
    What happens in a few years if your RAID controller dies and you can't find the exact same one? Your array will always be dependent on that controller and I really don't like that feeling. I'd rather have the option of taking a drive, plugging it in another computer, rather than needing to move the whole array (RAID, NAS, DIY file server) around. That means quicker access to my information or the ability to take it with me anywhere I go, on a moment's notice.
    The least costly solution I have come up with, for data that doesn't change all that much, is to have two huge drives (1 TB) on a computer, one or both connected via eSATA. Just ghost/copy the main drive to the 'backup' drive on a regular basis, and keep the backup drive detached (preferably in a fire-proof safe) the rest of the time.
    Sorry for the long post, but how does that sound, for a cheap, reliable backup solution, for data that doesn't get updated too frequently and for ease of access and use?

    Hi BGBG;
    For what you are attempting to do, RAID is not the best solution. The reason I say this is because RAID 1 is only capable of protection from disk failure. It is not a valid backup solution.
    I think that your last solution of using eSATA and a copy is the best. My only addition to your proposal would be a third disk. That way when you move the backup disk into storage you could replace it with the third one. In this way you could use SuperDuper to periodically backup between two disks.
    Allan
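
    If you ever want to script that periodic ghost/copy step instead of (or alongside) SuperDuper, a minimal sketch in Python might look like the following. This is only an illustration, not a full clone tool: the source and backup paths are hypothetical placeholders, and it mirrors file contents rather than making a bootable copy.

    ```python
    import os
    import shutil

    def mirror(src, dst):
        """Copy files from src into dst, skipping files that are already
        up to date (same size and modification time)."""
        copied = []
        for root, dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target_dir = os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                d = os.path.join(target_dir, name)
                st = os.stat(s)
                # only copy if the file is missing or changed since the last run
                if not os.path.exists(d):
                    changed = True
                else:
                    dt = os.stat(d)
                    changed = (dt.st_size != st.st_size
                               or int(dt.st_mtime) != int(st.st_mtime))
                if changed:
                    shutil.copy2(s, d)  # copy2 preserves timestamps
                    copied.append(d)
        return copied

    # hypothetical paths for the two-drive setup described above
    # mirror("/Volumes/MainDrive", "/Volumes/BackupDrive")
    ```

    Run it on whatever schedule suits you (cron, launchd), then detach the backup drive and store it.
    
    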

  • File Server For Both Mac OSX and Windows?

    Hello All,
    With HP discontinuing their Home Server line, I've been browsing around for quite some time for a good box to use as a file server. My Windows Home Server is about to die, I feel, and most PC-based servers seem just as pricey as acquiring a Mac Pro Server with two 1TB hard drives running OS X Lion Server.
    However, before I take that plunge, I was wondering if anyone has had any success, or stories they can share, using Lion Server with Windows 7 PCs.
    I use a Mac, but everyone else here uses Windows. On the home server box we basically store all of our digital photographs, personal files, and such as a backup. Granted, Apple Time Capsules and Mac minis can be used the same way, but you lose the redundant hard drives and the ability to keep upgrading if you run out of space.
    So I'm hoping it's positive and pretty much flawless, and that the Mac Pro with Lion Server can just save and serve files galore without slowdowns or problems. In my past Mac experience that's usually not the case. I just want to replace that box with a new one that can store files off of the individual machines, with a backup here and there as needed.
    The other plus, if I take this plunge, is that I'd have a Mac Pro at home for greater functionality than my MacBook Pro! So yeah, a little perk for me...
    Thanks in advance for any news you can offer. I know I'm probably asking something a ton of others have already asked; sorry for the duplication if that's the case.

    Mac OS X Server can do all that.
    You can keep all the user accounts on the Server, or you can just use it for file sharing. It serves both Windows (SMB) and Apple (AFP) file sharing without issue. Disk Utility can create mirrored RAIDs right out of the box; the expensive RAID card is only needed for RAID 5.
    I run a Server at home like a School Server with ALL User files on the Server (you log on at any Mac and your files appear, because they are on the Server). User files are on a pair of Mirrored RAID drives, and I use Time Machine to automatically back up all the User Files on the Server once an hour. Gigabit Ethernet Switches provide "Hard Drive-like" file access.

  • Extreme as File Server vs. Mac Mini G4

    Hello all,
    I have an iMac 24" 2008 and a Macbook White 2010 which I use for work both at home and away. I have all my work files and databases on an external hard disk plugged into the Airport Extreme as a file server. I can also access this when away from home using Macbook and MobileMe. I also have an Airport Express to extend the network at home. Currently reaching 8.4MB speeds on a 10mb connection over wireless.
    Recently the iMac has been taking its time to connect to the HD, to such an extent that I get frustrated and pull the HD from the Airport and plug it directly into the iMac just so I can work. The MacBook doesn't seem to have a problem connecting to it.
    I also have an old Mac Mini G4 which I tried to use as a file server before but found it slow also.
    Given the choice, what would you opt for? Mac Mini again as server or Airport Extreme with HD? The HD I use is a portable Maxtor 160Gb but I also have a Lacie Starck 1tb that I currently have plugged into the iMac as a Time Machine.
    I just want a simple file server option with fast access!!!
    Lewis

    RAID is not backup.  Software RAID and hardware RAID are the same thing operation-wise.  The only difference is that your Mac Mini does the managing and controlling of the RAID array, as opposed to a dedicated RAID controller with the hardware option.  With 2 drives in your server, you usually have 2 RAID choices -- Stripe (RAID 0) or Mirror (RAID 1).  If you set up your Mini with software RAID 1, then whenever your first drive is written with data, the second drive is mirrored with the same data.  In the event of a boot drive failure, you can still boot from the second drive.  You can achieve something similar with Carbon Copy Cloner, but it is not as seamless as RAID 1.  RAID 0 stripe mode essentially doubles your hard drive throughput, and because the Sandy Bridge Intel Core i series chip is a powerful and efficient CPU, there won't be the latency lag associated with older software RAID setups.
    Dual Core vs Quad Core really depends on the software used.  If you use HandBrake a lot, which takes advantage of the Quad Core's multiple cores, then the Quad will be faster than the Dual Core.  However, some Mac software is not multi-core aware.  In that case, a fast Dual Core with Turbo Boost engaged will be FASTER than a Quad at a slower clock speed in single-process applications; but the newer Core i5 can run up to 4 threads, simulating 4 virtual cores, so really it is application- and usage-dependent.
    I recently purchased a Mac Mini Core i5 2.3GHz to deal with more HD video work and upgraded it to take a second SSD drive.  It is way cheaper than the Apple BTO option, plus I get to choose which fast SSD I want.  With an SSD installed, the Mini just flies.  The Mini did not disappoint me with iMovie 11 editing and finalizing.  For cost-to-performance and power usage, the Mini with the Core i5/i7 option is a viable path.

  • Run HTTP File Server on Mac OSX Lion Server?

    Hello!
    I have a Mac Mini running Mac OS X Lion Server.
    I would like to be able to host a Web Server that runs within the built in Apache server that allows the transfer of files from Client to Mac Mini HD and vice versa using authentication from Open Directory.
    My inspiration is HTTP File Server, or HFS (no, not the Disk Format). This, however, is a Windows Program. It allows a HTTP platform to upload and download files from the HD of the machine it is running from. It comes as a nifty .exe that has everything you need. I'd like something similar but to run on the Mac Server.
    Note, the upload and download cannot be done over FTP. It must be using HTTP like HFS does.
    At the very least, I'd like a HTTP (Port 80) Web Server that runs within Apache that allows upload and download to the HD.
    And at best, authentication using the built in Open Directory credentials. And to make even more secure, HTTPS or SSL.
    So it's like iCloud or iWork.com but using my own disk space and credentials.
    Predicted End Product:
    User visits https://192.168.x.x or https://MyDNSName.dns.com
    Greeted with Home Screen (.html or .php)
    Clicks login
    Has a nice login window to use (probably have to be .php to keep simple)
    Logs in using same credentials as they would to logon to Mac Mini locally
    Greeted with view of their files in their Directory, e.g.: /Library/Server/Web/HFS/User1 or /Library/Server/Web/HFS/User2, etc
    Can choose to download present files
    Or upload. Click Upload
    Upload window appears. Selects file on Client machine.
    Over HTTP, file is uploaded to Host Machine to /Library/Server/Web/HFS/User1
    What I'd like to know is what is the easiest way of going about this? Some kind of CMS like Wordpress or software like HFS?
    Many thanks,
    Clark

    I'm a little confused here... what's the requirement for HTTP based transfers vs. traditional file sharing protocols such as AFP?
    Secondly, how does 'HFS' differ from any other HTTP form-based uploader? Any web CGI or scripting system such as PHP or Perl could easily present an upload form.
    Have you considered WebDAV, which is designed as a file transfer protocol over HTTP?
    It integrates at the Finder level, meaning you can copy files by simply dragging and dropping file (and folder) icons on the desktop, just like on a local drive. It's part of the standard Mac OS X Apache installation, too.
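
    Under the hood, a WebDAV upload is essentially an HTTP PUT (and a download a GET). To make that concrete, here is a deliberately stripped-down sketch using only Python's standard library. It is not real WebDAV and has no authentication; the in-memory store, paths, and port are all placeholders, just to show the PUT/GET core the reply is describing.

    ```python
    import http.client
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STORE = {}  # request path -> bytes; a real server would write to disk

    class PutHandler(BaseHTTPRequestHandler):
        def do_PUT(self):
            # read the uploaded body and store it under the request path
            length = int(self.headers.get("Content-Length", 0))
            STORE[self.path] = self.rfile.read(length)
            self.send_response(201)  # 201 Created, as WebDAV servers do
            self.end_headers()

        def do_GET(self):
            data = STORE.get(self.path)
            if data is None:
                self.send_response(404)
                self.end_headers()
                return
            self.send_response(200)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

        def log_message(self, *args):  # keep the demo quiet
            pass

    def upload(host, port, path, data):
        """PUT data to the server; returns the HTTP status code."""
        conn = http.client.HTTPConnection(host, port)
        conn.request("PUT", path, body=data)
        resp = conn.getresponse()
        resp.read()
        conn.close()
        return resp.status

    def download(host, port, path):
        """GET a file back; returns (status, bytes)."""
        conn = http.client.HTTPConnection(host, port)
        conn.request("GET", path)
        resp = conn.getresponse()
        data = resp.read()
        conn.close()
        return resp.status, data
    ```

    For the real thing, enabling Apache's mod_dav on the Lion Server gives you this protocol with authentication and disk-backed storage, without writing any code.
    
    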

  • File Server - File size\type search and save results to file

    I already have a VBScript to do what I want on our file server, but it is very inefficient and slow.  I was thinking that a PowerShell script may be more suitable now, but I don't know anything about scripting in PS.  So far the VB code that I
    have works; I am not the one who wrote it, but I can manipulate it to do what I want.  The only problem is, when I scan the shared network locations it stops on some files that are password-protected, and I don't know how to get around that.  If
    someone knows of a PS script to go through the file system, get all files of a certain type or size (right now, preferably size), and save the file name, size, path, owner, and dates created/modified, please point me to it and I can work with that.  If
    not, could I get some help with the current script to somehow get around the password-protected files?  They are in users' HOME directories, so I can't do anything with them.  Here is my code:
    'Script for scanning file folders for certain types of files and those of a certain size of larger'
    'Note: Script must be placed locally on whichever machine the script is running on'
    '***********VARIABLES FOR USE IN SCRIPT***********'
    'objStartFolder - notes the location of the folder you wish to begin your scan in'
    objStartFolder = "\\FileServer\DriveLetter\SharedFolder"
    'excelFileName - notes the location where you want the output spreadsheet to be saved to'
    excelFileName = "c:\temp\Results_Shared.xls"
    '**********END OF VARIABLES**********'
    Set objFSO = CreateObject("Scripting.FileSystemObject")
    'beginning row and column for actual data (not headers)'
    excelRow = 3
    excelCol = 1
    'Create Excel Spreadsheet'
    Set objExcel = CreateObject("Excel.Application")
    Set objWorkbook = objExcel.Workbooks.Add()
    CreateExcelHeaders()
    'Loop to go through original folder'
    Set objFolder = objFSO.GetFolder(objStartFolder)
    Set colFiles = objFolder.Files
    For Each objFile in colFiles
    Call Output(excelRow) 'If a subfolder is met, output procedure recursively called'
    Next
    ShowSubfolders objFSO.GetFolder(objStartFolder)
    'Autofit the spreadsheet columns'
    ExcelAutofit()
    'Save Spreadsheet'
    objWorkbook.SaveAs(excelFileName)
    objExcel.Quit
    '*****END OF MAIN SCRIPT*****'
    '*****BEGIN PROCEDURES*****'
    Sub ShowSubFolders(Folder)
    'Loop to go through each subfolder'
    For Each Subfolder in Folder.SubFolders
    Set objFolder = objFSO.GetFolder(Subfolder.Path)
    Set colFiles = objFolder.Files
    For Each objFile in colFiles
    Call Output(excelRow)
    Next
    ShowSubFolders Subfolder
    Next
    End Sub
    Sub Output(excelRow)
    'convert filesize to readable format (MB)'
    fileSize = objFile.Size/1048576
    fileSize = FormatNumber(fileSize, 2)
    'list of file extensions currently automatically included in spreadsheet report:'
    '.wav, .mp3, .mpeg, .avi, .aac, .m4a, .m4p, .mov, .qt, .qtm'
    If fileSize > 100 Then
    'additional type filters that can be appended to the condition above:'
    'OR objFile.Type="Movie Clip" OR objFile.Type="MP3 Format Sound" _'
    'OR objFile.Type="MOV File" OR objFile.Type="M4P File" _'
    'OR objFile.Type="M4A File" OR objFile.Type="Video Clip" _'
    'OR objFile.Type="AAC File" OR objFile.Type="Wave Sound" _'
    'OR objFile.Type="QT File" OR objFile.Type="QTM File"'
    'export data to Excel'
    objExcel.Visible = True
    objExcel.Cells(excelRow,1).Value = objFile.Name
    objExcel.Cells(excelRow,2).Value = objFile.Type
    objExcel.Cells(excelRow,3).Value = fileSize & " MB"
    objExcel.Cells(excelRow,4).Value = FindOwner(objFile.Path)
    objExcel.Cells(excelRow,5).Value = objFile.Path
    objExcel.Cells(excelRow,6).Value = objFile.DateCreated
    objExcel.Cells(excelRow,7).Value = objFile.DateLastModified 'column header is "Date Modified"'
    excelRow = excelRow + 1 'Used to move active cell for data input'
    end if
    End Sub
    'Procedure used to find the owner of a file'
    Function FindOwner(FName)
    On Error Resume Next
    strComputer = "."
    Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
    Set colItems = objWMIService.ExecQuery _
    ("ASSOCIATORS OF {Win32_LogicalFileSecuritySetting='" & FName & "'}" _
    & " WHERE AssocClass=Win32_LogicalFileOwner ResultRole=Owner")
    For Each objItem in colItems
    FindOwner = objItem.AccountName
    Next
    End Function
    Sub CreateExcelHeaders
    'create headers for spreadsheet'
    Set objRange = objExcel.Range("A1","G1")
    objRange.Font.Bold = true
    objExcel.Cells(1, 1).Value = "File Name"
    objExcel.Cells(1, 2).Value = "File Type"
    objExcel.Cells(1, 3).Value = "Size"
    objExcel.Cells(1, 4).Value = "Owner"
    objExcel.Cells(1, 5).Value = "Path"
    objExcel.Cells(1, 6).Value = "Date Created"
    objExcel.Cells(1, 7).Value = "Date Modified"
    End Sub
    Sub ExcelAutofit
    'autofit columns A through G'
    Dim col
    For Each col In Array("A1","B1","C1","D1","E1","F1","G1")
    objExcel.Range(col).EntireColumn.Autofit()
    Next
    End Sub
    David Hood
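
    The "stops on password-protected files" symptom is usually just an unhandled access error during the walk. For comparison, here is a hedged sketch of the same scan in Python, which simply skips unreadable files and writes the results to CSV. Owner lookup is deliberately omitted (it needs Windows-specific APIs, as the WMI call in the VBScript shows), and the column names mirror the spreadsheet headers above.

    ```python
    import csv
    import os
    from datetime import datetime

    def scan(root, min_mb=100):
        """Walk root and return rows for files larger than min_mb megabytes.
        Unreadable files (e.g. no permission) are skipped instead of
        stopping the scan."""
        rows = []
        threshold = min_mb * 1024 * 1024
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # no access: skip and keep scanning
                if st.st_size > threshold:
                    rows.append({
                        "File Name": name,
                        "Size": "%.2f MB" % (st.st_size / 1048576),
                        "Path": path,
                        "Date Modified":
                            datetime.fromtimestamp(st.st_mtime).isoformat(),
                    })
        return rows

    def save_csv(rows, out_path):
        """Write the rows to a CSV that Excel can open directly."""
        with open(out_path, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["File Name", "Size", "Path", "Date Modified"])
            writer.writeheader()
            writer.writerows(rows)

    # hypothetical usage against the share named in the VBScript:
    # save_csv(scan(r"\\FileServer\DriveLetter\SharedFolder"), r"c:\temp\Results_Shared.csv")
    ```

    Writing CSV instead of driving Excel through COM also avoids the per-cell automation overhead that makes the original script slow.
    
    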

    Accessing Excel through automation is very slow no matter what tool you use, and scanning a disk is slow for all tools.
    Since Vista, all systems have a search service that catalogues the major file attributes: size, extension, name, and others. A search of a 1+ TB volume can return in less than a second if you query the search service.
    You can easily batch the result into Excel by writing to a CSV and opening it in Excel. Use a template to apply formats.
    Example.  See how fast this returns results.
    #The following will find all log files in a system that are larger than 10Mb
    $query="SELECT System.ItemName, system.ItemPathDisplay, System.ItemTypeText,System.Size,System.ItemType FROM SystemIndex where system.itemtype='.log' AND system.size > $(10Mb)"
    $conn=New-Object -ComObject adodb.connection
    $conn.open('Provider=Search.CollatorDSO;Extended Properties="Application=Windows";')
    $rs=New-Object -ComObject adodb.recordset
    $rs.open($query, $conn)
    do{
    $p=[ordered]@{
    Name = $rs.Fields.Item('System.ItemName').Value
    Type = $rs.Fields.Item('System.ItemType').Value
    Size = $rs.Fields.Item('System.Size').Value
    }
    New-Object PsObject -Property $p
    $rs.MoveNext()
    }Until($rs.EOF)
    ¯\_(ツ)_/¯
