File server replacement

zoranstojanovic wrote:
 This new server will function as the file, print management, WSUS, and Hyper-V server.
I hope you mean file, print, and WSUS will be in virtual machines on a Hyper-V host, not that the host will have all those roles.  The Hyper-V server should be doing only that.
Personally, I would use two VMs, one for file and print, one for WSUS.  A Server 2012 R2 Standard license grants you two VMs, so you wouldn't need any extra licenses.
Also, I would think DFS may help.  Set up a namespace and just change the target folders.
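If you go the DFS route, here is a rough sketch with the DFSN PowerShell cmdlets on 2012 R2 (the namespace path and server names are placeholders, not from the original post):
# Create a domain-based namespace with a folder pointing at the current file server.
New-DfsnRoot -Path "\\corp.local\files" -TargetPath "\\OLDFS\dfsroot" -Type DomainV2
New-DfsnFolder -Path "\\corp.local\files\shared" -TargetPath "\\OLDFS\shared"
# At cutover: add the new server as a target, then drop the old one.
New-DfsnFolderTarget -Path "\\corp.local\files\shared" -TargetPath "\\NEWFS\shared"
Remove-DfsnFolderTarget -Path "\\corp.local\files\shared" -TargetPath "\\OLDFS\shared"
Clients keep using \\corp.local\files\shared throughout, which is the whole point of the namespace.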

I plan on setting up a new file server to replace the current one in production. This new server will function as the file, print management, WSUS, and Hyper-V server. To make my life easier, I'm thinking I'll keep the server name and IP the same. I found a good article on Spiceworks, http://community.spiceworks.com/how_to/75097-replace-an-old-file-server-with-a-new-file-server-using... but I have a few questions, as my scenario is a little different. I used the method in that link years ago when I migrated from the last server. This time all my shares are on my MSA2212fc, so I'm thinking that as long as I import the shares registry key from the old server to the new one and reassign the drives to the new server, everything should just work... right? Also, does anyone think the server has too many roles? Any feedback would be greatly appreciated.
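For reference, the shares-key trick usually looks something like this (a sketch; file paths are examples, run from an elevated prompt, and the drive letters must match on the new server since the key stores them):
:: On the old server: export the share definitions (share permissions included).
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\shares.reg
:: On the new server: import, then restart the Server service to publish the shares.
reg import C:\shares.reg
net stop lanmanserver /y && net start lanmanserver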

Similar Messages

  • Time Capsule as a File Server - to replace existing Airport Extreme

    Hi
    I have a 4th Gen Airport Extreme connected to my cable modem.  I was thinking of getting a 2013 Time Capsule to replace my existing 4th Gen 2009 Airport Extreme, to use for Time Machine backups and also for storing other files which I can access from anywhere.
    Is the Time Capsule just a great wireless router and backup device, or is it any good as a basic file server / NAS drive?
    Thanks in advance.
    Matt

    Is the Time Capsule just a great wireless router and backup device
    Yes, it is that.
    is it any good as a basic file server / NAS drive?
    It is not a good NAS.. primary issue.. no internal backup. No way for the TC to back itself up.
    No way for Time Machine to backup a network drive.
    It still lacks any proper high speed connections for a second hard drive.. usb2 doesn't cut it.
    The tools are poor.. actually almost non-existent. No partitioning.. so mixing backups and data is not a great idea.
    See Q3 Pondini. http://pondini.org/TM/Time_Capsule.html
    Nothing has changed with this new one.
    There are no media servers, no FTP, HTTP, or any other access method for files, such as is common to even the basic NAS boxes of the world.
    To say nothing of rsync and all those kinds of things.
    It is still slow to spin up and quick to spin down.. fine for data.. not so great for streaming media.
    Remote access can be difficult if you cannot get the TC as the main router in the network.

  • Replace a 2003 (not R2) File Server with a 2012R2 file server and preferably keep the same machine name and IP when finished

    I want to replace a 2003 (not R2) file server with a 2012R2 file server and preferably keep the same machine name and IP when finished.  For the moment I just need some "high level" guidance; the little details can be worked out once I know which direction I will go.  I was considering that DFS might be a way to help get through the process, although when finished the 2012R2 file server will be by itself with no other file server planned at this time.  DFS can stay installed for possible future purposes, but clearly I wouldn't need DFS Replication with only one machine.
    Here are a few details of the environment:
    1. DCs are 2012R2, but it is still at the 2003 DFL because the old 2003 DCs are still present.  They will likely be gone and the DFL raised before I start on the file server project.
    2. Nearly all machines in the facility have a shortcut on the "All Users" desktop that points to the existing old file server.  Editing or replacing that shortcut would be a major pain, hence why I want to keep the same machine name at least, and maybe the same IP if not too much trouble.  This way the existing shortcut would continue to work with the new 2012R2 file server.  The UNC path represented in that shortcut is also configured into one or more of our major business applications, further emphasizing the need to keep the UNC path the same throughout the process.
    3. The facility runs 24/7/365 but is "light" on weekends.  The political environment is such that there is little to no tolerance for any downtime at all.
    4. Would DFS (based on the 2012R2 machine) be a good tool to get where I need to go?
    Thanks for any suggestions.
    Phillip Windell

    Hi Sharon,
    I've done some more reading and have a few new ideas to run past you....
    Yes, regular DFS wouldn't help, and the namespace would still be different from how it was with just the old server. However, I was thinking DFS Replication could serve the purpose of RoboCopy and keep the two locations "in sync" until I was ready to flip over to the new server.  DFS Replication can exist independently of a DFS namespace, so a namespace is not even needed. It needs a minimum of 2003R2 for the "later & better" DFS Replication, but I believe 2003 can do an "in place" upgrade to 2003R2, so I would upgrade the old server to 2003R2 first.  As long as DFS Replication on 2012R2 and 2003R2 will properly interact, I think that will work.
    Thanks for the reg info on the Shares.  I'm debating whether editing that reg file would really be much better than manually creating the shares on the new server while DFS Replication is doing its job.  I'll probably export that key as a safety move whether I use it or not.
    Once the DFS Rep is fully in sync and the Shares are in place on the new server, I figure I would then:
    1. Remove the DFS Replication Object (optionally remove DFS Services completely)
    2. Rename the old File Server to something else and set it to DHCP
    3. Rename the new File Server to the name I want to use and give it the IP the old server had (see the command sketch below).
    How does that sound?
    Phillip Windell
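    For what it's worth, steps 2 and 3 map onto commands roughly like these (a sketch; the server names, interface name, and IP values are placeholders, not from the post):
    :: Step 2: rename the old server out of the way and put it on DHCP.
    netdom renamecomputer OLDFS /newname:OLDFS-RETIRED /reboot
    netsh interface ipv4 set address name="Ethernet" source=dhcp
    :: Step 3: give the new server the old name and the old static IP.
    netdom renamecomputer NEWFS-TEMP /newname:OLDFS /reboot
    netsh interface ipv4 set address name="Ethernet" static 192.168.1.10 255.255.255.0 192.168.1.1
    Each netsh line runs locally on the server in question, and netdom will usually need domain credentials (/userd and /passwordd).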

  • Can Oracle Files replace a file server?

    We are facing a backup problem and are surveying new solutions to replace our file server.
    Our file server now holds ~1,000 million files / 30 TB,
    and ~6 million files are uploaded daily.
    Can anybody tell whether OCS can handle this kind of load?

    I guess you could use Content Services (formerly Oracle Files) for that, but in version 10g there is no support for NFS or SMB; access is now limited to WebDAV and FTP.
    You might look for an archiving product instead, like DiskXtender from EMC: http://www.legato.com/products/diskxtender/index.htm, but I think Oracle themselves store 18 TB in Oracle Files.

  • Slow application startup from a Win 2012 file server using Win 7 workstations

    I hope someone can help me figure out why applications are slow to load. I replaced an 8-year-old server running Win 2003 with a Dell T620 running Win 2012 R2. The new hardware is something like 20 times faster, yet applications load more slowly than from the old server. The office has 7 Win 7 x64 machines, and all our applications run from the Win 2012 file server. The server is set up as follows: it runs two Hyper-V machines; one runs AD, file server, DHCP, DNS, and print server, and the other runs Domino server and Remote Desktop.
    The machine is more than capable, but performance says otherwise, so I started reading after I ran out of ideas. I looked everywhere, but the issue is with the server. After trying everything else, I did a simple test: if I go to the application folder and click on it, apps load instantly; if I instead type a UNC path, loading goes from instant to a bit over 3 seconds, the same as on the workstations. It is the same speed if I use the IP address.
    The network cards are Intel.
    I would really appreciate it if somebody has suggestions I could try.
    Thank you

    Hi,
    Do you mean that applications in the application folder start slowly when you access the folder on the Windows 2012 R2 file server from Windows 7 workstations using a UNC path or IP address? Do all the files in the application folder have the same issue? Please create a shared folder on the file server, then access it from the Windows 7 workstations to check whether the issue still exists.
    You could disable SMBv3 on Server 2012 to check whether the issue is related to the SMB protocol.
    How to enable and disable SMBv1, SMBv2, and SMBv3 in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012
    http://support.microsoft.com/kb/2696547/en-us
    Warning: We do not recommend that you disable SMBv2 or SMBv3. Disable SMBv2 or SMBv3 only as a temporary troubleshooting measure. Do not leave SMBv2 or SMBv3 disabled.
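    The PowerShell equivalent from that KB, as far as I recall, is a single switch (on Server 2012 it governs SMBv2 and SMBv3 together):
    # Temporarily disable SMBv2/v3 on the file server, for testing only.
    Set-SmbServerConfiguration -EnableSMB2Protocol $false
    # Re-enable once the test is done:
    Set-SmbServerConfiguration -EnableSMB2Protocol $true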
    Best Regards,
    Mandy 

  • Create a new web application, how shall I update the server.xml file

    Hi,
    I will create a new web application named newApp. Then I create a file structure as follows:
    - <server-root>/newApp
    - <server-root>/newApp/WEB-INF
    - <server-root>/newApp/WEB-INF/classes
    Then I must tell the server that I have created a new web application, which means updating my server.xml file. How shall I do this, and where in the file shall I type in the new information?
    I use Windows XP Pro and Tomcat 4.1.27.
    My server.xml file looks like below:
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN" debug="0">
    <!-- Comment these entries out to disable JMX MBeans support -->
    <!-- You may also configure custom components (e.g. Valves/Realms) by
    including your own mbean-descriptor file(s), and setting the
    "descriptors" attribute to point to a ';' seperated list of paths
    (in the ClassLoader sense) of files to add to the default list.
    e.g. descriptors="/com/myfirm/mypackage/mbean-descriptor.xml"
    -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
    debug="0"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
    debug="0"/>
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved">
    </Resource>
    <ResourceParams name="UserDatabase">
    <parameter>
    <name>factory</name>
    <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
    </parameter>
    <parameter>
    <name>pathname</name>
    <value>conf/tomcat-users.xml</value>
    </parameter>
    </ResourceParams>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Tomcat-Standalone">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 4.0 documentation bundle for more detailed
    instructions):
    * Download and install JSSE 1.0.2 or later, and put the JAR files
    into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8080 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8080" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="100" debug="0" connectionTimeout="20000"
    useURIValidationHack="false" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to -1 -->
    <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" scheme="https" secure="true"
    useURIValidationHack="false" disableUploadTimeout="true">
    <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
    clientAuth="false" protocol="TLS" />
    </Connector>
    -->
    <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8009" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" connectionTimeout="0"
    useURIValidationHack="false"
    protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <!--
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
    port="8009" minProcessors="5" maxProcessors="75"
    acceptCount="10" debug="0"/>
    -->
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8082" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" connectionTimeout="20000"
    proxyPort="80" useURIValidationHack="false"
    disableUploadTimeout="true" />
    -->
    <!-- Define a non-SSL legacy HTTP/1.1 Test Connector on port 8083 -->
    <!--
    <Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8083" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- Define a non-SSL HTTP/1.0 Test Connector on port 8084 -->
    <!--
    <Connector className="org.apache.catalina.connector.http10.HttpConnector"
    port="8084" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
    <Engine name="Standalone" defaultHost="localhost" debug="0" jmvRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Standalone" defaultHost="localhost" debug="0">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.3
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    debug="0" resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host -->
    <Host name="localhost" debug="0" appBase="webapps"
    unpackWARs="true" autoDeploy="true">
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn"
    debug="0"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Logger shared by all Contexts related to this virtual host. By
    default (when using FileLogger), log files are created in the "logs"
    directory relative to $CATALINA_HOME. If you wish, you can specify
    a different directory with the "directory" attribute. Specify either a
    relative (to $CATALINA_HOME) or absolute path to the desired
    directory.-->
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
    timestamp="true"/>
    <!-- Define properties for each web application. This is only needed
    if you want to set non-default properties, or have web application
    document roots in places other than the virtual host's appBase
    directory. -->
         <DefaultContext reloadable="true"/>
    <!-- Tomcat Root Context -->
    <Context path="" docBase="ROOT" debug="0"/>
    <!-- Tomcat Examples Context -->
    <Context path="/examples" docBase="examples" debug="0"
    reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_examples_log." suffix=".txt"
    timestamp="true"/>
    <Ejb name="ejb/EmplRecord" type="Entity"
    home="com.wombat.empl.EmployeeRecordHome"
    remote="com.wombat.empl.EmployeeRecord"/>
    <!-- If you wanted the examples app to be able to edit the
    user database, you would uncomment the following entry.
    Of course, you would want to enable security on the
    application as well, so this is not done by default!
    The database object could be accessed like this:
    Context initCtx = new InitialContext();
    Context envCtx = (Context) initCtx.lookup("java:comp/env");
    UserDatabase database =
    (UserDatabase) envCtx.lookup("userDatabase");
    -->
    <!--
    <ResourceLink name="userDatabase" global="UserDatabase"
    type="org.apache.catalina.UserDatabase"/>
    -->
    <!-- PersistentManager: Uncomment the section below to test Persistent
    Sessions.
    saveOnRestart: If true, all active sessions will be saved
    to the Store when Catalina is shutdown, regardless of
    other settings. All Sessions found in the Store will be
    loaded on startup. Sessions past their expiration are
    ignored in both cases.
    maxActiveSessions: If 0 or greater, having too many active
    sessions will result in some being swapped out. minIdleSwap
    limits this. -1 or 0 means unlimited sessions are allowed.
    If it is not possible to swap sessions new sessions will
    be rejected.
    This avoids thrashing when the site is highly active.
    minIdleSwap: Sessions must be idle for at least this long
    (in seconds) before they will be swapped out due to
    activity.
    0 means sessions will almost always be swapped out after
    use - this will be noticeably slow for your users.
    maxIdleSwap: Sessions will be swapped out if idle for this
    long (in seconds). If minIdleSwap is higher, then it will
    override this. This isn't exact: it is checked periodically.
    -1 means sessions won't be swapped out for this reason,
    although they may be swapped out for maxActiveSessions.
    If set to >= 0, guarantees that all sessions found in the
    Store will be loaded on startup.
    maxIdleBackup: Sessions will be backed up (saved to the Store,
    but left in active memory) if idle for this long (in seconds),
    and all sessions found in the Store will be loaded on startup.
    If set to -1 sessions will not be backed up, 0 means they
    should be backed up shortly after being used.
    To clear sessions from the Store, set maxActiveSessions, maxIdleSwap,
    and minIdleBackup all to -1, saveOnRestart to false, then restart
    Catalina.
    -->
    <!--
    <Manager className="org.apache.catalina.session.PersistentManager"
    debug="0"
    saveOnRestart="true"
    maxActiveSessions="-1"
    minIdleSwap="-1"
    maxIdleSwap="-1"
    maxIdleBackup="-1">
    <Store className="org.apache.catalina.session.FileStore"/>
    </Manager>
    -->
    <Environment name="maxExemptions" type="java.lang.Integer"
    value="15"/>
    <Parameter name="context.param.name" value="context.param.value"
    override="false"/>
    <Resource name="jdbc/EmployeeAppDb" auth="SERVLET"
    type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/EmployeeAppDb">
    <parameter><name>username</name><value>sa</value></parameter>
    <parameter><name>password</name><value></value></parameter>
    <parameter><name>driverClassName</name>
    <value>org.hsql.jdbcDriver</value></parameter>
    <parameter><name>url</name>
    <value>jdbc:HypersonicSQL:database</value></parameter>
    </ResourceParams>
    <Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"/>
    <ResourceParams name="mail/Session">
    <parameter>
    <name>mail.smtp.host</name>
    <value>localhost</value>
    </parameter>
    </ResourceParams>
    <ResourceLink name="linkToGlobalResource"
    global="simpleValue"
    type="java.lang.Integer"/>
    </Context>
    </Host>
    </Engine>
    </Service>
    </Server>

    To use servlets you do indeed have to update your web.xml... though I'm not sure this is relevant to your case anyway.
    You have to add a <servlet> element to this file.
    Something like this:
    <servlet>
    <servlet-name>blabla</servlet-name>
    <servlet-class>blablapackage.Blablaclass</servlet-class>
    <init-param>...</init-param>
    </servlet>
    Now this may not solve your problem. Make sure you refer to your servlets using their fully qualified names. By the way, just to be sure, what is your definition of "servlet"? (I mean: any Java class, or only javax.servlet.Servlet?)
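    On the actual server.xml question: in Tomcat 4.x you would normally declare the new application with a Context element inside the <Host name="localhost"> block shown above, next to the existing ROOT and /examples entries. A minimal sketch (assuming newApp ends up under the Host's appBase, webapps; if the directory lives elsewhere, make docBase an absolute path):
    <!-- Declare the new web application; docBase is resolved against the Host appBase -->
    <Context path="/newApp" docBase="newApp" debug="0" reloadable="true"/>
    Restart Tomcat afterwards so the new Context is picked up.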

  • Backup solutions w/RAID or redundancy (NAS, RAID, DIY File server)

    Hi all, I need a place to bounce my ideas off of. Here goes:
    I have been doing a lot of reading, since I was considering adding an NAS solution for my home network. My data consists mainly of videos (TVs and movies) and pictures (many many years worth).
    Anyways, out-of-the-box solutions seemed a bit too pricey and the RAID not that spectacular unless you're willing to spend, so I began looking at building my own file server with a hardware/software RAID solution. That was a bit better bang for the buck, but I still had one nagging concern.
    I've played around with RAID before, and I realized that mirroring (the only RAID option I was really considering) relies on the RAID controller. I couldn't just take a hard drive, remove it physically from the array, and have my information accessible when plugging it into another computer.
    What happens in a few years if your RAID controller dies and you can't find the exact same one? Your array will always be dependent on that controller and I really don't like that feeling. I'd rather have the option of taking a drive, plugging it in another computer, rather than needing to move the whole array (RAID, NAS, DIY file server) around. That means quicker access to my information or the ability to take it with me anywhere I go, on a moment's notice.
    The least costly solution I have come up with, for data that doesn't change all that much, is to have two huge drives (1 TB) on a computer, either one or both connected via eSATA. Just remember to ghost/copy the main drive once in a while, and keep the 'backup' drive detached (preferably in a fire-proof safe), backing it up on a regular basis.
    Sorry for the long post, but how does that sound, for a cheap, reliable backup solution, for data that doesn't get updated too frequently and for ease of access and use?

    Hi BGBG;
    For what you are attempting to do, RAID is not the best solution. The reason I say this is because RAID 1 is only capable of protection from disk failure. It is not a valid backup solution.
    I think that your last solution of using eSATA and a copy is the best. My only addition to your proposal would be a third disk. That way, when you move the backup disk into storage, you could replace it with the third one. In this way you could use SuperDuper to periodically back up between two disks.
    Allan

  • File Server For Both Mac OSX and Windows?

    Hello All,
    With HP discontinuing their HP server line, I've been browsing around for quite some time for a good box to use as a file server.  I feel my Windows Home Server is about to die, and most PC-based servers seem just as pricey as acquiring a Mac Pro Server with two 1TB hard drives and using OS X Lion Server.
    However, before I take that plunge I was really wondering if anyone has had any success or stories they can share about using a Lion Server with Windows 7 PCs.
    I use a Mac, but everyone else here uses Windows.  On the home server box we basically store all of our digital photographs, personal files, and such as a backup.  Granted, the Apple Time Machines and Mac Minis can be used for the same, but you lose the redundant hard drives and the stability of continuing to upgrade if you run out of space.
    So I'm hoping it's positive and pretty much flawless, and that the Mac Pro with Lion Server could just save and serve files galore without slowdown or problems.  With my past Mac experience that's usually not the case.  I just want to replace that box with a new box that can store files off the independent machines, with a here-and-there additional backup on it as needed.
    The other plus if I take this plunge is that I get a Mac Pro box for greater functionality at home over my MacBook Pro!   So yeah, a little perk for me...
    Thanks in advance for any news you can help or offer me...  I know I'm probably asking something that a ton of others may have already asked, and sorry for the duplication if that's the case.

    Mac OS X Server can do all that.
    You can keep all the user accounts on the Server, or you can just use it for file sharing. It does Windows (SMB) or Apple (AFP) file sharing without issue. Disk Utility can create mirrored RAIDs right out of the box; that expensive RAID card is only needed for RAID 5.
    I run a Server at home like a School Server with ALL User files on the Server (you log on at any Mac and your files appear, because they are on the Server). User files are on a pair of Mirrored RAID drives, and I use Time Machine to automatically back up all the User Files on the Server once an hour. Gigabit Ethernet Switches provide "Hard Drive-like" file access.

  • Impossible to access File server shares using a DNS alias

    I am currently testing migration scenarios to replace 2 standalone W2K3 file servers with 1 W2K8R2 failover cluster.
    Everything went very well; the shares are accessible as long as I use the new file server names, but...
    ... when I try to use the DNS aliases (of the old servers), it becomes a nightmare!
    First, each time I tried to connect to the shares using the alias name \\tempsf\ I got an error complaining about a duplicate name on the network.
    I solved this first issue by applying the DisableStrictNameChecking reg key (see below). The duplicate-name error disappeared and I was able to access the \\tempsf\ folder, but it was empty, even though 2 shares are displayed when accessing it through the real file server name. Moreover, I confirm that:
    ping tempsf resolves correctly to the real file server name and IP
    nslookup tempsf does the same
    net view \\realfileserver displays my 2 shares
    net view \\tempsf doesn't display anything :-(
    Then I tried almost everything currently documented:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters Add a new DWORD value called DisableStrictNameChecking and set to 1.
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters Add a new Multi-String value called OptionalNames. Enter one or more aliases, one per line.
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters Add a new DWORD value called DisableLoopBackCheck and set to 1.
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 Add a new Multi-String Value BackConnectionHostNames. Enter one or more aliases, one per line.
    And also the SETSPN:
    setspn -A host/<your_ALIAS_name> <ServerName>
    setspn -A host/<your_ALIAS_name.domain.com> <ServerName>
    But still nothing... The launch date is approaching and I am feeling desperate...
    Any idea? please... Thanks in advance,
    Max
    PS : Ahh yes, one more thing, we are using a third-party DNS here (Infoblox) but I don't think it could be the issue.
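    For reference, on a standalone (non-clustered) server those edits can be scripted roughly like this (a sketch; "tempsf" is the alias from the post, "REALFS" stands in for the real server name). Clustered file server names are scoped differently, as the askcore article in the reply below explains, so this alone may not be enough on the cluster:
    :: Let the server answer to names other than its own.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
    :: Register SPNs so Kerberos authentication works against the alias.
    setspn -A host/tempsf REALFS
    setspn -A host/tempsf.domain.com REALFS
    :: On 2008+, netdom can register an alternate computer name (and its SPNs) in one step:
    netdom computername REALFS /add:tempsf.domain.com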

    Hello,
    Here is what I did:
    Configure my new file server (new name, new IP)
    Shut down the old file server
    Map the mirrored SAN disk to the new file server
    Mount the storage and create the shares (previously exported from the old FS)
    Update the alias to point to the new FS (in Infoblox)
    Update the reg keys, setspn, and netdom as explained above
    Try access: shares are displayed fine when using the new FS name, but nothing appears when using the updated alias, despite DNS resolution being OK.
    I also found the following doc:
    http://blogs.technet.com/b/askcore/archive/2009/01/09/file-share-scoping-in-windows-server-2008-failover-clusters.aspx?PageIndex=2#comments which explains precisely how W2K8R2 file servers work, but without giving any solution to my issue.
    Your doc was interesting but didn't help :-(
    Anyone else? Thanks

  • Display image in Report from File Server

    Hi
    As the logo of the company changes every 3-4 months, I need a solution for this.
    My approach is to replace the logo on the file server
    and call the logo in the Oracle report... using Oracle Reports 6i...
    but I am not able to do so (which objects should be used: formula column, link file, OLE?)... any help from your side will be good.
    Regards
    Yram

    You may put the logo into the db as a BLOB column and retrieve it onto the report. One update in the db would update all calling reports.
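    A minimal sketch of that approach (table and column names are made up; in Reports 6i the query column's File Format property would be set to Image, if memory serves):
    -- One row holds the current logo; updating it updates every report that queries it.
    CREATE TABLE company_logo (id NUMBER PRIMARY KEY, logo BLOB);
    -- Query used by each report:
    SELECT logo FROM company_logo WHERE id = 1;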

  • File server migration with Offline files involved

    Hi,
    We are planning a file server migration in the coming weeks.
    This morning, our customer came with the good old "Oh, and I just thought about something else."
    Here's the scenario:
    -They are using 1 network drive
    -That network drive is made offline-available for all laptop users
    -Those users are spread out over several countries. No VPN connection
    -They have been working for months on their offline network drive, right in the middle of the woods, with no internet connection; it was already hard enough for them to find a power supply for their laptops...
    ...nevermind
    -The day they come back to the office, the file server the network drive points to will be offline.
    Now the million-dollar question: what happens with their "dirty" files?
    Yep, exactly: the ones they changed 6 months ago, that they'll have no clue about if you ask them, but certainly will the day I clear the damn cache.
    My first analysis:
    -The new file server will have another name; no alias or re-use of the old name is possible (the customer doesn't want it)
    -I can't tell those laptops "hey, for that offline cache, please use this new network drive"
    So:
    >> Those users have to identify manually the files they changed while offline, copy them locally to their machines, and work that way until they come back to the main office.
    >> When they finally show up, clear the cache, make the new network drive offline-available, and put back the files copied locally
    >> If no internet connection is available in the branch office, let them work locally; it's still better than this hybrid nonsense of a 6-month offline folder "solution". If an internet connection is mostly available remotely, propose some Citrix/View/RDS setup, which is, for me, a more professional-looking solution
    Does someone have another (better?) idea/solution?

    Hi,
    I suggest you ask users to connect their laptops to the internet, then run Offline Files synchronization against the old file server. After that, use Robocopy to copy the data from the old server to the new server. As the Offline Files cache cannot be recognized by the new file server, the data needs to be synchronized first.
    If the old server cannot be brought back online, as you mentioned, you might need to ask users to copy their changed files to the new file server.
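    A sketch of that Robocopy step (share names and the log path are placeholders; note /MIR deletes extra files on the target, so double-check the destination):
    robocopy \\OLDFS\users \\NEWFS\users /MIR /COPYALL /R:1 /W:1 /LOG:C:\robocopy.log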
    Regards, 
    Mandy

  • Ownership of folders in file server

    Hi Guys,
    I am facing a problem with my file server. It is running Windows Server 2008 R2 Standard SP1. There are 4 drives, and almost 99% of the folders were created by a 1st domain user account that has domain admin rights.
    Because the 2nd domain user account with domain admin rights was not able to open or change permissions on the shared folders, getting an "access is denied" error, I logged on with the 1st domain user account and transferred ownership to the local admin group (the local admin user account resides in this group), so that I can make any kind of change through the local admin account.
    Now I can open and set permissions on single folders through the local admin account, but I still can't select the 'replace all child object permissions from this object' option anywhere while assigning permissions; I am still getting access denied.
    And I can see the 1st domain user account with domain admin rights is still present in the ownership tab along with the local admin group. The ownership transfer completed without errors, so what would be the solution for this?
    Is there any way to remove unwanted owners from the 'security-advanced-owner-edit' tab and keep only the one owner we want?

    Hi,
    I'm a little confused about the process.
    Why not just log on as your 1st user and give the 2nd user (or the Domain Admins group) Full Control in the NTFS permissions instead? Transferring ownership is not a common step for this purpose.
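    If you do end up forcing your way in, a rough sketch with the built-in tools (run elevated; "D:\Data" and the group name are placeholders):
    :: Take ownership of the whole tree, then push Full Control down it.
    takeown /f D:\Data /r /d y
    icacls D:\Data /grant "CONTOSO\Domain Admins:(OI)(CI)F" /t /c
    :: Rough equivalent of "replace all child object permissions": make children re-inherit.
    icacls D:\Data\* /reset /t /c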

  • Getting new file server. Special considerations for Revit?

    Hi all,
    My office (a small architecture firm) is looking to purchase a new file server. Our current one is a Mac Pro 1,1 (2006). We're Mac-based and use VMware Fusion on most workstations for our Autodesk software; I use Boot Camp on mine. My employer is under the impression we can use a $500 Mac Mini as a replacement file server for our 10-person office, ~1 TB of data, and website. We have a separate Mac Mini for email. Are there any hardware considerations I should make them aware of, since we use it for Revit central models? I have a vague understanding of what hardware is prioritized for individual workstations, but not for the file server where our central models live.

    "One node is a DC, the other is a member server. "
    Sounds like you should pick one of the many documents/videos available from TechNet or Microsoft Virtual Academy instead of CBT Nuggets.  They are giving you bad information.  You cannot cluster DCs.  It's that simple.  Cluster nodes have to be member servers.
    .:|:.:|:. tim

  • File Server files?

    We are 8 users who share files on our FILE SERVER. Some drag the files to their local hard drive, work on them, then drag them back to the FILE SERVER, where the old file is replaced. Others work on files while "IN" the FILE SERVER. To say the least, we've had some doozies happen, and we usually don't have time to have I.T. do a restore from tape.
    We don't currently use TIME MACHINE on any workstations. I'm thinking we may need to if it can prevent future calamities.
    QUESTION: When a user works directly "IN" the FILE SERVER using their own installed application (for example, InDesign), will TIME MACHINE make a backup of it? If so, how do I go about setting this up?
    Thanks a bunch in advance!

    Is your server running OS X Server? If so, Time Machine can back up the server, or any disk connected directly to it. Backups of the clients can be managed from the server. See page 241 here: http://images.apple.com/server/macosx/docs/UserManagementv10.6.pdf
    Or, if the Mac you're using like a server is running "normal" Leopard or Snow Leopard, it can back up anything on that Mac, or any drives connected directly to it.
    Similarly, Time Machine running on individual users' Macs can back up anything on those Macs, or directly-connected to them; but that's probably not what you want.
    You might want to review these:
    What is Time Machine?
    Time Machine Tutorial
    and perhaps browse the Time Machine - Frequently Asked Questions User Tip at the top of this forum.

  • Adequate File Server for Design Studio

    I am looking for feedback on the new mac mini. Hopefully, there are a number of you who have experience in the matter.
    My company will soon have several designers on staff. Currently, my Mac Pro is serving as the file server for the company. This is not ideal, as I would like to use my Mac Pro as a workstation. We have a ReadyNAS in the office, but its sluggish performance would never hold up under our workflow. I need a file server that can handle fewer than 10 designers working on large files. Ideally, I would also like to find something that acts as an in-house web server.
    The new Mac Mini with Thunderbolt looks like it could fit the bill, but I would love to hear from people who are using the machine in a similar setup. Could the Mini provide adequate performance as a file server? If so, what upgrades should I consider?
    I am also looking at attaching the LaCie Little Big Disk for the files. If I understand correctly, this should run as quickly as an internal drive, correct?
    I would love to hear your feedback and any alternative suggestions.

    I recently purchased a Mac Mini i7 2.0 with 7200 RPM drives and a Promise Pegasus R6 12TB file server.
    Everything worked great for 48 hours, but then my Pegasus dropped a drive... Promise is RMAing me a replacement drive, but that takes about a week to get...  I have heard of issues with Promise RAIDs coming with 1 bad drive, but I thought I was safe since I didn't have one after the 6-hour initialization process.  The new drive will be here by Friday, and hopefully I can do some tests on this thing to see if anything else goes wrong before the "return/refund" date is up.  I really can't afford to have a $2,000 RAID paperweight.
    I also asked Promise if I could simply go to Best Buy and get a replacement drive to get me back up and going (even though the server is still running, I am apprehensive about using it without a redundant drive). Promise said that the server will only work with "Apple Approved" drives, and only Promise can ship those drives, so I could not just go get a 2TB 7,200 RPM drive and replace the defective one myself.
    Not sure there is any truth to this, but since the parts are under warranty for 2 years, I guess it doesn't matter, as it's not costing me anything but time away from using the new $2K server.
    I thought about Drobo Pro servers too, but the reviews of them seemed 10 times worse than what I read about the Promise servers.
    Anyone out there who has these R6 Pegasus servers, care to comment?  Should I cut my losses and return this thing, or should I stick it out?
