SQL2008R2 cluster w/ several Named Instances -- shared storage best practice?

Planning a four-node SQL2008R2 cluster with three named instances.  Each named instance requires exclusive use of a drive letter on shared storage.  Does the named instance need all its files (data, logs, tempdb) on that exclusive drive?  Or can
it use a drive shared by all 3 instances, e.g. U:\SQLBackup\<instance-name>\...?
Thanks,
Bob
 

You will need at least one drive for each instance, plus one for the cluster quorum (unless you go for a file share witness).
My recommendation would be:
Instance1
E:\SQLDataFiles
F:\SQLLogFiles
G:\SQLTempFiles
Instance2
H:\SQLDataFiles
I:\SQLLogFiles
J:\SQLTempFiles
And so on.  If you are concerned that you might run out of drive letters, you could use a single drive letter per instance and attach the 3 drives as mount points under that drive. That way you save 2 letters per instance.
As for using one single drive per instance for all 3 kinds of files: don't go there -- the performance gain from splitting them into 3 drives as laid out above is at least 50% in my experience. Remember also to format the SQL drives with an NTFS block size
of 64K.
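As a sketch, the 64K recommendation can be applied with the PowerShell Storage cmdlets (these ship with Windows Server 2012 and later; on 2008 R2 you would use "format E: /FS:NTFS /A:64K" instead). The drive letter and label below are examples only:

```powershell
# Format a dedicated SQL data drive with a 64 KB allocation unit size.
# WARNING: this destroys any data on the volume -- example drive letter only.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel "SQLDataFiles" -Confirm:$false

# Verify the allocation unit size afterwards:
Get-Volume -DriveLetter E | Select-Object FileSystemLabel, AllocationUnitSize
```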
Regards
Rasmus Glibstrup, SQLGuy
http://blog.sqlguy.dk

Similar Messages

  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
    I am designing a solution for 1700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
    I am looking for best practice on sizing the storage disk for user profile data. I am considering a single 30 TB disk to host the profile data, spread across two disk enclosures.
    Please let me know if a single 30 TB disk can become a bottleneck for active user profile data.
    I have SSD writable disks in storage with FC connectivity.
    Thanks
    Ravi

    Check this
    TechEd session,
    the
    Windows Server 2012 VDI deployment Guide (pages 8,9), and 
    this article
    General considerations during volume size planning:
    Consider how long it will take if you ever have to run chkdsk. Chkdsk has seen significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
    Consider how the volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up or restore a 30 TB volume.
    Any operation on a 30TB volume like snapshot will pose performance and additional disk space challenges.
    For these reasons many IT pros choose to keep volume size under 2TB. In your case, you can use 15x 2TB volumes instead of a single 30 TB volume. 
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com

  • Multi-user, Multi-mac, single-storage best practices?

    I wouldn't share the MacBook Pro, so my wife finally replaced her PC with a new iMac. We want to store our big music collection in one place (either the iMac or an external USB disk). Both machines presently use WiFi connectivity through our older round AirPort Extreme, though I'd consider upgrading if Airport Disk sharing would make this simple. We also presently use an AirPort Express to play music from the laptop to our home audio system and will continue to use the laptop for this.
    Presently we each have one library for laptop/iGadgets. Ideally we could share the library files across machines (in something akin to an NFS/Celerra mount in the Unix world) so that we don't have to add music more than once per person, and I could recover laptop disk space. Is it possible to point multiple machines at the same library xml/itl files, or at least sync them somehow (maybe dotmac) to both machines, and how would one configure that?
    My knowledge of Mac networking is very small, but I'm tech-savvy in the Win/Unix world. Is the network latency prohibitively slow, particularly when pulling files over WiFi from a remote disk and playing back remotely to the AirPort Express? We don't want it to stop every 5 seconds to buffer. I welcome suggestions for the best way to proceed. Thanks in advance.

    dead thread

  • Runtime image storage best practice ?

    Hello,
    I have a question regarding the best place to store images that will be loaded at runtime. I understand the concept of having an assets folder within the project and keeping certain images as part of the project itself, but what about storing images that are dynamic, in that they are not available at authoring time but are still loaded at runtime?
    The specific implementation is that I have an application that is configured by the user, and I want them to be able to assign their own images for icons on buttons (while still assigning a default icon in case the image they've assigned is not found or is not compliant with the size requirements, etc.). So where would be the best place to store images like this? There are a couple of other places in my project where I'll allow the user to place their own logos (such as a control bar area) or other graphics within the context of the UI, so the question is not specific to buttons and icons.
    I hope my question makes sense, but I can be more specific if need be. Thanks in advance for your time.

    You could use the resource bundling mechanism for your idea. It depends on how many users you will have, because this approach requires compiled resource modules to be loaded at runtime; for each custom set of assets for one of your users you would invoke the mxmlc compiler to build its own custom resource module, which you can then load at runtime to override all same-named resources already used in the application.
    All you have to ensure is that your resource names match and the resource bundle names are equal too.
    If you are interested, dig into ResourceManager class and resource bundling mechanism.
    If you feel this message answers your question or helps, please mark it respectively

  • Servlet - xml data storage best practice

    Hello - I am creating a webapp that is a combination of servlets and JSP. The app will access, store and manipulate data in an XML file. I hope to deploy and distribute the webapp as a WAR file. I have been told that it is a bad idea to assume that the XML file, if included in a directory of the WAR file, will be writeable, as the servlet spec does not guarantee that WARs are "exploded" into real file space. For that matter, it does not guarantee that the file space is writeable at all.
    So, what is the best idea for the placement of this XML file? Should I have users create a home directory for the XML file to sit in, so it can be guaranteed to be writeable? And, if so, how should I configure the webapp so that it will know where this file is kept?
    Any advice would be gratefully welcomed...
    Paul Phillips

    Great Question, but I need to take it a little further.
    First of all, my advice is to use some independent home directory for the xml file that can be located via a properties file or the like.
    This will make life easier when trying to deploy to a server such as JBoss (with Catalina/Tomcat) which doesn't extract the war file into some directory. In that case you would need to access your XML file which would be residing inside a war file. I haven't tried this (sounds painful) but I suspect there may be security access problems when trying to get the FileOutputStream on a file inside the war??
    Anyway.... so I recommend the independent directory away from the hustle and bustle of the servers' directories. Having said that..... I have a question in return: Where do you put a newly created (on the fly) jsp that you want accessed via your webapp?
    In Tomcat its easy... just put it in the tomcat/webapps/myapp directory, but this can't be done for JBoss with integrated Tomcat (jboss-3.0.0RC1_tomcat-4.0.3).
    Anyone got any ideas on that one?

  • Advantages of Shared Storage in SOA Cluster

    HI,
    Enterprise deployment guide ( http://download.oracle.com/docs/cd/E15523_01/core.1111/e12036/toc.htm) , installing binaries in shared storage.
    We have NAS as shared storage.
    My Question is what are the advantages/disadvantages of installing binaries in shared storage?
    One advantage i know as mentioned in the guide is that, we can create multiple soa servers from single installation.
    Thanks
    Manish

    It has always been my understanding that shared storage is a prerequisite, not a recommendation, meaning if you want a cluster configuration you must have shared storage. I have had a quick look through the EDG and can't see any reference to installing binaries on non-shared storage.
    I'm not 100% sure on this, but I don't believe the WLS and SOA homes are used at run time. The run-time files used are in the managed server location, e.g. user_projects. By default these sit in the WLS home.
    Also, I don't know much about shared storage, e.g. NAS versus SAN, but if you already have NAS, it seems the logical choice.
    cheers
    James

  • Is it possible to install Oracle RAC without shared storage

    Dear All,
    I would like to seek for your advice.
    I got two different servers. We call it node 1 and node 2. And two different instances name.
    Node 1 -> instance name as "ORCL1"
    Node 2 -> instance name as "ORCL2"
    For the system we need Oracle RAC active-active cluster mode. Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% up time to the Application server. We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point of time and be having a consistent data on both database machines. We only need the database to be in a cluster mode, we won't need the OS to be in a cluster. There is no shared storage in this case.
    Can this be done? Please advice.

    you should review RAC concepts, and the meaning of instance and database
    >>For the system we need Oracle RAC active-active cluster mode.
    RAC = a single database with multiple instances all accessing the same shared storage; no replication involved.
    >>Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% up time to the Application server.
    What you describe here is multiple databases with multiple instances, replicated between each other.
    >>We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point of time and be having a consistent data on both database machines. We only need the database to be in a cluster mode, we won't need the OS to be in a cluster. There is no shared storage in this case.
    No shared storage = no RAC.
    You will have two separate databases synchronizing continuously.
    You can use, for example, Streams / Advanced Replication (with a multi-master configuration).
    If you don't insist on an active-active configuration, you can also use Data Guard to build a standby database.

  • SQL2008R2 new named instance in an existing cluster as a new resource

    Hello everyone.
    I'm trying to find out the best way to install a new named instance of a SQL2008R2 server in clustered environment.
    The current Windows cluster is a 2-node cluster and contains the DTC and 2 named SQL Server instances:
    MSSQLSERVER (default)  Network name: Clust1
    Example1  Network name: Clust2
    I need another instance, but I want to reach it through MSSQLSERVER's SQL network name.
    In the end I need to be able to connect to the default instance with Clust1 and to the new instance with Clust1\New.
    Is it even possible to install SQL Server not as a cluster service, but rather inside an existing service as a new resource?

    Hello,
    Thank you for your reply.
    I want to create the installation on both nodes, so when a failover occurs on the Clust1 SQL cluster, both the default and the new instance fail over and start up on the other node.
    I tested the standalone installation method; the problem is that it won't show up as a resource in the cluster service.
    Do I have to add it manually?
    No, what you are suggesting cannot be attained with a standalone installation; you need to add the new instance in cluster-aware mode. That would be your third instance in the cluster, and it would require a new disk for its data files, an IP address, and a virtual network name. MSDTC can
    be shared.
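    As a sketch, a cluster-aware named instance can be added from the command line of the first node with an unattended failover-cluster install (every name, IP and disk resource below is a placeholder; check the SQL Server 2008 R2 setup documentation for the full parameter list):

```
setup.exe /ACTION=InstallFailoverCluster /FEATURES=SQLEngine ^
  /INSTANCENAME=New /FAILOVERCLUSTERNETWORKNAME=Clust3 ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.1.53;Cluster Network 1;255.255.255.0" ^
  /FAILOVERCLUSTERDISKS="Cluster Disk 4" ^
  /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSYSADMINACCOUNTS="DOMAIN\dbadmins"
```

    The remaining nodes would then be joined with /ACTION=AddNode run on each of them.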
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application associated with it which can fail-over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its shared cluster
    storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines
    how to add a clustered service to a replication group. It clearly shows using "Shared storage" for the cluster, which is common sense otherwise there effectively is no application fail-over possible and removes the entire point of using a resilient
    cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating clusters
    are supported in replication groups and a technet guide is provided to setup and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification - is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of
    performance when attempting to replicate / write data between two clusters running a HAFS configuration in a DFS replication group.
    If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher
    for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage, to another
    shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Cluster Shared Volume = ??
    Cluster Shared Volume ---> Cluster Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering, however it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also having the same scenario at one of my customer place.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using a Cluster Shared Volume. Even the data partition drive is part of the CSV.
    It's really confusing whether DFS replication on CSV is supported or not, and what the consequences of using it would be.
    To my knowledge we have some customers using Hyper-V 2008 R2 with DFS configured and running fine on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul

  • Can we install a new MSSQL cluster on the same Windows cluster which already contains an MSSQL cluster with a named instance

    We have an MSSQL 2008 R2 Enterprise Edition two-node active/passive failover cluster running on a Windows 2008 R2 cluster without any issues.
    Now my question is: can we add one more MSSQL clustered instance to the same setup without disturbing the existing one?
    Also, please give your thoughts on load sharing, as the second node is mostly idle now except in failover scenarios.
    The reason we are going this route is the collation setting, which can only be set once per instance (changing the database collation setting did not work); we need a different default collation for the new setup.

    hi,
    >>Now my question is can we add one more MSSSQL cluster instance for the same setup with out disturbing the existing one ?
    Yes, it is possible. You need to add new drives as cluster-aware, install SQL Server, and put the data and log files on those drives. You would need to create a named instance of SQL Server and a separate resource group. Both the old installation and the new
    one will work independently.
    >>Also give thoughts on load sharing as the second node is mostly ideal now except fail-over scenarios,
    Good point indeed. You are about to create a multi-instance cluster and should plan for the scenario where one node is down and the other node handles the load of both instances. Memory and CPU should be sufficient to handle the combined load.
    >>Why we go for this situation is because of the collation setting which can be set only one per instance(Database collation setting change not working), we need a different default collation for the new setup .
    Installing a new instance just for collation seems a little weird to me. You can manage collation at the column, database, and server level.
    http://technet.microsoft.com/en-us/library/aa174903(v=sql.80).aspx
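    As a sketch of the column- and database-level options (the database, table, and collation names below are only examples):

```sql
-- Create a database whose default collation differs from the server's:
CREATE DATABASE SalesDb COLLATE French_CI_AS;

-- Override the collation for a single column:
CREATE TABLE SalesDb.dbo.Customers (
    Id   int PRIMARY KEY,
    Name nvarchar(100) COLLATE Japanese_CI_AS
);

-- Or apply a collation only for one comparison or sort:
SELECT Name FROM SalesDb.dbo.Customers
ORDER BY Name COLLATE Latin1_General_CS_AS;
```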

  • How does the cluster behave when the shared storage disk is offline to the primary node?

    Hi All
    I have configured Cluster as below
    Number of nodes: 2
    Quorum devices: one Quorum server, shared disks
    Resource Group with HA-storage, Logical host name, Apache
    My cluster works fine when either node loses connectivity or crashes, but when I deny the primary node (on which the HA storage is mounted) access to the shared disks,
    the cluster doesn't fail the whole RG over to the other node.
    I tried adding the HA-storage disks to the quorum devices, but it didn't help.
    In any case, I can't do any I/O on the HA storage on the respective node.
    NOTE: This is the same case even in a zone cluster.
    Please guide me; below is the output of the # cluster status command:
    === Cluster Nodes ===
    --- Node Status ---
    Node Name Status
    sol10-1 Online
    sol10-2 Online
    === Cluster Transport Paths ===
    Endpoint1 Endpoint2 Status
    sol10-1:vfe0 sol10-2:vfe0 Path online
    --- Quorum Votes by Node (current status) ---
    Node Name Present Possible Status
    sol10-1 1 1 Online
    sol10-2 1 1 Online
    --- Quorum Votes by Device (current status) ---
    Device Name Present Possible Status
    d6 0 1 Offline
    server1 1 1 Online
    d7 1 1 Offline
    === Cluster Resource Groups ===
    Group Name Node Name Suspended State
    global sol10-1 No Online
    sol10-2 No Offline
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    global-data sol10-1 Online Online
    sol10-2 Offline Offline
    global-apache sol10-1 Online Online - LogicalHostname online.
    sol10-2 Offline Offline
    === Cluster DID Devices ===
    Device Instance Node Status
    /dev/did/rdsk/d6 sol10-1 Fail
    sol10-2      Ok
    /dev/did/rdsk/d7 sol10-1 Fail
    sol10-2 Ok
    Thanks in advance
    Sid

    Not sure what you mean by "deny access", but it could be that reboot on path failure is disabled. This should
    enable it:
    # clnode set -p reboot_on_path_failure=enabled +
    HTH,
    jono

  • 10g RAC on Veritas Cluster Software and shared storage

    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
    These are the 3 things I am wondering how to handle. I did all of these on Oracle Clusterware, but never on Veritas cluster-ware. Are all 3 steps the same or different? If someone can help...

    How can we do this while using Veritas cluster software?
    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
    If we install RDBMS 10.2.0.1 with the standard installer, will it detect VCS, and when we run dbca will it offer the RAC DB option?
    What is "Configure Cluster control interface (shared storage) for Oracle"?

  • SSAS 2012 (SP2) - Connecting to a Named Instance in a Failover Cluster

    I posted this question some months ago and never got a resolution...still having the same problem. (http://social.msdn.microsoft.com/Forums/sqlserver/en-US/4178ba62-87e2-4672-a4ef-acd970ac1011/ssas-2012-sp1-connecting-to-a-named-instance-in-a-failover-cluster?forum=sqlanalysisservices)
    I have a 3 node failover cluster installation (active-passive-passive) configured as follows:
    Node1: DB Engine 1 (named instance DB01)
    Node2: DB Engine 2 (named instance DB02)
    Node3: Analysis Services (named instance DBAS)
    Obviously, the node indicated is merely the default active node for each service, with each service able to fail from node to node as required.
    Strangely, when I attempt to connect to the SSAS node using the cluster NetBIOS "alias" (I don't know what else it would be called, so I apologize if I am mixing terminology), I am only able to do so by specifying the alias _without_ the
    required named instance. If I issue a connection request using an external program, or even SSMS, using Node3\DBAS or Node3.domain\DBAS, it appears that the SQL Server Browser is offering up a bogus TCP port for the named instance (in my case TCP/58554), when
    in reality the SSAS service is running on TCP/2383 (confirmed with netstat) -- which, if I understand correctly after much, much reading on the subject, is the only port that can be used in a failover cluster. In any case, I'm puzzled beyond words. As I think
    through it, I believe I've seen this issue in the past, but never worried about it since it wasn't necessary to specify the named instance when I had SSAS requirements... It's only a showstopper now because I'm finalizing my implementation of SCVMM/SCOM 2012
    R2, and for some strange reason the PRO configuration in VMM complains if you don't offer up a named instance...
    Thank you much for reading. I appreciate any help to get this resolved.
    POSSIBLY NOT RELEVANT...?
    I've properly configured the SPNs for the SSAS service (MSOLAPSvc.3) and the SQL Browser (MSOLAPDisco.3), with the former mapped to the SSAS service account and the latter to the cluster "alias" (since it runs as "NT AUTHORITY\LOCALSERVICE",
    as is customary), and have permitted delegation on the service and machine accounts as required. So I'm not getting any Kerberos issues with the service... any more, that is... ;) I'm not sure that's important, but I wanted to be forthcoming with details to
    help solve the issue.

    When connecting to SSAS in a cluster, you do not specify an instance name. In your case, you would use the SSAS cluster's network name to connect.
    See:
    http://msdn.microsoft.com/en-us/library/dn141153.aspx
    For servers deployed in a failover cluster, connect using the network name of the SSAS cluster. This name is specified during SQL Server setup, as
    SQL Server Network Name. Note that if you installed SSAS as a named instance onto a Windows Server Failover Cluster (WSFC), you never add the instance name on the connection. This practice is unique to SSAS; in contrast, a named
    instance of a clustered relational database engine does include the instance name. For example, if you installed both SSAS and the database engine as named instance (Contoso-Accounting) with a SQL Server Network Name of SQL-CLU, you would connect to SSAS using
    "SQL-CLU" and to the database engine as "SQL-CLU\Contoso-Accounting". See
    How to Cluster SQL Server Analysis Services for more information and examples.
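    Using the hypothetical names from that example, the two connection strings would differ only in whether the instance name is appended:

```
Data Source=SQL-CLU                       (SSAS named instance on a WSFC: no instance name)
Data Source=SQL-CLU\Contoso-Accounting    (clustered relational engine: instance name included)
```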

  • Sharing a DB server among several SAP instances

    When discussing possible server consolidation scenarios the following question arose:
    Is it possible and supported to use one database server for several SAP instances?
    E.g. One instance of SQL Server holds four databases for "dev" + "test" + "sandbox1" + "sandbox2"
    and four servers (three of them virtual servers in VMware ESX) access this shared DB server to access their databases.
    The idea is to have one heavyweight DB server accessing the SAN and running the DB load of several non-productive SAP instances. One would then have to pay for only one SQL Server license...
    Kind regards, Rudi

    Hi Dan,
    The advantages of multiple instances/one database each are:
    - Each instance can be stopped and started independently. This allows you to have downtime on one system without impacting another.
    - Each instance can be patched separately. (This can also be viewed as a disadvantage from an admin-overhead point of view.)
    - The MS SQL memory parameters (min server memory / max server memory) are configured at the instance level, so you can tune them to the specific requirements of each database, e.g. R/3, BI, EP. The alternative is one memory setting for all DB types.
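    For reference, a sketch of how those per-instance memory settings are applied (the MB values below are purely illustrative):

```sql
-- Run against each instance separately; each instance keeps its own values.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```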
    Thanks,
    Chris

  • 10g RAC on Veritas Cluster Software & shared storage

    We are in the process of building 10g RAC without Oracle Clusterware; we will be using Veritas Cluster software and Veritas shared storage. I am looking for some quick notes/articles on setting up and installing this RAC configuration.

    Step-By-Step Installation of 9i RAC on VERITAS STORAGE FOUNDATION (DBE/AC) and Solaris
    Doc ID: Note:254815.1
    These are the notes I was looking for. My question is: only the RDBMS version will change, all other setup will be the same as mentioned in the notes, and the DBA work will start from creating the DBs, right?
