StorageTek 2500 - Mirroring Best Practices

I am setting up a new StorageTek 2500 array.
One of my volumes will be assigned 12 hard drives from a single shelf and will use RAID 1+0.
The CAM asks me to manually assign pairs of drives for mirroring.
I would like to understand whether there are any single points of failure within the disk shelves that I should be aware of when deciding this.
Do the StorageTek 2500 series disk shelves have multiple backplanes?
- If so, which hard drives are assigned to each backplane? I want to avoid assigning both disks of a mirrored pair to the same controller. I don't want to lose the data on both drives in the event of a hardware failure.
- If not, is it recommended to pair one drive from one shelf with a drive from another shelf to eliminate the single point of failure?
Does anyone have any suggested best practices when assigning pairs of drives for mirroring on the StorageTek 2500?
Thanks

jgibson wrote:
> I am setting up a new StorageTek 2500 array.
> One of my volumes will be assigned 12 hard drives from a single shelf and will use RAID 1+0.
> The CAM asks me to manually assign pairs of drives for mirroring.
> I would like to understand whether there are any single points of failure within the disk shelves that I should be aware of when deciding this.
> Do the StorageTek 2500 series disk shelves have multiple backplanes?
No.
> - If so, which hard drives are assigned to each backplane? I want to avoid assigning both disks of a mirrored pair to the same controller. I don't want to lose the data on both drives in the event of a hardware failure.
> - If not, is it recommended to pair one drive from one shelf with a drive from another shelf to eliminate the single point of failure?
I do not understand what you mean by shelf. The drives are connected to the same backplane, regardless of which pair you create. The SPOF is the backplane; nevertheless, it is unlikely to fail.
> Does anyone have any suggested best practices when assigning pairs of drives for mirroring on the StorageTek 2500?
> Thanks
Regards
Nicolas

Similar Messages

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns - they only differ in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've run into during my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issues start: they are pulling their data from our server over the internet while 2,000 units are sending data to it, and they keep getting read timeouts since the steady stream of inserts blocks all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stops reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get blocked in favor of insert commands, but does SQL Server automatically read from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures or views instead of hard-coded SELECT statements; what difference will this make in terms of performance?
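    For reference, a minimal sketch of the snapshot and indexing pieces, assuming SQL Server 2005 or later (the database, table and index names below are placeholders, not the real objects):
        -- Row versioning for read-committed reads: once this is ON, plain SELECT
        -- statements use the row versions automatically (nothing extra to specify),
        -- so they stop blocking behind the insert stream and the READ UNCOMMITTED
        -- workaround in the C# code is no longer needed.
        -- Note: the database must have no other active connections while this runs.
        ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON;

        -- Indexing sketch for the monthly per-unit reports: seek on unit and time,
        -- and carry the most-read report columns in the index to avoid extra lookups.
        CREATE NONCLUSTERED INDEX IX_Messages_Unit_Time
            ON dbo.Messages (MessageUnit, MessageDate, MessageTime)
            INCLUDE (MessageSpeed, MessageIO, MessageSatNumber);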
    Thanks in advance. 

  • Raid Configuration MCS 7845 (best practice)

    I'm wondering what the best practice is for RAID configuration. I'm looking for examples of 4-disk and 6-disk setups. Also, which drives do you pull when breaking the mirror?
    Is it possible to have RAID 1+0 for 4/6 drives and have the mirroring set up so that you would pull the top or bottom drives on an MCS 7835/7845?
    I'm also confused that, using the SmartStart Array Configuration utility, I seem to be able to create one logical drive using RAID 1+0 with only 2 drives - how is that possible?
    Any links to directions would be appreciated.

    ICM 7.0, CVP 4.x, CCM 4.2.3, unity, and the collaboration server 5.0 and e-mail manager options for ICM.
    But to keep it simple let's look at a Roger set-up.
    Sorry for the delayed response.

  • Best practice for database move to new disk

    Good morning,
    Hopefully this is a straightforward question/answer, but we know how these things go...
    We want to move a SQL Server Database data file (user database, not system) from the D: drive to the E: drive.
    Is there a best practice method?
    My colleague has offered "ALTER DATABASE XXXX MODIFY FILE" whilst I'm more inclined to use "sp_detach_db".
    Is there a best practice method or is it much of a muchness?
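    For reference, the two routes being weighed look roughly like this (XXXX, the logical file name and the paths are placeholders):
        -- Route 1: keep the database attached and just repoint the file.
        ALTER DATABASE XXXX SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE XXXX MODIFY FILE
            (NAME = XXXX_Data, FILENAME = 'E:\SQLData\XXXX.mdf');
        -- ...copy the data file from D: to E: at the OS level, then:
        ALTER DATABASE XXXX SET ONLINE;

        -- Route 2: detach, move the files, re-attach.
        EXEC sp_detach_db @dbname = N'XXXX';
        -- ...move the files, then:
        CREATE DATABASE XXXX
            ON (FILENAME = 'E:\SQLData\XXXX.mdf'),
               (FILENAME = 'E:\SQLLogs\XXXX_log.ldf')
            FOR ATTACH;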
    Regards,
    Andy

    Hello,
    A quick search on MSDN blogs does not show any official statement about ALTER DATABASE – MODIFY FILE vs ATTACH. However, you can see a huge number of articles promoting and supporting
     the use of ALTER DATABASE in any scenario (replication, mirroring, snapshots, Always On, SharePoint, Service Broker).
    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/04/27/how-to-move-publication-database-and-distribution-database-to-a-different-location.aspx
    http://blogs.msdn.com/b/sqlcat/archive/2010/04/05/moving-the-transaction-log-file-of-the-mirror-database.aspx
    http://blogs.msdn.com/b/dbrowne/archive/2013/07/25/how-to-move-a-database-that-has-database-snapshots.aspx
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/02/06/how-to-move-databases-configured-for-sql-server-alwayson.aspx
    http://blogs.msdn.com/b/joaquint/archive/2011/02/08/sharepoint-and-the-importance-of-tempdb.aspx
    You cannot find the same about ATTACH. In fact, I found the following article:
    http://blogs.msdn.com/b/sqlcat/archive/2011/06/20/why-can-t-i-attach-a-database-to-sql-server-2008-r2.aspx?Redirected=true
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best practice for version control

    Hi.
    I'm setting up a file share, and want some sort of version control on the file share. What's the best practice method for this sort of thing?
    I'm coming at this as a subversion server administrator, and in subversion people keep their own copy of everything, and occasionally "commit" their changes, and the server keeps every "committed" version of every file.
    I liked subversion because: 1) users have their own copy, if they are away from the office or make a big oops mistake, it doesn't ever hit the server, and 2) you can lock a file to avoid conflicts, and 3) if you don't lock the file and a conflict (two simultaneous edits) occur, it has systems for dealing with conflicts.
    I didn't like subversion because it adds a level of complexity to things -- and many people ended up with critical files that should be shared on their own hard drives. So now I'm setting up a fileshare for them, which they will use in addition to the subversion repository.
    I guess I realize that I'll never get full subversion-like functionality in a file share. But through a system of permissions, incremental backups and mirroring (rsync, second-copy for windows users) I should be able to allow a) local copies on user's hard drives, b) control for conflicts (locking, conflict identification), and keeping old versions of things.
    I wonder if anyone has any suggestions about how to best setup a file share in a system where many people might want to edit the same file, with remote users needing to take copies of directories along with them on the road, and where the admin wants to keep revisions of things?
    Links to articles or books are welcome. Thanks.

    Subversion works great for code. Sort-of-ok for documents. Not so great for large data files.
    I'm now looking at using the wiki for project-level documentation. We've done that before quite successfully, and the wiki I was using (mediawiki) provides version history of pages and uploaded files, and stores the uploaded files in the file system.
    Which would leave just the large data files and some working files on the fileshare. Is there any way people can lock a file on the fileshare, to indicate to others that they are working on it and that others shouldn't be modifying it? Is there a way to use unix (user-group-other) permissions, e.g. "chmod oa-w", to lock a file and indicate that one is working on it?
    I also looked at Alfresco, which provides a CIFS (windows SMB) view of data files. I liked it in principle, but the files are all stored in a database, not in the file system, which makes me uneasy about backups. (Sure, subversion also stores stuff in a database, not a file system, but everyone has a copy of everything so I only lose sleep about backups regarding version history, not backups on the most recent file version.)
    John Abraham
    [email protected]

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
      I've just started on a new project - I have significant ABAP development experience (15 years+) - but one thing that I have never seen used correctly is the Package concept in ABAP - for any of the projects that I have worked on.
    I would like to define some best practices - about when we should create packages - and about how they should be structured.
    My understanding of the package concept is that it allows you to bundle all of the related objects of a piece of development work together. In previous projects - and almost every project I have ever worked on - we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages; really it seems that we have not moved on past the 4.6 usage of the old development class concept, and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Jung, Rich Heilman) (I only have the 1st edition) that we should use packages for defining separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least 3 packages - one for model, one for controller and one for view-based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion - and so also add little value. Is this really the best practice approach? Has anyone tried this approach across a full-blown implementation?
    As we are starting a new implementation - we will be running with 7 EHP2 and I would really like to get the most out of the functionality that is provided. I wonder what others have for experience in the definition of packages.
    One possible usage occurs to me that you could define the packages as a mirror image of the application business object class hierarchy (see below). But perhaps this is overcomplicating their usage - and would lead to issues later in terms of transportation conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP Package concept - from real life project experience?
    All contributions are most welcome - although please refrain from sending links on how to create packages in SE80
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area and that works just fine.
    Roy

  • Best practice for # of drives for Oracle on a Windows 2003 server

    I need to know what the best practice is concerning the # of drives that should be built on a Windows 2003 server to ensure the best performance and the most effective backup and recovery, for both the application itself and the database.
    I'm not certain, but it may be only a 32 bit machine. I'll update this once I know for sure.

    We are in the process of migrating our Oracle 10 database (20G) to a new machine.
    How should we configure our disks (8 in total)?
    1. SAME: "Stripe and mirror everything"?
    2. Doc 30286.1 "I/O tuning with different RAID configurations" and 148342.1 "Avoiding I/O disk contention" say:
    database files on RAID01
    redo and archive logs on RAID1
    temp on RAID1
    So, what is the best practice?

  • ICloud document library organization best practices?

    While I think the iCloud document library could work pretty well if I was iOS-only, I'm still having some trouble organizing something that works with my work and personal Macs as well. A big gap is lack of an iOS version of Preview.
    But more importantly, I still keep documents organized by project, and I have a lot of project folders because, well, I have a lot of work! I'm not sure how to best reconcile that with the limitations imposed by iCloud Documents. And I'm not sure how/if Mavericks tags will really help.
    The best example I've seen of a best practice to organizing iCloud documents was in this blog post from the makers of iA Writer:
    http://ia.net/blog/mountain-lions-new-file-system/
    Their folder structure mirrored their workflow rather than projects, which I think could be interesting. They haven't updated it since Mavericks, and I'm curious how they might add tags. Perhaps tags would be used for projects?
    Right now, I tend to just keep documents in iCloud that I'm actively working on, since I might need to edit it at home or on my iPad. Once they're complete, I move them to the respective project folder on the Mac. Dropbox keeps the project folders in sync, which makes iCloud Documents feel redundant.
    This workflow still feels klugy to me.
    Basically, I'm asking, have you effectively incorporated iCloud Documents into your Mac workflow? What are your best practice recommendations?
    Thanks.
    Paul

    Madhu_1980 wrote:
    > Hi,
    >
    >
    > As per the doc "Best Practices for Naming Conventions" https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90b213c2-d311-2a10-89bf-956dbb63aa7f
    >
    > In this doc, we see there are no prefixes or suffixes like DT_ for data types, MT_ for Message types, SI_ for service interfaces OM_ for operation mappings (MM_ in message mappings in earlier versions).
    >
    > but i have seen some people maintain these kind of conventions.
    >
    > For larger projects, what is the best option,
    > A) to strictly follow the instructions in the above document and not to maintain the object type prefixes or suffixes.
    > B) or to have this kind of prefixes in addition to what mentioned in the naming conventions doc.
    >
    > which is preferable, from point of long term maintainance.
    >
    > i would appreciate an opinion/guideline from people who had worked on multiple projects.
    >
    > thanks,
    > madhu.
    I have seen projects that are strict about having DT_, MT_ prefixes and also projects which don't use them.
    Even though you don't have a DT_ or MT_ prefix for data types and message types, it is essential to have AA, OA, OS, IS etc. defining a message or service interface, as that will give you an idea of the mode and direction of the interface.
    Generally speaking, I strongly feel that the naming conventions suggested by the document are quite enough to accommodate a large number of projects, unless something very specific pops up.

  • BEST PRACTICE TO PARTITION THE HARD DISK

    Can someone please guide me on THE BEST PRACTICE TO PARTITION THE HARD DISK FOR 10G R2 on operating system HP-UX-11?
    Thanks,
    Amol
    Message was edited by:
    user620887

    I/O speed is a basic function of number of disk controllers available to read and write, physical speed of the disks, size of the I/O pipe(s) between SAN and server, and the size of the SAN cache, and so on.
    Oracle recommends SAME - Stripe And Mirror Everything. This comes in RAID10 and RAID01 flavours. Ideally you want multiple fibre channels between the server and the SAN. Ideally you want these LUNs from the SAN to be seen as raw devices by the server and use these raw devices as ASM devices - running ASM as the volume manager. Etc.
    Performance is not achieved by just partitioning. Or just more memory. Or just a faster CPU. Performance planning and scalability encapsulate the complete system. All parts. Not just a single aspect like partitioning.
    Especially not partitioning, as a partition is simply a "logical reference" to a "piece" of the disk. I/O performance has very little to do with how many pieces you split a single disk into. That is the management part. It is far more important how you stripe, and whether you use RAID5 instead of a RAID1 flavour, etc.
    So I'm not sure why you are all uppercase about partitioning....

  • Best practice RAC installation in two datacenter zones?

    Datacenter has two separate zones.
    In each zone we have one storage system and one rac node.
    We will install RAC 11gR2 with ASM.
    For data we want to use diskgroup +DATA, normal redundancy mirrored to both storage systems.
    For CRS+Voting we want to use diskgroup +CRS, normal redundancy.
    But for the CRS+Voting disk group with normal redundancy we need 3 LUNs, and we have only 2 storage systems.
    I believe the third LUN is needed to avoid split-brain situations.
    If we put two LUNs on storage #1 and one LUN on storage #2, what will happen when storage #1 fails - meaning two of the three disks for disk group +CRS are inaccessible?
    What will happen when all equipment in zone #1 fails?
    Is human intervention required at failure time, and when zone #1 is coming up again?
    Is there a best practice for a 2-zone 2-storage rac configuration?
    Joachim

    Hi,
    As far as voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). In order to be able to tolerate a failure of n voting files, one must have at least 2n+1 voting files configured for the cluster.
    The problem in a stretched cluster configuration is that most installations only use two storage systems (one at each site), which means that the site that hosts the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where n+1 voting files are configured fails, the whole cluster will go down, because Oracle Clusterware will lose the majority of voting files.
    To prevent a full cluster outage, Oracle supports a third voting file on an inexpensive, low-end, standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server which belongs to a production environment.
    Use the White Paper below to accomplish it:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
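    Just to illustrate what the white paper describes (disk paths and failure group names below are invented), an 11.2 stretched-cluster +CRS disk group would look something like:
        -- One regular failure group per storage system plus a small quorum failure
        -- group on the NFS mount, so either site can fail and a majority (2 of 3)
        -- of the voting files remains accessible.
        CREATE DISKGROUP CRS NORMAL REDUNDANCY
          FAILGROUP site1_storage DISK '/dev/rdsk/stor1_crs_lun'
          FAILGROUP site2_storage DISK '/dev/rdsk/stor2_crs_lun'
          QUORUM FAILGROUP nfs_vote DISK '/voting_nfs/vote_disk3'
          ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

        -- Then place the voting files in the new disk group:
        --   crsctl replace votedisk +CRS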
    Also, regarding the voting file and OCR configuration (11.2) when using ASM - how should they be stored?
    I recommend you read this:
    {message:id=10028550}
    Regards,
    Levi Pereira

  • Best Practice - Securing Schema from User Access

    Scenario:
    User A requires access to schema called BLAH.
    User A is a developer who built an application using this schema in a separate development environment, although they have the same privileges mirrored in production (same roles etc. - required for operation of the application they built).
    This means that the user has roles that grant Select, Update etc. rights on the schema / tables in order to use (and maintain) the applications.
    How can we restrict access to the BLAH schema in PRODUCTION, enforcing it to only be accessible via the middle tier / application (proxy authentication?)?
    We've looked at using proxy authentication; however, it's not possible to grant roles and rights to the proxy account and NOT have them granted to the user (so they can dive straight in using development tooling and hit prod, etc.).
    We've tried granting it on a session basis using proxy authentication (i.e. user A connects via proxy, and we ENABLE a disabled role on the user based on this connection); however, it causes performance issues.
    Are we tackling this the wrong way? What's the best practice for securing Oracle schemas (and objects in general) for user access, where the users actually get an Oracle user account (or even use SSO) for day-to-day business as usual?
    To me this feels like a common scenario, especially where SSO comes into play ...
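    Just to pin down the mechanics being discussed (user, role, package and table names are invented), the proxy plus secure application role combination looks like this:
        -- Allow the middle-tier account to open sessions on behalf of USER_A:
        ALTER USER user_a GRANT CONNECT THROUGH app_proxy;

        -- Keep the DML rights in a secure application role, which can only be
        -- enabled from inside the named package (the package can check
        -- SYS_CONTEXT('USERENV', 'PROXY_USER') before calling DBMS_SESSION.SET_ROLE):
        CREATE ROLE blah_app_role IDENTIFIED USING blah_sec.enable_app_role;
        GRANT SELECT, UPDATE ON blah.orders TO blah_app_role;
        GRANT blah_app_role TO user_a;
        ALTER USER user_a DEFAULT ROLE ALL EXCEPT blah_app_role;
    Whether that avoids the performance problem seen when enabling the disabled role per session would still need testing.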

    What about situations where we have Legacy Oracle Forms stuff? In these cases the user must be granted select etc rights to particular objects, as this can't connect via a middle tier.
    The problem we have is that our existing middle tier implementation is built expecting the user credentials to be passed to it during initial authentication and does not use a proxy, or super user style account.  We have, historically, been 100% reliant on Oracle rights and controls to validate and restrict access to our underlying data.  From what you are saying, we should start to look at using proxy or super user access and move this control process further up - i.e. into Code or Packages ?  If so, does this mean that there is no specific way to restrict schema access to given proxy accounts and then grant normal user accounts to connect through these to get access (kind of a delegated access scenario), without using disabled roles?

  • 6140 Firmware Best Practice

    Hi,
    We have a pair of 6140s with volumes mirrored by Veritas and have a requirement to upgrade the firmware to the latest version. Both are live and hosting mission critical application data. Ideally we want to keep the application running.
    I have conflicting advice as to whether to quiesce I/O to the storage. I have upgraded firmware on other vendors' storage with only a proviso of minimizing I/O, not stopping it. Service advisor is vague to say the least :)
    Is there a best practice document detailing best practice/procedure for a 6140 firmware upgrade or can anyone give their experiences of upgrading live arrays?
    Thanks in advance.
    Trevor

    Hi,
    tmundell wrote:
    > Hi,
    > We have a pair of 6140s with volumes mirrored by Veritas and have a requirement to upgrade the firmware to the latest version. Both are live and hosting mission critical application data. Ideally we want to keep the application running.
    > I have conflicting advice as to whether to quiesce I/O to the storage. I have upgraded firmware on other vendors' storage with only a proviso of minimizing I/O, not stopping it. Service advisor is vague to say the least :)
    > Is there a best practice document detailing best practice/procedure for a 6140 firmware upgrade or can anyone give their experiences of upgrading live arrays?
    > Thanks in advance.
    > Trevor
    No, there is no separate document. Everything is in CAM.
    In summary:
    Controller FW: online upgrade
    IOM FW: online upgrade
    Drive FW: offline upgrade (which means I/O has to be stopped, 100%)
    Regards

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, and the production versions of the apps will overlay the development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple of options:
    1.) Find a way to clone the database (RMAN or something else) that will leave the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move APEX (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value).
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
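    To make option 2 concrete (link name, credentials, TNS alias and table names are placeholders), the APEX-only database would reach the data roughly like this, with synonyms so that only the link definition has to differ between dev and prod:
        -- In the APEX-only instance, point a database link at the data database:
        CREATE DATABASE LINK appdata
          CONNECT TO app_owner IDENTIFIED BY "secret"
          USING 'PRODDATA';

        -- Hide the link behind synonyms so the applications keep their object names:
        CREATE SYNONYM customers FOR app_owner.customers@appdata;
        CREATE SYNONYM orders FOR app_owner.orders@appdata;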
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready but changes to pages 2, 5 and 6 are still in various stages of development. You will need to get the change for page 5 in to resolve a critical production issue. How do you do this without sending pages 2, 5 and 6 in their current state if you have to move the application all at once? The issue is that you absolutely are going to need to version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository and import into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc.
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • Best practices for TOC, versus PROJ title, filename

    I am seeking "best practice" help from the forum. Say a TOC is to have numbers as part of the organization, 1, 2, 3... and topics will be sub-numbered (as in 1.1, 1.2.1, 1.2.2, etc). I am not aware of any way to automatically manage this; they must be explicitly typed in the TOC editor.
    I have heard some say that the "view" in the Project should mirror the view in the TOC. That would imply that folder organization, filenames and titles would be in sync with the TOC, and also include the section/chapter numbering.
    If you have to make a change and resequence things, or insert a new chapter between 2 and 3, then there is a LOT of work to do.
    I am leaning in the direction of having filenames and titles be separate from the TOC structure, and not using Field[title] in the documents, to simplify the update process. This would also simplify updating links within the documents.
    However, I am still pretty new with Robo and don't have the seasoning some of you have with the long-term effects of a particular organization.
    Any help, or best practices that can be shared?
    Thanks!
    Don

    Thanks for your input Colum,
    I am actually considering the opposite of your comment - to have numbering in the TOC but NOT in the Project folders. I consider the TOC much more volatile and flexible than the folder structure, as long as one knows what goes with what. So the good news is there is not a sacrosanct line drawn between the project names and structure and those in the TOC. This helps. If I have to resequence section 2 and section 5, I mainly just change the TOC. I would not have to move or rename any folders in the Project view.
    The other part, using the document title (in the topic properties) to automatically update the topic title in the text, I like quite a bit and I think I can make the case with my peers. That offers a lot of flexibility. As in the above-mentioned resequencing, I would have to access the properties for each topic in the project folder that used to be in section 2, and edit the document title to reflect its new position in section 5 (and vice versa). That is pretty straightforward. I do NOT have to rename all those folders and files in the Project view.
    Thanks again for chiming in.
    Don

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years managing and planning smaller Windows server environments, however, my non-profit has recently purchased
    two StoreEasy 1630 servers and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end user
    workstations, taking into account redundancy, backup and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we
    need switch-hardware based LACP or will the Windows 2012 nic-teaming options be sufficient across the 4 1000T ports on the Storeasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives for a total RAW storage capacity of 42TB. By default the StoreEasy
    servers were configured with 2 RAID 6 arrays with 1 hot standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can be later
    increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create
    additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size virtual hard disks should we make? Is there a max capacity? 64TB? Please let us know what the best approach for storage pooling will
    be for our environment.
    Windows Sharing
    We were thinking that we would create a single Share granting all users within the AD FullOrganization User group
    read/write permission. Then within this share we were thinking of using NTFS permissioning to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach or do you suggest a different approach?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to
    mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been informed that HP will provide an upgrade to 2012 R2 Storage Server when it becomes available. In the meantime, how should we design our storage and replication strategy around the limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like our 2 current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data volumes?
    Should we use a third party backup software?

    Hi,
    Sorry for the delay in reply.
    I'll try to reply to each of your questions. However, for the first one, you may want to post to the Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description you would like to create VHDX files on the RAID 6 logical drives to allow for growth. That is fine, and as you said the limit is 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is using Storage Spaces - a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It can add hard disks to a storage pool and create virtual disks from the pool. You can add disks to this pool later and create new virtual disks if needed.
    For Windows Sharing
    Generally you will end up with different shared folders later. Creating all shares under a single root folder sounds good, but in practice we may not be able to accomplish that, so it really depends on the actual environment.
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR team about the limitation. Actually DFS-R can replicate more data; there is no exact limit. As you can see, the article was created in 2009.
    For Backup
    As you said, there is a backup limitation (2TB for a single backup volume). So if that cannot meet your requirement you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
    If you have any feedback on our support, please send to [email protected]
