Best Practice Question on Heartbeat Issue

Our environment consists of two Fibre Channel hard drive enclosures. One is an HP P2000 with 12 LFF 2TB disks; piggybacked on its controllers is a D2700 with 24 SFF drives (15,000 RPM, 146GB each). This enclosure has full RAID and volume/LUN creation capability, so I can put the disks together pretty much any way I want, though I cannot combine SFF and LFF (2TB) drives into a single RAID set.
My other devices are two Texas Memory System 810s: extremely fast SSD enclosures with eight 500GB flash cards each, presenting 4TB of storage apiece. Neither device has RAID capability, so there is no redundancy beyond the fact that if a device detects a bad chip it migrates the data to one of the spare chips. A full card is held as a spare, but as far as I can tell your data never exists in more than one place. I can create any number of LUNs, and both devices are completely visible to my Oracle VM environment.
The LFF spinning disks mostly hold LUNs used for large data transfers (backups, etc.), and the P2000 also acts as the controller for the SFF disks. The SSDs are used for our database ASM with normal redundancy across the two distinct TMS810s. The SFF drives are used for the various filesystems that actually boot the servers and other things that need faster disk.
My question is: which of these three should I create my cluster heartbeat on? I currently have that LUN on LFF storage (one LUN of many on a RAID 1 set of two 2TB drives). The LUN is only 20GB, but I do have other LUNs on that same RAID set, as I did not want to waste the whole 2TB on a single heartbeat. This way I knew that if one disk in that set failed, I could swap the disk and not lose my heartbeat, and therefore not lose all of the guests running in my cluster. We are looking for 99.9999% uptime.
Everything in my environment is redundant except for the heartbeat. Does OVM 3.1 perhaps expect to have redundant heartbeats?
If the P2000 goes down, I lose my heartbeat and all of my servers/guests go down too. It's my single point of failure.
I tried a large file copy (1TB of data to a 3TB filesystem on the LFF drives) and the cluster seemed to lose heartbeat connectivity and fenced my server. I expect the redundant controllers were overloaded and OVM was not able to keep up; I have no other explanation for why the guest was down and the server needed to be fully rebooted. OVMM showed the server down and the guest down, but I could still ping the server.
I could place the heartbeat on the extremely fast SSDs, but then it would only be in one location on one set of chips. If I need to replace a flash card in that device, I must take that single device down; my database would stay up through ASM on the other device, but I would lose my servers and guests. Not the ideal solution.
I am all ears as to how to 1) better configure the hardware we have, or 2) buy additional hardware if absolutely necessary. I have four physical enclosures, all on separate redundant 8Gb FC cards in our two servers. It seems like that should be enough.
Thanks for all your help, and apologies for the long post.

Avi Miller wrote:
>
OCFS2's timeout needs to be larger than the timeout for your SAN. If your SAN takes 120 seconds to fail over from one path to another, but OCFS2 is set to a 60-second disk heartbeat timeout, then your servers will fence halfway through a potential fabric failover.
So, do you know how to check this setting for the Server Pool heartbeat? Did you say OCFS2 is 60 seconds by default?
>
OCFS2 v1.8 does support multiple global heartbeat regions, and there are plans to allow multiple heartbeat devices in some future version of Oracle VM; however, I have no idea when this will be. Keep in mind, however, that if the enclosure hosting the heartbeat goes down, you will lose everything else hosted on that enclosure as well. If you put it on the large storage repository, all your VM virtual disks disappear too, so you're offline anyway. If you put it on the fast SSDs, all your data has gone away, so you're hosed anyway. Both enclosures appear (to me) to be fairly critical to the running of your VMs, so losing either of them during normal operation would probably cause an outage. Unless I'm missing something?
Yes, since we have multiple enclosures I have separated a lot of the servers: two-node RAC DB servers running on each enclosure (primary on the P2000, which is RAIDed; secondary on the SSD, which is unRAIDed but is a backup), and two different web/app servers split across both as well. So if one enclosure goes down, yes, I would lose one set of servers, but one DB and one web server would still be up: no single point of failure. Even if one of the SSDs holding the database files went down, those are two distinct, physically redundant devices under ASM, and ASM handles having one side of the failure group down until it can be brought back online. But if I lose the enclosure with the heartbeat, I lose all my servers and nothing stays up. It's my only point of frustration in my design.
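For reference, on an Oracle VM server the OCFS2/O2CB cluster timeouts are normally read from /etc/sysconfig/o2cb, and the disk heartbeat timeout in seconds works out to (O2CB_HEARTBEAT_THRESHOLD - 1) * 2. A minimal, illustrative excerpt (stock OCFS2 defaults shown for example only; your installation may ship different values):

O2CB_HEARTBEAT_THRESHOLD=31    # disk heartbeat timeout = (31 - 1) * 2 = 60 seconds
O2CB_IDLE_TIMEOUT_MS=30000     # network idle timeout, in milliseconds

If the SAN can take longer than that to fail over between paths, the threshold would need to be raised to match.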

Similar Messages

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 in production for a few weeks, we are wondering whether we set up our heartbeat/servers correctly.
    We have 2 servers in our Production Server pool. Our LAN, a 192.168.x.x network, has the Virtual IP of the Cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend, I wanted to separate the Heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently I only have one router, which translates our WAN to our LAN of 192.168.x.x. I thought the heartbeat would be strictly internal and would not need to be routed anywhere, just set up as a separate VLAN, which is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers: 192.168.x.x, 192.168.y.y and 192.168.z.z. The y and z networks are not pingable from anything but the servers themselves or one of the guests I have assigned those networks to. I cannot ping them directly from our office network, even through the VPN, which only gives us access to 192.168.x.x.
    I guess I could change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation/guides that would describe how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, you can go to the Hardware tab -> Network. From there you can create new networks, and you can also change, via the Edit this Network pencil icon, which networks manage which roles (i.e. Virtual Machine, Cluster Heartbeat, etc.). In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have issues changing it via the OVM Manager, one thing you can do is change it manually in the /etc/ocfs2/cluster.conf file. Also, if the OVM Manager does let you change it successfully, verify the change in cluster.conf to make sure it actually took effect; that is where the setting lives. However, doing it manually can be tricky, because OVM has a tendency to revert its changes back to the original state, for example after a reboot, and I'm not even sure whether manually making that change is supported. Ideally, when setting up an OVM environment, best practice is to separate your networks as much as possible (public network, private network, management network, cluster heartbeat network, and a live migration network if you do a lot of live migrating; otherwise you can probably place live migration with, say, the management network).
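    For illustration only, the heartbeat network shows up as the node IP addresses in /etc/ocfs2/cluster.conf; a minimal sketch of that file's layout (the node names, IP addresses and cluster name below are placeholders, not values from a real pool):
    cluster:
        node_count = 2
        name = mycluster
    node:
        ip_port = 7777
        ip_address = 192.168.y.1
        number = 0
        name = ovs1
        cluster = mycluster
    node:
        ip_port = 7777
        ip_address = 192.168.y.2
        number = 1
        name = ovs2
        cluster = mycluster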
    Hope that helps,
    Roger

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send error message to SSHR web page from custom PL\SQL procedure called by SSHR workflow.
    For the Manager Self-Service application we've copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL\SQL package procedures that gather details on the workflow and use them in custom notifications sent by the WF.
    What I'm looking for is: if/when the PL\SQL procedure errors, how does one send a failure message back and display it on the SS page?
    Writing information into a log or table at the database level works for trouble-shooting, but we’re looking for something that will provide the end-user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S';):
    EXCEPTION
      WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
      WHEN OTHERS THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
        fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
        fnd_message.set_token('2', substr(sqlerrm, 1, 200));
        fnd_msg_pub.add;
        p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL, in Java we have written something similar to:
    // the bind positions below are placeholders for the actual OUT-parameter bind numbers
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(resultFlagBindNo);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(resultMessageBindNo);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS and build the execution plan and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the unneeded subject areas out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake, so that it holds just a few subject areas, rather than waiting until I build the execution plan and then focusing on the few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iWay adapters, around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me, because once you deploy the adapter RAR in the next step, the appropriate RAR will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?

  • Best Practice in V7.0 : Issues with Sales Planning and Reporting

    I am trying to install the SAP Best Practices for BPC 5.1 on SAP BPC 7.0 SP04. I have done this because I cannot find any Best Practice documents for version 7 as yet.
    I have managed to get through the Administration setup and most of the BPC Administration Configuration Guide; however, I am having a problem with 7.4, Running a Data Management Package - Import, on page 32 of 36. This step involves uploading a data file, Demo_Revenue_Data.txt, into BPC.
    The upload fails with "Invalid dimension ACCOUNT in lookup".
    I believe that this error may be driven by a previous step, 6.4 Creating Script Logic, where the logic for the BP_Sales application was required.
    My question is twofold in that I need to determine:
    1. Has anyone else tried the Best Practices for BPC 5.0 in BPC 7.0?
    2. Does anyone know how to overcome the error when uploading the Demo Revenue into BPC?

    Hi,
    The BPC best practices documents from version 5 also work fine for 7.0, because 7.0 is just an update to 5.x.
    Running an Import involves logic only if you are running the package with the Run Default Logic option enabled.
    Your issue seems to be related to mapping, which means you have to check your Transformation and Conversion files.
    In any case, the best practices documents will not give you information about how to build Transformation and Conversion files.
    Following SAP BPC training will help you build your application more easily and quickly.
    Regards
    Sorin Radulescu

  • Best Practices: Question about Passing DataSet to Crystal through C#

    I have used the tutorial provided by Business Objects and have successfully passed a dataset from C# code to Crystal Reports.
    I have a few questions about "best practices" though.
    It appears that when passing the dataset to Crystal, you no longer have the ability to put SQL Expressions in the report; otherwise errors will occur.
    So I'm trying to come up with a way to have a custom field in the SELECT statement and have it show up on the report as a field. In the example below I created a custom SQL field called TIMES7 in the query:
    "Select CLM_ID, CLM, PAID_111X, (PAID_111X * 7) As TIMES7 From WIKI.MULCRICKET WHERE CLM_ID < 5"
    If I create a formula field in Crystal and set its value to {MULCRICKET.TIMES7} then it works like a hybrid SQL Expression at run-time.
    This does work, but it has issues because the database field doesn't really exist in the design environment, which causes an error when the formula is saved. It does work at run-time, though.
    I was wondering if their was a best practice for this?

    So why are you using a formula that doesn't work? If it errors in the designer, it's telling you it's not supported.
    Drop the formula into a field and hide it if you don't want it displayed; this way it gets into the record selection formula.
    "Best Practices" is don't use formulae that won't verify in the Designer. "If it doesn't work in the designer it won't work in code"
    If you need to add an "unknown" field at run time then use RAS to insert the field.

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation, and core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension corresponds to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load from the source systems to staging, but only those data that are not equal to the previously downloaded CRC checksum, so only changed or new data go to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice for bringing three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Kidding aside, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables, which will impact run times and double the querying against the source database. What is the best practice and the proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to type the calc name manually), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off from within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you can offer on getting the two products working together would be GREATLY appreciated!
    Robert

    As far as I know, addUser(User user){ ... } is much more useful, for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it is very painful to write a method with comma-separated parameters.

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... is this considered the best way to develop an app, even for small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that each use case makes a good candidate for being delivered as a bounded task flow. That doesn't mean that every single page needs to go into its own task flow. For general bounded task flow best practices, have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank

  • Bean best practice question

    Simple questions (hopefully): I know how to code this, but I just want some advice on the best way to do the following:
    1. User enters data into HTML form and submits
    2. Some Java at the backend grabs these details and emails them off somewhere
    I am thinking of doing the following, but what’s the best way?
    1. HTML form submits and data is sent directly to a JavaBean (FormBean.java)
    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(); is this bad practice? Do I need a second bean, sendMail.java? Or is this completely the wrong way to do things, i.e. should I do this entirely in a servlet, with only one bean to grab the data (FormBean), and then access it from the servlet?
    Just a bit confused on what’s best practice for this stuff?
    Thanks!

    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java?
    A better approach is this:
    a) Have all the form data in the form bean.
    b) Write sendMail() in an altogether different class, as an action.
    c) Send the form bean as a parameter to sendMail() for processing and sending the email.
    This way your sendMail() becomes a kind of service. Tomorrow you might have some other data which you have to send in an email; in that case, you just reuse the sendMail() method. Otherwise, if you have sendMail() in the form bean itself and there are many form beans, you would have to write sendMail() in every form bean, which is bad practice. One principle of OOAD is to separate functionality that is redundant across your classes and make it a separate module. If there are changes to the sendMail() functionality then, by having it in one module, you only have to change it in one place.
    Or is this completely the wrong way to do things i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?
    You can have a servlet that acts as a controller: it receives the request parameters, constructs the form bean and invokes the appropriate action (in your case sendMail()). This is the same as in an MVC framework. Instead of re-inventing the wheel to create a servlet controller, form bean, action, etc., you could use one of the several MVC frameworks available, such as Struts or Spring MVC.
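    As a rough sketch of that separation (the class and method bodies below are illustrative placeholders, not code from this thread):
    // FormBean.java - the form bean only carries the submitted data.
    public class FormBean {
        private String email;
        private String message;
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }
    // MailService.java - the mail logic lives in its own class, so any caller
    // can reuse it with whatever bean supplies the data.
    public class MailService {
        public void sendMail(FormBean form) {
            // build and send the email here, e.g. via the JavaMail API
        }
    }
    Keeping the two apart means the bean stays a plain data holder and the mailing concern has exactly one home.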

  • DNS best practice question

    Hello,
    we currently have an issue regarding DNS in a multi-domain forest.
    First of all, in the forest there are 5 domains (names changed):
    dom1.domain.org
    sub.dom1.domain.org
    dom2.domain.org
    dom1.url.de
    dom.de
    As you can see, it's a forest full of domain names that don't match ;-)
    We also have multiple sites, and per network requirements, replication goes through the domain dom1.domain.org.
    All other Domains replicate only with this one.
    The DNS is currently set up as follows:
    Each domain controller holds its own domain as a primary AD-integrated zone in DNS (domain-wide replication).
    All the others are set up as forest-wide AD-integrated stub zones.
    At each startup we get Event 4515 on the DCs, saying that a zone is available twice.
    So, I have to troubleshoot this infrastructure now.
    Can you tell me what the best practice is here to set up DNS correctly, with as little replication traffic as possible?
    Best regards

    By default, the DNS zone replication scope is domain-wide, except for the _MSDCS zone; _MSDCS zone replication should be forest-wide. Beyond that, the replication scope can be decided per your business requirements.
    Regards, Biswajit

  • Group by best practice question

    Consider this example:
    TABLE: SALES_DATA
    firm_id|sales_amt|d_date|d_data
    415|45|20090615|Lincoln Financial
    415|30|20090531|Lincoln AG
    416|10|20081005|AM General
    416|20|20080115|AM General Inc.
    I want the output to be grouped by firm_id with the sum of sales_amt and the d_data
    that corresponds to the latest d_date (i.e. max(d_date))
    Proposed query:
    select firm_id, sum(sales_amt) total_sales,
           substr(max(d_data), instr(max(d_data), '~') + 1) firm_name
    from (select firm_id, sales_amt, d_date || '~' || d_data d_data from sales_data)
    group by firm_id;
    output is as expected:
    firm_id|total_sales|firm_name
    415|75|Lincoln Financial
    416|30|AM General
    I know this works, but my QUESTION is: is there a better way to do this, and is the above approach of concatenating columns when you want to aggregate multiple columns against any best practices?
    Thanks very much!

    Here's a way that uses analytics (I just like them):
    SQL> select * from sales_data;
                 FIRM_ID            SALES_AMT D_DATE               D_DATA
                     415                   45 15-JUN-2009 00:00:00 Lincoln Financial
                     415                   30 31-MAY-2009 00:00:00 Lincoln AG
                     416                   10 05-OCT-2008 00:00:00 AM General
                     416                   20 15-JAN-2008 00:00:00 AM General Inc.
    SQL> select firm_id, sum_amt, d_data
      2  from
      3  (
      4     select firm_id, d_data
      5           ,sum(sales_amt) over (partition by firm_id) sum_amt
      6           ,row_number() over (partition by firm_id order by d_date desc) rn
      7     from   sales_data
      8  )
      9  where rn = 1
    10  ;
                 FIRM_ID              SUM_AMT D_DATA
                     415                   75 Lincoln Financial
                     416                   30 AM General

  • HyperV 2012 best practice question (storage of guests)

    I was reading this post:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    and I saw two things: one where it says Fibre Channel is not supported, and one where it says loopback is not supported.
    So my question is: for a best practice, would local storage (say a locally attached RAID 10) be considered best practice, or a shared SAN LUN? And if a SAN, would virtualized storage like IBM's SVC be supported?

    Using a SoFS (Scale-Out File Server) with SMB3 as storage for Hyper-V and SQL Server/clusters is Microsoft's number one recommendation, and you will see more and more push for that design in every coming version.
    The top features for me in that combo: you get 100% of all features on day one when a new version comes out, Microsoft SMB 3.x has some really nice things in it, and you can use all the standard Ethernet equipment you already have and have known for years. If you need extra speed you can add some RDMA cards (still Ethernet), and if you do a live migration between any combination of clusters and single nodes you never need to move the data and config files; they always stay on \\SoFS\VMs.
    All the information you need is on http://smb3.info ; Jose, my Mr. SoFS, collects and blogs about all the features and step-by-step guides to test and prove this even on one single notebook, which is very cool.
    As a backend for the SoFS you have many choices, from the new wave of JBODs together with Storage Spaces up to old-style SAN boxes with or without special features. Always check the Windows Server Catalog for supported configurations, especially on the JBOD side; make sure you get one with R2 support and also with SES (SCSI Enclosure Services) support. Jose also blogs about those topics very well.
    https://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    http://www.windowsservercatalog.com/results.aspx?&chtext=&cstext=&csttext=&chbtext=&bCatID=1642&cpID=0&avc=10&ava=0&avq=0&OR=1&PGS=25&ready=0
    Hyper-V clusters scale up to 64 nodes and SoFS up to 8 nodes, depending on your scaling needs, and having storage split across separate datacenters should be a good start :-)
    Just let me know if you need some advice after doing a design session.
    Udo
