TopLink Cache "back reference" best practice question.

I have too many objects being stored in the Identity map. I'm using the default Soft/Weak map. The problem is that every object connects to every other object. Since every object can somehow be traced to every other one, nothing is ever removed.
Let's look at an example. I have two objects, Project and Task. A project has a collection of tasks and the tasks have a "back reference" to the project. If just one task is in the "soft" section of the Identity map, then all the other tasks, which are in the weak section, aren't eligible for garbage collection. The task has a simple OneToOne mapping to the project, and I'm using a ValueHolder to hold the project. This particular instance doesn't use indirection, but there are many other objects with a similar setup that do.
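Roughly, the setup looks like this (a simplified sketch using EclipseLink's ValueHolderInterface; the real classes have more to them):

import org.eclipse.persistence.indirection.ValueHolder;
import org.eclipse.persistence.indirection.ValueHolderInterface;

public class Task {
    // The "back reference": as long as this Task is held in the Identity
    // map, it also pins the Project (and, through the Project's task
    // collection, every sibling Task).
    private ValueHolderInterface project = new ValueHolder();

    public Project getProject() {
        return (Project) project.getValue();
    }

    public void setProject(Project p) {
        project.setValue(p);
    }
}

class Project {
    private String name;

    public String getName() {
        return name;
    }
}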
These mappings are really convenient for reporting. For example, through any task I can easily print the project's name; no query, joining, etc. is necessary. This is critical to the app because we have a dynamic report builder where end users can print off anything they want. This flexibility through all of our mappings enables us to build a powerful report builder. Besides the report builder there are other modules that work in a similar fashion. Needless to say, the application was built with the assumption that these "back references" exist. That said, I need to remove objects from the identity map.
Is there a way to manually remove / invalidate the back references and have them repopulated on access, similar to how indirection works?
Is it possible to re-init an indirection value holder?
Does anyone have any suggestions?
I've looked at using an invalidation policy but that doesn't seem to remove anything. It still keeps the objects in memory. It just refreshes the object on access.

Soft references should still garbage collect when memory is low, so you should still be ok memory wise, even with your cycles. If you want memory to be freed more aggressively, then use a Weak cache instead of Soft, or decrease your Soft cache size.
There is no refresh() or revert() API on a ValueHolder, but if you refresh the source object, it will revert all of its relationships. Having a refresh() or revert() API on ValueHolder would be useful, so feel free to log an enhancement request on EclipseLink for that. You may also be able to cook something up using the mapping and readFromRowIntoObject(). Another option would be to just set the relationship to null and invalidate the object so it is refreshed when next accessed.
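That last option might look roughly like this (a sketch only, assuming a native EclipseLink Session and the Task/Project classes from the question):

import org.eclipse.persistence.sessions.Session;

public class BackReferenceTrimmer {
    // Clear the back reference so this Task stops pinning its Project in
    // the Identity map, then invalidate the Task so EclipseLink refreshes
    // it (rebuilding the relationship) the next time it is accessed.
    public static void release(Session session, Task task) {
        task.setProject(null);
        session.getIdentityMapAccessor().invalidateObject(task);
    }
}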
James : http://www.eclipselink.org

Similar Messages

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send error message to SSHR web page from custom PL\SQL procedure called by SSHR workflow.
    For the Manager Self-Service application we’ve copied various workflows which were modified to meet business needs. Part of this exercise was creating custom PL\SQL package procedures that gather details on the WF and use them in custom notifications sent by the WF.
    What I’m looking for is: if/when the PL\SQL procedure errors, how does one send a failure message back and display it on the SS page?
    Writing information into a log or table at the database level works for trouble-shooting, but we’re looking for something that will provide the end-user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise any message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S'):
    EXCEPTION
        WHEN app_exception.application_exception THEN
            p_result_flag := 'E';
            p_result_message := hr_utility.get_message;
        WHEN OTHERS THEN
            p_result_flag := 'E';
            fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
            fnd_message.set_token('2', substr(sqlerrm, 1, 200));
            fnd_msg_pub.add;
            p_result_message := fnd_msg_pub.get_detail;
    END;
    After executing the PL/SQL, in Java we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    // substitute the actual bind positions of the two OUT parameters
    String resultFlag = orclStmt.getString(resultFlagBindNo);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(resultMessageBindNo);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data on the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need, and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the subject areas that aren't needed, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have the container contain just a few subject areas, rather than wait till I build the execution plan and then focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter rar in the next step, the appropriate rar will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values, not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion for someone later on who is troubleshooting, because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works if we leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?


  • Best Practices question re Windows XP & Parallels 4.0 installation

    To Apple Gurus here:
    I am a new convert from Windows to Mac. Just bought a MacBook Pro (4G/320G, 2.4GHz) and a copy of Parallels 4.0. I have an OEM copy of Windows XP Pro & Photoshop CS4 for Windows. The question before me is in what sequence I should go about installing Windows & Parallels. Logically, I think I should install:
    1) Windows XP using Boot Camp first,
    2) then install PhotoShop CS3 for Windows in the Windows partition
    3) then install MS Office
    4) and finally install Parallels 4.0.
    Is this the right sequence or indeed a "Best Practices" scenario?
    Any tips for a 'Best Practice' installation will be highly appreciated.
    Also, is anyone here using the SAP GUI for Mac OS-X & Citrix Presentation Server Client for Mac OS 10.0 (now renamed XenApp)?

    First, my creds. I don't consider myself an Apple guru. I have been running a MB since last December and at that time, I installed Parallels 3.0. If I remember correctly, after installing Parallels, I installed Windows Vista, and then Office and while I was impressed to be able to run MS Office on a MB, it took what I considered to be TOO long to load and then the performance was not that great. So, mostly I've stayed on the Mac side of the operation and only loaded Parallels if I had to run some MS program.
    About a week ago I got an offer from Parallels to buy 4.0 at an upgrade price of $40. I went with the box version since it was the same price as the download version. Tonight I got my courage up to do the upgrade. I was leery because I thought I might have to reinstall all my MS stuff (Office Pro, etc.). When I put the disk in to install the program, I received a message saying there was a later edition available, with the option to download it or install the box edition. After a few minutes of thought, I decided to do the download version. I would still recommend getting the box version since you get a manual with it, although the download version comes with a PDF manual.
    When I finished, I then clicked on upgrade/install and the installation proceeded without much input from me. Lo and behold, the installation finished and it booted up to my previous Vista installation with all my programs intact.
    So far, I must say I'm VERY impressed with this upgraded Parallels edition. It seems to load much faster, the programs are more responsive, Vista so far seems very stable and the ability to switch back and forth from Windows to OS X is totally better. From what I've seen so far, I would highly recommend anyone using Parallels 3.0 get this upgrade. While I've only been using it a few hours, it seems like the best upgrade for ANY program/system (Windows 95-->Vista) that I've ever done.
    A few months ago I saw a piece on an upgraded version of Fusion which stated that it moved Fusion ahead of Parallels. If that were so, I think the ball must be back in Parallels' court with 4.0.

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable(work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create a MyItemDetail. Is it better to leave detail as a null reference, or should a MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources whereas a bunch of null references is not. But on the receiving end, you have to account for the MyItemDetail reference being null or not - is this a hassle or not?
    I looked for this at [url http://www.javapractices.com]Java Practices but found nothing.
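    To make the trade-off concrete, here is a rough sketch of the "empty object" option alongside the null check (the info field and accessors are invented for illustration):

    import java.io.Serializable;

    public class MyItemDetail implements Serializable {
        // A single shared "empty" instance (the Null Object pattern). Within
        // one serialized response, repeated references to it are written
        // once, so it adds little weight compared with null.
        public static final MyItemDetail EMPTY = new MyItemDetail("");

        private final String info;

        public MyItemDetail(String info) {
            this.info = info;
        }

        public String getInfo() {
            return info;
        }
    }

    // Receiving end, option 1 (null reference): callers must check.
    //     String info = (item.detail == null) ? "" : item.detail.getInfo();
    // Receiving end, option 2 (empty object): callers can always call
    //     item.detail.getInfo() if detail defaults to MyItemDetail.EMPTY.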

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > I do a lot of database work though. And for that null means something specific.
    Sure, return null if you have a context where null means something - for example, that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work, and there null does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation, and core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load data from the source systems to staging, but only those data that are not equal to the previously downloaded CRC checksum, so only changed or new data goes to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Kidding aside, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found was to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you can give on getting the two products working together would be GREATLY appreciated!
    Robert

    As I know, addUser(User user){ ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 a few weeks in production, we are wondering if we set up our Heartbeat /Servers correctly.
    We have 2 servers in our Production Server pool. Our LAN, a 192.168.x.x network, has the Virtual IP of the Cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend, I wanted to separate the Heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently, I only have one router that translate our WAN to our LAN of 192.168.x.x. I thought the heartbeat would strictly be internal and would not need to be routed anywhere and just set up as a separate VLAN and this is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers. 192.168.x.x, 192.168.y.y and 192.168.z.z. y and z are not pingable from anything but the servers themselves or one of the guests that I have assigned that network to. I can not ping them directly from our office network, even through the VPN which only gives us access to 192.168.x.x.
    I guess I can change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation/guides that would describe how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, what you could do is go to the Hardware tab -> Network. Within here you can create new networks and also change, via the Edit this Network pencil icon, what networks should manage what roles (i.e. Virtual Machine, Cluster Heartbeat, etc.).
    In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have issues changing it via the OVM Manager, one thing you could do is change it manually via the /etc/ocfs2/cluster.conf file. Also, if it successfully lets you change it via the OVM Manager, verify it within cluster.conf to ensure it actually did your change; this is where that is being set. However, doing it manually can be tricky because OVM has a tendency to revert its changes back to their original state, say after a reboot. Of course, I'm not even sure if they support you manually making that change.
    Ideally, when setting up an OVM environment, best practice would be to separate your networks as much as possible, i.e. public network, private network, management network, cluster heartbeat network, and a live migration network if you do a lot of live migrating (otherwise you can probably place it with, say, the management network).
    Hope that helps,
    Roger

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... is this considered the best way to develop an app, even the small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that each use case makes a bounded task flow a good candidate for delivering it. That doesn't mean that every single page needs to go into its own task flow. For general bounded task flow best practices, have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank

  • Bean best practice question

    Simple questions (hopefully), I know how to code this but just want some advice on the best way to do the following:
    1. User enters data into HTML form and submits
    2. Some Java at the backend grabs these details and emails them off somewhere
    I am thinking of doing the following, but what’s the best way?
    1. HTML form submits and data is sent directly to a JavaBean (FormBean.java)
    2. FormBean.java contains standard getters/setters but also contains a method called sendMail() - is this bad practice? Do I need a second bean, SendMail.java? Or is this completely the wrong way to do things, i.e. should I do this entirely in a servlet, with only one bean to grab the data (FormBean) which I then access from the servlet?
    Just a bit confused on what’s best practice for this stuff?
    Thanks!

    > 2. FormBean.java contains standard getters/setters but also contains a method called sendMail() - is this bad practice? Do I need a second bean, SendMail.java?
    A better approach is this:
    a) Have all the form data in the form bean.
    b) Write sendMail in an altogether different class, as an action.
    c) Send the form bean as a parameter to sendMail for processing and sending an email.
    This way your sendMail() will become a kind of service. Tomorrow you might have some other data which you have to send in an email; in that case, you just reuse the sendMail() method. Otherwise, if you have sendMail() in the form bean itself and there are many form beans, you would have to write sendMail() in every form bean, which is bad practice. One principle of OOAD is to separate out functionality which is redundant across your classes and make it a separate module. If there are changes to the sendMail() functionality then, by having it in one module, you only have to change it in one place.
    > Or is this completely the wrong way to do things, i.e. should I do this entirely in a servlet with only one bean to grab the data (FormBean) and then access it from the servlet?
    You can have a servlet which acts as a controller: it receives the request parameters, constructs the form bean and invokes the appropriate action (in your case sendMail()). This is the same as an MVC framework. Instead of re-inventing the wheel to create a servlet controller, form bean, action etc., you could use one of the several MVC frameworks available in the market, such as Struts or Spring MVC.
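    A rough sketch of that separation (the bean's fields and the service class name are invented for illustration; actual mail delivery is stubbed out):

    import java.io.Serializable;

    // Holds only the submitted form data.
    class FormBean implements Serializable {
        private String name;
        private String email;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    // A separate action/service class: any bean (or future data) can reuse it.
    class MailService {
        public void sendMail(FormBean form) {
            String body = "From: " + form.getName() + " <" + form.getEmail() + ">";
            // hand the message off to JavaMail (or similar) here
            System.out.println("sending: " + body);
        }
    }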

  • Design Pattern / Best Practice Question

    Hi,
    I have been using Flex for a while now, but there is a
    scenario which I still have not found a solution I'm entirely happy
    with. I'm wondering if anyone else out there might have suggestions
    on a design pattern or best practice.
    Suppose I have a view which depends on model data which
    resides in some back end systems. That model data may or may not
    have been loaded (e.g. via a web service or remote object call) at
    the time the view is displayed.
    I don't know if the user will ever visit this part of the
    application so I would prefer to defer retrieval of the data until
    the user actually navigates to this view. Or I want to retrieve the
    data each time the view is displayed because the data is dynamic
    and could change between one presentation of the view and the next.
    Because the data comes from several systems, I cannot simply
    make one service call and display the view when it completes and
    all the data is available. I need to call several services which
    could complete in any order but I only want to display my view
    after I know all of them have completed and all of the model data
    is available. Otherwise, I can present the user an incomplete view
    (e.g. some combo boxes are empty until the corresponding service
    call to get the data completes).
    The solution I like best so far is to dispatch a single event
    (I am using Cairngorm) handled by a single command which acts as
    the caller and responder for all of the services. This command then
    remembers which responses it has received and dispatches another
    event to navigate to the view once all the results have returned.
    If the services being called are used in different
    combinations on different screens, this results in proliferation of
    events and commands. An event and command for each service and
    additional events and commands to bundle the services and the
    handling of their responses in the right combinations for each of
    the views.
    Another approach is to have some helper class listen for all
    of the model changes and only display the view when the model
    enters some state that is acceptable. It is sometimes difficult to
    determine just by looking at the model whether it is in the right
    state (e.g. how can I tell that a collection is the new collection
    that should just have been requested versus an old one lingering
    from a previous call). The logic required can get kind of
    convoluted and brittle.
    Basically, all of the solutions I've come up with so far seem
    less than ideal and a little hackish. I keep thinking there is some
    elegant solution out there that I am just missing ... but so far,
    no luck finding it. Thoughts?
    Thanks.
    Bill

    i think a service class is right - to coordinate your calls.
    i would have 1 event per call (so you could listen to individual
    responses if you wanted to).
    then i would use a flag. if you want to check for staleness,
    you would probably want two objects to map your service flag to
    lastRequested and lastCompleted. when you check, check if it's
    completed, and if it's not stale and that your lastRequested is
    less than lastCompleted (meaning that you're not currently waiting,
    i.e. you've returned since making a request). then make the request
    and update your lastRequested.
    here's a snippet of what i mean.
    ./paul
    public static const SVC1_LOADED:int = 1;
    public static const SVC2_LOADED:int = 2;
    public static const SVC3_LOADED:int = 4;
    public static const SVCALL_LOADED:int = 7;
    private var completedFlag:int = 0;
    // maps each service flag to the getTimer() time of its last completion
    private var lastCompleted:Object = {};
    then each call would have its own callback.
    private function onSvc1Complete( evt:Event ):void {
        completedFlag |= SVC1_LOADED;
        lastCompleted[ SVC1_LOADED ] = getTimer();
        dispatchEvent( new Event("svc1complete") );
        checkDone();
    }
    private function checkDone():void {
        if( completedFlag == SVCALL_LOADED )
            dispatchEvent( new Event("allLoaded") );
    }

  • Group by best practice question

    Consider this example:
    TABLE: SALES_DATA
    firm_id|sales_amt|d_date|d_data
    415|45|20090615|Lincoln Financial
    415|30|20090531|Lincoln AG
    416|10|20081005|AM General
    416|20|20080115|AM General Inc.
    I want the output to be grouped by firm_id with the sum of sales_amt and the d_data
    that corresponds to the latest d_date (i.e. max(d_date))
    Proposed query:
    select firm_id, sum(sales_amt) total_sales,
           substr(max(d_data), instr(max(d_data), '~') + 1) firm_name
    from (
        select firm_id, sales_amt, d_date || '~' || d_data d_data
        from sales_data
    )
    group by firm_id;
    output is as expected:
    firm_id|total_sales|firm_name
    415|75|Lincoln Financial
    416|30|AM General
    I know this works, but my QUESTION is: is there a better way to do this, and does the above approach of concatenating columns when you want to aggregate multiple columns go against any best practices?
    Thanks very much!

    Here's a way that uses analytics (I just like them):
    SQL> select * from sales_data;
                 FIRM_ID            SALES_AMT D_DATE               D_DATA
                     415                   45 15-JUN-2009 00:00:00 Lincoln Financial
                     415                   30 31-MAY-2009 00:00:00 Lincoln AG
                     416                   10 05-OCT-2008 00:00:00 AM General
                     416                   20 15-JAN-2008 00:00:00 AM General Inc.
    SQL> select firm_id, sum_amt, d_data
      2  from
      3  (
      4     select firm_id, d_data
      5           ,sum(sales_amt) over (partition by firm_id) sum_amt
      6           ,row_number() over (partition by firm_id order by d_date desc) rn
      7     from   sales_data
      8  )
      9  where rn = 1
    10  ;
                 FIRM_ID              SUM_AMT D_DATA
                     415                   75 Lincoln Financial
                     416                   30 AM General

  • HyperV 2012 best practice question (storage of guests)

    I was reading this post:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    and I saw two things: one where it says Fibre Channel is not supported, and one where it says loopback is not supported.
    So my question is: for a best practice, would local storage (say a locally attached RAID 10) be considered best practice, or a shared SAN LUN? And if a SAN, would virtualized storage like IBM's SVC be supported?

    Using a SoFS (Scale-Out File Server) with SMB3 as the storage for Hyper-V and SQL Server/clusters is Microsoft's number one recommendation, and you will see more and more push for that design in every next version.
    The top features for me in that combo: you get 100% of all features on day one when a new version comes out, Microsoft SMB 3.x has some really sexy things in it, and you can use all the standard Ethernet stuff you have and have known for years. If you need super speed you can do that with some RDMA cards, still Ethernet :-) and if you do a live migration between any combo of clusters and single nodes you never need to move the data and config files :-) they always stay on
    \\SoFS\VMs :-)
    All the information you need is on http://smb3.info - Jose, my Mr. SoFS, collects and blogs about all the features, with step-by-step guides to test and prove it even on one single notebook, very very cool.
    As a backend for the SoFS you have many choices, from the new wave of JBODs together with Storage Spaces up to the old-style SAN boxes with or without special features. Always check the Windows Server Catalog for supported configs, especially on the JBOD side; make sure you get one with R2 support and also with SES support (SCSI Enclosure Services). Jose also blogs about those topics very well.
    https://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    http://www.windowsservercatalog.com/results.aspx?&chtext=&cstext=&csttext=&chbtext=&bCatID=1642&cpID=0&avc=10&ava=0&avq=0&OR=1&PGS=25&ready=0
    A Hyper-V cluster scales up to 64 nodes and a SoFS up to 8 nodes, depending on your scaling needs, and having storage split over separate data centers should be a good start :-)
    Just let me know if you need some advice after doing a design session.
    Udo
