Best practice question - hosting in virtual and physical

Hello,
I am looking for some feedback. I'm not sure if this is the right forum or whether the database forum would be better, so I'm trying here.
We are looking at moving into virtualization on Oracle T4 servers using zones.
Some possibly relevant background detail:
* The T4 servers will be hosting database, content management and web servers (all Oracle products). I'm only looking at the Oracle database servers for this question.
* In general, we have server pairs. For a specific database (or group of databases), I have a prod server and a dev server. For another set of databases I have another prod and dev server. This way, I can patch the OS for one set but maybe not the others (for whatever reason).
Specific scenario:
I have an older v440 server that I need to lifecycle. The production server is on a newer v490 which is still under support for another year or so. My first thought was that I could P2V the v440 into a Solaris Zone on the T4. The direction we are taking is virtualization and consolidation to save money and make better use of our hardware. That is all great.
However, my concern is that my dev server would then be virtualized while my production server stays physical. The OS patches are different because the hardware is different, although the patches for the v440 and v490 they run on today would be slightly different as well. Oracle Database is certified on both, and the database patches are the same regardless of the hardware since it is the same operating system. From a change management point of view my patching process should be identical, so that is less of a concern as well.
However, I am not sure whether this is a good idea and am wondering if anyone can advise based on their experience. What can I expect to run differently in a Zone compared to physical? This may not be a recommended practice, but I have not found much saying it is not a viable option either.
I am considering moving production to a T4 Solaris Zone as well to keep the pair the same; however, that possibly means dropping a server that still has some years remaining (yes, it could be used for something else, but maybe not).
Is this a viable solution - hosting dev on virtual and prod on physical? What problems could I expect?
Thanks!
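For reference, a rough sketch of the P2V flow described above, assuming the v440 runs Solaris 10 and the T4 global zone runs Solaris 11; the zone, server and archive names below are placeholders, not anything taken from this thread:

# on the v440 (source): check P2V readiness, then create a flash archive
zonep2vchk
flarcreate -S -n devdb /net/nfs-server/p2v/devdb.flar

# on the T4 (target): create a solaris10 branded zone and install it from the archive
zonecfg -z devdb 'create -t SYSsolaris10; set zonepath=/zones/devdb'
zoneadm -z devdb install -u -a /net/nfs-server/p2v/devdb.flar
zoneadm -z devdb boot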

A T4-2, or a T4-4 with two CPUs, has two PCIe buses (root complexes); a T4-4 with four CPUs has four.
Each PCIe bus can be assigned to a root domain, which gets direct access to the physical devices in the PCIe slots of that bus.
You can run zones inside these root domains.
IMHO, such a zone will run your prod DB better than the v490 does.
On a T4-4 you can use one root domain for production and another root domain for the dev environment.
Your web environments can all run inside zones in another root domain.
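A minimal sketch of what that layout could look like, assuming Oracle VM Server for SPARC (ldm) on the T4 and a zone inside the root domain; domain names, bus names and resource sizes are illustrative only:

# from the control domain: build a root domain and hand it a PCIe root complex
ldm add-domain dbprod-rd
ldm add-vcpu 32 dbprod-rd
ldm add-memory 128G dbprod-rd
ldm remove-io pci_1 primary     # free the root complex from the control domain
                                # (needs a delayed reconfiguration / reboot of primary)
ldm add-io pci_1 dbprod-rd      # give that bus and its slots to the root domain
ldm bind-domain dbprod-rd
ldm start-domain dbprod-rd

# inside the running root domain: create and boot a zone for the database
zonecfg -z proddb 'create; set zonepath=/zones/proddb'
zoneadm -z proddb install
zoneadm -z proddb boot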

Similar Messages

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we have copied various workflows which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow and use them in custom notifications sent by the workflow.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the Self-Service page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will provide the end user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We implemented the same kind of requirement a long time ago.
    We define our PL/SQL procedures with two OUT parameters:
    1) Result Type (S: Success, E: Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S';):
    EXCEPTION
      WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
      WHEN OTHERS THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
        fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
        fnd_message.set_token('2', substr(sqlerrm, 1, 200));
        fnd_msg_pub.add;
        p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL from Java, we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    // resultFlagBindNo / resultMessageBindNo are the bind positions of the two OUT parameters
    String resultFlag = orclStmt.getString(resultFlagBindNo);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(resultMessageBindNo);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan for JUST THE 4 SUBJECT AREAS I need and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the subject areas I don't need, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake, so that the container holds just a few subject areas, rather than waiting until I build the execution plan and only then focusing on the few subject areas I need.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter RAR in the next step, the appropriate RAR will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.
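    If copying is the route taken, a minimal sketch of mirroring the configured 3rdParty tree from node 1 to node 2 might look like the following; the adapter home path shown is purely hypothetical and should be replaced with the actual installation path:
    # run from node 2 after the adapter has been configured on node 1
    rsync -av node1:/u01/oracle/Middleware/Oracle_SOA1/soa/thirdparty/ApplicationAdapters/ \
              /u01/oracle/Middleware/Oracle_SOA1/soa/thirdparty/ApplicationAdapters/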

    What is the version of operating system. If you are any OS version lower than Windows 2012 then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel name in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?

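    If the channel name is carried in the .jca/.wsdl resources (which the sbconsole search-and-replace does not rewrite), one workaround is to edit the exported configuration jar before importing it into QA. A rough sketch, with purely hypothetical channel names:
    # export the OSB configuration jar from DEV, then:
    mkdir sbconfig && cd sbconfig
    jar xf ../sbconfig-dev.jar
    grep -rl 'DEV_SAP_CHANNEL' . | xargs sed -i 's/DEV_SAP_CHANNEL/QA_SAP_CHANNEL/g'
    jar cf ../sbconfig-qa.jar .
    # import sbconfig-qa.jar into the QA environment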

  • Any Best Practices (i.e. DOs and Don'ts) for source to target MAPPINGs?

    Hi Experts,
    Any Best Practices (i.e. DOs and Don'ts) for source to target MAPPINGs?
    I will appreciate any hints on this.
    Thanks

    Hi,
    I am assuming that you are asking about transformation mappings between source and target.
    1) Use one-to-one mappings where possible.
    2) Avoid complex calculations.
    3) If calculations are required, use a routine instead of formulas.
    4) If possible, avoid field routines (you can use a start or end routine instead).
    5) Do not map unwanted fields (they add unnecessary processing time and database space, and can cause problems while activating DSO data).
    6) Avoid the master-data read mapping option; use a routine to fetch the master data instead.
    7) There is no need to use an InfoSource.
    8) Use standard time conversions for time fields.
    Generally these are the things we need to consider while mapping.
    Regards,
    Satya

  • Kernel, virtual and physical memory

    Hi,
    I would like to get a clarification of the uses of the terms kernel, virtual and physical memory. Functions like kmem_alloc provide non-paged 'kernel' memory, but is this a pointer to a physical memory address, or is it similar to a virtual address that gets mapped by the memory management when it is referenced like virtual memory?
    Is there a way to determine a mapping between a virtual address and a physical address? (like vtophys in bsd or virt_to_phys/virt_to_bus in linux) ?
    Is all memory that is normally used within a driver (e.g. messages through a STREAM or DMA areas) 'kernel' memory, and is this a direct physical memory address (as asked above)?
    Many thanks
    S.

    Hi,
    Most kernel code and device drivers will only use virtual addresses. Yes, that is right!
    But I want to access a specific physical address,
    because a bus bridge chip doing PCI bus-master transfers accesses a specific physical address directly.
    My system configuration is below:
    Platform: IA32
    Address map:
    0x00000000 - 0x000FFFFF: Main Memory, VGA Frame Buffer, Boot ROM
    0x00100000 - 0x0FFFFFFF : Main Memory(DIMM) - 255MB*
    0x10000000 - 0xFEBFFFFF: PCI bus memory space
    0xFEC00000 - 0xFFF7FFFF: Not Used
    0xFFFF8000 - 0xFFFFFFFF: Boot ROM
    *My system has 128MB DIMM on board.
    The bus bridge chip accesses a physical address at 0x06000000.
    I tested whether it would be set up correctly by using Linux, as follows:
    1) Memory size is forced down to 96MB (kernel boot parameter: mem=96M).
    2) Access the physical address space via a virtual address in kernel space:
    init_module() {
        va = ioremap(0x06000000, size);   /* map the physical range into kernel virtual space */
        *va = value;
        value = *(va + n);
    }
    3) This works well.
    I tried the same approach on Solaris, but I cannot get it working.
    1) Memory size is forced down to 96MB.
    In /etc/system:
    set physmem=23723   <- the page size is 4K on my system, so this is a little under 96MB.
    2) Access the physical address space via a virtual address in kernel space:
    xxx_attach() {
        ddi_regs_map_setup(...);   /* map the register/memory range */
        ddi_putX(...);             /* write a value */
        value = ddi_getX(...);     /* read it back */
    }
    driver.conf:
    name="xxx" class="sysbus" reg=0,0x06000000,size;
    I also tried accessing it via /dev/mem.
    That also went wrong.
    Does anyone have a good idea?
    Toru OHATA

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure. This is being deployed in an enterprise environment. I have questions around the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" type alarm. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand what the difference is between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running "ncs stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!
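    For the process question, a small sketch of what could be checked from the appliance's admin CLI rather than the root shell; the exact output varies by release, so treat the commands below as an assumption to verify on your version:
    ncs status       # lists the state of the PI/NCS services (database, NMS server, health monitor, ...)
    ncs stop         # the stop/start the poster already tried; useful for confirming which services disappear
    ncs start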

    Amihan_Zerrudo wrote:
    1) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive (did I get this right?).
    Reply: You should look at the functional requirements rather than the cost. If the bean needs to be session scoped (e.g., to maintain the logged-in user), then do so. If it just needs to be request scoped (e.g., single-page form data), then keep it request scoped.
    2) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will that strain my server resources? Right now I am using the Sun GlassFish Server.
    Reply: It will certainly consume resources. Just supply enough CPU speed and memory to the server. You cannot expect a web server running on a Pentium 500MHz with 256MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
    3) Can you suggest best practices in memory management given the architecture I described above?
    Reply: Just write code so that it doesn't unnecessarily eat memory; only allocate memory when your application needs to. You should let the hardware depend on the application requirements, not the application depend on the hardware specs.
    4) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish Server take care of that, or will I have to purchase a powerful server?
    Reply: GlassFish is just application server software, not server hardware. Your concerns are rather hardware related.

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to type the calc name manually), if I want to run the default calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts on the Informatica server that were kicked off from within Informatica, but we are trying to move away from shell scripts and instead keep the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you have with the two products working together I would GREATLY appreciate it!
    Robert
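    For question 3, one minimal sketch is to have the Command task call the MAXL shell (essmsh) directly; the host, credentials, script path and application/database names below are placeholders, and the exact flags should be checked against your essmsh version:
    # Informatica Command task command line (pre-load step)
    essmsh -l admin password -s essbase-host /infa/scripts/pre_load.mxl
    # where pre_load.mxl contains MAXL such as:
    #   execute calculation default on database MyApp.MyDb;
    #   logout;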

    As far as I know, addUser(User user) { ... } is much more useful, for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it is very painful to write a method with comma-separated parameters.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but think that perhaps I need general community advise on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to the post: the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • IVR (Inter-VSAN Routing) - Best practices questions

    Hi there,
    We have a situation where we will have multiple customers hosted on a 9513 and sharing a single storage array.
    We want to keep them logically separated in their own VSANs, but of course the storage array will need to be zoned to all the customers' hosts.
    Now IVR should be the thing to use, but I'm getting resistance from the local team (screams of "Nooooo!!! They're EVILLLLL!!!") ... so I want to find out if there are some best practices around IVR use ?
    Should they be used only for light duty stuff ? (though at present we use them with tape backup, which isn't exactly "light")
    Do they impact performance to a measurable degree ?
    Are they stable ?
    What can go wrong with them ? And does it happen often ?
    Thanks!

    IVR does not impact application I/O performance because all the vsan rewrite and fcid rewrite actions are done in hardware asics. The IVR process on supervisor is responsible for managing the configuration and ensuring the rewrite tables are programmed in the linecards. The process is stable.
    Most of the issues I have seen are in environments with multiple IVR enabled MDS switches ISL'd together or an MDS IVR enabled switch connected to a McData/Brocade in an interop mode.
    Like any feature there have been bugs and it pays to check the SAN-OS release notes when planning installs. For example, a config change on one switch does not get properly pushed to another IVR switch or a forwarding table for an ISL interface does not get correctly programmed. There have also been a fair share of user misconfigurations which could have been avoided if Cisco Fabric Services (CFS) was enabled for IVR. This is done with the 'ivr distribute' command. Without this it is very difficult in large topologies of multiple IVR switches to ensure they have a consistent IVR config. In other cases there have been problems from a mix of IVR enabled switches running different releases of SAN-OS, e.g. mixing 3.0 with 3.2.
    Best practice is to have dual physical fabrics, upgrade one fabric at a time ensuring all IVR switches in a fabric run same SAN-OS release.
    A single IVR switch is much easier to implement. The MDS Configuration Guide has a list of best practices for IVR, and one of those is to use the NAT option. Personally I would avoid the NAT option where you can, as NAT makes any troubleshooting harder when trying to figure out the domain ID translations. You would also minimize the risk of hitting some NAT-related bugs, though you could avoid most of these by checking the workarounds documented in the release notes. With NAT you also need to configure persistent virtual domains and FCIDs to cater for AIX and HP-UX systems that cannot handle the FCID of the target changing whenever the exported virtual domain ID changes. To give NAT credit, each VSAN is represented by a single virtual domain. In regular non-NAT mode, each switch in a VSAN is represented by a virtual domain, meaning you eat up more virtual domain IDs. So in large topologies with many domain IDs there are scalability advantages to using NAT, and the IVR updates between switches are more efficient with fewer virtual domain IDs to advertise. Of course, NAT must be used if merging physical fabrics with the same domain ID and you cannot afford the downtime to change one of the switch domain IDs.
    However if it is just a single IVR switch I would avoid NAT. To do this all your domain IDs should be statically defined and there must be no overlapping domain IDs between IVR'd vsans. If it is a brand new install you can easily achieve this by specifying unique allowed domain ID ranges per vsan.
    For example, each customer can have their own vsan with say 10 domain IDs and the storage can be in vsan 2. You will only use one domain ID per vsan on day 1. Allowing 10 domain ids per vsan means you can add up to 9 other switches per vsan should you need to in the future. There is a maximum 239 domains per vsan so you could have up to 23 customers on your 9513 working with a range of 10 domain IDs per vsan.
    fcdomain domain 2 static vsan 2
    fcdomain domain 10 static vsan 10
    fcdomain domain 20 static vsan 20
    fcdomain domain 30 static vsan 30
    ..and so on..
    fcdomain domain 230 static vsan 230
    fcdomain allowed 01-09 vsan 2
    fcdomain allowed 10-19 vsan 10
    fcdomain allowed 20-29 vsan 20
    fcdomain allowed 30-39 vsan 30
    ..and so on..
    fcdomain allowed 230-239 vsan 230
    With or without IVR you should still run dual fabrics (e.g. 95xx in each fabric) and host based multipathing for redundancy.
    And don't forget IVR will require an Enterprise license. I have even seen a large outage because the customer forgot to install the license before the 120 day grace period expired.
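    As a reference, a minimal sketch of turning on IVR with CFS distribution, assuming SAN-OS-style syntax (commands may differ slightly between releases, so verify against the release notes mentioned above):
    switch# configure terminal
    switch(config)# ivr enable         ! enable the IVR feature
    switch(config)# ivr distribute     ! let CFS keep the IVR config consistent across switches
    switch(config)# ivr commit         ! push pending IVR/CFS changes to the fabric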

  • HyperV 2012 best practice question (storage of guests)

    I was reading this post:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    and I saw two things: one where it says Fibre Channel is not supported, and one where it says loopback is not supported.
    So my question is: as a best practice, would local storage (say, locally attached RAID 10) be considered best practice, or a shared SAN LUN? And if a SAN, would virtualized storage such as IBM's SVC be supported?

    Using a SoFS (Scale-Out File Server) with SMB3 as storage for Hyper-V and SQL Server/clusters is Microsoft's number one recommendation, and you will see more and more push for that design in every coming version.
    The top features for me in that combo: you get 100% of all features on day one when a new version comes out, Microsoft SMB 3.x has some really nice things in it, and you can keep using all the standard Ethernet gear you have known for years. If you need extra speed you can do that with RDMA cards, still Ethernet :-), and if you do a live migration between any combination of clusters and single nodes you never need to move the data and config files :-) they always stay on
    \\SoFS\VMs :-)
    All the information you need is on http://smb3.info ; Jose, my Mr. SoFS, collects and blogs about all the features, with step-by-step guides to test and prove it even on a single notebook, very cool.
    As a backend for the SoFS you have many choices, from the new wave of JBODs together with Storage Spaces up to old-style SAN boxes with or without special features. Always check the Windows Server Catalog for supported configurations, especially on the JBOD side; make sure you get one with R2 support and also with SES support (SCSI Enclosure Services). Jose also blogs about those topics very well.
    https://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    http://www.windowsservercatalog.com/results.aspx?&chtext=&cstext=&csttext=&chbtext=&bCatID=1642&cpID=0&avc=10&ava=0&avq=0&OR=1&PGS=25&ready=0
    A Hyper-V cluster scales up to 64 nodes and a SoFS up to 8 nodes, depending on your scaling needs; having storage split over separate datacenters should be a good start :-)
    Just let me know if you need some advice after doing a design session.
    Udo

  • Best practices for setting up virtual servers on Windows Server 2012 R2

    I am creating a web server from scratch with Windows Server 2012 R2. I expect to have a host server and then 3 virtual servers: one that runs all of the web apps as a web server, another as a database server, and one for session state. I expect to use Windows Server 2012 R2 for the web server and database server, but Windows 7 for the session state.
    I have a SATA2 Intel SROMBSASMR RAID card with battery backup to which I am attaching a small SSD drive that I expect to use for the session state, and an IBM ServeRAID M1015 SATA3 card running Intel 520 Series SSDs that I expect to use for the web server and database server.
    I have some questions. I am considering using the internal USB port with a flash drive to boot the host from, then two small SSDs in a RAID 0 for the web server (the theory being that if something goes wrong, session state is on a different drive), and then 2 more for the database server in a RAID 1 configuration.
    Please feel free to poke holes in this and tell me of a better way to do it.
    I am assuming that having the host run from a slow internal USB drive has no effect on the virtual servers once the host and the virtual servers are booted up?
    DCSSR

    There are two issues about RAID0:
    1) It's not as fast as people think. With a general-purpose file system like NTFS or ReFS (the choice on Windows is limited) you're not going to see great benefits, as there is very little chance that a whole RAID stripe gets updated at the same time (the I/Os would need to touch all SSDs in the set, so 256KB+ in real life). A web server workload is quite far from sequential reads or writes, so RAID 0 is not going to shine here. A log-structured file system (or at least a file system with logging capabilities, think ZFS with a ZIL enabled) *will* benefit from SSDs in a properly assigned RAID 0.
    2) RAID 0 is dangerous. One lost SSD renders the whole RAID set useless. So unless you build a network RAID 1 over RAID 0 (mirroring RAID sets between multiple hosts with a virtual SAN or a synchronous replication solution), you'll be sitting on a time bomb.
    Not good :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Best Practice - Bounded Task Flows, Regions and Nested Application Modules

    Using JDev 11.1.1.3; I understand that it's generally considered good practice to have just one root application module servicing model content / services for each page. In our application we've used a number of bounded task flows and page fragments deployed as af:regions into pages, either as a) views targeted in page-flow navigation, b) tab panel content inside a regular jspx, or c) af:popup / af:dialog content. As it stands, we've not nested the application modules for this embedded region content, so these regions are no doubt instantiating new AMs if/when invoked. Should the AMs servicing these embedded regions be nested within the root AMs, and if so, does this change the way the jsff / fragment content is actually developed (currently as per any other jspx, using the Data Control palette)? Or are the best-practice directives talking about a page as the design-time / declarative composition of content rather than the run-time aggregation of page + fragments, in which case the fact that our embedded fragments are not using nested AMs is unlikely to be a concern?
    Thanks,

    Probably a better question for the ADF EMG: http://groups.google.com/group/adf-methodology?hl=en
    CM.

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation, and then core.
    Staging just loads data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension maps to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load from the source systems to staging, but only those data that are not equal to the previously downloaded CRC checksum, so only changed or new data go to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is 'it depends'... Joking aside, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre
