Portal server deployment best practices

Does anyone out there know the right way to deploy a portal server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy a portal server? Any best practices I should follow for deploying a portal server?

From the above, what I understand is that you would like to transfer your existing portal server configuration to the new one. I don't think there is an easy method to do it.
One way you can do it is by taking an LDIF backup from the existing portal server.
First install the portal server on the new box, then export the existing portal server's directory data (note that db2ldif exports and ldif2db imports):
# /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
Edit the /tmp/profile.ldif file and replace the <hostname> and <domain name> values with the new system's values.
Copy this file to the new server and import it with:
# /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
Also copy the file slapd.user_at.conf under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
Restarting the server then lets you access the portal server with the configuration of the old one.
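For what it's worth, the whole transfer can be scripted. A minimal sketch, assuming Netscape Directory Server 4.x paths as above; instance names, hostnames and domains are placeholders, and exact command flags vary by Directory Server version (db2ldif exports, ldif2db imports):

#!/bin/sh
# On the OLD box: export the directory data to LDIF.
OLD_SLAPD=/opt/netscape/directory4/slapd-oldhost
$OLD_SLAPD/db2ldif /tmp/profile.ldif

# Rewrite old hostname/domain values for the new box. Check the result by
# hand afterwards: values may also appear base64-encoded, which sed misses.
sed -e 's/oldhost/newhost/g' -e 's/old\.example\.com/new.example.com/g' \
    /tmp/profile.ldif > /tmp/profile-new.ldif

# Ship the LDIF and the user-attribute config to the NEW box.
scp /tmp/profile-new.ldif newhost:/tmp/
scp $OLD_SLAPD/config/slapd.user_at.conf \
    newhost:/opt/netscape/directory4/slapd-newhost/config/

# On the NEW box: stop the directory, import, restart.
NEW_SLAPD=/opt/netscape/directory4/slapd-newhost
$NEW_SLAPD/stop-slapd
$NEW_SLAPD/ldif2db -i /tmp/profile-new.ldif
$NEW_SLAPD/start-slapd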

Similar Messages

  • License type of SQL Server 2005 Best Practices Analyzer

    Hi everybody.
    I need to install the software "SQL Server 2005 Best Practices Analyzer" in my organization, but I need to know whether this application is licensed free of charge. Several web sites say this tool is free, but I have not seen that on an official Microsoft web page. So, where can I find official Microsoft information about the type of licensing of "SQL Server 2005 Best Practices Analyzer"?
    Thanks for your support

    Hello Erland.
    I followed your advice and read the terms of use of this software. I stopped at point 3 (which I highlighted). Based on that point, I have doubts about using this application. Furthermore, nowhere does it say that this software is free to use.
    Would appreciate if someone can clarify this to me.
     =============================================================
    MICROSOFT SOFTWARE LICENSE TERMS
    MICROSOFT SQL SERVER 2005 BEST PRACTICES ANALYZER:
    These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its affiliates) and you. 
    Please read them.  They apply to the software named above, which includes the media on which you received it, if any. 
    The terms also apply to any Microsoft
    *  updates,
    *  supplements,
    *  Internet-based services, and
    *  support services
    for this software, unless other terms accompany those items. 
    If so, those terms apply.
    BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS. 
    IF YOU DO NOT ACCEPT THEM, DO NOT USE THE SOFTWARE.
    If you comply with these license terms, you have the rights below.
    1. INSTALLATION AND USE RIGHTS. You may install and use any number of copies of the software on your devices.
    2. INTERNET-BASED SERVICES. Microsoft provides Internet-based services with the software. It may change or cancel them at any time.
    3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways. You may not
    *  work around any technical limitations in the software;
    *  reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
    *  make more copies of the software than specified in this agreement or allowed by applicable law, despite this limitation;
    *  publish the software for others to copy;
    *  rent, lease or lend the software;
    *  transfer the software or this agreement to any third party; or
    *  use the software for commercial software hosting services.
    4. BACKUP COPY. You may make one backup copy of the software. You may use it only to reinstall the software.
    5. DOCUMENTATION. Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
    6. EXPORT RESTRICTIONS. The software is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to the software. These laws include restrictions on destinations, end users and end use. For additional information, see www.microsoft.com/exporting.
    7. SUPPORT SERVICES. Because this software is "as is," we may not provide support services for it.
    8. ENTIRE AGREEMENT. This agreement, and the terms for supplements, updates, Internet-based services and support services that you use, are the entire agreement for the software and support services.
    9. APPLICABLE LAW.
    a. United States. If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
    b. Outside the United States. If you acquired the software in any other country, the laws of that country apply.
    10. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the software. This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
    11. DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED "AS-IS." YOU BEAR THE RISK OF USING IT. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES OR CONDITIONS. YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT EXCLUDES THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
    12. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
    This limitation applies to
    *  anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and
    *  claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law.
    It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
    Please note: As this software is distributed in Quebec, Canada, some of the clauses in this agreement are provided below in French.

  • Adobe Premiere Pro CC + shared server, best practices

    Where should we place projects, media caches, preview files, and so on?
    A project can be opened on different stations (not simultaneously, of course) during the day.
    I have obtained no information from my Adobe contact.
    Regards, Vince

    Thank you very much for the explanation. I have a follow-up request. Our setup covers 6 shows: one main assembly (editing) suite, 2 technical stations for backup and ingest, and finally 2 logging ("derush") stations running Prelude. Each show lives in a directory named after the show, holding the project, the media, and the supplied files. The caches, however, are also on the server, in a caches directory common to all machines. Is there a maximum number of cache files that must not be exceeded? Should each machine have its own cache directory?
    On 25 Nov 2014 at 14:24, "Vinay Dwivedi" <[email protected]> wrote:
        Adobe Premiere Pro CC + shared server, best practices, created by Vinay
    Dwivedi <https://forums.adobe.com/people/Vinay+Dwivedi> in Premiere Pro
    - View the full discussion
    <https://forums.adobe.com/message/6960713#6960713>

  • Portal 8.1 best practices document.

    Hi All,
    Is there a standard document on Portal 8.1 best practices?
    If yes, can somebody send it to me or point me to the appropriate URL?
    Thanks,
    Prashanth Bhat.

    Hi,
    http://edocs.bea.com is the entry point to the docs. Try the documents below as a start.
    The URL below contains several links to various useful documents.
    http://e-docs.bea.com/wlp/docs81/index.html
    - Anders M.

  • Grid Control deployment best practices

    Looking for a document on Grid Control deployment best practices, and on monitoring and managing more than 300 databases.

    hi
    have a search for the following document
    MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    regards
    Alan

  • SQL Server 2008 - Best Practices Analyzer

    Is there a version of SQL Server 2008 Best Practices Analyzer available for download? If not, can I use the BPA for SQL Server 2005 to run a DB assessment on a SQL Server 2008 database? Please let me know what your recommendation is.
    Thanks

    Microsoft® SQL Server® 2008 R2 Best Practices Analyzer was released a few months ago.
    More details here
    http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=0fd439d7-4bff-4df7-a52f-9a1be8725591

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi everybody:
    1. We have several web applications that provide a service/product used for public administration purposes.
    2. The apps use ADF Faces and ADF BC.
    3. All of the apps participate in JavaSSO.
    4. The web apps are deployed on on-demand servers.
    5. We have noticed that, with the increase in users these days, the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6. Even when we only sign in to the apps through JavaSSO and perform no transactions (like inserting or deleting something), we query v$session in the database, and the number of inactive sessions keeps increasing until the server collapses.
    So, we want to know if this is an issue with the configuration of the Application Module's properties, and whether there are some "best practices" you could provide us for configuring a web application to avoid this behavior.
    The only configuration we found recommended for web apps is to set jbo.locking.mode to optimistic, but this doesn't correct the "increasing inactive sessions" problem.
    Please help us find some documentation or another resource to correctly configure our apps.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
    success
    Jan Vervecken
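    For reference, the pool-cleanup parameters that article walks through are set per Application Module configuration (in bc4j.xcfg, or through the AM's Configuration editor). A minimal sketch using the standard ADF BC property names; the values here are illustrative, not recommendations:

    # AM pool tuning (milliseconds). The pool monitor thread enforces these,
    # releasing idle AM instances and their database sessions.
    jbo.ampool.maxinactiveage=600000        # release AM instances idle > 10 min
    jbo.ampool.timetolive=3600000           # recycle any AM instance after 1 hour
    jbo.ampool.monitorsleepinterval=600000  # monitor wakes up every 10 min
    jbo.doconnectionpooling=true            # return JDBC connections on release

    If inactive v$session rows keep accumulating, these are the first knobs to check, since an AM instance held in the pool keeps its database session open until the monitor cleans it up.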

  • Portal System Transport (Best Practice)

    Hello,
    We have a DEV, QA and PRD landscape. We have created systems that connect to backend ECC systems. Since the DEV and QA ECC systems each have one application server, we created a portal system of type Single Application Server in the DEV portal that points to the DEV ECC system. Subsequently we transported this portal system to the QA portal and made it point to QA ECC.
    Now the PRD ECC system is of type load balancing, with multiple servers. The portal system that connects to the PRD ECC system should also be of type Load Balancing. So we cannot transport the QA portal system that connects to the QA ECC system to PRD, since it is of type Single Application Server.
    What would be the best strategy to create the portal system in the PRD portal that points to PRD ECC?
    1. Create the portal system fresh in the PRD system, of type Load Balancing. Does this adhere to the best-practice approach that says not to create anything in the PRD system directly?
    OR
    2. Is there any other way I should follow to make sure that best practices for portal development are respected?
    Regards
    Deb

    I don't find it useful to transport system objects so I make them manually.

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years managing and planning smaller Windows server environments, however, my non-profit has recently purchased
    two StoreEasy 1630 servers and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end user
    workstations, taking into account redundancy, backup and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we need switch-hardware-based LACP, or will the Windows 2012 NIC-teaming options be sufficient across the four 1000T ports on the StoreEasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives for a total raw storage capacity of 42TB. By default the StoreEasy servers were configured with 2 RAID 6 arrays, with 1 hot-standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can be later
    increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create
    additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size virtual hard disks should we make? Is there a max capacity? 64TB? Please let us know what the best approach for storage pooling will
    be for our environment.
    Windows Sharing
    We were thinking that we would create a single Share granting all users within the AD FullOrganization User group
    read/write permission. Then within this share we were thinking of using NTFS permissioning to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach or do you suggest a different approach?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to
    mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been
    informed that HP will provide an upgrade to 2012 R2 Storage Server when it becomes available. In the meanwhile, how should we design our storage and replication strategy around these limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like
    our 2 current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data
    volumes?
    Should we use a third party backup software?

    Hi,
    Sorry for the delay in reply.
    I'll try to reply to each of your questions. However, for the first one, you may want to post to the Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description, you would like to create VHDX files on the RAID 6 disks for future growth. That is fine and, as you said, VHDX is limited to 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is Storage Spaces, a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It can add hard disks to a storage pool and create virtual disks from the pool. You can add disks to the pool later and create new virtual disks if needed.
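    If you go the Storage Spaces route, the pool and the first virtual disk take only a few PowerShell lines on Server 2012. A minimal sketch; pool and disk names and sizes are placeholders, and you should check what Get-StorageSubSystem reports on the StoreEasy first:

    # Pool every disk that is eligible for pooling, then carve out a
    # thinly provisioned 10TB parity space from it.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" `
                    -StorageSubSystemFriendlyName "Storage Spaces*" `
                    -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" `
                    -ResiliencySettingName Parity -Size 10TB `
                    -ProvisioningType Thin
    # Initialize, partition and format the new disk as usual afterwards
    # (Initialize-Disk / New-Partition / Format-Volume).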
    For Windows Sharing
    Generally you will end up with more shared folders later. Creating all shares under a single root folder sounds good, but in practice it may not be achievable, so it depends on the actual environment.
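    As a sketch of the single-share plan from the question (group and path names are placeholders; the NTFS ACLs on the subfolders do the real access control):

    # One share for the whole organization, NTFS permissions underneath.
    New-SmbShare -Name "Org" -Path "D:\Shares\Org" `
                 -FullAccess "CORP\FullOrganization"
    New-Item -ItemType Directory -Path "D:\Shares\Org\Finance"
    # Give the Finance group inheritable Modify rights on its subfolder.
    icacls "D:\Shares\Org\Finance" /grant "CORP\Finance:(OI)(CI)M"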
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR department about the limitation. Actually, DFS-R can replicate more data than that; there is no exact limit. As you can see, the article was created in 2009.
    For Backup
    As you said, there is a backup limitation (2TB per volume for a single backup). So if that cannot meet your requirement, you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
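    For the interim plan of the two StoreEasy servers backing each other up nightly, the built-in tooling would look roughly like this (share name and volume are placeholders). Note that Windows Server Backup keeps only the most recent backup version on a network share, which is worth knowing before relying on it:

    REM Nightly backup of the data volume to the partner server's
    REM unreplicated share (overwrites the previous network backup).
    wbadmin start backup -backupTarget:\\storage02\Backups -include:D: -quiet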

  • Oracle Identity Manager - automated builds and deployment/Best practice

    Is there a best practice for the directory structure of the repository in a version control system?
    Do you recommend keeping the whole xellerate folder plus a separate structure for XML files and Java code? (Considering the fact that multiple upgrades can occur over time.)
    How is custom code merged into the main application?
    How does deployment to the WebLogic application server occur? (Do you create your own script, or is there an out-of-the-box script that can be reused?)
    I would appreciate any guidance regarding this matter.
    Thank you for your help.

    Hi,
    You can use any IDE (Eclipse, Netbeans) for development.
    For, Getting started with OIM API's using Eclipse, please follow these steps
    1. Creating the working folder structure
    2. Adding the jar/configuration files needed
    3. Creating a java project in Eclipse
    4. Writing a sample java class that will call the API's
    5. Debugging the code with Eclipse debugger
    6. API Reference
    1. Creating the working folder structure
    The following structure must be created in the home directory of your project (Separate project home for each project):
    <PROJECT_HOME>
    \ bin
    \ config
    \ ext
    \ lib
    \ log
    \ src
    The folders will store:
    src - source code of your project
    bin - compiled code of your project
    config - configuration files for the API and any of your custom configuration files
    ext - external libraries (3'rd party)
    lib - OIM API libraries
    log - local logging folder
    2. Adding the jar/configuration files needed
    The easiest way to perform this task is to copy all the files from the OIM Design Console
    folders into the corresponding <PROJECT_HOME> folders.
    That is:
    <XEL_DESIGN_CONSOLE_HOME>/config -> <PROJECT_HOME>/config
    <XEL_DESIGN_CONSOLE_HOME>/ext -> <PROJECT_HOME>/ext
    <XEL_DESIGN_CONSOLE_HOME>/lib -> <PROJECT_HOME>/lib
    3. Creating a java project in Eclipse
    + Start Eclipse platform
    + Select File->New->Project from the menu on top
    + Select Java Project and click Next
    + Type in a project name (For example OIM_API_TEST)
    + In the Contents panel select "Create project from existing source",
    click Browse and select your <PROJECT_HOME> folder
    + Click Finish to exit the wizard
    At this point the project is created and you should be able to browse
    through it in Package Explorer.
    Setting src in the build path:
    + In Package Explorer right click on project name and select Properties
    + Select Java Build Path in the left and Source tab in the right
    + Click Add Folder and select your src folder
    + Click OK
    4. Writing a sample Java class that will call the API's
    + In Package Explorer, right click on src and select New->Class.
    + Type the name of the class as FirstAPITest
    + Click Finish
    Put the following sample code in the class:
    import java.util.Hashtable;
    import com.thortech.xl.util.config.ConfigurationClient;
    import Thor.API.tcResultSet;
    import Thor.API.tcUtilityFactory;
    import Thor.API.Operations.tcUserOperationsIntf;
    public class FirstAPITest {
        public static void main(String[] args) {
            try {
                System.out.println("Startup...");
                System.out.println("Getting configuration...");
                ConfigurationClient.ComplexSetting config =
                    ConfigurationClient.getComplexSettingByPath("Discovery.CoreServer");
                System.out.println("Login...");
                Hashtable env = config.getAllSettings();
                tcUtilityFactory ioUtilityFactory =
                    new tcUtilityFactory(env, "xelsysadm", "welcome1");
                System.out.println("Getting utility interfaces...");
                tcUserOperationsIntf moUserUtility =
                    (tcUserOperationsIntf) ioUtilityFactory.getUtility(
                        "Thor.API.Operations.tcUserOperationsIntf");
                // Find all users whose first name is "System" and print their keys.
                Hashtable mhSearchCriteria = new Hashtable();
                mhSearchCriteria.put("Users.First Name", "System");
                tcResultSet moResultSet = moUserUtility.findUsers(mhSearchCriteria);
                for (int i = 0; i < moResultSet.getRowCount(); i++) {
                    moResultSet.goToRow(i);
                    System.out.println(moResultSet.getStringValue("Users.Key"));
                }
                System.out.println("Done");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Replace the "welcome1" with your own password.
    + save the class
    To run the example class perform the following steps:
    + Click in the menu on top Run, and run "Create, Manage, and run Configurations" wizard. (In the menu, this can be either "run..." or "Open Run Dialog...", depending on the version of Eclipse used).
    + Right click on Java Application and select New
    + Click on arguments tab
    + Paste the following in VM arguments box:
    -Djava.security.manager -DXL.HomeDir=.
    -Djava.security.policy=config\xl.policy
    -Djava.security.auth.login.config=config\authwl.conf
    -DXL.ClientClassName=%CLIENT_CLASS%
    (please replace the URL, in ./config/xlconfig.xml, to your application server if not running on localhost or not using the default port)
    + Click Apply
    + Click Run
    At this point your class is executed. If everything is correct, you will see the following output in the Eclipse console:
    Startup...
    Getting configuration...
    Login...
    log4j:WARN No appenders could be found for logger (com.opensymphony.oscache.base.Config).
    log4j:WARN Please initialize the log4j system properly.
    Getting utility interfaces...
    1
    Done
    Regards,
    Sunny Ajmera

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in production environments for deploying OSB and SOA code. As you are aware, both require libraries from (JDev or SOA Suite) and (OEPE and OSB) respectively. Should one rip out the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled; OSB requires only a few OEPE and OSB libraries), or do we simply use one of the options below?
    1) Use the production runtime (SOA Server and OSB Server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) with JDeveloper, OEPE and OSB installed to build/deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with the deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine specifically for build and deployment which has access to all production environments (where deployment needs to be done). Install all the required software (OEPE, OSB, etc.) and use remote deployment to deploy the code.
    2. Bundle all the build- and deployment-related libraries, ship them as a deployment package to the target server, and proceed with the deployment.
    The most commonly followed approach is approach #1.
    Regards
    Vivek
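    To make approach #1 concrete for the SOA side, a dedicated build machine can package and deploy composites with the ant-sca-* scripts that ship with SOA Suite 11g. A sketch only; paths, URLs and credentials are placeholders, exact targets and flags vary by release, and OSB sbconfig.jar imports are usually scripted separately via WLST:

    # Package the composite, then deploy it to the remote production server.
    ant -f $MW_HOME/Oracle_SOA1/bin/ant-sca-package.xml package \
        -DcompositeDir=/builds/MyComposite \
        -DcompositeName=MyComposite -Drevision=1.0
    ant -f $MW_HOME/Oracle_SOA1/bin/ant-sca-deploy.xml deploy \
        -DserverURL=http://prod-soa-host:8001 \
        -DsarLocation=/builds/MyComposite/deploy/sca_MyComposite_rev1.0.jar \
        -Duser=weblogic -Dpassword=change_me -Doverwrite=true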

  • SCCM 2012 Update deployment best practices?

    I have recently upgraded our environment from SCCM 2007 to 2012. In switching over from WSUS to SCCM Updates, I am having to learn how the new deployments work.  I've got the majority of it working just fine.  Microsoft Updates, Adobe Updates (via
    SCUP)... etc.
    A few users have complained that the systems seem to be taking up more processing power during the update scans... I am wondering what the best practices are for this...
    I am deploying all Windows 7 updates (32 and 64 bit) to a collection with all Windows 7 computers (32 and 64 bit)
    I am deploying all Windows 8 updates (32 and 64 bit) to a collection with all Windows 8 computers (32 and 64 bit)
    I am deploying all office updates (2010, and 2013) to all computers
    I am deploying all Adobe updates to all computers... etc.
    I'm wondering if it is best to be more granular than that? For example: should I deploy Windows 7 32-bit patches to only Windows 7 32-bit machines? Should I deploy Office 2010 Updates only to computers with Office 2010?
    It's certainly easier to deploy most things to everyone and let the update scan take care of it... but I'm wondering if I'm being too general?

    I haven't considered cleaning it up yet because the server has only been active for a few months... and I've only connected the bulk of our domain computers to it a few weeks ago. (550 PCs)
    I checked several PCs, some that were complaining and some not. I'm not familiar with what the standard size of that file should be, but they seemed to range from 50M to 130M. My own is 130M but mine is 64-bit, the others are not. Not sure if that makes
    a difference.
    Briefly read over that website. I'm confused; it was my impression that WSUS is no longer used and only needs to be installed so SCCM can use some of its functions for its own purposes. I thought the PCs no longer even connected to it.
    I'm running the WSUS cleanup wizard now, but I'm not sure it'll clean anything because I've never approved a single update in it. I do everything in the Software Update Point in SCCM, and I've been removing expired and superseded updates fairly regularly.
    The wizard just finished, a few thousand updates deleted, disk space freed: 0 MB.
    I found a script here on TechNet that's supposed to clean out old updates:
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    Haven't had the chance to run it yet.
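    For what it's worth, on Windows Server 2012 or later the same cleanup that the script and the wizard perform can also be driven from PowerShell with the UpdateServices module (installed with the WSUS role behind the software update point). A hedged sketch:

    # Run the standard WSUS maintenance tasks against the local server.
    Get-WsusServer | Invoke-WsusServerCleanup `
        -DeclineSupersededUpdates -DeclineExpiredUpdates `
        -CleanupObsoleteUpdates -CleanupUnneededContentFiles `
        -CleanupObsoleteComputers -CompressUpdates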

  • Failover cluster File Server role best practices

    We recently implemented a Hyper-V Server Core 2012 R2 cluster with the sole purpose of running our server environment. I started with our file servers and decided to create multiple file servers and put them in a cluster for high
    availability. So now I have a cluster of VMs, which I have now learned is called a guest cluster, and I added the File Server role to this cluster. It then struck me that I could have just as easily created the File Server role under my Hyper-V
    Server cluster and removed this extra virtual layer.
    I'm reaching out to this community to see if there are any best practices on using the File Server role.  Are there any benefits to having a guest cluster provide file shares? Or am I making things overly complicated for no reason?
    Just to be clear, I'm just trying to make a simple Windows file server with folder shares that have security enabled on them for users to access internally. I'm using Hyper-V Core server 2012 R2 on my physical servers and right now I have Windows
    Server Standard 2012 R2 on the VMs in the guest cluster.
    Thanks for any information you can provide.

    Hi,
    Generally, with Hyper-V VMs available, we install all roles into virtual machines, as that is easier for management purposes.
    In your situation the host system is Server Core, so managing file shares from a guest VM with a GUI is much easier.
    I cannot find an article specifically regarding "best practices for setting up a failover cluster". Here are two articles, one on building a guest cluster (which you have already done) and one on the steps to create a file server cluster.
    Hyper-V Guest Clustering Step-by-Step Guide
    http://blogs.technet.com/b/mghazai/archive/2009/12/12/hyper-v-guest-clustering-step-by-step-guide.aspx
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    https://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
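    Once the guest cluster and its shared storage are in place, adding the clustered file server itself is a short PowerShell step. A minimal sketch; the role name, cluster disk and client-access IP are placeholders for your environment:

    # On one of the clustered VMs: create a clustered file server role
    # on an available cluster disk with a static client-access point.
    Import-Module FailoverClusters
    Add-ClusterFileServerRole -Name "FS01" `
                              -Storage "Cluster Disk 1" `
                              -StaticAddress 10.0.0.50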

  • Portal db provider(best practice)

    Best practice question here. If I wanted to create a few db portlets(suggestions/questions) is there already an existing portal db provider/schema that I should add them to? Or is it best to simply create a schema and db provider?

    That is an interesting question. We created our own schemas for each of the portal sites we have, so basically custom-made providers for all the portlets used in those portals.

  • Client on Server installation best practice

    Hi all,
    I've wondered about this subject, searched, and found nothing relevant, so I'm asking here:
    Is there any best practice/state of the art for when you have a client application installed on the same machine as the database?
    I know the client app uses the server binaries, but should I avoid that?
    Should I install an Oracle client home and point the client app at the client libraries?
    In 11g there is no changePerm.sh anymore; does that prove Oracle agrees to have client apps use server libraries?
    To be precise: I'm on AIX 6 (or 7) + Oracle 11g.
    The client app will be an ETL tool, which explains why it is running on the DB machine.

    GReboute wrote:
        EdStevens wrote:
            Given the premise "when you have a client application installed on the same machine as the database", I'd say you are already violating "best practice".
        So I deduce from what you wrote that you're absolutely against coexisting client app and DB server, which I understand, and usually agree with.
    Then you deduce incorrectly. I'm not saying there can't be a justifiable reason for having the app live on the same box, but as a general rule, it should be avoided. It is generally not considered "best practice".
        But in my case, should I load or extract hundreds of millions of rows, with gigabytes flowing through the network and possible disconnection issues, when I could have done it locally?
    Your potentially extenuating circumstances were not revealed until this architecture was questioned. We can only respond to what we see.
        The answer I'm seeking is a bit more elaborate than "you shouldn't do that".
        By the way, CPU or memory resources shouldn't be an issue, as we are running on a strong P780.
