Best Practices: Clustered Author Environment

Hello,
We are setting up our CQ 5.5 infrastructure in 3 datacenters, ultimately with an Authoring instance in each (three total). Our plan is to cluster the three machines using "share nothing" and have each replicate to the Publish instances in all data centers. To eliminate confusion within our organization, I'd like to provide a single URL for our Authors so they wouldn't have to remember to log into 3 separate machines.
So instead of providing cqd1.acme.com, cqd2.acme.com, cqd3.acme.com, I would distribute something like "cq5.acme.com", which would resolve to one of the three author instances. While that's certainly possible by putting a web server/load balancer in front of the three, I'm not so sure that's even a best practice for supporting internal users.
I'm wondering what other multi-datacenter companies have done (or what Adobe recommends) to solve this issue. Did you:
Only give one destination and let the other two serve as backups? (this appears to defeat the purpose of clustering)
Place a web server/load balancer in front of each machine and distribute traffic that way?
Do nothing, i.e., provide all 3 author URLs and let end users choose the one closest to them geographically?
Something else???
It would be nice if there were a master UI an Author could use that communicated with the other author machines in a way that's transparent to the end user, so if Auth01 went down, the UI would continue to work with the remaining machines without the end user (author) even knowing the difference (i.e., not having to change machines).
Any thoughts would be greatly appreciated.

Day's documentation (for CRX 2.3) states in part, "whenever a write operation is received by a slave instance, it is redirected to the master instance ..."  So, all writes will always go to the master, regardless of which instance you hit.
Day's documentation also states, "Perhaps surprisingly, clustering can also benefit the author environment because even in the author environment the vast majority of interactions with the repository are reads. In the usual case 97% of repository requests in an author environment are reads, while only 3% are writes."
This being the case, it seems the latency of hitting a remote author would far outweigh other considerations. If I were you, New2CQ, I would probably have my users hit the instance that's nearest to them (in terms of network latency, etc.), regardless of whether it's a master or a slave.
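If you do put a web server/load balancer in front of the three, its health checks are doing essentially the following. Here is a minimal, hypothetical Java sketch of the routing logic (the hostnames, the port, and probing the root path are my assumptions, not anything Adobe prescribes); the instances are listed nearest-first:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.List;

    public class AuthorHealthCheck {
        // Hypothetical author hosts, ordered nearest-first for this user.
        private static final List<String> AUTHORS = List.of(
                "http://cqd1.acme.com:4502/",
                "http://cqd2.acme.com:4502/",
                "http://cqd3.acme.com:4502/");

        public static void main(String[] args) {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(2))
                    .build();
            for (String url : AUTHORS) {
                try {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                            .timeout(Duration.ofSeconds(2))
                            .GET()
                            .build();
                    HttpResponse<Void> resp =
                            client.send(req, HttpResponse.BodyHandlers.discarding());
                    if (resp.statusCode() < 500) {  // instance is up and answering
                        System.out.println("Routing author to " + url);
                        return;
                    }
                } catch (Exception e) {
                    // connection refused or timed out: try the next instance
                }
            }
            System.out.println("No author instance reachable");
        }
    }

In practice you would let the load balancer's built-in health checks do this rather than rolling your own; the point is just that "probe in order of proximity, route to the first healthy instance" gives authors one URL and transparent failover.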

Similar Messages

  • Best Practice for MUD Environment

    Hi Guys,
    I initially thought of using Merge Repository as an alternative to a MUD environment.
    But I found that while merging repositories, you have to accept changes from either the Modified or the Current repository.
    What if I have 2 developers working in parallel in a single Presentation folder?
    Then I thought a project-based MUD implementation would be the only option, but there each developer has the power to keep or remove the other developer's changes.
    Now I am confused about how I can get multiple users to develop a single RPD.
    Please let me know what's the best practice used?
    Thanks
    Saurabh

    Below are some explanations. Follow the links if you want more information. Personally, I prefer to set up a MUD environment.
    Software Configuration Management
    By default, the Oracle BI repository development environment is not set up for multiple users. However, online editing makes it possible for multiple developers to work simultaneously, though this may not be an efficient methodology, and it can result in conflicts, because developers can potentially overwrite each other's work.
    To develop a repository in a concurrent versioned environment, you have several choices:
    * First, you can send the repository to the developer, keep a copy, retrieve it after modification, and perform a Merge Repository (http://gerardnico.com/wiki/dat/obiee/obiee_repository_merge).
    * Second, you can set up a multiuser environment (MUD) (http://gerardnico.com/wiki/dat/analytic/obiee/multiuser_environment), which uses the notion of projects to split the work area. It permits developers to modify a repository simultaneously and then check in changes.
    The import option, which permits importing a subset of one repository into another, works but is deprecated.
    Success
    Nico

  • Best Practice for Production environment

    Hello everyone,
    Can someone share the best practice for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
    I understand there are best practices available for implementation, migration and upgrade, but I was unable to find one for a productive landscape.
    Thanks.

    Hi Siva,
    What best practice are you looking for? If you can be more specific with your question, we can provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should offer high availability to the business. For this you may set up DR or HA or both.
    2) It should have backups configured, for which a restore has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorization based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be highly controlled. Any transport to production should be moved only with appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • SSL: Best Practice in HA environment

    Hi all,
    The customer wants to know if there is any best-practice document available regarding SSL configuration in a WebLogic Server (10.3.6) HA environment with a hardware load balancer. Additional questions:
    1- Where should the certificate files be located? Which practice is better: shared storage, or a copy on each Managed Server?
    2- Where can we find more information about generating certificates?
    Thanks in advance, Moh

    Hi Moh,
    Did you have a look at this?
    http://www.oracle.com/technetwork/database/availability/maa-fmwsharedstoragebestpractices-402094.pdf
    Search for certs.
    Regards Peter

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in a production environment for deploying OSB and SOA code. As you are aware, both require libraries from either JDeveloper/SOA Suite or OEPE/OSB. Should one extract the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled; OSB requires only a few OEPE and OSB libraries), or do we simply use one of the options below:
    1) Use the production runtime (SOA Server and OSB Server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) with JDeveloper, OEPE and OSB installed to build/deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with the deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine specifically for build and deployment which will have access to all production environments (where deployment needs to be done). Install all the required software (oepe, OSB etc..) and use remote deployment for deploying the code.
    2. Bundle all the build and deployment related libraries and ship them as a deployment package on the target server and proceed with the deployment.
    The most commonly followed approach is #1.
    Regards
    Vivek

  • Hotfix Management | Best Practices | WCS | J2EE environment

    Hi All,
    Trying to establish some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to manage the 'automated' build of these hotfixes, rather than doing the same manually, as we currently do.
    Suppose we need to hotfix a particular jar file in a production environment; I would need to understand how to build 'just' that particular jar. I understand we can label the related code (which in this case could be just a few Java files). Suppose this jar contains 10 files, out of which 2 need to be hotfixed. The challenge is to come up with a build script which builds -
    - ONLY this jar
    - the jar with 8 old files and 2 new files.
    - the jar using whatever dependent jars are required.
    - the hotfix build script needs to be generic enough to handle the hotfix build of any jar in the system.
    Pointers, especially ones in line with a WCS environment, would be very much appreciated!
    Regards,
    Mrinal Mukherjee

    Moderator Action:
    This post has been moved from the SysAdmin Build & Release Engineering forum
    to the Java EE SDK forum, hopefully for closer topic alignment.
    To the O.P.:
    I don't think device driver build/release engineering is what you were intending.
    Additionally, your partial post that was accidentally created as a duplicate of this one
    has been removed before it confuses anyone.
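    Setting aside the Ant wiring, the core operation the O.P. describes (a jar with 8 old files and 2 new files) is mechanical enough to sketch. Below is a minimal, hypothetical Java utility for that one step, using only java.util.jar; it is an illustration of the idea, not a WCS-blessed build process:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Enumeration;
    import java.util.Map;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;
    import java.util.jar.JarOutputStream;

    public class JarHotfix {
        // Rebuilds originalJar as patchedJar, swapping in replacement files by
        // entry name, e.g. "com/acme/Foo.class" -> path to the recompiled class.
        public static void patch(Path originalJar, Path patchedJar,
                                 Map<String, Path> replacements) throws IOException {
            try (JarFile in = new JarFile(originalJar.toFile());
                 JarOutputStream out =
                         new JarOutputStream(Files.newOutputStream(patchedJar))) {
                Enumeration<JarEntry> entries = in.entries();
                while (entries.hasMoreElements()) {
                    JarEntry entry = entries.nextElement();
                    out.putNextEntry(new JarEntry(entry.getName()));
                    Path replacement = replacements.get(entry.getName());
                    if (replacement != null) {
                        Files.copy(replacement, out);             // the hotfixed files
                    } else if (!entry.isDirectory()) {
                        in.getInputStream(entry).transferTo(out); // the unchanged files
                    }
                    out.closeEntry();
                }
            }
        }
    }

    A real hotfix target would wrap this step (or the equivalent Ant jar task with nested zipfilesets) in the label-checkout-and-compile flow described above, so the script stays generic across jars.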

  • Best practices for defining Environment Variables/User Accounts in Linux

    Hello,
    After reading through the Quick Install guide for 10gR2 on x86_64 Linux, I see that it is not recommended to define ANY variables in .bash_profile.
    I'm hoping to get a best-practices approach for defining environment variables. Right now we use the oracle Linux account for administration, including SQL*Plus. So where should the myriad variables be defined? Is it important enough to create a separate user account in Linux to support best practices?
    Which variables, exactly, should be defined? It seems that LD_LIBRARY_PATH is no longer being used?
    Thanks in advance
    Doug

    Something that I've done for years on Unix/Linux boxes is to create a separate environment variable setup file for each instance on the box. This would include things like ORACLE_HOME, ORACLE_SID, etc. Then I create an alias in my .bash_profile that executes this script. As an example, I would create an orcl.env file that holds all of the environment variables for this instance. Then in my .bash_profile I would create a line like the following:
    alias orcl=". $HOME/orcl.env"
    Then from anywhere you could type orcl and you would set your environment to connect to that database.
    Also, if you are using 10g, something else that is really nice if you use SQL*Plus and connect to different databases without starting a new SQL*Plus session is to set a parameter in your $ORACLE_HOME/sqlplus/admin/glogin.sql file:
    set sqlprompt "_user 'at' _connect_identifier >"
    This will automatically change your command prompt to look like this:
    RALPH at ORCL >
    if you connect as GEORGE, your prompt will immediately change to :
    GEORGE at ORCL >
    This way you can always know who and where you are connected to.
    Good luck!

  • Best practices for setting environment based static variables?

    I have a set of static string variables that hold the url location of modules in a project. These locations change depending on whether I'm building for development, staging or production.
    What's the best way to set static variables in this way?

    I don't know if this is best practice, but here's the solution I've come up with.
    The root domain is accessible within the swf via a node on a loaded xml file. So I created a simple method that sets a url variable based on that domain node.
    The domain-based url variable is then used within the static string variables that define the location of the modules.
    Simplified like so:
    // Note: stagingUrl and productionUrl are assumed to be defined elsewhere.
    var domain:String = xml.node.value;
    static var bucketLocation:String = getLocation();
    static var moduleLocation:String = bucketLocation + "modulename.swf";

    function getLocation():String {
        var loc:String;
        switch (domain) {
            case stagingUrl:
                loc = "pathToAmazonStagingBucket";
                break;
            case productionUrl:
                loc = "pathToAmazonProductionBucket";
                break;
            default:
                // Fallback (not in the original post): unknown domains get the staging path.
                loc = "pathToAmazonStagingBucket";
                break;
        }
        return loc;
    }

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi,  I am a zSeries system programmer who has just completed an IBM led Proof of Concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via hipersockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed as an OSS note or something else by SAP.    Below you will find an email which was sent and responded to by IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
         Greg McKelvey is a Client I/T Architect at IBM. He found the attached IBM documents and could give a fuller account of our POC. Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC. Jim Hawkins is the Home Depot Architect directing the POC. I've CC'ed their email addresses. I am sure they would be pleased to hear from you if there are questions about what I am asking here. And while writing, I thought of yet another question that I'm hoping somebody at SAP might weigh in on: are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

    Hello Al,
    First, let me welcome you aboard. I am sure you won't be disappointed with your choice to run SAP on z/OS.
    As for your questions: they weren't easy to find in this long post, so I suggest you take the time to write a short summary containing a very short list of questions.
    As for answers, here are a few useful sources of information:
    1. The SAP on DB2 for z/OS SDN page:
    SAP on DB2 for z/OS
    In it you can find 2 relevant docs:
    a. Best practices for ...
    b. Database administration for DB2 UDB for z/OS.
    This second publication is excellent; apart from DB2-specific info, it contains information on all the components of SAP on DB2 for z/OS, like zLinux, z/VM and so on.
    2. I can see that you are already familiar with the IBM Redbooks, but it seems that you haven't taken the time to get the most out of that resource. From your post it is clear that you have found one useful publication, but I know there are several.
    3. A few months ago I wrote a short post on a similar subject. I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers. Here's a link:
    http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
    Good luck,
    Omer Brandis

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on what is best practice for a production environment. The requirements are as follows:
    1) If an error occurs in a task, an email is sent to the admin.
    2) If an error occurs in a task, it is logged to a flat file or DB.
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we desire. So we've developed a framework that adds the necessary functionality inside event handlers
    and logs the required data, in the required format, to a set of tables that we maintain internally.
    Visakh

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter RAR in the next step, the appropriate RAR will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publish instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only 1 repository. So when you have installed the service pack on one node, the new versions of bundles and other content are unpacked into the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time, and as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best-practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning tree).
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running. Every time they connect to a new AP on a different VLAN, they will need to get a new IP address, which interrupts real-time apps.
    So...best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites. 
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1G Ethernet hand-offs (MPLS at the WAN provider). The 7510s are new, and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest to go routed to the closet.  However, this breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other (possibly friendly to both needs) solution I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Disti switch at each site.  Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, and so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • Best Practice to configure tnsnames.ora on client of MAA environment in 10g

    Hi,
    I have an MAA environment: 1 RAC primary of 2 nodes and 1 RAC standby, also of 2 nodes. I want to configure tnsnames.ora on the clients (we have many clients on each PC).
    I have read some papers, but the information is not clear on how to configure tnsnames on the clients. I just want a single tnsnames entry that clients can use to connect to the RAC primary and/or standby, depending on which site is primary at that moment. I was thinking of something like this:
    SALES.WORLD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = site1a)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site1b)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site2a)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = site2b)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = sales.world)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    where site1 is currently the RAC primary and site2 is the standby. In case of switchover, I want the clients to recognize automatically which site to connect to.
    I was thinking of stopping the listeners on whichever site is not primary at the moment.
    The question: is this the best practice for this scenario?
    Thanks for your advice.
    Luis

    Hi Luis,
    You need to set up a service with Clusterware, on both sides: primary and standby.
    On the primary you start them; on the standby the services are also configured but stopped.
    In case of switchover or failover, Data Guard will tell Clusterware to bring them up.
    You need to use this service name in your clients' tnsnames.
    Another issue is TCP timeouts; to protect against them, set
    SQLNET.OUTBOUND_CONNECT_TIMEOUT in your sqlnet.ora.
    Also have a look at
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_SwitchoverFailoverBestPractices.pdf
    HTH Mathias
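    If some of those clients are Java, one further option worth knowing: the JDBC thin driver accepts a full connect descriptor inline, so the single entry above can be used without distributing tnsnames.ora at all. A minimal sketch, using the hosts and service name from this thread (credentials are placeholders; FAILOVER_MODE/TAF is an OCI feature that the thin driver ignores, so it is omitted here):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MaaClient {
        public static void main(String[] args) throws Exception {
            // Same single entry as in tnsnames.ora, inlined; the Clusterware-managed
            // service decides which RAC (primary or standby) actually accepts it.
            String url = "jdbc:oracle:thin:@(DESCRIPTION="
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=site1a)(PORT=1521))"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=site1b)(PORT=1521))"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=site2a)(PORT=1521))"
                    + "(ADDRESS=(PROTOCOL=TCP)(HOST=site2b)(PORT=1521))"
                    + "(LOAD_BALANCE=yes)"
                    + "(CONNECT_DATA=(SERVICE_NAME=sales.world)))";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected: " + conn.getMetaData().getURL());
            }
        }
    }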

  • Best Practice to generate UUIDs in a Cluster-Server Environment

    Hi all,
    I just need some input on best practices for generating UUIDs in the typical internet world, where multiple servers/JVMs are involved for load balancing or traffic distribution. I know Java ships with a very efficient UUID generator API.
    But that still doesn't settle the issue in a multiple-server environment.
    For discussion's sake, let's assume I need IDs that are truly unique across the whole setup, rather than merely near-unique.
    How do you guys approach it?
    Thank you all in advance.

    codeNombre wrote:
    Thanks jverd. So adding the idea of "distinguishing all possible servers" on top of a UUID per server would be the way to go.
    jverd wrote:
    If you're unreasonably paranoid, sure.
    codeNombre wrote:
    I think it's a common problem, and there are a fair number of folks who might still be bugged about the "relative uniqueness" of UUIDs in the long run.
    jverd wrote:
    People who don't understand probability and scale, sure.
    codeNombre wrote:
    Again, coming back to my original problem in an "internet world": shouldn't a requirement like uniqueness between different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer.
    jverd wrote:
    Again, that is the POINT of the UUID class: you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post.
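    For what it's worth, both positions in that exchange fit in a few lines of code. A minimal Java sketch: a random (version 4) UUID, which is already safe to generate independently on any number of JVMs, plus a belt-and-braces variant prefixed with a per-server ID (the node.id system property is an invented convention for this sketch, not part of the UUID API):

    import java.util.UUID;

    public class ClusterIds {
        public static void main(String[] args) {
            // A version-4 UUID carries 122 random bits; generated independently
            // on any number of JVMs, a collision is astronomically unlikely.
            UUID id = UUID.randomUUID();
            System.out.println(id);

            // Belt-and-braces: prefix with a per-server identifier, e.g. each
            // server's JVM started with -Dnode.id=auth01, -Dnode.id=auth02, ...
            String nodeId = System.getProperty("node.id", "node-unknown");
            System.out.println(nodeId + ":" + id);
        }
    }

    The prefix makes uniqueness depend on operations (never reusing a node id) rather than probability, which is exactly the trade-off jverd describes.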
