XREF best practices in ESB cluster installation - OESB 10.1.3.3

Hi,
We have been using Oracle ESB for the last two years.
Two months ago I migrated our ESB installation to an ESB cluster in production (1 ESB DT node, 1 ESB RT node for polling
adapters, and 2 ESB RT nodes for further message processing).
We are using SOA Suite 10.1.3.3 with MLR #17 applied.
I have run into an issue with XREF (the populateXRefRow XPath function) in the production system and need assistance.
All our ESB processes consist of the following main parts:
1) A polling DB adapter (or FTP adapter; the adapter type does not matter) that initiates the ESB process, plus a routing service for that polling adapter
that asynchronously (!) invokes the Requestor ABC-level services (in AIA terms);
2) The Requestor ABC-level services perform the XREF population and continue message
processing.
XREF population is done in two steps:
we call the lookupXRefRow XPath function, and if the value is not present in the XREF, we make a
populateXRefRow call.
This logic worked fine when we were not using an ESB cluster, but now step 2 (the Requestor ABC level) is performed by different ESB servers,
and we frequently hit a unique constraint violation on XREF_DATA
during the populateXRefRow call.
The ESB RT nodes are used to balance the load, but the data they receive overlaps. For example, we poll document details rather than whole documents (the polling table is populated by Oracle Streams, and there is no guarantee that a document header arrives before its document details, because our system is heavily loaded and we use commit_serialization=none with parallel APPLY processes).
Each ESB RT instance can therefore receive different rows of the same document, while XREF population is done at the document-header level.
My question is: what are the best practices for working with XREF in ESB cluster installations?
Perhaps other people have faced this issue; how was it resolved?
I know a possible workaround: instead of calling the populateXRefRow function in XSLT, call a PL/SQL procedure or function that does the same thing but can ignore the duplicate-key exception.
I don't like this solution, but I don't know of any others.
Also, I cannot simply skip populating the XREF, because the XREF is actively used in inter-system communication.
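The PL/SQL workaround mentioned above can be sketched as an idempotent populate that treats the duplicate-key error as success. This is only a sketch under assumptions: the procedure name is hypothetical, and the XREF_DATA column list must be adjusted to match the actual AIA schema in your installation.

```sql
-- Hypothetical sketch of an idempotent XREF populate.
-- Column names below are assumptions; align them with your XREF_DATA table.
CREATE OR REPLACE PROCEDURE populate_xref_safe (
  p_table_name  IN VARCHAR2,  -- XREF table name, e.g. 'ORDER'
  p_column_name IN VARCHAR2,  -- system / column identifier
  p_row_number  IN VARCHAR2,  -- correlation id linking the row's values
  p_value       IN VARCHAR2   -- the identifier value to store
) AS
BEGIN
  INSERT INTO xref_data (xref_table_name, xref_column_name, row_number, value)
  VALUES (p_table_name, p_column_name, p_row_number, p_value);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    NULL;  -- another RT node won the race; the row already exists, so this is not an error
END populate_xref_safe;
/
```

Calling something like this from the transformation (via a database adapter or a custom XPath extension) instead of populateXRefRow makes the populate safe to run concurrently on multiple RT nodes; a MERGE statement keyed on the unique columns would achieve the same effect.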

Similar Messages

  • Best Practice for SAP PI installation to share Data Base server with other

    Hi All,
    We are going for a PI three-tier installation, and I need a best-practice document on whether the PI installation should share a database server with other non-SAP applications or not. I have never seen SAP PI installed on a database server that hosts other applications. I do not know what the best practice is, but I am sure sharing a database server with other non-SAP applications doesn't look good; it is not clean architecture. So I need an SAP best-practice document in order to get this approved by management. If somebody has a document link, please let me know.
    With regards
    Sunil

    You should not mix different apps into one database.
    If you have a standard database license provided by SAP, then this is not allowed. See these sap notes for details:
    [581312 - Oracle database: licensing restrictions|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=581312]
    [105047 - Support for Oracle functions in the SAP environment|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=105047] -> number 23
          23. External data in the SAP database
    Must be covered by an acquired database license (Note 581312).
    Permitted for administration tools and monitoring tools.
    In addition, we do not recommend using an SAP database with non-SAP software, since this constellation has considerable disadvantages.
    Regards, Michael

  • Best Practice for Updating Administrative Installation Point for Reader 9.4.1 Security Update?

    I deployed Adobe Reader 9.4 using an administrative installation point (AIP) with Group Policy when it was released. This deployment included a transform file. It's now time to update Reader with the 9.4.1 security MSP.
    My question is, can I simply patch the existing AIP in place with the 9.4.1 security update and redeploy it, or do I need to create a brand new AIP and GPO?
    Any help in answering this would be appreciated.
    Thanks in advance.

    I wouldn't update your AIP in place. I end up keeping multiple AIPs on hand. Each time a security update comes out I make a copy and apply the updates to that. One reason is this: when creating the AIPs, you need to apply the MSPs in the correct order; you cannot simply apply a new MSP to the previous AIP.
    Adobe's support patch order is documented here:  http://kb2.adobe.com/cps/498/cpsid_49880.html.
    That link covers Adobe Acrobat and Reader, versions 7.x through 9.x. A quarterly update MSP can only be applied to the previous quarterly. Should Adobe Reader 9.4.2 come out tomorrow as a quarterly update, you will not be able to apply it to the 9.4.1 AIP; you must apply it to the previous quarterly AIP, 9.4.0. At a minimum I keep the previous 2 or 3 quarterly AIPs around, as well as the MSPs to update them. The only time I delete my old AIPs is when I am 1000% certain they are no longer needed.
    Also, when Adobe's developers author the MSPs they don't include the correct metadata entries for in-place upgrades of AIPs: any AIP based on the original 9.4.0 MSI will not in-place upgrade any installation that is based on the 9.4.0 MSI and AIP; you must uninstall Adobe Reader, then re-install. This deficiency affects all versions of Adobe Reader 7.x through 9.x. Oddly, Adobe Acrobat AIPs will correctly upgrade in place.
    Ultimately, the in-place upgrade issue and the patch order requirements are why I say to make a copy, then update and deploy the copy.
    As for creating the AIPs:
    This is what my directory structure looks like for my Reader AIPs:
    F:\Applications\Adobe\Reader\9.3.0
    F:\Applications\Adobe\Reader\9.3.1
    F:\Applications\Adobe\Reader\9.3.2
    F:\Applications\Adobe\Reader\9.3.3
    F:\Applications\Adobe\Reader\9.3.4
    F:\Applications\Adobe\Reader\9.4.0
    F:\Applications\Adobe\Reader\9.4.1
    The 9.4.0 -> 9.4.1 MSP is F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp
    When I created my 9.4.1 AIP, I entered these at a cmd.exe prompt (if you don't have robocopy on your machine, you can get it from the Server 2003 Resource Kit):
    F:
    cd \Applications\Adobe\Reader\
    robocopy /s /e 9.4.0 9.4.1
    cd 9.4.1
    rename AdbeRdr940_en_US.msi AdbeRdr941_en_US.msi
    msiexec /a AdbeRdr941_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp /qb

  • How to check version of Best Practice Baseline in existing ECC system?

    Hi Expert,
    How can I check the version of the Best Practices Baseline in an existing ECC system, such as v1.603 or v1.604?
    Any help will be appreciated.
    Sayan

    Dear,
    Please go to https://websmp201.sap-ag.de/bestpractices and click on Baseline Packages; on the right-hand side you will see which version of the SAP Best Practices Baseline package applies to which release.
    If you are on EHP4 then you can use the v1.604.
    For "How to Get SAP Best Practices Data Files for Installation" (PDF, 278 KB), please refer to this link:
    https://websmp201.sap-ag.de/~sapidb/011000358700000421882008E.pdf
    Hope it will help you.
    Regards,
    R.Brahmankar

  • BODS best practices

    Hi,
    Please share best practices for Data Services installation and Designer work.
    Madhu

    Follow this doc, which may help you:
    http://help.sap.com/bp_dmg603/DMS_US/Documentation/DM_Quick_Guide_EN_US.doc
    I'm Back

  • Best practice RAC installation in two datacenter zones?

    Datacenter has two separate zones.
    In each zone we have one storage system and one rac node.
    We will install RAC 11gR2 with ASM.
    For data we want to use diskgroup +DATA, normal redundancy mirrored to both storage systems.
    For CRS+Voting we want to use diskgroup +CRS, normal redundancy.
    But for CRS+Voting diskgroup with normal redundancy we need 3 luns and we have only 2 storage systems.
    I believe the third lun is needed to avoid split brain situations.
    If we put two LUNs on storage #1 and one LUN on storage #2, what will happen when storage #1 fails? This means that two of the three disks for diskgroup +CRS are inaccessible.
    What will happen, when all equipment in zone #1 fails?
    Is human intervention required: at failure time, when zone#1 is coming up again?
    Is there a best practice for a 2-zone 2-storage rac configuration?
    Joachim

    Hi,
    As far as voting files are concerned, a node must be able to access more than half of the voting files at any time (a simple majority). In order to tolerate the failure of n voting files, the cluster must have at least 2n+1 of them configured.
    The problem in a stretched cluster configuration is that most installations use only two storage systems (one at each site), which means that the site hosting the majority of the voting files is a potential single point of failure for the entire cluster. If the storage or the site where n+1 voting files are configured fails, the whole cluster will go down, because Oracle Clusterware will lose the majority of the voting files.
    To prevent a full cluster outage, Oracle will support a third voting file on an inexpensive, low-end standard NFS-mounted device somewhere in the network. Oracle recommends putting the NFS voting file on a dedicated server which belongs to a production environment.
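    As a quick check of the 2n+1 majority rule with three voting files (n = 1):

    ```latex
    \text{lose } 1 \Rightarrow 2 \text{ of } 3 > \tfrac{3}{2} \quad \text{(majority kept, cluster survives)}
    \qquad
    \text{lose } 2 \Rightarrow 1 \text{ of } 3 < \tfrac{3}{2} \quad \text{(majority lost, cluster goes down)}
    ```

    With only two storage systems, three voting files cannot be split so that either site's failure leaves a majority, which is exactly why the third vote on NFS is needed.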
    Use the White Paper below to accomplish it:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Also, regarding how the voting files and OCR (11.2) should be stored when using ASM, I recommend you read:
    {message:id=10028550}
    Regards,
    Levi Pereira

  • Best Practice on Post Steps after 11.2.0.2.4 ORACLE RAC installation

    I finished the RAC 11.2.0.2 installation and patched it to 11.2.0.2.4. The database is also created.
    The nodes are Red Hat Linux and the storage is on ASM.
    Is there any good article or link regarding the best practice of post steps after installation?
    Thanks in advance.

    Hi,
    I also want to know what kind of monitoring scripts I can set up as cron jobs to monitor or detect any failures or problems.
    To monitor the cluster (OS level):
    I suggest you use a powerful tool, CHM (Cluster Health Monitor), that already comes with the Grid Infrastructure product.
    What do you need to do to configure it? Nothing. Just use it.
    Cluster Health Monitor (CHM) FAQ [ID 1328466.1]
    See this example:
    http://levipereira.wordpress.com/2011/07/19/monitoring-the-cluster-in-real-time-with-chm-cluster-health-monitor/
    To monitor Database:
    PERFORMANCE TUNING USING ADVISORS AND MANAGEABILITY FEATURES: AWR, ASH, and ADDM and Sql Tuning Advisor. [ID 276103.1]
    The purpose of this article is to illustrate how to use the new 10g manageability features to diagnose
    and resolve performance problems in the Oracle Database.
    Oracle10g has powerful tools to help the DBA identify and resolve performance issues
    without the hassle of analyzing complex statistical data and extensive reports.
    Hope this help,
    Levi Pereira
    Edited by: Levi Pereira on Nov 3, 2011 11:40 PM

  • Best Practices for SRM Installation !!

    Hi
        Can someone share the best practices for SRM installation?
    What is the typical timeframe to install SRM on a development server as well as on the production server?
    Appreciate the responses.
    Thanks,
    Arvind

    Hi
    I don't know whether this will help you.
    See these links as well.
    <b>http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm
    http://help.sap.com/bp_scmv150/index.htm
    http://help.sap.com/bp_biv170/index.htm
    http://help.sap.com/bp_crmv250/CRM_DE/index.htm</b>
    Hope this will help.
    Please reward suitable points.
    Regards
    - Atul

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium-to-large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes or more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using Red Hat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in the distributed TREX environment. The guides mention a file server, which seems to be another server connected to a SAN, exporting or sharing the file systems that must be mounted on all the TREX systems (master, backup, and slaves); but we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage, and in the case of Red Hat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should be the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually have to):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only one repository. So when you have installed the service pack on one node, the new versions of the bundles and other content are unpacked into the repository (most likely under /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time, and as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
    cheers,
    Jörg

  • BEST PRACTICE FOR THE REPLACEMENT OF REPORTS CLUSTER

    Hi,
    I've read the note reports_gueide_to_changed_functionality on OTN.
    On page 5 it is stated that the Reports cluster is deprecated.
    Snippet:
    Oracle Application Server High Availability provides the industry’s most
    reliable, resilient, and fault-tolerant application server platform. Oracle
    Reports’ integration with OracleAS High Availability makes sure that your
    enterprise-reporting environment is extremely reliable and fault-tolerant.
    Since using OracleAS High Availability provides a centralized clustering
    mechanism and several cutting-edge features, Oracle Reports clustering is now
    deprecated.
    Please can anyone tell me what the best practice is to replace the Reports cluster?
    It's really annoying that the clustering technology changes in every version of Reports!!!
    martin

    hello,
    in reality, Reports Server "clusters" were more a load-balancing solution than clustering (there is no shared queue or cache). Since it is desirable to have one load-balancing/HA approach for the application server, Reports Server clustering is deprecated in 10gR2.
    We understand that this frequent change can cause some level of frustration, but it is our strong belief that unifying the HA "attack plan" for all of the app server components will ultimately benefit customers in simplifying their topologies.
    The current best practice is to deploy LBRs (load-balancing routers) with sticky-routing capabilities to distribute requests across middle-tier nodes in an app server cluster.
    Several customers in high-end environments have already used this kind of configuration to ensure optimal HA for their systems.
    thanks,
    philipp

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB, and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party checkbox and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • Best Practice for Installation of Both Leopard and Aperture 2 upgrade.

    I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice to install both, but haven't been able to find one, only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove to a prior thread would be helpful.
    Thanks for pointing me in the right direction.
    Steve

    steve hutchcraft wrote:
    I've tried searching for a best practice to install...
    • First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
    • Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.), go to Utilities/Disk Utility/First Aid and Repair Permissions. Repairing permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
    • When you upgrade the OS do a "clean install."
    • RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
    • After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
    • If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
    Good luck!
    -Allen Wicks

  • Best practice of 11G release 2 Grid & RAC installation on Solaris 10

    Hi Experts,
    Please share your 11g Release 2 Grid Infrastructure and RAC installation experience on Sun SPARC.
    I would appreciate it if you could provide documentation with complete information from server setup to database setup (other than the Oracle documentation).
    Also, please let me know which is the best storage option (NFS, ASM, ...) and the pros and cons.
    Regards,
    Rasin M

    Hi,
    Appreciate if you can provide documentation which provide complete information from server setup to database setup (other than Oracle documentation)
    Check this in MOS:
    RAC Assurance Support Team: RAC Starter Kit and Best Practices (Solaris)
    https://support.oracle.com/CSP/main/article?cmd=show&id=811280.1&type=NOT
    Regards,
    Levi Pereira
    http://levipereira.wordpress.com
