Informatica workflow architecture - question

One of my managers recently discussed a process improvement with me: migrating only the required session rather than the entire workflow. For example, if a workflow has 50 sessions, he suggests converting each session task into its own child workflow; i.e., a workflow with 10 sessions would become one parent workflow that triggers 10 child workflows. That way, he said, if there is a change to one session we can make it in just one child workflow and migrate only that change.
I am not convinced by this; please provide your suggestions.
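For context, the parent workflow in such a design would typically launch each child through a Command task that shells out to pmcmd. A rough sketch of one such command (service, domain, user, folder, and workflow names are all hypothetical):

pmcmd startworkflow -sv INT_SVC_DEV -d Domain_Dev -u Administrator -p <password> -f MY_FOLDER -wait wf_CHILD_SESSION_01

With -wait, the Command task blocks until the child finishes, so the parent can still sequence the children and fail if one of them fails.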

The drive on which I installed Informatica ran out of disk space, and I found this in the error log:

SF_34125 Error in writing storage file [C:\Informatica\9.0.1\server\infa_shared\Storage\pmservice_Domain_ssintr01_INT_SSINTR01_1314615470_0.dat]. System returns error code [errno = 28], error message [No space left on device].

I then tried to shut down the Integration Service so I could free up some space on the disk, and got the following messages in the log file:

LM_36047 Waiting for all running workflows to complete.
SF_34014 Service [INT_SSINTR01] on node [node01_ssintr01] shut down.

When I tried to start the Integration Service again, I got the following error:

Could not execute action... The Service INT_SSINTR01 could not be enabled due to the following error: [DOM_10079] Unable to start service [INT_SSINTR01] on any node specified for the service

After this I could not find any entry in the log file for the Integration Service, so I went to the domain log for more details and found these entries:

DOM_10126 Request to disable [SERVICE] [INT_SSINTR01] in [COMPLETE] mode.
DOM_10130 Stop service process for [SERVICE] [INT_SSINTR01] on node [node01_ssintr01].
LIC_10040 Service [INT_SSINTR01] is stopping on node [node01_ssintr01].
SPC_10015 Request to stop process for service [INT_SSINTR01] with mode [COMPLETE] on node [node01_ssintr01].
DOM_10127 Request to disable service [INT_SSINTR01] completed.
DOM_10126 Request to disable [SERVICE] [Repo_SSINTR01] in [ABORT] mode.
DOM_10130 Stop service process for [SERVICE] [Repo_SSINTR01] on node [node01_ssintr01].
LIC_10042 Repository instance [Repo_SSINTR01] is stopping on node [node01_ssintr01].
SPC_10015 Request to stop process for service [Repo_SSINTR01] with mode [ABORT] on node [node01_ssintr01].
DOM_10127 Request to disable service [Repo_SSINTR01] completed.
DOM_10115 Request to enable [service] [Repo_SSINTR01].
DOM_10117 Starting service process for service [Repo_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [Repo_SSINTR01] on node [node01_ssintr01].
SPC_10018 Request to start process for service [Repo_SSINTR01] was successful.
SPC_10051 Service [Repo_SSINTR01] started on port [6,019] successfully.
DOM_10118 Service process started for service [Repo_SSINTR01] on node [node01_ssintr01].
DOM_10121 Selecting a primary service process for service [Repo_SSINTR01].
DOM_10120 Service process on node [node01_ssintr01] has been set as the primary node of service [Repo_SSINTR01].
DOM_10122 Request to enable service [Repo_SSINTR01] completed.
LIC_10041 Repository instance [Repo_SSINTR01] has started on node [node01_ssintr01].
DOM_10115 Request to enable [service] [INT_SSINTR01].
DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

The domain log then repeats the same disable/restart cycle with identical messages, again ending in DOM_10055 and DOM_10079.

Then I tried shutting down the domain and restarting the Informatica service. I got the following error when the Integration Service was initialized:

DOM_10115 Request to enable [service] [INT_SSINTR01].
DOM_10117 Starting service process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10014 Request to start process for service [INT_SSINTR01] on node [node01_ssintr01].
SPC_10009 Service process [INT_SSINTR01] output [Informatica(r) Integration Service, version [9.0.1], build [184.0604], Windows 32-bit].
SPC_10009 Service process [INT_SSINTR01] output [Service [INT_SSINTR01] on node [node01_ssintr01] starting up.].
SPC_10009 Service process [INT_SSINTR01] output [Logging to the Windows Application Event Log with source as [PmServer].].
SPC_10009 Service process [INT_SSINTR01] output [Please check the log to make sure the service initialized successfully.].
SPC_10008 Service Process [INT_SSINTR01] output error [ERROR: Unexpected condition at file:[..\utils\pmmetrics.cpp] line:[2118]. Application terminating. Contact Informatica Technical Support for assistance.].
SPC_10012 Process for service [INT_SSINTR01] terminated unexpectedly.
DOM_10055 Unable to start service process [INT_SSINTR01] on node [node01_ssintr01].
DOM_10079 Unable to start service [INT_SSINTR01] on any node specified for the service.

I tried creating a new Integration Service and associating it with the same repository; I got the same error. I then tried creating a new repository and a new Integration Service; still the same error. What might be the workaround to start the Integration Service?
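One remediation that is sometimes suggested for this symptom (an assumption on my part, not something confirmed in this thread) is that the Integration Service's run-time state files under the Storage directory were corrupted when the disk filled up, and that moving them aside so the service can recreate them may help. A sketch on Windows, using the path from the log above (back the files up rather than deleting them):

cd C:\Informatica\9.0.1\server\infa_shared
move Storage Storage_backup
mkdir Storage

If the service then starts, the backup can be discarded; if it does not, restore the files before contacting support.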

Similar Messages

  • How do full and incremental Informatica workflows differ?

    Hi:
    I've read that a custom Informatica workflow should have full and incremental versions. I've compared the incremental and full versions of several seeded OBIA workflows, but I cannot see how they differ. For example, when I look at the session objects for each workflow, both have a source filter that limits a date field by LAST_EXTRACT_DATE.
    Can someone shed some light on this?
    Thanks.

    To answer your question, they differ in various ways for various mappings. For most fact tables, which hold high-volume transactional data, there may be a SQL override in the FULL session that uses INITIAL_EXTRACT_DATE and a different override in the INCREMENTAL session (the one without the Full suffix) that uses LAST_EXTRACT_DATE. For dimension tables, different logic for FULL versus incremental is not always required.
    Also, all FULL sessions (even if they have the same SQL) have the BULK option turned on in the session properties, which allows a faster load since the table is usually truncated on FULL but not on incremental. As a best practice, for facts it is best to have separate FULL and INCREMENTAL sessions. For dimensions, depending on the volume of transactions and the change-capture date field available, you may or may not need different logic. If you do a FULL load, however, it is better to use BULK to speed up the load.
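    Concretely, the two overrides usually differ only in which parameter the source filter references. A hedged sketch (table and column names hypothetical; the exact date format varies by container):
    -- FULL session SQL override
    WHERE LAST_UPDATE_DATE >= TO_DATE('$$INITIAL_EXTRACT_DATE', 'MM/DD/YYYY')
    -- INCREMENTAL session SQL override
    WHERE LAST_UPDATE_DATE >= TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY')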
    If this is helpful, please mark as correct or helpful.

  • Informatica Workflow Manager ODBC Relational Connection for ETL in DAC

    In Informatica Workflow Manager, I have created a Relational Connection of type ODBC and specified Connect String as "DSN=BIEEDW" where "BIEEDW" is the System ODBC DSN already set pointing to a SQL Server 2008 database.
    However, when the ETL runs in DAC, the following error occurs in the session log files, showing that the database and driver cannot be located:
    MAPPING> CMN_1569 Server Mode: [UNICODE]
    MAPPING> CMN_1570 Server Code page: [MS Windows Traditional Chinese, superset of Big 5]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6187 Session target-based commit interval is [10000].
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [DSN=BIEEDW], user [bieedw02]
    MAPPING> CMN_1761 Timestamp Event: [Wed May 22 01:29:17 2013]
    MAPPING> CMN_1022 Database driver error...
    CMN_1022 [
    [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
    Database driver error...
    Function Name : Connect
    Database driver error...
    Function Name : Connect
    Database Error: Failed to connect to database using user [bieedw02] and connection string [DSN=BIEEDW].]
    Any hints on setting the Connect String for the ODBC Relational Connection?

    Hi,
    Let me tell you the real story:
    Our server architecture consists of two servers:
    Windows Server 2008 R2 (64-bit) platform with the following installed:
    - SQL Server 2008
    - DAC 10.1.3.4.1
    - OBIEE 11g
    - BI Apps (Financial Analytics) 7.9.6.3
    - Informatica Server 9.1.0 HotFix 2
    Windows Server 2003 Enterprise Edition SP2 (32-bit) platform with the following installed:
    - Informatica Clients (i.e. Workflow Manager, Repository Manager, Designer and Workflow Monitor)
    The ODBC Relational Connection is thus configured on the Informatica client machine (Workflow Manager), which is a 32-bit platform.
    Any idea?
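    One thing worth checking (my assumption, not something confirmed in this thread): the Integration Service resolves the DSN at run time on the server host, not on the client where Workflow Manager runs, so "BIEEDW" must also exist as a System DSN on the Windows Server 2008 R2 machine, created with the ODBC administrator whose bitness matches the Informatica server process. On 64-bit Windows the two administrators live at the standard paths:
    C:\Windows\System32\odbcad32.exe   (64-bit ODBC Data Source Administrator)
    C:\Windows\SysWOW64\odbcad32.exe   (32-bit ODBC Data Source Administrator)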

  • Informatica Workflow "Succeeded" even though corresponding session failed

    We have installed following,
    obiee 10.3.1.4,
    DAC 10.3.1.4,
    BIApps 7.9.6,
    Informatica 8.6
    We are trying to run an HRMS full load from DAC. In Informatica Workflow Monitor the session "SDE_ORA_PayrollFact_Full" FAILED, but the corresponding workflow "SDE_ORA_PayrollFact_Full" shows a status of "Succeeded". Since the Informatica workflow succeeded, DAC marks that job as succeeded.
    So the question is: why does Informatica show the workflow as "Succeeded" even though the corresponding session is in failed status?
    Thanks,
    slokam

    Frustrating isn't it?
    Open Workflow Manager and locate the workflow SDE_ORA_PayrollFact_Full. Go to Tools and select Workflow Designer. Drag the SDE_ORA_PayrollFact_Full workflow into the work area. Double-click the session that is failing and ensure the following checkboxes are checked: (1) Fail parent if this task fails, (2) Fail parent if this task does not run.
    Hope this helps.
    - Austin

  • Informatica Workflow hung/waits.

    We have Informatica version 8.6.0 HF4 and OBIA 7.9.6. We have been facing a weird issue in recent months with Informatica workflows hanging/waiting.
    A session completes quickly, but the workflow takes a long time to recognize that its last session finished, so the workflow itself is slow to complete. Sometimes the gap between the last session's completion time and the workflow's completion time is hours. I am not sure what is causing the workflow to wait; I know for sure there is no lock on the target database.
    The behavior is also random: it happens intermittently on different workflows at different times, with no specific pattern to narrow down a root cause.
    Did any of you face this issue in your environment? Appreciate your time for response.
    Thanks,
    Ash


  • Workflow design questions: FM vs WF to call FM

    Here are a couple of workflow design questions.
    1. We have work item 123, which lets the user navigate to a custom transaction TX1. The user can make changes in TX1. At save, or at a user command in TX1, the program calls a function module (FM1) to delete WI 123 and create a new WI to send to a different agent.
    Since work item 123 is still open and locked, FM1 cannot delete it immediately; it has to use a DO loop to check whether work item 123 has been dequeued before performing the WI delete.
    Alternative: instead of calling FM1 directly, the program can raise an event that starts a new workflow with one step/task/method that calls FM1. Even with this alternative, work item 123 can still be locked when the new workflow's task/method calls FM1.
    I do not like the alternative, which calls the same FM1 indirectly via a new workflow/step/task/method.
    2. When an application object changes, the user exit calls a function module (FMx) related to workflow. The ABAP developer does not want to call FMx directly; she wants to raise an event that calls a workflow .. step .. task .. method .. FMx indirectly. That way any COMMIT that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to call FMx 'in update task' so that FMx is only called after the COMMIT of the application object.
    Any recommendation?
    Amy

    Mike,
    Yes, in my first design the TX can (1) raise a terminating event for the existing work item/workflow and then (2) raise another event to start another workflow. Both 1 and 2 would be in FM1.
    The design question is then: should FM1 be called from the TX directly, or should the TX raise an event that starts a new workflow with one step/task, which calls a method on the business object, which calls FM1?
    In my second design question, when an application object changes, the user exit calls a workflow-related function module (FMx). The ABAP developer does not want to call FMx directly; she wants to raise an event that calls a workflow, which has one step/task, which calls a method, which calls FMx indirectly. That way any COMMIT that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to either call FMx 'in update task', so that FMx is only called after the COMMIT of the application object, or raise an event with a receiver FM (FMx).
    Thanks.
    Amy
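    For reference, the 'in update task' variant Amy recommends looks like this in ABAP (the function module name is hypothetical, and the FM must be classified as an update module in SE37):
    * Register the call; it executes only when COMMIT WORK fires, so any
    * commits inside the FM cannot disturb the application's own LUW.
    CALL FUNCTION 'Z_FM_X' IN UPDATE TASK.
    COMMIT WORK.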

  • Oracle VM Server for SPARC - network multipathing architecture question

    This is a general architecture question about how best to set up network multipathing.
    I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
    From reading the document it appears it is possible to:
    (a) Configure IPMP in the Service Domain (pg. 155)
    - This protects against link-level failure but won't protect against the failure of an entire Service LDOM?
    (b) Configure IPMP in the Guest Domain (pg. 154)
    - This will protect against Service LDOM failure but moves the complexity to the Guest Domain
    - This means there are two (2) VNICs in the guest, though?
    In AIX, "Shared Ethernet Adapter (SEA) Failover" it presents a single NIC to the guest but can tolerate failure of a single VIOS (~Service LDOM) as well as link level failure in each VIO Server.
    https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
    Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
    (1) Two (2) Service Domains
    (2) Network Redundancy within the Service Domain
    (3) Service Domain Redundancy
    (4) Simplify the Guest Domain (ie single virtual NIC) with no IPMP in the Guest
    Virtual disk multipathing appears to work as one would expect (at least according to the documentation, pg. 120); I don't need to set up mpxio in the guest. So I'm not sure why I would need to set up IPMP in the guest.

    Hi,
    there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
    For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
    ldm set-vnet linkprop=phys-state vnetX ldom-name
    If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
    ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
    Bye,
    Alexander.
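    For completeness, once linkprop=phys-state is set, a link-based IPMP group inside a Solaris 11 guest could be built roughly like this (interface names and the address are hypothetical; a Solaris 10 guest would use ifconfig-based IPMP groups instead):
    ipadm create-ip net0
    ipadm create-ip net1
    ipadm create-ipmp -i net0 -i net1 ipmp0
    ipadm create-addr -T static -a 10.0.0.50/24 ipmp0/v4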

  • DAC as scheduler tool for Informatica workflows (not for the OBI Apps repository)

    I have to use DAC as an administration/scheduler tool for Informatica workflows. Earlier I configured the same for OBI Apps and it runs fine. This time I am not referring to Oracle_BI_DW_Base.rep, the built-in repository provided with OBI Apps; I have to run my own workflows through DAC. I went through the steps below to achieve this:
    1) Created a new user.
    2) Used that user to configure the DAC connection as well as the DAC repository tables (I am not sure whether that is a best practice or not).
    3) Created a new Source System Container (no containers were there initially).
    4) I have a folder in Informatica named "MyRep", so I created task logical and physical folders with the same name in DAC (Tools => Seed Data => ...).
    5) Created a new Subject Area, tables, and tasks, and performed Synchronize Tasks.
    6) Set up Informatica servers and Physical Data Sources. Tested them; no flaws.
    7) Added a new Execution Plan.
    8) Assigned the Subject Area to it.
    9) Clicked the "Generate" button in the parameters section of the execution plan. No parameters were generated, which is where the confusion started.
    10) I couldn't find my tasks listed in the ORDERED TASKS tab.
    11) Clicked BUILD and got the error message below:
    MESSAGE:::No tasks were found to build this execution plan.
    Please let me know which step is wrong here.
    Thanks,

    Thanks Ahsan, that solved my problem.
    I had forgotten to set the Configuration Tags properly; after setting them I assembled the Subject Area again. Cheers, it's working!

  • Informatica Workflow not able to connect to source database.

    Hi,
    I have completed the installation of OBI Apps. All the test connections work, and I have configured the Relational Connections in Informatica Workflow Manager; the passwords, usernames, and connect string are correct. Still, when I run an ETL, all the tasks that need to connect to the source database fail. I checked the session logs of those workflows, and they gave me the following error:
    READER_1_1_1> DBG_21438 Reader: Source is [UPG11i], user [obiee]
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 05 18:01:37 2008]
    READER_1_1_1> RR_4036 Error connecting to database [
    Database driver error...
    Function Name : Logon
    ORA-12154: TNS:could not resolve the connect identifier specified
    Database driver error...
    Function Name : Connect
    Database Error: Failed to connect to database using user [obiee] and connection string [UPG11i].]
    From DAC I am able to connect to the databases, so there seems to be some problem in the relational connections. What are the drivers involved, and where should they be installed?
    Please help.
    Thanks and regards,
    Soumya.

    Hi,
    I think you need to check your connections in Workflow Manager, so I'd suggest you go through the section below again. Your error clearly shows that there is a problem with the connections:
    "4.13 Configuring Relational Connections in Informatica Workflow Manager" in the Installation & Configuration Guide.
    Post back here, if you find any problems.
    Thanks,
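    For what it's worth, ORA-12154 means the Oracle client on the host running the Integration Service could not resolve the alias [UPG11i], so an entry like the sketch below would need to exist in the tnsnames.ora used by the Informatica server machine (host, port, and service name here are hypothetical):
    UPG11i =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = UPG11i))
      )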

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
    We have a pilot VDI Core with a remote MySQL setup and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores; ideally we would manage the entire VDI implementation from a single pane of glass, with multiple Desktop Provider groups representing the geographic locations.
    Is it possible to just setup VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
    Is it possible to have a separate WebLogic domain for each application, all listening
    on ports 80 and 443?
    Thanks,
    Paul

  • Running MII on a Wintel virtual environment + hybrid architecture questions

    Hi, I have two MII Technical Architecture questions (MII 12.0.4).
    Question 1: Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
    Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants that require more horsepower. While we have a preference for Solaris UNIX-based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats or watch-outs around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • Trigger informatica workflow using Process chain

    Hi Gurus,
    Please guide me on how to trigger an Informatica workflow using process chains.
    Please give me the steps to create one.
    Thanks in advance,
    Babu

    Duplicate thread.
    See:
    /thread/601213 [original link is broken]

  • Architectural question

    A little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created, the query is executed, including when the page will not be rendered in the browser...

    > A little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    There actually is such a method: if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
    > The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created, the query is executed, including when the page will not be rendered in the browser...
    This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases where the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
    If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. There is still a wrinkle to be aware of, though: if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context-relative path to the page that will actually be displayed).
    One might argue, of course, that this is the sort of detail an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
    Craig McClanahan
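    A minimal sketch of the guard Craig describes, assuming a page bean that extends FacesBean (the view path and the query method are hypothetical):
    // Requires: import javax.faces.context.FacesContext;
    public void beforeRenderResponse() {
        // Run pre-rendering work only when this page is the one being displayed
        String viewId = FacesContext.getCurrentInstance().getViewRoot().getViewId();
        if ("/Page1.jsp".equals(viewId)) {
            executeQuery(); // hypothetical query moved out of the constructor
        }
    }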

  • BPEL/ESB - Architecture question

    Folks,
    I would like to ask a simple architecture question.
    We have to invoke partner web services, which are rpc/encoded, from SOA Suite 10.1.3.3. Here the role of SOA Suite is simply to facilitate communication between an internal application and the partner services; as a result, SOA Suite doesn't have any processing logic. The flow is simply:
    1) The internal application invokes the SOA Suite service (a wrapper around the partner service) and the result is processed.
    2) SOA Suite translates the incoming message, communicates with the partner service, and returns the response to the internal application.
    Please note that at this point there is no plan to move all processing logic from the internal application to SOA Suite. Based on the above details, I would like recommendations on which technology/solution from SOA Suite is most efficient to facilitate this communication.
    Thanks in advance,
    Ranjith

    You can look at the design pattern called Channel Adapter.
    Here is how you could design it: the processing logic remains in the application, and you build a channel adapter as a BPEL process. The channel adapter transforms your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls itself.
    Hope this helps.

  • Architecture question... brain teasing!

    Hi,
    I have an architecture question about Grid Control that Oracle Support hasn't been able to figure out so far.
    I have two management servers, M1 and M2,
    two VIPs (virtual IPs), V1 and V2,
    and two agents, A1 and A2.
    The scenario:
    M1 ----> M2
    |        |
    V1       V2
    |        |
    A1       A2
    The repository at M1 is configured as primary and sends archive logs to M2. On failover, I have it set up to make M2 the primary repository, and all works well!
    Under normal conditions, A1 talks to M1 through V1 and A2 talks to M2 through V2. No problem so far!
    If M1 dies, V1 forwards A1 to M2; if M2 dies, V2 forwards A2 to M1.
    How would this work?
    I think (haven't tried it yet): what if I configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2 and from A1 to A2, and just change V1 to V2? Would this work?
    Please advise!

    An SLB is not an option for us here!
    Can we just repoint A1 to M2 using a DNS CNAME change?
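    For illustration, the repoint amounts to swapping the target of the agents' alias in DNS; in BIND zone-file syntax it would look something like this (all names hypothetical):
    v1.example.com.    IN    CNAME    m2.example.com.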
