BPEL async calls OSB which calls BPEL async

In this scenario, I use the bpel-10g transport with an opmn URL to call the back-end BPEL process. The bpel-10g transport seems to be the only way to include the callback address. I'm getting an error like this:
<fault><remoteFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>exception on JaxRpc invoke: HTTP transport error: javax.xml.soap.SOAPException: java.security.PrivilegedActionException: javax.xml.soap.SOAPException: Message transmission failure, response code: 500</summary>
</part></remoteFault></fault>
What must be done to correctly connect the back-end async BPEL process from OSB?
Are there any tricks which must be used to connect the front-end async BPEL process to the OSB as well?
I'm using OSB 10gR3 and BPEL 10.1.3.4.

Hi,
Please provide a solution to resolve this issue.
Any updates or clues?

Similar Messages

  • Custom fault message from OSB is not thrown in BPEL

    Hi,
    I have created a custom fault in OSB which is sent to BPEL.
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
    <env:Header/>
    <env:Body>
    <env:Fault>
    </env:Fault>
    </env:Body>
    </env:Envelope>
    If the namespace on the Fault tag is "http://schemas.xmlsoap.org/soap/envelope/",
    the message is understood as a fault in BPEL. But if I change the namespace, BPEL treats it as a reply message.
    How can I use my custom fault message with a namespace other than http://schemas.xmlsoap.org/soap/envelope/?
    Regards,

    A SOAP Fault has a defined structure and defined namespaces, though it does provide a placeholder for custom data. Just as you cannot change the namespace of Envelope or Body, you can't change the namespace of Fault (if you want it to be recognized by client apps). Any custom data you want to send in the SOAP Fault needs to be set as a child of the 'detail' element of the SOAP Fault. This custom data can have any namespace you wish, and this custom XML will be the element you define as the fault element in your WSDL:
    <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
    <env:Header/>
    <env:Body>
    <env:Fault>
    <faultcode>env:Client</faultcode>
    <faultstring>some error string</faultstring>
    <faultactor>somedata</faultactor>
    <detail>
    {here you can put your custom XML content in any namespace which you have defined in the WSDL}
    </detail>
    </env:Fault>
    </env:Body>
    </env:Envelope>
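    The structure above can be checked programmatically. Below is a minimal sketch (Python standard library only) that builds such a fault and verifies that the custom payload under 'detail' keeps its own namespace; the AccountFault element and the http://example.com/faults namespace are made-up examples, not part of this thread:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical namespace for the custom fault detail payload.
CUSTOM_NS = "http://example.com/faults"

def build_fault(code, reason, detail_payload):
    """Build a SOAP 1.1 fault envelope; custom data goes under <detail>."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    fault = ET.SubElement(body, f"{{{SOAP_NS}}}Fault")
    # faultcode/faultstring/detail are unqualified in SOAP 1.1.
    ET.SubElement(fault, "faultcode").text = code
    ET.SubElement(fault, "faultstring").text = reason
    detail = ET.SubElement(fault, "detail")
    detail.append(detail_payload)
    return env

custom = ET.Element(f"{{{CUSTOM_NS}}}AccountFault")
custom.text = "account is locked"
envelope = build_fault("env:Client", "some error string", custom)

# The Fault element stays in the SOAP envelope namespace...
fault = envelope.find(f".//{{{SOAP_NS}}}Fault")
# ...while the custom payload under <detail> keeps its own namespace.
found = fault.find(f"detail/{{{CUSTOM_NS}}}AccountFault")
print(found.tag)  # {http://example.com/faults}AccountFault
```

    A BPEL client will only recognize the Fault when it is in the SOAP envelope namespace, which is why the custom namespace has to live on the detail child instead.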

  • How to get the instance id of BPEL in OSB

    We have a process in OSB which invokes a BPEL async process and returns success once the BPEL is invoked.
    Can we refer to/get the BPEL instance id in OSB by any means?

    There is another option. You may set a unique instance name in the BPEL process based on some value in the payload. From OSB you may query the COMPOSITE_INSTANCE table of the SOAINFRA schema on the basis of the instance name (title column) and/or composite name (composite_dn column) to get the value of the instance id (ID column) and return it in the response from OSB.
    Regards,
    Anuj
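    The lookup Anuj describes can be sketched as a small parameterized query. This is only an illustration: it uses an in-memory SQLite stand-in for the COMPOSITE_INSTANCE table (the real SOAINFRA table has many more columns and Oracle-specific types), and the row data is invented:

```python
import sqlite3

# Stand-in for SOAINFRA.COMPOSITE_INSTANCE (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE composite_instance (id INTEGER, title TEXT, composite_dn TEXT)"
)
conn.execute(
    "INSERT INTO composite_instance VALUES (42, 'ORDER-1001', 'default/OrderProcess!1.0')"
)

def find_instance_id(conn, title, composite_dn):
    """Look up the instance id by the unique instance name (title)
    set in the BPEL process, optionally narrowed by composite name."""
    row = conn.execute(
        "SELECT id FROM composite_instance WHERE title = ? AND composite_dn LIKE ?",
        (title, composite_dn + "%"),
    ).fetchone()
    return row[0] if row else None

print(find_instance_id(conn, "ORDER-1001", "default/OrderProcess"))  # 42
```

    In OSB itself the same query would typically be issued via a JCA DbAdapter business service rather than direct JDBC.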

  • Oracle BPEL - MTOM/XOP support

    I'm testing integration between a JBoss Web service and Oracle BPEL. I created a web service using a stateless EJB with MTOM turned on in JBoss 5.0.1 and tested it using the OSB which worked and returned the MTOM message, but the BPEL is getting confused with the MTOM XOP include element.
    The error message from the BPEL is "exception on JaxRpc invoke: HTTP transport error: javax.xml.soap.SOAPException: java.security.PriviledgeActionException: oracle.j2ee.ws.saaj.ContentTypeException: Not a valid SOAP Content-Type: application/xop+xml; type="text/xml""
    The OSB says the returned XML in the MIME's XOP include element looks like the following
    "<xop:Include href="cid:[email protected]" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>"
    Something to note is that in Java the datatype is defined as byte[] and that the WSDL generated with MTOM turned on is identical to the schema generated with MTOM turned off from JBoss.
    Also, does BPEL support the 2004/08 XOP standard?
    I have tried to find an example of MTOM/XOP being used on Oracle's BPEL engine and I have been unable to find any useful information.
    I appreciate any help. Thank you.
    Shane

    So,
    11g BPEL / JDeveloper does not support MTOM?
    If I want to call an external web service as a reference from BPEL and the external web service supports MTOM, what are my choices for passing attachments to it? Am I limited to inline content?
    Thanks,
    Joe

  • Oracle SOA Suite 11g (BPEL Process) Dependency with Database

    Folks,
    Oracle SOA Suite 11g requires a relational DB such as Oracle or SQL Server to maintain the metadata for SOA and related components.
    Can you please let me know from your experience - what should happen to a running BPEL process if the database (SOAINFRA) is down? Being very specific -
    1. Does/should the endpoint become inaccessible? Or does the service remain accessible but throw a remote-fault exception?
    2. What happens to a process which is in a running state?
    My experience in these scenarios is not encouraging; we have noticed the following: during the time when our SOAINFRA database is down, the in-process (running) instances are left in a running state. These instances do not complete or move forward even when the database is back. These instances do not provide any information in Enterprise Manager.
    Also, if the DB is down the WSDLs are still accessible - is this the right behaviour?
    If this is how SOA works then I would doubt its merit... your thoughts please?
    Thanks & Regards
    Shyam Kumar

    I am not posting here because I'm smart but because your questions look interesting.
    > during the time when our SOAINFRA database is down, the in-process (running) instances are left in a running state
    You know that BPEL is a stateful product and it needs database support for persisting process state, so if the database goes down it is expected behaviour that processes may get stuck in the state they were in when the DB went down. I don't see anything wrong with it.
    > These instances do not complete or move forward even when the Database is back.
    Instances should be automatically or manually recoverable. If this is not the case then you may raise an SR with support.
    > These instances do not provide any information in the Enterprise Manager.
    What information could you not find? Can you be more specific?
    > Also, if DB is down WSDL's are still accessible, is this right way?
    Yes, again it's expected, because design-time data also gets stored in the DB, and hence if the DB goes down, design-time data may not be accessible.
    > If this is how SOA does work then I would doubt for its merit... your thought plz?
    Why so? The alternative to a DB is physical memory, and I don't think that is suitable for long-running BPEL processes and their heavy design-time data. Moreover, it would create problems in terms of auditing, support and maintenance. If your requirement is such that it does not need any long-running BPEL processes, auditing is not required, and the use case is simple routing and transformation with no DB dependency, then you may go for OSB, which is a stateless product. BPEL is generally used for long-running processes and hence a DB is the best option, in my opinion at least.
    Regards,
    Anuj

  • Error loading wsdl from the OSB in the SOA Suite over https

    Hi,
    We are running SOA Suite 11g R1 PS2. I have created a web service in the OSB which runs on https://soa-test.myfirm.com/schedule/service/v1. This service works fine. I can test it in the OSB and using SoapUI. Now I want to make use of it in a BPEL process in the SOA Suite, so I added the web service as an external reference and pointed it to the above URL. This works... it sees the port type and all the bindings. But when I want to deploy it, it doesn't compile and I get the following issue:
    Error(19,30): Load of wsdl "https://soa-test.myfirm.com/schedule/service/v1?wsdl" failed
    Error(29,89): Cannot find Port Type "{http://www.myfirm.com/wsdl/schedule/service}ScheduleServicePortType" for "ScheduleService" in WSDL Manager
    So it seems it cannot load the WSDL. I saw it might be a proxy issue, but I don't use a proxy. I can see the WSDL in my browser without proxy settings.
    Does anyone know why it cannot load my WSDL?
    best regards,
    H
    Edited by: 860367 on 20-May-2011 05:32

    The issue has changed to a certificate issue now.
    I get the error:
    javax.xml.soap.SOAPException: javax.xml.soap.SOAPException: Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
         at oracle.j2ee.ws.client.jaxws.DispatchImpl.invoke(DispatchImpl.java:784)
         at oracle.j2ee.ws.client.jaxws.OracleDispatchImpl.synchronousInvocationWithRetry(OracleDispatchImpl.java:234)
         at oracle.j2ee.ws.client.jaxws.OracleDispatchImpl.invoke(OracleDispatchImpl.java:105)
         at oracle.integration.platform.blocks.soap.AbstractWebServiceBindingComponent.dispatchRequest(AbstractWebServiceBindingComponent.java:464)
         ... 156 more
    Caused by: javax.xml.soap.SOAPException: javax.xml.soap.SOAPException: Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
         at oracle.j2ee.ws.saaj.client.p2p.HttpSOAPConnection.call2(HttpSOAPConnection.java:227)
         at oracle.j2ee.ws.common.transport.HttpTransport.transmit(HttpTransport.java:75)
         at oracle.j2ee.ws.common.async.MessageSender.call(MessageSender.java:64)
         at oracle.j2ee.ws.common.async.Transmitter.transmitSync(Transmitter.java:134)
         at oracle.j2ee.ws.common.async.Transmitter.transmit(Transmitter.java:90)
         at oracle.j2ee.ws.common.async.RequestorImpl.transmit(RequestorImpl.java:273)
         at oracle.j2ee.ws.common.async.RequestorImpl.invoke(RequestorImpl.java:94)
         at oracle.j2ee.ws.client.jaxws.DispatchImpl.invoke(DispatchImpl.java:741)
    I looked in the forum and this could be solved by adding the root CA to the trust keystore of the server. My OSB Managed Server and SOA Managed Server are on the same machine (https://sontprs201.myfirm.com:8888). This machine has a certificate from GoDaddy, so I have added the GoDaddy root and GoDaddy intermediate as trusted certificates to my certs.jks.
    In the WebLogic console I pointed both managed servers to my certs.jks with the correct password, and on the OSB managed server I set up SSL with the correct private key alias.
    When I call the OSB service from SoapUI I get the correct 3 certificates back (GD root, GD intermediate, sontprs201.myfirm.com), but when I call the OSB service from BPEL I still get the error: Unable to access the following endpoint(s): https://sontprs201.myfirm.com:8888/schedule/service/v1. Unable to find valid certification path.
    Does anybody know what I'm missing or how I can debug this?
    Thanks in advance!

  • Query regarding handling and monitoring of OSB Errors in a specific manner

    Hi,
    I'm using
    - OSB 10gR3 (10.3.1)
    Context
    To give some context, the setup I currently have is a proprietary frontend client making XML/HTTP requests to OSB,
    which in turn is getting info from downstream systems.
    I have no control over the behaviour/logic of the frontend client, apart from knowing that it has a specific
    request/response format.
    So now to the issue..
    The issue I'm facing is regarding the handling of errors. Based on my knowledge of OSB, the error handling of the transaction has been built along these lines:
    1. (Within the stage) If an error scenario occurs, an error is thrown
    2. (Within the error handler stage) The error is logged (using the 'Log' action, with severity 'Error')
    3. (Within the error handler stage) The response payload is built so that the error can be reported to the frontend client
    4. (Within the error handler stage) I do a 'Reply with Success'
    Now this does give me the expected behavior from the frontend client's perspective (... i.e. it receives the payload and displays whatever error that's occurred)
    However from OSB's perspective, because I have done a 'Reply with success', the OSB Service Health Page ... (screenshot below) does not
    increment the 'Errors' count for the respective proxy service ....
    [OSB Service Health Page|http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/operations/monitoring.html#wp1107685] (Looking at Figure 3-8)
    The only way I could achieve this was to set the error handler stage to 'Reply with Failure' (which returns HTTP 500).
    The issue, however, is that the proprietary frontend simply sees the incoming HTTP 500 code and doesn't read the returned XML payload (containing the error details),
    which defeats the whole idea of being able to maintain some sort of traceability for the failed transactions.
    So what I'm basically trying to find is, when an error occurs:
    - some 'call' to make during the error handler stage so that it registers as an error in the OSB Service Health Page, letting me clearly see the error count, etc. for each of the proxy services
    - after that, being able to still use 'Reply with Success', so that the payload is returned without an HTTP 500 code and the frontend can read the payload
    In essence, the idea is to register errors so that they can be monitored via the Service Health Dashboard while at the same time being able to return the 'error details' payload successfully.
    With my limited (but growing) knowledge of OSB .. I've tried quite a few ways to achieve this with no success...
    Would very much appreciate any help here ...
    Lastly, if there is some way of achieving what I'm trying to get to above through different means, I'm open
    to trying out alternate stuff.
    Regards,
    Himanshu

    Hi Atheek,
    Many thanks for the reply. That does appear to be a pretty neat way of doing it.
    One more clarification I'd like on this, however:
    Given that I have a set up like this below
    [External frontend client] <------> (1 Parent Service) <------> (2 Child Service)
    (1 Parent Service) .... has error handler .. and all it does is 'Reply with success'
    (2 Child Service) .... has error handler for particular stage
    so...
    2 Child Service - is doing the actual work (has error handler where custom error msg is being populated in $body)
    1 Parent Service - is the one that is getting called by the external client (has error handler .. and all it does is 'Reply with success')
    What I currently have is ..once the error occurs, I'm populating $body ... with the custom error message within the error handler
    Previously I had 'Reply with Success' in the 2 Child Service's error handler and the custom error message would get returned to the external client.
    Now, with the error handler only creating the custom error message, and 1 Parent Service doing the 'Reply with Success', it doesn't appear
    to return the $body with the custom message that I'd populated in 2 Child Service's error handler.
    Are the $body contents populated in 2 Child Service's error handler lost once the error propagates up to 1 Parent Service's error handler?
    If yes, is there a way of getting around it? I could see, for instance, that the $fault message context has a 'user populated' Details variable,
    but the documentation covering this is sparse.
    Regards,

  • Best practice of OSB logging Report handling or java code using publish

    Hi all,
    I want to do common error handling in OSB. I have done two implementations, as below, and just want to know which one is the best practice.
    1. Using a custom report handler: whenever we want to log, we use the report action of OSB, which calls a custom Java class that
    logs the data into the DB.
    2. Using a plain Java class: create a Java class and publish from the proxy, which will call this Java class and do the logging.
    Which is the best practice, and what are the pros and cons?
    Thanks
    Phani

    Hi Anuj,
    Thanks for the links, they have been helpful.
    I understand now that OSR is only meant to contain Proxy services. The sync facility is between OSR and OSB so that, in case you are not using OER, you can publish Proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a Proxy service back to OSB and why it ended up as a Business service. From the link you provided, it mentioned that this case is for multi-domain OSBs, where one OSB wants to use another OSB's service. It is clear now.
    Some more questions:
    1) At design time, no Endpoints are generated in OER for Proxy services. Then how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar

  • How to publish to different Business service in OSB

    Hi,
    I have an XML; based on the RecordType I need to send to two business services:
    - if RecordType is 00020, publish to the LOSS business service
    - if RecordType is 00030, publish to the GAIN business service.
    The XML is as below:
    <body>
    <CSSiteAndMeter>
    <Header>
    ...
    </Header>
    <Detail>
    <RecordType>00010</RecordType>
    <RecordTypeLiteral>METERID</RecordTypeLiteral>
    </Detail>
    <Detail>
    <RecordType>00020</RecordType>
    <RecordTypeLiteral>LOSS</RecordTypeLiteral>
    </Detail>
    <Detail>
    <RecordType>00010</RecordType>
    <RecordTypeLiteral>METERID</RecordTypeLiteral>
    </Detail>
    <Detail>
    <RecordType>00030</RecordType>
    <RecordTypeLiteral>GAIN</RecordTypeLiteral>
    </Detail>
    <Trailer>
    ...
    </Trailer>
    </CSSiteAndMeter>
    </body>
    How do I do this in OSB? There is an IF condition in OSB which I can use by checking RecordType='00020', but I also need the preceding RecordType=00010 detail while publishing, like below:
    <Detail>
    <RecordType>00010</RecordType>
    <RecordTypeLiteral>METERID</RecordTypeLiteral>
    </Detail>
    <Detail>
    <RecordType>00020</RecordType>
    <RecordTypeLiteral>LOSS</RecordTypeLiteral>
    </Detail>

    Hi,
    You can use Conditional Branching:
    http://docs.oracle.com/cd/E13159_01/osb/docs10gr3/userguide/modelingmessageflow.html#wp1061670
    A Split-Join would be used in case you need to split your request, call your business service serially/in parallel, and then gather responses from multiple callouts into a single response:
    http://docs.oracle.com/cd/E13159_01/osb/docs10gr3/userguide/splitjoin.html#wp1137258
    Abhinav
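    The pairing the question asks about (keep the preceding 00010/METERID record together with the 00020 or 00030 record that follows it) can be made concrete outside OSB. A Python sketch of that grouping logic over the structure shown above; the route() helper and its bucket names are illustrative, not an OSB API:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<CSSiteAndMeter>
  <Detail><RecordType>00010</RecordType><RecordTypeLiteral>METERID</RecordTypeLiteral></Detail>
  <Detail><RecordType>00020</RecordType><RecordTypeLiteral>LOSS</RecordTypeLiteral></Detail>
  <Detail><RecordType>00010</RecordType><RecordTypeLiteral>METERID</RecordTypeLiteral></Detail>
  <Detail><RecordType>00030</RecordType><RecordTypeLiteral>GAIN</RecordTypeLiteral></Detail>
</CSSiteAndMeter>
""")

def route(doc):
    """Pair each 00010 (METERID) detail with the 00020/00030 record that
    follows it, and bucket the pair for the LOSS or GAIN business service."""
    routes = {"LOSS": [], "GAIN": []}
    pending = None  # last seen METERID detail
    for detail in doc.findall("Detail"):
        rtype = detail.findtext("RecordType")
        if rtype == "00010":
            pending = detail
        elif rtype == "00020":
            routes["LOSS"].append((pending, detail))
        elif rtype == "00030":
            routes["GAIN"].append((pending, detail))
    return routes

routes = route(doc)
print(len(routes["LOSS"]), len(routes["GAIN"]))  # 1 1
```

    In OSB the equivalent would be an XQuery that selects each 00020/00030 Detail together with its immediately preceding 00010 sibling before publishing to the respective business service.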

  • OSB 10gR3 memory usage

    I have been experiencing "out of memory" type errors on several of our OSB development and certification boxes. We haven't even begun to drive real load through the OSB which is causing me more than a little concern. First a little background.
    We have about a dozen message flows. A few take REST requests (inbound) and turn them into SOAP requests (outbound) to talk to legacy apps. The majority are REST on both sides. All are what I would describe as simple: straight copies of data from one side to the other. No orchestration or dynamic routing. We have Java callouts that check the contents of a passed-in string for a valid checksum value and return true or false; false causes the message flow to fail the request without calling a business service. No real data allocation is going on in the callouts. We are configured to start with a heap size of 512meg and have a max size of 3gig. Cert is using the JRockit JVM. Dev is using Sun's JDK. Only the OSB runs in this instance of WebLogic. Called services are standalone POJOs running on remote hosts.
    Looking at one managed server in cert shows that it has handled about 500,000 requests, received 240meg and sent about 1gig of data. This copy's heap size is still sitting at the initial 512meg size after running 5 days. Last week one of these cert copies grew from 512meg to 3gig and ran out of memory in 54 seconds, according to the GC stats. So I'm assuming a single very large request or response took it out. We have 4 copies in the cluster and requests are round-robin'd across the managed servers, so multiple requests to a single copy seem unlikely. Most of the logging happens when a request is done, so nothing in the logs shows a huge transaction if there is one. Sampling the last 50,000 requests on this copy shows that message response sizes range from 0 bytes to 9meg. Only about 30 requests have exceeded 1 meg and only 100 have exceeded 100k. The average response size is 3k.
    So far we have done the following:
    1) made sure all business proxies have a timeout of 60 seconds or less. (We had stuck threads as the various called-service developers released buggy new code)
    2) all message flows that return large data, (more than 1k) are set with Content Streaming Enabled (Memory Buffer, No Compression)
    Would paging attachments to disk or using compression on the Content Streaming make much difference? I assumed streaming was using a small memory buffer and passing the data out as fast as it got it. Perhaps that is not really the case.
    Any ideas on things I should be investigating?
    I have thought about turning on Terse Proxy Tracing for all calls to see if I could get the logs to tell me what was going on when it died. Assumption being it dies before the access log has a chance to report a massive response.
    Edited by: user10451772 on Jun 16, 2010 7:33 PM

    There could be many reasons behind an out-of-memory error.
    If you see an OutOfMemoryError caused by the number of threads (unable to create new native threads), then tune the -Xss option: a smaller thread stack size allows more threads.
    If the OutOfMemoryError is thrown because the PermGen space is exhausted, then set the -XX:MaxPermSize option to a higher value.
    The -server option helps with faster thread context switching.
    The -Xms and -Xmx options set the starting heap size and maximum heap size respectively. Make these two values equal to get the best performance in server mode and minimize GC cycles.
    Regards,
    Anuj

  • Custom logging in OSB

    I want to create a common proxy service in OSB which can be called by other services for logging the body, fault, etc. I intend to log to three different channels: file, queue & database. The approach is:
    1. While executing any OSB process, accept a "logDestination" input from the user.
    2. Within the OSB process, just do a service callout to the common OSB logging proxy service.
    3. The logging OSB proxy service will read the "logDestination" value and, based on that, route to a
    file proxy service, queue proxy service, etc. to write logs to the respective channels.
    Will this work? I tried doing this but I am unable to create and read the "logDestination" value in my common OSB logging proxy service (it's in a different project).
    Need help in doing this. Thanks!

    >
    Can you please suggest a better approach ?
    >
    Well, I don't know your requirements, but concerning logging I'm quite sure that the client can't (and shouldn't) decide where the service implementation logs. In other words, you should decide whether to log to a file, queue or DB in the proxy service flow, without any explicit client input.
    >
    The problem I am facing is that I am unable to propagate the value of "logDestination" to my OSB logging proxy service. In addition to this variable value I also need to propagate the $body of the OSB process to my common logging service.
    I have a service callout (within the stage error handler) from the OSB process to my common service. How can I propagate all the above-mentioned details to the common service at the same time? I am pretty new to OSB, so please excuse me if that's a basic question.
    >
    It seems you are not asking this question for the first time. :-)
    How to call a OSB proxy service from a different OSB process?
    I hope my answer in that thread should help you.
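    One way to carry both values in a single callout (an assumption for illustration, not the approach from the linked thread) is to wrap the original $body together with the logDestination in one message. A Python sketch of such a wrapper; the LogRequest format is hypothetical:

```python
import xml.etree.ElementTree as ET

def wrap_for_logging(log_destination, body_xml):
    """Wrap the original payload plus the routing hint in one envelope
    (hypothetical LogRequest format for the common logging proxy)."""
    req = ET.Element("LogRequest")
    ET.SubElement(req, "logDestination").text = log_destination
    payload = ET.SubElement(req, "payload")
    payload.append(ET.fromstring(body_xml))
    return req

req = wrap_for_logging("FILE", "<order><id>7</id></order>")
print(req.findtext("logDestination"))     # FILE
print(req.find("payload/order/id").text)  # 7
```

    The logging proxy then reads its own input document for both the destination and the payload, instead of depending on variables from the caller's context.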

  • OSB 11g - Apache Webserver - WSDL location redirecting to OSB server

    Hi,
    In our case we have an OSB server running on a dev machine, and an Apache server is installed which receives requests from the vendor and forwards them to the OSB. I have a Proxy Service on the OSB which is WSDL-based and is accessed by the vendor.
    The DMZ only allows requests from the vendor to the web server and not to the OSB.
    When the client sends the WSDL request http://*<webserver_ip>*/endpoint-uri?wsdl to the webserver, the WSDL is sent back to the client. But in that WSDL the service endpoint location points to the OSB server (http://*<osb_ip>*/endpoint-uri?wsdl). Upon receiving this, the client tries to send the request to the OSB server directly, which is blocked by the DMZ. The client should have got http://*<webserver_ip>*/endpoint-uri?wsdl in the response so that it could send the request to the web server and not the OSB server. Calls to the OSB server are not allowed by the DMZ. Because of this, their call to retrieve the data is not successful.
    I tried hard-coding the URL http://<webserver_ip>/endpoint-uri?wsdl in the WSDL, but the problem is not resolved: the endpoint address location in the WSDL still contains the IP of the OSB and not the IP of the web server.
    What can I do so that the WSDL sent to the vendor has the service endpoint location http://<webserver_ip>/endpoint-uri?wsdl and not http://*<osb_ip>*/endpoint-uri?wsdl?
    Thanks,
    Sanjay

    > I tried hard coding the URL http://<webserver_ip>/endpoint-uri?wsdl in the WSDL and then use it but still the problem is not resolved. Still in the endpoint address location in the WSDL the IP of the OSB is sent and not the IP of the web server.
    The URL in the port section of the WSDL resource is never used in the effective WSDL generated by OSB. Instead, you can use the WebLogic Frontend Host/Port properties to solve this.
    The following rules describe how WebLogic computes the hostname part of the URL in the WSDLs it generates for its deployed web services:
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/webserv/setenv.html#wp220945
    1. If the Web Service is deployed to a cluster, and the cluster Frontend Host, Frontend HTTP Port, and Frontend HTTPS Port are set, then WebLogic Server uses these values in the server address of the dynamic WSDL.
    2. If the preceding cluster values are not set, but the Frontend Host, Frontend HTTP Port, and Frontend HTTPS Port values are set for the individual server to which the Web Service is deployed, then WebLogic Server uses these values in the server address.
    3. If these values are not set for the cluster or individual server, then WebLogic Server uses the server address of the WSDL request in the dynamic WSDL.
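    The three-rule precedence above is just a fallback chain, which can be written down as a small function. A sketch only (the parameter names are illustrative, not WebLogic API):

```python
def wsdl_host(cluster_frontend, server_frontend, request_host):
    """Mirror the documented precedence WebLogic uses for the address in a
    dynamically generated WSDL (parameter names are illustrative)."""
    # 1. Cluster Frontend Host/Port, if set.
    if cluster_frontend:
        return cluster_frontend
    # 2. Otherwise the individual server's Frontend Host/Port, if set.
    if server_frontend:
        return server_frontend
    # 3. Otherwise the host/port of the WSDL request itself.
    return request_host

# With no frontend configured, the WSDL echoes the request's own address,
# which is exactly the unwanted behaviour described in the question.
print(wsdl_host(None, None, "sontprs201.myfirm.com:8888"))
print(wsdl_host(None, "webserver.myfirm.com:80", "sontprs201.myfirm.com:8888"))
```

    Setting the Frontend Host to the Apache server's address is therefore what makes the generated WSDL point clients at the web server instead of the OSB machine.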

  • OSB-Faulthandling Framework

    Hi,
    I know that there is a fault handling framework in BPEL with many fault policies. By configuring it (for example, the fault-bindings.xml file) we can handle many faults in BPEL, and we can reuse that framework in all BPEL processes.
    But is there any such fault handling framework available for OSB which can be implemented and reused?
    Or do we need to define fault handling for each and every process?

    > But how to create a proxy which accepts all errors?
    When I say a proxy which accepts all errors, I mean it could be a simple XML/SOAP/Messaging Type proxy which accepts the error information in the format you want. Then in the message flow of that proxy you may apply the error handling logic you want.
    To give example,
    1. You may create a WSDL-based SOAP proxy which accepts messages for a "HandleError" operation. Definitely it would act like a web service accepting HandleError requests, and the format of HandleErrorRequest can be customized by you. In the message flow of the HandleError operation, just extract the information from the incoming SOAP request (error code, error message, etc.) and either store it in a DB or in a file so you can take action later.
    2. Alternatively, from each and every error handler, publish an XML document in a pre-defined format to a JMS queue. The common error handler proxy will be a Messaging Type proxy which accepts XML as input and polls the JMS queue to pick up the messages. Then, in the message flow of this JMS proxy, do the required error handling processing.
    Above two are just examples of what you may do. There may be other approaches as well.
    A few things which you need to decide from a business/functional point of view are:
    1. The format of the error XML/document which needs to be passed to the common error handler proxy
    2. Depending upon the input, the type of proxy
    3. Whether persistence is required and, if yes, whether file or DB
    4. Whether resubmission is required
    It totally depends on your business requirement and your solution design how you would like to go about this.
    Regards,
    Anuj
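    The first design decision (a pre-defined error XML format) can be made concrete with a small sketch. The ErrorReport document and its field names below are assumptions for illustration, not an OSB standard:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_error_xml(service, error_code, error_message):
    """Build a hypothetical pre-defined error document, as would be
    published to the JMS queue consumed by the common error-handler proxy."""
    err = ET.Element("ErrorReport")
    ET.SubElement(err, "service").text = service
    ET.SubElement(err, "errorCode").text = error_code
    ET.SubElement(err, "errorMessage").text = error_message
    # Timestamp lets the handler order and correlate failures later.
    ET.SubElement(err, "timestamp").text = datetime.now(timezone.utc).isoformat()
    return ET.tostring(err, encoding="unicode")

doc = build_error_xml("OrderProxy", "BEA-380000", "backend unavailable")
print(doc.startswith("<ErrorReport>"))  # True
```

    Each OSB error handler would fill this document (e.g. from $fault) and publish it; the common Messaging Type proxy only ever has to understand this one format.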

  • Incorrect OSB JCA transaction spanning

    Hi!
    I have set up a service in OSB which polls a database using a DbAdapter, does some processing of the data read, and then inserts data into a database using a different DbAdapter.
    The service is composed as follows. First I built a proxy service from a JCA adapter which polls an XML registry from the database. This proxy makes a service callout to another proxy which performs some transformations and finally routes to a JCA DbAdapter business service which writes to the database.
    I need the full process to be transactional, so if any error occurs at any step, the record should not be marked as read in the polled database and no data should be written to the target database. I have configured the polling proxy as transactional, and the invoked proxy which performs the writing as transactional too. I've tested that when a write error occurs, the polled data is not marked as read, which is the expected behaviour. But if I raise an error in the polling proxy after the service callout has completed, then the record is not marked as read but data is still written to the database by the called proxy.
    Could anyone tell me how to configure the transaction behaviour in the proposed scenario?
    Thank you in advance.

    After some research I saw the following section in http://download.oracle.com/docs/html/E15867_01/proxy_services.htm#i1316487:
    "If the message flow run time starts a transaction, quality of service is exactly-once. However, if Service Callouts or Publish actions have the outbound quality of service parameter set to best-effort (the default), Oracle Service Bus executes those actions outside of the transaction context. To have Oracle Service Bus execute those actions in the same request transaction context, set quality of service on those actions to exactly-once."
    It seems default qualify of service for Service Callouts is best effort so services called are executed in a different transaction than the caller. That's pretty clear then.
    But after replacing the qualify of service the process stopped working when activating the polling.
    When I started testing the process I noticed that after polling I got more registers in target table than the total number of polled registers in source table, indicating that some of the source registers where been processed multiple times. As polled registers count must be equal to written registers count, I assumed the difference could be caused because an error occurred after commiting transaction in proxy 2 which performs the write operation. That's why I need full process to be executed in same transaction context.
    Thanks.
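    For anyone hitting the same issue: the exactly-once setting described in the docs quote above is applied in the message flow either through a Routing Options action on the Service Callout/Publish, or by replacing the qualityOfService element of the $outbound context variable. A sketch of the relevant fragment follows (namespace as in the OSB 10gR3 message context schema; verify the exact enumeration value against your version's schema):

    ```xml
    <!-- Replace action on $outbound, XPath: ./ctx:transport/ctx:qualityOfService -->
    <ctx:qualityOfService xmlns:ctx="http://www.bea.com/wli/sb/context">exactly-once</ctx:qualityOfService>
    ```

    With this in place the Service Callout participates in the caller's transaction, so a later raised error rolls back the write as well.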

  • High Availability File Adapter in OSB

    If you use the JCA FileAdapter in OSB, it is necessary to use the eis/HAFileAdapter version to ensure that only one instance of the adapter picks up a given file; you must then configure a coordinator by setting the
    controlDir, inboundDataSource, outboundDataSource, outboundDataSourceLocal, outboundLockTypeForWrite
    parameters.
    controlDir refers to the filesystem; the others refer to the DB.
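    For illustration, those parameters are set on the eis/HAFileAdapter connection instance in the FileAdapter's weblogic-ra.xml (or overridden via a deployment plan). The data source JNDI names and control directory below are placeholders, not values from this thread:

    ```xml
    <connection-instance>
      <jndi-name>eis/HAFileAdapter</jndi-name>
      <connection-properties>
        <properties>
          <property><name>controlDir</name><value>/shared/fileadapter/control</value></property>
          <property><name>inboundDataSource</name><value>jdbc/SOADataSource</value></property>
          <property><name>outboundDataSource</name><value>jdbc/SOADataSource</value></property>
          <property><name>outboundDataSourceLocal</name><value>jdbc/SOALocalTxDataSource</value></property>
          <property><name>outboundLockTypeForWrite</name><value>oracle</value></property>
        </properties>
      </connection-properties>
    </connection-instance>
    ```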
    This document http://www.oracle.com/technetwork/database/features/availability/maa-soa-assesment-194432.pdf says
    "Database-based mutex and locks are used to coordinate these operations in a File Adapter clustered topology. Other coordinators are available but Oracle recommends using the Oracle Database."
    Using an Oracle Database as coordinator means using RAC; otherwise there is no HA.
    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    If a DB is required, I am considering using the good old "native" OSB File Poller, since it doesn't require a complicated setup to run in a cluster... but I don't want to use MFL; I would rather use the XSD-based Native Format. Which brings me to the second question:
    Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
    Thank you so much for your help - it will be rewarded with "helpful/answered" points.
    pierre

    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    I have not tried it, but I think there are several options available, including writing your own custom mutex. Please find the details in the Oracle File and FTP Adapters High Availability Configuration section at this link:
    http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/ha_soa.htm#sthref434
    Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
    When you create a JCA Adapter based Proxy Service to read the files, the nXSD translation happens before the proxy service is invoked. The JCA engine first reads the data, translates it using nXSD, and then invokes the proxy with the translated content. (You can verify this easily by creating a JCA-based file read service and opening its test console in sbconsole; it will show you an XML request instead of the native content.)
    So you cannot read the text content using the File Transport of OSB and then call nXSD directly, or call an nXSD-based Proxy Service with it.
    HOWEVER, you certainly can use File Transport and nXSD in combination if that's what you want:
    1. Create a Synchronous Read File Adapter with an nXSD created for it
    2. Create a Business Service for that Synchronous Read JCA in OSB
    3. Create a File Transport based proxy service in OSB which will read the content of the file and then call the Business Service to read the content again (which will include the translation, using the nXSD defined in step 1, to convert the content to XML).
    So basically you will need to read the file twice! Once using the File Transport proxy service (which takes care of polling in a cluster) and then using the Sync Read JCA based business service (which does the nXSD translation). To reduce the impact of reading the file twice you can use trigger files: the File proxy reads the trigger file and then invokes the JCA business service to read the actual file for that trigger.
    Another alternative is to create a class similar to the one presented here (http://blogs.oracle.com/adapters/entry/command_line_tool_for_testing), but instead of writing a file it will just return the translated content. Call this class with the native content from the File Transport proxy using a Java Callout to do the translation.
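    To make the Java Callout idea concrete, here is a toy stand-in for the translation step. It is NOT Oracle's nXSD translator from the blog post above - just an illustration of the kind of String-in/XML-out public static method an OSB Java Callout can invoke (Java Callouts require public static methods). The element names are hypothetical; a real nXSD drives them from the schema:

    ```java
    // Toy stand-in for an nXSD-style translation: turns one line of
    // comma-delimited native content into an XML record.
    public class NativeToXml {

        // fieldNames play the role the nXSD schema would play.
        public static String translate(String csvLine, String[] fieldNames) {
            String[] values = csvLine.split(",", -1); // -1 keeps empty trailing fields
            StringBuilder xml = new StringBuilder("<record>");
            for (int i = 0; i < fieldNames.length && i < values.length; i++) {
                xml.append('<').append(fieldNames[i]).append('>')
                   .append(values[i].trim())
                   .append("</").append(fieldNames[i]).append('>');
            }
            return xml.append("</record>").toString();
        }

        public static void main(String[] args) {
            System.out.println(translate("john,smith,42",
                    new String[]{"first", "last", "age"}));
        }
    }
    ```

    Running main prints `<record><first>john</first><last>smith</last><age>42</age></record>`. The real class would delegate to the Oracle translator runtime instead of hand-building the XML.
    
    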
