Best practice for calling an AM method with parameters
What is the best way to call an AM method with parameters from a backing bean?
I usually use the BindingContainer to get the operation binding and then call its execute function. But when the method has parameters, how do I do it?
Thanks
Hi,
same:
operationBinding.getParamsMap().put("argument1Name", argument1Value);
operationBinding.getParamsMap().put("argument2Name", argument2Value);
operationBinding.execute();
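For completeness, a hedged sketch of the same pattern, including the error check that is easy to forget. The real classes are oracle.binding.BindingContainer and OperationBinding; here a minimal mock (MockOperationBinding is hypothetical, not the ADF API) stands in so the call sequence can be shown self-contained:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OperationBindingDemo {
    // Minimal stand-in for oracle.binding.OperationBinding (a mock, not the real API)
    static class MockOperationBinding {
        private final Map<String, Object> params = new HashMap<>();
        private final List<String> errors = new ArrayList<>();

        Map<String, Object> getParamsMap() { return params; }
        List<String> getErrors() { return errors; }

        Object execute() {
            // A real binding would invoke the AM method; here we just echo the params.
            return "called with " + params;
        }
    }

    public static void main(String[] args) {
        MockOperationBinding operationBinding = new MockOperationBinding();
        // Same call sequence as with the real ADF binding:
        operationBinding.getParamsMap().put("argument1Name", "value1");
        operationBinding.getParamsMap().put("argument2Name", 42);
        Object result = operationBinding.execute();
        if (!operationBinding.getErrors().isEmpty()) {
            System.out.println("errors: " + operationBinding.getErrors());
            return;
        }
        System.out.println(result);
    }
}
```

With the real binding the shape is identical; only the imports and the source of the binding container differ.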
Frank
Similar Messages
-
Best practice for calling application module methods and plsql code
In my application I am experiencing problems with connection pooling, I seem to be using a lot of connections in my application when only a few users are using the system. As part of our application we need to call database procedures for business logic.
Our backing beans, call methods on the application module which in turn call a database procedure. For instance in the backing bean we have code like this to call the application module method.
// Calling Module to generate new examination/test.
CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
String testId = appMod.createTest( userId, examId, centerId).toString();
AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
// Close the call
System.out.println("Calling releaseRootApplicationModule remove");
Configuration.releaseRootApplicationModule(appMod, true);
System.out.println("Completed releaseRootApplicationModule remove");
return returnResult;
In the application module method we have the following code.
System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
CallableStatement cs = null;
try {
    cs = getDBTransaction().createCallableStatement(
        "begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
    cs.registerOutParameter(1, Types.NUMERIC);
    cs.setString(2, p_userId);
    cs.setString(3, p_examId);
    cs.setString(4, p_centerId);
    cs.executeUpdate();
    returnResult = cs.getInt(1);
    System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
} catch (SQLException se) {
    throw new JboException(se);
} finally {
    if (cs != null) {
        try {
            cs.close();
        } catch (SQLException s) {
            throw new JboException(s);
        }
    }
}
I have read in one of Steve Muench's presentations (Oracle Fusion Applications team's best practices) that calling the createRootApplicationModule method is a bad idea, and that you should call the method via the binding interface instead.
I am assuming calling createRootApplicationModule uses far more resources and database connections than calling the method through the binding interface, such as:
BindingContainer bindings = getBindings();
OperationBinding ob = bindings.getOperationBinding("customMethod");
Object result = ob.execute();
Is this the case? Also, is using getDBTransaction().createCallableStatement the best way of calling database procedures? Would it be better to expose the PL/SQL packages as web services and then call them from the application module? Is this more efficient?
Regards
Orlando
Thanks Shay, this is now working.
I successfully got the binding to the application method in the pagedef.
I used the following code in my backing bean.
package view.backing;
import oracle.adf.model.BindingContext;
import oracle.adf.model.binding.DCBindingContainer;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;
public class Testdatabase {
    private DCBindingContainer bindingContainer;
    public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }
    public DCBindingContainer getBindingContainer() { return bindingContainer; }

    // Calling module to validate the user and return user role details.
    public String validateUser(String userId, String examId) {
        BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
        OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
        operationBinding.getParamsMap().put("p_userId", userId);
        operationBinding.getParamsMap().put("p_testId", examId);
        Object result = operationBinding.execute();
        String userRole = result.toString();
        System.out.println("Result is " + userRole);
        return userRole;
    }
}
-
What is the best practice for using the Calendar control with the Dispatcher?
It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security. However, the Calendar relies on this endpoint to build the events for the calendar. On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works. We've noticed the same behavior on the Geometrixx site.
What is the best practice for using the Calendar control with Dispatcher?
Thanks in advance.
Scott
Not sure what exactly you are asking, but Muse handles the different orientations nicely without having to do anything.
Example: http://www.cariboowoodshop.com/wood-shop.html -
How to call an AM method with parameters from a managed bean?
Hi Everyone,
I have a situation where I need to call an AM method (setDefaultSubInv) from a managed bean, inside a value change listener method. Here is what I am doing: I have added the AM method to the page bindings, then in the bean I call this
Class[] paramTypes = { };
Object[] params = { } ;
invokeEL("#{bindings.setDefaultSubInv.execute}", paramTypes, params);
This works, and I am able to call the method if there are no parameters. Say I have to pass a parameter to the AM method setDefaultSubInv(String a); I tried calling this from the bean, but it throws an error:
String aVal = "test";
Class[] paramTypes = {String.class };
Object[] params = {aVal } ;
invokeEL("#{bindings.setDefaultSubInv.execute}", paramTypes, params);
I am not sure this is the right way to call the method with parameters. Can anyone tell me how to call an AM method with parameters from a managed bean?
Thanks,
San.
Simply do the following:
1. Expose your method in the client interface.
2. Add it to the page definition.
3. Customize your code, as in the example below, to achieve your goal.
BindingContainer bindings = getBindings();
OperationBinding operationBinding = bindings.getOperationBinding("GetUserRoles");
operationBinding.getParamsMap().put("username", "oracle");
operationBinding.getParamsMap().put("role", "F1211");
operationBinding.getParamsMap().put("Connection", "JDBC");
Object result = operationBinding.execute();
if (!operationBinding.getErrors().isEmpty()) {
    // handle errors here
    return null;
}
return result;
I hope it helps you.
Thanks. -
Best practices for 2 x DNS servers with 2 x sites
I am curious if someone can help me with best practices for my DNS servers. Let me give my network layout first.
I have one site with two Windows 2012 servers (one GUI - 10.0.0.7, the other Core - 10.0.0.8); the second site, connected via VPN, has two Windows 2012 R2 servers (one GUI - 10.2.0.7, the other Core - 10.2.0.8). All four servers are promoted to DCs and have DNS services running.
Here are my questions:
Site #1
DC-01 - NIC IP address for DNS server #1 set to 10.0.0.8, DNS server #2 set to 127.0.0.1 (should I add my 2nd site's DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
DC-02 - NIC IP address for DNS server #1 set to 10.0.0.7, DNS server #2 set to 127.0.0.1 (should I add my 2nd site's DNS servers under Advanced as well? 10.2.0.7 & 10.2.0.8)
Site #2
DC-01 - NIC IP address for DNS server #1 set to 10.2.0.8, DNS server #2 set to 127.0.0.1 (should I add my 1st site's DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
DC-02 - NIC IP address for DNS server #1 set to 10.2.0.7, DNS server #2 set to 127.0.0.1 (should I add my 1st site's DNS servers under Advanced as well? 10.0.0.7 & 10.0.0.8)
Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local
> properties > Name Servers should I have all of my other DNS servers, or should I have my WAN DNS servers? In a single server scenario I always put my WAN DNS server but a bit unsure in this scenario.
Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > General > Type should all servers be set to
Active Directory - Integrated > Primary Zone? Should any of these be set to
Secondary Zone?
Under the DNS management > Forward Lookup Zones > _msdcs.mydomain.local > properties > Zone Transfers should I allow zone transfers?
Would the following questions be identical to the Forward Lookup Zone mydomain.local as well?
Site1
DC1: Primary 10.0.0.7. Secondary 10.0.0.8. Tertiary 127.0.0.1
DC2: Primary 10.0.0.8. Secondary 10.0.0.7. Tertiary 127.0.0.1
Site2
DC1: Primary 10.2.0.7. Secondary 10.2.0.8. Tertiary 127.0.0.1
DC2: Primary 10.2.0.8. Secondary 10.2.0.7. Tertiary 127.0.0.1
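If you prefer to script the recommended client settings rather than set them in the NIC GUI, a hypothetical fragment (assuming Windows Server 2012+ and an interface named "Ethernet"; adjust the addresses per DC, per the list above) might look like:

```powershell
# Site1 / DC1: partner DC listed before itself, loopback last,
# matching the Primary/Secondary/Tertiary order recommended above
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
    -ServerAddresses 10.0.0.7, 10.0.0.8, 127.0.0.1
```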
The DC's should automatically register in msdcs. Do not register external DNS servers in msdcs or it will lead to issues. Yes, I recommend all zones to be set to AD-integrated. No need to allow zone transfers as AD replication will take care
of this for you. Same for mydomain.local.
Hope this helps. -
Best practice for calling stored procedures as target
The scenario is this:
1) Source is from a file or oracle table
2) Target will always be oracle pl/sql stored procedures which do the insert or update (APIs).
3) Each failure from the stored procedure must log an error so the user can re-submit the corrected file for those error records
There is no option to create an E$ table, since there is no control option for the flow around procedures.
Is there a best practice around moving data into Oracle via procedures? In Oracle EBS, many of the interfaces are pure stored procs and not batch interface tables. I am concerned that I must build dozens of custom error tables around these apis. Then it feels like it would be easier to just write pl/sql batch jobs and schedule with concurrent manager in EBS (skip ODI completely). In that case, one could write to the concurrent manager log and the user could view the errors and correct.
I can get a simple procedure to work in ODI where the source is the SQL, and the target is the pl/sql call to the stored proc in the database. It loops through every row in the sql source and calls the pl/sql code.
But I cannot see how to flag which rows have failed, or which table would log the errors to begin with.
Thank you,
Erik
Hi Erik,
Please, take a look in these posts:
http://odiexperts.com/?p=666
http://odiexperts.com/?p=742
They could help you in a way to solve your problem.
I already used it to call Oracle EBS API's and worked pretty well.
I believe that an IKM could be built to automate all the work, but I never stopped to try...
Does it help you?
Cezar Santos
http://odiexperts.com -
Hi,
In a web application, if I need to call a CFC method from a different CFC, what would be considered as the best way of doing it?
For example, let's say I have two components: Customer and Product. From a method functionA in Customer, I would like to call functionB in Product. I can do one of the following, but which way is best practice and why?
1. Create a Product object in functionA, and use it to call functionB
<cfcomponent name="Customer">
<cffunction name="functionA">
<cfset productObj = createObject('component', 'Product')>
<cfset productObj.functionB()>
</cffunction>
</cfcomponent>
2. Pass a Product object when we initialize a Customer object, and use that to call functionB
<cfcomponent name="Customer">
<cffunction name="init">
<cfargument name="productObj">
<cfset variables.productObj = arguments.productObj>
</cffunction>
<cffunction name="functionA">
<cfset variables.productObj.functionB()>
</cffunction>
</cfcomponent>
3. Assume that Customer object has access to the Product object in the application scope
<cfcomponent name="Customer">
<cffunction name="functionA">
<cfset application.productObj.functionB()>
</cffunction>
</cfcomponent>
Thank you very much.
The first two are fine. If the CFC being called is always gonna be the exact same one, then there's no prob directly referencing it in the calling CFC. If the CFC could vary, then pass it in.
If you're only using the CFC transiently, then you could use <cfinvoke> as well in this case.
Directly accessing an application-scoped CFC within a method is poor practice.
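The same trade-off can be sketched in Java terms (Customer/Product here are hypothetical stand-ins, not part of any real API): option 1 creates the collaborator inline, while option 2 injects it, which is the more testable choice when the collaborator may vary:

```java
public class DependencyStylesDemo {
    static class Product {
        String functionB() { return "B"; }
    }

    // Option 1: create the collaborator inline; fine when it never varies.
    static class CustomerInline {
        String functionA() { return new Product().functionB(); }
    }

    // Option 2: inject the collaborator; preferable when it may vary or needs mocking.
    static class CustomerInjected {
        private final Product product;
        CustomerInjected(Product product) { this.product = product; }
        String functionA() { return product.functionB(); }
    }

    public static void main(String[] args) {
        System.out.println(new CustomerInline().functionA());
        System.out.println(new CustomerInjected(new Product()).functionA());
    }
}
```

Option 3 (reaching into a shared global scope from inside a method) has no sketch here because, as the answer notes, it is the one to avoid.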
Adam -
Best Practices for Integrating UC-5x0's with SBS 2003/8?
Almost all of Cisco's SBCS market is the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. It will be critical, in order for Cisco to be considered as a purchase option, that the UC-5x0 integrate well into these networks.
To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs and no best practices or other documentation. If I am wrong, please correct me.
I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
Some of the challenges include:
1. Where should the UC-540 be placed: As the gateway for QOS or behind a validated UC-5x0 router/security appliance combination
2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
3. Which device should handle DNS?
My documentation (and I recommend that any Cisco lab/best practice guidance include it as well) will assume the following real-world scenario, the same that applies to a majority of my SMB clients:
1. A UC-540 device utilizing SIP for the cost savings
2. High Speed Internet with 5 static routable IP addresses
3. An existing Microsoft Small Business Server 2003/8
4. An additional Line of Business Application or Terminal Server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
5. An employee who teleworks from various places that provide a seat and a network jack, which is not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones because we will not have administrative access to any third party's VPN/firewall.
Your thoughts appreciated.
Progress Report:
The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
To review, this configuration assumes the following;
1. The UC540 has a public IP address of 4.2.2.2
2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
First, backup your current configuration via the CCA,
Next, telnet into the UC540, log in, and paste the following to 1:1 NAT the two additional public IP addresses:
ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
Next, you will need to amend your UC540's default ACL.
First, copy what you have existing as I have done below (in bold), and paste them into a notepad.
Then, I'm told the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server (mine in bold), as follows:
int fas 0/0
no ip access-group 104 in
no access-list 104
access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
access-list 104 remark SDM_ACL Category=1
access-list 104 permit tcp any host 4.2.2.3 eq 25 log
access-list 104 permit tcp any host 4.2.2.3 eq 80 log
access-list 104 permit tcp any host 4.2.2.3 eq 443 log
access-list 104 permit tcp any host 4.2.2.3 eq 987 log
access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
access-list 104 permit tcp any host 4.2.2.4 eq 80 log
access-list 104 permit tcp any host 4.2.2.4 eq 443 log
access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
access-list 104 permit udp host 116.170.98.142 eq 5060 any
access-list 104 permit udp host 116.170.98.143 any eq 5060
access-list 104 deny ip 10.1.10.0 0.0.0.3 any
access-list 104 deny ip 10.1.1.0 0.0.0.255 any
access-list 104 deny ip 192.168.10.0 0.0.0.255 any
access-list 104 permit udp host 116.170.98.142 eq domain any
access-list 104 permit udp host 116.170.98.143 eq domain any
access-list 104 permit icmp any host 4.2.2.2 echo-reply
access-list 104 permit icmp any host 4.2.2.2 time-exceeded
access-list 104 permit icmp any host 4.2.2.2 unreachable
access-list 104 permit udp host 192.168.10.1 eq 5060 any
access-list 104 permit udp host 192.168.10.1 any eq 5060
access-list 104 permit udp any any range 16384 32767
access-list 104 deny ip 10.0.0.0 0.255.255.255 any
access-list 104 deny ip 172.16.0.0 0.15.255.255 any
access-list 104 deny ip 192.168.0.0 0.0.255.255 any
access-list 104 deny ip 127.0.0.0 0.255.255.255 any
access-list 104 deny ip host 255.255.255.255 any
access-list 104 deny ip host 0.0.0.0 any
access-list 104 deny ip any any log
int fas 0/0
ip access-group 104 in
Lastly, save to memory
wr mem
One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you were getting Error 721 and/or Error 800...you will need to use the following commands to add to ACL 104;
(config)#ip access-list extended 104
(config-ext-nacl)#7 permit gre any any
I'm hoping there may be a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info when and if I discover one.
Thanks to Vijay in Cisco TAC for the guidance. -
Best practice for oracle 10.2 RAC with ASM
Did any one tried/installed Oracle 10.2 RAC with ASM and CRS ?
What is the best practice?
1. separate home for CRS, ASM and Oracle Database?
2. separate home for CRS and same home for ASM and Oracle Darabase?
We set up the test environment with separate CRS, ASM and Oracle database homes, but we have tons of issues with the listener, spfile and tnsnames.ora files. So, seeking advice from the gurus who implemented/tested the same?
I am getting ready to install the 10gR2 database software (10gR2 Clusterware was just installed) and I want to have a home for ASM and another for the database, as you suggest. I have been told that 10gR2 was to have a smaller set of binaries that can be used for the ASM home ... but I am not sure how I go about installing it. The first look at the installer does not seem to make it obvious... Is it a custom build option?
-
Best practices for setting up RDS pool, with regards to profiles /appdata
All,
I'm working on a network with four physical sites and currently using a single pool of 15 RDS servers with one broker. We're having a lot of issues with the current deployment, and are rethinking our strategy. I've read a lot of conflicting information on how
to best deploy such a service, so I'd love some input.
Features and concerns:
Users connect to the pool from intranet only.
There are four sites, each with a somewhat different local infrastructure. Many users are connecting to the RDS pool via thin clients, although some locations have workstations in place.
Total user count that needs to be supported is ~400, but it is not evenly distributed - some sites have more than others.
Some of the users travel from one site to another, so that would need to be accounted for with any plans that involve carving up the existing pool into smaller groups.
We are looking for a load-balanced solution - using a different pool for each site would be acceptable as long as it takes #4 and #7,8 into account.
User profile data needs to be consistent throughout: My Docs, Outlook, IE favorites, etc.
Things such as cached IE passwords (for sharepoint), Outlook settings and other user customization needs to be carried over as well.
As such, something needs to account for the information in AppData/localroaming, /locallow and /local between these RDS servers.
Ideally the less you have to cache during each logon the better, in order to reduce login times.
I've almost never heard anything positive about using roaming profiles, but is this one of those rare exceptions? Even if we do that, I don't believe that covers the information in <User>/AppData/* (or does it?), so what would be the best
way to make sure that gets carried over between sessions inside the pool or pools?
The current solution involves using 3rd party apps, registry hacks, GPOs and a mashup of other things and is generally considered to be a poor fit for the environment. A significant rework is expected and acceptable. Thinking outside the box is fine!
I would relish any advice on the best solutions for deployment! Thank you!
Hi Ben,
Thank you for posting in Windows Server Forum.
Please check the blogs and documents below, which help to explain the basic requirements and to set up the new environment in a properly guided manner.
1. Remote Desktop Services Deployment Guide (Doc)
2. Step by Step Windows 2012 R2 Remote Desktop Services – Parts 1, 2, 3 & 4
3. Deploying a 2012 / 2012 R2 Remote Desktop Services (RDS) farm
Hope it helps!
Thanks.
Dharmesh Solanki -
Best practices for a development/production scenario with ORACLE PORTAL 10G
Hi all,
we'd like to know what is the best approach for maintaining a dual development/production portal scenario. Especially important is the process of moving from development to production and what it implies in terms of portal availability in the production environment.
I suppose the best policy to achieve this is to have two portal instances and move content via transport sets. Am I right? Is there any specific documentation about dev/prod scenarios? Can anybody help with some experiences? We are a little afraid regarding transport sets, as we have heard some horror stories about them...
Thanks in advance and have a nice day.
It would be OK for a pair of pages and a template.
I meant transport sets failed for moving an entire pagegroup (about 100 pages, 1Gb of documents).
But if your need only concerns a few pages, I would therefore develop directly on the production system: make a copy of the page, work on it, then change the links.
Regards -
Best Practice for Commit() after custom method on struts action
Hi all,
I'm curious what's the best way to commit programmatically after an application module custom method, wired to a struts action, inserts a row into a view object.
Right now I do this:
vo.insertRow(theRow);
vo.executeQuery();
getDBTransaction().commit();
Then, I wire a struts action after the one calling the custom method, and drag a commit onto that as well. Is there a better way? I wanted to make sure...
Thanks.
John,
it seems that you are committing twice then.
On another issue, don't program directly against view objects in your Struts Action. Write a method on the YourApplicationModuleImpl class and expose it to the client. Have this method handle the update and commit on the server. In the Action, simply call the exposed method on YourApplicationModule.
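As a hedged sketch of that advice, with mock stand-ins for the view object and transaction (OrdersServiceImpl, MockViewObject and MockTransaction are hypothetical names, not the real ADF classes): the application-module method owns both the row insertion and the single commit, so the Struts action makes exactly one call and no extra commit binding is needed:

```java
import java.util.ArrayList;
import java.util.List;

public class ServiceMethodDemo {
    // Stand-ins for the ADF view object and DB transaction (mocks, not the real API)
    static class MockViewObject {
        final List<String> rows = new ArrayList<>();
        void insertRow(String row) { rows.add(row); }
    }
    static class MockTransaction {
        boolean committed = false;
        void commit() { committed = true; }
    }

    // The AM-style service method: insert and commit live in one place on the server.
    static class OrdersServiceImpl {
        final MockViewObject vo = new MockViewObject();
        final MockTransaction txn = new MockTransaction();

        void createOrder(String row) {
            vo.insertRow(row);
            txn.commit(); // single commit, owned by the service method
        }
    }

    public static void main(String[] args) {
        OrdersServiceImpl svc = new OrdersServiceImpl();
        svc.createOrder("row-1"); // the Struts action would make just this one call
        System.out.println(svc.txn.committed);
    }
}
```

The key design point is that the client never sees the transaction at all, which removes the double-commit problem by construction.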
Frank -
Best practice for taking Site collection Backup with more than 100GB
Hi,
I have a site collection whose data is more than 100 GB. Can anyone please suggest the best practice for taking a backup?
Thanks in advance....
Regards,
Saya
Hi,
I think we can do this using a PowerShell script.
First, add the SharePoint snap-in in PowerShell:
Add-PSSnapin Microsoft.SharePoint.PowerShell
Web application backup & restore
Backup-SPFarm -Directory \\WebAppBackup\Development -BackupMethod Full -Item "Web application name"
Site Collection backup & restore
Backup-SPSite http://1632/sites/TestSite -Path C:\Backup\TestSite1.bak
Restore-SPSite http://1632/sites/TestSite2 -Path C:\Backup\TestSite1.bak -Force
Regards
manikandan -
What is best practice for calling XI Services with Web Dynpro-Java?
We are trying to expose XI services to Web Dynpro via "Web Services". Our XI developers have successfully generated the WSDL file(s) for their XI services and handed off to the Web Dynpro developers.
The Java developers put the WSDL file on their local PCs and import it as "Adaptive Web Services" data models. When the application is constructed and deployed to our development box, the application abends because the J2EE server on that box cannot locate the WSDL file at runtime (it was on the developer's box at, say, "C:\temp\" and that directory does not exist on the dev server).
Since XI does not have a way of directly associating the generated WSDL file with the XI service, where is the best place to put the WSDL so it is readable at design time and at run time? Also, how do we reconcile that we'll have 3 WSDL files for each service (one for Dev, one for QA and one for Prod), and how is the model in Web Dynpro configured so it gets the right one?
Does anyone have a good guide on how to do this? I have found plenty of "how to consume a Web Service in Web Dynpro" docs on SDN, but these XI services are not really traditional Web Services, so the instructions break down when it comes time to deploy.
Hi Bob,
When you create a model using a local WSDL file, it may refer to the folder you picked the file up from (say, "C:\temp") instead of the URL mentioned in the WSDL file; you can check the target address of the logical port. Because of this, when you deploy the application to the server, it tries to find the WSDL in the "C:\temp" path instead of the path specified in the soap:address location of the WSDL file.
The best way is to re-import your Adaptive Web Services model using the URL specified as the soap:address location in the WSDL file,
like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
Or you can ask your XI developer to give you the URL for the web service and the username/password for the server. -
Best practices for Calling Multiple Business Services in OSB
Hi All,
I have a requirement where I need to call multiple business services in OSB. We are presently calling them sequentially in a proxy pipeline. I was wondering if we could accomplish the same task in a better way. Each of the business services is mutually exclusive.
Thanks in Advance,
Rudraksh
Hi Eric,
Thanks for the response. We figured that it is possible to call multiple services with a Split-Join. However, we ran into the issue you described: we had a blocking call and had to wait until each of the services returned a response.
However, we needed a Async model for our design and felt that this might not be a right fit.
We are now looking at implementing the publish option with QoS configured as this fits our usecase better. Thanks for the help again.
Rudraksh