Best Practice for External Libraries, Shared Libraries and Web Dynpro

Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would
like to know the best practice for doing this.
External libraries seem to work fine at compile time, but deployment often fails with an error saying the external library is not a deployed component.
Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not just interested in something that works; I want to know
the best practice. What is the best way to limit the number of JARs that need to be kept in a shared library/external library? And when is a sharing reference/service/etc. a valid approach versus hunting down the JARs in the portal libraries and storing them in an external library?


Similar Messages

  • Best Practice for Host Named Site Collections and Web Apps

    Looking for advice on setting up host named site collections. If I am reading many of the TechNet articles and blogs correctly, I should 1) have only one top-level web app for host named site collections and 2) not have a host header for that web
    app. If that's correct, I am looking for advice. We have 7 separate domains that we support in our farm. Currently each of those domains is divided into web applications based on the domain: *.contoso, *.trains.com, *.bakers.com, etc.
    Is the concept now that all of the host named site collections fall under that one web app? How do we deal with SSL for each of those separate domains, which all have their own certificates?
    Thanks in advance for your comments. 
    NLewis

    Yes, for creating host named site collections, you first create a host-header-less web app and then create host named site collections under that web app. However, this only works where all the host named site collections end in the same domain. So
    you can create host named site collections such as intranet.contoso.com, my.contoso.com, portal.contoso.com, etc., as they all end in *.contoso.com.
    In your environment, if you have web apps that cater to different domains like *.contoso.com, *.trains.com, *.bakers.com, you need to create separate web apps, as they end in different domains. You can then have a separate wildcard SSL certificate
    for each of those web apps.
    Hope this helps.
    Thanks
    Mohit

  • Best practices for Office 365 SHARED CALENDAR for whole school / organization

    hi
    we need guidance on best practice for setting up a SHARED CALENDAR on an Office 365 Exchange server for an entire organization (school) of 150 staff.
    Requirements
    + all staff should have read-only / reviewer permissions on the calendar
    + a handful of staff should have editor permissions on the calendar
    + the calendar should synchronise custom categories and colors
    Current Solution
    at the moment we have found that a shared mailbox is the best solution because:
    - all users can add the shared mailbox in Outlook 2010 as an additional mailbox, read-only
    - all the categories & colors for the calendar are automatically synchronised, because the color categories are stored within this mailbox.
    - you can edit calendar permissions in Outlook to allow some users as "editor" of the calendar.
    Problem with Current Solution
    the problem however is that the users also need to access this...

    Hi Aleksei,
    I think Inactive mailboxes in Exchange Online is the feature that you want. This feature makes it possible for you to preserve (store and archive) the contents of deleted mailboxes indefinitely.
    A mailbox becomes inactive when an In-Place Hold or a Litigation Hold is placed on the mailbox before the corresponding Office 365 user account is deleted.
    But I'm afraid that it might be impossible to "easily share certain folders or even the whole mailbox with people in the company". As can be seen from the articles below, this only allows administrators, compliance officers, or records managers
    to use the In-Place eDiscovery feature in Exchange Online to access and search the contents of an inactive mailbox:
    http://technet.microsoft.com/en-us/library/dn144876(v=exchg.150).aspx
    http://blogs.technet.com/b/exchange/archive/2013/03/21/preserve-mailbox-data-for-ediscovery-using-inactive-mailboxes-in-exchange-online.aspx
    Anyway, this is the forum to discuss questions and feedback for Microsoft Office client. For more details about your question, I would suggest you post in the dedicated forum of
    Exchange Online, where you can get more experienced responses:
    https://social.technet.microsoft.com/Forums/msonline/en-US/home?forum=onlineservicesexchange
    The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
    Regards,
    Ethan Hua
    TechNet Community Support

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as simple as segmenting those databases/tables/records out from the rest (and putting separate database(s) in the DMZ where our web server is). Our
    current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally
    I'd worry more about using such a 'less trodden' path, since those are exactly the areas where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general
    using a DMZ is a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for version control B2B, ESB and BPEL

    Hello,
    we are setting up a new system using B2B, ESB and BPEL. The development team is more experienced working with PL/SQL and Oracle Workflow, and we are worried that JDeveloper generates changes to the source files during development and that we might have problems with version control.
    Is there any best practice for setting up version control for these systems? Do we need to take anything in particular into consideration when setting up the projects?
    We are using Serena Dimensions 9.1 for version control with the add-on in Jdeveloper.
    Thanks in advance!

    I believe JDeveloper has a plugin for Dimensions.
    I haven't used it, but to get it, go to Tools (it may be Help; I don't have JDeveloper on this machine to confirm) and check for updates.
    If you select the third-party check box and click Next, you will see an entry for Dimensions.
    Configure the connection and develop as you would any other project.
    cheers
    James

  • Problem with external libraries and Web Dynpro

    Hello,
    we're stuck here.
    We've been trying for a week now to include external libraries (e.g. Hibernate) in our Web Dynpro project, without success so far.
    We've read every single forum and blog entry we could find on this topic.
    E.g.: /people/valery.silaev/blog/2005/09/14/a-bit-of-impractical-scripting-for-web-dynpro
    We're running the NetWeaver 2004s SP9 trial version.
    The biggest problem is that when we deploy a J2EE Server Component Library DC exactly as described in the blog entry above, although it is deployed correctly and lists under Server->Libraries in the Visual Administrator, the external hibernate.jar doesn't get deployed onto the server. It's just an empty container named hib/lib without any entries in "Jars Contained". The SDA file of the library DC is also only 2 KB and doesn't include the hibernate.jar which we added as a used DC...
    Any help would be greatly appreciated.
    Or is there a simpler way to include external JARs in Web Dynpro projects and deploy them to the server? (We already tried putting them into the lib folder without luck; we always get "NoClassDefFound...")

    Hello Christian,
    this is a bug which should be fixed in NW 7.0 SP9 patch 1; see the forum thread "External Library DC in NW2004s SP09" on the same issue.
    Regards, Bertram

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables, returning them in ResultSets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a ResultSet.
    b) I created a class with a bunch of String attributes to copy the ResultSet data into, one ResultSet row per instance of the bean, and then close the ResultSet.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display, and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the ResultSet? Would you have had a bean class with attributes other than Strings, like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display?" Yes.
    "And use logic to avoid printing the redundant part of the data?" No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than strings - ie one object, with a collection attribute to hold the related "many" records.
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    "The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before."
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree: in terms of best practices, what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards their existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets
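    To make the "one parent object with a collection attribute" idea above concrete, here is a minimal sketch; the Order/OrderLine beans and the orders/order_lines tables are hypothetical names, not from the original post. One joined query is de-duplicated into parent beans held in a LinkedHashMap, so the JSP can simply nest two loops instead of filtering redundant rows.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Parent bean holds a collection of child beans instead of flat Strings.
    class Order {
        long id;
        String customer;
        List<OrderLine> lines = new ArrayList<OrderLine>();
    }

    class OrderLine {
        long id;
        String product;
        int quantity;
    }

    class OrderDao {
        // Map a single joined query into parent beans, each holding its children.
        List<Order> findOrders(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, l.id AS line_id, l.product, l.quantity "
                       + "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                       + "ORDER BY o.id";
            Map<Long, Order> byId = new LinkedHashMap<Long, Order>();
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    long orderId = rs.getLong("id");
                    Order order = byId.get(orderId);
                    if (order == null) {          // first row seen for this parent
                        order = new Order();
                        order.id = orderId;
                        order.customer = rs.getString("customer");
                        byId.put(orderId, order);
                    }
                    long lineId = rs.getLong("line_id");
                    if (!rs.wasNull()) {          // LEFT JOIN row may have no child
                        OrderLine line = new OrderLine();
                        line.id = lineId;
                        line.product = rs.getString("product");
                        line.quantity = rs.getInt("quantity");
                        order.lines.add(line);
                    }
                }
            } finally {
                ps.close();                       // also closes the ResultSet
            }
            return new ArrayList<Order>(byId.values());
        }
    }

    Expose the fields through getters and the JSP can then walk the list with two nested loops (e.g. JSTL c:forEach), with no logic needed to suppress repeated parent data.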

  • Best Practice for Installation of Both Leopard and Aperture 2 upgrade.

    I've finally bitten the bullet and purchased both Leopard and the Aperture 2.0 upgrade. I've tried searching for a best practice to install both, but haven't been able to find one, only troubleshooting-type stuff. Any suggestions, things to avoid, etc. would be greatly appreciated. Even a gentle shove to a prior thread would be helpful. . . .
    Thanks for pointing me in the right direction.
    Steve

    steve hutchcraft wrote:
    I've tried searching for a best practice to install...
    • First be really sure that all your apps work well with 10.5.3 before you leave 10.4.11, which is extraordinarily stable.
    • Immediately prior to and immediately after every installation of any kind (OS, apps, drivers, etc.) go to Utilities/Disk Utility/First Aid, and Repair Permissions. Repairing Permissions is not a problem fixer per se, but anecdotally many folks with heavy graphics installations (including me) who follow that protocol seem to maintain better operating environments under the challenge of heavy graphics than folks who do not diligently do so.
    • When you upgrade the OS do a "clean install."
    • RAM is relatively inexpensive and 2 GB RAM is limiting. I recommend adding 4x2 GB RAM. One good source is OWC: http://www.owcomputing.com/.
    • After you do your installations check for updates to the OS and/or Aperture, and perform any upgrades. Remember to Repair Permissions immediately prior to and immediately after the upgrade installations.
    • If you are looking for further Aperture performance improvement, consider the Radeon HD 3870. Reviews at http://www.barefeats.com/harper16.html and at http://www.barefeats.com/harper17.html.
    Good luck!
    -Allen Wicks

  • Best practices for using Normalizer in ASA and in AIP-SSM

    Both PIX OS 7.x and IPS 5.x software have a concept of "traffic normalization". PIX OS on ASA can do virtual reassembly, IPS on SSM (so far as I know) can do physical reassembly and fragmentation of IP packets. Also, both ASA and SSM can do TCP normalization. For example, they both can "check inconsistent retransmissions" and protect against "TTL evasion attacks". I realize that PIX OS has only basic normalization functions and the SSM is much more configurable.
    The question is: what are the best practices here? Is it better to disable some IP/TCP PIX OS checks / IPS signatures on the ASA and/or SSM? Is it better to use just the SSM for traffic normalization? Does anybody have personal experience here?
    Also, there is a BugID CSCsd04327 - "ASA all out of order packets are dropped when sending to ssm"
    "When ips ssm is inline slowness is reported. show service-policy shows that the number of out of order packets reported match exactly the number of no buffer drops (even with queue-limit option). Performance hit is not the result of tcp normalization (on IPS 5.x ssm) in this case, but rather an issue with asa normalizer."
    To me it seems to be more logical to have normalization function on the firewall, but there may be drawbacks in doing this.
    So, those who're using ASA with SSM, please share your experience.
    Thx.

    Yes, this is almost correct ;)
    TCP SRP (Stream Reassembly Processor) is turned OFF on the SSM and cannot be enabled, contrary to the 4200 appliances, but IP FRP (Fragmentation Reassembly Processor) is functioning on the SSM.
    The testing of 7.2(1) shows the following:
    When you configure a "policy-map" to send packets to the SSM, the "tcp-map" parameter "queue-limit", which has the value of zero by default, is set to some X (the X is unknown). This means that the ASA now only accepts TCP segments which are sent in the correct order. More specifically, gaps in SEQs are not allowed anymore. When, for example, the ASA receives a TCP segment which has a SEQ within the window but the previous TCP segment has been lost, it sends an ACK to the sender to force retransmission of the lost segment. As a result the sender retransmits both segments. Only then does the ASA forward both segments to the SSM. This basically means that the SSM always sees in-order TCP segments. That is why SRP is not needed on the SSM.
    There are at least two problems however.
    The first problem is the performance impact.
    The ASA now acts almost like a proxy. And, so far as I know, it doesn't support SACK (Selective ACKs). First, when the ASA does TCP SEQ randomization it doesn't change the SEQ values within the SACK TCP option. This simply breaks SACK. Second, even if you turn the randomization mechanism OFF, then, I believe, the ASA will not selectively ACK the lost TCP segments, as it simply doesn't support this mechanism.
    The second problem is THE SECURITY HOLE.
    By default the ASA doesn't check TCP checksums. The 4200 appliances do check them by default. But as we now know, SRP is turned OFF on the SSM... So this means that the SSM module can easily be evaded. The attacker only needs to mix attacking traffic with random TCP segments that have a bad TCP checksum. The SSM module will see the mixture of the two and will not recognize the attack. The target host will drop the TCP segments with bad checksums and see only the attacking traffic... This has been successfully verified in the lab.
    Of course, this security hole can be closed with the "tcp-map" parameter "checksum-verification", but it will definitely have a performance impact.
    The last note: All of the above has never been documented by Cisco. So, use at your own risk, etc.
    I hope, you will read this message, Marcoa. All of this MUST be documented. Once again, the default behaviour of the ASA opens up a big security hole.
    Regards,
    Oleg Tipisov,
    REDCENTER,
    Moscow
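    As an aside, here is a minimal sketch of the RFC 1071 16-bit ones'-complement checksum that TCP checksum verification is built on (illustrative only; real TCP verification also covers a pseudo-header of source/destination addresses, protocol and length). The point of the evasion above is that an end host drops any segment whose checksum fails this test, while a sensor that skips the test still processes it.

    // Sketch of the RFC 1071 Internet checksum over an arbitrary byte array.
    static int internetChecksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i + 1 < data.length; i += 2) {
            sum += ((data[i] & 0xFF) << 8) | (data[i + 1] & 0xFF);
        }
        if (data.length % 2 == 1) {           // pad a trailing odd byte with zero
            sum += (data[data.length - 1] & 0xFF) << 8;
        }
        while ((sum >> 16) != 0) {            // fold carries back into the low 16 bits
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (int) (~sum & 0xFFFF);         // ones' complement of the folded sum
    }

    A receiver recomputes this over the segment (with the checksum field zeroed) and drops the packet when the result does not match the transmitted checksum; that is the check the "checksum-verification" parameter enables on the ASA.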

  • Best practice for number of result objects in webi

    Hello all,
    I am just wondering if SAP has any recommendation or best practice document regarding the number of fields in the Result Objects area for Webi. We are currently running on XI 3.1 SP3... one of the end users is running a Webi report with close to 20 objects/dimensions and 2 measures in the result objects. The report runs for 45-60 mins and sometimes times out. The cube which stores the data has around 250K records, and the report would return pretty much all the records from the cube.
    Any recommendations/ best practices?
    On a similar issue: our production system is around 250 GB; what would the memory on your server typically be? Currently we have 8 GB of memory on the SAP instance server.
    Thanks in advance.

    Hi,
    You mention cubes, so I suspect BW or MS Analysis Services. Yes, OLAP data access (ODA) to OLAP DataSets is a struggle for WebIntelligence, which is best at consuming relational RowSets.
    Inefficient MDX queries can easily be generated by the Webi tool, primarily due to substandard (or excessive) query and document design. Mandatory filters and focused navigation (i.e. targeted BI questions) are the best for success.
    Here's an interesting article about "when is a webi doc too big": https://weblogs.sdn.sap.com/pub/wlg/18706
    Here's a best practice doc about Webi report design and tuning on top of BW MDX: https://service.sap.com/~sapidb/011000358700000750762010E
    Optimization of the cube itself, including aggregates and cache warming, is important. But especially consider use of Suppress Unassigned Nodes in the BW hierarchy, and "query stripping" in the Webi document.
    Finally, the patch level of the BW (BW-BEX-OT-MDX) component is critical; anything lower than 7.01 SP09 is trouble (memory management, MDX optimization, functional correctness).
    Regards,
    H

  • Aperture best practices for large libraries

    Hi,
    I am very new to Aperture and still trying to figure out the best way to take advantage of it.
    I have been using iPhoto for a while, with just under 25,000 images. This amount of images takes up about 53 gig. I recently installed and built an Aperture library, leaving the images in the iPhoto library. Still, the Aperture library is over 23 gig. Is this normal? If I turn off the preview, is the integration with iLife and iWork the only functionality lost?
    Thanks,
    BC
    MacBook Pro   Mac OS X (10.4.10)  

    "Still, the Aperture library is over 23 gig. Is this normal?"
    If Previews are turned on, yes.
    "If I turn off the preview, is the integration with iLife and iWork the only functionality lost?"
    Pretty much.
    Ian

  • Best Practice For Secure File Sharing?

    I'm a newbie to both OS X Server and file sharing protocols, so please excuse my ignorance...
    My client would like to share folders in the most secure way possible. I was considering that the best way might be for them to VPN into the server and then view the files through the VPN tunnel; my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN (i.e. from inside the internal network)... I don't see any options in Server Admin to restrict users in that way....
    I'm not afraid of the command line, FYI, I just don't know if this is:
    1. Possible!
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    Thanks for any suggestions!

    my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN
    Simple - don't expose your server to the outside world.
    As long as you're running on a NAT network behind some firewall or router that's filtering traffic, no external traffic can get to your server unless you set up port forwarding. That is the method used to run, say, a public web server, where you tell the router/firewall to allow incoming traffic on port 80 through to your server.
    If you don't set up any port forwarding, no external traffic can get in.
    There are additional steps you can take - such as running the software firewall built into Mac OS X to tell it to only accept network connections from the local network, but that's not necessary in most cases.
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    VPN should take care of most of your concerns - at least as far as the file server is concerned. I'd be more worried about what happens to the files once they leave the network - for example have you ensured that the remote user's local system is sufficiently secured so that no one can get the documents off his machine once they're downloaded?

  • Best practice for external drive?

    A really basic set of questions, for which I ought to know the answers:
    with four external firewire drives connected, three of them daisy-chained to one FW800 port, the fourth to a second FW800 port, do I:
    a) when shutting down the computer, eject them, or can I just shut down the computer then shut down the drives?
    b) when ejecting the drives, is there a particular order (e.g., of the three that are daisy-chained, should the drive actually connected to the computer be the last to eject)?
    c) when sleeping the computer, is it all right to leave the drives spinning and let them sleep when they realize the computer is sleeping?
    d) when starting up, start the drives before turning on the computer? I've actually forgotten this, and started the drives after the computer was up and running, and it doesn't SEEM to have any ill effect, but am I doing something dangerous?

    OS X is pretty tolerant of whatever you do. Just don't turn them off or unplug them unless you either turn off the computer or eject the volume by dragging it to the Trash.
    So
    a) shut down the computer then shut down the drives: yes
    b) It doesn't matter with my FW drives, but I suppose there could be some that will not pass the FW signals when shut off. I believe if they are built correctly they should work like mine and the FW communication will pass through the devices that are powered down.
    c) OS X should be in control of drive sleep. Some HD manufacturers have firmware that conflicts with OS X and insist upon sleeping the drives when they ought not to, or vice versa. When sleeping the computer, just ignore the drives. Do not shut them off though or else OS X will scold you for having removed them unexpectedly. They should sleep on their own, but if their firmware insists otherwise, there is nothing you can do about it.
    d) This doesn't matter at all, unless (obviously) you need to boot the computer from one of the external volumes.
    Read about OS X file system journaling - it should ease your mind about potential corruption: http://support.apple.com/kb/HT2355

  • What is the Best Practice for Variant List, Add, Edit and Display Forms?

    Requirement:
    I have a single list. The list has a large number of columns and a large number of items (let's say 20,000).
    I want to show users a different view of the list based on clicking on a different left-hand navigation option.
    Lets say I have four types of users:  Sales, Manufacturing, Shipping and Finance. I would like to have four options in the left-hand navigation.
    All of them would be pointing at the same list, BUT I want each of them to have a custom list form. The only differences between the custom list forms would be:
    Each would have its own set of views, and hence its own default view.
    Each would have its own New, Edit and Display Forms.  The only difference between the forms in one variant list and another would be: The order of the columns and which columns are modifiable.
    I would like to achieve this in SharePoint Designer in such a way that the "users" could still add/modify views and could even modify the forms from the SharePoint Menu.  BTW, I don't want to use InfoPath for obvious reasons.
    What is the best approach to meeting this requirement?  I have at least 20 sites and 70 lists overall that need variant forms made.
    HELP!!
    Savin
    BTW, we are using SharePoint 2013 and I selected the wrong forum *sigh*. But I think it's probably the same answer.
    Cheers, Savin Smith

    Hi,
    I understand that you want to have different forms based on different views.
    To my knowledge, there is no out-of-the-box method to achieve it.
    As a workaround, you can add JavaScript code to the different view pages.
    For example, to open a different new form based on the view, you can get window.location, determine the view from it, and then change the onclick event of the "New item" button.
    For more information, you can refer to:
    http://css-tricks.com/snippets/javascript/get-url-and-url-parts-in-javascript/
    http://samsharepoint.wordpress.com/2013/05/01/change-the-default-sharepoint-ok-and-cancel-button/
    Thanks,
     Linda
    Forum Support

  • Best practice for calling application module methods and PL/SQL code

    In my application I am experiencing problems with connection pooling; I seem to be using a lot of connections when only a few users are using the system. As part of our application we need to call database procedures for business logic.
    Our backing beans call methods on the application module, which in turn call a database procedure. For instance, in the backing bean we have code like this to call the application module method.
    // Calling Module to generate new examination/test.
    CIGAppModuleImpl appMod = (CIGAppModuleImpl)Configuration.createRootApplicationModule("ky.gov.exam.model.CIGAppModule", "CIGAppModuleLocal");
    String testId = appMod.createTest( userId, examId, centerId).toString();
    AdfFacesContext.getCurrentInstance().getPageFlowScope().put("tid",testId);
    // Close the call
    System.out.println("Calling releaseRootApplicationModule remove");
    Configuration.releaseRootApplicationModule(appMod, true);
    System.out.println("Completed releaseRootApplicationModule remove");
    return returnResult;
    In the application module method we have the following code.
    System.out.println("CIGAppModuleImpl: Call the database and use the value from the iterator");
    CallableStatement cs = null;
    try{
    cs = getDBTransaction().createCallableStatement("begin ? := macilap.user_admin.new_test_init(?,?,?); end;", 0);
    cs.registerOutParameter(1, Types.NUMERIC);
    cs.setString(2, p_userId);
    cs.setString(3, p_examId);
    cs.setString(4, p_centerId);
    cs.executeUpdate();
    returnResult=cs.getInt(1);
    System.out.println("CIGAppModuleImpl.createTest: Return Result is " + returnResult);
    }catch (SQLException se){
    throw new JboException(se);
    finally {
    if (cs != null) {
    try {
    cs.close();
    catch (SQLException s) {
    throw new JboException(s);
    I have read in one of Steve Muench's presentations (Oracle Fusion Applications Team Best Practices) that calling the createRootApplicationModule method is a bad idea, and that you should call the method via the binding interface instead.
    I am assuming that calling createRootApplicationModule uses many more resources and database connections than calling the method through the binding interface, such as:
    BindingContainer bindings = getBindings();
    OperationBinding ob = bindings.getOperationBinding("customMethod");
    Object result = ob.execute();
    Is this the case? Also, is getDBTransaction().createCallableStatement() the best way of calling database procedures? Would it be better to expose PL/SQL packages as web services and then call them from the application module? Is that more efficient?
    Regards
    Orlando

    Thanks Shay, this is now working.
    I successfully got the binding to the application method in the pagedef.
    I used the following code in my backing bean.
    package view.backing;

    import oracle.adf.model.BindingContext;
    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class Testdatabase {
        private DCBindingContainer bindingContainer;
        public void setBindingContainer(DCBindingContainer bc) { this.bindingContainer = bc; }
        public DCBindingContainer getBindingContainer() { return bindingContainer; }

        // Calling the module to validate the user and return user role details.
        public static String validateUser(String userId, String examId) {
            // Get the binding container for the current page.
            BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
            // "calldatabase" is the method action binding defined in the pagedef.
            OperationBinding operationBinding = bindings.getOperationBinding("calldatabase");
            operationBinding.getParamsMap().put("p_userId", userId);
            operationBinding.getParamsMap().put("p_testId", examId);
            Object result = operationBinding.execute();
            String userRole = result.toString();
            return userRole;
        }
    }
