Best practice for loading config params for web services in BEA

Hello all.
I have deployed a web service using a Java class as the back end.
I want to read in config values (like init-params for servlets in web.xml). What
is the best practice for doing this in the BEA framework? I am not sure how to use
the web.xml file in the WAR file, since I do not know the name of the underlying
servlet.
Any useful pointers will be very much appreciated.
Thank you.
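
One common approach, sketched below under the assumption that the backend is a JAX-RPC style service class (the class name ConfigAwareServiceImpl and the parameter name maxRetries are hypothetical), is to implement javax.xml.rpc.server.ServiceLifecycle: the container passes a ServletEndpointContext into init(), and from its ServletContext you can read <context-param> entries in the WAR's web.xml. Context params are scoped to the whole web application, so you never need the name of the generated servlet:

    import javax.servlet.ServletContext;
    import javax.xml.rpc.ServiceException;
    import javax.xml.rpc.server.ServiceLifecycle;
    import javax.xml.rpc.server.ServletEndpointContext;

    // Hypothetical backend class for the web service. Implementing
    // ServiceLifecycle lets the container hand us the servlet context,
    // from which <context-param> values in web.xml can be read without
    // knowing the name of the generated servlet.
    public class ConfigAwareServiceImpl implements ServiceLifecycle {

        private String maxRetries; // hypothetical config value

        public void init(Object context) throws ServiceException {
            ServletEndpointContext endpointContext = (ServletEndpointContext) context;
            ServletContext servletContext = endpointContext.getServletContext();
            // Reads <context-param><param-name>maxRetries</param-name>... from web.xml
            maxRetries = servletContext.getInitParameter("maxRetries");
        }

        public void destroy() {
            // release any resources acquired in init()
        }

        // Example operation exposed by the service
        public String describeConfig() {
            return "maxRetries=" + maxRetries;
        }
    }

The key point is that <context-param> is application-scoped, unlike <init-param>, which is tied to a specific named servlet.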

It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, it should be secured.
The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
Today you may have secured your parent services; tomorrow you could come up with a new service which uses one of the existing lower-level services.
If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the gateway.
A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
You can enforce rules at your network layer to allow access to the app server only from the gateway.
When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
1. The next BPEL developer on your project may not be aware of the security extensions.
2. Centralizing security enforcement keeps your development and security operations loosely coupled, and addresses scalability.
Thanks
Ram

Similar Messages

  • How to get the list of values for a dynamic parameter using Web Services SDK?

    I am struggling to get the list of values for a dynamic parameter of a report. I am using the Java Web Services SDK. I tried to use the PromptInfo.getLOV().getValues() method, but it does not work.
    First of all, is this possible (getting the list of values for a dynamic parameter) using Web Services? Second, if it is possible, how should I do it? It seems to work fine when running the report from the CMC: it asks for DB logon info and after that it provides a list of values.
    Thanks

    Your assumption is correct. We are trying to get the LOVs from the Crystal Report. I was not aware that this is not supported by the Web Services SDK.
    We used the Web Services SDK to integrate Crystal Reports into our web application. We implemented some basic actions for reports: scheduling, viewing instances, and running ad-hoc reports.
    We encountered this problem when trying to run/schedule reports with dynamic parameters (a list of values from the DB). We were unable to get the LOVs.
    Please let me know if you can think of an alternative to look at.
    Thanks a lot,
    Catalin

  • Best practice standard User Access Test for WIN2012 AD

    What is the best practice standard user access test for WIN2012 AD?

    Hello,
    As before, add a computer to the domain and log on to the computer with a domain user account.
    You should be able from the client machine to open the shared folders on the DCs with either:
    \\DCName\sysvol
    \\DCName\netlogon
    or
    \\NetBiosDomainName\sysvol
    \\NetBiosDomainName\netlogon
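    If you prefer to script that check from the client machine, here is a minimal Java sketch; the share paths are the same placeholders as above, so substitute your real DC and domain names:

        import java.io.File;

        // Checks that the standard AD shares are readable from this client.
        // DCName and NetBiosDomainName are placeholders.
        public class ShareAccessTest {
            public static void main(String[] args) {
                String[] shares = {
                    "\\\\DCName\\sysvol",
                    "\\\\DCName\\netlogon",
                    "\\\\NetBiosDomainName\\sysvol",
                    "\\\\NetBiosDomainName\\netlogon"
                };
                for (String share : shares) {
                    boolean ok = new File(share).canRead();
                    System.out.println(share + " -> " + (ok ? "accessible" : "NOT accessible"));
                }
            }
        }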
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • What are the best practices to migrate VPN users for inter-forest migration?

    What are the best practices to migrate VPN users for inter-forest migration?

    It depends on various factors. There is no "generic" solution or best-practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    With ADMT, you can develop your own service-based solution if required. I believe this was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method for achieving this. What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to having a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example: FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Requirements for the development of ABAP web services...

    Recently at TechEd in Vienna I saw a demo of the development of an ABAP service. It all looked fairly easy, so when I came back I decided to try it out for myself. However, this mission seems to be compromised by the development environment not playing along. I've tried to encapsulate a small RFC function module inside a service on an IDES ECC 6.0 system, and that went fairly well until I wanted to test the service.
    At some point it is understood that one should set up a reference to a J2EE system in the WSADMIN transaction, and that caused me some trouble, as it looks like the IDES system I tried it on doesn't have a Java stack. How can I check whether that is true? I then tried to test the service directly under Maintain Service within WSADMIN, and it shows a popup screen asking me to log on. After logging on, the browser reports an internal error.
    What do I do now? Where do I start checking whether everything is set up the way it should be? Is there a note or a how-to guide somewhere?
    Best regards,
    Anders

    Hello,
    For creating and hosting ABAP web services, you do not need a Java stack. However, for testing web services using the Web Service Navigator (from SOAMANAGER), you need the J2EE stack.
    If you are invoking the web service from SoapUI, .NET, or some other third-party tool, you need to create an endpoint for the service using SOAMANAGER and provide the port type WSDL to the third-party system.
    Thanks,
    Venu
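
    To illustrate Venu's last point, here is a minimal sketch of invoking such an endpoint from plain Java via JAX-WS Dispatch, without a generated proxy. The WSDL URL, namespace, service/port names, request payload, and credentials are all hypothetical placeholders; take the real values from the endpoint created in SOAMANAGER:

        import java.io.StringReader;
        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.transform.Source;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.Dispatch;
        import javax.xml.ws.Service;

        public class AbapServiceClient {
            public static void main(String[] args) throws Exception {
                // Hypothetical WSDL location and QNames; copy the real ones from SOAMANAGER.
                URL wsdl = new URL("http://saphost:8000/path/to/service?wsdl");
                QName serviceName = new QName("urn:example:zservice", "ZMyService");
                QName portName = new QName("urn:example:zservice", "ZMyServicePort");

                Service service = Service.create(wsdl, serviceName);
                Dispatch<Source> dispatch =
                    service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

                // ABAP endpoints usually require a logon; basic auth shown here.
                dispatch.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, "user");
                dispatch.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, "secret");

                // Hypothetical request payload for the RFC-based operation.
                String request = "<ZMyFunction xmlns=\"urn:example:zservice\"/>";
                Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
                System.out.println("Response received: " + response);
            }
        }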

  • Can't find source file for newsservice.NewsServiceProxy in JMS Web Service

    I can't find the source file for newsservice.NewsServiceProxy in the JMS Web Service tutorial. The
    newsservice.NewsServiceProxy class is required for the client to post news to the server and receive news from the server, but
    I can't find where the file is located in the newsservice.jar provided with that tutorial.
    Could someone please help me locate it? Thank you :)

    Hi,
    You will have to generate this file. The steps to generate it are provided in the Install file under NewsService/doc/Install.html (once newsservice.jar is extracted), in the "Running the NewsService client application" section, Step 1.
    Follow the instructions, and you will be able to download the zip/jar file which contains newsservice.NewsServiceProxy.
    Regards
    Elango.

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    A Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object... the CRUD operation happens and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?

    There are 3 tiers in a Java EE application. (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. This guy would interact with a Session Facade which lives on the Business-tier. The SessionFacade is the abstraction on the Business-tier and the Business Delegate is the abstraction on the Presentation-tier. It is these guys that have direct communication. This design enables low coupling between the actual implementations of each tier. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation-tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
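
    A minimal sketch of this layering in plain Java (the Order domain and all the names here are hypothetical, the facade is a plain POJO rather than an EJB, and in a real project each type would live in its own file and tier):

        // Transfer Object carried between tiers (the "TransferObject" above).
        class OrderDto implements java.io.Serializable {
            private final String orderId;
            private final double total;
            OrderDto(String orderId, double total) { this.orderId = orderId; this.total = total; }
            String getOrderId() { return orderId; }
            double getTotal()   { return total; }
        }

        // Integration tier: the Data Access Object.
        interface OrderDao {
            OrderDto load(String orderId);
        }

        class InMemoryOrderDao implements OrderDao {
            public OrderDto load(String orderId) {
                return new OrderDto(orderId, 42.0); // stand-in for a real CRUD query
            }
        }

        // Business tier: the Session Facade is the abstraction the delegate talks to.
        interface OrderFacade {
            OrderDto findOrder(String orderId);
        }

        class PojoOrderFacade implements OrderFacade {
            private final OrderDao dao = new InMemoryOrderDao();
            public OrderDto findOrder(String orderId) {
                return dao.load(orderId); // BusinessObject -> DAO -> CRUD
            }
        }

        // Presentation tier: the Business Delegate hides whether the facade is
        // an EJB, a web service, or a POJO; a factory could pick the implementation.
        class OrderDelegate {
            private final OrderFacade facade = new PojoOrderFacade();
            OrderDto findOrder(String orderId) {
                return facade.findOrder(orderId); // called from the Struts Action
            }
        }

    Swapping PojoOrderFacade for an EJB- or web-service-backed implementation would leave the Struts Action and the delegate untouched, which is the low coupling described above.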

  • Best Practices? Rendering to Flash for streaming web....

    I am always impressed with the Flash-based videos I see streaming on YouTube, FastCompany.Tv and other sites.
    My question: can you please either explain or point me in the right direction for streaming video best practices? Specifically, I am looking for info on the best settings to produce the Flash video (codecs and/or FCP render settings), and then what people use as a Flash player on their websites to show the end result.
    My goal is to create internal instructional videos for corporate training and then host them on my site (or streaming from Akamai). I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Examples of what I like, but I don't know how to do:
    http://www.fastcompany.tv/video/getting-government-work
    Thank you in advance for your expertise and insight.
    -Steven

    I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Use the H.264 setting for iPod in Compressor. The H.264 file will play in a JW Flash Player, and it can be downloaded for iPod viewing.

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on a Windows Server OS and providing the database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the report developers had a standard System DSN called "APP_DATA", which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now that hardware is moving from Windows OS to Red Hat Linux, we are trying to determine the best practices (and pros/cons) of using one of the three methods below to access the Oracle database for our *.rpt files:
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) An Oracle native connection should be the most efficient method of passing the SQL query to the DB, with the fewest issues and the best speed. [PRO]
    1b.) An Oracle native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle native would require special handling of the *.rpt files at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connections. This would result in a lot more developer/admin overhead than they are currently used to. [CON]
    2a.) A third-party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to keep a developer/admin overhead similar to what we are used to. [PRO]
    2b.) Adding a third-party vendor into the mix may lead to support issues if we have problems with the results or speeds of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" for running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with the results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit IP of the Oracle server to be defined for each connection. This would require special handling of the *.rpt files at the source-file level (and NOT the CMC level) to change them from DEV -> TEST -> PROD connections. This would result in a lot more developer/admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some information.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that stays the same across all environments.
    The database name will then resolve differently depending on the environment, and will therefore access a different database.
    The second option is to change the connection in the .rpt files in an automated way with the Schedule Manager. This is an additional web application to deploy that can change the connection settings of thousands of .rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in one pass.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are, in my view, deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with large volumes you will see the difference.
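
    To make the name-resolution option concrete, here is a minimal JDBC sketch of the same idea (the alias, TNS_ADMIN path, credentials, and query are hypothetical; the Oracle thin driver resolves the alias through tnsnames.ora when oracle.net.tns_admin is set, and the native client does the equivalent on its own):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TnsAliasConnection {
            public static void main(String[] args) throws Exception {
                // Point the Oracle thin driver at the directory holding tnsnames.ora.
                System.setProperty("oracle.net.tns_admin", "/u01/app/oracle/network/admin");

                // The same alias is used in DEV, TEST, and PROD; each server's
                // tnsnames.ora maps APP_DATA to a different physical database.
                String url = "jdbc:oracle:thin:@APP_DATA";

                try (Connection conn = DriverManager.getConnection(url, "report_user", "secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                    while (rs.next()) {
                        System.out.println("Connected, DB time: " + rs.getString(1));
                    }
                }
            }
        }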

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult, and we need some help. We want to be able to compare actual measures side by side with their corresponding objectives/targets. Sounds simple. But our objectives are static (not able to be aggregated), with multiple dimensions and multiple levels. We need some best-practice tips on how to design our data model and repository properly, so that we can see the objective/target for a measure regardless of the dimensions used in the criteria and regardless of the level.
    Here are some more details:
    Example of the existing objective table:

    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. With our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible.)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi. Yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB target, i.e. by changing config files, so that the data doesn't go to that Reports schema but instead to a custom schema created by a user? I don't know if that can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming in; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best practice to use Time Capsule for backup of 3 different products (MBP 15 OS X Lion, MBP 13 OS X Lion and MBA 13 OS X Mountain Lion)? Only the MBP15 is backed up regularly.

    When I want to save data from the MBA13 on Mountain Lion (wireless) with the Time Capsule, is there any best practice to follow?
    After that, assuming the data are backed up, can we easily differentiate the data in the Time Capsule belonging to the MBP15/13 and the MBA13?

    Unfortunately, Apple left the Ethernet port, the most important port in networking, off the MBA, so your first backup of the entire Mac will need to be done using wireless.
    That may take a day or two, unless your MBA has a Thunderbolt port, in which case you could add a Thunderbolt to Ethernet adapter and connect the MBA to the Time Capsule for the first backup using an Ethernet cable. It will probably only take 3-4 hours or less doing it this way.
    Once you have the first complete backup done, subsequent backups can be done using wireless, since they will only take a few minutes on average.
    Both Macs will back up to the Time Capsule using Time Machine automatically. Backups are kept completely separate, so one Mac will normally only be able to "see" its own backups.

  • Best Practices in use of ABAP for SRM and/or CRM Configuration

    I was wondering if there is a document that defines best practices for the use of ABAP with the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required, and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi, Johnson
    Sorry, please don't mind, but you are not in the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!
    Thanks and Regards,
    Faisal

  • Best practice in Infoprovider & Query design for access by BO Universe

    Hello Experts,
    Are there any best practices, identified by practitioners or suggested by SAP, for the development of InfoProviders and queries for access by a BO Universe?
    Best practices should be from the perspective of performance, design simplicity, adaptability to change, etc.
    Appreciate your help.
    Regards,
    Pritesh.

    Thanks Suresh.
    My project plan is to build InfoCubes and queries, which will then be used to build a Universe on top of them. Thus I am looking for do's and don'ts while designing the InfoCubes and queries, so that there won't be any issues (performance or otherwise) when they are accessed by the Universe built on them.
    I hope I have made it clearer now.
    Regards,
    Pritesh.

  • Best practice DNS in VPN environment for Lync2013 clients

    So I have site-to-site VPNs connecting the small branch offices to the main office. Internal DNS makes sure that the branch offices can access all the servers/services in the main office under their domain.local namespace.
    In such a scenario, will the Lync 2013 clients connect through the VPN to the internal sites, due to both lyncdiscover and lyncdiscoverinternal being available?
    Wouldn't it put far less burden on the VPN routers if clients simply went out to the internet and connected from the external side, so that all the Lync traffic does not have to be stuffed through the VPN pipe? I don't see the point in encrypting the traffic once more.
    Thanks for your suggestions about best practices!
    HST

    Hi,
    When users connect to the corporate network using a VPN client, Lync media traffic is sent through the VPN tunnel. This configuration can create additional latency and jitter, because media traffic must pass through an additional layer of encryption and decryption. The issue is compounded when the VPN concentrator is busy.
    If you want to connect to the Lync server from a public network, you need to deploy an Edge server.
    For a solution that lets external Lync clients connected through VPN send media through the Edge Servers instead of the tunnel, refer to the "Solution Configuration" part of the link below:
    http://blogs.technet.com/b/nexthop/archive/2011/11/15/enabling-lync-media-to-bypass-a-vpn-tunnel.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support
