Looking up an EJB – best practices

Hi, what's the correct way to specify an EJB when attempting to get
the home interface?
Using WebLogic, it seems people just specify the JNDI name and it
seems to work fine.
Object ref = context.lookup("testsession");
But other times I see this syntax:
Object ref = context.lookup("java:comp/env/ejb/MySession2");
Which method is best practice?
Thanks

Hello Marcus,
Actually, EJB 1.1 introduced a formal way to specify the location of EJBs, namely the "java:comp/env/ejb" context. Looking beans up through this location is a best practice, because the application assembler doesn't need to know the exact JNDI name of the particular EJB; that task is left to the actual deployer. If you use the actual JNDI name of the EJB instead of Sun's recommended prefix, you limit the overall portability of your EJBs, because the code must be modified to reflect the exact JNDI name used on the particular J2EE application server the beans are deployed on.
You can read up on this in chapter 14 of the EJB 1.1 specification as well as chapter 5 of the J2EE 1.2 specification. Also, check out Mastering Enterprise JavaBeans, 2nd Edition by Ed Roman, Scott W. Ambler, and Tyler Jewell for more explanation and code examples.
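For illustration, here is a minimal sketch of such a lookup. The logical name "java:comp/env/ejb/MySession2" comes from the post above; the home and remote interface names (MySession2Home, MySession2) are assumed placeholders. Note that the "java:comp/env" namespace is only visible to code running inside a container-managed component (servlet, EJB, or J2EE application client), and the logical name must be declared as an <ejb-ref> in the deployment descriptor so the deployer can map it to the real JNDI name (for example in weblogic-ejb-jar.xml on WebLogic):

// Minimal sketch, not production code: MySession2Home and MySession2 are
// placeholder interface names taken from the lookup string in the post above.
import javax.ejb.CreateException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;
import java.rmi.RemoteException;

public class MySessionLookup {

    public MySession2 lookupSession()
            throws NamingException, CreateException, RemoteException {
        Context context = new InitialContext();
        // Look up the logical (environment) name, not the server-specific JNDI name.
        Object ref = context.lookup("java:comp/env/ejb/MySession2");
        // Narrow the reference before casting, as required for RMI-IIOP portability.
        MySession2Home home =
                (MySession2Home) PortableRemoteObject.narrow(ref, MySession2Home.class);
        return home.create();
    }
}

With this in place, only the <ejb-ref> mapping changes between servers; the Java code itself stays the same.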
Best regards,
Ryan LeCompte
[email protected]
http://www.louisiana.edu/~rml7669

Similar Messages

  • Looking for team development best practices

    We are new to Flex and have a team of five developers with
    JEE background. My question is how to best organize a flex project,
    so it's efficient for everyone to work together. Coming from
    typical JEE Web application development, it's quite straightforward
    to break up features into separate Java classes and JSP pages. It
    reduces chances of multiple people working on the same file and the
    merging hassle. I am looking for best practices for breaking up
    flex code especially for MXML, so it is easy for a team of
    developers to work on the project.

  • Looking for Some Examples / Best Practices on User Profile Customization in RDS 2012 R2

    We're currently running RDS on Windows 2008 R2. We're controlling users' desktops largely with Group Policy. We're using Folder Redirection to configure their Start Menus as well.
    We've installed a Server 2012 R2 RDS box and all the applications that users will need. Should we follow the same customization steps for 2012 R2 that we used in 2008 R2? I would love to see some articles from someone who has customized a user profile/desktop
    in 2012 R2 to see what's possible.
    Orange County District Attorney

    Hi Sandy,
    Here are some related articles below for you:
    Easier User Data Management with User Profile Disks in Windows Server 2012
    http://blogs.msdn.com/b/rds/archive/2012/11/13/easier-user-data-management-with-user-profile-disks-in-windows-server-2012.aspx
    User Profile Best Practices
    http://social.technet.microsoft.com/wiki/contents/articles/15871.user-profile-best-practices.aspx
    Since you want to customize user profile, here is another blog for you:
    Customizing Default users profile using CopyProfile
    http://blogs.technet.com/b/askcore/archive/2010/07/28/customizing-default-users-profile-using-copyprofile.aspx
    Best Regards,
    Amy

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is on Solaris 10. I'm relatively new to this style of patching... I have a T5240 with 4 LDOMs: a control LDOM and three clients. I have some fundamental questions I'd like help with.
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just zoneadm halt them regardless of state? I.e., if a zone is running a database, will halting it essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, reboot the control, and then reboot the clients after the control comes up?
    #3. Oracle: it's running in several of the zones on the client LDOMs. What considerations need to be made for this?
    I am sure other things will come up during the conversation but I have been looking for an hour on Oracle's site for this and the only thing I can find is old Sun Docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade" - you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become more clear after reading the above documents. Do as you normally would do before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain really is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the doc. I strongly recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
    #2. First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just have them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    #3. The fact that Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at top, and setting up a test domain to practice on. It shouldn't be hard for you to find documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.

  • Looking for Security Best Practices documentation for Sybase ASE 15.x

    Hello, I'm looking for SAP/Sybase best practice documentation speaking to security configurations for Sybase ASE 15.x. Something similar to this:
    Sybase ASE 15 Best Practices: Query Processing & Optimization White Paper-Technical: Database Management - Syba…
    Thanks!

    Hi David,
    This is something I found on the Sybase site:
    Database Encryption Design Considerations and Best Practices for ASE 15
    http://www.sybase.com/files/White_Papers/ASE-Database-Encryption-3pSS-011209-wp.pdf
    ASE Encryption Best Practices:
    http://www.sybase.com/files/Product_Overviews/ASE-Encryption-Best-Practices-11042008.pdf
    If these do not help, you can search for others at:
    www.sybase.com > search box at the top right.
    I searched for "best practices security".
    You can also run an advanced search; I typed "ssl" into the exact-phrase field.
    Hope this helps,
    Ryan

  • Looking for best practices using Linux

    I use the Linux platform for all the Hyperion tools. We have had problems with Analyzer V7.0.1: the server hangs up randomly.
    I'm looking for Linux best practices for Analyzer, Essbase, EAS, etc.
    I'll appreciate any good or bad comments related to Hyperion on the Linux OS.
    Thanks in advance.
    Mario Guerrero
    Mexico

    Hi,
    Did you search for patches? It can be a known problem. I use all Hyperion tools on Windows without any big problems.
    Hope this helps,
    Grofaty

  • Favorite / Best Practice / Useful MTE's ??!?!

    Hi Everyone,
    I'm setting up monitoring on a multi-system architecture; I've got all of the agents working and reporting to my CEN. I've even got the auto-response email all set up. Great!
    I've been looking around for any best practice on which MTEs (out of the hundreds there!) I should be setting up virtual and rule-based nodes for.
    So come on everyone: what are your favorite MTEs? Of course I'm looking at Dialog Response Times and Filesystem %'s, but any other tips? Hints? Tricks? Any neat MTEs out there that you love to check?
    A best practice / useful guide would be brilliant.
    Thanks in advance.
    Nic Doodson

    ??

  • Design Patterns/Best Practices etc...

    fellow WLI gurus,
    I am looking for design patterns/best practices especially in EAI / WLI.
    Books ? Links ?
    By patterns/best practices I mean, for instance:
    * When to use asynchronous/synchronous application view calls
    * where to do validation (if you're connecting two EIS systems: in both, or only in WLI?)
    * what if an EIS is unavailable? How to handle this in your workflow?
    * performance issues
    Anyone want to share his/her thoughts on this ?
    Kris

    Hi,
    I recently bought the WROX Press book Professional J2EE EAI, which discusses enterprise integration. Maybe not at a design-pattern level (if there is one), but it gave me a good overview and helped me make some design decisions. I'm not sure if it's technical enough for those used to such decisions, but it proved useful to me.
    http://www.wrox.com/ACON11.asp?WROXEMPTOKEN=87620ZUwNF3Eaw3YLdhXRpuVzK&ISBN=186100544X
    HTH
    Oskar

  • Best practice for locations to deploy AP3602i's?

    I am doing an install for a new building for my company. We have 2 office floors and 19x 2602's. They will be switched through a WLC-5508. I am looking to find a best practices guide on how to deploy them (location wise). For instance, should they be spaced certain distances from each other, at least X feet away from walls/obstacles, at least X high off the floor, in a triangular pattern, etc etc. Anyone know where to find documentation such as this?
    Each one of our floors is rectangular in shape, around 20k square feet, with mechanical and elevator shafts in the center of the floor. So it is a large, squarish oval. Nine APs per floor; I thought about placing them along the center line of the oval, but in sort of a zig-zag pattern, not in a straight line along the perimeter. They would all be ceiling mounted. I am looking for any info to let me know if that is a good plan, or how I should change it to conform to best practices.

    Whether you do voice or just data, I recommend you do a site survey before and after.  Make sure you leave some slack in the cables so that you can re-position APs after the post-deployment site survey.  The lowest SNR for a good solid connection is 20~25 dB, because you need some fade margin, so perform the site survey with that in mind.
    Those lighting fixtures are not a big deal if they are recessed and mounted above the access points.  Bigger concerns are fire doors, leaded walls, elevator columns, etc.
    Remember that a site survey after the deployment is very important for a good wireless network.
    Good luck.

  • Workflow & Web Dynpro integration - best practice?

    Hi,
    I am working on ECC6 and EP7 and looking at building some workflow approval scenarios for Travel Management.  I need to move away from the SAP supplied approval scenarios to meet our business requirements.
    What I'm looking for is a 'best practice' for integrating a Web Dynpro application (for ABAP) which will be the basis of our approval workitem.  I have seen a number of presentations which talk about integrating the user decision task (BOR object DECISION) into a web dynpro application.  I have also seen an approach where the FM WDY_EXECUTE_IN_PLACE is used to call a Web Dynpro application from within a BOR object method. 
    I guess I'm wondering if there is an approach that provides cleaner integration than either of the above approaches, as they both appear (well, to me anyway!) to have limitations.  Is there a way, for example, of implementing an ABAP class method as the basis of the approval task that cleanly integrates with the Web Dynpro application?
    Any suggestions would be greatly appreciated.
    Thanks in advance
    Michael Arter

    Hello,
    Are you going to use the Universal Worklist in your portal? If yes, that will bring you more possibilities. Then you don't have to code anything into your business object; instead, in the portal (UWL configuration) you can define which WD application is launched when the user clicks the task in UWL.
    If you are going to use business workplace and just launch WD applications from there, then you probably just need to use WDY_EXECUTE_IN_PLACE (or any other suitable way to launch WD application from ABAP).
    >Is there a way for example of implementing an ABAP class method as the basis of the approval task that cleanly integrates with the Web Dynpro application?
    Yes, but what is really the need for this? Did you know that you can implement the methods of your BO as methods of an ABAP class? Just implement the IF_WORKFLOW interface for your class, and you can then use it in your workflow just like the BO. If you want to "replace" the whole BO with your ABAP class, just take a look at Jocelyn Dart's blog series about the subject. But as I said, it is not really necessary to do this, especially if you already have a lot of custom code in your custom business object; then it is probably a good idea to continue using it for your custom stuff.
    Regards,
    Karri

  • Best Practices Data Extract from Essbase

    Hi there,
    I have been looking for information on Best Practices to extract data from Essbase (E).
    Using MDX I initially wanted to bulk extract data from E but apparently the process was never ending.
    As a second choice, I went for a simulation of interactive access and got ODI generating MDX queries requesting smaller data sets.
    At the moment more than 2000 MDX queries are generated and sequentially sent to E. It takes some time...
    Has anyone been using other approaches?
    Awaiting reaction.
    regards
    JLD

    What method are you using to extract, and what version are you on, including the patch level?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment?  Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems.  (For example, most of our ETL
    queries don't use a WHERE clause, so the vast majority of searches are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; see the sketch at the end of this thread);
    rebuild statistics after every load of a table.
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices.  Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
    SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are among the most challenging to implement efficiently. You can read my blogs on online DWH solutions to see how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and to learn some important concepts relevant to any DWH solution, such as indexing and de-normalization:
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities
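    As a rough illustration of the bulk-load pattern mentioned in the question above (disable the non-clustered index, load, rebuild, then refresh statistics), here is a hedged Java/JDBC sketch. The table dbo.FactSales, the index IX_FactSales_DateKey, the columns, and the connection string are hypothetical placeholders, and in practice the same T-SQL would typically be issued from an "Execute SQL" task or a stored procedure:

    // Illustrative sketch only: table, index, column, and connection details
    // below are hypothetical placeholders, not taken from the thread above.
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class BulkLoadSketch {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:sqlserver://dwserver:1433;databaseName=DW;user=etl;password=secret";
            try (Connection con = DriverManager.getConnection(url)) {
                try (Statement st = con.createStatement()) {
                    // 1. Disable the non-clustered index before the large insert.
                    st.execute("ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE");
                }
                // 2. Load the rows in batches inside one transaction.
                con.setAutoCommit(false);
                String insert = "INSERT INTO dbo.FactSales (DateKey, Amount) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(insert)) {
                    for (int i = 0; i < 100_000; i++) {   // stand-in for the real extract
                        ps.setInt(1, 20240101);
                        ps.setBigDecimal(2, BigDecimal.valueOf(9.99));
                        ps.addBatch();
                        if (i % 5_000 == 0) {
                            ps.executeBatch();
                        }
                    }
                    ps.executeBatch();
                }
                con.commit();
                con.setAutoCommit(true);
                try (Statement st = con.createStatement()) {
                    // 3. Rebuild the index (which also re-enables it) and refresh statistics.
                    st.execute("ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD");
                    st.execute("UPDATE STATISTICS dbo.FactSales WITH FULLSCAN");
                }
            }
        }
    }

    The batch size, the transaction scope, and whether a FULLSCAN statistics update is worth the extra scan time are workload-dependent choices; treat this only as a sketch of the sequence, not a tuned loader.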

  • Best practice - moving home directories

    Hello all,
    I was looking for insight on best practices for moving home directories.
    I was thinking that using the migration tool would be best to move the directories and then using dsrazor to remap all of the home directories for our users.
    We are running Netware 6.5 SP8. I have added a 2TB RAID 10 set to one of our servers which is where I am planning on moving all of the user directories.
    Thoughts/suggestions are welcome.
    Steve D.

    Originally Posted by sjdimare
    Moving data from one volume to another on the same server should not require migration, correct? I just want to make sure all of the trustee assignments stay in place.
    I also will need to redo volume space restriction on new user templates and the migrated volumes.
    First test went quite smoothly.
    Steve D.
    When you move data (using Windows Explorer, etc.) across volumes, the trustee rights drop off. If moving from NW to NW, use the Server Consolidation and Migration Tool. If moving from NW to OES Linux, use miggui. The SCMT has a few other features like project planning and verification.

  • Best Practices on Mining Industries - CO-PC

    Hello,
    I am looking for best practices on the mining industry for CO-PC. I am not very familiar with this industry and I don't know if a classic solution based on standard cost price control and material ledger is the most suitable for this business.
    I am used to Material Ledger, but I think it does not make sense to implement that on Mining Industry that produces coal for example or extract only one type of product using as raw materials only diesel and explosives. Can anyone share  experiences in this segment?
    Rgds,
    Rafael

    Hi Rafael,
    Controlling in the mining industry is often centered not so much on product costs as on assets, maintenance and utilization. Therefore controlling reports often show costs, e.g. per crusher compared to planned cost, and ABC (activity-based costing) plays a larger role than in classical manufacturing industries.
    But material ledger is still quite popular in mining companies. Even if you are a coal miner, there are manufacturing steps like purification, crushing and homogenizing, with activity consumption involved in the process steps. Furthermore, distribution and transportation costs play a big role and can be controlled by material ledger actual costs.
    Have a look at SAP's best practice for Mining that contains material ledger and ABC templates for asset costs:
    http://help.sap.com/bp_miningv1600/Mining_AU/Html/index.htm

  • BizTalk monitoring best practice(s)

    I am looking for information on best practices to monitor a BizTalk environment (ideally using Tivoli monitoring tools).  Specifically I am looking for insight into what should be monitored and how one analyzes a performance profile.  Thanks

    While setting up monitoring agents/products for BizTalk Server (or for any server/application, for that matter), there are two ways to start:
    If available, import/install application-specific monitoring packages, i.e. prebuilt monitoring rules, alerts and actions specific to the application.
    Or create the rules/alerts and actions from scratch for the application.
    For monitoring products like SCOM, management packs for BizTalk Server are available as pre-built, ready-made packages. For a non-Microsoft product like Tivoli, check with
    the vendor/IBM for any such pre-built monitoring packages for BizTalk. If available, purchase it and install it in Tivoli. This would be the best option to start with, instead of spending time and resources building the rules, alerts and actions from scratch.
    If a pre-built monitoring package is not available, then start by creating rules to monitor any errors or warnings from the event logs of the BizTalk and SQL servers. Gradually,
    you can update/add more rules based on your needs.
    Regarding analysing the performance profile, most monitoring products nowadays come with pre-built alerts for monitoring server performance, CPU utilization,
    etc. I'm sure a renowned product like Tivoli will have pre-built alerts for monitoring server performance. The same can be configured to monitor BizTalk's performance. Monitoring event log entries would also pick up any performance-related issues.
    Moreover, Tivoli has a detailed user guide document for setting up alerts for BizTalk Server. Check this
    document here.
    Reading the best-practices links provided by MaheshKumar should also help you.
    The key point to remember is that no monitoring product is perfect; you can't create fool-proof monitoring alerts and actions on day one. They will mature over time in your
    environment.
