Best Practice - 2 lookups

I need to filter a main table A, depending on whether either of two sub-tables, B and C, has records that relate to A.
I can see two ways of doing this.
I could create sub-key tables BK and CK, UNION them into a combined key table BCK, and do a lookup in the filter on BCK.
Or, I could create BK, drop its PK constraint, do an insert from CK's source, re-create the PK constraint, and then use the BK table (with the non-duplicate CK keys inserted) as the lookup in the filter.
Which do you think is best? Or is there a better way?

Not sure what you are attempting here - is it possible that a join (or an outer join) would do the trick?
Regards:
Igor
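
To make the alternatives concrete, here is a minimal Oracle-style sketch; the table and column names (a.id, b.a_id, c.a_id) are hypothetical stand-ins for your actual keys:

-- Approach 1: build a combined key table BCK and filter on it
CREATE TABLE bck AS
SELECT a_id FROM b
UNION                       -- UNION (not UNION ALL) removes duplicate keys
SELECT a_id FROM c;

SELECT a.*
FROM   a
WHERE  a.id IN (SELECT a_id FROM bck);

-- Simpler alternative (no intermediate key table), along the lines Igor suggests:
SELECT a.*
FROM   a
WHERE  EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id)
   OR  EXISTS (SELECT 1 FROM c WHERE c.a_id = a.id);

If the optimizer handles the OR EXISTS form well on your data, it avoids building and maintaining BK/BCK key copies altogether.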

Similar Messages

  • Best practice for migrating data tables - please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment, and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document, and they use that for the deployment.
    I believe their rationale is that they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.
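    As a purely illustrative sketch (the table name APP_LOOKUP and its columns are hypothetical), the kind of version-controlled, re-runnable script the DBAs are asking for might look like this for one of the five tables:
    -- deploy_010_app_lookup.sql (kept in version control, run by the DBA in the documented order)
    CREATE TABLE app_lookup (
      id    NUMBER        PRIMARY KEY,
      code  VARCHAR2(30)  NOT NULL UNIQUE,
      label VARCHAR2(200) NOT NULL
    );
    INSERT INTO app_lookup (id, code, label) VALUES (1, 'ACTIVE', 'Active');
    INSERT INTO app_lookup (id, code, label) VALUES (2, 'INACTIVE', 'Inactive');
    COMMIT;
    -- rollback_010_app_lookup.sql (the documented recovery step)
    -- DROP TABLE app_lookup PURGE;
    Each script, together with its rollback, then becomes one line item in the deployment document described above.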

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both fact and dimension table, i.e. in the BMM they are of course separated into Fact and Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and I e.g. have a Period dimension pointing to all the different fact tables (different sources).
    It seems like the best solution to this is to have an alias of the fact/transaction table and have 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse - so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit; you just need to make sure that the MVs are refreshed when the source is updated.
    -Domnic
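    If you go the materialized view route, a minimal Oracle sketch could look like this (the table and column names are hypothetical, and the refresh strategy is deliberately the simplest one):
    -- Dimension-style copy of the shared fact/transaction table
    CREATE MATERIALIZED VIEW period_dim_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND   -- refresh whenever the source table changes
    AS
    SELECT DISTINCT period_id, period_name, fiscal_year
    FROM   transactions;
    -- Refresh after the source is updated, e.g. from a scheduled job:
    -- EXEC DBMS_MVIEW.REFRESH('PERIOD_DIM_MV');
    Whether a FAST refresh (with materialized view logs) is possible depends on the query; COMPLETE ON DEMAND is the safe default for a sketch like this.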

  • Best practice - caching objects

    What is the best practice when many transactions require a persistent
    object that does not change?
    For example, in an ASP model supporting many organizations, an organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

    The problem with using object id fields instead of PC object references in your
    object model is that it makes your object model less useful and intuitive.
    Taken to the extreme (replacing all object references with their IDs), you
    will end up with objects like rows in a JDBC dataset. Plus, if you use a PM per
    HTTP request it will not do you any good, since the organization data won't be in
    the PM anyway, so it might even be slower (no optimizations such as Kodo batch
    loads).
    So we do not do it.
    What you can do:
    1. Do nothing special; just use the JVM-level or distributed cache provided by
    Kodo. You will not need to access the database to get your organization data, but
    the object creation cost in each PM is still there (do not forget the cache we
    are talking about is a state cache, not a PC object cache) - good because it is
    transparent.
    2. Designate a single application-wide PM for all your read-only big
    things - lookup screens etc. Use a PM per request for the rest. Not
    transparent - affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did it
    pretty successfully. The requirement is to be able to recognize all PCs
    which are updateable and evict/makeTransient those when the PM is returned to
    the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
    all managed objects of a certain class) so you do not have stale data in your
    PM. You can use Apache Commons Pool to do the pooling and make sure your PM pool
    is able to shrink. It is transparent and increases performance considerably.
    That is one approach we use.
    "Gary" <[email protected]> wrote in message
    news:[email protected]...
    >
    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

  • Best practice for Tags

    Hello,
    In packaged applications, tags are used in most of the apps. E.g. in the Customer Tracker app, we can add tags to a customer, where these tags are stored in a varchar2 column in the Customers table.
    In my case, I have predefined tags for Properties (Real Estate) in a lookup table called TAGS, e.g. Full floor, Furnished, Fitted, Duplex, Attached... What is the best practice to tag the properties:
    1- To store these tags in a varchar column in the PROPERTIES table, using a Shuttle box.
    OR
    2- To store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags in one line in the Properties report.
    OR
    Do you have a better option ??
    Regards,
    Fateh

    Fateh wrote:
    Hello,
    In packaged applications, tags are used in most of the apps. E.g. in the Customer Tracker app, we can add tags to a customer, where these tags are stored in a varchar2 column in the Customers table.
    In my case, I have predefined tags for Properties (Real Estate) in a lookup table called TAGS, e.g. Full floor, Furnished, Fitted, Duplex, Attached...
    These appear to me to be two different use cases. In the packaged applications the tags allow end users to attach free-form metadata to data for their own purposes (these are sometimes called "folk taxonomies"). Users may use tags for different purposes, or different tags for the same purpose. For example, I might add "Monday", "Thursday" or "Friday" tags to customers because those are the days they receive their deliveries. For the same purpose you might tag the same customers "1", "8", and "15", using the route numbers of the trucks making the deliveries. You might use "Monday" to indicate that the customer is closed on Mondays...
    In your application you are assigning known, predefined attributes to the properties. This is a standard 1:M attribute model. Displaying them using the tag metaphor does not make them equivalent to free-form user tags.
    What is the best practice to tag the properties:
    1- To store these tags in a varchar column in the PROPERTIES table, using a Shuttle box.
    If you do this, how do you:
    - Efficiently search for furnished duplex properties?
    - Globally change "fitted" to "built-in"?
    - Report the number of properties, broken down by full floor, duplex, fitted...?
    OR
    2- To store them in a third table, e.g. PROPERTIES_TAGS (ID PK, PROPERTY_ID FK, TAG_ID FK), then use the LISTAGG function to show the tags in one line in the Properties report.
    As in "Why to use Look up Table", this is the correct way to do it. It enables the data to be indexed for efficient retrieval, and questions like those above can be handled simply using joins and grouping.
    You might want to investigate the possibility of eliminating the ID PK and using an index organised table for this.
    OR
    Do you have a better option ??
    I'd also look carefully at your data model. Ensure you're not flirting with the EAV anti-pattern. Should some/all of these values not simply be attributes on the property?
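    To make option 2 concrete, here is a minimal Oracle-flavoured sketch; PROPERTIES, TAGS and PROPERTIES_TAGS come from the post, while column names such as ID and TAG_NAME are assumptions:
    -- Junction table relating properties to their predefined tags,
    -- index-organized as suggested above (names assumed)
    CREATE TABLE properties_tags (
      property_id NUMBER NOT NULL REFERENCES properties (id),
      tag_id      NUMBER NOT NULL REFERENCES tags (id),
      CONSTRAINT properties_tags_pk PRIMARY KEY (property_id, tag_id)
    ) ORGANIZATION INDEX;
    -- One line of tags per property for the report
    SELECT p.id,
           LISTAGG(t.tag_name, ', ') WITHIN GROUP (ORDER BY t.tag_name) AS tags
    FROM   properties p
    JOIN   properties_tags pt ON pt.property_id = p.id
    JOIN   tags t             ON t.id = pt.tag_id
    GROUP  BY p.id;
    Searching for, say, furnished duplex properties then becomes a join against PROPERTIES_TAGS with GROUP BY/HAVING COUNT, rather than a LIKE scan over a delimited varchar column.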

  • Best Practice in V7.0: Issues with Sales Planning and Reporting

    I am trying to install the SAP Best Practices for BPC 5.1 on SAP BPC 7.0 SP04. I have done this as I cannot find any Best Practice documents for version 7 as yet.
    I have managed to get through the Administration setup and most of the BPC Administration Configuration Guide; however, I am having a problem with 7.4 Running a Data Management Package - Import on page 32 of 36. This step involves uploading a data file, Demo_Revenue_Data.txt, into BPC.
    The file says that it has failed due to "Invalid dimension ACCOUNT in lookup".
    I believe that this error may be driven by a previous step 6.4 Creating Script Logic where the logic for BP_Sales Application was required.
    My question is twofold in that I need to determine:
    1. Has anyone else tried the Best Practices for BPC 5.0 in BPC 7.0?
    2. Does anyone know how to overcome the error when uploading the Demo Revenue into BPC?
    Edited by: Kevin West on Jul 8, 2009 2:03 PM

    Hi,
    The BPC best practices documents from 5.x also work for 7.0, because 7.0 is just an update of 5.x.
    Running an Import involves logic only if you are running the package with the Run Default Logic option enabled.
    Your issue seems to be related to mapping, which means you have to check your Transformation and Conversion files.
    Anyway, the best practices documents will not provide you with information about how to build Transformation and Conversion files.
    You should follow an SAP BPC training; it will help you to build your application more easily and faster.
    Regards
    Sorin Radulescu

  • What is the best practice for JCO Connection Settings for a DC Project

    When multiple users are using the system, data is missing from Web Dynpro screens. This seems to be due to running out of connections to pull data.
    I have a WebDynpro Project based on component development using DC's.  I have one main DC which uses other DC's as Lookup Windows.  All DC's have their Own Apps.  Also inside the main DC screen, the data is populated from multiple function modules.
    There are about 7 lookup DC Apps accessed by the user
    I have created JCO destinations with the following settings:
    Max Pool Size 20
    Max Number of Connections 200
    Before I moved to a DC project it was a regular Web Dynpro project with one application, and all lookup windows were inside the same project. I never had the issue with the same settings.
    Now, maybe because of the DC usage and the increase in applications, I am running out of connections.
    Has anyone faced this problem? Can anyone suggest a best practice for how to size JCO connections?
    It does not make any sense that just with 15-20 concurrent users I am seeing this issue.
    All lookup components are destroyed after use and created manually as needed. What else can I do to manage connections?
    Any advise is greatly appreciated.
    Thanks

    Hi Ravi,
    Try to go through this blog; it's very helpful.
    [Web Dynpro Best Practices: How to Configure the JCo Destination Settings|http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417600)ID2054522350DB01207252403570931395End?blog=/pub/wlg/1216]
    Hope it helps.
    Regards
    Jeetendra

  • Best Practice EJB 3.0 Question

    I have a web application consisting of 3 projects:
    - Model (EJB 3.0 Session Beans connected to two different databases)
    - TagLibrary (custom tag library)
    - ViewController (Web App / GUI)
    Currently I am connecting to the EJB beans using code that JDeveloper generates for a test client:
    env.put( Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory" );
    env.put(Context.PROVIDER_URL, "t3://localhost:7101");
    However, I would like to move these to a properties file (I believe jndi.properties) so that they can be modified based on the app server.
    My question is following:
    What is the best practice for session beans in the Model project to access other session beans in the same project? Do I also need to specify a JNDI prop file and settings? (This occurs when a bean from one database needs to access a bean from another database.)
    Or should I really put these in two separate projects / EJB libraries?
    Thanks,
    Kris

    You have two options. The first is to use a JNDI lookup (you should be able to use just new InitialContext(), without the environment map).
    The second one is more elegant and, as far as I'm concerned, should be referred to as best practice: using dependency injection:
    @EJB
    YourSessionBeanInterface yourEJB;
    If you get stuck, there is plenty of documentation about this on the internet.
    Pedja

  • IronPort ESA best practice for DNS servers?

    Hello!
    Is there a best practice for what servers should be used for the Cisco IronPort DNS servers?
    Currently when I check our configuration, we have set it to "Use these DNS servers" and the first two are our domain controllers and last two are Google DNS.
    Is there a best practice way of doing this? I'm thinking of selecting the "Use the Internet's Root DNS Servers" option as I can't really see an advantage of using internal DC's.
    Thoughts?

    Best practice is to use the Internet Root DNS Servers and define specific DNS servers for any domain that you need to give different answers for. Since internal mail delivery is controlled by smtproutes, using internal DNS servers is normally not required.
    If you must use internal DNS servers, I recommend servers dedicated to your IronPorts and not just servers that handle enterprise lookups as well. IronPorts can place a very high load on DNS servers because every outside connection results in multiple DNS lookups (forward, reverse, SBRS).
    If you don't have enough DNS horsepower you are susceptible to a DoS attack, either through accident or by design. If the IronPorts overload your internal DNS servers it can impact your entire enterprise.

  • _msdcs subdomain best practice with NS records?

    I have the _msdcs subfolder under my domain (the grey folder); example below.
    It has only one DC inside of it as an NS server. This DC is old and no longer exists. I checked my test environment and it has the same scenario (an old DC that does not exist); example below.
    I'm just wondering:
    1) Is this normal? Should this folder update itself with other servers?
    2) Should I be adding one of my other DCs and removing the original?
    I have a single forest, single domain setup at the 2008 functional level. My normal
    _msdcs zone does behave as expected and removes and adds the appropriate records. Thanks.

    I apologize for the late response. I see you've gone further than what I recommended.
    No, you shouldn't have deleted the _msdcs.parent.local zone! I'm not sure why you did that. Are you working with someone else on this who recommended doing that? If not, you're over-thinking it. I provided specifics to fix it by simply updating the NS records; that's it. If you only found that the _msdcs folder had the wrong record, then that's all you had to change.
    In cases where DCs are removed, replaced, upgraded, etc., it's also best practice to check a few things to make sure everything is in order, and one of them is to check the NS records on all zones and delegations. A delegation's NS records won't update automatically with changes, but zone NS records will if DCs are properly demoted.
    The _msdcs delegated zone is required by Active Directory. And yes, based on your thread subject, it's best practice. When Windows 2000 came out, and IF you had created the initial domain with it, it was not set up this way, but all domains initially created with Windows 2003 and newer are designed this way. If you upgraded from 2000 to 2003, then one of the steps we must perform is to create the _msdcs delegation.
    Please re-create it in this order:
    1. In the DNS console, right-click Forward Lookup Zones, and then click New Zone. Click Next.
    2. On the Zone Type page in the New Zone Wizard, click Primary zone, and then click to select the Store the zone in Active Directory check box. Click Next.
    3. On the Active Directory Zone Replication Scope page, click "To all DNS servers in the Active Directory forest parent.local".
    4. On the Zone Name page, in the Zone Name box, type _msdcs.parent.local.
    5. Complete the wizard by accepting all the default options.
    After you've done that:
    1. Delete the _msdcs subfolder under parent.local.
    2. Right-click parent.local and choose New Delegation.
    3. Type in _msdcs.
    4. On the Nameserver page, type in the name of your server and its IP address.
    5. Complete the wizard. You should now see a grayed out _msdcs folder under parent.local.
    6. Go to the c:\windows\system32\config\ folder.
    7. Find netlogon.dns and rename it to netlogon.dns.old.
    8. Find netlogon.dnb and rename it to netlogon.dnb.old.
    9. Open a command prompt.
    10. Run ipconfig /registerdns.
    11. Run net stop netlogon.
    12. Run net start netlogon.
    13. Wait a few minutes, then click on the _msdcs.parent.local zone and press F5 to refresh it.
    You should see the data populate.
    Ace Fekay
    MVP, MCT, MCITP/EA, MCTS Windows 2008/R2 & Exchange 2007, Exchange 2010 EA, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Technical Blogs & Videos: http://www.delawarecountycomputerconsulting.com/
    This post is provided AS-IS with no warranties or guarantees and confers no rights.

  • Mapping Best Practice Doubt

    Dear SDN,
    I have a best practice doubt.
    For a scenario where a value needs to be mapped to another value, but the conversion is based on certain logic over R/3 data, what is the recommended implementation:
    1.  Use Value Mapping Replication for Mass Data   or
    2.  Use XSLT ABAP Mapping calling an RFC ??
    Best regards,
    Gustavo P.

    Hi,
    I would suggest you use XSLT ABAP mapping, or
    use the RFC Lookup API, available from SP 14 onwards, to call the RFC from your message mapping itself.
    Regards
    Bhavesh

  • Best Practice for Enterprise Application Integration

    I would like to integrate a few corporate systems by using Oracle Fusion Middleware. I suppose the integrated process will run in synchronous mode such that it also supports two-phase commit.
    In BPEL Process Manager, there is a tool called "WSIF" which seems relevant to my requirement. I would like to know which tools would be best for my integration project, and any suggestions on implementation.
    Thanks in advance,
    Samuel Wai

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is:
    Re: OSB: What is best practice for reading configuration information
    Another could be:
    Uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following.
    1] I have an .xsd file which represents the configuration data. The structure of the configuration XML is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment...
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |--- Dev
    |    |-- Dev_Config_file.xml
    |--- Stage
    |    |-- Stage_Config_file.xml
    |--- Prod
    |    |-- Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model the value which specifies the current server type (Dev/Stage/Prod) on which the OSB message flow is running - say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of that global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] The thread Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over the HTTP protocol and getting the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    I hope I am clear... I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage; then I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822

  • Best Practice for Initial Load

    Hello,
    What is the best way of doing the initial load? Is there a best practice somewhere that tells you what should be imported first?
    I want to understand the order, e.g.:
    1. Load lookups,
    2. Hierarchies,
    3. Taxonomy and attributes,
    and last the main table,
    etc...
    I don't understand the logic.
    Thanks in advance

    Hi Ario,
    If you follow any SAP standard business content for MDM repositories, e.g. Material:
    https://websmp130.sap-ag.de/sap/support/notes/1355137
    In the SAP Note attachments, you will get MDM71_Material_Content.pdf
    You will see that the import of reference data (lookup table data) comes first (step 6), before the import of master data (step 7).
    During the import of reference data (lookup data), please follow the import sequence by using processing levels 0, 1, 2 etc.,
    which takes care of filling lookup flat tables first, then filling hierarchy tables, etc.
    After that, if you are maintaining taxonomy, you need to fill the taxonomy table in Taxonomy mode of the Data Manager, in the sequence: categories, attributes, linkage between attributes and categories, and lastly attribute values.
    After populating the reference data, you need to populate main table records along with tuple table data, since in MDM 7.1 qualified tables have been replaced by tuples for most of the masters. If you are still maintaining a qualified table, you can import main table data along with the qualified table in a single step; otherwise you can also use the approach of populating non-qualifiers to the qualified table first, before importing the main table, and then importing main table data along with the qualifier fields of the qualified table.
    This entire process above is for exporting data from an SAP R/3 system to MDM. If you are importing data into MDM from a legacy system (non-SAP systems too), the approach remains the same: populate lookup table data first and main table data last.
    I don't understand the logic.
    The logic is simple: in your main table you have fields which are lookups to reference tables (e.g. fields in the main table which are lookups to flat lookup tables like Countries, Currencies etc., or a field in the main table which is a lookup to a hierarchy/taxonomy table). So if these values are not populated first, then during your main table import you will have incomplete data for all of those fields which are lookups to other tables, because the values in your lookup tables were not populated before the main table import.
    Kindly revert if you still have any doubts.
    Regards,
    Mandeep Saini

  • Best practice for Servlet EJB integration

              I'm wondering what the best practice is for Servlet EJB integration in terms of
              caching the home and remote objects. My understanding is that the Home object
              is threadsafe and could therefore be cached as an attribute of the Servlet. This
              would remove the need for a JNDI lookup for each request. Similarly caching the
              ProxyObject would yield further savings. However, I have noticed that most examples
              don't use either of these practices. Why not?
              Thanks in advance,
              Geordie
              

    This has been answered repeatedly. WL allows you to cache JNDI context
              objects, ejb homes and remotes without any problems. (EJB remote interfaces
              must only be used by one thread at a time, but that requirement is provided
              by the EJB spec itself.)
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com
              +1.617.623.5782
              WebLogic Consulting Available
              "Geordie" <[email protected]> wrote in message
              news:3af9579f$[email protected]..
              >
              > I'm wondering what the best practice is for Servlet EJB integration in
              terms of
              > caching the home and remote objects. My understanding is that the Home
              object
              > is threadsafe and could therefore be cached as an attribute of the
              Servlet. This
              > would remove the need for a JNDI lookup for each request. Similarly
              caching the
              > ProxyObject would yield further savings. However, I have noticed that
              most examples
              > don't use either of these practices. Why not?
              >
              > Thanks in advance,
              > Geordie
              
