JNDI cache in a Cluster

Let's say we have 2 machines in a cluster, and I cache data in JNDI, which is accessible from both servers.
1. When both servers update the same cache, how can we put a lock on it so that they don't overwrite each other's data?
2. Can I hide this cache from the other machine in the cluster?
3. What is the best way to do session-level caching? For example, if I use the session ID of WebLogic's HttpSession as a key, it is too big to store.
4. What is the maximum number of objects that can be stored in JNDI? If I store 400-500 objects in JNDI, will performance be affected?
5. Do I have to rebind the cache if I update the data?
Thanks,
/selvan
Captura Software Inc
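For orientation, the kind of JNDI usage these questions refer to is a plain bind/rebind of a serializable object. Below is a minimal sketch (the JNDI name and map contents are illustrative; it does not by itself address locking or cluster-wide visibility):

    import java.util.HashMap;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiCacheExample {
        public static void main(String[] args) throws NamingException {
            Context ctx = new InitialContext();
            HashMap cache = new HashMap();
            cache.put("someKey", "someValue");
            ctx.bind("myapp/cache", cache);      // initial publication
            // After updating the cached data, rebind so a fresh copy is
            // published under the same name (relates to question 5).
            cache.put("someKey", "newValue");
            ctx.rebind("myapp/cache", cache);
        }
    }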
          


Similar Messages

  • JNDI lookup in a cluster

    Hi,
    The WL documentation contains the following:
    "When clients obtain an initial JNDI context by supplying the cluster DNS name, weblogic.jndi.WLInitialContextFactory obtains the list of all addresses that are mapped to the DNS name"
    Should clients (e.g. an EJB) be concerned about retrying the call to get the initial context? In other words, can this call fail if one of the servers in the cluster fails?
    After the initial context is obtained, it seems that the lookup should always work (since WL will take care of individual server failures and retry the lookup if needed).
    It is not clear whether the call to get the initial context is guaranteed to succeed (as long as one server in the cluster is up, of course). Any information would be appreciated.
    Thanks,
    Philippe
              

    Hello Philippe,
    I had posted a similar question but now can't find it... it got lost, I suppose. Anyway, I wanted to add my findings on this. I have a stateful session object running in a clustered setup. This stateful object holds Home references to multiple stateless beans. When I force a failover, my stateful object fails over properly. But if I don't perform a new Home lookup for the stateless objects it needs, I receive the following error:
    ####<Nov 9, 2001 2:00:06 PM CST> <Error> <> <gwiz> <testServer1> <ExecuteThread: '9' for queue: 'default'> <> <> <000000> <<TestDeliveryActionHandler>Problem occured when trying to do a save and goto. java.rmi.NoSuchObjectException: Activation failed with: java.rmi.NoSuchObjectException: Unable to locate EJBHome: 'GBTestManagerHome' on server: 't3://10.1.17.3:7001
    When I perform a lookup during the ejbActivate() method to get a new Home reference, all seems to work OK. My question, though, is: is this correct? From what I have read, I had the impression that the deserialized Home reference should be able to locate a new reference in the cluster without having to perform a lookup again.
    Any advice from anyone is greatly appreciated,
    Rich
              "Philippe Fajeau" <[email protected]> wrote:
              >Hi,
              >
              >The WL documentation contains the following:
              >
              >"When clients obtain an initial JNDI context by supplying the cluster
              >DNS
              >name, weblogic.jndi.WLInitialContextFactory obtains the list of all
              >addresses that are mapped to the DNS name"
              >
              >Should clients (e.g. an EJB) be concerned about retrying the call to
              >get the
              >Initial Context. In other words, can this call fail if one of the servers
              >in
              >the cluster fails?
              >
              >After the intital contect is obtained, it seems that the lookup should
              >always work (since WL will take care of individual server failures and
              >retry
              >the lookup in needed).
              >
              >Not clear whether the call to get the initial context is guaranteed to
              >succeed (as long as one server in the cluster is up, of course)... Any
              >information would be appreciated.
              >
              >Thanks,
              >
              >Philippe
              >
              >
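    The workaround Rich describes amounts to re-acquiring the Home in ejbActivate() rather than relying on the deserialized reference. A minimal sketch of that idea (the home interface and JNDI name are taken from the error message above; the field name is illustrative):

        // Fragment of the stateful session bean: refresh the Home reference on
        // activation, since the reference deserialized during failover may point
        // at a server that is no longer available.
        private transient GBTestManagerHome testManagerHome;

        public void ejbActivate() {
            try {
                javax.naming.Context ctx = new javax.naming.InitialContext();
                Object ref = ctx.lookup("GBTestManagerHome");
                testManagerHome = (GBTestManagerHome)
                    javax.rmi.PortableRemoteObject.narrow(ref, GBTestManagerHome.class);
            } catch (javax.naming.NamingException e) {
                throw new javax.ejb.EJBException(e);
            }
        }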
              

  • JNDI replication within a cluster

    Hi to all of you,
    We successfully enabled HTTP session replication and tested the failover. We would also like to set up JNDI replication so that we can use it as storage for some shared data -- as stated in http://download.oracle.com/docs/cd/B10464_05/web.904/b10324/cluster.htm, this should be enabled automatically once EJB replication is enabled.
    After some problems we finally enabled EJB replication (we configured it through orion-application.xml) and the required replication policy is propagated to our stateful beans. Still, JNDI is not replicated across the machines.
    We are running the latest OAS 10g, the cluster is multicast on Red Hat Enterprise, the replication policy for stateful beans is set to 'onRequestEnd' (we tried all the options :), and our application is a normal EAR with 1 EJB and 1 WAR archive; apart from JNDI replication, it works as expected.
    Is there some trick, not mentioned or easily overlooked in the documentation, to enable JNDI replication?
    Kind Regards,
    Martin

    Hopefully solved -- although the documentation explicitly mentions rebinding as not working, after any change to a value stored in the JNDI context you should simply rebind the value; it is then replicated to the other JNDI contexts.
    m.
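    A minimal sketch of the rebind-after-update pattern Martin describes (the JNDI name and the cached object are illustrative):

        import java.io.Serializable;
        import java.util.HashMap;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class SharedDataUpdater {
            public void update(String key, Serializable value) throws NamingException {
                Context ctx = new InitialContext();
                // Look up the shared map, mutate it, then rebind it so the
                // updated copy is pushed out to the other JNDI contexts.
                HashMap shared = (HashMap) ctx.lookup("shared/data");
                shared.put(key, value);
                ctx.rebind("shared/data", shared);
            }
        }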

  • Loading the JNDI Tree in a Cluster

    Is there any special processing that occurs with a Startup class when it has been started via the cluster-level properties file?
    We've got a class that loads the JNDI tree with various configuration for our application. It's written so that it rebind()s entries in the tree, so two copies can work together in the cluster, but I'd like to prevent the double work. (One copy bind()s an element, then the other rebind()s the same value.)
    Are Startups "cluster" aware, and is there any magic to simplify this for me, or do I need to create a semaphore-like setup in my class to detect two copies running?
    Thanks in Advance,
    Brian Homrich
    Chicago, Illinois
              

    In the startup class, if you don't set replicateBindings to false on the Environment object, all locally bound objects will be replicated across the cluster. The default is true, so JNDI will try to replicate every bind/rebind, etc.
    Rebind will remove the old copy and bind the new copy. But I would have to understand more about what you are trying to do before I can be of any help.
    - Prasad
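    A minimal sketch of what Prasad describes, assuming the setReplicateBindings setter on weblogic.jndi.Environment that his advice implies (the bound name and value are illustrative):

        import javax.naming.Context;
        import weblogic.jndi.Environment;

        public class ConfigLoader {
            public void load() throws Exception {
                Environment env = new Environment();
                // Keep these bindings local to this server so two startup classes
                // in the cluster don't replicate (and overwrite) each other's entries.
                env.setReplicateBindings(false);
                Context ctx = env.getInitialContext();
                ctx.rebind("config/appSettings", loadSettings());  // rebind is idempotent
            }

            private java.io.Serializable loadSettings() {
                return new java.util.HashMap();  // placeholder for real configuration data
            }
        }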

  • How to create InitialContext for JNDI lookup in a cluster?

    I am new to clusters in WL7 and I wanted to know how a client would create an InitialContext object to perform a JNDI lookup for a remote object deployed across several servers in the cluster. Is the following correct?
    Physical servers in the cluster:
    machine1:9001
    machine2:9001
    machine3:9001
    Code for creating InitialContext:
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    p.put(Context.PROVIDER_URL, "machine1:9001,machine2:9001,machine3:9001");
    Context c = new InitialContext(p);
    Thanks,
    Raffi
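    For reference, WebLogic provider URLs for a cluster are usually written with the t3 protocol and a comma-separated address list; a sketch along the lines of the poster's code (the host names are the poster's own placeholders):

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class ClusterContextExample {
            public static void main(String[] args) throws Exception {
                Properties p = new Properties();
                p.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                // Either list the cluster members explicitly with the t3 protocol...
                p.put(Context.PROVIDER_URL, "t3://machine1:9001,machine2:9001,machine3:9001");
                // ...or supply the cluster's DNS name, e.g. "t3://mycluster:9001".
                Context c = new InitialContext(p);
                System.out.println("Got context: " + c);
            }
        }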
              


  • Cache Connect in cluster

    Hi!!!
    We have a 4-node Linux cluster based on MC/ServiceGuard.
    T10 works as a standalone in the package "pack1" along with other apps; the apps use t10.
    Package "pack1" (with t10 and the apps) can be moved to any of the 4 cluster nodes, and it doesn't have a home node (it may live a year on node 1, 3 months on node 2, 1 day on node 3, etc.).
    Is there any possibility to set up Cache Connect in that kind of cluster configuration?
    We don't want to install an additional T10 instance on every node :)

    Chris,
    but when pack1 is moved to another node, the host name (where t10 now runs) changes.
    And since pack1 has moved to another node, values like BOOKMARK and REPORTS in TT_03_AGENT_STATUS for that t10 subscriber remain the same, and of course that leads to growth of the TT_03_34575_L table.
    Maybe it is because the t10 subscriber is identified in TT_03_AGENT_STATUS by hostname?
    On the t10 side, Cache Connect works fine.
    But in Oracle we have a growing table, and we can't build our own monitoring of Cache Connect based on BOOKMARK and REPORTS.
    I believe I've done something wrong. But what exactly?
    Linux Cluster:
    node1 pack1(T10 7.0.5 64bit, Oracle client 11 64bit, other apps...)
    node2 pack2
    node3 pack3
    node4 pack4
    Oracle 11G R1 64 bit is installed on remote linux machine
    Cache Connect:
    CREATE READONLY CACHE GROUP MYCACHE
    AUTOREFRESH INTERVAL 15 SECONDS ...
    Alexander

  • Different distributed caches within the cluster

    Hi,
    I have three machines, n1, n2 and n3, that host Tangosol. Two of them act as the primary distributed cache and the third one acts as the secondary cache. I also have WebLogic running on n1, which, based on some requests, pumps data onto the distributed cache on n1 and n2. I have a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All 3 nodes are within the same cluster.
    I would like to ensure that the data coming directly from WebLogic is distributed only across n1 and n2 and NOT n3. For example, if I do not start an instance of Tangosol on node n3 and an object gets pruned from either n1 or n2, ideally I should get a "storage not configured" exception, which does not happen.
    The point is that the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol populates the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
    From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
    My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and have a similar config file on n3 with local-storage for Dist:n3 set to true and local storage for the primary cache set to false.
    Can I configure local-storage specific to a cache rather than to a node?
    I also have an EJB deployed on WebLogic that handles a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement
    NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.

    Hi Jigar,
    First, I am curious as to the requirements that drive this configuration setup.
    In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service-by-service basis (i.e. distributed-scheme/local-storage).
    Later,
    Rob Misek
    Tangosol, Inc.
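    A rough sketch of how the client side of Rob's suggestion might look, assuming a cache configuration in which "Dist:n1n2" and "Dist:n3" are mapped to two separate distributed cache services (the cache names here are illustrative; the actual storage split is done in the XML cache configuration via distributed-scheme/service-name and distributed-scheme/local-storage):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class CacheClient {
            public Object getData(Object key) {
                // Primary cache: its service is storage-enabled only on n1 and n2.
                NamedCache primary = CacheFactory.getCache("Dist:n1n2");
                Object value = primary.get(key);
                if (value == null) {
                    // Secondary cache: a different service, storage-enabled only on n3,
                    // so obtaining it on n1/n2 does not make them store its data.
                    NamedCache secondary = CacheFactory.getCache("Dist:n3");
                    value = secondary.get(key);
                }
                return value;
            }
        }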

  • OBIEE Cache mangement in cluster enviornment

    Hi all,
    We have a clustered OBIEE environment (two OBIEE servers):
    Node1: primary node
    Node2: secondary node
    Now I want to seed the cache using an iBot (in a single-server environment we got this working) and then purge the cache.
    1. Do we need to do it on both nodes, or how can we do this?

    It looks like you haven't looked at the Oracle BI Server's DSN; see how it is configured and the other options.
    The DSN is configured per environment, no matter whether it is a single server or a cluster.
    Please mark correct/helpful.

  • Near caches disappearing on cluster leave then join

    In the situation where a node gets dropped from the cluster, whether because of GC pauses, network problems, or whatever, it will normally manage to join back into the cluster and all of the distributed caches come back. However, it seems that near caches are not recreated when the node rejoins the cluster. This is very problematic because, although the system seems healthy, its performance is dramatically reduced.
    Why are these caches not being recreated? Is there some configuration to ensure this happens?
    This is using Coherence 3.4. I haven't tested the 3.5 behavior yet.

    Hi CormacB,
    I guess you are referring to the fact that during a "disconnect" the content of the front tier of the near cache is cleared. This is done to prevent the client from using stale data. After the re-connect, the front map will start "refilling" from the back tier as the application accesses the data, which is no different from what happens when the near cache is first initialized.
    Regards,
    Gene
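    In other words, the front map repopulates lazily on access after the node rejoins; a minimal sketch of that access pattern (the cache name is illustrative and assumed to map to a near-scheme in the cache configuration):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class NearCacheReader {
            public Object read(Object key) {
                // After a disconnect/rejoin the front map is empty, so this get()
                // goes to the back tier and the result is placed into the front map.
                NamedCache cache = CacheFactory.getCache("near-example");
                return cache.get(key);
            }
        }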

  • JNDI caching problems.. Want to lookup newly bound object.

    I want to use JNDI as a dynamic object repository.
    When I first bind an object into JNDI, it works properly.
    But when I bind a second object from the same object hierarchy under the same JNDI name (rebind), the lookup returns the old result associated with the first bound object.
    How can I turn off this caching behavior of JNDI?
    I'm working on WebLogic Server 8.1 with Service Pack 5.

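    For context, the sequence being described looks roughly like this (names are illustrative; this only restates the poster's scenario, not a fix):

        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class RebindScenario {
            public static Object run(Object first, Object second) throws NamingException {
                Context ctx = new InitialContext();
                ctx.bind("repo/current", first);     // initial bind works as expected
                ctx.rebind("repo/current", second);  // replace with an object of a related type
                return ctx.lookup("repo/current");   // poster reports this still reflects 'first'
            }
        }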

  • Toplink Cache issues in Cluster

    Hi
    Our production environment is clustered, and we have been noticing the following problem: when a user tries to save a record, she repeatedly encounters the "TOPLINK-5006" exception that I have included below.
    [TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    [TopLink Error]: 2006.07.19 04:49:23.359--UnitOfWork(115148745)--null--Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
    <Jul 19, 2006 4:49:23 PM PDT> <Error> <EJB> <BEA-010026> <Exception occurred during commit of transaction Name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)],Xid=BEA1-795A6481D2E1938A8EAD(115171166),Status=Rolled back. [Reason=Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]],numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=60,XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=rolledback,assigned=MS15_mjp),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl@6dcf50b),SCInfo[mjp+MS15_mjp]=(state=rolledback),properties=({weblogic.transaction.name=[EJB com.rhii.mjplus.fo.people.beans.PeopleManagerBean.setPeople(java.util.HashMap,java.lang.String,java.lang.String,java.lang.String,java.util.HashSet,com.rhii.mjplus.common.login.data.UserInfoDO)], weblogic.jdbc=t3://10.253.129.56:2323}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+, XAResources={},NonXAResources={})],CoordinatorURL=MS15_mjp+10.253.129.56:2323+mjp+t3+): Local Exception Stack:
    Exception [TOPLINK-5006] (TopLink (WLS CMP) - 10g (9.0.4.2) (Build 040311)): oracle.toplink.exceptions.OptimisticLockException
    Exception Description: The object [com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0@6dce459] cannot be updated because it has changed or been deleted since it was last read.
    Class> com.rhii.mjplus.fo.people.beans.People_z2e2a7__TopLink_CMP_2_0 Primary Key> [1001280937, 0]
         at oracle.toplink.exceptions.OptimisticLockException.objectChangedSinceLastReadWhenUpdating(Ljava/lang/Object;Loracle/toplink/queryframework/ObjectLevelModifyQuery;)Loracle/toplink/exceptions/OptimisticLockException;(OptimisticLockException.java:109)
    What is puzzling is that occurrences of this nature have increased with user load, and the TopLink cache does not seem to be refreshed after it encounters the first optimistic lock exception. We have run several tests, and this is not reproducible in the DEV environment, where we do not have a clustered setup. After making a few updates to a record, users start experiencing the problem... for some, it persists for a really long time.
    We do not have cache synchronization enabled.
    The cluster setup is as follows:
    There are 4 boxes and each box has one admin and 4 managed servers.
    I have included the toplink-cmp-people.xml (this is the particular entity bean we have a problem with). Our application server is WebLogic and we have TopLink version 9.0.4.2.
    <toplink-ejb-jar>
    <session>
    <name>People</name>
    <project-class>
    com.rhii.mjplus.fo.people.beans.PeopleToplink
    </project-class>
    <login>
    <datasource>MJPool</datasource>
    <non-jts-datasource>MJPool</non-jts-datasource>
    </login>
    <use-remote-relationships>true</use-remote-relationships>
    <customization-class>com.rhii.mjplus.common.TopLinkCustomization
    </customization-class>
    </session>
    </toplink-ejb-jar>
    I would appreciate any kind of feedback
    Thanks
    Lakshmi

    Can you refresh that record using a query before you save it?
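    A minimal sketch of refreshing an object before updating it, assuming direct access to a TopLink Session (with WLS CMP the session is managed for you, so treat this only as an illustration of the refresh idea; the class and method names are illustrative):

        import oracle.toplink.sessions.Session;
        import oracle.toplink.sessions.UnitOfWork;

        public class PeopleUpdater {
            public void save(Session session, Object person) {
                // Re-read the latest version from the database so the optimistic
                // lock value in the cache matches the row before we change it.
                Object fresh = session.refreshObject(person);

                UnitOfWork uow = session.acquireUnitOfWork();
                Object working = uow.registerObject(fresh);
                // ... apply the user's changes to 'working' here ...
                uow.commit();  // TOPLINK-5006 indicates the row changed since last read
            }
        }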

  • Does JNDI cache physical IP address on ldap lookups?  And How to stop it?

    We are using Oracle's 11.1.0.6 thin JDBC driver with the 'ldap lookup' syntax for database connect string lookups. The LDAP server we specify is a DNS alias pointing to a hardware server load balancer, which returns different IP addresses based on load and current availability. During maintenance, we will remove a server from the load-balancing rotation, but applications will continue to try connecting to this physical IP address even though DNS is no longer serving it up as a valid address. This causes continuous application failures until either the server is brought back up or the application is bounced. Only the application needs to be bounced, not WebSphere or the physical server, so the caching is not being done at the server level.
    We've used the tracing capabilities in the 11g JDBC driver to confirm that Oracle is passing the DNS alias name, not an IP address, to the JNDI interface, so it appears the caching is occurring at the JNDI level. Unfortunately, tracing is not detailed enough to show exactly which JNDI calls are being made.
    Are there any JNDI attributes that can be set to stop JNDI from caching this IP address and force it to re-evaluate the DNS lookup on each invocation?
    We already know Oracle's JDBC thin driver supports specifying secondary failover LDAP lookup strings, but that IP address also gets cached, so while this will reduce the frequency of errors, it won't eliminate the problem, especially if a server is removed from the load-balancing rotation permanently.
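    One thing worth ruling out (an assumption, not something established in this thread) is the JVM-level InetAddress cache, which can pin a resolved address independently of JNDI. A sketch of tuning it (these security properties must be set early, before the first name resolution):

        import java.security.Security;

        public class DnsCacheTuning {
            public static void main(String[] args) {
                // Cache successful lookups for 30 seconds instead of the default,
                // so a host pulled out of the load-balancer rotation is re-resolved.
                Security.setProperty("networkaddress.cache.ttl", "30");
                // Optionally shorten caching of failed lookups as well.
                Security.setProperty("networkaddress.cache.negative.ttl", "5");
            }
        }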


  • Java cache library at cluster/server level

    Hello Experts,
    I need some data to be available to all my users (the data is more or less constant, so a cache is needed for better performance).
    Does the SAP J2EE engine have a standard cache library?
    If yes, where can I find documentation?
    If not, can someone recommend an external library for caching?
    Regards,
    Omri

    Perhaps the link below will help you; however, I would recommend that you build a simple native Java in-memory cache (see "How to Create a Simple In Memory Cache in Java (Lightweight Cache)" on Crunchify) in an application (EJB or other) and have that application run whenever the Web AS is started.
    That way it would be possible to access this EJB via JNDI and get the preloaded data.
    Java Persistence in SAP Web Application Server
    Best regards,
    Angelo
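    A minimal sketch of the kind of in-memory cache Angelo suggests (class and method names are illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public final class SimpleCache {
            private static final Map<String, Object> CACHE = new ConcurrentHashMap<String, Object>();

            private SimpleCache() { }

            public static void put(String key, Object value) {
                CACHE.put(key, value);
            }

            public static Object get(String key) {
                return CACHE.get(key);
            }

            public static void clear() {
                CACHE.clear();  // e.g. when the mostly-constant data is reloaded
            }
        }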

  • Using P13N Caching Service in Cluster (J2EE Apps)-Not portals

    Problem Statement: We are using the BEA WebLogic 8.1 app server for developing a J2EE application in a clustered environment. We are running four managed servers and one admin server (two managed servers for each hardware Dell 2250). For application performance we cache some of the data (database lookup tables and some dynamic data in application scope). The problem exists with the dynamic data: when we clear the cache for dynamic data, it is refreshed only in one instance of the managed server. To resolve this problem we implemented the servlet 2.3 feature (HttpAttributeListener), but quickly realized it won't solve the problem. I found an interesting paper on BEA dev2dev; it describes a solution in the BEA Portal world:
    http://dev2dev.bea.com/products/wlportal/articles/jiang.jsp
    Since we are not using the portal, is there any workaround?
    Thanks,
    Chandra


  • JNDI in a cluster

    Hi,
    I've got a situation where I am deploying two different EARs into two different managed servers in the same cluster. One EAR is trying to look up an EJB from the other EAR, but for some reason the stateless bean is not getting published into the JNDI tree of the cluster and is only visible inside the managed server that the EAR is deployed to. My question is: isn't the JNDI tree supposed to be global to all the servers inside a cluster, so that registering an EJB in one will make it available to the rest of the cluster?

    When you deploy the EAR file to only one of the managed servers that is part of the cluster, the JNDI name can be seen only in that managed server's JNDI tree; it cannot be seen in the cluster JNDI tree or in the other managed servers' JNDI trees.
    - Tarun
