Cluster and JNDI?

Hi All:
I have a problem with clustering.
I have a class called OrderMonitor that functions as a central
controller for the order entity beans. The OrderMonitor calls all the
order beans under certain conditions, and the order beans also call the
OrderMonitor under certain conditions.
Currently the order beans look up the OrderMonitor through the local
cluster's JNDI tree, and the OrderMonitor binds itself to the local
cluster's JNDI tree.
How can I make the beans in one cluster call the OrderMonitor in
another cluster? And how can I make the OrderMonitor call all the beans
in all the clusters? Should the beans look up the OrderMonitor not in
the local cluster, but on the dispatcher machine?
Will WebLogic's dispatcher do this automatically?
Thanks a lot!
wang minjiang

Just read the WebLogic manual. It seems there is no particular dispatcher
machine in WebLogic clusters (different from SilverStream).
So the solution might be:
implement OrderMonitor as an RMI object and register its stub in the local
server's JNDI tree, which is then replicated to the other servers as they
join the cluster.
The beans look up the OrderMonitor through the replicated JNDI tree and pass
in their own remote objects, rather than the beans themselves, so that the
monitor can call back.
Any comments?
wang minjiang
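For what it's worth, here is a rough sketch of that idea in plain JNDI/RMI terms. The OrderMonitorRemote and OrderCallback interfaces are made up for illustration, and whether the stub actually replicates cluster-wide depends on how it is generated and on the WebLogic release, so treat this as a sketch rather than a recipe:

import java.rmi.Remote;
import java.rmi.RemoteException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical interfaces, defined only so the sketch hangs together.
interface OrderCallback extends Remote {
    void orderChanged(String orderId) throws RemoteException;
}

interface OrderMonitorRemote extends Remote {
    void register(OrderCallback callback) throws RemoteException;
}

public class OrderMonitorJndiSketch {

    // On the server that hosts the monitor: bind the RMI object into JNDI.
    // If its stub is generated as clusterable, the binding is replicated to
    // the other servers as they join the cluster.
    public static void bindMonitor(Remote orderMonitor) throws NamingException {
        Context ctx = new InitialContext();
        ctx.rebind("ejb/OrderMonitor", orderMonitor);
    }

    // In an order bean: look the monitor up through the replicated tree and
    // hand it a remote callback reference, not the bean instance itself.
    public static void registerWithMonitor(OrderCallback myCallbackStub)
            throws NamingException, RemoteException {
        Context ctx = new InitialContext();
        OrderMonitorRemote monitor =
                (OrderMonitorRemote) ctx.lookup("ejb/OrderMonitor");
        monitor.register(myCallbackStub);
    }
}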

Similar Messages

  • Rmi and jndi

    I am trying to create a distributed Java application using RMI. After I
    create my class that implements a remote interface and use rmic to
    create the clusterable stub, what do I bind into a cluster-wide JNDI
    tree, and how do I do this explicitly? A Java example would be great.
    Please reply to [email protected]
              

    ooh anyone....
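    Not the original poster's code, but a rough Java sketch of the bind/lookup. The TimeService names are invented, and the clustering behaviour of the stub depends on how it was generated (rmic vs. weblogic.rmic) and on server configuration, so treat this as illustrative only:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    // Illustrative remote service; generate the clusterable stub for the
    // implementation class first (e.g. with weblogic.rmic).
    interface TimeService extends Remote {
        long currentTime() throws RemoteException;
    }

    class TimeServiceImpl implements TimeService {
        public long currentTime() { return System.currentTimeMillis(); }
    }

    public class BindIntoJndi {
        public static void main(String[] args) throws Exception {
            // Bind the implementation object; what clients get back from
            // lookup() is the generated (clusterable) stub.
            Context ctx = new InitialContext();
            ctx.rebind("services/TimeService", new TimeServiceImpl());

            // From anywhere with access to the cluster-wide tree:
            TimeService ts =
                    (TimeService) new InitialContext().lookup("services/TimeService");
            System.out.println(ts.currentTime());
        }
    }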

  • Cluster-wide JNDI replication

    I have a question about cluster-wide JNDI replication. I am on WebLogic
    7.0 SP2, Solaris 8. I have 2 WebLogic servers in a cluster and two
    asynchronous JVMs that connect to this cluster. The two WebLogic servers
    would be w1 and w2 and the JVMs would be j1 and j2.
    w1 creates a stateful EJB. The handle to this EJB is put on the JNDI
    tree. A JMS message is created which contains the key to the handle
    object. The JMS message is sent and picked up by one of the JVMs, say
    j1. j1 tries to do a lookup of the handle using the key in the JMS
    message and it is not able to find the handle object.
    The reason is that j1 is trying to connect to w2 to find the handle.
    The change to the JNDI tree has not been propagated to w2 yet, hence the
    error. Any ideas on why it would take so long to communicate the JNDI
    tree? My JNDI tree has hardly anything in it. There are a couple of JMS
    connection factories, 3 EJBs and a couple of JMS queues.
    I would appreciate any kind of help. I am going to migrate to storing the
    EJB handles in another persistent store instead of the JNDI tree, but
    until then any insight into this problem would be helpful.
    Please don't advise solutions like using Thread.sleep and trying again.
    Thanks,
    Shiva.
              

    Shiva,
              That's an interesting problem. The handle should have enough information
              to locate the EJB. Are you explicitly trying to connect to W2 from J1 or
              something? That part didn't make sense to me.
              (While it's a different approach, if you need to share data real-time in a
              cluster, use our Coherence Java clustered cache software.)
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com/coherence.jsp
              Tangosol Coherence: Clustered Replicated Cache for Weblogic
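    One way to sidestep the replication delay entirely (a sketch only; the queue wiring and the MyStatefulRemote interface are assumed, not taken from Shiva's code) is to ship the serialized handle inside the JMS message itself, since the handle carries what is needed to re-find the bean:

    import javax.ejb.Handle;
    import javax.jms.ObjectMessage;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.rmi.PortableRemoteObject;

    // Assumed remote interface for the stateful bean (not from the original post).
    interface MyStatefulRemote extends javax.ejb.EJBObject { }

    public class HandleOverJms {

        // Sender side (w1): put the EJB handle straight into the JMS message
        // instead of binding it into the cluster-wide JNDI tree.
        public static void send(QueueSession session, QueueSender sender,
                                MyStatefulRemote bean) throws Exception {
            Handle handle = bean.getHandle();          // handles are serializable
            ObjectMessage msg = session.createObjectMessage();
            msg.setObject(handle);
            sender.send(msg);
        }

        // Receiver side (j1): read the handle back and re-obtain the bean.
        public static MyStatefulRemote receive(ObjectMessage msg) throws Exception {
            Handle handle = (Handle) msg.getObject();
            return (MyStatefulRemote) PortableRemoteObject.narrow(
                    handle.getEJBObject(), MyStatefulRemote.class);
        }
    }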
              "Shiva P" <[email protected]> wrote in message
              news:[email protected]...
              >
              

  • Some errors in the cluster alert log, but the cluster and database work normally

    hi everybody,
    I am using RAC + ASM. RAC has two nodes and ASM has 3 disk groups (+DATA, +CRS, +FRA). The database is 11gR2, and ASM uses ASMLib, not raw devices.
    When I start the cluster, or it auto-starts after the OS reboots, the cluster alert log shows errors such as:
    ERROR: failed to establish dependency between database rac and diskgroup resource ora.DATA.dg
    [ohasd(7964)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    I do not know what causes these errors, but the cluster and database start and run normally.
    I do not know whether these errors will affect the service.
    thanks everybody!

    Does anyone have the same question?

  • Problem with OpenLDAP and JNDI

    I'm having problems working with OpenLDAP and JNDI.
    First I changed LDAP's slapd.conf file:
    suffix          "dc=antipodes,dc=com"
    rootdn          cn=Manager,dc=antipodes,dc=com
    directory     "C:/Program Files/OpenLDAP/data"
    rootpw          secret
    schemacheck     off
    Then I used the code below to create the root context:
    package test;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.naming.NameAlreadyBoundException;
    import javax.naming.directory.*;
    import java.util.*;
    public class MakeRoot {
         final static String ldapServerName = "localhost";
         final static String rootdn = "cn=Manager,dc=antipodes,dc=com";
         final static String rootpass = "secret";
         final static String rootContext = "dc=antipodes,dc=com";
         public static void main( String[] args ) {
                   // set up environment to access the server
                   Properties env = new Properties();
                   env.put( Context.INITIAL_CONTEXT_FACTORY,
                              "com.sun.jndi.ldap.LdapCtxFactory" );
                   env.put( Context.PROVIDER_URL, "ldap://" + ldapServerName + "/" );
                   env.put( Context.SECURITY_PRINCIPAL, rootdn );
                   env.put( Context.SECURITY_CREDENTIALS, rootpass );
                   try {
                             // obtain initial directory context using the environment
                             DirContext ctx = new InitialDirContext( env );
                             // now, create the root context, which is just a subcontext
                             // of this initial directory context.
                             ctx.createSubcontext( rootContext );
                   } catch ( NameAlreadyBoundException nabe ) {
                             System.err.println( rootContext + " has already been bound!" );
                   } catch ( Exception e ) {
                             System.err.println( e );
                         }
          }
     }
     This worked fine; I could see that by using "LDAP Browser/Editor".
     Then I tried to create a group with the code below:
    package test;
    import java.util.Hashtable;
    import javax.naming.*;
    import javax.naming.ldap.*;
    import javax.naming.directory.*;
     public class MakeGroup {
          public static void main (String[] args) {
              Hashtable env = new Hashtable();
              String adminName = "cn=Manager,dc=antipodes,dc=com";
              String adminPassword = "secret";
              String ldapURL = "ldap://127.0.0.1:389";
              String groupName = "CN=Evolution,OU=Research,DC=antipodes,DC=com";
              env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
              //set security credentials, note using simple cleartext authentication
              env.put(Context.SECURITY_AUTHENTICATION,"simple");
              env.put(Context.SECURITY_PRINCIPAL,adminName);
              env.put(Context.SECURITY_CREDENTIALS,adminPassword);
              //connect to my domain controller
              env.put(Context.PROVIDER_URL,ldapURL);
              try {
                   // Create the initial directory context
                   LdapContext ctx = new InitialLdapContext(env,null);
                   // Create attributes to be associated with the new group
                        Attributes attrs = new BasicAttributes(true);
                   attrs.put("objectClass","group");
                   attrs.put("samAccountName","Evolution");
                   attrs.put("cn","Evolution");
                   attrs.put("description","Evolutionary Theorists");
                   //group types from IAds.h
                   int ADS_GROUP_TYPE_GLOBAL_GROUP = 0x0002;
                   int ADS_GROUP_TYPE_DOMAIN_LOCAL_GROUP = 0x0004;
                   int ADS_GROUP_TYPE_LOCAL_GROUP = 0x0004;
                   int ADS_GROUP_TYPE_UNIVERSAL_GROUP = 0x0008;
                   int ADS_GROUP_TYPE_SECURITY_ENABLED = 0x80000000;
                   attrs.put("groupType",Integer.toString(ADS_GROUP_TYPE_UNIVERSAL_GROUP + ADS_GROUP_TYPE_SECURITY_ENABLED));
                   // Create the context
                   Context result = ctx.createSubcontext(groupName, attrs);
                   System.out.println("Created group: " + groupName);
                    ctx.close();
               } catch (NamingException e) {
                    System.err.println("Problem creating group: " + e);
               }
          }
     }
     I got the error: Problem creating group: javax.naming.directory.InvalidAttributeIdentifierException: [LDAP: error code 17 - groupType: attribute type undefined]; remaining name 'CN=Evolution,OU=Research,DC=antipodes,DC=com'
     I tried creating the organizational unit "ou=Research" from "LDAP Browser/Editor" and then running the same code -> same error.
    also I have tried code for adding users:
    package test;
    import java.util.Hashtable;
    import javax.naming.ldap.*;
    import javax.naming.directory.*;
    import javax.naming.*;
    import javax.net.ssl.*;
    import java.io.*;
     public class MakeUser {
          public static void main (String[] args) {
              Hashtable env = new Hashtable();
              String adminName = "cn=Manager,dc=antipodes,dc=com";
              String adminPassword = "secret";
              String userName = "cn=Albert Einstein,ou=Research,dc=antipodes,dc=com";
              String groupName = "cn=All Research,ou=Research,dc=antipodes,dc=com";
              env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
              //set security credentials, note using simple cleartext authentication
              env.put(Context.SECURITY_AUTHENTICATION,"simple");
              env.put(Context.SECURITY_PRINCIPAL,adminName);
              env.put(Context.SECURITY_CREDENTIALS,adminPassword);
              //connect to my domain controller
              env.put(Context.PROVIDER_URL, "ldap://127.0.0.1:389");
              try {
                   // Create the initial directory context
                   LdapContext ctx = new InitialLdapContext(env,null);
                   // Create attributes to be associated with the new user
                        Attributes attrs = new BasicAttributes(true);
                   //These are the mandatory attributes for a user object
                   //Note that Win2K3 will automagically create a random
                   //samAccountName if it is not present. (Win2K does not)
                   attrs.put("objectClass","user");
                        attrs.put("samAccountName","AlbertE");
                   attrs.put("cn","Albert Einstein");
                   //These are some optional (but useful) attributes
                   attrs.put("giveName","Albert");
                   attrs.put("sn","Einstein");
                   attrs.put("displayName","Albert Einstein");
                   attrs.put("description","Research Scientist");
                        attrs.put("userPrincipalName","[email protected]");
                        attrs.put("mail","[email protected]");
                   attrs.put("telephoneNumber","999 123 4567");
                   //some useful constants from lmaccess.h
                   int UF_ACCOUNTDISABLE = 0x0002;
                   int UF_PASSWD_NOTREQD = 0x0020;
                   int UF_PASSWD_CANT_CHANGE = 0x0040;
                   int UF_NORMAL_ACCOUNT = 0x0200;
                   int UF_DONT_EXPIRE_PASSWD = 0x10000;
                   int UF_PASSWORD_EXPIRED = 0x800000;
                   //Note that you need to create the user object before you can
                   //set the password. Therefore as the user is created with no
                   //password, user AccountControl must be set to the following
                   //otherwise the Win2K3 password filter will return error 53
                   //unwilling to perform.
                        attrs.put("userAccountControl",Integer.toString(UF_NORMAL_ACCOUNT + UF_PASSWD_NOTREQD + UF_PASSWORD_EXPIRED+ UF_ACCOUNTDISABLE));
                   // Create the context
                   Context result = ctx.createSubcontext(userName, attrs);
                   System.out.println("Created disabled account for: " + userName);
                   //now that we've created the user object, we can set the
                   //password and change the userAccountControl
                   //and because password can only be set using SSL/TLS
                   //lets use StartTLS
                   StartTlsResponse tls = (StartTlsResponse)ctx.extendedOperation(new StartTlsRequest());
                   tls.negotiate();
                    //setting the password is an LDAP modify operation
                    //and we'll update the userAccountControl,
                    //enabling the account and forcing the user to update their password
                   //the first time they login
                   ModificationItem[] mods = new ModificationItem[2];
                    //Replace the "unicodePwd" attribute with a new value
                   //Password must be both Unicode and a quoted string
                   String newQuotedPassword = "\"Password2000\"";
                   byte[] newUnicodePassword = newQuotedPassword.getBytes("UTF-16LE");
                   mods[0] = new ModificationItem(DirContext.REPLACE_ATTRIBUTE, new BasicAttribute("unicodePwd", newUnicodePassword));
                   mods[1] = new ModificationItem(DirContext.REPLACE_ATTRIBUTE, new BasicAttribute("userAccountControl",Integer.toString(UF_NORMAL_ACCOUNT + UF_PASSWORD_EXPIRED)));
                   // Perform the update
                   ctx.modifyAttributes(userName, mods);
                   System.out.println("Set password & updated userccountControl");
                   //now add the user to a group.
                        try     {
                             ModificationItem member[] = new ModificationItem[1];
                             member[0]= new ModificationItem(DirContext.ADD_ATTRIBUTE, new BasicAttribute("member", userName));
                             ctx.modifyAttributes(groupName,member);
                             System.out.println("Added user to group: " + groupName);
                         } catch (NamingException e) {
                               System.err.println("Problem adding user to group: " + e);
                         }
                   //Could have put tls.close()  prior to the group modification
                   //but it seems to screw up the connection  or context ?
                   tls.close();
                   ctx.close();
                   System.out.println("Successfully created User: " + userName);
               } catch (NamingException e) {
                    System.err.println("Problem creating object: " + e);
               } catch (IOException e) {
                    System.err.println("Problem creating object: " + e);
               }
          }
     }
     Same error.
     I haven't made any changes to any schema manually.
     I know I'm missing something crucial but have no idea what. I have tried a lot of other code from tutorials on the net, but it is all very similar and throws the same error I showed above.
    thanks in advance for help.

    I've solved this.
     The problem was that all the code samples were using Microsoft Active Directory classes, which are not supported in OpenLDAP (microsoft.schema in OpenLDAP is just for info). Because of this, some fields are not the same in the equivalent classes ("user" and "person").
     So partial code for creating a user in the root context would be:
    import java.util.Hashtable;
    import javax.naming.ldap.*;
    import javax.naming.directory.*;
    import javax.naming.*;
    import javax.net.ssl.*;
    import java.io.*;
     public class MakeUser {
          public static void main (String[] args) {
              Hashtable env = new Hashtable();
              String adminName = "cn=Manager,dc=antipodes,dc=com";
              String adminPassword = "secret";
              String userName = "cn=Albert Einstein,ou=newgroup,dc=antipodes,dc=com";
              env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
              //set security credentials, note using simple cleartext authentication
              env.put(Context.SECURITY_AUTHENTICATION,"simple");
              env.put(Context.SECURITY_PRINCIPAL,adminName);
              env.put(Context.SECURITY_CREDENTIALS,adminPassword);
              //connect to my domain controller
              env.put(Context.PROVIDER_URL, "ldap://127.0.0.1:389");
              try {
                   // Create the initial directory context
                   LdapContext ctx = new InitialLdapContext(env,null);
                   // Create attributes to be associated with the new user
                        Attributes attrs = new BasicAttributes(true);
                                  attrs.put("objectClass","user");
                   attrs.put("cn","Albert Einstein");
                   attrs.put("userPassword","Nale");
                   attrs.put("sn","Einstein");
                   attrs.put("description","Research Scientist");
                   attrs.put("telephoneNumber","999 123 4567");
                   // Create the context
                   Context result = ctx.createSubcontext(userName, attrs);
                   System.out.println("Successfully created User: " + userName);
               } catch (NamingException e) {
                    System.err.println("Problem creating object: " + e);
               }
          }
     }
     Hope this will help anyone.

  • Problem with creating Connection pool and JNDI, driver is not detected

    Hi,
    I have an issue with creating Connection Pool and JNDI.
    I'm using:
    - JDK 1.6
    - OS: Linux(ubuntu 8.10)
    - Netbeans IDE 6.5.1
    - Java EE 5.0
    - Apache Tomcat 6.0.18 Its lib directory contains all necessary jar files for Oracle database driver
    - Oracle 11g Enterprise
     My problem is that the Oracle database driver is not detected when I want to create a pool (it works fine and is detected without any problem when I create an ordinary connection via DriverManager).
    Therefore after running:
    InitialContext ic = new InitialContext();
    Context context = (Context)ic.lookup("java:comp/env");
    DataSource dataSource = (DataSource)context.lookup("jdbc/oracle11g");
     Connection connection = dataSource.getConnection();
     Right after dataSource.getConnection() I have the following exception:
    org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot load JDBC driver class 'oracle.jdbc.OracleDriver'
    at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1136)
    at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
    at servlets.Servlet1.doPost(Servlet1.java:47)
    at servlets.Servlet1.doGet(Servlet1.java:29)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: oracle.jdbc.OracleDriver
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at sun.misc.Launcher$ExtClassLoader.findClass(Launcher.java:229)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:169)
    at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1130)
    ... 17 more
    My application context file (context.xml) is:
    <?xml version="1.0" encoding="UTF-8"?>
    <Context path="/WebApplication3">
      <Resource auth="Container"
                      driverClassName="oracle.jdbc.OracleDriver"
                      maxActive="8"
                      maxIdle="4"
                      name="jdbc/oracle11g"
                      username="scott"
                      password="tiger"
                      type="javax.sql.DataSource"
                      url="jdbc:oracle:thin:@localhost:1521:database01" />
     </Context>
     and my web.xml is:
        <resource-ref>
            <description>Oracle Datasource example</description>
            <res-ref-name>jdbc/oracle11g</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>
    ...I found similar threads in different forums including sun, such as
    http://forums.sun.com/thread.jspa?threadID=567630&start=0&tstart=0
    http://forums.sun.com/thread.jspa?threadID=639243&tstart=0
    http://forums.sun.com/thread.jspa?threadID=5312178&tstart=0
    , but no solution.
     As many suggest, I also tried putting the context directly in server.xml (instead of in my application context) and referencing it with <ResourceLink /> inside my application context, but it didn't work and instead gave me the following message:
     org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '   ' for connect URL 'null'
     Has anyone succeeded in creating a connection pool with JNDI using Tomcat 6 or higher? If yes, could you kindly explain the method you applied?
    Regards,

    Hello again,
     Finally I managed to run my application with Tomcat 6.0.18 as well. There were only two lines that had to be modified
    in the context.xml file (the context of my application project and not server's)
    Instead of writing
    <Context antiJARLocking="true" path="/WebApplication2">
        type="javax.sql.DataSource"
        factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
     </Context>
     we had to write:
    <Context antiJARLocking="true" path="/WebApplication2">
        type="oracle.jdbc.pool.OracleDataSource"
        factory="oracle.jdbc.pool.OracleDataSourceFactory"
     </Context>
     - No modification needed to be done at the server level (neither server.xml nor the server's context.xml)
     - I just added ojdbc6.jar in $CATALINA_HOME/lib (I didn't even need to add it in WEB-INF/lib of my project)
     - The servlet used to do the test was the same as the one I presented in my previous post.
    For those who have encountered my problem and are interested in the format of the web.xml and context.xml
    with Tomcat 6.0, you can find them below:
    Oracle server: Oracle 11g Enterprise
    Tomcat server version: 6.0.18
    Oracle driver: ojdbc.jar
    IDE: Netbeans 6.5.1
    The context.xml file of the web application
    <?xml version="1.0" encoding="UTF-8"?>
    <Context antiJARLocking="true" path="/WebApplication2">
        <Resource name="jdbc/oracle11g"
                  type="oracle.jdbc.pool.OracleDataSource"
                  factory="oracle.jdbc.pool.OracleDataSourceFactory"
                  url="jdbc:oracle:thin:@localhost:1521:database01"
                  driverClassName="oracle.jdbc.OracleDriver"
                  userName="scott"
                  password="tiger"
                  auth="Container"
                  maxActive="100"
                  maxIdle="30"
                  maxWait="10000"
                  logAbandoned="true"
                  removeAbandoned="true"
                  removeAbandonedTimeout="60" />
     </Context>
     The web.xml of my web application:
    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
        <resource-ref>
            <description>Oracle Database 11g DataSource</description>
            <res-type>oracle.jdbc.pool.OracleDataSource</res-type>
            <res-auth>Container</res-auth>
            <res-ref-name>jdbc/oracle11g</res-ref-name>
        </resource-ref>
        <servlet>
            <servlet-name>Servlet1</servlet-name>
            <servlet-class>servlets.Servlet1</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>Servlet1</servlet-name>
            <url-pattern>/Servlet1</url-pattern>
        </servlet-mapping>
        <session-config>
            <session-timeout>
                30
            </session-timeout>
        </session-config>
        <welcome-file-list>
            <welcome-file>index.jsp</welcome-file>
        </welcome-file-list>
     </web-app>
     OK, now I'm happy as the original problem is completely solved.
    Regards
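    For reference, the test servlet itself is not shown in this excerpt; a minimal lookup against the jdbc/oracle11g resource above could look roughly like this (an untested sketch, with the class name taken from the stack trace earlier in the thread):

    package servlets;

    import java.io.IOException;
    import java.sql.Connection;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class Servlet1 extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                // Look up the pooled DataSource declared in context.xml / web.xml.
                Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
                DataSource ds = (DataSource) envCtx.lookup("jdbc/oracle11g");
                Connection conn = ds.getConnection();
                try {
                    resp.getWriter().println("Got connection: " + conn);
                } finally {
                    conn.close();   // return the connection to the pool
                }
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }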

  • When setting up converged network in VMM, cluster and live migration virtual NICs not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail. I am very surprised at the expertise of the MS engineers; they had no idea what a converged network even was. I had way more experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
    I have servers with 5 NICs and want to use 3 of them for a team, and then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks and a port profile with the uplink defined as the team and the networks selected, then created a logical switch and associated the port profile. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • After bringing the cluster and ASM up, I'm not able to start the RDBMS instance

    I am not able to bring the database up.
    Cluster and ASM is up and running.
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/oracle/app/oracle/product/11.1.0/db_1/dbs/initTMISC11G.ora'

    That is where the confusion is: I could not find it under $ORACLE_HOME/dbs.
    However, the log file indicates that it is using an spfile located at:
    spfile = +DATA_TIER/dintw10g/parameterfile/spfile_dintw10g.ora
    Now it seems to check only the /dbs directory, I guess.
    Could you please tell me how I can bring it up now?
    Just for information: the ASM instance is up.

  • Using ASM in a cluster, and new snapshot feature

    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using listener, although the instance name is part of the database file path. How does this work please?
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    As of 11g release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own disk group, and I will still need to use "alter database begin backup", correct?
    Thanks!

    Markus Waldorf wrote:
    > From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    Well, you are missing one point: it entirely depends on the architecture you are going to use. If you use ASM for a single node, it is available right there. If installed for a RAC system, an ASM instance runs on each node of the cluster and manages the storage lying on the shared storage. The ASMB process is responsible for exchanging messages, taking responses and pushing the information back to the RDBMS instance.
    > How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using the listener, although the instance name is part of the database file path. How does this work please?
    The listener is not needed, Markus, as its job is to create server processes, which is NOT the job of the ASM instance. The ASM instance connects the client database to itself immediately when the first request comes from that database to do any operation over the disk group. As I mentioned above, ASMB then carries the request/response traffic back and forth.
    > How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    Each node has its own ASM instance running locally. In the case of RAC, the ASM SID would be +ASMn where n is 1, 2, ..., up to the number of nodes that are part of the cluster.
    > As of 11g release 2, ASM provides a snapshot feature. I assume this can be used for the purpose of backup, but then each database should use its own diskgroup, and I will still need to use alter database begin backup, correct?
    You are probably talking about the ACFS snapshot feature of 11.2 ASM. This is not for taking a backup of the disk group; it is more like an OS-level backup of a mount point created over ASM's ACFS. Oracle provided this feature so that you can take a backup of an Oracle home running on an ACFS mount point, and in the case of an OS-level failure, for example if someone deletes a folder from that mount point, you can get it back from the ACFS snapshot. For the disk group, the only backup available is the metadata backup (and restore), but that does NOT get the database's data back for you. For database-level backups, you would still need to use RMAN.
    HTH
    Aman....

  • Question about the Cluster and Column width in Graphs in Illustrator CS2

    How do they relate to each other? How do the Percentages work together? Is there any information about how you can mathematically figure how much area each Cluster width takes. If you make a graph and make both the cluster and column widths 100%, the entire area is filled. If you make the Column width 50% the bar will sit exactly in the center of the Cluster width, but if you make the cluster width 75%, the bars move closer together.

    Gregg,
    The set of bars that represent each row of data in the graph spreadsheet is called a "cluster".
    Let's let
       W = the width of the whole area inside the graph axes
       R = the number of rows of data
       C = the number of columns of data
    Then the maximum potential space to use for each row of data is W/R. The cluster width controls how much of this potential space is assigned to each row. This cluster space is then divided by C, giving the maximum potential width of each bar. The maximum potential width for each bar is then multiplied by the column width percentage to get the actual width of each bar.
    So the actual width of each bar is (((W/R) * Cluster width) / C) * Column width.
    The graphs illustrated below have three rows of data, with two columns each. The solid colored areas in the background have the same cluster width as the main graph, but they all have a column width of 100%. This lets you see more easily how the gradient bars that have a column width of 80% are using up 80% of their potential width.
    Notice that as the Cluster percentage gets lower, the group of bars that represent a row of data get farther apart, so that you can more easily see the "clumping" of rows. When the Cluster percentage is 100%, the columns within each row are no closer to each other than they are to the columns in other rows.
    As the Column percentage gets lower, the bars for each data value occupy less of the space within their row's cluster, so that spaces appear between every bar, even in the same row.
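    A quick worked example (numbers invented for illustration): with a graph area W = 300 pt wide, R = 3 rows and C = 2 columns, a cluster width of 75% and a column width of 80%, each row gets 300/3 = 100 pt of potential space, the cluster claims 75% of that = 75 pt, each column's potential width is 75/2 = 37.5 pt, and the actual bar width is 37.5 * 0.80 = 30 pt.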

  • ECC 6.0 installation on HPUX Cluster and oracle database

    Hi,
    I need to install ECC 6.0 on an HP-UX cluster with an Oracle database. Can anybody please advise me or send me a document?
    Thanks,
    venkat.

    Hi Venkat,
    Please download installation guide from below link:-
    https://websmp105.sap-ag.de/instguides
    Regards,
    Anil

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

    Has anyone noticed this? The Solaris 8 and 9 patch cluster readmes show that Solaris 9 patches have been added to the Solaris 8 cluster and Solaris 8 patches have been added to the Solaris 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zipfile became so large that it was subsequently supplanted by 117191-xx, which in turn was supplanted by 118558-xx when its zipfile became very large.
    Consequently you will never see any newer version than 112233-12 of that particular patch.
    What does uname -a show you for that system?
    Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
    If you have login privileges to Sunsolve, find Infodoc 76028.

  • Cluster and Backup server

    Hello guys;
    I want to use a clustered Oracle server, and another server in another location as a backup server that copies the database online from the cluster server.
    If something goes wrong with the cluster server, I want to make the backup server the primary server.
    What steps should I take? If there is any document to read, I'll be thankful.
    thanks in advance

    Hi,
    What do you mean by "cluster"? If you want HA (High Availability) with an Active/Passive cluster, you can go with a 3rd-party cluster vendor, like HACMP from IBM, ServiceGuard from HP and so on. If you want to build this with Oracle technologies, then you are talking about Oracle RAC One Node. Of course, you would go with Oracle RAC (Real Application Clusters) if you want an Active/Active cluster. That covers clustering and High Availability.
    On another point, if you want to create a DR (Disaster Recovery) copy of your primary database, then you should consider implementing Oracle Data Guard.
    Oracle has rich documentation of blueprints and best practices; my sincere advice is to get familiar with it:
    http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
    Regards,
    Sve

  • Cluster and space consumption

    I'm confused about something with regard to clusters. I have about 1GB worth of data when it is not in a cluster, just in a normal table. If I create a cluster and insert the data into this cluster it consumes more space, and that was fully expected. When I tried it the first few times it took 13GB of space. The tablespace was 30GB and contained a few other tables. Then I cleaned out the tablespace so only 2% of the space was used, added another 30GB to the tablespace and ran the test once more. Now it is consuming more than 40GB and has still not completed. So my question is: does the size the cluster consumes depend on the size and available space of the tablespace it is in? Is there a way to limit the size it is allowed to use, other than putting it in a separate tablespace?

    Hi Marius,
    First, by "cluster" you mean sorted cluster tables, right?
    http://www.dba-oracle.com/t_sorted_hash_clusters.htm
    > So my question is does the size the cluster consume depend on the size and available space of the tablespace it is in?
    These are "hash" clusters, and you govern the range of hash cluster keys, and hence, the range where Oracle will store the rows.
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10739/hash.htm
    Oracle Database uses a hash function to generate a distribution of numeric values, called hash values, that are based on specific cluster key values. The key of a hash cluster, like the key of an index cluster, can be a single column or composite key (multiple column key). To find or store a row in a hash cluster, the database applies the hash function to the cluster key value of the row. The resulting hash value corresponds to a data block in the cluster, which the database then reads or writes on behalf of the issued statement.
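    As a loose analogy only (plain Java, not Oracle internals): a hash cluster behaves like a table of pre-allocated buckets where the key alone decides which bucket a row lands in, which is one reason space is claimed up front regardless of how many rows have actually been inserted:

    import java.util.ArrayList;
    import java.util.List;

    public class HashClusterAnalogy {
        private final List<List<String>> blocks = new ArrayList<List<String>>();

        // hashKeys plays the role of the HASHKEYS setting: storage for every
        // bucket is reserved at creation time, whether it is used or not.
        public HashClusterAnalogy(int hashKeys) {
            for (int i = 0; i < hashKeys; i++) {
                blocks.add(new ArrayList<String>());
            }
        }

        // The cluster key is hashed to pick the bucket ("data block") where
        // the row is stored and later looked up.
        public void store(int clusterKey, String row) {
            int bucket = Math.floorMod(clusterKey, blocks.size());
            blocks.get(bucket).add(row);
        }
    }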

  • Cluster and Read Write XML

    In my applications I allow users to save their settings. I used to do this in an INI file, so I wrote a VI that could write any cluster, and it worked OK. When I discovered that in the newer versions of LabVIEW you can read/write from/to XML, I changed immediately because it has some advantages for me, but I am having some trouble.
    Every time I want to save a cluster I have to use
    Flatten To XML -> Escape XML -> Write XML
    and to load
    Load From XML -> Unescape XML -> Unflatten from XML.
    I also do other important things each time I save or load settings, so it seems reasonable to put all this in just two subVIs (one for loading, one for saving). The problem is that I want to use them with any cluster.
    What I want with can be summarized as following:
    - SaveSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs
    ---Error Out
    -LoadSettings.vi
    Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    Outputs
    ---DataFromXML
    ---Error Out
    I have tried using variants or references, and I was not able to make a generic subVI (a subVI that doesn't know what is in the cluster). I don't want to use a polymorphic VI because it would require making one load/save pair for each type of settings.
    Thanks,
    Hernan

    If I am correct, you still need to wire the data type to Variant To Data. How can you put in a subVI ALL that is needed to handle the read/write of the cluster? I don't want to put any Flatten To XML or Unflatten From XML outside.
    The solution I came up with for INI files was passing a reference to a cluster control, but it is really uncomfortable because I have to iterate through all the items.
    When a control has an "Anything" input, is there any way to wire that input to a control and have it remain "Anything"?
    Thanks
    Hernan
