Singleton problem

Hi,
I have a Singleton and I want it to behave like this:
The first time it's called, it must receive a String parameter in the constructor in order to set a class variable. Subsequent calls just return the created instance. Something like this:
public class LogParser extends MainParser {
  private static LogParser logParser = null;
  private String pathToLogXmlFile;

  private LogParser(String pathToLogXmlFile) {
    this.pathToLogXmlFile = pathToLogXmlFile;
  }

  // First-time call
  public static LogParser getFirstTimeInstance(String pathToLogXmlFile) {
    if (logParser == null)
      logParser = new LogParser(pathToLogXmlFile);
    return logParser;
  }

  // Subsequent calls use this one
  public static LogParser getInstance() throws WrongInstanceCallException {
    if (logParser == null)
      throw new WrongInstanceCallException();
    return logParser;
  }
}
but I don't think this is a good design! Can you give me some notes?
thanks in advance,
Manuel Leiria

try this
package forums;

public class LogParser {
  private static LogParser theInstance = null;
  private String logfilename = null;

  private LogParser(String logfilename) {
    this.logfilename = logfilename;
  }

  public static LogParser getInstance(String logfilename) {
    if (theInstance == null)
      theInstance = new LogParser(logfilename);
    return theInstance;
  }

  public static LogParser getInstance() throws IllegalArgumentException {
    if (theInstance == null)
      throw new IllegalArgumentException("Use LogParser.getInstance(logfile) to Initialise LogParser");
    return theInstance;
  }

  public String getLogfilename() {
    return this.logfilename;
  }

  public static void main(String[] args) {
    LogParser lOne = LogParser.getInstance("my.log");
    LogParser lTwo = LogParser.getInstance();
    System.out.println(lTwo.getLogfilename());
  }
}
... and yeah, there really is no way (that I know of) to do this "cleanly"... you're always gonna be left with an Exception case somewhere... I use an unchecked exception for this, because I believe any "uninitialised" errors should be found in the very first unit test execution... probably before you've even finished programming your program.
keith.
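
A minimal variation on the same idea, added here only as a sketch: split initialisation from lookup, so the "must be initialised first" contract is stated once at start-up and every other caller just asks for the instance. The class name LogParser2 and the init/getInstance names below are illustrative, not from the posts above.

package forums;

public class LogParser2 {
  private static LogParser2 theInstance = null;
  private final String logfilename;

  private LogParser2(String logfilename) {
    this.logfilename = logfilename;
  }

  // Called exactly once, e.g. from main() or application start-up.
  public static synchronized void init(String logfilename) {
    if (theInstance != null)
      throw new IllegalStateException("LogParser2 already initialised");
    theInstance = new LogParser2(logfilename);
  }

  // All other callers use this; they never need to know the path.
  public static synchronized LogParser2 getInstance() {
    if (theInstance == null)
      throw new IllegalStateException("Call LogParser2.init(logfile) first");
    return theInstance;
  }

  public String getLogfilename() {
    return logfilename;
  }
}

The exception case Keith mentions is still there, but it now only concerns callers that run before start-up, which the first unit test execution should catch.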

Similar Messages

  • Singleton stateless session bean

    hello
    how to implement "singleton stateless session bean"?
    thanks in advance!!

    This is a case of premature optimization. Try forgetting about how to solve a singleton problem for the time being. Focus on the use-cases of your application and getting something working.
    After that, if memory usage continues to be an issue, consider the following. If you have a large amount of data needed by multiple clients, one option is to store it in a database and retrieve portions on demand. If you need to keep everything in memory in each bean instance, you can also configure the total number of stateless session bean instances in your vendor's bean pool to limit the total resource usage. A third alternative is to use a third-party caching implementation or a vendor-specific singleton service.
    --ken

  • Deployment failure in WLS 8.1 sp2

    Hi,
    I am getting the following exceptions while deploying my war file.
    I am getting this exception in the log file too.
    <Jun 23, 2004 3:05:14 PM EDT> <Error> <HTTP> <BEA-101018> <[ServletContext(id=19521401,name=myPortal,context-path=/myPortal)] Servlet failed with ServletException
    javax.servlet.ServletException: Unable to resolve location of alkaPortal settings file: /WEB-INF/config/ApplicationSettings.properties. Exiting...
    at com.dps.j2ee.alkaPortal.config.ConfigPlugIn.init(ConfigPlugIn.java:33)
    at org.apache.struts.action.ActionServlet.initModulePlugIns(ActionServlet.java:1158)
    at org.apache.struts.action.ActionServlet.init(ActionServlet.java:473)
    at javax.servlet.GenericServlet.init(GenericServlet.java:258)
    at weblogic.servlet.internal.ServletStubImpl$ServletInitAction.run(ServletStubImpl.java:993)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    at weblogic.servlet.internal.ServletStubImpl.createServlet(ServletStubImpl.java:869)
    at weblogic.servlet.internal.ServletStubImpl.createInstances(ServletStubImpl.java:848)
    at weblogic.servlet.internal.ServletStubImpl.prepareServlet(ServletStubImpl.java:787)
    at weblogic.servlet.internal.ServletStubImpl.getServlet(ServletStubImpl.java:518)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:362)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:305)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6350)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:317)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3635)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2585)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    >
    I think the problem is caused by one JSP that contains the following code:
    <%
    try {
        String rootPath = this.getServletConfig().getServletContext().getRealPath("/");
        // get the config file property value
        String settingsFile = this.getServletConfig().getServletContext().getRealPath("/WEB-INF/config/ApplicationSettings.properties");
        // will the above code work?
        // I suppose this will give javax.servlet.ServletException: Unable to resolve location....
        // could the following code create singleton problems?
        Resources.Instance().load(settingsFile);
        configFile = Resources.Instance().getResource(RESOURCE_KEY);
        // load the properties
        File fileObj = new File(rootPath + configFile);
        xmlContainer = new XMLDataContainer();
        xmlContainer.load(fileObj);
        // get the element names and their values
        names = xmlContainer.getElementNames();
        if (names.isEmpty()) {
            noElements = true;
        } else {
            noElements = false;
            elemIter = names.iterator();
            // ... rest of the scriptlet (catch block and closing braces) not shown in the post
    %>
    Please tell me what the problem is.
    We are using WLS 8.1 + SP2.

    sangita wrote:
    Rob,
    2 questions:
    1) You told me this:
    "foo.xml in the same directory"
    Here, what do you mean by "in the same directory"?
    I meant deploy /foo/myApp/myWar.war and /foo/myApp/myConfig.xml
    >
    2) Why do you want foo.xml to be separate from the deployment unit? What is the reason?
    I'd want to separate the configuration from the application code. If you find a bug in your app and rebuild it, you're going to have to merge in any changes you've made to your configuration file.
    -- Rob
    >
    Please explain and help me learn this.
    -sangita
    Rob Woollen <[email protected]> wrote:
    No, I'd suggest having it external to the deployment unit. ie deploy
    foo.war and have foo.xml in the same directory.
    -- Rob
    sangita wrote:
    Do you suggest taking this config out of the .war file and putting it into a .jar file, and then packaging the .war and the .jar file together into an .ear file?
    -sangita
    Rob Woollen <[email protected]> wrote:
    It is possible to write back to a zip file, but I wouldn't encourage
    it.
    If you need to update the config file, I'd suggest that you either
    deploy exploded, or keep the config file external to the war file.
    -- Rob
    sangita wrote:
    Rob,
    You told me a workaround to read the configuration file in a .war file.
    That is done and I am experiencing no problems.
    BUT I also need to write back some changes to the configuration file; how can I do that?
    I am getting exceptions, and it looks like the WebLogic container does not handle this: some protocol exceptions.
    Please let me know.
    thanks, sangita
    "sangita" <[email protected]> wrote:
    how do you solve the problems so easily??
    ....you are great !!!
    Rob Woollen <[email protected]> wrote:
    http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/ServletContext.html#getResourceAsStream(java.lang.String)
    sangita wrote:
    Rob,
    What does the following mean?
    "Otherwise, using getResourceAsStream to get an InputStream would work as well."
    Please explain a little more.
    thanks again.
    Rob Woollen <[email protected]> wrote:
    No, neither of these would be bugs. getRealPath is supposed to return null for archives.
    -- Rob
    sangita wrote:
    Rob,
    would that mean that this is a bug in WLS8.1+sp2 ???
    Rob Woollen <[email protected]> wrote:
    getRealPath will return null if the war is archived rather than a directory. The easiest solution would just be to deploy an exploded war.
    Otherwise, using getResourceAsStream to get an InputStream would work as well.
    -- Rob
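
    For reference, a minimal sketch of what Rob's getResourceAsStream suggestion could look like inside a servlet or plug-in init method, assuming getServletContext() is available there; the variable names are illustrative:

    // Load the settings without relying on getRealPath, so it also works
    // when the war is deployed as an archive rather than exploded.
    java.io.InputStream in = getServletContext()
            .getResourceAsStream("/WEB-INF/config/ApplicationSettings.properties");
    if (in == null) {
        throw new javax.servlet.ServletException(
                "Unable to resolve settings file /WEB-INF/config/ApplicationSettings.properties");
    }
    java.util.Properties settings = new java.util.Properties();
    try {
        settings.load(in);   // read the key/value pairs from the stream
    } finally {
        in.close();
    }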

  • Performance problem with synchronized singleton

    I'm using the singleton pattern to cache incoming JMS Message data from a 3rd party. I'm seeing terrible performance though, and I think it's because I've misunderstood something.
    My singleton class stores incoming JMS messages in a HashMap, so that successive messages can be checked to see if they are a new piece of data, or an update to an earlier one.
    I followed the traditional examples of a private constructor and a public getInstance method, and applied the double-checked locking to the latter. However, a colleague then suggested that all my other methods in the same class should also be synchronized - is this the case or am I creating an unnecessary performance bottleneck? Or have I unwittingly created that bottleneck elsewhere?
    package com.mycode;

    import java.util.HashMap;
    import java.util.Iterator;

    public class DataCache {
        private static volatile DataCache uniqueInstance;
        private HashMap<String, DataCacheElement> dataCache;

        private DataCache() {
            if (dataCache == null) {
                dataCache = new HashMap<String, DataCacheElement>();
            }
        }

        public static DataCache getInstance() {
            if (uniqueInstance == null) {
                synchronized (DataCache.class) {
                    if (uniqueInstance == null) {
                        uniqueInstance = new DataCache();
                    }
                }
            }
            return uniqueInstance;
        }

        public synchronized void put(String uniqueID, DataCacheElement dataCacheElement) {
            dataCache.put(uniqueID, dataCacheElement);
        }

        public synchronized DataCacheElement get(String uniqueID) {
            return dataCache.get(uniqueID);
        }

        public synchronized void remove(String uniqueID) {
            dataCache.remove(uniqueID);
        }

        public synchronized int getCacheSize() {
            return dataCache.keySet().size();
        }

        /**
         * Flushes all objects from the cache that are older than the
         * expiry time.
         * @param expiryTime (long milliseconds)
         */
        public synchronized void flush(long expiryTime) {
            long compareDate = System.currentTimeMillis() - expiryTime;
            Iterator<String> iterator = dataCache.keySet().iterator();
            while (iterator.hasNext()) {
                // Get element by unique key
                String uniqueID = iterator.next();
                DataCacheElement dataCacheElement = dataCache.get(uniqueID);
                // get the last-updated time from the element
                long lastUpdatedDate = dataCacheElement.getUpdatedDate();
                // if the element is older than the expiry time, remove it from the cache
                // (iterator.remove() avoids a ConcurrentModificationException while iterating)
                if (lastUpdatedDate < compareDate) {
                    iterator.remove();
                }
            }
        }

        public synchronized void empty() {
            dataCache.clear();
        }
    }

    m0thr4 wrote:
    SunFred wrote:
    m0thr4 wrote:
    I [...] applied the double-checked locking
    Which is broken. http://www.ibm.com/developerworks/java/library/j-dcl.html
    from the link:
    The theory behind double-checked locking is perfect. Unfortunately, reality is entirely different. The problem with double-checked locking is that there is no guarantee it will work on single or multi-processor machines.
    The issue of the failure of double-checked locking is not due to implementation bugs in JVMs but to the current Java platform memory model. The memory model allows what is known as "out-of-order writes" and is a prime reason why this idiom fails.
    I had a read of that article and have a couple of questions about it:
    1. The article was written way back in May 2002 - is the issue they describe relevant to Java 6's memory model?
    DCL will work starting with Java 5, if you make the variable you're testing volatile. However, there's no reason to do it.
    Lazy instantiation is almost never appropriate, and for those rare times when it is, use a nested class to hold your instance reference. (There are examples if you search for them.) I'd be willing to bet lazy instantiation is not appropriate in your case, so you don't need to muck with syncing or DCL or any of that nonsense.
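
    For reference, a minimal sketch of the nested-class ("initialization-on-demand holder") idiom mentioned above; the class names are illustrative only:

    public class DataCacheHolderExample {
        private DataCacheHolderExample() {
        }

        // The JVM loads and initialises Holder lazily, on the first call to
        // getInstance(), and class initialisation is thread-safe by definition,
        // so no synchronization or volatile is needed here.
        private static class Holder {
            static final DataCacheHolderExample INSTANCE = new DataCacheHolderExample();
        }

        public static DataCacheHolderExample getInstance() {
            return Holder.INSTANCE;
        }
    }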

  • Singleton interceptor problem in JDev 11

    Hi,
    In JDev 10.1.3.x I can import oracle.j2ee.ejb.StatelessDeployment and use the annotation @StatelessDeployment(interceptorType="singleton") to configure a singleton interceptor. In the JDev 11 libraries this StatelessDeployment class is missing (StatefulDeployment is missing as well).
    How do I configure a singleton interceptor with an EJB 3.0 Session Bean in JDeveloper 11tp?

    Mimarko,
    The class has been moved to the following package. Therefore, the class still exists but its former location is deprecated.
    oracle.j2ee.annotation.ejb.StatelessDeployment
    Regards,
    Ric
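
    Assuming the relocated class Ric mentions, usage would presumably look much like it did in 10.1.3.x, just with the new import; the bean and interface names below are made up for illustration:

    import javax.ejb.Stateless;
    import oracle.j2ee.annotation.ejb.StatelessDeployment;   // new package location

    @Stateless
    @StatelessDeployment(interceptorType = "singleton")      // same attribute as in 10.1.3.x
    public class LogServiceBean implements LogService {
        // bean implementation ...
    }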

  • ImageFileReader as singleton gives problems with getResource

    Hi,
    I used the singleton design pattern on my class ImageFileReader. I use this class to read and buffer images/icons. I only need one instance of this class in my program, so I created the class as a singleton.
    To load the images I use ClassLoader.getResource(), but it gives a compilation error because I called the method from a static context.
    Does someone have a solution for this?
    my code:
    package jdmtc.util;
    import java.awt.*;
    import java.io.Serializable;
    import javax.swing.*;
    import java.awt.event.*;
    import javax.swing.event.*;
    import java.util.*;
    public class ImageFileReader {
        public static final transient char SMALL = 's';
        public static final transient char LARGE = 'l';
        private static transient HashMap imageMap;
        private static ImageFileReader imageFileReader = null;

        private ImageFileReader() {
            if (imageMap == null) {
                imageMap = new HashMap();
                // these calls are the ones that fail to compile:
                // getResource() is an instance method, not a static method of ClassLoader
                ImageIcon i = new ImageIcon(ClassLoader.getResource("sNOTFOUND.gif"));
                imageMap.put("sNOTFOUND", i);
                i = new ImageIcon(ClassLoader.getResource("lNOTFOUND.gif"));
                imageMap.put("lNOTFOUND", i);
            }
        }

        public static ImageFileReader getImageFileReader() {
            if (imageFileReader == null)
                imageFileReader = new ImageFileReader();
            return imageFileReader;
        }

        public ImageIcon getImageIcon(String imageString, char size) throws IllegalArgumentException {
            ImageIcon img;
            if (size == SMALL)
                imageString = 's' + imageString;
            else if (size == LARGE)
                imageString = 'l' + imageString;
            else
                throw new IllegalArgumentException("imagesize " + size + " is unknown");
            if (imageMap.containsKey(imageString)) {
                return (ImageIcon) imageMap.get(imageString);
            } else {
                img = new ImageIcon(ClassLoader.getResource(imageString + ".gif"));
                if (img != null) {
                    imageMap.put(imageString, img);
                    return img;
                } else {
                    return (ImageIcon) imageMap.get(size + "NOTFOUND");
                }
            }
        }
    }

    Hi,
    Thanks for your solution. I would remove my question because just after I sent it I figured out the answer myself. It's been too long since I last wrote some code.
    Since April I've been studying UML and patterns.
    I think you've earned your duke dollars.
    Greetings
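
    For anyone hitting the same compile error: the usual fix is to resolve the resource relative to a class (or an explicit class loader) rather than calling getResource statically on ClassLoader. A minimal sketch, assuming the .gif files sit on the classpath as in the original code:

    // e.g. inside ImageFileReader's constructor:
    // Class.getResource() is called on the class literal, so no ClassLoader instance is needed.
    java.net.URL url = ImageFileReader.class.getResource("sNOTFOUND.gif");   // resolved relative to the class's package
    // or, via the class's own class loader (the path is then relative to the classpath root):
    // java.net.URL url = ImageFileReader.class.getClassLoader().getResource("sNOTFOUND.gif");
    javax.swing.ImageIcon icon = new javax.swing.ImageIcon(url);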

  • Problem with threads within applet

    Hello,
    I have an applet; inside this applet I have a singleton, and inside this singleton I have a thread.
    The thread runs in an endless loop: it does some work and goes to sleep, over and over.
    The problem is that when I refresh my IE6 browser I see more than one thread.
    For debugging, I did the following:
    inside the thread, a println every time it goes to sleep;
    a println in the singleton constructor;
    a println in the singleton finalizer ("destructor").
    The output goes like this:
    when I refresh the page, the singleton constructor runs, but not on every refresh; sometimes I see the constructor output and sometimes I don't.
    The thread inside the singleton gives me the same kind of output: sometimes I see more than one thread at a time and sometimes I don't.
    The finalizer never runs (no output there).
    I don't understand what is going on.
    someone can please shed some light?
    thanks.
    BTW, I am working with JRE 1.1.
    This is a very old and big applet and I can't convert it to something new.

    Ooops. sorry!
    I did.
         public void start() {
         }

         public void stop() {
         }

         public void destroy() {
              try {
                   resetAll();
                   Configuration.closeConnection();
                   QuoteItem.closeConnection();
              } finally {
                   try {
                        super.finalize();
                   } catch (Throwable e) {
                        e.printStackTrace();
                   }
              }
         }
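
    A likely explanation, offered only as a sketch: each browser refresh creates a fresh applet instance, but the static singleton (and the background thread it started) from the previous instance is never told to stop, so threads accumulate until the browser's JVM goes away. One common approach is to give the worker an explicit shutdown flag and trigger it from the applet's stop()/destroy(); the class and method names below are hypothetical:

    public class Worker implements Runnable {
        // checked by the loop; volatile so the change is seen by the worker thread
        private volatile boolean running = true;

        public void run() {
            while (running) {
                // ... do the periodic work ...
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    return;   // interrupted while sleeping: exit the loop
                }
            }
        }

        public void shutdown() {
            running = false;
        }
    }

    // In the applet (hypothetical accessor on the singleton):
    //     public void stop()    { MySingleton.getInstance().getWorker().shutdown(); }
    //     public void destroy() { MySingleton.getInstance().getWorker().shutdown(); }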

  • Can not access the Instance Data of a Singleton class from MBean

    I am working against a deadline and I am sweating now. For the past few days I have been working on a problem, and now it's time to shout out.
    I have an application (let's call it "APP") and I have a "PerformanceStatistics" MBean written for APP. I also have a singleton data class (let's call it "SDATA") which provides some data for the MBean to access and calculate some application runtime statistics. During application startup, and then over the application lifecycle, I add data to the SDATA instance, so this SDATA instance always has the data.
    Now, the problem is that I am not able to access any of the data or data structures from the PerformanceStatistics MBean. If I check the data structures while I am adding the data, all the structures contain data. But when I call this singleton instance from the MBean, I get what looks like empty data.
    Can anyone explain or offer hints on what's happening? Any help will be appreciated.
    I tried all sorts of things: the data class being final, all methods being synchronized, static, etc., just to make sure. But no luck so far.
    Another unfortunate thing is that I sometimes get different "ServicePerformanceData" instances (i.e. when I print ServicePerformanceData.getInstance() they are different at different times). Not sure what's happening. I am running this application in WebLogic Server and using JConsole.
    Please see the detailed problem at http://stackoverflow.com/questions/1151117/can-not-access-the-instance-data-of-a-singleton-class-from-mbean
    I see related problems but no real solutions. Appreciate if anyone can throw in ideas.
    http://www.velocityreviews.com/forums/t135852-rmi-singletons-and-multiple-classloaders-in-weblogic.html
    http://www.theserverside.com/discussions/thread.tss?thread_id=12194
    http://www.jguru.com/faq/view.jsp?EID=1051835
    Thanks,
    Krishna

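
    One thing worth checking, given the differing getInstance() results: in WebLogic a "singleton" is only unique per classloader, so if the MBean code and the application code are loaded by different classloaders, each sees its own copy of the static instance (the links above discuss exactly this). A quick, illustrative diagnostic, using the class name from the post:

    // Print this from the application code that adds the data, and again from the MBean.
    // Different classloader values mean two independent "singleton" instances.
    ServicePerformanceData data = ServicePerformanceData.getInstance();
    System.out.println("instance    = " + data);
    System.out.println("classloader = " + data.getClass().getClassLoader());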

  • Problem with Configuring Tomcat for running jsp web applications..Plz HELP

    I am using Tomcat 5.5, JDK 1.5.0_12 and Oracle 10g, with a JDBC-ODBC bridge connection
    to the database. I have placed my project folder, called
    tdm, under the webapps folder in Tomcat. This 'tdm' folder consists of
    a collection of HTML pages, JSP pages and images for my project. I also created a
    WEB-INF folder, and in that I have a lib folder which contains catalina-root.jar,
    classes12.jar and nls_charset.jar. Also in the WEB-INF folder I have the web.xml
    file, which looks like this:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!--
    Copyright 2004 The Apache Software Foundation
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    -->
    <web-app>
    <resource-ref>
    <description>Oracle Datasource example</description>
    <res-ref-name>jdbc/gdn</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    </resource-ref>
    </web-app>
    My Server.xml file in Tomcat\conf folder is as follows
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN">
    <!-- Comment these entries out to disable JMX MBeans support used for the
    administration web application -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved"
    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
    pathname="conf/tomcat-users.xml" />
    <Resource name="jdbc/gdn" auth="Container"
    type="javax.sql.DataSource" driverClassName="sun.jdbc.odbc.JdbcOdbcDriver"
    url="jdbc:odbc:gdn"
    username="system" password="tiger" maxActive="20" maxIdle="10"
    maxWait="-1"/>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Catalina">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 5 documentation bundle for more detailed
    instructions):
    * If your JDK version 1.3 or prior, download and install JSSE 1.0.2 or
    later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
    <Connector
    port="5050" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="8443" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to 0 -->
         <!-- Note : To use gzip compression you could set the following properties :
                   compression="on"
                   compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html,text/xml"
         -->
    <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector port="8443"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" disableUploadTimeout="true"
    acceptCount="100" scheme="https" secure="true"
    clientAuth="false" sslProtocol="TLS" />
    -->
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009"
    enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector port="8082"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" acceptCount="100" connectionTimeout="20000"
    proxyPort="80" disableUploadTimeout="true" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Catalina" defaultHost="localhost">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.4
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host
    Note: XML Schema validation will not work with Xerces 2.2.
    -->
    <Host name="localhost" appBase="webapps"
    unpackWARs="true" autoDeploy="true"
    xmlValidation="false" xmlNamespaceAware="false">
    <!-- Defines a cluster for this node,
    By defining this element, means that every manager will be changed.
    So when running a cluster, only make sure that you have webapps in there
    that need to be clustered and remove the other ones.
    A cluster has the following parameters:
    className = the fully qualified name of the cluster class
    name = a descriptive name for your cluster, can be anything
    mcastAddr = the multicast address, has to be the same for all the nodes
    mcastPort = the multicast port, has to be the same for all the nodes
    mcastBindAddr = bind the multicast socket to a specific address
    mcastTTL = the multicast TTL if you want to limit your broadcast
    mcastSoTimeout = the multicast readtimeout
    mcastFrequency = the number of milliseconds in between sending a "I'm alive" heartbeat
    mcastDropTime = the number a milliseconds before a node is considered "dead" if no heartbeat is received
    tcpThreadCount = the number of threads to handle incoming replication requests, optimal would be the same amount of threads as nodes
    tcpListenAddress = the listen address (bind address) for TCP cluster request on this host,
    in case of multiple ethernet cards.
    auto means that address becomes
    InetAddress.getLocalHost().getHostAddress()
    tcpListenPort = the tcp listen port
    tcpSelectorTimeout = the timeout (ms) for the Selector.select() method in case the OS
    has a wakup bug in java.nio. Set to 0 for no timeout
    printToScreen = true means that managers will also print to std.out
    expireSessionsOnShutdown = true means that
    useDirtyFlag = true means that we only replicate a session after setAttribute,removeAttribute has been called.
    false means to replicate the session after each request.
    false means that replication would work for the following piece of code: (only for SimpleTcpReplicationManager)
    <%
    HashMap map = (HashMap)session.getAttribute("map");
    map.put("key","value");
    %>
    replicationMode = can be either 'pooled', 'synchronous' or 'asynchronous'.
    * Pooled means that the replication happens using several sockets in a synchronous way. Ie, the data gets replicated, then the request return. This is the same as the 'synchronous' setting except it uses a pool of sockets, hence it is multithreaded. This is the fastest and safest configuration. To use this, also increase the nr of tcp threads that you have dealing with replication.
    * Synchronous means that the thread that executes the request, is also the
    thread the replicates the data to the other nodes, and will not return until all
    nodes have received the information.
    * Asynchronous means that there is a specific 'sender' thread for each cluster node,
    so the request thread will queue the replication request into a "smart" queue,
    and then return to the client.
    The "smart" queue is a queue where when a session is added to the queue, and the same session
    already exists in the queue from a previous request, that session will be replaced
    in the queue instead of replicating two requests. This almost never happens, unless there is a
    large network delay.
    -->
    <!--
    When configuring for clustering, you also add in a valve to catch all the requests
    coming in, at the end of the request, the session may or may not be replicated.
    A session is replicated if and only if all the conditions are met:
    1. useDirtyFlag is true or setAttribute or removeAttribute has been called AND
    2. a session exists (has been created)
    3. the request is not trapped by the "filter" attribute
    The filter attribute is to filter out requests that could not modify the session,
    hence we don't replicate the session after the end of this request.
    The filter is negative, ie, anything you put in the filter, you mean to filter out,
    ie, no replication will be done on requests that match one of the filters.
    The filter attribute is delimited by ;, so you can't escape out ; even if you wanted to.
    filter=".*\.gif;.*\.js;" means that we will not replicate the session after requests with the URI
    ending with .gif and .js are intercepted.
    The deployer element can be used to deploy apps cluster wide.
    Currently the deployment only deploys/undeploys to working members in the cluster
    so no WARs are copied upons startup of a broken node.
    The deployer watches a directory (watchDir) for WAR files when watchEnabled="true"
    When a new war file is added the war gets deployed to the local instance,
    and then deployed to the other instances in the cluster.
    When a war file is deleted from the watchDir the war is undeployed locally
    and cluster wide
    -->
    <!--
    <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
    managerClassName="org.apache.catalina.cluster.session.DeltaManager"
    expireSessionsOnShutdown="false"
    useDirtyFlag="true"
    notifyListenersOnReplication="true">
    <Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.4"
    mcastPort="45564"
    mcastFrequency="500"
    mcastDropTime="3000"/>
    <Receiver
    className="org.apache.catalina.cluster.tcp.ReplicationListener"
    tcpListenAddress="auto"
    tcpListenPort="4001"
    tcpSelectorTimeout="100"
    tcpThreadCount="6"/>
    <Sender
    className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
    replicationMode="pooled"
    ackTimeout="15000"/>
    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
    filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
    <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
    tempDir="/tmp/war-temp/"
    deployDir="/tmp/war-deploy/"
    watchDir="/tmp/war-listen/"
    watchEnabled="false"/>
    </Cluster>
    -->
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    This access log implementation is optimized for maximum performance,
    but is hardcoded to support only the "common" and "combined" patterns.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.FastCommonAccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <Context path="/tdm" docBase="tdm" debug="0" reloadable="true" />
    </Host>
    </Engine>
    </Service>
    </Server>
    I have set the context path to /tdm in the server.xml file. Should this be placed in context.xml instead?
    My first page in the project is called Homepage.html. To start my project I enter http://localhost:5050/tdm/homepage.html
    in a browser. There I accept a username and password from the user and then do the validation in
    a valid.jsp file, where I connect to the database, check the credentials and use jsp:forward to go to the next pages
    accordingly. However, when I enter the username and password and click Go on the home page, nothing is
    displayed on the next page. The URL in the browser says valid.jsp but a blank screen appears.
    Why does this happen? Does it mean that Tomcat is not recognizing Java on my system, or is it a problem
    with the database connection or something else? I feel that Tomcat is not executing JSP commands.
    Is that possible? Why would this happen?
    I set the JAVA_HOME and CATALINA_HOME environment variables to the JDK and Tomcat folders respectively.
    Is there anything else that I need to set in the classpath? Should I have my project as a
    WAR file in Tomcat's webapps, or is just a folder (i.e. an exploded directory structure) fine?
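
    One thing to check in valid.jsp, since a blank page usually means an exception was swallowed: the jdbc/gdn resource declared above has to be obtained through JNDI (and, in Tomcat 5.5, linked into the web application, for example with a ResourceLink in the context), rather than via DriverManager. A minimal sketch; the helper class itself is hypothetical, while the names jdbc/gdn and java:comp/env follow the descriptors above:

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class DbHelper {
        // Obtain a pooled connection from the DataSource declared as jdbc/gdn.
        public static Connection getConnection() throws NamingException, SQLException {
            Context initCtx = new InitialContext();
            Context envCtx = (Context) initCtx.lookup("java:comp/env");
            DataSource ds = (DataSource) envCtx.lookup("jdbc/gdn");   // matches <res-ref-name> in web.xml
            return ds.getConnection();
        }
    }

    Catching and printing any exception around this lookup and the subsequent query in valid.jsp will usually show whether the JNDI lookup, the driver, or the query itself is failing.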

    I am using Tomcat 5.5 and Jdk 1.5.0_12 and Oracle 10g. I am using jdbc-odbc bridge connection
    to connect to the database. I have placed my project folder called
    tdm under the webapps folder in Tomcat. This 'tdm' folder consists of
    a collection of html pages,jsp pages and images of my project. Also I created a
    WEB-INF folderand in that I have lib folder which contains catalina-root.jar
    , classes12.jar and nls_charset.jar files. And also in the WEB-INF folder I have the web.xml
    file which looks like this
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!--
    Copyright 2004 The Apache Software Foundation
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
    -->
    <web-app>
    <resource-ref>
    <description>Oracle Datasource example</description>
    <res-ref-name>jdbc/gdn</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    </resource-ref>
    </web-app>
    My Server.xml file in Tomcat\conf folder is as follows
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN">
    <!-- Comment these entries out to disable JMX MBeans support used for the
    administration web application -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved"
    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
    pathname="conf/tomcat-users.xml" />
    <Resource name="jdbc/gdn" auth="Container"
    type="javax.sql.DataSource" driverClassName="sun.jdbc.odbc.JdbcOdbcDriver"
    url="jdbc:odbc:gdn"
    username="system" password="tiger" maxActive="20" maxIdle="10"
    maxWait="-1"/>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Catalina">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 5 documentation bundle for more detailed
    instructions):
    * If your JDK version 1.3 or prior, download and install JSSE 1.0.2 or
    later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
    <Connector
    port="5050" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" redirectPort="8443" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to 0 -->
         <!-- Note : To use gzip compression you could set the following properties :
                   compression="on"
                   compressionMinSize="2048"
                   noCompressionUserAgents="gozilla, traviata"
                   compressableMimeType="text/html,text/xml"
         -->
    <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector port="8443"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" disableUploadTimeout="true"
    acceptCount="100" scheme="https" secure="true"
    clientAuth="false" sslProtocol="TLS" />
    -->
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009"
    enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector port="8082"
    maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" acceptCount="100" connectionTimeout="20000"
    proxyPort="80" disableUploadTimeout="true" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Catalina" defaultHost="localhost">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.4
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host
    Note: XML Schema validation will not work with Xerces 2.2.
    -->
    <Host name="localhost" appBase="webapps"
    unpackWARs="true" autoDeploy="true"
    xmlValidation="false" xmlNamespaceAware="false">
    <!-- Defines a cluster for this node,
    By defining this element, means that every manager will be changed.
    So when running a cluster, only make sure that you have webapps in there
    that need to be clustered and remove the other ones.
    A cluster has the following parameters:
    className = the fully qualified name of the cluster class
    name = a descriptive name for your cluster, can be anything
    mcastAddr = the multicast address, has to be the same for all the nodes
    mcastPort = the multicast port, has to be the same for all the nodes
    mcastBindAddr = bind the multicast socket to a specific address
    mcastTTL = the multicast TTL if you want to limit your broadcast
    mcastSoTimeout = the multicast readtimeout
    mcastFrequency = the number of milliseconds in between sending a "I'm alive" heartbeat
    mcastDropTime = the number a milliseconds before a node is considered "dead" if no heartbeat is received
    tcpThreadCount = the number of threads to handle incoming replication requests, optimal would be the same amount of threads as nodes
    tcpListenAddress = the listen address (bind address) for TCP cluster request on this host,
    in case of multiple ethernet cards.
    auto means that address becomes
    InetAddress.getLocalHost().getHostAddress()
    tcpListenPort = the tcp listen port
    tcpSelectorTimeout = the timeout (ms) for the Selector.select() method in case the OS
    has a wakup bug in java.nio. Set to 0 for no timeout
    printToScreen = true means that managers will also print to std.out
    expireSessionsOnShutdown = true means that
    useDirtyFlag = true means that we only replicate a session after setAttribute,removeAttribute has been called.
    false means to replicate the session after each request.
    false means that replication would work for the following piece of code: (only for SimpleTcpReplicationManager)
    <%
    HashMap map = (HashMap)session.getAttribute("map");
    map.put("key","value");
    %>
    replicationMode = can be either 'pooled', 'synchronous' or 'asynchronous'.
    * Pooled means that the replication happens using several sockets in a synchronous way. Ie, the data gets replicated, then the request return. This is the same as the 'synchronous' setting except it uses a pool of sockets, hence it is multithreaded. This is the fastest and safest configuration. To use this, also increase the nr of tcp threads that you have dealing with replication.
    * Synchronous means that the thread that executes the request, is also the
    thread the replicates the data to the other nodes, and will not return until all
    nodes have received the information.
    * Asynchronous means that there is a specific 'sender' thread for each cluster node,
    so the request thread will queue the replication request into a "smart" queue,
    and then return to the client.
    The "smart" queue is a queue where when a session is added to the queue, and the same session
    already exists in the queue from a previous request, that session will be replaced
    in the queue instead of replicating two requests. This almost never happens, unless there is a
    large network delay.
    -->
    <!--
    When configuring for clustering, you also add in a valve to catch all the requests
    coming in, at the end of the request, the session may or may not be replicated.
    A session is replicated if and only if all the conditions are met:
    1. useDirtyFlag is true or setAttribute or removeAttribute has been called AND
    2. a session exists (has been created)
    3. the request is not trapped by the "filter" attribute
    The filter attribute is to filter out requests that could not modify the session,
    hence we don't replicate the session after the end of this request.
    The filter is negative, ie, anything you put in the filter, you mean to filter out,
    ie, no replication will be done on requests that match one of the filters.
    The filter attribute is delimited by ;, so you can't escape out ; even if you wanted to.
    filter=".*\.gif;.*\.js;" means that we will not replicate the session after requests with the URI
    ending with .gif and .js are intercepted.
    The deployer element can be used to deploy apps cluster wide.
    Currently the deployment only deploys/undeploys to working members in the cluster,
    so no WARs are copied upon startup of a broken node.
    The deployer watches a directory (watchDir) for WAR files when watchEnabled="true"
    When a new war file is added the war gets deployed to the local instance,
    and then deployed to the other instances in the cluster.
    When a war file is deleted from the watchDir the war is undeployed locally
    and cluster wide
    -->
    <!--
    <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
    managerClassName="org.apache.catalina.cluster.session.DeltaManager"
    expireSessionsOnShutdown="false"
    useDirtyFlag="true"
    notifyListenersOnReplication="true">
    <Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.4"
    mcastPort="45564"
    mcastFrequency="500"
    mcastDropTime="3000"/>
    <Receiver
    className="org.apache.catalina.cluster.tcp.ReplicationListener"
    tcpListenAddress="auto"
    tcpListenPort="4001"
    tcpSelectorTimeout="100"
    tcpThreadCount="6"/>
    <Sender
    className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
    replicationMode="pooled"
    ackTimeout="15000"/>
    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
    filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
    <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
    tempDir="/tmp/war-temp/"
    deployDir="/tmp/war-deploy/"
    watchDir="/tmp/war-listen/"
    watchEnabled="false"/>
    </Cluster>
    -->
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    This access log implementation is optimized for maximum performance,
    but is hardcoded to support only the "common" and "combined" patterns.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.FastCommonAccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <Context path="/tdm" docBase="tdm" debug="0" reloadable="true" />
    </Host>
    </Engine>
    </Service>
    </Server>
    I have set the context path to /tdm in the server.xml file. Should this be placed in context.xml instead?
    My first page in the project is called Homepage.html. To start my project I enter http://localhost:5050/tdm/homepage.html
    in a browser. Here I accept a username and password from the user and then do the validation in
    a valid.jsp file, where I connect to the database, check the credentials, and use jsp:forward to go to the next pages
    accordingly. However, when I enter the username and password and click Go on the homepage, nothing is
    displayed on the next page. The URL in the browser says valid.jsp but a blank screen appears.
    Why does this happen? Does it mean that Tomcat is not recognizing Java on my system, or is it a problem
    with the database connection, or something else? I suspect that Tomcat is not executing the JSP commands -
    is that possible, and why would it happen?
    I set the JAVA_HOME and CATALINA_HOME environment variables to the JDK and Tomcat folders respectively.
    Is there anything else I need to set in the classpath? Should I have my project as a
    WAR file in Tomcat's webapps directory, or is a plain folder (i.e. the directory structure) fine?
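    Regarding the context.xml question: as far as I know (this is only a sketch, assuming a Tomcat 5.x setup like the server.xml quoted above), the <Context> element does not have to live in server.xml. It can be moved into a per-application context file, where the context path is derived from the file name or webapp directory name rather than from a path attribute:
    <!-- conf/Catalina/localhost/tdm.xml, or META-INF/context.xml inside the webapp -->
    <!-- The path comes from the file/directory name ("tdm"), so no path attribute is needed; -->
    <!-- docBase can be omitted when the application already sits under the Host's appBase (webapps). -->
    <Context reloadable="true" />
    As for WAR versus folder: with unpackWARs="true" and autoDeploy="true" in the Host element above, either a tdm.war file or an exploded tdm directory under webapps should get deployed, and you normally don't need to add anything to the system classpath for Tomcat itself.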

  • Design problem: RS232 communication

    Hi,
    I have a design problem for communication with a device via RS232. Since I'm normally a C++ programmer, I might just be looking at the problem from the wrong angle, and I would appreciate some hints on how to do it in LabVIEW.
    The scenario:
    A device is communicating with the PC via RS232. The device permanently sends data packets. At the same time, commands can be sent to the device and it returns replies. Data packets and reply packets are arbitrarily mixed, i.e. after sending a command there could be a couple of data packets before the reply comes back, but the packets can be distinguished by an identifier.
    At least one, ideally several VIs should communicate with the device. Commands should be sent by pressing buttons, and the incoming data should be parsed (the packets contain multiple data streams) and shown on graphs or saved to files.
    My initial idea:
    Coming from C++ I wanted to build a class for the communication that permanently reads the incoming data and splits it to reply and data packets. This class would then have a function to send out a command and would return the reply or a timeout and it would be possible to register and unregister listeners (I wanted to use queues for this) for the various data streams.
    The problems I ran into:
    There were a couple, but the two most pressing problems were: how could I communicate with the constantly running sample VI (e.g. to stop sampling), and how could I propagate changes to the class to it (e.g. new listeners)? Since it does not return, I don't see a good way to implement it as an instance function (i.e. pass it the object). I could probably avoid letting the sample function run continuously and instead call it periodically from outside. However, I planned to implement the class as a singleton, so it could be used in parallel from different VIs.
    Is there a best practice for a case like this?
    I'm glad about any hints or ideas.
    Thanks,
    Tobias

    tfritz wrote:
    Hi,
    thanks. Since almost the same thing was suggested to me in a German forum, I guess this is really common practice (using one VI with different methods controlled by a queue). It still seems a little "unnatural" to me, but my biggest concern (a bad interface description) was addressed by the suggestion in the link you sent me to wrap these functions with wrapper VIs, so caller VIs won't have to deal with the call-by-queue mechanism. This might also be easier to port to a different implementation later. However, I still see the danger that the continuously running VI could easily become bloated.
    It also requires me to change the way I have looked at VIs until now. In our course they told us that VIs are basically functions. Using this design pattern, the VI becomes more of a module, really (like a C module implemented in a C source file). But I will try it. It sounds as if it could work.
    I will still look into the OOP solutions a little more, though. Do I understand you correctly that you wouldn't recommend using LVOOP because it's still buggy? What about dqGOOP for example? This sounds like it could do what I need (however it doesn't seem to implement things like polymorphism, late binding and inheritance so I don't quite see what's so OOP about it. It seems more like programming with structures in C.)
    I don't know if LVOOP is buggy or not.  I think early on it was buggy and things have improved in recent versions. I have read that it doesn't have all the features that you would have in an OOP language like C++.  I wouldn't recommend it only because I'm not familiar with it at all.  I can't recommend something that I'm not comfortable with.  If you go that route, plan on spending time in these forums and in LAVA reading up on what others have done.  I haven't heard of dqGOOP.
    But back to your suggestion. I still have a couple of questions:
    - How do you return values from the module? Would you use a queue for that as well?
    - Where would the parameter queue be held (created and passed to the VI)?
     I would store all of these in a functional global variable.  This is the VI that stores data in shift registers.  Ben's action engine nugget is an advancement on that.  This allows for both the calling VI and the parallel running subVI to get and set the data as needed.  It runs quickly so neither process should be forced to wait while the other  VI is doing its thing.
    - My VI has to be constantly sampling and this shouldn't be interrupted too long by other functions such as adding a listener. However, both functionalities have to access the same kind of data. Is there an easy way to parallelize this? Would the sampling be a case in the case diagram that's always used if no command was sent to the VI, or would it somehow run in parallel?   Yes.  There are a couple of ways of doing this.  One would be for the dequeue to have a timeout; in the event the dequeue times out, you run the code that is doing the acquisition.  I think a better method is that the code that does the acquisition enqueues its own command again at the end of the queue.  Let's say that is command A.  So when case A finishes, it enqueues A, which seeds itself to run again.  So if nothing else comes into the queue, it just executes A, A, A, A.  But let's say another section of code needs to do something such as command B.  It will slip B into the queue while A is executing.  So you would get A, B, then A again, because A would get slipped back into the queue when the first A finishes, but B has already been put in while the first A was running.  (A rough text sketch of this self-seeding queue follows at the end of this post.)
    - Would it be possible to make the VI reentrant and in this way use it simultaneously on different COM ports (using different parameter queues as well)? I'm not sure if I will need this, but it would be neat if it could work.
    I think you could do this.  It may be a case where the VI is saved as a template  (.vit) and you initiate it multiple times.  I haven't needed to do this before, so I'm afraid I can't provide any details or useful tips. 
    Well, I will fool around some more. Thanks so much for your help. This is kind of exciting since the concepts are quite new to me. Btw, is there something like an academic theory (computer science) behind LabVIEW? I came across functional languages at university, but data flow languages are still a new concept for me.
    Tobias
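    To make the self-seeding command queue described above concrete in text form, here is a rough sketch in Java (LabVIEW block diagrams can't be shown here; the CommDriver class and the command names are invented for illustration - in LabVIEW the equivalent would be a queue of commands driving a case structure inside a while loop):
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch of the queued "action engine" pattern discussed above.
    public class CommDriver implements Runnable {

        enum Command { ACQUIRE, SEND_COMMAND, ADD_LISTENER, STOP }

        private final BlockingQueue<Command> queue = new LinkedBlockingQueue<Command>();

        public CommDriver() {
            queue.add(Command.ACQUIRE);            // seed the acquisition case once
        }

        public void post(Command c) {              // other callers slip commands into the queue
            queue.add(c);
        }

        public void run() {
            try {
                while (true) {
                    Command c = queue.take();      // dequeue the next command
                    switch (c) {
                        case ACQUIRE:
                            // read one chunk from the serial port, split it into data/reply packets ...
                            queue.add(Command.ACQUIRE);  // re-enqueue itself so sampling continues (A, A, A, ...)
                            break;
                        case SEND_COMMAND:
                        case ADD_LISTENER:
                            // handle the request; the pending ACQUIRE keeps sampling going afterwards (A, B, A, ...)
                            break;
                        case STOP:
                            return;
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
    The wrapper-VI idea from the earlier reply corresponds to the post() method here: callers never see the queue, they just call a small wrapper that enqueues the right command.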
    tfritz wrote:
    Another question about the "dynamic starting" of the VI:
    How is the path handled? Is it guaranteed that it always takes the VI from the project, or does it just search for the first VI by that name that it finds in the file structure? Does this still work when building an .exe from the project? What happens if the VI is already running? Can you test for this?
    While I'm at it: is there a way to stop LabVIEW from searching for subVIs it can't find when opening a VI? This has sometimes resulted in very unexpected behaviour, where it would find the VI somewhere else (with the same name but maybe an older version).
    In my case, I just had the path hardcoded.  It is my only instance and I'm not planning on moving the VIs.  If you don't give the path, it will take a VI by that name if it's in memory.  If it isn't in memory, it starts searching relative to the calling VI's path.  One thing I do know: if you are dealing with relative paths, a subVI has a different relative path in an .exe as opposed to the development environment.  The name of the .exe becomes a folder.  So in development your subVI is mySubVI.vi; in an executable, its path is MyExe.exe\MySubVI.vi.
    For all of this, I recommend searching the forums to get more details.
    If it is searching for a VI, you can hit ignore.  But of course you'd have to do it before it finds it.  When you are dealing with versioning issues, I recommend making a backup copy of the entire directory structure elsewhere.  Some location where it shouldn't stumble across it.

  • Problem with Mappings in the new Version of NWDS

    Hello all,
    I have a process which was created in NWDS (Version: SAP Enhancement Package 1 for SAP NetWeaver 7.1 SP00 PAT0000) and checked into CVS.
    Now I have checked the process out of CVS in the new version of NWDS (Version: SAP Enhancement Package 1 for SAP NetWeaver Developer Studio 7.1 SP01 PAT0003).
    The process contains an automated activity, and the output mapping from the web service now shows an error: "Expected 'com.sap.dictionary.string,ns=.... Found list of 'char10,ns=...". I've changed nothing, and the input mapping and the other mappings are all right. I checked out the same process again in the "old" version of NWDS and it works fine.
    Does anybody know how I can "import" a process from an older version of NWDS into the new version?
    Kind regards,
    Bastian

    Hi Bastian,
    The problem you are facing is a trivial one. You need not worry about importing the whole development component again. Your mapping error can be fixed easily by changing the context in the Web Dynpro perspective.
    The error is telling you that the automated activity requires a string parameter, but what it has found is a list of parameters.
    What this means is that the mapping you have done in the Web Dynpro context has a problem.
    The web service input requires a singleton parameter. If you look into the mapping view, you will notice a single icon for the node which you are mapping.
    What you are sending, however, is a list (not a singleton node).
    If you look at the left side, where the parameters come from, you will notice multiple icons (stacked on top of each other) representing non-singleton nodes (i.e. cardinality 0..n or 1..n).
    So to remove the problem, just go to the component controller of the view and change the node's context settings:
    collection cardinality to 0..1 or 1..1, and selection cardinality to 0..1 or 1..1.
    Thanks

  • Problem with checkbox group in row popin of table.

    In a table row popin I have placed a CheckBoxGroup. I have mapped the texts property of the checkbox group to an attribute which is under a subnode of the table node. The subnode's properties are singleton=false, selection cardinality=0..n, and cardinality=0..n.
    If there are 'n' records in the table, each record has its own row popin, and in the row popin there is a checkbox group.
    The checkbox group in the row popin belongs to that particular row.
    However, the checkbox group values in the row popins of all the rows are getting changed to those of the lead-selected row.
    The same scenario works fine with a table in the row popin instead of the checkbox group (with the table's dataSource property bound to the subnode): the table in the row popin shows the values corresponding to its particular row, and the values in the popins do not all change to those of the row lead-selected in the main table.
    I can't trace the problem when a checkbox group is used in place of the table.
    Please help me in this regard. I have to place a checkbox group, not a table, in the row popin.
    Thanks and Regards
        Kiran Kumar K

    I have done the same thing successfully with the normal CheckBox UI element. Try using a CheckBox in your table cell editor instead of a CheckBoxGroup.

  • Problems with RAC and XA: Fallback

    Hello,
    we are seeing problems with RAC and XA (Tuxedo 11, DB 11.2), specifically encountering "ORA-24798: cannot resume the distributed transaction branch on another instance".
    The first scenario relates to fallback after a RAC node failure. There are two servers, S1 and S2. S1 makes an ATMI call to S2. Both servers are in the same Tuxedo group, using TMS_ORA. RAC is set up for failover (BASIC), no load balancing.
    The sequence is:
    - S1 and S2 are connected to the same RAC node n1. All is well.
    - RAC node n1 fails. S1, S2 and the TMS_ORA all fail over to RAC node n2. After the failover has happened, all is well.
    - RAC node n1 recovers. All is still well (as there is no automatic fallback).
    - S1 (or S2) is restarted (either intentionally or because of a crash). Since n1 is up again, S1 connects to n1. Now we get ORA-24798. Permanently.
    S1 is connected to n1 and S2 is connected to n2. Since both are in the same group, both use the same XA transaction branch. When called, S2 attempts to JOIN the transaction branch that S1 started. But the DB (11.2) does not allow the same branch to span more than one node. Hence the ORA-24798.
    This seems to be a severe limitation in the combination of Tuxedo, XA and RAC. It basically means we still have to use DTP services, even with Tuxedo 11 and DB 11.2. Or are we missing something?
    We could put S1 and S2 into different groups, but that seems to be inefficient, and not practical for a real application (10s of servers).
    I am extrapolating from this that RAC load balancing would also not work, as S1 and S2 could be connected to different RAC nodes.
    Roger

    Roger,
    When using an external transaction manager such as Tuxedo you should still declare Oracle services as DTP services when using Oracle Database 11g. The Tuxedo documentation is not clear about this. The relevant 11gR2 RAC documentation is at http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/hafeats.htm and states
    "An XA transaction can span Oracle RAC instances by default, allowing any application that uses the Oracle XA library to take full advantage of the Oracle RAC environment to enhance the availability and scalability of the application.
    "GTXn background processes support global (XA) transactions in an Oracle RAC environment. The GLOBAL_TXN_PROCESSES initialization parameter, which is set to 1 by default, specifies the initial number of GTXn background processes for each Oracle RAC instance. Use the default value for this parameter clusterwide to allow distributed transactions to span multiple Oracle RAC instances. Using the default value allows the units of work performed across these Oracle RAC instances to share resources and act as a single transaction (that is, the units of work are tightly coupled). It also allows 2PC requests to be sent to any node in the cluster.
    "Before Release 11.1, the way to achieve tight coupling in Oracle RAC was to use Distributed Transaction Processing (DTP) services, that is, services whose cardinality (one) ensured that all tightly-coupled branches landed on the same instance—regardless of whether load balancing was enabled. Tightly coupled XA transactions no longer require the special type of singleton services to be deployed on Oracle RAC databases if the XA application does not join or resume XA transaction branches. XA transactions are transparently supported on Oracle RAC databases with any type of service configuration.
    A"n external transaction manager, such as Oracle Services for Microsoft Transaction Server (OraMTS), coordinates DTP/XA transactions. However, an internal Oracle transaction manager coordinates distributed SQL transactions. Both DTP/XA and distributed SQL transactions must use the DTP service in Oracle RAC."
    This issue came up earlier this year in another newsgroup thread at https://forums.oracle.com/forums/thread.jspa?threadID=2165803
    Regards,
    Ed

  • How do i use classloaders to create singletons

    I have some code that correctly creates a singleton because the code runs within a client's VM, and there should only be one instance of the class per user. But for testing purposes I would like to mimic two users; to do this, each of them requires its own instance of the singleton. I have read that with custom classloaders there could be one instance of each singleton per classloader, but I don't understand how to do this. Can someone give me an example?

    As I understand it you are simulating what amounts to
    a number of instances running in separate JVMs
    (probably on different machines) by running multiple
    instances in the same JVM. The natural way to do this
    would be to start multiple threads, each representing
    the action of one of these clients. (To do it
    sequentially would be a very poor test, since you
    need to cope with multiple, simultaneous
    activities).
    Yes, correct.
    >
    The use of ThreadLocal enables you to have a separate
    instance of the "singleton" class for each thread.
    Any calls to the getInstance() method in the class
    made within one of the threads will return the
    details of the user "logged in" on that thread.
    Ok, fine, but I've simplified the use case: there are a number of singletons involved (most of which are not called directly by the test code but by each other), and all of these would have to be identified and modified.
    InheritableThreadLocal extends this by passing the
    same instance to any child threads formed in one of
    these threads after initially instantiating an
    instance in a Thread (quite possibly unnecessary).
    Not needed (thanks for explaining it though).
    And you can, if you wish, leave this in place in the
    production version, provided you do the login in the
    root thread of the client JVM.
    We do not want to allow the client to do such things.
    This, I have to say, would be a vastly less messy
    solution (and a much milder distortion between test
    and production) than anything based on ClassLoaders
    could possibly provide. Using custom class loaders
    quickly gets very messy indeed.
    I was hoping that by having two threads, each defining its own classloader instance, I could get around the singleton behaviour; this is the only problem I am trying to solve. Obviously this would not be desirable in production, but if it solves my testing problem I'll be happy; whether or not this is possible I still can't ascertain.
    If you have existing code you aren't allowed to
    change, then the best solution is probably to run
    multiple JVMs, exactly as in the real live case.
    We are using JUnit and taking advantage of its reporting facilities, Ant integration and so on. If I were to have two JVMs, I would have to split my test into two cooperating tests that would have to run in parallel but that, as far as JUnit was concerned, were two separate tests. I can see this causing problems with reporting and causing side effects on other tests if something failed. I'm aware that I am not really writing 'unit' tests in the strict sense, but the JUnit framework provides advantages over plain old Java.
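    For what it's worth, here is a minimal sketch of the ThreadLocal approach described above (class and method names are invented; real code would need one such wrapper per singleton, or a shared helper):
    // Hypothetical sketch: one "singleton" instance per thread, so each test thread
    // can simulate a separate user without separate JVMs or custom class loaders.
    public class SessionSingleton {

        private static final ThreadLocal<SessionSingleton> INSTANCE = new ThreadLocal<SessionSingleton>() {
            protected SessionSingleton initialValue() {
                return new SessionSingleton();      // each thread lazily gets its own instance
            }
        };

        private String user;                        // per-"client" state

        private SessionSingleton() { }              // no public construction, as in a normal singleton

        public static SessionSingleton getInstance() {
            return INSTANCE.get();                  // same thread -> same instance; different thread -> different instance
        }

        public void login(String user) { this.user = user; }
        public String getUser()        { return user; }
    }
    And, purely for completeness, a sketch of the class-loader trick mentioned in the original question: loading the same singleton class through two isolated class loaders gives two independent sets of static state. The class name and the directory are assumptions for illustration only.
    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class TwoLoadersDemo {
        public static void main(String[] args) throws Exception {
            // Directory containing the compiled singleton class; it is loaded through
            // these loaders rather than the normal application class loader.
            URL[] path = { new File("isolated-classes/").toURI().toURL() };

            // parent = null, so loading is not delegated to the application class loader
            ClassLoader loaderA = new URLClassLoader(path, null);
            ClassLoader loaderB = new URLClassLoader(path, null);

            Class<?> a = loaderA.loadClass("com.example.MySingleton");   // hypothetical class name
            Class<?> b = loaderB.loadClass("com.example.MySingleton");

            System.out.println(a == b);        // false: two distinct Class objects, two sets of static state
            Object one = a.getMethod("getInstance").invoke(null);
            Object two = b.getMethod("getInstance").invoke(null);
            System.out.println(one == two);    // false: two independent "singletons"
        }
    }
    As the reply above warns, this gets messy quickly as soon as the singletons reference each other or other application classes, so the ThreadLocal route is usually the easier one for tests.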

  • Problem with filling nodes of a context with data

    hi,
    i've got the following problem with filling a controller context:
    the context of the controller looks like:
    Context
    |-Node1             0..n singleton
      |-Subnode1        0..n singleton
      | |-SubVal1.1
      | |-SubVal1.2
      |-Subnode2        0..n singleton
      | |-Subval2.1
      | |-Subval2.2
      |-Val1.1
      |-Val1.2
    That means every element of Node1 should have its own Subnode elements and Val1 values.
    in wdDoInit() of the controller I fill the context like this:
    Collection Node1, Subnode1, Subnode2;
    for (Iterator iter = Node1.iterator(); iter.hasNext();) {
       newNode1NodeElement = wdContext.createNode1Element();
       newNode1NodeElement.set...   // setting the values
       wdContext.nodeNode1().addElement(newNode1NodeElement);
       for (Iterator iter2 = Subnode1.iterator(); iter2.hasNext();) {
          newSubnode1NodeElement = wdContext.createSubnode1Element();
          newSubnode1NodeElement.set...   // setting the SubVal1.x
          wdContext.nodeSubnode1().addElement(newSubnode1NodeElement);
       }
       for (Iterator iter3 = Subnode2.iterator(); iter3.hasNext();) {
          newSubnode2NodeElement = wdContext.createSubnode2Element();
          newSubnode2NodeElement.set...   // setting the SubVal2.x
          wdContext.nodeSubnode2().addElement(newSubnode2NodeElement);
       }
    }
    I've got the impression that ALL my subnodes end up in the FIRST Node1 element. Is there an error in the code above? At first I see all the values in the views for the first Node1 element, and if I navigate to the next Node1 element, all the views are empty.
    For every node (Node1, Subnode1, Subnode2) I've got a separate view that maps its context to the corresponding node of the controller context, e.g. for the Subnode1 view:
    Context                  Context
    |                        ....
    |- ViewNode      --->    ..|- Subnode1
      |- SubVal1.1   --->    ..   |-SubVal1.1
      |- SubVal1.2   --->    ..   |-SubVal1.2
    In these views, I navigate through the nodes via
    wdContext.nodeViewNode().move...()
    In the Subnode1 view I see SubVal1.1 and SubVal1.2 (that's what I want) AND additionally SubVal2.1 and SubVal2.2 (that's what I don't want...).
    kind regards, achim
    PS: I've studied the Master/Detail tutorial and I think the choice of cardinality 0..n and type singleton is correct in my case.

    hmm, let's look at the code:
    for (Iteration Node1) {
      newNode1NodeElement = wdContext.createNode1Element();
      wdContext.nodeNode1().addElement(newNode1NodeElement);
      for (Iteration SubNode1) {
         newSubNode1NodeElement = wdContext.createSubNode1Element();
         newNode1NodeElement.nodeSubNode1().addElement(newSubNode1NodeElement);
         for (Iteration SubNode1.1) {
            newSubNode1.1NodeElement = wdContext.createSubNode1.1Element();   // <-- this line
            newSubNode1NodeElement.nodeSubNode1.1.addElement(newSubNode1.1NodeElement);
         }
      }
      for (Iteration SubNode2) {
         newSubNode2NodeElement = wdContext.createSubNode2Element();
         newNode1NodeElement.nodeSubNode2().addElement(newSubNode2NodeElement);
      }
    }
    Is there an error in creating the SubNode1.1 element (the line marked above)?
    If the code is correct, perhaps it's only a viewing problem:
    I use views that point to every node and display the values in that node. If I move to another element in the Node1 view, the values for SubNode1 point to the correct values too, but the values for SubNode1.1 still stay on the old values. Is the move of a grandparent node not correctly propagated to its first child?
    kr, achim
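    In case it helps later readers, here is a condensed sketch of the pattern the reply above is showing: when the child nodes are non-singleton (singleton=false), child elements have to be added through the node of the parent element they belong to, not through wdContext, which addresses only the child node of the lead-selected (by default the first) parent element; with singleton child nodes, a supply function is the usual alternative, as far as I know. The typed accessors below (IPrivateMyView, createNode1Element, nodeSubnode1, the setters) are the kind of names Web Dynpro would generate for a context like the one above and are assumptions, not code from the actual project:
    // Fragment only - relies on the interfaces Web Dynpro generates from the context.
    // node1Data / subnode1Data: java.util.List collections holding the backend data (assumed).
    for (int i = 0; i < node1Data.size(); i++) {
        IPrivateMyView.INode1Element parent = wdContext.createNode1Element();
        parent.setVal1_1("parent value " + i);
        wdContext.nodeNode1().addElement(parent);

        for (int j = 0; j < subnode1Data.size(); j++) {
            IPrivateMyView.ISubnode1Element child = wdContext.createSubnode1Element();
            child.setSubVal1_1("child value " + j);
            // Key point: add the child to THIS parent's node, not to wdContext.nodeSubnode1(),
            // otherwise every child ends up under the first / lead-selected Node1 element.
            parent.nodeSubnode1().addElement(child);
        }
    }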
