Long running servlet in a single node of a cluster

Good evening,
I am developing a J2EE application that consists of a number of web services that perform background processing of relatively long-running jobs.  Status of the jobs needs to be reported back to the SAP ABAP system, which we do via JCo.  All communication back to ABAP occurs in a single long-running, JMS-driven servlet.  This callback servlet uses both a timestamp (the last time status was sent back) and a count of status events to determine when to send data to the ABAP system.
All of this works great in a single Java instance J2EE configuration.  However, when we introduce clustering into the configuration, we get one instance of the callback servlet per J2EE server process.  Is there an easy way to configure the servlet so that it runs in only one instance, on a single J2EE server process?
TIA,
- Bill

Hi Bill,
Launching threads manually, even from a web container, is not considered good J2EE practice.
If each application launches its own set of threads, that can easily crash the server. Another drawback of such a long-running servlet is that it blocks an application thread of the server. One last point: it seems you are storing the data in intermediate variables in the servlet without any persistence. What will happen if the server crashes? Can you afford to lose the data?
I am not suggesting that you launch threads from the MDB. I am just saying that you could define that single MDB and send messages directly to it. You do the job inside onMessage, with no new thread. You are guaranteed that no other call will be executed at the same time, that nothing will be lost if the server crashes, and that you will not use resources when there is no traffic.
Btw, please feel free to give me a phone call or write an email to further discuss the issue. You can take the details from your CSN message, which I am currently processing.
Best Regards
Peter
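
A minimal sketch of the approach Peter describes, assuming an EJB 2.x message-driven bean bound to a JMS queue; the class name is illustrative, and the queue binding and the actual JCo call are left out:

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

public class StatusCallbackMDB implements MessageDrivenBean, MessageListener {

    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
    public void ejbCreate() { }
    public void ejbRemove() { }

    // Each status event arrives as a JMS message; the work happens here, inside the
    // container-managed transaction, with no hand-rolled threads or servlet loop.
    public void onMessage(Message message) {
        try {
            // read the status payload from the message and push it to ABAP via JCo (call omitted)
        } catch (Exception e) {
            ctx.setRollbackOnly();   // force redelivery rather than losing the event
        }
    }
}

If the MDB pool is limited to a single instance (a vendor-specific deployment setting), messages are processed one at a time, and anything not yet processed simply stays on the queue if the server goes down.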

Similar Messages

  • Rapid clone 11.5.10  from single node to Veritas cluster environment

    Is there anybody who has done the same and got it working? Please shed some light.
    version 11.5.10
    Source
    DB tier : DB, CM and Admin ( OS : Solaris 10 ), single node
    App tier : FrmSrv, iAS ( OS : Solaris 10), single node
    Target
    DB tier : DB, CM, and Admin ( OS : Solaris 10), 2-nodes Veritas Cluster
    (active/passive)
    App tier : FrmSrv, iAS (OS : Solaris 10), 2-nodes load balancer
    I've done single node to single node rapid clones many times, no problem.
    But with this single node to multi-node clone, no luck.
    Your input is much appreciated.

    Hi
    You can use the Advanced Cloning section (Section 4) of the cloning document #230672.1
    to clone single node to multi-node systems.
    For web server load balancing, follow MetaLink Note #217368.1.
    Regards
    Srinath

  • Long running servlet

    I wrote a servlet for Tomcat to update a DB table at 5-minute intervals for about 1.5 hours. I use a parameter to control its start and stop. The 5-minute interval is controlled by Thread.sleep(300000L). Although I only need one instance of this servlet, I use some static variables in the servlet so that even if a new instance is initialized by Tomcat for whatever reason, the process will continue. But I have encountered problems.
    First of all, I don't know whether a servlet should be used for such a purpose, or whether there is a better/right way to do this within Tomcat. Secondly, my servlet works sometimes but fails mostly. It usually fails with a NullPointerException on some static variable. I'm not sure if "static" has anything to do with it, but all my failed runs stopped at static variables. I hope somebody can point me in the right direction.

    I would recommend that you look at [Quartz Scheduler|http://www.quartz-scheduler.org/docs/index.html] for your job scheduling.
    As far as the NullPointerException goes, we need to see your code and the stack trace of the error to be able to help you with that...
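
    For example, a minimal Quartz sketch (assuming the Quartz 2.x API on the classpath; class, job and trigger names are illustrative) that runs the DB update every 5 minutes for roughly 1.5 hours, with no sleeping servlet thread and no static state:

    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    // The job holds only the per-run work.
    public class DbUpdateJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // update the DB table here (connection handling omitted)
        }
    }

    // Typically wired up once, e.g. from a ServletContextListener, rather than from a servlet:
    class ReportSchedulerBootstrap {
        static void start() throws SchedulerException {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(DbUpdateJob.class).withIdentity("dbUpdateJob").build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("every5min")
                    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                            .withIntervalInMinutes(5)
                            .withRepeatCount(17))   // 18 runs at 5-minute intervals, about 1.5 hours
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        }
    }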

  • Single node file system to 3 node rac and asm migration

    hi,
    We have several UTL_FILE and external table applications running on a 10.2 single node Veritas file system, and we want to migrate to a 3-node RAC ASM environment. What are the best practices for making this migration succeed? Thanks.

    1. Patch to 10.2.0.3 or 10.2.0.4 if not already there.
    2. Dump Veritas from any future consideration.
    3. Build and validate the new RAC environment and then plug in your data using transportable tablespaces.
    Do not expect the first part of step 3 to work perfectly the first time if you do not have experience building RAC clusters.
    This means have appropriate hardware in place for perfecting your skills.
    Be sure, too, that you are not trying to do this with blade or 1U servers. You need a minimum of 2U servers to be able
    to plug in sufficient hardware to have redundant paths to storage and for cache fusion and public access (a minimum of 6 ports).
    And don't let any network admin try to convince you that they can virtualize the network paths: they cannot do so successfully for RAC.

  • Long Running SQL and ORDS Spawns Multiple Database Sessions

    Hi all.
    We have a strange situation when accessing a long running SQL Report (a single APEX Page).
    The SQL takes about 15 mins to run but when I monitor what database sessions are spawned by the APEX Listener, I see multiple sessions all executing the same SQL. It appears that after 6 minutes, the APEX Listener spawns a new database session to execute the same SQL.
    Has anyone seen this before and if so, is there a key setting I am missing as I don't want this to happen. I am new to the APEX Listener and WebLogic so apologies if this is the way it's meant to work but it seems odd that after a certain amount of time (6 minutes in my case) a new database session is spawned to do the same work.
    We are running:
    WebLogic: 10.3.0.6
    APEX_LISTENER_VERSION 2.0.0.354.17.06
    Database: 11.2.0.3.0 Production
    APEX: 4.2.1.00.08
    Cheers for any help.
    Duncs

    Hi Duncan,
    With all respect, you should please rethink your interface.  I would never consider writing a Web application with a request that knowingly takes 15 minutes to return the results.  You can consider doing this asynchronously via DBMS_SCHEDULER and then alerting the user (via email, perhaps) that their results are ready.  Or if you can precompute this in advance, consider using materialized views so that the user's response time is sub-second.
    In an era where the patience of the average end-user is measured in single-digit seconds, it is impractical to ever expect an end-user to wait 15 minutes for their resultant Web page.
    Joel

  • How to find out the IP@s of all nodes in a cluster?

    Is there any way to retrieve the IP addresses of all nodes in a cluster?
    The problem is the following. We intend to write an administration program
    that administers all nodes of a cluster using rmi (e.g. tell all singletons
    in the cluster to reload configuration values etc.). My understanding is
    that rmi only talks to a single node in a cluster. It would be a convenient
    feature if the administration program could figure out all nodes in a
    cluster by itself and then administers each node sequentially. So far we're
    planning to pass all IP addresses to the administration program e.g. as
    command line arguments but what if a node gets left out due to human error?
    Thanks for your help.
    Bernie

    There is no public interface to inquire about the IP addresses of the servers in a cluster. If you use WLS 6.0, there is an administrative console that uses JMX to manage the cluster. Perhaps that would be of use to you?
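
    As a rough illustration only: the snippet below uses the standard javax.management.remote API, which postdates WLS 6.0 (that release exposed its MBeans through its own proprietary interfaces), and the service URL, object-name pattern and attribute name are all placeholders. It shows the general shape of querying server MBeans over JMX for their addresses.

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ClusterAddressLister {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; a real deployment would use its documented JMX service URL.
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://adminhost:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // Placeholder object-name pattern and attribute; the real domain, keys and
                // attribute names depend on the MBeans the server actually registers.
                Set<ObjectName> servers = conn.queryNames(new ObjectName("example:Type=Server,*"), null);
                for (ObjectName server : servers) {
                    System.out.println(server + " -> " + conn.getAttribute(server, "ListenAddress"));
                }
            } finally {
                connector.close();
            }
        }
    }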

  • Long running reports using java servlets

    Hello,
    I'm running into a problem using Java servlets to produce database reports and send them back to the client browser as Excel or HTML. The problem comes into play when the user has submitted two reports and then goes to run a third, and the browser hangs. I realize now that this is because of the two active connections the user already has to the app server. Does anyone have any suggestions on how to get around this? I'm now trying to write the report to the server and provide the browser with a URL, but it seems as though the connection isn't totally closed even though I close the response PrintWriter... Any suggestions would be greatly appreciated.

    Unfortunately I don't have an answer to your question, but:
    If the reports are large/long running, should they be generated on demand? You could schedule the reports to be generated every day, month or whenever appropriate, stored on the server and then quickly pulled by the user whenever needed.
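
    If you go that route, the servlet that hands the report back can be trivial. A minimal sketch, assuming the reports have already been written to a known directory by a scheduled job (the directory, parameter name and content type are placeholders):

    import java.io.*;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    public class ReportDownloadServlet extends HttpServlet {
        private static final String REPORT_DIR = "/var/reports";   // placeholder location

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String name = req.getParameter("name");                 // e.g. daily-sales.xls
            File file = (name == null) ? null : new File(REPORT_DIR, name);
            if (file == null || name.contains("..") || !file.isFile()) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);    // reject missing or unsafe names
                return;
            }
            resp.setContentType("application/vnd.ms-excel");
            resp.setContentLength((int) file.length());
            InputStream in = new FileInputStream(file);
            OutputStream out = resp.getOutputStream();
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } finally {
                in.close();
            }
        }
    }

    The browser then issues a short-lived request that just streams a file, so it never holds a connection open for the duration of report generation.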

  • Is it possible to run two appl in a single node

    Hi folks,
    Please tell me if it is possible to run two applications in a single node, because I have deployed one and it is running fine, but when I deploy and run another one it shows "503 Service Unavailable" in the browser after the deployment.
    cheers
    prasenjit

    Can you please explain the below:
    Why are you creating two VOs for one table?
    What is the relation between these two VOs?
    Thanks,
    Dilip

  • How to run all Concurrent Requests in a single node in a multi node env

    DB;11.1.0.7
    Oracle Apps:12.1.1
    OS:Linux 86x64 Red Hat
    PCP setting is enabled.
    Load Balancer is enabled.
    APPLDCP=ON
    Could anyone please share - How to run all Concurrent Requests in a single node in a multi node env where there are 3 web tier nodes?
    Thanks for your time!
    Regards,

    PCP setting is enabled.
    Load Balancer is enabled.
    APPLDCP=ON
    Could anyone please share - How to run all Concurrent Requests in a single node in a multi node env where there are 3 web tier nodes?
    Concurrent requests will be processed by the CM nodes and it has nothing to do with the 3 web tier nodes you have.
    If you mean the database instance, then please see these docs.
    How to run a concurrent program against a specific RAC instance with PCP/RAC setup? [ID 1129203.1]
    In A PCP/RAC Configuration, How To Find Out On Which RAC Instance FNDSM Is Currently Running? [ID 1089396.1]
    Thanks,
    Hussein

  • Should EntryProcessors be used for long-running operations?

    Hi Gene and All,
         a couple of other questions come from the seemingly inexhaustible list :-)
         - what happens if the caller of an InvocableMap.invokeAll or invoke method dies?
         - all entryProcessors complete regardless of the client being there or not
         - all already started process method calls complete, the unprocessed entries will not get processed
         - something else happens
         - what happens if the caller of an InvocableMap.aggregate method with a parallel-aggregator dies
         - all aggregate methods in the parallel-aggregator complete
         - the aggregate methods in the parallel-aggregator stop during processing
         - something else happens
         - should an entryprocessor or a parallel-aware entryaggregator implement a comparably long-running operation (e.g. jdbc access), or does that seriously affect performance of other concurrent operations within the cluster node or the entire cluster (e.g. because of blocking other events/requests)?
         - should the work manager be used instead for these kinds of things (e.g. jdbc access)?
         Thanks and best regards,
         Robert

    Robert,
         As soon as an EntryProcessor or EntryAggregator gets delivered to the server nodes, it will be executed regardless of the requestor's state.
         With regard to long-running operations, the only thing you have to be conscious of is the number of worker threads allocated for such processing. Since each request is issued by a single client thread, that suggests allocating as many worker threads (across the cache server tier) as there are client threads (across the presentation/application tier).
         Regards,
         Gene
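
     To make the threading point concrete, here is a minimal sketch of an entry processor, assuming the Coherence 3.x API; the class name is illustrative. The work in process() runs on a cache-server worker thread and the entry stays locked for its duration, which is why long JDBC calls inside it consume the worker pool discussed above.

     import com.tangosol.util.InvocableMap;
     import com.tangosol.util.processor.AbstractProcessor;

     public class MarkProcessedProcessor extends AbstractProcessor {
         public Object process(InvocableMap.Entry entry) {
             Object value = entry.getValue();
             // ... transform the value; keep this short, or size the worker pool accordingly ...
             entry.setValue(value);
             return Boolean.TRUE;
         }
     }

     Invoked from the client with something like cache.invokeAll(keys, new MarkProcessedProcessor()); per Gene's note, once it reaches the server nodes it completes whether or not the calling client survives.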

  • Test client pinned to single node in production

    WL 6.1 sp2, Solaris 2.8
    Currently we have a bunch of SLSBs deployed in a cluster out in production and a web tier that usually gets and invokes a single SLSB, and they're running happily. But every once in a while we get an asymmetrical exception, where one node in the cluster is giving us bad results. What I'd like to do is write some simple test clients that can pin to a particular node and diagnose just that node while the regular client (web tier) still round-robins in production.
    Our SLSBs do not have <stateless-clustering> elements at all in weblogic-ejb-jar.xml, so <stateless-bean-is-clusterable> defaults to true. My understanding is this means WL will round-robin at 3 different levels: JNDI Context, EJBHome and EJBObject, unless server and client are co-located, in which case WL will always pick the local object.
    What I have tried to do is write the test client with a single URL in PROVIDER_URL and PIN_TO_PRIMARY_SERVER set to true in the InitialContext construction. This does not seem to work; by the time I get the EJBHome, create the EJBObject and invoke a test method, I see round-robin occurring. I can understand a reason FOR this behavior, and a counter-argument AGAINST this behavior. The reason why WL is still round-robining is that only the Context is pinned to the primary server; the subsequent EJBHome and EJBObject are cluster-aware, and hence will round-robin, which in fact they do. But then the argument against this observation is that once I retrieve the InitialContext, the subsequent EJBHome and EJBObject are all available locally. So shouldn't WL do co-location optimization and hence never round-robin?
    Here are some alternative approaches I've thought up so I can write a test client that pins to a specific EJB server:
    1) Create a second set of DDs in weblogic-ejb-jar.xml, this time setting stateless-bean-is-clusterable to false, and have the test client use this for pinning.
    2) Expose a co-located servlet that will accept EJB invocations (via SOAP or customized RPC). Servlet invocation will always be IP-specific, and hopefully co-location of the web and EJB tier will keep the call on that node.
    The problem with #1 is 2 sets of DDs, hence 2 sets of EJBHomes/Objects that behave slightly differently.
    The problem with #2 is the complexity of a new web tier just for pinning, which then also means the test client doesn't exactly replicate my actual web client calls.
    Is there a simple solution to isolate and diagnose a single node in a production cluster? Am I missing something? Much appreciated!
    Gene

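    For reference, a minimal sketch of the pinning attempt described above, assuming the WebLogic t3 client classes are on the classpath; the host, port and JNDI name are placeholders, and this reproduces the attempt rather than a confirmed fix:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import weblogic.jndi.WLContext;

    public class PinnedTestClient {
        public static void main(String[] args) throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://node1.example.com:7001");   // one node, not the cluster address
            env.put(WLContext.PIN_TO_PRIMARY_SERVER, "true");
            Context ctx = new InitialContext(env);
            Object homeRef = ctx.lookup("ejb/MyStatelessBean");             // placeholder JNDI name
            // narrow to the EJBHome, create the EJBObject and call the test method here;
            // as described above, those stubs may still be cluster-aware and round-robin
        }
    }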

  • Convert 11.1.0.7 EBS 12.1.3 single node to RAC with ASM on 11.2.0.3 cluster

    Hi
    I am trying to convert a single node 11.1.0.7 database with EBS 12.1.3 from single node to a 2-node RAC with ASM on Oracle Clusterware 11.2.0.3 on AIX 6.1 (64-bit).
    I am following Oracle MetaLink document Note ID 466649.1. My 11.2.0.3 clusterware is already set up correctly and working fine as per document E24614-03.
    However, I am currently stuck in section 3.3: Configure TNS listener (Doc ID 466649.1). The netca is unable to start the listener and fails with the error below.
    However, when I start the same listener locally with the lsnrctl command, it starts.
    trace_OraDb11g_home1-1212159AM174.log
    ====================================
    [AWT-EventQueue-0] [18:34:31:887] [RuntimeExec.runCommand:144] runCommand: process returns 123
    [AWT-EventQueue-0] [18:34:31:887] [RuntimeExec.runCommand:161] RunTimeExec: output>
    [AWT-EventQueue-0] [18:34:31:887] [RuntimeExec.runCommand:164] Attempting to start `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr
    ` on member `jxn-ux-ebs1d1q`
    [AWT-EventQueue-0] [18:34:31:887] [RuntimeExec.runCommand:164] Start of `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr` on member
    `jxn-ux-ebs1d1q` failed.
    [AWT-EventQueue-0] [18:34:31:888] [RuntimeExec.runCommand:164] Attempting to stop `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr`
    on member `jxn-ux-ebs1d1q`
    [AWT-EventQueue-0] [18:34:31:888] [RuntimeExec.runCommand:164] Stop of `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr` on member
    `jxn-ux-ebs1d1q` succeeded.
    [AWT-EventQueue-0] [18:34:31:888] [RuntimeExec.runCommand:164] CRS-2632: There are no more servers to try to place resource 'ora.jxn-ux-e
    bs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr' on that would satisfy its placement policy
    [AWT-EventQueue-0] [18:34:31:888] [RuntimeExec.runCommand:170] RunTimeExec: error>
    [AWT-EventQueue-0] [18:34:31:889] [RuntimeExec.runCommand:173] CRS-0223: Resource 'ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr'
    has placement error.
    [AWT-EventQueue-0] [18:34:31:889] [RuntimeExec.runCommand:173]
    [AWT-EventQueue-0] [18:34:31:889] [RuntimeExec.runCommand:192] Returning from RunTimeExec.runCommand
    [AWT-EventQueue-0] [18:34:31:890] [HAOperationImpl.runCommand:1221] signed exit value = 123
    [AWT-EventQueue-0] [18:34:31:890] [HAOperationImpl.runCommand:1258] set status HA_RES_RELOCATE_ERR
    [AWT-EventQueue-0] [18:34:31:891] [HAStartOperation.run:84] Returned from executing the HA Operation
    [AWT-EventQueue-0] [18:34:31:891] [HAStartOperation.run:89] OUTPUT> Attempting to start `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q
    .lsnr` on member `jxn-ux-ebs1d1q`
    [AWT-EventQueue-0] [18:34:31:891] [HAStartOperation.run:89] OUTPUT> Start of `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr` on m
    ember `jxn-ux-ebs1d1q` failed.
    [AWT-EventQueue-0] [18:34:31:891] [HAStartOperation.run:89] OUTPUT> Attempting to stop `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.
    lsnr` on member `jxn-ux-ebs1d1q`
    [AWT-EventQueue-0] [18:34:31:892] [HAStartOperation.run:89] OUTPUT> Stop of `ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr` on me
    mber `jxn-ux-ebs1d1q` succeeded.
    [AWT-EventQueue-0] [18:34:31:892] [HAStartOperation.run:89] OUTPUT> CRS-2632: There are no more servers to try to place resource 'ora.jxn
    -ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr' on that would satisfy its placement policy
    [AWT-EventQueue-0] [18:34:31:893] [HAStartOperation.run:95] ERROR> CRS-0223: Resource 'ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.l
    snr' has placement error.
    [AWT-EventQueue-0] [18:34:31:893] [LocalCommand.execute:56] LocalCommand.execute: Returned from run method
    [AWT-EventQueue-0] [18:34:31:893] [HAOperationResult.getOutputAll:114] outLine is [CRS-2632: There are no more servers to try to place re
    source 'ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr' on that would satisfy its placement policy]
    [AWT-EventQueue-0] [18:34:31:894] [HAOperationResult.getOutputAll:115] errLine is [CRS-0223: Resource 'ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_
    JXN-UX-EBS1D1Q.lsnr' has placement error.]
    [AWT-EventQueue-0] [18:36:39:994] [ca.ConfigureListenerOPS.getNodeNameIndex:-1] getNodeNameIndex: Matching LISTENER_EBSQA_JXN-UX-EBS1D1Q
    [AWT-EventQueue-0] [18:36:39:994] [ca.ConfigureListenerOPS.getListenerRootName:-1] ListenerName is LISTENER_EBSQA
    The crsctl has details as below:
    =====================
    NAME TARGET STATE SERVER STATE_DETAILS
    Cluster Resources
    ora.LISTENER_SCAN1.lsnr
    1 ONLINE ONLINE jxn-ux-ebs2d1q
    ora.LISTENER_SCAN2.lsnr
    1 ONLINE ONLINE jxn-ux-ebs1d1q
    ora.LISTENER_SCAN3.lsnr
    1 ONLINE ONLINE jxn-ux-ebs1d1q
    ora.cvu
    1 ONLINE ONLINE jxn-ux-ebs1d1q
    ora.jxn-ux-ebs1d1q.LISTENER_EBSQA_JXN-UX-EBS1D1Q.lsnr
    1 ONLINE OFFLINE
    ora.jxn-ux-ebs1d1q.vip
    1 ONLINE ONLINE jxn-ux-ebs1d1q
    ora.jxn-ux-ebs2d1q.LISTENER_EBSQA_JXN-UX-EBS2D1Q.lsnr
    1 ONLINE OFFLINE
    Not sure how to fix this .
    Regards
    Ram

    I am trying to convert a single node 11.1.0.7 database with EBS 12.1.3 version from single to 2-node RAC ASM on Oracle clusterware 11.2.0.3 on AIX 6.1 - 64 bit
    You don't need to create listeners; after installing the 11.2.0.3 cluster there will be a LISTENER running from $GRID_HOME,
    along with the SCAN_LISTENERS.

  • Multiple ASM instances on a single node

    Can I have multiple ASM instances on a single node? This is to have each instance supporting a different environment (dev, stage, etc.).
    Thanks
    Sannidhi

    I had been discussing the same issue with someone from Oracle. I asked for multiple ASMs on a server so that we could have separate ASMs running for, say, 10g and 11g.
    He explained that we should think of ASM as being the same as the Veritas File System. We don't run multiple instances of Veritas on our servers. One single Veritas "instance" (set of drivers) provides all the Veritas FS mountpoints on the server. Similarly, there should be only one ASM instance on a server.
    (Substitute UFS, ZFS or CFS for Veritas in the above example and you still have only one filesystem manager on a server providing one type of filesystem, although you could have, say, UFS and Veritas co-exist, just as you could have Veritas, ASM and raw devices co-exist on a server.)
    Hemant K Chitale
    Edited by: Hemant K Chitale on Sep 29, 2009 11:50 AM

  • GI installation on a single-node cluster error.

    Hello, I am trying to install GI on a single-node cluster (Solaris 10 / SPARC), but the root.sh script fails with the following error (this is not a GI installation for a Standalone Server):
    root@selvac./dev/ASM/OCRVTD_DG # /app/oracle/grid/11.2/root.sh
    Running Oracle 11g root script...
    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /app/oracle/grid/11.2
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /var/opt/oracle/oratab file...
    Entries will be added to the /var/opt/oracle/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /app/oracle/grid/11.2/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding daemon to inittab
    ACFS-9200: Supported
    ACFS-9300: ADVM/ACFS distribution files found.
    ACFS-9312: Existing ADVM/ACFS installation detected.
    ACFS-9314: Removing previous ADVM/ACFS installation.
    ACFS-9315: Previous ADVM/ACFS components successfully removed.
    ACFS-9307: Installing requested ADVM/ACFS software.
    ACFS-9308: Loading installed ADVM/ACFS drivers.
    ACFS-9327: Verifying ADVM/ACFS devices.
    ACFS-9309: ADVM/ACFS installation correctness verified.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'selvac'
    CRS-2676: Start of 'ora.mdnsd' on 'selvac' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'selvac'
    CRS-2676: Start of 'ora.gpnpd' on 'selvac' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'selvac'
    CRS-2672: Attempting to start 'ora.gipcd' on 'selvac'
    CRS-2676: Start of 'ora.cssdmonitor' on 'selvac' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'selvac' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'selvac'
    CRS-2672: Attempting to start 'ora.diskmon' on 'selvac'
    CRS-2676: Start of 'ora.diskmon' on 'selvac' succeeded
    CRS-2676: Start of 'ora.cssd' on 'selvac' succeeded
    ASM created and started successfully.
    Disk Group OCRVTD_DG created successfully.
    The ora.asm resource is not ONLINE
    Did not succssfully configure and start ASM at /app/oracle/grid/11.2/crs/install/crsconfig_lib.pm line 6465.
    /app/oracle/grid/11.2/perl/bin/perl -I/app/oracle/grid/11.2/perl/lib -I/app/oracle/grid/11.2/crs/install /app/oracle/grid/11.2/crs/install/rootcrs.pl execution failed
    I also found the "PRVF-5150: Path OCRL:DISK1 is not a valid path on all nodes" error, but as I have read it is a bug, I ignored it. But...
    I think my ASM_DG for OCR and voting is OK, accessible by the grid user and with 660 permissions. It seems ASM does not start, or does not start in time.
    Any help is welcome.
    Thanks in advance.

    Thanks a lot for the hint. I had already checked this doc, but I think it is not the problem. Actually, the error "ora.asm is not ONLINE" is not correct; after root.sh fails, ora.asm is ONLINE:
    root@selvac./app/oracle/grid/11.2/bin # ./crsctl check resource ora.asm -init
    root@selvac./app/oracle/grid/11.2/bin # ./crsctl stat resource ora.asm -init
    NAME=ora.asm
    TYPE=ora.asm.type
    TARGET=ONLINE
    STATE=ONLINE on selvac
    The last part of the /app/oracle/grid/11.2/cfgtoollogs/crsconfig/rootcrs_selvac.log file reads :
    >
    ASM created and started successfully.
    Disk Group OCRVTD_DG created successfully.
    End Command output2011-04-14 13:24:16: Executing cmd: /app/oracle/grid/11.2/bin/crsctl check resource ora.asm -init
    2011-04-14 13:24:17: Executing cmd: /app/oracle/grid/11.2/bin/crsctl status resource ora.asm -init
    2011-04-14 13:24:17: Command output:
    NAME=ora.asm
    TYPE=ora.asm.type
    TARGET=ONLINE
    STATE=OFFLINE
    End Command output2011-04-14 13:24:17: Checking the status of ora.asm
    [... the same "crsctl status resource ora.asm -init" check repeats every ~5 seconds (13:24:22 through 13:25:04), each time reporting TARGET=ONLINE, STATE=OFFLINE ...]
    2011-04-14 13:25:09: The ora.asm resource is not ONLINE
    2011-04-14 13:25:09: Running as user grid: /app/oracle/grid/11.2/bin/cluutil -ckpt -oraclebase /app/grid -writeckpt -name ROOTCRS_BOOTCFG -state FAIL
    2011-04-14 13:25:09: s_run_as_user2: Running /bin/su grid -c ' /app/oracle/grid/11.2/bin/cluutil -ckpt -oraclebase /app/grid -writeckpt -name ROOTCRS_BOOTCFG -state FAIL '
    2011-04-14 13:25:10: Removing file /var/tmp/mbahSaGPn
    2011-04-14 13:25:10: Successfully removed file: /var/tmp/mbahSaGPn
    2011-04-14 13:25:10: /bin/su successfully executed
    2011-04-14 13:25:10: Succeeded in writing the checkpoint:'ROOTCRS_BOOTCFG' with status:FAIL
    2011-04-14 13:25:10: ###### Begin DIE Stack Trace ######
    2011-04-14 13:25:10: Package File Line Calling
    2011-04-14 13:25:10: --------------- -------------------- ---- ----------
    2011-04-14 13:25:10: 1: main rootcrs.pl 322 crsconfig_lib::dietrap
    2011-04-14 13:25:10: 2: crsconfig_lib crsconfig_lib.pm 6465 main::__ANON__
    2011-04-14 13:25:10: 3: crsconfig_lib crsconfig_lib.pm 6390 crsconfig_lib::perform_initial_config
    2011-04-14 13:25:10: 4: main rootcrs.pl 671 crsconfig_lib::perform_init_config
    2011-04-14 13:25:10: ####### End DIE Stack Trace #######
    2011-04-14 13:25:10: 'ROOTCRS_BOOTCFG' checkpoint has failed
    So this must be a bug. During root.sh execution ora.asm is OFFLINE, but after it fails it is ONLINE. It might be a question of waiting/retrying or a timeout, as I see the "Checking the status of ora.asm" command is repeated several times during root.sh, but perhaps not often enough. Now root.sh has failed and the installation has halted, but ASM is ONLINE.
    Any other ideas?
    Thanks again.

  • ORABPEL-05002 for long running process

    Hi everybody,
    My question relates to a long-running process I have designed which, after running for a couple of days, ends by reporting the ORABPEL-05002 error:
    ===============================================================
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================
    Looking in the Manual Recovery screen, I can see an activity I can recover. It is an assign activity where I'm doing a single boolean assignment.
    Of course, together with the ORABPEL-05002 error I also got the 'Transaction was rolled back: timed out' message. Note that I have modified the transaction-timeout value to 180000. The error occurs during the night, with no heavy load on the server.
    Recovering the assign activity brings the process back to the running state.
    My process pattern:
    while (1 == 1) {
        do activity;
        wait_timeout();
    }
    So, I have the following questions:
    1. What is the cause of this error?
    2. How may I automatically recover this lost activity? RecoveryAgent?
    Any suggestion is appreciated.
    Regards,
    amo
    P.S: the full stack of error messages reported in domain.log:
    ===============================================================
    <2006-09-18 08:08:34,101> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:873)
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:871)
         ... 10 more
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:08:34,129> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,236> <ERROR> <SRH.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "activity manager": Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,274> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,275> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================

    These are the possible causes of the problem and their solutions:
    1. Poor performance of the dehydration database. If you are using Oracle Lite as the dehydration store, please switch to Oracle 9i or 10g. If Oracle 9i/10g is already in use, check the database parameters 'processes' and 'sessions' to make sure it can handle the expected throughput.
    2. OC4J has too few available connections to the dehydration database. Increase the maxConnection number of the BPELServerDataSource in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/data-sources.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/data-sources.xml (mid-tier installation).
    3. The size of the message is too big. There are two ways to deal with this:
       - Increase the transaction timeout in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/server.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/server.xml (mid-tier installation).
       - Decrease the auditLevel from BPELConsole -> Manage BPEL Domain -> Configurations tab. Doing so will reduce the amount of data saved to the dehydration store.
    Cheers
    Anirudh Pucha
