A transaction problem in a cluster environment, help!

          I need to perform a complicated data operation that calls several SQL
          statements in a loop to insert data into the DB. To minimize the cost of DB
          connections, I use one connection to create five statements, which are reused
          for the inserts. You can read the corresponding code below. The code runs
          very well in a stand-alone environment, but a problem occurs in a cluster
          environment: the console shows a timeout because the transaction takes too
          long. In the stand-alone environment it completes in less than one second! In
          both cases I use the same TX data source. My guess is that in the cluster
          environment the transaction processing becomes much more complicated and
          leads to a deadlock. Does anybody agree, or have any experience with this?
          Help, thanks a lot!
          conn = getConnection();
          pstmt3 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACK_CREATE);
          pstmt4 = conn.prepareStatement(DBInfo.SQL_RECEIPT_CREATE);
          pstmt5 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACKAUDIT_INSERT_ALL);
          pstmt6 = conn.prepareStatement(DBInfo.SQL_RECEIPTAUDIT_INSERT_ALL);
          int count = (endno + 1 - startno) / quantity;
          for (int i = 0; i < count; i++) {
              int newstartno = startno + i * quantity;
              int newendno = newstartno + quantity - 1;
              String newStartNO = Formatter.formatNum2Str(newstartno, ConstVar.RECEIPT_NO_LENGTH);
              String newEndNO = Formatter.formatNum2Str(newendno, ConstVar.RECEIPT_NO_LENGTH);
              // fetch the next pack id from the sequence
              pstmt1 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACK_SEQ_NEXT);
              rs1 = pstmt1.executeQuery();
              if (!rs1.next()) return -1;
              int packid = rs1.getInt(1);
              cleanup(pstmt1, null, rs1);
              pstmt3.setInt(1, packid);
              pstmt3.setString(2, newStartNO);
              pstmt3.setString(3, newEndNO);
              pstmt3.setInt(4, quantity);
              pstmt3.setLong(5, expiredt);
              pstmt3.setInt(6, ConstVar.ID_UNIT_TREASURY);
              pstmt3.setInt(7, Status.STATUS_RECEIPTPACK_REGISTERED);
              pstmt3.setLong(8, proctm);
              pstmt3.setInt(9, procUserid);
              pstmt3.setInt(10, ConstVar.ID_UNIT_TREASURY);
              pstmt3.setInt(11, typeid);
              pstmt3.addBatch();
              // audit
              pstmt5.setInt(1, procUserid);
              pstmt5.setInt(2, packid);
              pstmt5.setInt(3, OPCode.OP_RCT_RGT_RECEIPTPACK);
              pstmt5.setLong(4, 0);
              pstmt5.setLong(5, proctm);
              pstmt5.addBatch();
              for (int j = newstartno; j <= newendno; j++) {
                  String receiptNO = Formatter.formatNum2Str(j, ConstVar.RECEIPT_NO_LENGTH);
                  // fetch the next receipt id from the sequence
                  pstmt2 = conn.prepareStatement(DBInfo.SQL_RECEIPT_SEQ_NEXT);
                  rs2 = pstmt2.executeQuery();
                  if (!rs2.next()) return -1;
                  int receiptid = rs2.getInt(1);
                  cleanup(pstmt2, null, rs2);
                  pstmt4.setInt(1, receiptid);
                  pstmt4.setString(2, receiptNO);
                  pstmt4.setInt(3, Status.STATUS_RECEIPT_REGISTERED);
                  pstmt4.setDouble(4, 0.0);
                  pstmt4.setInt(5, 0);
                  pstmt4.setDouble(6, 0.0);
                  pstmt4.setDouble(7, 0.0);
                  pstmt4.setDouble(8, 0.0);
                  pstmt4.setInt(9, procUserid);
                  pstmt4.setLong(10, proctm);
                  pstmt4.setLong(11, expiredt);
                  pstmt4.setInt(12, 0);
                  pstmt4.setInt(13, packid);
                  pstmt4.setInt(14, typeid);
                  pstmt4.addBatch();
                  // audit
                  pstmt6.setInt(1, procUserid);
                  pstmt6.setInt(2, receiptid);
                  pstmt6.setInt(3, OPCode.OP_RCT_RGT_RECEIPTPACK);
                  pstmt6.setLong(4, 0);
                  pstmt6.setLong(5, proctm);
                  pstmt6.addBatch();
              }
          }
          pstmt3.executeBatch();
          cleanup(pstmt3, null);
          pstmt5.executeBatch();
          cleanup(pstmt5, null);
          pstmt4.executeBatch();
          cleanup(pstmt4, null);
          pstmt6.executeBatch();
          cleanup(pstmt6, null);
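A rough count of the statements involved (a sketch, using the startno/endno/quantity variables from the code above with assumed values) shows why this transaction is so sensitive to latency: every sequence fetch is a separate synchronous query, and in a cluster each one crosses the network while the XA transaction stays open.

```java
// Sketch: count the DB round trips the loop above performs in one call.
// The arguments mirror the post's startno/endno/quantity (assumed values).
public class RoundTrips {
    // one pack-sequence query per pack, one receipt-sequence query per
    // receipt, plus the four executeBatch() calls at the end
    static int roundTrips(int startno, int endno, int quantity) {
        int count = (endno + 1 - startno) / quantity; // number of packs
        int receipts = count * quantity;              // receipts inserted
        return count + receipts + 4;
    }

    public static void main(String[] args) {
        // e.g. 10 packs of 100 receipts: 10 + 1000 + 4 = 1014 round trips,
        // all inside a single transaction
        System.out.println(roundTrips(1, 1000, 100));
    }
}
```

On a stand-alone box those round trips are local and cheap; in a cluster with an XA data source each one adds network latency, which alone can push the transaction past the JTA timeout even without a deadlock.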
          

Hello,
Are you using any kind of load balancer, like an F5? I am currently troubleshooting this issue for one of our ADF apps, and I originally suspected the F5 was not sending traffic correctly. We have not set up the adf-config file for HA, and the dev team will fix that. But my concern is that this will just hide my F5 issue.
Thanks,
-Alan

Similar Messages

  • Session Problem With Cluster Environment

    Issue:
    To test our web application, we built a simple cluster environment containing 2 nodes, a web server, and an admin server. If we start all related servers and nodes in the following order, the web server cannot maintain the correct session ID or mirror it across the nodes: the first session/request goes to the first node (Node1) and the second request goes to the second (Node2), but Node2 does not have a valid session ID. As a result, our deployed application gets a Session Time Out message.
    1.     Start WebLogic Application server
    2.     Started the nodes.
    3.     First start the Webserver
    However, when we change the start sequence as follows, the issue will disappear.
    1.     First start the WebServer
    2.     Start WebLogic Application server
    3.     Lastly started the nodes.
    Question:
    Why do different start sequences produce different session-affinity results in the cluster environment?
    Thanks for any responses!

    Hmmm... Never thought about it.
    Somehow I always start the environment in this sequence:
    - Admin Server
    - WebServer
    - Cluster
    I assume you are using the Apache HTTP server with the WebLogic proxy plug-in as your WebServer.
    Here you can configure
    - WebLogicCluster - which is a static starting point for the server list, and
    - DynamicServerList - enables that WebLogic automatically adds new servers to the server list when they become part of the cluster.
    What I can think of is that the last option is set to OFF, in which case, when new servers are added to the cluster, the plug-in cannot proxy requests to them.
    This could explain why the start-up sequence matters, i.e., when you first start the WebServer and then the cluster nodes, the static list will do its work.
    When, on the other hand, the cluster nodes are started before the WebServer, the starting point for the server list is not created, because somehow the WebServer is not receiving notifications.
    One remark is in order: if you did not alter the DynamicServerList property, the default is ON.
    Hope the above makes some sense to you. (http://download.oracle.com/docs/cd/E12840_01/wls/docs103/plugins/index.html)
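    For reference, a minimal httpd.conf fragment covering the two directives discussed above might look like this (hostnames, ports, and the /myapp path are placeholders, and the module name assumes the classic Apache HTTP Server proxy plug-in):

```apache
<IfModule mod_weblogic.c>
    # Static starting point for the server list
    WebLogicCluster node1.example.com:7001,node2.example.com:7001
    # Default is ON: the plug-in adds servers to the list as they join the cluster
    DynamicServerList ON
</IfModule>
<Location /myapp>
    SetHandler weblogic-handler
</Location>
```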

  • Failed to replicate non-serializable object  Weblogic 10 3 Cluster environ

    Hi,
    We have a problem in a cluster environment: it reports that all the objects in the session need to be serialized. Is there any tool or other way to find which objects in the session need to be serialized? There was no issue in WLS 8, but when the application was set up again on WLS 10, we started facing this session replication problem.
    The setup is two managed server instances in a cluster, set to multicast (we also tried unicast).
    stacktrace:
    ####<Jun 30, 2010 7:11:16 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939476284> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    ####<Jun 30, 2010 7:11:19 PM EDT> <Error> <Cluster> <userbser01> <rs002> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1277939479750> <BEA-000126> <All session objects should be serializable to replicate. Check the objects in your session. Failed to replicate non-serializable object.>
    Thanks,

    Hi
    Irrespective of WLS 8.x or WLS 9.x/10.x, in general any object that needs to be synced/replicated across the servers in a cluster should implement Serializable. The object must be able to marshal and unmarshal. The simple reason it did not show in WLS 8.x may be that that version did not surface these details as an Error; maybe it logged them as Info or Warn, or just ignored them. WebLogic Server became more stable and efficient over the versions, from its oldest 4.x, 5.x, 6.x, 7.x up to the latest 10.x, so my guess is that more logic was added to capture all possible errors and scenarios. I worked on WLS 8.1 SP4 to SP6 a long time back and can't remember whether I saw these errors for a cluster domain with non-serializable objects; I vaguely remember seeing them for Portal domains, but I'm not sure. I don't have 8.x installed, otherwise I would have given it a quick shot to confirm.
    So even though it did not show up in WLS 8.x, the underlying rule is still that any object that gets replicated in a cluster needs to implement the Serializable interface.
    Thanks
    Ravi Jegga
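    To locate the offending objects, one pragmatic trick (a generic sketch, not a WebLogic feature) is to round-trip each session attribute through ObjectOutputStream before putting it in the session; the NotSerializableException message names the first non-serializable class it hits:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

// Sketch: test whether a candidate session attribute is actually
// serializable by writing it to an in-memory ObjectOutputStream.
public class SerializableCheck {
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(o); // throws NotSerializableException on the offending field
            return true;
        } catch (java.io.IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSerializable("a string"));   // true
        System.out.println(isSerializable(new Object())); // false: Object is not Serializable
    }
}
```

    Wrapping calls to session.setAttribute with a check like this in a debug build (and logging the exception) pinpoints which attribute breaks replication.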

  • Deploying Java Web Application (WAR-File) into a cluster environment

    Hi,
    we have a web application which has to read from and write to the file system.
    We recently moved to a cluster environment (2 parallel servers), and since then files are processed twice, because the application now runs on both servers.
    Does anybody know how we have to deploy the application correctly in a cluster environment or do we have to change anything in our source code of the application?
    I didn't find any documentation about this.
    At the moment we have deployed the application on one of the two servers only, but I think there must be a better way to solve this problem.
    Thanks for your replies.
    Regards
    Thorsten

    Hi,
    I think first you need to wrap it into an EAR file, then you can deploy it.
    As far as I know, standalone deployment of a WAR is deprecated as of 640.
    similar threads:
    How to deploy .war on NWDI
    Deploying an existing WAR file into the Portal
    Hopefully this tutorial also gives some idea:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/70/13353094af154a91cbe982d7dd0118/frameset.htm
    Regards,
    Ervin

  • PI 7.1 in a cluster environment (multiple ip-adresses): P4 port

    We want to install PI 7.1 on Unix in a cluster environment. Therefore we also installed DEV+QA with virtual hostnames, like the prod system that will be installed later.
    At all sapinst installation screens we used only the virtual hostname <virtual-hostname-server interface>. We also set SAPINST_USE_HOSTNAME=<virtual-hostname-server interface>. However, the P4 port seems to have used the physical hostname: in step 57 of sapinst we got problems, and dev_icm showed:
    [Thr 05] *** ERROR => client with this banner already exists:
    1:<physical-hostname>:35644 {000306f5} [p4_plg_mt.c 2495]
    After we have set
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical hostname>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=127.0.0.1
    the sapinst was successful.
    Now we're not sure how to set these P4 parameters in our future productive cluster environment.
    Our productive system PX1 will live in a HA environment, so we don't want to use the physical hostnames in any profile.
    Our environment will look like:
    HOST-A (<physical-hostname-A>):
    <virtual-hostname-server interface>
    <virtual-hostname-user interface>
    HOST-B (<physical-hostname-B>):
    Normally our prodsystem will live on Host-A (physical-hostname-A). All parameters should
    only take the virtual hostname <virtual-hostname-server-interface>. During switchover the
    virtual hostnames (server and user interface) will be taken over to HOST-B, while the physical
    hostnames of HOST-A and HOST-B will stay like there are.
    How do the parameters have to be set here?
    Should the physical hostnames of both cluster nodes also be set in the
    instance profile, e.g.:
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-A>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-B>
    icm/server_port_9 = PROT=P4,PORT=5$$04, HOST=<localhost>
    Any recommendations? Note 1158626 has some information regarding P4 ports with multiple network interfaces, but it's not 100% clear to us.
    Best regards,
    Uta

    Hi Uta!
    Obviously we are the only human beings in the SAP community having this problem. Nevertheless, let's give it another try with a - hopefully - simpler problem description (and maybe it will be helpful to copy and paste this description into the open SAP CSN as well).
    So here comes the scenario:
    We have one physical host:
    Physical hostname: physhost
    Physical IP address: 1.1.1.1
    On this physical host there is running one OS: SUN Solaris 10/SPARC
    On top of this we have two virtual hosts where we install 2 completely independent PI 7.1 instances with separate virtual hostnames, separate virtual IP addresses, and separate DB2 9.1 databases. That is, this is not an MCOD installation.
    Virtual Host no. 1 is PI 7.1 Development System:
    Virtual hostname: virthostdev
    Virtual IP address: 2.2.2.2
    Java Port numbers: 512xx
    Virtual Host no. 2 is PI 7.1 QA System:
    Virtual hostname: virthostqa
    Virtual IP address: 3.3.3.3
    Java Port numbers: 522xx
    With this constellation we face serious problems with the P4 port. Currently, for example, JSPM for virthostdev does not start, because JSPM cannot connect to the P4 port.
    In SAP note 1158626 we learned that by default the physical hostname/IP address is always used to address the P4 port, and that we have to configure the instance profile parameter icm/server_port_xx to avoid this.
    So how do we have to configure the instance profile parameter icm/server_port_xx for both systems to resolve these P4 port conflicts?
    Additionally: Is it important to use distinct server port slot numbers xx in both systems?
    Additionally: Is it possible to configure this parameter with hostnames instead of using IP addresses?
    So far we have tried several combinations, but with each combination at least one or even both systems have problems with that f.... P4 port.
    Please help! Thanx a lot in advance!
    Regards,
    Volker

  • How to display XML generated dynamically, as TREE in cluster environment

    Hi guys,
    we are generating a tree.xml file on the server side as follows:
    path = getServletConfig().getServletContext().getRealPath("/QBE/jsp/tree.xml");
    BufferedWriter out1 = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(path),"UTF8"));
    out1.write(xmlString + "</tree>");
    out1.close();
    "xmlString" holds the data in xml tag format.
    In order to display the XML file in a TREE structure, we are using a function like:
    function createTree()
         var tree = new WebFXLoadTree("SMQ/AMQ List", "tree.xml");
         document.write(tree);
    All of this works fine in the development environment, and also when we deploy the release on a local machine, but it does not work when the release is deployed in the cluster environment.
    Please help me out how to solve this...
    thanks in advance,
    Ranga
    Edited by: Ranganatha on Jun 5, 2008 5:18 AM

    Hey if you want any more information regarding this problem, i can provide.
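    One likely cause is that getRealPath() points into each node's local deployment directory, so tree.xml written on one node never exists on the other. A sketch of writing the generated XML to a shared, configurable location instead (the shared-directory idea is an assumption here, not any particular product's API):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.charset.StandardCharsets;

// Sketch: write the generated tree XML to a shared directory that both
// cluster nodes mount, instead of getServletContext().getRealPath(),
// which resolves to a node-local path.
public class TreeXmlWriter {
    static Path writeTree(String xmlString, Path sharedDir) throws java.io.IOException {
        Path target = sharedDir.resolve("tree.xml");
        // UTF-8, matching the OutputStreamWriter in the original code
        Files.write(target, (xmlString + "</tree>").getBytes(StandardCharsets.UTF_8));
        return target;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("shared"); // stand-in for an NFS/SAN mount
        Path p = writeTree("<tree>", dir);
        System.out.println(new String(Files.readAllBytes(p), StandardCharsets.UTF_8));
    }
}
```

    Alternatively, skip the file entirely and stream the XML straight from a servlet so each request gets the current data regardless of node.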

  • Wsdl parsing error soa11g cluster environment

    hi,
    I clicked Test in the console and the error below appeared in our 11g SOA cluster environment. Please help me.
    http://soadev-internal.xxx.com/soa-infra/services/default/HighwayTableUpdateProcess/highwaytableupdatebpel_client_ep?WSDL
    Either the WSDL URL is invalid or the WSDL file is not valid or incorrect. - WSDLException: faultCode=PARSER_ERROR: Failed to read wsdl file at: "http://soadev-internal.wlgore.com/soa-infra/services/default/HighwayTableUpdateProcess/highwaytableupdatebpel_client_ep?WSDL", caused by: java.net.SocketException. : java.net.SocketException: Connection reset
    But in the browser the WSDL parses fine.
    Regards
    janardhan

    Hi,
    If it is a problem in JDeveloper, make sure you set the proxy information in Tools -> Preferences -> Web Browser and Proxy, or remove the "Use HTTP Proxy Server" check mark and try again.
    If you also face a similar issue in the console, make sure you map soadev-internal.xxx.com to the correct IP in the hosts file.
    Regards
    Edited by: Oraacler on Aug 16, 2010 7:45 AM

  • Deploy EJBs in cluster environment

              I have a cluster environment using three levels of weblogic.properties files (global/per cluster/per server). When I put all EJBs into the per-server properties file on all servers in the cluster, I get a conflictHandler message for some of the EJBs when bringing up the second server in the cluster.
              1. Where should the EJBs go (set in the weblogic.ejb.deploy property)?
              2. Can I move the conflicted EJBs to the per-cluster or global weblogic.properties file to resolve the conflict?
              3. If there are weblogic.ejb.deploy entries in both the per-cluster and per-server properties files, is the per-cluster setting overwritten by the per-server one?
              Thanks in advance.
              key
              

              Just found out that those beans have explicitly disabled home-is-clusterable. Thanks for all of your help.
              "Vivek Raikanti" <[email protected]> wrote:
              >But I thought, that is the default setting in the weblogic-ejb-jar.xml file
              >?
              >( enable-clustering is true by default ... correct ?)
              >
              >
              > -Vivek
              >
              >
              >Wei Guan <[email protected]> wrote in message
              >news:[email protected]...
              >> In your EJB deployement descriptors, did you enable clustering for your
              >> EJBs?
              >> --
              >> Hope it helps.
              >>
              >> Cheers - Wei
              >>
              >>
              >> "Key Zhang" <[email protected]> wrote in message
              >> news:[email protected]...
              >> >
              >> > I have cluster environment using three levels of weblogic.properties
              >files
              >> (global/per cluster/per server). When I put all EJBs into per server
              >> properties file in all servers in cluster, I got conflictHandler message
              >for
              >> some of the EJBs when bring up the second server in the cluster.
              >> >
              >> > 1. Where the ejb should go (set in weblogic.ejb.deploy property)?
              >> > 2. Can I move those conflicted EJBs per cluster or golbal
              >> weblogic.properties file to resolve the conflict problem?
              >> > 3. If there are weblogic.ejb.deploy entries in both per-cluster
              >properties
              >> file and per server cluster properties file, do the setting in per-cluster
              >> be overwritten by the one in per-server?
              >> >
              >> > Thanks in advance.
              >> > key
              >>
              >>
              >
              >
              

  • Concurrent nodes reading from JMS topic (cluster environment)

    Hi.
    Need some help on this:
    Concurrent nodes reading from JMS topic (cluster environment)
    Thanks
    Denis

    After some thinking, I noted the following:
    1 - It's correct that only one node subscribes to a topic at a time. Otherwise, the same message would be processed by the nodes many times.
    2 - In order to solve the load balancing problem, I think the Topic should be replaced by a Queue. This way, each BPEL process on a node polls for a message, and as soon as a message arrives, only one BPEL node gets it and takes it off the Queue.
    The legacy JMS provider I mentioned in the post above is actually the Retek Integration Bus (RIB). I'm integrating Retek apps with E-Business Suite.
    I'll try to configure the RIB to provide both a Topic (for the existing application consumers) and a Queue (an exclusive channel for BPEL)
    Do you guys have already tried to do an integration like this??
    Thanks
    Denis

  • Forced user logged out in cluster environment

              Hi there,
              We have 2 managed server instances (WL7.0sp2) in our cluster environment. Under
              heavy load,
              the user gets kicked out of the system at random. This could possibly be either
              as a result of httpsession not getting replicated on the secondary server instance
              or the request not getting routed to the right server. Has any one experienced
              this problem before? When the proxy server is bypassed using a custom app, the
              problem no longer occurs!
              Any help would be appreciated.
              

              > We have 2 managed server instances (WL7.0sp2) in our cluster
              > environment. Under heavy load, the user gets kicked out of the
              > system at random. This could possibly be either as a result of
              > httpsession not getting replicated on the secondary server instance
              > or the request not getting routed to the right server. Has any one
              > experienced this problem before? When the proxy server is by-
              > passed using a custom app, the prob no longer occurs!
              Yes, I've seen it under heavy load. I'm not sure what it is, though. It
              sounds, in your case, that the problem is in the proxy. Perhaps it is
              incorrectly thinking that a server has died (that seems to happen
              sometimes).
              Peace,
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com/coherence.jsp
              Tangosol Coherence: Clustered Replicated Cache for Weblogic
              "Bala" <[email protected]> wrote in message
              news:[email protected]..
              >
              

  • Using Optimistic Concurrency in a cluster environment

    Hi,
    In weblogic 8.1 sp3 cluster environment, I deployed CMP Entity Beans with the following settings:
    In weblogic-ejb-jar.xml:
    <entity-cache>
    <concurrency-strategy>Optimistic</concurrency-strategy>
    <cache-between-transactions>true</cache-between-transactions>
    </entity-cache>
    In weblogic-cmp-rdbms-jar.xml:
    <verify-columns>Version</verify-columns>
    <optimistic-column>VERSION</optimistic-column>
    And I deployed the CMP Entity beans into a cluster which has two managed servers.
    When I only do findByPrimaryKey, on both managed servers, the cache-between-transaction functions well and only call ejbLoad() when it first loads that Entity Bean Instance.
    However, if I do any updates to this bean, the behavior changes. After the updates, I issued a lot of findByPrimaryKey calls for this bean to test. If the call reaches the managed server where the update to the bean happened, it is fine and still performs like cache-between-transactions. But if the call reaches the other managed servers, ejbLoad() gets called for that bean in every transaction, and cache-between-transactions seems to be disabled on those servers since the update.
    I tested this scenario many times and the problem is very consistent. According to my understanding, the other managed servers should only do ejbLoad() the first time after the update, and transactions after that shouldn't call ejbLoad() every time.
    Does anyone encounter the same problem like this? And is there anyway to optimize it?
    Thanks!!

    Did you figure out how to do this? We ended up having to track the number of sessions using the service and close it only when there were none. However, this did not solve the problem completely. There seems to be a conflict when running our servlet app (which uses PAPI) on different machines talking to the same BPM. A thread on one machine opens, uses, closes a session and service while a thread on another machine opens a session and in the middle of using it, it dies.

  • Stored procedure in a transaction problem

    hello to everybody
    I have an application under WebLogic 8.1 SP3.
    I have to call an Oracle stored procedure that populates a table, and I need the new records to be visible only at the end of the EJB service transaction (a container transaction). When the procedure finishes, I can see the DB data before the transaction ends. So I created an XA DataSource and changed the Oracle 9.2 thin drivers to the Oracle 9.2 thin XA drivers. But now I receive this Oracle error:
    ORA-02089: COMMIT is not allowed in a subordinate session
    Why? How can I resolve my problem? Can anyone help me? Thanks...

    Replying to giorgio giustiniani: it sounds like you have transactional syntax embedded in your
    procedure. You can't do that and still include it in an XA
    transaction.
    Joe

  • How to Process Files in Cluster Environment

    Hi all,
    We are facing the situation below and would like to know your opinions on how to proceed.
    We have a cluster environment (server A and server B). An ESP job picks up files from a Windows location and places them in a Unix location (server A or server B).
    The problem is that the ESP job can place a file on only one server. This defeats the basic purpose of the cluster environment (the file will always be processed by that particular server only).
    If we place the file on both servers, there is a chance the same file will be processed multiple times.
    Is there a way the load balancer can direct the file to either one of the servers based on server load (just as it does with the BPEL processes)?
    Or are there any other suggestions/solutions for this?
    Thanks in advance!
    Regards
    Mohan

    Hi,
    Which version of SOA are you using? Have a look at this: Re: Duplicate instance created in BPEL
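    If both nodes must watch the same directory, one generic pattern (a sketch, not specific to ESP or any SOA product) is to have each node atomically claim a file by renaming it into a work directory before processing; the rename can succeed on only one node.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: each cluster node tries to claim a file with an atomic rename;
// whichever node wins the rename processes the file, the other skips it.
public class FileClaim {
    static boolean claim(Path file, Path inProgressDir) {
        try {
            Files.move(file, inProgressDir.resolve(file.getFileName()),
                       StandardCopyOption.ATOMIC_MOVE);
            return true;  // this node owns the file now
        } catch (IOException e) {
            return false; // another node already claimed it (or the move failed)
        }
    }

    public static void main(String[] args) throws IOException {
        Path in = Files.createTempDirectory("in");     // stand-in for the shared drop dir
        Path work = Files.createTempDirectory("work"); // per-run work dir
        Path f = Files.createFile(in.resolve("batch-001.dat"));
        System.out.println(claim(f, work)); // first claim succeeds
        System.out.println(claim(f, work)); // second claim fails: file is gone
    }
}
```

    The same idea works with a "claimed_by" column in a database table when the nodes do not share a filesystem.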

  • OPMN fails in cluster environment after the machie reboots.

    Dear All:
    Have you ever met the following problem, or do you have any suggestions? Thanks a lot.
    Machines A and B have been set up successfully in a cluster environment, and everything is fine.
    However, after machine B reboots, the Oracle BI Server and Presentation Server on machine B cannot start.
    The error message for Oracle BI Server is:
    [2011-06-02T10:40:34.000+00:00] [OracleBIServerComponent] [NOTIFICATION:1] [] [] [ecid: ] [tid: 1060] Server start up failed: [nQSError: 43079] Oracle BI Server could not start because locating or importing newly published repositories failed.
    The error message for presentation server is:
    [2011-06-02T18:40:24.000+08:00] [OBIPS] [ERROR:1] [] [saw.sawserver.initializesawserver] [ecid: ] [tid: ] Another instance of Oracle Business Intelligence is trying to upgrade/update/create the catalog located at \\?\UNC\10.91.61.158\BIEEShare\CATALOG\SampleAppLite\root. Retry after it finishes.[
    The detail information is:
    1)     Machine A and B both run Windows 2008 server 64bit.
    2)     Machine A runs Admin Server+BI_Server1(managed server). Machine B is configured with “Scale out an existing installation” option and runs BI_Server2(managed server).
    3)     BIEE 11.1.1.3
    4)     In EM, the shared location of shared repository has been set to something like \\10.91.61.158\shared\RPD. The catalog path has been set to \\10.91.61.158\BIEEShare\CATALOG\SampleAppLite. Both path could be accessed by machine A and B.
    5)     Before machine B reboots, everything is fine. All the servers could be started.
    6)     After machine B reboots, the managed server BI_server2 can be started successfully. However, “opmnctl startall” produces the above error message.
    Any suggestion is welcome. Thanks a lot!

    Hello,
    Thanks for answering.
    I am using OBIEE 11.1.1.5.0 enterprise software-only install with the "Scale out existing BI system" option on HostMachine2. All services and coreapplication systems are running except coreapplication_obis1, coreapplication_obis2, coreapplication_obips1 and coreapplication_obips2 of bi_server2 on HostMachine2.
    Here is my ClusterConfig.xml; as you can see, all changes in this file were done by EM, and the primary and secondary controller settings are already applied. The same file also exists in the same location on HostMachine2.
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <Cluster xmlns="oracle.bi.cluster.services/config/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="oracle.bi.cluster.services/config/v1.1 ClusterConfig.xsd">
    <ClusterProperties>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ClusterEnabled>true</ClusterEnabled>
    <ServerPollSeconds>5</ServerPollSeconds>
    <ControllerPollSeconds>5</ControllerPollSeconds>
    </ClusterProperties>
    <NodeList>
    <Node>
    <NodeType>PrimaryController</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obiccs1</NodeId>
    <!--HostNameOrIP can be a hostname, IP or virtual hostname-->
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9706</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9700</MonitorPort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ListenAddress>HostMachine1.localdomain.com</ListenAddress>
    </Node>
    <Node>
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MasterServer>true</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obis1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9703</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9701</MonitorPort>
    </Node>
    <Node>
    <NodeType>Scheduler</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><NodeId>instance1:coreapplication_obisch1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><ServicePort>9705</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><MonitorPort>9708</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance1:coreapplication_obis2</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine1.localdomain.com</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9702</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9709</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obis1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9761</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9762</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Server</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MasterServer>false</MasterServer>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obis2</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9763</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9764</MonitorPort>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>SecondaryController</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obiccs1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9765</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9766</MonitorPort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ListenAddress>HostMachine2</ListenAddress>
    </Node>
    <Node>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeType>Scheduler</NodeType>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <NodeId>instance2:coreapplication_obisch1</NodeId>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <HostNameOrIP>HostMachine2</HostNameOrIP>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <ServicePort>9770</ServicePort>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager-->
    <MonitorPort>9771</MonitorPort>
    </Node>
    </NodeList>
    <SSLProperties>
    <!--This Configuration setting is managed by Oracle Business Intelligence Enterprise Manager--><SSL>false</SSL>
    <SSLCertificateFile/>
    <SSLPrivateKeyFile/>
    <SSLCACertificateFile/>
    <SSLVerifyPeer>false</SSLVerifyPeer>
    </SSLProperties>
    </Cluster>
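
    As a quick sanity check on a file like the one above, the node entries can be parsed to confirm that exactly one PrimaryController and one SecondaryController are registered and that every node's host/port pair looks right. A minimal sketch in Python using only the standard library (the namespace string comes from the xmlns attribute in the file):

    ```python
    import xml.etree.ElementTree as ET

    # Default namespace declared on the <Cluster> root element
    NS = {"c": "oracle.bi.cluster.services/config/v1.1"}

    def summarize_cluster(xml_text):
        """Return one (node_type, node_id, host, service_port) tuple per <Node>."""
        root = ET.fromstring(xml_text)
        return [
            (
                node.findtext("c:NodeType", namespaces=NS),
                node.findtext("c:NodeId", namespaces=NS),
                node.findtext("c:HostNameOrIP", namespaces=NS),
                node.findtext("c:ServicePort", namespaces=NS),
            )
            for node in root.findall("c:NodeList/c:Node", NS)
        ]

    def controller_counts(nodes):
        """Count controllers; a healthy scale-out has exactly one of each."""
        types = [node_type for (node_type, _, _, _) in nodes]
        return types.count("PrimaryController"), types.count("SecondaryController")
    ```

    Running summarize_cluster() over the copies on both hosts and diffing the results is a quick way to confirm EM wrote identical, consistent controller entries to each machine.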
