Using Optimistic Concurrency in a cluster environment
Hi,
In a WebLogic 8.1 SP3 cluster environment, I deployed CMP entity beans with the following settings:
In weblogic-ejb-jar.xml:
<entity-cache>
<concurrency-strategy>Optimistic</concurrency-strategy>
<cache-between-transactions>true</cache-between-transactions>
</entity-cache>
In weblogic-cmp-rdbms-jar.xml:
<verify-columns>Version</verify-columns>
<optimistic-column>VERSION</optimistic-column>
I deployed the CMP entity beans into a cluster that has two managed servers.
When I only do findByPrimaryKey on both managed servers, cache-between-transactions works well: ejbLoad() is called only when the entity bean instance is first loaded.
However, if I do any update to this bean, the behavior changes. After the update, I issued many findByPrimaryKey calls for this bean to test. If a call reaches the managed server where the update happened, it is fine and still behaves as cache-between-transactions. But if a call reaches the other managed server, ejbLoad() is called for that bean in every transaction; cache-between-transactions seems to have been disabled on the other managed server since the update.
I have tested this scenario many times and the problem is very consistent. In my understanding, the other managed servers should call ejbLoad() only the first time after the update, and transactions after that shouldn't call ejbLoad() every time.
Has anyone encountered the same problem? And is there any way to optimize this?
Thanks!!
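For reference, the version-column optimistic strategy configured above can be sketched in isolation. This is a minimal Python/sqlite3 stand-in for the SQL shape the CMP engine generates from the verify-columns/optimistic-column settings; the table and column names are illustrative, not WebLogic's.

```python
import sqlite3

# In-memory table standing in for the bean's backing row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100, 0)")

def optimistic_update(conn, pk, new_balance, expected_version):
    """Write only if the version column still matches what was read."""
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, pk, expected_version),
    )
    return cur.rowcount == 1   # 0 rows means a concurrent writer got there first

ok = optimistic_update(conn, 1, 150, 0)     # succeeds; version becomes 1
stale = optimistic_update(conn, 1, 200, 0)  # fails: version is no longer 0
print(ok, stale)  # True False
```

The key point is that a stale write is detected by the WHERE clause matching zero rows; no lock is held between read and write.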
Did you figure out how to do this? We ended up having to track the number of sessions using the service and close it only when there were none. However, this did not solve the problem completely. There seems to be a conflict when running our servlet app (which uses PAPI) on different machines talking to the same BPM: a thread on one machine opens, uses, and closes a session and service, while a thread on another machine opens a session and, in the middle of using it, it dies.
Similar Messages
-
Openquery update and optimistic concurrency
Hi, I need to update a MySQL database through a linked server in SQL Server.
I can successfully add and delete, but struggle to update a row twice.
exec ('UPDATE OPENQUERY (SIBC, ''SELECT UID, value1, value2 FROM table1 WHERE UID = "SCEP"'')
SET value1 = "hello" WHERE UID = "SCEP"')
The first time I run the update, it succeeds, but thereafter I get the following error message :
OLE DB provider 'MSDASQL' could not UPDATE table '[MSDASQL]'. The rowset was using optimistic concurrency and the value of a column has been changed after the containing row was last fetched or resynchronized. [OLE/DB provider returned message: Row cannot be located for updating. Some values may have been changed since it was last read.] OLE DB error trace [OLE/DB Provider 'MSDASQL' IRowsetChange::SetData returned 0x80040e38: The rowset was using optimistic concurrency and the value of a column has been changed after the containing row was last fetched or resynchronized.].
Any ideas?
Thanks.
Hi Conor, thanks for your reply.
The only process updating the system is my application, as it's in the development environment.
I have an unusual issue in that I can update a datetime field in MySQL only once I've given it a value explicitly through a MySQL query analyser utility.
The other odd problem I have is that when I perform the update, it has to actually change a field, otherwise it fails: if I try to update a column TEMP1 with a value of 1 but it already contains 1, the update fails.
PS: the provider is a MySQL provider, which doesn't allow 4-part naming in SQL.
I have a feeling the issue could lie with the ODBC driver, but unfortunately the MySQL and Microsoft communities do not seem to work together too nicely.
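For what it's worth, the "Row cannot be located for updating" message is characteristic of compare-all-columns optimistic concurrency: the provider re-locates the row by every value it originally fetched. A minimal sketch of that behavior (Python/sqlite3 stand-in, not the actual OLE DB code path; names mirror the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (UID TEXT PRIMARY KEY, value1 TEXT, value2 TEXT)")
conn.execute("INSERT INTO table1 VALUES ('SCEP', 'old1', 'old2')")

# Snapshot the row, as the provider does when it builds the rowset.
snapshot = conn.execute(
    "SELECT value1, value2 FROM table1 WHERE UID = 'SCEP'").fetchone()

def update_if_unchanged(conn, new_value1, snap):
    # The UPDATE matches only if every column still holds its fetched value;
    # rowcount 0 corresponds to the provider's "row cannot be located" error.
    cur = conn.execute(
        "UPDATE table1 SET value1 = ? WHERE UID = 'SCEP' AND value1 = ? AND value2 = ?",
        (new_value1, snap[0], snap[1]),
    )
    return cur.rowcount == 1

first = update_if_unchanged(conn, "hello", snapshot)   # row unchanged since fetch
second = update_if_unchanged(conn, "again", snapshot)  # value1 changed; snapshot stale
print(first, second)  # True False
```

This would explain why the second update fails: the first update changed value1, so the rowset's snapshot no longer matches and the row "cannot be located".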
Thanks for your help.
Karlo -
TimesTen database in Sun Cluster environment
Hi,
Currently our application, together with the TimesTen database, is installed at the customer on two different nodes (running Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
We are now looking into a hot-standby / high-availability solution using Sun Cluster software. As we understand from the documentation, applications can be 'plugged in' to Sun Cluster using Agents that monitor the application. Sun Cluster Agents should already be available for certain applications, such as:
# MySQL
# Oracle 9i, 10g (HA and RAC)
# Oracle 9iAS Application Server
# PostgreSQL
(See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
Our question is whether Sun Cluster Agents are already (freely) available for TimesTen. If so, where can we find them? If not, should we write a specific Agent for TimesTen ourselves, or handle database problems from the application?
Does someone have any experience using TimesTen in a Sun Cluster environment?
Thanks in advance!
Yes, we use 2-way replication, but we don't use Cache Connect. The replication is created like this on both servers:
create replication MYDB.REPSCHEME
element SERVER01_DS datastore
master MYDB on "SERVER01_REP"
transmit nondurable
subscriber MYDB on "SERVER02_REP"
element SERVER02_DS datastore
master MYDB on "SERVER02_REP"
transmit nondurable
subscriber MYDB on "SERVER01_REP"
store MYDB on "SERVER01_REP"
port 16004
failthreshold 500
store MYDB on "SERVER02_REP"
port 16004
failthreshold 500
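As an aside on detecting an invalid database state from a clustering agent, the simplest check is a liveness probe. A minimal sketch (Python; sqlite3 is only a stand-in for the TimesTen ODBC/JDBC driver, and a real agent would typically also check replication state, not just connectivity):

```python
import sqlite3

def db_is_healthy(connect):
    """Minimal liveness probe a clustering agent might run: can we open a
    connection and execute a trivial query? Any failure counts as unhealthy."""
    try:
        conn = connect()
        conn.execute("SELECT 1")
        conn.close()
        return True
    except Exception:
        return False

# Healthy: an in-memory database we can always reach.
print(db_is_healthy(lambda: sqlite3.connect(":memory:")))  # True
```

An agent would run such a probe on a timer and trigger failover after N consecutive failures.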
The application runs on SERVER01 and is standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
In addition to this, we want to fail over if the database on SERVER01 is in an invalid state. What should the Clustering Agent monitor to detect an invalid state in TimesTen? -
How to setup the cluster environment for BPM using weblogic
I want to set up a cluster environment for BPM using WebLogic.
I have installed Oracle WebLogic Server 10gR3 and Oracle BPM Enterprise for WebLogic 10gR3.
I used the Admin tools from Oracle BPM Enterprise for WebLogic to set up the configuration and create the WebLogic domain servers.
I can launch the Process Administrator and import the project exp file into the domain server.
But what should I do to set up a cluster environment using WebLogic?
What I want to do is:
set up one admin machine,
set up two product machines,
enable the cluster so the admin machine can monitor the status of the product machines.
Thanks a lot...
The install guide at http://download-llnw.oracle.com/docs/cd/E13154_01/bpm/docs65/config_guide/index.html gives a reasonable amount of info on how to do this.
Personally, I have not used the OBPM option to configure WebLogic; instead I've used the information in the above install guide to create the WebLogic domain in advance of configuring OBPM.
Once you've set up WebLogic, configure OBPM using the values I mention in the following thread: How to set the JMX Engine parameter in Process Administration?
Let me know any specific config questions and I'll do my best to answer them for you.
Thanks,
Mike -
How do I set up WTC in a cluster environment using WLS 6.1 SP3?
Hello:
the situation is:
(1)Two solaris machines.
(2)one WLS domain.
(3)one admin server and four managed servers in this domain.
(4)we cluster these 4 managed servers.
(5)each machine contains 2 managed server.
Here is the problem: when we start managed servers on the same machine, WTC is OK, but when we start managed servers on different machines, WTC fails.
Here is the error message:
[Error][Cluster][Conflict start: You tried to bind an object under the name tuxedo.services.TuxedoConnection in the JNDI tree. The object you have bound from 10.64.32.188 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.]
[Error][Cluster][Conflict start: You tried to bind an object under the name tuxedo.services.TuxedoConnection in the JNDI tree. The object you have bound from 10.64.32.187 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.]
So, how can I set up WTC in a cluster environment using WLS 6.1 SP3?
Regards,
cjyang <[email protected]> writes:
[Error][Cluster][Conflict start: You tried to bind an object under the name tuxedo.services.TuxedoConnection in the JNDI tree. The object you have bound from 10.64.32.188 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.]
I believe that this is a known problem and that WTC clustering is not supported in WLS 6.1.
andy -
Concurrent nodes reading from JMS topic (cluster environment)
Hi.
Need some help on this:
Concurrent nodes reading from JMS topic (cluster environment)
Thanks
Denis
After some thinking, I noted the following:
1 - It's correct that only one node subscribes to the topic at a time. Otherwise, the same message would be processed by the nodes multiple times.
2 - To solve the load-balancing problem, I think the Topic should be replaced by a Queue. That way, each BPEL process on a node would poll for a message, and as soon as a message arrives, only one BPEL node gets it and takes it off the Queue.
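The point-to-point behavior described in (2) can be sketched with a plain in-process queue: two competing consumers, and each message delivered to exactly one of them. This is only an illustration of queue semantics, not JMS or BPEL code.

```python
import queue
import threading

q = queue.Queue()
for i in range(6):
    q.put(f"msg-{i}")          # six messages waiting on the "queue destination"

seen = []
seen_lock = threading.Lock()

def worker(name):
    # Point-to-point semantics: get() removes the message from the queue,
    # so no two workers ever receive the same one.
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            return
        with seen_lock:
            seen.append((name, msg))

threads = [threading.Thread(target=worker, args=(f"node{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total, distinct = len(seen), len({msg for _, msg in seen})
print(total, distinct)  # every message handled exactly once across both nodes
```

With a topic, by contrast, every subscriber would have received all six messages, which is exactly the duplicate-processing problem described above.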
The legacy JMS provider I mentioned in the post above is actually the Retek Integration Bus (RIB). I'm integrating Retek apps with E-Business Suite.
I'll try to configure the RIB to provide both a Topic (for the existing application consumers) and a Queue (an exclusive channel for BPEL).
Have you already tried an integration like this?
Thanks
Denis -
PI 7.1 in a cluster environment (multiple IP addresses): P4 port
We want to install PI 7.1 on Unix in a cluster environment. Therefore we also installed DEV+QA with virtual hostnames like the prod system, which will be installed later.
At all sapinst installation screens we used only the virtual hostname <virtual-hostname-server interface>. We also set SAPINST_USE_HOSTNAME=<virtual-hostname-server interface>. Nevertheless, the P4 port seems to have used the physical hostname: in step 57 of sapinst we got problems, and in dev_icm we found:
[Thr 05] *** ERROR => client with this banner already exists:
1:<physical-hostname>:35644 {000306f5} [p4_plg_mt.c 2495]
After we have set
icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical hostname>
icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=127.0.0.1
the sapinst was successful.
Now we're not sure how to set these P4 parameters in our future productive cluster environment.
Our productive system PX1 will live in an HA environment, so we don't want to use the physical hostnames in any profile.
Our environment will look like:
HOST-A (<physical-hostname-A>):
<virtual-hostname-server interface>
<virtual-hostname-user interface>
HOST-B (<physical-hostname-B>):
Normally our prod system will live on HOST-A (physical-hostname-A). All parameters should take only the virtual hostname <virtual-hostname-server-interface>. During switchover, the virtual hostnames (server and user interface) will be taken over to HOST-B, while the physical hostnames of HOST-A and HOST-B stay as they are.
How do the parameters have to be set here? Should the physical hostnames of both cluster nodes also be set in the instance profile, e.g.:
icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-A>
icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-B>
icm/server_port_9 = PROT=P4,PORT=5$$04, HOST=<localhost>
Any recommendations? Note 1158626 has some information regarding P4 ports with multiple network interfaces, but it's not 100% clear to us.
Best regards,
Uta
Hi Uta!
Obviously we are the only human beings in the SAP community having this problem. Nevertheless, let's give it another try with a - hopefully - simpler problem description (and maybe it will be helpful to copy and paste this description into the open SAP CSN as well).
So here comes the scenario:
We have one physical host:
Physical hostname: physhost
Physical IP address: 1.1.1.1
On this physical host there is running one OS: SUN Solaris 10/SPARC
On top of this we have two virtual hosts, where we install two completely independent PI 7.1 instances with separate virtual hostnames, separate virtual IP addresses, and separate DB2 9.1 databases. That is, this is not an MCOD installation.
Virtual Host no. 1 is PI 7.1 Development System:
Virtual hostname: virthostdev
Virtual IP address: 2.2.2.2
Java Port numbers: 512xx
Virtual Host no. 2 is PI 7.1 QA System:
Virtual hostname: virthostqa
Virtual IP address: 3.3.3.3
Java Port numbers: 522xx
With this constellation we face serious problems with the P4 port. Currently, for example, JSPM for virthostdev does not start because JSPM cannot connect to the P4 port.
From SAP note 1158626 we have learned that by default the physical hostname/IP address is always used to address the P4 port, and that we have to configure the instance profile parameter icm/server_port_xx to avoid this.
So how do we have to configure icm/server_port_xx in both systems to resolve these P4 port conflicts?
Additionally: is it important to use distinct server port slot numbers xx in the two systems?
Additionally: is it possible to configure this parameter with hostnames instead of IP addresses?
So far we have tried several combinations, but with each combination at least one or even both systems have problems with that P4 port.
Please help! Thanks a lot in advance!
Regards,
Volker -
DB2 is not starting in the cluster environment
Hi, Experts
we are using an HP-UX ServiceGuard cluster environment for our production server.
I have successfully installed SAP SR3 in this environment. When a failover happens, the filesystem moves automatically, but when I try to start DB2 on Node B, it throws this error:
02/12/2009 16:09:31 0 0 SQL6048N A communication error occurred during START or STOP DATABASE MANAGER processing.
02/12/2009 16:09:32 1 0 SQL6048N A communication error occurred during START or STOP DATABASE MANAGER processing.
SQL1032N No start database manager command was issued. SQLSTATE=57019
But it's working fine on Node A. I am attaching the db2diag.log and trans.log for your reference.
In db2diag.log
2009-02-02-10.42.05.176811+330 I1A1279 LEVEL: Event
PID : 2928 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, RAS/PD component, pdLogInternal, probe:120
START : New Diagnostic Log file
DATA #1 : Build Level, 152 bytes
Instance "db2sop" uses "64" bits and DB2 code release "SQL09016"
with level identifier "01070107".
Informational tokens are "DB2 v9.1.0.6", "s081007", "U817477", Fix Pack "6".
DATA #2 : System Info, 1568 bytes
System: HP-UX SOGLPRDP B.11.31 U ia64
CPU: total:8 online:4 Threading degree per core:1
Physical Memory(MB): total:16363 free:11923
Virtual Memory(MB): total:65515 free:61075
Swap Memory(MB): total:49152 free:49152
Kernel Params: msgMaxMessageSize:65535 msgMsgMap:4098 msgMaxQueueIDs:4096
msgNumberOfHeaders:4096 msgMaxQueueSize:65535
msgMaxSegmentSize:248 shmMax:17179869184 shmMin:1 shmIDs:512
shmSegments:300 semMap:8194 semIDs:8192 semNum:16384
semUndo:4092 semNumPerID:2048 semOps:500 semUndoPerProcess:100
semUndoSize:824 semMaxVal:32767 semAdjustOnExit:16384
Information in this record is only valid at the time when this file was
created (see this record's time stamp)
2009-02-02-10.42.38.940922+330 I4655A1629 LEVEL: Event
PID : 3883 TID : 1 PROC : db2stop
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleStartStopSingleNode, probe:1130
DATA #1 : String, 31 bytes
/db2/db2sop/sqllib/adm/db2stop2
DATA #2 : Hexdump, 256 bytes
0x9FFFFFFFFFFFAD60 : 2F64 6232 2F64 6232 736F 702F 7371 6C6C /db2/db2sop/sqll
0x9FFFFFFFFFFFAD70 : 6962 2F61 646D 2F64 6232 7374 6F70 3200 ib/adm/db2stop2.
0x9FFFFFFFFFFFAD80 : 4E4F 4D53 4700 464F 5243 4500 534E 0000 NOMSG.FORCE.SN..
0x9FFFFFFFFFFFAD90 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADA0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADB0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADC0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADD0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADE0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFADF0 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE00 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE10 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE20 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE30 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE40 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x9FFFFFFFFFFFAE50 : 0000 0000 0000 0000 0000 0000 0000 0000 ................
2009-02-02-10.42.49.871047+330 I6285A289 LEVEL: Event
PID : 4005 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dir_cache" From: "1" To: "0"
2009-02-02-10.42.49.934583+330 I6575A306 LEVEL: Event
PID : 4006 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Sysadm_group" From: "DBSOPADM" To: "dbsopadm"
2009-02-02-10.42.50.000401+330 I6882A299 LEVEL: Event
PID : 4007 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Sysctrl_group" From: "" To: "dbsopctl"
2009-02-02-10.42.50.062186+330 I7182A300 LEVEL: Event
PID : 4008 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Sysmaint_group" From: "" To: "dbsopmnt"
2009-02-02-10.42.50.127927+330 I7483A295 LEVEL: Event
PID : 4009 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_bufpool" From: "0" To: "1"
2009-02-02-10.42.50.203836+330 I7779A292 LEVEL: Event
PID : 4012 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_lock" From: "0" To: "1"
2009-02-02-10.42.50.265351+330 I8072A292 LEVEL: Event
PID : 4021 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_sort" From: "0" To: "1"
2009-02-02-10.42.50.329484+330 I8365A292 LEVEL: Event
PID : 4022 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_stmt" From: "0" To: "1"
2009-02-02-10.42.50.390840+330 I8658A293 LEVEL: Event
PID : 4023 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_table" From: "0" To: "1"
2009-02-02-10.42.50.514969+330 I8952A291 LEVEL: Event
PID : 4025 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dft_mon_uow" From: "0" To: "1"
2009-02-02-10.42.50.576102+330 I9244A294 LEVEL: Event
PID : 4026 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Mon_heap_sz" From: "90" To: "128"
2009-02-02-10.42.50.701560+330 I9539A296 LEVEL: Event
PID : 4028 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Rqrioblk" From: "32767" To: "65000"
2009-02-02-10.42.50.761306+330 I9836A295 LEVEL: Event
PID : 4029 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Svcename" From: "" To: "sapdb2SOP"
2009-02-02-10.42.50.826115+330 I10132A291 LEVEL: Event
PID : 4030 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Aslheapsz" From: "15" To: "16"
2009-02-02-10.42.50.886807+330 I10424A290 LEVEL: Event
PID : 4031 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Keepfenced" From: "1" To: "0"
2009-02-02-10.42.50.951758+330 I10715A294 LEVEL: Event
PID : 4032 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Maxagents" From: "400" To: "1024"
2009-02-02-10.42.51.017562+330 I11010A295 LEVEL: Event
PID : 4033 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Dftdbpath" From: "" To: "/db2/SOP"
2009-02-02-10.42.51.078023+330 I11306A328 LEVEL: Event
PID : 4034 TID : 1 PROC : db2flacc
INSTANCE: db2sop NODE : 000
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:30
CHANGE : CFG DBM: "Diagpath" From: "/db2/db2sop/sqllib/db2dump" To: "/db2/SOP/db2dump"
2009-02-10-15.23.47.236724+330 I11635A213 LEVEL: Error
PID : 25606 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-10-16.28.21.589909+330 I11849A213 LEVEL: Error
PID : 6745 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-10-18.38.17.375623+330 I12063A213 LEVEL: Error
PID : 16296 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-10-19.53.31.036205+330 I12277A213 LEVEL: Error
PID : 6044 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-11-23.37.20.156996+330 I12491A213 LEVEL: Error
PID : 15427 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-11-23.57.56.959811+330 I12705A213 LEVEL: Error
PID : 4895 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-00.42.20.199428+330 I12919A213 LEVEL: Error
PID : 5424 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-00.55.15.231691+330 I13133A213 LEVEL: Error
PID : 4065 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-01.23.40.283293+330 I13347A213 LEVEL: Error
PID : 4345 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-01.59.06.153079+330 I13561A213 LEVEL: Error
PID : 4145 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-02.40.06.335045+330 I13775A213 LEVEL: Error
PID : 4515 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-03.09.22.341443+330 I13989A213 LEVEL: Error
PID : 4423 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
2009-02-12-03.31.31.201911+330 I14203A213 LEVEL: Error
PID : 4135 TID : 1
FUNCTION: DB2 Common, Generic Control Facility, gcf_stop, probe:60
MESSAGE : ECF=0x90000390 Invalid process id
In R3trans.log
4 ETW000 [dev trc ,00000] (0.144) MSSQL: ODBC fastload on separate connection (note 1131805)
4 ETW000 46 0.514332
4 ETW000 [dev trc ,00000] Supported features: 30 0.514362
4 ETW000 [dev trc ,00000] ..retrieving configuration parameters 29 0.514391
4 ETW000 [dev trc ,00000] ..done 3613 0.518004
4 ETW000 [dev trc ,00000] Running with UTF-8 Unicode 30 0.518034
4 ETW000 [dev trc ,00000] Running with CLI driver 126609 0.644643
4 ETW000 [dev trc ,00000] *** ERROR in DB6Connect[dbdb6.c, 1632] CON = 0 (BEGIN)
4 ETW000 180751 0.825394
4 ETW000 [dev trc ,00000] &+ DbSlConnectDB6( SQLConnect ): [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication
4 ETW000 57 0.825451
4 ETW000 [dev trc ,00000] &+ protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected:
4 ETW000 49 0.825500
4 ETW000 [dev trc ,00000] &+ "128.0.0.35". Communication function detecting the error: "connect". Protocol specific error code(s): "239", "*"
&+
4 ETW000 49 0.825978
4 ETW000 [dev trc ,00000] *** ERROR in DB6Connect[dbdb6.c, 1632] (END) 30 0.826008
4 ETW000 [dbdb6.c ,00000] *** ERROR => DbSlConnect to 'SOP' as 'sapsop' failed
4 ETW000 554 0.826562
2EETW169 no connect possible: "DBMS = DB6 --- DB2DBDFT = 'SOP'"
Please provide me a solution to this issue.
Thanks & Regards
Hi Malte,
I have entered the individual node addresses in db2nodes.cfg. As per your instruction I ran db2 list database directory and got the following output:
System Database Directory
Number of entries in the directory = 1
Database 1 entry:
Database alias = SOP
Database name = SOP
Local database directory = /db2/SOP
Database release level = b.00
Comment = SAP database SOP
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
=> db2 list node directory
=> SQL1027N The node directory cannot be found
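For orientation: SQL6048N on the standby node is often related to the hostname entries in db2nodes.cfg not matching the node on which DB2 is being started. For a single-partition instance, db2nodes.cfg typically has one line of the form `<partition-number> <hostname> <logical-port>`; in a failover setup the hostname should resolve on whichever node currently runs the instance. The hostname below is a placeholder, not taken from this system:

```
0 <virtual-db-hostname> 0
```

If the file names a physical host of Node A, starting the instance on Node B can fail with exactly this kind of communication error.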
Regards -
Steps to upgrade kernel patch in AIX cluster environment
Hello All,
We are going to perform a kernel upgrade in an AIX cluster environment.
Please let me know the other locations to copy the new kernel files to, besides the default location:
CI+DB server
APP1
Regards
Subbu
Hi Subbu,
Refer the SAP link
Refer to the SAP link: Executing the saproot.sh Script - Java Support Package Manager (OBSOLETE) - SAP Library
1. Extract the downloaded files to a new location using SAPCAR -xvf <file_name> as <sid>adm.
2. Copy the extracted files to /sapmnt/<SID>/exe.
3. Start the DB & application.
Regards
Sriram -
SAP R/3 4.7 SR1 Migration to MSCS cluster environment
Hi all,
We are planning to move our existing standalone R/3 production server to a cluster environment.
I have some queries regarding this migration.
Our R/3 system is running SAP R/3 4.7x1.10 SR1 on Windows 2003 and Oracle 9.2.
1. Does the Central Instance have to be installed on a local disk or a shared disk?
2. Is it correct to use R3load procedures to export data from the old server and import it during the database installation?
3. Presently our server is running with kernel 640, but the new installation will be kernel 620. Can I upgrade the kernel immediately after the Central Instance installation, or only after both the CI and DB Instance installations?
Please give me some input.
Thanks & Regards
KT
Hi, the best place for your questions would have been the SAP on Windows forum.
However, here the answers.
1) The CI must be on a shared disk.
2) Right. If you don't change the database platform and its release, you can also use backup/restore.
3) Right. You can also perform the installation with the latest NW04 installation master. -
OSB Polling in Cluster Environment
Hi ,
I created an OSB poller which reads data from a DB. When I test it on a single server it works fine, but when I move it to a cluster environment with two managed servers it polls the DB twice and the reports are generated twice. I have changed the JCA property <Lock>lock-no-wait</Lock>, but even then it polls twice. Any solution, please?
Hi:
Maybe this link can be useful
http://javaoraclesoa.blogspot.com.au/2012/04/polling-with-dbadapter-in-clustered.html
Regards,
RK
-
Changing the listener port number in a cluster environment
Hello,
I have an Oracle 10g database in a Windows cluster environment with Oracle Fail Safe. I am trying to change the default listener port number; these are the steps I have taken:
1) Take the listener offline via Oracle Fail safe
2) stop the original listener from the command line
3) change the port number in the listener.ora file & save
4) start the original listener
5) bring the listener online in Fail safe
6) register the listener in the database with ALTER SYSTEM SET LOCAL_LISTENER....
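For step 3, a minimal listener.ora entry after the port change might look like the fragment below. The listener name, host, and port are placeholders; in a Fail Safe group the HOST would normally be the group's virtual address:

```
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual-hostname>)(PORT = 1526))
  )
```

Note that a change made only here is exactly what Fail Safe can silently revert, which appears to be what happens below.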
After all this, when I check the status of the listener via lsnrctl, I see that the new port number is used; however, in the Fail Safe administrator I still see the default port 1521. How do I change the port number so that Fail Safe also registers the change?
I did troubleshooting to verify the group, but this just changed the port number back to the default in listener.ora & tnsnames.ora.
So I did all the steps again to change the port number. Via lsnrctl status I see that the new port number is being used; I can also log in to the database via Toad using the new port number, and in v$parameter I see that the local_listener is registered on the new port number. Only in the Fail Safe manager has the port number (under the listener parameters) not changed: it still shows the default. Does anyone know how to change this? -
Installation of CRM 2007 in Windows with oracle and cluster environment
Dear Experts,
We are about to start the installation of CRM 2007 (ABAP+JAVA) with Oracle 10g on Windows x64 in a cluster environment. In the SAPINST dialog box under the High Availability option, I can see installation options like ASCS Instance, SCS Instance, First MSCS Node, Database Instance, Additional MSCS Node, Enqueue Replication Server, Central Instance and Dialog Instance.
I have gone through the installation guide. I have the queries below; can you please clarify?
1) Our requirement is to have an ACTIVE-ACTIVE cluster setup with the SAP service running on Node A and the database running on Node B. Can we have this setup with ABAP+JAVA?
2) Also, in the SAPINST dialog box, everything listed above except the last two (Central and Dialog Instance) is, per the standard installation guide, to be installed on shared drives. The Central and Dialog Instances, however, are said to be installed locally on one of the MSCS nodes or on a separate server. As I understand it, the Dialog Instance is used for load balancing, which we do not require now, so I feel it is optional in our case. But I don't understand what the Central Instance option is for. Is it mandatory or optional? If I install it on one of the MSCS nodes, how does a failover affect the system? In my understanding, ASCS and SCS comprise the central instance.
Please clarify; thanks in advance.
Regards,
Sharath Babu M
+91-9003072891
I am following the standard installation guide.
Regards
Sharath -
How to set property for Cluster Environment for JMS Adapter
Hi All,
I am moving from a DEV to a Prod environment, which is a cluster.
Can you please explain what property I need to set for the cluster environment for the JMS Adapter, so that I can avoid a race condition on dequeue/enqueue?
I am using SOA Suite 10.1.3.4.
Thanks in advance.
Put something like this:
<activationAgents>
  <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="PARNERLINKNAME">
    <property name="clusterGroupId">SOMEUNIQUEVALUE</property>
    <property name="portType">PARTNERLINK_PORTTYPE</property>
  </activationAgent>
</activationAgents> -
How to Process Files in Cluster Environment
Hi all,
We are facing the situation below, and would like your opinions on how to proceed.
We have a cluster environment (server A and server B). An ESP job picks up files from a Windows location and places them in a Unix location (server A or server B).
The problem is that the ESP job can place a file on only one server. This defeats the basic purpose of the cluster environment (the file will then always be processed by that particular server only).
If we place the file on both servers, there is a chance that the same file will be processed multiple times.
Is there a way the load balancer can direct the file to either one of the servers based on the servers' load (just like it does with BPEL processes)?
Or are there any other suggestions/solutions for this?
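One common pattern for ensuring a file is processed by exactly one node, sketched here in Python (this is generic, not ESP- or SOA-Suite-specific, and assumes both nodes see the same filesystem): each node atomically claims the file before processing it, and O_EXCL guarantees only one claimant succeeds. (On some shared filesystems, notably older NFS, O_EXCL atomicity needs care.)

```python
import os
import tempfile

def try_claim(path):
    """Atomically claim a file by creating '<path>.lock' with O_EXCL.
    The OS guarantees only one creator succeeds, so exactly one node
    processes the file and the others skip it."""
    try:
        fd = os.open(path + ".lock", os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

# Simulate two nodes racing for the same inbound file.
workdir = tempfile.mkdtemp()
f = os.path.join(workdir, "inbound.dat")
open(f, "w").close()

first = try_claim(f)    # first node wins the claim and processes the file
second = try_claim(f)   # second node sees the existing claim and skips
print(first, second)  # True False
```

With a claim step like this, the file could safely be placed on (or be visible to) both servers without being processed twice.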
Thanks in Advance !!
Regards
Mohan
Hi,
which version of SOA are you using? Have a look at this: Re: Duplicate instance created in BPEL