Configuration for multiple nodes
Hi,
I have the following problem: my app is deployed on Oracle Application Server with two nodes, and I'm not able to configure TopLink to work properly across them. Sometimes when I write something to the DB (via my app) the changes appear in the DB, but when I try to read them back, my app can't see them. So I suppose TopLink sometimes writes the data via one node of the app server and reads via another. The same problem also occurs with the cache, because it seems the cache is not synchronized between the nodes.
How can I configure toplink to work properly on app servers with multiple nodes?
Thanks, Martin
Martin,
If you have entity types that are modified and read on multiple nodes of your application then you need to understand and properly configure your locking and caching in TopLink. This article is a good introduction to the topic and the documentation can provide some more specifics.
I would recommend the following order of operations:
1. Ensure you have optimistic locking configured and handled for entity types that are both shared and modified by your application.
2. Based on the volatility of your entity types select the correct caching type. Look at types, size, isolation, and expiration.
3. For operations where you commonly get stale data on volatile entity types consider using refreshing options on your queries. If using version optimistic locking also enable only-refresh-if-newer.
4. Consider cache coordination for those types that are read-mostly to minimize the refreshing required.
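To make step 1 concrete, here is a minimal sketch of version-based optimistic locking in plain Java. It has no TopLink dependency, and the Row class and values are invented for illustration; with TopLink, the version column is maintained for you once optimistic locking is configured on the descriptor:

```java
// Sketch of version-based optimistic locking: a writer may only commit
// if the row's version is unchanged since it was read.
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
    // Simulates one database row: a value plus a version column.
    static final class Row {
        final String value;
        final long version;
        Row(String value, long version) { this.value = value; this.version = version; }
    }

    private final AtomicReference<Row> row = new AtomicReference<Row>(new Row("initial", 1));

    /** Commit succeeds only if the version read earlier is still current. */
    public boolean commit(long readVersion, String newValue) {
        Row current = row.get();
        if (current.version != readVersion) {
            return false; // stale read: another node updated the row first
        }
        return row.compareAndSet(current, new Row(newValue, readVersion + 1));
    }

    public Row read() { return row.get(); }

    public static void main(String[] args) {
        OptimisticLockDemo db = new OptimisticLockDemo();
        long v = db.read().version;                  // both nodes read version 1
        System.out.println(db.commit(v, "node A"));  // first writer wins
        System.out.println(db.commit(v, "node B"));  // stale version is detected
    }
}
```

The same version comparison is what the only-refresh-if-newer option in step 3 relies on.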
Doug
Similar Messages
-
How many SCAN listeners do we have to configure for a 5-node RAC?
Dear Professionals,
How many SCAN listeners do we need to configure for a 5-node RAC? Oracle will not allow us to create more than three SCANs. What if I have one SCAN listener each on the first three nodes? What about the remaining two nodes? How will an application user connect to node 4 or 5? Can you please explain? Forgive me if I am totally wrong.
Thanks
Sagar
Each of the 5 instances will register itself with the SCAN listener (using the instance parameter remote_listener). Thus, the SCAN listener is "aware" of the database instances on the other two nodes where it is not running. It can still redirect incoming connections to the local listeners on these nodes (registered as local_listener).
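To make the registration concrete, here is a hedged sketch (the host and service names are illustrative, not from the original post). Each instance's remote_listener points at the SCAN, and clients on all five nodes connect through the single SCAN name:

```
# tnsnames.ora entry on the client.
# remote_listener on each instance would be set to 'cluster-scan.example.com:1521'.
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```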
Hemant K Chitale -
Automating custom software deployment and configuration for multiple nodes
Hello everyone.
Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much as with any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:
* Checking that certain users exists, and they are associated with one or many user groups. If not, then create them and their group associations.
* Checking that target application folders exist and if not, then create them with pre-configured path values defined when the package was assembled.
* Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
* Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system.
Obviously, doing this for many boxes (the goal in question) by hand would certainly be slow and error-prone.
I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another.
1. Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience with them. These would be my last resort.
2. Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure I'm not reinventing the wheel again :P.
3. Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when it comes to checking/setting user and folder permissions.
It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
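As a sketch of the folder-related checks above (the path and permissions are illustrative; this uses the java.nio.file API of Java 7 and later, so on the Java 1.5 boxes mentioned in option 3 you would fall back to java.io.File plus Runtime.exec of chmod/chown):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PrereqCheck {
    /**
     * Ensures a target application folder exists with the given POSIX
     * permissions, creating it if necessary. Returns true if the folder
     * already existed, false if it had to be created.
     */
    public static boolean ensureDir(Path dir, String perms) throws IOException {
        Set<PosixFilePermission> p = PosixFilePermissions.fromString(perms);
        if (Files.isDirectory(dir)) {
            Files.setPosixFilePermissions(dir, p); // enforce permissions on an existing dir
            return true;
        }
        Files.createDirectories(dir, PosixFilePermissions.asFileAttribute(p));
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path target = Paths.get("/tmp/myapp/conf"); // illustrative path
        boolean existed = ensureDir(target, "rwxr-x---");
        System.out.println(target + (existed ? " already present" : " created"));
    }
}
```

User/group creation has no stdlib API, so those checks would still shell out to useradd/groupadd.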
I thank you for your time and help.
Configuration management is a big topic today, with a few noteworthy alternatives for Solaris:
- CFEngine (http://www.cfengine.org)
- Chef (http://wiki.opscode.com)
- Puppet (http://www.puppetlabs.com) -
How can I list all the domains configured for Weblogic Servers?
How can I list all the domains configured for Weblogic Servers?
I saw a note, which says the following:
"WebLogic Server does not support multi-domain interaction using either the Administration Console, the weblogic.Admin utility, or WebLogic Ant tasks. This restriction does not, however, explicitly preclude a user written Java application from accessing multiple domains simultaneously."
In my case, I just want to list all the domains, is that possible by using any scripts?
Thanks
AJ
If you use WLS Node Manager and the Config Wizard was used to create the domains, then the list of domains should be in a location like this:
<MIDDLEWARE_HOME>\wlserver_10.3\common\nodemanager\nodemanager.domains
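For reference, nodemanager.domains is a plain Java properties file mapping each domain name to its root directory; a typical file looks like this (the domain names and paths are illustrative):

```
#Domains and directories created by Configuration Wizard
base_domain=/u01/oracle/middleware/user_projects/domains/base_domain
prod_domain=/u01/oracle/middleware/user_projects/domains/prod_domain
```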
Enterprise Manager Grid Control also has support for multi-domain management of WLS in a console. -
Dynamic Configuration for determining filename
Hi ,
I am using dynamic configuration for determining the filename sent by the sender
I am using the following code:
DynamicConfiguration conf = (DynamicConfiguration) container
    .getTransformationParameters()
    .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
// read the file name set by the sender file adapter
DynamicConfigurationKey key = DynamicConfigurationKey.create(
    "http://sap.com/xi/XI/System/File", "FileName");
String ourSourceFileName = conf.get(key);
// constant-first equals avoids a NullPointerException when the attribute is missing
if ("data.txt".equals(ourSourceFileName)) {
    result.addValue(ourSourceFileName);
}
I have also checked 'File Name' under Adapter-Specific Message Attributes in the sender communication channel.
But Still it is not working.
Please provide some help on it.
Thanks & Regards
Nilima
Hi Michal,
In SXMB_MONI the response part contains the DynamicConfiguration node,
and it displays::
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!-- Response
-->
<SAP:DynamicConfiguration xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="1">
<SAP:Record namespace="http://sap.com/xi/XI/System/File" name="FileName">try1.txt</SAP:Record>
</SAP:DynamicConfiguration>
Please provide some help on it.
Thanks once again. -
Reports are not posting with report repository webserver configured for Single Signon
Hi Everyone,
We have configured Single Signon on our Test environment (UADB1) using Sun Authentication Manager. Everything went well; we can log in using our LDAP accounts, except for one thing: the reports are not posting to the report repository.
Our setup goes like this. We have used only one webserver for login and for report repository purposes. SSL certificate was configured in the webserver and we are using https in the report node. Both URLs https://dv001.test.com:8450 and http://dv001.test.com:8400 were configured for Single Signon.
Report Node Definition
Node Name: uadb1
URL: https://dv001.test.com:8450/psreports/uadb1
Connection Information
https
URI Port: 8450
URI Host: dv001.test.com
URI Resource: SchedulerTransfer/uadb1
Below is the error I am getting. If I use another webserver, one without the Single Signon configuration, as the report repository, the reports post fine. So I am thinking this has something to do with the Single Signon setup and SSL. Any idea? Thanks.
PSDSTSRV.2093190 (10) [06/13/10 01:05:43 PostReport](3) 1. Process Instance: 9499/Report Id: 8465/Descr: Process Scheduler System Purge
PSDSTSRV.2093190 (10) [06/13/10 01:05:43 PostReport](3) from directory: /psft/pt849/appserv/prcs/UADB1/log_output/AE_PRCSYSPURGE_9499
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) (JNIUTIL): Java exception thrown: java.net.SocketException: Unexpected end of file from server
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](3) HTTP transfer error.
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](3) Post Report Elapsed Time: 0.2300
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) =================================Error===============================
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) Unable to post report/log file for Process Instance: 9499, Report Id: 8465
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Process Name: PRCSYSPURGE, Type: Application Engine
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Description: Process Scheduler System Purge
PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Directory: /psft/pt849/appserv/prcs/UADB1/log_output/AE_PRCSYSPURGE_94
Duplicated thread: Reports not posting if using Single Signon webserver as report repo
Nicolas. -
Hi all,
We are on PI 7.4 AEX SP 7. We are trying to get the following scenario working: Inhouse warehouse management system (JMS) --> PI --> ECC (IDoc)
The sender is configured as a third-party technical system, with the business system maintained in the SLD
business system has logical system assigned in SLD
adapter specific identifiers show logical system in ID / NWDS
logical system is defined in receiver (ECC system)
partner profile is created in ecc for this very logical system (inbound parameters)
No mapping of EDI_DC40 node in ESB (complete node disabled)
no header mapping in ID / NWDS
receiver IDoc AEE adapter is configured to not enforce control record (Control Record in IDoc XML = Not Mandatory)
no Identifiers specified in receiver IDoc AEE adapter
My gut feeling is that this should work; however, it doesn't. It fails with "The Configuration for Sender/Receiver Partner Number/Port is incorrect. Enter proper values in Sender/Receiver Component".
Any thoughts are highly welcome.
Cheers
Jens
Hi,
The error mentioned by you is not related to the port; it is related to the application.
IDoc status 51 means the IDoc has been received in the R/3 client and ended in error when it was processed.
Check with the functional consultant for the error you have received..
HTH
Rajesh -
Define Tabs and Process Configuration for Template (HAP_TA_CONF)
I am currently building a new appraisal template for my client and I have configured the process timeline in the development system via the IMG node Define Tabs and Process Configuration for Template (transaction code HAP_TAB_CONF).
Does anyone know how to transport the tabs and process configuration through the system landscape?
We are on SAP ECC 6.0 Enhancement Pack 4.
Hi Sushil,
I have used the report RHMOVE30 as you recommended and it worked perfectly in one run. I did not have to create custom relationships. I simply selected all the Process Item (VH) objects for my appraisal template and ran the report.
Many thanks for your help. It has saved me having to configure the tabs and process timeline in each client.
Janet -
System is not configured for WS Securitylogon
Greetings,
I've created and attempted to begin transport of my first provider service.
I'm having some issues in our QA system
The SE80 objects were attached to a transport and transported with no problem.
I had to manually add the ICF objects to a transport, but I got them there.
I had to manually create the service and endpoint definition in SOAMANAGER.
Unfortunately in our QA system I'm getting the following error in SOAMANAGER.
has anyone else seen this error, and if so, how did you or your BASIS team resolve it?
Error: ICF: Cannot generate ICF information from SOAP information [System is not configured for WS Securitylogon (SAP Note 1319507)]
Hi Doug,
I haven't seen this before, but I had a quick look at SAP Note 1319507 and it gives a clear indication of how to fix this with report WSS_SETUP. Did the approach in the note not work?
One more point: I'm not sure which ICF objects you manually added, but in general you would just transport the SE80 objects. Once the SE80 objects are in QA, you would re-do the SOAMANAGER (service and endpoint) configuration, which would then add the ICF nodes automatically.
Regards, Trevor -
Thread pool configuration for write-behind cache store operation?
Hi,
Does Coherence have a thread pool configuration for the Coherence CacheStore operation?
Or does the CacheStore implementation need to do that?
We're using write-behind and want to use multiple threads to speed up the store operation (storeAll()...)
Thanks in advance for your help.
user621063 wrote:
Hi,
Does Coherence have a thread pool configuration for the Coherence CacheStore operation?
Or the CacheStore implementation needs to do that?
We're using write-behind and want to use multiple threads to speed up the store operation (storeAll()...)
Thanks in advance for your help.
Hi,
Read/write-through operations are carried out on the worker thread (so if you configured a thread pool for the service, the same thread pool will be used for the cache-store operation).
For write-behind/read-ahead operations, there is a single dedicated thread per cache, above whatever thread pool is configured, except for remove operations, which are synchronous and still carried out on the worker thread (see above).
All above is of course per storage node.
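As a hedged illustration of where those two behaviors are configured in a Coherence 3.x cache configuration (the scheme name and store class are invented for the example): the service-level thread-count sizes the worker pool that read/write-through and synchronous removes run on, while a non-zero write-delay makes the store write-behind, flushed by the single dedicated thread described above:

```xml
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <!-- worker pool for the service: read/write-through and removes run here -->
  <thread-count>8</thread-count>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <!-- illustrative CacheStore implementation -->
          <class-name>com.example.MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <!-- non-zero delay = write-behind, flushed by one dedicated thread per cache -->
      <write-delay>10s</write-delay>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```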
Best regards,
Robert -
Alternate access mapping and binding in IIS for NLB nodes(2)
Hello All,
We have configured NLB for 2 nodes (node 1 is App + WFE1, node 2 is WFE2).
Here, we have given the NLB host name to the users to browse. But do we need to configure anything in alternate access mappings and in the IIS bindings? If yes, please elaborate step by step.
Thanks in advance
NLB host name and IP: abc.ap.company.com /10.11.12.95
Node1 server: abc.appri.company.com / 10.11.12.93
Node2 server: abc.appsec.company.com / 10.11.12.94
how to do this.
NARLA
Assuming you configured the web application to use the URL http://abc.ap.company.com, there is no additional IIS configuration needed on the servers.
If you're interested in accessing a specific server, you can create a hosts-file entry on your client machine that maps abc.ap.company.com to one of the two servers.
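For example, to pin a test client to Node1, the hosts-file entry (using the IPs from the post) would be:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# send abc.ap.company.com traffic straight to Node1 (WFE1), bypassing NLB
10.11.12.93  abc.ap.company.com
```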
Jason Warren
@jaspnwarren
jasonwarren.ca
habaneroconsulting.com/Insights -
Failover on zone cluster configured for Apache on ZFS filesystem takes 30 minutes
Hi all
I have configured a zone cluster for the Apache service, using a ZFS filesystem as the highly available storage.
The failover takes around 30 minutes, which is not acceptable. My configuration steps are outlined below:
1) configured a 2 node physical cluster.
2) configured a quorum server.
3) configured a zone cluster.
4) created a resource group in the zone cluster.
5) created a resource for logical hostname and added to the above resource group
6) created a resource for Highavailable storage ( ZFS here) and added to the above resource group
7) created a resource for apache and added to the above resource group
The failover is taking 30 minutes and shows "pending offline/online" most of the time.
I reduced the number of retries to 1, but to no avail.
Any help will be appreciated
Thanks in advance
Sid
Sorry guys for the late reply.
I tried switching the owner of the RG between both nodes manually, which takes a reasonable amount of time. But the failover for a dry run is still taking 30 minutes.
The same setup with SVM works fine, but I want to have ZFS in my zone cluster.
Thanks in advance
Sid -
How to determine raw disks configured for OCR/voting disk, ASM spfile
I have a two-node Oracle 10gR2 RAC configuration using raw disks. Basically, raw disks are used for CRS's OCR/voting disks, ASM's ASM spfile and disk groups.
Is there a better way to figure out which raw disks are configured in Oracle, other than the methods shown below?
- To find out votedisk:
# crsctl query css votedisk
0. 0 /dev/ora_crs_vote1
1. 0 /dev/ora_crs_vote2
2. 0 /dev/ora_crs_vote3
- To find out OCR:
# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 525836
Used space (kbytes) : 3856
Available space (kbytes) : 521980
ID : 1603037329
Device/File Name : /dev/ora_ocr_file1
Device/File integrity check succeeded
Device/File Name : /dev/ora_ocr_file2
Device/File integrity check succeeded
Cluster registry integrity check succeeded
- Is there a way to figure out what disk device is used for ASM spfile?
- To find out raw disks configured for disk groups:
while connected to the ASM instance in sqlplus:
SQL> select name, path from v$asm_disk where name like 'DG%';

NAME            PATH
--------------  ------------------
DG_DC_ASM_0000  /dev/rhdiskpower13
DG_DC_ASM_0001  /dev/rhdiskpower14
DG_DC_ASM_0002  /dev/rhdiskpower15
DG_DC_ASM_0003  /dev/rhdiskpower22

http://docs.oracle.com/cd/B19306_01/install.102/b14203/storage.htm#BABFFBBA
and
Configuring raw devices (singlepath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 465001.1]
Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1] -
Appraisal- "Define Tabs and Process Configuration for template"
Hi Experts,
I am looking to implement a flexible template. However, I am unable to do so because the "Define Tabs and Process Configuration for template" config node is not available, even though we have activated business functions HCM_OSA_CL_1 and HCM_OSA_CL_2. We are on EHP 5, SP level 44.
Please help.
1. Check Business Function CA_HAP_CI_1; first go through the documentation of the Business Function.
2. Check whether the BC Sets are activated through tcode SCPR20PR. If they are not activated, then activate them with tcode SCPR20.
BC Sets for HR
EA-HR-MENU
EA-HR-AKH
EA-HR-IMG
Mohan -
Internal disk configuration for oracle
Hi experts
I need some guidance on internal disk configuration for Oracle.
The requirements are for a 2-node clustered VM on Linux (OEL).
The OS will host RAC and OEM.
Will RAID 5 be an optimal setting?
This is not a production environment.
Thanks
912919 wrote:
Hi experts
I need some guidance on internal disk configuration for Oracle.
The requirements are for a 2-node clustered VM on Linux (OEL).
The OS will host RAC and OEM.
Will RAID 5 be an optimal setting?
For most definitions of "optimal" the answer is "NO".
RAID 10 provides better performance.
Handle: 912919
Status Level: Newbie
Registered: Feb 7, 2012
Total Posts: 135
Total Questions: 74 (46 unresolved)
why do you waste time here when you rarely get your questions answered?