JSPM in high availability
Dear Friends,
I have installed PI 7.1 in high availability using a Microsoft Windows MNS-based cluster. It was at the initial patch level SP04 of NetWeaver 7.1.
I have installed the SCS, ASCS, Enqueue Replication Server, etc. in cluster mode. I installed the primary application server on Node 2 and an additional application server on Node 1.
Now I have installed SP07 using JSPM from my primary application server, i.e. Node 1. It went through successfully and my Support Package level was upgraded to SP07. But now when I open the Java stack on Node 1, it shows only the home page at the URL http://10.6.4.178:50000
After that, if I open any link, e.g. UME or NWA, it does not open and throws the following error:
Service cannot be reached
What has happened?
URL http://10.6.4.178:50000/wsnavigator call was terminated because the corresponding service is not available.
Note
The termination occurred in system PIP with error code 404 and for the reason Not found.
The selected virtual host was 0 .
What can I do?
Please select a valid URL.
If it is a valid URL, check whether service /wsnavigator is active in transaction SICF.
If you do not yet have a user ID, contact your system administrator.
ErrorCode:ICF-NF-http-c:000-u:SAPSYS-l:E-i:sappicl1_PIP_00-v:0-s:404-r:Notfound
HTTP 404 - Not found
Your SAP Internet Communication Framework Team
Java is started properly on both nodes without any error, and everything works fine on Node 2, i.e. 10.6.4.179.
Please suggest! Do we need to do any special step when installing patches through JSPM in a high-availability setup?
Jitendra Tayal
Hi Jitendra,
While doing a patch upgrade with JSPM in an HA scenario, you need to switch off the automatic failover option: once the components are deployed, the J2EE engine gets restarted, and if automatic failover is active, the cluster treats that restart as a failure and switches the service over to the other node.
In this case:
1. Shut down all the instances and stop the cluster.
2. Restart the systems and start the cluster on the original servers.
3. Restart JSPM.
It should work fine then.
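Raghu's point can be illustrated with a toy simulation (all names are made up; this is not SAP or MSCS code): a cluster monitor that treats every service stop as a failure will fail over during a planned JSPM restart unless automatic failover is suspended first.

```python
# Toy model of why a planned J2EE restart triggers an unwanted failover.
# All names are illustrative; this is not SAP or cluster-service code.

class ClusterGroup:
    def __init__(self):
        self.owner = "Node1"
        self.auto_failover = True

    def service_stopped(self):
        """Called by the cluster monitor when the J2EE service goes down."""
        if self.auto_failover:
            # The monitor cannot tell a JSPM restart from a crash.
            self.owner = "Node2" if self.owner == "Node1" else "Node1"

def jspm_patch(group):
    # JSPM deploys components, then restarts the J2EE engine:
    group.service_stopped()   # stop phase of the restart
    # ...the engine starts again, but possibly owned by the wrong node now.
    return group.owner

g = ClusterGroup()
print(jspm_patch(g))          # failover happened: Node2

g2 = ClusterGroup()
g2.auto_failover = False      # suspend automatic failover first
print(jspm_patch(g2))         # stays on Node1
```

The same logic explains why the Java stack answered only partially on Node 1 after the upgrade: the deployment finished against whichever node held the service at the time.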
Let me know if it is helpful.
Best Regards
Raghu
Similar Messages
-
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
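To put numbers on that reasoning (illustrative figures only; the 1% downtime assumption is mine, not from the post):

```python
# Illustrative capacity arithmetic for the two designs.
ANNUAL_DOWNTIME_FRACTION = 0.01   # assume one host is down ~1% of the time

# Classic failover cluster: each host must reserve headroom to absorb the
# other's full load, so only ~50% of total hardware does useful work.
clustered_normal = 0.50

# Non-clustered, app-level HA (SQL AlwaysOn etc.): both hosts run at full
# capacity; during an outage the survivor carries a reduced load.
unclustered_normal = 1.00
unclustered_outage = 0.50

avg_unclustered = (unclustered_normal * (1 - ANNUAL_DOWNTIME_FRACTION)
                   + unclustered_outage * ANNUAL_DOWNTIME_FRACTION)
print(f"clustered:   {clustered_normal:.0%} usable at all times")
print(f"unclustered: {avg_unclustered:.1%} usable on average")
```

Under these assumptions the non-clustered design averages about 99.5% of total capacity, versus a constant 50% for the reserved-headroom cluster; the real trade is capacity against automatic, hands-off recovery.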
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you have really decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can use Storage Spaces as its back end (non-clustered), so read "write-back flash cache
for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
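The behavior change described above can be sketched with a toy model (not the real storage stack; the workload and function names are invented): the old target bypassed the disk cache for every write, the 2012 R2 target bypasses it only when the initiator actually sets FUA.

```python
# Toy model of the iSCSI Target Server caching change (illustrative only).
# Old behavior: every write bypasses the disk cache (as if FUA were always set).
# New behavior: only writes where the initiator requests FUA bypass the cache.

def writes_hitting_cache(ios, honor_fua_per_io):
    """ios: list of (data, fua_requested) tuples; return count of cached writes."""
    cached = 0
    for _data, fua in ios:
        bypass = fua if honor_fua_per_io else True
        if not bypass:
            cached += 1
    return cached

workload = [("a", False), ("b", False), ("c", True), ("d", False)]
print(writes_hitting_cache(workload, honor_fua_per_io=False))  # 0: all writes bypass
print(writes_hitting_cache(workload, honor_fua_per_io=True))   # 3: only the FUA write bypasses
```

That is the whole performance argument: most initiators set FUA rarely, so honoring it per-I/O lets the bulk of writes benefit from the disk cache.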
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario used to be worth it when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other vendors doing this, say DataCore (more aimed at Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Today I had a requirement where we have to use SharePoint Foundation 2013 (the free version) to build an intranet portal (basic announcements, calendar, department sites, document management - only check-in/check-out and versioning).
Please help me regarding the license and size limitations. (I know the feature comparison of Standard/Enterprise.) I just want to know about the installation process and licensing.
6 servers - 2 app / 2 web / 2 DB cluster (so in total 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses).
Thanks Trevor,
Does the load balancing service also come with the free license? In that case I can use SharePoint Foundation 2013 for building a simple intranet & DMS (with limited functionality), and for workflow and content management we would have to write code.
Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and would offer high availability for traffic bound to the SharePoint servers. WNLB can only associate with up to 4 servers.
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
11.1.2 High Availability for HSS LCM
Hello All,
Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available behind a load balancer?
My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut-and-paste of the migration.properties file from server2.
grouping.size=100
grouping.size_unknown_artifact_count=10000
grouping.group_by_type=Y
report.enabled=Y
report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
fileSystem.friendlyNames=false
msr.queue.size=200
msr.queue.waittime=60
group.count=10000
double-encoding=true
export.group.count = 30
import.group.count = 10000
filesystem.artifact.path=\\server1\import_export
When I perform an export of just the native users to the file system, the export fails and I find the following errors in the log.
[2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
[2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
[2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
I went looking for the path it says it is searching for, "/server1/import_export/msr/PKG_34.xml", and found that this file does exist; it was in fact created by the export process, so I know it is able to find the correct location, but it then says it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
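One detail worth noticing in the log lines above: the configured UNC path \\server1\import_export comes back as /server1/import_export/..., with forward slashes and a single leading separator. A small sketch (the helper name is hypothetical) of normalizing such a path back to Windows UNC form, which is one way to check what the service is actually trying to open:

```python
def to_unc(path):
    """Normalize a forward-slash form like '/server1/import_export/msr/PKG_34.xml'
    back to Windows UNC form '\\\\server1\\import_export\\msr\\PKG_34.xml'."""
    p = path.replace("/", "\\")
    # Collapse any leading separators to exactly two (the UNC prefix).
    return "\\\\" + p.lstrip("\\")

print(to_unc("/server1/import_export/msr/PKG_34.xml"))
# \\server1\import_export\msr\PKG_34.xml
```

If the process cannot open the UNC form of the path it logs, that points at the share or at the service account's network credentials rather than at LCM itself, which is consistent with the bug record below.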
Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Bug Attributes
Type: B - Defect
Fixed in Product Version: 11.1.2.0.000PSE
Severity: 3 - Minimal Loss of Service
Product Version: 11.1.2.0.00
Status: 90 - Closed, Verified by Filer
Platform: 233 - Microsoft Windows x64 (64-bit)
Created: 25-Jan-2011
Platform Version: 2003 R2
Updated: 24-Feb-2011
Base Bug: 11696634
Database Version: 2005
Affects Platforms: Generic
Product Source: Oracle
Related Products: Line -, Family -, Area -
Product: 4482 - Hyperion Lifecycle Management
Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration change: set up a shared disk, then modified migration.properties to filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode. -
It is not possible to implement a kind of HA between two different appliances, 3315 and 3395.
A node in HA can have all 3 personas.
Suppose Node A has Admin (Primary), Monitoring (Primary) and PSN, and
Node B has Admin (Secondary), Monitoring (Secondary) and PSN.
If Node A becomes unavailable, you will have to promote the Admin role on Node B to Primary manually.
The better layout, though, is to have
Node A: Admin (Primary), Monitoring (Secondary) and PSN
Node B: Admin (Secondary), Monitoring (Primary) and PSN.
Rate if helpful and mark as correct if it answers your question.
Regards, -
Oracle Berkeley DB Java Edition High Availability (White Paper)
Hi all,
I've just read Oracle Berkeley DB Java Edition High Availability White Paper
http://www.oracle.com/technetwork/database/berkeleydb/berkeleydb-je-ha-whitepaper-132079.pdf
In section "Time Consistency Policy" (Page 18) it is written:
"Setting a lag period that is too small, given the load and available hardware resources, could result in
frequent timeout exceptions and reduce a replica's availability for read operations. It could also increase
the latency associated with read requests, as the replica makes the read transaction wait so that it can
catch up in the replication stream."
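The trade-off in that passage can be made concrete with a toy simulation (the numbers and helper are invented; the real BDB JE class is TimeConsistencyPolicy): each read waits until the replica's lag falls within the permissible lag, or times out.

```python
# Toy simulation of a time-consistency policy on a replica (illustrative only).
def serve_read(replica_lag_ms, permissible_lag_ms, timeout_ms, catchup_rate=1.0):
    """Return (served, wait_ms): wait until lag <= permissible_lag or time out.
    catchup_rate: ms of lag the replica clears per ms of waiting (assumed)."""
    excess = replica_lag_ms - permissible_lag_ms
    if excess <= 0:
        return True, 0.0            # already consistent enough: no wait
    wait = excess / catchup_rate    # time needed to catch up
    if wait > timeout_ms:
        return False, timeout_ms    # read fails after waiting out the timeout
    return True, wait

# A tight 10 ms lag bound: this read waits out the timeout and fails.
print(serve_read(500, 10, 100))    # (False, 100)
# A looser 1 s bound: the same replica serves the read immediately.
print(serve_read(500, 1000, 100))  # (True, 0.0)
```

This also sketches the answers to the questions below: reads are served locally by design (the policy governs the local replica, it does not redirect to the master), so a too-small lag bound turns into waiting and timeouts on the replica instead.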
Can you tell me why those read operations will not be handled by the master?
Why would we have frequent timeouts?
Why should a read transaction wait instead of being redirected to the master?
Why should it reduce the replica's availability for read operations?
Thanks
Please post this question on the Berkeley DB Java Edition (BDB JE) forum, Berkeley DB Java Edition. This is the Berkeley DB Core (BDB) forum.
Thanks,
Andrei -
Office Web Apps farm - how to make it highly available
Hi there,
I have deployed OWA for Lync 2013 with two servers in one farm. While testing, e.g. the case when one server is down, I found the following error:
Log Name: Microsoft Office Web Apps
Source: Office Web Apps
Date: 17/03/2014 22:59:54
Event ID: 8111
Level: Error
Computer: OWASrv02.domain.com
Description:
A Word or PowerPoint front end failed to communicate with backend machine
http://OWASrv01:809/pptc/Viewing.svc
Does that mean that OWA cannot be set up as highly available when using standalone OWA servers only? The above error appeared while the server OWASrv01 was rebooting.
Petri
Hi Petri,
For your scenario, you have achieved high performance for your Office Web Apps farm, but not high availability.
To build a highly available Office Web Apps farm, you can refer to the blog:
http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx
Thanks,
Eric
Eric Tao
TechNet Community Support -
Error while installing NW 7.4 SR2 in High availability
Hello Guys,
We are installing NW 7.4 SR2 Enterprise Portal with a Sybase database on Windows Server 2012 R2 under high availability, i.e. a Microsoft cluster.
JEP is the system ID, SAPJEP is the virtual host, and X.X.X.36 is the virtual IP for the Central Instance node.
The physical IP of the first node is X.X.X.21.
We started the installation as the first cluster node and got stuck at phase 3.
The error in sapinst.log is below:
FCO-00011 The step assignNetworknameToClusterGroup with step key
|NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_GetSidFromCluster|ind|ind|ind|ind|getSid|0|assignNetworknameToClusterGroup
was executed with status ERROR ( Last error reported by the step: Could
not set properties on ClusterResource 'SAP JEP IP'.).
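Deeper in the trace below, the likely clue is "Path \\SAPJEP/sapmnt/JEP/SYS/profile is on a non-existing share." A small pre-flight sketch (helper names are hypothetical) that builds the profile UNC path the way the installer expects it, so you can verify the sapmnt share is reachable through the virtual host before rerunning sapinst:

```python
# Sketch: build the sapmnt profile path for a virtual host/SID and check it.
# Helper names are hypothetical; the path layout follows the sapinst trace.
import os

def profile_unc(virtual_host, sid):
    """UNC path to the SYS profile directory, e.g. \\\\SAPJEP\\sapmnt\\JEP\\SYS\\profile."""
    return rf"\\{virtual_host}\sapmnt\{sid}\SYS\profile"

def share_reachable(path):
    # On the first cluster node this should hold before sapinst is restarted;
    # if not, the sapmnt share on the virtual host is missing or offline.
    return os.path.isdir(path)

p = profile_unc("SAPJEP", "JEP")
print(p)                    # \\SAPJEP\sapmnt\JEP\SYS\profile
print(share_reachable(p))   # should be True on a healthy first node
```

If the check fails, make sure the virtual hostname SAPJEP resolves and the sapmnt share is online in the cluster group before repeating the assignNetworknameToClusterGroup step.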
The error log from sapinst_dev.log is below:
TRACE 2015-03-18 11:30:29.193
NWInstance.getStartProfilePath(true)
TRACE 2015-03-18 11:30:29.194
NWInstance.getStartProfileName()
TRACE 2015-03-18 11:30:29.194
NWInstance.getStartProfileName() done: JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.194
NWERSInstance.getDirProfile()
TRACE 2015-03-18 11:30:29.194
NW.getDirProfile(false)
TRACE 2015-03-18 11:30:29.195
NW.getDirProfile() done: \\SAPJEP/sapmnt/JEP/SYS/profile
TRACE 2015-03-18 11:30:29.204 [synxcpath.cpp:925]
CSyPath::getOSNodeType(bool) lib=syslib module=syslib
Path \\SAPJEP/sapmnt/JEP/SYS/profile is on a non-existing share.
TRACE 2015-03-18 11:30:29.204
NWERSInstance.getDirProfile() done: \\SAPJEP/sapmnt/JEP/SYS/profile
TRACE 2015-03-18 11:30:29.205
NWInstance.getStartProfilePath() done: \\SAPJEP/sapmnt/JEP/SYS/profile/JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.206
NWInstance.getProfilePath() done: \\SAPJEP/sapmnt/JEP/SYS/profile/JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.206
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.207
NW.getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.207
NW.isUnicode(true)
TRACE 2015-03-18 11:30:29.208
NW.isUnicode() done: true
TRACE 2015-03-18 11:30:29.208
NW.getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.208
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.208
NWProfiles(JEP.ERS10.SAPJEP)._getPersistentProfile(INSTANCE) done
TRACE 2015-03-18 11:30:29.209
NWProfiles(JEP.ERS10.SAPJEP).getPersistentProfile() done
TRACE 2015-03-18 11:30:29.209
PersistentR3Profile.commit()
TRACE 2015-03-18 11:30:29.209
PersistentR3Profile.commit() done
TRACE 2015-03-18 11:30:29.209
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.210
NW.getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.210
NW.isUnicode(true)
TRACE 2015-03-18 11:30:29.211
NW.isUnicode() done: true
TRACE 2015-03-18 11:30:29.211
NW.getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.211
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.211
PersistentR3Profile.commit()
TRACE 2015-03-18 11:30:29.211
PersistentR3Profile.commit() done
TRACE 2015-03-18 11:30:29.212
NWProfiles(JEP.ERS10.SAPJEP).commit() done
TRACE 2015-03-18 11:30:29.212
NWInstanceInstall.addSAPCryptoParametersToProfile() done
TRACE 2015-03-18 11:30:29.260
The step setProfilesForSAPCrypto with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0|NW_SAPCrypto|ind|ind|ind|ind|crypto|0 has been executed successfully.
INFO 2015-03-18 11:30:29.262 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:29.265 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.267 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.270 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:29.280 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.281 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.284 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:29.293 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.295 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.315 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:29.330 [csistepexecute.cpp:1024]
Execute step askUnpack of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0
TRACE 2015-03-18 11:30:29.339 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0|askUnpack returns TRUE
TRACE 2015-03-18 11:30:29.364
Call block: NW_Unpack2_ind_ind_ind_ind
function: NW_Unpack2_ind_ind_ind_ind_DialogPhase_askUnpack
is validator: false
TRACE 2015-03-18 11:30:29.364
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:29.366
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:29.366
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:29.367
NWOption(collected).value()
TRACE 2015-03-18 11:30:29.367
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:29.369
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:29.370
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:29.371
NW() done
TRACE 2015-03-18 11:30:29.371
NWInstall() done
TRACE 2015-03-18 11:30:29.371
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:29.372
NWInstall.mapUnpackTable(t_askUnpack)
TRACE 2015-03-18 11:30:29.373
NWInstall.mapUnpackTable() done: true
TRACE 2015-03-18 11:30:29.377
NWDB._get(JEP, ind)
TRACE 2015-03-18 11:30:29.377
NWDB._fromTable(JEP, ctor)
TRACE 2015-03-18 11:30:29.379
NWDB(JEP, {
sid:JEP
hostname:undefined
dbsid:undefined
installDB:false
ORACLE_HOME:
abapSchema:
abapSchemaUpdate:
javaSchema:
TRACE 2015-03-18 11:30:29.379
NWDB() done
TRACE 2015-03-18 11:30:29.379
NWDB._fromTable() done
TRACE 2015-03-18 11:30:29.380
NWDB._get() done
TRACE 2015-03-18 11:30:29.380
NWDB.getDBType(): ind
TRACE 2015-03-18 11:30:29.390 [iaxxgenimp.cpp:283]
CGuiEngineImp::showDialogCalledByJs()
showing dlg d_nw_ask_unpack
TRACE 2015-03-18 11:30:29.390 [iaxxgenimp.cpp:293]
CGuiEngineImp::showDialogCalledByJs()
<dialog sid="d_nw_ask_unpack">
<title>Unpack Archives</title>
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value></value>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:29.391 [iaxxgenimp.cpp:1031]
CGuiEngineImp::acceptAnswerForBlockingRequest
Waiting for an answer from GUI
TRACE 2015-03-18 11:30:30.617 [iaxxdlghnd.cpp:96]
CDialogHandler::doHandleDoc()
CDialogHandler: ACTION_NEXT requested
TRACE 2015-03-18 11:30:30.618 [iaxxcdialogdoc.cpp:190]
CDialogDocument::submit()
<dialog sid="d_nw_ask_unpack">
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:30.646
Call block: NW_Unpack2_ind_ind_ind_ind
function: NW_Unpack2_ind_ind_ind_ind_DialogPhase_askUnpackValidator_default
is validator: true
TRACE 2015-03-18 11:30:30.646
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:30.648
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:30.648
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:30.649
NWOption(collected).value()
TRACE 2015-03-18 11:30:30.649
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:30.652
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:30.653
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:30.654
NW() done
TRACE 2015-03-18 11:30:30.654
NWInstall() done
TRACE 2015-03-18 11:30:30.654
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:30.655
NWInstall.validateUnpackTable(t_askUnpack)
TRACE 2015-03-18 11:30:30.657
NWInstall.validateUnpackTable(): validating row {
codepage:Unicode
destination:E:\usr\sap\JEP\SYS\exe\uc\NTAMD64
ownPath:
path:DBINDEP\SAPEXE.SAR
sid:JEP
unpack:true
TRACE 2015-03-18 11:30:30.659 [tablecpp.cpp:136]
Table(t_NW_unpack).updateRow({
cd:UKERNEL
codepage:Unicode
confirmUnpack:
destination:E:\usr\sap\JEP\SYS\exe\uc\NTAMD64
list:
needUnpacking:true
ownPath:
path:DBINDEP\SAPEXE.SAR
selectiveExtract:false
sid:JEP
unpack:true
wasUnpacked:false
}, WHERE sid='JEP' AND path='DBINDEP\SAPEXE.SAR' AND codepage='Unicode' AND destination='E:\usr\sap\JEP\SYS\exe\uc\NTAMD64')
updating
TRACE 2015-03-18 11:30:30.661
NWInstall.validateUnpackTable() done
TRACE 2015-03-18 11:30:30.722 [iaxxgenimp.cpp:301]
CGuiEngineImp::showDialogCalledByJs()
<dialog sid="d_nw_ask_unpack">
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:30.724 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.725 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.728 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.776
The step askUnpack with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0 has been executed successfully.
INFO 2015-03-18 11:30:30.778 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:30.780 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.781 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.785 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.802 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.803 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.806 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.819 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.842 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:30.908 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.909 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.963 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:31.139 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:31.142 [csistepexecute.cpp:1024]
Execute step setDisplayNames of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0
TRACE 2015-03-18 11:30:31.177
Call block: NW_FirstClusterNode_ind_ind_ind_ind
function: NW_FirstClusterNode_ind_ind_ind_ind_DialogPhase_setDisplayNames_Preprocess
is validator: false
TRACE 2015-03-18 11:30:31.219 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|setDisplayNames returns TRUE
TRACE 2015-03-18 11:30:31.243
Call block: NW_FirstClusterNode_ind_ind_ind_ind
function: NW_FirstClusterNode_ind_ind_ind_ind_DialogPhase_setDisplayNames
is validator: false
TRACE 2015-03-18 11:30:31.246
NWInstance.byNumberAndRealHost(00, DCEPPRDCI)
TRACE 2015-03-18 11:30:31.246
NWInstance.find()
TRACE 2015-03-18 11:30:31.248
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:31.248
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:31.249
NWOption(collected).value()
TRACE 2015-03-18 11:30:31.249
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:31.252
NWInstance._fromRow({
sid:JEP
name:SCS00
number:00
host:SAPJEP
guid:0
realhost:dcepprdci
type:SCS
installationStatus:installing
startProfileName:JEP_SCS00_SAPJEP
instProfilePath:
dir_profile:
unicode:false
collectSource:
TRACE 2015-03-18 11:30:31.252
NWSCSInstance()
TRACE 2015-03-18 11:30:31.252
NWInstance(JEP/SCS00/SAPJEP)
TRACE 2015-03-18 11:30:31.252
NWInstance(JEP/SCS00/SAPJEP) done
TRACE 2015-03-18 11:30:31.253
NWInstanceInstall()
TRACE 2015-03-18 11:30:31.253
NWInstanceInstall() done
TRACE 2015-03-18 11:30:31.253
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:31.253
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:31.254
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:31.254
NWOption(collected).value()
TRACE 2015-03-18 11:30:31.254
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:31.256
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:31.257
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:31.258
NW() done
TRACE 2015-03-18 11:30:31.258
NWInstall() done
TRACE 2015-03-18 11:30:31.258
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:31.258
NWSCSInstance() done
TRACE 2015-03-18 11:30:31.259
NWInstance._fromRow() done
TRACE 2015-03-18 11:30:31.259
NWInstance._fromRow({
sid:JEP
name:ERS10
number:10
host:SAPJEP
guid:1
realhost:dcepprdci
type:ERS
installationStatus:installing
startProfileName:JEP_ERS10_SAPJEP
instProfilePath:
dir_profile:
unicode:false
collectSource:
TRACE 2015-03-18 11:30:31.259
NWERSInstance()
TRACE 2015-03-18 11:30:31.260
NWInstance(JEP/ERS10/SAPJEP)
TRACE 2015-03-18 11:30:31.260
NWInstance(JEP/ERS10/SAPJEP) done
TRACE 2015-03-18 11:30:31.260
NWInstanceInstall()
TRACE 2015-03-18 11:30:31.260
NWInstanceInstall() done
TRACE 2015-03-18 11:30:31.261
NWERSInstance() done
TRACE 2015-03-18 11:30:31.261
NWInstance._fromRow() done
TRACE 2015-03-18 11:30:31.261
NWInstance.find() done: 1 instances found
TRACE 2015-03-18 11:30:31.261
t_scs.setDisplayName([nw.progress.installInstance, SCS00], WHERE ROWNUM=0)
TRACE 2015-03-18 11:30:31.305
The step setDisplayNames with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0 has been executed successfully.
INFO 2015-03-18 11:30:31.307 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:31.362
Build Client for subcomponent |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_FinishFirstClusterNode|ind|ind|ind|ind|finishFirst|0
TRACE 2015-03-18 11:30:31.364
TRACE 2015-03-18 11:30:31.364 [ccdclient.cpp:22]
CCdClient::CCdClient()
TRACE 2015-03-18 11:30:31.385 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.387 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:31.390 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:31.406 [cpropertycontentmanager.hpp:220]
CPropertyContentManager::logMissingParameters()
The following parameters can be set via SAPINST_PARAMETER_CONTAINER_URL (inifile.xml) but not via SAPINST_INPUT_PARAMETERS_URL:
Component 'NW_ERS_Instance': ersInstanceNumber, restartSCS
Component 'NW_GetDomainOU': isOUInstallation, ou_delimiter, windows_domain_ous
Component 'NW_GetSidFromCluster': clusterGroup, isUnicode, localDrive, network, networkName, sharedDrive, sid, useDHCP
Component 'NW_GetUserParameterWindows': sapDomain, sapServiceSIDPassword, sidAdmPassword
Component 'NW_SAPCrypto': installSAPCrypto
Component 'NW_SCS_Instance': instanceNumber, scsMSPort, scsMSPortInternal
Component 'Preinstall': installationMode
INFO 2015-03-18 11:30:31.802 [synxcpath.cpp:799]
CSyPath::createFile() lib=syslib module=syslib
Creating file C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\summary.html.
TRACE 2015-03-18 11:30:31.828 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.852 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:31.916 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:31.917 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.971 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:32.152 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:32.154 [sixxbsummary.cpp:320]
CSiSummaryContainer::showDialog()
<?xml version="1.0" encoding="UTF-8"?><sapinstgui version="1.0"><dialog version="1.0" sid="diReviewDialog"> <title>Parameter Summary</title><description>Choose 'Next' to start with the values shown. Otherwise, select the parameters to be changed and choose 'Revise'. You are then taken to the screen where you can change the parameter. You might be guided through other screens that have so far been processed.</description> <review id="review_controll"><caption><![CDATA[Parameter list]]></caption><inputgroup id="ID_1"> <caption><![CDATA[Drive for Local Instances]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_2"><pulldown edit="false" enabled="true" highlight="false" sid="localDrive">
<caption>Destination Drive for Local Instances</caption>
<helpitem id="mscs.SelectLocalDrive"/>
<value>C:</value>
<value>D:</value>
<selectvalue>D:</selectvalue>
</pulldown></input></inputgroup><inputgroup id="ID_3"> <caption><![CDATA[SAP System Cluster Parameters]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_4"><field enabled="true" highlight="false" sid="sid">
<caption>SAP System ID (SAPSID)</caption>
<helpitem id="common.SAPSystemIDCIDB"/>
<value type="upper" maxlength="3" regexp="^[A-Z][A-Z0-9]{2}" minlength="3">JEP</value>
</field></input><input id="ID_5"><field enabled="true" highlight="false" sid="networkName">
<caption>Network Name (SAP Virtual Instance Host)</caption>
<helpitem id="common.VirtualInstanceHostWindowsMSCS"/>
<value type="string" maxlength="13" minlength="1">SAPJEP</value>
</field></input><input id="ID_6"><pulldown edit="false" enabled="true" highlight="false" sid="network">
<caption>Use Public Network</caption>
<helpitem id="mscs.UsePublicNetwork"/>
<value>Cluster Network 1</value>
<selectvalue>Cluster Network 1</selectvalue>
</pulldown></input><input id="ID_7"><pulldown edit="false" enabled="true" highlight="false" sid="sharedDrive">
<caption>Destination Drive for Clustered Instances</caption>
<helpitem id="mscs.SelectSharedDrive"/>
<value>BACKUP</value>
<value>SAP_USR_SID_CI_NODE</value>
<value>SAPDATA1</value>
<value>SAPDATA2</value>
<value>SAPDATA3</value>
<value>SAPDATA4</value>
<value>SAPLOG1</value>
<value>SAPLOG2</value>
<value>SAPTEMP_SAPDIAG_SAPSYBSYSTEM</value>
<value>SYBASE</value>
<selectvalue>SAP_USR_SID_CI_NODE</selectvalue>
</pulldown></input></inputgroup><inputgroup id="ID_8"> <caption><![CDATA[DNS Domain Name]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_9"><check enabled="true" highlight="false" sid="setFQDN">
<caption>Set FQDN for SAP system</caption>
<helpitem id="common.FullyQualifiedDomainNameSelect"/>
<boolvalue>
<true/>
</boolvalue>
</check></input><input id="ID_10"><field enabled="true" highlight="false" sid="FQDN" depends="setFQDN">
<caption>DNS Domain Name for SAP System</caption>
<helpitem id="common.FullyQualifiedDomainName"/>
<value type="string" regexp="^[^.]+.*$" minlength="1">jnport.com</value>
</field></input></inputgroup><inputgroup id="ID_11"> <caption><![CDATA[Windows Domain]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_12"><radiobox enabled="true" highlight="false" sid="domainType">
<caption>Domain Model</caption>
<helpitem id="common.DomainModelDomainLocal"/>
<radio enabled="true" sid="global">
<caption>Domain of Current User</caption>
<boolvalue>
<true/>
</boolvalue>
</radio>
<radio enabled="true" sid="other">
<caption>Different Domain</caption>
</radio>
</radiobox></input></inputgroup><inputgroup id="ID_13"> <caption><![CDATA[Operating System Users]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_14"><password confirm="true" enabled="true" highlight="false" sid="sidAdmPassword">
<caption>Password of SAP System Administrator</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input><input id="ID_15"><password confirm="true" enabled="true" highlight="false" sid="sapServiceSIDPassword">
<caption>Password of SAP System Service User</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input></inputgroup><inputgroup id="ID_16"> <caption><![CDATA[Windows Domain for SAP Host Agent]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_17"><radiobox enabled="true" highlight="false" sid="domainType">
<caption>Domain Model</caption>
<helpitem id="common.DomainModelDomainLocal"/>
<radio enabled="true" sid="local">
<caption>Local Domain</caption>
<boolvalue>
<true/>
</boolvalue>
</radio>
<radio enabled="true" sid="global">
<caption>Domain of Current User</caption>
</radio>
<radio enabled="true" sid="other">
<caption>Different Domain</caption>
</radio>
</radiobox></input></inputgroup><inputgroup id="ID_18"> <caption><![CDATA[Operating System Users]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_19"><password confirm="true" enabled="true" highlight="false" sid="sidAdmPassword">
<caption>Password of SAP System Administrator</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input></inputgroup><inputgroup id="ID_20"> <caption><![CDATA[SCS Instance]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_21"><field enabled="true" highlight="false" sid="instanceNumber">
<caption>SCS Instance Number</caption>
<helpitem id="common.InstanceNumberCreate"/>
<value type="string" maxlength="2" regexp="[0-9]{2}" minlength="2">00</value>
</field></input></inputgroup><inputgroup id="ID_22"> <caption><![CDATA[Java Message Server Port]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_23"><field enabled="true" highlight="false" sid="scsMSPortInternal">
<caption>Internal Java Message Server Port</caption>
<helpitem id="common.InternalMessagePort"/>
<value type="numeric" maxlength="5" min="1025" max="65535">3900</value>
</field></input></inputgroup><inputgroup id="ID_24"> <caption><![CDATA[Enqueue Replication Server Instance]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_25"><field enabled="true" highlight="false" sid="ersInstanceNumber">
<caption>Number of the ERS Instance</caption>
<helpitem id="common.InstanceNumberCreate"/>
<value type="numeric" maxlength="2" max="99" minlength="2">10</value>
</field></input><input id="ID_26"><field enabled="true" highlight="false" sid="ersVirtualHostname">
<caption>ERS Instance - Virtual Host Name</caption>
<helpitem id="common.virtualHostname"/>
<value type="string" maxlength="60" minlength="1">SAPJEP</value>
</field></input></inputgroup><inputgroup id="ID_27"> <caption><![CDATA[Unpack Archives]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_28"><table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table></input></inputgroup></review><button sid="btPREV" default="false" enabled="false" ><caption>< &Back</caption><action>ACTION_PREV</action><tooltiphelp><![CDATA[]]></tooltiphelp></button><button sid="btNEXT" default="true"><caption>&Next</caption><action>ACTION_NEXT</action><tooltiphelp><![CDATA[Continue processing]]></tooltiphelp></button><button sid="btEDIT"><caption>&Revise</caption><action>ACTION_EDIT</action><tooltiphelp><![CDATA[Edit input values]]></tooltiphelp></button><button sid="btDETAIL" default="true"><caption>Show &Detail</caption><action>ACTION_CBD_DETAIL</action><tooltiphelp><![CDATA[Show all input values]]></tooltiphelp></button></dialog></sapinstgui>
TRACE 2015-03-18 11:30:32.161 [iaxxgenimp.cpp:1031]
CGuiEngineImp::acceptAnswerForBlockingRequest
Waiting for an answer from GUI
TRACE 2015-03-18 11:30:33.330 [sixxbsummary.cpp:83]
CSiSummaryContainer::setDocument(const iXMLDocument & doc)
<?xml version="1.0" encoding="utf-8"?>
<sapinstguiresp>
<action>ACTION_NEXT</action>
<dialog sid="diReviewDialog">
<review id="review_controll"/>
</dialog>
</sapinstguiresp>
TRACE 2015-03-18 11:30:33.331 [iaxxdlghnd.cpp:96]
CDialogHandler::doHandleDoc()
CDialogHandler: ACTION_NEXT requested
TRACE 2015-03-18 11:30:33.331 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.356 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:33.375 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.375 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.429 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:33.500 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.773
Status of Step |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0|ntpatch2 is not OK. So we have to restart at this point.
TRACE 2015-03-18 11:30:33.773
Switch to STANDARD mode
TRACE 2015-03-18 11:30:33.773 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.801 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:33.866 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.868 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.926 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:34.67 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:34.67 [csistepexecute.cpp:1024]
Execute step ntpatch2 of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0
TRACE 2015-03-18 11:30:34.084
Call block: NW_Update_DLLs_ind_ind_ind_ind
function: NW_Update_DLLs_ind_ind_ind_ind_SubComponentContainer_ntpatch2_Preprocess
is validator: false
TRACE 2015-03-18 11:30:34.115 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0|ntpatch2 returns TRUE
TRACE 2015-03-18 11:30:34.127
Call block: NW_Update_DLLs_ind_ind_ind_ind
function: NW_Update_DLLs_ind_ind_ind_ind_SubComponentContainer_ntpatch2
is validator: false
TRACE 2015-03-18 11:30:34.127
NW_Update_DLLs.installDLLs(false)
TRACE 2015-03-18 11:30:34.127
NW_Update_DLLs.installDLLs() already run
TRACE 2015-03-18 11:30:34.153
The step ntpatch2 with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0 has been executed successfully.
INFO 2015-03-18 11:30:34.154 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:34.187 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:34.200 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:34.255 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:34.256 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:34.283 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:34.425 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:34.426 [csistepexecute.cpp:1024]
Execute s
Hello Arjun,
Have you executed any other installation prior to this attempt?
It seems there are unfinished installations on this server. If this is the case, run SWPM to uninstall the unfinished installation, then execute the installation once again.
Also, make sure all IPs are correct in the DNS and hosts files. There have been cases in which the IPs were not properly configured, or there was a typo.
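A quick way to cross-check name resolution on the Windows cluster nodes might look like this (the virtual host name below is a placeholder; run it on each node and compare the results):

```shell
rem Does the hosts file entry match what DNS returns for the virtual host?
findstr /i "yourvirtualhost" C:\Windows\System32\drivers\etc\hosts
nslookup yourvirtualhost
rem Confirm the name resolves and the node answers
ping -n 1 yourvirtualhost
```

If the two addresses differ, fix the DNS record or the hosts file entry before restarting the installation.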
Regards,
Henrique Girardi
SAP Active Global Support
#2081285 - How to enter good search terms to an SAP search?
https://service.sap.com/sap/support/notes/2081285
SWPM Troubleshooting documents:
http://scn.sap.com/docs/DOC-62646
OIM 11g High Availability Deployment
Hi Experts,
I'm deploying OIM 11g in a high-availability schema, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM & SOA in a WebLogic domain on 'OIMHOST1'. To propagate the configuration from 'OIMHOST1' to 'OIMHOST2', I packed the domain on 'OIMHOST1' (using pack.sh) and unpacked it (using unpack.sh) on 'OIMHOST2'. I then updated the Node Manager by executing setNMProps.sh and finally started the Node Manager. To test that everything is fine, and following the documentation, I'm trying to perform the following steps, but I'm not succeeding.
I MUST ALSO SAY THAT I'M RUNNING ON A SINGLE STANDARD EDITION DB INSTANCE AND NOT RAC AS MENTIONED IN THE ORACLE DOCS. PLEASE CLARIFY WHETHER RAC IS REQUIRED. FOR NOW I'M IN A DEVELOPMENT ENVIRONMENT, SO I THINK RAC IS NOT REQUIRED, BUT PLEASE CLARIFY.
8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
/u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
Here it is not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid: the first time I executed the startWebLogic.sh script, it asked for a username/password. I updated boot.properties (vi boot.properties) and manually set a clear-text username and password; this time the startWebLogic.sh script passed this stage, but then it fails:
<Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
<Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
MDS-01329: unable to load element "persistence-config"
MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
I have verified that this directory "mds" does not exist on OIMHOST2, as reported by the IOException, but it does exist on OIMHOST1. From here it is not possible for me to follow Oracle's documentation. I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console. I have tested 2 ways:
1. All managed servers on OIMHOST1 are shut down; in this case, the managed servers on OIMHOST2 work as expected.
2. All managed servers on OIMHOST1 are RUNNING; in this case, I first started the SOA2 managed server and after that the OIM2 managed server. When it finishes the boot process, the following message appears in the server's output:
<Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
Start the WLS_SOA2 managed server using the WebLogic Administration Console.
Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
The URL for the Oracle Identity Manager Console is:
http://oimvhn2.mycompany.com:14000/oim
Log in using the xelsysadm password.
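Regarding the boot.properties problem above: the file WebLogic reads at startup is a plain two-line Java properties file. A minimal sketch (the credentials below are placeholders, not values from this thread; WebLogic encrypts the values in place after the next successful start):

```properties
# $DOMAIN_HOME/servers/AdminServer/security/boot.properties
# Placeholder credentials -- WebLogic encrypts them on the next clean start
username=weblogic
password=your_admin_password
```

If the file is missing or invalid, startWebLogic.sh prompts for the credentials interactively, which is what Juan observed.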
Your help is highly appreciated
Regards
Juan
Hi Vaasu,
I have succeeded in deploying OIM in HA; just now my customer and I are working on the installation of the web tier. Now I have a better understanding of HA concepts and the way WebLogic works -really nice, but a little tricky-
All the magic about HA is configuring the network interfaces properly on each Linux box (in our case). First of all, you need to create 2 new floating IPs on each Linux box (google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs.
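A minimal sketch of what "clone the eth0 script" amounts to on a RHEL-style box (the addresses and alias names below are examples, not taken from this thread; root privileges are needed):

```shell
# Temporary aliases (lost on reboot) -- pick unused IPs in your subnet:
ifconfig eth0:1 192.168.1.101 netmask 255.255.255.0 up
ifconfig eth0:2 192.168.1.102 netmask 255.255.255.0 up

# Persistent variant: clone the eth0 script and adjust DEVICE/IPADDR,
# e.g. /etc/sysconfig/network-scripts/ifcfg-eth0:1 containing:
#   DEVICE=eth0:1
#   IPADDR=192.168.1.101
#   NETMASK=255.255.255.0
#   ONBOOT=yes
```

The managed servers are then bound to these alias addresses, which is what allows them to float between the two boxes.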
Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
create DB schemas with RCU
install weblogic
install SOA
patch SOA
install IAM
---if you are working on a virtual machine is good idea to take a snapshot here---
Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2 Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
For the oim_server1 entry, change the entry to the following values:
Name: WLS_OIM1
Listen Address: the IP that is configured on eth0:1 of Linux box1
Listen Port: 14000
For the soa_server1 entry, change the entry to the following values:
Name: WLS_SOA1
Listen Address: the IP configured on eth0:2 of Linux box1
Listen Port: 8001
For the second OIM Server, click Add and supply the following information:
Name: WLS_OIM2
Listen Address: the IP configured on eth0:1 of Linux box2
Listen Port: 14000
For the second SOA Server, click Add and supply the following information:
Name: WLS_SOA2
Listen Address: the IP configured on eth0:2 of Linux box2
Listen Port: 8001
Click Next.
On step 16, ensure you are using the UNIX tab to configure the machines. Also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, and the same for machine2.
Please confirm you have performed 8.9.3.3.2 Update Node Manager on OIMHOST1.
If everything is OK, you should be able to start the AdminServer as described in the guide.
configure OIM: 8.9.3.4.2 Running the Oracle Identity Management Configuration Wizard, in my case I don't need LDAPsync, I have skipped this section, if you configure properly OIM, then you mus perform 8.9.3.5 Post-Configuration Steps for the Managed Servers
resrtar AdminServer then from the weblogic console, start OIM and SOA if node manager is properly configured SOA and OIM must run properly, update deployment mode and coherence as described in the guide and verify that OIM run perfectly in Linux box1.
Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes)
Update and start NodeManager as described in the guide
VERY IMPORTANT OBSERVATION
the guide says:
8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
JUAN'S OBSERVATION:
IT IS NOT POSSIBLE TO START OR STOP THE ADMINSERVER ON HOST2, SINCE THE ADMINSERVER WAS CONFIGURED TO LISTEN ON THE IP ADDRESS OF THE eth0 INTERFACE ON HOST1, SO IT'S NOT POSSIBLE TO RUN IT ON HOST2. I THINK AN ADDITIONAL PROCEDURE SHOULD BE FOLLOWED TO CONFIGURE THE ADMINSERVER IN HA IN AN ACTIVE-PASSIVE MODE.
Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
/u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 & -----NOT APPLICABLE
Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. -----NOT APPLICABLE
Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----START SOA2 FROM THE CONSOLE RUNNING ON HOST1, IT DOESN'T MATTER
Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ------ START OIM2 FROM THE CONSOLE RUNNING ON HOST1
HERE YOU SHOULD BE ABLE TO LOG IN TO THE OIM2 SERVER AS DESCRIBED IN THE GUIDE; YOU DON'T NEED TO EXECUTE THE config.sh SCRIPT, THIS SHOULD WORK AS DESCRIBED.
Server migration should work straightforwardly if you have configured the floating IPs as described. I have not configured the persistence yet, since my customer does not have the skills to set up shared storage.
I hope this helps, and feel free to comment or complement.
By the way, do you know how to set up a valid SSL certificate on Windows Server 2003? I need it to test an Exchange 2007 integration I'm trying to set up.
Regards
Juan -
High Availability of BPEL System
We are having a High Availability architecture configuration for the BPEL System in our production environment.
The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
We have 5 BPEL processes which are getting invoked within each other. For eg:
BPELProcess1 --> BPELProcess2 --> BPELProcess3, BPELProcess4 &
BPELProcess4 --> BPELProcess5
Now when all the above BPEL processes are deployed on both nodes of the BPEL server, how do we handle the endpoint URLs of these BPEL servers?
Should we hardcode the endpoint URL in the invoking BPEL process, or should we replace the IP addresses of the two BPEL server nodes with the IP address of the load balancer?
If we replace the IP address of the BPEL server with the IP address of the load balancer, it will require us to modify, redeploy and retest all the BPEL processes again.
Please advise
Thanks.
The BPEL servers are configured in an active-active topology and RAC is used in the database tier of the architecture.
The BPEL servers are not clustered. A load balancer is used in front of the two BPEL server nodes. -
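One pattern worth noting: instead of hardcoding either node's IP, point each invoking process's partner link at the load balancer's virtual hostname, so a node failure never requires a redeploy. An illustrative partnerLinkBinding in the caller's bpel.xml (the host, port and process names here are made-up placeholders):

```xml
<!-- hypothetical names: lb.example.com stands for the load balancer VIP -->
<partnerLinkBinding name="BPELProcess2">
  <property name="wsdlLocation">
    http://lb.example.com:7777/orabpel/default/BPELProcess2/BPELProcess2?wsdl
  </property>
</partnerLinkBinding>
```

The one-time cost of re-pointing and redeploying against the load balancer buys you endpoints that survive node failures without further changes.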
Synchronous File Read in a High Availability Scenario
Hi All,
We have Oracle SOA Suite 10.1.3.4, which is a High Availability instance. As per Oracle's doc ID 730515.1 we have edited the bpel.xml file with the adapter-cluster property only in the cases where a BPEL process was polling for a file and instantiating the process, to avoid the race condition.
However, we have not made this change for BPEL processes which contain a synchronous read. As I understand it, such processes (which contain a synchronous read) are already instantiated before the file is read, and hence the race scenario does not exist.
Please correct me if my understanding is incorrect. Also, will I need to make any changes for the process in such a case?
Thanks and regards.
Hi Marcio,
The Oracle Document I mentioned available on Metalink states:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Applies to:
Oracle9i AS Integration Platform - Version: 10.1.3.3
Information in this document applies to any platform.
Goal
You have installed BPEL cluster. However, you expect that adapters doing the same work on two different nodes might encounter a race condition. So you would like to configure a singleton adapter. This note describes how to do it.
Solution
1. In collaxa-config.xml change from
<property id="clusterName">
<name>Cluster Id</name>
<value>machine_name:port</value>
<comment>
to
<property id="clusterName">
<name>Cluster Id</name>
<value>bpelCluster</value>
<comment>
The value: 'bpelCluster'
can be anything but has to be the same on both instances.
2. Multicast host and port in jgroups-properties.xml have to be same on both instances.
3. To enable a singleton adapter add the following property in bpel.xml of the process using the adapter in question:
<property name="clusterGroupId">adapterCluster</property>
where the actual value - 'adapterCluster' in this case can be anything but has to be different from the value used before (bpelCluster) when configuring BPEL cluster. The reason for that is that BPEL engine and an adapter will use different communication channels.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
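For context, a bpel.xml carrying the singleton-adapter property from step 3 would look roughly like this (the process id and file names are placeholders, not from the Oracle note):

```xml
<!-- sketch only: id and src are hypothetical -->
<BPELSuitcase>
  <BPELProcess id="FilePollingProcess" src="FilePollingProcess.bpel">
    <configurations>
      <property name="clusterGroupId">adapterCluster</property>
    </configurations>
  </BPELProcess>
</BPELSuitcase>
```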
I would like to know if this is applicable in case we have a BPEL process which contains a synchronous read.
Thanks and Regards. -
Hi,
I currently have a single Exchange 2010 server with all the roles, supporting about 500 users. I plan to upgrade to 2013 and move to a four-server HA Exchange setup (a CAS array with 2 servers as CAS servers, and one DAG with 2 mailbox servers). My goal is to plan out the transition in steps with no downtime. Email is most critical to my company.
Exchange 2010 is running SP3 on a Windows Server 2010 box, with a separate server for archiving. In the new setup, rather than having a separate archive server, I am just going to put the archives on a separate partition.
Here is what I have planned so far.
1. Build out four Servers. 2 CAS and 2 Mailbox Servers. Mailbox Servers have 4 partitions each. One for OS. Second for DB. Third for Logs and Fourth for Archives.
2. Prepare AD for exchange 2013.
3. Install Exchange roles: CAS on two servers and Mailbox on two servers. Add a DAG. Someone suggested to me to use an odd number, so 3 or 5. Is that a requirement?
4. I am using a third party load balancer for CAS array instead of NLB so I will be setting up that.
5. Do the post-install steps to ready the new CAS. While doing this, can I use the same parameters as assigned on Exchange 2010, e.g. the same webmail URL for Outlook Anywhere, OAB, etc.?
6. Once this is done, I plan to move a few mailboxes as a test to the new mailbox servers or DAG.
7. Test Outlook setups on the new servers; inbound and outbound email tests.
Once this is done, I can migrate over and point all my MX records to the new servers.
Please let me know your thoughts and what I am missing. I'd like to solidify a flowchart of all the steps I need to do before I start the migration.
Thank you for your help in advance.
Hi,
Okay, you can use 4 virtual servers. But there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtualized! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better and less expensive.
The CAS array is only an Active Directory object, nothing more. The load balancer controls on which CAS a user's session terminates. You can read more at
http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx Also, no session affinity is required.
First, build the complete Exchange 2013 architecture. High availability for your data is a DAG and for your CAS you use a load balancer.
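For reference, creating the DAG and adding the two mailbox servers is only a few lines in the Exchange Management Shell - a sketch with placeholder names (the witness server and directory are assumptions you must adapt):

```powershell
# Placeholder names - run in the Exchange Management Shell
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
```

On the odd-number question above: what matters is an odd number of quorum voters, not of mailbox servers - with an even number of DAG members, the file share witness supplies the extra vote.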
On Channel 9 there is a lot of material from MEC:
http://channel9.msdn.com/search?term=exchange+2013
Migration:
http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
Additional information:
http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
Hope this helps :-) -
Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage
Hi Guys,
Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster of the Hyper-V hosts to implement HA. Here's the setup:
Host1
HP Proliant L380 G7
Windows Server 2012 Std
Hyper-V role, Failover Cluster Manager and File and Storage Services installed
Host2
Dell PowerEdge 2950
Windows Server 2012 Std
Hyper-V role, Failover Cluster Manager and File and Storage Services installed
Storage
Dell PowerEdge 6800
Windows Server 2012 Std
File and Storage Services installed
I'm able to configure the new Shared Nothing Live Migration feature - I can move VMs back and forth between my hosts without shared storage. But that is a planned, proactive approach. My concern is to have my Hyper-V hosts become highly available in the event of a system failure. If my host1 dies, the VMs should move to host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but validation says "No disks were found on which to perform cluster validation tests." Is it possible to cluster using just a regular Windows file server? I've read about SMB 3.0 and configured it as well; I'm able to save VMs on my file server, but I don't think my Hyper-V hosts are highly available yet.
Any feedback, suggestions or recommendations are highly appreciated. Thanks in advance!
Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration. The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster) and expensive (a third server + an extra Windows license). I would think twice about what you do: either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server Database Mirroring, for example - BTW, what workload do you run?), or use third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA one, of course).
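If you try the Hyper-V Replica route, note that it is asynchronous replication with planned or manual failover rather than automatic HA, and the replica host must first allow inbound replication (Set-VMReplicationServer). Enabling it looks roughly like this (the VM and host names are placeholders):

```powershell
# Placeholder names - run on the primary Hyper-V host
Enable-VMReplication -VMName "DC01" -ReplicaServerName "host2.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "DC01"
```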
Hi VR38DETT,
Thanks for responding. The hosts will run a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica give "high availability" to the VMs or to the Hyper-V hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it's worth a try. -
How to enable high availability on SQL Server 2005 with Windows Server 2008 Enterprise R2
Dear Folks,
I would like to ask you about this. I'm working in the IT department of a bank in Myanmar. Our bank has up to 96 branches across all of Myanmar, including H.O. We are using Microsoft SQL Server 2005 with Windows Server 2008 for our banking information system. My main problem is having to back up and restore the database backup files every time a branch server goes down, for whatever reason. I want to deploy high availability and failover clustering using Windows Server 2008 and SQL Server 2005. Our branches have 2 servers: one primary and one backup. What I want is for the backup server to become the primary server whenever the primary server goes down. All the working data and databases from the primary should be replicated immediately to the backup server, along with all the IP information of the primary server. Please give me a step-by-step guide for this process.
Try below:
http://blogs.msdn.com/b/cindygross/archive/2009/10/23/checklist-for-installing-sql-server-2005-as-a-clustered-instance.aspx
I recommend you upgrade SQL Server to a newer version, for support as well as flexibility.
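If a failover cluster isn't feasible (it needs shared storage), database mirroring is built into SQL Server 2005 (SP1 and later) and fits the two-servers-per-branch layout. A rough sketch with placeholder database and host names (the mirror database must first be restored WITH NORECOVERY, and mirroring endpoints created on both servers):

```sql
-- Placeholder names; create mirroring endpoints on both servers first
-- On the mirror (backup) server:
ALTER DATABASE BankDB SET PARTNER = 'TCP://primary.branch.local:5022';
-- On the principal (primary) server:
ALTER DATABASE BankDB SET PARTNER = 'TCP://backup.branch.local:5022';
```

Note that mirroring does not move the server's IP address: clients need the Failover Partner option in their connection string (or you handle redirection yourself), and automatic failover requires a third witness instance.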
Regards,
Vishal Patel
Blog: http://vspatel.co.uk
Site: http://lehrity.com -
URL to be used in high availability for Directory Server
Hi All,
I have an environment configured for high availability, with two OVD servers and two OID servers, both configured in high availability. What should be the value of the Server URL field of the Directory Server IT Resource in OIM for this environment? In the normal environment I had it as "ldap://ovdhost01:6501" and it was working fine. But since there are two servers here, I am not sure what URL to use in its place. The entry for the two OVD hosts in the OHS is "idstore.com", which is configured on port 6501. I tried using the following URLs and none of them worked:
1. idstore.com
2. ldap://idstore.com
3. ldap://idstore.com:6501
4. ldap://ovdhost01:6501,ldap://ovdhost01:6501
Can someone help me know the correct URL to be used in this case?
Thanks,
$id
Not sure about OVD or OID, but for SOA and OIM:
SOA:
XMLConfig -> XMLConfig.SOAConfig -> SOAConfig
Rmiurl -> t3://soahost1:soaport1,soahost2:soaport2
Soapurl -> Load balancer or web server url (without the /workflow context)
OIM:
XMLConfig -> XMLConfig.DiscoveryConfig -> Discovery
OimFrontEndUrl -> Load balance or web server url (without the /oim context)
And of course on your LB or web server, you need to configure these:
SOA: http://docs.oracle.com/cd/E23943_01/core.1111/e10106/ha_soa.htm#CHDDJEGD
OIM: http://docs.oracle.com/cd/E21764_01/core.1111/e10106/imha.htm#BGBDFEIE
-Bikash -
SharePoint 2013 High Availability - BI Semantic
Hi all,
Settings
Two Sites connected via tunnel, no issue in networking – Single farm
SQL2012, two DB with mirroring
SharePoint 2013, two servers in the farm, each server is APP & WEB
Issue: this has been designed against full site failure (e.g. losing the connection to the other site, which means losing 1 SQL server and 1 SP server).
When users try to view some reports from the BI page, they can't. We have confirmed that the SQL mirrors are failing over correctly; the issue, we think, is accessing the SharePoint Reporting Services DB, which we have mirrored as well. Any help will be great.
Regards
Okay, if you've validated that you have 1Gbps bandwidth and 1ms latency averaged over 10 minutes, then you're in a supported scenario. How did you configure the SSRS databases to use mirroring within SharePoint? With SSRS databases, you'll need to use PowerShell:
# Find the SSRS database by name (replace "ReportingServiceDbName" with yours)
$database = Get-SPDatabase | where {$_.Name -eq "ReportingServiceDbName"}
# Register the mirror SQL instance as the failover partner for that database
$database.AddFailoverServiceInstance("InstanceName")
$database.Update()
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.