High availability through HSRP
Hi,
One of our customers wants a high-availability solution for their existing WAN link.
I am planning to propose a solution with two 2811 routers and one V.35 data-share device.
However, in their existing network they are using a leased line, so they have only one valid IP address for the WAN connection.
Please suggest the serial port configuration for the second router; on the LAN side I am planning HSRP.
Dear Norat Aheer,
Multilink PPP will allow you to use a single IP address across a WAN made up of two leased-line circuits.
Furthermore, this will provide you with redundancy in case one of the links fails. Load sharing also takes place, and you may see an improvement in your overall link utilization.
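As a rough sketch of such a bundle on each 2811 (the interface numbers and addressing are illustrative assumptions, not taken from this thread), the configuration could look like:

```
! Hypothetical Multilink PPP bundle - one logical WAN interface over two serials.
interface Multilink1
 ip address 203.0.113.1 255.255.255.252   ! the single valid WAN IP lives here
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

If either serial circuit drops, traffic continues over the remaining member link and the Multilink1 address stays reachable.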
see this:
<http://www.cisco.com/univercd/cc/td/doc/product/wanbu/8850px45/rel51/mpsm/mlpppcfg.htm>
hth,
Ajaz Nawaz
Similar Messages
-
Best practice for High availability design, HSRP
Hi,
I am planning to create high availability for LAN-to-WAN connectivity.
But I want to know your opinion about the best way to do this. I googled for a solution/best practice, but in my opinion I didn't find the right answer.
The situation:
I have 2 3945E Routers and 2 3560 switches. The design that I am planning to implement is below.
The main goal is to have a redundant connection even if one of the devices fails. For example, if R1 fails, R2 should become active; if SW1 fails, SW2 will take care of reachability, and vice versa. Router 1 should always be preferred, as long as its link to the ISP isn't down, because of its greater bandwidth. That is why I have drawn 2 connections to 2 separate switches: if SW1 fails, I will still have a connection to the WAN using R1.
The router interfaces should be configured with subinterfaces (preferred over secondary IP addresses on an interface), because more than 10 subnets will be assigned to the LAN segment. The routers have 4 Gi ports.
HSRP must be enabled on the LAN side, because PCs on the LAN must have a redundant default gateway.
So, the question is: what is the best and preferred way to do this?
In my opinion, I should use BVI to combine R1's 2 interfaces into one logical interface, and do the same for R2.
Next, turn the router into an L3 switch using IRB, and then configure HSRP.
What would be your preferred way to do this?
Hi Audrius,
I would suggest you go with HSRP. You would use GLBP where you want load balancing.
I believe the connectivity between your routers (3945) and switches (3560) is gigabit, which is high speed. So keep one physical link from each switch to each router and run HSRP on those router physical interfaces.
This way you will have high availability: if R1 fails, R2 will take over.
Regarding the config, see below what I have for one of my customer DCs.
ACTIVE:
track 1 interface GigabitEthernet0/0 line-protocol
track 2 interface GigabitEthernet0/0 line-protocol
interface GigabitEthernet0/1
ip address 10.10.10.12 255.255.255.0
ip nat inside
ip virtual-reassembly
duplex full
speed 100
standby use-bia scope interface
standby 0 ip 10.10.10.10
standby 0 priority 110
standby 0 preempt
standby 0 authentication peter2mo
standby 0 track 1 decrement 30
standby 0 track 2 decrement 30
STANDBY:
track 1 interface GigabitEthernet0/0 line-protocol
interface GigabitEthernet0/1
ip address 10.10.10.11 255.255.255.0
ip nat inside
ip virtual-reassembly
duplex full
speed 100
standby use-bia scope interface
standby 0 ip 10.10.10.10
standby 0 priority 90
standby 0 authentication peter2mo
standby 0 track 1 decrement 30
Please rate the helpful posts.
Regards,
Naidu. -
11.1.2 High Availability for HSS LCM
Hello All,
Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is made highly available through a load balancer?
My current configuration is two load balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1. And I have edited the oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server2.
grouping.size=100
grouping.size_unknown_artifact_count=10000
grouping.group_by_type=Y
report.enabled=Y
report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
fileSystem.friendlyNames=false
msr.queue.size=200
msr.queue.waittime=60
group.count=10000
double-encoding=true
export.group.count = 30
import.group.count = 10000
filesystem.artifact.path=\\server1\import_export
When I perform an export of just the native users to the file system, the export fails and I find errors in the log stating the following.
[2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
[2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
[2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
I go and look for the path it is searching for, "/server1/import_export/msr/PKG_34.xml", and find that this file does exist; it was in fact created by the export process, so I know it is able to find the correct location, but it then says it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Bug Attributes
Type B - Defect Fixed in Product Version 11.1.2.0.000PSE
Severity 3 - Minimal Loss of Service Product Version 11.1.2.0.00
Status 90 - Closed, Verified by Filer Platform 233 - Microsoft Windows x64 (64-bit)
Created 25-Jan-2011 Platform Version 2003 R2
Updated 24-Feb-2011 Base Bug 11696634
Database Version 2005
Affects Platforms Generic
Product Source Oracle
Related Products
Line - Family -
Area - Product 4482 - Hyperion Lifecycle Management
Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Hyperion Foundation Services is set up for high availability. Lifecycle management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration changes: set up a shared disk, then modified migration.properties to filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode. -
It is not possible to implement a kind of HA between two different appliances, a 3315 and a 3395.
A node in HA can have all 3 personas.
Suppose Node A has Admin (Primary), Monitoring (Primary), and PSN,
and Node B has Admin (Secondary), Monitoring (Secondary), and PSN.
If Node A is unavailable, you will have to manually promote the Admin role on Node B to Primary.
However, the best practice is to have:
Node A: Admin (Primary), Monitoring (Secondary), and PSN
Node B: Admin (Secondary), Monitoring (Primary), and PSN
Rate if helpful and mark as correct if it answered your question.
Regards, -
SQL Server Analysis Services (SSAS) 2012 High Availability Solution in Azure VM
I have been testing an AlwaysOn high availability failover solution in SQL Server Enterprise on an Azure VM, and this works pretty well as a failover for SQL Server databases, but I also need a high availability solution for SQL Server Analysis Services, and so far I haven't found a way to do this. I can load balance it between two machines, but this isn't working as a failover, and because of the restriction of not being able to use shared storage in a failover cluster in Azure VMs, I can't set it up as a cluster, which is required for AlwaysOn in Analysis Services.
Anyone else found a solution to use an AlwaysOn High Availability for SQL Analysis Services in Azure VM? As my databases are read-only, I would be satisfied with even just a solution that would sync the OLAP databases and switch
the data connection to the same server as the SQL databases.
Thanks!
Bill
Bill,
So, what you need is a model like SQL Server failover cluster instances (before SQL Server 2012).
In SQL Server 2012, AlwaysOn replaces SQL Server failover clustering, and it has been separated into
AlwaysOn Failover Cluster Instances (SQL Server) and
AlwaysOn Availability Groups (SQL Server).
Since your requirement is not at the database level, I think the best option is to use AlwaysOn Failover Cluster Instances (SQL Server).
As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level—a
failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across Windows Server Failover Clustering (WSFC) nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL
Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
It is similar to SQL Server failover cluster in SQL 2008 R2 and before.
Please refer to these references:
Failover Clustering in Analysis Services
Installing a SQL Server 2008 R2 Failover Cluster
Iric Wen
TechNet Community Support -
Exchange server 2013 CAS server high availability
Hi
I have Exchange Server 2010 SP3 (2 MBX, 2 Hub/CAS) servers.
I am planning to migrate to Exchange Server 2013 (2 CAS servers and 2 MBX servers).
I don't want all traffic going through a single server, so I am keeping the roles separate.
In Exchange 2010 I achieved Hub/CAS high availability through NLB.
How do I achieve this in Exchange 2013?
Please share your suggestions, with documentation if possible.
Here ya go:
http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx
Load balancing
and
http://technet.microsoft.com/en-us/office/dn756394
Even though it says 2010, it applies to 2013 as well.
Twitter!: Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied. -
Deploying SCOM 2012 R2 on a high availability
I am deploying SCOM 2012 R2 in high availability. My question is whether I need a witness resource disk or a file share witness disk for SQL 2012 high availability. Please also share the steps for making SCOM 2012 R2 highly available with SQL Server 2012 high availability, with screenshots if possible.
Looks like you asked this twice?
I replied on the later thread:
http://social.technet.microsoft.com/Forums/systemcenter/en-US/3b69afd0-db93-436e-84a7-caee7b79a5a6/deploying-scom-2012-r2-on-a-high-availability?forum=operationsmanagerdeployment
John Joyner MVP-SC-CDM -
IPS High Availability Solution
Hi all,
I have a requirement for redundancy for an IPS appliance placed in a data center design. I have dug through the Cisco docs and found that resiliency and HA (High Availability), from the IPS point of view, can be handled on the switch side (HSRP/EtherChannel load balancing).
Is there any viable way to implement the high availability in a dynamic way?
Regards,
Belal
Belal,
You are correct, only one sensor at a time will pass traffic.
Spanning Tree Protocol uses layer 2 frames called BPDUs to determine if a path to the root bridge (in this case VLAN) exists. If the primary sensor stops passing layer 2 frames (a good indication that the rest of your traffic is not going to get through the sensor) then BPDUs will not pass thru the primary sensor and Spanning Tree will unblock the secondary path through the standby sensor. You may want to watch for an SNMP trap from the switch to know when that happens.
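As an illustrative switch-side sketch of this spanning-tree arrangement (the VLAN numbers, interface names, and cost value are assumptions, not from this thread):

```
! Primary sensor bridges VLAN 10 and VLAN 20 inline; the standby sensor
! provides a backup path that STP keeps blocked while BPDUs flow through
! the primary sensor.
interface GigabitEthernet0/1
 description Primary sensor, outside leg
 switchport access vlan 10
!
interface GigabitEthernet0/2
 description Primary sensor, inside leg
 switchport access vlan 20
!
interface GigabitEthernet0/3
 description Standby sensor, outside leg
 switchport access vlan 10
 spanning-tree cost 100   ! higher cost keeps this path blocked normally
!
interface GigabitEthernet0/4
 description Standby sensor, inside leg
 switchport access vlan 20
 spanning-tree cost 100
```

When the primary sensor stops forwarding BPDUs, spanning tree unblocks the higher-cost path through the standby sensor.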
The failover cable is just an ordinary rollover cable between two ports (in the two VLANs) on the switch. I called it a failover cable because it only carries traffic when the sensor has failed to pass layer two (and above) frames. -
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all Microsoft iSCSI target is slow (it's non-cached @ server side). So if you really decided to use dedicated hardware for storage (maybe you do have a reason I don't know...) and if you're fine with your storage being a single
point of failure (OK, maybe your RTOs and RPOs are fair enough) then at least use SMB share. SMB at least does cache I/O on both client and server sides and also you can use Storage Spaces as a back end of it (non-clustered) so read "write back flash cache
for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest
VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Storage Type
Description
Shared virtual hard disk
New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel
Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
Virtual Fibre Channel Overview.
iSCSI
The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI
Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows
Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other folks doing this, such as DataCore (more geared toward Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Error while installing NW 7.4 SR2 in High availability
Hello Guys,
We are installing NW 7.4 SR2 Enterprise Portal with a Sybase database on Windows 2012 R2 servers under high availability, i.e. Microsoft Cluster.
JEP is the system ID of the server, SAPJEP is the virtual host, and X.X.X.36 is the virtual IP for the Central Instance node.
The physical IP of the first node is X.X.X.21.
We started the installation on the first cluster node and got stuck at phase 3.
The error in sapinst.log is below:-
FCO-00011 The step assignNetworknameToClusterGroup with step key
|NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_GetSidFromCluster|ind|ind|ind|ind|getSid|0|assignNetworknameToClusterGroup
was executed with status ERROR ( Last error reported by the step: Could
not set properties on ClusterResource 'SAP JEP IP'.).
The error log from sapinst_dev.log is below:
TRACE 2015-03-18 11:30:29.193
NWInstance.getStartProfilePath(true)
TRACE 2015-03-18 11:30:29.194
NWInstance.getStartProfileName()
TRACE 2015-03-18 11:30:29.194
NWInstance.getStartProfileName() done: JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.194
NWERSInstance.getDirProfile()
TRACE 2015-03-18 11:30:29.194
NW.getDirProfile(false)
TRACE 2015-03-18 11:30:29.195
NW.getDirProfile() done: \\SAPJEP/sapmnt/JEP/SYS/profile
TRACE 2015-03-18 11:30:29.204 [synxcpath.cpp:925]
CSyPath::getOSNodeType(bool) lib=syslib module=syslib
Path \\SAPJEP/sapmnt/JEP/SYS/profile is on a non-existing share.
TRACE 2015-03-18 11:30:29.204
NWERSInstance.getDirProfile() done: \\SAPJEP/sapmnt/JEP/SYS/profile
TRACE 2015-03-18 11:30:29.205
NWInstance.getStartProfilePath() done: \\SAPJEP/sapmnt/JEP/SYS/profile/JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.206
NWInstance.getProfilePath() done: \\SAPJEP/sapmnt/JEP/SYS/profile/JEP_ERS10_SAPJEP
TRACE 2015-03-18 11:30:29.206
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.207
NW.getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.207
NW.isUnicode(true)
TRACE 2015-03-18 11:30:29.208
NW.isUnicode() done: true
TRACE 2015-03-18 11:30:29.208
NW.getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.208
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.208
NWProfiles(JEP.ERS10.SAPJEP)._getPersistentProfile(INSTANCE) done
TRACE 2015-03-18 11:30:29.209
NWProfiles(JEP.ERS10.SAPJEP).getPersistentProfile() done
TRACE 2015-03-18 11:30:29.209
PersistentR3Profile.commit()
TRACE 2015-03-18 11:30:29.209
PersistentR3Profile.commit() done
TRACE 2015-03-18 11:30:29.209
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.210
NW.getPersistentDefaultProfile()
TRACE 2015-03-18 11:30:29.210
NW.isUnicode(true)
TRACE 2015-03-18 11:30:29.211
NW.isUnicode() done: true
TRACE 2015-03-18 11:30:29.211
NW.getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.211
NWProfiles(JEP.ERS10.SAPJEP).getPersistentDefaultProfile() done
TRACE 2015-03-18 11:30:29.211
PersistentR3Profile.commit()
TRACE 2015-03-18 11:30:29.211
PersistentR3Profile.commit() done
TRACE 2015-03-18 11:30:29.212
NWProfiles(JEP.ERS10.SAPJEP).commit() done
TRACE 2015-03-18 11:30:29.212
NWInstanceInstall.addSAPCryptoParametersToProfile() done
TRACE 2015-03-18 11:30:29.260
The step setProfilesForSAPCrypto with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0|NW_SAPCrypto|ind|ind|ind|ind|crypto|0 has been executed successfully.
INFO 2015-03-18 11:30:29.262 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:29.265 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.267 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.270 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:29.280 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.281 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.284 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:29.293 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:29.295 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:29.315 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:29.330 [csistepexecute.cpp:1024]
Execute step askUnpack of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0
TRACE 2015-03-18 11:30:29.339 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0|askUnpack returns TRUE
TRACE 2015-03-18 11:30:29.364
Call block: NW_Unpack2_ind_ind_ind_ind
function: NW_Unpack2_ind_ind_ind_ind_DialogPhase_askUnpack
is validator: false
TRACE 2015-03-18 11:30:29.364
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:29.366
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:29.366
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:29.367
NWOption(collected).value()
TRACE 2015-03-18 11:30:29.367
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:29.369
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:29.370
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:29.371
NW() done
TRACE 2015-03-18 11:30:29.371
NWInstall() done
TRACE 2015-03-18 11:30:29.371
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:29.372
NWInstall.mapUnpackTable(t_askUnpack)
TRACE 2015-03-18 11:30:29.373
NWInstall.mapUnpackTable() done: true
TRACE 2015-03-18 11:30:29.377
NWDB._get(JEP, ind)
TRACE 2015-03-18 11:30:29.377
NWDB._fromTable(JEP, ctor)
TRACE 2015-03-18 11:30:29.379
NWDB(JEP, {
sid:JEP
hostname:undefined
dbsid:undefined
installDB:false
ORACLE_HOME:
abapSchema:
abapSchemaUpdate:
javaSchema:
TRACE 2015-03-18 11:30:29.379
NWDB() done
TRACE 2015-03-18 11:30:29.379
NWDB._fromTable() done
TRACE 2015-03-18 11:30:29.380
NWDB._get() done
TRACE 2015-03-18 11:30:29.380
NWDB.getDBType(): ind
TRACE 2015-03-18 11:30:29.390 [iaxxgenimp.cpp:283]
CGuiEngineImp::showDialogCalledByJs()
showing dlg d_nw_ask_unpack
TRACE 2015-03-18 11:30:29.390 [iaxxgenimp.cpp:293]
CGuiEngineImp::showDialogCalledByJs()
<dialog sid="d_nw_ask_unpack">
<title>Unpack Archives</title>
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value></value>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:29.391 [iaxxgenimp.cpp:1031]
CGuiEngineImp::acceptAnswerForBlockingRequest
Waiting for an answer from GUI
TRACE 2015-03-18 11:30:30.617 [iaxxdlghnd.cpp:96]
CDialogHandler::doHandleDoc()
CDialogHandler: ACTION_NEXT requested
TRACE 2015-03-18 11:30:30.618 [iaxxcdialogdoc.cpp:190]
CDialogDocument::submit()
<dialog sid="d_nw_ask_unpack">
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:30.646
Call block: NW_Unpack2_ind_ind_ind_ind
function: NW_Unpack2_ind_ind_ind_ind_DialogPhase_askUnpackValidator_default
is validator: true
TRACE 2015-03-18 11:30:30.646
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:30.648
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:30.648
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:30.649
NWOption(collected).value()
TRACE 2015-03-18 11:30:30.649
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:30.652
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:30.653
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:30.654
NW() done
TRACE 2015-03-18 11:30:30.654
NWInstall() done
TRACE 2015-03-18 11:30:30.654
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:30.655
NWInstall.validateUnpackTable(t_askUnpack)
TRACE 2015-03-18 11:30:30.657
NWInstall.validateUnpackTable(): validating row {
codepage:Unicode
destination:E:\usr\sap\JEP\SYS\exe\uc\NTAMD64
ownPath:
path:DBINDEP\SAPEXE.SAR
sid:JEP
unpack:true
TRACE 2015-03-18 11:30:30.659 [tablecpp.cpp:136]
Table(t_NW_unpack).updateRow({
cd:UKERNEL
codepage:Unicode
confirmUnpack:
destination:E:\usr\sap\JEP\SYS\exe\uc\NTAMD64
list:
needUnpacking:true
ownPath:
path:DBINDEP\SAPEXE.SAR
selectiveExtract:false
sid:JEP
unpack:true
wasUnpacked:false
}, WHERE sid='JEP' AND path='DBINDEP\SAPEXE.SAR' AND codepage='Unicode' AND destination='E:\usr\sap\JEP\SYS\exe\uc\NTAMD64')
updating
TRACE 2015-03-18 11:30:30.661
NWInstall.validateUnpackTable() done
TRACE 2015-03-18 11:30:30.722 [iaxxgenimp.cpp:301]
CGuiEngineImp::showDialogCalledByJs()
<dialog sid="d_nw_ask_unpack">
<table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table>
<dialog/>
TRACE 2015-03-18 11:30:30.724 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.725 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.728 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.776
The step askUnpack with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_Unpack2|ind|ind|ind|ind|unpack|0 has been executed successfully.
INFO 2015-03-18 11:30:30.778 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:30.780 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.781 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.785 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.802 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.803 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:30.806 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.819 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.842 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:30.908 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:30.909 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:30.963 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:31.139 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:31.142 [csistepexecute.cpp:1024]
Execute step setDisplayNames of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0
TRACE 2015-03-18 11:30:31.177
Call block: NW_FirstClusterNode_ind_ind_ind_ind
function: NW_FirstClusterNode_ind_ind_ind_ind_DialogPhase_setDisplayNames_Preprocess
is validator: false
TRACE 2015-03-18 11:30:31.219 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|setDisplayNames returns TRUE
TRACE 2015-03-18 11:30:31.243
Call block: NW_FirstClusterNode_ind_ind_ind_ind
function: NW_FirstClusterNode_ind_ind_ind_ind_DialogPhase_setDisplayNames
is validator: false
TRACE 2015-03-18 11:30:31.246
NWInstance.byNumberAndRealHost(00, DCEPPRDCI)
TRACE 2015-03-18 11:30:31.246
NWInstance.find()
TRACE 2015-03-18 11:30:31.248
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:31.248
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:31.249
NWOption(collected).value()
TRACE 2015-03-18 11:30:31.249
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:31.252
NWInstance._fromRow({
sid:JEP
name:SCS00
number:00
host:SAPJEP
guid:0
realhost:dcepprdci
type:SCS
installationStatus:installing
startProfileName:JEP_SCS00_SAPJEP
instProfilePath:
dir_profile:
unicode:false
collectSource:
TRACE 2015-03-18 11:30:31.252
NWSCSInstance()
TRACE 2015-03-18 11:30:31.252
NWInstance(JEP/SCS00/SAPJEP)
TRACE 2015-03-18 11:30:31.252
NWInstance(JEP/SCS00/SAPJEP) done
TRACE 2015-03-18 11:30:31.253
NWInstanceInstall()
TRACE 2015-03-18 11:30:31.253
NWInstanceInstall() done
TRACE 2015-03-18 11:30:31.253
NWInstall.getSystem(JEP)
TRACE 2015-03-18 11:30:31.253
NWOption(localExeDir).value()
TRACE 2015-03-18 11:30:31.254
NWOption(localExeDir).value() done: true
TRACE 2015-03-18 11:30:31.254
NWOption(collected).value()
TRACE 2015-03-18 11:30:31.254
NWOption(collected).value() done: true
TRACE 2015-03-18 11:30:31.256
NWInstall({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:31.257
NW({
sid:JEP
sapDrive:E:
dbtype:ind
sapmnt:/sapmnt
hasABAP:false
hasJava:true
isAddinInstallation:undefined
unicode:true
workplace:undefined
loadType:SAP
umeConfiguration:
doMigMonConfig:
useParallelAbapSystemCopy:
dir_profile:\\SAPJEP\sapmnt\JEP\SYS\profile
os4_krnlib:
os4_seclanglib:
useCurrentUser:false
noShares:false
TRACE 2015-03-18 11:30:31.258
NW() done
TRACE 2015-03-18 11:30:31.258
NWInstall() done
TRACE 2015-03-18 11:30:31.258
NWInstall.getSystem() done
TRACE 2015-03-18 11:30:31.258
NWSCSInstance() done
TRACE 2015-03-18 11:30:31.259
NWInstance._fromRow() done
TRACE 2015-03-18 11:30:31.259
NWInstance._fromRow({
sid:JEP
name:ERS10
number:10
host:SAPJEP
guid:1
realhost:dcepprdci
type:ERS
installationStatus:installing
startProfileName:JEP_ERS10_SAPJEP
instProfilePath:
dir_profile:
unicode:false
collectSource:
TRACE 2015-03-18 11:30:31.259
NWERSInstance()
TRACE 2015-03-18 11:30:31.260
NWInstance(JEP/ERS10/SAPJEP)
TRACE 2015-03-18 11:30:31.260
NWInstance(JEP/ERS10/SAPJEP) done
TRACE 2015-03-18 11:30:31.260
NWInstanceInstall()
TRACE 2015-03-18 11:30:31.260
NWInstanceInstall() done
TRACE 2015-03-18 11:30:31.261
NWERSInstance() done
TRACE 2015-03-18 11:30:31.261
NWInstance._fromRow() done
TRACE 2015-03-18 11:30:31.261
NWInstance.find() done: 1 instances found
TRACE 2015-03-18 11:30:31.261
t_scs.setDisplayName([nw.progress.installInstance, SCS00], WHERE ROWNUM=0)
TRACE 2015-03-18 11:30:31.305
The step setDisplayNames with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0 has been executed successfully.
INFO 2015-03-18 11:30:31.307 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:31.362
Build Client for subcomponent |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_FinishFirstClusterNode|ind|ind|ind|ind|finishFirst|0
TRACE 2015-03-18 11:30:31.364
TRACE 2015-03-18 11:30:31.364 [ccdclient.cpp:22]
CCdClient::CCdClient()
TRACE 2015-03-18 11:30:31.385 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.387 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\inifile.xml
TRACE 2015-03-18 11:30:31.390 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:31.406 [cpropertycontentmanager.hpp:220]
CPropertyContentManager::logMissingParameters()
The following parameters can be set via SAPINST_PARAMETER_CONTAINER_URL (inifile.xml) but not via SAPINST_INPUT_PARAMETERS_URL:
Component 'NW_ERS_Instance': ersInstanceNumber, restartSCS
Component 'NW_GetDomainOU': isOUInstallation, ou_delimiter, windows_domain_ous
Component 'NW_GetSidFromCluster': clusterGroup, isUnicode, localDrive, network, networkName, sharedDrive, sid, useDHCP
Component 'NW_GetUserParameterWindows': sapDomain, sapServiceSIDPassword, sidAdmPassword
Component 'NW_SAPCrypto': installSAPCrypto
Component 'NW_SCS_Instance': instanceNumber, scsMSPort, scsMSPortInternal
Component 'Preinstall': installationMode
INFO 2015-03-18 11:30:31.802 [synxcpath.cpp:799]
CSyPath::createFile() lib=syslib module=syslib
Creating file C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\summary.html.
TRACE 2015-03-18 11:30:31.828 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.852 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:31.916 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:31.917 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:31.971 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:32.152 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:32.154 [sixxbsummary.cpp:320]
CSiSummaryContainer::showDialog()
<?xml version="1.0" encoding="UTF-8"?><sapinstgui version="1.0"><dialog version="1.0" sid="diReviewDialog"> <title>Parameter Summary</title><description>Choose 'Next' to start with the values shown. Otherwise, select the parameters to be changed and choose 'Revise'. You are then taken to the screen where you can change the parameter. You might be guided through other screens that have so far been processed.</description> <review id="review_controll"><caption><![CDATA[Parameter list]]></caption><inputgroup id="ID_1"> <caption><![CDATA[Drive for Local Instances]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_2"><pulldown edit="false" enabled="true" highlight="false" sid="localDrive">
<caption>Destination Drive for Local Instances</caption>
<helpitem id="mscs.SelectLocalDrive"/>
<value>C:</value>
<value>D:</value>
<selectvalue>D:</selectvalue>
</pulldown></input></inputgroup><inputgroup id="ID_3"> <caption><![CDATA[SAP System Cluster Parameters]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_4"><field enabled="true" highlight="false" sid="sid">
<caption>SAP System ID (SAPSID)</caption>
<helpitem id="common.SAPSystemIDCIDB"/>
<value type="upper" maxlength="3" regexp="^[A-Z][A-Z0-9]{2}" minlength="3">JEP</value>
</field></input><input id="ID_5"><field enabled="true" highlight="false" sid="networkName">
<caption>Network Name (SAP Virtual Instance Host)</caption>
<helpitem id="common.VirtualInstanceHostWindowsMSCS"/>
<value type="string" maxlength="13" minlength="1">SAPJEP</value>
</field></input><input id="ID_6"><pulldown edit="false" enabled="true" highlight="false" sid="network">
<caption>Use Public Network</caption>
<helpitem id="mscs.UsePublicNetwork"/>
<value>Cluster Network 1</value>
<selectvalue>Cluster Network 1</selectvalue>
</pulldown></input><input id="ID_7"><pulldown edit="false" enabled="true" highlight="false" sid="sharedDrive">
<caption>Destination Drive for Clustered Instances</caption>
<helpitem id="mscs.SelectSharedDrive"/>
<value>BACKUP</value>
<value>SAP_USR_SID_CI_NODE</value>
<value>SAPDATA1</value>
<value>SAPDATA2</value>
<value>SAPDATA3</value>
<value>SAPDATA4</value>
<value>SAPLOG1</value>
<value>SAPLOG2</value>
<value>SAPTEMP_SAPDIAG_SAPSYBSYSTEM</value>
<value>SYBASE</value>
<selectvalue>SAP_USR_SID_CI_NODE</selectvalue>
</pulldown></input></inputgroup><inputgroup id="ID_8"> <caption><![CDATA[DNS Domain Name]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_9"><check enabled="true" highlight="false" sid="setFQDN">
<caption>Set FQDN for SAP system</caption>
<helpitem id="common.FullyQualifiedDomainNameSelect"/>
<boolvalue>
<true/>
</boolvalue>
</check></input><input id="ID_10"><field enabled="true" highlight="false" sid="FQDN" depends="setFQDN">
<caption>DNS Domain Name for SAP System</caption>
<helpitem id="common.FullyQualifiedDomainName"/>
<value type="string" regexp="^[^.]+.*$" minlength="1">jnport.com</value>
</field></input></inputgroup><inputgroup id="ID_11"> <caption><![CDATA[Windows Domain]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_12"><radiobox enabled="true" highlight="false" sid="domainType">
<caption>Domain Model</caption>
<helpitem id="common.DomainModelDomainLocal"/>
<radio enabled="true" sid="global">
<caption>Domain of Current User</caption>
<boolvalue>
<true/>
</boolvalue>
</radio>
<radio enabled="true" sid="other">
<caption>Different Domain</caption>
</radio>
</radiobox></input></inputgroup><inputgroup id="ID_13"> <caption><![CDATA[Operating System Users]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_14"><password confirm="true" enabled="true" highlight="false" sid="sidAdmPassword">
<caption>Password of SAP System Administrator</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input><input id="ID_15"><password confirm="true" enabled="true" highlight="false" sid="sapServiceSIDPassword">
<caption>Password of SAP System Service User</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input></inputgroup><inputgroup id="ID_16"> <caption><![CDATA[Windows Domain for SAP Host Agent]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_17"><radiobox enabled="true" highlight="false" sid="domainType">
<caption>Domain Model</caption>
<helpitem id="common.DomainModelDomainLocal"/>
<radio enabled="true" sid="local">
<caption>Local Domain</caption>
<boolvalue>
<true/>
</boolvalue>
</radio>
<radio enabled="true" sid="global">
<caption>Domain of Current User</caption>
</radio>
<radio enabled="true" sid="other">
<caption>Different Domain</caption>
</radio>
</radiobox></input></inputgroup><inputgroup id="ID_18"> <caption><![CDATA[Operating System Users]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_19"><password confirm="true" enabled="true" highlight="false" sid="sidAdmPassword">
<caption>Password of SAP System Administrator</caption>
<helpitem id="common.PasswordCreateExistsOS"/>
<encrvalue maxlength="63">*****</encrvalue>
</password></input></inputgroup><inputgroup id="ID_20"> <caption><![CDATA[SCS Instance]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_21"><field enabled="true" highlight="false" sid="instanceNumber">
<caption>SCS Instance Number</caption>
<helpitem id="common.InstanceNumberCreate"/>
<value type="string" maxlength="2" regexp="[0-9]{2}" minlength="2">00</value>
</field></input></inputgroup><inputgroup id="ID_22"> <caption><![CDATA[Java Message Server Port]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_23"><field enabled="true" highlight="false" sid="scsMSPortInternal">
<caption>Internal Java Message Server Port</caption>
<helpitem id="common.InternalMessagePort"/>
<value type="numeric" maxlength="5" min="1025" max="65535">3900</value>
</field></input></inputgroup><inputgroup id="ID_24"> <caption><![CDATA[Enqueue Replication Server Instance]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_25"><field enabled="true" highlight="false" sid="ersInstanceNumber">
<caption>Number of the ERS Instance</caption>
<helpitem id="common.InstanceNumberCreate"/>
<value type="numeric" maxlength="2" max="99" minlength="2">10</value>
</field></input><input id="ID_26"><field enabled="true" highlight="false" sid="ersVirtualHostname">
<caption>ERS Instance - Virtual Host Name</caption>
<helpitem id="common.virtualHostname"/>
<value type="string" maxlength="60" minlength="1">SAPJEP</value>
</field></input></inputgroup><inputgroup id="ID_27"> <caption><![CDATA[Unpack Archives]]></caption><selector enabled="true"><bool><boolvalue><false/></boolvalue></bool></selector><input id="ID_28"><table enabled="true" fixedrows="true" sid="askUnpack">
<caption>Archives to Be Unpacked</caption>
<column enabled="true" type="check" numeric="false" upper="false" name="unpack">
<caption>Unpack</caption>
<helpitem id="common.Unpack"/>
<value type="string">false</value>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="path">
<caption>Archive</caption>
<helpitem id="common.UnpackArchive"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="codepage">
<caption>Codepage</caption>
<helpitem id="common.UnpackArchiveCodepage"/>
</column>
<column enabled="false" type="field" numeric="false" upper="false" name="destination">
<caption>Destination</caption>
<helpitem id="common.UnpackArchiveDestination"/>
</column>
<column enabled="true" type="file" numeric="false" upper="false" name="ownPath">
<caption>Downloaded To</caption>
<helpitem id="common.UnpackArchiveDownload"/>
<value type="string"/>
</column>
<row rowId="0">
<boolvalue>
<true/>
</boolvalue>
<value>DBINDEP\SAPEXE.SAR</value>
<value>Unicode</value>
<value>E:\usr\sap\JEP\SYS\exe\uc\NTAMD64</value>
<value/>
</row>
</table></input></inputgroup></review><button sid="btPREV" default="false" enabled="false" ><caption>< &Back</caption><action>ACTION_PREV</action><tooltiphelp><![CDATA[]]></tooltiphelp></button><button sid="btNEXT" default="true"><caption>&Next</caption><action>ACTION_NEXT</action><tooltiphelp><![CDATA[Continue processing]]></tooltiphelp></button><button sid="btEDIT"><caption>&Revise</caption><action>ACTION_EDIT</action><tooltiphelp><![CDATA[Edit input values]]></tooltiphelp></button><button sid="btDETAIL" default="true"><caption>Show &Detail</caption><action>ACTION_CBD_DETAIL</action><tooltiphelp><![CDATA[Show all input values]]></tooltiphelp></button></dialog></sapinstgui>
TRACE 2015-03-18 11:30:32.161 [iaxxgenimp.cpp:1031]
CGuiEngineImp::acceptAnswerForBlockingRequest
Waiting for an answer from GUI
TRACE 2015-03-18 11:30:33.330 [sixxbsummary.cpp:83]
CSiSummaryContainer::setDocument(const iXMLDocument & doc)
<?xml version="1.0" encoding="utf-8"?>
<sapinstguiresp>
<action>ACTION_NEXT</action>
<dialog sid="diReviewDialog">
<review id="review_controll"/>
</dialog>
</sapinstguiresp>
TRACE 2015-03-18 11:30:33.331 [iaxxdlghnd.cpp:96]
CDialogHandler::doHandleDoc()
CDialogHandler: ACTION_NEXT requested
TRACE 2015-03-18 11:30:33.331 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.356 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:33.375 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.375 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.429 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:33.500 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.773
Status of Step |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0|ntpatch2 is not OK. So we have to restart at this point.
TRACE 2015-03-18 11:30:33.773
Switch to STANDARD mode
TRACE 2015-03-18 11:30:33.773 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.801 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:33.866 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:33.868 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:33.926 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:34.67 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:34.67 [csistepexecute.cpp:1024]
Execute step ntpatch2 of component |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0
TRACE 2015-03-18 11:30:34.084
Call block: NW_Update_DLLs_ind_ind_ind_ind
function: NW_Update_DLLs_ind_ind_ind_ind_SubComponentContainer_ntpatch2_Preprocess
is validator: false
TRACE 2015-03-18 11:30:34.115 [csistepexecute.cpp:1079]
Execution of preprocess block of |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0|ntpatch2 returns TRUE
TRACE 2015-03-18 11:30:34.127
Call block: NW_Update_DLLs_ind_ind_ind_ind
function: NW_Update_DLLs_ind_ind_ind_ind_SubComponentContainer_ntpatch2
is validator: false
TRACE 2015-03-18 11:30:34.127
NW_Update_DLLs.installDLLs(false)
TRACE 2015-03-18 11:30:34.127
NW_Update_DLLs.installDLLs() already run
TRACE 2015-03-18 11:30:34.153
The step ntpatch2 with key |NW_FirstClusterNode_Java|ind|ind|ind|ind|0|0|NW_FirstClusterNode|ind|ind|ind|ind|NW_FirstClusterNode|0|NW_First_Steps|ind|ind|ind|ind|firstSteps|0|NW_Update_DLLs|ind|ind|ind|ind|dll|0 has been executed successfully.
INFO 2015-03-18 11:30:34.154 [synxccuren.cpp:887]
CSyCurrentProcessEnvironmentImpl::setWorkingDirectory(const CSyPath & C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1)
lib=syslib module=syslib
Working directory changed to C:/Program Files/sapinst_instdir/NW740SR2/SYB/INSTALL/HA/JAVA/MSCS-NODE1.
TRACE 2015-03-18 11:30:34.187 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:34.200 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\keydb.xml
TRACE 2015-03-18 11:30:34.255 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
TRACE 2015-03-18 11:30:34.256 [kdxxctaco.cpp:93]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile start ...
TRACE 2015-03-18 11:30:34.283 [kdxxctaco.cpp:121]
CKdbTableContainerImpl::syncToContainerFile
after creating out stream for C:\Program Files\sapinst_instdir\NW740SR2\SYB\INSTALL\HA\JAVA\MSCS-NODE1\statistic.xml
TRACE 2015-03-18 11:30:34.425 [kdxxctaco.cpp:155]
CKdbTableContainerImpl::syncToContainerFile
CKdbTableContainerImpl::syncToContainerFile stop ...
INFO 2015-03-18 11:30:34.426 [csistepexecute.cpp:1024]
Execute s
Hello Arjun,
Have you executed any other installation prior to this attempt?
It seems there are unfinished installations on this server. If this is the case, run SWPM to uninstall the unfinished installation, then execute the installation once again.
Also, make sure all IPs are correct in the DNS and hosts files. There have been cases in which the IPs were not properly configured, or there was a typo.
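As a quick illustration of that name-resolution check, a small script can confirm that each hostname resolves to the address you expect (the hostnames below are placeholders — substitute the ones from your own hosts file, e.g. the SAP virtual instance host and the physical host):

```python
import socket

# Placeholder hostnames -- replace with the ones your installation uses
# (for example the virtual instance host and the physical cluster node).
hosts = ["localhost"]

for name in hosts:
    try:
        # gethostbyname returns the IPv4 address the resolver hands back,
        # which is what sapinst will also see.
        ip = socket.gethostbyname(name)
        print(f"{name} resolves to {ip}")
    except socket.gaierror as err:
        print(f"{name} does NOT resolve: {err}")
```

If a name fails to resolve, or resolves to an address different from the one in your hosts file, fix that before re-running the installation.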
Regards,
Henrique Girardi
SAP Active Global Support
#2081285 - How to enter good search terms to an SAP search?
https://service.sap.com/sap/support/notes/2081285
SWPM Troubleshooting documents:
http://scn.sap.com/docs/DOC-62646 -
File Adapter vs BPEL interaction issue on high availability environment
Hi all,
I would really appreciate your help with an issue I'm facing with a composite (SCA) deployed on a clustered environment configured for high availability. To help you understand the issue, I'll briefly describe what my composite does. The composite's instances are started by an inbound File Adapter which periodically polls a directory to check whether any file with a well-defined naming convention is available. The adapter is not meant to read the file content but only its properties. Furthermore, the adapter automatically makes a backup copy of the file and doesn't delete it. The properties read by the adapter are provided to a BPEL process, which obtains them using the various "jca.file.xyz" properties (configurable in any BPEL receive activity) and stores them in some of its process variables. How the BPEL process uses these properties is irrelevant to the issue I'd like to bring to your attention.
The interaction just described between the File Adapter and the BPEL process has always worked in other, non-HA environments. The problem I'm facing is that this interaction stops working when I deploy the composite in a clustered environment configured for high availability: the File Adapter succeeds in reading the file, but no BPEL process instance gets started and the composite instance gets stuck (that is, it keeps running until you manually abort it!).
Interestingly, if I put a Mediator between the File Adapter and the BPEL process, the Mediator instance gets started - that is, the file's properties read by the adapter are passed to the Mediator - but then the composite gets stuck again, because even the Mediator doesn't seem to be able to initiate the BPEL process instance.
I think the problem lies in how I configured either the SOA infrastructure for HA, or the File Adapter or BPEL process in my composite. To configure the adapter, I followed the instructions given here:
http://docs.oracle.com/cd/E14571_01/integration.1111/e10231/adptr_file.htm#BABCBIAH
but maybe I missed something. I didn't find anything about BPEL configuration for HA with SOA Suite 11g (all the material I found refers to SOA Suite 10g).
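For reference, the HA setup in the adapter guide linked above essentially amounts to pointing the inbound .jca file at the database-backed connection factory so polling is coordinated across cluster nodes. A sketch of what that change looks like (the adapter-config name, port type, and directory values are placeholders, not my actual files, and property placement may differ between versions — verify against the guide):

```xml
<!-- PollFiles_file.jca (illustrative sketch): reference the DB-backed
     eis/HAFileAdapter connection factory instead of the default
     eis/FileAdapter, so that inbound polling on the different cluster
     nodes is coordinated through the database -->
<adapter-config name="PollFiles" adapter="File Adapter"
    xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/HAFileAdapter"/>
  <endpoint-activation portType="Read_ptt" operation="Read">
    <activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
      <!-- placeholder values: the polled directory must be shared
           storage visible to every node of the cluster -->
      <property name="PhysicalDirectory" value="/shared/incoming"/>
      <property name="IncludeFiles" value=".*\.txt"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
```

The eis/HAFileAdapter connection factory itself is configured on the application server (e.g. via the admin console), which is where the database coordination is set up.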
I've also read in some posts that, in order to use the DB as a coordinator between the file adapters deployed on the different nodes of the cluster, the DB must be a RAC! Is that true, or is it possible to use another type of Oracle DB?
Please let me know if any of you have already encountered (and solved :)) a problem like this!
Thanks in advance,
Bye!
Hi,
thanks for your prompt reply. Anyway, I had already read through that documentation and tried all the settings suggested in it, without any luck! I'm thinking the problem could be related to the Oracle DB used in the clustered environment, which is not RAC, while all the documentation I've read about high availability configuration always refers to a RAC DB. Does anyone know if a RAC Oracle DB is strictly required for file adapter configuration in an HA cluster?
Thanks, bye!
Fabio -
Windows 2012 RDS - Session Host servers High Availability
Hello Windows/Terminal server Champs,
I am in the middle of implementing an RDS environment for one of my customers. I hope you can help me out.
My customer has asked for HA for the RDS session hosts where applications are published, and I have prepared the plan below from a server point of view.
2 Session Host servers, 1 Web Access, 1 License/Connection Broker & 1 Gateway (DMZ).
In the first phase, we are planning to target internal users who connect to the Session Host HA pair; these 2 servers will have the applications installed, and internal users will use RDP to access them.
In the second phase we will deal with external parties who connect from an external network, where we are planning to integrate with NetIQ => Gateway => Web Access/Session Host.
I have successfully installed and configured 2 Session Hosts, 1 License/Broker, 1 Web Access & 1 Gateway. But my main concern is making the Session Hosts highly available, as they host the applications and most of the internal users are going to use them. To configure this I am following http://technet.microsoft.com/en-us/library/cc753891.aspx
However, most of the architecture has changed in RDS 2012. Can you please help me set up Session Host HA?
Note: we can have only 1 Connection Broker/Licensing server, 1 Web Access server & 1 Gateway server; we cannot add more servers due to cost factors.
Thanks in advance.
Yes, there's absolutely no problem in using just one Connection Broker in your environment, as long as your customer understands that it is a single point of failure (SPOF).
The Session Hosts, however, aren't really what you would class as HA. To set them up so you have redundancy, you would use either Windows NLB, an external NLB device, or Windows DNS round robin. My preferred option when using the Connection Broker is DNS round robin, where you give each server in the farm the same farm-name DNS entry; the Connection Broker then decides which server to allocate the session to.
You must ensure your Session Host servers are identical in terms of software, though: the same software installed in the same paths on all the Session Host servers.
If you use the 2012 deployment wizard through Server Manager roles, the majority of the configuration is done for you.
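As a rough picture of the broker's job: DNS round robin only gets the client to the farm, and the broker then picks the actual target. A toy sketch of one plausible allocation policy (fewest active sessions wins; the host names and counts are hypothetical, and the real broker also weighs server weight, drain mode and existing disconnected sessions):

```python
def allocate_session(session_counts):
    """Pick the session host with the fewest active sessions.
    session_counts maps host name -> current session count."""
    return min(session_counts, key=session_counts.get)

hosts = {"SH1": 12, "SH2": 7}    # hypothetical farm members
target = allocate_session(hosts)
print(target)                    # SH2
hosts[target] += 1               # the new session lands on the chosen host
```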
Regards,
Denis Cooper
MCITP EA - MCT
Help keep the forums tidy, if this has helped please mark it as an answer
My Blog
LinkedIn: -
Load balancing and High Availability topology
Our Forms 6i client-server application currently runs on a Citrix farm of 20 Windows 2000 boxes (IBM blade servers with 2 CPUs and 2 GB of memory each).
The application supports 2000 users.
We are moving to AS 10g R2 and Forms 10g, and the goal is to use the same hardware, 20 Windows boxes (or fewer), for intranet web deployment.
What will be our best choices for application Load balancing and High Availability?
A hardware load balancer, Web Cache, mod_oc4j? Combinations?
Any suggestions, best practices, or experiences?
Gerd, I understand that you are running 10g web forms through the browser but using Citrix for deployment. This means that in addition to the Application Server and Forms runtime sessions, a separate browser session is opened for each user. What is the advantage of this configuration?
Michael, we are aware that Citrix is not supported by Oracle as a deployment platform. That only means that before contacting Oracle Support we have to reproduce the problem in a standard environment. Reproducing a problem has never been a problem :) We used Citrix as a deployment platform for Forms 6i client/server for 4 years, but now we are forced to upgrade to 10g.
We are familiar with the various load balancing options available. The question is which option is the most "workable" in our case. -
User experience question for High Availability
Hi experts,
Not sure whether this is the right forum to post this message.
If anybody has already captured the user experience for a failover environment, I would appreciate their help on this. I will give an overview of the environment: we are using PowerHA for ASCS/SCS failover and Oracle Data Guard for database failover.
I am trying to capture the user experience for the failover environment when:
ASCS/SCS fails,
an app server fails (we use an F5 load balancer which should reroute, unless all app servers fail), or
the database fails,
with the following cases:
1. User logged in, NO ACTIVITY. I believe there is NO IMPACT whether ASCS/SCS fails, the DB fails, or an app server fails.
2. User logged in and running a transaction before failover.
What will happen in case of ASCS/SCS failover?
What will happen in case of DB failover?
And what will happen in case of an app server failure (no high availability, only redundant app servers; a maximum of 4 app servers for each component)?
3. User logged in and running a transaction during failover.
What will happen in case of ASCS/SCS failover?
What will happen in case of DB failover?
And what will happen in case of an app server failure (no high availability, only redundant app servers; a maximum of 4 app servers for each component)?
I'm not sure which of these are even possible. In some cases I think the system will hang and need a refresh; in some cases there will be an hourglass and the session will come back once failover completes; and in some cases the session will be closed completely.
Thanks for your time, and God bless the knowledge you have.
Saroj
I'll just try to answer as much as I can (guess):
> 1. User logged in, NO ACTIVITY. I believe NO IMPACT either ASCS/SCS fail or DB fail or App Server fail.
If the DB or SCS fails, there won't be any impact on the end user, but if an app server fails, the user session will be lost and the user will see a pop-up error message in their SAP GUI.
> 2. User logged in and run a transaction before failover.
> What will happen in case of ASCS / SCS
The user won't be affected during failover if the user is doing nothing but idling (the enqueue replication server is working before failover).
> What will happen in case of DB failover
The app server won't be able to do much; its work processes go into reconnecting status. They should resume (reconnect to the DB) when failover is completed, so the user should be able to continue their sessions.
> and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
User sessions on the failed app server will be lost. However, the user should be able to log on again if
1) they log on via a logon group, and
2) within the group there is at least one app server alive.
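The "reconnecting status" described above for DB failover is, in essence, a retry loop: the work process keeps attempting to reconnect until the standby database is reachable, then resumes. A sketch of that pattern (a hypothetical connect callable, not actual SAP kernel code):

```python
import time

def reconnect(connect, max_attempts=10, delay=0.01):
    """Retry a connect callable until it succeeds or attempts run out."""
    for _ in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(delay)  # wait before the next attempt
    raise ConnectionError("database still unreachable after failover window")

# Simulate a standby DB that becomes reachable on the third attempt.
state = {"tries": 0}
def fake_connect():
    state["tries"] += 1
    if state["tries"] < 3:
        raise ConnectionError
    return "connected"

print(reconnect(fake_connect))  # connected
```

From the user's perspective this is the hourglass period: the session is held while the loop runs, and work continues once the reconnect succeeds.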
> 3. User logged in and run a transaction during failover.
The session may hang, or:
> What will happen in case of ASCS / SCS
If the transaction is using the enqueue service, for example, then the user will get an error message; otherwise the user won't be affected, e.g. if the user is just searching a list of orders. The user won't be able to log on via a logon group during the failover.
You should also be prepared for users connected through the message server, e.g. HTTP requests dispatched through the message server directly or via the Web Dispatcher; those users won't be able to connect during the failover.
> What will happen in case of DB failover
The user will get an error message and the transaction will be aborted.
> and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
Very similar to case 2. -
Windows Event Collector - Built-in options for load balancing and high availability ?
Hello,
I have a working collector. The config is source-initiated and pushed by GPO.
I would like to deploy a second collector for high availability and load balancing. What are the available options? I have not found any guidance in the TechNet articles.
As a low-cost option, is it fine to simply start using DNS round robin, with a common alias for both servers pushed as the collector name through GPO?
In my GPO policy, if I declare both servers individually, events are forwarded twice, once to each server. That does indeed cover high availability, but it is not really optimized.
Thanks for your help.
Hi,
>>As a low cost option, is it fine to simply start using DNS round-robin with a common alias for both servers pushed as a collector name through GPO ?
Based on the description, we can utilize DNS round robin to distribute workloads and increase fault tolerance. By default, DNS uses round robin to rotate the order of RR data returned in query answers where multiple RRs of the same type exist for a queried DNS domain name. This feature provides a simple method of load balancing client use of web servers and other frequently queried multihomed computers. Also by default, DNS performs round-robin rotation for all RR types.
Regarding DNS round robin, the following article can be referred to for more information.
Configuring round robin
http://technet.microsoft.com/en-us/library/cc787484(v=ws.10).aspx
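The rotation behaviour described above can be pictured in a few lines: where multiple A records exist for one name, each successive answer returns the record set rotated by one position, so clients that take the first address end up spread across the servers. A toy model (hypothetical collector IPs, not the actual Windows DNS server code):

```python
from collections import deque

class RoundRobinName:
    """Toy model of DNS round robin: successive queries for a name with
    multiple A records receive the record list in rotated order."""
    def __init__(self, records):
        self._records = deque(records)

    def query(self):
        answer = list(self._records)
        self._records.rotate(-1)  # rotate so the next client sees a new order
        return answer

collectors = RoundRobinName(["10.0.0.11", "10.0.0.12"])
print(collectors.query())  # ['10.0.0.11', '10.0.0.12']
print(collectors.query())  # ['10.0.0.12', '10.0.0.11']
```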
TechNet Subscriber Support
If you are a TechNet Subscription user and have any feedback on our support quality, please send us your feedback here.
Best regards,
Frank Shen -
Dear Friends,
I have installed PI 7.1 in high availability using a Microsoft Windows MNS-based cluster, initially at patch level 04 of NetWeaver 7.1.
I have installed the SCS, ASCS, Enqueue Replication Services, etc. in cluster mode. I have installed the primary application server on Node 2 and an additional application server on Node 1.
Now I have installed SP07 using JSPM from my primary application server, i.e. Node 1. It went through successfully and my Support Package level was upgraded to SP07. But now when I open the Java stack on Node 1, it shows only the home page at the URL http://10.6.4.178:50000
After that, if I open any link, e.g. UME or NWA, it does not open and throws the following error:
Service cannot be reached
What has happened?
URL http://10.6.4.178:50000/wsnavigator call was terminated because the corresponding service is not available.
Note
The termination occurred in system PIP with error code 404 and for the reason Not found.
The selected virtual host was 0 .
What can I do?
Please select a valid URL.
If it is a valid URL, check whether service /wsnavigator is active in transaction SICF.
If you do not yet have a user ID, contact your system administrator.
ErrorCode:ICF-NF-http-c:000-u:SAPSYS-l:E-i:sappicl1_PIP_00-v:0-s:404-r:Notfound
HTTP 404 - Not found
Your SAP Internet Communication Framework Team
Java starts properly on both nodes without any error, and everything works fine on Node 2, i.e. 10.6.4.179.
Please advise! Do we need to do any special steps when installing patches through JSPM in a high availability setup?
Jitendra Tayal
Hi Jitendra,
While doing a patch upgrade with JSPM in an HA scenario, you need to switch off the automatic switchover option, because once the components are deployed, the J2EE engine gets restarted. If automatic failover is active, the service gets switched over to the other node.
In that case, shut down all the instances and stop the cluster.
Restart the systems. Start the cluster on the original servers.
Restart JSPM.
It should then work fine.
Let me know if it is helpful.
Best Regards
Raghu