Monitor High Availability on other SLES servers
I've got the Monitor working fine with GWHA on the local SLES server, but it will not work for other GroupWise servers.
Configuration is like this:
Server-1;
- MTA, POA, GWMonitor
- created an HA user and configured grpwise-ma
- GWHA configured with xinetd
- agents will auto-reload after bringing down :)
Server-2;
- MTA, GWIA, GWInter, WebAccess
- created the same HA user as on Server-1
- GWHA configured with xinetd
- agents will NOT auto-reload after bringing down :(
- I do see entries in the /var/log/xinetd.log
Anyone done this before? I would guess so...
Is there something I'm missing here?
Regards,
Patrick
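Since GWHA runs under xinetd on both boxes, it is worth diffing the two servers' service definitions. Below is a sketch of what /etc/xinetd.d/gwha typically looks like on a default Linux GroupWise install; the paths and values here are assumptions, so check them against your own files:

```
service gwha
{
    # GWHA listens on TCP 8400 for start/stop commands from the Monitor agent
    port        = 8400
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    # server path is an assumption for a default install -- verify locally
    server      = /opt/novell/groupwise/agents/bin/gwha
    disable     = no
}
```

If the file on Server-2 differs from Server-1 anywhere, that difference is the first thing to chase.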
Have you checked your firewall log to make sure you can accept incoming connections on port 8400? GWHA is essentially a web server that waits for commands from the monitor server.
You can test it: go to your monitor server and do:
'telnet <target server IP> 8400'
and see if you get a connection. It should ask you for a username.
Since default installs of SLES with the firewall enabled do not allow high ports, 8400 is probably blocked.
Check /var/log/firewall for dropped connections:
'cat /var/log/firewall | grep 8400'
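The telnet check can also be scripted from the Monitor box using bash's built-in /dev/tcp support (available on SLES). This is a quick sketch; 127.0.0.1 is a placeholder for your Server-2 address, and the SuSEfirewall2 hint in the comment is the usual fix on a default SLES install:

```shell
#!/bin/bash
# Probe a TCP port using bash's /dev/tcp redirection -- no telnet needed.
check_port() {
    # exit 0 if a TCP connection to host $1, port $2 can be opened
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# 127.0.0.1 is a placeholder -- substitute the target server's IP.
if check_port 127.0.0.1 8400; then
    echo "GWHA port 8400 reachable"
else
    # On SLES, add 8400 to FW_SERVICES_EXT_TCP in
    # /etc/sysconfig/SuSEfirewall2 and restart the firewall.
    echo "GWHA port 8400 blocked or closed"
fi
```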
Similar Messages
-
User experience question for High Availability
Hi Experts,
Not sure whether this is the right forum to post this message.
If anybody has already captured the user experience for a failover environment, I would appreciate their help. I'll give an overview of the environment: we use PowerHA for ASCS/SCS failover and Oracle Data Guard for database failover.
I am trying to capture the user experience for the failover environment when:
- ASCS/SCS fails,
- an app server fails (we use an F5 load balancer, which should reroute unless all app servers fail),
- the database fails,
in the following cases:
1. User logged in, NO ACTIVITY. I believe NO IMPACT either ASCS/SCS fail or DB fail or App Server fail.
2. User logged in and run a transaction before failover.
What will happen in case of ASCS / SCS
What will happen in case of DB failover
and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
3. User logged in and run a transaction during failover.
What will happen in case of ASCS / SCS
What will happen in case of DB failover
and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
I'm not sure which of these are possible. In some cases I think the system will hang and need a refresh; in some cases there is an hourglass and then it comes back once failover completes; and in some cases the session is closed completely.
Thanks for your time, and god bless the knowledge you have.
Saroj

I'll try to answer as much as I can (guessing):
> 1. User logged in, NO ACTIVITY. I believe NO IMPACT either ASCS/SCS fail or DB fail or App Server fail.
A DB failure or SCS failure won't have any impact on the end user, but if an app server fails, the user session will be lost and the user will see a pop-up error message in their SAP GUI.
> 2. User logged in and run a transaction before failover.
> What will happen in case of ASCS / SCS
The user won't be affected during failover if the user is doing nothing but idling (replicated enqueue is working before failover).
> What will happen in case of DB failover
The app server won't be able to do much; its work processes go into reconnecting status. It should resume (reconnect to the DB) when failover is completed, so the user should be able to continue their sessions.
> and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
User sessions on the failed app server will be lost. However, the user should be able to log on again if
1) logon via group, and
2) within the group, there is at least one appl server alive.
> 3. User logged in and run a transaction during failover.
The session may hang; or:
> What will happen in case of ASCS / SCS
If the transaction uses the enqueue service, for example, the user will get an error message; otherwise the user won't be affected, e.g. if the user is just searching a list of orders. Users won't be able to log on via a logon group during the failover.
You should also be prepared for users connecting through the message server, e.g. HTTP requests dispatched through the message server directly or via the Web Dispatcher; they won't be able to connect during the failover.
> What will happen in case of DB failover
The user will get an error message and the transaction will be aborted.
> and what will happen in case of App server fail (NO High Availability, only redundant app servers. MAX 4 app servers for each component)
Very similar to case 2. -
Problem in Webcenter Portal application with High Availability changes
Jdeveloper Version: 11.1.1.5
JROCKET
MDS is pointing to IBM/DB2 database.
Weblogic Domain 10.3 with 1 cluster - 2 manage servers.
BigIP - Load Balancing
Database: IBM DB2
I am facing the issues below in our WebCenter Portal application after making high-availability changes.
1. "A connection to the server has failed" Status - 12152
12031
404
2. "Because of inactivity, your session has timed out" - ADF_FACES-30108 exception.
The above exceptions occur randomly in the application, either at the login page or inside the application when clicking any link/button.
I have made high availability changes as per the below blog:
http://rimmidis.blogspot.com/2012/02/adf-application-running-on-clustered.html
Any help is really appreciated.

Fixed the "Because of Inactivity..." popup message by adding more tokens. The PS4 WebCenter Portal default value for CLIENT_STATE_MAX_TOKENS is 3; we increased it to 15, and that fixed the popup message.
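For anyone hitting the same inactivity timeout, CLIENT_STATE_MAX_TOKENS is set as a Trinidad context parameter in the application's web.xml. A sketch of the change described above (15 is simply the value that worked for this poster):

```xml
<!-- web.xml: raise the number of page-state tokens kept per user session -->
<context-param>
  <param-name>org.apache.myfaces.trinidad.CLIENT_STATE_MAX_TOKENS</param-name>
  <param-value>15</param-value>
</context-param>
```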
There was another popup message, "Connection to the server failed, with status = 404, 12152...", which turned out to be a bug in the product: the PortletServletContextListener was throwing a ClassCastException, and that was causing this popup message.
Once again, we noticed these popup messages only when we tried to make our application highly available. -
High Availability for the new versions
Hi,
Does anybody know the HA strategy for a system (CRM 7.0) with one server for ABAP and another server for Java?
Maybe:
Two servers:
NodeA: Abap Database and Abap Central Instance
NodeB: Java Database and Java Central Instance
OR
NodeA: Abap Database and Java Database
NodeB: Abap Central Instance and Java Central Instance
Four servers:
NodeA: Abap Database
NodeB: Abap Central Instance
NodeC: Java Database
NodeD: Java Central Instance
Thanks.

Take a look at the FAQ and other documents at [High Availability]
-
EBS AccessGate in High Availability
Hi,
We are integrating EBS 11i with OAM 10g using EBS AccessGate.
My question is whether we can configure EBS AccessGate in high availability? If yes, can you please provide the details.
Thanks

Hi
Were you able to configure EBS AccessGate in high availability?
Windows 2012 RDS - Session Host servers High Availability
Hello Windows/Terminal server Champs,
I am in the middle of implementing an RDS environment for one of my customers; hope you can help me out.
My customer has asked for HA for the RDS Session Hosts where applications are published, and I have prepared the plan below from a server point of view:
2 Session Host servers, 1 Web Access, 1 License/Connection Broker & 1 Gateway (DMZ).
In the first phase, we plan to target internal users who connect to the Session Host HA pair; these 2 servers will have the applications installed, and internal users will use RDP to access them.
In the second phase we will deal with external parties connecting from an external network, where we plan to integrate NetIQ => Gateway => Web Access/Session Host.
I have successfully installed and configured 2 Session Hosts, 1 License/Broker, 1 Web Access & 1 Gateway. But my main concern is making the Session Hosts highly available, as they host the applications and most of the internal users will use them. To configure this I am following http://technet.microsoft.com/en-us/library/cc753891.aspx
However, most of the architecture has changed in RDS 2012. Can you please help me set up Session Host HA?
Note: we can have only 1 Connection Broker/Licensing server, 1 Web Access server & 1 Gateway server; we cannot add more servers due to cost.
Thanks in advance.

Yes, absolutely no problem in using just one Connection Broker in your environment, as long as your customer understands the SPOF.
The session hosts, however, aren't really what you would class as HA. To set them up with redundancy you would use either Windows NLB, an external NLB device, or Windows DNS round robin. My preferred option when using the Connection Broker is DNS round robin: you give each server in the farm the same farm-name DNS entry, and the Connection Broker then decides which server to allocate the session to.
You must ensure your Session Host servers are identical in terms of software, though: the same software installed in the same paths on all the Session Host servers.
If you use the 2012 deployment wizard through Server Manager roles, the majority of the config is done for you.
Regards,
Denis Cooper
MCITP EA - MCT
-
Best practice for highly available management / publishing servers
I am testing a highly available App-V 5.0 environment, which will deploy App-V packages to a XenApp farm. I have two SQL 2012 servers configured as an availability group for the back end, and two publishing/management servers for the front end.
What is the best practice to configure the publishing / management servers for high availability? Should I configure them as an NLB cluster, which I have tested and does seem to work, or should I just use the GPO to configure the clients to use both
publishing servers, which I have also tested and appears to work?
Thanks,
Patrick Sullivan

In App-V 5.0 the Management and Publishing Servers are hosted in IIS, so use the same approach for HA as you would for any web application.
If NLB is all that's available to you, then use that; otherwise I would recommend a proper load balancing solution such as Citrix NetScaler or KEMP LoadManager.
-
I would like to understand if a shared storage is required for all application servers that will run weblogic + oracle identity manager. I've been using the following oracle guides : http://docs.oracle.com/cd/E40329_01/doc.1112/e28391/iam.htm#BABEJGID and http://www.oracle.com/technetwork/database/availability/maa-deployment-blueprint-1735105.pdf and from my interpretation both talk about configuring all the application servers with access to a shared storage. From an architecture standpoint, does this mean all the application servers need access to an external hard disk? if shared storage is required what are the steps to implement it?
Thanks,
user12107187

You can do it and it will work. But Fusion Middleware products have an EDG (Enterprise Deployment Guide) that provides guidelines to help you implement high availability.
Having shared storage will help you recover from a storage failure; otherwise this might be a point of failure.
Enterprise Deployment Overview
"An Oracle Fusion Middleware enterprise deployment:
Considers various business service level agreements (SLA) to make high-availability best practices as widely applicable as possible
Leverages database grid servers and storage grid with low-cost storage to provide highly resilient, lower cost infrastructure
Uses results from extensive performance impact studies for different configurations to ensure that the high-availability architecture is optimally configured to perform and scale to business needs
Enables control over the length of time to recover from an outage and the amount of acceptable data loss from a natural disaster
Evolves with each Oracle version and is completely independent of hardware and operating system "
Best Regards
Luz -
Highly Available Management Servers in SCVMM 2012
Dear All,
I need to build Highly Available Management Servers (2) (SCVMM 2012) for my environment. I am interested in following things:
1. What are my options? I have done a little reading and seek confirmation that without building a Windows failover cluster, one is not able to build HA SCVMM management servers. Am I correct in understanding this? If so, do I also need a shared disk, to create the cluster first and then install the management servers?
2. What service accounts or groups are needed during the configuration, and what permissions would they need on the respective servers, such as the Hyper-V hosts, and in Active Directory?
3. Do we really need to create a container in Active Directory for an HA SCVMM scenario?
4. Can HA solution be implemented without the WFC?
5. Would appreciate, if you could share some of the links to blogs/articles about step by step instructions to it.
Thanks in advance.

Easy: create a 2-node cluster, nested virtual or physical, it doesn't matter. If virtual, set anti-affinity to keep the nodes on separate hosts (also build a SQL cluster and likewise keep its nodes separate), then follow the articles.
I found no fewer than 10 step-by-step guides when searching, back when I built the HA SCVMM/SQL environment for my company.
MS articles and third-party ones; as you can see, they are pretty straightforward to follow. Best of luck.
http://technet.microsoft.com/en-us/library/gg610678.aspx
http://www.thomasmaurer.ch/2013/08/how-to-install-a-highly-available-scvmm-management-server/
Brian -
Configuring two 11g OID servers in High Availability mode.
I have an OID1 server where I have installed OID 11g and WLS, using SSL port 3131 and non-SSL port 3060. The LDAP setup is working, as the SQL*Net connections are using the LDAP adapter to resolve requests.
I have an OID2 server where I have installed OID 11g using the same ports.
Now, I want to set up a cluster for these two so that the load balancer will automatically route requests to either of the two servers; if one is unavailable, the other will serve the request. I am following the "Configuring High Availability for Identity Management Components" document, but it is not very clear what steps need to be followed.
Any suggestion will be appreciated;
I am also having a problem using ldapbind or any of the OID commands, as they give "unable to locate message file: ldap<language>.msb" despite the fact that I am setting all the env vars such as ORACLE_HOME, ORACLE_INSTANCE, ORA_NLS33 and so on.

You don't need to set up a cluster for the load balancer. The load balancer configuration can point to both servers and, depending on the LBR configuration, act in failover or load-balanced mode. All you need to take care of is that the two OID servers are using the same schema.
When installing first OID server it gives a option to install in cluster mode and when installing the second server you can use the option to expand the cluster created in first installation. But that should not stop you from configuring OID in highly available mode using Load balancer as explained above.
"unable to locate message file: ldap<language>.msb" occurs if you have not set the ORACLE_HOME variable. See that it is set to <MiddlewareHome>/Oracle_IDM1 if you have used the defaults.
Hope this helps,
Sagar -
High availability for file server and exchange with 2 physical servers
Dear Experts,
I have 2 physical servers with local disks only. I want to set up the roles below on them with high availability; please advise the best possible options. We will be using Windows Server 2012 R2.
1. Domain controller
2. Exchange 2013
As of now I am thinking of setting up below:
1. Install Hyper-v on both and create 3 VM on each as
-On Host A- 1 VM for DC, 1 VM for File server with DFS namespace and replication for file server HA and 1 VM for Exchange 2013 with CAS/MBX with DAG and DNS RR for Exchange HA
-On Host B - 1 VM for ADC, 1 VM for File server DFS member for above and 1 VM for Exchange 2013 CAS/MBX with DAG member
I have read on the internet about a new feature called Scale-Out File Server (SoFS) in Windows Server 2012, but I'm not sure it is the preferred option for file sharing.
Any advice will be highly appreciated.
Thanks for the help in advance.
Best regards,
DFS is not really the best way to implement this sort of file server, because a) failover is not fully transparent and does not always happen (say, not during a copy), b) DFS cannot replicate open files, so if you edit a big file and the node reboots you're going to lose ALL the transactions/updates you've applied, and c) it actually slows down the config. See:
DFS for failover
http://help.globalscape.com/help/wafs3/using_microsoft_dfs_for_failover.htm
DFS FAQ
http://technet.microsoft.com/library/cc773238(WS.10).aspx
(check "open files" point here)
DFS Performance
http://blogs.technet.com/b/filecab/archive/2009/08/22/windows-server-dfs-namespaces-performance-and-scalability.aspx
SoFS a) requires shared storage to run, and you don't have any, b) does not support generic workloads (only Hyper-V and SQL Server), and c) technically makes sense for exposing a SAS JBOD or an existing FC SAN to numerous Hyper-V clients over 10 GbE without investing money into SAS switches, SAS HBAs, FC HBAs and newly licensed FC ports. Making a long story short: SoFS is NOT YOUR CASE.
SoFS Overview
http://technet.microsoft.com/en-us/library/hh831349.aspx
http://www.aidanfinn.com/?p=12786
For now you need to find some shared storage to act as a back end for your hypervisor config (a SAS JBOD from the supported list, or a virtual SAN from one of multiple vendors, for example StarWind, see below; make sure you review ALL the vendors) and then create a failover SMB 3.0 share for your file server workload. See:
Clustered Storage Spaces over SAS JBOD
http://technet.microsoft.com/en-us/library/jj822937.aspx
Virtual SAN from inexpensive SATA and no SAS or FC
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
Failover SMB File Server in Windows Server 2012 R2
http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
Fault-tolerant file server on just a pair of nodes
http://www.starwindsoftware.com/ns-configuring-ha-file-server-for-smb-nas
For Exchange you use SMB share from above for a file share witness and use DAG. See:
Exchange DAG
Good luck! Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB does cache I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read: "write-back flash cache for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) that would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following list gives the storage types that you can use to provide shared storage for a guest cluster:
- Shared virtual hard disk: new in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
- Virtual Fibre Channel: introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
- iSCSI: the iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
-
11.1.2 High Availability for HSS LCM
Hello All,
Has anyone out there successfully exported an LCM artifact (Native Users) through Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is High Available through a load balancer?
My current configuration is two load balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared out the import_export folder on server 1. And I have edited the oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a cut and paste of the migration.properties file from server2.
grouping.size=100
grouping.size_unknown_artifact_count=10000
grouping.group_by_type=Y
report.enabled=Y
report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
fileSystem.friendlyNames=false
msr.queue.size=200
msr.queue.waittime=60
group.count=10000
double-encoding=true
export.group.count = 30
import.group.count = 10000
filesystem.artifact.path=\\server1\import_export
When I perform an export of just the native users to the file system, the export fails, and I find the following errors in the log.
[2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
[2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
[2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
I go and look for the path it is searching for, "/server1/import_export/msr/PKG_34.xml", and find that this file does exist; it was in fact created by the export process, so I know it is able to find the correct location, but then it says it can't find it after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Bug Attributes
Type B - Defect Fixed in Product Version 11.1.2.0.000PSE
Severity 3 - Minimal Loss of Service Product Version 11.1.2.0.00
Status 90 - Closed, Verified by Filer Platform 233 - Microsoft Windows x64 (64-bit)
Created 25-Jan-2011 Platform Version 2003 R2
Updated 24-Feb-2011 Base Bug 11696634
Database Version 2005
Affects Platforms Generic
Product Source Oracle
Related Products
Line - Family -
Area - Product 4482 - Hyperion Lifecycle Management
Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration changes: set up a shared disk, then modified migration.properties to filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode.

It is not possible to implement a kind of HA between two different appliances, a 3315 and a 3395.
A node in HA can have all 3 personas.
Suppose Node A is Admin (primary), Monitoring (primary), and PSN, and
Node B is Admin (secondary), Monitoring (secondary), and PSN.
If Node A becomes unavailable, you will have to promote the secondary Admin role to primary manually.
A better arrangement is to have:
Node A: Admin (primary), Monitoring (secondary), and PSN
Node B: Admin (secondary), Monitoring (primary), and PSN
Rate as helpful and mark as correct if this answers your question, for the experts.
Regards, -
NAC Manager High Availability Peer CAM DEAD
Hi,
I have two NAC Managers in high availability, and I have used both sides' eth1 interfaces as the heartbeat link.
I have done the following steps for high availability:
1) Synchronize the times between two CAMs.
2) Generate a temporary SSL certificate on both CAMs and export/import the certificates into each other.
3) Make one CAM the primary and the other the secondary.
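As a sanity check for step 1, the drift between the two CAMs' clocks should be near zero. A minimal shell sketch of that kind of check — both samples are taken locally here as a stand-in for reading each appliance's clock:

```shell
# Stand-in for pulling the clock from each CAM (on real appliances you
# would read each one's time via its CLI or web console).
t1=$(date -u +%s)   # "primary CAM" clock, seconds since epoch
t2=$(date -u +%s)   # "secondary CAM" clock
drift=$((t2 - t1))
# ${drift#-} strips a leading minus sign, giving the absolute value
if [ "${drift#-}" -le 5 ]; then
  echo "clocks in sync (drift ${drift}s)"
else
  echo "clock drift detected: ${drift}s"
fi
```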
But after all this configuration is done, the status in Monitoring > Reports shows the primary CAM as up on both servers and the redundant CAM as down.
Also, on the Failover tab I see: Local CAM - OK [Active] and Peer CAM - DEAD.
I have also attached some screenshots so you can see the same.
Your help will be highly appreciated.
Thanks

Try the following steps and verify that all of them were followed:
http://www.cisco.com/c/en/us/support/docs/security/nac-appliance-clean-access/99945-NAC-CAM-HA.html -
Zenworks Database - High Availability
Hello,
we use ZENworks 10 with an Oracle database. We have two primary ZENworks Servers at two different locations (another town, linked via VPN).
Is it possible to configure the database for high availability, so that the external primary ZENworks Server is still available/manageable after a VPN connection breakdown?
Best regards,
Alex Sommer

This would need to be done with Oracle.
ZCM talks to a single Database.
If you can configure Oracle so that ZCM is unaware that there are
multiple back-end databases and Oracle can somehow figure out how to
resolve changes when the different DBs are not talking, I presume this
would work. This would all need to be handled by Oracle, however.
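The Oracle-side approach hinted at above is typically connect-time failover: ZCM keeps using a single connect alias while Oracle tries the listed nodes in order. A sketch of what that looks like in tnsnames.ora (the hostnames and service name here are hypothetical, and a real RAC or Data Guard setup involves considerably more than this):

```
# tnsnames.ora sketch — dbhost1/dbhost2 and the zcmdb service are
# placeholders; ZCM connects to the ZCMDB alias and Oracle's client
# networking picks whichever address answers.
ZCMDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = OFF)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = zcmdb))
  )
```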
Normally, all Primaries would be in the data center with the database.
Satellite Servers would be in the remote offices.
If the VPN connection was down, users would authenticate with cached
credentials and have their cached bundles/policies.
On 5/4/2011 11:06 AM, alexsommer wrote:
>
> Hello,
>
> we just use Zenworks 10 with an Oracle Database. We have two primary
> Zenworks Servers at two different Locations (Other Town - Linked via
> VPN).
>
> Is it possibile to configure the Database High Availability - so that
> the external primary Zenworks Server is still available/manageable after
> a VPN Connection breakdown.
>
>
> Best regerds,
>
> Alex Sommer
>
>
Craig Wilson - MCNE, MCSE, CCNA
Novell Knowledge Partner
Novell does not officially monitor these forums.
Suggestions/Opinions/Statements made by me are solely my own.
These thoughts may not be shared by either Novell or any rational human.