OMS failover
Hello everyone,
I am looking for some good documentation to understand what I need to do to 'fail over' or reconfigure OMS following a failover of the underlying Grid Control repository database. My Grid Control repository primary database is a 2-node RAC database, which is Data Guarded to a physical standby database that is also a 2-node RAC database.
My OMS instances, running on the current standby database nodes, are configured to use the current primary database. I have to fail over the Grid repository database to the standby site so that the servers hosting the current primary database can be redeployed elsewhere. I know how to fail over the database; what I need to understand in more detail is how to reconfigure the OMS instances on the current standby database servers to use the new primary database following the database failover.
Any help or advice would be greatly appreciated.
Thanks,
Shaun.
Hello,
Thanks for the information. I have successfully failed over the database and reconfigured the OMS and agents to use the new primary database. In case this is of use to anyone else, here is how I did it:
1. On any node of the primary database, issue the command - emctl config emkey -copy_to_credstore
2. Stop the OMS on both nodes of the standby database (S1EMREP) - emctl stop oms
3. Stop the agents on both nodes of the standby database - emctl stop agent
4. Stop the OMS on both nodes of the primary database (S2EMREP) - emctl stop oms
5. Stop the agents on both nodes of the primary database - emctl stop agent
6. Fail over the database; issue the failover command from the DG Broker CLI - DGMGRL> failover to s1emrep
7. Reconfigure the OMS on both nodes of the new primary database (S1EMREP) -
emctl config oms -store_repos_details -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=newprimaryserver.domain)(PORT=1521)))(LOAD_BALANCE=ON)(CONNECT_DATA=(SERVICE_NAME=s1emrep.domain)))"' -repos_user SYSMAN
NB: After entering the above command, it prompts for the password of the SYSMAN user before continuing.
8. Start the OMS on both nodes of the new primary database (S1EMREP) - emctl start oms
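The OMS-side sequence above can be sketched as a shell dry run. The hostnames, service names, and the DGMGRL invocation below are the examples from this thread; RUN=echo prints each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of the OMS-side failover sequence described above.
# Hostnames and service names are examples from this thread; adapt them.
RUN=echo   # set RUN="" to actually execute the commands

$RUN emctl config emkey -copy_to_credstore   # on any primary DB node
$RUN emctl stop oms                          # on every OMS node (both sites)
$RUN emctl stop agent                        # on every agent node (both sites)
$RUN dgmgrl / "failover to s1emrep"          # from the DG Broker CLI
# Repoint the OMS at the new primary; prompts for the SYSMAN password
$RUN emctl config oms -store_repos_details \
  -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=newprimaryserver.domain)(PORT=1521)))(LOAD_BALANCE=ON)(CONNECT_DATA=(SERVICE_NAME=s1emrep.domain)))"' \
  -repos_user SYSMAN
$RUN emctl start oms                         # on both new-primary OMS nodes
```

Run each group of commands on the hosts indicated in the comments; the dry run just shows the order.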
To reconfigure the Grid Control agents to talk to the new OMS, I did the following on each target server hosting an agent:
1. Stop the Grid Control agent - emctl stop agent
2. Edit the $AGENT_HOME/sysman/config/emd.properties file, changing any references to the old primary database server hostnames to the new primary database server hostnames. Save the file.
3. Delete the contents of the $AGENT_HOME/sysman/emd/upload directory.
4. Delete the contents of the $AGENT_HOME/sysman/emd/state directory.
5. Start the Grid Control agent - emctl start agent
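The agent-side steps can be sketched as a dry run too, assuming the hostname swap can be done with sed; AGENT_HOME and both hostnames below are placeholders, not values from the thread:

```shell
#!/bin/sh
# Dry-run sketch of reconfiguring one agent; RUN=echo prints instead of executing.
AGENT_HOME=${AGENT_HOME:-/u01/app/oracle/agent10g}   # placeholder path
OLD_HOST=oldprimaryserver.domain                     # placeholder hostnames
NEW_HOST=newprimaryserver.domain
RUN=echo   # set RUN="" to actually execute the commands

$RUN emctl stop agent
# Repoint emd.properties at the new primary database server hostnames
$RUN sed -i "s/$OLD_HOST/$NEW_HOST/g" "$AGENT_HOME/sysman/config/emd.properties"
$RUN rm -rf "$AGENT_HOME/sysman/emd/upload"/*        # clear pending uploads
$RUN rm -rf "$AGENT_HOME/sysman/emd/state"/*         # clear agent state files
$RUN emctl start agent
```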
I hope the above is of some use to anyone else who finds themselves in the same situation, having to fail over the Grid Control repository DB and reconfigure OMS.
Shaun.
Similar Messages
-
Error in Grid Control: Setup - Management Services and Repository - error
Hi, sometimes in our Grid Control we see, under
Setup -> Management Services and Repository -> Errors, the following error:
OMS failover: Error in OMS status collection. ORA-20233: Invalid Target=Management Services and Repository or Metric=Management_Servlet_Status (validate_target_metric)
How can I correct this ?
We are on 10.2.0.5 (with 3 management servers).
thanks for your help
regards,

See MetaLink notes 388280.1 and 4487966 for a solution.
Hope this helps,
Regards
-
Migrate OMS 11g to new server.
Hi,
We want to take export of 11g Grid control repository and put it into a new 11g database.
And then install new 11g OMS pointing to this new database.
Will this configuration work smoothly ?
Thanks,

If you're doing this to move hardware, adding the OMS and using the standby database is the best option that I see: you simply fail over to the standby database and remove the old OMS from the environment. If you're simply trying to create a test environment, I would recommend a new install with its own targets/agents.
What I would recommend is go one of two routes:
1. Start clean with a new DB and new install, apply all patches, create roles/jobs/users/templates, etc., then migrate the agents over. Metric history will not be available, but the configuration will be clean.
2. If you really want to duplicate the environment, you could do roughly the following; however, there are some downsides:
a. add new oms to existing env
b. create a standby of the db
c. stop em, break standby and activate the copy
d. drop the new oms, update the configuration to point to the new repository, restart
e. drop old targets/agents in the new oms
--> Problems I see with the above: it can cause communication issues with the existing agents; if you try to move the agents over to the new system, they will have conflicts because the targets basically already exist. They will be out of sync and all will have to be resynchronized or reinstalled - no telling which. -
10.2.0.5
Both agent and OMS are at the latest PSU.
Red Hat Linux 4.
As a test, I am trying to secure the agent that is on the same server as the OMS. I am securing it against a VIP, since we have active/passive failover.
I have already secured my OMS against the VIP. It is running, and I can reach Grid Control through the VIP. The agent functions fine against the regular IP before securing.
On this step:
$AGENT_HOME/bin/emctl secure agent -emdWalletSrcUrl https://<virtual_hostname>:<upload_port>/em
It just fails. Here are the two errors from the log.
2011-06-09 15:36:42,538 [main] INFO agent.SecureAgentCmd secureAgent.223 - Requesting an HTTPS Upload URL from the OMS
2011-06-09 15:39:42,616 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
java.io.InterruptedIOException: Connection establishment timed out
at HTTPClient.HTTPConnection.getSocket(HTTPConnection.java:3261)
at HTTPClient.HTTPConnection.doConnect(HTTPConnection.java:4020)
at HTTPClient.HTTPConnection.sendRequest(HTTPConnection.java:3003)
at HTTPClient.HTTPConnection.handleRequest(HTTPConnection.java:2843)
at HTTPClient.HTTPConnection.setupRequest(HTTPConnection.java:2635)
at HTTPClient.HTTPConnection.Get(HTTPConnection.java:923)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.openPage(SecureAgentCmd.java:836)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.getOMSSecurePort(SecureAgentCmd.java:782)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:224)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)
2011-06-09 15:49:33,715 [main] INFO agent.SecureAgentCmd secureAgent.223 - Requesting an HTTPS Upload URL from the OMS
2011-06-09 15:49:34,357 [main] ERROR agent.SecureAgentCmd main.207 - Failed to secure the Agent:
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at com.sun.net.ssl.internal.ssl.InputRecord.handleUnknownRecord(InputRecord.java:523)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:355)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:782)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1089)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:618)
at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59)
at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:112)
at HTTPClient.HTTPConnection.sendRequest(HTTPConnection.java:3018)
at HTTPClient.HTTPConnection.handleRequest(HTTPConnection.java:2843)
at HTTPClient.HTTPConnection.setupRequest(HTTPConnection.java:2635)
at HTTPClient.HTTPConnection.Get(HTTPConnection.java:923)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.openPage(SecureAgentCmd.java:836)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.getOMSSecurePort(SecureAgentCmd.java:782)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.secureAgent(SecureAgentCmd.java:224)
at oracle.sysman.emctl.secure.agent.SecureAgentCmd.main(SecureAgentCmd.java:200)

I have the same setup as you. I configured it a long time ago, so things are a bit fuzzy in my head!
I remember I deleted the old wallet file, changed the REPOSITORY_URL and emdWalletSrcUrl in emd.properties in AGENT_HOME/sysman/config to use the VIP hostname. Then ran "emctl secure agent".
Please note the upload port and SSL usage are different for wallet and repository URLs.
What I changed:
REPOSITORY_URL=https://<virtualhostname>:1159/em/upload
emdWalletSrcUrl=http://<virtualhostname>:4889/em/wallets/emd
What I Didn't change:
EMD_URL=http://<physicalhostname>:3872/emd/main/
This will automatically change to HTTPS once the agent is successfully secured.
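The re-secure sequence described above can be sketched as a dry run. The VIP name is a placeholder; the 1159/4889 ports are the ones quoted in this thread, so verify yours, and note the old wallet file's location varies by agent version:

```shell
#!/bin/sh
# Dry-run sketch of re-securing an agent against the VIP; RUN=echo prints only.
AGENT_HOME=${AGENT_HOME:-/u01/app/oracle/agent10g}   # placeholder path
VIP=virtualhostname.domain                           # placeholder VIP name
RUN=echo   # set RUN="" to actually execute the commands

$RUN emctl stop agent
# (Also delete the old agent wallet first; its location varies by version.)
# Point the repository and wallet URLs at the VIP, as described above
$RUN sed -i \
  -e "s|^REPOSITORY_URL=.*|REPOSITORY_URL=https://$VIP:1159/em/upload|" \
  -e "s|^emdWalletSrcUrl=.*|emdWalletSrcUrl=http://$VIP:4889/em/wallets/emd|" \
  "$AGENT_HOME/sysman/config/emd.properties"
$RUN emctl secure agent
$RUN emctl start agent
```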
HTH -
In Oracle RAC, if a user runs a SELECT query and, while rows are being fetched, the node serving it is evicted, how does failover to another node happen internally?
The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrators Guide, the section on Transparent Application Failover.
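TAF behaviour of this kind is driven by the FAILOVER_MODE clause in the client's connect descriptor. A minimal tnsnames.ora sketch (the alias, host, and service names here are made up; TYPE=SELECT is what lets open cursors continue fetching after failover):

```
RACDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.example.com)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 5)
      )
    )
  )
```

METHOD=BASIC reconnects only at failover time; RETRIES/DELAY control how long the client keeps trying while the service relocates.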
-
Reporting Services as a generic service in a failover cluster group?
There is some confusion on whether or not Microsoft will support a Reporting Services deployment on a failover cluster using scale-out, and adding the Reporting Services service as a generic service in a cluster group to achieve active-passive high
availability.
A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
This article (http://technet.microsoft.com/en-us/library/bb630402.aspx) on the other hand states: "Failover clustering is supported only for the report server database; you
cannot run the Report Server service as part of a failover cluster."
This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
Best Regards,
Peter Wretmo

Hi Peter,
Thanks for your posting.
As Lukasz said in the
blog, failover clustering with SSRS is possible. However, during the failover there is some time during which users will receive errors when accessing SSRS since the network names will resolve to a computer where the SSRS service is in the process of starting.
Besides, there are several considerations and manual steps involved on your part before configuring the failover clustering with SSRS service:
Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server. If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause
a significant failover impact to the entire environment.
SSRS fails over independently of SQL Server.
If SSRS is running, it is going to do work on behalf of the overall deployment, so it will be active. To make SSRS passive, stop the SSRS service on all passive cluster nodes.
So, SSRS is designed to achieve High Availability through the Scale-Out deployment. Though a failover clustered SSRS deployment is achievable, it is not the best option for achieving High Availability with Reporting Services.
Regards,
Mike Yin
TechNet Community Support -
What is solution of nat failover with 2 ISPs?
Now I have leased-line links to 2 ISPs for internet connectivity. I separate user traffic with access lists: www goes to ISP1, and mail and other protocols go to ISP2. Say the link to ISP1 goes down; I need www traffic to fail over to ISP2, and vice versa.
The problem is the ACL on the NAT statements. Suppose you configure this:
access-l 101 permit tcp any any www --> www traffic to ISP1
access-l 101 permit tcp any any mail --> backup for mail traffic if the link to ISP2 is down
access-l 102 permit tcp any any mail --> mail traffic to ISP2
access-l 102 permit tcp any any www --> backup for www traffic if the link to ISP1 is down
ip nat inside source list 101 interface s0 overload
ip nat inside source list 102 interface s1 overload
In this case, the links to both ISP1 and ISP2 are up.
When you apply these ACLs to the NAT statements, NAT processes each statement in order (if I am incorrect, please correct me), so mail traffic will match ACL 101 first and be NATted with the IP of ISP1 only.
Please advise a solution for this.
TIA

Hi,
If you have two serial links connecting to two different service providers, then you can try this.
access-l 101 permit tcp any any www
access-l 102 permit tcp any any mail
route-map isp1 permit 10
match ip address 101
set interface s0
route-map isp2 permit 10
match ip address 102
set interface s1
ip nat inside source route-map isp1 interface s0 overload
ip nat inside source route-map isp2 interface s1 overload
ip route 0.0.0.0 0.0.0.0 s0
ip route 0.0.0.0 0.0.0.0 s1 100
If either link fails, traffic will automatically prefer the other serial interface.
I have not tried this config; I just worked it out on logic. Please go through it and try it if possible.
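One caveat with the floating static routes above: they only fail over when the serial interface's line protocol goes down. If reachability can fail while the interface stays up, object tracking on an IP SLA probe is a common refinement. This is a sketch, not tested, with example addresses; older IOS releases spell these commands `ip sla monitor` and `track ... rtr ... reachability`:

```
ip sla 1
 icmp-echo 203.0.113.1 source-interface Serial0
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability
ip route 0.0.0.0 0.0.0.0 Serial0 track 1
ip route 0.0.0.0 0.0.0.0 Serial1 100
```

When the probe to the ISP1 next hop fails, the tracked default route is withdrawn and the floating static via Serial1 takes over.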
Please see the Note 2 column at:
http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080093fca.shtml#related
Hope it helps
regards
vanesh k -
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!

Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you've really decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs make that acceptable), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Storage Type
Description
Shared virtual hard disk
New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel
Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
Virtual Fibre Channel Overview.
iSCSI
The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more geared toward Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Memory consumption on oms/agent
Hello all, I posted this in the wrong place earlier. I was wondering if someone can shed some light on this. I did get one reply back saying to look at the MetaLink doc, which I did, but it does not seem to be my issue.
Re: agent taking too much memory...
Also, how does OMS run? Does OMS start a Java process? Why is the Java process consuming so much memory?
If I grep for the Java processes, I get the output below (the system was restarted yesterday, hence the different PIDs):
$ ps -ef|grep 29301
oracle 11192 24846 0 08:42:51 pts/1 0:00 grep 29301
oracle 29301 29284 0 14:19:40 ? 27:53 /opt/java6/bin/IA64W/java -client -Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000 -XX:PermSize=128m -Dweblogi
$ ps -ef|grep 29120
oracle 29120 29103 0 14:18:41 ? 12:41 /opt/java6/bin/IA64W/java -client -Xms256m -Xmx1024m -XX:MaxPermSize=512m -XX:CompileThreshold=8000 -XX:PermSize=128m -Dweblogi
oracle 11804 24846 0 08:45:10 pts/1 0:00 grep 29120
$
Why do I have two Java processes, and why do they look like they are running the same command - is that part of OMS?

I assume that you are using Grid Control 11g?
Why are there Java processes? Because that's how the architecture of Grid Control was built :-)
And as a matter of fact, Java processes are usually memory- and CPU-consuming. When checking the requirements for GC, you will see that you need a certain amount of CPU and RAM to keep it running...
Please check the documentation "Enterprise Manager Documentation 11g Release 1 (11.1)", which can be found under http://download.oracle.com/docs/cd/E11857_01/index.htm, for a detailed description of the architecture -
No data in ecc variables in failover mode
Hi all,
I've got a problem with the custom enterprise data layout when CAD is connected to the secondary node.
With "set enterprise..." I set a special layout for CAD enterprise data in my script. This works fine as long as CAD is connected to the primary node. When CAD is connected to the secondary node, there is no data in the expanded call variables and CAD displays the default layout. Changing the default layout and writing the content to predefined variables works fine. So it seems only the expanded call variables are not working in failover mode.
Any ideas?
Br
Sven
(UCCX v8.5)

Hi Sven,
Looks like TAC helped fix the issue. Previously we had rebooted both servers without success. TAC suggested restarting the Desktop Enterprise Service on both servers (I did the Pub first, then the Sub). I've verified that ECC variables are now being sent to CAD correctly.
The root cause might have been a network outage we had a week ago. We were using the Desktop Workflow Administrator at the time of the outage.
Kyle -
Difference between scalable and failover cluster
A scalable cluster is usually associated with HPC clusters, though some might argue that Oracle RAC is this type of cluster: the workload can be divided up and sent to many compute nodes. Usually used for a vectored workload.
A failover cluster is where a standby system or systems are available to take the workload when needed. Usually used for scalar workloads. -
I get an error when I try to patch my Grid Control OMS, always at the OMS patch configuration step.
Content of configToolFailedCommands:
oracle.sysman.emcp.oms.OmsPatchUpgrade -configureOms
oracle.sysman.emcp.aggregates.ConfigPlugIn
oracle.sysman.emcp.oms.StartOMS -configureOms
Content of CfmLogger_2008-04-14_11-00-34-AM.log:
INFO: Creating new CFM connection
INFO: Creating a new logger for oracle.sysman.patchset
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.sysman.patchset.10_2_0_4_0.xml
INFO: No description found in E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML for aggregate=oracle.sysman.top.agent
INFO: Creating a new logger for OuiConfigVariables
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\OuiConfigVariables.1_0_0_0_0.xml
INFO: Creating a new logger for oracle.sysman.top.oms
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.sysman.top.oms.10_2_0_4_0.xml
INFO: Creating a new logger for oracle.sysman.ccr
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.sysman.ccr.10_2_6_0_0.xml
WARNING: {oracle.sysman.emCfg.core.CfmAggregateRef ref to oracle.sysman.top.agent:null:LATEST(unresolved_version):common} was marked unavailable: There are no loaded aggregates for oracle.sysman.top.agent:common
INFO: Aggregate Description oracle.sysman.patchset:10.2.0.4.0:common successfully loaded
INFO: Aggregate Description OuiConfigVariables:1.0.0.0.0:common successfully loaded
INFO: Aggregate Description oracle.sysman.top.oms:10.2.0.4.0:common successfully loaded
INFO: Aggregate Description oracle.sysman.ccr:10.2.6.0.0:common successfully loaded
INFO: Creating a new logger for oracle.sysman.top.oms
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.sysman.top.oms.10_2_0_2_0.xml
INFO: Creating a new logger for oracle.calypso
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.calypso.10_1_2_1_0.xml
INFO: Creating a new logger for oracle.java.j2ee.iascfg
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.java.j2ee.iascfg.10_1_2_1_0.xml
INFO: Creating a new logger for oracle.apache.apache
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.apache.apache.1_3_31_0_0.xml
INFO: Creating a new logger for oracle.oid.oradas
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.oid.oradas.10_1_2_1_0.xml
INFO: Creating a new logger for oracle.iappserver.iasobject
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.iappserver.iasobject.10_1_2_0_2.xml
INFO: Creating a new logger for oracle.apache
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.apache.10_1_2_1_0.xml
INFO: Creating a new logger for oracle.iappserver.iapptop
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.iappserver.iapptop.10_1_2_0_2.xml
INFO: Creating a new logger for oracle.rdbms.jazn.config
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.rdbms.jazn.config.10_1_2_0_2.xml
INFO: Creating a new logger for oracle.iappserver.repository.api
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.iappserver.repository.api.10_1_2_0_2.xml
INFO: Creating a new logger for oracle.iappserver.iappcore
INFO: Unmarshalling E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML\oracle.iappserver.iappcore.10_1_2_0_2.xml
INFO: No description found in E:\OracleHome\oms10g\inventory\ContentsXML\ConfigXML for aggregate=oracle.sysman.top.agent
WARNING: {oracle.sysman.emCfg.core.CfmAggregateRef ref to oracle.sysman.top.agent:null:LATEST(unresolved_version):common} was marked unavailable: There are no loaded aggregates for oracle.sysman.top.agent:common
WARNING: {oracle.sysman.emCfg.core.CfmAggregateRef ref to oracle.sysman.top.agent:null:LATEST(unresolved_version):common} was marked unavailable: There are no loaded aggregates for oracle.sysman.top.agent:common
INFO: Aggregate Description oracle.sysman.patchset:10.2.0.4.0:common successfully loaded
INFO: Aggregate Description OuiConfigVariables:1.0.0.0.0:common successfully loaded
INFO: Aggregate Description oracle.sysman.top.oms:10.2.0.4.0:common successfully loaded
INFO: Aggregate Description oracle.sysman.ccr:10.2.6.0.0:common successfully loaded
INFO: Aggregate Description oracle.sysman.top.oms:10.2.0.2.0:common successfully loaded
INFO: Aggregate Description oracle.calypso:10.1.2.1.0:common successfully loaded
INFO: Aggregate Description oracle.java.j2ee.iascfg:10.1.2.1.0:common successfully loaded
INFO: Aggregate Description oracle.apache.apache:1.3.31.0.0:common successfully loaded
INFO: Aggregate Description oracle.oid.oradas:10.1.2.1.0:common successfully loaded
INFO: Aggregate Description oracle.iappserver.iasobject:10.1.2.0.2:common successfully loaded
INFO: Aggregate Description oracle.apache:10.1.2.1.0:common successfully loaded
INFO: Aggregate Description oracle.iappserver.iapptop:10.1.2.0.2:common successfully loaded
INFO: Aggregate Description oracle.rdbms.jazn.config:10.1.2.0.2:common successfully loaded
INFO: Aggregate Description oracle.iappserver.repository.api:10.1.2.0.2:common successfully loaded
INFO: Aggregate Description oracle.iappserver.iappcore:10.1.2.0.2:common successfully loaded
INFO: Successfully returning from CfmFactory.connect()
INFO: Cfm.save() was called
INFO: Cfm.save(): 15 aggregate instances saved
INFO: oracle.sysman.patchset:IAction.perform() was called on {Action state:patchsetconfigure in CfmAggregateInstance: oracle.sysman.patchset:10.2.0.4.0:common:family=CFM:oh=E:\OracleHome\oms10g:label=0}
INFO: CfwProgressMonitor:actionProgress:About to perform Action=patchsetconfigure Status=is running with ActionStep=0 stepIndex=0 microStep=0
INFO: CfwProgressMonitor:actionProgress:About to perform Action=patchsetconfigure Status=is running with ActionStep=1 stepIndex=1 microStep=0
INFO: oracle.sysman.top.oms:About to execute plug-in OMS Oneoff Patch Application
INFO: oracle.sysman.top.oms:The plug-in OMS Oneoff Patch Application is running
INFO: oracle.sysman.top.oms:Launching CmdExec
INFO: oracle.sysman.top.oms:ExitCode=0
INFO: oracle.sysman.top.oms:The plug-in OMS Oneoff Patch Application executed as attached=true in separate process with exitcode=0
INFO: oracle.sysman.top.oms:The plug-in OMS Oneoff Patch Application has successfully been performed
INFO: oracle.sysman.top.oms:About to execute plug-in Repository Upgrade
INFO: oracle.sysman.top.oms:The plug-in Repository Upgrade is running
INFO: oracle.sysman.top.oms:Internal PlugIn Class: oracle.sysman.emcp.oms.RepositoryPatchUpgrade
INFO: oracle.sysman.top.oms:Classpath = E:\OracleHome\oms10g\sysman\jlib\omsPlug.jar;E:\OracleHome\oms10g\jlib\emConfigInstall.jar;E:\OracleHome\oms10g\sysman\jlib\emCORE.jar;E:\OracleHome\oms10g\sysman\jlib\emagentSDK.jar;E:\OracleHome\oms10g\sysman\jlib\log4j-core.jar;E:\OracleHome\oms10g\jdbc\lib\classes12.jar
INFO: oracle.sysman.top.oms:EmcpPlug:invoke:Starting EmcpPlug invoke method on an aggregate=oracle.sysman.top.oms for Action=patchsetConfiguration in step=1:microstep=0
INFO: oracle.sysman.top.oms:called initialize method
INFO: oracle.sysman.top.oms:
Invoking Repmanager tool ...
INFO: oracle.sysman.top.oms:The value of ORACLE_HOME is E:\OracleHome\oms10g
INFO: oracle.sysman.top.oms:The value of ConnectDes is (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep)))
INFO: oracle.sysman.top.oms:The command is E:\OracleHome\oms10g\sysman\admin\emdrep\bin\RepManager -connect (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep))) -action upgrade -verbose -repos_user sysman -output_file E:/OracleHome/oms10g/sysman/log/emrepmgr.log.10.2.0.4.0
INFO: oracle.sysman.top.oms:[RepositoryPatchUpgrade]:Initialize Environment Variable:{CLIENTNAME=T8193, PROCESSOR_ARCHITECTURE=x86, NEED_EXIT_CODE=TRUE, TMP=C:\Temp\1, ClusterLog=C:\WINDOWS\Cluster\cluster.log, __PROCESS_HISTORY=E:\Source\Grid control 10.1.2.0.4\p3731593_10204\3731593\Disk1\setup.exe;E:\Source\Grid control 10.1.2.0.4\p3731593_10204\3731593\Disk1\install\setup.exe, COMPUTERNAME=VMORACLE01, OS=Windows_NT, PROMPT=$P$G, PERL5LIB=E:\OracleHome\db10g\perl\5.8.3\lib\MSWin32-x86;E:\OracleHome\db10g\perl\5.8.3\lib;E:\OracleHome\db10g\perl\5.8.3\lib\MSWin32-x86;E:\OracleHome\db10g\perl\site\5.8.3;E:\OracleHome\db10g\perl\site\5.8.3\lib;E:\OracleHome\db10g\sysman\admin\scripts, SystemDrive=C:, HOMEDRIVE=C:, LOGONSERVER=\\SIDOMP01, PROCESSOR_IDENTIFIER=x86 Family 15 Model 4 Stepping 8, GenuineIntel, ProgramFiles=C:\Program Files, NUMBER_OF_PROCESSORS=2, TEMP=C:\Temp\1, SMS_LOCAL_DIR=C:\WINDOWS, USERDOMAIN=SCT, PROCESSOR_LEVEL=15, USERDNSDOMAIN=SCT.GOUV.QC.CA, Path=E:\OracleHome\db10g\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem, SESSIONNAME=RDP-Tcp#1, USERNAME=admora, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH, ComSpec=C:\WINDOWS\system32\cmd.exe, SystemRoot=C:\WINDOWS, windir=C:\WINDOWS, PROCESSOR_REVISION=0408, USERPROFILE=C:\Documents and Settings\admora, oracle_home = e:\OracleHome\agent10g, CommonProgramFiles=C:\Program Files\Common Files, oracle_home=e:\oraclehome\oms10g, APPDATA=C:\Documents and Settings\admora\Application Data, HOMEPATH=\Documents and Settings\admora, ALLUSERSPROFILE=C:\Documents and Settings\All Users}
INFO: oracle.sysman.top.oms:calling constructEnvVariables
INFO: oracle.sysman.top.oms:Constructed Env Variable:{CLIENTNAME=T8193, PROCESSOR_ARCHITECTURE=x86, NEED_EXIT_CODE=TRUE, TMP=C:\Temp\1, ClusterLog=C:\WINDOWS\Cluster\cluster.log, __PROCESS_HISTORY=E:\Source\Grid control 10.1.2.0.4\p3731593_10204\3731593\Disk1\setup.exe;E:\Source\Grid control 10.1.2.0.4\p3731593_10204\3731593\Disk1\install\setup.exe, COMPUTERNAME=VMORACLE01, OS=Windows_NT, PROMPT=$P$G, PERL5LIB=E:\OracleHome\db10g\perl\5.8.3\lib\MSWin32-x86;E:\OracleHome\db10g\perl\5.8.3\lib;E:\OracleHome\db10g\perl\5.8.3\lib\MSWin32-x86;E:\OracleHome\db10g\perl\site\5.8.3;E:\OracleHome\db10g\perl\site\5.8.3\lib;E:\OracleHome\db10g\sysman\admin\scripts, SystemDrive=C:, HOMEDRIVE=C:, LOGONSERVER=\\SIDOMP01, PROCESSOR_IDENTIFIER=x86 Family 15 Model 4 Stepping 8, GenuineIntel, ProgramFiles=C:\Program Files, NUMBER_OF_PROCESSORS=2, TEMP=C:\Temp\1, SMS_LOCAL_DIR=C:\WINDOWS, USERDOMAIN=SCT, PROCESSOR_LEVEL=15, USERDNSDOMAIN=SCT.GOUV.QC.CA, Path=E:\OracleHome\db10g\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem, SESSIONNAME=RDP-Tcp#1, USERNAME=admora, PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH, ComSpec=C:\WINDOWS\system32\cmd.exe, SystemRoot=C:\WINDOWS, windir=C:\WINDOWS, PROCESSOR_REVISION=0408, USERPROFILE=C:\Documents and Settings\admora, oracle_home = e:\OracleHome\agent10g, ORACLE_HOME=E:\OracleHome\oms10g, CommonProgramFiles=C:\Program Files\Common Files, oracle_home=e:\oraclehome\oms10g, APPDATA=C:\Documents and Settings\admora\Application Data, HOMEPATH=\Documents and Settings\admora, ALLUSERSPROFILE=C:\Documents and Settings\All Users}
INFO: oracle.sysman.top.oms:Upgrade->Executing Command: CMD /c E:\OracleHome\oms10g\bin\emctl status emkey
INFO: oracle.sysman.top.oms: Command Exit Code:1
INFO: oracle.sysman.top.oms: Command Output:----------
INFO: oracle.sysman.top.oms:Oracle Enterprise Manager 10g Release 4 Grid Control
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
The Em Key is configured properly, but is not secure. Secure the Em Key by running "emctl config emkey -remove_from_repos".
INFO: oracle.sysman.top.oms: Command Error:----------
INFO: oracle.sysman.top.oms:Please enter repository password:
INFO: oracle.sysman.top.oms:calling run commands with inputs
INFO: oracle.sysman.top.oms:Upgrade->Executing Command: CMD /c E:\OracleHome\oms10g\sysman\admin\emdrep\bin\RepManager -connect (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep))) -action upgrade -verbose -repos_user sysman -output_file E:/OracleHome/oms10g/sysman/log/emrepmgr.log.10.2.0.4.0
INFO: oracle.sysman.top.oms: Entering another input
INFO: oracle.sysman.top.oms: Command Exit Code:0
INFO: oracle.sysman.top.oms: Command Output:----------
INFO: oracle.sysman.top.oms:Enter SYS user's password :
Enter repository user password :
Getting temporary tablespace from database...
Found temporary tablespace: TEMP
Environment :
ORACLE HOME = e:/oraclehome/oms10g
REPOSITORY HOME = e:/oraclehome/oms10g
SQLPLUS = e:/oraclehome/oms10g/bin/sqlplus
SQL SCRIPT ROOT = e:/oraclehome/oms10g/sysman/admin/emdrep/sql
EXPORT = e:/oraclehome/oms10g/bin/exp
IMPORT = e:/oraclehome/oms10g/bin/imp
LOADJAVA = e:/oraclehome/oms10g/bin/loadjava
JAR FILE ROOT = e:/oraclehome/oms10g/sysman/admin/emdrep/lib
JOB TYPES ROOT = e:/oraclehome/oms10g/sysman/admin/emdrep/bin
Arguments :
Connect String = (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep)))
Action = upgrade
Repos User = SYSMAN
Default tablespace = MGMT_TABLESPACE
Default Data file = mgmt.dbf
Dflt Dfile Init size = 20m
Dflt Dfile Ext size = 20m
ECM tablespace = MGMT_ECM_DEPOT_TS
ECM Data file = mgmt_ecm_depot1.dbf
ECM Dfile Init size = 100m
ECM Dfile Ext size = 100m
ECM CSA tablespace = MGMT_TABLESPACE
ECM CSA Data file = mgmt_ecm_csa1.dbf
ECM CSA Dfile Init size = 100m
ECM CSA Dfile Ext size = 100m
TEMP tablespace = TEMP
Create options = 3
Verbose output = 1
Output File = E:/OracleHome/oms10g/sysman/log/emrepmgr.log.10.2.0.4.0
Repos creation mode = CENTRAL
MetaLink user name = NOTAVAILABLE_
MetaLink URL = http://updates.oracle.com
Export Directory = e:/oraclehome/oms10g/sysman/log
Import Directory = e:/oraclehome/oms10g/sysman/log
Path Separator = "\;"
Checking SYS Credentials ... rem error switch
Return code = 0.OK.
rem error switch
Upgrading the repository..
Checking for Repos User ... Return code = 0.Exists.
Loading necessary DB objects ...
Checking DB Object (DBMS_SHARED_POOL , PACKAGE) ... rem error switch
Return code = 0.Exists.
rem error switch
DBMS POOL package exists.
Return code = 0.
Done Loading necessary DB objects
Checking repository version..
Running setSchemaStatus: BEGIN EMD_MAINTENANCE.SET_VERSION('_UPGRADE_','0','0','SYSTEM',EMD_MAINTENANCE.G_STATUS_UPGRADING);END;
Upgrading EM Schema ... using new rep manager framework
rem error switch
parsing component core
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component db
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component pa
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component connector
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component ias
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component integic
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component sso_server
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component bc4j
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component forms
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component wireless
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component workflow
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component ocs
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component portal
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component integrationbpm
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component integrationbam
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component discoverer
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component pp
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component ifs
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component repserv
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component beehive
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component content
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component integb2b
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
parsing component ci
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ********** Start header analysis ****************
[14-04-2008 11:01:45] ********** End header analysis ****************
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] The following dump is meant for debugging purposes
[14-04-2008 11:01:45] ********** Start SQL objects dump ****************
[14-04-2008 11:01:45] ********** End SQL objects dump ****************
executing query :
"select component_name, component_mode, version from SYSMAN.MGMT_VERSIONS where version<>'0'"
executing query :
"select tablespace_name, table_name from all_tables where owner ='SYSMAN' and table_name in ( 'MGMT_TARGETS' , 'MGMT_JOB_PARAMETER' ) order by table_name asc"
component integic is already at 10.2.0.2.0
component sso_server is already at 10.2.0.2.0
component bc4j is already at 10.2.0.2.0
component forms is already at 10.2.0.2.0
component wireless is already at 10.2.0.2.0
component workflow is already at 10.2.0.2.0
component portal is already at 10.2.0.2.0
component integrationbpm is already at 10.2.0.2.0
component integrationbam is already at 10.2.0.2.0
component discoverer is already at 10.2.0.2.0
component ifs is already at 10.2.0.2.0
component repserv is already at 10.2.0.2.0
component integb2b is already at 10.2.0.2.0
has_upgrade_scripts:0
executing core schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45]
executing core recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: core *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: core *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
executing db schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45]
executing db recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: db *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: db *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component pa does not exist in the current schema. create it.
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for create scripts in component: pa *****
[14-04-2008 11:01:45] ***** End Final order for create scripts in component: pa *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component connector does not exist in the current schema. create it.
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for create scripts in component: connector *****
[14-04-2008 11:01:45] ***** End Final order for create scripts in component: connector *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
executing ias schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45]
executing ias recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: ias *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: ias *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component integic is already at 10.2.0.2.0
skipping integic upgrade
has_upgrade_scripts:0
component sso_server is already at 10.2.0.2.0
skipping sso_server upgrade
has_upgrade_scripts:0
component bc4j is already at 10.2.0.2.0
skipping bc4j upgrade
has_upgrade_scripts:0
component forms is already at 10.2.0.2.0
skipping forms upgrade
has_upgrade_scripts:0
component wireless is already at 10.2.0.2.0
skipping wireless upgrade
has_upgrade_scripts:0
component workflow is already at 10.2.0.2.0
skipping workflow upgrade
has_upgrade_scripts:0
executing ocs schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45]
executing ocs recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: ocs *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: ocs *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component portal is already at 10.2.0.2.0
skipping portal upgrade
has_upgrade_scripts:0
component integrationbpm is already at 10.2.0.2.0
skipping integrationbpm upgrade
has_upgrade_scripts:0
component integrationbam is already at 10.2.0.2.0
skipping integrationbam upgrade
has_upgrade_scripts:0
component discoverer is already at 10.2.0.2.0
skipping discoverer upgrade
has_upgrade_scripts:0
executing pp schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45]
executing pp recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: pp *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: pp *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component ifs is already at 10.2.0.2.0
skipping ifs upgrade
has_upgrade_scripts:0
component repserv is already at 10.2.0.2.0
skipping repserv upgrade
has_upgrade_scripts:0
component beehive does not exist in the current schema. create it.
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for create scripts in component: beehive *****
[14-04-2008 11:01:45] ***** End Final order for create scripts in component: beehive *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component content does not exist in the current schema. create it.
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for create scripts in component: content *****
[14-04-2008 11:01:45] ***** End Final order for create scripts in component: content *****
[14-04-2008 11:01:45]
has_upgrade_scripts:0
component integb2b is already at 10.2.0.2.0
skipping integb2b upgrade
has_upgrade_scripts:0
executing ci schema_upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45] ***** End Final order for schema_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45]
executing ci recreation scripts
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for recreate scripts in component: ci *****
[14-04-2008 11:01:45] ***** End Final order for recreate scripts in component: ci *****
[14-04-2008 11:01:45]
executing core pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45]
executing core data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: core *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: core *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: core *****
[14-04-2008 11:01:45]
executing db pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45]
executing db data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: db *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: db *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: db *****
[14-04-2008 11:01:45]
running post_creation process for component pa
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_create scripts in component: pa *****
[14-04-2008 11:01:45] ***** End Final order for post_create scripts in component: pa *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for outofbox scripts in component: pa *****
[14-04-2008 11:01:45] ***** End Final order for outofbox scripts in component: pa *****
[14-04-2008 11:01:45]
running post_creation process for component connector
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_create scripts in component: connector *****
[14-04-2008 11:01:45] ***** End Final order for post_create scripts in component: connector *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for outofbox scripts in component: connector *****
[14-04-2008 11:01:45] ***** End Final order for outofbox scripts in component: connector *****
[14-04-2008 11:01:45]
executing ias pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45]
executing ias data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ias *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: ias *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: ias *****
[14-04-2008 11:01:45]
component integic is already at 10.2.0.2.0
skipping integic data upgrade
component sso_server is already at 10.2.0.2.0
skipping sso_server data upgrade
component bc4j is already at 10.2.0.2.0
skipping bc4j data upgrade
component forms is already at 10.2.0.2.0
skipping forms data upgrade
component wireless is already at 10.2.0.2.0
skipping wireless data upgrade
component workflow is already at 10.2.0.2.0
skipping workflow data upgrade
executing ocs pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45]
executing ocs data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ocs *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: ocs *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: ocs *****
[14-04-2008 11:01:45]
component portal is already at 10.2.0.2.0
skipping portal data upgrade
component integrationbpm is already at 10.2.0.2.0
skipping integrationbpm data upgrade
component integrationbam is already at 10.2.0.2.0
skipping integrationbam data upgrade
component discoverer is already at 10.2.0.2.0
skipping discoverer data upgrade
executing pp pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45]
executing pp data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: pp *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: pp *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: pp *****
[14-04-2008 11:01:45]
component ifs is already at 10.2.0.2.0
skipping ifs data upgrade
component repserv is already at 10.2.0.2.0
skipping repserv data upgrade
running post_creation process for component beehive
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_create scripts in component: beehive *****
[14-04-2008 11:01:45] ***** End Final order for post_create scripts in component: beehive *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for outofbox scripts in component: beehive *****
[14-04-2008 11:01:45] ***** End Final order for outofbox scripts in component: beehive *****
[14-04-2008 11:01:45]
running post_creation process for component content
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_create scripts in component: content *****
[14-04-2008 11:01:45] ***** End Final order for post_create scripts in component: content *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for outofbox scripts in component: content *****
[14-04-2008 11:01:45] ***** End Final order for outofbox scripts in component: content *****
[14-04-2008 11:01:45]
component integb2b is already at 10.2.0.2.0
skipping integb2b data upgrade
executing ci pre data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45] ***** End Final order for pre_data_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45]
executing ci data upgrade scripts from version 10.2.0.2.0
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45] ***** End Final order for data_upgrade (from version: 10.2.0.2.0) scripts in component: ci *****
[14-04-2008 11:01:45]
[14-04-2008 11:01:45]
[14-04-2008 11:01:45] ***** Start Final order for post_data_upgrade: scripts in component: ci *****
[14-04-2008 11:01:45] ***** End Final order for post_data_upgrade: scripts in component: ci *****
[14-04-2008 11:01:45]
Return code = 0
*** running TransX for files under e:/oraclehome/oms10g/sysman/admin/emdrep/rsc ***
using transx command line :
e:/oraclehome/oms10g/jdk/bin/java -cp e:/oraclehome/oms10g/lib/oraclexsql.jar;e:/oraclehome/oms10g/lib/transx.zip;e:/oraclehome/oms10g/xdk/lib/transx.zip;e:/oraclehome/oms10g/lib/xml.jar;e:/oraclehome/oms10g/lib/xmlparserv2.jar;e:/oraclehome/oms10g/lib/xschema.jar;e:/oraclehome/oms10g/lib/xsu12.jar;e:/oraclehome/oms10g/rdbms/jlib/xdb.jar;e:/oraclehome/oms10g/jdk/lib/dt.jar;e:/oraclehome/oms10g/oc4j/jdbc/lib/orai18n.jar;e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar;e:/oraclehome/oms10g/emcore/classes/debug;e:/oraclehome/oms10g/emcore/lib/emCORE.jar;e:/oraclehome/oms10g/sysman/jlib/emCORE.jar;e:/oraclehome/oms10g/oc4j/jdbc/lib/ojdbc14.jar;e:/oraclehome/oms10g/oc4j/jdbc/lib/ojdbc14dms.jar;e:/oraclehome/oms10g/jdbc/lib/ojdbc5.jar;e:/oraclehome/oms10g/jdbc/lib/ojdbc5dms.jar;e:/oraclehome/oms10g/oc4j/lib/dms.jar;e:/oraclehome/oms10g/oc4j/jdbc/lib/dms.jar;e:/oraclehome/oms10g/dms/lib/dms.jar oracle.sysman.emdrep.util.TransxWrapper 2>> E:/OracleHome/oms10g/sysman/log/emrepmgr.log.10.2.0.4.0
Found Metadata File: e:/oraclehome/oms10g/sysman/admin/emdrep/sql/core/latest/test_metadata/core.def
Adding e:/oraclehome/oms10g/oc4j/jdbc/lib/ojdbc14dms.jar
Adding e:/oraclehome/oms10g/oc4j/lib/dms.jar
Adding e:/oraclehome/oms10g/oc4j/jdbc/lib/orai18n.jar
Adding e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emCORE.jar
Adding e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar
Adding e:/oraclehome/oms10g/lib/servlet.jar
Adding e:/oraclehome/oms10g/jdbc/lib/orai18n.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar
Running oracle.sysman.eml.gensvc.test.data.SeedMetadata
ExecJava: Running e:/oraclehome/oms10g/jdk/bin/java -cp "e:/oraclehome/oms10g/oc4j/jdbc/lib/ojdbc14dms.jar;e:/oraclehome/oms10g/oc4j/lib/dms.jar;e:/oraclehome/oms10g/oc4j/jdbc/lib/orai18n.jar;e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar;e:/oraclehome/oms10g/sysman/jlib/emCORE.jar;e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar;e:/oraclehome/oms10g/lib/servlet.jar;e:/oraclehome/oms10g/jdbc/lib/orai18n.jar;e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar" oracle.sysman.eml.gensvc.test.data.SeedMetadata Connection Descriptor: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep)))
HTTP Test Inserted Successfully
Successfully Added HTTP Query Descriptors
DHTML Test Inserted Successfully
Successfully Added DHTML Query Descriptors
HTTPPING Test Inserted Successfully
Ping Test Inserted Successfully
Successfully Added PING Query Descriptors
DNS Test Inserted Successfully
Successfully Added DNS Query Descriptors
FTP Test Inserted Successfully
Successfully Added FTP Query Descriptors
Port Test Inserted Successfully
Successfully Added Port Query Descriptors
TNS Test Inserted Successfully
Successfully Added TNS Query Descriptors
SQLT Test Inserted Successfully
Successfully Added SQLT Query Descriptors
JDBC Test Inserted Successfully
Successfully Added JDBC Query Descriptors
Forms Test Inserted Successfully
Successfully Added Forms Query Descriptors
OS Test Inserted Successfully
Successfully Added OS Query Descriptors
Creating imap, smtp, ldap, pop, soap test types
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\imap.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: IMAP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\smtp.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: SMTP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\ldap.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: LDAP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\pop.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: POP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\nntp.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: NNTP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\soap.properties
CreateTestType:createTestMetadataObject: BEGIN
Enabled test for: SOAP generic_service 1.0
CreateTestType:createCompleteTest: END
******** ORACLE_HOME is e:/oraclehome/oms10g
test properties path: e:/oraclehome/oms10g\sysman\admin\emdrep\prop\siebel.properties
CreateTestType:createTestMetadataObject: BEGIN
CreateTestType:createCompleteTest: END
Found Metadata File: e:/oraclehome/oms10g/sysman/admin/emdrep/sql/ocs/latest/test_metadata/test_metadata_cs.def
Adding e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emCORE.jar
Adding e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar
Adding e:/oraclehome/oms10g/lib/servlet.jar
Adding e:/oraclehome/oms10g/jdbc/lib/orai18n.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emcs.jar
Running oracle.sysman.ocs.test.data.OCSSeedMetadata
ExecJava: Running e:/oraclehome/oms10g/jdk/bin/java -cp "e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar;e:/oraclehome/oms10g/sysman/jlib/emCORE.jar;e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar;e:/oraclehome/oms10g/lib/servlet.jar;e:/oraclehome/oms10g/jdbc/lib/orai18n.jar;e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar;e:/oraclehome/oms10g/sysman/jlib/emcs.jar" oracle.sysman.ocs.test.data.OCSSeedMetadata Connection Descriptor: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep)))
Found Metadata File: e:/oraclehome/oms10g/sysman/admin/emdrep/sql/pa/latest/test_metadata/test_metadata_pa.def
Adding e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emCORE.jar
Adding e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar
Adding e:/oraclehome/oms10g/lib/servlet.jar
Adding e:/oraclehome/oms10g/jdbc/lib/orai18n.jar
Adding e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar
Adding e:/oraclehome/oms10g/j2ee/OC4J_EM/applications/em/em/WEB-INF/lib/empa.jar
Running oracle.sysman.siebel.test.data.SiebelSeedMetadata
ExecJava: Running e:/oraclehome/oms10g/jdk/bin/java -cp "e:/oraclehome/oms10g/jdbc/lib/ojdbc14.jar;e:/oraclehome/oms10g/sysman/jlib/emCORE.jar;e:/oraclehome/oms10g/sysman/jlib/log4j-core.jar;e:/oraclehome/oms10g/lib/servlet.jar;e:/oraclehome/oms10g/jdbc/lib/orai18n.jar;e:/oraclehome/oms10g/sysman/jlib/emagentSDK.jar;e:/oraclehome/oms10g/j2ee/OC4J_EM/applications/em/em/WEB-INF/lib/empa.jar" oracle.sysman.siebel.test.data.SiebelSeedMetadata Connection Descriptor: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=VMORACLE01)(PORT=1521)))(CONNECT_DATA=(SID=rep)))
Done.
Running setSchemaStatus: BEGIN EMD_MAINTENANCE.SET_VERSION('_UPGRADE_','0','0','SYSTEM',EMD_MAINTENANCE.G_STATUS_CONFIGURED_READY);END;
Repository Upgrade Successful.
INFO: oracle.sysman.top.oms: Command Error:----------
INFO: oracle.sysman.top.oms:'stty' is not recognized as an internal or external command,
operable program or batch file.
'stty' is not recognized as an internal or external command,
operable program or batch file.
'stty' is not recognized as an internal or external command,
operable program or batch file.
'stty' is not recognized as an internal or external command,
operable program or batch file.
INFO: oracle.sysman.top.oms:Upgrade->Executing Command: CMD /c E:\OracleHome\oms10g\bin\emctl config emkey -repos
INFO: oracle.sysman.top.oms: Command Exit Code:0
INFO: oracle.sysman.top.oms: Command Output:----------
INFO: oracle.sysman.top.oms:Oracle Enterprise Manager 10g Release 4 Grid Control
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
The Em Key has been configured successfully.
INFO: oracle.sysman.top.oms: Command Error:----------
INFO: oracle.sysman.top.oms:Please enter repository password:
INFO: oracle.sysman.top.oms:EmcpPlug:invoke:Completed EmcpPlug invoke method on an aggregate=oracle.sysman.top.oms for Action=patchsetConfiguration in step=1:microstep=0
INFO: oracle.sysman.top.oms:The plug-in Repository Upgrade has successfully been performed
INFO: oracle.sysman.top.oms:About to execute plug-in OMS Patch Configuration
INFO: oracle.sysman.top.oms:The plug-in OMS Patch Configuration is running
INFO: oracle.sysman.top.oms:Internal PlugIn Class: oracle.sysman.emcp.oms.OmsPatchUpgrade
INFO: oracle.sysman.top.oms:Classpath = E:\OracleHome\oms10g\sysman\jlib\omsPlug.jar;E:\OracleHome\oms10g\jlib\emConfigInstall.jar;E:\OracleHome\oms10g\sysman\jlib\emCORE.jar;E:\OracleHome\oms10g\sysman\jlib\emagentSDK.jar;E:\OracleHome\oms10g\sysman\jlib\log4j-core.jar;E:\OracleHome\oms10g\jdbc\lib\classes12.jar
INFO: oracle.sysman.top.oms:EmcpPlug:invoke:Starting EmcpPlug invoke method on an aggregate=oracle.sysman.top.oms for Action=patchsetConfiguration in step=2:microstep=0
INFO: oracle.sysman.top.oms:inside perform
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:Performing Command=CMD /C E:\OracleHome\oms10g\opmn\bin\opmnctl stopall
INFO: oracle.sysman.top.oms:Stop Opmn Error = CMD /C E:\OracleHome\oms10g\opmn\bin\opmnctl stopall
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:CMD /C E:\OracleHome\oms10g\opmn\bin\opmnctl stopall have completed with exitCode=2
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:Command Output stdout:
'opmnctl: opmn is not running
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:Command Output stderr:
INFO: oracle.sysman.top.oms:stopOpmnService:WINDOWS:status=2. Ignoring OpmnStatus='opmnctl: opmn is not running
INFO: oracle.sysman.top.oms:stoping the Opms services
INFO: oracle.sysman.top.oms:OmsPlugIn:Requested Configuration Step 0 have been completed with status=true
INFO: oracle.sysman.top.oms:the value of OMS home isE:\OracleHome\oms10g
INFO: oracle.sysman.top.oms:b_deployed is false
INFO: oracle.sysman.top.oms:Starting Oms deploying
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:Performing Command=CMD /C E:\OracleHome\oms10g\bin\EMDeploy.bat
INFO: oracle.sysman.top.oms:EMDeploy Error = CMD /C E:\OracleHome\oms10g\bin\EMDeploy.bat
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:CMD /C E:\OracleHome\oms10g\bin\EMDeploy.bat have completed with exitCode=2
INFO: oracle.sysman.top.oms:PerformSecureCommand:runCmd:Command Output stdout:
'Environment :
ORACLE HOME = e:\oraclehome\oms10g
STAGE DIR = e:\oraclehome\oms10g/j2ee/OC4J_EM/applications/em
TEMP DIR = C:\Temp\1
PSEP = ;
Info : e:\oraclehome\oms10g/lib/ojsp.jar doesn't exist
Info : e:\oraclehome\oms10g/lib/ojsputil.jar doesn't exist
Info : e:\oraclehome\oms10g/syndication/lib/syndserver.jar doesn't exist
Info : e:\oraclehome\oms10g/rdbms/jlib/xsu12.jar doesn't exist
Info : e:\oraclehome\oms10g/network/jlib/netcfg4em12.jar doesn't exist
Info : e:\oraclehome\oms10g/encryption/jlib/ojmisc.jar doesn't exist
adding <classpath path="e:\oraclehome\oms10g/lib/ojsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/webservices/lib/wsdl.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/dsv2.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/classgen.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/rdbms/jlib/jmscommon.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/ojsputil.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/oraclexsql.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/providerutil.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/syndication/lib/syndserver.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/xschema.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/rdbms/jlib/xsu12.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/regexp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/oui/jlib/OraInstaller.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/network/jlib/netcfg4em12.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/encryption/jlib/ojmisc.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/orai18n.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/portal/jlib/pdkjava.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/portal/jlib/ptlshare.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/share.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/uix2.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/ohw.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/commons-el.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/jsp-el-api.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/oracle-el.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/axis.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/axis-ant.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/commons-discovery-0.2.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/commons-logging-1.0.4.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/jaxrpc.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/saaj.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/wsdl4j-1.5.1.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jdk/lib/tools.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emCORE.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emCfg.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emPid.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emProvisioningAll.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emSDKsamples.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcliload.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emclidownload.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcoreALL.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcoreAgent.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcoreTest.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcore_emjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcore_emjspf_classes.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emdloader.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emagentSDK.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emagentTest.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/iview.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/jviewsall.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/qsma.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/svgdom.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/omsPlug.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/xml.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/lib/xmlmesg.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/jcb.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/log4j-core.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/jlib/emConfigInstall.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emDB.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emdb_emjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/ewm-1_1.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emas.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emasSDK.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emas_emdjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emas_emjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emd_java.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/webapps/emd/WEB-INF/lib/iview.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcs_emdjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcs_emjsp.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcs.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/emcsSDK.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/prov/agentpush/jlib/remoteinterfaces.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/prov/agentpush/jlib/ssh.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/prov/agentpush/jlib/jsch.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/agentpush.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/asprovALL.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/bpelprovALL.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/collabsuiteuser.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/dnALL.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/ecALL.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/ejb-prov.jar"/> to orion-web.xml
adding <classpath path="e:\oraclehome\oms10g/sysman/jlib/j2ee/empp_emjsp.jar"/> to orion-web.xml
adding
No update yet. I'm going to run the tool that was suggested, but I have had no issues with any previous patch. I thought the contents of the file I added to my post would ring a bell with somebody who has had the same problem, but it seems I was wrong. I'll post back once I've run the tool suggested in the note.
Thanks all -
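For anyone hitting a similar repository upgrade, a quick sanity check after the patch run is to confirm the OMS is up and look at the component versions the upgrade scripts maintain. This is a sketch, not from the original post: the MGMT_VERSIONS table/column names assume the 10g repository schema, and the "rep" SID is taken from the log above, so verify both against your own install.

```shell
# Sketch only: post-upgrade checks on the OMS host
# (assumes the 10g SYSMAN schema and the "rep" SID from the log above).

# Confirm the OMS came back up after the patch configuration completed
emctl status oms

# The upgrade log ends with EMD_MAINTENANCE.SET_VERSION; the recorded
# component versions can be inspected from the SYSMAN schema:
sqlplus sysman@rep
SQL> SELECT component_name, version, status
  2  FROM   mgmt_versions
  3  ORDER  BY component_name;
```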
Install Guide - SQL Server 2014, Failover Cluster, Windows 2012 R2 Server Core
I am looking for anyone who has a guide with notes about an installation of a two node, multi subnet failover cluster for SQL Server 2014 on Server Core edition
Hi KamarasJaranger,
According to your description, you want to configure a SQL Server 2014 multi-subnet failover cluster on Windows Server 2012 R2. Below are the overall steps for the configuration; for the detailed steps, please download and refer to the PDF file.
1. Add the required Windows features (.NET Framework 3.5 Features, Failover Clustering and Multipath I/O).
2. Discover target portals.
3. Connect targets and configure Multipathing.
4. Initialize and format the disks.
5. Verify the storage replication process.
6. Run the Failover Cluster Validation Wizard.
7. Create the Windows Server 2012 R2 multi-subnet cluster.
8. Tune cluster heartbeat settings.
9. Install SQL Server 2014 on a multi-subnet failover cluster.
10. Add a node on a SQL Server 2014 multi-subnet cluster.
11. Tune the SQL Server 2014 failover clustered instance DNS settings.
12. Test application connectivity.
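Several of the steps above can also be driven from PowerShell. As a rough sketch only (the node names, cluster name, and addresses below are placeholders, not from this thread):

```powershell
# Step 1: add the required Windows features on each node
Install-WindowsFeature NET-Framework-Core, Failover-Clustering, Multipath-IO `
    -IncludeManagementTools

# Step 6: run cluster validation against both nodes
Test-Cluster -Node SQLNODE1, SQLNODE2

# Step 7: create the multi-subnet cluster with one static address per subnet
New-Cluster -Name SQLCLU01 -Node SQLNODE1, SQLNODE2 `
    -StaticAddress 10.0.1.50, 10.0.2.50 -NoStorage
```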
Regards,
Michelle Li -
Summary
General
  Upload Metric Data
  Status: Status Pending
  Availability (%): Unavailable
  Host:
  Version:
  Platform:
  Operating System:
  Operating System Agent Username: Unavailable
Interaction with Management Service
  Upload to: Unavailable
  Last Successful Upload: Unavailable
  Agent to Management Service Response Time (ms): Unavailable
  Agent Upload Interval (minutes): 15
Detailed Status
  Restarts (last 24 hours): 0
  Secure: Unavailable
  Agent Collecting: Unavailable
  Agent Backed Off: No
  Agent Purging Data: Unavailable
Above is the display from the agent page on the OEM console.
I have also checked the agents page and confirmed that the agent is not Blacked Out.
Hi Krishnan,
emctl unsecure agent fails with the error:
[15-04-2015 16:28:19] USERINFO ::Checking Agent for HTTP...
[15-04-2015 16:28:19] USERINFO :: Done.
[15-04-2015 16:28:24] USERINFO ::Agent is already stopped... Done.
[15-04-2015 16:28:24] USERINFO ::Unsecuring agent... Started.
2015-04-15 16:28:25,294 [main] INFO agent.SecureAgentCmd openPage.871 - Opening: https://usatl-w-cmcloud.americas.abb.com:1159/empbs/genwallet
2015-04-15 16:28:29,521 [main] INFO agent.SecureAgentCmd openPage.897 - Response Status Code: 200
2015-04-15 16:28:29,523 [main] INFO agent.SecureAgentCmd openPage.905 - RESPONSE_STATUS header: OK
2015-04-15 16:28:29,525 [main] INFO agent.SecureAgentCmd openPage.906 - Response body: -1
2015-04-15 16:28:30,311 [main] INFO agent.SecureAgentCmd openPage.871 - Opening: https://usatl-w-cmcloud.americas.abb.com:1159/empbs/genwallet
2015-04-15 16:28:32,258 [main] INFO agent.SecureAgentCmd openPage.897 - Response Status Code: 200
2015-04-15 16:28:32,261 [main] INFO agent.SecureAgentCmd openPage.905 - RESPONSE_STATUS header: OK
2015-04-15 16:28:32,262 [main] INFO agent.SecureAgentCmd openPage.906 - Response body: -1
[15-04-2015 16:28:32] USERINFO ::OMS is locked. Cannot unsecure the Agent.
[15-04-2015 16:28:32] USERINFO ::Unsecuring Agent... Failed.
The OMS is on a Windows 7 VM. Do you know the command to unlock the OMS?
Thanks,
Jerald -
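The error in the log above ("OMS is locked. Cannot unsecure the Agent.") means the OMS has been configured to accept only secure (HTTPS) agent uploads. The usual counterpart of emctl secure lock is emctl secure unlock, run from the OMS home; the exact flags vary by Enterprise Manager release, so verify the syntax with your release's emctl help output before running. A sketch:

```shell
# Sketch only: run from the OMS home on the OMS host. Flags vary by
# EM release; check your emctl help output for the exact syntax.

# Unlock the OMS so it accepts non-secure (HTTP) agent uploads again
emctl secure unlock

# Then, back on the agent host, retry the unsecure operation
emctl unsecure agent
emctl start agent
```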
Mounted volume not showing up with Windows 2012 R2 failover cluster
Hi
We configured some drives as mounted volumes and added them to the failover cluster, but Failover Cluster Manager is not showing the mounted volume details; it shows them as seen below.
I would appreciate help from someone in correcting this issue.
Thanks in advance
LMS
Hi LMS,
Are you asking about the disk being shown as a GUID? In a Windows Server 2012 R2 failover cluster, a cluster mount point disk is shown as a volume GUID. I created a mount point inside a cluster and saw the same behavior: instead of the mount point name, the volume GUID was shown after the volume label. That must be by design.
How to configure volume mount points on a Microsoft Cluster Server
http://support.microsoft.com/kb/280297
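As a quick way to see what the KB article describes, each partition's access paths (the mount point folder and the \\?\Volume{GUID}\ form) can be listed from PowerShell. A sketch only; the disk/partition numbers and folder path below are placeholders:

```powershell
# List each partition's access paths; cluster mount points appear here as
# both the folder path and the \\?\Volume{GUID}\ name the article discusses
Get-Partition |
    Select-Object DriveLetter, @{n='AccessPaths'; e={$_.AccessPaths -join ', '}}

# Add a folder mount point for a volume (placeholder disk/partition/path)
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath 'D:\Mount\Data'
```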
I’m glad to be of help to you!