Find nodes in a cluster
How to find all the nodes in a cluster.
olsnodes -n only shows information for nodes that are up; a node that is down is not listed. Is there any command or file from which I can check all the nodes of a cluster, even the ones that are down?
In 11.2.0.3 I use:
crsctl stat res -t
Hope it helps.
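For the original question (seeing nodes that are down), note that in 11gR2 olsnodes itself can print a status column: "olsnodes -n -s" shows node name, node number, and Active/Inactive, which covers down nodes as well. The sketch below only demonstrates the filtering step over sample output of that shape; the node names and the sample output are assumptions, not taken from this thread.

```shell
# Sample output in the shape 'olsnodes -n -s' prints on 11.2
# (node name, node number, status). The names here are made up.
sample_output='rac1 1 Active
rac2 2 Active
rac3 3 Inactive'

# All registered nodes, regardless of state:
all_nodes=$(printf '%s\n' "$sample_output" | awk '{print $1}')

# Just the nodes that are currently down:
down_nodes=$(printf '%s\n' "$sample_output" | awk '$3 == "Inactive" {print $1}')

echo "all nodes:  $all_nodes"
echo "down nodes: $down_nodes"
```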
Similar Messages
-
How to find out the IP@s of all nodes in a cluster?
Is there any way to retrieve the IP addresses of all nodes in a cluster?
The problem is the following. We intend to write an administration program
that administers all nodes of a cluster using rmi (e.g. tell all singletons
in the cluster to reload configuration values etc.). My understanding is
that rmi only talks to a single node in a cluster. It would be a convenient
feature if the administration program could figure out all nodes in a
cluster by itself and then administers each node sequentially. So far we're
planning to pass all IP addresses to the administration program e.g. as
command line arguments but what if a node gets left out due to human error?
Thanks for your help.
Bernie
There is no public interface to inquire about the IP addresses of the servers in a cluster. If you use WLS 6.0, there is an administrative console that uses JMX to manage the cluster. Perhaps that would be of use to you?
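Since there is no public interface to enumerate cluster members, the host list has to be maintained by hand, which is exactly where the human-error risk mentioned above comes in. A small sanity check before administering the nodes catches the most common mistake (an address pasted twice instead of a new one being added). This is only a sketch; the file name and addresses are invented, and the real RMI call is stubbed out.

```shell
# Hypothetical hand-maintained node list: one IP address per line.
nodes_file=$(mktemp)
cat > "$nodes_file" <<'EOF'
10.0.0.11
10.0.0.12
10.0.0.12
EOF

# An address listed twice usually means another address was pasted
# over by mistake, i.e. a node was silently left out of the list.
dupes=$(sort "$nodes_file" | uniq -d)
[ -n "$dupes" ] && echo "duplicate entries (check the list!): $dupes"

# Administer each unique node in turn (the real admin call is
# stubbed out with echo here).
for ip in $(sort -u "$nodes_file"); do
    echo "administering $ip"
done
rm -f "$nodes_file"
```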
-
Question about adding an Extra Node to SOFS cluster
Hi, I have a fully functioning SOFS cluster with two nodes. It uses SAN FC storage, not SAS JBODs, and it's running about 100 VMs in production at the moment.
Both my nodes currently sit on one blade chassis, but for resiliency, I want to add another node from a blade chassis in our secondary onsite smaller DC.
I've done plenty of cluster node upgrades before on SQL and Hyper-V , but never with a SOFS cluster.
I have the third node fully prepared: it can see the disks (the FC LUNs on the SAN, using PowerPath and Disk Manager) and all the roles are installed.
So in theory I can just add this node in Cluster Manager and it should all be good. My question is: has anyone else done this, and is there anything else I should be aware of? What's the best way to check that the new node will function and be able to migrate the file server role over without issues? I know I can run a validation when adding the node; I presume this is the best option?
I cannot find much information on the web about expanding a SOFS cluster.
Any advice or information would be gratefully received!
cheers
Mark
Hi Mark,
Sorry for the delay in reply.
As you said, there is not much information related to adding a node to a SOFS cluster.
The only documentation I could find relates to System Center (VMM):
How to Add a Node to a Scale-Out File Server in VMM
http://technet.microsoft.com/en-us/library/dn466530.aspx
However, adding a node to a SOFS cluster should be as simple as the preparation you have already done. You can give it a try and see the result.
If you have any feedback on our support, please send to [email protected] -
Can I use one transport adapter on the nodes of the cluster?
Hi
I am new to Sun Cluster. The documentation says each node should have two network cards: one for public connections and one for the private interconnect. What if I do not want the nodes to have public connections, except for one node? In other words, I want to use one network card on each node except the first; users would access the rest of the nodes through the first node. Is that possible? If so, what should I name the second transport adapter while installing the cluster software on the nodes?
Thank you for the help.
Dear,
We use the cluster for HA under failover conditions. If you have only one network adapter, how would failover work? Also, you can't assign one adapter to two nodes at the same time; you need a minimum of two network adapters for a two-node cluster.
:) Good luck,
Mohammed Tanvir -
Hello,
Say I have a 3-node RAC and I want to add a new node to the current cluster, but the new node's CPUs are slower than the others'. What will happen?
(My concern is: can I add this new node successfully? If yes, can it improve the whole cluster's performance or not?)
Thank you
s9225
Also, you can refer to the MOS note RAC: Frequently Asked Questions (Doc ID 220970.1), which covers:
Can I have different servers in my Oracle RAC? Can they be from different vendors? Can they be different sizes? -
RAC Instalation Problem (shared accross all the nodes in the cluster)
All experts
I am trying to install Oracle 10.2.0 RAC on Red Hat 4.7.
reff : http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnLinux
All steps completed successfully on both nodes (rac1, rac2); everything is okay on each node, and a single-node RAC installation succeeds.
When I try to install on two nodes, at the Specify Oracle Cluster Registry (OCR) Location step I get this error:
The location /nfsmounta/crs.configuration is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system file that is visible by the same name on all nodes of the cluster.
I created the shared disks on all nodes as follows:
1. First we need to set up some NFS shares. Create shared disks on a NAS or a third server if you have one available. Otherwise, create the following directories on the RAC1 node.
mkdir /nfssharea
mkdir /nfsshareb
2. Add the following lines to the /etc/exports file. (edit /etc/exports)
/nfssharea *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfsshareb *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
3. Run the following commands to export the NFS shares.
chkconfig nfs on
service nfs restart
4. On both RAC1 and RAC2 create some mount points to mount the NFS shares to.
mkdir /nfsmounta
mkdir /nfsmountb
5. Add the following lines to the "/etc/fstab" file. The mount options are suggestions from Kevin Closson.
nas:/nfssharea /nfsmounta nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
nas:/nfsshareb /nfsmountb nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
6. Mount the NFS shares on both servers.
mount /mount1
mount /mount2
7. Create the shared CRS Configuration and Voting Disk files.
touch /nfsmounta/crs.configuration
touch /nfsmountb/voting.disk
Please guide me what is wrong.
I think you did not really mount it on the second server. What is the output of 'ls /nfsmounta'?
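One way to check that the directory is really an NFS mount on each node (and not just a local directory with the same name, which produces exactly the "not shared across all the nodes" error) is to look at the filesystem type in /proc/mounts. The sketch below runs on sample data so it is self-contained; on a real node you would read /proc/mounts itself.

```shell
# Sample /proc/mounts content matching the article's layout; on a
# real node, replace this with:  mounts=$(cat /proc/mounts)
mounts='nas:/nfssharea /nfsmounta nfs rw,bg,hard,nointr 0 0
/dev/sda1 / ext3 rw 0 0'

# Look up the filesystem type of the OCR mount point.
fstype=$(printf '%s\n' "$mounts" | awk '$2 == "/nfsmounta" {print $3}')

if [ "$fstype" = "nfs" ]; then
    echo "/nfsmounta is NFS-mounted"
else
    echo "/nfsmounta is NOT an NFS mount - run: mount /nfsmounta" >&2
fi
```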
Step 6 should be 'mount /nfsmounta', not 'mount /mount1'. I also don't know if simply creating a zero-size file is sufficient for the OCR (I have always used raw devices, not NFS, for this) -
Regarding number of nodes in endeca cluster
Hi,
I have a question regarding the number of nodes in an Endeca Server cluster.
Our solution contains one data domain running in an Endeca cluster with two nodes.
The Endeca Server documentation recommends running the cluster with at least three nodes, but our solution can't accommodate another server straight away.
Can anyone please suggest the implications of running the cluster with two nodes? For example:
1. Can the cluster still serve requests if one node goes down?
2. How does leader promotion work if a node goes down?
Thank you,
regards,
rp
Hi rp,
You can definitely start with two nodes and then add another Endeca Server node later if needed. It is recommended to run a cluster of three for increased availability.
Here are some answers to your questions about the cluster behavior:
Q: Can the cluster still serve the request if one node goes down?
A: Quoting from this portion of the Endeca Server Cluster Guide > How enhanced availability is achieved:
Availability of Endeca Server nodes
In an Endeca Server cluster with more than one Endeca Server instance, an ensemble of the Cluster Coordinator services running on a subset of nodes in the Endeca Server cluster ensures enhanced availability of the Endeca Server nodes in the Endeca Server cluster.
When an Endeca Server node in an Endeca Server cluster goes down, all Dgraph nodes hosted on it, and the Cluster Coordinator service (which may also be running on this node) also go down. As long as the Endeca Server cluster consists of more than one node, this does not disrupt the processing of non-updating user requests for the data domains. (It may negatively affect the Cluster Coordinator services. For information on this, see Availability of Cluster Coordinator services.)
If an Endeca Server node fails, the Endeca Server cluster is notified and stops routing all requests to the data domain nodes hosted on that Endeca Server node, until you restart the Endeca Server node.
Let's consider an example that helps illustrate this case. Consider a three-node single data domain cluster hosted on the Endeca Server cluster consisting of three nodes, where each Endeca Server node hosts one Dgraph node for the data domain. In this case:
If one Endeca Server node fails, incoming requests will be routed to the remaining nodes.
If the Endeca Server node that fails happens to be the node that hosts the leader node for the data domain cluster, the Endeca Server cluster selects a new leader node for the data domain from the remaining Endeca Server nodes and routes subsequent requests accordingly. This ensures availability of the leader node for a data domain.
If the Endeca Server node goes down, the data domain nodes (Dgraphs) it is hosting are not moved to another Endeca Server node. If your data domain has more than two nodes dedicated to processing queries, the data domain continues to function. Otherwise, query processing for this data domain may stop until you restart the Endeca Server node.
When you restart the failed Endeca Server node, its processes are restarted by the Endeca Server cluster. Once the node rejoins the cluster, it will rejoin any data domain clusters for the data domains it hosts. Additionally, if the node hosts a Cluster Coordinator, it will also rejoin the ensemble of Cluster Coordinators.
Q: How does leader promotion work if a node goes down?
A: See part of the answer above. Also this, from the same topic but later in the text:
Failure of the leader node. When the leader node goes offline, the Endeca Server cluster elects a new leader node and starts sending updates to it. During this stage, follower nodes continue maintaining a consistent view of the data and answering queries. When the node that was the leader node is restarted and joins the cluster, it becomes one of the follower nodes. Note that is also possible that the leader node is restarted and joins the cluster before the Endeca Server cluster needs to appoint a new leader node. In this case, the node continues to serve as the leader node.If the leader node in the data domain changes, the Endeca Server continues routing those requests that require the leader node to the Endeca Server cluster node hosting the newly appointed leader node.
Note: If the leader node in the data domain cluster fails, and if an outer transaction has been in progress, the outer transaction is not applied and is automatically rolled back. In this case, a new outer transaction must be started. For information on outer transactions, see the section about the Transaction Web Service in the Oracle Endeca Server Developer's Guide.
Failure of a follower node. When one of the follower nodes goes offline, the Endeca Server cluster starts routing requests to other available nodes, and attempts to restart the Dgraph process for this follower node. Once the follower node rejoins the cluster, the Endeca Server adjusts its routing information accordingly.
You may ask: why do you need three nodes, then? This is to achieve high availability of the cluster services themselves.
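The "at least three" recommendation is the usual majority-quorum arithmetic for coordinator ensembles: an ensemble of n nodes keeps quorum as long as a majority survives, so it tolerates floor((n-1)/2) failures. A quick check of the two cases discussed here:

```shell
# Majority quorum: an ensemble of n coordinator nodes tolerates
# floor((n-1)/2) failures before losing quorum.
tolerated() {
    echo $(( ($1 - 1) / 2 ))
}

two_node=$(tolerated 2)     # 0: lose one coordinator, lose quorum
three_node=$(tolerated 3)   # 1: one coordinator can fail safely

echo "2-node ensemble tolerates $two_node failure(s)"
echo "3-node ensemble tolerates $three_node failure(s)"
```

This is why a two-node setup can serve queries day to day but leaves the Cluster Coordinator service as a single point of failure.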
Quoting:
If you do not configure at least three Endeca Server nodes to run the Cluster Coordinator service, the Cluster Coordinator service will be a single point of failure. Should the Cluster Coordinator service fail, access to the data domain clusters hosted in the Endeca Server cluster becomes read-only. This means that it is not possible to change the data domains in any way. You cannot create, resize, start, stop, or change data domains; you also cannot define data domain profiles. You can send read queries to the data domains and perform read operations with the Cluster and Manage Web Services, such as listing data domains or listing nodes. No updates, writes, or changes of any kind are possible while the Cluster Coordinator service in the Endeca Server cluster is down — this applies to both the Endeca Server cluster and data domain clusters. To recover from this situation, the Endeca Server instance that was running a failed Cluster Coordinator must be restarted or replaced (the action required depends on the nature of the failure).
Julia -
Cloned two vm cluster nodes from development cluster to act as template to create production cluster
Morning,
There was so much done to set up the development cluster that I thought it would be easy to have the two nodes in the cluster cloned. To my surprise, the development cluster was up and happily running on the new VM servers: stopping resources on them verified that it was actually stopping and starting resources on the original cluster. I am not sure how to safely stop the two new servers from managing the development cluster and create a new production cluster on them.
I am hesitant to destroy the cluster, as I suspect that would destroy the real development cluster. How do I do this? How do I remove the Windows cluster software and reinstall it without affecting the development cluster?
Note that I tried to create a new cluster in Failover Cluster Manager and specify the new VM servers, but it says they are already part of a cluster. I do not see them listed as nodes, and I am not sure how to see which cluster it thinks the new servers are part of, or how to stop them from being part of the development cluster. That might be the path to my solution.
This actually worked out okay. I found these steps and ran them on both of the nodes that were claiming to be in a cluster already:
PowerShell:
Import-Module FailoverClusters;
Clear-ClusterNode -
Unable to see other OC4J nodes in a cluster
I have installed two instances of OracleAS on two separate machines; both machines (Lnx-5 and Lnx-6) were installed with the J2EE and Web components.
During installation, I selected Lnx-5 as the administration node of the cluster, and I configured the discovery address using multicast address 225.0.0.33:8001.
There were no installation errors, and things seemed to work fine.
However, Lnx-5 can't "see" Lnx-6 as one of its cluster nodes. On both Lnx-5 and Lnx-6, I see the following when I issue "opmnctl @cluster status".
---- On Lnx-5 , here is what I got ---------
[root@Lnx-5 conf]# opmnctl @cluster status
Processes in Instance: Lnx5.anydomain.com
--------------------------------------------------------------+---------
ias-component | process-type | pid | status
--------------------------------------------------------------+---------
OC4JGroup:default_group | OC4J:home | 5392 | Alive
ASG | ASG | N/A | Down
HTTP_Server | HTTP_Server | 5391 | Alive
---- On Lnx-6 , here is what I got ---------
[root@Lnx-6 conf]# opmnctl @cluster status
Processes in Instance: Lnx6.anydomain.com
--------------------------------------------------------------+---------
ias-component | process-type | pid | status
--------------------------------------------------------------+---------
OC4JGroup:default_group | OC4J:home | 5392 | Alive
ASG | ASG | N/A | Down
HTTP_Server | HTTP_Server | 5391 | Alive
I suppose I should see both Lnx-5 and Lnx-6 when I issue the command on either node.
I have also verified that both machines are synchronized to the NTP server.
I have also done a tcpdump on both nodes; indeed, I can see multicast (225.0.0.33:8001) packets arriving at both nodes.
I really need some help figuring out what could have gone wrong and what information I should look at to address this issue.
Thanks in advance!!
OK, for the discovery server configuration, here is the config that I have in the opmn.xml file; both Lnx-5 and Lnx-6 use exactly the same configuration:
<notification-server interface="ipv4">
<port local="6101" remote="6201" request="6004"/>
<ssl enabled="true" wallet-file="$ORACLE_HOME/opmn/conf/ssl.wlt/default"/>
<topology>
<discover list="10.1.230.11:6201,10.1.230.12:6201"/>
</topology>
</notification-server>
The IP address of Lnx-5 is 10.1.230.11, and Lnx-6 is 10.1.230.12.
Once this was configured on both Lnx-5 and Lnx-6, I kept seeing this error in Lnx-6's log file:
07/05/16 22:10:18 [pm-process] Process Alive: default_group~home~default_group~1
(1542677438:3859)
07/05/16 22:10:18 [pm-requests] Request 2 Completed. Command: /start
07/05/16 22:13:25 [ons-connect] Connection 9,10.1.230.11,6201 connect (Connection refused)
07/05/16 22:13:26 [ons-connect] Connection a,10.1.230.12,6201 connect (Connection refused)
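The "Connection refused" lines mean the ONS remote port (6201) on each peer is refusing TCP connections, which points at connectivity or the peer's opmn process rather than discovery itself. A sketch for splitting the discover list into endpoints that can then be probed one by one; the actual probe (e.g. "nc -z host port") is left as a comment so the sketch stays self-contained:

```shell
# The discover list value from opmn.xml, as posted above.
discover='10.1.230.11:6201,10.1.230.12:6201'

# Split the comma-separated list into one host:port pair per line.
pairs=$(printf '%s\n' "$discover" | tr ',' '\n')
count=$(printf '%s\n' "$pairs" | grep -c .)

# On a real node, probe each pair, e.g.:
#   nc -z "${pair%:*}" "${pair#*:}"
# A refused connection means the peer's ONS listener is not up
# (opmn not started, wrong remote port, or a firewall in between).
printf '%s\n' "$pairs"
echo "endpoints to probe: $count"
```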
Well, once I enabled debugging, there were some errors reported when opmn started. The errors are as follows:
Loading Module libopmnohs callback functions
Module libopmnohs: loaded callback function opmnModInitialize
Module libopmnohs: unable to load callback function opmnModSetNumProcs
Module libopmnohs: unable to load callback function opmnModParse
Module libopmnohs: unable to load callback function opmnModDebug
Module libopmnohs: unable to load callback function opmnModDepend
Module libopmnohs: loaded callback function opmnModStart
Module libopmnohs: unable to load callback function opmnModReady
Module libopmnohs: loaded callback function opmnModNotify
Module libopmnohs: loaded callback function opmnModRestart
Module libopmnohs: loaded callback function opmnModStop
Module libopmnohs: loaded callback function opmnModPing
Module libopmnohs: loaded callback function opmnModProcRestore
Module libopmnohs: loaded callback function opmnModProcComp
Module libopmnohs: unable to load callback function opmnModReqComp
Module libopmnohs: unable to load callback function opmnModCall
Module libopmnohs: unable to load callback function opmnModInfo
Module libopmnohs: unable to load callback function opmnModCron
Module libopmnohs: loaded callback function opmnModTerminate
Loading Module libopmnoc4j callback functions
Module libopmnoc4j: loaded callback function opmnModInitialize
Module libopmnoc4j: unable to load callback function opmnModSetNumProcs
Module libopmnoc4j: loaded callback function opmnModParse
Module libopmnoc4j: unable to load callback function opmnModDebug
Module libopmnoc4j: unable to load callback function opmnModDepend
Module libopmnoc4j: loaded callback function opmnModStart
Module libopmnoc4j: unable to load callback function opmnModReady
Module libopmnoc4j: loaded callback function opmnModNotify
Module libopmnoc4j: loaded callback function opmnModRestart
Module libopmnoc4j: loaded callback function opmnModStop
Module libopmnoc4j: loaded callback function opmnModPing
Module libopmnoc4j: loaded callback function opmnModProcRestore
Module libopmnoc4j: loaded callback function opmnModProcComp
Module libopmnoc4j: unable to load callback function opmnModReqComp
Module libopmnoc4j: unable to load callback function opmnModCall
Module libopmnoc4j: unable to load callback function opmnModInfo
Module libopmnoc4j: unable to load callback function opmnModCron
Module libopmnoc4j: loaded callback function opmnModTerminate
Loading Module libopmncustom callback functions
Module libopmncustom: loaded callback function opmnModInitialize
Module libopmncustom: unable to load callback function opmnModSetNumProcs
Module libopmncustom: loaded callback function opmnModParse
Module libopmncustom: loaded callback function opmnModDebug
Module libopmncustom: unable to load callback function opmnModDepend
Module libopmncustom: loaded callback function opmnModStart
Module libopmncustom: loaded callback function opmnModReady
Module libopmncustom: unable to load callback function opmnModNotify
Module libopmncustom: loaded callback function opmnModRestart
Module libopmncustom: loaded callback function opmnModStop
Module libopmncustom: loaded callback function opmnModPing
Module libopmncustom: loaded callback function opmnModProcRestore
Module libopmncustom: loaded callback function opmnModProcComp
Module libopmncustom: loaded callback function opmnModReqComp
Module libopmncustom: unable to load callback function opmnModCall
Module libopmncustom: unable to load callback function opmnModInfo
Module libopmncustom: unable to load callback function opmnModCron
Module libopmncustom: loaded callback function opmnModTerminate
Loading Module libopmniaspt callback functions
Module libopmniaspt: loaded callback function opmnModInitialize
Module libopmniaspt: unable to load callback function opmnModSetNumProcs
Module libopmniaspt: unable to load callback function opmnModParse
Module libopmniaspt: unable to load callback function opmnModDebug
Module libopmniaspt: unable to load callback function opmnModDepend
Module libopmniaspt: loaded callback function opmnModStart
Module libopmniaspt: loaded callback function opmnModReady
Module libopmniaspt: unable to load callback function opmnModNotify
Module libopmniaspt: unable to load callback function opmnModRestart
Module libopmniaspt: loaded callback function opmnModStop
Module libopmniaspt: unable to load callback function opmnModPing
Module libopmniaspt: unable to load callback function opmnModProcRestore
Module libopmniaspt: loaded callback function opmnModProcComp
Module libopmniaspt: unable to load callback function opmnModReqComp
Module libopmniaspt: unable to load callback function opmnModCall
Module libopmniaspt: unable to load callback function opmnModInfo
Module libopmniaspt: unable to load callback function opmnModCron
Module libopmniaspt: loaded callback function opmnModTerminate
Looks pretty bad. What causes those errors? Are they related?
Thanks!! -
Adding node back into cluster after removal...
Hi,
I removed a cluster node using "scconf -r -h <node>" (and carried out all the other usual removal steps before getting this command to work).
Because this is a pair+1 cluster and the node I was trying to remove was physically attached to the quorum device (SCSI), I had to create a dummy node before the removal command above would work.
I reinstalled Solaris, the SC3.1u4 framework, patches, etc., and then tried to run scinstall again on the node (after first reintroducing the node to the cluster with scconf -a -T node=<node>).
However, during the scinstall I got the following problem:
Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
Updating file ("hosts") on node n20-2-sup ... done
Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
Updating file ("hosts") on node n20-3-sup ... done
scrconf: RPC: Unknown host
scinstall: Failed communications with "bogusnode"
scinstall: scinstall did NOT complete successfully!
Press Enter to continue:
I was not sure what to do at this point, but since the other cluster nodes could now see my 'new' node again, I removed the dummy node, rebooted the new node, and said a little prayer...
Now my node will not boot as part of the cluster:
Rebooting with command: boot
Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
SunOS Release 5.10 Version Generic_127111-06 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: n20-1-sup
/usr/cluster/bin/scdidadm: Could not load DID instance list.
Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
NOTICE: CMM: Node bogusnode (nodeid = 4) with votecount = 0 added.
NOTICE: clcomm: Adapter qfe5 constructed
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
NOTICE: clcomm: Adapter qfe1 constructed
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205346037.
NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
NOTICE: CMM: node reconfiguration #18 completed.
NOTICE: CMM: Node n20-1-sup: joined cluster.
NOTICE: CMM: Node (nodeid = 4) with votecount = 0 removed.
NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
NOTICE: CMM: node reconfiguration #19 completed.
WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
cladm: CLCLUSTER_ENABLE: No such device
UNRECOVERABLE ERROR: Sun Cluster boot: Could not initialize cluster framework
Please reboot in non cluster mode(boot -x) and Repair
syncing file systems... done
WARNING: CMM: Node being shut down.
Program terminated
{1} ok
Any ideas how I can recover from this situation without having to reinstall the node again?
(I have a flash archive with the OS, SC3.1u4 framework, etc., so it's not the end of the world, but...)
Thanks a million if you can help here!
- headwrecked
Hi - I got this problem sorted...
Basically, I just removed (scinstall -r) the SC3.1u4 software from the node which was not booting, and then re-installed it. (This time the dummy node had already been removed, so scinstall did not try to contact it and completed without any errors.)
I think the only problem with the procedure I used to remove and re-add the node was that I forgot to remove the dummy node before re-adding the actual cluster node.
If anyone can confirm this to be the case, then great - if not... well, it's working now, so this thread can be closed.
root@n20-1-sup # /usr/cluster/bin/scinstall -r
Verifying that no unexpected global mounts remain in /etc/vfstab ... done
Verifying that no device services still reference this node ... done
Archiving the following to /var/cluster/uninstall/uninstall.1036/archive:
/etc/cluster ...
/etc/path_to_inst ...
/etc/vfstab ...
/etc/nsswitch.conf ...
Updating vfstab ... done
The /etc/vfstab file was updated successfully.
The original entry for /global/.devices/node@1 has been commented out.
And, a new entry has been added for /globaldevices.
Mounting /dev/dsk/c3t0d0s6 on /globaldevices ... done
Attempting to contact the cluster ...
Trying "n20-2-sup" ... okay
Trying "n20-3-sup" ... okay
Attempting to unconfigure n20-1-sup from the cluster ... failed
Please consider the following warnings:
scrconf: Failed to remove node (n20-1-sup).
scrconf: All two-node clusters must have at least one shared quorum device.
Additional housekeeping may be required to unconfigure
n20-1-sup from the active cluster.
Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
** Removing Sun Cluster framework packages **
Removing SUNWkscspmu.done
Removing SUNWkscspm..done
Removing SUNWksc.....done
Removing SUNWjscspmu.done
Removing SUNWjscspm..done
Removing SUNWjscman..done
Removing SUNWjsc.....done
Removing SUNWhscspmu.done
Removing SUNWhscspm..done
Removing SUNWhsc.....done
Removing SUNWfscspmu.done
Removing SUNWfscspm..done
Removing SUNWfsc.....done
Removing SUNWescspmu.done
Removing SUNWescspm..done
Removing SUNWesc.....done
Removing SUNWdscspmu.done
Removing SUNWdscspm..done
Removing SUNWdsc.....done
Removing SUNWcscspmu.done
Removing SUNWcscspm..done
Removing SUNWcsc.....done
Removing SUNWscrsm...done
Removing SUNWscspmr..done
Removing SUNWscspmu..done
Removing SUNWscspm...done
Removing SUNWscva....done
Removing SUNWscmasau.done
Removing SUNWscmasar.done
Removing SUNWmdmu....done
Removing SUNWmdmr....done
Removing SUNWscvm....done
Removing SUNWscsam...done
Removing SUNWscsal...done
Removing SUNWscman...done
Removing SUNWscgds...done
Removing SUNWscdev...done
Removing SUNWscnmu...done
Removing SUNWscnmr...done
Removing SUNWscscku..done
Removing SUNWscsckr..done
Removing SUNWscu.....done
Removing SUNWscr.....done
Removing the following:
/etc/cluster ...
/dev/did ...
/devices/pseudo/did@0:* ...
The /etc/inet/ntp.conf file has not been updated.
You may want to remove it or update it after uninstall has completed.
The /var/cluster directory has not been removed.
Among other things, this directory contains
uninstall logs and the uninstall archive.
You may remove this directory once you are satisfied
that the logs and archive are no longer needed.
Log file - /var/cluster/uninstall/uninstall.1036/log
root@n20-1-sup #
Ran the scinstall again:
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
scinstall -ik \
-C N20_Cluster \
-N n20-2-sup \
-M patchdir=/var/cluster/patches \
-A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe5 \
-m endpoint=:qfe1,endpoint=switch1 \
-m endpoint=:qfe5,endpoint=switch2
Are these the options you want to use (yes/no) [yes]?
Do you want to continue with the install (yes/no) [yes]?
Checking device to use for global devices file system ... done
Installing patches ... failed
scinstall: Problems detected during extraction or installation of patches.
Adding node "n20-1-sup" to the cluster configuration ... skipped
Skipped node "n20-1-sup" - already configured
Adding adapter "qfe1" to the cluster configuration ... skipped
Skipped adapter "qfe1" - already configured
Adding adapter "qfe5" to the cluster configuration ... skipped
Skipped adapter "qfe5" - already configured
Adding cable to the cluster configuration ... skipped
Skipped cable - already configured
Adding cable to the cluster configuration ... skipped
Skipped cable - already configured
Copying the config from "n20-2-sup" ... done
Copying the postconfig file from "n20-2-sup" if it exists ... done
Copying the Common Agent Container keys from "n20-2-sup" ... done
Setting the node ID for "n20-1-sup" ... done (id=1)
Verifying the major number for the "did" driver with "n20-2-sup" ... done
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Initializing NTP configuration ... done
Updating nsswitch.conf ...
done
Adding clusternode entries to /etc/inet/hosts ... done
Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
IP Multipathing already configured in "/etc/hostname.qfe2".
Verifying that power management is NOT configured ... done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
Updating file ("hosts") on node n20-2-sup ... done
Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
Updating file ("hosts") on node n20-3-sup ... done
Log file - /var/cluster/logs/install/scinstall.log.938
Rebooting ...
Mar 13 13:59:13 n20-1-sup reboot: rebooted by root
Terminated
root@n20-1-sup # syncing file systems... done
rebooting...
R
LOM event: +103d+20h44m26s host reset
screen not found.
keyboard not found.
Keyboard not present. Using lom-console for input and output.
Sun Netra T4 (2 X UltraSPARC-III+) , No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.10.1, 4096 MB memory installed, Serial #52960491.
Ethernet address 0:3:ba:28:1c:eb, Host ID: 83281ceb.
Initializing 15MB Rebooting with command: boot
Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
SunOS Release 5.10 Version Generic_127111-06 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: n20-1-sup
Configuring devices.
devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
Loading smf(5) service descriptions: 24/24
/usr/cluster/bin/scdidadm: Could not load DID instance list.
Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
NOTICE: clcomm: Adapter qfe5 constructed
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
NOTICE: clcomm: Adapter qfe1 constructed
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205416931.
NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
NOTICE: CMM: node reconfiguration #23 completed.
NOTICE: CMM: Node n20-1-sup: joined cluster.
ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
NOTICE: CMM: Votecount changed from 0 to 1 for node n20-1-sup.
NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
NOTICE: CMM: node reconfiguration #24 completed.
Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe1
Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe5
did subpath /dev/rdsk/c1t3d0s2 created for instance 2.
did subpath /dev/rdsk/c2t3d0s2 created for instance 12.
did subpath /dev/rdsk/c1t3d1s2 created for instance 3.
did subpath /dev/rdsk/c1t3d2s2 created for instance 6.
did subpath /dev/rdsk/c1t3d3s2 created for instance 7.
did subpath /dev/rdsk/c1t3d4s2 created for instance 8.
did subpath /dev/rdsk/c1t3d5s2 created for instance 9.
did subpath /dev/rdsk/c1t3d6s2 created for instance 10.
did subpath /dev/rdsk/c1t3d7s2 created for instance 11.
did subpath /dev/rdsk/c2t3d1s2 created for instance 13.
did subpath /dev/rdsk/c2t3d2s2 created for instance 14.
did subpath /dev/rdsk/c2t3d3s2 created for instance 15.
did subpath /dev/rdsk/c2t3d4s2 created for instance 16.
did subpath /dev/rdsk/c2t3d5s2 created for instance 17.
did subpath /dev/rdsk/c2t3d6s2 created for instance 18.
did subpath /dev/rdsk/c2t3d7s2 created for instance 19.
did instance 20 created.
did subpath n20-1-sup:/dev/rdsk/c0t6d0 created for instance 20.
did instance 21 created.
did subpath n20-1-sup:/dev/rdsk/c3t0d0 created for instance 21.
did instance 22 created.
did subpath n20-1-sup:/dev/rdsk/c3t1d0 created for instance 22.
Configuring DID devices
t_optmgmt: System error: Cannot assign requested address
obtaining access to all attached disks
n20-1-sup console login: -
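An aside on the boot log above: the CMM "Cluster members" NOTICE line lists every node currently in the cluster, which makes it easy to pull membership out of a saved console log. A minimal parsing sketch; the sample line is copied from the log above:

```shell
# Extract the node list from a CMM membership NOTICE line.
log_line='NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.'
members=$(printf '%s\n' "$log_line" | sed -e 's/.*Cluster members: //' -e 's/\.$//')
echo "$members"    # n20-1-sup n20-2-sup n20-3-sup
```

Note this only shows members at the time the line was logged; nodes that never joined will not appear.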
Load-balancing between Analytical Provider service nodes in a cluster
Hi All,
- First a little background on my architecture. My EPM environment consists of 3 Solaris servers:
Server1: Foundation Services + APS + EAS + WLS Admin server
Server2: Foundation Services + APS + EAS
Server3: Essbase server + Essbase Studio server
All the above services are deployed to a single domain. We have a load-balancer sitting in front of server1 and server2 that redirects requests based on availability of the services.
- Consider APS:
We have a APS cluster "AnalyticProviderServices" with members AnalyticProviderServices1 deployed on Server1 and AnalyticProviderServices2 deployed on Server2.
So I connect to APS and log in as user1. Say the load-balancer decides to forward my request to server1; all my requests are then managed by APS on Server1. Now if APS on server1 is brought down, any requests to APS on server1 are redirected by WebLogic to APS on server2.
Now ideally APS on server2 should say "hey, I see APS on server1 is down, so I will take up your session where it left off". So I expect the 2nd APS node in the cluster to take up my session. But this does not happen: I need to log in again when I hit refresh in Excel, as I get the error "Invalid session. Please login again". When I open EAS I see I have been logged in with a new session ID. So it seems that the cluster nodes simply act as load-balancers and are not smart enough to take up a failed node's sessions where they left off.
Is my understanding correct or have I to configure something to allow for this to happen?
Thanks,
Kent
Thanks for your reply, John!
I was hoping APS could do something like that. I am not sure if restoring sessions of a dead APS cluster node on another APS would be helpful, but I can think of one situation where it would: a drill-through report is running for a long time on the Essbase server and APS goes down; it would be good to have the other APS take up the session and return the drill-through output to the user. -
Create directory on a two-node Oracle server cluster
Hi,
How can I create directory on a two-node Oracle server cluster (2 identical servers running the same Oracle database instances with shared disks)? Both of them run Windows 2008.
I know this works for Oracle running in a single server. How about in failover cluster environment?
CREATE OR REPLACE DIRECTORY g_vid_lib AS 'X:\video\library\g_rated';
Thanks.
Andy
Using 11.2.0.? Using ASM? Use ACFS - it is a SHARED file system.
http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmfilesystem.htm
create a big, empty disk group
create an ACFS volume (a container file that lives in that disk group)
create the ACFS file system
mount the file system.
Now, all nodes in the cluster can access that shared device at the OS level just as if it were any other device. -
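The step list above maps onto concrete commands roughly as follows. This is a hedged sketch, not a tested recipe: the disk group, volume, device, and path names are made up for illustration, and since the poster is on Windows 2008 the format/mount steps use the Windows ACFS tools (the exact acfsformat/acfsmountvol arguments should be checked against the ACFS documentation; on Linux the equivalents are mkfs -t acfs and mount):

```
REM -- as SYSASM: a big, empty disk group (name and disk path are hypothetical)
SQL> CREATE DISKGROUP acfs_data EXTERNAL REDUNDANCY DISK '\\.\ORCLDISKDATA0';

REM -- create the ACFS volume inside it, then note its Volume Device
C:\> asmcmd volcreate -G acfs_data -s 10G vid_vol
C:\> asmcmd volinfo -G acfs_data vid_vol

REM -- format and mount the volume (Windows tools shown; device name is hypothetical)
C:\> acfsformat asm-vid_vol-123
C:\> acfsmountvol X:\video asm-vid_vol-123

REM -- the directory object from the question can then point at the mounted path
SQL> CREATE OR REPLACE DIRECTORY g_vid_lib AS 'X:\video\library\g_rated';
```

Because ACFS is a cluster file system, the same directory object then resolves to valid storage on every node, which is what the failover cluster needs.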
Can't start node on the cluster
When I start my node in the cluster I get this error; what should I do to solve it?
<Oct 30, 2011 11:06:00 PM BRST> <Error> <ALSB Statistics Manager> <BEA-473003> <Aggregation Server Not Available. Failed to get remote aggregator
java.rmi.UnknownHostException: Could not discover URL for server 'Node1'
at weblogic.protocol.URLManager.findURL(URLManager.java:145)
at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.getInitialContext(WlsRemoteServerImpl.java:94)
at com.bea.alsb.platform.weblogic.topology.WlsRemoteServerImpl.lookupJNDI(WlsRemoteServerImpl.java:54)
at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.getRemoteAggregator(ALSBStatisticsManager.java:291)
at com.bea.wli.monitoring.statistics.ALSBStatisticsManager.access$000(ALSBStatisticsManager.java:38)
at com.bea.wli.monitoring.statistics.ALSBStatisticsManager$RemoteAggregatorProxy.send(ALSBStatisticsManager.java:55)
at com.bea.wli.monitoring.statistics.collection.Collector.sendRawSnaphotToAggregator(Collector.java:284)
at com.bea.wli.monitoring.statistics.collection.Collector.doCheckpoint(Collector.java:245)
at com.bea.wli.monitoring.statistics.collection.Collector$CheckpointThread.doWork(Collector.java:69)
at com.bea.wli.monitoring.utils.Schedulable.timerExpired(Schedulable.java:68)
at com.bea.wli.timer.ClusterTimerImpl$InternalTimerListener.timerExpired(ClusterTimerImpl.java:255)
at weblogic.timers.internal.TimerImpl.run(TimerImpl.java:273)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
I solved this in two steps:
1 - I ran ifconfig and checked that my network card has the MULTICAST flag
2 - I added a route by running this command:
route add -host 239.192.0.10 dev eth0
After this my nodes started successfully. -
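Step 1 above can be scripted as a quick sanity check; a minimal sketch, with a hypothetical sample line standing in for real `ifconfig` output:

```shell
# Does the interface line advertise MULTICAST? In real use, feed this
# the output of: ifconfig eth0
sample='eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500'
case "$sample" in
  *MULTICAST*) status="multicast ok" ;;
  *)           status="multicast missing" ;;
esac
echo "$status"
```

If the flag is missing, WebLogic cluster discovery (which defaults to multicast) cannot work on that interface, which matches the symptom in the stack trace above.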
Can't start cluster, 2 node 3.3 cluster lost 2 quorum disks
Hi,
I have a 2-node cluster with one iSCSI quorum disk. I was in the middle of migrating the quorum device to another iSCSI disk when the servers lost contact with the disks (iSCSI target problem), so the 2 cluster nodes were left with no quorum: because of the 2 quorum devices, 3 votes are needed, and I only have the 2 votes from the 2 cluster nodes.
iscsi disks are back online, but the cluster/quorum isn't able to get hold of them.
May 11 11:21:59 vmcluster1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node vmcluster2 (nodeid = 1) with votecount = 1 added.
May 11 11:21:59 vmcluster1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node vmcluster1 (nodeid = 2) with votecount = 1 added.
May 11 11:22:04 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d1s2 with error 2.
May 11 11:22:10 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d2s2 with error 2.
May 11 11:22:14 vmcluster1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g2 constructed
May 11 11:22:15 vmcluster1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g1 constructed
May 11 11:22:15 vmcluster1 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node vmcluster1: attempting to join cluster.
May 11 11:22:15 vmcluster1 e1000g: [ID 801725 kern.info] NOTICE: pci8086,100e - e1000g[2] : link up, 1000 Mbps, full duplex
May 11 11:22:16 vmcluster1 e1000g: [ID 801725 kern.info] NOTICE: pci8086,100e - e1000g[1] : link up, 1000 Mbps, full duplex
May 11 11:23:20 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d1s2 with error 2.
May 11 11:23:25 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d2s2 with error 2.
May 11 11:23:25 vmcluster1 genunix: [ID 980942 kern.notice] NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
Looks like the server thinks the ID of the disks have changed:
[root@vmcluster1:/]# scdidadm -L (05-11 11:27)
1 vmcluster1:/dev/rdsk/c3t5d0 /dev/did/rdsk/d1
1 vmcluster2:/dev/rdsk/c3t5d0 /dev/did/rdsk/d1
2 vmcluster1:/dev/rdsk/c3t4d0 /dev/did/rdsk/d2
2 vmcluster2:/dev/rdsk/c3t4d0 /dev/did/rdsk/d2
3 vmcluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d3
4 vmcluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d4
5 vmcluster2:/dev/rdsk/c3t6d0 /dev/did/rdsk/d5
5 vmcluster1:/dev/rdsk/c3t6d0 /dev/did/rdsk/d5
6 vmcluster2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d6
7 vmcluster1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7
[root@vmcluster1:/]# scdidadm -r (05-11 11:27)
scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t5d0" does not match physical device ID for "d1".
Warning: Device "vmcluster1:/dev/rdsk/c3t5d0" might have been replaced.
scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t4d0" does not match physical device ID for "d2".
Warning: Device "vmcluster1:/dev/rdsk/c3t4d0" might have been replaced.
scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t6d0" does not match physical device ID for "d5".
Warning: Device "vmcluster1:/dev/rdsk/c3t6d0" might have been replaced.
scdidadm: Could not save DID instance list to file.
scdidadm: File /etc/cluster/ccr/global/did_instances exists.
Disks are OK, and accessible from format:
[root@vmcluster1:/]# echo | format (05-11 11:28)
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 8351 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,2829@d/disk@0,0
1. c1t1d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci8086,2829@d/disk@1,0
2. c3t4d0 <IET-VIRTUAL-DISK-0-1.00GB>
/iscsi/[email protected]%3Astorage.lun10001,0
3. c3t5d0 <DEFAULT cyl 497 alt 2 hd 64 sec 32>
/iscsi/[email protected]%3Astorage.lun20001,1
4. c3t6d0 <DEFAULT cyl 496 alt 2 hd 64 sec 32>
/iscsi/[email protected]%3Astorage.lun30001,2
Is there a way to remove a quorum device without the cluster being online?
Or is there another alternative, like trying to fix the DID problem?
Thanks!
This is the primary reason to have one and only one quorum device. There are many failure modes that result in your cluster not starting. It looks like your only option is to hand-edit the CCR. If this is a production cluster, please log a service desk ticket for the full procedure. If it's just a development cluster and you are happy to take a risk, the basic outline is (IIRC):
1. Boot nodes into non-cluster mode
2. Edit /etc/cluster/ccr/global/infrastructure and either remove the cluster.quorum_devices.* entries or set the votecount to 0
3. cd /etc/cluster/ccr/global
4. Run /usr/cluster/lib/sc/ccradm replace -i infrastructure infrastructure
5. Reboot back into cluster mode
6. Add one new quorum disk
You may need to run one or more of:
# cldev refresh
# cldev check
# cldev clean
# cldev populate
to get the right DID entries between steps 5 and 6.
Tim
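The vote arithmetic behind "waiting for quorum" can be checked with trivial shell arithmetic; a sketch, assuming the standard majority rule (quorum = floor(total/2) + 1) and the vote counts described in the thread above:

```shell
# Vote counts from the poster's situation: 2 nodes and 2 registered
# quorum devices (mid-migration), 1 vote each.
node_votes=2
quorum_device_votes=2
total=$((node_votes + quorum_device_votes))
needed=$((total / 2 + 1))    # majority rule
echo "total=$total needed=$needed available_without_disks=$node_votes"
```

With both quorum disks unreachable, only the 2 node votes are available, which is below the needed 3; hence the cluster sits at "waiting for quorum" until the CCR is fixed.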
--- -
Hi,
Can somebody explain what leaf nodes in a Flex Cluster are? From the documentation I see that they are nodes which don't have access to storage and communicate with hub nodes.
Can they have Oracle DB instances? If so, how is data transferred between hub and leaf nodes? Through the interconnect? Doesn't it overload the interconnect?
Thanks
Sekar
Sekar_BLUE4EVER wrote:
Thanks Aman...Still confused about this...Consider the following scenario
| H1 |<-------> | H2 | <------> | H3 |
| | | | | |
| L1 L2 L3 | | L1 L2 L3 | | L1 L2 L3 |
| _________| |___________| |__________|
H depicts the hub nodes and L depict the leaf nodes.Assume each Hub node has 3 leaf nodes attached to them.
Suppose L1 connected to H1 needs a block and modifies it, and after some time L1 connected to H2 needs the same block; then it must follow the same 2-way/3-way grant as in normal Cache Fusion, right?
Does this actually increase the number of hops, since the leaf nodes are not directly connected?
Do we have any control over the leaf node to hub node mapping or is it all automatically managed?
Thanks
The blocks are going to be accessed and modified at the Hub nodes only, AFAIK, as the Hub nodes are considered the DB nodes. The Leaf nodes are considered the Application nodes. That's the reason it's better to set up the instances running on the Hub nodes only rather than the Leaf nodes. Even if an instance runs on a Leaf node, the communication is between the Hub and Leaf node only, and it won't do any harm, as both nodes (the Hub, the Leaf, and the other nodes in the Leaf group) would be talking to each other directly. There is no VIP required on the Leaf nodes, so connections by database users would be only on the Hub nodes, I guess, and that means the block movement would remain essentially the same.
The number of network hops are reduced as you won't be having a requirement to have too many Hub Nodes since each Hub node can connect to 64(?) Leaf Nodes. So essentially, in your case, you would need only 4 Interconnects (2 on one Hub Node and 1 each on the remaining two) for the private interconnect and just 3 network links for the storage for each Hub node.
I am not sure that I understood the last question of yours.
HTH
Aman....
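The fan-out arithmetic in the answer above can be made concrete; a sketch, assuming the 64-leaves-per-hub figure quoted (with a question mark) in the answer and the 9 leaf nodes from the diagram:

```shell
# Ceiling division: how many Hub nodes are needed for a given Leaf count?
leaf_nodes=9
max_leaves_per_hub=64
hubs_needed=$(( (leaf_nodes + max_leaves_per_hub - 1) / max_leaves_per_hub ))
echo "hubs needed: $hubs_needed"
```

This is why the hub-and-spoke topology keeps the interconnect and storage cabling small: hubs scale with leaf count divided by the per-hub limit, not with total node count.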