Sun cluster 3.1 missing device files
Hello, I am trying to remove a file system mount point from an HAStoragePlus resource with the command
scrgadm -c -j foris-hasp-rs -x FileSystemMountPoints="/data7/oradata"
However, the volume associated with that mount point no longer exists, so the resource fails: it cannot come online and complains that it failed to mount /data7/oradata.
Any help is welcome.
Regards
Adrian
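For what it's worth, the sequence that usually works here (a hedged sketch: the resource name is taken from the post, the remaining mount points are invented examples, and exact flags may vary by Sun Cluster 3.1 update level) is to disable the resource, replace the property wholesale, and clean up vfstab:

```shell
# Disable the resource so the failed mount is no longer retried
scswitch -n -j foris-hasp-rs

# FileSystemMountPoints is replaced wholesale, so list only the
# mount points you want to KEEP (these example paths are hypothetical)
scrgadm -c -j foris-hasp-rs \
    -x FileSystemMountPoints="/data1/oradata,/data2/oradata"

# Remove the stale /data7/oradata line from /etc/vfstab on every node,
# then re-enable the resource
scswitch -e -j foris-hasp-rs
```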
Answers to the question:
1) /dev/did/rdsk/d2s2 is used by the quorum device, and it comes from the first storage array. Does that mean that when the first array (d2 and d3) is lost, the cluster will go down? If so, how could I get around it?
Incorrect. The quorum device is passive and is only needed if the cluster membership changes. Check the manual for the details; there is too much to explain here.
2) In ls -l /dev/did/rdsk I don't see dX; instead I see dXsX. If I decide to use DID devices for ASM, how would I know which slice to use? Or should I not use DID devices directly for ASM at all?
3) Since the quorum device is on d2, I assume some space was already taken from d2. How would I know how much space is left on that LUN? Do I have to create a multi-owner device with SVM on d2 and d4 and then mirror them with ASM?
The quorum feature doesn't use any space on the disk. You can use this for data without problems.
In general, it will be far easier if you create shared metaset metadevices and put the OCR, voting disk and ASM data directly onto them.
4) For the OCR and voting disk, I have to create a multi-owner device on d2 or d4 to store them. Any other options?
See above.
5) Which way is preferred: ASM on a multi-owner device (configured by SVM), or ASM on DID devices?
ASM on a mirrored SVM device.
Tim
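Tim's recommendation can be sketched roughly as follows (hedged: the set name oraset is invented, the DID devices are the ones from the thread, and multi-owner disksets require the Sun Cluster/SVM multi-owner feature to be installed):

```shell
# Create a multi-owner (shared) diskset owned by both RAC nodes
metaset -s oraset -M -a -h node1 node2

# Add the shared DID devices to the set
metaset -s oraset -a /dev/did/rdsk/d2 /dev/did/rdsk/d4

# Mirror the two LUNs; ASM, the OCR and the voting disk then sit on the mirror
metainit -s oraset d11 1 1 /dev/did/rdsk/d2s0
metainit -s oraset d12 1 1 /dev/did/rdsk/d4s0
metainit -s oraset d10 -m d11
metattach -s oraset d10 d12
```

ASM (or the OCR and voting disk) would then be pointed at /dev/md/oraset/rdsk/d10 rather than at a DID device directly.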
---
Similar Messages
-
We had an issue with our SAN this morning and we were missing some device files but not all. I cannot seem to find anything that shows what device files ASM is expecting to see for each disk group. For example, we have a data disk group with 10 LUNs assigned. Two of the LUNs were not found and there were no device files. How do I know what device files are missing?
>
How do I know what device files are missing?
>
What is your 4 digit Oracle version?
The V$ASM_DISK view will tell you what it found and what is missing.
See the Database reference
http://docs.oracle.com/cd/B28359_01/server.111/b28320/dynviews_1020.htm
>
V$ASM_DISK: In an Automatic Storage Management instance, V$ASM_DISK displays one row for every disk discovered by the Automatic Storage Management instance, including disks which are not part of any disk group. In a database instance, V$ASM_DISK only displays rows for disks in disk groups in use by the database instance.
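In practice, a query like the following shows which paths ASM found and which disks are missing (a sketch: run it against the ASM instance, and whether you connect as sysasm or sysdba depends on the release):

```shell
sqlplus -s / as sysdba <<'EOF'
set linesize 200
column path format a40
-- Disks whose device files disappeared typically show up with a
-- MISSING/CLOSED mount status or an UNKNOWN header status
select group_number, disk_number, mount_status,
       header_status, state, path
  from v$asm_disk
 order by group_number, disk_number;
EOF
```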
>
There are several other V$ASM views that have other information that may be of interest
>
V$ASM_ALIAS view, 7.17
V$ASM_ATTRIBUTE view, 7.18
V$ASM_CLIENT view, 7.19
V$ASM_DISK view, 7.20
V$ASM_DISK_IOSTAT view, 7.21
V$ASM_DISK_STAT view, 7.22
V$ASM_DISKGROUP view, 7.23
V$ASM_DISKGROUP_STAT view, 7.24
V$ASM_FILE view, 7.25
V$ASM_OPERATION view, 7.26
V$ASM_TEMPLATE view, 7.27 -
Sun Cluster 3.2 - Global File Systems
Sun Cluster has a Global Filesystem (GFS) that supports read-only access throughout the cluster. However, only one node has write access.
In Linux, a GFS filesystem can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
From the documentation that I have read,
"The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
The GFS is also known as Cluster File System or Proxy File system.
Our client believes that they can have their application "scaled" and all nodes in the cluster can have the ability to write to the globally mounted file system. My belief was, the only way this can occur is when the application has failed over and then the "write" would occur from the "primary" node whom is mastering the application at that time. Any input will be greatly appreciated or clarification needed. Thanks in advance.
Ryan
Thank you very much, this helped :)
And how seamless is remounting of the block device LUN if one server dies?
Should some clustered services (FS clients such as app servers) be restarted
in case when the master node changes due to failover? Or is it truly seamless
as in a bit of latency added for duration of mounting the block device on another
node, with no fatal interruptions sent to the clients?
And, is it true that this solution is gratis, i.e. may legally be used for free
unless the customer wants support from Sun (authorized partners)? ;)
//Jim
-
File System Sharing using Sun Cluster 3.1
Hi,
I need help on how to set up and configure two Sun Solaris 10 servers to share a file system that is created on a SAN disk (SAN LUN).
The files in the shared file system should be readable and writable from both Solaris servers concurrently.
As a security policy, NFS mounts are not allowed. Someone suggested it can be done using Sun Cluster 3.1 agents on both servers. Any details on how I can do this with Sun Cluster 3.1 would be really appreciated.
thanks
Suresh
You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there were significant write activity on both nodes, the performance would not necessarily be what you need.
What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
The other option would be to use shared QFS, but without Sun Cluster.
Regards,
Tim
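The global file system route Tim mentions would look roughly like this (a hedged sketch; the global device name d4 and the mount point are examples):

```shell
# On one node, build a UFS file system on the shared LUN
newfs /dev/global/rdsk/d4s0

# On BOTH nodes, create the mount point and add the same vfstab line:
mkdir -p /global/shared
#   /dev/global/dsk/d4s0  /dev/global/rdsk/d4s0  /global/shared  ufs  2  yes  global,logging

# Mount it once; the global option makes it visible cluster-wide
mount /global/shared
```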
--- -
Sun Cluster 3.1: Not same physical device (scdidadm -L)
Hi all,
I am going to install Sun Cluster 3.1.
When verifying with scdidadm -L, I got:
4 clustnode1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d4
4 clustnode2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d4
It seems that they are not connected to the same physical device.
What do I have to check ?
Thanks,
Regards,
Aiggno
Some more information:
Node 1
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e01070a271,0
1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e0106dd891,0
2. c2t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@0,0
3. c2t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@1,0
4. c2t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@2,0
5. c3t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@0,0
6. c3t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@1,0
7. c3t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@2,0
Node 2
0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e01070aad1,0
1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e01070aac1,0
2. c3t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@0,0
3. c3t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@1,0
4. c3t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@2/scsi@4/sd@2,0
5. c5t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@0,0
6. c5t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@1,0
7. c5t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@8,700000/pci@3/scsi@4/sd@2,0
Node 1 has c2 and c3, but node 2 has c3 and c5? -
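(For the question above: differing controller numbers are in themselves harmless, since DID numbering keys off the physical device rather than the cN name. One way to double-check that both paths lead to the same disk, as a sketch, is to compare the disks' inquiry data:)

```shell
# Show the per-node paths behind each DID instance
scdidadm -L

# On each node, confirm the two paths are the same physical spindle by
# comparing the serial number reported by format's expert-mode inquiry
# (the exact menu path within format varies by release):
format -e
# -> select c2t0d0 (node 1) or c3t0d0 (node 2), then type: inquiry
```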
SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)
Dear All;
I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 installation with an Oracle 10g database.
The SAP and Oracle software were installed successfully, and I could successfully cluster the Oracle DB; it is tested and working fine.
For SAP I did the following configuration:
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
When trying to bring sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp
it executes the startsap scripts successfully, as follows:
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
stty: : No such device or address
stty: : No such device or address
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
saposcol on host eccprd01 started
Starting SAP Instance ASCS00
Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
saposcol on host eccprd01 started
Running /usr/sap/PRD/SYS/exe/run/startj2eedb
Trying to start PRD database ...
Log file: /export/home/prdadm/startdb.log
Instance Service on host eccprd01 started
Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
/usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
Starting SAP Instance SCS01
Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
Instance Service on host eccprd01 started
Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
Instance on host eccprd01 started
Instance on host eccprd01 started
Then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
[the same two messages repeat every few seconds, for both the sap-ci-scs-res and sap-ci-Ascs-res resources, from 12:26:22 until 12:27:46]
Can anyone tell me whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
ARSSES
Hi all.
I am having a similar issue with a Sun Cluster 3.2 and SAP 7.0
Scenario:
Central Instance (not in cluster): started on one node
Dialog Instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as .....etc etc
in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP....
Then after timeout it gives up.
Any clue? What is it trying to connect to or waiting for? I have noticed that it happens before the startup script....
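For reference, the corrected general shape of the command (a sketch: only standard properties are shown, since the SUNW.sap_as extension properties depend on the agent version, and the resource name is invented):

```shell
# Register the resource type once, then create the resource
# (note the single-dash -g; "--g" is not valid clresource syntax)
clresourcetype register SUNW.sap_as
clresource create -g sap-rg -t SUNW.sap_as \
    -p resource_dependencies=sap-hastp-rs \
    sap-as-res
```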
TIA -
Sun Cluster.. Why?
What are the advantages of installing RAC 10.2.0.3 on a Sun Cluster.? Are there any benefits?
From Oracle 10g onward, there is no burning requirement for Sun Cluster (or any other third-party cluster) as long as you are using Oracle technologies throughout for your Oracle RAC database. You should use Oracle RAC with ASM for shared storage, which does not require any third-party cluster. Bear in mind that
you may need to install Sun Cluster in the following scenarios:
1) If an application runs within the cluster alongside the Oracle RAC database and you want to configure it for HA: Sun Cluster provides easy-to-use cluster resources to manage and monitor the application. This can be achieved with Oracle Clusterware too, but you would have to write your own cluster resource for that.
2) If you want a cluster file system such as QFS, you will need to install Sun Cluster. If the cluster is only running the Oracle RAC database, you can rely on Oracle technologies such as ASM or raw devices without installing Sun Cluster.
3) Any certification conflicts.
Any correction is welcome..
-Harish Kumar Kalra -
Beta Refresh Release Now Available! Sun Cluster 3.2 Beta Program
The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
To apply for the Sun Cluster 3.2 Beta program, please visit:
https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
or contact Eric Redmond <[email protected]>.
New Features in Sun Cluster 3.2
Ease of use
* New Sun Cluster Object Oriented Command Set
* Oracle RAC 10g improved integration and administration
* Agent configuration wizards
* Resources monitoring suspend
* Flexible private interconnect IP address scheme
Availability
* Extended flexibility for fencing protocol
* Disk path failure handling
* Quorum Server
* Cluster support for SMF services
Flexibility
* Solaris Container expanded support
* HA ZFS
* HDS TrueCopy campus cluster
* Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
* Multi-terabyte disk and EFI label support
* Veritas Volume Replicator 5.0 support
* Veritas Volume Manager 4.1 support on x86 platform
* Veritas Storage Foundation 5.0 File System and Volume Manager
OAMP
* Live upgrade
* Dual partition software swap (aka quantum leap)
* Optional GUI installation
* SNMP event MIB
* Command logging
* Workload system resource monitoring
Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
Sun Cluster 3.2 beta supports the following Data Services
* Apache (shipped with the Solaris OS)
* DNS
* NFS V3
* Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB
Without speculating on the release date of Sun Cluster 3.x or even its feature list, I would like to understand what risk Sun would take if Sun Cluster supported ZFS as a failover filesystem. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
BTW: this means that even Veritas will have to do something about ZFS!!!
If VCS is a much better option, it would be interesting to understand what features are missing from Sun Cluster to make it really competitive.
Thanks
Hartmut -
LDOM SUN Cluster Interconnect failure
I am building a test Sun Cluster on Solaris 10 under LDoms 1.3.
In my environment I have a T5120. I set up two guest domains with the configurations below and installed the Sun Cluster software, but when I executed scinstall it failed.
Node 2 comes up, but node 1 throws the following messages:
Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
SunOS Release 5.10 Version Generic_139555-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: test1
Configuring devices.
Loading smf(5) service descriptions: 37/37
/usr/cluster/bin/scdidadm: Could not load DID instance list.
/usr/cluster/bin/scdidadm: Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node test2 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node test1 (nodeid = 2) with votecount = 0 added.
NOTICE: clcomm: Adapter vnet2 constructed
NOTICE: clcomm: Adapter vnet1 constructed
NOTICE: CMM: Node test1: attempting to join cluster.
NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
NOTICE: clcomm: Path test1:vnet1 - test2:vnet1 errors during initiation
NOTICE: clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
WARNING: Path test1:vnet1 - test2:vnet1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
WARNING: Path test1:vnet2 - test2:vnet2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
I created the virtual switches and vnets on the primary domain like this:
ldm add-vsw mode=sc cluster-vsw0 primary
ldm add-vsw mode=sc cluster-vsw1 primary
ldm add-vnet vnet2 cluster-vsw0 test1
ldm add-vnet vnet3 cluster-vsw1 test1
ldm add-vnet vnet2 cluster-vsw0 test2
ldm add-vnet vnet3 cluster-vsw1 test2
Primary domain:
bash-3.00# dladm show-dev
vsw0 link: up speed: 1000 Mbps duplex: full
vsw1 link: up speed: 0 Mbps duplex: unknown
vsw2 link: up speed: 0 Mbps duplex: unknown
e1000g0 link: up speed: 1000 Mbps duplex: full
e1000g1 link: down speed: 0 Mbps duplex: half
e1000g2 link: down speed: 0 Mbps duplex: half
e1000g3 link: up speed: 1000 Mbps duplex: full
bash-3.00# dladm show-link
vsw0 type: non-vlan mtu: 1500 device: vsw0
vsw1 type: non-vlan mtu: 1500 device: vsw1
vsw2 type: non-vlan mtu: 1500 device: vsw2
e1000g0 type: non-vlan mtu: 1500 device: e1000g0
e1000g1 type: non-vlan mtu: 1500 device: e1000g1
e1000g2 type: non-vlan mtu: 1500 device: e1000g2
e1000g3 type: non-vlan mtu: 1500 device: e1000g3
bash-3.00#
Node 1:
-bash-3.00# dladm show-link
vnet0 type: non-vlan mtu: 1500 device: vnet0
vnet1 type: non-vlan mtu: 1500 device: vnet1
vnet2 type: non-vlan mtu: 1500 device: vnet2
-bash-3.00# dladm show-dev
vnet0 link: unknown speed: 0 Mbps duplex: unknown
vnet1 link: unknown speed: 0 Mbps duplex: unknown
vnet2 link: unknown speed: 0 Mbps duplex: unknown
-bash-3.00#
Node 2:
-bash-3.00# dladm show-link
vnet0 type: non-vlan mtu: 1500 device: vnet0
vnet1 type: non-vlan mtu: 1500 device: vnet1
vnet2 type: non-vlan mtu: 1500 device: vnet2
-bash-3.00#
-bash-3.00#
-bash-3.00# dladm show-dev
vnet0 link: unknown speed: 0 Mbps duplex: unknown
vnet1 link: unknown speed: 0 Mbps duplex: unknown
vnet2 link: unknown speed: 0 Mbps duplex: unknown
-bash-3.00#
This is the configuration I gave while running scinstall:
Cluster Transport Adapters and Cables <<<
You must identify the two cluster transport adapters which attach
this node to the private cluster interconnect.
For node "test1",
What is the name of the first cluster transport adapter [vnet1]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
All transport adapters support the "dlpi" transport type. Ethernet
and Infiniband adapters are supported only with the "dlpi" transport;
however, other adapter types may support other types of transport.
For node "test1",
Is "vnet1" an Ethernet adapter (yes/no) [yes]?
Is "vnet1" an Infiniband adapter (yes/no) [yes]? no
For node "test1",
What is the name of the second cluster transport adapter [vnet3]? vnet2
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test1",
Name of the switch to which "vnet2" is connected [switch2]?
For node "test1",
Use the default port name for the "vnet2" connection (yes/no) [yes]?
For node "test2",
What is the name of the first cluster transport adapter [vnet1]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test2",
Name of the switch to which "vnet1" is connected [switch1]?
For node "test2",
Use the default port name for the "vnet1" connection (yes/no) [yes]?
For node "test2",
What is the name of the second cluster transport adapter [vnet2]?
Will this be a dedicated cluster transport adapter (yes/no) [yes]?
For node "test2",
Name of the switch to which "vnet2" is connected [switch2]?
For node "test2",
Use the default port name for the "vnet2" connection (yes/no) [yes]?
I have set up the configuration like this:
ldm list -l nodename
Node 1:
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:61:63 1 1500
vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f8:87:27 1 1500
vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:f8:f0:db 1 1500
ldm list -l nodename
Node 2:
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:a1:68 1 1500
vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f9:3e:3d 1 1500
vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:fb:03:83 1 1500
ldm list-services
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 primary 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
cluster-vsw0 primary 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
cluster-vsw1 primary 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
ldm list-bindings primary
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
primary-vsw0 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet1@gitserver 00:14:4f:f8:c0:5f 1 1500
vnet1@racc2 00:14:4f:f8:2e:37 1 1500
vnet1@test1 00:14:4f:f9:61:63 1 1500
vnet1@test2 00:14:4f:f9:a1:68 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
cluster-vsw0 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet2@test1 00:14:4f:f8:87:27 1 1500
vnet2@test2 00:14:4f:f9:3e:3d 1 1500
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
cluster-vsw1 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
vnet3@test1 00:14:4f:f8:f0:db 1 1500
vnet3@test2 00:14:4f:fb:03:83 1 1500
Any ideas, team? I believe the cluster interconnect adapters were not set up successfully.
I need any guidance or clue on how to correct the private interconnect for clustering in two guest LDoms.
You don't have to stick to the default IPs or subnet. You can change to whatever IPs and subnet mask you need, and even change the private hostnames.
You can do all this during install or even after install.
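In Sun Cluster 3.2 that change can be sketched like this (hedged: the addresses are examples, and the nodes must be booted in non-cluster mode first):

```shell
# Boot every node with "boot -x", then on one node:
cluster set-netprops \
    -p private_netaddr=172.16.0.0 \
    -p private_netmask=255.255.240.0

# After rebooting into the cluster, verify the interconnect paths:
clinterconnect status
```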
Read the cluster install doc at docs.sun.com -
Java.util.MissingResourceException: Missing device property
Hi all.
I've installed:
Sun Java Wireless Toolkit 2.5.2 for CLDC
Java Platform Micro Edition Software Development Kit 3.0 Early Access (I suspect this one isn't necessary)
JDK 6 Update 13
NetBeans IDE 6.5.1 (All)
After installation I run WTK -> open sample project -> run, and get this:
Warning: Could not start new emulator process:
java.util.MissingResourceException: Missing device property
The project builds without problems, but I can't run it ;/ In NetBeans I have a similar problem: I can't run the sample projects because I get this:
Starting emulator in execution mode
com.sun.kvem.midletsuite.InvalidJadException: Reason = 22
The manifest or the application descriptor MUST contain the attribute: MIDlet-1
I've re-installed everything many times, short of reinstalling Windows :) I tried to search for solutions but couldn't find anything useful.
I would greatly appreciate any help.
I do of course have more partitions; WTK, NetBeans and ME SDK 3.0 are installed on a drive other than C:, but the JDK is installed on C: in Program Files, where the path contains a space. Does that matter?
Did you figure this out? I've got exactly the same problem, with the same error being reported. The missing resource is the AMConfig.properties file, which is required for the Access Manager Policy agent. The agent doesn't work until I get the WebSphere config correct.
Thanks in advance. -
Configure iws on Sun cluster???
I have installed Sun Cluster 3.1. On top of it I need to install iWS (Sun ONE Web Server). Does anyone have a document pertaining to this?
I tried docs.sun.com; the documents there sound like Greek or Latin to me.
Cheers
Just to get you started:
3) create the failover RG to hold the shared address.
#scrgadm -a -g sa-rg (unique arbitrary RG name) -h prod-node1,prod-node2 (comma-separated list of nodes that can host this RG, in the order you want it to fail over)
Again: # scrgadm -a -g sa-rg -h prod-node1,prod-node2
4) add the network resource to the failover RG.
# scrgadm -a -S (telling the cluster this is going to be a scalable shared address; if it were failover you would use -L) -g sa-rg (the group we created in step #3) -l web-server (-l is for the hostname of the logical host. This name (web-server) needs to be specified in the /etc/hosts file on each node of the cluster. Even if a node is not going to host the RG, it has to know about the LH (logical host) hostname!)
again - #scrgadm -a -S -g sa-rg -l web-server
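As an illustration, the logical hostname must resolve identically on every node; a hypothetical /etc/hosts entry (the address here is made up) would look like:

```
# /etc/hosts on every cluster node (hypothetical address)
192.168.10.50   web-server
```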
5) create the scalable resource group that will run on all nodes.
#scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg
-y sets a standard property (extension properties are set with -x; some resource types "can" use extension properties, others "must" have them defined). Maximum_primaries says how many nodes you want the instance to run on at the most. Desired_primaries is how many instances you want to run at the same time. For an eight node cluster running other DS's you might say Maximum_primaries=8 Desired_primaries=6, which means an instance could run on any node in the cluster, but you want to make sure there are nodes available for your other resources, so you only run 6 instances at any given time, leaving the other two nodes to run your other DS's.
You could say Max=8 Desired=8 it's a matter of choice.
6) create a storage resource to be used by the app. This tells the app where to go to find the software it needs to run or process.
-a=add, -g=in the group, -j=resource name (needs to be unique and is arbitrary), -t=resource type (installed in pkg format earlier, and registered), -x=resource type extension property (-y sets a standard RG or RT property; -x is only for an RT extension property). /global/web is defined in the /etc/vfstab file with the mount options field specifying global,logging (at least global, maybe logging). (Note you do not specify the DG, just mounts from storage supplied by the DG, because multiple RGs may use storage from the same DG.)
#scrgadm -a -g web-rg -j web-stor -t SUNW.HAStoragePlus (HAStoragePlus provides support for global devices and file systems) -x AffinityOn=false -x FileSystemMountPoints=/global/web
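As a sketch of the vfstab side, the /global/web entry might look like this (the metadevice path is hypothetical; yours depends on your disk set):

```
# /etc/vfstab on every cluster node (hypothetical metadevice names)
/dev/md/webds/dsk/d100  /dev/md/webds/rdsk/d100  /global/web  ufs  2  yes  global,logging
```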
7) create the app resource in the scalable RG.
-a=add, -j=new resource, -g (in the group) web-rg (created in step #5), using the type -t SUNW.apache (registered in step #2; remember the pkg installed was SUNWscapc, which provides the SUNW.apache resource type). Each -j resource name must be unique and only used once, but each -t resource type, although having a unique name from other RTs, can be used over and over again in different resources of different RGs. Bin_dir (self-explanatory, where to go to get the binaries). Network_Resources_Used=web-server (created in step #4; this is the logical host name from /etc/hosts, the name the clients are going to use to get to the resource). Resource_dependencies=web-stor (created in step #6), saying that apache-res depends on web-stor, so if web-stor is not online, don't bother trying to start apache because the binaries won't be there. They are supplied by the storage being online and /global/web being mounted.
#scrgadm -a -j apache-res -g web-rg -t SUNW.apache -x Bin_dir=/usr/apache/bin -y Scalable=True -y Network_Resources_Used=web-server -y Resource_dependencies=web-stor
8) switch the failover group to activate it.
#scswitch -z -g sa-rg
9) switch the scalable RG to activate it.
#scswitch -z -g web-rg
10) make sure everything got started.
#scstat -g
11) connect to the newly started, cluster-managed service. -
Veritas required for Oracle RAC on Sun Cluster v3?
Hi,
We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
Can you please explain these 2 questions?
1)
If we have a hardware disk array RAID controller with LUNs etc, then why do we need to have Veritas Volume Manager (VxVM) if all the LUNS are configured at a hardware level?
2)
Do we need to have VxFS? All our Oracle database files will be on raw partitions.
Thanks,
Steve

> We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.

Good. This is a popular configuration.
Can you please explain these 2 questions?
1)
If we have a hardware disk array RAID controller with
LUNs etc, then why do we need to have Veritas Volume
Manager (VxVM) if all the LUNS are configured at a
hardware level?

VxVM is not required to run RAC. VxVM has an option (separately
licensable) which is specifically designed for OPS/RAC. But if
you have a highly reliable, multi-pathed, hardware RAID platform,
you are not required to have VxVM.
2)
Do we need to have VxFS? All our Oracle database
files will be on raw partitions.

No.
IMHO, simplify is a good philosophy. Adding more software
and layers into a highly available design will tend to reduce
the availability. So, if you are going for maximum availability,
you will want to avoid over-complicating the design. KISS.
In the case of RAC, or Oracle in general, many people do use
raw and Oracle has the ability to manage data in raw devices
pretty well. Oracle 10g further improves along these lines.
A tenet in the design of highly available systems is to keep
the data management as close to the application as possible.
Oracle, and especially 10g, are following this tenet. The only
danger here is that they could try to get too clever, and end up
following policies which are suboptimal as the underlying
technologies change. But even in this case, the policy is
coming from the application rather than the supporting platform.
-- richard -
Sun cluster: virtual IP address
Hi,
What is the virtual IP address and how to configure it?
For example, should it be defined in /etc/hosts? dns?
Thank you,

[[[Is this correct to have Apache HA?]]]
Apache can be set up as a failover resource (so it is active only on one node at a time) or a scalable resource (where it would be active on multiple nodes at the same time).
[[[Just an aside question: HAStoragePlus is NFS sharing? What is difference between NFS resource and mount resource (I saw Veritas differentiate between them)? In case I set up a shared disk, is it NFS or mount resource?]]]
HAStoragePlus is not NFS sharing. HAStoragePlus lets you create HA storage (it's called HAStoragePlus because there was an earlier-generation data service (aka clustering agent, a la VCS) called HAStorage). This will let you wrap a shared storage device and fail it back and forth between multiple nodes of the cluster.
NFS sharing has to be handled using the SUNW.nfs Data service (or in other words, the NFS clustering agent) (ie only if you want to set up NFS as a HA service). Otherwise, you can use standard NFS.
A mount resource is (I'm guessing here) any resource that can be mounted; in other words, a filesystem.
NFS resource is a resource that is shared out via NFS.
[[[Also, a basic question: The shared disk should not be mounted in /etc/vfstab. Correct? It should be only present when doing format on each node. Right? It is SCS that manages the mounting of the file system? This should be up before testing apache HA no?]]]
That is correct. Sun Cluster will handle mounting/unmounting the filesystem and importing/deporting the disk set (in Veritas world it is called a Disk group).
When you build your cluster resource group (aka VCS Service group), you will have to build the dependency tree (just how you would in VCS).
1) Create empty RG
2) Create HAStoragePlus Resource
3) Create Logical Hostname resource
4) Create Apache resource
5) define dependency of Logical hostname (Virtual IP) and HAStoragePlus (filesystem) so that apache can start.
At each stage, you can test whether the RG is working as it should before proceeding to the next level. -
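A minimal sketch of those five steps with scrgadm/scswitch, assuming hypothetical names (apache-rg, apache-stor, apache-lh, apache-res) and mount point /global/apache; the resource types must already be registered (scrgadm -a -t SUNW.HAStoragePlus, scrgadm -a -t SUNW.apache):

```shell
# 1) Create the empty failover RG
scrgadm -a -g apache-rg -h node1,node2

# 2) HAStoragePlus resource wrapping the shared filesystem
scrgadm -a -g apache-rg -j apache-stor -t SUNW.HAStoragePlus \
    -x FileSystemMountPoints=/global/apache

# 3) Logical hostname (virtual IP) resource; apache-lh must be in /etc/hosts on every node
scrgadm -a -g apache-rg -L -l apache-lh

# 4) Apache resource, depending on the storage and the virtual IP
scrgadm -a -g apache-rg -j apache-res -t SUNW.apache \
    -x Bin_dir=/usr/apache/bin \
    -y Resource_dependencies=apache-stor,apache-lh

# 5) Bring the RG online and verify
scswitch -Z -g apache-rg
scstat -g
```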
QFS Meta data resource on sun cluster failed
Hi,
I'm trying to configure QFS in a cluster environment, and hit errors while configuring the metadata resource. I tried different types of QFS; none of them worked.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
[root @ n1u331]
~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
n1u332 - has: No /dsk/ string (nodev) in device.Inappropriate path in FS has device component: nodev.FS has: validate_qfsdevs() failed.
(C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
(C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
Any QFS experts here?

Hi,
Yes, we have 5.2; here is the wiki link: http://wikis.sun.com/display/SAMQFSDocs52/Home
I have added the file system through the web console, and it's mounted and working fine.
After creating the file system I tried to put it under Sun Cluster's management, but it asked for a metadata resource, and when creating the metadata resource I got the errors above.
I need to use the QFS file system in a non-RAC environment, just mounting and using the file system. I could mount it on two machines in shared mode and highly available mode; in both cases, writes on the node that is not the metadata server are three times slower, while read speed is the same. Could you please let me know if it's the same in your environment? If so, what do you think is the reason? I see both sides write to the storage directly, so why is it so slow on one node?
regards, -
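Coming back to the 'shared' option error above: a shared QFS file system is typically declared shared both in the mcf file and in /etc/vfstab. A hypothetical vfstab line (the family set name and mount point here are assumptions) would be:

```
# /etc/vfstab: shared QFS mount with the 'shared' option set
sharedqfs  -  /sharedqfs  samfs  -  no  shared
```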
Oracle9i installation on Sun cluster not showing the cluster config screen
Hi All,
We are trying to install Oracle 9i RAC on a 2 node Sun cluster. The Oracle installer never shows the cluster configuration screen. However, if we run the pre-install checker script from Oracle (Installprecheck.sh), it reports that it has indeed detected the cluster. Can anyone please help us out of this predicament?
-Kishore

Hi Kishore,
I'm assuming the following:
. Sun Cluster 3.1 will be used to support Oracle 9i RAC
. Oracle is being installed on a file system instead of raw devices
To answer your question, a clustered file system must be present in order for Oracle 9i RAC Installer to recognize that a cluster exists before it displays the cluster configuration screen at the beginning of the installation.
Prior to QFS 4.2, Sun did not support a clustered file system; all Oracle RAC systems for Solaris were raw device based. You're in luck now because Sun has released Sun QFS 4.2, which supports a clustered file system.