Sun Cluster 3.3: mirroring two SAN storage arrays (StorageTek) with SVM

Hello all,
I would like to know if you have any best practices for mirroring two storage systems with SVM on Sun Cluster without corrupting or losing data on the storage arrays.
I have currently enabled multipathing on the FC paths (stmsboot), then configured the cluster and created the SVM mirror on the DID devices (a command sketch follows at the end of this message).
I have a few points I would like to check for potential problems:
a) Four quorum votes. Since I have two nodes and two storage arrays whose availability I need to track, I end up with four votes, so the cluster needs three votes to start. Is there any solution for this, such as cldevice combine?
b) The mirror is at the SVM level, so when a failover happens the metasets move to the other node. Is there any chance that the mirror comes up from the second SAN instead of the first and causes some kind of corruption? Is there a way to better protect the storage?
c) The StorageTek array has a snapshot option; is there a good way of using this feature, or not?
d) Is there any problem with failing over global filesystems (the global option in mount)? The only thing that may write to this filesystem is the application itself, which belongs to the same resource group, so when it needs to fail over it will stop all the processes accessing this filesystem and it should be safe to unmount it.
Best regards to all of you,
PiT
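For reference, a minimal sketch of the kind of setup described above; the diskset name, DID numbers, and slices are assumptions for illustration, not values from the real configuration:
# create a shared diskset with both cluster nodes as hosts
metaset -s appset -a -h node1 node2
# add one DID device from each storage array to the set
metaset -s appset -a /dev/did/rdsk/d4 /dev/did/rdsk/d7
# build one submirror per array and attach them as a mirror
metainit -s appset d11 1 1 /dev/did/rdsk/d4s0
metainit -s appset d12 1 1 /dev/did/rdsk/d7s0
metainit -s appset d10 -m d11
metattach -s appset d10 d12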

Thank you very much for your answers, Tim, they are really very helpful. I only have some comments so that they are fully answered.
a) That part is answered for me. I think I will add the vote from only one storage array, and if that array goes down I will tell the customer to check the quorum status and add the second array as the quorum device (see the sketch after this message). The quorum server is not a bad idea, but if the network is down for some reason I think bad things will happen, so I don't want to rely on that.
b) I think you are clear enough.
c) I think you are clear enough! (Just as I thought this would happen with the snapshots...)
d) Finally, if this filesystem is on a metadevice that is owned by the first node, and the second node is proxying to the first node for the metaset disks, is there any chance that the filesystem/metaset gets locked and cannot be taken over?
Thanks in advance,
Pit
(I will also look the document you mention, a lot of thanks)
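To illustrate the quorum handling discussed in a), the commands involved look roughly like this; the DID device names are placeholders, not taken from this cluster:
# check DID devices and the current quorum configuration
cldevice list -v
clquorum status
# add one LUN from the first array as the quorum device (two node votes plus one device vote)
clquorum add d4
# if that array is lost, drop its quorum device and add one from the other array
clquorum remove d4
clquorum add d7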

Similar Messages

  • Information about Sun Cluster 3.1 5Q4 and Storage Foundation 4.1

    Hi,
    I have two Sun Fire V440s running Solaris 9 (last release, 9/05) with the latest cluster patches, QLogic HBA fibre cards, and seven disks shared on an EMC Clariion CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node I see the disks in a different order, and Veritas SF 4.1 is also picking up the disks in a different order.
    1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
    2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
    I'm very confused by Veritas Storage Foundation.
    Thanks!
    J-F Aubin

    This combination does not work today, but it will be available later.
    Since Sun and Veritas are two separate companies, it takes more
    time than expected to synchronize releases. Products supported by
    Sun for Sun Cluster installation undergo extensive testing, which also
    takes time.
    -- richard

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have two node sun cluster.
    I have configured and installed AVS on these nodes (AVS Remote Mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine.
    But when I need to switch the RG to the other node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the state to logging, everything works:
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it? (See the sketch below.)
    Anatoly S. Zimin
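    For context, a sketch of the manual sequence being asked about, reusing the names from the output above; the reverse-resynchronization step in particular is an assumption and should be verified against the AVS documentation before use:
    # on the node that currently owns the set: drop the volume into logging mode (no prompt)
    sndradm -n -l
    # move the resource group to the other node
    clrg switch -n v210-node0 avs_test_rg
    # once the new primary is active, resynchronize in the reverse direction
    sndradm -n -u -r    # update resync; 'sndradm -n -m -r' for a full resync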

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC); rather, it is something that has been developed in the open source community. As such, it will not be documented in the main Sun SC documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • SAN Storage Migration with Hyper-V 2008 R2 CSV

    Dears
    Our customer has configured a 2-node Hyper-V cluster with CSV connected to an old SAN storage array. The host operating system is Windows 2008 R2.
    We are planning to migrate the data from the old storage to the new storage. (Both storage boxes are connected via fibre.)
    With regard to the migration, can we do the below? If so, please provide some guidelines for the Hyper-V queries.
    (We are trying to avoid storage-based migration.)
    - Connect the new storage to the servers
    - Create LUNs the same as on the old storage and assign them to the servers
    - Create a new CSV and point it to the new LUNs in Failover Cluster Manager
    - Use the export/import function to move the VMs from the old storage to the new storage
    - Once all the VMs are moved, create a new LUN for the quorum and reconfigure the cluster to use the new quorum
    Will the above steps give an error-free migration? If so:
    - Can we delete the old CSV and dismount the old storage at this stage?
    Also, these VMs (Exchange and SQL) have separate LUNs assigned for the Exchange and SQL data. Will that data also be exported when we use the export/import feature of Hyper-V to migrate the virtual machines?
    Your valuable response in this regard is highly appreciated.
    Regards
    Muralee

    Dear All
    Thanks for your valuable reply.
    And I was able to successfully migrate the cluster; below are the steps I performed:
    * Initialized the SAN storage and assigned the LUNs to both servers.
    * Logged in to the server that is the current owner of the CSVs and disks.
    * Initialized the disks as GPT and formatted the LUNs without a drive letter.
    * Added them to the Hyper-V cluster as disks.
    * Added them to the CSV.
    * Exported the VMs to the new disks.
    * Deleted the VMs from the Hyper-V console (which does not delete any VM files).
    * Imported the VMs (chose the default settings in the wizard).
    * Added each VM as a service in Failover Cluster Manager.
    * After completely moving the VMs, created a new LUN for the quorum and changed the quorum configuration to the new disk.
    * Removed the old storage, restarted the cluster, and checked the VMs' functionality; everything was fine.
    NOTE 1: I faced a difficulty with the multipathing software, as the customer was using EMC PowerPath and I had to remove it and enable Windows MPIO.
    Anybody facing a similar scenario, please send a note; I would be glad to share the experience.
    NOTE 2: I tried all the SAN migration options from the storage vendor, but this method did the migration in a much easier and more confident manner.
    Regards

  • Help with Oracle 10g Client Connectivity from Linux to IBM SAN storage

    Hello Oracle Experts,
    This is my first post. My client has an Oracle 10g database up and running on IBM SAN storage.
    We have some NMS tools running on Red Hat Linux version 5. These tools require connectivity to the Oracle database, which resides on SAN storage connected with fibre cables.
    How do I establish connectivity from Linux to the SAN storage? I would be glad if you could explain the steps, and also whether there are any pre-installation/post-installation patches and procedures involved.
    If it were an IP-based network, we would normally give the IP address of the host running the database server. I have no idea about SAN storage connected with fibre cable.
    Please guide me on establishing the connectivity from Linux 5 to the SAN.
    Thanks.
    Regards,
    RaviShankar.

    user13153556 wrote:
    Hi Rajesh,
    Actually I will not be touching the Oracle instance's SAN box directly. I will only access the database from another machine; in my case it is a Linux box.
    So, my question is: how do you make the Oracle client on the Linux box connect to an Oracle instance running on another, non-IP-based machine (SAN storage)?
    Install the Oracle client on this Linux machine.
    Make sure you have network connectivity from the Linux machine to the database server. You need to connect to the server where the DB instance is running; you need not bother about the SAN storage.
    Make a TNS entry in the client's $ORACLE_HOME/network/admin/tnsnames.ora file (a sketch follows below).
    Use sqlplus to connect to the database from the client.
    Regards
    Rajesh
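    A minimal sketch of the steps Rajesh describes; the host name, port, service name, alias, and credentials are placeholders, not values from this thread:
    # entry to add to the client's $ORACLE_HOME/network/admin/tnsnames.ora
    PROD10G =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = prod10g))
      )
    # verify the alias resolves, then connect
    tnsping PROD10G
    sqlplus scott/tiger@PROD10G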

  • Windows 2008 R2 Cluster - migrating data to new SAN storage - Detailed Steps?

    We have a project where some network storage is falling out of warranty and we need to move the data to new storage. I have found separate guides for moving the quorum and replacing individual drives, but I can't find an all-inclusive, detailed step-by-step guide for this process, and I know this has to happen all the time.
    What I am looking for is detailed instructions on moving all the data from the current SAN storage to the new SAN storage, start to finish. It is all server-side; a separate storage team will present the new storage. I'll then have to install the multipathing driver, and I am looking for a guide that picks up at that point.
    The real concern here is that this machine controls a critical production process and we don't have a proper setup with storage to test with, so it's going to be a little nerve-racking.
    Thanks in advance for any info.

    I would ask Hitachi.  As I said, the SAN vendors often have tools to assist in this sort of thing.  After all, in order to make the sale they often have to show how to move the data with the least amount of impact to the environment.  In fact,
    I would have thought that inclusion of this type of information would have been a requirement for the purchase of the SAN in the first place.
    Within the Microsoft environment, you are going to deal with generic tools.  And using generic tools the steps will be similar to what I said. 
    1. Attach storage array to cluster and configure storage.
    2. Use robocopy or Windows Server Backup for file transfer.  Use database backup/recovery for databases.
    3. Start with applications that can take multiple interruptions as you learn what works for your environment.
    Every environment is going to be different.  To get into much more detail would require an analysis of what you are using the cluster for (which you never state in either of your posts), what sort of outages you can operate with, what sort of recovery
    plan you will put in place, etc.  You have stated that your production environment is going to be your lab because you do not have a non-production environment in which to test it.  That makes it even trickier to try to offer any sort of detailed
    steps for unknown applications/sizing/timeframes/etc.
    Lastly, the absolute best practice would be to build a new Windows Server 2012 R2 cluster and migrate to a new cluster.  Windows Server 2008 R2 is already out of mainstream support. That means that you have only five more years of support on your
    current platform, at which point in time you will have to perform another massive upgrade.  Better to perform a single upgrade that gives you a lot more length of support.
    . : | : . : | : . tim

  • Sun Cluster 3.1 with SAN 6320 - Any Known Issues?

    Hello,
    We are moving to new Sun hardware with the following configuration:
    Solaris 8, Sun Cluster 3.1, and Oracle 8.0.6.3 on two V1280s connected to a Sun StorEdge SAN 6320. The SAN is also connected to 5 other machines, including one Windows 2000 box.
    The following were the limitations we came across during the testing phase:
    1. The maximum number of LUNs you can have on a 6320 coexisting with Sun Cluster is 16. (You cannot have more than 16 LUNs configured on a 6320!)
    2. The maximum number of cluster nodes that you can have with a 6320 is four.
    Refer:
    http://docs-pdf.sun.com/816-3381/816-3381.pdf
    Bug ID: 4840853
    Is anybody else out there who has already moved, or is moving, to such a configuration and wants to share some tips and suggestions? Please let me know.
    Thanks
    Sair

    An update on the same:
    We are having issues with the SAN 6320.
    The SAN hangs when we use 7 nodes with Sun Cluster 3.1 simultaneously accessing the volumes. No volume is being accessed from more than a single node.
    Will update later...

  • StorageTek 6140 - chunk size? - Veritas and Sun Cluster tuning?

    Hi, we've just got a 6140 and I did some raw write and read tests -> very nice box!
    Current config: 16 FC disks (300 GB / 2 Gbit/s): 1x hot spare, 15x RAID 5 (512 KiB chunk)
    3 logical volumes: vol1: 1.7 TB, vol2: 1.7 TB, vol3: the rest (about 450 GB)
    on 2x T2000 CoolThreads servers (32 GiB memory each).
    It seems the max write performance (from my tests) is:
    512 KiB chunk / 1 MiB block size / 32 threads
    -> 230 MiB/s (write) transfer rate
    My tests:
    * chunk size: 16 KiB / 512 KiB
    * threads: 1/2/4/8/16/32
    * block size (KiB): 0.5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Did anyone out there run some other tests with different chunk sizes?
    How about tuning the Veritas file system and Sun Cluster?
    Veritas FS: I've read so far about write_pref_io and write_nstream...
    I guess setting them to write_pref_io=1048576 and write_nstream=32 would be best in this scenario, right? (A command sketch follows below.)
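    A hedged sketch of how those VxFS tunables can be set at run time; the mount point is made up, and the values only persist across remounts if they are also placed in /etc/vx/tunefstab:
    # show the current tunables for the file system
    vxtunefs -p /export/vol1
    # preferred write size of 1 MiB and 32 parallel write streams
    vxtunefs -o write_pref_io=1048576 /export/vol1
    vxtunefs -o write_nstream=32 /export/vol1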

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • Sun cluster 3.1 io error

    Hi,
    I have 2 cluster nodes running Solaris 9/05 with Sun Cluster 3.1. After a migration from Hitachi AMS1000 storage to a Sun StorageTek 9985V, when I shut down one node in the cluster the volumes mounted on the second node give I/O errors. I have already installed the new patches for the OS, cluster, and SAN, but the problem still persists. Please help me.
    Regards,
    Arun

    Arun,
    You say you migrated to the 9985V - did you do that with backup and restore or with a replication technology? If it was the latter, you might have inadvertently copied over some SCSI reservation keys. Otherwise, I can't see any reason for the problem.
    SCSI keys can be removed (with extreme care) using the scsi and pgre commands in the /usr/cluster/lib/sc directory (see the sketch below).
    Tim
    ---
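    For anyone hitting this later, a sketch from memory of how such keys can be inspected and, with extreme care, scrubbed; the DID device is a placeholder and the exact options should be checked against the Sun Cluster documentation:
    # list SCSI-3 persistent reservation keys on a shared LUN
    /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2
    # remove stale keys (only with the device out of use and under guidance)
    /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d4s2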

  • Beta Refresh Release Now Available!  Sun Cluster 3.2 Beta Program

    The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
    To apply for the Sun Cluster 3.2 Beta program, please visit:
    https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
    or contact Eric Redmond <[email protected]>.
    New Features in Sun Cluster 3.2
    Ease of use
    * New Sun Cluster Object Oriented Command Set
    * Oracle RAC 10g improved integration and administration
    * Agent configuration wizards
    * Resources monitoring suspend
    * Flexible private interconnect IP address scheme
    Availability
    * Extended flexibility for fencing protocol
    * Disk path failure handling
    * Quorum Server
    * Cluster support for SMF services
    Flexibility
    * Solaris Container expanded support
    * HA ZFS
    * HDS TrueCopy campus cluster
    * Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
    * Multi-terabyte disk and EFI label support
    * Veritas Volume Replicator 5.0 support
    * Veritas Volume Manager 4.1 support on x86 platform
    * Veritas Storage Foundation 5.0 File System and Volume Manager
    OAMP
    * Live upgrade
    * Dual partition software swap (aka quantum leap)
    * Optional GUI installation
    * SNMP event MIB
    * Command logging
    * Workload system resource monitoring
    Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
    Sun Cluster 3.2 beta supports the following Data Services
    * Apache (shipped with the Solaris OS)
    * DNS
    * NFS V3
    * Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB

    Without speculating on the release date of Sun Cluster 3.x or even its feature list, I would like to understand what risk Sun would take if Sun Cluster supported ZFS as a failover filesystem. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
    BTW: this means that even Veritas will have to do something about ZFS!
    If VCS is a much better option, it would be interesting to understand what features are missing from Sun Cluster to make it really competitive.
    Thanks
    Hartmut
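    For the record, failover ZFS (HA ZFS, listed in the feature set above) ended up being handled through SUNW.HAStoragePlus in SC 3.2; a minimal sketch, with made-up pool and group names:
    # register the resource type once, then put the zpool under cluster control
    clrt register SUNW.HAStoragePlus
    clrg create zfs-rg
    clrs create -g zfs-rg -t SUNW.HAStoragePlus -p Zpools=apppool zfs-hasp-rs
    clrg online -M zfs-rg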

  • Recommendations for Multipathing software in Sun Cluster 3.2 + Solaris 10

    Hi all, I'm in the process of building a 2-node cluster with the following specs:
    2 x X4600
    Solaris 10 x86
    Sun Cluster 3.2
    Shared storage provided by an EMC CX380 SAN
    My question is this: what multipathing software should I use? The built-in Solaris 10 multipathing software or EMC's PowerPath?
    Thanks in advance,
    Stewart

    Hi,
    According to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
    So in the end it all boils down to:
    - cost: Solaris multipathing is free, as it is bundled
    - support: Sun can offer better support for the Sun software
    You can try browsing this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers (a short sketch follows below).
    Hartmut
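    If you go with the bundled Solaris I/O multipathing (MPxIO), enabling it looks roughly like this; note that it renames the shared devices and requires a reboot, so do it before configuring the cluster:
    # enable MPxIO on the supported FC controller ports (asks to reboot)
    stmsboot -e
    # after the reboot, show the mapping from old to new device names
    stmsboot -L
    # list the logical units and their paths
    mpathadm list lu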

  • Any experience with NFS failover in Sun Cluster?

    Hello,
    I am planning to install a dual-node Sun Cluster for an NFS failover configuration. The SAN storage is shared between the nodes via Fibre Channel. The NFS shares will be manually assigned to nodes and should fail over / fail back between nodes.
    Is this setup well tested? How do the NFS clients survive the failover (without "stale NFS handle" errors)? Does it work smoothly for Solaris, Linux, and FreeBSD clients?
    Please share your experience.
    TIA,
    -- Leon

    My 3-year-old Linux installation on my laptop, which is my NFS client most of the time, uses UDP by default (kernel 2.4.19).
    Anyway, the key is that the NFS client, or rather the RPC implementation on the client, is intelligent enough to detect a failed TCP connection and tries to re-establish it with the same IP address. Once the cluster has failed over the logical IP, the reconnect will succeed and NFS traffic continues as if nothing bad had happened. This only(!) works if the NFS mount was done with the "hard" option; only that makes the client retry the connection (a mount example follows below).
    Other "dumb" TCP-based applications might not retry and thus would need manual intervention.
    Regarding UFS or PxFS, it does not make a difference. NFS does not know the difference; it shares a mount point.
    Hope that helped.
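    A small illustration of the client-side mount this answer relies on; the logical hostname and paths are placeholders (on Solaris add -F nfs, and on Linux the version option may be spelled nfsvers=3):
    # mount through the cluster's logical hostname with hard retries over TCP
    mount -o hard,intr,proto=tcp,vers=3 nfs-lh.example.com:/export/data /mnt/data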

  • Sun cluster, 6140 and 'cross-connections'

    This was brought up in the storage forum by somebody else, but the responses never answered the original question:
    The 6140 setup document located at http://docs.sun.com/source/819-7497-11/chapter3.html#50589714_93886 shows two different ways to cable a host to a 6140 via a SAN switch (figures 3-3 and 3-4).
    It states that the setup in 3-4 is not supported in a Sun Cluster environment.
    The problem is that, given the active/passive nature of the 6140, the setup shown in 3-4 is the obvious one to use, since it prevents one from having to force all of the LUNs over in the event of an HBA port or switch failure.
    To make life more interesting, the 6140 setup doc does not make note of which version of Sun Cluster it is not supported on, or what the bug ID is, or any information to know whether the restriction is still valid.
    So, does this restriction still exist? If so, for what version of Sun Cluster? What version of Solaris?

    Thanks for the clarification.
    As an aside, it would be nice if in the future, the documentation could contain a bit more information than just a simple note saying 'this is not supported'. A reference to a bugid or info doc would go a long way in helping folks determine if the restriction is still valid.
    --john

  • Sun cluster HPC 5.0 - usage of ssh instead of rlogin

    Hi ,
    I am working on a Sun Cluster (HPC 5.0). It has been installed to use rlogin, but I wish to change it to ssh, as it is more secure. Is there a way to do this without uninstalling the cluster and then reinstalling it?
    Thank you very much
    Varsha

    First question is what HBA and driver combination are you using?
    Next do you have MPxIO enabled or disabled?
    Are you using SAN switches? If so, whose, at what firmware level, and in what configuration (i.e. single switch, cascade of multiple switches, etc.)?
    What are the distances from the nodes to the storage (include any fabric switches and ISLs if there are multiple switches), and what media are you using as a transport (copper, fibre {single-mode, multi-mode})?
    What is the configuration of your storage ports (fabric point-to-point, loop, etc.)? If loop, what are the AL_PAs for each connection?
    The more you leave out of your question the harder it is to offer suggestions.
    Feadshipman

  • VCS to Sun Cluster migration

    I am planning to migrate a 2-node cluster from VCS to Sun Cluster. How much downtime does this involve? Is there any documentation that I can reference?

    Hi all,
    In the following I have outlined the principal steps for migrating a cluster in place. This will be one of the subtopics of an upcoming blog about VCS to SC migration.
    Pavel, you should definitely revisit SC 3.2, and explicitly the BUI. We had various VCS admins on different projects tell us the gap has become so small that VCS is not worth the additional cost.
    Bear in mind that migrating in place is the most complex scenario; doing it on a completely separate platform is a much simpler process. But let's proceed with the assumptions and process:
    Let us assume a two-node cluster where you want to migrate from VCS with VxVM to Solaris Cluster and Solaris Volume Manager. I assume as well that your data is mirrored. The steps below are an outline of the migration process; for the necessary cluster administration commands you need to consult the appropriate documentation.
    1. Reduce the VCS cluster to a one-node cluster and disconnect the interconnect. The interconnect has to be disconnected to allow a Solaris Cluster installation on the other node; Solaris Cluster checks the interconnect for unwanted traffic.
    2. Split the storage in two halves, and disallow access from the VCS cluster to the future Solaris Cluster half. This can be achieved, for example, by modifying the switch zoning or LUN masking. At this point your application is still running, but you no longer have high availability or data redundancy.
    3. Install a single-node Solaris Cluster on the second host; it is advisable to start with a fresh Solaris install.
    4. Configure the full Solaris Cluster topology with a temporary copy of your data (see the sketch after this post). The data has to be installed by backup/restore, because you are changing the volume manager as well. It is important here that you use different IP addresses for the logical hosts to avoid duplicate addresses. Now the new single-node Solaris Cluster is ready to take the actual data.
    5. When you are ready for an application downtime, transfer the current data from the Veritas cluster to the Solaris Cluster again, and shut down the remaining VCS single-node cluster.
    6. Change the IP addresses of the logical hosts in the Solaris Cluster to the final values and enable all relevant resources. From now on your application will be running on the new Solaris Cluster.
    7. Re-establish the interconnect, destroy the VCS cluster, and install the Solaris Cluster packages on the old VCS node, but do not configure the node yet.
    8. Allow data access to the storage for both nodes with the appropriate methods.
    9. Add the second node to the Solaris Cluster, including the Solaris Cluster device groups; this step will take another short application downtime.
    10. Mirror your data. From this point you have full redundancy and full high availability again.
    Cheers
    Detlef
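    A rough sketch of what step 4 might look like on the new single-node Solaris Cluster; the group, resource, hostname, and mount point names are invented for illustration:
    # register the storage resource type and build the failover group
    clrt register SUNW.HAStoragePlus
    clrg create app-rg
    # temporary logical hostname, different from the address still in use on the VCS side
    clreslogicalhostname create -g app-rg -h app-temp-lh app-lh-rs
    # put the SVM-backed file system under cluster control
    clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/app app-hasp-rs
    clrg online -M app-rg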
