Using NAS devices inside zone clusters

I was reading the docs regarding using NAS devices with Sun Cluster. They mention installing a vendor-supplied package onto each cluster node to support the NAS; for example, NetApp has the NTAPclnas package. We are using Hitachi HNAS and I doubt they have an equivalent package. (I'm looking into this, but my hopes are not high.)
What exactly do I need this for? Are there any problems with simply adding the NFS mounts I need to the /etc/vfstab of each zone cluster node?
In my case I will be running our oracle dev/test databases on NFS file-systems and our production clusters will use NFS file-systems for writing RMAN backups into. We aren't using RAC (today) but quite possibly will be moving to RAC in the future.
I was just curious what level of integration the package described at http://download.oracle.com/docs/cd/E19680-01/html/821-1556/ewplp.html#eyprn actually provides. Also, will a vanilla /etc/vfstab approach work if Hitachi doesn't have such a package available?
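For reference, a plain NFS mount in a zone-cluster node's /etc/vfstab would look something like the entry below. The server name, export path, and mount point are all hypothetical, and the mount options are only a common starting point; check your NAS vendor's Oracle-over-NFS recommendations for the real values:

```
# device to mount           fsck dev  mount point   FS   pass  at boot  options
hnas01:/export/oradev_data  -         /u02/oradata  nfs  -     yes      rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3
```

Note this gives you a plain mount only; none of the fencing or lock-release behavior discussed in the reply below comes from vfstab.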
Thanks for any info.

IIRC, that package provides locking and fencing support. You need to be able to ensure that nodes that are not supposed to be able to write to a piece of shared storage cannot do so. With an NFS server, this is done by revoking the share. The second part is NFS lock release.
So, if you don't have this package, you don't have these features. Without fencing control, you risk data integrity. Without lock release, you risk not being able to access your files.
That said, there are some circumstances when you probably don't need them. Just dumping RMAN backups to a dump area may be OK. I don't know quite enough about RMAN to comment.
Tim
---

Similar Messages

  • Telemetry & Zone Clusters

Does anyone know a good source for configuring cluster telemetry, specifically with zone clusters? I can't find much in the cluster documentation or by searching Oracle's website. The sctelemetry man page wasn't very useful. The Sun Cluster Essentials book provides a brief example, but not with zone clusters.
    I want to use this feature to monitor memory/CPU usage in my various zone clusters. In our environment, we will have a few three-node clusters with all applications running inside zone clusters, with the "active" cluster nodes staggered across the 3 nodes.
    Lastly, is telemetry really worth the hassle? We are also deploying Ops Center (whose capabilities I don't really know yet). I briefly used an older version of xVM Ops Center at my last gig, but only as a provisioning tool. So with Ops Center and the myriad of DTrace tools available, is telemetry worth messing with?
    Thx for any info,
    Chuck

    That's correct. I checked with the feature's author, and telemetry pre-dates the introduction of zone clusters. What I got back was: "SC can only do cpu.idle monitoring for a zone itself. Anything below that is not monitored, including RG/RS configured inside zones."
    Tim
    ---

  • Using Time Machine with a Network Attached Storage (NAS) device

    I have a Mac household (20" iMac, MacBook Pro and MacBook) all connecting to the internet, wireless printer and a Network Attached Storage (NAS) device via the "n" version AirPort Extreme.
    All software / firmware is up-to-date on all machines / devices.
    Everything works great through the AirPort, but I can't see the NAS in the Time Machine setup. How do I get Time Machine to make backups of each of my machines to the NAS?

    TM backups to a NAS are not officially supported. There are some hacks to make it work, but you'll be on your own if you use them. Check out this [Mac OS X hint|http://www.macosxhints.com/article.php?story=20080420211034137&query=time%2Bmachine%2Bafp]. Make sure you read all the caveats and comments.

  • Creating Oracle-HA config using zone clusters

    We have a three-node Sun Cluster (3.3u1) on Solaris 10 update 9. We are using Hitachi VSP for external storage. Eventually we may go to RAC (we had to drop the RAC licenses for the time being due to budget cuts). For now I want to deploy zone clusters and create several different Oracle-HA installations.
    I've seen several ways of doing this and not sure what the best practice is or what limitations each method has. (so far I've not been able to get any of them working but I was using the vanilla 3.3 release and just rebuilt using 3.3u1 and then the 145334-09 core patch)
    My question is do I create the cluster resource in the zone cluster or in the global cluster?
    One document I'm trying to follow does this at a high level:
    a) Create an HASP resource in the global cluster to give the zone cluster use of the zpool that will house the Oracle binaries (added to my zone cluster using "add dataset")
    b) Create a logical hostname resource in the global cluster (add to my zone cluster using "add net")
    Then from within one node of the zone cluster:
    c) Create the oracle resource group
    d) Register the HASP resource type (if not already)
    e) Create an HASP resource to mount the zpool (this seems like a redundant step)
    f) Create a Logical Hostname resource that will be used by the oracle listener (this seems like a redundant step)
    g) Bring the resource group online
    h) Install the Oracle binaries from the cluster node where the zpool is currently mounted
    i) Install the database and configure the listener, ...
    j) Register the oracle database and listener resource types
    k) Create cluster resource for the listener and the database
    I'm confused as to why I create the resources in the global zone (steps a & b) and then again in the zone cluster... Anyone have any ideas?
    Also, I found a Sun engineer's blog that shows doing everything above in the global zone only. (I haven't gotten this to work either.)
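    For what it's worth, the global-zone/zone-cluster split in steps a-k can be sketched roughly as below. All object names (zc1, oradata-pool, ora-lh, rg-oracle, rs-hasp, rs-lh) are made up, and the exact subcommand syntax should be checked against the clzonecluster(1CL), clrg(1CL), and clrs(1CL) man pages:

```
# --- In the global zone: delegate the storage and network to the zone cluster ---
clzonecluster configure zc1 <<EOF
add dataset
set name=oradata-pool
end
add net
set address=ora-lh
end
commit
EOF

# --- Inside one zone-cluster node: build the resource group ---
clrt register SUNW.HAStoragePlus
clrg create rg-oracle
clrs create -g rg-oracle -t SUNW.HAStoragePlus -p Zpools=oradata-pool rs-hasp
clrslh create -g rg-oracle -h ora-lh rs-lh
clrg online -M rg-oracle
```

    The global-zone steps only authorize the zone cluster to use the zpool and the address; the zone-cluster steps then create the actual resources that mount and plumb them, which is why both appear to be "done twice".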
    Thanks,

    When I follow the example in the Sun Cluster Essentials book, I get the following error (the error is the same with Sun Cluster 3.3 and 3.3u1). My test cluster is now running 3.3u1 with the 145334-09 core patch set.
    When I get to the step where I create the hasp resource within one zone cluster node:
    node01-chuck1:~ # clrs create -g rg-zc-chuck1-oracle -t SUNW.HAStoragePlus -p zpools=chuck1-u01 rs-zc-hasp-chuck1-u01
    clrs: node02:chuck1 - More than one matching zpool for 'chuck1-u01': zpools configured to HAStoragePlus must be unique.
    clrs: (C189917) VALIDATE on resource rs-zc-hasp-chuck1-u01, resource group rg-zc-chuck1-oracle, exited with non-zero exit status.
    clrs: (C720144) Validation of resource rs-zc-hasp-chuck1-u01 in resource group rg-zc-chuck1-oracle on node node02-chuck1 failed.
    clrs: (C891200) Failed to create resource "rs-zc-hasp-chuck1-u01".
    My environment:
    2-node cluster, with physical nodes - node01, node02
    1 local zpool on each physical node called "zones" and mounted as /zones (These are on disks that are only visible to each physical node)
    1 local zfs filesystem on each node called zones/chuck1 and mounted as /zones/chuck1
    1 zone cluster created with zonepath = /zones/chuck1 called chuck1
    1 zpool created on shared storage - chuck1-u01
    At this point all the cluster checks and status commands show everything is healthy. I have done multiple reboots/halts/shutdowns of the zone cluster and no issues that I can see.
    1) I have added the dataset to the zone cluster config from the global zone and rebooted the zone cluster. Note that I still don't see anything when I run "zpool status -v" within a zone cluster node. I would expect to see my chuck1-u01 zpool at this point.
    2) I then created a resource group within one zone cluster node called rg-zc-chuck1-oracle
    3) I then registered the SUNW.HAStoragePlus resource type within one zone cluster node
    4) I then attempted to create the HASP resource type within one zone cluster node and I get the error above.
    Any ideas? I've followed the Sun Cluster Essentials example explicitly. (In my last attempt I skipped doing anything with the logical hostname resource; I was saving that for later once I got the zpool working.) It seems to get confused on the second node.
    Thx

  • How do you set up an FTP server using a NAS device?

    I'm sure this question has been answered before. I run a small graphic design business from my home. Occasionally clients want to send me files and ask if I have an FTP site they can upload to. I recently purchased a NAS enclosure and added a 250 GB HD to it. It's hooked up to a Linksys router. I can attach to it from the Go menu using "smb://STORAGE", and it appears on the desktop. I have Comcast as my broadband service with a dynamic DNS. I have 2 folders on it, one password-protected and the other a Public folder. I would like someone to be able to access the Public folder and upload and download files on it. Would someone be able to explain, in simple terms, how to set this up?

    Well, you actually need to have an FTP server running somewhere. I don't know whether your NAS has an embedded FTP or SFTP server (although I doubt it does). If it doesn't, you'll have to have people connect to one of your servers/workstations that has FTP enabled.
    First, you'll need to log in to your router's admin panel and forward port 21 to the server/workstation that will function as the FTP server.
    Then set up the mount for the NAS device at large (or one of the specific folders on it) with sharepoints or something similar on the Mac that will act as the FTP server.
    Set up a user as an FTP-only user. You'll probably want this user to have FTP access only (you can Google or consult other threads in these forums for this procedure).
    In this user's home folder, make symlinks to the shares with the command line:
    cd pathto_ftpusers_homedir
    ln -s /Volumes/NAS_Sharepoint NameOfFolderUserWillSee
    Then create the file /etc/ftpchroot, which will contain a list of users that will be limited to their home directory when using FTP. I would use a command-line text editor to do this (pico, vi, emacs... choose your poison).
    The file should simply be a list of user shortnames, one per line.
    That's the basics of it. You can get more complicated, and might indeed need to set up permissions properly (you'll probably want to use ACLs so you don't have to constantly change permissions or log in as another user to access files that have been uploaded), but that should get you started, I think.
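    A sketch of the symlink-and-ftpchroot steps above, runnable as a demo. The real paths (the FTP user's home, the NAS mount under /Volumes, and /etc/ftpchroot itself, which needs root) are substituted with scratch directories here so nothing on the system is touched:

```shell
#!/bin/sh
# Stand-ins: in real use FTP_HOME is the FTP-only user's home directory
# and NAS_SHARE is a mount such as /Volumes/NAS_Sharepoint.
FTP_HOME="$(mktemp -d)"
NAS_SHARE="$(mktemp -d)"

# Expose the NAS share inside the user's home via a symlink; "Public" is
# the folder name the FTP user will see after logging in.
ln -s "$NAS_SHARE" "$FTP_HOME/Public"

# /etc/ftpchroot is a plain list of shortnames, one per line; listed users
# are confined to their home directory over FTP. (Stand-in file here.)
CHROOT_FILE="$FTP_HOME/ftpchroot.demo"
echo ftpguest >> "$CHROOT_FILE"

ls -l "$FTP_HOME"
```

    In real use, replace the two mktemp lines with the actual paths and write to /etc/ftpchroot with sudo.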

  • Trying to use a Nas device to perform backups.

    Good day all,
    I am trying to set up a NAS device, a Netgear ReadyData 5200, which comes with its own GUI; we also run Backup Exec 2012 with tape drives.
    What I am trying to do is use the NAS device instead of tapes.
    The questions I have are:
    How do I configure the NAS to be the backup device instead of tape? Backup Exec can see the NAS, but says 'Not enough statistical information is available', and I cannot use it to do any backups.
    We have a NAS box in two sites, and I would like to replicate data between the two with just the changes to the data.
    Is it best to use the Netgear software only, or to try to get Backup Exec to do the backup?
    Regards
    The One
    This topic first appeared in the Spiceworks Community

    If you have a rescue email address (which is not the same thing as an alternate email address) set up on your account then you can try going to https://appleid.apple.com/ and click 'Manage your Apple ID' on the right-hand side of that page and log into your account. Then click on 'Password and Security' on the left-hand side of that page and on the right-hand side you might see an option to send security question reset info to your rescue email address.
    If you don't have a rescue email address then see if the instructions on this user tip helps : https://discussions.apple.com/docs/DOC-4551

  • About the error: "The account is not authorized to login from this station" when you access NAS devices from Windows 10 Technical Preview (build 9926)

    Scenario:
    With the release of Windows 10 Technical Preview (build 9926), some users may encounter an error message of "The account is not authorized to login from this station" when trying to access remote files saved in NAS storage. In addition, the following error log may also be found via Event Viewer:
    Rejected an insecure guest logon.
    This event indicates that the server attempted to log the user on as an unauthenticated guest but was denied by the client. Guest logons do not support standard security features such as signing and encryption. As a result,
    guest logons are vulnerable to man-in-the-middle attacks that can expose sensitive data on the network. Windows disables insecure guest logons by default. Microsoft does not recommend enabling insecure guest logons.
    Background:
    The error message is due to a change we made in Windows 10 Technical Preview (build 9926) which is related to security and remote file access that may affect you.
    Previously, remote file access includes a way of connecting to a file server without a username and password, which was termed as “guest access”.
    With guest access authentication, the user does not need to send a user name or password.
    The security change is intended to address a weakness when using guest access.  While the server may be fine not distinguishing among clients for files (and, you can imagine in the home scenario that it doesn’t
    matter to you which of your family members is looking at the shared folder of pictures from your last vacation), this can actually put you at risk elsewhere.  Without an account and password, the client doesn’t end up with a secure connection to the server. 
    A malicious server can put itself in the middle (also known as the Man-In-The-Middle attack), and trick the client into sending files or accepting malicious data.  This is not necessarily a big concern in your home, but can be an issue when you take your
    laptop to your local coffee shop and someone there is lurking, ready to compromise your automatic connections to a server that you can’t verify.  Or when your child goes back to the dorm at the university. The change we made removes the ability to connect
    to NAS devices with guest access, but the error message which is shown in build 9926 does not clearly explain what happened. We are working on a better experience for the final product which will help people who are in this situation. 
    As a Windows Insider you’re seeing our work in progress; we’re sorry for any inconvenience it may have caused.
    Suggestion:
    You may see some workarounds (e.g., making a registry change restores your ability to connect with guest access).
    We do NOT recommend making that change, as it leaves you vulnerable to the kinds of attacks this change was meant to protect you from.
    The recommended solution is to add an explicit account and password on your NAS device, and use that for the connections.  It is a one-time inconvenience,
    but the long term benefits are worthwhile.  If you are having trouble configuring your system, send us your feedback via the Feedback App and post your information here so we can document additional affected scenarios.
    Alex Zhao
    TechNet Community Support

    Hi RPMM,
    HomeGroup works great in Windows 10 Technical Preview (build 9926); when I joined my Windows 10 Technical Preview (build 9926) machine to the HomeGroup, I could access the shares smoothly.
    My share settings look like this: (screenshot omitted)
    Alex Zhao
    TechNet Community Support

  • I can VPN into my work but can't connect to any devices inside that network

    I authenticate fine and get an address, but I can't connect to any devices inside my work network. I am connecting to Cisco LAN controllers, routers, and switches.
    I thought it was a routing problem, but I can connect with my iPhone and open the web page for my LAN controllers, and I even tested it on a PC laptop with Windows XP and it works fine. I have a 2012 MacBook Pro, and I prefer to use my MacBook all the time. I hadn't used my Windows laptop until today since I got the Mac. Do you think I have the V-Word?

    Please read this whole message before doing anything. This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it. The purpose of this test is to determine whether your problem is caused by third-party system modifications that load automatically at startup or login. Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards. Boot in safe mode* and log in to the account with the problem. The instructions provided by Apple are as follows: 
    Shut down your computer, wait 30 seconds, and then hold down the shift key while pressing the power button.
    When you see the gray Apple logo, release the shift key.
    If you are prompted to log in, type your password, and then hold down the shift key again as you click  Log in.
     *Note: If FileVault is enabled under OS X 10.7 or later, or if a firmware password is set, or if the boot volume is a software RAID, you can’t boot in safe mode. Safe mode is much slower to boot and run than normal, and some things won’t work at all, including wireless networking on certain Macs. The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin. Test while in safe mode. Same problem? After testing, reboot as usual (i.e., not in safe mode) and verify that you still have the problem. Post the results of the test.

  • Windows 8 backup (Win7 File Recovery system image) creation to NAS device fails with error 0x807800C5

    Hi,
    I have a ZyXEL NSA310 NAS device on my network that I use for backups (as well as a media server). I have been very happy with it as, amongst other things, it has a gigabit Ethernet connection. I recently upgraded my home laptop from Win7 Pro to Win8
    Pro. Under Win7 the NAS device worked perfectly as the backup target. I could back up file sets and - most importantly to me - create a system image on the device should I need to restore the system in the event of a full disk failure.
    When I upgraded to Win8 it kept the Win7 settings and it looked like it was just going to work, then as it came to create the system image it failed with error code 0x807800C5 and message "The version does not support this version of the file format".
    I have searched the internet and seen that others have had similar issues on Win7 and Win8 with NAS devices where they have had to hack the device to get it working - though it isn't clear that this has been successful for everyone. This isn't an option
    for me as the NSA310 is a closed device and in any event I don't see why I should have to hack the device when clearly this is a Win8 issue (since Win7 worked perfectly).
    Does anyone have any ideas how to fix this issue so that I can create the full backups I require?
    Thanks,
    Phil
    Event Log messages:
    Log Name:      Application
    Source:        Microsoft-Windows-Backup
    Date:          13/01/2013 23:14:52
    Event ID:      517
    Task Category: None
    Level:         Error
    Keywords:     
    User:          SYSTEM
    Computer:      Home-Laptop
    Description:
    The backup operation that started at '‎2013‎-‎01‎-‎13T23:13:43.523158000Z' has failed with following error code '0x807800C5' (There was a failure in preparing the backup image of one of the volumes in the backup set.). Please review the event details for a
    solution, and then rerun the backup operation once the issue is resolved.
    Log Name:      Application
    Source:        Windows Backup
    Date:          13/01/2013 23:14:56
    Event ID:      4104
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      Home-Laptop
    Description:
    The backup was not successful. The error is: There was a failure in preparing the backup image of one of the volumes in the backup set. (0x807800C5).

    Thanks willis! I will look into the iSCSI route. A quick Google search and I can see some mention of the NSA310 and iSCSI so maybe it does support it.
    One question: Have you ever attempted to restore a system image from a NAS iSCSI device with a Win8 Restore Disk? Is it easy? I just want to be sure that it is possible to do so in case the worst happens and I need to restore the entire image onto a
    new disk.
    Hopefully Microsoft will fix the issue with the standard NAS setup with an update in the future, but I don't want to wait for it.
    Thanks again,
    Phil
    Hi Phil,  No I have not had to do this yet, but I see no reason why it shouldn't work as the iSCSI disk looks just like a regular hard disk to the OS. I agree that Microsoft should fix the direct NAS support as the iSCSI approach does have the downside
    of dedicating a fixed chunk of your NAS drive to the iSCSI disk that you have to choose when you create the disk whereas the direct NAS just uses the actual space currently needed by the backup. Also I had some trouble getting authentication (access rights)
    to work so I left the iSCSI portal as open access - which is OK for a home solution but not a good idea in general. I will revisit this for my own setup and see if I can get it working but just wanted to mention it in case you have the same issue. It manifests
    itself as not being able to connect to the iSCSI portal due to failed authentication when running iSCSI initiator setup.
    -willis

  • How to migrate File server to NAS device?

    All right so here's the scenario.
    An external forest-level trust was recently set up between two forests, A and B, when we acquired company B.
    Users from forest B were already migrated to domain A.
    Now I need to migrate a file server from domain B to my domain A, but the target where we need to move the file shares is a NAS device, so we cannot do a simple server migration from domain B to domain A.
    We need to copy the data + ACLs + share permissions from the file server in domain B to a NAS device in domain A, where the ACL permissions on the file server in domain B will of course reference source-domain user accounts; for example, ACL permissions currently show as "B\user1".
    So what's the best way to perform this in order to maintain the permissions and data?
    Another question: I can robocopy the data with its security permissions, but how do we copy all the share permissions over to the destination NAS?

    Hi,
    You need to retain the SID History attribute when migrating users across domains to ensure users in the target domain can access the files during migration.
    For more detailed information, please refer to the threads below:
    ACL migration
    http://social.technet.microsoft.com/Forums/exchange/en-US/3c75a116-6bd3-407f-a76c-0d825d4f525a/acl-migration
    File server migration using FSMT 1.2 in NAS environment
    http://social.technet.microsoft.com/Forums/en-US/fb4bf505-2d95-4409-9777-8fd4a1c0c471/file-server-migration-using-fsmt-12-in-nas-environment?forum=winserverfiles
    In addition, you could back up the registry key SYSTEM\CurrentControlSet\Services\LanmanServer\Shares to copy share permissions.
    Saving and restoring existing Windows shares
    http://support.microsoft.com/kb/125996
    ADMT and network mapped drives
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/9752e31d-2f35-4d7d-ae4d-2f9fe9400bfe/admt-and-network-mapped-drives?forum=winserverDS
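    As a sketch, the data/ACL copy plus the share-permission registry backup mentioned above could look like the following. The server, share, and log-file names are hypothetical; test against a non-production share first:

```
:: Copy the data with NTFS ACLs, owner and auditing info (/COPYALL),
:: mirroring the source tree and retrying transient failures.
robocopy \\fileserverB\dept_share \\nasA\dept_share /MIR /COPYALL /R:2 /W:5 /LOG:C:\migr\dept_share.log

:: Export the share definitions (including share-level permissions) from
:: the source server's registry. This .reg file can only be re-imported on
:: a Windows-based server; a non-Windows NAS will need its shares and
:: share permissions recreated through its own management interface.
reg export HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares C:\migr\shares.reg
```

    Note that /COPYALL preserves NTFS (file-level) permissions only; share-level permissions are what the registry export captures.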
    Regards,
    Mandy

  • Moving photos from iPhoto to NAS Device

    Hello.
    I'm having a difficult time copying photos from my library and transferring them to my NAS device once I've edited them in iPhoto. It seems as though I can only transfer/move the iPhoto library file and not my actual photos. I use iPhoto 11 to edit photos and then would like to move those edited photos to my NAS device so I can view them on all my network computers. I have a new MacBook Pro with Retina Display, and my NAS device is a Synology DS212j. This device is a media server, so I can view these photos on my TV and other connected devices. Does anyone know how I can accomplish the above? I'm new to the Mac environment and need some help with this problem. I'm sure there has to be a fix, as it seems so seamless for my Windows computers to do the same.
    Thanks much.

    Think of it this way:
    Open a file in a photo editor - say Photoshop, Acorn, Graphic Converter, whatever...
    Fix Red eye.
    Save.
    What happens? The file containing the photo is edited. The Modification date is changed. There's no way back to the photo before the fixing of the red eye.
    Try doing that in iPhoto (or similar apps: Picasa, Lightroom, Aperture):
    Open a file... Well you can't. You have to import it to a database first. So, that done, now you view the photo.
    Fix red eye. Okay. There's no 'Save' you just move to the next picture.
    Now look at the file you imported. Notice anything about the modification date? It hasn't changed. Open the file in another app. Notice anything? Like the Red-eye isn't fixed...
    iPhoto will never edit the original file. It's called version control: it means you can always revert to the original. It preserves the original like a film shooter preserves the negative.
    You don't want that. You want to edit the original. So, use a photo editor, something that edits the original file, uses no database and works the way you want.
    In order of price here are some suggestions:
    Seashore (free)
    The Gimp (free)
    Graphic Converter ($45 approx)
    Acorn ($50 approx)
    Pixelmator ($50 approx)
    Photoshop Elements ($75 approx)
    There are many, many other options. Search on MacUpdate or the App Store.

  • Problem in data import from dump file on NAS device

    I am using Oracle 10g on Solaris 10. I have mounted a NAS device on my Solaris machine.
    I have created a directory object dir on my NAS storage, in which I have placed the dump file.
    Now when I execute the following command to import the data:
    impdp user/pass DIRECTORY=DIR DUMPFILE=MQA.DMP logfile=import.log
    then i get the following error message
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation
    Can anybody help?

    Also, Data Pump from a NAS device can perform slowly.
    Take a look at this:
    10gR2 "database backup" to nfs mounted dir results in ORA-27054 NFS error
    Note 356508.1 - NFS Changes In 10gR2 Slow Performance (RMAN, Export, Datapump), from Metalink.
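    On the ORA-39070/ORA-29283 errors above: that combination usually means the database server's oracle OS user cannot write to the path behind the directory object (or the path doesn't exist on the server where the database runs), independently of the NFS tuning. A hedged checklist, with hypothetical paths and names:

```
# 1) On the database server, confirm the oracle OS user can write to the
#    NFS path the directory object points at (path is hypothetical):
su - oracle -c "touch /mnt/nas/dumps/.writetest && rm /mnt/nas/dumps/.writetest"

# 2) In SQL*Plus as a DBA, confirm the directory object and grants
#    (directory and user names are examples):
#    SELECT directory_path FROM dba_directories WHERE directory_name = 'DIR';
#    GRANT READ, WRITE ON DIRECTORY dir TO mqa_user;
```

    Only after those check out is it worth revisiting the NFS mount options per the note above.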

  • How to automatically reconnect to a NAS device when wireless drops..

    When my mini sits idle for a while, the wireless network connection drops. I've read (I thought) that this is a "known/common" problem.
    My question is: I have a NAS device connected to my network. Every time the signal drops, I have to reconnect to the device, as it doesn't show on my desktop. Is there a way to make the mini reconnect automatically when I reconnect to my wireless signal?
    Thanks.

    As a client OS X supports the major two VPN protocols, PPTP and L2TP over IPSec. As a server technically OS X supports both of these too but the difference is this sort of thing was more aimed at OS X Server so there is no nice GUI to work with.
    Of course, I wouldn't have suggested this if there weren't a nicer solution at hand. I'm sure there are others too, but what I know works (as I use it) is [iVPN|http://macserve.org.uk/projects/ivpn], essentially a GUI wrapper for the already-installed daemons. As far as performance goes, it depends on both your connection at home and the connection wherever you are, but that would also be the case if trying to access the NAS remotely anyway. I'd also guess this will result in a safer connection, as the file-sharing protocols you're using probably aren't concerned with encryption.
    The only thing I'd say is I do believe there are issues if both the home network and the place you are at are both using the same private IP range, assuming both locations are behind a router. The reason being if both use the same range how can your machine know when you mean one network and not the other. Careful selection of the IP range at home though can minimise this by choosing a non-standard private IP range. So a lot of home routers, the majority I've come into contact with anyway use 192.168.0.xxx or 192.168.1.xxx. The entire 10.xxx.xxx.xxx though, assuming the router allows you to do so is open and if you pick something in the upper half the chances of both places doing the same has to be very slim.

  • Multiple macs and iTunes library on NAS device

    My iTunes library is stored on a Buffalo NAS drive. 
    I have an older Mac desktop - running OS X 10.6.8; on this machine iTunes 11.4 is set up to use the library on the above noted NAS device.  This is working fine; all my burned and purchased music is playing no problem. 
    I have a new (latest) version Macbook Pro Retina - running OS X Yosemite 10.10.2.  I proceeded to set iTunes 12.1.0.50 on this machine to use the exact same shared location as is being used on the desktop noted above for the library location.  On this machine, however, I am only able to view my purchased music (with the download cloud displayed). 
    Anyone know why two Macs pointed to the exact same iTunes library shared folder location would result in one working fine and the other only showing purchased music available for download?   
    Thanks

    How exactly did you set it?  You do not do it in preferences (a common mistake).  You do it by starting up iTunes while holding down the option/alt key whereupon iTunes will irreversibly convert what it sees as the library, namely the iTunes Library.itl file, to a version which will only work with the newest version of iTunes you run (your 12).  Clearly you can't use 12 on your old computer so there is no way to share the same library file. Each version of iTunes will have to run its own library file which will essentially maintain its own independent list of entries, though they can share media.  This means changes you make to one library will mostly not appear in the other but if you do something such as delete media in one library the other library will fuss it can no longer find the file, etc.
    If you are going to start doing advanced iTunes things you need to learn how iTunes works.
    What are the iTunes library files? - http://support.apple.com/kb/HT1660
    More on iTunes library files and what they do - http://en.wikipedia.org/wiki/ITunes#Media_management
    What are all those iTunes files? - http://www.macworld.com/article/139974/2009/04/itunes_files.html
    Where are my iTunes files located? - http://support.apple.com/kb/ht1391
    iTunes 9 [and later]: Understanding iTunes Media Organization - http://support.apple.com/kb/ht3847 - plus supplemental information about organizing to new structure https://discussions.apple.com/message/26404702#26404702
    Image of folder structure and explanation of different iTunes versions (turingtest2 post) - https://discussions.apple.com/docs/DOC-7392 and making an iTunes library portable.
    One other tip. Just about every mention I see of people using iTunes with a NAS is something with a problem that results from using iTunes with a NAS.  iTunes is likely not written with NAS use in mind.  It may work, or it may not.  Keep regular backups.

  • Issue when naming resources the same on different zone clusters

    Dear all
    I found a very strange issue related to the naming of resources in zone clusters, and would like to know whether what I am doing is supported.
    I have a 4-node cluster; on the first two nodes I have zone cluster A, and on the second two nodes I have zone cluster B. Each zone cluster has a unique shared address configured in it. On each zone cluster, various scalable GDS services having the same names are configured. When creating the GDS resource, the following warning cropped up: "Warning: Scalable service group for resource test has already been created". When I bring the resource on zone cluster A up, everything works fine, but when I start up the resource on zone cluster B it registers the shared address of zone cluster A on the nodes of zone cluster B, i.e. it gets confused, and thus the service on zone cluster A becomes unavailable from the shared address IP.
    Is the use of the same names for resources on two different zone clusters supported? From my perspective this issue breaks the "containment" concept of zone clusters, since zone cluster B can "confuse" zone cluster A.
    Regards
    Daniel

    Daniel,
    > Is the use of same names for resources on two different zone clusters supported? From my perspective this issue breaks the "Containment" concept of zone clusters, since zone cluster B can "confuse" zone cluster A
    Yes, the use of the same resource name on two different zone clusters is supported.
    > I have a 4 node cluster and on the first two nodes I have zone cluster A. On the second two nodes I have zone cluster B. Each zone cluster has configured in it a unique shared address. On each zone cluster various scalable GDS services having the same names are configured. When creating the GDS resource the following warning cropped up "Warning: Scalable service group for resource test has already been created"
    As mentioned above, using the same resource name is allowed and the above warning message should not be printed. I will open a bug against the product for this.
    > When I put the resource on zone cluster A up everything works fine but when I start up the resource on zone cluster B, then for some reason it registers the shared address of zone cluster A on the nodes of zone cluster B, i.e. it gets confused, and thus the service on zone cluster A becomes unavailable from the shared address IP
    This is the issue we could not reproduce in our lab, and we need reproduction steps. We were able to create the shared address resources and scalable resources with the same names in two different zone clusters (apart from seeing the above warning message) and bring them up correctly. This was not a GDS service but a regular Apache scalable service; the choice of data service should not matter here. If you can provide the following, we would be happy to investigate further and send you information:
    1) Reproducing steps (The exact order of the commands that were run to create and bring up the resources)
    2) Was your shared address resource name also the same (of course the IP address has to be different), in addition to the GDS service name?
    3) Any error messages that got printed.
    4) syslog messages related to this failure (or the syslog messages that you have seen during this time frame)
    Thanks,
    Prasanna
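
    For anyone trying to assemble the reproduction steps Prasanna asked for, here is a rough sketch of the command sequence involved. The zone cluster names (zcA, zcB), resource group and resource names (sa-rg, test-rg, test-sa, test), VIP hostnames, port, and start script are all hypothetical, and the syntax is from memory of the Sun Cluster 3.2 object-oriented CLI; check clresource(1CL), clressharedaddress(1CL), and SUNW.gds(5) on your release before running anything.

    ```shell
    # All commands are run from the global zone. The point is to use the
    # SAME resource and group names in both zone clusters -- which is
    # what triggered the "already been created" warning in the thread.
    for zc in zcA zcB; do
        # Failover group holding the shared address (unique IP per zone cluster).
        clresourcegroup create -Z $zc sa-rg
        clressharedaddress create -Z $zc -g sa-rg -h ${zc}-vip test-sa

        # Scalable group holding the GDS service; resource name "test"
        # is deliberately identical in both zone clusters.
        clresourcegroup create -Z $zc -S -p RG_dependencies=sa-rg test-rg
        clresource create -Z $zc -g test-rg -t SUNW.gds \
            -p Scalable=TRUE -p Port_list="8080/tcp" \
            -p Resource_dependencies=test-sa \
            -p Start_command="/opt/app/bin/start.sh" test

        clresourcegroup online -Z $zc -eM sa-rg test-rg
    done

    # Verify which nodes each zone cluster actually plumbed its shared
    # address on -- per the report, zcB wrongly picked up zcA's address.
    clresource status -Z zcA test-sa
    clresource status -Z zcB test-sa
    ```

    Capturing the exact commands in this form, plus the /var/adm/messages entries from the time the second zone cluster's group comes online, should cover items 1) through 4) above.
    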

Maybe you are looking for

  • PO line item undelete after GR&IR completion

    Hi gurus, please give me a suggestion related to this: a PO line item was deleted after completion of GR&IR, and now the user is asking to undelete the line item. Is it possible to undelete the line item? If possible, please suggest how it can be done. Rega

  • One Components into 2 different panel problems

    Hi, I have one component, call it A, and 2 different panels, call them PA and PB. I put A into PA, then I put A into PB, and display PA & PB on screen together. The problem is that A is missing in PA but displayed in PB. I know this is behav

  • At line-selection event triggering issue?

    Hi, I am working on an interactive report using the AT LINE-SELECTION event. In it I have 3 lists, and on the 4th list I have to perform a BDC operation; for that I have used the PF-STATUS syntax and AT USER-COMMAND code, but for this particular list my at user

  • ETO/CTO Scenario for S&OP

    Does anyone have experience/input on integrating an Engineer to Order / Customer to Order scenario into S&OP? We are looking at the following key areas where inputs are required: - This is a Project System based scenario where products are manufactured in Proj

  • My final cut pro has been slowing down unbearably and creating duplicates

    Hi. I have an appointment booked for the end of next week to have my laptop looked at but I was wondering if anyone might know what my issue is. I've edited 7 projects on final cut pro 10 and they've all been fine. For some reason every time I closed