7410 - NFS mount question

Good afternoon,
I have a very large NFS share created on my 7410 system. I have no difficulty mounting the share on my clients (which is actually the problem: any client can mount it).
In a Solaris environment, you can do: "share -F nfs -o rw=thatnode /mountpoint" and the /mountpoint will be shared out via NFS, but only the node whose name is "thatnode" can mount that NFS share.
I see some very vague references in the 7410 documentation on this, and in particular in the release notes for the Q3 upgrade to the 7410 firmware - but I have yet to find this functionality thoroughly documented.
So the question is: how do I restrict the node (or nodes) that can mount the NFS file system on my 7410?
Thank you, and have a great evening,
Howard

In the Shares configuration, select a project or file system to edit and choose the "Protocols" page. There will be an "NFS Exceptions" option you can use to configure host-level access (RO or RW) and root access to the file system or project.
This sets a property on the file system:
sharenfs = rw,root=mickey.mydomain.com
This property can also be set via the CLI.
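For reference, a rough sketch of the CLI equivalent; the project and share names here are hypothetical, and the exact sharenfs syntax can vary by software release, so treat this as an outline rather than exact commands:
shares select myproject select myshare
set sharenfs="rw=mickey.mydomain.com,root=mickey.mydomain.com"
commit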

Similar Messages

  • NFS mount question

    Solaris: 5.10 on x86
    On machine A, we have a filesystem with mountpoint /data/oradata. We need to mount this FS on machine B and make it accessible (read/write) to OS user oracle on machine B. Should the OS user oracle have the same UID and group ID as the oracle user on machine A? Our sysadmin said it is necessary.

    From:
    http://docs.sun.com/app/docs/doc/817-1985/userconcept-97366?l=en&a=view
    User ID Numbers
    Associated with each user name is a user identification number (UID). The UID number identifies the user name to any system on which the user attempts to log in. And, the UID number is used by systems to identify the owners of files and directories. If you create user accounts for a single individual on a number of different systems, always use the same user name and ID number. In that way, the user can easily move files between systems without ownership problems.
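    A quick way to verify is to compare the numeric IDs on both machines, since NFS records ownership by number rather than by name:
    id oracle              # run on both machine A and machine B; the uid/gid values should match
    ls -ln /data/oradata   # shows the numeric owner/group the server has recorded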
    .7/M.

  • 7410 NFS server not responding

    Greetings,
    Anyone seeing "NFS server ... not responding" from a client of a 7410?
    I have a 7410 with a single J4400 (22 1TB drives @ RAID1, 1 spare, 1 logzilla). It's running 2009.09.01.3.0,1-1.8, which I believe is the latest and greatest. There is one client, a T2000 running Solaris 10. We're using NFS v3 to mount four shares from the 7410. Mount options look like this:
    box-nge2:/export/oracle/data     -       /oradata/data   nfs     -   yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
    A fairly large (~1TB) Oracle database lives here (although there was very little activity in the database when the following occurred).
    Today I was cleaning up some old data files from a now unused Oracle instance. Pretty simple: rm -rf /oradata/data/DO-NOT-WANT/ . I was surprised to see the command appear to hang, and the "NFS server kwaltz-nge2 not responding still trying" message appear. Control-C eventually got me my prompt back.
    When the system became responsive again a few minutes later, I tried deleting files one at a time. Deleting some "large" data files (i.e. 200 GB or more) was taking more than 2 minutes and caused the "NFS server not responding" message. Smaller files would take a few seconds.
    Why is a simple "rm" command bringing the 7410 to its knees? Any thoughts? Thanks.

    After some update issues, we also lost NFS. I created a test share and exported it to one of our Solaris 10 hosts. I got an rpcbind failure. This failure wasn't corrected by restarting the NFS service, nor by rebooting the 7410. Rather, I had to DISABLE the NFS service, then re-enable it. After that, all connectivity returned.
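    For anyone hitting the same rpcbind failure: I believe the same disable/enable cycle can also be driven from the appliance CLI's services context, along these lines (syntax from memory, so verify against your release):
    configuration services nfs disable
    configuration services nfs enable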
    Charles

  • NFS mount point does not allow file creation via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files no problem, and they appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group along with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mountpoint works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was to not create the file via java.io.File.createNewFile() before adding content via an OutputStream; if the file creation is left until the content is added, as shown below, the problem does not occur.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS (at the operating system level, rather than adding a folder path relationship via the Java API)?
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the line below causes the failure:
        //   java.io.IOException: Operation not supported on transport endpoint
        //   at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //   at java.io.File.createNewFile(File.java:828)
        //   at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //   at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        // Opening the stream creates the file implicitly, which avoids the error.
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Windows server 2008 R2 and NFS mounted subdirectories.

    I am mounting from a Windows Server 2008 R2 box to a RHEL 6.3 machine and I am able to see the folders; however, I am unable to see the sub-folders via the mapped NFS mount in Windows. Any ideas as to why? Some additional facts are below.
    1. We are using an NFS mount from Windows to RHEL (mount -o \\RHELBOX\ops\resources R:). We NFS mount from RHEL to a storage device; the fstab entries look like this:
    10.9.9.9:/vol/afpres1/psf_prod   /ops/resources/prod   nfs   _netdev,defaults   0 0
    10.9.9.9:/vol/afpres1/psf_test   /ops/resources/test   nfs   _netdev,defaults   0 0
    10.9.9.9:/vol/afpres1/baselib    /ops/resources/psf    nfs   _netdev,defaults   0 0
    The RHEL export file looks like this.
    /ops/resources 10.4.4.4(rw,sync,no_all_squash,insecure,nohide)  (the 10.4.4.4 IP address is that of the 2008 R2 server)
    /ops/resources 10.4.4.5(rw,sync,no_all_squash,insecure,nohide)
    /ops 10.4.11.66(rw,sync,no_all_squash,insecure,nohide)
    #/ops/resources *(rw,sync,no_all_squash,insecure,nohide)
    /opspool *(rw,sync)
    2. We are not using Samba or CIFS
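    A couple of checks that may help narrow this down (showmount ships with the Windows Services for NFS admin tools; exportfs is standard on RHEL):
    exportfs -v            # on the RHEL server: show the active export table and options
    showmount -e RHELBOX   # on the Windows client: show the exports visible over the wire
    One caveat, offered as an assumption rather than a diagnosis: the sub-folders here are themselves NFS mounts from 10.9.9.9, and a Linux NFS server of this era generally cannot re-export file systems it has itself mounted over NFS, which could explain why they appear empty through the Windows mount.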

    Hello,
    The TechNet Sandbox forum is designed for users to try out the new forums functionality. Please be respectful of others, and do not expect replies to questions asked here.
    As it's off-topic here, I am moving the question to the "Where is the forum for...?" forum.
    Karl

  • Missing menu "File|NFS Mounts" in Disk utility

    I had many NFS mounts created using Disk Utility's "File|NFS Mounts", but now that option is missing. I can't see my mounts, nor are those mounts working, so I have two questions:
    1. Where can I see my old mounts listed?
    2. How can I make the NFS mounts work?

    The problem is Boot Camp: it uses a hybrid GPT/MBR partitioning scheme, which ends up hiding the Recovery HD partition, an EFI physical partition (neither GPT nor MBR).
    I would expect a new version of Boot Camp to be released soon because of the Recovery HD partition invisibility issue.
    In this article, it is suggested that rEFIt should be used to partition a hard drive that is going to support multiple boots, including Mac OS X, Linux, and Windows.
    (http://wiki.onmac.net/index.php/Triple_Boot_via_BootCamp)
    The key in the article to using Boot Camp with rEFIt is this:
    "Run the Boot Camp Assistant and create the Windows XP driver cd. Then exit Boot Camp.  DO NOT PARTITION USING BOOT CAMP: you are only using Boot Camp for the drivers, not the partitioning."
    All partitioning is done in terminal mode using the "diskutil" command.
    rEFIt is used to update both the GPT and MBR records so that all partitions will be visible using its "gptsync" command.
    Then - you replace the standard Mac boot menu with the rEFIt boot menu.  THAT will show the Mac OS X partition, Recovery HD (an EFI partition), and the Windows partition.
    My caveat is that rEFIt, which is open source and available at http://refit.sourceforge.net, has not recently been updated and tested with respect to Mac OS X Lion.
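    For what it's worth, gptsync also ships as a command-line tool with the rEFIt distribution; a minimal sketch, assuming the boot disk is disk0 (check with diskutil list first):
    diskutil list            # identify the target disk
    sudo gptsync /dev/disk0  # rewrite the MBR to match the GPT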
    Hope this helps!

  • NFS Mounted Directory And Files Quit Responding

    I mounted a remote directory using NFS and I can access the mount point and all of its sub-directories and files. After a while, all of the sub-directories and files no longer respond when clicked; in column view there is no longer an icon nor any statistics for those files. If I go back and click on Network->Servers->myserver->its_subdirectories, it will eventually respond again.
    I have found no messages in the system log. And nfsstat shows no errors.
    I am using these mount parameters with the Directory Utility->Mounts tab:
    ro net -P -T -3
    Any idea why the NFS mounted directories and files quit responding?
    Thanks.

    I may have found an answer to my own question.
    It looks like automount will automatically unmount a file system if it has not been accessed in 10 minutes. This time-out can be changed using the automount command. I am going to try increasing this time-out value.
    Here is part of the man page:
    SYNOPSIS
    automount [-v] [-c] [-t timeout]
    -t timeout
    Set to timeout seconds the time after which an automounted file
    system will be unmounted if it hasn't been referred to within
    that period of time. The default is 10 minutes (600 seconds).
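    Based on that man page, bumping the timeout would look like this (one hour is just an example value):
    sudo automount -t 3600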

  • Stop scanning/mounting NFS mounts

    Hi,
    I have a Linux desktop. I used to transfer files between my MacBook and my desktop using NFS. I no longer need to transfer files, so I stopped using the NFS service a long time ago. But my Mac still searches for and tries to connect to the server, and my desktop HDD still shows up (it looks like an alias) in the various disk monitoring applications. Is there a way to stop this? The Console message is as follows:
    com.apple.automountd: mount_nfs: can't mount /harshad from 192.168.1.4 onto /Volumes/minty: Host is down
    And another completely unrelated question: How do you stay signed in in the support communities?
    Thanks.

    You need to do a "svcadm enable" for the following services, or NFS mounts at boot won't work...
    svc:/network/nfs/status
    svc:/network/nfs/nlockmgr
    svc:/network/nfs/client
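    Spelled out as commands, that would be:
    svcadm enable svc:/network/nfs/status
    svcadm enable svc:/network/nfs/nlockmgr
    svcadm enable svc:/network/nfs/client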

  • Effective NFS mount options

    My computer is running Mac OS X 10.7.5. I have a question regarding NFS mount options.
    The following mount options are defined in /etc/nfs.conf
    didymus:~ service$ cat /etc/nfs.conf
    nfs.client.mount.options = tcp,rw,nfc,intr,async,rdirplus,rsize=65536,wsize=65536
    nfs.client.allow_async=1
    If I run the mount command, I see the following:
    10.72.6.11:/ on /Volumes/10.72.6.11 (nfs, asynchronous, nodev, nosuid, mounted by service)
    How do I verify that the other mount options specified (intr, rsize, wsize, etc.) have taken effect? I presume there is some mechanism for validating this.

    Never mind, I appear to have located it: nfsstat -m seems to do the trick.
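    For anyone else looking, that's simply:
    nfsstat -m    # prints each NFS mount along with the options actually in effect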

  • Adaptive Computing Controller / do I need NFS mounts for Application Server?

    Hello,
    I have a question about the Adaptive Computing Controller.
    Do I need NFS mounts for the application servers?
    Or can I handle this from the SAN instead?
    Can anybody help me here?
    Thanks

    http://ww2.cs.fsu.edu/~rosentha/linux/2.6.26.5/docs/DocBook/libata/ch07.html#excatATAbusErr wrote:
    ATA bus error means that data corruption occurred during transmission over ATA bus (SATA or PATA). This type of errors can be indicated by
    ICRC or ABRT error as described in the section called “ATA/ATAPI device error (non-NCQ / non-CHECK CONDITION)”.
    Controller-specific error completion with error information indicating transmission error.
    On some controllers, command timeout. In this case, there may be a mechanism to determine that the timeout is due to transmission error.
    Unknown/random errors, timeouts and all sorts of weirdities.
    As described above, transmission errors can cause wide variety of symptoms ranging from device ICRC error to random device lockup, and, for many cases, there is no way to tell if an error condition is due to transmission error or not; therefore, it's necessary to employ some kind of heuristic when dealing with errors and timeouts. For example, encountering repetitive ABRT errors for known supported command is likely to indicate ATA bus error.
    Once it's determined that ATA bus errors have possibly occurred, lowering ATA bus transmission speed is one of actions which may alleviate the problem.
    I'd also add: make sure you have good backups when ATA errors are frequent.

  • Vi error on nfs mount; E212: Can't open file for writing

    Hi all,
    I've setup a umask of 0 for testing on both NFS client (Centos 5.2) and NFS server (OSX 10.5.5 server).
    I can create files as one user and edit/save them out as another user without issue when directly logged into the server via ARD.
    However, when I attempt the same from an NFS mount on a client machine, even as root, I get the following error using vi:
    "file" E212: Can't open file for writing
    Looking at the system.log file on the server, I see;
    kernel[0]: add_fsevent: no name hard-link! dropping the event. (event 2 vp == 0xa5db510 (-UNKNOWN-FILE)).
    This baffles me. My umask is 0, meaning files I create and attempt to edit as other users are 777, but I cannot save out edits unless I do a wq! in vi. At that point, the owner of the file changes to whoever did the vi.
    This isn't just a vi issue as it happens using any editor, but I like to use vi.
    Any help is greatly appreciated. Hey, beer is on me!

    Hi all,
    Thanks for the replies
    I've narrowed it down to a Centos client issue.
    Everything works fine using other Linux based OS's as clients.
    Since we have such a huge investment in CentOS, I must figure out a workaround. Apple support wasn't much help as usual, although they were very nice.
    Their usual response is "it's unsupported".
    If Apple really wants to play in the enterprise or business space, they really need to change their philosophy. I mean, telling me that I shouldn't mount home directories via NFS is completely ridiculous.
    What am I supposed to use then, Samba or AFP? No, I don't think so. No offense to Microsoft, but why would I use a Windows-based file sharing protocol to mount network shares in a *nix environment?

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I had previously set up an automounted NFS share with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to get the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I have investigated it today and I can't get it to run even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd, and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert of NFS, so I wonder if someone else stumbled across this, and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally got around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and firewall wide open.
    I guess that the Finder somehow triggers file locking over NFS.
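    If Finder-triggered locking is indeed the culprit, one workaround worth trying (with the caveat that it disables NFS locking for this mount, so only use it if nothing else relies on those locks) is the nolocks option that mount_nfs accepts on Mac OS X:
    sudo mount_nfs -o resvport,nolocks xxxx:/home /Users/yyyy/xxxx-home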

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS share from my iMac. I had set this up with Directory Utility, it was instantly functional, and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly. However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs, and I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc.). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
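    Concretely, the change on the Linux side would look something like this (add "insecure" to the export options, then re-export):
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)
    sudo exportfs -ra
    Or, on the Mac side, mount as root from a reserved port:
    sudo mount -t nfs -o resvport 192.168.1.100:/share /Volumes/share1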
    HTH
    --macko

  • Expdp fails to create .dmp files on an NFS mount point in Solaris 10, Oracle 10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    I have given read/write grants to public as well as to the specific user.

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the below. I am able to touch files there; while exporting, the log file is also created, yet I still get the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions
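    One avenue worth checking, offered as an assumption rather than a confirmed fix: "Operation not supported on transport endpoint" over NFS often points at file locking, which Data Pump needs but a plain touch does not exercise. Verify the lock manager is registered on the NFS server, and if locking is the problem, Solaris supports local locking via the llock mount option:
    rpcinfo -p 172.20.2.204    # look for nlockmgr and status in the output
    mount -o hard,rw,noac,llock,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db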

  • Anyone else having problems with NFS mounts not showing up?

    Since Lion, I cannot see NFS shares anymore. The folder that had them is still there, but the share will not mount. It worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS mounts show up in the Terminal, go to the local mount directory (i.e. the mount location in the NFS Mounts pane of Disk Utility) and do the following.
    First, create a link file:
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next, create a new directory, say in the root of the host drive (i.e. Macintosh HDD):
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Move the link file created above to the new directory:
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then, in Finder, locate the new link storage directory; the link files should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new directory, place it in the left sidebar. Now it works just like before.
