Using a NetApp and NFS

I did some searches and in the past it sounded like the use of NFS wasn't recommended. Has this stance changed in the latest releases of the Sun Messaging Server?
The reason I ask is because I currently rely on NFS with our current email system to distribute our load to a number of frontend mail servers. We have around 130,000 accounts and receive anywhere from 1 to 2 million messages per day.
Thanks

While we do not normally recommend the use of network-attached storage via NFS for the mail store, there are specific products that have been tested and found to work.
Network Appliance is one vendor we have tested.
We do continue to find problems, however, and we are working on those. Deleting a mail folder currently causes problems.
Please note:
1. Network attached storage WILL impact your performance. It fights for bandwidth with incoming/outbound mail.
2. This will ONLY be supported with current JES2005q4 product. It will NOT be backported to 5.2.
Personally, I would not touch NFS for mail stores with a 10-foot pole. SAN is a totally different thing, though, and fully supported for all versions. It's not NFS.

Similar Messages

  • Deadlocking issue with sshfs and nfs

    Okay, I've used both sshfs and nfs for remotely accessing the home partition on my fileserver, but I have been having a problem where the networking on the server suddenly cuts out.  Any processes that are accessing the folder I mounted with nfs/sshfs become deadlocked.  Any processes that try to access my home directory, where the remote folder sits, are also deadlocked.  I cannot get into the machine with ssh.  I have to manually reboot it in order to get any networking at all.
    I also have to force-kill any known processes that are accessing the remote folder, and if I don't know what they are, I have to forcibly unmount it.  This issue has been occurring with this specific fileserver since I got it.  It is running Arch Linux i686, but has had the same problem with the server editions of both Fedora and Ubuntu.
    I don't know where to begin with fixing this problem, nor do I know how to diagnose it.

    Consider "soft" mount option for NFS.

  • [SOLVED] Netbooting with PXE, TFTP and NFS / Numerous errors

    Greetings all, hope you can help me out.
    Been given a task by my company of making a network-bootable ICA client (with X and Firefox, with the Citrix ICA client installed) as small as possible to minimize network traffic (as 440 workstations would be downloading the end product simultaneously, so it'd beat ten bells of proverbial out of the core and edge switches for a little while). I discovered two options. One being to integrate everything inside a cloop image directly inside the INITRD. I have stacks of working INITRDs with their matched kernels, yet being my first dabble into extracting the INITRD, my faffing with CPIO has resulted in me nuking my base layout (thank god for snapshotting in VMware Workstation!) 4 times, and either getting "Premature end of file" or a copious amount of lines stating "cpio: Malformed Number: <strange characters>" finally ending with "Premature end of file". As a result I went in search of another option, which would be booting off an NFS share. I followed the guide:
    http://wiki.archlinux.org/index.php/Dis … t_NFS_root
    ...in order to set up a network-booted install of Arch, and hit a few snags along the way, probably as a result of using multiple operating systems for the TFTP and NFS servers as opposed to what the guide recommends. I'm not sure, though; these errors seem solvable, although I don't know how right now.
    The set up:
    DHCP is provided by a Microsoft Windows Server 2003 VM (AD Integrated) on 172.16.10.17 on a box called "Rex".
    TFTP is provided by another Windows Server 2003 VM by "TFTPd32" which is a free download. This is located on 172.16.10.158 on a box called "Terra".
    The NFS store is provided by OpenFiler 2.3, which is a specialized version of rPath Linux designed specifically for turning boxes into dedicated NAS stores. This is located on 172.16.10.6, and is called "frcnet-nas-1".
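    For reference, the pxelinux.cfg/default driving the clients would look something like this sketch (the kernel/initrd names and export path are placeholder assumptions; the NFS server IP is from the setup above):
    default arch
    label arch
    kernel vmlinuz26
    append initrd=kernel26.img root=/dev/nfs nfsroot=172.16.10.6:/mnt/diskless ip=dhcp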
    The problem:
    DHCP is correctly configured with a Boot Host Name (Which is 172.16.10.158) and a boot file name of "pxelinux.0". This is confirmed as working.
    Client gets the kernel and INITRD from TFTP and boots up fine until it hits "Waiting for devices to settle...", by which point it echoes out "Root device /dev/nfs doesn't exist, attempting to create it...", which it seems to do fine. It then passes control over to kinit and echoes "INIT: version 2.86 booting" and the archlinux header, and immediately after that it prints:
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    /bin/mknod: '/dev/null': File exists
    /bin/mknod: '/dev/zero': File exists
    /bin/mknod: '/dev/console': File exists
    /bin/mkdir: cannot create directory '/dev/pts': File exists
    /bin/mkdir: cannot create directory '/dev/shm': File exists
    /bin/grep: /proc/cmdline: No such file or directory
    /etc/rc.sysinit: line 72: /proc/sys/kernel/hotplug: No such file or directory
    :: Using static /dev filesystem [DONE]
    :: Mounting Root Read-only [FAIL]
    :: Checking Filesystems [BUSY]
    /bin/grep: /proc/cmdline: No such file or directory
    :: Mounting Local Filesystems
    mount: only root can do that
    mount: only root can do that
    mount: only root can do that
    [DONE]
    :: Activating Swap [DONE]
    :: Configuring System Clock [DONE]
    :: Removing Leftover Files [DONE]
    :: Setting Hostname: myhost [DONE]
    :: Updating Module Dependencies [DONE]
    :: Setting Locale: en_US.utf8 [DONE]
    :: Setting Consoles to UTF-8 mode[BUSY]
    /etc/rc.sysinit: line 362: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/0: No such file or directory
    /etc/rc.sysinit: line 362: /dev/vc/1: No such file or directory
    /etc/rc.sysinit: line 363: /dev/vc/1: No such file or directory
    ... all the way down to vc/63 ...
    :: Loading Keyboard Map: us [DONE]
    INIT: Entering runlevel: 3
    :: Starting Syslog-NG [DONE]
    Error opening file for reading; filename='/proc/kmsg', error='No such file or directory (2)'
    Error initializing source driver; source='src'
    :: Starting Network...
    Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    eth0: dhcpcd 4.0.3 starting
    eth0: broadcasting inform for 172.16.10.154
    eth0: received approval for 172.16.10.154
    eth0: write_lease: Permission denied
    :: Mounting Network Filesystems
    mount: only root can do that
    [FAIL]
    :: Starting Cron Daemon [DONE]
    ...and, nothing after that, it just stops. Kernel doesn't panic, and hitting ctrl+alt+delete does what you'd expect, a clean shutdown minus a few errors about filesystems not being mounted. It seems /proc isn't getting mounted because init apparently doesn't have the appropriate permissions, and /proc not being mounted causes a whole string of other issues. Thing is, proc gets created at boot time as it contains kernel-specific information about the system and the kernel's capabilities, right? Why can't it create it? How come init doesn't have the same privileges as root as it usually would, and how would I go about fixing it?
    I admit, while I'm fairly competent in Linux, this one has me stumped. Anyone have any ideas?

    The idea behind the Windows DHCP and TFTP is that we'd be using an existing server and a NetApp box with an NFS license to serve everything off. I would have loved to make a new server which is completely Linux, but neither my boss nor the other technician has ever used Linux, so if I left for any reason they'd be stuck if ever they ran into trouble, which is why I've struggled to get Linux to penetrate our all-Windows infrastructure.
    During my hunting around on Google I found a lot of information on making my own initrd, and a lot of it using all manner of switches. I can make them fine, but I figure that I would need to look at extracting the current working one first, adding X, Firefox and the ICA client to it, then compressing it again. Cloop came about when I was looking at DSL's internals. The smaller the initrd, the better, so utilizing this could possibly be a plus too.
    The reason I'm doing this with Arch Linux is that I know Arch's internals quite well (and pacman is just wondrous, which is more than I can say for yum), so if I run into a small problem I'm more likely to fix it without consulting Google. Fair enough though, the NFS booting method is giving me issues I never thought were possible. Ahh, sod's law strikes again.
    Addendum: I've noticed something which struck me as odd. Files in the NFS share are somehow owned by 96:scanner instead of root:root. Upon attempting to change this, it tells me "Operation Not Permitted". Further prodding has led me to believe it's an Openfiler thing where GID/UID 96 on the Openfiler box is "ofgroup"/"ofguest". Chowning / to root:root puts NFS boot right ahead and gives me a prompt, however I cannot log in as root. I've also discovered that chrooting into the base from my Arch workstation and creating a directory makes the directory owned by ofgroup:ofguest again, so it's an Openfiler thing after all this time. Prodding further.
    Addendum two: For anyone using Openfiler out there, when you allow guest access to the NFS share, be sure to set the Anonymous GID and Anonymous UID to 0. By default it's 96, and as a result, when trying to boot, you get the errors I experienced. This is insecure, so you should use some sort of network/host/IP range restriction. Because the root filesystem has 96:96 as the owner of everything after you install the base layout using pacman (and of any changes you make afterward), init and root no longer have the appropriate permissions; user 96:96 (which is "scanner" in Arch Linux) has the permissions instead, and init would need to be "scanner" in order to boot completely.
    The solution is to set the Anon GID and Anon UID to 0, chown the entire diskless root filesystem to root, then use a Linux desktop to mount the diskless root filesystem, mount /proc and /sys, bind-mount /dev, and chroot into the diskless root filesystem. At this point, to clear up any problems with bad passwords, use passwd to change your password. Exit the chroot environment, then unmount the diskless proc, sys and dev. Boot up via the network and use your chosen password to log in as root. At this point, start clearing up permissions from the en masse filesystem chown and you should then have a usable diskless root.
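    Put together as commands, that recovery looks roughly like this (the export path and mount point are placeholders for your own paths):
    # Mount the diskless root from the NFS server (hypothetical export path)
    mount -t nfs frcnet-nas-1:/mnt/diskless /mnt/arch
    chown -R root:root /mnt/arch
    # Make the chroot usable, then fix the root password from inside it
    mount -t proc none /mnt/arch/proc
    mount -t sysfs none /mnt/arch/sys
    mount -o bind /dev /mnt/arch/dev
    chroot /mnt/arch /bin/bash
    passwd
    exit
    umount /mnt/arch/dev /mnt/arch/sys /mnt/arch/proc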
    I'll experiment further, clear up some of the remaining permission errors that occurred during boot, and report on my progress in fixing it. Didn't like the idea of chowning the entire share as root. :S

  • Dedupe on NetApp and disk reclaim on VMware. Odd results

    Hi, I am currently in the process of reclaiming disk space from our NetApp FAS8020 array running 7-mode 8.2.1. All of our flexvols are VMware datastores using VMFS, and all are thin-provisioned volumes. NONE of our datastores are presented using NFS. On the VMware layer we have a mixture of VMs using thin- and thick-provisioned disks; any new VMs are normally created using thin-provisioned disks. Our VMware environment is ESXi 5.0.0 U3 and we also use VSC 4.2.2. This has been quite a journey for us, and after a number of hurdles we are now able to see reclaim of volume space on the NetApp, resulting in the free space returning to the aggregate. If we used NFS we could have used the disk reclaim feature in VSC, but because that only works with NFS volumes, this wasn't an option for us. To get this all working we had to perform a few steps provided by NetApp and VMware:
    NETAPP - Set lun space_alloc to enabled - https://kb.netapp.com/support/index?page=content&id=3013572. This is disabled by default on any version of ONTAP.
    VMWARE - Set BlockDelete to value 1 on each ESXi host in the cluster - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427. This is disabled by default on the version of ESXi we are running.
    VMWARE - Rescan the VMFS datastores in VMware and update the VSC settings for each host (set recommended host settings). Once performed, check that the delete status shows as 'supported': esxcli storage core device vaai status get -d naa - http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2014849
    VMWARE - Log in to the ESXi host, go to /vmfs/volumes and the datastore where you want to run disk reclaim, and run vmkfstools -y percentage_of_deleted_blocks_to_reclaim
    NETAPP - Run sis start -s -d -o /vol/lun - this reruns deduplication, deleting the existing checkpoints and starting afresh.
    While I believe we are seeing savings on the volumes, we are not seeing the savings at the LUN layer in NetApp. The volume usage comes down, and with dedupe on I would expect the volume usage to be lower than the datastore usage, but the LUN usage doesn't go down. Does anyone know why this might be the case? Both our flexvols and LUNs are thin-provisioned, and "space reserved" is unchecked on the LUN.
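    Gathered into one rough sketch, the sequence above looks like this (the device ID, datastore name and volume/LUN paths are placeholders for your own values):
    # ESXi host: enable block delete (disabled by default on ESXi 5.0)
    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
    # ESXi host: confirm VAAI Delete status shows as supported for the device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
    # ESXi host: reclaim, here asking for 60% of the deleted blocks
    cd /vmfs/volumes/datastore1
    vmkfstools -y 60
    # NetApp 7-mode: enable space allocation on the LUN, then rerun dedupe afresh
    lun set space_alloc /vol/vmvol/lun0 enable
    sis start -s -d -o /vol/vmvol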

    Hi,
    Simple answer is yes. It's just the matter of visibility of the disks on the virtual servers. You need to configure the disks appropriately so that some of them are accessible from both nodes e.g. OCR or Voting disks and some are local, but many of the answers depend on the setup that you are going to choose.
    Regards,
    Jarek

  • How to use external tables - creating an NFS mount - the details involved

    Hi,
    We are using Oracle 10.2.0.3 on Solaris 10. I want to use external tables to load huge CSV data into the database. This concept was tested and found to be working fine. But my doubt is this: since ours is a J2EE application, the CSV files have to come from the front end - from the app server. So in this case, how do we move them to the db server?
    For my testing I just used putty to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, putty cannot be used. In this case, how can this be done? Are there any risks or security issues involved in this process?
    Regards

    orausern wrote:
    For my testing I just used putty to transfer the file to the db server, then ran the dos2unix command to strip off the control character at the end of the file. But since this is to be done from the app server, putty cannot be used. In this case, how can this be done? Are there any risks or security issues involved in this process?
    Not sure why "putty" cannot be used. This s/w uses the standard telnet and ssh protocols. Why would it not work?
    As for getting the files from the app server to the db server. There are a number of options.
    You can look at it from an o/s replication level. The command rdist is common on most (if not all) Unix/Linux flavours and is used for remote distribution and sync'ing of files and directories. It also supports scp as the underlying protocol (instead of the older rcp protocol).
    You can use file sharing - the typical Unix approach would be to use NFS. Samba is also an option if NTLM (Windows) is already used in the organisation and you want to hook this into your existing security infrastructure (e.g. using Microsoft's Active Directory).
    You can use a cluster file system - a file system that resides on shared storage and can be used by both app and db servers as a mounted/cooked file system. Cluster file systems like ACFS, OCFS2 and GFS exist for Linux.
    You can go for a pull method - where the db server, on client instruction (which provides the file details), connects to the app server (using scp/sftp/ftp), copies that file from the app server, and then proceeds to load it. You can even add a compression feature to this - so that the db server copies a zipped file from the app server and then unzips it for loading.
    Security issues? Well, if the internals are not exposed, then security will not be a problem. For example: defining a trusted connection between app server and db server, so the client instruction does not have to contain any authentication data; letting the client instruction only specify the filename and having the internal code use a standard and fixed directory structure. That way the client cannot instruct something like +/etc/shadow+ to be copied from the app server and loaded into the db server as a data file. Etc.
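    As a rough sketch of the NFS option on Solaris 10 (the server name, export and directory paths are placeholders, not your actual values):
    # Mount the app server's export read-only on the db server (hypothetical paths)
    mount -F nfs -o ro appserver:/export/csv /u01/csv
    # Point an Oracle directory object at it for the external table to read from
    sqlplus / as sysdba <<'EOF'
    CREATE OR REPLACE DIRECTORY csv_dir AS '/u01/csv';
    GRANT READ ON DIRECTORY csv_dir TO app_user;
    EOF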

  • 7110 OMG, CIFS and NFS permission woes. I'm tired and I want to go home.

    OK, here's the dealio...
    I have a share exported via CIFS and NFS from our 7110 array running 2010.02.09.2.1,1-1.18.
    I have AD configured for CIFS Authentication.
    I have a UNIX desktop, so I am using SMB to authenticate via AD and talk to the CIFS share on the array.
    I have the NFS share mounted using vers 3 on Solaris 10.
    Now, the problem..........
    PERMISSIONS!!!
    Here’s what I want to do,
    Create a file or folder over CIFS and preserve the username on NFS.
    Example, I login as myself via AD, bam I’m on the array.
    Create a file.
    Check the ownership of the file on the NFS mount and it has suddenly become a series of numbers, which I assume are taken from my Windows SID. As Solaris can't relate my SID to a UNIX username, I'm left out in the dark.
    So I then tried to set up some rule-based identity mapping so my Windows login would be converted to my UNIX username; no luck, still a series of numbers listed against my files.
    I could work around this if I could chown, but I can't even do that, as it says chown: filename: Not owner
    What gives? How do I keep my username from CIFS to NFS? HELP!!!!

    Did you have any joy with this?
    I have never been able to determine a consistent configuration for NFS/CIFS sharing on a 7310. Ended up opening access to all on the NFS side (v4) and the CIFS just worked out of the box.
    I am using ID Mapping, with IDMU first, then rule based mapping next. The box picks up the correct UID/GID from AD but doesn't always inherit the user & group for the NFS side.
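    On a plain Solaris box, a rule-based mapping would be expressed something like this (the domain and the wildcard rule are placeholder assumptions; the 7000-series appliance exposes the same idea through its BUI):
    # Hypothetical rule: map every AD user to the same-named UNIX user
    idmap add 'winuser:*@EXAMPLE.COM' 'unixuser:*'
    # Verify the rules in place
    idmap list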
    Chris

  • Doubt About FTP And NFS

    Hi Experts,
    1. What is the difference between FTP and NFS (as transport protocols)?
    2. When will we use FTP and when NFS - in which case?
    Please let me know in detail.
    Regards
    Khanna

    Hi, thanks for your quick reply.
    As you told:
    >>>> "that the client's system is across your network and the client is not ready to send you the file. At that time you have to use FTP."
    This is OK.
    Q: And for this, do we need to be on a VPN or not?
    "In a scenario where the XI system could store the file on their server (e.g. cases where the organization has their XI in place and they don't want to add an extra FTP server to their scenario, they can directly paste the file on the XI file system), NFS is used."
    In this case you need to put the file on the XI server. From where will you get the file to put on the server (via the internet, by hand, or the like)?
    Please let me know all the details.
    Regards
    Khanna

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP as well as used an rsize and wsize that doubled what I had above. It didn't help.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could have been down to server settings that could not be changed because it was an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.
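    In case it helps anyone reproduce this, the same tuning can be expressed as a one-off mount instead of /etc/nfs.conf (the server name and export path are placeholders):
    # Hypothetical NAS export; forces NFSv3 over TCP with 32 KB transfer sizes
    sudo mount -t nfs -o rsize=32768,wsize=32768,vers=3,tcp nas:/raid0/test /Volumes/test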

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).

  • LXC and NFS

    Has anyone been successful using OL6.3/LXC/NFS? I set up NFS on the host, exporting home directories. In the container, when I try to mount the exported file system, I get an error message that says the filesystem couldn't be mounted. In /var/log/messages on the host I see an error that indicates the host is unknown. OK, I don't have the container set up for reverse lookups, so I can see that. But the interesting part is that the "unknown host" error includes the IP address of the host, not the container. It almost seems like there is a breakdown in the kernel isolating the IPs.
    I was eventually able to get this to work by modifying the /etc/exports file to read "/export/home *(rw)" instead of specifying a specific IP or range of IPs. But I don't find this solution acceptable. Anyone got this working?

    Did a lot of googling over the holiday weekend and it appears to be a known issue. The solution is to either allow all hosts access (as I did) or to include the host and container IP addresses.
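    For example, listing the host and container addresses explicitly instead of the wildcard (both addresses are placeholders for your own):
    # /etc/exports - hypothetical host and container IPs
    /export/home 192.168.122.1(rw,sync) 192.168.122.101(rw,sync)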

  • LDAP and NFS mounts/setup OSX Lion iMac with Mac Mini Lion Server

    Hello all,
    I have a local account on my iMac (Lion), and I also have a Mac Mini (Lion Server) and I want to use LDAP and NFS to mount the /Users directory, but am having trouble.
    We have a combination of Linux (Ubuntu), Windows 7 and Macs on this network using LDAP and NFS, except the Windows computers.
    We have created users in Workgroup Manager on the server, and we have it working on a few Macs already, but I wasn't there to see that process.
    Is there a way to keep my local account separate, and still have NFS access to /Users on the server and LDAP for authentication?
    Thanks,
    -Matt

    It would make a great server. Bonus over Apple TV for example is that you have access via both wired ethernet and wireless. Plus if you load tools from XBMC, Firecore and others you have a significant media server. Cost is right too.
    Many people are doing this - google mac mini media server or other for more info.
    Total downside to any Windows-based system - dealing with constant anti-virus, major security hassles, lack of true media integration, PITA to update, etc.
    You should be aware that Lion Server is not ready for prime time - it still has significant issues if you are migrating from Snow Leopard 10.6.8. If you buy an Apple fresh Lion Server Mac mini you should have no problems.
    You'll probably be pleased.

  • How do we use JOB_OPEN, JOB_SUBMIT and JOB_CLOSE?

    Hi Experts,
    I am new to development and I need some help. My problem is this:
    I have a report where I am getting the data into an internal table, and I want to schedule that data fetch as a background job.
    Can anybody tell me how I can use the JOB_OPEN and JOB_SUBMIT function modules?
    Please provide an example.
    Thanks in Advance,
    Venkat N

    Hi,
    Here is the sample program
    *Submit report as job(i.e. in background) 
    data: jobname like tbtcjob-jobname value
                                 ' TRANSFER TRANSLATION'.
    data: jobcount like tbtcjob-jobcount,
          host like msxxlist-host.
    data: begin of starttime.
            include structure tbtcstrt.
    data: end of starttime.
    data: starttimeimmediate like btch0000-char1.
    * Job open
      call function 'JOB_OPEN'
           exporting
                delanfrep        = ' '
                jobgroup         = ' '
                jobname          = jobname
                sdlstrtdt        = sy-datum
                sdlstrttm        = sy-uzeit
           importing
                jobcount         = jobcount
           exceptions
                cant_create_job  = 01
                invalid_job_data = 02
                jobname_missing  = 03.
      if sy-subrc ne 0.
                                           "error processing
      endif.
    * Insert process into job
    SUBMIT zreport and return
                    with p_param1 = 'value'
                    with p_param2 = 'value'
                    user sy-uname
                    via job jobname
                    number jobcount.
      if sy-subrc > 0.
                                           "error processing
      endif.
    * Close job
      starttime-sdlstrtdt = sy-datum + 1.
      starttime-sdlstrttm = '220000'.
      call function 'JOB_CLOSE'
           exporting
                event_id             = starttime-eventid
                event_param          = starttime-eventparm
                event_periodic       = starttime-periodic
                jobcount             = jobcount
                jobname              = jobname
                laststrtdt           = starttime-laststrtdt
                laststrttm           = starttime-laststrttm
                prddays              = 1
                prdhours             = 0
                prdmins              = 0
                prdmonths            = 0
                prdweeks             = 0
                sdlstrtdt            = starttime-sdlstrtdt
                sdlstrttm            = starttime-sdlstrttm
                strtimmed            = starttimeimmediate
                targetsystem         = host
           exceptions
                cant_start_immediate = 01
                invalid_startdate    = 02
                jobname_missing      = 03
                job_close_failed     = 04
                job_nosteps          = 05
                job_notex            = 06
                lock_failed          = 07
                others               = 99.
      if sy-subrc ne 0.
                                           "error processing
      endif.
    Regards
    Sudheer

  • I'm using a MacBook Pro and I'm unable to find iPhoto after upgrading to Mavericks


    Open the Mac App Store application and see if iPhoto is under your "Purchases" tab.
    Clinton

  • Help using multiple iPhones and iPods on iTunes

    Okay, is there any simple way to use multiple Apple products through iTunes? I can log in on my account and sync my iPhone/iPod, then I log out and log back in with my daughter's account info. I plug her iPod touch in and it wants to load all of my apps (some apps we both have on our devices). We have problems with music sharing as well. Still a PC user. I get very frustrated with iTunes and spend way too much time trying to do things that should be simple. Please help! Thank you!

    Are you using method 1 with different windows user accounts?
    http://support.apple.com/kb/HT1495
    Sounds like you are currently using method 2 and not happy with it.

  • I want to buy a new Apple TV, but it uses HDMI cables and my house is only wired for analog. Is there any way I can use the Apple TV with analog cables?


    Welcome to the Apple Community.
    It's do-able, but I don't think it's a great idea.
    DVI
    Some users with DVI have managed to get their TVs to work with DVI-HDMI cables. DVI carries no audio, so alternative connections need to be explored to enable audio. DVI doesn't necessarily support HDCP as well as other standards used by HDMI (which may or may not be an issue).
    Analogue
    There are hardware converters that will convert HDMI to various other types of output, however there are some issues with doing so that you should be aware of.
    HDCP
    HDCP-compliant converters will not allow you to watch HDCP-protected content such as that from the iTunes Store. Non-compliant converters exist, but we cannot discuss them under the Terms of Use for these communities.
    Resolution and aspect ratio
    I'm not aware of any converters that will scale the output from the Apple TV; any TV or projector used will need to be widescreen and support resolutions of 720p (Apple TV 2) or 720p/1080p (Apple TV 3).
    DAC
    DAC (Example Only - Not a recommendation or suggestion that this is suitable in your circumstances)

  • iPad 2 and a new MacBook running Lion; both devices use the same Apple ID, which is a Hotmail email ID. Can I create a new iCloud email ID and use iCloud email, and continue to use my Hotmail ID as my Apple ID for iTunes and iCloud?

    I have an iPad 2 and a new MacBook running Mountain Lion. Both devices use the same Apple ID, which is a Hotmail email ID. Can I create a new iCloud email ID and use iCloud email, and continue to use my current Hotmail ID as my Apple ID for iTunes and iCloud?
    Note: I will use both Hotmail and iCloud email.

    Welcome to the Apple Community.
    In order to change your Apple ID or password for your iCloud account on your iOS device, you need to delete the account from your iOS device first, then add it back using your updated details. (Settings > iCloud, scroll down and hit "Delete Account")
