NFS Problem

Hi,
I have some problems with an NFS mount to a NetApp filer.
The mount options are as follows:
soft,noac,rsize=65536,wsize=65536,vers=3,proto=udp
In /var/adm/messages I get messages like this:
Dec 3 08:04:11 uaizk15 nfs: [ID 664466 kern.notice] NFS read failed for server filer406: error 5 (RPC: Timed out)
Dec 2 13:27:15 uaizk15 nfs: [ID 664466 kern.notice] NFS fsstat failed for server filer406: error 16 (RPC: Failed (unspecified error))
After changing the mount options to “rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,nosuid,proto=tcp”, I lost the connection:
Dec 2 13:17:32 uaizk15 nfs: [ID 333984 kern.notice] NFS server filer406 not responding still trying
ping is OK.
Does anybody have an idea?
Thanks, Peter
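For reference, applying the second set of options by hand on Solaris might look roughly like this (the export path /vol/vol1 and the mount point /mnt/filer406 are assumptions, not taken from the post):
umount /mnt/filer406
mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,vers=3,nosuid,proto=tcp \
    filer406:/vol/vol1 /mnt/filer406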
Checking nfsd:
root@uaizk15:/# rpcinfo -T tcp filer406 nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
root@uaizk15:/# rpcinfo -T udp filer406 nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
NFSSTAT:
root@uaizk15:/var/adm# nfsstat -c
Client rpc:
Connection oriented:
calls badcalls badxids timeouts newcreds badverfs
823 248 0 198 0 0
timers cantconn nomem interrupts
0 0 0 0
Connectionless:
calls badcalls retrans badxids timeouts newcreds
49436457 502 2573 0 3075 0
badverfs timers nomem cantsend endpoints
0 2834 0 0 1
Client nfs:
calls badcalls clgets cltoomany
48781688 432 48783564 3
Version 2: (0 calls)
null getattr setattr root lookup readlink
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
read wrcache write create remove rename
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
link symlink mkdir rmdir readdir statfs
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
Version 3: (48784709 calls)
null getattr setattr lookup access readlink
0 0% 40469432 82% 0 0% 76553 0% 3734660 7% 0 0%
read write create mkdir symlink mknod
3648436 7% 769910 1% 5157 0% 37858 0% 0 0% 0 0%
remove rmdir rename link readdir readdirplus
5030 0% 12307 0% 3181 0% 0 0% 6049 0% 12555 0%
fsstat fsinfo pathconf commit
3579 0% 2 0% 0 0% 0 0%
Client nfs_acl:
Version 2: (0 calls)
null getacl setacl getattr access getxattrdir
0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
Version 3: (1 calls)
null getacl setacl getxattrdir
0 0% 1 100% 0 0% 0 0%

I'm sorry that I wasn't clear enough. The head unit (regular Xserve) has 3 internal drives, with one containing the OS. It is one of the other two drives that unmounts. The other 29 are cluster units and only have one internal drive. When whatever it is happens, both of the NFS shared drives (one internal and the RAID) unmount.
The RAID is directly attached to the host. All connections have been checked. No errors appear in the RAID log or the system log.
Originally we had the home directories and data collection going to the RAID. It would exhibit the same issues (but with only the RAID dropping offline). We then switched the data collection to the internal drive, while the home folders remained on the RAID (so very little data xfer going to the RAID now). Now BOTH drives get kicked offline when whatever it is happens.
We are using NFS instead of AFP because of an apparent permission issue with Gridware and AFP. Jobs seem to run only as the user that submits them, which then locks out all subsequent jobs from being run. We have not tried ignoring all permissions on the drive, as that may cause other issues.
All drives are, I believe, HFS+ and journaled. The RAID is 1 terabyte. The other drive is 80GB. When they get kicked off and are brought back on (always requiring a reboot of the head unit), they seem to go through a directory compare and rebuild. Happens every time. The non-NFS shared drives do not go through this process. There does not appear to be any data loss or corruption.
The simulation creates lots (thousands) of small files (both for submission and results). Not many directories.
So, to summarize, when this happens, all NFS-shared drives drop offline and have to go through a rebuild (even if they are not RAID) once they come back on (which can take nearly 2 hours with the RAID).
Hope this clarifies a bit. We are certainly stumped.
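For what it's worth, a minimal set of client-side checks while the drives are dropping might look like this (the head-unit hostname "headnode" is a placeholder):
showmount -e headnode         # list what the head unit is currently exporting
nfsstat -c                    # client RPC retransmit/timeout counters on a cluster node
tail -f /var/log/system.log   # watch for "not responding" messages as it happens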

Similar Messages

  • Strange networking/NFS problem

    Hello,
Occasionally I'm having NFS slowdowns, then I see this information in dmesg on the client:
    nfs: server galaxy2 not responding, still trying
    nfs: server galaxy2 not responding, still trying
    NETDEV WATCHDOG: eth0: transmit timed out
    nfs: server galaxy2 OK
    nfs: server galaxy2 OK
    nfs: server galaxy2 not responding, still trying
    NETDEV WATCHDOG: eth0: transmit timed out
    nfs: server galaxy2 OK
    nfs: server galaxy2 not responding, still trying
    NETDEV WATCHDOG: eth0: transmit timed out
    nfs: server galaxy2 OK
    nfs: server galaxy2 not responding, still trying
    nfs: server galaxy2 not responding, still trying
    nfs: server galaxy2 OK
    nfs: server galaxy2 OK
On the server, however, I see nothing in dmesg. But I do see this:
    eth0 Link encap:Ethernet HWaddr 00:0E:0C:72:83:51
    inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.0.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:185954 errors:774 dropped:774 overruns:774 frame:0
    (notice the high errors/dropped/overruns)
The LAN is a Gigabit LAN; on the server:
    00:09.0 Ethernet controller: Intel Corp.|82547EI Gigabit Ethernet Controller (e1000)
    on the client:
    00:0a.0 Bridge: nVidia Corp.|Ethernet controller (forcedeth)
    The entry in rc.conf for the network looks like this:
    client:
    eth0="eth0 192.168.1.102 netmask 255.255.0.0 broadcast 192.168.1.255"
    server:
    eth0="eth0 192.168.1.1 netmask 255.255.0.0 broadcast 192.168.1.255"
    Is this an NFS problem or a networking problem??
    Thanks.

The netmask should be 255.255.255.0 according to your broadcast address.
You should also have a look at the network components in front of your server; an easy check would be to change the cable.
Once our database was unstable due to a cable problem (it had been fine for 6 months before).
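    Based on the addresses shown above, the corrected rc.conf entries would read as follows (only the netmask changes):
    client:
    eth0="eth0 192.168.1.102 netmask 255.255.255.0 broadcast 192.168.1.255"
    server:
    eth0="eth0 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255"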

  • NFS Problems - Disconnects in iTunes

I think I have this problem posted by scree and I want to look at the posted solution, but the solution link is dead, the original thread is archived, and I don't know how to send a message to scree.
    Anyone know the solution described at http://blog.lemon23.com/news/nfs-unter-leopard/nfs-unter-leopard.html ?
    scree
    Posts: 2
    From: Austria
    Registered: Nov 18, 2007
    Re: NFS problems, file locking???
    Posted: Nov 18, 2007 12:03 PM in response to: jt519
I experienced the same problems (and even more) with my Ubuntu NFS server and my Mac (as a client), which worked very well in 10.4. No problems in the Terminal, but lots of disconnects and interrupted server connections when using iTunes or iPhoto via NFS...
    Macbook 13.3 Core Duo Mac OS X (10.5.1)
    scree
    Posts: 2
    From: Austria
    Registered: Nov 18, 2007
    I GOT IT!
    Posted: Nov 19, 2007 12:24 PM in response to: scree
    Solved
    solution posted here: http://blog.lemon23.com/news/nfs-unter-leopard/nfs-unter-leopard.html
    no more interrupts, everything is working fine.

    I found your post while digging for answers to the same problem myself. Essentially, my NFS mount of my iTunes library (Centos 5.1 server, gigabit ethernet) was working fine until I hooked up my 40GB iPod (no problem with the smaller ones). I would get the same NFS disconnects that I had with iPhoto, which had made it impossible to use the NFS mount for it.
Well, the good news is that I found a copy of the post you had a link for in the Internet Archive: http://web.archive.org/web/20080212115443/http://blog.lemon23.com/news/nfs-unter-leopard/nfs-unter-leopard.html. Essentially, the author suggests 3 things:
    1) use the same uid and gid for users on your Mac and NFS server
2) disable the generation of the hidden OS X files (e.g. .DS_Store) by using 'defaults write com.apple.desktopservices DSDontWriteNetworkStores true'
    3) change the NFS mount options to 'locallocks rsize=32768 wsize=32768 intr noatime'
I knew that 1) was important and already had that in place. I don't think 2) should make a difference; it didn't do the trick for the OP either. 3) essentially tells NFS to use local locks, to set the local read and write size to 32 KB (could have used rwsize=32768 instead), and to allow any NFS operation to be interrupted. I am not sure that the noatime option is supported in OS X, but on Linux machines it tells the client not to constantly update atime on NFS files.
    Digging finally into my system.log, I find lots of errors from lockd (lockd not responding), so my hunch is that the most important setting was the 'locallocks' option for mount_nfs (via the Directory Utility app). In any case, my disconnects (doing all 3 of the above) have vanished, even for iPhoto although I have only tested that very briefly.
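    For reference, the same options passed directly to mount_nfs from the command line would look roughly like this (server name and paths are placeholders; as noted, noatime may or may not be honoured on OS X):
    sudo mkdir -p /Volumes/itunes
    sudo mount -t nfs -o locallocks,rsize=32768,wsize=32768,intr server:/export/itunes /Volumes/itunes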
    Hope this helps!

  • Workaround for NFS problem

I am experiencing the following problem with the NFS client on Mac OS X Snow Leopard. I NFS-mount an NFS share (say at /mnt/nfs). I run a program which reads from, and writes to, some files in a directory /mnt/nfs/adir. In another Terminal window I run "cd /mnt/nfs/adir ; ls -l". The directory listing is very slow. However - and this is the problem - if I then interrupt the directory listing (Ctrl-C), the other process (i.e. the program in the other window) that was accessing /mnt/nfs/adir aborts with the message "aprog: STDOUT: Interrupted system call".
    Can anyone else reproduce this behaviour?
    Is there a workaround?
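    (A minimal sketch of the steps described, in case someone wants to try to reproduce it; the server, paths, and the program name "aprog" are placeholders:)
    # Terminal 1: mount the share and start a program that keeps reading/writing in adir
    sudo mount -t nfs server:/export /mnt/nfs
    ./aprog /mnt/nfs/adir &
    # Terminal 2: list the directory, then interrupt the listing with Ctrl-C
    cd /mnt/nfs/adir ; ls -l
    # ^C here -- the program in Terminal 1 then aborts with "Interrupted system call"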

    +Is there a workaround?+
    Yes. Don't start the second process until the first one is finished.

  • NFS problem between RedHat Client and Solaris Server

    Hi all, we are experiencing a problem between a RedHat client and a Solaris 10 server. For the purposes of this post, I'll call the Redhat client server A and the Solaris 10 server B.
    Server B is exporting a filesystem that server A is trying to mount. Server A can successfully mount the exported file system, however, strange things are happening. If I change to the exported mount point on server A and create a file, the file is owned by nobody:nobody, not the user that created the file.
    A look at the file on server B shows the file has the correct UID and GID (ie the UID & GID of server A).
    The fstab file on server A looks like this:
    serverB:/data /data nfs4 rsize=32768,wsize=32768,hard,nointr,rw,bg,actimeo=0,timeo=300,suid 0 0
    Does anyone have a explanation for this?
    NB: There is a firewall between server A and server B. A firewall rule is in place to allow traffic between the two servers on port 2049
    Stewart

    Hi
+If I change to the exported mount point on server A and create a file, the file is owned by nobody:nobody, not the user that created the file.+
On an NFS share, for security reasons, you normally don't have root privileges.
A file created as the root user will be mapped to nobody:nobody.
The behaviour you see is correct.
If you want the file to be created as root, you have to export the filesystem with -o ro,anon=0.
    NFSv3 will be blocked by your firewall.
    Franco
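    (A sketch of the Solaris share syntax Franco refers to; the export path is taken from the fstab line above:)
    # one-off, from the command line on server B:
    share -F nfs -o ro,anon=0 /data
    # or persistently, as a line in /etc/dfs/dfstab:
    share -F nfs -o ro,anon=0 -d "data export" /data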

  • Various NFS problems

I'm quite new to Arch and I'm having a lot of problems with NFS.
    At home I've a small home server (running Ubuntu Server 12.04) with some NFSv4 shares. On the client (the one running Arch) I mount the shares with the following line in /etc/fstab:
    192.168.1.1:/ /mnt/nfs4 nfs4 noauto,x-systemd.automount,x-systemd.device-timeout=20,rsize=8192,wsize=8192 0 0
The shares are correctly mounted at boot and I don't have any problems when I copy files from the server to the client. Performance is really good: with big files I've seen transfer rates up to 80-90MB/s.
Things get messy the other way around, when I copy files from the client to the server. If the files being copied are small it usually works fine, but if the files are big (let's say 1GB+) the copy always hangs after 20-30s. When this happens the entire system becomes laggy and unresponsive. To get a usable system again I must stop the copy (not an easy task with Nautilus frozen) and unmount the NFS shares or, alternatively, reboot the entire system (the faster method).
    Any ideas?
    PS: sorry for my terrible english.

    Hello,
    You need to provide more information in order to find a solution. It can be a network, disk or configuration problem, or maybe something else.
    You can try to:
    - Detect disk errors using dmesg and copy the file locally on the server share (using a flash drive for example)
    - Copy the file using the terminal from the client
    - Use other protocols to send the file on the server (FTP, SSH, SAMBA, ...)
    - Monitor your network, cpu and memory during the copy process (on the server and the client)
    - Use another client or OS to mount the NFS.
    - Investigate the logs and send useful information.
The steps above aren't in any particular order. You can also provide your NFS configuration file, your connection type (wireless, wired, with/without router/switch/hub), and the file system of your drives.
    Thank you.
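    For the terminal-copy and monitoring suggestions above, a minimal sketch (the mount point matches the fstab line earlier; the test-file size and commands are only examples):
    # on the client: create a large test file, copy it over the mount from a terminal,
    # and watch the NFS retransmission counters while it runs
    dd if=/dev/zero of=/tmp/testfile bs=1M count=2048
    cp /tmp/testfile /mnt/nfs4/testfile &
    watch -n 2 nfsstat -c
    # on the server: check for disk errors and watch the load during the copy
    dmesg | tail -n 30
    top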

  • NFS Problems

    For quite a while I'm already struggling with this problem:
On the server, users have access rights either by ownership or by group membership, just like on any Linux system, and everything works the way it should.
Via NFS (NFSv3 as well as NFSv4), access by ownership as well as by primary group always works. Access by group mostly works for users with a few group memberships, but for others some group-based access just does not work. Of course UID and GID match.
There is no traceable pattern for me. Sometimes removing one group membership fixes access for another group.
Again: on local logins ALL access rights work well.
NFS-ACL is loaded and the mapping of users and groups looks correct.
    Any experiences over here?
    By the way: Thanks for the great NFS4 wiki!

    It would help if you'd post the actual command you're using. I'm sure that "my nfs share location" is not the actual share and therefore I'm projecting that "my mount point" is not the actual mount point where you're trying to mount.
    It might not be relevant, but one of the first things to check is that you're mounting it at a valid location, and that's impossible to tell using the obfuscated information provided.
    You are providing a valid path to an existing directory right?

  • KDE Konqueror NFS Problem

    Hello,
I have NFS up and running. If I mount it by hand everything works and I can copy files between all shares. But in Konqueror 3.2 or 3.3 I can only read files from the NFS shares; if I try to write a file I get an RPC failure.
I only found this in the kernel log:
kernel: RPC: bad TCP reclen 0x00000f9c (non-terminal)
In Konqueror it also works if I mount the NFS share by hand or via fstab.
Can anyone help me?
    regards
    albert


  • Strange delete behavior in Solaris 10 with NFS mounts

We are using the apache commons-io framework to delete a directory in a Solaris 10 environment. The code works well on our dev and qa boxes, but when we load it into our production environment we get intermittent failures where the files in a directory are not deleted, and therefore when we try to delete the directory the deletion fails.
We suspect that this may be some kind of NFS problem in Solaris, where it may take longer to delete a file than on a local drive; the code then reaches the deletedir call before the OS has actually removed the files, and this causes the delete-directory failure because files are still present.
    Has anyone seen this in an NFS environment with Solaris? We are on Java 1.4.2_15 and we are using apache commons-io 1.3.1.

    The apache commons-io framework contains a method to delete a directory by recursively deleting all files and subdirectories. Intermittently, we are seeing some of the files in a subdirectory remain and then when delete is called to remove the directory (from within the commons-io framework deletedir method) we get an IOException. This only occurs on an NFS mounted file system on our production system. Our dev and qa systems are also on an NFS but it is a different one and appears to be loaded differently and the behavior for dev and qa consistently works as expected.
    It appears to be some kind of latency issue related to the way java deletes files on the NFS, but no conclusive evidence so far.
    We have not tried this with a newer version of java since we are presently constrained to 1.4 :-(

  • Finder problems using network folders

I am using NFS in my setup with an iBook and a PowerMac G5, both using NFS to network to a Linux fileserver. I am having some unusual problems with the Finder. Specifically, when I try to copy files from a local folder to a mounted NFS folder using the Finder, it can cause the system to hang. Moreover, even when copying a file within an NFS folder to the same folder under a different file name and then deleting the copy, the Finder display often goes crazy, sometimes refusing to display any files in the folder or, even worse, scrambling the file and/or subfolder names in that NFS folder. The problem seems to be entirely in the Finder, because when I look at the same folder in Terminal everything works fine -- there is no underlying NFS problem.
The problem is even more acute when I use the NFS union mount option. What I want to do on my iBook is to union mount a network users directory on top of /Users -- this way, both local and network users will have their directories available to them in the same place. Union mounts drive the Finder crazy, and this can be readily checked. The mount seems to work fine, and the Finder displays the union-ed directory fine. However, when I try to look at any of the folders in the directory, I sometimes get a system folder that is not even in the same directory tree, and at other times random network folders. Again, when looking at things in Terminal everything seems to work fine.
    What is wrong with Finder and NFS? How on earth can Finder have problems displaying a directory structure? I keep having other problems when using GUI based programs with NFS volumes -- random failures seem all too common while the Terminal seems to work fine each time. Is the Apple implementation of NFS hopelessly broken? Or is it just the Finder that has problems. Either way, I am surprised and concerned that a mounted network directory exhibits seemingly random behavior in a GUI. Any help on these issues would be much appreciated.
    Power Mac G5 2.0Ghz Mac OS X (10.4.6)

Seems to be new behavior that if you delete a directory you are currently in, it jumps back to the root instead of to the directory above or after the deleted one. Just press the back button and you will be back in the directory.

  • Cannot mount nfs shares

    Hello,
    since Sunday I'm unable to mount NFS shares:
    mount.nfs: No such device
    The server-side is working fine, I can mount all shares from my FreeBSD Desktop machine.
I'm using netcfg and start rpcbind and nfs-common upon connection, before mounting NFS shares (via netfs). Is this maybe related to some recent pacman updates? It was working flawlessly just until Sunday.

As it turns out, it now works.  I did load the nfs module manually during my troubleshooting, but it was already loaded or built into the kernel or whatever.
The thing that made it work is changing the NFS mount lines in /etc/fstab from the hostname of the server to the IP address of the server.  I don't know why that worked on both machines, since I could ping the hostname of the NFS server (a FreeNAS server) and it always worked before.
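    The fstab change described would look roughly like this (share path, mount point and address are placeholders for whatever the FreeNAS box actually exports):
    # before (hostname):
    # freenas:/mnt/tank/share   /mnt/share   nfs   defaults   0 0
    # after (IP address):
    192.168.1.10:/mnt/tank/share   /mnt/share   nfs   defaults   0 0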
    @ jasonwryan
    rc.d start rpcbind && rc.d start nfs-common
They start fine after being stopped and restarted.  Have you replaced portmap with rpcbind in pacman?  rpcbind superseded portmap a while back.  gl.
    @.:B:.
lol, snide remark successfully detected.  In my defense I was half guessing and half sniding (or some percentage thereof).  I have to admit I do get a bit snippy over this since NFS is necessary for my little clients to run mpd and I gets a bit cranky when I gots no musics!  Fueling my frustration, it seems I have to chase down NFS problems frequently after "pacman -Syu".

  • NFS mount error messages on Solaris 8; is there a patch?

We recently purchased an EMC Celerra NS80 to serve as a front end to our Centerra archive solution. On a Solaris 8 box I've been seeing a large number of NFS errors in the /var/adm/messages file relating directly to the datamover on the NS80. I've tried everything I can think of from the array side of the house to no avail. EMC states there is a fix for this problem, patch 113318-12, but according to the patch notes this is for an NFS problem with Solaris 9.
    The error is causing sporadic performance issues when our end users attempt to pull up the data that resides on the Celerra NFS mount, so I would really like to get it resolved. The specific error message is:
    NFS server DATAMOVER1 not responding still trying
    NFS server DATAMOVER1 ok
The "not responding" and "ok" messages recur anywhere from every few seconds to every 3-5 minutes. Our database vendor insists something on the NS80 is introducing this error, but I cannot find anything to change to clear this up.
    Any input would be greatly appreciated.
    Thanks.

    SunOS www02.unix 5.10 Generic_127128-11 i86pc i386 i86pc
    10:58am up 22:51, 4 users, load average: 2.16, 2.26, 2.26
    /export/www
    cache hit rate: 99% (111069900 hits, 6674 misses)
    consistency checks: 459000 (458945 pass, 55 fail)
    modifies: 0
    garbage collection: 0
    /export/zero
    cache hit rate: 94% (2089349 hits, 110629 misses)
    consistency checks: 1497690 (1497075 pass, 615 fail)
    modifies: 0
    garbage collection: 0
    /export/saba
    cache hit rate: 97% (7677577 hits, 174056 misses)
    consistency checks: 10809059 (10801491 pass, 7568 fail)
    modifies: 0
    garbage collection: 0
    So 1 day uptime, that is much better. We rebooted it for mirroring setup, and found that cachefs needs to fsck before it comes up. Can I not simply have it start afresh rather than attempt to keep cache directory? (Obviously I can, but I mean in a boot-friendly manner)

  • [systemd] No shutdown/reboot/suspend anymore from XBMC

    Migrated my HTPC to systemd last night. So far so good, lirc still works, I found some Fedora xbmc service script that launches XBMC neatly. It looks good.
    XBMC runs as my own user. However, whereas before I could suspend/shutdown/reboot just fine (through upower), that now does not work anymore. I have enabled (and started) the upower service:
    $ systemctl list-units|grep -i upower
    cpupower.service loaded active exited Apply cpupower configuration
    upower.service loaded active running Daemon for power management
However, even with the upower service enabled, all XBMC shows in the shutdown menu is a timer option and hibernate/suspend. The last two options definitely don't work. I have hit that button enough to know. Before, there were also restart/shutdown options visible. Those are gone. So all that's left is pushing the button on my HTPC to make it shut down.
    XBMC service file:
    [Unit]
    Description = Starts instance of XBMC using xinit
    After = syslog.target
    [Service]
    User = $user
    Group = users
    Type = simple
    ExecStart = /usr/bin/xinit /usr/bin/xbmc-standalone -- :0
    [Install]
    WantedBy = multi-user.target
    Any pointers?
    Edit: NFS problem solved by enabling rpc-idmapd.service.
    Last edited by .:B:. (2012-08-27 15:10:43)
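    (For the NFS part mentioned in the edit, enabling the idmapd unit named there is simply:)
    systemctl enable rpc-idmapd.service
    systemctl start rpc-idmapd.service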

Thanks Elfo, I'll have a look at those. Further inspection showed that the polkit service was actually failing. I just bumped the release from 105 to 107, built from ABS, and installed 107. Turns out polkit needs its own user now (polkitd) and needs access to the root:root owned rules dirs as well. Fixed that one, but not getting any further.
    Edit: there's a user-session-units package in the AUR (with deps) so I built and installed those. It works (XBMC autolaunches), and it seems like it's communicating with D-Bus, but no luck there (no suspend, no shutdown etc. and shutdown/reboot buttons are still gone):
    23:12:09 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.ConsoleKit on /org/freedesktop/ConsoleKit/Manager with interface org.freedesktop.ConsoleKit.Manager and method CanStop
    23:12:09 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.UPower on /org/freedesktop/UPower with interface org.freedesktop.UPower and method EnumerateDevices
    23:12:10 T:140643986409344 INFO: Selected UPower and ConsoleKit as PowerSyscall
    23:12:10 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.ConsoleKit on /org/freedesktop/ConsoleKit/Manager with interface org.freedesktop.ConsoleKit.Manager and method CanStop
    23:12:10 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.ConsoleKit on /org/freedesktop/ConsoleKit/Manager with interface org.freedesktop.ConsoleKit.Manager and method CanRestart
    23:12:10 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.UPower on /org/freedesktop/UPower with interface org.freedesktop.DBus.Properties and method Get
    23:12:10 T:140643986409344 DEBUG: Previous line repeats 1 times.
    23:12:10 T:140643986409344 DEBUG: DBus: Creating message to org.freedesktop.UPower on /org/freedesktop/UPower with interface org.freedesktop.UPower and method EnumerateDevices
    Edit: now it's not even starting anymore at boot, apparently it's waiting for some stuff to finish and bails out. God what an ordeal this is.
    Last edited by .:B:. (2012-08-28 21:44:41)

  • Error reading job logs of Apps server from Central Instance

    Dear Gurus,
    We have newly installed system with one CI ( cluster environment) and 2 application Instances.
    Systems are recently installed by other team. I am looking into support part after handover.
We have noticed the error below while reading failed background job logs from our CI. If any job fails, we can read the respective job log from that application instance, but it throws an error when reading the job log from the CI for either application instance.
I checked that /sapmnt/SID/global is shared among all 3 servers, and I am successfully able to run "touch a" from the application instances.
Even though I have given "777" permissions to all folders like /sapmnt/SID, /sapmnt/SID/global, /sapmnt/SID/global/400JOB*,
I am not able to read the job log from the CI; for the same failed job I can read the job log from the respective application instance.
    Error log :
    Error reading job log JOBLGX00080700X39290
    Message no. BT167
    Diagnosis
    The background processing system was unable to read the job log named in the message.
    This message suggests that there is a problem with the TEMSE storage system of the SAP system.  The TEMSE storage system is a repository for temporary objects, such as job logs and spool requests. Job logs are always stored in the TEMSE as operating system files.
    This error occurs if the TEMSE system is not able to find or access the file that contains the text of the job log that you requested. Possible causes for the loss or unavailability of the job log include the following:
    Someone deleted the required TEMSE file (from the operating system, not from within the SAP system).
    A CRON (or equivalent scheduler) job has deleted the TEMSE file.
    The file system in which the TEMSE stores its files is not mounted or is not accessible (NFS problem, disk failure, or similar problem).
    The TEMSE reorganize or consistency check functions were used within the SAP system and deleted the job log.
    SM21 logs :
    Error 2 for write/read access to a file. File = /usr/sap/SID/SYS/global
    BP_JOBLOG_SHOW: Failed to display jobs. Reason:
    > Error reading job log JOBLGX00080700X39290
The strange thing is that I can check a failed job log on one application instance from the other application instance, but not from the CI.
Kindly throw some light on where to check.
    Regards,

    Hi Shravan,
I guess it is related to permissions on the /sapmnt/SID/global folder. Please ensure the owner is sidadm:sapsys on all the systems, viz. the CI, app servers, etc.
Check that the mount options are correctly set with read/write mode.
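    A quick way to verify both points on each host (SID is a placeholder, as in the original post):
    ls -ld /sapmnt/SID/global                     # owner should be sidadm:sapsys
    mount | grep sapmnt                           # confirm the share is mounted read/write
    chown -R sidadm:sapsys /sapmnt/SID/global     # correct the owner if needed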
    Hope this helps.
    Regards,
    Deepak Kori

  • 10.6.8 update has broken quicktime 7 codecs

    Hi there
    It seems that the 10.6.8 update has broken the codecs that quicktime 7 uses.
We have users here that need to use QuickTime 7 Pro to export movs etc., and since the update, they cannot play (so far as we have found) movs that use the Photo-JPEG decompressor.
We're using the version from the optional installs package from the DVD.
    We've been able to work around this by copying (from a 10.6.7 install) the following folders, over the top of the 10.6.8 image.
    /System/Library/QuickTime
/System/Library/QuickTimeJava
    /System/Library/Frameworks/QuickTime.framework
    /Library/QuickTime
While I've tried to single out which files are affected, copying one of these folders at a time hasn't fixed the problem. Only when (at least) all of the /System folders are copied over can we start playing the movs that use the broken codecs.
Is this something Apple is aware of? Is there any word of a fix for this? I also hear that optical audio is broken as part of the update; the solution there has been to copy the kext file back from an older system.
10.6.7 was a showstopper for us because of the NFS permission problems (but QT7 worked),
and now 10.6.8 is a showstopper for us because QT7 is broken, though the NFS problems are resolved.
I don't want to have to rebuild our workstations back to 10.6.6, as we had other issues that were fixed with 10.6.7.
Any ideas, anyone? Apple devs?
    Cheers

I think I've narrowed it down to a single file...
On my 10.6.8 laptop - with the stock QT files... the affected movs wouldn't play...
I copied this file from a 10.6.7 machine...
/System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents
And I can play the files again...
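    (A hedged sketch of that copy, assuming the 10.6.7 files are staged under /Volumes/10.6.7 -- a hypothetical path -- and that the original is backed up first:)
    sudo cp "/System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents" \
            "/System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents.bak"
    sudo cp "/Volumes/10.6.7/System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents" \
            "/System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents"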
