File system getting full // dev_icmmon rapidly increasing

Hello All,
The /sapmnt/SID/profile file system is getting full; dev_icmmon in this file system is growing rapidly. An excerpt (more dev_icmmon):
[Thr 258] **** SigHandler: signal 1 received
[Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
[Thr 258] **** SigHandler: signal 1 received
[Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
Please help me resolve this issue.
Regards
Mohsin M

I have killed the icmon process; the problem is now solved.
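For reference, errno 2 is ENOENT: icmon cannot find icmauth.txt at the expected location, and the trace grows because the same error is logged in a loop. A rough sanity check for that pattern (a sketch against a temp file; the real trace lives in /sapmnt/SID/profile):

```shell
# Demo: confirm the trace is one message repeating, not varied output.
# A temp file stands in for /sapmnt/SID/profile/dev_icmmon.
demo=$(mktemp -d)
trace="$demo/dev_icmmon"
for i in 1 2 3; do
    echo '[Thr 258] **** SigHandler: signal 1 received' >> "$trace"
    echo '[Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2' >> "$trace"
done

# errno 2 = ENOENT: the authfile does not exist where icmon looks for it.
repeats=$(grep -c 'IcmReadAuthFile' "$trace")
echo "error repeated $repeats times"
```

Killing icmon only stops the logging; restoring the missing icmauth.txt (or pointing the ICM authfile profile parameter at the right path — parameter name to be verified for your release) would address the cause rather than the symptom.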

Similar Messages

  • File system getting full and Server node getting down.

    Hi Team,
    Currently we are using IBM Power 6 AIX operating system.
In our environment the development system's file system is getting full, and the development system is slow to access.
Can you please let me know what exactly the problem is, which command is used to see file system sizes, and how to resolve the issue by deleting core files or the like? Please help me.
    Thanks
    Manoj K

Hi Orkun Gedik,
When I executed the commands df -lg and find . -name core, nothing was displayed, but running df showed me the information below (original output, except that I have masked the SID).
    Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
    /dev/fslv10     52428800  16279744   69%   389631    15% /usr/sap/SID
The server0 node is causing the problem; it keeps going down.
If I check the server node's std_server0.out file in /usr/sap/SID/<Instance>/work, the following is written to it:
    framework started for 73278 ms.
    SAP J2EE Engine Version 7.00   PatchLevel 81863.450 is running! PatchLevel 81863.450 March 10, 2010 11:48 GMT
    94.539: [GC 94.539: [ParNew: 239760K->74856K(261888K), 0.2705150 secs] 239760K->74856K(2009856K), 0.2708720 secs] [Times: user=0.00 sys=0.36, real=0.27 secs]
    105.163: [GC 105.164: [ParNew: 249448K->80797K(261888K), 0.2317650 secs] 249448K->80797K(2009856K), 0.2320960 secs] [Times: user=0.00 sys=0.44, real=0.23 secs]
    113.248: [GC 113.248: [ParNew: 255389K->87296K(261888K), 0.3284190 secs] 255389K->91531K(2009856K), 0.3287400 secs] [Times: user=0.00 sys=0.58, real=0.33 secs]
    Please advise.
    thanks in advance
    Manoj K
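The df and core-file checks from this thread can be combined into a small sweep. This sketch runs against a throwaway directory so it is safe to try (the real target would be /usr/sap/SID; the file here is invented for the demo):

```shell
# Demo directory standing in for /usr/sap/SID.
demo=$(mktemp -d)
mkdir -p "$demo/work"
head -c 4096 /dev/zero > "$demo/work/core"

# AIX commands used in the thread:
#   df -g /usr/sap/SID        # usage in GB blocks
#   find . -name core         # core dumps under the current directory
cores=$(find "$demo" -name core | wc -l | tr -d ' ')
echo "core files found: $cores"

# Freeing them (only after confirming they are not needed for crash analysis):
find "$demo" -name core -exec rm -f {} +
left=$(find "$demo" -name core | wc -l | tr -d ' ')
echo "core files remaining: $left"
```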

  • Root file system getting full SLES 10 SAP ERP6

    Hi Gurus
I am having an unusual problem. My / file system is getting full and I can't pick out what is causing it. I have checked the logs in /var/messages. I don't know what is writing to /; I didn't copy anything directly onto /.
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda5             7.8G  7.5G     0 100% /
    SLES 10 64 bit  ERP6 SR2
Has anyone had a similar problem?
Any ideas are welcome.

    cd /
    xwbr1:/ # du -hmx --max-depth=1
    1       ./lost+found
    48      ./etc
    1       ./boot
    1       ./sapdb
    1       ./sapmnt
    1430    ./usr
    0       ./proc
    0       ./sys
    0       ./dev
    85      ./var
    8       ./bin
    1       ./home
    83      ./lib
    12      ./lib64
    1       ./media
    1       ./mnt
    488     ./opt
    56      ./root
    14      ./sbin
    2       ./srv
    1       ./tmp
    1       ./sapcd
    1       ./backupdisk
    1       ./dailybackups
    1       ./fulloffline
    cd /root
    xwbr1:~ # du -hmx --max-depth=1
    1       ./.gnupg
    1       ./bin
    1       ./.kbd
    1       ./.fvwm
    2       ./.wapi
    12      ./dsadiag
    1       ./.gconf
    1       ./.gconfd
    1       ./.skel
    1       ./.gnome
    1       ./.gnome2
    1       ./.gnome2_private
    1       ./.metacity
    1       ./.gstreamer-0.10
    1       ./.nautilus
    1       ./.qt
    1       ./Desktop
    1       ./.config
    1       ./Documents
    1       ./.thumbnails
    38      ./.sdtgui
    1       ./sdb
    1       ./.sdb
    4       ./.mozilla
    56      .
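Notably, the du totals above account for only about 2.2 GB while df reports 7.5 GB used. A drill-down sketch, plus one common explanation for such a gap (the explanation is a hypothesis, not confirmed in this thread):

```shell
# Demo tree standing in for / (real directories would be /usr, /var, ...).
demo=$(mktemp -d)
mkdir -p "$demo/usr" "$demo/var"
head -c 2097152 /dev/zero > "$demo/usr/bigfile"   # 2 MB of "usage"

# Step 1: per-directory totals, as in the thread (-x stays on one file system).
du -kx --max-depth=1 "$demo"

# Step 2 (hypothesis): when du totals stay far below df's "used" figure,
# space is often held by files that were deleted while a process still has
# them open. On Linux:  lsof +L1  lists such files; restarting the owning
# process releases the space.
biggest=$(du -kx --max-depth=1 "$demo" | sort -rn | head -1 | awk '{print $2}')
echo "largest entry: $biggest"
```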

  • File system getting full

    Hi All,
When I check the path
/usr/sap/ccms/wilyintroscope/traces
it is occupying a lot of space. Can you help us understand how to reorganise it?
Indexrebuilder.sh would only rebuild the index, not the logs, right? Thanks.

Hi,
This can be done by scripts (see the Introscope Enterprise Manager documentation).
You can also set the property file accordingly to manage the space (check whether /usr/sap/ccms is above the threshold value).
jansi
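A scripted cleanup along the lines suggested above might look like this sketch. The real directory would be /usr/sap/ccms/wilyintroscope/traces; the temp directory, file names, and sizes here are stand-ins:

```shell
# Demo directory standing in for the Introscope traces directory.
traces=$(mktemp -d)
head -c 512000 /dev/zero > "$traces/EnterpriseManager.trace"  # hypothetical name
head -c 1024 /dev/zero > "$traces/small.trace"

# How much the directory holds in total:
total_kb=$(du -sk "$traces" | awk '{print $1}')
echo "traces directory: ${total_kb} KB"

# Largest files first, so the biggest offenders can be reviewed and removed:
find "$traces" -type f -exec du -k {} + | sort -rn | head -3
```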

  • SQL0968C The file system is full

    Hi All,
In our BI pre-prod system we are trying to test the BI data load.
It stopped with this error in SM21:
    Database error -968 at FET
    SQL0968C The file system is full. SQLSTATE=57011
    Database error -968.
    Our BI is 7.0 version ,DB2 8.1, AIX 5.3.
I checked db2diag.log and found that the temporary tablespace was full at that time.
Here is an example from db2diag.log:
    2007-06-18-21.00.17.073907+060 E154034841A701     LEVEL: Error
    PID     : 528538               TID  : 1           PROC : db2pclnr 0
    INSTANCE: db2fbr               NODE : 000
    FUNCTION: DB2 UDB, buffer pool services, sqlbClnrAsyncWriteCompletion, probe:0
    MESSAGE : ADM6017E  The table space "PSAPTEMP16" (ID "3") is full. Detected on
              container "/db2/FBR/saptemp1/NODE0000/temp16/PSAPTEMP16.container000"
              (ID "0").  The underlying file system is full or the maximum allowed
              space usage for the file system has been reached. It is also possible
              that there are user limits in place with respect to maximum file size
              and these limits have been reached.
But we tested the same load before with approximately the same amount of data, and I don't know why it is giving this problem this time.
How do we solve this temporary tablespace issue? Is it necessary to increase the file system size? The tablespace is in autoextend mode and the file system still has 20 GB free.
    Regards,
    Manish

    Hi Manish,
    when looking at the error message <a href="http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.msg-search.doc/doc/sql0968-sch.htm?resultof=%22%53%51%4c%30%39%36%38%43%22%20%22%73%71%6c%30%39%36%38%63%22%20">SQL0968C</a>, the documentation states the following:
    SQL0968C
    The file system is full.
    Explanation:
    <b>One of the file systems containing the database is full. This file system may contain the database directory, the database log files, or a table space container.</b>
    The statement cannot be processed.
    User response:
    Free system space by erasing unwanted files. Do not erase database files. If additional space is required, it may be necessary to drop tables and indexes identified as not required.
    On unix-based systems, this disk full condition may be due to exceeding the maximum file size allowed for the current userid. Use the chuser command to update fsize. A reboot may be necessary.
    This disk full condition may be caused when containers are of varying sizes. If there is sufficient space in the file system, drop the table space and recreate it with containers of equal size.
    sqlcode: -968
    sqlstate: 57011
    Please check also the other filesystems that belong to the database, not only the one where PSAPTEMP16 is included.
    Check, if you are using quotas in your system.
    Kind regards
    Waldemar Gaida
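The per-user fsize check from the reply can be scripted as follows. `ulimit -f` is portable enough to run anywhere; the `chuser` lines are AIX-only and left commented. The instance owner name db2fbr is taken from the db2diag.log excerpt above:

```shell
# Show the maximum file size the current shell (and its children) may write.
# The unit is blocks (block size varies by shell) or the word "unlimited".
limit=$(ulimit -f)
echo "max file size for this shell: $limit"

# On AIX, as root (shown as an assumption -- verify before running):
# lsuser -a fsize db2fbr      # show the current per-user limit
# chuser fsize=-1 db2fbr      # -1 = unlimited; takes effect on next login
```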

***Urgent*** sapreorg file system is full?

    Hi All
Our sapreorg file system is full. How can I clean it up?

Hi Ramu,
You can use the SAPDBA options L (Show/cleanup) -> B (Cleanup log files / directories) -> A (SAPDBA log files and dump directories) to clean up the sapreorg file system. When it prompts you to delete the directories, respond yes. This will surely reduce your file system usage.
Regards...
S.Manu

  • Does OCFS2 file system get fragmented

    We are running Production & Testing RAC databases on Oracle 9.2.0.8 RAC on Red Hat 4.0 using OCFS2 for the cluster file system.
    Every week we refresh our Test database by deleting the datafiles and cloning our Standby database to the Test database. The copying of the datafiles from the Standby mount points to the Test database mount points (same server), seems to be taking longer each time we do this.
    My question is : can the OCFS2 file system become fragmented over time from the constant deletion & copying of the datafiles and if so is there a way to defragment it.
    Thanks
    John

    Hi,
I think it will get fragmented if you constantly delete and copy the datafiles on OCFS2. You can set a suitable block size and cluster size based on the actual applications, which can reduce file fragmentation.
    Regards
    Terry

  • SAPCCM4X Agent directory getting full.

    Hello,
The sapccm4x agent running on the satellite systems produces log files, i.e. dev_sapccm4x and dev_rfc.trc,
which keep growing, and the file systems get full on account of this. Only restarting the agent
rotates dev_sapccm4x into dev_sapccm4x.old; is there any other way to limit the size of these files to prevent space issues?
    Thanks and Regards,
    Vinod Menon

    Hello,
I am not sure about a parameter to limit the file size, but you can certainly monitor it and take preventive action rather than reactive.
You can use the parameter MONITOR_FILESIZE_KB to monitor it in RZ20.
    Here is the link containing detailed information.
    http://help.sap.com/saphelp_nw04/helpdata/en/fa/e4ab3b92818b70e10000000a114084/content.htm
    Hope this helps.
    Thanks,
    Manoj Chintawar
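Outside of CCMS, the same threshold check can be scripted (a sketch, not an official SAP mechanism). A temp path is used here; the real file would sit in the instance work directory:

```shell
# Warn when a trace file crosses a size threshold.
demo=$(mktemp -d)
trace="$demo/dev_sapccm4x"
head -c 204800 /dev/zero > "$trace"   # simulate a 200 KB trace

threshold_kb=100
size_kb=$(( $(wc -c < "$trace") / 1024 ))
if [ "$size_kb" -gt "$threshold_kb" ]; then
    status="over threshold (${size_kb} KB)"
else
    status="ok (${size_kb} KB)"
fi
echo "dev_sapccm4x: $status"
```

Run from cron, a check like this gives the early warning the reply describes, before the file system actually fills.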

File system full

    Dear All,
In one of our Java-based systems the file system /usr/sap has become full.
I need to delete some old files.
I found old files like .....heapdump1208338.1313527999.phd in the following path:
/usr/sap/<SID>/JC00/j2ee/cluster/server0
Can I delete them?
Please suggest.

Dear satu,
Hope you are doing well.
Please see SAP Note 1589548 for the Java server filling up, and Note 16513 for the ABAP side:
    1589548 - J2EE engine trace files fills up at a rapid pace
    and
    16513 - File system is full - what do I do?
However, for the heap dump, please check the reason for it, or you will face further occurrences later.
    If you face the error again, kindly check the below link to generate the heap dump:
    SAP Note No. 1004255- How to create a full HPROF heap dump of J2EE Engine
As I am not sure about your OS, I am mentioning all the notes:
AIX:     1259465    How to get a heapdump which can be analyzed with MAT
LNX:     1263258    IBM JDK 1.4.2 x86_64: How to get a proper heapdump
AS400:   1267126    IBM i: How to get a heapdump which can be analyzed
Z/OS:    1336952    DB2-z/OS: Creating a heapdump which can be analyzed
HP-UX:   1053604    JDK heap dump and heap profiling on HP-UX
There is no side effect to the heap dump parameter; it will, however, write a heap dump, so make sure there is enough free space on the server. Even if free space is low, it will not harm the server in any way; the dump written will just be incomplete, which will hinder the analysis.
    More details are available here:
    [http://www.sdn.sap.com/irj/scn/elearn?rid=/library/uuid/f0a5d007-a35f-2a10-da9f-99245623edda&overridelayout=true]
    [https://www.sdn.sap.com/irj/sdn/wiki?path=/display/java/javaMemoryAnalysis]
    Thank you and have a nice day :).
    Kind Regards,
    Hemanth
    SAP AGS
    Edited by: Hemanth Kumar on Aug 28, 2011 9:24 PM
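A cautious way to act on the question above is to list the heap dumps by age before deleting anything. In this sketch the directory, file names, and 7-day cutoff are assumptions; confirm the dumps are no longer needed for analysis (see the notes above) before removing them:

```shell
# Demo directory standing in for /usr/sap/<SID>/JC00/j2ee/cluster/server0.
node=$(mktemp -d)
touch "$node/heapdump1208338.1313527999.phd"
touch -d '10 days ago' "$node/heapdump0000001.1300000000.phd"  # GNU touch syntax

# List heap dumps older than 7 days; append -delete only after reviewing.
find "$node" -name 'heapdump*.phd' -mtime +7 -print

stale=$(find "$node" -name 'heapdump*.phd' -mtime +7 | wc -l | tr -d ' ')
echo "stale heap dumps: $stale"
```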

After upgrade, the stderr3 file is growing beyond the capacity of the file system

The stderr3 file located in the /usr/sap/C11/DVEBMGS00/work/ folder was increasing rapidly. How can I reduce the file size? How do I truncate it?

Hi,
1. Could you check SAP Note 1140307? Update the kernel to the patch stated in that note if yours is old.
2. Normally, what I do is check the content of stderr3 to see what information keeps being written to it repeatedly.
3. Check whether you have a lot of inconsistencies in TemSe; see SAP Note 48400.
For the moment, delete the stderr file that is not active at the operating system level.
Kindly refer to SAP Note 16513 (File system is full - what do I do?).
    Hope this helps.
    Regards,
    Vincent
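To answer the "how to truncate" part directly: a file can be emptied in place, which is often safer than deleting it. This is a sketch against a temp file standing in for /usr/sap/C11/DVEBMGS00/work/stderr3:

```shell
# Demo: truncate a grown stderr file in place.
work=$(mktemp -d)
head -c 524288 /dev/zero > "$work/stderr3"   # simulate a 512 KB stderr3

# Truncate rather than delete: if a process still holds the file open,
# deleting it would leave the space allocated until the process restarts.
: > "$work/stderr3"

size=$(wc -c < "$work/stderr3")
echo "stderr3 size after truncate: $size"
```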

File system recommendation in SAP - AIX

Dear All,
Please check the current file systems of our PROD system:
    Filesystem    GB blocks      Used      Free %Used Mounted on
    /dev/prddata1lv    120.00     66.61     53.39   56% /oracle/IRP/sapdata1
    /dev/prddata2lv    120.00     94.24     25.76   79% /oracle/IRP/sapdata2
    /dev/prddata3lv    120.00     74.55     45.45   63% /oracle/IRP/sapdata3
    /dev/prddata4lv    120.00     89.14     30.86   75% /oracle/IRP/sapdata4
1. How much space is recommended when increasing the total size of the sapdata1..4 file systems in SAP? Currently we have assigned 120 GB to each. If there is a recommendation for the total sapdata1..4 size, how should we proceed? That is, is there any limit to increasing a sapdata file system in SAP (for example, a maximum of 200 GB for each of sapdata1..4)?
2. Secondly, only sapdata4 keeps growing; the other file systems do not grow much.
Kindly suggest.
    Edited by: satheesh0812 on Dec 20, 2010 7:13 AM

Hi,
Please identify which activities were consuming your space so rapidly.
Analyze whether the same activities will be performed in future, and if so, for how many months.
Based on a comparison with the present growth trend you can predict the future trend and decide. Also analyze which table/tablespace is growing fast, which data files belong to it, and where they are located; you need to increase that particular file system rather than all of sapdata1..4.
Based on the gathered inputs you can predict future growth trends and increase your sizes accordingly, with an additional 10% of headroom.
There is no such limit on increasing a file system, but you may face performance problems. For that, you need to create a separate tablespace and move the heavily growing tables into it, with a proper index rebuild.
Regards.
    Edited by: Sita Rr Uppalapati on Dec 20, 2010 3:20 PM
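The growth-trend bookkeeping described above can be automated with a dated usage snapshot appended to a log, so week-over-week growth can be compared. The monitored path and log location here are illustrative (real use would record the sapdata4 mount):

```shell
# Record dated usage snapshots for later trend comparison.
demo=$(mktemp -d)
log="$demo/sapdata_growth.log"

snapshot() {
    # real use on AIX: df -g /oracle/IRP/sapdata4
    printf '%s %s MB\n' "$(date +%F)" "$(du -sm "$demo" | awk '{print $1}')" >> "$log"
}
snapshot
snapshot   # e.g. run once daily from cron

entries=$(wc -l < "$log" | tr -d ' ')
echo "snapshots recorded: $entries"
```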

Tablespace PSAPSR3DB getting full again and again

    Dears,
In our PI 7.1 system, installed on AIX, the tablespace PSAPSR3DB is filling up very rapidly:
10 GB of space is consumed in 2 days.
According to our PI developers they are not saving any such huge data in PI; data transfer through PI is ongoing, but I am not sure whether PI stores any data in that case.
Please suggest how I can stop this rapid growth of the tablespace.
Can I clean out some of its data somehow, or, if possible, avoid persisting the data transferred through PI?
    Regards,
    Shivam Mittal

Hi Shivam,
  This is the standard tablespace for the Java stack's SAP objects. It is possible that the Java stack cleanup job has been failing; check this job in the NetWeaver Administrator application ( http://servername:port/nwa ).
  Are you storing synchronous messages? You may have to tune the ABAP and Java stacks.
  Have you configured the archive and delete jobs in SXMB_ADM?
    Best regards
    Ivá

  • How to run something on shutdown before file systems unmounted

    I've been trying to get kexec working with systemd, following the advice on this wiki page:
    https://wiki.archlinux.org/index.php/Kexec#Systemd
Unfortunately, the suggested unit file does not work for me.  The problem is that no matter what I do, my /boot file system gets unmounted before kexec gets run (so that it cannot find the kernel).  I've tried adding most subsets of the following to the Unit section of my kexec-load@.service file, all to no avail:
    Before=shutdown.target umount.target final.target
    RequiresMountsFor=/boot/vmlinuz-linux
    Wants=boot.mount
    After=boot.mount
    If I run "systemctl start kexec-load@linux" followed by "systemctl kexec", everything works fine.  But if I forget to do the former, then the kexec fails.
    Anyway, I'm finding it super frustrating not to be able to control the order of events on shutdown.  There has to be a way to do it, but absolutely nothing I've tried seems to make any difference.  On startup I can obviously control things, so I could have some dummy service that does nothing on start but on stop calls kexec.  But I don't want kexec called on an ordinary reboot or poweroff, only after "systemctl kexec."
    If someone could tell me how to do what I need to do, I would much appreciate it.
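For completeness, one shape sometimes suggested for services that must run late in shutdown (an untested sketch, not a verified fix for this case) is to opt out of the default dependencies entirely, so that implicit shutdown ordering cannot override the explicit lines. The drop-in path and the exact stanza combination are assumptions:

```ini
# hypothetical drop-in: /etc/systemd/system/kexec-load@.service.d/order.conf
[Unit]
# opt out of implicit ordering so only the lines below apply
DefaultDependencies=no
Conflicts=shutdown.target
# stop this unit before the generic unmount pass ...
Before=shutdown.target umount.target
# ... and start it only after /boot is up, so that on the way down
# /boot is unmounted after this unit has stopped
RequiresMountsFor=/boot
```

Whether this resolves the ordering depends on the systemd version in use; `systemd-analyze` can help verify the resulting dependency graph.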


  • Communigate Pro corrupting file system

I am running CGP 5.0.13 on an Intel Xserve with 10.4.8 and an Xserve RAID.
I have replaced the Xserve and Xserve RAID three times with new systems out of the box, but every time the file system gets corrupted and the server panics.
Does anybody have any ideas?
Any help would be greatly appreciated.
    Sam Carvalho
    [email protected]
    xserve g5 Mac OS X (10.4.5) g5 tower

I'm just surprised you say this used to work at all. You should never copy a file INTO another bootable disk, as it immediately messes up the permissions on that disk.
    Sharing is fine for accessing files on another mac and copying files FROM that machine back on to the machine you're currently booted into. If you want to send a file to the other machine (the one you're not booted into) you should boot into that machine first, and then copy it FROM the first machine.
    The other way to do it is to set up a shared Dropbox folder on each machine and place your files in there, which will update the files on the other machine when you boot into it.
    Anything else is going to keep giving you permissions problems, I'm afraid.

  • How to extend the file system?

Hello experts,
How do I check whether a file system is full, and how do I extend a file system?

Hi,
You can check your file system occupancy with the Unix-level command df -k.
As for how to extend the file system: which OS platform are you working on? The commands to extend a file system vary by OS.
    Regards,
    Deepak Kori
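The df -k check mentioned above can also be put into a script, with the use% column parsed so it can be acted on (a sketch; the mount point is illustrative):

```shell
# Check how full a file system is and warn near capacity.
mount_point=/            # use the file system in question
pct=$(df -kP "$mount_point" | awk 'NR==2 {gsub("%","",$5); print $5}')
echo "$mount_point is ${pct}% used"
if [ "$pct" -ge 90 ]; then
    echo "warning: $mount_point is nearly full"
fi
```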
