Different SAP mount points reaching 99% full

Hi Masters,
I am presently working on several SAP clients implemented on Solaris.
OS --> Solaris
DB --> Oracle 10g
After a filesystem crosses its threshold value, the server throws an alert message, for example:
/oracle           99% full
/usr/sap          98% full
So what areas do we have to look into to resolve this issue, especially on a Unix platform?

Get into the filesystem /oracle or /usr/sap.
Use a command like du -sk * (sizes in KB; du -sh * where supported) to identify which directories are taking the major disk space.
Get into each large directory and use df -h . to identify where that directory is mounted. For example, for /usr/sap/trans, see whether it sits on the /usr/sap filesystem or is a separate mount of its own.
Basically, identify the directories that are mounted under /usr/sap and /oracle and are using the major disk space.
A major area to look at is /usr/sap/<SID>/DV*/work; look for core files and other logs there.
Do the same with /oracle: see whether /oracle/stage is mounted onto it, see what you have in it, and try to find something you can remove from there.
See whether you still have old Oracle binaries remaining even after upgrades.
These are some starting points; a small script version of this triage follows below.
Don't underestimate the fact that you might need to actually increase the size of the filesystem.
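A minimal ksh sketch of that triage (the paths and the top-10 cut-off are illustrative, not from the original post):

#!/usr/bin/ksh
# List the biggest subdirectories under the usual SAP/Oracle suspects.
for fs in /oracle /usr/sap
do
    echo "=== $fs ==="
    df -k "$fs"                                        # current usage of the filesystem itself
    du -sk "$fs"/* 2>/dev/null | sort -rn | head -10   # top 10 space consumers, in KB
done
# Core files in the work directories are a common culprit:
find /usr/sap/*/DV*/work -name "core*" -ls 2>/dev/null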
All the best,
Gerard

Similar Messages

  • Is it possible in 9i to take an export backup across two different mount points

    Hello Team,
    Is it possible in 9i to take an export across two different mount points with a total file size of 22 GB?
    exp owner=PERFSTAT FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=22528
    I tried the above but had no luck, so I later killed the session.
    prs72919-oracle:/global/nvishome5/oradata/jlrvista$ exp owner=SLENTON FILE =/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp FILESIZE=2048
    Export: Release 9.2.0.8.0 - Production on Thu Nov 14 13:25:54 2013
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Username: / as sysdba
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.8.0 - Production
    Export done in US7ASCII character set and UTF8 NCHAR character set
    server uses UTF8 character set (possible charset conversion)
    About to export specified users ...
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user SLENTON
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user SLENTON
    continuing export into file /global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    About to export SLENTON's objects ...
    . exporting database links
    . exporting sequence numbers
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    . exporting cluster definitions
    . about to export SLENTON's tables via Conventional Path ...
    . . exporting table                      G_AUTHORS
    Export file: expdat.dmp >
    continuing export into file expdat.dmp
    [the same prompt and continuation repeated several more times]
    Export file: expdat.dmp > ps -ef | grep exp
    continuing export into file ps -ef | grep exp.dmp
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                        G_BOOKS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    EXP-00056: ORACLE error 1013 encountered
    ORA-01013: user requested cancel of current operation
    . . exporting table                 G_BOOK_AUTHORS
    Export file: expdat.dmp > ^C
    continuing export into file expdat.dmp
    Export file: expdat.dmp > Killed

    See the repeated 'Export file: expdat.dmp >' prompts in the log above: if you do not specify sufficient export file names, export will prompt you to provide additional file names. So for your 22 GB you either need to give 11 different file names (at 2 GB each) or provide a file name each time you are prompted.
    FILE
    Default: expdat.dmp
    Specifies the names of the export files. The default extension is .dmp, but you can specify any extension. Since Export supports multiple export files, you can specify multiple filenames to be used.
    When Export reaches the value you have specified for the maximum FILESIZE, Export stops writing to the current file, opens another export file with the next name specified by the parameter FILE and continues until complete or the maximum value of FILESIZE is again reached. If you do not specify sufficient export filenames to complete the export, Export will prompt you to provide additional filenames.
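    Two details of the transcript are worth noting (a hedged reading of the log): the space in 'FILE =' most likely kept the FILE list from being parsed at all, which is why every prompt defaults to expdat.dmp, and an unsuffixed FILESIZE is taken in bytes, so FILESIZE=2048 meant 2 KB and forced a prompt almost immediately. A corrected invocation might look like this (file names illustrative; use a byte count if your exp release does not accept unit suffixes):
    exp owner=PERFSTAT \
        FILE=/global/nvishome5/oradata/jlrvista/PERFSTAT_exp01.dmp,/global/nvishome4/oradata/jlrvista/export/PERFSTAT_exp02.dmp \
        FILESIZE=2GB
    With FILESIZE=2GB, a 22 GB export still needs 11 file names; any that are missing will be prompted for, as shown above.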

  • Btrfs with different file systems for different mount points?

    Hey,
    I finally bought an SSD, and I want to format it as f2fs (plus a FAT partition to boot on UEFI) and install Arch on it; on my old HDD I intend to have /home and /var and try btrfs on them. But I saw in the Arch wiki that btrfs "Cannot use different file systems for different mount points." Does that mean I cannot have / on f2fs and /home on btrfs? What can I do? Would XFS, ZFS or ext4 be better (I want the fastest one)?
    Thanks in advance, and sorry for my English.

    pedrofleck wrote: Gosh, what was I thinking, thank you! (I still have a doubt: is btrfs the best option?)
    Just a few weeks ago many of us were worrying about massive data loss due to a bug introduced in kernel 3.17 that caused corruption when using snapshots. Because btrfs is under heavy development, this sort of thing can be expected. That said, I have my entire system running with btrfs. I have 4 volumes: two raid1, a raid0 and a jbod. I also rsync to an ext4 partition and to ntfs. Furthermore, I make offline backups as well.
    If you use btrfs, make sure you have backups and make sure you are ready to use them. Also, make sure you checksum your backups: rsync has an option to compare checksums instead of size and modification time to determine what to sync.
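    A minimal illustration of that rsync mode (paths are placeholders):
    # -a preserves attributes; -c forces checksum comparison instead of the
    # default size-and-mtime quick check (slower, but catches silent corruption).
    rsync -ac /home/ /mnt/backup/home/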

  • Your mount point has no more space and your SYSTEM tablespace is full: how do you resize or add a file?

    Your mount point has no more space and your SYSTEM tablespace is full. How do you resize or add a file? You have no other mount point. What steps do you follow?

    Answers in your duplicated thread: Some inter view Questions Please give prefect answer  help me
    You can get all the answers at http://tahiti.oracle.com.
    You underestimate job recruiters; a simple crosscheck is enough to distinguish people with experience from people who memorize 'interview answers'.
    Don't expect to get a job just because you memorize answers for job interviews; get real-life experience.

  • Unexpected disconnection external disk and different mount points

    Dear community,
    I have an application that needs to read and write data on an external disk called "external".
    If the volume is accidentally unmounted (by improperly unplugging it), it remounts as "external_1",
    and my app won't see it as the original valid destination.
    According to this documentation:
    https://support.apple.com/en-us/HT203258
    it needs a reboot to be solved, optionally removing the wrong, unused mount points before rebooting.
    Would there be a way to force OS X to remount the volume on the original mount point automatically,
    or to check the disk UUID and bypass the different mount point name (at app or OS level)?
    Thanks for any clue on that.

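    A hedged sketch of the manual cleanup the linked Apple note describes (volume names from the question; the device identifier is a placeholder, and whether this avoids the reboot depends on the OS X version):
    # Confirm which device currently backs the volume, and its UUID:
    diskutil info /Volumes/external_1 | grep -i uuid
    # Unmount the misnamed mount, then remove the stale empty directory
    # so the next mount can reuse the original name:
    sudo diskutil unmount /Volumes/external_1
    sudo rmdir /Volumes/external
    # Remount the volume (device identifier as reported by diskutil list):
    sudo diskutil mount disk2s1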

  • Mounting multiple directories with the same name on different servers to a single mount point on another server

    We have a requirement where we have multiple Solaris servers, and each Solaris server has a directory with the same name.
    The files in these directories are different.
    These same-named directories on the multiple servers have to be mounted to a single directory on another server.
    We are planning to use NFS, but it seems we cannot mount multiple directories with the same name on different servers onto a single mount point using NFS, so we would need to create multiple mount points.
    Is there any way we can achieve this so that all the directories can be reached through a single mount point?

    You can try to mount all these directories via NFS on one additional server and then export this new tree again via NFS to all your servers.
    Not sure if this works. If it does, you will just have an additional level in the tree.
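    A minimal sketch of the usual per-subdirectory workaround (hostnames and paths are placeholders), since NFS will not overlay several remote directories on one mount point:
    # Solaris mount syntax; one subdirectory per source server under a common parent.
    mkdir -p /data/app/server1 /data/app/server2
    mount -F nfs server1:/export/app /data/app/server1
    mount -F nfs server2:/export/app /data/app/server2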

  • Root mount point is full

    Hello,
    [root@nfs-stg-app1 log]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    558M 529M 412K 100% /
    /dev/sda7 112G 99G 7.8G 93% /u01
    /dev/sda6 2.9G 251M 2.5G 10% /var
    /dev/sda5 3.8G 3.5G 103M 98% /usr
    /dev/sda3 4.8G 139M 4.4G 4% /tmp
    /dev/sda2 7.6G 818M 6.4G 12% /swap
    /dev/sda1 99M 13M 82M 14% /boot
    tmpfs 16G 0 16G 0% /dev/shm
    [root@nfs-stg-app1 log]#
    Our root mount point is full. Could you please advise which files can be deleted to free up space? Thank you.

    But do we need to move the folder first before creating the softlink?
    What's the point of creating a softlink that points to a target that does not exist?
    Can we create the outbound folder in /u01 and create the softlink without moving the /home/outbound folder?
    A symbolic link does not relocate existing physical data.
    Can I create a soft link from /home/outbound to /u01/outbound so that the root file system will have more space?
    A softlink can be a quick alternative to relocating directories, instead of having to change your ftp server or user account configuration, but it is not necessarily your best option.
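    A minimal sketch of that relocate-then-link approach (paths from the thread; stop any processes writing to the directory first):
    # Move the data off the root filesystem, then leave a symlink behind
    # so existing paths keep working.
    mv /home/outbound /u01/outbound
    ln -s /u01/outbound /home/outbound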

  • Diagnostic mount point is full

    Hi Experts,
    I still remember that in Oracle 9iR2 with ERP 11i, when the bdump/udump mount point filled up, our application hung and we were not even able to log in with sysdba. (OS: Linux RHEL 4.2)
    But in 11gR2, our diag destination filled up, yet I am still able to log in to the application (R12.1.3) and to log in to the database with sysdba. (OS: AIX 6.1)
    Is there a change in the behaviour?
    About 10g I am not sure.
    Regards
    Sourabh

    I am able to log in to EBS successfully even though my mount point is 100% full. I agree with you that users should face a login issue; I still remember that in 9i I was not able to log in to the database.
    Even with 10g, my friend has confirmed the same issue: not able to log in.
    But in 11gR2 (11.2.0.3) the same thing is not happening. I agree, as I am running on 11.2.0.2/11.2.0.3 and I can still log in to the application even with my trace/log mount point 100% full.
    Thanks,
    Hussein
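    For reference, a quick hedged way to see where an 11g database writes its diagnostics (the ADR replaced bdump/udump in 11g, which is one plausible reason the behaviour changed):
    sqlplus -S / as sysdba <<'EOF'
    -- ADR locations and related pointers in 11g.
    SELECT name, value FROM v$diag_info;
    EOF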

  • Taking an export to a different mount point.

    We have two databases on one server. I want to take an export of one schema from one database and store it directly on the other database's mount point due to a space crunch. How can I do that? We are using Solaris 5.10 as the OS and the database version is 11.2.0.3.

    Thanks for your quick reply. Here is what I tried:
    Server name - unixbox02
    Source database name - DV01
    Target database name - DV02
    I want to take an export of the test schema from DV01 to "/orabkup01/DV02/data_dump". The test schema is 100 GB+ in size and I don't have enough space on /orabkup01/DV01.
    I created a directory object on DV01 named datadir1 as 'unixbox02:/orabkup01/DV02/data_dump'.
    Then I granted read and write privileges to system.
    (Not sure who else I need to grant this privilege to.)
    After that I ran the script below:
    expdp "'/ as sysdba'" schemas=test directory=datadir1 dumpfile=a1.dmp logfile=a2.log grants=y indexes=y rows=y constraints=y
    But I received the error below:
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation
    I am new to Oracle DBA work, hence I am trying to explain as much as possible.
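    A hedged reading of that error stack: the directory object was created with a hostname prefix ('unixbox02:...'), which Oracle treats as part of the literal path, so UTL_FILE cannot open it. Since both databases live on the same server, a plain local path should work, provided the oracle OS user can write to it. A sketch:
    sqlplus -S / as sysdba <<'EOF'
    -- Path only, no hostname prefix; the directory must exist on disk and be
    -- writable by the oracle OS user.
    CREATE OR REPLACE DIRECTORY datadir1 AS '/orabkup01/DV02/data_dump';
    GRANT READ, WRITE ON DIRECTORY datadir1 TO system;
    EOF
    expdp "'/ as sysdba'" schemas=test directory=datadir1 dumpfile=a1.dmp logfile=a2.log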

  • Clone A Database Instance on a different mount point.

    Hello Gurus,
    I need your help: I have to clone an 11.1.0.7 database instance to a NEW mount point on the same host. The host is an HP-UX box, and my question is: do I need to install the Oracle database software on this new mount point and then clone, or will cloning to the NEW mount point itself create all the necessary software? Please point me to any documents that will be helpful for the process.
    Thanks in advance.

    882065 wrote:
    do I need to install oracle database software in this new mount point and then clone?
    No.
    or cloning to the NEW MOUNT point itself will create all the necessary software?
    No: cloning a database on the same host means cloning the database files; it does not mean cloning the Oracle executables. You don't need to clone the ORACLE_HOME on the same host.
    Please provide me any documents that will be helpful for the process.
    Try: http://www.oracle-base.com/articles/11g/DuplicateDatabaseUsingRMAN_11gR2.php
    Edited by: P. Forstmann on 29 nov. 2011 19:53
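    A minimal hedged sketch of such a clone with RMAN active duplication (instance name, connect string and paths are illustrative; it assumes an auxiliary instance has already been started NOMOUNT with a minimal parameter file, as the linked article walks through):
    rman TARGET sys/password@PROD AUXILIARY / <<'EOF'
    # DB_FILE_NAME_CONVERT remaps the datafiles onto the new mount point.
    DUPLICATE TARGET DATABASE TO clonedb
      FROM ACTIVE DATABASE
      DB_FILE_NAME_CONVERT '/u01/oradata/PROD/','/u02/oradata/clonedb/';
    EOF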

  • How datafiles get extended over different mount points automatically

    Hello,
    I would like to understand the following: I have 20 datafiles created over 2 mount points, all set up with autoextend in 1 GB increments and a maxsize of 10 GB per file, and no file is at its max size yet.
    10 datafiles at /mountpoint1, which has 50 GB of free space
    10 datafiles at /mountpoint2, which has 200 MB of free space
    Since mountpoint2 has absolutely no space for autoextend, will Oracle keep extending the datafiles at mountpoint1 until each hits its maxsize?
    Will the files on mountpoint2 not being extendable cause any issue?

    Girish Sharma wrote:
    In general, extents are allocated in a round-robin fashion
    Not necessarily true. I used to believe that, and even published a 'proof demo'. But then someone (it may have been Jonathan Lewis) pointed out that there were other variables I didn't control for that can cause Oracle to completely fill one file before moving to the next. Sorry, I don't have a link to that conversation, but it occurred in this forum, probably some time in 2007-2008.
    Ed,
    I guess you are looking for the thread(s) below...?
    Re: tablespaces or datafile
    or
    Re: tablespace with multiple files , how is space consumed?
    Regards
    Girish Sharma
    Yes, but even those weren't the first 'publication' of my test results; as you can see, in those threads I refer to an earlier demo. That may have been on Usenet in comp.databases.oracle.server.
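    Whichever fill order Oracle chooses, a hedged check like this (standard dictionary views) shows how close each file is to its autoextend ceiling, so a full mount point is caught before extension fails:
    sqlplus -S / as sysdba <<'EOF'
    -- Current size vs. autoextend ceiling per datafile.
    SELECT file_name,
           bytes/1024/1024    AS size_mb,
           autoextensible,
           maxbytes/1024/1024 AS max_mb
    FROM   dba_data_files
    ORDER  BY file_name;
    EOF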

  • PI 7.1 Upgrade Mount Point

    Hi.
    I'm upgrading a 7.0 PI system to 7.1, and all was going well until the system asked about the mount points.
    I have tried everything - the CDs, the downloaded product.
    Nothing seems to recognise the 'SAP Kernel DVD Unicode' folders/CDs.
    It is able to find the MID.xml file OK, but it doesn't seem to recognise the mount point as a whole.
    Is there anything I can check to make sure that I have the right version?
    Andy

    Olivier,
    The directory structure for the Solaris kernel DVDs is different:
    510332451 NW 7.1 UC-Kernel 7.10 Solaris on SPARC 64bit Upgrade
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_UV_SOLARIS_SPARC  LABELIDX.ASC
    510332452 NW 7.1 UC-Kernel 7.10 Solaris on SPARC 64bit
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_UI_SOLARIS_SPARC  LABELIDX.ASC
    510332455 NW 7.1 Kernel 7.10 Solaris on SPARC 64bit Upgrade
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_NV_SOLARIS_SPARC  LABELIDX.ASC
    510332458 NW 7.1 Kernel 7.10 Solaris on SPARC 64bit
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS:
    CDLABEL.ASC   COPY_TM.HTM   DATA_UNITS    LABEL.EBC     MID.XML       VERSION.ASC
    CDLABEL.EBC   COPY_TM.TXT   LABEL.ASC     LABELIDX.ASC  SHAFILE.DAT   VERSION.EBC
    /sapcd/XI_UPGRADE_PI71/UK/NW_7.1_Kernel_SOLARIS/DATA_UNITS:
    K_710_NI_SOLARIS_SPARC  LABELIDX.ASC
    Given the above files, what do you think the main directory /sapcd/XI_UPGRADE_PI71/UCK should look like?
    Thanks for your help.
    S.

  • Checking the space for /archlog mount point script

    I have the shell script below, which checks the /archlog mount point space on the cappire (Solaris 10) server. When the space usage is above 80% it should e-mail. When I tested this script interactively, it worked as expected.
    #!/usr/bin/ksh
    export MAIL_LIST="[email protected]"
    export ARCH_STATUS=`df -k /archlog | awk '{ print $5 }' | grep -v Use%`
    echo $ARCH_STATUS
    if [[ $ARCH_STATUS > 80% ]]
    then echo "archive destination is $ARCH_STATUS full please contact DBA"
    echo "archive destination /archlog is $ARCH_STATUS full on Cappire." | mailx -s "archive destination on cappire is $ARCH_STATUS full" $MAIL_LIST
    else
    exit 1
    fi
    exit
    When I scheduled it as a cron job, it gives a different result. Right now /archlog is at 6%, so it should exit without e-mailing anything. But I am getting the e-mail below from the cappire server, which is strange.
    Subject: archive destination on cappire is capacity
    Below is the e-mail content:
    6% full
    Content-Length: 62
    archive destination /archlog is capacity 6% full on Cappire.
    Please help me in resolving this issue - why am I getting the above e-mail? With this logic I should not get any e-mail.
    Is there any issue with cron? Please let me know.

    user01 wrote:
    [the original post, quoted in full above, snipped]
    Is there any issue with the cron. Please let me know.Not a problem with cron, but possibly an issue with the fact that you are doing a string comparison on something that you are thinking of as a number.
    Also, when I'm piping a bunch of stuff together and get unexpected results, I find it useful to break it down at a command line to confirm that each step is returning what I expect.
    df -k /archlog
    df -k /archlog | awk '{ print $5 }'
    df -k /archlog | awk '{ print $5 }' | grep -v Use%
    A common mistake is to forget that jobs submitted from cron don't source the owning user's .profile. You need to make sure the script takes care of setting its environment, but that doesn't look to be the issue for this particular problem.
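    Building on that advice, a hedged rewrite of the script (two observations from breaking the pipeline down: Solaris df -k prints a header line whose fifth field is the word "capacity", which grep -v Use% does not filter, and [[ ... > ... ]] compares strings, not numbers):
    #!/usr/bin/ksh
    MAIL_LIST="[email protected]"
    # Skip the header, take the usage column of the last line, strip the % sign.
    ARCH_PCT=`df -k /archlog | awk 'NR > 1 { pct = $5 } END { print pct }' | sed 's/%//'`
    # -gt forces an integer comparison.
    if [ "$ARCH_PCT" -gt 80 ]; then
        echo "archive destination /archlog is ${ARCH_PCT}% full on Cappire." | \
            mailx -s "archive destination on cappire is ${ARCH_PCT}% full" $MAIL_LIST
    fi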

  • Built in rule in Linux to prevent additional space after 90% of mount point

    We have an RHEL instance on VMware which uses a RAID configuration for storage on a SAN device. On the virtual instance hosting our Oracle database, we have observed that the mount point on which backups are stored starts giving trouble once usage reaches 90%: the Oracle backup fails. On another virtual machine there is no problem even when usage crosses 90%. Is there something like a rule built into the file system that could prevent an attempt to write to it once it reaches 90%?
    In short: can a rule be built into the file system that prevents writing to it once it reaches 90% usage?
    Requesting a reply to my query.
    Regards

    Disk space can be restricted by implementing disk quotas, which alert a system administrator before a user consumes too much disk space or a partition becomes full. Disk quotas can be configured for individual users as well as for user groups.
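    A hedged illustration with the standard Linux quota tools (the user name, mount point and limits are placeholders; the filesystem must be mounted with the usrquota option first):
    # Build the quota files and switch quotas on, then cap the user with a
    # soft limit at roughly 90% of the hard limit (values are 1K blocks;
    # the two trailing zeros leave inodes unlimited).
    quotacheck -cum /backup
    quotaon /backup
    setquota -u oracle 9000000 10000000 0 0 /backup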

  • Changing Oracle Mount Point After Installation Is Completed.

    How easy is it to change an Oracle mount point once an SAP installation has been completed? The reason I am asking is that another Basis person believes it's easier to wipe the partition and reinstall.

    Hello,
    Yes, this is OK (for SAP/Oracle) and easy for the Unix team too. Shut down SAP and Oracle completely, and a Unix expert can change the file system. But be careful that the exact mount point name remains intact. For example, '/oracle/SID' is a mount point; if you want to change the file system, or mount this particular directory onto a different filesystem/partition, then the mount point name, i.e. '/oracle/SID', must remain the same, as required by Oracle/SAP to run.
    Thanks
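    A minimal sketch of that sequence (Solaris syntax; the device name is illustrative, and SAP plus Oracle must be fully stopped first):
    umount /oracle/SID
    # ...create or repartition the new filesystem on its new device...
    mount /dev/dsk/c1t2d0s0 /oracle/SID   # identical mount point name as before
    # Update /etc/vfstab so the new mapping survives a reboot, then restart Oracle and SAP.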
