File system sapdata2 95% SM1

Hello friends, I need your help. We are taking over BASIS administration after an SAP consultant implemented the system in our company. My question concerns our Solution Manager: the sapdata2 file system is at 95% occupied space.
As I understand it, I cannot delete anything from it, since it contains Oracle 10g database files, correct?
Or does it not matter if it fills up, because the database would simply extend into sapdata3?
What is the usual procedure in such cases?
Thanks in advance
Martin Sello

and check the maximum growth for the data files in this sapdata filesystem (when autoextend is on).
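A minimal sketch of that check (assumptions: Oracle 10g on Unix, run as the ora<sid> user; the <SID> in the path pattern is a placeholder for your system):

# List the datafiles under sapdata2 with their autoextend settings
# and maximum sizes, to see how far they can still grow.
sqlplus -s / as sysdba <<'EOF'
SELECT file_name,
       ROUND(bytes/1024/1024)    AS size_mb,
       autoextensible,
       ROUND(maxbytes/1024/1024) AS max_size_mb
FROM   dba_data_files
WHERE  file_name LIKE '/oracle/<SID>/sapdata2/%'
ORDER  BY file_name;
EOF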
Kind regards,
Mark

Similar Messages

  • File system recommendation in SAP - AIX

    Dear All,
    Please check the current file systems of our PROD system:
    Filesystem    GB blocks      Used      Free %Used Mounted on
    /dev/prddata1lv    120.00     66.61     53.39   56% /oracle/IRP/sapdata1
    /dev/prddata2lv    120.00     94.24     25.76   79% /oracle/IRP/sapdata2
    /dev/prddata3lv    120.00     74.55     45.45   63% /oracle/IRP/sapdata3
    /dev/prddata4lv    120.00     89.14     30.86   75% /oracle/IRP/sapdata4
    1. How much space is recommended when increasing the total space of the sapdata1..4 file systems in SAP?
    Currently we have assigned 120 GB to each file system. If there is a recommendation for the total space of sapdata1..4, how should we proceed?
    That is: is there any limit to increasing a sapdata file system in SAP, for example a maximum of 200 GB for each of sapdata1..4?
    2. Secondly, only sapdata4 keeps growing; the rest of the file systems do not grow much.
    Kindly suggest.
    Edited by: satheesh0812 on Dec 20, 2010 7:13 AM

    Hi,
    Please identify which activities were consuming your space so rapidly.
    Analyze whether the same activities will be performed in the future, and if so, for how many months.
    Based on a comparison with the present growth trend you can predict the future trend and decide. Also analyze which table/tablespace is growing fast, which datafiles belong to it, and where they are located; you then only need to increase that particular file system rather than increasing all of sapdata1..4.
    Based on the gathered inputs you can predict future growth trends and increase the sizes accordingly, with about 10% additional headroom.
    There is no such limit on increasing your file systems, but very large file systems can bring performance problems. For heavily growing tables, create a separate tablespace and move those tables into it, with a proper index rebuild.
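    A minimal sketch of that growth analysis (assumptions: Oracle database, run as the ora<sid> user; the tablespace name PSAPSR3 is only an example, substitute your fast-growing one):

    # Show the ten largest segments in the tablespace, to identify
    # which tables (and hence which sapdata file systems) drive growth.
    sqlplus -s / as sysdba <<'EOF'
    SELECT * FROM (
      SELECT owner, segment_name, segment_type,
             ROUND(bytes/1024/1024) AS size_mb
      FROM   dba_segments
      WHERE  tablespace_name = 'PSAPSR3'
      ORDER  BY bytes DESC)
    WHERE ROWNUM <= 10;
    EOF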
    Regards.
    Edited by: Sita Rr Uppalapati on Dec 20, 2010 3:20 PM

  • In RZ20, not all file systems are showing up

    Hi all.
    We have CCMS configured locally in each system, not in a central system; every system has its own CCMS configuration.
    In RZ20, not all file systems are showing up.
    At OS level there are file systems like sapdata1, sapdata2, ... and sapreorg, but these do not appear in RZ20, so we are not getting any alerts for them.
    Kindly suggest how I can fetch all the file systems that exist at OS level.
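    A common first check, sketched below (assumption: the OS-level metrics in RZ20 come from the SAP OS collector saposcol, so a stale collector is a frequent cause; run as <sid>adm):

    saposcol -s   # show the collector status
    saposcol -k   # stop the collector
    saposcol -l   # start it again, then recheck RZ20 after a few minutes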
    thanks in advance.
    Thanks & regards,
    Laxmid

    Hi Kumar,
    OS: HP-UX
    In RZ20 -> SAP CCMS Monitor Templates -> Filesystems, the file systems below are not showing up; the others are:
    /oracle/RXN/sapdata1
    /oracle/RXN/sapdata2
    /oracle/RXN/sapdata3
    /oracle/RXN/sapdata4
    /oracle/RXN/oraarch
    /oracle/RXN/sapreorg
    Also, please tell me: there is a lock & unlock icon in front of every node. What is this?
    Thanks & Regards
    Laxmi
    Edited by: laxmid on Feb 7, 2010 9:04 PM

  • SAP file systems are updated at storage level as well as in Trash

    Hi Friends,
    We are facing a strange but serious issue with our Linux system. We have multiple instances installed on it, but one instance's file systems are visible in the Trash.
    The exact issue is this:
    1. We have DB2 installed on Linux, and one of our instance's mount points shows up in the Linux Trash. If I create a file at storage level, e.g. touch /db2/SID/log_dir/test, it is dynamically reflected in the Trash and appears there too.
    2. This cannot be normal behavior for any OS.
    3. If I delete any file from the Trash belonging to this particular SID (instance), the file is deleted from its actual location.
    I know this is not related to SAP configuration, but I want to find the root cause. If any Linux expert can help with this issue, I am waiting for an early reply.
    Regards,

    Hi Nelis,
    I think you have misinterpreted this issue; let me explain in detail. We have the following mount points in storage, with SAP installed on them:
    /db2/BID
    /db2/BID/log_dir
    /db2/BID/log_dir2
    /db2/BID/log_archive
    /db2/BID/db2dump
    /db2/BID/saptemp1
    /db2/BID/sapdata1
    /db2/BID/sapdata2
    /db2/BID/sapdata3
    /db2/BID/sapdata4
    Now I can see the same mount points in the Trash of Linux. If I create a folder or file in any of the above-mentioned mount points, it is dynamically reflected in the Trash, and if I delete something at storage/OS level, the same is deleted from the Trash, and vice versa.
    I have checked everything and no softlink exists anywhere, but I am not sure about the storage/OS level; that is what I want to find out.
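    One way to verify this, as a sketch (assumption: a desktop environment following the freedesktop.org trash specification, which keeps a per-mount .Trash or .Trash-<uid> directory at the root of each mount point; if such directories exist, the Trash view is simply listing files that live on the mounts themselves):

    # Look for trash directories at the root of each DB2 mount point.
    for mp in /db2/BID /db2/BID/log_dir /db2/BID/sapdata1 /db2/BID/sapdata2; do
        ls -ld "$mp"/.Trash* 2>/dev/null
    done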
    Regards,

  • File system for SAP Ecc 6.0 with db2 on AIX

    Hello everyone,
    Please suggest the file system sizes to be created before installation of SAP ECC 6.0 SR3 on AIX with DB2.
    Thanks

    For reference, here is our PRD system:
    df -M
    Filesystem    Mounted on         512-blocks      Free %Used    Iused %Iused
    /dev/hd4      /                     4194304   2008384   53%     2438     2%
    /dev/hd2      /usr                 33554432  24364192   28%    48662     2%
    /dev/hd9var   /var                  8388608   7880328    7%      580     1%
    /dev/hd3      /tmp                  8388608   8342704    1%      123     1%
    /dev/fwdump   /var/adm/ras/platform     262144    261448    1%        4     1%
    /dev/hd1      /home                 2097152   1928856    9%      131     1%
    /proc         /proc                       -         -    -         -     -
    /dev/hd10opt  /opt                  4194304   2851592   33%     6285     2%
    /dev/fslv12   /instcd1             41943040   9293200   78%    15850     2%
    /dev/fslv00   /sapmnt/P11          10485760   8752544   17%     5955     1%
    /dev/fslv01   /usr/sap/P11         10485760   8244400   22%     1319     1%
    /dev/fslv02   /db2/db2p11           4194304   3100248   27%     3941     2%
    /dev/fslv03   /db2/P11/log_dir     20971520  18346264   13%       26     1%
    /dev/fslv04   /db2/P11/log_dir2    20971520  18346424   13%       26     1%
    /dev/fslv05   /db2/P11/db2dump      4194304   3422024   19%       23     1%
    /dev/fslv06   /db2/P11/sapdata1   272629760 220914312   19%      106     1%
    /dev/fslv07   /db2/P11/sapdata2   272629760 220914000   19%      106     1%
    /dev/fslv08   /db2/P11/sapdata3   272629760 220914304   19%      106     1%
    /dev/fslv09   /db2/P11/sapdata4   272629760 220914280   19%      106     1%
    /dev/fslv10   /db2/P11/saptemp1    10485760  10483352    1%       47     1%
    /dev/fslv11   /db2/P11/mirrlog     20971520  20967456    1%        7     1%
    /dev/fslv13   /db2/P11/archlog     94371840  46774984   51%      380     1%
    /dev/fslv14   /sap/auditlog        12582912  12311392    3%       51     1%
    *****rpd11:/usr/sap/trans /usr/sap/trans       33554432  11883968   65%     5738
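    As a sketch of creating one of these file systems up front (assumptions: AIX with JFS2, a volume group named sapvg, and the size is only an example taken from the listing above; repeat per file system):

    # Create and mount a 130 GB JFS2 file system for sapdata1,
    # auto-mounted at boot (-A yes).
    crfs -v jfs2 -g sapvg -m /db2/P11/sapdata1 -a size=130G -A yes
    mount /db2/P11/sapdata1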

  • Decision on File system management in Oracle+SAP

    Hi All,
    In my production system we have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially many datafiles were assigned to the tablespace PSAPSR3, a few with autoextend on and a few with autoextend off. As per my understanding, DB02 shows this information only tablespace-wise: it reports AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON. In PSAPSR3, all the datafiles with autoextend ON are in SAPDATA1, which has only 50 GB left; all the files with autoextend OFF are in SAPDATA2, which has 900 GB of space left.
    Now the question is :
    1. Do I need to request additional space for SAPDATA1? Some of the datafiles are close to their autoextend limit and that much space is not left in the file system (sapdata1), so how will they extend? DB growth is 100 GB per month.
    2. We usually add a 10 GB datafile to the tablespace with a 30 GB autoextend limit. Can we add the next datafile in sapdata2 this time, with autoextend ON, so that the rest is taken care of automatically?
    Please suggest.
    Regards,
    VIcky

    Hi Vicky,
    As you have 100 GB/month growth, my suggestions would be:
    1) Add two more mount points, sapdata3 and sapdata4, with around 1 TB of space.
       This distributes the data across four partitions for better performance.
    2) As sapdata1 has datafiles with autoextend ON, extend that file system to at least 500 GB, so that whenever data is written to datafiles under sapdata1 they have room to grow via the autoextend feature. Without sufficient disk space this leads to space problems, and transactions may end in a dump.
    3) No need to change anything on sapdata2, as you already have 900 GB of free space there; a sketch of adding the next datafile there follows below.
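    A minimal sketch of adding that next datafile on sapdata2 (assumptions: tablespace PSAPSR3, and the directory/file name sr3_99/sr3.data99 is an example only; in practice BR*Tools/brspace would be the usual SAP way to do this):

    # Add a 10 GB datafile that can autoextend up to 30 GB.
    sqlplus -s / as sysdba <<'EOF'
    ALTER TABLESPACE psapsr3
      ADD DATAFILE '/oracle/SID/sapdata2/sr3_99/sr3.data99'
      SIZE 10G AUTOEXTEND ON NEXT 100M MAXSIZE 30G;
    EOF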
    Hope this helps.
    Regards,
    Deepak Kori

  • SAP file system restoration on other server

    Dear Experts,
    To check that our offline file system backup is successful, we are planning to restore the offline file system backup from the tape on to a new test server.
    Our current SAP system (ABAP only) is in cluster with CI running on one node (using virtual host name cicep) & DB running on another node (using virtual host name dbcep).
    Now, is it possible to restore the offline file system backup of the above said cluster server on to a single server with a different host name?
    Please help on this.
    Regards,
    Ashish Khanduri

    Dear Ashish
    We want to include a file system backup process as part of our backup strategy. To test the waters, we are planning to take a backup at the file system level. The following are the file systems in our production system.
    We have a test server (with a different hostname), without any file systems created beforehand.
    I want to know:
    1. Which filesystems will be required from the below:
    /dev/hd4         4194304   3772184   11%     5621     2% /
    /dev/hd2        10485760   6151688   42%    43526     6% /usr
    /dev/hd9var      4194304   4048944    4%     4510     1% /var
    /dev/hd3         4194304   2571760   39%     1543     1% /tmp
    /dev/hd1          131072    129248    2%       85     1% /home
    /proc                  -         -    -         -     -  /proc
    /dev/hd10opt      655360    211232   68%     5356    18% /opt
    /dev/oraclelv   83886080  73188656   13%    11091     1% /oracle
    /dev/optoralv   20971520  20967664    1%        4     1% /opt/oracle
    /dev/oracleGSPlv   83886080  74783824   11%    18989     1% /oracle/GSP
    /dev/sapdata1lv  833617920 137990760   84%     3189     1% /oracle/GSP/sapdata1
    /dev/sapdata2lv  623902720 215847400   66%       82     1% /oracle/GSP/sapdata2
    /dev/sapdata3lv  207093760 108510632   48%       24     1% /oracle/GSP/sapdata3
    /dev/sapdata4lv  207093760 127516424   39%       28     1% /oracle/GSP/sapdata4
    /dev/origlogAlv   20971520  20730080    2%        8     1% /oracle/GSP/origlogA
    /dev/origlogBlv   20971520  20730080    2%        8     1% /oracle/GSP/origlogB
    /dev/mirrlogAlv   20971520  20762848    1%        6     1% /oracle/GSP/mirrlogA
    /dev/mirrlogBlv   20971520  20762848    1%        6     1% /oracle/GSP/mirrlogB
    /dev/oraarchlv  311951360 265915600   15%      526     1% /oracle/GSP/oraarch
    /dev/usrsaplv   41943040  41449440    2%      165     1% /usr/sap
    /dev/sapmntlv   41943040  20149168   52%   565823    21% /sapmnt
    /dev/usrsapGSPlv   41943040  25406768   40%   120250     5% /usr/sap/GSP
    /dev/saptranslv   41943040   5244424   88%   136618    18% /usr/sap/trans
    IDES:/sapcd     83886080   4791136   95%    18878     4% /sapcd
    GILSAPED:/usr/sap/trans   41943040   5244424   88%   136618    18% /usr/sap/trans
    2. Is it possible to back up the file systems directly (like /dev/oracleGSPlv)? I ask because, when I back up /oracle using tar, all the folders in /oracle, like /oracle/GSP, /oracle/GSP/sapdata1 etc., are also backed up. I do not want that; I would like to back up each file system directly. (See the sketch below.)
    3. Which Unix backup tools are used to back up the individual file systems?
    4. How do we restore the file systems to the test server?
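    For question 2, a minimal sketch (assumption: GNU tar is installed, since AIX's native tar has no such option; the archive path is an example):

    # Archive the /oracle/GSP file system without descending into the
    # file systems mounted below it (sapdata1..4, origlogA/B, oraarch, ...).
    tar --one-file-system -cvf /backup/oracle_GSP.tar /oracle/GSP

    # On the test server, restore relative to / after creating and
    # mounting the target file systems.
    cd / && tar -xvf /backup/oracle_GSP.tar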
    Thanks for your advice.
    Abdul
    Edited by: Abdul Rahim Shaik on Feb 8, 2010 12:10 PM

  • File system /oracle/SID/sapdata3 empty after EP 6.0 installation

    Hi Friends
    I'm installing EP 6.0 on WAS 6.40 SR1, AIX 5.1/Oracle 9.2.0, as a central system where the Oracle database and the central instance are installed on the same host using the standard SAP DVDs. The installation guide "SAP Web Application Server Java 6.40 SR1 on AIX: Oracle - Planning & Preparation" specifies creating four file systems: /oracle/SID/sapdata1,
    /oracle/SID/sapdata2, /oracle/SID/sapdata3, /oracle/SID/sapdata4.
    I created the four file systems, but I notice that after completing the installation, with the portal up and running, the file system /oracle/SID/sapdata3 is empty, with no directories or files created inside. Can anybody advise: if this file system never gets any files, why does the SAP document ask to create it in the first place?
    Thanks & Rgds,
    Abhishek

    Hi Abhishek,
    Kindly go through SAP Note 972263; it clearly explains the file systems required and their sizes.
    SAP NetWeaver 2004s SR2 Java:
    File System                  Space Requirement
    /oracle/<DBSID>/sapdata1     1 GB
    /oracle/<DBSID>/sapdata2     3 GB
    /oracle/<DBSID>/sapdata3     -
    /oracle/<DBSID>/sapdata4     6 GB
    Sum                          10 GB
    Regards
    Manjunath

  • Tablespace: PSAPTEMP # Tablespace files autoextend can cause file system overflow

    Hi All,
    We are seeing the above message after updating the kernel from 701 SP 125 to 720 SP 600.
    I have checked the data files that make up PSAPTEMP and none are set to autoextend, so I am confused as to what it is complaining about.
    Tablespace overview (DB02):
    Tablespace name         PSAPTEMP
    Size (MB)               11,264.00
    Free (MB)               11,262.00
    Used (%)                0
    Autoextend              NO
    Total size (MB)         11,264.00
    Total free space (MB)   11,262.00
    Total used (%)          0
    #Files                  6
    #Segments               0
    #Extents                0
    Status                  ONLINE
    Contents                TEMPORARY
    Compression             DISABLED
    Data files (all in tablespace PSAPTEMP, all AVAILABLE, autoextensible NO, Maxsize/Maxblocks/Increment by = 0; file Id equals the relative file number):
    File name                                 Id  Size (MB)  #Blocks  User size (MB)  User blocks
    /oracle/SID/sapdata1/temp_1/temp.data1     1   2,048.00  262,144        2,047.00      262,016
    /oracle/SID/sapdata1/temp_2/temp.data2     2   2,048.00  262,144        2,047.00      262,016
    /oracle/SID/sapdata1/temp_3/temp.data3     3   2,048.00  262,144        2,047.00      262,016
    /oracle/SID/sapdata3/temp_4/temp.data4     4   3,072.00  393,216        3,071.00      393,088
    /oracle/SID/sapdata2/temp_5/temp.data5     5   1,024.00  131,072        1,023.00      130,944
    /oracle/SID/sapdata2/temp_6/temp.data6     6   1,024.00  131,072        1,023.00      130,944
    Any ideas how to resolve this or is it a bug?
    Thanks
    Craig

    Hi Craig,
    Deactivating the alert should be the last resort; first check how the tempfiles are set up. If they are created as sparse files, they may not use all of their allocated space at file system level, yet they can still grow up to it and cause an overflow...
    To deactivate that behavior you should copy the temp files with a specific cp option (--sparse=never) to remove the sparse attribute, as explained in the notes below (the second one applies to 11g only and is therefore not valid for you).
    Regards
    548221 - Temporary Files are created as sparse files
    Oracle creates files of temporary tablespaces as sparse files.
    'Sparse' is a special property a file in Unix operating systems can have. Sparse files are files that can dynamically grow and shrink depending on their content. So many sparse files can use a common pool of free disk space. Also the creation of a sparse file is much faster than creation of a normal file.
    Problems can occur if sparse files on the same disk grow in parallel and there is not sufficient free disk space for all to extend.
    Usage of sparse files is not a bug. Therefore there is no possibility to tell Oracle not to use sparse files for the temporary tablespace if the operating system offers sparse file functionality.
    1864212 - Forcing complete space allocation for temp files
    By default, temp files are not initialized when the file is created and therefore the disk space is not pre-allocated in the file system.
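    As a sketch of detecting and fixing this, following the cp --sparse=never approach from note 548221 (assumptions: Linux/GNU coreutils, the database or at least the temp tablespace is offline while the files are swapped, and the paths match the list above):

    cd /oracle/SID/sapdata1/temp_1
    # A sparse file reports far fewer allocated blocks (du) than its
    # apparent size (ls -l).
    ls -l temp.data1
    du -k temp.data1
    # Re-copy without the sparse attribute, then swap the files.
    cp --sparse=never temp.data1 temp.data1.full
    mv temp.data1 temp.data1.sparse
    mv temp.data1.full temp.data1
    # Remove temp.data1.sparse once the database opens cleanly.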

  • How to fix file system error 56635 in Windows 8.1

    Hello, please help me fix this problem; I can't install anything on my laptop.
    File system error 56635 shows up when I install my Kaspersky 2015.

    Hello Jay,
    The current forum is for developers. I'd suggest asking non-programming questions on the
    Office 2013 and Office 365 ProPlus - IT Pro General Discussions  forum instead.

  • ISE 1.2 VM file system recommendation

    Hi,
    According to http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html#wp1056074, table 4-1 discusses the storage requirements for ISE 1.2 in a VM environment. It recommends VMFS.
    What are the implications of using NFS instead? Is this just a recommendation or an actual requirement?
    At the moment we use a NetApp array, which serves NFS for all vApps. It will be difficult to justify creating an additional FC HBA just for this one vApp. Please explain.
    TIA,
    Byung

    If you refer to
    http://www.cisco.com/en/US/docs/security/ise/1.2/installation_guide/ise_vmware.html
    It says :
    Storage
    •File System—VMFS
    We recommend that you use VMFS for storage. Other storage protocols are not tested and might result in some file system errors.
    •Internal Storage—SCSI/SAS
    •External Storage—iSCSI/SAN
    We do not recommend the use of NFS storage.

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    An SAP Going Live Verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block:
    File name                                     Blocks read   Avg. read time (ms)   Total read time per datafile (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10        67534                    23                            1553282
    I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually we have BW loads that generate "Checkpoint not complete" messages every night.
    I've read in SAP note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
    The recommended ("standard") values are published at the end of sapnote #322896.
    23 ms really seems a little high to me - for example, we have around 4 to 6 ms on our productive system (with SAN storage).
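    A sketch of how to pull those per-file averages yourself (assumptions: run as sysdba; v$filestat reports read time in centiseconds, hence the factor of 10, and counts physical read requests rather than blocks):

    sqlplus -s / as sysdba <<'EOF'
    SELECT d.name,
           f.phyrds                                    AS physical_reads,
           ROUND(f.readtim * 10 / NULLIF(f.phyrds, 0)) AS avg_read_ms
    FROM   v$filestat f JOIN v$datafile d ON f.file# = d.file#
    ORDER  BY avg_read_ms DESC;
    EOF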
    >> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
    Correct.
    >> But how is it that frequent chekpoints should decrease the time necessary for recovery ?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event the following three things happen in an Oracle database:
    - Every dirty block in the buffer cache is written down to the datafiles
    - The latest SCN is written (updated) into the datafile headers
    - The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written down to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN - ergo the recovery is faster.
    But this concept does not quite match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept (for example FAST_START_MTTR_TARGET).
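    Before resizing the logs, a quick sketch of checking the switch frequency (assumption: run as sysdba; the nightly BW load window should stand out as hours with many switches):

    sqlplus -s / as sysdba <<'EOF'
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS log_switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY hour;
    EOF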
    Regards
    Stefan

  • Error message "Live file system repair is not supported."

    System won't boot. Directed to Disk Utility to repair but get error message "Live file system repair is not supported."  Appreciate all help.
    Thanks.
    John

    I recently ran into a similar issue with my Time Machine backup disk. After about 6 days of no backups - I had swapped the disk for my photo library for a media project; I reattached the Time Machine disk and attempted a backup.
    Time Machine could not backup to the disk. Running Disk Utility and attempting to Repair the disk ended up returning the "Live file system repair is not supported" message.
    After much experimentation with disk analysis software, I came to the realization that the issue might be that the USB disk dock wasn't connected directly to the MacBook Pro - it was daisy-chained through a USB hub.
    Connecting the USB disk dock directly to the MBP and running Disk Utility appears to have resolved the issue. DU ran for about 6 hours and successfully repaired the disk. Consequently, I have been able to use that Time Machine disk for subsequent backups.

  • How to get access to the local file system when running with Web Start

    I'm trying to create a JavaFX app that reads and writes image files to the local file system. Unfortunately, when I run it using the JNLP file that NetBeans generates, I get access permission errors when I try to create an Image object from a .png file.
    Is there any way to make this work in NetBeans? I assume I need to sign the jar or something? I tried turning "Enable Web Start" on in the application settings, with "self-sign by generated key", but that made it so the app wouldn't launch at all using the JNLP file.

    Same as with any other Web Start app: sign the app or modify the policies of the local JRE. Better to sign the app with a temporary certificate.
    As for the second error (the signed app does not launch), I have no idea, as I haven't tried using JWS with FX 2.0 yet. Try activating the console and logging in Java's control panel options (in W7, JWS logs are in c:\users\<userid>\appdata\LocalLow\Sun\Java\Deployment\log) and see if anything appears there.
    Anyway, JWS errors are notoriously hard to figure out and the whole technology in itself is temperamental. Find the tool named JaNeLA on the web; it will help you analyze syntax errors in your JNLP (though it is not aware of the new syntax introduced for FX 2.0 and may produce lots of errors on those), and head to the JWS forum (Java Web Start & JNLP - Andrew Thompson, who dwells over there, is the author of JaNeLA).
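    A minimal sketch of the signing route (assumptions: the JDK's keytool and jarsigner are on the PATH; the alias, keystore and jar names are placeholders; a self-generated certificate will still trigger a trust prompt at launch):

    # Generate a key pair in a new keystore, then sign the application jar.
    keytool -genkeypair -alias myfxapp -keyalg RSA -keysize 2048 \
            -validity 365 -keystore mystore.jks
    jarsigner -keystore mystore.jks MyFXApp.jar myfxapp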

  • Zone install file system failed?

    On the global zone, my /opt file system is like this:
    /dev/dsk/c1t1d0s3 70547482 28931156 40910852 42% /opt
    I am trying to add it to NMSZone1 with this config:
    fs:
    dir: /opt
    special: /dev/dsk/c1t1d0s3
    raw: /dev/rdsk/c1t1d0s3
    type: ufs
    options: [nodevices,logging]
    But failed like this:
    bash-2.05b# zoneadm -z NMSZone1 boot
    zoneadm: zone 'NMSZone1': fsck of '/dev/rdsk/c1t1d0s3' failed with exit status 33; run fsck manually
    zoneadm: zone 'NMSZone1': unable to get zoneid: Invalid argument
    zoneadm: zone 'NMSZone1': unable to destroy zone
    zoneadm: zone 'NMSZone1': call to zoneadmd failed
    Please help me. Thanks.

    It appears that the c1t1d0s3 device is already in use as /opt in the global zone. Is that indeed the case? If so, you need to unmount it from there (and remove or comment out its entry in the global zone's /etc/vfstab file) and then try booting the zone again.
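    As a sketch of that fix (assumptions: the device really is mounted as /opt in the global zone and nothing there is still using it; comment out the /opt entry in /etc/vfstab by hand):

    umount /opt                     # release the device in the global zone
    fsck -y /dev/rdsk/c1t1d0s3     # clean the file system, as the error demanded
    zoneadm -z NMSZone1 boot       # retry the zone boot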
