SAP Backups: R/3 & File System

Hello,
What is the difference between an SAP R/3 backup and an SAP file system backup?
What files are backed up in each case?
Which backup covers the Oracle DB files (data files, control files, redo logs, SPFILE)?
Which backup covers the O/S files (here, SUSE O/S)?
Thanks

There is no separate concept of an "SAP file system backup"; the term simply means an OS-level file system backup.
An SAP R/3 backup backs up all the sapdata (database) files, the control files, and the redo logs.
An OS-level file system backup backs up whichever files you choose, based on your requirements and rotation.
Both backups can be used for restores. The SAP R/3 backup tools take care of both backup and restoration for you, but with an OS-level backup you must first know which files need to be backed up and what each directory contains.
Regards,
Nick Loy
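To make the OS-level variant concrete, here is a minimal sketch of a file system backup and restore cycle with tar. All paths below are invented for the demonstration and are not the real SAP directory layout.

```shell
# Create a stand-in for a directory tree worth backing up (illustrative paths).
mkdir -p /tmp/fsdemo/usr/sap/profile
echo "instance profile" > /tmp/fsdemo/usr/sap/profile/DEFAULT.PFL

# "Backup": archive the tree into a compressed tar file.
tar -czf /tmp/fsdemo/sapfs_backup.tar.gz -C /tmp/fsdemo usr/sap

# Simulate loss of the files, then "restore" from the archive.
rm -rf /tmp/fsdemo/usr
tar -xzf /tmp/fsdemo/sapfs_backup.tar.gz -C /tmp/fsdemo

cat /tmp/fsdemo/usr/sap/profile/DEFAULT.PFL
```

This also illustrates the caveat above: tar restores only what you told it to archive, so with OS-level backups the directory selection is entirely your responsibility.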

Similar Messages

  • Backup of application file system in oracle apps 11i

    What are the ways to take a backup of the application file system in Oracle Applications 11i?
    Can anyone advise on this?
    Application :11.5.10.2
    OS : RHEL 3.0

    Check the following threads:
    Apps Backup
    Re: Apps Backup
    Backup Oracle Applications (11.5.10.2)
    Re: Backup Oracle Applications (11.5.10.2)
    Best Backup Strategy
    Re: Best Backup Strategy
    System Backup
    system backup
    Recommended backup and recovery strategy for EBS
    Re: Recommended backup and recovery strategy for EBS

  • Backup of application file system is also required ?

    Hi,
    Basic question.
    Suppose the database crashed and we restored it from a cold backup taken two days ago.
    How will the Oracle Applications file system stay in sync with the database?
    Do we need to back up the application file system as well?
    Thanks,
    Kishore

    Hi,
    It's also worth mentioning that your concurrent logs in the application file system will be out of sync with the records in the database.
    If you restore the database from a backup taken a couple of days ago then the records in FND_CONCURRENT_REQUESTS will be a couple of days old, but the users will have been running requests since the backup was made and creating files in $APPLCSF/log/<context>. The danger is that when you restart the environment the users start to run concurrent requests and the output of the request will be appended to the end of the existing logfile.
    So best to find the maximum request number in the table and remove any output files newer than that to avoid confusion.
    HTH
    J
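The "remove anything newer than the restored state" idea can be sketched with find. The directory, file names, and timestamps below are invented for the demonstration; in practice the cutoff would come from the maximum request in FND_CONCURRENT_REQUESTS, and the files would live under $APPLCSF/log.

```shell
mkdir -p /tmp/applcsf_demo/log

# A log file that existed at backup time (old timestamp).
touch -t 202001010000 /tmp/applcsf_demo/log/l100.req
# A marker file stamped with the time of the restored backup.
touch -t 202301010000 /tmp/applcsf_demo/marker
# A log file written by requests run after the backup (newer timestamp).
touch -t 202401010000 /tmp/applcsf_demo/log/l200.req

# List files newer than the marker; add -delete once the listing looks right.
find /tmp/applcsf_demo/log -type f -newer /tmp/applcsf_demo/marker
```

Only the post-backup file is listed, which is exactly the set of outputs that would otherwise collide with rerun requests.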

  • SAP ECC 6.0 file system Restore

    Dear Friends,
    Happy Holi.
    I have restored a file system backup of our development server onto another host running Oracle 10g and ECC 6.0.
    After the restore, the database came up successfully.
    But the listener is not running; when I tried to start the listener it gave the error below.
    jkeccbc:oradvr 2> lsnrctl start
    LSNRCTL for HPUX: Version 10.2.0.2.0 - Production on 19-MAR-2011 15:18:33
    Copyright (c) 1991, 2005, Oracle.  All rights reserved.
    Starting /oracle/DVR/102_64/bin/tnslsnr: please wait...
    TNSLSNR for HPUX: Version 10.2.0.2.0 - Production
    System parameter file is /oracle/DVR/102_64/network/admin/listener.ora
    Log messages written to /oracle/DVR/102_64/network/log/listener.log
    Error listening on: (ADDRESS=(PROTOCOL=IPC)(KEY=DVR.WORLD))
    TNS-12557: TNS:protocol adapter not loadable
    TNS-12560: TNS:protocol adapter error
      TNS-00527: Protocol Adapter not loadable
    Listener failed to start. See the error message(s) above...
    Regards
    Ganesh Datt Tiwari

    Hi Mark,
    Please find below
    cat listener.ora
    Filename......: listener.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/LISTENER.ORA#4 $
    ADMIN_RESTRICTIONS_LISTENER = on
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = DVR.WORLD)
        )
        (ADDRESS =
          (PROTOCOL = IPC)
          (KEY = DVR)
        )
        (ADDRESS =
          (COMMUNITY = SAP.WORLD)
          (PROTOCOL = TCP)
          (HOST = jkeccbc)
          (PORT = 1527)
        )
      )
    STARTUP_WAIT_TIME_LISTENER = 0
    CONNECT_TIMEOUT_LISTENER = 10
    TRACE_LEVEL_LISTENER = OFF
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = DVR)
          (ORACLE_HOME = /oracle/DVR/102_64)
        )
      )
    ===========================================
    cat tnsnames.ora
    Filename......: tnsnames.ora
    Created.......: created by SAP AG, R/3 Rel. >= 6.10
    Name..........:
    Date..........:
    @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/TNSNAMES.ORA#4 $
    DVR.WORLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS =
            (COMMUNITY = SAP.WORLD)
            (PROTOCOL = TCP)
            (HOST = jkeccbc)
            (PORT = 1527)
          )
        )
        (CONNECT_DATA =
          (SID = DVR)
          (GLOBAL_NAME = DVR.WORLD)
        )
      )
    Regards
    Ganesh Datt Tiwari

  • RAC backup with RMAN...put backup on diff file system

    hello all,
    I have not worked much with SAN storage. One of my clients has implemented 9i RAC. Now he wants to add two more disks to the SAN storage (RAID is implemented). Sun engineers will do this, but beforehand I have to take a full database backup (an 80 GB database) through RMAN. My confusion is that the database is on Sun SAN storage, and I have to put the full RMAN database backup on the local hard disk of node 1 of the RAC. Is this possible, given that the SAN storage uses raw devices (as I guess) and I am putting the backup on the local file system?
    Please help me out; I have to do this within a couple of days.
    Please also tell me the procedure for changing the backup path in RMAN, if the above is possible.
    It's urgent.
    Thanks and Regards!!
    Pankaj Rawat

    Two things:
    1) You will not have any problems taking an RMAN backup with RAC and raw devices. Neither makes your backups any different.
    2) Based on your post, you are not very confident in your RMAN skills, and this is your real problem. What is a must for you: take the backup, copy it to another machine, and try to restore from it. Note that you should NOT look at your original database during the restore or take any files from it (not even init.ora or the spfile). If you have not done this and do not have an exact procedure, consider your backup useless. This is a conservative approach, but believe me, it is worth it when your SAN engineers screw up your storage. And they warned you. ;-)
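On the side question of pointing the backup at a local directory: RMAN writes its backup pieces wherever the channel FORMAT points, regardless of whether the source datafiles sit on raw/SAN storage. A hedged sketch ('/backup/rman' is an invented local directory; verify free space for the ~80 GB first):

```sql
-- One-off: name the destination in the backup command itself.
RMAN> BACKUP DATABASE FORMAT '/backup/rman/%U';

-- Or persistently, for all future disk backups:
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/rman/%U';
RMAN> BACKUP DATABASE;
```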

  • Oracle DB and File system backup configuration

    Hi,
    As I understand from the help documents and guides, brbackup, brarchive and brrestore are the tools used for backing up and restoring the Oracle database and the file system. We have TSM (Tivoli Storage Manager) in our infrastructure for managing backups centrally. Before configuring the backup with TSM, I want to test the backup/restore configuration locally, i.e. storing the backup on the local file system and then restoring from there. Our backup strategy is a full online backup on weekends and incremental backups on weekdays. Given this, the following are the things I want to test.
    1. Full online backup (to local file system)
    2. Incremental online backup (to local file system)
    3. Restore (from local file system)
    I found the help documents to be very generic and couldn't get any specific information on a comprehensive configuration to achieve this. Can someone help with an end-to-end configuration?
    We are using SAP Portal 7.0 (NW2004s) with Oracle 10g database hosted on AIX server.
    Helpful answers will be rewarded
    Regards,
    Chandra

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or UNIX we can set "direct to disk" at the OS level if required, but on Windows it is "direct to disk" by default, so we do not need to set it manually?
    And I have a further question. If a database is stored on a SAN disk, say a volume from a disk array, and the disk array can take block-level snapshots of a volume, we need to implement an online backup of the database. The steps are: alter tablespace begin backup, alter system suspend, then take a snapshot of the volume that stores all the database files, including datafiles, redo logs, archived redo logs, control file, server parameter file, network parameter files, and password file. Do you think this backup is consistent or not? Please note that we do not flush the filesystem cache before these steps; let's assume the SAN cache is flushed automatically. Can I assume it is consistent because the redo writes are synchronous?
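For reference, the sequence described above corresponds to the following Oracle statements; the snapshot step itself is storage-vendor specific and is shown only as a placeholder.

```sql
ALTER DATABASE BEGIN BACKUP;   -- put datafiles in hot-backup mode (10g+; per tablespace on older releases)
ALTER SYSTEM SUSPEND;          -- quiesce database I/O before the snapshot

-- <take the storage-level snapshot of all database volumes here>

ALTER SYSTEM RESUME;
ALTER DATABASE END BACKUP;
```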

  • Backup and restore Root File system

    Hi
    Can I take a backup of the root file system using ufsdump and later restore it completely using ufsrestore?
    Please give me the steps or a link
    Thanks in advance
    Ashraf.

    In short, yes. But the steps depend on where you are going to store the backup, whether you are running SPARC or x86, and what you do with your disk in between.
    Boot in single-user mode.
    example# ufsdump 0cfu /dev/rmt/0 /dev/rdsk/c0t3d0s0
    /dev/rmt/0 (tape) can also be a file on another partition, e.g. /backup/root_backup.
    For the restore it is safest to boot from CD.
    Mount the disk to restore to under /a:
    cd /a
    ufsrestore rf /dev/rmt/0 (or your file /backup/root_backup)
    If the partition was reformatted you may have to install new boot blocks.
    Please read more at docs.sun.com.
    This is just advice, not a detailed work order.
    /Gunnar

  • File system /usr/sap/SID full

    Hello All,
    File system /usr/sap/SID is getting full.
    I have reset the trace from the SAP level, but the file system is still growing rapidly.
    The two files below are rapidly consuming more space in the file system:
    hostname:sidadm 136> du -sk dev_server1
    31400   dev_server1
    hostname:sidadm 136> pwd
    /usr/sap/SID/DVEBMGS02/work
    more dev_server1
    [Thr 76096] *  LOCATION    SAP-Gateway on host xxxx / sapgw02
    [Thr 76096] *  ERROR       registration of tp SLD_NUC from host xxxx not allowed
    [Thr 76096] *
    TIME        Sat Mar 27 13:05:32 2010
    [Thr 76096] *  RELEASE     700
    [Thr 76096] *  COMPONENT   SAP-Gateway
    [Thr 76096] *  VERSION     2
    [Thr 76096] *  RC          720
    [Thr 76096] *  MODULE      gwxxrd.c
    [Thr 76096] *  LINE        3766
    [Thr 76096] *  COUNTER     60935963
    [Thr 76096] *
    [Thr 76096] *****************************************************************************
    [Thr 76096] *** ERROR => SAP_CMACCPTP3: wrong rqtype SAP_ACCPTP [r3cpic_mt.c  10240]
    [Thr 75582]
    Please help me solve this permanently.
    Regards
    Mohsin

    Hi Gaurav,
    I found the issue: the problem was the reginfo and secinfo on my ABAP system, which was connected to my JAVA system; they were not allowing the requests to pass through, and hence the program was not getting registered. I deactivated the reginfo and secinfo parameters and it is now working fine.
    I will look into where I went wrong with reginfo/secinfo.
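Note that deactivating reginfo/secinfo removes the gateway's access control entirely. A more restrictive fix is usually a permit line in the reg_info file; a hedged example (the host name is a placeholder, and the exact syntax should be checked against your release's gateway security documentation):

```
# gw/reg_info: allow only <sld_host> to register the program ID SLD_NUC
P TP=SLD_NUC HOST=<sld_host>
```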

  • How to recover DB2 using offline file system backup.

    Gurus,
    I had a problem with my BI PRD database. One of my BI consultants accidentally deleted some important 2010 data. We tried to recover an online database backup of BWP into BWQ to retrieve the missing data. However, the backup retention period had expired and we do not have any available online backup now.
    Our only hope is the offline file system backup, which backs up the whole file system of BWP only.
    My question:
    Can I restore the file system (offline backup) into BWQ and rebuild my DB2 database in BWQ? Please enlighten me.
    Thanks,
    Devan.

    Hi
    You can do it. The file system backup image can be mirrored into the other system, and by using the "db2inidb" utility you can restore the whole database.
    http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/core/r0004473.htm
    Thanks
    Romansh
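In outline, and hedged (the database alias and option are illustrative; see the linked db2inidb documentation for the exact procedure and for path/instance relocation on your release):

```
# After laying BWP's offline file-system image down on the BWQ host
# (relocating paths and instance names as needed):
db2inidb BWQ as snapshot
```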

  • Backup into file system

    Hi
    Setting backup-storage with the following configuration is not generating backup files under the said location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up into the file system. Can you let me know what I am missing here?
    Thanks
    sunder
    <distributed-scheme>
         <scheme-name>distributed-Customer</scheme-name>
         <service-name>DistributedCache</service-name>
         <!-- <thread-count>5</thread-count> -->
         <backup-count>1</backup-count>
         <backup-storage>
         <type>file-mapped</type>
         <directory>/data/xx/backupstorage</directory>
         <initial-size>1KB</initial-size>
         <maximum-size>1KB</maximum-size>
         </backup-storage>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <scheme-name>DBCacheLoaderScheme</scheme-name>
                   <internal-cache-scheme>
                   <local-scheme>
                        <scheme-ref>blaze-binary-backing-map</scheme-ref>
                   </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.xxloader.DataBeanInitialLoadImpl
                             </class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>com.xx.CustomerProduct
                                       </param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>CUSTOMER</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <read-only>true</read-only>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>blaze-binary-backing-map</scheme-name>
    <high-units>{back-size-limit 1}</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <expiry-delay>{back-expiry 0}</expiry-delay>
    <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    Hi
    We did try the following configuration:
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>off-heap</type>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    With this configuration, the residual main memory consumption is about 15 GB.
    When we changed this configuration to
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    Note that the backup storage is file-mapped:
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    We still see that the process's residual main memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
    I wanted to check where backup storage maintains its information; we would like to offload it to a flat file.
    Appreciate any pointers in this regard.
    Thanks
    sunder

  • ASM RMAN backup to File System

    Hi all,
    I have an RMAN backup (datafile and controlfile) that was taken in an ASM instance (not RAC), Oracle 11.2.0.2, on a Linux server. Now I want to restore the backup into a new database on Windows/Linux using regular file system storage (single-instance RDBMS) instead of ASM.
    Is this possible?
    Can I restore an ASM RMAN backup to file system storage on a new server?
    Kindly clarify my question.
    Thanks in Advance..
    Nonuday

    Nonuday wrote:
    Hi Levi,
    Thanks for your invaluable script and blog.
    Can you clarify this query for me?
    I have an RMAN backup taken from ASM; it is a database and control file backup, containing datafiles and control files.
    Now I need to restore this on my system, and here I don't use ASM or archive logs; I use a single-instance database in noarchivelog mode.
    I have restored the control file from the RMAN control file backup.
    Before restoring the control file I checked the original pfile of the backup database, which had parameters like
    'db_create_file_dest',
    'db_create_online_log_dest',
    'db_recovery_file_dest_size',
    'db_recovery_dest',
    'log_archive_dest'.
    Since I am not going to run the DB in archivelog mode, I didn't use any of the above parameters, and created a database.
    Now my question is:
    If I restore the database, the datafiles will get restored, and after renaming all the logfiles the database will be opened.
    I want to know whether this method is correct or wrong, and whether the database will work as it did previously. Or do I need to create db_file_recovery and the other parameters for this database as well?
    About the parameters:
    All these parameters should reflect your current environment; any reference to the old environment must be modified.
    About the filesystem used:
    It does not matter which filesystem you use: the files (datafile/redolog/controlfile/archivelog/backup piece) are created in a binary format that depends on the platform only. The same binary file (e.g. a datafile) has the same format and content on a raw device, ASM, ext3, ext2, and so on. To the database it is only a location where files are stored; the files themselves are the same. ASM has a different architecture from a regular filesystem and must be managed in a different manner (i.e. using RMAN).
    About the database:
    Since your database files are the same even on a different filesystem, all you need is to rename your datafiles/redofiles in the controlfile during the restore; the redo files will be recreated.
    So it does not matter whether your database is in noarchivelog or archivelog mode: you restore to a regular filesystem the same way you would restore to ASM (it is only a matter of renaming the database files in the controlfile during the restore).
    On the blog, the post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving files from one filesystem to another, but you can modify the script for restore purposes:
    ## set newname tells RMAN where the file will be restored and keeps that file location in a memory buffer
    RMAN> set newname for datafile 1 to <location>;
    ### switch takes the list of files from the memory buffer (rman) and renames the already-restored files in the controlfile
    RMAN> switch datafile/tempfile all;
    With the database mounted, use the script below:
    I just commented three lines that are unnecessary in your case.
    SET serveroutput ON;
    DECLARE
      vcount  NUMBER:=0;
      vfname VARCHAR2(1024);
      CURSOR df
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$datafile;
      CURSOR tp
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$tempfile;
    BEGIN
    --  dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
      --  dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format  "'||dfrec.name ||vcount||'.dbf";');  ### commented
      END LOOP;
      dbms_output.put_line('run');
      dbms_output.put_line('{');
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
        dbms_output.put_line('set newname for datafile ' || dfrec.file# ||'  to  '''||dfrec.name ||vcount||'.dbf'' ;');
      END LOOP;
      FOR tprec IN tp
      LOOP
        IF tprec.name  !=  vfname THEN
          vcount      :=1;
          vfname     := tprec.name;
        ELSE
          vcount := vcount+1;
          vfname:= tprec.name;
        END IF;
        dbms_output.put_line('set newname for tempfile ' || tprec.file# ||'  to  '''||tprec.name ||vcount||'.dbf'' ;');
        END LOOP;
          dbms_output.put_line('restore database;');
        dbms_output.put_line('switch tempfile all;');
        dbms_output.put_line('switch datafile all;');
        dbms_output.put_line('recover database;');
        dbms_output.put_line('}');
    ---   dbms_output.put_line('alter database open;');  ### commented because you need to rename your redologs in the controlfile before opening the database
        dbms_output.put_line('exit');
    END;
    /
    After the restore you must rename your redologs in the controlfile from the old location to the new location, e.g.:
    ##  use this query to get current location of redolog
    SQL>  select group#,member from v$logfile order by 1;
    ## and change from <old_location> to <new_location>
    SQL> ALTER DATABASE
      RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
               TO '/u01/app/oracle/oradata/logs/log3a.rdo';
    When you have changed all the redologs in the controlfile, issue the command below:
    SQL> alter database open resetlogs;
    PS: Always track the database in real time using its alert log file.
    HTH,
    Levi Pereira

  • Required "/" (root) file system size on UNIX for Solution Manager.

    Hello SAP Gurus,
       I am setting up SAP Solution Manager 3.2 on HP-UX. It is asking for about 350 MB of free space on the "/" file system for the Central Instance installation and about 120 MB of free space on "/" for the Database Instance installation.
       I am installing everything onto a shared disk mounted under /usr/sap. Why does it need free space in the "/" file system? Is there any workaround to get rid of this requirement? I have very little free space on "/" and I don't want to take the risks involved in increasing its size.
       Are there any SAP recommended sizes for the "/" file system?
       I am stuck in the middle of setting up an SAP landscape on HP-UX (11.23). I searched the installation documents but couldn't find anything helpful in this regard. It is an urgent requirement, so please let me know any solution or workaround ASAP.
       Any help is greatly appreciated.
    Thanks in advance.
    Regards,
    cvr/

    Hi Vaibhav.
    Normally "canonical path not available for (folder name)" means:
    1. Wrong username/password. Please double-check your credentials.
    2. The resource cannot be linked from the portal server. Please make sure that you can connect from the UNIX server to the following ports on the Windows server:
    a. NetBIOS Session Service TCP 139 This port is used to connect file shares for example.
    b. TCP 445 The SMB (Server Message Block) protocol is used among other things for file sharing in Windows NT/2000/XP. In windows NT it ran on top of NetBT (NetBIOS over TCP/IP), which used the famous ports 137, 138 (UDP) and 139 (TCP). In Windows 2000/XP/2003, Microsoft added the possibility to run SMB directly over TCP/IP, without the extra layer of NetBT. For this they use TCP port 445.
    I hope these things help somebody.
    Best Regards,
    Jheison A. Urzola H.

  • SAP file system restoration on other server

    Dear Experts,
    To verify that our offline file system backup is successful, we are planning to restore the offline file system backup from tape onto a new test server.
    Our current SAP system (ABAP only) is clustered, with the CI running on one node (using virtual host name cicep) and the DB running on another node (using virtual host name dbcep).
    Now, is it possible to restore the offline file system backup of the above cluster onto a single server with a different host name?
    Please help on this.
    Regards,
    Ashish Khanduri

    Dear Ashish
    We want to include a file system backup process as part of our backup strategy. To test the waters, we are planning to take a backup at the filesystem level. The following are the filesystems on our production systems.
    We have a test server (with a different hostname), without any filesystems created beforehand.
    I want to know:
    1. Which filesystems will be required from the below:
    /dev/hd4         4194304   3772184   11%     5621     2% /
    /dev/hd2        10485760   6151688   42%    43526     6% /usr
    /dev/hd9var      4194304   4048944    4%     4510     1% /var
    /dev/hd3         4194304   2571760   39%     1543     1% /tmp
    /dev/hd1          131072    129248    2%       85     1% /home
    /proc                  -         -    -         -     -  /proc
    /dev/hd10opt      655360    211232   68%     5356    18% /opt
    /dev/oraclelv   83886080  73188656   13%    11091     1% /oracle
    /dev/optoralv   20971520  20967664    1%        4     1% /opt/oracle
    /dev/oracleGSPlv   83886080  74783824   11%    18989     1% /oracle/GSP
    /dev/sapdata1lv  833617920 137990760   84%     3189     1% /oracle/GSP/sapdata1
    /dev/sapdata2lv  623902720 215847400   66%       82     1% /oracle/GSP/sapdata2
    /dev/sapdata3lv  207093760 108510632   48%       24     1% /oracle/GSP/sapdata3
    /dev/sapdata4lv  207093760 127516424   39%       28     1% /oracle/GSP/sapdata4
    /dev/origlogAlv   20971520  20730080    2%        8     1% /oracle/GSP/origlogA
    /dev/origlogBlv   20971520  20730080    2%        8     1% /oracle/GSP/origlogB
    /dev/mirrlogAlv   20971520  20762848    1%        6     1% /oracle/GSP/mirrlogA
    /dev/mirrlogBlv   20971520  20762848    1%        6     1% /oracle/GSP/mirrlogB
    /dev/oraarchlv  311951360 265915600   15%      526     1% /oracle/GSP/oraarch
    /dev/usrsaplv   41943040  41449440    2%      165     1% /usr/sap
    /dev/sapmntlv   41943040  20149168   52%   565823    21% /sapmnt
    /dev/usrsapGSPlv   41943040  25406768   40%   120250     5% /usr/sap/GSP
    /dev/saptranslv   41943040   5244424   88%   136618    18% /usr/sap/trans
    IDES:/sapcd     83886080   4791136   95%    18878     4% /sapcd
    GILSAPED:/usr/sap/trans   41943040   5244424   88%   136618    18% /usr/sap/trans
    2. Is it possible to back up the filesystems directly (like /dev/oracleGSPlv)? I ask because when I back up /oracle (using tar), all the folders under /oracle, like /oracle/GSP, /oracle/GSP/sapdata1, etc., are also backed up, and I do not want that. I would like to back up each filesystem directly.
    3. Which UNIX backup tools are used to back up the individual filesystems?
    4. How do we restore the filesystems to the test server?
    Thanks for your advice.
    Abdul
    Edited by: Abdul Rahim Shaik on Feb 8, 2010 12:10 PM
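On question 2, one common approach with GNU tar is to exclude the nested mount points (or use --one-file-system, where available) so that an archive of /oracle does not pull in the sapdata filesystems mounted beneath it. A minimal sketch with invented stand-in directories:

```shell
# Stand-in directory tree (illustrative; not the real mount layout).
mkdir -p /tmp/fs2/oracle/GSP/sapdata1
echo "ora profile" > /tmp/fs2/oracle/profile.ora
echo "datafile"    > /tmp/fs2/oracle/GSP/sapdata1/data1.dbf

# Archive /oracle but leave the sapdata trees out of it.
tar -czf /tmp/fs2/oracle_only.tar.gz -C /tmp/fs2 \
    --exclude='oracle/GSP/sapdata*' oracle

# Show what was archived: profile.ora is in, sapdata1 is not.
tar -tzf /tmp/fs2/oracle_only.tar.gz
```

On AIX specifically, the native backup/restore commands work per filesystem; the exclude technique above is shown only as the generic tar-based alternative.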

  • After Restoring/Backup of File System XI Java Instances are not up!

    Hello all,
    We are facing a problem restoring the SAP XI system: after taking a backup of the system, the Java instances are not starting again. ABAP connections are fine.
    Can anyone suggest solutions for restoring the XI system?
    The system information is as follows.
    System Component:     SAP NetWeaver 2004s, PI 7.0
    Operating System:     SunOS 5.9, SunOS 5.10
    Database:          ORACLE 9.2.0.
    Regards,
    Ketan Patel

    If it's REALLY a PI 7.0 (SAP_BASIS 700 and WebAS Java 7.00), then it's not compatible. WebAS 7.00 needs Oracle 10g (http://service.sap.com/pam).
    Also see
    http://service.sap.com/nw2004s
    --> Availibility
    --> SAP NetWeaver 7.0 (2004s) PAM
    If you open the PowerPoint, you will see that Oracle 9 is not listed; I wonder how you got that installed.
    Nevertheless, if you recover a Java instance, the filesystem and the database content (of the Java schema) must be in sync. That means you need to restore both a database (schema) backup and a filesystem backup that were taken at the same time.
    Check Java Backup and Restore:
    Restoring the System
    1. Shut down the system.
    2. Install a new AS Java system using SAPInst, or restore the file system from the offline backups that you created.
    3. Import the database backup using the relevant tools provided by the database vendor.
    4. Overwrite the SAP system directory /usr/sap/.
    5. Start the system (see Starting and Stopping SAP NetWeaver ABAP and Java).
    The J2EE Engine is restored with the last backup.
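    Sketched as a shell function, that restore sequence might look like the following. The SID admin user and archive path are placeholders, and the database import step depends entirely on your vendor tooling, so it is only marked by a comment:

    ```shell
    #!/bin/sh
    # Hedged sketch of the AS Java restore sequence described above.
    restore_as_java() {
        sidadm="$1"     # e.g. xi1adm (placeholder)
        archive="$2"    # offline backup of /usr/sap/<SID> (placeholder)

        su - "$sidadm" -c stopsap      # 1. shut down the system
        tar -xzf "$archive" -C /       # 2./4. restore the file system and
                                       #       overwrite /usr/sap/<SID>
        # 3. import the database backup here with the vendor's tools
        #    (e.g. an Oracle RMAN restore/recover run)
        su - "$sidadm" -c startsap     # 5. start the system again
    }
    ```

    As noted above, the database (schema) backup and the filesystem backup passed to such a function must come from the same point in time.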
    Markus

  • After Restoring/Backup File System XI Java Instances are not up!

    Hello all,
    We are facing a problem restoring the SAP XI system: after taking a backup of the system, the Java instances are not starting again. ABAP connections are fine.
    Can anyone suggest solutions for restoring the XI system?
    The system information is as follows.
    System Component: SAP NetWeaver 2004s, PI 7.0
    Operating System: SunOS 5.9, SunOS 5.10
    Database: ORACLE 9.2.0.
    Regards,
    Ketan Patel

    A correction: the Oracle version is 10.2.0.
    I would also like to reframe my problem.
    The XI server (both ABAP and web) was working fine. Every weekend we take a full backup (file system + DB, via UFS) of the server. After one backup the server came up and we could log in through ABAP, but the Java web page would not open. We did some troubleshooting without success, and finally restored the backup, which somehow worked. The next week, after the backup, the same problem arose; we restored the latest backup again, but this time the problem still persists.
    Regards,
    Ketan
