Export a 500 GB database to a 100 GB file system space in Oracle 10g

Hi All,
Please let me know the procedure to export a 500 GB database to a 100 GB file system space.

user533548 wrote:
Hi Linda,
The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise.

FILESIZE will limit the size of each file when you specify multiple dump files. You can also specify a different dump directory (on a different file system) for each of those dump files.
For instance:
dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...

Nicolas.
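For illustration, here is a minimal sketch of a Data Pump export spread across several directory objects, assuming DUMP_DIR1 through DUMP_DIR3 have already been created and point at separate file systems (all names, sizes and credentials below are placeholders):

# hypothetical directory objects, created beforehand with e.g.
#   CREATE DIRECTORY dump_dir1 AS '/u01/export';
expdp system/manager schemas=SCOTT \
  dumpfile=dump_dir1:exp_a_%U.dmp,dump_dir2:exp_b_%U.dmp,dump_dir3:exp_c_%U.dmp \
  filesize=20G \
  logfile=dump_dir1:exp500g.log

Note that FILESIZE only splits the dump across files, it does not compress it; the combined free space of the target file systems still has to be large enough for the export.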

Similar Messages

  • Oracle Database point-in-time recovery with a file system backup

    Hi Experts,
    We shut down the database and took a file system backup on 09.10.2010 at 10 AM.
    We started the database again, and all the archive logs generated after that are still available in the archive directory.
    We discovered that some deletions took place yesterday evening at 4 PM.
    Requirement:
    We want to recover the database until 3:30 PM yesterday.
    Can anyone send the steps to recover with the above backup and the archive logs available up to 3:30 PM yesterday? (A general sketch of the sequence is appended at the end of this thread.)
    Regards

    Experts Nick Loy, Varadharajan M, Mark & Volker: thank you very much.
    TAIL END MESSAGE OF RECOVERY COMMAND
    ORA-00279: change 11331655931 generated at 10/10/2010 11:30:56 needed for
    thread 1
    ORA-00289: suggestion : /oracle/D10/oraarch/D10arch1_123487_607825009.dbf
    ORA-00280: change 11331655931 for thread 1 is in sequence #123487
    ORA-00278: log file '/oracle/D10/oraarch/D10arch1_123486_607825009.dbf' no
    longer needed for this recovery
    Log applied.
    Media recovery complete.
    SQL>
    SQL>
    SQL> ALTER DATABASE OPEN RESETLOGS;
    Database altered.
    SQL>
    SYSTEM RECOVERED TO DEMAND & RELEASED
    T H A N K S    --   T O    --   E V E R Y O N E .
    RGDS
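    For readers looking for the steps asked about above, here is a minimal sketch of a cold-backup restore followed by time-based incomplete recovery. The paths, the timestamp and the use of the backup control file are illustrative assumptions, not details taken from this thread.

    # restore the closed (cold) file system backup of datafiles, control files and
    # online redo logs taken on 09.10.2010 10:00 back into their original locations,
    # then roll the database forward with the available archive logs
    sqlplus / as sysdba <<'EOF'
    SET AUTORECOVERY ON
    STARTUP MOUNT
    -- apply archive logs up to just before the unwanted deletions
    RECOVER DATABASE UNTIL TIME '2010-10-10:15:30:00' USING BACKUP CONTROLFILE
    ALTER DATABASE OPEN RESETLOGS;
    EXIT
EOF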

  • CUIS 7.5: ERROR: No Archiver database size is supported on this system

    Dear all,
    I ran the configuration tool on CUIS for the Archiver, but I encounter an error and the configuration exits.
    Please help me with this.
    Thanh
    [1-20-2011 15:29:00] INFO:    Verifying Archiver pre-requisites.
    [1-20-2011 15:29:00] INFO:    Microsoft SQL Server is present.
    [1-20-2011 15:29:00] INFO:    The system has enough fixed drives to support at least one Archiver size.
    [1-20-2011 15:29:00] INFO:    The Archiver requirements have successfully passed verification.
    [1-20-2011 15:29:00] INFO:    One or more CUIS components are available for configuration.
    [1-20-2011 15:29:00] INFO:    CUIS verification complete.
    [1-20-2011 15:29:03] INFO:    User selection: Cisco Archiver
    [1-20-2011 15:29:03] INFO:    Displaying screen: Archiver - Product Selection
    [1-20-2011 15:29:06] INFO:    User selected: Cisco Unified Contact Center Enterprise 7.5(1)
    [1-20-2011 15:29:06] INFO:    Displaying screen: Archiver - User Verification
    [1-20-2011 15:29:27] INFO:    The user has selected to use the default instance.
    [1-20-2011 15:29:27] INFO:    Internal name for instance MSSQLSERVER: MSSQL.1
    [1-20-2011 15:29:27] INFO:    TCP/IP connectivity is enabled for the SQL Server instance: MSSQLSERVER
    [1-20-2011 15:29:28] INFO:    The connection to the local SQL Server as an administrator was successful.
    [1-20-2011 15:29:28] INFO:    Microsoft SQL Server Agent is present.
    [1-20-2011 15:29:28] INFO:    The SQL Server default instance is valid.
    [1-20-2011 15:29:28] INFO:    User selection: Domain user
    [1-20-2011 15:29:28] INFO:    Domain entered: HCMCPT
    [1-20-2011 15:29:28] INFO:    The domain user login was successful.
    [1-20-2011 15:29:28] INFO:    Standard username: ArchiverUser
    [1-20-2011 15:29:28] INFO:    The database security login is already present: HCMCPT\ArchiverUser
    [1-20-2011 15:29:28] INFO:    The common Archiver database is not present.
    [1-20-2011 15:29:28] INFO:    The Archiver product-specific data is not present.
    [1-20-2011 15:29:28] WARNING: Small database size selection is disabled.
    [1-20-2011 15:29:28] WARNING: Large database size selection is disabled.
    [1-20-2011 15:29:28] ERROR:   No Archiver database size is supported on this system.
    [1-20-2011 15:29:38] INFO:    Displaying screen: Archiver - Database Size

    I followed the workaround for the bug, but it did not help; I still encounter the same error.
    Do I need to upgrade to a later version before running the config tool?
    Regards,
    Thanh

  • ZFS file system space issue

    Hi All,
    Kindly help with resolving a ZFS file system space issue: I have deleted nearly 20 GB, but the reported file system usage remains the same. Kindly advise.
    Thanks,
    Kumar

    The three reasons I'm aware of that cause deleted files not to return space are: 1) the file is linked to multiple names, 2) the deleted files are still open by a process, and 3) the deleted files are referenced by a snapshot. While it is possible you deleted 20 GB in multiply-linked and/or open files, I'd guess snapshots to be the most likely case.
    For multiple "hard" links, you can see the link count (before you delete the file) in the second field of the "ls -l" output. If it is greater than one, deleting the file won't free up space; you have to delete all file names linked to the file to free the space.
    For open files, you can use the pfiles command to see which files a process has open. The space won't be recovered until every process holding the file open closes it. Killing the process will do the job; if you like to use a big hammer, a reboot kills everything.
    For snapshots: use the "zfs list -t snapshot" command and look at the snapshots. The space used by a snapshot indicates how much space is held by that snapshot. Deleting the snapshot will free space unless the space is still being held by another snapshot. To free space held by a file, you have to delete all snapshots which contain that file.
    Hopefully, I got all of this right.
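    A quick sketch of those checks from the shell (the dataset and mount point names are illustrative):

    # list snapshots and how much space each one holds exclusively
    zfs list -t snapshot -o name,used,referenced

    # per-dataset accounting (on newer ZFS releases): 'usedbysnapshots' is roughly
    # what deleting all snapshots of the dataset would free
    zfs get used,usedbysnapshots,usedbydataset tank/home

    # find processes that still hold files open on a mount point
    fuser -c /tank/home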

  • Export the database schema / structure to a file server

    I would really appreciate if any of you can share your experience on the following:
    Database-1: Oracle 8i/9i and
    Database-2: SQL Server v65/7/2000
    I am looking for a solution to extract the entire database schema/structure to a pre-defined location on a file server. I want to schedule this through a CRON job (for Unix) and the Microsoft Task Scheduler (for Windows). The solution should work for all databases (it must work with both Oracle and SQL Server). The exported file should not be just a dump file (.dmp), since only Oracle can read a .dmp file.
    What are the possible ways to accomplish this?
    I am sure several tools are available for this (like Embarcadero Change Manager), but such a solution may not be cost-effective. Can we write a program to accomplish the same?
    I would really appreciate your inputs.
    Thanks, Madhavi

    I am doing this job using the bcp command on SQL Server, which creates a .dat file; I then transfer the file to the Oracle server, to a location where I have created an external table.
    I believe there is a server-level compatibility feature that needs no external program or software. A colleague told me about it once; when I get hold of him I shall reply.
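    On the Oracle side (9i and later), a minimal sketch of a cron-able script that spools schema DDL to a plain-text file using DBMS_METADATA; the connection string, schema and output path are placeholders:

    #!/bin/sh
    # spool the DDL of one schema to a dated .sql file on a file server mount
    OUT=/mnt/fileserver/ddl/scott_$(date +%Y%m%d).sql
    sqlplus -s scott/tiger@orcl > "$OUT" <<'EOF'
    SET LONG 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON FEEDBACK OFF
    SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
    FROM   user_objects
    WHERE  object_type IN ('TABLE', 'VIEW', 'SEQUENCE');
    EXIT
EOF

    A similar scheduled bcp script can produce the SQL Server side, so the same CRON/Task Scheduler approach covers both databases.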

  • 10G Database install on Linux 64bit File system Vs. Automatic Storage

    Hi,
    I'm installing Oracle 10G on a Dell 2950 server with an Intel 64 bit processor. The operating system is Red Hat Linux 4. It has a hardware RAID.
    I've downloaded the Install Guide but I have a question about installing.
    I'm confused about the File system vs. Automatic Storage management option. I will be installing on the local system, I will not be using a NAS or SAN.
    Under Database Storage Options, the guide says that "Oracle recommends that the file system you choose be separate from the file systems used by the operating system or the Oracle software."
    1. Do I need to do that, since I'm already using hardware RAID?
    2. Which way is recommended / what do most people do: file system or Automatic Storage Management?
    3. For Automatic Storage Management I read that I'd have to create an ASM disk group that can consist of "a multiple disk device such as a RAID storage array or logical volume. However, in most cases disk groups consist of one or more individual physical disks". Do I need to reconfigure my partitions?
    I just need some input on what I should do since this is my first time installing Oracle on Linux.
    Thank you.

    Besides the documentation, there's a step-by-step guide:
    http://www.oracle.com/technology/pub/articles/smiley_10gdb_install.html#asm
    Many questions should be answered here.
    Werner
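    If you do choose ASM, disk groups are built from raw partitions or devices rather than from file systems. A minimal, hedged sketch (the device paths and redundancy level are illustrative, and on 10g the ASM instance itself is normally created through DBCA first):

    # run against the ASM instance, not the database instance
    export ORACLE_SID=+ASM
    sqlplus / as sysdba <<'EOF'
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2';
EOF

    With a single local hardware-RAID volume and no spare partitions to dedicate, staying with the file system option is a common choice; the linked guide walks through both paths.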

  • How the media files are stored in Oracle 10g database

    I understand that new data types were introduced to handle multimedia objects (audio files, video files, images, etc.). Can anyone tell me which data types are used to handle media files in an Oracle 10g database?
    thanks,
    shekar.

    Check this out.
    http://download-west.oracle.com/docs/cd/B14117_01/appdev.101/b10840/mm_uses.htm#sthref433
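    In short, Oracle 10g interMedia provides the object types ORDSYS.ORDImage, ORDSYS.ORDAudio, ORDSYS.ORDVideo and ORDSYS.ORDDoc (all storing their content in BLOBs underneath), and a plain BLOB column is also an option. A minimal sketch of a table using them; the table, column names and credentials are made up:

    sqlplus scott/tiger <<'EOF'
    CREATE TABLE media_assets (
      asset_id   NUMBER PRIMARY KEY,
      photo      ORDSYS.ORDIMAGE,   -- still images
      soundtrack ORDSYS.ORDAUDIO,   -- audio clips
      clip       ORDSYS.ORDVIDEO    -- video clips
    );
    EXIT
EOF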

  • Import eif files of OFA to Oracle 10g

    I have exported my super and shared databases of OFA into "eif" files. Please tell me how I should import them into an Oracle 10g database and subsequently view them using BI Beans or Discoverer.
    Thanks,
    Krishanu

    Thanks for the info.
    I tried creating it as given in the article, but I am unable to see the OLAP catalog view. I proceeded as follows:
    1) Created a user called DEMO and granted the unlimited tablespace privilege, with EXAMPLE as the default tablespace.
    CREATE USER DEMO IDENTIFIED BY PASSWORD
    DEFAULT TABLESPACE EXAMPLE
    TEMPORARY TABLESPACE TEMP
    ACCOUNT UNLOCK;
    GRANT OLAP_USER TO DEMO;
    CREATE DIRECTORY EIF_FOLDER AS 'D:\';
    GRANT READ ON DIRECTORY EIF_FOLDER TO DEMO;
    2)Connected to Analytic Workspace Manager as "DEMO" and opened OLAP Worksheet and gave the following commands
    AW CREATE super
    IMPORT ALL FROM EIF FILE 'eif_folder\super.EIF' DATA DFNS
    (I got the error "CAUTION: FMV is being imported. However, this is a reserved word, so
    you must rename the new object before you can use it")
    Then, i gave the commands,
    UPDATE
    COMMIT
    CALL CREATE_DB_STDFORM('super')
    UPDATE
    COMMIT
    In the Enterprise Manager Console of Schema "DEMO" i can see the following objects,
    Object                         Object Type
    AW$SUPER                    TABLE PARTITION
    AW$SUPER                    TABLE
    SUPER_I$                    INDEX
    SUPER_S$                    SEQUENCE
    SYS_IL0000051573C00004$$          INDEX PARTITION
    SYS_LOB0000051573C00004$$          LOB PARTITION
    SYS_LOB0000051573C00004$$          LOB
    However, in Analytic Workspace Manager nothing is there; all three folders ("Measure Folders", "Cubes" and "Dimensions") are empty.
    Please tell me where I am going wrong, or am I missing a step?
    Thanks,
    Krishanu

  • Dfc: Display file system space usage using graph and colors

    Hi all,
    I wrote a little tool, somewhat similar to df(1) which I named dfc.
    To present it, nothing beats a screenshot (because of the colors); see the project website linked below.
    There are a few options available (as of version 3.0.0):
    Usage: dfc [OPTION(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE] [-u UNIT]
    Available options:
    -a  print all mounted filesystems
    -b  do not show the graph bar
    -c  choose color mode (read the manpage for details)
    -d  show used size
    -e  export to the specified format (read the manpage for details)
    -f  disable auto-adjust mode (force display)
    -h  print this message
    -i  info about inodes
    -l  only show information about locally mounted file systems
    -m  use metric (SI units)
    -n  do not print header
    -o  show mount flags
    -p  filter by file system name (read the manpage for details)
    -q  sort the output (read the manpage for details)
    -s  sum the total usage
    -t  filter by file system type (read the manpage for details)
    -T  show filesystem type
    -u  choose the unit in which to show the values (read the manpage for details)
    -v  print program version
    -w  use a wider bar
    -W  wide filename (do not truncate)
    If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
    (it is also available on the archlinuxfr repository for those who have it enabled).
    For further explanations, there is a manpage or the wiki on the official website.
    Here is the official website: http://projects.gw-computing.net/projects/dfc
    If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
    Cheers,
    Rolinh
    Last edited by Rolinh (2012-05-31 00:36:48)
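    A couple of example invocations based on the options listed above (hedged; the exact behaviour may differ slightly between dfc versions):

    # show only ext4 file systems, with colors forced on
    dfc -t ext4 -c always

    # all mounted filesystems, with inode information and the filesystem type column
    dfc -a -i -T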

    bencahill wrote: There were decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
    I know I changed the options a lot with the 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type, so someone used to the original df would not be surprised.
    I'm sorry for the inconvenience. There should not be any more changes like this in the future, though; I thought this one was needed (especially because of the unit options).
    bencahill wrote:
    Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
    Two suggestions:
    1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
    Ok, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (default) because I do not use long options, but I think this would be OK, right?
    bencahill wrote: 2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
    This was already planned for the 2.2.0 release.
    bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
    This is what I would have suggested.
    bencahill wrote: By the way, awesome software.
    Thanks, I'm glad you like it!
    bencahill wrote: P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
    This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... so it's actually better that you posted your requests here!

  • Solaris file system space

    Hi All,
    When I run the df -k command on my Solaris box, I get the output shown below.
    Filesystem 1024-blocks Used Available Capacity Mounted on
    rpool/ROOT/solaris-161 191987712 6004395 140577816 5% /
    /devices 0 0 0 0% /devices
    /dev 0 0 0 0% /dev
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 4184236 496 4183740 1% /system/volatile
    objfs 0 0 0 0% /system/object
    sharefs 0 0 0 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 146582211 6004395 140577816 5% /lib/libc.so.1
    fd 0 0 0 0% /dev/fd
    swap 4183784 60 4183724 1% /tmp
    rpool/export 191987712 35 140577816 1% /export
    rpool/export/home 191987712 32 140577816 1% /export/home
    rpool/export/home/123 191987712 13108813 140577816 9% /export/home/123
    rpool/export/repo 191987712 11187204 140577816 8% /export/repo
    rpool/export/repo2010_11 191987712 31 140577816 1% /export/repo2010_11
    rpool 191987712 5238974 140577816 4% /rpool
    /export/home/123 153686630 13108813 140577816 9% /home/12
    My question is: why does the /usr/lib/libc/libc_hwcap1.so.1 file system show the same size as the / (root) file system, and what is the significance of the /usr/lib/libc/libc_hwcap1.so.1 file system?
    Thanks in advance for your help.

    You must have a lot of small files on the file system.
    There are a couple of ways to deal with that; the simplest is to increase the size of the filesystem.
    Alternatively, you can create a new filesystem with a higher inode count, so you can utilize the space and still have enough inodes. Check out the mkfs_ufs man page and the nbpi=n option.
    my 2 bits
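    As a hedged sketch of that nbpi tuning (the device name and the bytes-per-inode value are illustrative; check newfs(1M)/mkfs_ufs(1M) for your release before running anything):

    # create a UFS filesystem with one inode per 2 KB of data instead of the
    # default, for filesystems expected to hold very many small files
    newfs -i 2048 /dev/rdsk/c1t1d0s6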

  • MBRC and SYSTEM STATISTICS in Oracle 10g database.

    Hi All,
    I am performing a database upgrade from Oracle 8i on Solaris to Oracle 10g on HP-UX using the exp/imp method.
    But I do have some doubts regarding MBRC and system statistics.
    MBRC in Oracle 10g is adjusted automatically if the parameter is not set, but I found the value 128, as shown below.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> sho parameter multi
    NAME                                 TYPE                             VALUE
    db_file_multiblock_read_count        integer                          128
    I also performed one simple full table scan to test it, and db file scattered read was reading 128 blocks. So I don't think 128 is suitable or automatically adjusted; I mean, MBRC is not set accordingly, it always uses 128.
    Does this MBRC value affect whole-database performance?
    Regarding SYSTEM STATISTICS, I found the following result:
    SQL> select * from AUX_STATS$
    SNAME                          PNAME                               PVAL1 PVAL2
    SYSSTATS_INFO                  STATUS                                    COMPLETED
    SYSSTATS_INFO                  DSTART                                    11-09-2009 04:59
    SYSSTATS_INFO                  DSTOP                                     11-09-2009 04:59
    SYSSTATS_INFO                  FLAGS                                   1
    SYSSTATS_MAIN                  CPUSPEEDNW                     128.239557
    SYSSTATS_MAIN                  IOSEEKTIM                              10
    SYSSTATS_MAIN                  IOTFRSPEED                           4096
    SYSSTATS_MAIN                  SREADTIM
    SYSSTATS_MAIN                  MREADTIM
    SYSSTATS_MAIN                  CPUSPEED
    SYSSTATS_MAIN                  MBRC
    SYSSTATS_MAIN                  MAXTHR
    SYSSTATS_MAIN                  SLAVETHR
    Now, whether NOWORKLOAD or WORKLOAD is better: this server is still being built, so how can I collect WORKLOAD stats when a high load cannot be put on it? Is it really required to gather system statistics, and what will happen with NOWORKLOAD stats?
    I have not seen a single database where system stats are gathered in our organisation, which has more than 2000 databases.
    -Yasser

    Maybe this article written by Tom Kyte helps:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:499197100346264909
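    For reference, a minimal sketch of gathering system statistics with DBMS_STATS; NOWORKLOAD is the usual choice when a representative load cannot be generated, and the interval value is illustrative:

    sqlplus / as sysdba <<'EOF'
    -- noworkload statistics: measured without application load
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');

    -- workload statistics: bracket a representative load
    -- EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
    -- ... run a representative workload ...
    -- EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

    -- or gather over a fixed window, e.g. 60 minutes
    -- EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);
    EXIT
EOF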

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i two-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team now wants to change this to a file system failover configuration instead, and I cannot find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is: does anyone have experience with this kind of configuration with 8i OPS or 9i RAC? Are there any issues with the automatic moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...

  • Problem in Reducing the root file system space

    Hi All,
    The root file system has reached 86%. We have cleared 1 GB of data in the /var file system, but the root file system still shows 86%. Please note that /var is not a separate file system.
    I have furnished the df -h output for your reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know whether the root partition filling up was sudden, and thus due to the deletion of an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Sol 10 11/06, my 12 GB root partition was 48% full. Now, about two months later, after applying the available patches, it is 53% full. That is about 600 MB taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option (the option that would skip backing up old patch versions), so the superseded patches are building up in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time-consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
    Thank you.
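    As a hedged sketch of how to see what is actually consuming the root file system (Solaris du: -d keeps it from crossing mount points such as /usr or /var/run; the paths are illustrative):

    # largest directories on the root filesystem only, sizes in kilobytes
    du -dk / | sort -n | tail -20

    # space taken by saved patch backout data under /var/sadm
    du -sk /var/sadm/pkg /var/sadm/patch 2>/dev/null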

  • Decision on File system management in Oracle+SAP

    Hi All,
    In my production system we have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially many datafiles were assigned to the tablespace PSAPSR3, some with autoextend on and some with autoextend off. As per my understanding, DB02 shows this information only tablespace-wise: it reports AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON. In PSAPSR3 all the datafiles with autoextend ON are in SAPDATA1, which has only 50 GB left; all the files with autoextend OFF are in SAPDATA2, which has 900 GB of space left.
    Now the questions are:
    1. Do I need to request additional space for SAPDATA1, as some of the tablespaces are at the edge of autoextending and that much space is not left in the file system (sapdata1)? If not, how will they extend? DB growth is 100 GB per month.
    2. We usually add a 10 GB datafile to the tablespace, with autoextend up to 30 GB.
    Can we add another datafile, this time from sapdata2, with autoextend ON, and will the rest be taken care of automatically?
    Please suggest.
    Regards,
    VIcky

    Hi Vicky,
    As you have 100 GB/month growth, my suggestions would be:
    1) Add two more mount points, sapdata3 and sapdata4, with around 1 TB of space.
       This distributes the data across four data partitions for better performance.
    2) As sapdata1 has datafiles with autoextend ON, you need to extend that file system to at least 500 GB, so that whenever data is written to datafiles under sapdata1 there is room for them to grow using the autoextend feature. Without sufficient disk space this may lead to space problems, and transactions may end in dumps.
    3) No need to change anything on sapdata2, as you already have 900 GB of free space there.
    Hope this helps.
    Regards,
    Deepak Kori
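    A minimal sketch of checking the PSAPSR3 autoextend settings and adding a datafile under sapdata2 (the file name, sizes and increments are illustrative):

    sqlplus / as sysdba <<'EOF'
    -- which PSAPSR3 datafiles can autoextend, and up to what size
    SELECT file_name, autoextensible,
           bytes/1024/1024    AS current_mb,
           maxbytes/1024/1024 AS max_mb
    FROM   dba_data_files
    WHERE  tablespace_name = 'PSAPSR3';

    -- add a 10 GB file under sapdata2 that can autoextend to 30 GB
    ALTER TABLESPACE psapsr3
      ADD DATAFILE '/oracle/SID/sapdata2/sr3_99/sr3.data99' SIZE 10G
      AUTOEXTEND ON NEXT 100M MAXSIZE 30G;
    EXIT
EOF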

  • Need help with file system creation for Oracle DB installation

    Hello,
    I am new to the Solaris/Unix system landscape. I have a Sun Enterprise 450 with an 18 GB hard drive. It has Solaris 9 on it and no other software at this time. I am planning on adding two more hard drives (18 GB and 36 GB) to accommodate the Oracle DB.
    I recently went through the Solaris intermediate sysadmin training; I know the basic stuff but am not fully confident about carrying out the task on my own.
    I would appreciate it if someone could help me with the sequence of steps I need to perform to:
    1. recognize the new hard drives in the system,
    2. format,
    3. partition. What is the normal strategy for partitioning? My current thinking is to have the 36 GB and 18 GB drives as data drives. This is where I am a little bit lost: can I make the entire 36 GB drive one slice for data? I am not quite sure how this is done in real life; I need your help.
    4. create the file system to store the database files.
    Any help would be appreciated.
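    A hedged sketch of the usual Solaris sequence for the four steps above (device names and slice numbers are illustrative; double-check with format(1M) and your own layout before running anything):

    # 1. make Solaris see the newly attached disks without a reboot
    devfsadm -c disk            # or do a reconfiguration boot: reboot -- -r

    # 2./3. label and partition the new disk interactively
    format                      # pick the new disk (e.g. c1t2d0), then partition/label;
                                # one large slice (e.g. s6) for data is a common choice

    # 4. create a UFS file system on the data slice and mount it
    newfs /dev/rdsk/c1t2d0s6
    mkdir -p /u02/oradata
    mount /dev/dsk/c1t2d0s6 /u02/oradata
    # add an /etc/vfstab entry so it mounts at boot, e.g.:
    #   /dev/dsk/c1t2d0s6  /dev/rdsk/c1t2d0s6  /u02/oradata  ufs  2  yes  -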

    Hello,
    Here is the rough idea for HA from my experience.
    The important thing is that the binaries required to run SAP are accessible both before and after a switchover; in that respect the choice of file system doesn't matter much.
    SAP may recommend certain filesystems on Linux; please refer to the SAP installation guide. I always use reiserfs or ext3fs.
    For the soft links I also recommend you refer to the SAP installation guide.
    In your configuration the files related to SCS and the DB are the key.
    Again, those files must be accessible both from hostA and from hostB.
    The easiest way is to share these files via NFS or another shared file system so that both nodes can access them, and let the clustering software mount and unmount those directories.
    DB binaries, data and logs are to be placed in the shared storage subsystem
    (e.g. /oracle/*).
    SAP binaries, profiles and so on are to be placed in shared storage as well
    (e.g. /sapmnt/*).
    You may want to place the binaries on local disk to make sure they are always accessible at the OS level, even if the connection to the storage subsystem is lost.
    In that case you have to sync the binaries on both nodes manually.
    The easiest way is just to put them on shared storage and mount them!
    Furthermore, you can use the sapcpe function to sync the necessary binaries from /sapmnt to /usr/sap/<SID>.
    For your last question, /sapmnt should be located on the storage subsystem; just don't let the storage go down!
