Oracle 10.2.0.4 on SAN Storage.

Hi Experts,
I am in the process of installing a new Oracle 10g database for my production system, as part of an upgrade, on HP SAN storage with RAID 5.
We run Oracle Apps 11i, and this is also going to be upgraded to R12.
My concern is how to lay out the different Oracle database files across filesystem mount points to get the maximum benefit (e.g., which files should share a mount point, how many mount points there should be, etc.).
Kindly suggest the best practice for this.
Thanks in advance.

Thanks,
Do you mean that I can put the ORACLE_HOME and the database on a single mount point?
Because if I follow OFA, everything is going to be on a single mount point.
And, for example, to keep the control files and redo logs redundant (as recommended), I would put each copy of them on a different mount point.
But do I not need that here?
Thanks.
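A minimal sketch of the kind of multiplexing mentioned above, assuming hypothetical mount points /u02 and /u03 and a database named PROD (adjust paths, group numbers and sizes to your own layout):

-- Keep one control file copy on each mount point (takes effect at the next restart)
ALTER SYSTEM SET control_files =
  '/u02/oradata/PROD/control01.ctl',
  '/u03/oradata/PROD/control02.ctl' SCOPE=SPFILE;

-- Give each redo log group one member per mount point
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u02/oradata/PROD/redo04a.log',
   '/u03/oradata/PROD/redo04b.log') SIZE 100M;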

Similar Messages

  • Oracle 10g RAC - two nodes - change SAN storage - only one node up

    Hi all,
    We're using Oracle 10g on Red Hat Linux.
    We want to migrate to another SAN storage.
    The basic steps are:
    node 1:
    1. presents the volume disk and create partition
    2. asmlib create disk newvolume
    3. alter diskgroup oradata add disk newdisk
    4. alter diskgroup oradata drop disk OLDDISKOTHERSAN
    But NODE 2 IS DOWN.
    We want to start node 2 only after ALL operations on node 1 are finished.
    What's your opinion? Any impact?
    Can I execute oracleasm scandisks on node 2 afterwards?
    Thank you very much!

    Modify the commands just slightly...
    1. present the volume/disk to node1 AND node2
    2. create the partition on node1 [if necessary??]
    3. asmlib create disk newvolume [node1] ## I am not a fan of and have never had to use ASMlib - however, apparently you do...
    4. asmlib scandisk... [node2]
    [on one of the nodes]
    alter diskgroup oradata add disk 'ORCL:NEWDISK1' rebalance power 0;
    alter diskgroup oradata add disk 'ORCL:NEWDISK2' rebalance power 0;
    alter diskgroup oradata add disk 'ORCL:NEWDISK3' rebalance power 0;
    alter diskgroup oradata add disk 'ORCL:NEWDISK{n}' rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK1OTHERSAN rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK2OTHERSAN rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK3OTHERSAN rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK{n}OTHERSAN rebalance power 0;
    alter diskgroup oradata rebalance power {n};
    You can actually do this with BOTH nodes ONLINE. I was involved in moving 250TB from one storage vendor to another - all online - no downtime, no outage.
    And before you actually do this and break something - TEST it first!!!
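    One addition (not part of the original reply): before unpresenting the old SAN for good, you would typically confirm that the rebalance has finished. A minimal check against the standard ASM view might look like this:

    -- No rows returned means no rebalance is currently running
    select group_number, operation, state, power, est_minutes
      from v$asm_operation;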

  • Guide to install Oracle 11gR2 on Solaris 10 with SAN storage

    Hey experts,
    Has anyone installed Oracle 11gR2 RAC on Solaris 10 with SAN storage before? Is there any link or proper document that I can follow? I'd appreciate it if you guys could assist me.
    Regards,
    liang
    Edited by: 816734 on Sep 15, 2011 2:32 AM

    Please check the following link:
    http://www.oracle.com/pls/db112/portal.portal_db?selected=11&frame=#solaris_installation_guides
    Anytime!

  • Help with Oracle 10g Client Connectivity from Linux to IBM SAN storage

    Hello Oracle Experts,
    This is my first post. My client has an Oracle 10g database up and running on IBM SAN storage.
    We have some NMS tools running on Red Hat Linux version 5. These tools require connectivity to the Oracle database, which resides on SAN storage connected with Fibre Channel cables.
    How do I establish connectivity from Linux to the SAN storage? I would be glad if you could explain the steps to me, and also any pre-installation/post-installation patches and procedures involved.
    With an IP-based network we normally give the IP address of the host running the database server; I have no idea about SAN storage connected with Fibre Channel.
    Please guide me on establishing connectivity from Linux 5 to the SAN.
    Thanks.
    Regards,
    RaviShankar.

    user13153556 wrote:
    Hi Rajesh,
    Actually, I will not be touching the Oracle instance's SAN box directly. I will only access the database from another machine; in my case it is a Linux box.
    So my question is: how do I make the Oracle client on the Linux box connect to an Oracle instance running on another, non-IP-based machine (the SAN storage)?
    Install the Oracle client on this Linux machine.
    Make sure you have network connectivity from the Linux machine to the database server. You need to connect to the server where the DB instance is running; you need not bother about the SAN storage.
    Make a TNS entry in the client's $ORACLE_HOME/network/admin/tnsnames.ora file.
    Use SQL*Plus to connect to the database using the client.
    Regards
    Rajesh
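    For illustration only, a tnsnames.ora entry of the kind Rajesh describes might look like this, assuming a hypothetical host dbhost.example.com, the default listener port 1521, and a service name of PROD (substitute your real values):

    PROD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = PROD))
      )

    After that, "sqlplus username@PROD" from the Linux box reaches the database over the normal IP network; the Fibre Channel SAN never enters the picture on the client side.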

  • Oracle tablespace creation problem on SAN storage from a Solaris x86 machine

    Hi everybody,
    I am new to this forum, and I am also new to the Solaris operating system with SAN storage.
    I am using Oracle Database 10g on a Sun x86 machine with Solaris 5.10.
    While trying to create a tablespace on SAN storage on a 2.1 TB mount point, the following error occurs:
    SQL> create tablespace test datafile '/appbrm01/oracle/oradata/test1.dbf' size 100M;
    create tablespace test datafile '/appbrm01/oracle/oradata/test1.dbf' size 1M
    ERROR at line 1:
    ORA-01119: error in creating database file '/appbrm01/oracle/oradata/test1.dbf'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    SQL>
    But when I try to create a tablespace on SAN storage on a 1.8 TB mount point, it works fine.
    So please, if anybody can help me solve this issue, I will be very glad.
    Thanks
    Reshad

    Also, Data Pump from a NAS device can perform slowly.
    Take a look at this:
    10gR2 "database backup" to an NFS-mounted dir results in ORA-27054 NFS error
    Note 356508.1 - NFS Changes In 10gR2 Slow Performance (RMAN, Export, Datapump), from Metalink.

  • Oracle RAC on SAN Storage

    Hi
    We are in the midst of defining a process for a 2-node installation on a SAN storage device. Oracle 11.2 on RHEL 5.
    What is the recommended implementation process for the shared storage?
    We would like the mirroring to stay at the storage level.
    * NFS without ASM
    * NFS with ASM
    * LUN with ASM
    * LUN without ASM
    Any other option, if any.
    I have gone through the documentation; it does give options, but it does not state the best practices.
    Thanks in advance, everyone, and apologies if there is already a post on this and I'm repeating it.
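    As a hedged illustration of the LUN-with-ASM option listed above (hypothetical multipath device names; EXTERNAL REDUNDANCY leaves the mirroring to the storage array, as the poster wants):

    -- LUNs presented to both nodes, e.g. via device-mapper multipath
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/mapper/asm_data01', '/dev/mapper/asm_data02';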

    UDP is still relatively slow (due to Ethernet itself) compared to storage protocols over Fibre Channel and InfiniBand (which can be bonded/multipathed too). HBA and HCA cards all have at least 2 ports.
    The basic issue is that Ethernet was never designed with storage protocol considerations, or even interconnect considerations. Case in point: Oracle developed the IB protocol RDS (Reliable Datagram Sockets) because UDP was not efficient and scalable enough. According to Oracle's performance reports to the OFED committee when ratifying RDS as a standard IB protocol, it was 2x as fast as UDP while consuming 50% less CPU.
    Port-to-port latency for an InfiniBand fabric layer can be as low as 70 ns (Wall Street IB implementations being a real-world example). Most HPC centres using InfiniBand have latencies of around 100 ns.
    Ethernet is not the only solution, despite what vendors like Cisco are keen to push "as the only solution, so help us Donald Davies".
    So it is not so much a religious debate, as which-is-the-best-OS debates often turn into, as one of ignorance, with Ethernet vendors exploiting and enforcing that.
    There are sound reasons InfiniBand as an interconnect family grew from a 4% market share 10 years ago to over a 40% market share today among the world's top 500 HPC clusters, while Ethernet's market share has been decreasing.

  • New SAN storage

    My PROD environment is Oracle 10g RAC on Windows 2003.
    • Oracle Cluster Ready Services software and database software are installed on local disk.
    • The Oracle database and archived logs are on SAN storage.
    Now I just want to move to new SAN storage due to a problem with the older storage. How can I do this?

    Given what you've written I'm amazed you have anything working.
    You installed the CRS for what reason? RAC? ASM?
    Do you have a RAC cluster? Is it working?
    Do you want to build a RAC cluster?
    That something is a SAN confers almost no information of value. Is it copper or fibre? Is it EMC, Fujitsu, Hitachi, NetApp, Pillar, or someone else's? Does it have RAW devices that can be used to create ASM DiskGroups or a file system?
    Read the docs and repost addressing the missing information from this post.

  • RAC 10G With SAN storage

    Dear all,
    I am trying to install Oracle 10gR2 Database using cluster hardware with 2 nodes and RAID SAN storage. I was thinking I could go with Oracle RAC, but can anyone advise me whether that is OK and provide me with a step-by-step installation manual, knowing that I am using Windows 2003 64-bit as the OS? Or can anyone help me install the best solution for my case?

    Check the following links; they will help:
    Best Practices for Oracle Database 10g RAC on Microsoft 64bit Windows
    RAC Installation Lab

  • Windows 2008 R2 Cluster - migrating data to new SAN storage - Detailed Steps?

    We have a project where some network storage is falling out of warranty and we need to move the data to new storage. I have found separate guides for moving the quorum and replacing individual drives, but I can't find an all-inclusive, detailed step-by-step guide for this process, and I know this has to happen all the time.
    What I am looking for is detailed instructions on moving all the data from the current SAN storage to the new SAN storage, start to finish. This is all server side; there is a separate storage team that will present the new storage. I'll then have to install the multi-pathing driver, and I am looking for a guide that picks up at that point.
    The real concern here is that this machine controls a critical production process and we don't have a proper setup with storage and everything to test with, so it's going to be a little nerve-racking.
    Thanks in advance for any info.

    I would ask Hitachi. As I said, the SAN vendors often have tools to assist in this sort of thing. After all, in order to make the sale they often have to show how to move the data with the least amount of impact to the environment. In fact, I would have thought that inclusion of this type of information would have been a requirement for the purchase of the SAN in the first place.
    Within the Microsoft environment, you are going to deal with generic tools, and using generic tools the steps will be similar to what I said:
    1. Attach the storage array to the cluster and configure the storage.
    2. Use robocopy or Windows Server Backup for file transfer. Use database backup/recovery for databases.
    3. Start with applications that can take multiple interruptions as you learn what works for your environment.
    Every environment is going to be different. To get into much more detail would require an analysis of what you are using the cluster for (which you never state in either of your posts), what sort of outages you can operate with, what sort of recovery plan you will put in place, etc. You have stated that your production environment is going to be your lab because you do not have a non-production environment in which to test. That makes it even trickier to try to offer any sort of detailed steps for unknown applications/sizing/timeframes/etc.
    Lastly, the absolute best practice would be to build a new Windows Server 2012 R2 cluster and migrate to that new cluster. Windows Server 2008 R2 is already out of mainstream support, which means you have only five more years of support on your current platform, at which point you will have to perform another massive upgrade. Better to perform a single upgrade that gives you a much longer support window.
    . : | : . : | : . tim

  • New mac pro with fiber attached san storage

    We have a digital media department that uses After Effects to produce content for different types of media boards, digital signage, post production, etc. They start with RED 4K files, which get edited in AE and rendered to different video formats. They want to move away from Windows PC-based editors with multi-drive RAID local storage to the new Mac Pro with 8 Gb fiber attached to a shared SAN storage solution. First question: is this a wise move for running After Effects? Second question: do you need to create multiple logical volumes on the SAN storage to accommodate where the AE render location, cache, source, database, and general media content files are stored, or can this all be stored in different folders on one large 50 TB volume? Third question: should six different editors be sharing the location where these files are stored?

    Third question is, should six different editors be sharing the location where these files are stored?
    No. Each user should have his own home directory; not for performance reasons, but simply because they'll end up overwriting each other's files. As for the rest: logical volumes don't matter on your end, but SAN systems still use internal stripe groups, so it would make sense to define volumes on those stripe sets simply in the interest of performance. Beyond that, AE doesn't really care. It uses the operating system's file I/O, and 'NIX-based systems don't differentiate between local volumes and other resources as long as they are mounted correctly in the file system.
    Mylenium

  • Need particular SAN storage document in English

    Hi,
    I have a SAN storage solution document. The document is in German, and I am unable to find the same document in English.
    I have attached the SAN storage solution document (in German) below.
    Can anyone provide me the same document in English?
    I will be very thankful if anyone can help me.

    Thanks for the reply.
    Can you or anyone provide me with a similar solution document covering MDS and the storage router?
    That would be very helpful for me.

  • SAN Storage solution in combination w/ Server 2012 R2 (HYPER-V)

    Hello everyone,
    We are currently in talks with two storage vendors (IBM and NetApp) in order to replace/expand our current SAN storage and finally make the move to a virtualized environment (Hyper-V).
    For the NetApp side, they proposed a FAS3220 for our main location and a FAS2240 for our DR location, both running Clustered ONTAP 8.2. For the IBM side, they proposed a couple of V3700s (under SVC).
    I'm aware this isn't the place to discuss these things in depth, but could anyone who has experience with both solutions in combination with Server 2012 (R2) and Hyper-V recommend one solution over the other? Or is anyone in possession of a feature list for both solutions?
    Any help will be greatly appreciated.
    With kind regards,
    L.

    You should avoid NetApp when thinking about a Hyper-V deployment, for a reason: NetApp "internals" are a pure filer built around WAFL, with NFS as a direct interface to it. So for environments where you need to talk NFS (VMware ESXi and any *nixes with a good client NFS implementation), NetApp really shines. But Hyper-V has no support for NFS as VM storage and pretty much never will, Windows has a very low-performing NFS client (latency is ridiculous, 10x-100x higher compared to FreeBSD running on the same hardware), and iSCSI with NetApp is a bolt-on solution (basically a file on a local WAFL partition with an iSCSI uplink), so you're not going to get the best performance or good feature coverage, at least until NetApp has had SMB 3.0 in production for a couple of years. So, picking between IBM and NetApp in your case, assuming these are the only two options: go IBM. If you need a SAN for your environment and can think twice, think about a virtual SAN. That's what the big hypervisor boys are doing.
    StarWind iSCSI SAN & NAS

  • Connect two domain controllers to SAN storage

    Hi everyone
    I have a primary and a secondary domain controller. I want to connect them to SAN storage as a cluster. I tried to configure Failover Clustering on them, but when adding them both in the Create Cluster Wizard I receive the following error (see the link):
    http://s14.postimg.org/lssjm2vu9/Screenshot_1.png
    So, is there any solution for this error, or maybe there is another way to connect both DCs to the storage as a cluster?
    Any help will be appreciated.

    Hi,
    As far as I know, this configuration is not supported:
    http://support.microsoft.com/kb/2795523/en-us
    Regards
    Guido

  • SAN Storage Migration with Hyper-V 2008 R2 CSV

    Dears,
    Our customer has configured a 2-node Hyper-V cluster with CSV connected to an old SAN storage array. The host operating system is Windows 2008 R2.
    We are planning to migrate the data from the old storage to the new storage (both storage boxes are connected via fibre).
    With regard to the migration, can we do the below? If so, please provide some guidelines related to the Hyper-V queries.
    (We are trying to avoid a storage-based migration.)
    - Connect the new storage to the servers
    - Create the same LUNs as on the old storage and assign them to the servers
    - Create a new CSV and point it to the new LUNs in Failover Cluster Manager
    - Use the export/import function to move the VMs from the old storage to the new storage
    - Once all the VMs are moved, create a new LUN for the quorum and re-configure the cluster to use the new quorum
    Will the above steps give an error-free migration? If so:
    - Can we delete the old CSV and dismount the old storage at this stage?
    Also, these VMs (Exchange and SQL) have separate LUNs assigned for the Exchange and SQL data. Will this data also be exported when we use the Hyper-V export/import feature to migrate the virtual machines?
    Your valuable response in this regard is highly appreciated.
    Regards
    Muralee

    Dear All,
    Thanks for your valuable replies.
    I was able to successfully migrate the cluster; below are the steps I performed:
    * Initialized the SAN storage and assigned the LUNs to both servers.
    * Logged in to the server which is the current owner of the CSVs and disks.
    * Initialized the disks as GPT and formatted the LUNs without a drive letter.
    * Added them to the Hyper-V cluster as disks.
    * Added them to the CSV.
    * Exported the VMs to the new disks.
    * Deleted the VMs from the Hyper-V console (which does not delete any VM files).
    * Imported the VMs (chose the default settings in the wizard).
    * Added each VM as a service in Failover Cluster Manager.
    * After completely moving the VMs, created a new LUN for the quorum and changed the quorum configuration to the new disk.
    * Removed the old storage, restarted the cluster, and checked the VMs' functionality; everything was fine.
    NOTE 1: I faced a difficulty with the multipathing software, as the customer was using EMC PowerPath and I had to remove it and enable Windows MPIO.
    Anybody facing a similar scenario, please send me a note; I would be glad to share the experience.
    NOTE 2: I tried all the SAN migration options from the storage vendor, but this method did the migration in a much easier and more confident manner.
    Regards

  • High Available virtual machines-SAN storage availability

    Hi,
    Considering that we have the following scenarios:
    1) Highly available virtual machines
    2) Storage presented through a virtual SAN switch connection.
    The question that I have for you is the following:
    How will the SAN storage be available to the virtual machines in:
    a) Live-migration scenarios?
    b) Physical server failure?
    Thank you.

    Hi,
    a) from Technet:
    "You must have access to any virtual SANs that are being used by the virtual machine. In addition, the virtual SAN connectivity must have the same number of ports on the SAN to expose the LUNs."
    http://technet.microsoft.com/en-us/library/dn551169.aspx#BKMK_Step1
    b) A classical failover will occur and all cluster resources will be moved to another cluster node, depending on your configured Failover Cluster rules.
    regards Marc Grote aka Jens Baier - www.it-training-grote.de - www.forefront-tmg.de - www.galileocomputing.de/3570
