OPS/RAC archivelog space failover

Is anyone using the cluster's filesystem failover capability on their archivelog space for 8iOPS or 9iRAC, instead of the usual global filesystem or NFS mount?
thanks...-j

Mynz wrote:
Hello,
Does RAC produce more logfile entries than usual, or is there any way to check what is producing the logfile activity?
No, RAC itself doesn't influence redo log activity, but database activity (DML operations specifically) does. Because archive log generation depends on redo log dynamics, you should find out what is causing the high DML activity in that database.
Discussions about that already happened here:
Archived Logs
http://kr.forums.oracle.com/forums/thread.jspa?threadID=901150&tstart=45
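To find what is driving the redo (and hence archivelog) volume, the standard dynamic views can be queried; a sketch (run as a privileged user; on RAC, the gv$ equivalents show all instances):

```sql
-- Redo bytes generated per session, highest first.
-- High values point at the sessions doing heavy DML.
SELECT s.sid, s.username, st.value AS redo_bytes
FROM   v$sesstat  st
       JOIN v$statname sn ON sn.statistic# = st.statistic#
       JOIN v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC;
```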

Similar Messages

  • Function of cluster software on OPS (RAC)?

    I want to know the function of, or the reason for needing, cluster software on OPS (RAC).
    As I understand it, if I use Oracle in an HA architecture, I must use cluster software to get the following functions:
    1. health check
    2. IP and resource takeover, etc.
    But in a parallel database environment, because there is the TAF function (tnsnames.ora), there is no need for instance takeover. Is that true?
    If it is true, the cluster software has no need for IP takeover, and because the database is shared by multiple nodes, there is no need for volume takeover (whether using raw devices or a cluster file system).
    Summary of what I want to know:
    ===============================
    Q1) the reason cluster software is needed on OPS (RAC)
    Q2) is there a need to take over an instance in OPS (RAC)? If so, in what case?
    Q3) the difference in cluster software between an HA environment and a RAC (OPS) environment
    Q4) is there any function that is not needed with OPS but is needed in an HA environment?
    Q5) if we use a cluster file system solution that has a buffer cache management function (sync for consistency) on two nodes without the Parallel Server option, is there any problem processing the same database? If so, what is the problem?

    Summary of what I want to know:
    ===============================
    Q1) the reason cluster software is needed on OPS (RAC)
    >
    The cluster software is required to facilitate internode communication between the nodes in the cluster through the cluster interconnect devices. Remember, 9i RAC does not use block pinging for block passing; it uses Cache Fusion.
    Q2) is there a need to take over an instance in OPS (RAC)? If so, in what case?
    >
    Sorry, I don't understand the question.
    Q3) the difference in cluster software between an HA environment and a RAC (OPS) environment
    >
    This depends on the OS and clustering software you are using. For instance, 9i RAC on Windows 2000 requires that you not install Microsoft clustering at all and instead use the clustering tools provided by Oracle. On the other hand, 9i RAC on Compaq Tru64 Unix and OpenVMS relies on the native clustering of those operating systems.
    Q4) is there any function that is not needed with OPS but is needed in an HA environment?
    Q5) if we use a cluster file system solution that has a buffer cache management function (sync for consistency) on two nodes without the Parallel Server option, is there any problem processing the same database? If so, what is the problem?
    >
    Technically, you can use Oracle in a cluster without 9i RAC under the following conditions:
    You use a clustered file system, or raw devices.
    Only one instance, on one node, can access the database at any time. In other words, it could be used for failover from one node to the other, but it can't be used for scaling.
    For more detailed info, you should read all of the relevant Oracle white papers on 9i RAC and, if you can find them, the release notes for 9i RAC on your OS.

  • RMAN support of OPS/RAC and ISD's

    Can some one point me in the right direction for any white papers, or documentation, on RMAN and its current support of OPS/RAC and/or Intelligent Storage Devices (ISD). I'm trying to do a comparison of RMAN against other third party backup and recovery software and need to find out as much as I can of RMAN support of these architectures. Thanks!

    The closest documentation I have of RMAN and OPS/RAC is the Oracle documentation. Take a look in the RAC admin doc. I also talked a little about it in an OpenWorld paper from last year. You can find the paper at
    http://otn.oracle.com/deploy/availability/pdf/BR_OOW01_213WP.pdf
    Bottom line: RMAN supports clustered environments the same as single-instance databases. The rub is the archive log destinations. This is discussed in the RAC documentation.
    Thanks, Tammy
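    The archive log destination "rub" is commonly handled by allocating one RMAN channel per instance when each node archives to local storage, so every destination can be read; a sketch (connect strings are assumptions):

    ```
    run {
      allocate channel c1 device type disk connect 'sys/pwd@inst1';
      allocate channel c2 device type disk connect 'sys/pwd@inst2';
      backup archivelog all delete input;
    }
    ```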

  • Concept and configuration of TAF (TRANSPARENT APPLICATION FAILOVER) in OPS

    Product: ORACLE SERVER
    Date written: 2004-08-13
    Concept and configuration of TAF (TRANSPARENT APPLICATION FAILOVER) in OPS (8.1 and later)
    ===================================================================
    PURPOSE
    Starting with Oracle8, TAF (Transparent Application Failover) between OPS nodes is provided. That is, even when one OPS node fails, every session connected to that node is automatically reconnected to a surviving node without losing its session, so that work continues.
    This document briefly reviews TAF and describes an actual configuration.
    SCOPE
    The Transparent Application Failover (TAF) feature is not supported in Standard Edition from 8i through 10g.
    Explanation
    This section describes the kinds of failure TAF covers, and the failover type and method specified when configuring TAF.
    (1) Failure types:
    TAF is performed correctly for all of the following failures. In MTS mode there is no problem at all, but in dedicated mode TAF is only possible if the configuration uses dynamic registration.
    instance failure: no problem in MTS mode, but dedicated mode must be configured with dynamic registration. Even if the listener on the failed instance's side is still up, with dynamic registration the instance is deregistered from the listener when it fails, so the client checks the listener information and then attempts to connect to the listener on another node. Without dynamic registration, however, the listener on the failed side still shows the failed instance in its services, and attempting to connect to that instance raises ORA-1034: Oracle not available.
    instance & listener down: if the listener is down as well, the reconnection attempt after the failure fails against the listener on the failed side, and the connection is made to the listener on another node.
    node down: TAF also works when the node itself goes down, but this requires the appropriate TCP keepalive configuration parameter on the client. If work between the client and server is in progress when the node fails, there is no problem; but if nothing is being executed on the server side, the client cannot immediately tell that the node is down. At the moment the client sends its next request to the server, it sends TCP packets to a TCP endpoint that no longer exists, and discovering that the server node is no longer alive can typically take two or three minutes. Once the node has failed, the write() call on the network returns an error, the client receives it, and the failover function is invoked.
    For the client to detect that the server node is down even while idle, TCP keepalive must be configured, and to use this keepalive for Oracle connections the ENABLE=BROKEN clause must be specified in the TNS service name. An example of this ENABLE=BROKEN clause, which belongs inside the DESCRIPTION clause, can be found in part (3), the tnsnames.ora configuration, of the example below.
    With ENABLE=BROKEN specified, the network-level keepalive setting is used; since this is typically set to 2-3 hours, the value has to be reasonably short to be meaningful for TAF. If the keepalive time is too short, however, and there are many idle sessions, network load can increase considerably, so this setting should be discussed thoroughly with the OS or network administrator.
    For details on keepalive and how to configure it, refer to <bulletin:11323: the relationship between SQL*NET DCD (DEAD CONNECTION DETECTION) and KEEPALIVE>.
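    On the Solaris platform used in this example, the keepalive interval can be checked and shortened with ndd; a sketch (the 10-minute value is an assumption to be agreed with the network administrator):

    ```shell
    # Show the current TCP keepalive interval in milliseconds
    # (the Solaris default is 7200000, i.e. 2 hours)
    ndd -get /dev/tcp tcp_keepalive_interval
    # Shorten it, e.g. to 10 minutes, so idle clients notice a dead node sooner
    ndd -set /dev/tcp tcp_keepalive_interval 600000
    ```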
    (2) type: session vs. select
    Two types are provided: the session type, in which the session is preserved but all SQL statements in progress fail, and the select type, in which DML statements are rolled back but SELECT statements are preserved.
    Even with the select type, a query on information that can only be obtained from the failed instance may abort mid-fetch with the following error. A query against gv$session for that instance is one such example.
    ORA-25401: can not continue fetches
    (3) method: basic vs. backup
    There are two methods: the basic method, which connects the session to another node when a failure occurs, and the backup (preconnect) method, which establishes a backup session to another node in advance and uses it when a failure occurs.
    Example
    Configuring TAF requires settings in init.ora, listener.ora and tnsnames.ora.
    Since MTS mode poses no problem, this example uses dedicated mode, which must be configured with dynamic registration.
    The test was performed on Oracle 8.1.7.4 / Sun Solaris 2.8.
    Two nodes, A and B, are assumed.
    (1) initSID.ora
    - initSID.ora on node A
    service_names=INS1, DB1
    local_listener="(address=(protocol=TCP)(host=krtest1)(port=1521))"
    - initSID.ora on node B
    service_names=INS2, DB1
    local_listener="(address=(protocol=TCP)(host=krtest2)(port=1521))"
    Multiple service_names can be specified; what matters is that one service name shared by both nodes must be specified. In general, the db_name will do.
    For host=, specify a hostname or an IP address.
    (2) listener.ora
    LISTENER =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = krtest1)(PORT = 1521)))
    On node B, specify node B's hostname or IP address instead of krtest1.
    (3) There are two ways to set up tnsnames.ora.
    Examples of both the basic method and the backup method are given below; use one of the two. The backup method can take less time at failover because it uses a pre-established session, but it has the overhead of holding an unused session open on the other node in advance, so each has pros and cons.
    Both configurations enable not only TAF but also connect-time failover: if node A fails, the connection is made to node B using the same TNS service name (here, opsbasic or ops1).
    The address entries in the address clause are tried from the top, so if you want to connect to node B under normal conditions, put krtest2 first in opsbasic, or use ops2 instead of ops1.
    The (enable=broken) setting here makes use of the TCP keepalive configured on the client machine; it can be removed if network load is a concern.
    a. basic method
    In tnsnames.ora on krtest1, configure opsbasic and ops2; on krtest2, configure opsbasic and ops1, then change backup=ops2 to backup=ops1.
    opsbasic =
      (description =
        (address_list =
          (enable = broken)
          (load_balance = off)
          (failover = on)
          (address = (protocol = tcp)(host = krtest1)(port = 1521))
          (address = (protocol = tcp)(host = krtest2)(port = 1521)))
        (connect_data =
          (service_name = DB1)
          (failover_mode =
            (type = select)
            (method = basic)
            (backup = ops2))))
    ops1 =
      (description =
        (enable = broken)
        (load_balance = off)
        (failover = on)
        (address = (protocol = tcp)(host = krtest1)(port = 1521))
        (connect_data = (service_name = DB1)))
    ops2 =
      (description =
        (enable = broken)
        (load_balance = off)
        (failover = on)
        (address = (protocol = tcp)(host = krtest2)(port = 1521))
        (connect_data = (service_name = DB1)))
    b. preconnect method
    Both ops1 and ops2 below must be defined in the same tnsnames.ora. When connecting through ops1 and working on krtest1, a backup session is established to krtest2 in advance.
    ops1 =
      (description =
        (address_list =
          (enable = broken)
          (load_balance = off)
          (failover = on)
          (address = (protocol = tcp)(host = krtest1)(port = 1521))
          (address = (protocol = tcp)(host = krtest2)(port = 1521)))
        (connect_data =
          (service_name = DB1)
          (failover_mode =
            (backup = ops2)
            (type = select)
            (method = preconnect))))
    ops2 =
      (description =
        (address_list =
          (enable = broken)
          (load_balance = off)
          (failover = on)
          (address = (protocol = tcp)(host = krtest2)(port = 1521))
          (address = (protocol = tcp)(host = krtest1)(port = 1521)))
        (connect_data =
          (service_name = DB1)
          (failover_mode =
            (backup = ops1)
            (type = select)
            (method = preconnect))))

  • OPS/RAC Vs Raw Device/File System

    Hello. First of all, I want to know what OPS/RAC and raw device/file system are, and then the differences between them.
    I apologize for my English.
    Thanks.

    OPS is Oracle Parallel Server, available up to Oracle 8i; RAC is Real Application Clusters, available from Oracle 9i onward.
    A raw device is just the disk presented to the server and used directly. A file system presents the disk in a more transparent and manageable form, using the OS or third-party software.
    You need to refer respective documents to know them in detail.
    Ashok

  • RAC backup procedure failover

    We have an Oracle 11gR2 RAC database on two nodes. We also have an RMAN backup script that works fine, using a recovery catalog database located in a town 20 km from the data center. The script for database backup works fine and is started from a crontab job or from Oracle dbconsole (for now it works from crontab). The recovery procedure has been checked and everything works properly.
    The problem is that the script runs from the first node in the cluster, and if that node is turned off, the backup can't run. How can we give our script a failover version? We also tried to run the backup through dbconsole, but this only works if the node from which the job is started is up.
    Essentially the question is: "How do we ensure that our backup works whether or not both nodes are active?"

    Hi,
    Don't use the cluster nodes to start the backup. You can use the host of the recovery catalog database located in the town 20 km away.
    Move all the RMAN scripts to the recovery catalog host and configure them in that host's crontab. RMAN works only as a client; the backup itself is always performed on the server side.
    Create a service on the RAC for the RMAN connection (e.g. RMAN_BACKUP). The service should run on node 1 of the RAC but be able to run on node 2 or 3 if node 1 is not available. (If you use the default service name of the RAC (the database name) and you are using parallelism, RMAN can start a session on each node and the backup is performed by all nodes at the same time. This is not a problem in itself, but it can cause performance problems in your environment.) For that reason I recommend creating a service.
    http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/hafeats.htm
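    On 11gR2, such a service could be created with srvctl along these lines (a sketch; the database name orcl and the instance names are assumptions):

    ```shell
    # RMAN_BACKUP prefers instance orcl1 and can fail over to orcl2
    srvctl add service -d orcl -s RMAN_BACKUP -r orcl1 -a orcl2
    srvctl start service -d orcl -s RMAN_BACKUP
    ```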
    Configure a net service name using the local naming method on host of recovery catalog.
    If you are using the SCAN feature, do as above; if not, put all the VIP hostnames in the ADDRESS list.
    e.g:
    RMAN_BACKUP =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node-scan.oracle.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RMAN_BACKUP)))
    Regardless of which node is active, your backup will start as long as one node is up.
    This will only work if you are performing online backups. I hope so.
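    The RMAN invocation from the catalog host could then be as simple as the following sketch (credentials and the RCATDB alias are assumptions):

    ```
    connect target sys/password@RMAN_BACKUP
    connect catalog rcat/password@RCATDB
    backup database plus archivelog delete input;
    ```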
    Hope this helps,
    Levi Pereira
    Edited by: Levi Pereira on Aug 22, 2011 11:27 PM

  • ARCHIVELOG space consumption

    Hi!
    I am new to the DBA world and would like to ask how much space is consumed when an Oracle XE database is in archivelog mode. That is, if XE is limited to 5 GB of data, how much of that space is meant for the archivelog segment? Furthermore, what exactly (besides data) consumes those 5 GB of available space (db objects, data, ???)?
    Thank you in advance,
    Marinero

    Thank you C.
    I wasn't sure whether the archivelog files were part of those 5 GB of DATA space. I was thinking that if you have, let's say, 2 of 5 GB of space consumed, and the last backup was made when there was only 1 GB of data in the DB, then the actual consumption is 3 GB (2 GB for data and 1 GB for archive logs). Thank you once again for the clarification.
    Regards,
    Marinero

  • RAC Archivelogs not Switching on Closed Threads with Dataguard.

    I have a 4-node cluster with each database running 1 instance.
    My experience is that when RMAN backs up the archivelogs, it rolls the closed threads over, i.e. switches the archivelogs and backs them up.
    After I put one of my production databases (10.2.0.2) in Data Guard this changed, and only the open thread switches logs.
    It has caused problems with duplicating the database to test.
    I went through support, and Oracle said it's a bug fixed in a higher version, so upgrade. (Management is not excited to do so.)
    The workaround is to open the idle instances and immediately close them before an application thread tries to connect.
    I just put a second database in Data Guard (a related application, using the same Oracle home), and it is switching all threads on the archivelog backup.
    The only difference is that the original has been open-recovered (flashed back) and closed and the second has not, but its standby was recreated afterwards. (I have put the standby in flashback now.)
    Has anyone else observed this behaviour?

    If the standby is in Maximum Availability and the primary has one or more closed threads, the standby doesn't need the closed threads.
    If you are doing backups to create another duplicate, I suggest that you add a forced switch or archive-log command for the closed threads in your backup script.
    Hemant K Chitale
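    The forced switch Hemant suggests can be issued before the archivelog backup; on RAC, ARCHIVE LOG CURRENT switches and archives the current log of every enabled thread, including closed ones (a sketch):

    ```sql
    -- Switch and archive the current redo log of all enabled threads,
    -- so closed threads are also rolled over before the backup.
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    ```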

  • Toplink support for RAC Failover

    We need to enable TopLink to support RAC Fast Connection Failover. We need to support only read-query retry functionality, not write-query retry. We found one document
    (http://www.oracle.com/technology/events/develop2007/presentations/oracle_jdbc_high_availability_load_balancing_best_practices_and_road_map.pdf)
    that explains how to configure a JDBC connection for RAC failover. However, since we are using TopLink, we don't explicitly write our own JDBC URL, and TopLink's session/unit of work automatically handles the actual SQL query execution.
    Is there any way we can configure TopLink to support RAC Fast Connection Failover, such as in TopLink's sessions.xml or in the OC4J container's connection pool settings?
    Just for information, the way we acquire a TopLink session is through SessionFactory:
       SessionFactory sessionFactory = new SessionFactory("repository",
                                     "META-INF/repository-sessions.xml");
       Session toplinkSession = sessionFactory.acquireSession();
    Then we do queries, acquire units of work, etc., using that toplinkSession.
    Thanks.

    We have enhanced our RAC FCF support in 11gR1 leveraging the new Universal Connection Pool to make it more seamless to the application. Which version of TopLink and OC4J are you using?
    Doug

  • Optimize rac failover time?

    I have a 2-node RAC, and failover is taking 4 minutes. Please advise on any tips/documents/links that show how to optimize the RAC failover time.
    [email protected]

    Hi
    Could you provide some more information about what you are trying to achieve? I assume you are talking about the time it takes for clients to start connecting to the available instance on the second node; could you clarify this?
    There are SQL*Net parameters that can be set, and you can also make shadow connections with the preconnect parameter in the failover_mode section of your tnsnames.ora on the clients.
    Have you set both of your hosts as preferred in the service configuration on the RAC cluster? The impact of a failure will be smaller, as approximately half of your connections will be unaffected when an instance fails.
    Cheers
    Peter
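    The preconnect shadow connections Peter mentions are configured in the FAILOVER_MODE clause of the client's tnsnames.ora; a minimal sketch (hostnames and the service name are assumptions):

    ```
    fastfail =
      (description =
        (address_list =
          (failover = on)
          (address = (protocol = tcp)(host = nodeA)(port = 1521))
          (address = (protocol = tcp)(host = nodeB)(port = 1521)))
        (connect_data =
          (service_name = DB1)
          (failover_mode =
            (type = select)
            (method = preconnect))))
    ```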

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is: does anyone have experience with this kind of configuration with 8i OPS or 9i RAC? Are there any issues with the automatic move of the archivelog space from the failed node to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use HAStoragePlus (HASP) resources and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for the archive logs? Or is there a sequence-numbering issue if you run private archive logs on both sides, or is sequencing only an issue with redo logs? In either case, if you're using RMAN, you'd have to back up the redo logs and archive log files on both nodes, if memory serves me correctly...

  • RAC and instance goes down, transaction state?

    I was talking to our physical DBAs, and they told me that when the RAC node you're connected to goes down:
    1) if you're mid-query, your process gets transferred to another node
    2) if you're not currently running a query, but you are in the middle of a transaction (i.e. no rollback or commit yet), you lose the state of your transaction
    I've seen 1) in action, but I haven't heard anything like 2). Is it true? They couldn't remember whether they'd read it in the manuals, but they had heard it mentioned on more than one occasion during demonstrations.
    -Chuck

    What version of Oracle?
    What kind of Application?
    For most of the history of OPS/RAC, whether or not you are in the middle of a transaction, when the instance you are connected to dies you must reconnect. Only if an Oracle Net failover connection was configured could you successfully reconnect asking for the same instance, though you would actually be connected to one of the available instances.
    Any work in process was rolled back by another node.
    Only with the advent of Transparent Application Failover (TAF) would work on the dying instance transfer to another instance. This works only if the application is coded to perform this task, and when it first came out, only for SELECT statements.
    HTH -- Mark D Powell --

  • At the time of finding redundant component failover information....

    OS: Red Hat Linux AS
    Oracle: 10.2.0.3
    Hi,
    I am new to RAC.
    Our production environment uses Oracle RAC with fully redundant components: two HBAs, two FC switches, four network cards (two for the private interconnect and two for public), and four switches for high availability. My doubt is: where can we get component-failure information? For example, if we lose one of the HBAs, the application stays available because of the redundant component (the second HBA), so we are not aware of the failover. How can we monitor these types of failures, both in an OS like Linux and in Oracle? Please guide me.
    v.s.srinivas potnuru

    Hi,
    On a hardware failure event, kernel messages are written to the operating system logs under /var/log. If the redundancy is at the hardware level, it is transparent to Oracle.
    Regards,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • Veritas required for Oracle RAC on Sun Cluster v3?

    Hi,
    We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
    Can you please explain these 2 questions?
    1)
    If we have a hardware disk array RAID controller with LUNs etc, then why do we need to have Veritas Volume Manager (VxVM) if all the LUNS are configured at a hardware level?
    2)
    Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    Thanks,
    Steve

    > We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
    Good. This is a popular configuration.
    > 1) If we have a hardware disk array RAID controller with LUNs etc, then why do we need to have Veritas Volume Manager (VxVM) if all the LUNs are configured at a hardware level?
    VxVM is not required to run RAC. VxVM has an option (separately licensable) which is specifically designed for OPS/RAC. But if you have a highly reliable, multi-pathed, hardware RAID platform, you are not required to have VxVM.
    > 2) Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    No.
    IMHO, simplify is a good philosophy. Adding more software
    and layers into a highly available design will tend to reduce
    the availability. So, if you are going for maximum availability,
    you will want to avoid over-complicating the design. KISS.
    In the case of RAC, or Oracle in general, many people do use
    raw and Oracle has the ability to manage data in raw devices
    pretty well. Oracle 10g further improves along these lines.
    A tenet in the design of highly available systems is to keep
    the data management as close to the application as possible.
    Oracle, and especially 10g, are following this tenet. The only
    danger here is that they could try to get too clever, and end up
    following policies which are suboptimal as the underlying
    technologies change. But even in this case, the policy is
    coming from the application rather than the supporting platform.
    -- richard

  • RAC with Dbnode1 and Dbnode2

    I have RAC with Dbnode1 and Dbnode2, and my application submits a job on Dbnode1. The job is running and Dbnode1 goes down.
    Is it possible for the running job to automatically move to Dbnode2?
    1. The App1/App2 nodes and the DBNode1/DBNode2 nodes are running.
    2. An application batch is submitted successfully from Appnode1/DBnode1, and DBNode1 goes down in the middle of the batch.
    3. The pending job does not switch automatically to DBnode2.

    995587 wrote:
    Is it possible for the running job to automatically move to Dbnode2?
    Yes, it is entirely possible, but your application needs to support Oracle RAC.
    Client Failover Best Practices for Highly Available Oracle Databases: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr2-client-failover-173305.pdf
    Application Failover with Oracle Database 11g
    http://www.oracle.com/technetwork/database/app-failover-oracle-database-11g-173323.pdf
    How to develop it:
    Transparent Application Failover in OCI
    http://docs.oracle.com/cd/E14072_01/appdev.112/e10646.pdf
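    If the batch runs as a DBMS_SCHEDULER job, one server-side approach is to bind it to a job class tied to a RAC service, so the job runs wherever the service is currently running; a sketch (the BATCH_SVC service and class name are assumptions, and the service must first be created with srvctl):

    ```sql
    -- Jobs assigned to this class follow the BATCH_SVC service, so they run
    -- on the surviving node if their original node goes down.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB_CLASS(
        job_class_name => 'BATCH_CLASS',
        service        => 'BATCH_SVC');
    END;
    /
    ```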
