Backup recovery in Sun Cluster production

Hello friends,
          Can anybody provide me with documentation for backup/recovery of a Sun Cluster based SAP R/3 server?
Thanks in advance.

Similar Messages

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I'm performing a DB backup, the backup job should not fail if node1 goes down. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must have some pre-configured way to resolve a host name to an IP address; use DNS, NIS, etc. for this.
    Specify the cluster IP in OSB, so that OSB always uses the cluster IP instead of the physical IP of each node.
    Explanation:
    Whether it is a 2-node or a 4-node cluster, once the cluster software is installed we configure a cluster IP so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we have to specify, whether it is an RMAN backup or an application JDBC connection; failing over to the surviving node is the job of the cluster IP. So wherever connections are configured, specify the CLUSTER IP in all the failover-sensitive places.
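    For example, a minimal sketch assuming a hypothetical cluster logical hostname db-clust-vip and service name PRODDB (names not from the original post):
    # connect RMAN through the cluster logical hostname, not a physical node name
    rman target sys/password@//db-clust-vip:1521/PRODDB
    If node1 fails, the logical hostname (and the listener registered against it) moves to the surviving node, so new backup connections keep working.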
    Hope it helps..
    Thanks
    LaserSoft

  • ORACLE8 OPS BACKUP & RECOVERY

    Product: ORACLE SERVER
    Date written: 2004-08-16
    ORACLE8 OPS BACKUP & RECOVERY
    =============================
    SCOPE
    In Standard Edition, the Real Application Clusters feature is supported from 10g (10.1.0) onward.
    Explanation
    The database backup & recovery methods for OPS are similar to those for a single instance; that is, every backup method available for a single instance is also supported in OPS.
    1. Backup methods
    All of the following backup methods can be used. This note describes method 2), backup using OS commands.
    1) Recovery Manager (RMAN): see <Bulletin 11451>
    2) Backup using OS commands
    Noarchivelog mode: full offline backup only
    Archivelog mode: full or partial, offline or online backup
    3) export: see <Bulletin 10080> (ORACLE 7 backup and recovery methods)
    2. Considerations when establishing a backup policy
    1) If losses caused by disk crashes, user errors, and so on cannot be tolerated, ARCHIVELOG mode must be used.
    2) In most cases every instance uses automatic archiving.
    3) Any data backup operation can be performed from any instance.
    4) During media recovery, the archived log files of all threads are used.
    5) Instance recovery is performed automatically by the SMON of a surviving instance.
    3. Noarchivelog mode: full offline backup
    1) Query the following views to determine which files need to be backed up:
    V$DATAFILE or DBA_DATA_FILES
    V$LOGFILE
    V$CONTROLFILE
    2) Shut down all instances.
    3) Copy the identified files to the backup destination.
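    A minimal sketch of these steps, assuming hypothetical file locations under /u01/oradata/MYDB and a backup destination of /backup:
    -- run on any instance to list the files that must be copied
    SELECT name FROM v$datafile
    UNION ALL SELECT member FROM v$logfile
    UNION ALL SELECT name FROM v$controlfile;
    SHUTDOWN IMMEDIATE                       -- repeat on every OPS instance
    -- then, at the OS level:
    -- cp /u01/oradata/MYDB/* /backup/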
    4. Archivelog mode: partial or full online backup
    1) Before starting the backup, run ALTER SYSTEM ARCHIVE LOG CURRENT (this forces a log switch, and the corresponding archiving, of the current redo log of every thread on every instance, including threads of instances that are not currently running).
    2) Run ALTER TABLESPACE tablespace BEGIN BACKUP.
    3) Wait until the ALTER TABLESPACE command has completed successfully.
    4) Back up the datafiles belonging to the tablespace with an appropriate OS command (tar, cpio, cp, etc.).
    5) Wait until the OS-level backup has finished.
    6) Run ALTER TABLESPACE tablespace END BACKUP.
    7) Run ALTER DATABASE BACKUP CONTROLFILE TO filename or
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE
    to back up the control file.
    If the archived log files are also being backed up, run ALTER SYSTEM ARCHIVE LOG CURRENT after the END BACKUP command so that all redo generated up to the END BACKUP point is archived and secured.
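    A minimal sketch of this sequence for a single tablespace, assuming a hypothetical tablespace USERS, datafile users01.dbf, and backup directory /backup:
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    ALTER TABLESPACE users BEGIN BACKUP;
    -- at the OS level:  cp /u01/oradata/MYDB/users01.dbf /backup/
    ALTER TABLESPACE users END BACKUP;
    ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control.bkp';
    ALTER SYSTEM ARCHIVE LOG CURRENT;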
    5. Important parameters
    1) Redo log history in the control file (MAXLOGHISTORY)
    By specifying MAXLOGHISTORY in the CREATE DATABASE or CREATE CONTROLFILE command, you can make the control file keep a history of the filled redo log files in a parallel server. If the database has already been created, the control file must be re-created in order to increase or decrease the log history value.
    MAXLOGHISTORY specifies how much archive history the control file can store; the default differs per platform. If it is set to a value other than 0, the LGWR process records the following information in the control file at every log switch:
    thread number, log sequence number, low SCN, low SCN timestamp, next SCN
    (the lowest SCN of the next log)
    (This information is written to the control file when the log switch occurs, not after the redo log file has been archived.)
    When more log history has to be stored than the MAXLOGHISTORY value allows, the oldest history entries are overwritten. The log history information is used during automatic media recovery in OPS to locate the proper archived log files, based on SCN and thread number, and reconstruct the redo stream. In an environment where the database runs in exclusive mode with a single thread, the log history information is not needed.
    Log history information can be queried through V$LOG_HISTORY. Querying V$RECOVERY_LOG in Server Manager shows which archived logs are needed for media recovery.
    For multiplexed redo log files, multiple entries are not used in the log history; each entry holds information about a group of multiplexed log files, not about the individual files.
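    A trimmed sketch of where MAXLOGHISTORY is set (all file names, sizes, and limits below are hypothetical; generate the real statement with ALTER DATABASE BACKUP CONTROLFILE TO TRACE and edit that):
    CREATE CONTROLFILE REUSE DATABASE "MYDB" NORESETLOGS ARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 2
        MAXDATAFILES 100
        MAXINSTANCES 2
        MAXLOGHISTORY 800
    LOGFILE
        GROUP 1 '/u01/oradata/MYDB/redo01.log' SIZE 50M,
        GROUP 2 '/u01/oradata/MYDB/redo02.log' SIZE 50M
    DATAFILE
        '/u01/oradata/MYDB/system01.dbf';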
    2) Parameters for ARCHIVELOG mode
    To switch OPS to ARCHIVELOG mode, mount the database in exclusive mode and then change the mode.
    a. LOG_ARCHIVE_FORMAT
    Parameter   Meaning                                   Example
    %T          thread number, left-zero-padded           arch0000000001
    %t          thread number, not padded                 arch1
    %S          log sequence number, left-zero-padded     arch0000000251
    %s          log sequence number, not padded           arch251
    Of these, %T and %t are only meaningful in OPS.
    The format must be the same on every instance, and in an OPS environment it must include the thread number.
    Example: log_archive_format = %t_%s.arc
    b. LOG_ARCHIVE_START
    - Automatic archiving: if this is set to TRUE when the instance is started, the ARCH background process performs automatic archiving. For a closed thread, a running thread performs the log switch and archiving on behalf of the closed thread. This happens when a log switch is forced so that all nodes keep similar SCNs.
    - Manual archiving: if FALSE, archiving stops and waits until a command explicitly telling it to archive is issued. In OPS, each instance may use a different LOG_ARCHIVE_START value.
    Manual archiving can be performed in the following ways:
    - run the ALTER SYSTEM ARCHIVE LOG SQL command, or
    - run ALTER SYSTEM ARCHIVE LOG START to turn automatic archiving on.
    Manual archiving runs only on the node where the command was issued, and in that case the archiving is not handled by the ARCH process.
    c. LOG_ARCHIVE_DEST
    Specifies the directory in which the archived log files are created.
    Example: log_archive_dest = /arch2/arc
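    Putting it together, a minimal sketch of the per-instance init.ora settings (the values and the archive directory are examples only):
    log_archive_start  = true
    log_archive_format = %t_%s.arc
    log_archive_dest   = /arch2/arc
    After setting these, mount the database in exclusive mode and run ALTER DATABASE ARCHIVELOG; to switch the mode.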
    6. OPS Recovery
    1) Instance failure
    An instance failure occurs when an instance can no longer proceed, for any of several reasons: a software or hardware problem, a power failure, a failed background process, a shutdown abort, an OS crash, and so on.
    In a single-instance environment, an instance failure is resolved by restarting the instance and opening the database. In the intermediate step between mount and open, SMON reads the online redo log files and performs instance recovery.
    In OPS, instance recovery is performed differently when an instance fails. Because the instances on the other nodes can keep running even if one node has failed, an instance failure does not mean that the database is unavailable.
    Instance recovery is performed by the SMON process of the instance that first detects the dead instance. While recovery is being performed, the following happens:
    - A surviving instance reads the redo log files of the failed instance and applies their contents to the datafiles.
    - During this period, the surviving nodes cannot write out the contents of their buffer caches either.
    - No DBWR disk I/O can take place.
    - DML users cannot issue new lock requests.
    a. Single-node failure
    While one instance performs recovery for the failed instance, the normally running instance reads the redo log entries of the failed instance and applies the results of committed transactions to the database. Therefore no committed data is lost; transactions that were not committed on the failed instance are rolled back, and the resources they were using are released.
    b. Multiple-node failure
    If all instances of the OPS fail, instance recovery is performed automatically as soon as any one instance is opened. The instance that is opened does not have to be one of the failed instances, and recovery is performed regardless of whether the database is mounted in shared or exclusive mode. Whether Oracle runs in shared mode or in exclusive mode, the recovery procedure is the same, apart from whether a single instance performs recovery for all failed instances.
    2) Media failure
    This occurs when there is a problem with the storage media holding the files used by Oracle. In such a situation, reading and writing the data is generally impossible. When a media failure occurs, recovery must be performed just as in the single-instance case; in both cases transaction recovery must be performed using the archived log files.
    3) Node failure
    In an OPS environment, when an entire node fails, the instance and the IDLM component running on that node fail as well. In this case, before instance recovery can be done, the IDLM must reconfigure itself in order to remaster its locks.
    When a node fails, the Cluster Manager or other GMS product must report the failure and perform a reconfiguration. Only after this has been done is communication with the LMD0 processes running on the other nodes possible.
    Oracle learns that a failure has occurred either when it accesses lock information held by the failed node, or when the LMON process detects through its heartbeat that the failed node is no longer available.
    Once the IDLM has been reconfigured, instance recovery is performed. Instance recovery may temporarily suspend work on the whole database in order to avoid contention for resources while recovery is running.
    If the FREEZE_DB_FOR_FAST_INSTANCE_RECOVERY initialization parameter is set to TRUE, the whole database is temporarily frozen. When the datafiles use fine-grain locks, the default is TRUE.
    If the value is set to FALSE, only the data that needs recovery is temporarily frozen. When the datafiles use hash locks, the default is FALSE.
    4) IDLM failure
    If only the IDLM process on a node fails, for example because an associated process failed or because of a memory fault, the LMON on another node detects the problem and starts the lock reconfiguration process.
    While this is in progress, lock-related work is suspended, and some users go into a wait state trying to acquire PCM locks or other resources.
    5) Interconnect failure (GMS failure)
    If the interconnect between the nodes fails, each node assumes that the IDLM and GMS on the other node have failed. The GMS checks the state of the system by other means, such as a quorum disk or pinging the nodes. In this case, one or both of the nodes on the failed connection are shut down.
    In the Oracle 8 recovery mechanism, when a node or instance has been forcibly failed, the IDLM or the instance cannot be started up. In some cases you can write your own cluster validation code to check whether IDLM communication between the nodes is available. Using this approach you can diagnose the problem and then perform a shutdown, even though the GMS does not provide this itself.
    To write such code, all that is needed is a routine that keeps updating a single data block covered by a single PCM lock. If this program is run on the two interconnected nodes, an interconnect failure can be diagnosed.
    If several nodes make up the cluster, you can find out which node's interconnect has the problem by updating, for each interconnect, a data block covered by a different PCM lock.
    7. Parallel Recovery
    The goal of parallel recovery is to use compute and I/O parallelism to reduce the time required for crash recovery, single-instance recovery, and media recovery.
    Parallel recovery is most effective when several datafiles spread over multiple disks are recovered at the same time.
    Recovery can be parallelized in the following two ways:
    - by setting the RECOVERY_PARALLELISM parameter
    - by specifying options on the RECOVER command
    The Oracle server can read the log files sequentially in one process and hand the redo information to several recovery processes, which apply the changes recorded in the log files to the datafiles. The recovery processes are started automatically by Oracle, so there is no need to use more than one session to perform recovery.
    The value of RECOVERY_PARALLELISM cannot exceed the value of the PARALLEL_MAX_SERVERS parameter.
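    A minimal sketch of the two approaches, with hypothetical degree values:
    # init.ora: allow up to 4 recovery processes by default (must not exceed parallel_max_servers)
    recovery_parallelism = 4
    Or specify the degree directly on the command:
    RECOVER DATABASE PARALLEL (DEGREE 4);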
    Reference document
    Oracle8 OPS manual

    Configuration files of the Oracle Application Server can be backed up with the "Backup and Recovery Tool".
    Please refer to the documentation:
    http://download.oracle.com/docs/cd/B32110_01/core.1013/b32196/part5.htm#i436649
    Also, the "backup to tape" feature is not yet supported by this tool.
    thanks,
    Murugesh

  • Error during Backup/Recovery settings configuration

    Hi all,
    Using the Oracle Enterprise Manager 10g Application Server Control,
    I am configuring backup/recovery settings for my infrastructure tier in a 10g Release 2 application server environment. However, after specifying the various locations for log files, configuration files, metadata repository database backup, etc., I get the following error after clicking the 'OK' button:
    An Internal Error has occurred while performing the operation.
    Performing configuration ...
    oracle_sid: cinfra
    The command /oracle/appn/oracle/infra/perl/bin/perl /oracle/appn/oracle/infra/backup_restore/bkp_restore.pl -m configure -h /oracle/appn/oracle/infra -f > /infrahome/backups/log_files/2009-12-06_17-13-38_output.log failed with return code 255
    I am at a loss as to why I am having this error and how to fix it.
    I examined the log file and saw an error snippet saying
    *"Unable to get dbid from the database. Please ensure that ORACLE_HOME and ORACLE_SID are set to the same values used when starting up the database and that the database is open.... Failure: Configure failed"*
    I also examined another file generated at the same time as the output log file, and in there I saw the following:
    *"ld.so.1: oracle: fatal: relocation error: file /oracle/appn/oracle/product/10.2.0/cm2t/lib/libjox10.so: symbol kgestKguard_: referenced symbol not found*
    *ERROR:*
    *ORA-12547: TNS: lost contact*
    *SP2-0751: Unable to connect to oracle. exiting SQL*Plus"*
    Apart from the infrastructure database, I have another database with SID 'cm2t' on the same machine, and I suspect that OEM Application Server Control is referencing the wrong files while it is doing its work. How do I tell it to look in the right place, if that is indeed the cause of my problem?
    In my config.inp file all the right parameter values are in place.
    Thanks.
    Steve.

    Hi AMN,
    Thanks for your reply. I managed to solve it by taking the following steps.
    I use the runstartupconsole.sh script to start up my infrastructure tier. At startup it was using an LD_LIBRARY_PATH pointing to the one used by another existing Oracle database instance on the same machine.
    I set ORACLE_HOME, ORACLE_SID, LD_LIBRARY_PATH, etc. correctly before running the runstartupconsole.sh script, and after that I was able to set up my backup/recovery settings properly.
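    A minimal sketch of that environment setup, reusing the infrastructure home and SID quoted earlier in the thread (adjust the paths to your own installation; the lib32 entry is an assumption):
    export ORACLE_HOME=/oracle/appn/oracle/infra
    export ORACLE_SID=cinfra
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32
    export PATH=$ORACLE_HOME/bin:$PATH
    ./runstartupconsole.sh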
    I hope this will be useful to others.
    Steve.

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all of you out there.
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC Server resources.
    We tried the attached workaround from Oracle. This workaround works well the first time, but it no longer works once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem as above and was able to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if the following MOS note helps:
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • Is oracle 9.2.0.8 compatible with Sun Cluster 3.3 5/11 and 3.3 3/13?

    Is oracle 9.2.0.8 compatible with Sun Cluster 3.3 5/11 and 3.3 3/13?
    Where can I check compatibility matrix?

    matthew_morris wrote:
    This forum is about Oracle professional certifications (i.e. "Oracle Database 12c Administrator Certified Professional"), not about certifying product compatibility.
    I concur with Matthew.  The release notes for Sun Cluster and Oracle on Solaris might tell you.  Oracle 9.2.0.8 is out of support on Solaris, and I recall needing a number of patches to get it to a fit state ... and that is without considering Sun Cluster.  Extended support for 9.2.0.8 ended about 4 years ago ... this is not a combination I would currently be touching with a bargepole!  You are best to search on MOS.

  • Beta Refresh Release Now Available!  Sun Cluster 3.2 Beta Program

    The Sun Cluster 3.2 Release team is pleased to announce a Beta Refresh release. This release is based on our latest and greatest build of Sun Cluster 3.2, build 70, which is close to the final Revenue Release build of the product.
    To apply for the Sun Cluster 3.2 Beta program, please visit:
    https://feedbackprograms.sun.com/callout/default.html?callid=%7B11B4E37C-D608-433B-AF69-07F6CD714AA1%7D
    or contact Eric Redmond <[email protected]>.
    New Features in Sun Cluster 3.2
    Ease of use
    * New Sun Cluster Object Oriented Command Set
    * Oracle RAC 10g improved integration and administration
    * Agent configuration wizards
    * Resources monitoring suspend
    * Flexible private interconnect IP address scheme
    Availability
    * Extended flexibility for fencing protocol
    * Disk path failure handling
    * Quorum Server
    * Cluster support for SMF services
    Flexibility
    * Solaris Container expanded support
    * HA ZFS
    * HDS TrueCopy campus cluster
    * Veritas Flashsnap Fast Mirror Resynchronization 4.1 and 5.0 option support
    * Multi-terabyte disk and EFI label support
    * Veritas Volume Replicator 5.0 support
    * Veritas Volume Manager 4.1 support on x86 platform
    * Veritas Storage Foundation 5.0 File System and Volume Manager
    OAMP
    * Live upgrade
    * Dual partition software swap (aka quantum leap)
    * Optional GUI installation
    * SNMP event MIB
    * Command logging
    * Workload system resource monitoring
    Note: Veritas 5.0 features are not supported with SC 3.2 Beta.
    Sun Cluster 3.2 beta supports the following Data Services
    * Apache (shipped with the Solaris OS)
    * DNS
    * NFS V3
    * Java Enterprise System 2005Q4: Application Server, Web Server, Message Queue, HADB

    Without speculating on the release date of Sun Cluster 3.x or even its feature list, I would like to understand what risk Sun would be taking if Sun Cluster supported ZFS as a failover filesystem. Once ZFS is part of Solaris 10, I am sure customers will want to use it in clustered environments.
    BTW: this means that even Veritas will have to do something about ZFS!!!
    If VCS is a much better option, it would be interesting to understand what features are missing from Sun Cluster to make it really competitive.
    Thanks
    Hartmut

  • Information about Sun Cluster 3.1 5Q4 and Storage Foundation 4.1

    Hi,
    I have two Sun Fire V440s running Solaris 9, latest release 9/05, with the latest cluster patches, QLogic fibre HBA cards, and seven disks shared on an EMC Clariion CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node, I see the disks in a different order, and Veritas SF 4.1 is also picking up the disks in a different order.
    1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
    2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
    I'm very confused by Veritas Storage Foundation.
    Thanks!
    J-F Aubin

    This combination does not work today, but it will be available later.
    Since Sun and Veritas are two separate companies, it takes more
    time than expected to synchronize releases. Products supported by
    Sun for Sun Cluster installation undergo extensive testing, which also
    takes time.
    -- richard

  • Disaster Recovery Plan for BW Production

    Hello Experts,
    We at IP are trying to evaluate and institute a sound and
    cost-effective disaster recovery plan for the BW production system.
    In this regard, we would like to hear from you (or from your infrastructure folks) about your experience(s) at
    client sites with such scenarios for BW.
    We are specifically interested in knowing answers to:
    1. Do you currently have a disaster recovery plan for BW production?
    2. How large is your BW production database?
    3. Is the alternate server replicated in real time, via tapes, or by some other means?
    4. What are your target service levels?
    5. Location specifics or any other pertinent info.
    Any input is highly appreciated.
    Thanks in advance,
    Kumar Gudiseva.
    Edited by: Kumar Gudiseva on Jan 9, 2008 3:35 PM

    Hi,
    The disaster recovery strategy should include a backup and restore plan that determines which data should be backed up and the procedures that will be used to recover it.
    Please check these links:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/001ef297-9bbf-2910-bbaa-babedc1b01ca
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6efca1a6-0301-0010-6f9d-82cf87340e45
    /people/nicholas.holshouser/blog/2006/09/07/the-square-root-of-an-it-disaster-is-recovery
    Hope it helps.
    Regards,
    Mona

  • SUN  CLUSTER RESOURCE FOR LEGATO CLIENT (LGTO.CLNT) in Oracle database

    hi everyone
    I am trying to create an LGTO.clnt resource in the oracle-rg resource group in Sun Cluster 3.2 with the following command:
    clresource create -g resource_group_name -t LGTO.clnt \
    -x clientname=virtual_hostname \
    -x owned_paths=pathname_1,pathname_2[,...] resource_name
    I just need to know what the value of the Owned_Paths variable in the above command should be,
    or which path it is referring to ($ORACLE_HOME, the global devices path, etc.)?

    Hello,
    The Owned_Paths parameter lists the paths (or mount points) the Legato client will be able to back up from.
    To configure a Legato client in the NetWorker console (and have it managed as a cluster client), you need to declare in Owned_Paths the paths you want to save.
    The saveset paths can be directories under the Owned_Paths.
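    For example, a minimal sketch with hypothetical values (logical hostname ora-lh, database file systems /oradata and /orabackup):
    clresource create -g oracle-rg -t LGTO.clnt \
    -x clientname=ora-lh \
    -x owned_paths=/oradata,/orabackup \
    lgto-clnt-rs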
    Regards
    Pablo Villanueva.

  • Multiple Oracle databases in Sun cluster 3.2 (without RAC setup)

    There are two Sun SPARC (Sun Fire T2000) servers with the Solaris 10 (05/09) OS and Sun Cluster 3.2 software. We need two different Oracle databases and instances (Oracle 10g R2 without RAC) for an application. The first database is for Production; it needs to be configured on the first node and on a shared storage disk, and it needs high availability. This database should run from the second node if the first node fails. The second database is for Quality/Test, and it is preferred that it run on the second node for better load distribution. This DB doesn't require any failover.
    The shared storage is a Sun SE 3510 FC, and multiple LUNs can be created for the different databases.
    Is it possible to configure two different resource groups (one for Quality and the other for Production) and make the first node primary for the Production RG and the second node primary for the Quality RG, thus distributing the load across the two servers? If possible, what special configuration is required on the Solaris and cluster side?
    I would appreciate it if you could give some configuration procedures/documents for this multi-master cluster setup.

    You can configure two resource groups, such as:
    # clrg create -n node-a,node-b prod-rg
    # clrg create -n node-b qa-rg
    and you configure the required resources (disk groups / file systems, logical host, Oracle listener, Oracle server) as described within
    http://docs.sun.com/app/docs/doc/819-2980?l=en&a=expand
    Note that this is not really called a "multi-master" configuration - that term has a specific meaning for a resource group (see http://docs.sun.com/app/docs/doc/820-4682/babefcja?l=en&a=view for details).
    With Solaris Cluster, all nodes that are part of a cluster are considered active and can host resource groups. You can have any number of resource groups running, where one subset runs on one node and another subset on other nodes. The nodelist property of the resource group defines where it can run; the first node in the list is the preferred primary.
    You can even define resource group dependencies or affinities between the resource groups. For example, you could define a negative affinity between qa-rg and prod-rg, such that if prod-rg needs to fail over to node-b (because, e.g., node-a died), it would offline qa-rg. Details on these possibilities are described at http://docs.sun.com/app/docs/doc/820-4682/ch14_resources_admin-35?l=en&a=view.
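    A minimal sketch of such a negative affinity, reusing the group names above (the "--" prefix requests a strong negative affinity; check clrg(1CL) for the exact semantics in your release):
    # clrg set -p RG_affinities=--prod-rg qa-rg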
    Regards
    Thorsten

  • TimesTen database in Sun Cluster environment

    Hi,
    Currently we have our application together with the TimesTen database installed at the customer on two different nodes (running on Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
    We are now looking into a hot-standby / high availability solution using Sun Cluster software. As understood from the documentation, applications can be 'plugged-in' to the Sun Cluster using Agents to monitor the application. Sun Cluster Agents should be already available for certain applications such as:
    # MySQL
    # Oracle 9i, 10g (HA and RAC)
    # Oracle 9iAS Application Server
    # PostgreSQL
    (See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
    Our question is whether Sun Cluster Agents are already (freely) available for TimesTen. If so, where can we find them? If not, should we write a specific agent for TimesTen ourselves or handle database problems from the application?
    Does someone have any experience using TimesTen in a Sun Cluster environment?
    Thanks in advance!

    Yes, we use 2-way replication, but we don't use cache connect. The replication is created like this on both servers:
    create replication MYDB.REPSCHEME
    element SERVER01_DS datastore
    master MYDB on "SERVER01_REP"
    transmit nondurable
    subscriber MYDB on "SERVER02_REP"
    element SERVER02_DS datastore
    master MYDB on "SERVER02_REP"
    transmit nondurable
    subscriber MYDB on "SERVER01_REP"
    store MYDB on "SERVER01_REP"
    port 16004
    failthreshold 500
    store MYDB on "SERVER02_REP"
    port 16004
    failthreshold 500
    The application runs on SERVER01 and is on standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
    In addition to this, we want to fail over if the database on SERVER01 is in an invalid state. What should the clustering agent monitor to detect an invalid state in TimesTen?
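    Not an official TimesTen agent, just a sketch of the kind of probe a cluster agent could run, assuming a hypothetical DSN named MYDB: a trivial query against SYS.MONITOR that fails (or hangs past the agent's probe timeout) is treated as an invalid datastore.
    #!/bin/sh
    # health probe sketch: exit 0 if the datastore answers, non-zero otherwise
    ttIsql -e "select count(*) from sys.monitor; quit;" MYDB > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        exit 100    # signal the agent to restart or fail over
    fi
    exit 0
    On top of that, ttStatus output and the replication state reported by ttRepAdmin could be checked to verify that the replication agents are still running.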

  • RAW disks for Oracle 10R2 RAC NO SUN CLUSTER

    Yes, you read it correctly ... no Sun Cluster. Then why am I on the forum, right? Well, we have one Sun Cluster and another setup that is RAC-only, for testing. Between Oracle and Sun, neither accepts any fault for problems with their perfectly honed products. Currently, I have multipathed fibre HBAs to a StorEdge 3510, and I've tried to get Oracle to use a raw LUN for the OCR and voting disks. It doesn't see the disk. I've made sure they are stamped for oracle:dba, and tried oracle:oinstall. When presenting /dev/rdsk/c7t<long number>d0s6 for the OCR, I get a "can not find disk path." Does Oracle raw mean SVM raw? Should I create metadisks?

    "Between Oracle and Sun, neither accept any fault for problems with their perfectly honed products"...more specific:
    Not that the word "fault" is characterization of any liability, but a technical characterization of acting like a responsible stakeholder when you sell your product to a corporation. I've been working on the same project for a year, as an engineer. Not withstanding a huge expanse of management issues over the project, when technical gray areas have been reached, whereas our team has tried to get information to solve the issue. The area has become a big bouncing hot potato. Specifically, when Oracle has a problem reading a storage device, according to Oracle, that is a Sun issue. According to Sun, they didn't certify the software on that piece of equipment, so go talk to Oracle. In the sun cluster arena, if starting the database creates a node eviction from the cluster, good luck getting any specific team to say, that's our problem. Sun will say that Oracle writes crappy cluster verify scripts, and Oracle will say that Sun has not properly certified the device for use with their product. Man, I've seen it. The first time I said O.K. how do we avoid this in the future, the second time I said how did I let this happen again, and after more issues, money spent, hours lost, and customers, pissed --do the math.   I've even went as far as say, find me a plug and play production model for this specific environment, but good luck getting two companies to sign the specs for it...neither wants to stamp their name on the product due to the liability.  Yes your right, I should beat the account team, but as an engineer, man that's not my area, and I have other problems that I was hired to deal with.  I could go on.  What really is a slap in face is no one wants to work on these projects, if given the choice with doing a Windows deployment, because they can pop out mind bending amounts of builds why we plop along figuring out why clusterware doesn't like slice 6 of a /device/scsi_vhci/ .  Try finding good documentation on that.  ~You can deploy faster, but you can't pay more!                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   

  • Sun Cluster Core Conflict - on SUN Java install

    Hi
    We had a prototype cluster that we were playing with over two nodes.
    We decided to uninstall the cluster by putting each node into single-user mode and running scinstall -r.
    Afterwards we found that the Java Availability Suite was a little messed up - maybe because the kernel/registry had not been updated - it thought the cluster and agent software was uninstalled and would not let us re-install. All the executables in /etc/cluster/bin had been removed from the nodes.
    So, on both nodes we ran the uninstall program from /var/sadm/prod/... and then selected the cluster and agents to uninstall.
    On the first node, this completely removed the Sun Cluster components and then allowed us to re-install the cluster software successfully.
    On the second node, for some reason, it left behind the component "Sun Cluster Core" and will not allow us to remove it with the uninstall.
    When we try to re-install we get the following:
    "Conflict - an incomplete version of Sun Cluster Core has been detected"
    It then points us to the Sun Cluster upgrade guide on sun.com.
    My question is - how do we 'clean up' this node and remove the Sun Cluster Core component so we can re-install the Sun Cluster software from scratch?
    I don't quite understand how this has been left behind....
    thanks in advance
    S1black.

    You can use prodreg directly to clean up when your de-install has gone bad.
    Use:
    # prodreg browse
    to list the products. You may need to recurse down into the individual items. Then use:
    # prodreg unregister ...
    to unregister and pkgrm to remove the packages manually.
    That has worked for me in the past. Not sure if it is the 'official' way though!
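    For example, a rough sketch of the sequence (the package names are illustrative only; use whatever prodreg browse and pkginfo actually report on the node):
    # prodreg browse                    # note the entry for "Sun Cluster Core"
    # prodreg unregister ...            # unregister that entry, recursing into its items if needed
    # pkginfo | grep -i cluster         # list the leftover SUNWsc* packages
    # pkgrm SUNWscr SUNWscu             # remove the packages actually listed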
    Regards,
    Tim
    ---

  • BPC Transport vs backup/recovery

    BPC Gurus:
    We implemented BPC just a few months ago and are new to it. I have a question for all the experts on appset code migration.
    We need to know the best practices for code migration from dev to QA and then to production.
    I see two options,
    transports and backup/recovery, but I need help in understanding which is the proper and best strategy.
    Thanks
    Ravi

    Hi Ravi,
    Transporting data manager files on the file server is not supported in NW 7.0, to my knowledge.
    Using transaction code SM30, check UJT_TRANS_CHG: if a subobject is not set to 'D', changes for that subobject will not be transported.
    In my experience, the QA build is done once the build reaches a stable state in Dev, with no further changes to the data model; from there on, any changes to front-end input schedules or report templates are moved manually between the file servers, since the application names remain the same. Hope it helps.
    Regards
    Vinoo
