Multiple ARCHIVE Destinations and Remote ARCHIVE (ORACLE 8I)

Product: ORACLE SERVER
Date written: 2004-08-16
Multiple ARCHIVE Destinations and Remote ARCHIVE (ORACLE 8I)
========================================
SCOPE
Standard Edition from 8i through 10g does not support multiple archive destinations; only a single LOG_ARCHIVE_DEST is supported.
Explanation
As a new feature of Oracle8i, archives of the redo log can be created at multiple
destinations, including remote ones.
Duplexed (mirrored) archive destinations were already introduced in Oracle 8.0
through the log_archive_dest and log_archive_duplex_dest parameters.
In Oracle 8i this feature has been improved further. Archives can now be created at a
remote destination, which, combined with a standby database, makes automatic recovery possible.
This note shows how multiple archiving destinations can be configured in Oracle8i.
1. New parameters and views
Oracle8i allows the ARCH process to archive the online redo log to multiple
destinations. Up to five destinations can be specified,
using the following parameters.
LOG_ARCHIVE_DEST_n (n is an integer from 1 to 5)
A local destination is specified with the LOCATION keyword followed by the archive
directory path. A remote destination is specified with the SERVICE keyword
followed by the TNS alias of the standby database, as shown below.
Examples:
LOG_ARCHIVE_DEST_1="LOCATION=/oracle/arch_1 MANDATORY"
LOG_ARCHIVE_DEST_2="LOCATION=/oracle/arch_2 OPTIONAL"
LOG_ARCHIVE_DEST_3="SERVICE=uksn117_V813b REOPEN=60"
LOG_ARCHIVE_DEST_STATE_n (n is an integer from 1 to 5)
Describes the state of each archive destination. This parameter accepts two
values: ENABLE and DEFER. If a destination is enabled, Oracle8i can create
archives at that destination; if it is deferred, the destination is not used
for archiving.
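For example, a destination can be taken out of use and brought back without restarting the instance. A minimal sketch (destination number 2 is only an example):
# in init.ora
LOG_ARCHIVE_DEST_STATE_2 = DEFER
-- or dynamically, while the instance is running
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;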
LOG_ARCHIVE_MIN_SUCCEED_DEST=n (n is an integer from 1 to 5)
This parameter existed in Oracle 8.0 and is still available in Oracle 8i.
It specifies the minimum number of destinations to which a redo log group must
be successfully archived before the corresponding online redo log file can be
reused.
The minimum value of LOG_ARCHIVE_MIN_SUCCEED_DEST is 1.
An archive destination can be declared MANDATORY or OPTIONAL. Archiving to a
mandatory destination must succeed.
If archiving to a destination fails, archiving to that destination is retried
after the number of seconds set by the REOPEN attribute has elapsed. However,
this retry does not go back and archive redo log files that were missed in the
meantime.
If archiving to any mandatory destination fails, the online redo log is not
reused, regardless of the LOG_ARCHIVE_MIN_SUCCEED_DEST setting.
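As a sketch of how these attributes work together (the directory paths below are only illustrative), the following init.ora fragment requires the local mandatory destination to succeed, tolerates failure of the optional one, and retries a failed destination after five minutes:
LOG_ARCHIVE_DEST_1 = "LOCATION=/oracle/arch_1 MANDATORY REOPEN=300"
LOG_ARCHIVE_DEST_2 = "LOCATION=/oracle/arch_2 OPTIONAL"
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1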
Oracle8i also introduces a new view, V$ARCHIVE_DEST.
V$ARCHIVE_DEST shows the current status of the archive destinations for the instance.
2. Case study 1: setting up multiple archive destinations
1) Modify init.ora as follows.
LOG_ARCHIVE_DEST_1="location=/u01/arch813_1 mandatory"
LOG_ARCHIVE_DEST_2="location=/u02/arch813_2 optional reopen=60"
LOG_ARCHIVE_FORMAT=log_%s.arc
LOG_ARCHIVE_START = true
Two archive destinations are defined; one of them is mandatory.
LOG_ARCHIVE_MIN_SUCCEED_DEST is not defined.
LOG_ARCHIVE_DEST_STATE_1..5 are not defined.
2) Start up the database.
3) Archive the redo log.
SVRMGR> ALTER SYSTEM SWITCH LOGFILE;
4) Check V$ARCHIVE_DEST.
SELECT DEST_ID,STATUS FROM V$ARCHIVE_DEST;
DEST_ID STATUS
1 VALID
2 VALID
3 INACTIVE
4 INACTIVE
5 INACTIVE
5) Make LOG_ARCHIVE_DEST_2 invalid (for example, by changing the directory permissions).
6) Archive the redo log.
SVRMGR> ALTER SYSTEM SWITCH LOGFILE;
7) Check V$ARCHIVE_DEST.
SELECT DEST_ID,STATUS FROM V$ARCHIVE_DEST;
DEST_ID STATUS
1 VALID
2 ERROR
3 INACTIVE
4 INACTIVE
5 INACTIVE
DEST_ID 2 shows an archival error. More detail can be obtained from
V$ARCHIVE_DEST:
SELECT FAIL_DATE,FAIL_SEQUENCE,ERROR
FROM V$ARCHIVE_DEST
WHERE DEST_ID=2;
FAIL_DATE FAIL_SE ERROR
1998-NOV-05 109 ORA-19504: failed to create file ""
8) Make LOG_ARCHIVE_DEST_2 valid again.
9) Archive the redo log.
SVRMGR> ALTER SYSTEM SWITCH LOGFILE;
10) Check V$ARCHIVE_DEST.
SELECT DEST_ID,STATUS FROM V$ARCHIVE_DEST;
DEST_ID STATUS
1 VALID
2 VALID
3 INACTIVE
4 INACTIVE
5 INACTIVE
3. Case study 2: setting up a remote archive destination
1) Modify init.ora as follows.
LOG_ARCHIVE_DEST_1="location=/u01/arch813_1 mandatory"
LOG_ARCHIVE_DEST_2="service=uksn117_jb reopen=60"
LOG_ARCHIVE_FORMAT=log_%s.arc
LOG_ARCHIVE_START = true
Two archive destinations are defined. The local destination is mandatory and
the remote destination is optional (the default).
2) Start up the database.
For archiving to work, the primary database must be open and the standby
database must be mounted.
3) Archive the redo log.
SVRMGR> ALTER SYSTEM SWITCH LOGFILE;
The local instance starts a remote (non-local) server process on the standby
instance. This remote file server (RFS) process creates the remote archive log.
4) Check V$ARCHIVE_DEST.
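On the standby side, the logs shipped by RFS can then be applied. A minimal sketch of what that looks like, assuming a standby control file is already in place (these are the standard Oracle 8i standby commands; the steps are not taken from the note above):
SVRMGR> STARTUP NOMOUNT
SVRMGR> ALTER DATABASE MOUNT STANDBY DATABASE;
SVRMGR> RECOVER MANAGED STANDBY DATABASE;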
4. Restrictions
Remote archiving only works with a standby database.
5. Other notes
1) Archiving is performed by the ARC0 process and its slave archiver processes,
or by the shadow process when a user switches the log manually.
2) The settings
log_archive_dest = /archive1/arch
log_archive_duplex_dest = /archive2/arch
and the settings below are equivalent as a primary location plus a backup
destination:
log_archive_dest_1 = /archive1/arch
log_archive_dest_2 = /archive2/arch
The difference is that log_archive_dest_n can also point to a remote destination.
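As a sketch, the same primary/backup pair written out with the keywords the new parameter family expects (the LOCATION keyword and the MANDATORY/OPTIONAL attributes are how this would normally be spelled out, and are not part of the note above):
log_archive_dest_1 = "LOCATION=/archive1/arch MANDATORY"
log_archive_dest_2 = "LOCATION=/archive2/arch OPTIONAL"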
Reference Documents
<Note:73163.1>

You know, that is exactly the problem with running legacy versions in production:
it is very hard to reproduce your situation.
I can tell you, though, that your problem most likely comes from using a feature or syntax that is not valid in your old Oracle version. On 11g, the following command is valid:
rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Mon Nov 23 15:54:15 2009
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1231477610)
RMAN> delete noprompt archivelog until time 'sysdate -5' backed up 1 times to device type disk;
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=23 device type=DISK
specification does not match any archived log in the repository
Kind regards
Uwe
http://uhesse.wordpress.com

Similar Messages

  • Need to archive Oracle data into MS Access database

    Gurus,
    I've been tasked with archiving several large tables (10 million records) into an MS Access database. I'm looking through MS Access help guide which states that I can use the ODBC driver to create table, column definitions, and import data.
    Does anyone have any good resources on creating an ODBC driver for MS Access. Also, any reference material would be greatly appreciated.
    Thanks
    Scott

    sreese wrote:
    Gurus,
    I've been tasked with archiving several large tables (10 million records) into an MS Access database.
    On the face of it, that is an astoundingly bad idea. Using the worst database product (I can't even really refer to it as an RDBMS) on the planet to archive data that is already in the best RDBMS on the planet?
    What does the person who came up with this idea expect to achieve?
    I'm looking through MS Access help guide which states that I can use the ODBC driver to create table, column definitions, and import data.
    ODBC drivers don't create anything. They provide a common link between ODBC enabled apps and the native db interface.
    Does anyone have any good resources on creating an ODBC driver for MS Access.
    You don't create the driver. It is provided by either MS or Oracle (they both have them). You just configure a connection definition (aka "Data Source Name" or DSN) that utilizes it.
    Also, any reference material would be greatly appreciated.
    Thanks
    Scott

  • Archiving Oracle E-Business Suite database

    Hi all
    I need to archive my E-Business Suite Oracle database. There are several products on the market.
    However I will appreciate if anybody can share his personal experience on the products.
    Regards

    I need to archive my E-Business Suite Oracle database. There are several products on the market.
    If you have a list of software, I would suggest you contact the software vendors to get a comparison between the products.
    However I will appreciate if anybody can share his personal experience on the products.
    We use [Veritas NetBackup|http://www.symantec.com/business/netbackup], which works perfectly for cold/hot/system backup.

  • Auditing and Archiving Oracle Messenger Conference

    Hello,
    I have found that conferences (with three or more people) aren't archived. Is there any way to archive the text of a conference on the server side?
    Thanks,
    Heitor Andre Kirsten

    It is possible to archive the conversations at the backend. Check out http://download-east.oracle.com/docs/cd/B25553_01/rtc.1012/b25460/archives.htm#CJABHCJB
    Because there are restrictions on who can view the archives you may want to create a report on the backend table that's involved. Check out RTC_IM.RTC_IM_MESSAGES_ARCHIVE
    I just had a look at my notes on this (this was some time ago) and I see I made a remark that as soon as you were into group chats, aka Text Chats, the results were not saved, even using the backend archiving mechanism.
    Ingo
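    A minimal sketch of a report query against the backend table mentioned above (no column names are assumed here; restrict or join as needed for the actual report):
    SELECT * FROM RTC_IM.RTC_IM_MESSAGES_ARCHIVE;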

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBA's !!
    I am working on a recovery scenario where I have taken one full hot backup of my Portal database (EPR) and restored it on a new test server. I also restored the archive logs from the last full hot backup for the next 6 days, and restored the latest (binary) control file to its original location. Then I started the recovery scenario as follows....
    1) Installed Oracle 10.2.0.2 compatible with restored version of oracle.
    2) Configured tnsnames.ora, listener.ora, sqlnet.ora with hostname of Test server.
    3) Restored all Hot backup files from Tape to Test Server.
    4) Restored all archive logs from tape to Test server.
    5) Restored Latest Binary Control file from Tape to Test Server.
    6) Now, Started recovery using following command from SQL prompt.
    SQL> recover database until cancel using backup controlfile;
    7) Open database after Recovery Completion using RESETLOGS option.
    In the above scenario I completed the steps up to 5) successfully. But when I execute step 6), the recovery completes with the warning: Recovery completed but OPEN RESETLOGS may throw the error "system file needs more recovery to be consistent". Please find the following snapshot ....
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> SQL> SQL> SQL> SQL> SQL> SQL>
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know what could be the reason behind the recovery failure!
    Note: I tried to open the database using only the last full hot backup, without applying any archives. The database then opens successfully, so my database installation and configuration are OK!
    Please let me know why my incomplete recovery using archive logs fails.
    Atul Patil.

    Oh, you made up a new thread, so here again:
    there is nothing wrong.
    You restored your backup, archives etc.,
    you started your recovery, and Oracle applied all archives, but the archive
    '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    does not exist because it represents your current online redo log file, which is not present.
    The recovery process cancels by itself.
    the solution is:
    restart your recovery process with:
    recover database until cancel using backup controlfile
    and when oracle suggests you '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    type cancel!
    now you should be able to open your database with open resetlogs.
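    A minimal sketch of the suggested sequence (the file name is the one from this thread; CANCEL is typed at the prompt when Oracle suggests the missing archive):
    SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
    ORA-00279: change ... needed for thread 1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    Media recovery cancelled.
    SQL> ALTER DATABASE OPEN RESETLOGS;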

  • Flash Archive HTTP retrieval method

    Hello, could anybody explain to me why there is a size restriction on the HTTP flash method for flar archives? The HTTP method can be used only for archives < 2 GB. I would like to know why, and how to split an archive > 2 GB. Thanks a lot for every reply.

    What does the
    "layered flash" mean, Andrew? cpio?
    It means standard flash archives, but split into parts.
    let me explain
    for example:
    1st flash archive - solaris os
    2nd archive - some applications
    3rd archive - oracle db
    when you make an archive, you include only the directories and files you need, so in this case we have more flexibility

  • Automatic archival is turned off...

    Hi every one,
    I am running an Oracle 8i database, which is in archive log mode.
    On Friday I got an error message displayed in the command prompt with the following text:
    Warning - The following error occurred during ORACLE redo log archival:
    ORACLE Instance mukrec - Archival Error
         Press <ENTER> to acknowledge message
    ORA-16038: log 8 sequence# 277 cannot be archived
    ORA-19502: write error on file "", blockno (blocksize=)
    ORA-00312: online log 8 thread 1: 'E:\ORACLE\ORADATA\MUKREC\LOGS\REDO08.LOG'
         Press <ENTER> to acknowledge message.
    This error can be seen clearly using the following URL:
    http://us.f13.yahoofs.com/bc/47aff3ab_a145/bc/My+Documents/Oracle+Error.bmp?bfqe_rHB6HN7NUCc
    Today when I came in this morning I realized that automatic archival of the database had been turned off, and that the database had stopped performing archival on Friday.
    What can I do to get out of this situation? This is a production database and I don't want to shut it down because many users are connected.

    Hi every one,
    I am running an Oracle 8i database, which is in archive log mode.
    <snip>
    Today when I came in this morning I realized that automatic archival of the database had been turned off, and that the database had stopped performing archival on Friday.
    What can I do to get out of this situation? This is a production database and I don't want to shut it down because many users are connected.
    Try to do manual archiving like:
    ALTER SYSTEM ARCHIVE LOG ALL;
    Then you still have to set log_archive_start to TRUE in your spfile and restart when you have a chance. It is just a pain to keep doing this through the day, so you may want to schedule it until you can restart. Hopefully the missing parameter is the only reason the redo logs cannot be written, and not any file permissions.
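    A minimal sketch of the suggested workaround (on 8i the parameter would normally be set in init.ora rather than an spfile; assuming the archive destination itself is otherwise healthy):
    SQL> ALTER SYSTEM ARCHIVE LOG ALL;
    # in init.ora, so automatic archival starts again at the next instance restart:
    log_archive_start = true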

  • Domain configuration with Weblogic 12c from ZIP archive

    Hi,
    for development purposes I have set up an Ubuntu Linux system and installed Weblogic 12.1.3 from the ZIP archive, following the instructions included with the archive (Oracle Fusion Middleware Software Downloads).
    But coming to the point where I want to create the domain, the documentation gets vague and I don't know exactly what to do, or why the domain is not created and no further scripts are generated.
    Documentation states:
    It is recommended that you create domains outside the MW_HOME.
       Linux
        $ mkdir /home/myhome/mydomain
        $ cd /home/myhome/mydomain
        $ $JAVA_HOME/bin/java $JAVA_OPTIONS -Xmx1024m -XX:MaxPermSize=256m weblogic.Server
    So I did the following (normal user is named oracle):
    $ mkdir /home/oracle/chkdom
    $ cd /home/oracle/chkdom
    $ sudo su
    $ source /etc/profile
    $ $JAVA_HOME/bin/java -Xmx1024m -XX:MaxPermSize=256m weblogic.Server
    Weblogic Server starts and the directory structure is created. I am prompted for user credentials to start Weblogic Server, but nothing more than the following output appears:
    No config.xml was found.
    Would you like the server to create a default configuration and boot? (y/n): y
    <06.11.2014 22:54 Uhr MEZ> <Info> <Management> <BEA-140013> </home/oracle/chkdom/config not found>
    <06.11.2014 22:54 Uhr MEZ> <Info> <Management> <BEA-141254> <Generating new domain directory in /home/oracle/chkdom.>
    What's going wrong?

    Okay, forget my posting.
    I had to wait more than half an hour, and the domain was created.

  • Class oracle/jpub/runtime/dbws/DbwsProxy does not exists on 10g Rel2

    I am trying to use UTL_DBWS with the sample from http://www.freelists.org/archives/oracle-l/03-2005/msg00670.html on 10g Release 2, but I get the error: class oracle/jpub/runtime/dbws/DbwsProxy does not exist. What am I doing incorrectly?
    In any case, using web services in the Oracle RDBMS is very hard... Maybe there is a simple example for dummies?

    Not really for dummies, but it may be a good way to get started:
    Oracle Database Programming Using Java and Web Services, by Kuassi Mensah.
    You also have the following resources on OTN: Database Web Services.
    -- Eric

  • Oracle apps cloninig issue

    Hi
    I will do single node cloning in Oracle Applications 11.5.10.2 without applying the Rapid Clone patch. Will it complete successfully or not? Please give me the solution.
    Regards
    D

    Hi
    I have included the adadmin log file below:
    Archiving oracle/apps/mst/planoption/Shuttle$ButtonLayout.class
    Archiving oracle/apps/mst/planoption/Shuttle$ShuttleHandler.class
    Archiving oracle/apps/mst/planoption/Shuttle.class
    Archiving oracle/apps/mst/planoption/ShuttleLayout.class
    Archiving oracle/apps/mst/planoption/TableDataSource.class
    Archiving oracle/apps/mst/setup/ColorChoiceWrapper.class
    Archiving oracle/apps/mst/setup/ExceptionShuttleWrapper.class
    Archiving oracle/apps/mst/setup/ExceptionSpreadTableWrapper$ExceptionItemAdapter.class
    Archiving oracle/apps/mst/setup/ExceptionSpreadTableWrapper$ExceptionMouseAdapter.class
    Archiving oracle/apps/mst/setup/ExceptionSpreadTableWrapper$ExceptionTableAdapter.class
    Archiving oracle/apps/mst/setup/ExceptionSpreadTableWrapper$ExceptionTableFocusAdapter.class
    Archiving oracle/apps/mst/setup/ExceptionSpreadTableWrapper.class
    Archiving oracle/apps/mst/setup/KPIShuttleWrapper.class
    Archiving oracle/apps/mst/setup/LineColorWrapper.class
    Archiving oracle/apps/mst/setup/MapLineThickness$ComboBoxRenderer.class
    Archiving oracle/apps/mst/setup/MapLineThickness.class
    Archiving oracle/apps/mst/util/Debug.class
    Archiving oracle/apps/mst/util/JumpBackPopup.class
    Finishing and closing archive /ydev/u01/oracle/ydevcomn/java/oracle/apps/mst/jar/mstjar.jar.uns
    Done Generating mstjar.jar : Tue Sep 29 2009 06:37:22
    About to Sign mstjar.jar : Tue Sep 29 2009 06:37:22
    Executing: /ydev/u01/oracle/ydevcomn/util/java/1.5/jdk1.5.0_15/bin/java -Djava.security.egd=file:/dev/urandom sun.security.tools.JarSigner -keystore ******** -storepass ******** -keypass ******** -sigfile CUST -signedjar /ydev/u01/oracle/ydevcomn/java/oracle/apps/mst/jar/mstjar.jar.sig /ydev/u01/oracle/ydevcomn/java/oracle/apps/mst/jar/mstjar.jar.uns customer1
    JarSigner subcommand exited with status 0
    No standard output from jarsigner
    No error output from jarsigner
    Done Signing mstjar.jar : Tue Sep 29 2009 06:37:22
    About to Copy mstjar.jar to /ydev/u01/oracle/ydevappl/mst/11.5.0/java/jar : Tue Sep 29 2009 06:37:22
    Done Copying mstjar.jar to /ydev/u01/oracle/ydevappl/mst/11.5.0/java/jar : Tue Sep 29 2009 06:37:22
    Done Analyzing mstjar.jar : Tue Sep 29 2009 06:37:22
    Done Analyzing/Generating jar files : Tue Sep 29 2009 06:37:22
    Errors have occurred; exiting with status 1
    AD Run Java Command is complete.
    Copyright (c) 2002 Oracle Corporation
    Redwood Shores, California, USA
    AD Java
    Version 11.5.0
    NOTE: You may not use this utility for custom development
    unless you have written permission from Oracle Corporation.
    Failed to generate product JAR files in JAVA_TOP -
    /ydev/u01/oracle/ydevcomn/java.
    adogjf() Unable to generate jar files under JAVA_TOP
    Time is: Tue Sep 29 2009 06:37:23
    Backing up restart files, if any......Done.
    You should check the file
    /ydev/u01/oracle/ydevappl/admin/ydev/log/adadmin.log
    for errors.
    please give me the solution.
    regards
    D

  • Same table, Oracle 5 times slower than MySQL

    Hi
    I have several sites with the same application using a database as a log device and to later retrieve reports from. Some tables are for setup and one is for all the log data. The log data table has the following columns: LINEID, TAG, DATE_, HOUR_, VALUE, TIME_ and CHANGED. Typical data is: 122345, PA01_FT1_ACC, 2008-08-01, 10, 985642, "", 0.
    Index (TAG,DATE_)
    When calling a report the software querys for typical 3-5 select querys like the following, only different TAG: SELECT * FROM table WHERE TAG='PA01_FT1_ACC' AND DATE_ BETWEEN '2008-08-01' AND '2008-08-31' AND HOUR_=24
    Since our customers have different preferences, some sites have Oracle and some have MySQL. And I have noticed that the sites running Oracle use 24-30 sec on the report, while MySQL uses 3-6 sec on a similar report with the same tables and the same querying software.
    How is this?
    Is there anything I can do to make Oracle work faster?
    Should HOUR_ also be in the index?
    Since I guess this slowness is not something inherent in Oracle, there must be something I can do.
    Thanks for any help.
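    A hedged sketch for the "Should HOUR_ also be in the index?" question (the table name log_data below is made up for illustration, since the real name is not given in the thread): with HOUR_ added as a trailing column, the equality on TAG, the range on DATE_, and the filter on HOUR_ can all be evaluated inside the index. Whether this actually helps depends on the data distribution and should be verified against the execution plan.
    CREATE INDEX idx_log_tag_date_hour ON log_data (TAG, DATE_, HOUR_);
    -- the report query from the thread, unchanged:
    SELECT * FROM log_data
    WHERE TAG = 'PA01_FT1_ACC'
      AND DATE_ BETWEEN '2008-08-01' AND '2008-08-31'
      AND HOUR_ = 24;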

    Histograms on varchar2 columns are based on the
    first 6 bytes of the column. If the database is using
    a character set that uses 1 byte per character, every
    entry in the DATE_ column since the beginning of the
    year looks like '2008-0' to the optimizer when
    determining cardinality to produce the "best"
    execution plan. For character sets that require
    multiple bytes per character, the situation is worse
    - every entry in the column representing this century
    appears to be the same value to the optimizer when
    determining cardinality
    That's a very good point and I didn't know about it before, about the first 6 bytes being used. Can you point me to where in the docs it is listed, if it is there, or to some other document which has this detail?
    Aman,
    I am having a bit of trouble finding the information in the documentation about the number of bytes used by a histogram on a VARCHAR2 column.
    References:
    http://www.freelists.org/archives/oracle-l/08-2006/msg00199.html
    "Cost-Based Oracle Fundamentals" page 117 shows a demonstration, and describes the use of ENDPOINT_ACTUAL_VALUE starting on Oracle 9i.
    "Cost-Based Oracle Fundamentals" page 118-120 describes selectivity problems when histograms are not used and a date is placed into a VARCHAR2 column.
    "Troubleshooting Oracle Performance", likely around page 130-140 also indicates that histograms only use the first 6 bytes.
    See section "Followup November 12, 2005 - 4pm US/Eastern"
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
    An interesting test setup that almost shows what I intended - but Oracle 10.2.0.2 was a little smarter than I expected, even though it chose to use an index to retrieve more than 50% of a table... Take a look at the TO_CHAR representation of the ENDPOINT_VALUE from DBA_TAB_HISTOGRAMS to understand what I was trying to describe in my original post in this thread.
    CREATE TABLE T1 (DATE_ VARCHAR2(10));
    INSERT INTO T1
    SELECT
      TO_CHAR(TO_DATE('2008-01-01','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
    FROM
      DUAL
    CONNECT BY
      LEVEL<=250;
    250 rows created.
    COMMIT;
    CREATE INDEX IND_T1 ON T1(DATE_);
    SELECT
      MIN(DATE_),
      MAX(DATE_)
    FROM
      T1;
    MIN(DATE_) MAX(DATE_)
    2008-01-01 2008-09-06
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    no rows selected
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    no rows selected
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
    PL/SQL procedure successfully completed.
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    DATE_                                   250         250 HEIGHT BALANCED
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1'
    ORDER BY
      ENDPOINT_NUMBER;
    COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
    DATE_                    1     2.6059E+35 2008-01-01
    DATE_                    2     2.6059E+35 2008-01-02
    DATE_                    3     2.6059E+35 2008-01-03
    DATE_                    4     2.6059E+35 2008-01-04
    DATE_                    5     2.6059E+35 2008-01-05
    DATE_                    6     2.6059E+35 2008-01-06
    DATE_                    7     2.6059E+35 2008-01-07
    DATE_                    8     2.6059E+35 2008-01-08
    DATE_                    9     2.6059E+35 2008-01-09
    DATE_                   10     2.6059E+35 2008-01-10
    DATE_                  243     2.6059E+35 2008-08-30
    DATE_                  244     2.6059E+35 2008-08-31
    DATE_                  245     2.6059E+35 2008-09-01
    DATE_                  246     2.6059E+35 2008-09-02
    DATE_                  247     2.6059E+35 2008-09-03
    DATE_                  248     2.6059E+35 2008-09-04
    DATE_                  249     2.6059E+35 2008-09-05
    DATE_                  250     2.6059E+35 2008-09-06
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT
      DATE_
    FROM
      T1
    WHERE
      DATE_<='2008-01-15';
    15 rows selected.
    From the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: T1  Alias: T1
        #Rows: 250  #Blks:  5  AvgRowLen:  11.00
    Index Stats::
      Index: IND_T1  Col#: 1
        LVLS: 0  #LB: 1  #DK: 250  LB/K: 1.00  DB/K: 1.00  CLUF: 1.00
    SINGLE TABLE ACCESS PATH
      Column (#1): DATE_(VARCHAR2)
        AvgLen: 11.00 NDV: 250 Nulls: 0 Density: 0.002
        Histogram: HtBal  #Bkts: 250  UncompBkts: 250  EndPtVals: 250
      Table: T1  Alias: T1    
        Card: Original: 250  Rounded: 15  Computed: 15.00  Non Adjusted: 15.00
      Access Path: TableScan
        Cost:  3.01  Resp: 3.01  Degree: 0
          Cost_io: 3.00  Cost_cpu: 85607
          Resp_io: 3.00  Resp_cpu: 85607
      Access Path: index (index (FFS))
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 49621
        ix_sel: 0.0000e+000  ix_sel_with_filters: 1
      Access Path: index (FFS)
        Cost:  2.00  Resp: 2.00  Degree: 1
          Cost_io: 2.00  Cost_cpu: 49621
          Resp_io: 2.00  Resp_cpu: 49621
      Access Path: index (IndexOnly)
        Index: IND_T1
        resc_io: 1.00  resc_cpu: 10121
        ix_sel: 0.06  ix_sel_with_filters: 0.06
        Cost: 1.00  Resp: 1.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IND_T1
             Cost: 1.00  Degree: 1  Resp: 1.00  Card: 15.00  Bytes: 0
    ============
    Plan Table
    ============
    | Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
    | 0   | SELECT STATEMENT  |         |       |       |     1 |           |
    | 1   |  INDEX RANGE SCAN | IND_T1  |    15 |   165 |     1 |  00:00:01 |
    Predicate Information:
    1 - access("DATE_"<='2008-01-15')
    INSERT INTO T1
    SELECT
      TO_CHAR(TO_DATE('2008-09-07','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
    FROM
      DUAL
    CONNECT BY
      LEVEL<=250;
    COMMIT;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
    PL/SQL procedure successfully completed.
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    DATE_                                   500         254 HEIGHT BALANCED
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      TO_CHAR(ENDPOINT_VALUE) ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1'
    ORDER BY
      ENDPOINT_NUMBER;
    COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE                           ENDPOINT_A
    DATE_                    0 260592218925307000000000000000000000     2008-01-01
    DATE_                    1 260592218925307000000000000000000000     2008-01-02
    DATE_                    2 260592218925307000000000000000000000     2008-01-04
    DATE_                    3 260592218925307000000000000000000000     2008-01-06
    DATE_                    4 260592218925307000000000000000000000     2008-01-08
    DATE_                    5 260592218925307000000000000000000000     2008-01-10
    DATE_                    6 260592218925307000000000000000000000     2008-01-12
    DATE_                    7 260592218925307000000000000000000000     2008-01-14
    DATE_                    8 260592218925307000000000000000000000     2008-01-16
    DATE_                    9 260592218925307000000000000000000000     2008-01-18
    DATE_                   10 260592218925307000000000000000000000     2008-01-20
    DATE_                  242 260592219234792000000000000000000000     2009-04-26
    DATE_                  243 260592219234792000000000000000000000     2009-04-28
    DATE_                  244 260592219234792000000000000000000000     2009-04-29
    DATE_                  245 260592219234792000000000000000000000     2009-05-01
    DATE_                  246 260592219234792000000000000000000000     2009-05-02
    DATE_                  247 260592219234792000000000000000000000     2009-05-04
    DATE_                  248 260592219234792000000000000000000000     2009-05-05
    DATE_                  249 260592219234792000000000000000000000     2009-05-07
    DATE_                  250 260592219234792000000000000000000000     2009-05-08
    DATE_                  251 260592219234792000000000000000000000     2009-05-10
    DATE_                  252 260592219234792000000000000000000000     2009-05-11
    DATE_                  253 260592219234792000000000000000000000     2009-05-13
    DATE_                  254 260592219234792000000000000000000000     2009-05-14
    SELECT
      DATE_
    FROM
      T1
    WHERE
      DATE_ BETWEEN '2008-01-15' AND '2008-09-15';
    245 rows selected.
    From the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: T1  Alias: T1
        #Rows: 500  #Blks:  5  AvgRowLen:  11.00
    Index Stats::
      Index: IND_T1  Col#: 1
        LVLS: 1  #LB: 2  #DK: 500  LB/K: 1.00  DB/K: 1.00  CLUF: 2.00
    SINGLE TABLE ACCESS PATH
      Column (#1): DATE_(VARCHAR2)
        AvgLen: 11.00 NDV: 500 Nulls: 0 Density: 0.002
        Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 255
      Table: T1  Alias: T1    
        Card: Original: 500  Rounded: 240  Computed: 240.16  Non Adjusted: 240.16
      Access Path: TableScan
        Cost:  3.01  Resp: 3.01  Degree: 0
          Cost_io: 3.00  Cost_cpu: 148353
          Resp_io: 3.00  Resp_cpu: 148353
      Access Path: index (index (FFS))
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 111989
        ix_sel: 0.0000e+000  ix_sel_with_filters: 1
      Access Path: index (FFS)
        Cost:  2.01  Resp: 2.01  Degree: 1
          Cost_io: 2.00  Cost_cpu: 111989
          Resp_io: 2.00  Resp_cpu: 111989
      Access Path: index (IndexOnly)
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 62443
        ix_sel: 0.48031  ix_sel_with_filters: 0.48031
        Cost: 2.00  Resp: 2.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IND_T1
             Cost: 2.00  Degree: 1  Resp: 2.00  Card: 240.16  Bytes: 0
    ============
    Plan Table
    ============
    | Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
    | 0   | SELECT STATEMENT  |         |       |       |     2 |           |
    | 1   |  INDEX RANGE SCAN | IND_T1  |   240 |  2640 |     2 |  00:00:01 |
    Predicate Information:
    1 - access("DATE_">='2008-01-15' AND "DATE_"<='2008-09-15')I am sure that there are much better examples than the above, as the above generates a very small data set, and is still an incomplete test setup.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Oracle Java 7 Installation in Ubuntu 12.04 - JB-java.desktop

    In my Ubuntu server, I added the ppa:webupd8team/java repository and tried to install oracle-java7-installer. But it results in the following error:
    root@emaillenin:~# apt-get install oracle-java7-installer
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages:
      binfmt-support visualvm ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-kochi-gothic ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho
      ttf-arphic-uming firefox firefox-2 iceweasel mozilla-firefox iceape-browser mozilla-browser epiphany-gecko epiphany-webkit epiphany-browser galeon
      midbrowser moblin-web-browser xulrunner xulrunner-1.9 konqueror chromium-browser midori google-chrome
    The following NEW packages will be installed:
      oracle-java7-installer
    0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
    Need to get 0 B/17.6 kB of archives.
    After this operation, 106 kB of additional disk space will be used.
    Preconfiguring packages ...
    (Reading database ... 48156 files and directories currently installed.)
    Unpacking oracle-java7-installer (from .../oracle-java7-installer_7u25-0~webupd8~1_all.deb) ...
    oracle-license-v1-1 license has already been accepted
    dpkg: error processing /var/cache/apt/archives/oracle-java7-installer_7u25-0~webupd8~1_all.deb (--unpack):
    trying to overwrite '/usr/share/applications/JB-java.desktop', which is also in package oracle-java6-installer 6u37-0~eugenesan~precise1
    Errors were encountered while processing:
    /var/cache/apt/archives/oracle-java7-installer_7u25-0~webupd8~1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    root@emaillenin:~# ls /usr/share/applications/JB-java.desktop
    ls: cannot access /usr/share/applications/JB-java.desktop: No such file or directory
    root@emaillenin:~#
    How do I resolve this error? Is there any other way to install Java 7?

    Download the tar.gz installer and untar it to a location like /opt.
    Once you untar the archive, add the bin directory to your PATH variable and you should be all set to use it from the terminal.
    OR use sudo update-alternatives to set the java command to pick up your installed version.

  • How oracle deal with  latch wait posting

    I want to know: when a long-held latch is released, does latch wait posting make Oracle post one process from the latch wait list in FIFO order, or post all processes in the latch wait list?
    Edited by: jinyu on Nov 28, 2008 12:02 AM

    Hi,
    I think Steve does not agree with Metalink's view.
    FYI, you can ask Steve, he is a nice fellow. He hangs out on Oracle-l:
    http://www.freelists.org/archive/oracle-l/recent

  • Understanding the Oracle RAC Brain Split Resolution (cluster split-brain) protocol

    About a week ago a senior Oracle engineer explained the split-brain handling in RAC to me and a customer. According to him, when a split brain occurs the nodes race to grab the voting disks; the nodes that manage to grab (n/2+1) voting disks survive, while the nodes that fail to grab a voting disk are evicted from the cluster.
    I have to say this view is far too casual. That an old engineer who has been doing maintenance work since Oracle 6 can make such a conceptual mistake only shows how fast Oracle technology keeps changing.
    Before looking at how a brain split is handled, it is worth introducing the framework of Oracle RAC CSS (Cluster Synchronization Services):
    Oracle RAC CSS provides two background services: Group Management (GM) and Node Monitor (NM). GM provides group and lock services. At any moment one node in the cluster acts as the GM master node. The other nodes send GM requests serially to the master node, and the master node broadcasts cluster membership changes to the other nodes. Group membership is synchronized on every cluster reconfiguration, and every node interprets the membership change information independently.
    The Node Monitor (NM) service keeps node information consistent with other vendors' cluster software through skgxn (skgxn-libskgxn.a, the library that provides node monitoring). NM also maintains the familiar network heartbeat and disk heartbeat to make sure nodes stay alive. When a cluster member has no healthy network heartbeat or disk heartbeat, NM evicts that member from the cluster, and the evicted node reboots.
    NM learns the endpoints it has to listen on and talk to from the records in the OCR (the OCR stores the interconnect information), and sends heartbeat information over the network to the other cluster members. It also monitors the network heartbeat coming from every other cluster member; such a heartbeat happens every second. If no network heartbeat from a node is received within the number of seconds specified by misscount (by the way: in 10.2.0.1 the default misscount on Linux is 60s, 30s on other platforms, or 600s when third-party vendor clusterware is used, and 10.2.0.1 did not yet have disktimeout; from 10.2.0.4 misscount is 60s and disktimeout is 200s; from 11.2 misscount is 30s: CRS-4678: Successful get misscount 30 for Cluster Synchronization Services, CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services), that node is considered "dead". NM also initiates cluster reconfiguration when other nodes join or leave the cluster.
    When resolving a split brain, NM also monitors the voting disks to learn about competing sub-clusters. Sub-clusters deserve a short explanation. Imagine an environment with a large number of nodes, say the 128-node environment Oracle has officially built. When a network failure occurs there are several possibilities. One is a global network failure, where none of the 128 nodes can exchange network heartbeats with any other, producing up to 128 isolated single-node "islands". Another is a partial network failure, where the 128 nodes are split into several parts, each containing more than one node; these parts are called sub-clusters. When the network fails, the nodes inside a sub-cluster can still exchange vote messages with each other, but the sub-clusters and island nodes can no longer talk to each other over the regular interconnect; this is when NM reconfiguration needs the voting disks.
    Because NM uses the voting disks to resolve the communication breakdown caused by a network failure, the voting disks must be accessible at all times. In normal operation every node performs disk heartbeats: every second it writes disk heartbeat information into a particular block of the voting disk, and every second CSS also reads a so-called "kill block"; when the content of the kill block says that this node has been evicted from the cluster, CSS reboots the node itself.
    To keep the disk heartbeats and the kill-block reads working, CSS requires that at least (N/2+1) voting disks be accessible by each node, which guarantees that any two nodes always share at least one voting disk they can both access. Under normal conditions, as long as a node can access more online voting disks than it cannot access, the node stays alive; when the inaccessible voting disks outnumber the accessible ones, the Cluster Synchronization Services process fails and the node reboots. So the claim that two voting disks are enough for redundancy and that three or more are unnecessary is wrong: Oracle recommends at least three voting disks in a cluster.
    Supplement 1:
    Question:
    Some readers asked whether the number of voting disks has to be odd.
    Answer:
    Actually we only recommend an odd number of voting disks; it is not mandatory. In 10gR2 the upper limit on the number of voting disks is 32.
    Question:
    Can we use 2 or 4 voting disks?
    Answer:
    Yes, but numbers such as 2 or 4 are unfavorable under the hard disk-heartbeat rule that "at least (N/2+1) voting disks must be accessible by the node":
    With 2 voting disks, not a single voting disk heartbeat may fail.
    With 3 voting disks, no more than 1 voting disk heartbeat may fail.
    With 4 voting disks, still no more than 1 voting disk heartbeat may fail; the fault tolerance is the same as with 3, but because there are more voting disks, management cost and risk increase.
    With 5 voting disks, no more than 2 voting disk heartbeats may fail.
    With 6 voting disks, still no more than 2 voting disk heartbeats may fail; again, the extra disk compared with 5 only adds unreasonable management cost and risk.
    Supplement 2:
    Question:
    If the network heartbeat between the nodes is normal and the node can heartbeat to more voting disks than it cannot access, for example with 3 voting disks exactly 1 voting disk's disk heartbeat times out, will a brain split happen?
    Answer:
    This situation triggers neither a brain split nor the node eviction protocol. When a single voting disk, or fewer than (N/2+1) voting disks, fail their disk heartbeat, the failure may be caused by a short-lived I/O error while the node accesses the voting disk; CSS immediately marks those failed voting disks OFFLINE. Even though some voting disks are OFFLINE, at least (N/2+1) voting disks are still available, which guarantees that the eviction protocol is not invoked, so no node is rebooted. The Disk Ping Monitor Thread (DPMT - clssnmDiskPMT) of the node monitor module then repeatedly retries access to the failed OFFLINE voting disks; if a voting disk becomes accessible for I/O again and the data on it is verified to be uncorrupted, CSS marks the voting disk ONLINE again. If, however, the voting disk still cannot be accessed within 45s (the 45s is derived from misscount and an internal algorithm), DPMT writes a warning to cssd.log, such as:
    CSSD]2011-11-11 20:11:20.668 >
    WARNING: clssnmDiskPMT: long disk latency >(45940 ms) to voting disk (0//dev/asm-votedisk1)
    Suppose the RAC environment that produced the clssnmDiskPMT warning above has 3 voting disks in total and one of them, asm-votedisk1, has already been marked OFFLINE because of an I/O error or some other reason. If another voting disk then also runs into trouble and its disk heartbeat fails, the node will trigger the eviction protocol because it has fewer than the required number (2) of voting disks, and it will reboot.
    A heartbeat failure of a single voting disk, or of fewer than (N/2+1) voting disks, only produces a warning, never a fatal error. Because the large majority of voting disks can still be accessed, the warnings are non-fatal and the eviction protocol is not triggered.
    When an actual NM reconfiguration takes place, all active nodes and the nodes that are joining the cluster take part in the reconfig; nodes that do not acknowledge (ack) are not included in the new cluster membership. A reconfiguration actually consists of several phases:
    1. Initialization phase -- the reconfig manager (the node with the lowest cluster member number) signals the other nodes to start the reconfig
    2. Voting phase -- each node sends the membership it knows about to the reconfig manager
    3. Brain split check phase -- the reconfig manager checks whether a split brain has occurred
    4. Eviction phase -- the reconfig manager evicts non-member nodes
    5. Update phase -- the reconfig manager sends the authoritative membership information to the member nodes
    In the brain split check phase the Reconfig Manager finds the nodes that have a disk heartbeat but no network heartbeat, uses the network heartbeat (where possible) and the disk heartbeat information to count the nodes in every competing sub-cluster, and decides which sub-cluster should survive based on two factors:
    The sub-cluster with the largest number of nodes
    If the sub-clusters have the same number of nodes, the sub-cluster with the lowest node number; in a 2-node RAC environment, for example, node 1 always wins.
    IO fencing (remote power reset) is done using the STONITH algorithm.
    Supplement:
    STONITH is a commonly used I/O fencing algorithm and is the necessary interface in RAC for shutting down a remote node. The idea is very simple: when software running on one node wants to make sure the other nodes in the cluster cannot use a certain resource, it pulls the other nodes' power plug. It is a simple, reliable and somewhat brutal algorithm. The advantage of STONITH is that it has no specific hardware requirements and does not limit the scalability of the cluster.
    The Process Monitor module of Oracle Clusterware implements the IO fencing, making sure that uncoordinated work by nodes/instances does not produce corruption. The Process Monitor work is carried out by the hangcheck timer or by oprocd. On Linux, oprocd did not exist before 10.2.0.4 (other Unix platforms had it from 10.2.0.1), so before installing RAC the hangcheck timer software had to be installed separately to guarantee IO fencing; from 10.2.0.4 on, Linux also has oprocd - see the note <Know about RAC Clusterware Process OPROCD>. These IO fencing processes are generally locked in memory, run in real time, sleep for a fixed time and run as root; if such a process wakes up and finds it is already too late, it forces a reboot, and if the process itself fails, the node also reboots. In a RAC environment oprocd is therefore a very important process, and you should not go and kill it manually.
    After the brain split check the eviction phase begins. The evicted nodes receive the eviction message sent to them (if the network is available); if the message cannot be sent, the eviction notice is delivered by writing it into the "kill block" on the voting disk. The reconfig manager also waits for the evicted nodes to confirm that they have received the eviction notice, either through network communication or through status information on the voting disk.
    As you can see, during the brain split check Oracle CSS tries to keep the largest sub-cluster alive so that the RAC system has the highest possible availability; it is not the case, as the senior engineer claimed, that during cluster reconfiguration the nodes race to seize the voting disks to decide which node survives.
    http://www.oracledatabase12g.com/archives/oracle-rac-brain-split-resolution.html
    http://www.oracledatabase12g.com/archives/%E5%86%8D%E8%AE%AErac-brain-split%E8%84%91%E8%A3%82.html
    Edited by: Maclean Liu on Feb 4, 2012 9:16 AM

    that's ok!

  • Long connecting issues after migrating to oracle 9i

    Hi
    I have a VB application which connects to an Oracle database. A few days back the Oracle database was migrated from one server to another. Ever since this migration, the application takes a very long time to connect to the database (around 10 minutes!!!). Once the connection is established, everything works normally as it used to.

    Check in
    http://www.freelists.org/archives/oracle-l/02-2006/threads.html#00130
    for thread connection to the database is slow
    and
    http://www.freelists.org/archives/oracle-l/07-2007/threads.html#00252
    for thread Slow sqlplus connection
    and
    http://www.freelists.org/archives/oracle-l/07-2004/threads.html#01677
    for thread login delay with 9.2.0.5
    Delays that long look more like a DNS/network problem than a database problem.
    Gints Plivna
    http://www.gplivna.eu
