Identify the OCR master node for 11.2

My customer runs an 11.1.0.6 RAC database with 11.2 CRS+ASM and is interested in finding the "OCR master node" at any point in time.
I noticed that one way to identify the OCR master node is to search the
$ORA_CRS_HOME/log/hostname/crsd/crsd.log
file for the line "I AM THE NEW OCR MASTER" (or "MASTER") with the most recent timestamp. Is this still applicable in the 11.2 release, and what other ways are there to identify the master node?
Thanks in advance.

Hi,
as mentioned before, you can use the RAC FAQ Oracle Support note to determine the masters in a RAC system, except that this note does not elaborate on the OCR master you are asking about (called the OCR writer in the documentation these days).
However, your approach of checking $ORA_CRS_HOME/log/hostname/crsd/crsd.log works, and the message is pretty much the same in 11.2 as it was pre-11.2. Note, though, that checking only crsd.log may not always reveal the OCR master. Reason: crsd.log is used in a rolling fashion. Once the log entries reach approx. 50 MB, the file is rolled over to crsd.l01 (or something like that) and a fresh crsd.log is started; 10 archived logs are maintained.
For an average cluster this will last for a while, but eventually all of these logs may have been recycled without the OCR master ever having changed, in which case the logs no longer contain the message and you cannot use them anymore. Luckily, you should not have to find the OCR master all the time. Why are you interested in knowing which node the OCR master resides on at all times?
If you do need to, you should therefore cat all crsd.l* files under the respective directory on all nodes to determine this. But again, that should not be necessary.
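A minimal sketch of that log search, to be run on every node (the $(hostname -s) directory name is an assumption about how the node directory is named; in 11.2 the logs usually live under the Grid Infrastructure home rather than $ORA_CRS_HOME):
# Search the current and rolled-over crsd logs for the "OCR MASTER"
# message, then pick the entry with the most recent timestamp across nodes.
grep -i "OCR MASTER" "$ORA_CRS_HOME"/log/$(hostname -s)/crsd/crsd.l*
# Assumption, not from this thread: the automatic OCR backups are taken by
# the node currently acting as OCR master, so the node listed here is
# usually that node.
ocrconfig -showbackup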
Hope that helps. Thanks,
Markus

Similar Messages

  • TUXEDO11 in MP mode can't boot TMS_ORA on the non-master node

    I have Tuxedo 11 installed on an Ubuntu 9.10 server as the master node (SITE1) and on CentOS 6.2 as the non-master node (SITE2). The client program uses WSL to communicate with the servers. Tuxedo 11 has no patch, and both Tuxedo 11 and Oracle 10gR2 are 32-bit, running on a 32-bit OS.
    On both nodes a TMS_ORA associated with an Oracle 10gR2 database was installed. When I issue "tmboot -y", the servers on the master node boot normally; however, the TMS_ORA server and the servers that use TMS_ORA on SITE2 report "Assume started (pipe).". There is no core file for these servers on SITE2, and in the ULOG on SITE2 there is no error or warning concerning the failed start of TMS_ORA.
    To check that my servers and TMS_ORA work OK on SITE2, I used the master command under tmadmin to first swap the master and non-master nodes, and after the migration succeeded I issued "tmshutdown -cy" and then "tmboot -y" on SITE2. Surprisingly, all the servers booted correctly on both nodes. I then migrated the master node back to SITE1; the servers were still alive and my client program could successfully call them, which means the TMS_ORA and the servers using TMS_ORA work fine on both nodes.
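    For reference, the master swap mentioned above is done from within tmadmin, roughly like this (a sketch only; check your Tuxedo release's documentation for which node the master command must be issued on):
    tmadmin
    # > master   (migrates the DBBL between the configured MASTER and its backup)
    # > quit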
    The problem is that when I "tmshutdown -s server" (those on SITE2, either TMS_ORA or a server using TMS_ORA) and then use "tmboot -s server" to boot them again, I get "Assume started (pipe)." reported and those server processes do not appear on SITE2.
    It seems that I can't boot TMS_ORA on SITE2 from the master node SITE1, but I can boot all the servers correctly if SITE2 is acting as the master node. Servers that don't use TMS_ORA on SITE2 can be booted successfully from SITE1.
    Can anybody figure out what's wrong? Thanks in advance.
    Best regards,
    Orlando

    Hi Todd,
    Thank you for your reply. Following are my ULOG and the tmboot report:
    ubuntu9:~/tuxapp$tmboot -y
    Booting all admin and server processes in /home/xp/tuxapp/tuxconfig
    INFO: Oracle Tuxedo, Version 11.1.1.2.0, 32-bit, Patch Level (none)
    Booting admin processes ...
    exec DBBL -A :
    on SITE1 -> process id=8803 ... Started.
    exec BBL -A :
    on SITE1 -> process id=8804 ... Started.
    exec BBL -A :
    on SITE2 -> process id=3964 ... Started.
    Booting server processes ...
    exec TMS_ORA -A :
    on SITE1 -> process id=8812 ... Started.
    exec TMS_ORA -A :
    on SITE1 -> process id=8838 ... Started.
    exec TMS_ORA2 -A :
    on SITE2 -> CMDTUX_CAT:819: INFO: Process id=3967 Assume started (pipe).
    exec TMS_ORA2 -A :
    on SITE2 -> CMDTUX_CAT:819: INFO: Process id=3968 Assume started (pipe).
    exec WSL -A -- -n //128.0.88.24:5000 -m 3 -M 5 -x 5 :
    on SITE1 -> process id=8841 ... Started.
    8 processes started.
    ULOG on ubuntu9
    134547.ubuntu9!DBBL.8803.3071841984.0: 06-14-2012: client high water (0), total client (0)
    134547.ubuntu9!DBBL.8803.3071841984.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134547.ubuntu9!DBBL.8803.3071841984.0: LIBTUX_CAT:262: INFO: Standard main starting
    134549.ubuntu9!DBBL.8803.3071841984.0: CMDTUX_CAT:4350: INFO: BBL started on SITE1 - Release 11112
    134550.ubuntu9!BBL.8804.3072861888.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit, Patch Level (none)
    134550.ubuntu9!BBL.8804.3072861888.0: LIBTUX_CAT:262: INFO: Standard main starting
    134550.ubuntu9!BRIDGE.8806.3072931520.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134550.ubuntu9!BRIDGE.8806.3072931520.0: LIBTUX_CAT:262: INFO: Standard main starting
    134555.ubuntu9!DBBL.8803.3071841984.0: CMDTUX_CAT:4350: INFO: BBL started on SITE2 - Release 11112
    134556.ubuntu9!BRIDGE.8806.3072931520.0: CMDTUX_CAT:1371: INFO: Connection received from redhat62
    134557.ubuntu9!TMS_ORA.8812.3057321664.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134557.ubuntu9!TMS_ORA.8812.3057321664.0: LIBTUX_CAT:262: INFO: Standard main starting
    134559.ubuntu9!TMS_ORA.8838.3056805568.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134559.ubuntu9!TMS_ORA.8838.3056805568.0: LIBTUX_CAT:262: INFO: Standard main starting
    134559.ubuntu9!WSL.8841.3072153920.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134559.ubuntu9!WSL.8841.3072153920.0: LIBTUX_CAT:262: INFO: Standard main starting
    134559.ubuntu9!WSH.8842.3072411328.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134559.ubuntu9!WSH.8842.3072411328.0: WSNAT_CAT:1030: INFO: Work Station Handler joining application
    134559.ubuntu9!WSH.8843.3073169088.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134559.ubuntu9!WSH.8843.3073169088.0: WSNAT_CAT:1030: INFO: Work Station Handler joining application
    134559.ubuntu9!WSH.8844.3073066688.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134559.ubuntu9!WSH.8844.3073066688.0: WSNAT_CAT:1030: INFO: Work Station Handler joining application
    ULOG on redhat62
    134615.redhat62!tmloadcf.3961.3078567616.-2: 06-14-2012: client high water (0), total client (0)
    134615.redhat62!tmloadcf.3961.3078567616.-2: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134615.redhat62!tmloadcf.3961.3078567616.-2: CMDTUX_CAT:872: INFO: TUXCONFIG file /home/tuxedo/tuxedo/simpapp/tuxconfig has been updated
    134617.redhat62!BSBRIDGE.3963.3078089312.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134617.redhat62!BSBRIDGE.3963.3078089312.0: LIBTUX_CAT:262: INFO: Standard main starting
    134619.redhat62!BBL.3964.3079420512.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit, Patch Level (none)
    134619.redhat62!BBL.3964.3079420512.0: LIBTUX_CAT:262: INFO: Standard main starting
    134620.redhat62!BRIDGE.3965.3077868128.0: 06-14-2012: Tuxedo Version 11.1.1.2.0, 32-bit
    134620.redhat62!BRIDGE.3965.3077868128.0: LIBTUX_CAT:262: INFO: Standard main starting
    134620.redhat62!BRIDGE.3965.3077868128.0: CMDTUX_CAT:4488: INFO: Connecting to ubuntu9 at //128.0.88.24:1800
    ubb file content (just in case you want to see it too; I've commented out all the services in the ubb file except the TMS_ORA2 on SITE2 to make it more distinct):
    *RESOURCES
    IPCKEY 123456
    DOMAINID TUXTEST
    MASTER SITE1, SITE2
    MAXACCESSERS 50
    MAXSERVERS 35
    MAXCONV 10
    MAXGTT 20
    MAXSERVICES 70
    OPTIONS LAN, MIGRATE
    MODEL MP
    LDBAL Y
    *MACHINES
    DEFAULT: MAXWSCLIENTS=30
    ubuntu9 LMID=SITE1
    APPDIR="/home/xp/tuxapp"
    TUXCONFIG="/home/xp/tuxapp/tuxconfig"
    TUXDIR="/home/xp/tuxedo11gR1"
    TLOGDEVICE="/home/xp/tuxapp/TLOG"
    TLOGNAME="TLOG"
    TLOGSIZE=100
    TYPE=Linux
    ULOGPFX="/home/xp/tuxapp/ULOG"
    ENVFILE="/home/xp/tuxapp/ENVFILE"
    UID=1000
    GID=1000
    redhat62 LMID=SITE2
    TUXDIR="/usr/oracle/tuxedo11gR1"
    APPDIR="/home/tuxedo/tuxedo/simpapp"
    TLOGDEVICE="/home/tuxedo/tuxedo/simpapp/TLOG"
    TLOGNAME="TLOG"
    TUXCONFIG="/home/tuxedo/tuxedo/simpapp/tuxconfig"
    TYPE=Linux
    ULOGPFX="/home/tuxedo/tuxedo/simpapp/ULOG"
    ENVFILE="/home/tuxedo/tuxedo/simpapp/ENVFILE"
    UID=501
    GID=501
    *GROUPS
    BANK1
    LMID=SITE1 GRPNO=1 TMSNAME=TMS_ORA TMSCOUNT=2
    OPENINFO="Oracle_XA:Oracle_XA+Acc=P/scott/tiger+SesTm=120+MaxCur=5+LogDir=.+SqlNet=xpdev"
    CLOSEINFO="NONE"
    BANK2
    LMID=SITE2 GRPNO=2 TMSNAME=TMS_ORA2 TMSCOUNT=2
    OPENINFO="Oracle_XA:Oracle_XA+Acc=P/scott/scott+SesTm=120+MaxCur=5+LogDir=.+SqlNet=tuxdev"
    CLOSEINFO="NONE"
    WSGRP
    LMID=SITE1 GRPNO=3
    OPENINFO=NONE
    *NETGROUPS
    DEFAULTNET NETGRPNO=0 NETPRIO=100
    SITE1_SITE2 NETGRPNO=1 NETPRIO=200
    *NETWORK
    SITE1 NETGROUP=DEFAULTNET
    NADDR="//128.0.88.24:1800"
    NLSADDR="//128.0.88.24:1500"
    SITE2 NETGROUP=DEFAULTNET
    NADDR="//128.0.88.215:1800"
    NLSADDR="//128.0.88.215:1500"
    *SERVERS
    DEFAULT:
    CLOPT="-A"
    #XFER SRVGRP=BANK1 SRVID=1
    #TLR_ORA SRVGRP=BANK1 SRVID=2
    #TLR_ORA2 SRVGRP=BANK2 SRVID=3
    WSL SRVGRP=WSGRP SRVID=4
    CLOPT="-A -- -n //128.0.88.24:5000 -m 3 -M 5 -x 5"
    *SERVICES
    #INQUIRY
    #WITHDRAW
    #DEPOSIT
    #XFER_NOXA
    #XFER_XA

  • How To Identify the OCR-disk,Voting Disk

    Hi Champs,
    I have some confusion about the OCR and the voting disk.
    My RAC is up and running, and there are 3 raw partitions configured for these as well,
    but I don't know which partition is used for which purpose. How do I identify which partitions are used for the OCR and which for the voting disk?
    Regards,
    Shitesh Shukla

    Hi,
    try these:
    crsctl query css votedisk
    to list the voting disks, and the ocrcheck and ocrconfig utilities for the OCR.
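    A short usage sketch of those utilities (the output formats vary by version, so treat this as illustrative only):
    # List the configured voting disk locations
    crsctl query css votedisk
    # Show the OCR device locations, size, free space and integrity status
    ocrcheck
    # Show where and when the OCR was last backed up
    ocrconfig -showbackup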

  • I have the CS5 Master Collection for Windows, but want to convert my license to a Mac.

    I have the CS5 Master Collection for Windows, but want to convert my license to a Mac. I already have the CS5 Suite installed on the Mac. What do I need to do from here? I tried reading up a little bit on this and saw some people said in order to do this you would also need to upgrade your product version. Please tell me this is not the case, as that would be the dumbest and most frustrating customer service move ever to already faithful Adobe purchasers... 
    Thanks for any help you can give!

    in order to do this you would also need to upgrade your product version. Please tell me this is not the case, as that would be the dumbest and most frustrating customer service move ever to already faithful Adobe purchasers...
    Unfortunately, upgrading to CS6 or joining the Cloud are your only options.
    Platform swapping (Win to Mac and vice versa) is only available to CS6 owners.
    Order product | Platform, language swap

  • How to identify the system generated program for any standrd extraction?

    Hi all,
    How do I identify the standard program generated when an extraction takes place? Is there any tcode to figure out the standard ABAP programs for a datasource?
    For instance:
    for the datasource 2LIS_02_SCL, what is the standard extractor program used?
    Thanks
    Pooja

    Hi hemanyt,
    I can't find the table RSTRAN active.
    Thanks
    Pooja

  • Need help on how to identify the latest timestamp ran for the WorkBooks

    I know some tables that identify workbooks when they are created, but these tables are not updated if we run the workbook today.
    For example: if we created and saved a workbook on 05/25/2009 (or opened an old workbook) and then ran the same workbook on 05/26/2009, the system should record that latest timestamp as the last run, but I am unable to find this timestamp for workbooks.
    Workbooks are maintained in RSDDSTAT; there I am able to find when a workbook was last changed, but not the latest run timestamp. Our requirement is to find the latest run timestamp for the workbooks used over the last year.
    Below are the tables that identify when a workbook was created, and other details:
    RSRWBINDEX  List of binary large objects (Excel workbooks)
    RSRWBINDEXT  Titles of binary objects (Excel workbooks)
    RSRWBSTORE  Storage for binary large objects (Excel workbooks)
    RSRWBTEMPLATE  Assignment of Excel workbooks as personal templates
    RSRWORKBOOK  Where-used list for reports in workbooks
    Thanks in advance
    Mahesh

    Any updates?
    I found one more table to identify workbooks:
    /BI0/PTCTWORKBK - Master Data (Time-Ind.): Characteristic Workbook
    I am still unable to find the last-run timestamps for workbooks.
    Mahesh.

  • What are the dependent master data for running MRP

    Hi Experts,
    My client wants to run MRP after 8 months of implementation. Now I want to know:
    1) What master data needs to be checked? They are using strategy 20.
    2) They have confirmed sales orders for a few customers and forecast sales orders for a few customers. How do I map this in SAP?
    3) Where will I get the gross requirement in SAP? The MRP run generates the net requirements.
    4) Is there any report in SAP that displays the forecast values for a selected material?
    Please help me on the above points.
    Thanks
    Satheesh

    Hi Vishwa Upadhyay,
    thank you once again.
    The first point is cleared. Can you please advise on points 2, 3 and 4? What are the configuration settings needed for the different sales order types (i.e. confirmed and forecast)?
    Thanks
    Satheesh.N

  • Changed the GL Master record for Line item display

    Hi,
    Can anybody please let me know, once I have changed a GL master record to line item display, how I can see my old line items in the line item display?
    I know we can do this through one standard program, but I can't recall it. Can anybody please help me out with this?
    I'll really appreciate your time and help.
    Thanks & Regards,
    Niki Shah

    Hi Eric,
    Thanks for the quick reply. I got it.
    Thanks once again; I assigned the points accordingly.
    Regards,
    Niki Shah

  • OCM for Revesion level in the Material Master and for the BOM

    Hello,
    I use mySAP 6.
    I have implemented OCM. It is used for two things:
    - the first is the revision level in the material master;
    - the second is the BOM.
    Today the customizing is good and it works.
    My need is to have the same change number for those two changes: for the material master change and for the BOM change.
    Is this possible, and how?
    Thanks for your answers.
    Bérenger

    I do not think this is possible. Service level is usually used for safety stock planning.

  • Identifying the correct MIME type for file uploads

    How does Apex identify the MIME type of an uploaded file? I have a page that uploads a file and accesses it via APEX_APPLICATION_FILES. That works fine. The problem is that, for certain file types, the mime_types column is "application/octet-stream" instead of the correct file type. I think it's using the Content-Type from the client browser, though I'm not quite sure. Is there any way to override this server-side in Apex?
    I'm running Apex with mod_plsql. Adding an AddType directive to Apache and adding the correct association to the mime.types file did not work.

    Hi,
    I also noticed this. It seems that Microsoft's 4-character extensions like .docx are not being recognized by application type, so either the web server or mod_plsql defaults to the "application/octet-stream" MIME type, which refers to a file with binary content, instead of the traditional "application/msword" MIME type you would expect.
    If you know, at the point of insert into your database table, both the MIME type and the file name, one workaround would be to check whether the MIME type is 'application/octet-stream' and the file's extension is '.docx' (or another likely extension) and then treat it as MIME type 'application/msword', etc.
    Not really a solution, but it might be a sufficient workaround depending on your situation.
    Ted

  • Alsa/pulseaudio - keep the same master volume for speaker&headphones

    Hello,
    I have laptop Asus N550JV.
    I installed ALSA and PulseAudio (alsa-utils and pulseaudio, nothing else; no pulseaudio-alsa plugin, because I noticed that it causes me more problems than it solves).
    The problem is that I want to keep the Master audio channel untouched. If I listen through the speakers I might set the master volume to 50%; then I plug in headphones and set the volume to 100%; then when I unplug the headphones the sound still plays at 100% THROUGH THE SPEAKERS for about 0.5 sec, and after 0.5 sec it is set back to the 50% I had set while listening through the speakers.
    My question is: how can I fix this? I see two ways to fix it:
    1. add a lag (delay) when switching from headphones to speakers.
    2. keep the same master audio volume for both headphones and speakers.
    Any ideas? I think the second solution would be easier to implement, but I don't know how.

    Hi,
    I am having the same problem with the ringer volume automatically being reset to some other volume. In my case the ringer volume slider under "Settings" + "Sounds" seems to be tied to the iPod speaker volume.
    For example, I go in to "Settings" + "Sounds" and I set the ringer volume to maximum, I return to the home screen and then back to settings to make sure the volume is still the same (and it is), and then I go to iPod and play a track via the built-in speakers (i.e., no headphones). If I happen to change the iPod speaker volume and then stop playback and return to "Settings" + "Sounds" from the home screen I find that the ringer volume has been reset to whatever volume level I had previously set the iPod speaker volume.
    I had assumed that these two volume settings were independent. Certainly, changing the ringer volume does NOT affect the iPod speaker volume, but setting the iPod speaker volume DOES affect the ringer volume.
    Hopefully this is corrected in the first firmware update.
    I have an iPhone 3G.

  • How to get the path of the selected Jtree node for deletion purpose

    I have a JTree which shows the file structure of the system. Now I want the following functionality: when I right-click on a node, it should be deleted and the corresponding file should also be deleted from my system. For that, how can I get the path of that component?
    I used
    TreePath path = tree.getPathForLocation(loc.x, loc.y);
    but (from System.out.println) it gives the array of objects as [My Computer, C:\, C:\ANT_HOME, C:\ANT_HOME\lib, C:\ANT_HOME\lib\ant-1.7.0.pom.md5].
    I want its last path element; how can I get it?

    Call getLastSelectedPathComponent() on the tree.

  • How do I identify the actual /dev nodes in use by ASM?

    I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
    #/etc/init.d/oracleasm querydisk -p ASMVOL_01
    Disk "ASMVOL_01" is a valid ASM disk
    /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm"
    /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
    But I don't think it can be using both. How do I see which one it's actually using?
    *They said:
    ORACLEASM_SCANORDER="emcpower*"
    ORACLEASM_SCANEXCLUDE="sd"
    But I think that should be "sd*".

    PowerPath supports multiple I/O paths. Most HBAs (Fibre Channel PCI cards) have dual ports.
    This means 2 fibres running from the server into the FC switch(es), and more than one I/O path for that server to read/write a SAN LUN.
    That SAN LUN will be seen multiple times by the server, so it will create multiple scsi devices (in the /dev/ directory), one device for each I/O path.
    There are 2 basic reasons why you should not use these scsi devices directly.
    They can and do change device names after each reboot. The sequence the SAN LUNs are named in, depends on how the I/O fabric layer enumerates the LUNs when the kernel runs a device discovery on each I/O path. So LUN1 can be device sdg and sdk on one boot, and the same LUN1 can be sde and sdx on another boot. There is thus no naming consistency which makes it very difficult to use the device names directly as these names change.
    The second reason is redundancy. If you use device sdg and that I/O path fails, your s/w using that device fails. Despite a second path being available to the LUN on the SAN via device sdk.
    Powerpath (and Open Source Multipath) addresses this issue. The unique scsi ID of each device is determined. The devices with the same ID are I/O paths to the same LUN. A single logical device (an emcpower* device) is created - and it serves as the interface to the LUN, supporting all the I/O paths to the LUN (providing load balancing, and redundancy should an I/O path fail).
    The powermt command (if I recall correctly) will show you how this logical device is configured and what scsi devices are used as I/O paths to the EMC LUN.
    Personally, we have not used PowerPath for a number of years now. We instead use the open source Multipath solution. This was built for very large Linux clusters (1000's of nodes and petabytes of SAN storage) and is now a standard driver in the enterprise distros of the Linux kernel. It works fine with EMC (we have used it with Clariion and Symmetrix SANs, and currently with VNX SANs).
    Multipath does not taint the kernel. Multipath allows for far easier kernel upgrades. Multipath supports a number of different I/O fabric layers transparently. Multipath is very easy to configure.
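    A short sketch of how to see the mapping in practice (the device names and the ASMVOL_01 label are taken from the post above; the /dev/oracleasm/disks path is an assumption about where ASMLib creates its device nodes):
    # Show how each emcpower pseudo-device maps to its underlying scsi paths
    powermt display dev=all
    # Compare major/minor device numbers: the node ASMLib created for the
    # labeled disk carries the numbers of whichever device the scan bound
    ls -l /dev/oracleasm/disks/ASMVOL_01 /dev/emcpowera3 /dev/sda3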

  • Changing Cluster Master node

    Hello,
    I have a two-node RAC setup. I just need to change the master node to a different node in the cluster. How do I change the master node?
    Please help me out.

    Hi,
    a "master node" does not exist in RAC, and I am pretty sure you are not talking about the "OCR master"; you are talking about a RESOURCE MASTER.
    The only thing close to a "master" is that only one node has the role of updating the Oracle Cluster Registry (OCR); all other nodes only read the OCR.
    However, this is just a role, which can easily switch between the nodes (but normally stays fixed as long as the responsible node lives).
    This node is then called the OCR master.
    How do you change which node holds the "OCR master" role?
    You can't decide that and you can't change it manually; the Clusterware does it automatically, without human intervention.
    The OCR master is not a MASTER NODE, it is only a role.
    Good answers here:
    Re: Identify the OCR master node for 11.2
    Re: which node will become master
    Please don't confuse the concept of OCR master with RESOURCE MASTER (e.g. of a data block).
    All nodes hold resource masters; that is why I said all nodes are equal.
    With "billions" of data blocks spread across the memory of the cluster (the instances' SGAs), one node maintains extensive information (i.e. locks, versions, etc.) about a particular resource (i.e. a data block).
    So one node is the resource master of data block "data_01", another node is the resource master of data block "data_02", and so on. If a node which holds a resource master fails (or is shut down), GCS chooses another node in the cluster to be the master of that particular resource.
    http://www.oracleracsig.org/pls/apex/RAC_SIG.download_my_file?p_file=1003567
    RAC object remastering ( Dynamic remastering )  Oracle database internals by Riyaj
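    If you want to observe resource mastering yourself, a hedged sketch run from a shell on any node (the view and its columns are an assumption from memory, not something referenced above):
    # Which instance currently masters each dynamically remastered object,
    # and how often it has been remastered
    echo "select data_object_id, current_master, previous_master, remaster_cnt from v\$gcspfmaster_info where rownum <= 10;" | sqlplus -s "/ as sysdba"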

  • BDB Native Version 5.0.21 - asynchronous write at the master node

    Hi There,
    As part of performance tuning, we are thinking of introducing asynchronous write capabilities at the master node in replication code that uses BDB Native Edition (11g).
    Are there any known issues with asynchronous writes at the master node? We'd like to confirm with Oracle before we promote this to production.
    For asynchronous write at the master node we have configured a TaskExecutor with the following configuration:
    <bean id="MasterAsynchronousWriteTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="3"/>
    <property name="maxPoolSize" value="10"/>
    <property name="daemon" value="true"/>
    <property name="queueCapacity" value="200000"/>
    <property name="threadNamePrefix" value="Master_Entity_Writer_Thread"/>
    <property name="threadGroupName" value="BDBMasterWriterThreads"/>
    </bean>
    Local tests showed no issues. Please let us know at your earliest convenience whether any changes are required to the corePoolSize, maxPoolSize and queueCapacity values as a result of asynchronous writes.
    To summarize, 2 questions:
    1) Are there any known issues with asynchronous writes at the master node for BDB Native, version 5.0.21?
    2) If there are no issues, are any changes required to the corePoolSize, maxPoolSize and queueCapacity values as a result of asynchronous writes, based on the configuration above?
    Thank you!

    Hello,
    If you have not already, please take a look at the documentation on "Database and log file archival" at:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/transapp_archival.html
    Are you periodically creating backups of your database files? These snapshots are either a standard backup, which creates a consistent picture of the databases as of a single instant in time, or an on-line backup (also known as a hot backup), which creates a consistent picture of the databases as of an unspecified instant during the period of time when the snapshot was made. After backing up the database files you should periodically archive the log files being created in the environment. And I believe the question here is how often the periodic archive should take place to establish the best protocol for catastrophic recovery in the case of a failure like physical hardware being destroyed, etc.
    As the documentation describes, it is often helpful to think of database archival in terms of full and incremental filesystem backups. A snapshot is a full backup, whereas the periodic archival of the current log files is an incremental backup. For example, it might be reasonable to take a full snapshot of a database environment weekly or monthly, and archive additional log files daily. Using both the snapshot and the log files, a catastrophic crash at any time can be recovered to the time of the most recent log archival, a time long after the original snapshot.
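    A minimal shell sketch of that full-plus-incremental scheme, assuming the standard BDB utilities are on the PATH; the /data/bdb-env and /backup/bdb paths are hypothetical:
    # Weekly or monthly: take a full hot backup of the environment
    db_hotbackup -h /data/bdb-env -b /backup/bdb/full-$(date +%Y%m%d)
    # Daily: copy the environment's log files as an incremental backup
    mkdir -p /backup/bdb/logs
    for log in $(db_archive -h /data/bdb-env -l); do
        cp /data/bdb-env/$log /backup/bdb/logs/
    done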
    What other details can you provide about how much activity there is on your system with regard to log file creation, how often a full backup is being taken, etc.?
    Thanks,
    Sandra
