Capturing session-level resource usage in a RAC environment
I have used Tom Kyte's runstats.sql to capture resource usage by a session in a single-instance environment: http://asktom.oracle.com/tkyte/runstats.html
The procedures in the package take before and after snapshots from v$mystat. My question is: how does this change in a RAC environment with more than one instance?
There is an equivalent view, gv$mystat, which has an additional inst_id column. I would appreciate it if somebody could provide some insight.
I wish everyone would stop using V$ objects where GV$ objects exist. Everything you write using V$ is of minimal or no value when run on a RAC cluster, whereas everything written using GV$ will also run on a stand-alone database.
Just convert the code and you should be fine.
At the University of Washington I never even teach the V$ dynamic performance views beyond the concept that they are a legacy of the time before RAC. There are about 107 V$ views that have no global equivalent, but they mostly relate to instance recovery (RMAN).
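For illustration, here is a minimal sketch (not from the original thread) of what the conversion looks like: runstats' snapshot query against v$mystat can be rewritten against gv$mystat, which simply adds the INST_ID column.

```sql
-- Sketch: a GV$-based snapshot of the current session's statistics.
-- GV$MYSTAT still reports only your own session, but carries INST_ID,
-- so the same code runs on both RAC and single-instance databases.
SELECT m.inst_id,
       n.name,
       m.value
FROM   gv$mystat   m,
       gv$statname n
WHERE  n.statistic# = m.statistic#
AND    n.inst_id    = m.inst_id
AND    m.value     != 0
ORDER  BY n.name;
```

Taking this query before and after the workload, and diffing the two result sets, reproduces the runstats approach on either platform.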
Similar Messages
-
Instance name in a non-RAC environment
Hi!
In a non-RAC environment, V$INSTANCE.INSTANCE_NAME does not actually display the name of the instance that was set in the INSTANCE_NAME parameter.
It always displays DB_NAME instead.
Is there any way, in this environment, to get the instance_name of the service the user has connected to?
LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 09:16:25
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
Start Date 28-JAN-2010 09:15:36
Uptime 0 days 0 hr. 0 min. 48 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
Services Summary...
Service "EMCOR" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "EMCOR_XPT" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "RESXDB" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "SRV1" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "SRV2" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
The command completed successfully
And SQL*Plus said:
C:\Documents and Settings\oradba>sqlplus
SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jan 28 09:44:59 2010
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Enter user-name: emcos@emcor_srv2
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
09:45:04 EMCOS@emcor_srv2 >select name from v$database;
NAME
EMCOR
Elapsed: 00:00:00.00
09:45:07 EMCOS@emcor_srv2 >select instance_name from v$instance;
INSTANCE_NAME
emcor
Elapsed: 00:00:00.01
09:45:21 EMCOS@emcor_srv2 >select service_name from v$session where sid=(select unique sid from v$mystat);
SERVICE_NAME
SRV2
Hemant K Chitale wrote:
The documentation on INSTANCE_NAME in the 10gR2 Reference says :
"In a single-instance database system, the instance name is usually the same as the database name."
(this after
"In a Real Application Clusters environment, multiple instances can be associated with a single database service. Clients can override Oracle's connection load balancing by specifying a particular instance by which to connect to the database. INSTANCE_NAME specifies the unique name of this instance.")
This would imply that setting INSTANCE_NAME in non-RAC is ignored. The use of the word "usually" is weak.
Hemant K Chitale
But what does lsnrctl say? It says that it is not weak:
11:33:28 SYS@EMCOR_SRV1 >show parameter instance_name
NAME TYPE VALUE
instance_name string INST0
11:33:36 SYS@EMCOR_SRV1 >host lsnrctl status
LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 11:33:50
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
Start Date 28-JAN-2010 09:15:36
Uptime 0 days 2 hr. 18 min. 14 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
Services Summary...
Service "EMCOR" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "EMCOR_XPT" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "RESXDB" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "SRV1" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
Service "SRV2" has 1 instance(s).
Instance "INST0", status READY, has 1 handler(s) for this service...
The command completed successfully
11:33:50 SYS@EMCOR_SRV1 >select sys_context('USERENV','INSTANCE_NAME') from dual;
SYS_CONTEXT('USERENV','INSTANCE_NAME')
emcor
Elapsed: 00:00:00.00
11:34:42 SYS@EMCOR_SRV1 >select service_name from v$session where sid=sys_context('USERENV','SID');
SERVICE_NAME
SRV1
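As a side note (an addition for reference, not part of the original reply): in 10g both values can also be read directly from the USERENV context, which avoids the join against V$SESSION.

```sql
-- Both the connected instance and the service are exposed as
-- USERENV attributes in 10g:
SELECT sys_context('USERENV', 'INSTANCE_NAME') AS instance_name,
       sys_context('USERENV', 'SERVICE_NAME')  AS service_name
FROM   dual;
```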
Best regards, Sergey -
Oracle Streams on a Rac Environment
Hi
I have some questions about setting up Streams in a RAC environment. I would appreciate a quick response as I need the answers by tomorrow. Any help would be greatly appreciated. Here are the questions:
1> Do we have to create a capture process for each active instance, or will only 1 capture process do?
2> If yes, do they need a separate queue for each one?
3> How will the apply process access multiple capture processes, and how does propagation take place?
4> Can only 2 tables in the source be replicated instead of the entire database?
5> If we use a push job and both the primary and secondary go down, how can we move to the third instance and use it?
6> If the instance goes down, do we have to restart the capture process once again?
7> What is best suited for RAC with respect to Streams: ASM or raw files?
Regards
Shweta
Streams in a 9iR2 RAC environment mines only from archive logs, not online redo logs. This restriction is lifted in 10g RAC. If you choose to go the downstream capture route in 10g, then you can only mine from archive logs in 10gR1.
Having said the above here are my answers:
1> Do we have to create a capture process for each active instance, or will only 1 capture process do?
You can run multiple capture processes, each on a different instance in the RAC. Unless you have a requirement to do so, a single capture process should suffice. The in-memory queue should also be on the same instance that the capture process runs from.
2> If yes, do they need a separate queue for each one?
YES
3> How will the apply process access multiple capture processes, and how does propagation take place?
Propagation is from a source queue to a destination queue. If the destination is a single-instance database, then you can direct propagations for all of your captures into a single apply queue. If the destination is also RAC, then you can run multiple apply processes on each node and apply changes for specific sets of tables. Maintenance would be something to think about here, along with what happens when one node goes down.
4> Can only 2 tables in the source be replicated instead of the entire database?
YES. Streams is flexible enough to let you decide at what level you want to replicate.
5> If we use a push job and both the primary and secondary go down, how can we move to the third instance and use it?
Propagation is, in effect, a push job. There are certain things you need to configure correctly. If that is done, then you can move the entire Streams configuration to any of the surviving nodes.
6> If the instance goes down, do we have to restart the capture process once again?
In 9iR2 you have to restart the Streams processes. In 10g the Streams processes automatically migrate and restart on the new "owning" instance. In both versions, queue ownership is transferred automatically to the surviving instance.
7> What is best suited for RAC with respect to Streams: ASM or raw files?
Streams is independent of the storage system you use. I cannot think of any correlation here. -
Hello all,
I would like to learn more about RAC. I have been reading many books but am still confused about a physical RAC environment. Can someone describe their physical RAC implementation? Thanks
972027 wrote:
I would like to learn more about RAC. I have been reading many books but am still confused about a physical RAC environment. Can someone describe their physical RAC implementation? Thanks
The 3 basic ingredients for RAC:
Servers that will be used as RAC nodes. E.g. x86_64 servers such as those from Oracle, HP, IBM, etc.
Shared storage. This can be a SAN (a la EMC). Or storage servers from a number of vendors (including Oracle, HP and others). You can also build your own storage server: use an x86_64 chassis that supports a large number of drive bays (typically 24+), install Linux on it, and use iSCSI (SCSI over IP) to share the chassis disks over IP with the RAC servers. Part of the decision here is what to use as the I/O fabric layer. Fibre Channel (can be very expensive)? InfiniBand SRP? Ethernet (can be very slow)?
Interconnect. 2 basic choices of 10Gb Ethernet or 40Gb Infiniband. A dedicated switch is needed and a dedicated and private high-speed communication network created between cluster servers. Infiniband is my suggestion for a 3+ node RAC cluster. Infiniband is also used by Oracle for their Exadata Database Machine RAC clusters. It is also used by most of the 500 fastest/biggest computer clusters on this planet.
Putting a RAC together manually is a slow, expensive, and risky process. You need to make sure that all these components work together, that the drivers are robust and configured correctly, and so on. You need to wire this up to work together. You need to ensure h/w redundancy at the storage and Interconnect levels. This is not for the faint-hearted or the technologically inexperienced.
In fact, after building RACs this way, I've had my share of problems and issues with doing it like this. It is far simpler, faster, and less expensive (especially in resources) to instead buy an Oracle Database Appliance for small to medium RAC solutions, or a Database Machine for large RAC solutions. -
How does locking take place in an Oracle RAC environment?
How does locking take place in an Oracle RAC environment?
Suppose a user is updating from one session while the same rows are being selected from another session - how does locking take place in Oracle RAC?
user11936985 wrote:
how does locking take place in an oracle rac environment?
Suppose a user is updating from one session while the same rows are being selected from another session - how does locking take place in Oracle RAC?
In the case of one session updating and the other selecting, there is no locking issue, regardless of whether it's single instance or RAC.
The update will take the appropriate table (TM) and row-level (TX) locks, but the select will not take any locks (unless it's a select for update), so there should be no problem.
Oracle will use read consistency to guarantee that the selected results are self-consistent and consistent with the point in time of the start of the query.
Hope that helps,
-Mark -
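A quick sketch of the behaviour described above (the table and values are illustrative only, using the classic EMP demo table):

```sql
-- Session 1: update without committing; TM and TX locks are taken.
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7369;

-- Session 2: a plain SELECT is never blocked; read consistency
-- returns the pre-update value of SAL.
SELECT sal FROM emp WHERE empno = 7369;

-- Session 2: only a SELECT ... FOR UPDATE would wait here, because
-- it must acquire the same row-level (TX) lock held by session 1.
SELECT sal FROM emp WHERE empno = 7369 FOR UPDATE;
```

The same semantics hold in RAC; Cache Fusion coordinates the blocks and lock state across instances, but the reader is still never blocked by the writer.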
How to measure undo at a session level
Below is what we are trying to do.
We are trying to implement Oracle's table block compression feature.
In doing so, in one of our tests we discovered that the session performing the DML (inserts) generated almost 30x the undo.
We measured this undo using the query below (before the transaction committed).
SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
However, the above is at a transaction level; since the transaction is still not committed, we would lose this value once it is either committed or rolled back. For this reason, we are trying to find an equivalent statistic at a session level.
1. What we are trying to find out is whether an equivalent session-level statistic exists to measure the amount of undo generated.
2. Is the undo generated always in terms of "undo blocks"?
3. When querying v$statname for name like '%undo%' we came across several statistics, the closest one being:
undo change vector size - in bytes?
4. desc test_table;
Name Type
ID NUMBER
sql> insert into test_table values (1);
5. However when we run the query against:
SELECT s.username,sn.name, ss.value
FROM v$sesstat ss, v$session s, v$statname sn
WHERE ss.sid = s.sid
AND sn.statistic# = ss.statistic#
AND s.sid =204
AND sn.name ='undo change vector size'
SID USERNAME NAME BYTES
204 NP4 undo change vector size 492
6. Query against: v$transaction
SELECT a.sid, a.username, used_ublk, used_ublk*8192 bytes
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
SID USED_UBLK BYTES
204 1 8192
What we are trying to understand is:
1. What is the correct statistic to determine how many undo blocks were generated by a particular session?
2. What is the statistic "undo change vector size"? What does it really mean or measure?
Any transaction that generates undo will use undo blocks in multiples of 1, i.e. the minimum allocation on disk is 8KB.
Furthermore, an undo record does not translate to a table row. The undo has to capture changes to indexes, block splits, and other actions. Multiple changes to the same table/index block may be collapsed into one undo record/block, and so on.
Therefore, a transaction that generated 492 bytes of undo would use 8KB of undo space, because that is the minimum allocation.
You need to test with larger transactions.
SQL> update P_10 set col_2='ABC2' where mod(col_1,10)=0;
250000 rows updated.
SQL>
SQL> @active_transactions
SID SERIAL# SPID USERNAME PROGRAM XIDUSN USED_UBLK USED_UREC
143 542 17159 HEMANT sqlplus@DG844 (TNS V1-V3) 6 5176 500000
Statistic : db block changes 1,009,903
Statistic : db block gets 1,469,623
Statistic : redo entries 502,507
Statistic : redo size 117,922,016
Statistic : undo change vector size 41,000,368
Statistic : table scan blocks gotten 51,954
Statistic : table scan rows gotten 10,075,245
Hemant K Chitale
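To answer the session-level question directly: a before/after delta of 'undo change vector size' can be taken runstats-style (a sketch, not from the original reply; session statistics survive the COMMIT, unlike V$TRANSACTION.USED_UBLK):

```sql
-- Snapshot before the DML under test:
SELECT s.value
FROM   v$mystat s, v$statname n
WHERE  n.statistic# = s.statistic#
AND    n.name = 'undo change vector size';

-- ... run the DML here, commit or roll back as needed ...

-- Snapshot again with the same query after the DML; the difference
-- between the two values is the undo, in bytes, generated by the
-- work in between, regardless of transaction state.
```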
Setting Session level parameter in FORMS 10g
Hi folks,
I want to set up a session-level setting for NLS_DATE_FORMAT in Forms 10g via the environment settings, because I can't change this setting at the database level: different client applications (i.e. .NET, Forms 10g and SQL*Plus) use different settings.
So, I want to set this NLS_DATE_FORMAT at the SESSION level in Forms 10g.
Can I include this in default.env? If yes, how do I include it in the .env file?
Edited by: user12212962 on Jul 23, 2010 7:18 PM
No, I want to set the session parameter for the date format. This is because I'm executing an Oracle stored procedure from Forms, and this procedure performs some logic based on a date value.
In this procedure I have used all variables as the DATE datatype only, and I can't change this procedure because some other client applications (Java, .NET, Oracle BI and scheduled jobs) use the same procedure. All of these applications work fine; even when I use Forms 6i it works.
But when we use Forms 10g, the date is treated as DD-MON-RR, while all other client applications use DD-MON-YYYY via the session-level setting, and the database level also has the same format, i.e. DD-MON-YYYY. I verified this by logging the NLS value to an audit table when executing through Forms 10g, the Java application and .NET.
Maybe somewhere this setting is being changed to DD-MON-RR for Forms 10g. Is there any setting at the Oracle Application Server
(iAS) level for this parameter?
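One commonly used workaround (a sketch, an assumption rather than something confirmed in the thread) is to force the format for the Forms session itself, for example from a WHEN-NEW-FORM-INSTANCE trigger using the FORMS_DDL built-in:

```sql
-- Illustrative only: set the session's date format when the form
-- starts, so the stored procedure sees DD-MON-YYYY regardless of
-- what default.env or NLS_LANG implies for the Forms runtime.
FORMS_DDL('ALTER SESSION SET NLS_DATE_FORMAT = ''DD-MON-YYYY''');
```

This changes only the Forms session, so the other client applications are unaffected.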
Database Table Resource via Oracle RAC connection Manager
Hello,
Our current IDM 8.1 syncs off a HR Database Table using the Database Table connector.
Our DBAs are moving the view into an Oracle RAC environment.
Question: Is there a way for us to connect and sync off our DB table through an Oracle RAC Connection Manager?
I understand that for the IDM repository, this is possible.
jdbc:oracle:thin:@(DESCRIPTION =
(ADDRESS_LIST =
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = server1.ser.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = server2.ser.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db.com)
)
)
Servers server1 and server2 are Oracle Connection Managers.
Thank you for your input,
-RC
We moved the repository to Oracle RAC recently. The syntax to switch to the new database cluster was something like this:
lh setRepo -tOracle -ujdbc:oracle:thin:@//dbcluster1.domain:1521/service_idm.oradb.domain -Uwaveset -Pwachtwoord
For the 'lh setRepo' command the old database had to be up, and we had to use the old ojdbc5.jar. After that we switched to ojdbc6.jar.
One of our resources is a database table with HR data in the same database. The following Database Access Parameters work for us:
Database Type: Oracle
JDBC Driver: oracle.jdbc.driver.OracleDriver
JDBC URL Template: jdbc:oracle:thin:@//%h:%p/%d
Host : dbcluster1.domain
Database: service_idm.oradb.domain
Port: 1521
Greetings,
Marijke -
Calculating total memory in oracle RAC environment
I have to calculate the total memory in a RAC environment.
For the shared and buffer pools I execute SHOW SGA.
For UGA and PGA I execute statements that give two different values.
These are my two methods for calculating total memory in an Oracle RAC environment.
Why do I get very different PGA values from these 2 statements?
first stat
with vs as (
select 'PGA: ' pid
,iid
,session_pga_memory + session_uga_memory bytes
from (select inst_id iid
,(select ss.value
from gv$sesstat ss
where ss.sid = s.sid
and ss.inst_id = s.inst_id
and ss.statistic# = 20) session_pga_memory
,(select ss.value
from gv$sesstat ss
where ss.sid = s.sid
and ss.inst_id = s.inst_id
and ss.statistic# = 15) session_uga_memory
from gv$session s)
union all
select 'SGA: ' || name pid
,s.inst_id iid
,value bytes
from gv$sga s
)
select distinct iid, pid, sum(bytes) over (partition by iid, pid) bytes from vs
IID PID BYTES
1 PGA: 196764792 <=====
1 SGA: Database Buffers 318767104
1 SGA: Fixed Size 733688
1 SGA: Redo Buffers 811008
1 SGA: Variable Size 335544320
2 PGA: 77159560 <=====
2 SGA: Database Buffers 318767104
2 SGA: Fixed Size 733688
2 SGA: Redo Buffers 811008
2 SGA: Variable Size 335544320
second stat
with vs as (
select 'PGA: ' pid
,p.inst_id iid
,p.pga_alloc_mem bytes
from gv$session s
,gv$sesstat pcur
,gv$process p
where pcur.statistic# in ( 20 -- = session pga memory
,15 -- = session uga memory
)
and s.paddr = p.addr
and pcur.sid = s.sid
and pcur.INST_ID = s.INST_ID
and pcur.INST_ID = p.INST_ID
union all
select 'SGA: ' || name pid
,s.inst_id iid
,value bytes
from gv$sga s
)
select distinct iid, pid, sum(bytes) over (partition by iid, pid) bytes from vs
IID PID BYTES
1 PGA: 342558636 <=====
1 SGA: Database Buffers 318767104
1 SGA: Fixed Size 733688
1 SGA: Redo Buffers 811008
1 SGA: Variable Size 335544320
2 PGA: 186091416 <=====
2 SGA: Database Buffers 318767104
2 SGA: Fixed Size 733688
2 SGA: Redo Buffers 811008
2 SGA: Variable Size 335544320
I'm sorry, but it is not clear to me.
- From v$session (the 1st statement) I have
nearly 196MB of PGA mem on instance 1
and
nearly 77MB of PGA mem on instance 2
- From v$process (the 2nd statement) I have
nearly 342MB of PGA mem on instance 1
and
nearly 186MB of PGA mem on instance 2
then...
(342+186) - (196+77) = nearly 255MB of memory allocated by Oracle processes but free?
If I want to calculate the total amount of memory allocated by Oracle, is the 2nd statement, which queries v$process, more correct? Is that true? -
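One caveat with both statements above: STATISTIC# values 15 and 20 are version-dependent. A more robust variant (a sketch) resolves the statistics by name via GV$STATNAME:

```sql
-- Sum session PGA/UGA per instance, looking statistics up by name
-- rather than by hard-coded STATISTIC# (which changes across releases).
SELECT ss.inst_id,
       sn.name,
       SUM(ss.value) AS bytes
FROM   gv$sesstat  ss,
       gv$statname sn
WHERE  sn.statistic# = ss.statistic#
AND    sn.inst_id    = ss.inst_id
AND    sn.name IN ('session pga memory', 'session uga memory')
GROUP  BY ss.inst_id, sn.name
ORDER  BY ss.inst_id, sn.name;
```

Note that V$SESSTAT counts memory from the session's point of view, while V$PROCESS.PGA_ALLOC_MEM counts what the server process has allocated from the OS, so the second figure is expected to be larger.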
Gurus,
what are the advantages, if any, of running ASM in a non-RAC environment?
Thanks.
Any RAID (that actually provides multiple disks in an LVM) will nullify the performance gains of raw that are associated with disk head contention.
Any disk subsystem with front-end RAM cache will
nullify the rest of the performance gains of raw
disk.
Any 'RAID-5 is good' argument under a database assumes a large and efficient cache.
Unfortunately, we can't just discount RAID-5, since it is far too cost-efficient an option these days, with 10, 20, 30+ TB databases. Plus, the storage vendors actually make it efficient enough that it performs almost as well as RAID 1+0.
Multiple disks on the storage subsystem and RAM caches (for reads and writes) are still important and valuable performance enhancements. Remember that your storage subsystem is much more efficient at this than your database or server (since it's designed to do it!). True, it doesn't have knowledge of the database files, their content, and their usage, but you do, and can therefore intelligently configure the subsystem and place the data files.
ASM does nothing more intelligent than stripe the data across all of the available disks. The best part about it is the ability to dynamically grow the groups. The problem with it is that it thwarts the read-ahead cache of the storage subsystem. And, while Oracle will intelligently 'read-ahead', it can't optimize where the heads are on the disk, or take into consideration the 10 other servers accessing the same storage subsystem.
The problem with ASM doing the striping is that, for large databases, you will typically have hundreds of disks underneath the covers. If you gave the OS and then Oracle that many devices, what do you think would happen? It would step all over itself and seriously impair your ability to access the devices in parallel. Whereas, if you group 4 or 8 devices together, the storage subsystem will access them together in parallel for the OS.
So, don't discount your storage box... But yeah, use RAID 1+0 rather than RAID-5 (4,3) when possible! -
Are multiple PHYSICAL databases supported in one Oracle 10g RAC environment
Hi alls,
As per Metalink Note 220970.1, Oracle RAC also supports different databases in one cluster installation. RAC handles any resource as a service, and it doesn't matter whether these services belong to only one database or to different databases. You install the Oracle Clusterware only once, and you create any service you want to make highly available.
So it is technically possible to install multiple databases in the same Oracle RAC environment.
My doubts are related to SAP: does SAP certify such environments? Could one SAP R/3 database and one SAP BW database co-exist in a single Oracle RAC environment? Or could an SAP database co-exist with a non-SAP database (an already existing application) in the same Oracle RAC environment?
Any comment would be highly appreciated.
Silvio Brandani
We have ECC, BI, PI and Portal running on the same Oracle RAC environment. We have one RAC for QA systems and one for PRD systems. We are not running any SAP applications together with non-SAP applications on RAC. Keeping SAP and non-SAP apps separate would be the way to go.
Hope this helps.
Thanks,
Naveed -
Hi all,
I have an Oracle 11g installation with a database setup as follows: NLS_COMP=BINARY, NLS_SORT=BINARY.
After playing a bit at the session level with NLS_COMP=LINGUISTIC and NLS_SORT=BINARY_CI, I persisted them at the instance level via ALTER SYSTEM with SCOPE=SPFILE.
Bounced the database and voila, when I query nls_instance_parameters it reflects my changes.
Problem is, my parameters are not applied to my session: in fact, if I query nls_session_parameters, both are still set to BINARY. (Note: using sqlplus).
The documentation (http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/create006.htm) states:
"A new session inherits parameter values from the instance-wide values."
Am I missing something here? Is sqlplus somehow overriding the instance-level parameters? (I know sql developer may, depending on your options, hence I went back to basics...).
Any help appreciated!
Thanks in advance,
Paolo
Hi Sergiusz,
Thank you again for your reply.
In my registry NLS_LANG is set to its default value (AMERICAN_AMERICA.WE8MSWIN1252), so I assume that unless I set anything specific in my environment, NLS_COMP and NLS_SORT are affected by that and effectively reset to BINARY?
If this is the case, then I'm struggling to understand the purpose of setting them at the instance level, given that, as you said, NLS_LANG shouldn't be removed as an environment variable and it seems to override them.
Along the same lines, if I understand this correctly, should I have no control over the environment that my application runs in (e.g. a web application running in a shared IIS app pool), then the only option left is (re)setting my variables every time I establish a connection with the Oracle db, thereby starting a new session?
Really appreciate your help.
Cheers,
Paolo -
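If resetting the variables on every connection is undesirable, one option (an assumption on my part, not something suggested in the thread) is a database-level logon trigger that enforces the settings for every new session, overriding whatever the client's NLS_LANG implies:

```sql
-- Sketch: force the desired NLS comparison/sort behaviour for every
-- session at logon, regardless of the client environment.
CREATE OR REPLACE TRIGGER set_session_nls
AFTER LOGON ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_COMP = LINGUISTIC';
  EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_SORT = BINARY_CI';
END;
/
```

The trade-off is that this applies to all applications connecting to the database, so it only fits when every client wants the same behaviour.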
Manually Setting up a Standby Database for RAC Environment
Hi,
I am a Junior DBA so apologies if this sounds like a silly question.
We have a production 11g Extended RAC environment set up. Management have asked for a separate single-instance database to be set up as a standby DB on a separate 'cold' server, in case we lose connection to both sides of the RAC completely. I read up on Oracle Data Guard and presented this as a solution, but they seem adamant about manually going in and copying the latest backup and archive logs over to the standby database.
Can this be done? I mean, can ASM managed backup files and archive logs from one database be simply copied out of the backup directory and imported into a completely separate database as easy as that?
We are using ASM to manage the data files on the RAC. My understanding is that we can't manually access the files at an OS level when using ASM, but maybe I am wrong. Any help or opinions on this would be greatly appreciated.
Rgs,
Rob
Hi,
Can this be done? I mean, can ASM-managed backup files and archive logs from one database be simply copied out of the backup directory and imported into a completely separate database, as easy as that?
Yes, but it depends on the DB version.
What is the DB version?
From 11g you have the cp command, so there is a possibility of copying generated archive logs to the standby location so that you can apply them.
In 10g there is no such option. You need to create a standby database with automatic shipping.
Thanks -
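Once an archive log has been copied out to the standby host, it can be registered and applied manually; a sketch with hypothetical paths:

```sql
-- Register the manually copied archive log with the standby control
-- file, then resume managed recovery so it gets applied.
ALTER DATABASE REGISTER LOGFILE '/standby_arch/thread_1_seq_100.arc';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```

This is essentially what automatic shipping does for you; doing it by hand means someone must copy and register every log, which is why Data Guard is the usual recommendation.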
Steps to do switchover / switchback in RAC environment
Hi folks,
I have a setup with a 2-node RAC primary and a 2-node RAC Data Guard standby on 10.2.0.4.0. The Data Guard setup is working fine: it uses standby redo log groups with managed recovery, and there is no problem with transferring archives and applying them on the standby.
Now I want to do a switchover/switchback between the RAC primary and standby. I am familiar with single-instance switchover and switchback but have never done one in a RAC environment. Can anybody please elaborate the steps or suggest a link for me?
regards,
manish
Hi guys,
Today I performed a RAC switchover/switchback for a 2-node primary with a 2-node standby on OEL. I expected some issues, but it was totally smooth. Here are the steps, so they will be useful to you. This is also my first contribution to the Oracle forums.
DB Name DB Unique Name Host Name Instance Name
live live linux1 live1
live live linux2 live2
live livestdby linux3 livestdby1
live livestdby linux4 livestdby2
Verify that each database is properly configured for the role it is about to assume and the standby database is in mounted state.
(Verify all Dataguard parameters on each node for Primary & Standby)
Like,
Log_archive_dest_1
Log_archive_dest_2
Log_archive_dest_state_1
Log_archive_dest_state_2
Fal_client
Fal_server
Local_listener
Remote_listener
Standby_archive_Dest
Standby_archive_management
service_names
db_unique_name
instance_name
db_file_name_convert
log_file_name_convert
Verify that both Primary RAC & Dataguard RAC are functioning properly and both are in Sync
On Primary,
Select thread#,max(sequence#) from v$archived_log group by thread#;
On Standby,
Select thread#,max(sequence#) from v$log_history group by thread#;
Before performing a switchover from a RAC primary, shut down all but one primary instance (they can be restarted after the switchover has completed).
./srvctl stop instance -d live -i live1
Before performing a switchover or a failover to a RAC standby, shut down all but one standby instance (they can be restarted after the role transition has completed).
./srvctl stop instance -d live -i livestdby1
On the primary database initiate the switchover:
alter database commit to switchover to physical standby with session shutdown;
Shutdown former Primary database & Startup in Mount State.
Shut immediate;
Startup mount;
select name,db_unique_name, log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
Set log_archive_dest_state_2 to DEFER:
alter system set log_archive_dest_state_2='DEFER' sid='*';
On the (old) standby database,
select name,log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
On the (old) standby database switch to new primary role:
alter database commit to switchover to primary;
shut immediate;
startup;
On new Primary database,
select name,log_mode,open_mode,controlfile_type,switchover_status,database_role from v$database;
Set log_archive_dest_state_2 to ENABLE:
alter system set log_archive_dest_state_2='ENABLE' sid='*';
Add tempfiles in New Primary database.
Do some archivelog switches on new primary database & verify that archives are getting transferred to Standby database.
On new primary,
select error from v$archive_Dest_status;
select max(sequence#) from v$archived_log;
On new Standby, Start Redo Apply
alter database recover managed standby database using current logfile disconnect;
Select max(sequence#) from v$log_history; (should be matching with Primary)
Now start the RAC database services (both the primary, in open mode, and the standby, in mount mode).
On new Primary Server.
./srvctl start instance -d live -i livestdby2
Verify using ./crs_stat -t
Check that database is opened in R/W mode.
On new Standby Server.
./srvctl start instance -d live -i live2 -o mount
Now add TAF services on new Primary (former Standby) Server.
By Command Prompt,
./srvctl add service -d live -s srvc_livestdby -r livestdby1,livestdby2 -P BASIC
OR
By GUI,
dbca -> Oracle Real Application Clusters database -> Service Management -> select database -> add services, details (Preferred / Available), TAF Policy (Basic / Preconnect) -> Finish
Start the services,
./srvctl start service -d live
Verify the same,
./crs_stat -t
Perform TAF testing, to make sure Load Balancing & Failover.
regards,
manish
Email: [email protected]
Edited by: Manish Nashikkar on Aug 31, 2010 7:41 AM
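One extra check worth adding before the "commit to switchover" step in the procedure above (a sketch, not part of the original write-up): confirm that each database reports a switchover state that allows the role change.

```sql
-- On the primary, before issuing the switchover command:
SELECT switchover_status FROM v$database;
-- Expect TO STANDBY; SESSIONS ACTIVE is also acceptable when using
-- the WITH SESSION SHUTDOWN clause, as in the steps above.
```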
Transaction Recovery within an Oracle RAC environment
Good evening everyone.
I need some help with Oracle 11gR1 RAC transaction-level recovery issues. Here's the scenario.
We have a three(3) node RAC Cluster running Oracle 11g R1. The Web UI portion of the application connects through WLS 9.2.3 with connection pooling set. We also have a command-line/SQL*Developer component that uses a TNSNAMES file that allows for both failover and load balancing. Within either the UI or the command line portion of the application, a user can run a process by which invokes one or more PL/SQL Packages to be invoked. The exact location of the physical to the database is dependent on which server is chosen from either the connection pooling or the TNSNAMES.ORA Load Balancing option.
In the normal world, the process executes and all is good. The status of the execution of this process is updated by the packages once completed. The problem we are encountering is when an Oracle instance fails. Here's where I need some help. For application-level (transaction-level) recovery, the database instances are first recovered by the database background processes, and then users must determine which processes were in flight and either re-execute them (if restart processing is part of the process) or remove any changes and restart from scratch. Given that the database instance does not record which processes are "in flight", it is the responsibility of the application to perform its own recovery processing. Is this still true?
If an instance fails, are "in flight" transactions/connections moved to other instances in the Grid/RAC environment? I don't think this is possible, but I don't remember whether this was accomplished through a combination of application and database server features that provide feedback to each other. How is the underlying application notified of the change if such an issue occurs? I remember something similar to this in older versions of Oracle, but I cannot remember what it was called.
Any help or guidance would be great as our client is being extremely difficult in pressing this issue.
Thanks in advance
Stephen Karniotis
Project Architect - Compuware
[email protected]
(248) 408-2918
You have not indicated whether you are using TAF or FCF ... that would be the first place to start.
My recommendation would be to let Oracle roll back the database changes and have the application resubmit the most recent work.
If the application knows what it did since the last "COMMIT" then you should be fine with the possible exception of variables stored
in packages. Depending on packages retaining values is an issue best solved with PRAGMA SERIALLY_REUSABLE ... in other words
not using the retention feature.
Maybe you are looking for
-
Error: "Error while creating table 'EDISEGMENT'"
Good morning everyone. When I try to activate a Transfer Rule, it returns the following error: "Error while creating table 'EDISEGMENT'". I can't figure out what this could be, and the rule won't activate at all. Has anyone run into this problem before?
-
Scrobbler no longer works with iTunes.
Why is this?
-
Item text is not coming in production server
hello abap gurus, i developed a smart forms where i am using item text field insted of master data records. using that while i run programe on developement server. the values coming are right but when i run that programe after moving it on productio
-
SQL not working in 10.2.0.3
The following select statement works fine in Oracle versions 9.2.0.8 and 10.2.0.1, but it throws an "ORA-00904: T.A: invalid identifier" error in Oracle 10.2.0.3 and 10.2.0.4 environments. SQL: create table tmp (a varchar2(10)); insert into tmp
-
Adodb.recordset object is not getting created
Hi, I am unable to get the adodb.recordset object created. Please help me get it created. Thanks in advance.