Logical Standby Scenario..!!

Hello,
We have a Data Guard setup with primary, physical standby, and logical standby databases located on three different machines.
Oracle Version : 10.2.0.1
OS : RHEL 4 on (x86 32-Bit)
The initialization parameters for the three instances are as follows.
Primary Database Parameters:
*.db_unique_name='betapri'
*.fal_client='BETAPRI'
*.fal_server='BETAPHYSTDBY'
*.instance_name='oracle'
*.log_archive_config='DG_CONFIG=(betapri,betaphystdby,betalogstdby)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/archive/oracle VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=betapri MANDATORY'
*.log_archive_dest_2='SERVICE=BETAPHYSTDBY LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=betaphystdby'
*.log_archive_dest_3='SERVICE=BETALOGSTDBY LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=betalogstdby'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
*.standby_archive_dest='/u01/app/oracle/archive/oracle'
*.standby_file_management='AUTO'
*.remote_login_passwordfile='EXCLUSIVE'
Physical Standby Database Parameters:
*.db_unique_name='betaphystdby'
*.fal_client='BETAPHYSTDBY'
*.fal_server='BETAPRI'
*.instance_name='oracle'
*.log_archive_config='DG_CONFIG=(betapri,betaphystdby,betalogstdby)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/archive/oracle VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=betaphystdby'
*.log_archive_dest_2='SERVICE=BETAPRI LGWR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=betapri'
*.log_archive_dest_3='SERVICE=BETALOGSTDBY VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=betalogstdby'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
*.standby_archive_dest='/u01/app/oracle/archive/oracle'
*.standby_file_management='AUTO'
*.remote_login_passwordfile='EXCLUSIVE'
Logical Standby Database Parameters:
*.db_unique_name='betalogstdby'
*.instance_name='oracle'
*.log_archive_config='DG_CONFIG=(betapri,betaphystdby,betalogstdby)'
*.log_archive_dest_1='LOCATION=/u01/app/oracle/archive/betalogstdby VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=betalogstdby'
*.log_archive_dest_2='LOCATION=/u01/app/oracle/archive/oracle VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=betalogstdby'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.standby_archive_dest='/u01/app/oracle/archive/oracle'
*.standby_file_management='AUTO'
*.remote_login_passwordfile='EXCLUSIVE'
The whole Data Guard setup works properly with this configuration. But just for testing purposes I performed a FAILOVER and made our physical standby the new primary database; using Flashback, I made my old primary the new physical standby. I haven't made any changes to the logical standby database, since based on my init parameters I expected the logical standby to remain in sync and keep applying the logs.
After this scenario, though, the logical standby seems to be messed up. My new primary database does ship archives to the logical standby, but the logical standby's LogMiner does not apply the changes to the database. Once I issue "ALTER DATABASE START LOGICAL STANDBY APPLY", the following message appears in the alert log: "Fatal Error: LogMiner processed beyond new branch scn."
Tue Sep 29 18:55:14 2009
alter database start logical standby apply
Tue Sep 29 18:55:14 2009
ALTER DATABASE START LOGICAL STANDBY APPLY (oracle)
Tue Sep 29 18:55:14 2009
No optional part
Attempt to start background Logical Standby process
LSP0 started with pid=16, OS id=18308
LOGSTDBY status: ORA-16111: log mining and apply setting up
Tue Sep 29 18:55:14 2009
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
Tue Sep 29 18:55:14 2009
Fatal Error: LogMiner processed beyond new branch scn.
LOGSTDBY status: ORA-01346: LogMiner processed redo beyond specified reset log scn
Tue Sep 29 18:55:14 2009
Errors in file /u01/app/oracle/admin/oracle/bdump/oracle_lsp0_18308.trc:
ORA-01346: LogMiner processed redo beyond specified reset log scn
LOGSTDBY status: ORA-16222: automatic Logical Standby retry of last action
LOGSTDBY status: ORA-16111: log mining and apply setting up
Tue Sep 29 18:55:14 2009
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
Tue Sep 29 18:55:14 2009
Fatal Error: LogMiner processed beyond new branch scn.
LOGSTDBY status: ORA-01346: LogMiner processed redo beyond specified reset log scn
Tue Sep 29 18:55:14 2009
Errors in file /u01/app/oracle/admin/oracle/bdump/oracle_lsp0_18308.trc:
ORA-01346: LogMiner processed redo beyond specified reset log scn
Tue Sep 29 18:55:14 2009
ORA-16210 signalled during: alter database start logical standby apply...
Note: I have Flashback enabled on only one database, which was my old primary.
Any comments, help, or suggestions would be great.
Thanks - HP

Appreciate your kind reply, Robert.
Is re-instantiating the logical standby the only option left for me? Or what if I enable Flashback on the logical standby, flash it back to the STANDBY_BECAME_PRIMARY_SCN, and then try to start logical standby apply? Will this work in my scenario? (A rough sketch of what I mean follows.)
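Roughly, the sequence I have in mind is the one below; this is untested, and the SCN value would come from the new primary:
-- On the new primary: find the SCN at which it became primary
SELECT STANDBY_BECAME_PRIMARY_SCN FROM V$DATABASE;
-- On the logical standby (assuming Flashback had already been enabled there):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 1234567;   -- illustrative value, taken from the query above
ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE START LOGICAL STANDBY APPLY;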
Any suggestions or comments are most welcome.
Thanks - HP

Similar Messages

  • Best practice on using Flashback and Logical Standby

    Hello,
I'm testing a fail-back scenario where I first need to activate a logical standby, then do some dummy transactions before I flash back this DB and resume the redo apply. Here is what the steps look like (a SQL sketch follows the list):
    1)     Ensure logical standby is in-sync with primary
    2)     Enable flashback on standby
    3)     Create a flashback guaranteed restore point
    4)     Defer log shipping from primary
    5)     Activate the logical standby so it’s fully open to read-write
    6)     Dummy activities against the standby (which is now fully open)
    7)     Flashback the database to the guaranteed checkpoint
    8)     Resume log shipping on primary
    9)     Resume redo apply on secondary
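    As a hedged sketch (the restore point name and archive destination number are illustrative), steps 2 through 7 might look like this in SQL*Plus:
    -- Steps 2-3, on the standby: enable Flashback and create a guaranteed restore point
    ALTER DATABASE FLASHBACK ON;
    CREATE RESTORE POINT before_activation GUARANTEE FLASHBACK DATABASE;
    -- Step 4, on the primary: defer log shipping (destination number illustrative)
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='DEFER';
    -- Step 5, on the standby: open it fully read-write
    ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;
    -- Step 7, on the standby (mounted): flash back to the restore point
    FLASHBACK DATABASE TO RESTORE POINT before_activation;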
    In the end, I can see that log shipping is happening, but the logical standby does not apply any of it, and there is no error in the alert log on the standby side. The following query may explain why the standby is idle:
    SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN STATUS
    COORDINATOR ORA-16240: Waiting for log file (thread# 2, sequence# 0)
    ORA-16240: Waiting for log file (thread# string, sequence# string)
    Cause: Process is idle waiting for additional log file to be available.
    Action: No action necessary. This informational statement is provided to record the event for diagnostic purposes.
    I don't understand why it's looking for sequence# 0 after the flashback.
    Thanks for the help.

    Hello;
    I hesitate to answer your question because you are not doing a good job of keeping the forum clean:
    Total Questions: 13 (13 unresolved)
    Please consider closing some of your old answered questions and rewarding those who helped you.
    No action necessary.
    Do you really have a thread 2? ( Redo thread number )
    Quick check
    select applied_scn, latest_scn from v$logstdby_progress;
    Use the DBA_LOGSTDBY_LOG view. If you don't have a thread 2, then the sequence# is meaningless.
    COLUMN DICT_BEGIN FORMAT A10;
    SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
    TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# AS THR# FROM DBA_LOGSTDBY_LOG
    ORDER BY SEQUENCE#;
    Logical standby questions are difficult; there are not a lot of them out there, I'm thinking.
    Check
    http://docs.oracle.com/cd/E14072_01/server.112/e10700/manage_ls.htm
    "Waiting On Gap State" ( However I still believe you don't have a 2nd thread# )
    OR
    http://psilt.wordpress.com/2009/04/29/simple-logical-standby/
    Best Regards
    mseberg
    Edited by: mseberg on Apr 26, 2012 5:13 PM

  • Logical Standby and Streams

    Hi,
    I am considering different replication scenarios for our future system and have a (maybe stupid :) question. Is it technically possible to create a logical standby database (Data Guard) and then replicate it further using Streams?
    Regards,
    Tim

    Yes, this is possible with a logical standby, but you need mechanisms to take care of role transitions in case of failover. See the Oracle document http://download.oracle.com/docs/cd/E14072_01/server.112/e10700/whatsnew.htm (keyword search "streams").
    Thanks
    http://swervedba.wordpress.com/
    Edited by: swervedba on May 30, 2011 8:48 PM

  • Scheduled job not getting executed on a logical standby

    Hello,
    We have created a job (through the dbms_scheduler API). The job is enabled and shows up in the SCHEDULERJOBS view as well.
    However, the job does not get executed. I looked into the following views and found no relevant entry for the aforesaid job:
    select * from all_scheduler_job_log
    select * from dba_scheduler_running_jobs
    select * from DBA_SCHEDULER_JOB_RUN_DETAILS order by log_date desc
    Is there any limitation preventing us from executing scheduled jobs on a logical standby database? If I execute the relevant program (the one configured to run as a job in this scenario) as an individual procedure from SQL*Plus, it executes successfully, implying there are no errors/problems in the subprogram that the job invokes.
    Appreciate your thoughts in this regard.
    Thanks.

    Hi Justin,
    Thanks for your response.
    As per the app design, the job invokes a stored program (mapping to a stored procedure in the standby DB itself) that reads data from the standby and populates the relevant tables/entities in another database (a third DB, neither primary nor standby) that acts as a repository. No write operations are performed on the standby.
    So I have two doubts:
    -- Can scheduled jobs execute on a logical standby DB [Oracle release 10g(R2)]?
    I was going through a few of the Oracle docs, and it is mentioned that this is a known limitation in the 10gR2 release that has been corrected in 11g: there is now a database_role attribute that needs to be set to 'LOGICAL STANDBY' if you need to execute a job on the standby. However, it is available only from 11g onwards (a sketch follows this list).
    -- If there is no workaround for the above-mentioned problem in the 10gR2 release:
    Then we may have to schedule a job from the third DB instance that invokes the program residing on the standby DB. Can we have a scheduled job that executes a program mapping to a remote stored procedure instead of a local one?
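    For the 11g database_role attribute mentioned above, a hedged sketch (the job name is hypothetical):
    BEGIN
      DBMS_SCHEDULER.SET_ATTRIBUTE(
        name      => 'STANDBY_REPORT_JOB',  -- hypothetical job name
        attribute => 'database_role',
        value     => 'LOGICAL STANDBY');
    END;
    /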
    Appreciate your thoughts.
    Thanks

  • When to use Real Time Apply for Logical standby..!!

    Hello All,
    I have been trying many ways to speed up archiving on the primary and improve SQL Apply on the logical standby, but we still see about 45-50 minutes of delay between the primary and the logical standby.
    We want our transactions applied on the logical standby within a couple of minutes, which I guess won't be possible in async mode.
    That's why I am planning to implement real-time apply between the primary and the logical standby.
    Now, since our databases are far away from each other (the primary is in the US and the logical standby is in India), would real-time apply be recommended in such a scenario? And if implemented, would it affect primary DB performance?
    Also, if there is some packet loss or a network hitch, will the primary retry and keep the logical DB in sync?
    Any help or suggestions would be great.
    Thanks.

    Yes, real-time apply is recommended in your scenario.
    However, due to the geographical distance between your primary and standby, I would suggest keeping your standby in its current mode (maximum performance, ASYNC). It would not affect the performance of the primary.
    As long as you set the FAL parameters, configure tnsnames properly, and ensure a proper deletion policy for archivelog cleanup on the primary (so that logs are not deleted before shipping, if need be), you shouldn't find any problem with primary and standby syncing.
    Good luck.
    Cheers.
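    For reference, a minimal sketch of enabling real-time apply on a logical standby (the group number and file path are illustrative); real-time apply reads from standby redo logs, so those must exist first:
    -- On the logical standby: add standby redo logs sized like the online logs
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/app/oracle/oradata/srl04.log') SIZE 50M;
    -- Restart SQL Apply in real-time mode
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;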

  • ORA-01291: missing logfile when flashing back a primary DB into a logical standby

    OS: Solaris 10 and Windows vista
    Oracle version : 10.2.0.4.0 Enterprise Edition and 10.2.0.3.0 Enterprise Edition
    We are getting ORA-01291: missing logfile when flashing back a failed primary DB into a logical standby.
    We are following the procedure below for failover and flashback with a logical standby.
    The primary and standby database names are as below.
    primary db_name primdb
    standby db_name logicdb
    failover
    From primdb:
    shut abort
    From logicdb:
    select applied_scn,newest_scn from dba_logstdby_progress;
    alter database stop logical standby apply;
    alter database activate logical standby database;
    Flashing Back a Failed Primary Database into a Logical Standby Database
    We are following instructions from below link.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#i1050060
    SYS@logicdb> SELECT APPLIED_SCN AS FLASHBACK_SCN FROM V$LOGSTDBY_PROGRESS;
    FLASHBACK_SCN
    302330
    1 row selected.
    SYS@logicdb> SELECT file_NAME FROM DBA_LOGSTDBY_LOG WHERE NEXT_CHANGE# > (SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS
              WHERE NAME = 'STANDBY_BECAME_PRIMARY_SCN') AND FIRST_CHANGE#<=302330;
    FILE_NAME
    /logs/app/oracle/flash_recovery_area/LOGICDB/archivelog2/logicdb_1_12_729695607.arc
    Note: We have copied the above-mentioned file to the primary archive destination, i.e. /logs/app/oracle/flash_recovery_area/PRIMDB/archivelog.
    SYS@primdb> startup mount
    ORACLE instance started.
    Total System Global Area 1073741824 bytes
    Fixed Size 2046056 bytes
    Variable Size 264243096 bytes
    Database Buffers 801112064 bytes
    Redo Buffers 6340608 bytes
    Database mounted.
    SYS@primdb> FLASHBACK DATABASE TO SCN 302330;
    Flashback complete.
    SYS@primdb> ALTER DATABASE OPEN RESETLOGS;
    Database altered.
    SYS@primdb> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY logicdb;
    Database altered.
    SYS@primdb> select type,high_scn,status from v$logstdby;
    TYPE HIGH_SCN
    STATUS
    COORDINATOR
    ORA-01291: missing logfile
    Primary database init.ora parameters are as below
    *.db_file_name_convert=('/export/oracle/oradata/primdb/','/export/oracle/oradata/logicdb/')
    *.db_name='primdb'
    *.instance_name=primdb
    *.db_unique_name=primdb
    *.service_names=primdb
    *.db_recovery_file_dest='/logs/app/oracle/flash_recovery_area'
    *.fal_client='LOGICDB'
    *.fal_server='PRIMDB'
    *.log_archive_dest_1='LOCATION=/logs/app/oracle/flash_recovery_area/PRIMDB/archivelog/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primdb'
    *.log_archive_dest_2='SERVICE=logicdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=logicdb'
    *.log_archive_dest_3='LOCATION=/logs/app/oracle/flash_recovery_area/PRIMDB/archivelog2/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) DB_UNIQUE_NAME=primdb'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_dest_state_3='DEFER'
    *.log_archive_format='primdb_%t_%s_%r.arc'
    *.log_archive_max_processes=4
    *.log_file_name_convert=('/export/oracle/oradata/primdb/','/export/oracle/oradata/logicdb/')
    *.standby_file_management='AUTO'
    *.log_archive_config='dg_config=(primdb,logicdb)'
    Standby database init.ora parameters are as below
    *.db_file_name_convert=('/export/oracle/oradata/primdb/','/export/oracle/oradata/logicdb/')
    *.db_name='logicdb'
    *.instance_name=logicdb
    *.db_unique_name=logicdb
    *.service_names=logicdb
    *.db_recovery_file_dest='/logs/app/oracle/flash_recovery_area'
    *.fal_client='LOGICDB'
    *.fal_server='PRIMDB'
    *.log_archive_dest_1='LOCATION=/logs/app/oracle/flash_recovery_area/LOGICDB/archivelog/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=logicdb'
    *.log_archive_dest_2='SERVICE=primdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primdb'
    *.log_archive_dest_3='LOCATION=/logs/app/oracle/flash_recovery_area/LOGICDB/archivelog2/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) DB_UNIQUE_NAME=logicdb'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='DEFER'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='logicdb_%t_%s_%r.arc'
    *.log_archive_max_processes=4
    *.log_file_name_convert=('/export/oracle/oradata/primdb/','/export/oracle/oradata/logicdb/')
    *.log_archive_config='dg_config=(primdb,logicdb)'
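    As a hedged aside (not part of the original post): simply copying an archived log into the destination directory may not make SQL Apply see it; on a logical standby the file typically also has to be registered, for example:
    -- path assumes the copy destination noted above
    ALTER DATABASE REGISTER LOGICAL LOGFILE '/logs/app/oracle/flash_recovery_area/PRIMDB/archivelog/logicdb_1_12_729695607.arc';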

    Hi,
    The error shows it is waiting for a logfile. The integrated extract / capture mainly needs two things to be available:
    1. Archive logs.
    2. Trail files.
    Both should be retained to the required level.
    Please execute the query below and check the status of the extract / capture process.
    It displays the information for each capture process in the database:
    COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A7
    COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
    COLUMN SID HEADING 'Session|ID' FORMAT 9999
    COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
    COLUMN STATE HEADING 'State' FORMAT A20
    COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
    COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 9999999999
    SELECT c.CAPTURE_NAME,
           SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
           c.SID,
           c.SERIAL#,
           c.STATE,
           c.TOTAL_MESSAGES_CAPTURED,
           c.TOTAL_MESSAGES_ENQUEUED
      FROM V$STREAMS_CAPTURE c, V$SESSION s
      WHERE c.SID = s.SID AND
            c.SERIAL# = s.SERIAL#;
    Also run this query to check which logfile the capture is waiting for:
    COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
    COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
    COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 99999
    COLUMN NAME HEADING 'Required|Archived Redo Log|File Name' FORMAT A40
    SELECT r.CONSUMER_NAME,
           r.SOURCE_DATABASE,
           r.SEQUENCE#,
           r.NAME
      FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
      WHERE r.CONSUMER_NAME =  c.CAPTURE_NAME AND
            r.NEXT_SCN      >= c.REQUIRED_CHECKPOINT_SCN;
    The above query clearly shows which logfile the extract / capture process is waiting for. Check whether that logfile is available on your system.
    Regards,
    Veera

  • Job not getting invoked on a logical standby instance

    Hello,
    We have created a job (through the dbms_scheduler API). The job is enabled and shows up in the SCHEDULERJOBS view as well.
    However, the job does not get executed. I looked into the following views and found no relevant entry for the aforesaid job:
    select * from all_scheduler_job_log
    select * from dba_scheduler_running_jobs
    select * from DBA_SCHEDULER_JOB_RUN_DETAILS order by log_date desc
    Is there any limitation preventing us from executing scheduled jobs on a logical standby database? If I execute the relevant program (the one configured to run as a job in this scenario) as an individual procedure from SQL*Plus, it executes successfully, implying there are no errors/problems in the subprogram that the job invokes.
    Appreciate your thoughts in this regard.
    Thanks.

    I think we then need to consider an alternate way to tackle the problem.
    Anyway, thanks for your timely help, Ravi.
    Just one query: can we invoke remote stored procedures (i.e., using database links) from dbms_scheduler jobs on another DB?
    i.e. something like this:
    DB0 (primary DB instance); DB1 (a logical standby DB); DB2 (centralized repository DB).
    DB1 contains certain packaged application procedures that need to be invoked as a scheduled activity.
    Previously we had the scheduled jobs residing on the logical standby DB itself, but because of this known issue we cannot proceed with that design.
    So can we have jobs/programs (mapping to DB1's stored procedures) scheduled on DB2? (See the sketch below.)
    Thanks.
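    One way the DB2-side job could look, assuming a database link from DB2 to DB1 named db1_link exists (all names here are hypothetical):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REMOTE_REFRESH_JOB',                   -- hypothetical
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN app_pkg.refresh@db1_link; END;', -- remote call over the db link
        repeat_interval => 'FREQ=HOURLY',
        enabled         => TRUE);
    END;
    /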

  • Skipping dependent Tables in Logical Standby

    Hello DBAs
    I need your expertise here. Let me explain the scenario. Suppose a table is skipped in a logical standby. This table is referenced by other tables, and there are dependencies on it. Now my question is: what happens when a transaction that depends on this table is committed at the primary?
    Does the transaction go through even though the table is not replicated? What happens to data integrity?
    I appreciate your help. Thanks.

    Have a go at
    [The Documentation...|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ls.htm#SBYDB00800]
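    For context, skip rules on a logical standby are defined with DBMS_LOGSTDBY.SKIP; a minimal sketch (schema and table names are hypothetical):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'APP', object_name => 'AUDIT_LOG');
    ALTER DATABASE START LOGICAL STANDBY APPLY;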

  • Which way - Streams or Logical Standby?

    Hi,
    I have a requirement to replicate a schema between two databases. The scenario is that we've got an Interactive Voice Recorder system for tracking voice calls to a call center.
    The system will be hosted at two sites, with a kind of load balancing where calls are routed to either of the centers. The two centers will each have an OLTP database, and the databases will of course be similar.
    There's a particular schema that needs to be replicated between the two databases so that the schema is available on either node. We were thinking of an active standby, but this seems to be out of the question because the two databases need to be open read-write all the time.
    We are looking into Streams to accomplish this, but we wonder if a logical standby could accomplish the same, since it can be open read-write?
    The database version will be 11.1.0.7.0

    user11983948 wrote:
    Thanks Anuraq,
    The volume of data that will be replicated will be small, just a single schema. I also did not clarify, but the data in the two databases will be kept for only about 30 days and then truncated, after being sent to a separate warehouse server where the data will be stored and analyzed.
    Regards,
    dula
    If the volume of replicated data is small and the DMLs are few, then Streams would be better than a logical standby.
    Regarding the truncate part: you mean that data older than 30 days would be deleted from the table. Streams can handle that. Make sure you delete the data from one site only; that delete will then be replicated to the other site. I also assume you are looking for two-way replication, in which case a logical standby is not an option.
    Regards
    Anurag
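    For reference, a hedged sketch of declaring schema-level capture rules with Streams (the schema, streams name, and queue are hypothetical, and a complete Streams setup also needs propagation and apply rules on the other site):
    BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'IVR_APP',                  -- hypothetical schema
        streams_type => 'CAPTURE',
        streams_name => 'ivr_capture',
        queue_name   => 'strmadmin.streams_queue',
        include_dml  => TRUE,
        include_ddl  => FALSE);
    END;
    /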

  • Physical Standby from Logical Standby

    Is this scenario possible?
    Create a physical standby from a logical standby, with the physical standby receiving archives from the logical standby.
    I mean:
    Primary (10g production) -> Logical (11gR2, new server 1) -> Physical (11gR2, new server 2)
    The goal of this scenario is a migration with a short downtime.
    O.S. aix 6.1
    11.2.0.2

    Is this scenario possible?
    YES

  • Creating a new schema in a Logical Standby Database

    Hi All,
    I am experimenting with logical standby databases for reporting purposes, and have not been able to create a new schema in the logical standby database - one of the key features of logical standbys.
    I have setup primary and logical standby databases, and they seem to be running just fine - changes are moved from the primary to the standby and queries on the standby seem to run ok.
    However, if I try to create a new schema on the logical standby that does not exist on the primary, I get "ORA-01031: insufficient privileges" errors when I try to create new objects.
    Shown below are the steps I have taken to create the new schema on the logical standby. Any help would be greatly appreciated.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_new
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user new
    2 identified by new
    3 default tablespace ts_new
    4 temporary tablespace temp
    5 quota unlimited on ts_new
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to new
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to new
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to new
    SYS@UATDR> select * from dba_sys_privs where grantee='NEW'
    2 /
    GRANTEE PRIVILEGE ADM
    NEW CREATE ANY TABLE NO
    NEW CREATE TABLE NO
    NEW UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect new/new
    Connected.
    NEW@UATDR>
    NEW@UATDR> -- prove ability to create tables
    NEW@UATDR> create table new
    2 (col1 number not null)
    3 tablespace ts_new
    4 /
    create table new
    ERROR at line 1:
    ORA-01031: insufficient privileges
    NEW@UATDR>
    NEW@UATDR>

    Hi Daniel,
    I appreciate your quick response.
    My choice of name may not have been ideal; however, changing new to another name, like gav, does not solve the problem.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_gav
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user gav
    2 identified by gav
    3 default tablespace ts_gav
    4 temporary tablespace temp
    5 quota unlimited on ts_gav
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to gav
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to gav
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to gav
    SYS@UATDR> select * from dba_sys_privs where grantee='GAV'
    2 /
    GRANTEE PRIVILEGE ADM
    GAV CREATE TABLE NO
    GAV CREATE ANY TABLE NO
    GAV UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect gav/gav
    Connected.
    GAV@UATDR>
    GAV@UATDR> -- prove ability to create tables
    GAV@UATDR> create table gav
    2 (col1 number not null)
    3 tablespace ts_gav
    4 /
    create table gav
    ERROR at line 1:
    ORA-01031: insufficient privileges
    GAV@UATDR>
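    A hedged observation, not made in the thread itself: the V$DATABASE output above shows GUARD_STATUS = ALL, which blocks DML and DDL by non-privileged sessions even on objects that SQL Apply does not maintain. Two things that might be worth trying:
    -- Per-session (requires the ALTER DATABASE privilege):
    ALTER SESSION DISABLE GUARD;
    -- Or relax the guard database-wide so only replicated objects are protected:
    ALTER DATABASE GUARD STANDBY;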

  • Logical standby and Primary keys

    Hi All,
    Why are primary keys essential for creating a logical standby database? I have created a logical standby database on a testing basis without primary keys on most of the tables, and it's working fine. I have not even put my main DB in force logging mode.

    "I have not even put my main DB in force logging mode."
    That works because redo log files or standby redo log files are transformed into a set of SQL statements to update the logical standby.
    Have you done any DML operations with NOLOGGING options, and do you notice any errors in the alert.log? I am just curious to know.
    In the absence of both a primary key and a non-null unique constraint/index, all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. In other words, all columns except those with the following types are logged: LONG, LOB, LONG RAW, object type, and collections.
    Jaffar
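    For reference, tables that lack a usable identification key on a logical standby can be listed with a query along these lines (a sketch based on the documented views):
    SELECT owner, table_name
      FROM dba_logstdby_not_unique
     WHERE (owner, table_name) NOT IN
           (SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported);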

  • Logical standby and truncate partition

    Hi,
    I'm evaluating whether a logical standby database would meet our needs.
    We have a live database and want a reporting database that is identical to the live one, just minutes behind it in time; plus we want to create other summary tables etc. on the reporting DB.
    Logical standby seems to meet our needs, but I have one query.
    - On the live DB most of the tables are organized by date partitions, and only 5 days are kept, with new partitions being created every night for the forthcoming days and the oldest date partitions being truncated.
    On the reporting database we want to keep 30 days of partitions.
    Can we have all the DDL and DML from live applied to the standby APART from the specific truncate partition statements? (See the sketch below.)
    Many Thanks,
    Kailas
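    A hedged sketch of how the truncates might be skipped (schema and table names are hypothetical; verify that 'TRUNCATE TABLE' is a supported stmt value for DBMS_LOGSTDBY.SKIP in your release):
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'TRUNCATE TABLE', schema_name => 'LIVE', object_name => 'CALLS');
    ALTER DATABASE START LOGICAL STANDBY APPLY;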

    Addendum.
    If you can, it is better to truncate the partition rather than drop it, after exporting the partition for archive purposes. If you ever need to bring the data back, it will go into the correct partition. If the table structure has been changed by adding or lengthening columns, this will still work OK where column names match.
    Test the ease of restoring archived data to existing and non-existing partitions with an altered structure (dropped, added, or renamed columns) for yourself, though.
    Regards, Vin.
    PS. When splitting the MAXVALUE partition to create a new 'highest' partition, use a small initial extent, with the 'next' extent being the expected real size needed. When the partition is truncated, the only space that should remain unclaimable will be that allocated for the initial extent.

  • Logical Standby Database in NOARCHIVE Mode

    Hi,
    I have configured a logical standby database for reporting purposes. A physical standby database is running for MAA, i.e. in case of a role transition (switchover/failover) the physical standby DB will take over the role of the primary.
    The logical standby database is creating a lot of archived redo log files, nearly one every minute. The redo log files are 50MB, and there is no work being done in the DB during that time. I'm NOT using standby redo log files.
    Can the logical standby database run in NOARCHIVELOG mode, or does it need ARCHIVELOG? The primary is definitely in ARCHIVELOG mode.
    Thanks for any responses.
    regards
    Sahba

    hi,
    Well, there are two things to the above:
    1. There was an archive file nearly every minute:
    This was due to a DB recovery; for some reason the DB was in an inconsistent state after a sudden shutdown of the OS. I was on a test environment, on Windows Vista, unfortunately. Unimportant... a reboot solved it.
    2. A logical standby DB in NOARCHIVELOG mode when set up for reporting purposes:
    As long as MAA is configured for the primary DB (such as a physical standby DB), a second, logical standby DB set up purely for reporting can run in NOARCHIVELOG mode after the physical standby DB is converted to logical.
    The logical standby DB uses the Streams architecture, so this method brings cost, time, and performance advantages.
    regards
    Sahba

  • Logical standby in noarchive mode

    Hello,
    Does anybody know if it is possible to run a logical standby (10gR2) in NOARCHIVELOG mode?
    To my understanding it should be possible, why not... but I can't find a piece of documentation which proves this.
    Thanks in advance,
    Boris...

    I was also curious whether setting up a logical standby in NOARCHIVELOG mode was possible. It looks like it is...
    We use this simply as a reporting copy of our data. Our analysts do not change any of this data; they just need it to be fairly current (~3 days). At this point there isn't any need to generate archive logs, so we want to disable the feature to minimize our maintenance and storage requirements.
    STANDBY DB:
    SQL> select log_mode, open_mode from v$database;
    LOG_MODE     OPEN_MODE
    ARCHIVELOG   READ WRITE
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Database altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    Database mounted.
    SQL>  alter database noarchivelog;
    Database altered.
    SQL> alter database open;
    Database altered.
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Database altered.
    Then I created an object on the production server:
    SQL> create table testuser.noarchivelog_testing as select * from dba_objects;
    Table created.
    SQL> alter system switch logfile;
    System altered.
    Mon Jun 22 11:13:26 2009
    Beginning log switch checkpoint up to RBA [0xe6.2.10], SCN: 1031235046
    Mon Jun 22 11:13:26 2009
    Now I see it on the standby:
    Mon Jun 22 11:13:43 2009
    RFS LogMiner: RFS id [25920] assigned as thread [1] PING handler
    RFS[1]: Archived Log: '/oracle/LH1/oraarch/LH1arch_1_229_680639127.arc'
    Mon Jun 22 11:13:46 2009
    RFS LogMiner: Registered logfile [/oracle/LH1/oraarch/LH1arch_1_229_680639127.arc] to LogMiner session id [1]
    Mon Jun 22 11:13:48 2009
    LOGSTDBY status: ORA-16204: DDL successfully applied
    Mon Jun 22 11:13:51 2009
    LOGMINER: End mining logfile: /oracle/LH1/oraarch/LH1arch_1_229_680639127.arc
    SQL> select count(*) from testuser.noarchivelog_testing;
      COUNT(*)
         24444
