Additional read-write schema in logical standby

Hi,
We have successfully configured a logical standby for a production database on Oracle 11.1.0.7. Now we have the following requirements:
1) Add a new/additional schema to the logical standby database that supports read-write (DML/DDL) operations.
2) Turn off the sync for one particular table in the standby (prod) schema and make it read-write.
Any guidance on this would be appreciated.
Thanks,
Venkat

Thank you.
I was able to make a table read-write based on the following extract from the Oracle documentation.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Database altered.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL','MYSCHEMA','MYTABLES%');
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','MYSCHEMA','MYTABLES%');
PL/SQL procedure successfully completed.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Database altered.
The example then queries the DBA_LOGSTDBY_PARAMETERS view to verify that the logical standby database is ready. Verification can take a while, so you might need to repeat the query until it returns the value Ready, as shown in the following example:
SQL> SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME = 'GUARD_STANDBY';
VALUE
Ready
SQL> SELECT GUARD_STATUS FROM V$DATABASE;
Finally, the example sets the database guard to allow updates to tables that are no longer maintained by SQL Apply:
SQL> ALTER DATABASE GUARD STANDBY;
Database altered.
SQL> SELECT GUARD_STATUS FROM V$DATABASE;
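If the skipped table ever needs to be brought back in sync with the primary, DBMS_LOGSTDBY.UNSKIP and DBMS_LOGSTDBY.INSTANTIATE_TABLE can re-create it from the primary over a database link. A minimal sketch, assuming a database link named PRIMARY_LINK (hypothetical name) that connects to the primary as a suitably privileged user:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('SCHEMA_DDL','MYSCHEMA','MYTABLES%');
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP('DML','MYSCHEMA','MYTABLES%');
SQL> -- re-copies one table per call from the primary over the link
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('MYSCHEMA','MYTABLE','PRIMARY_LINK');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;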

Similar Messages

  • How to import schema into logical standby database

    I created a new schema (A) in a different database and exported that schema to a dmp file using the export utility.
    From the exported dmp file I want to import that schema (A) into the logical standby database using the import utility.
    I am importing as the SYSTEM user.
    I am getting the following error:
    IMP-00003: ORACLE error 1031 encountered
    ORA-01031: insufficient privileges
    Can someone help with what is going wrong?
    Thank you in advance.
    kalyan

    Hi kalyan,
    Could you please let us know which import command you are using? Also check that your SYSTEM user has the import/export privileges.
    Thanks
    Hassan Khan
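    Another thing worth checking on a logical standby is the database guard: with the default GUARD ALL, even SYSTEM cannot create or modify objects, which surfaces as ORA-01031. A minimal sketch of one approach, assuming the schema being imported is not maintained by SQL Apply and that you can briefly relax the guard for the whole database:
    SQL> ALTER DATABASE GUARD STANDBY;  -- allow changes to objects not maintained by SQL Apply
    SQL> -- run the import now (imp/impdp as usual), then restore the default guard:
    SQL> ALTER DATABASE GUARD ALL;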

  • Open standby read-write then flashback leaves standby in new incarnation?

    Oracle 10.2.0.4 on RHEL. In February, we followed Note:805438.1 to open the standby read-write for a test (after creating a restore point). A few minutes later, we flashed it back. It ran fine after that, receiving and applying archivelogs. Recently we noticed it had stopped applying redo, starting in late June. I then followed Note:836986.1 to roll the standby forward with an incremental backup from the primary. During this process it was found that the standby was on incarnation 2:
    RMAN> list incarnation;
    using target database control file instead of recovery catalog
    List of Database Incarnations
    DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
    1 1 ORACP2 4102395133 PARENT 1 20080407 14:13:17
    2 2 ORACP2 4102395133 CURRENT 44328058841 20110223 08:33:18
    The solution is to set it to 1, because the primary is (and always was) 1:
    reset database to incarnation 1;
    The DBA team doesn't recall making any significant change to the standby since the February test, and alert.log doesn't show anything relevant. Does "open standby read-write then flashback" leave the standby in the new incarnation, instead of setting it back to the original incarnation? We checked another database whose standby we ran the same test on; that standby is back on the original incarnation, the same as its primary. So there must be something special about this database.
    Secondly, if the standby stayed in a newer incarnation, how could it continue to apply redo received from the primary in its original incarnation?

    Thanks. Bug 6035495 is related but not exactly my case. My 10.2.0.4 standby database alert.log does not have "orphan" or "ORA-19909". But the solution is the same.
    The Oracle support analyst working on my case referred me to this new note
    Rman-06571: Datafile 1 Does Not Have Recoverable Copy (Doc ID 1336872.1)
    (maybe he wrote it)
    That's exactly what happened to me. Although flashing back the standby sets its current incarnation back (to the same as the primary), if you later run "catalog start with" on the standby, that command will change the incarnation to the later one! Since I was following Note:836986.1 to roll forward the standby, "catalog start with" was run and these lines appeared
    WARNING: catalog online log file +fra/ORACP2SB/ONLINELOG/group_52.2882.708107363 is not supported
    New incarnation branch detected in Backup, filename +FRA/oracp2sb/autobackup/2011_02_23/s_743848464.8985.743848467
    Inspection of file changed rdi from 1 to 2
    Setting recovery target incarnation to 2
    Wed Jul 6 00:06:53 2011
    Setting recovery target incarnation to 2
    Wed Jul 6 00:10:51 2011
    It appears that when "catalog" detects another (presumably newer?) incarnation, it blindly sets the standby database to that one. The analyst calls it "implicit catalog". It might as well be called "implicit incarnation change by catalog" or a "side effect of catalog".
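    If you suspect a "catalog start with" run has silently switched the incarnation, a quick check from SQL*Plus on the standby is to look at V$DATABASE_INCARNATION and compare it with the primary; a minimal sketch (the view and columns are standard in 10.2):
    SQL> SELECT INCARNATION#, RESETLOGS_CHANGE#, RESETLOGS_TIME, STATUS
      2  FROM V$DATABASE_INCARNATION
      3  ORDER BY INCARNATION#;
    -- the row with STATUS = 'CURRENT' should match the primary's current incarnation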

  • Slow replication  on logical standby DB

    Hi All,
    Five days ago we ran an update-statistics script on our MIS DB. Since then we have seen slow replication to it from the RAC primary during peak hours, and we are not able to generate reports.
    Please suggest why this happens and how to solve it. It is very critical for me.
    Details are the following:
    BEGIN
    -- Run job synchronously.
    DBMS_SCHEDULER.run_job (job_name=> 'SYS.GATHER_STATS_JOB');
    END;
    Oracle Version-----
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Productio
    NLSRTL Version 10.2.0.3.0 - Production
    AIX Version 5.3
    SQL> select NAME ,OPEN_MODE, PROTECTION_MODE ,DATABASE_ROLE,GUARD_STATUs,LOG_MODE from v$database;
    OPEN_MODE PROTECTION_MODE DATABASE_ROLE GUARD_S LOG_MODE
    READ WRITE MAXIMUM PERFORMANCE LOGICAL STANDBY ALL ARCHIVELOG
    Topas -------------
    Topas Monitor for host: UMISDB01 EVENTS/QUEUES FILE/TTY
    Wed Jan 20 11:51:13 2010 Interval: 2 Cswitch 6723 Readch 0.0G
    Syscall 4300 Writech 6657.3K
    Kernel 5.4 |## | Reads 2780 Rawin 0
    User 58.1 |################# | Writes 288 Ttyout 1337
    Wait 2.9 |# | Forks 0 Igets 0
    Idle 33.7 |########## | Execs 0 Namei 33
    Physc = 2.25 %Entc= 64.2 Runqueue 5.0 Dirblk 0
    Waitqueue 6.5
    Network KBPS I-Pack O-Pack KB-In KB-Out
    en6 2.2 7.0 5.0 0.4 1.8 PAGING MEMORY
    en1 0.0 0.0 0.0 0.0 0.0 Faults 2762 Real,MB 16384
    lo0 0.0 0.0 0.0 0.0 0.0 Steals 7251 % Comp 94.4
    PgspIn 3 % Noncomp 5.5
    Disk Busy% KBPS TPS KB-Read KB-Writ PgspOut 0 % Client 5.5
    hdisk10 100.0 4.2K 542.5 4.2K 0.0 PageIn 5530
    hdisk6 100.0 9.6K 1.2K 9.0K 659.1 PageOut 1664 PAGING SPACE
    hdisk5 100.0 8.8K 1.1K 8.1K 715.3 Sios 7194 Size,MB 32768
    hdisk4 29.1 1.4K 89.9 128.6 1.3K % Used 9.9
    hdisk15 8.5 1.1K 46.2 92.4 1.0K NFS (calls/sec) % Free 91.1
    hdisk8 3.5 261.2 31.6 0.0 261.2 ServerV2 0
    hdisk14 2.5 514.4 14.6 0.0 514.4 ClientV2 0 Press:
    hdisk12 2.0 759.5 8.5 0.0 759.5 ServerV3 0 "h" for help
    hdisk0 1.5 16.1 4.0 12.1 4.0 ClientV3 0 "q" to quit
    hdisk13 0.5 827.8 13.6 0.0 827.8
    hdisk1 0.5 4.0 1.0 0.0 4.0
    hdisk7 0.5 442.1 7.5 0.0 442.1
    Name PID CPU% PgSp Owner
    oracle 1876038 14.1 15.3 oracle
    oracle 1597544 14.0 11.3 oracle
    lrud 16392 0.5 0.6 root
    oracle 1515570 0.4 467.5 oracle
    oracle 1695836 0.3 27.3 oracle
    oracle 1642498 0.3 323.3 oracle
    oracle 1204230 0.3 291.3 oracle
    oracle 512222 0.2 483.5 oracle
    oracle 1368188 0.2 7.4 oracle
    oracle 1458238 0.2 227.3 oracle
    oracle 1712180 0.1 307.4 oracle
    oracle 1638546 0.1 37.2 oracle
    aioserve 848030 0.1 0.4 root
    Signal 2 received
    Thanks in advance

    Santosh Pradhan wrote:
    Hi,
    Oracle 10.2.0.3 Enterprise Edition logical standby.
    We performed heavy updates on our production database, due to which the logical standby fell many logs behind the primary database, and the logs are being applied on the logical standby very slowly.
    Kindly suggest how to speed up the apply process on the logical standby...
    I hope you are using the "ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;" command.
    Please also check the note below on adjusting the number of APPLIER processes; if redo transport is slow, also check the setting of "LOG_ARCHIVE_MAX_PROCESSES".
    http://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ls.htm#CHDBGBFC
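    For reference, the number of SQL Apply processes is adjusted with DBMS_LOGSTDBY.APPLY_SET. A minimal sketch, assuming the values suit your hardware (16 and 12 are illustrative, not recommendations) and that SQL Apply is stopped while changing them:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 16);
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('APPLY_SERVERS', 12);
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    SQL> -- the values currently in effect can be checked in DBA_LOGSTDBY_PARAMETERS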

  • Streams Replication:Source database Physical or Logical Standby DB

    Can the source database in streams replication be a physical or logical standby database ? If so, is the process of configuring streams the same as a regular database ? Are there any best practices or different configuration if the source is Logical or Physical standby DB ?
    Thanks in advance.

    Never done it, but I don't see any reason why it should not work.
    Streams, at the capture site, is only a data dictionary game, and in a logical standby your data dictionary is open read-write.
    Streams, at the capture site, never touches the source tables; in fact they may not even exist from Streams' point of view,
    as it deals only with the redo that is generated.
    So the Streams horizon is limited to the data dictionary, the log buffer, the archives and, in the SYSAUX tablespace, all the LOGMNR_% tables. All these structures are read-write in the logical standby. However, for the capture/propagation you may have to set the 'include_tagged_lcr' parameter to TRUE.
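    To illustrate that last point, here is a minimal sketch of creating capture rules with include_tagged_lcr set to TRUE, so that redo written by SQL Apply (which is tagged) is not filtered out. The table, capture process, queue and source database names are hypothetical:
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name         => 'scott.emp',               -- hypothetical table
        streams_type       => 'capture',
        streams_name       => 'capture_from_standby',    -- hypothetical capture process
        queue_name         => 'strmadmin.streams_queue', -- hypothetical queue
        include_dml        => TRUE,
        include_ddl        => FALSE,
        include_tagged_lcr => TRUE,                      -- do not discard tagged LCRs
        source_database    => 'LSTDBY');                 -- hypothetical DB name
    END;
    /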

  • Streams Replication: Logical Standby DB as source

    I don't have much experience with Streams replication, so the following might sound like a silly question to the Oracle gurus:
    If the source database is a logical standby database, can Streams replication be used to replicate some of the tables in the source database to the target DB?
    The source and the target DB are 11g R2.
    Thanks in advance:
    - Sanjay

    Never done it, but I don't see any reason why it should not work.
    Streams, at the capture site, is only a data dictionary game, and in a logical standby your data dictionary is open read-write.
    Streams, at the capture site, never touches the source tables; in fact they may not even exist from Streams' point of view,
    as it deals only with the redo that is generated.
    So the Streams horizon is limited to the data dictionary, the log buffer, the archives and, in the SYSAUX tablespace, all the LOGMNR_% tables. All these structures are read-write in the logical standby. However, for the capture/propagation you may have to set the 'include_tagged_lcr' parameter to TRUE.

  • Can I read/write a LabVIEW DataSocket cluster in Lookout?

    I can read/write DataSocket text, logical, and numerical arrays, but how do I read/write a LabVIEW-style cluster datatype (i.e. an array of clusters or a cluster of arrays)?
    Thanks!
    Tom
    Thomas G. Duffey
    Engineering and Research Consultants, Inc.
    Air Force Research Laboratory
    3 Antares Rd.
    Edwards AFB, CA 93524
    661.275.6172 FAX 661.275.5073
    mailto:[email protected]
    http://neverworld.net

    You should be able to wire your cluster, array of clusters, or cluster of arrays directly into the DataSocket write/read primitives. They are polymorphic and should handle the data directly.
    Marc Monroe
    National Instruments

  • Creating a new schema in a Logical Standby Database

    Hi All,
    I am experimenting with logical standby databases for the purpose of reporting, and have not been able to create a new schema in the logical standby database - one of the key features of logical standbys.
    I have setup primary and logical standby databases, and they seem to be running just fine - changes are moved from the primary to the standby and queries on the standby seem to run ok.
    However, if I try to create a new schema on the logical standby that does not exist on the primary, I get "ORA-01031: insufficient privileges" errors when I try to create new objects.
    Shown below are the steps I have taken to create the new schema on the logical standby. Any help would be greatly appreciated.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_new
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user new
    2 identified by new
    3 default tablespace ts_new
    4 temporary tablespace temp
    5 quota unlimited on ts_new
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to new
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to new
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to new
    SYS@UATDR> select * from dba_sys_privs where grantee='NEW'
    2 /
    GRANTEE PRIVILEGE ADM
    NEW CREATE ANY TABLE NO
    NEW CREATE TABLE NO
    NEW UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect new/new
    Connected.
    NEW@UATDR>
    NEW@UATDR> -- prove ability to create tables
    NEW@UATDR> create table new
    2 (col1 number not null)
    3 tablespace ts_new
    4 /
    create table new
    ERROR at line 1:
    ORA-01031: insufficient privileges
    NEW@UATDR>
    NEW@UATDR>

    Hi Daniel,
    I appreciate your quick response.
    My choice of name may not have been ideal; however, changing NEW to another name - like GAV - does not solve the problem.
    SYS@UATDR> connect / as sysdba
    Connected.
    SYS@UATDR>
    SYS@UATDR> select name, log_mode, database_role, guard_status, force_logging, flashback_on, db_unique_name
    2 from v$database
    3 /
    NAME LOG_MODE DATABASE_ROLE GUARD_S FOR FLASHBACK_ON DB_UNIQUE_NAME
    UATDR ARCHIVELOG LOGICAL STANDBY ALL YES YES UATDR
    SYS@UATDR>
    SYS@UATDR> create tablespace ts_gav
    2 /
    Tablespace created.
    SYS@UATDR>
    SYS@UATDR> create user gav
    2 identified by gav
    3 default tablespace ts_gav
    4 temporary tablespace temp
    5 quota unlimited on ts_gav
    6 /
    User created.
    SYS@UATDR>
    SYS@UATDR> grant connect, resource to gav
    2 /
    Grant succeeded.
    SYS@UATDR> grant unlimited tablespace, create table, create any table to gav
    2 /
    Grant succeeded.
    SYS@UATDR>
    SYS@UATDR> -- show privs given to gav
    SYS@UATDR> select * from dba_sys_privs where grantee='GAV'
    2 /
    GRANTEE PRIVILEGE ADM
    GAV CREATE TABLE NO
    GAV CREATE ANY TABLE NO
    GAV UNLIMITED TABLESPACE NO
    SYS@UATDR>
    SYS@UATDR> -- create objects in schema
    SYS@UATDR> connect gav/gav
    Connected.
    GAV@UATDR>
    GAV@UATDR> -- prove ability to create tables
    GAV@UATDR> create table gav
    2 (col1 number not null)
    3 tablespace ts_gav
    4 /
    create table gav
    ERROR at line 1:
    ORA-01031: insufficient privileges
    GAV@UATDR>
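    As the next thread explains, this ORA-01031 is almost certainly the database guard: GUARD_STATUS is ALL, which blocks DDL and DML from every user except SYSDBA. A minimal sketch of two ways to proceed, assuming the trade-offs described in that thread are acceptable:
    SQL> -- database-wide: allow changes to objects not maintained by SQL Apply
    SQL> ALTER DATABASE GUARD STANDBY;
    SQL> -- or per session (needs a suitably privileged account), around the DDL:
    SQL> ALTER SESSION DISABLE GUARD;
    SQL> ALTER SESSION ENABLE GUARD;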

  • Logical standby as read only

    We are using a 3-node RAC on 10g R2, and we have been asked to create one physical standby (for DR) and one logical standby (for reporting) for the primary database. We created both successfully.
    Everything worked fine until I opened the logical standby database read-only: SQL Apply stops as soon as I open the logical standby read-only.
    Once I shut it down and opened it in read-write mode, SQL Apply works fine.
    Is there any way to keep the logical standby open read-only, because we want to use the database only for reporting and don't want anyone creating anything in it?
    Any help is appreciated
    thanks

    Users (other than SYSDBA) cannot create anything in a logical standby unless someone with SYSDBA privs has executed ALTER DATABASE GUARD STANDBY; or ALTER DATABASE GUARD NONE;
    STANDBY says they can create objects they have the normal privs for, but they cannot affect/change anything that SQL Apply is maintaining (i.e. the stuff coming in from the primary).
    NONE says they still cannot create/change anything they do not have the privs for, but they CAN mess with the tables that SQL Apply is maintaining, which would toast your standby, so you would never do that.
    It is lightly documented at http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2110829 :^(
    The default is ALTER DATABASE GUARD ALL; which is set for you when you create the logical standby following the procedures at http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ls.htm#g105412
    Larry

  • How to open a "manual" Physical standby database in read/write mode

    Hi,
    I am running Oracle Database 10g Release 10.2.0.3.0 - 64bit Production Standard Edition on Linux version 2.6.9-42.0.8.ELsmp ([email protected]) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-3))
    I've created a physical standby database, but since I am running Standard Edition, I am not using the DataGuard features. I use the rsync utility to copy over the archivelogs to the standby database, and I apply them periodically to the standby database.
    The standby database is started this way :
    startup nomount pfile='/u01/oradata/orcl/initorcl.stdby';
    alter database mount standby database;
    Everything runs perfectly fine; I can do "alter database open read only" and then run selects against tables to confirm that everything is up to date.
    The thing is, if I shutdown immediate the database, then do just startup :
    shutdown immediate;
    startup;
    The database opens with no error messages, but it is still in read-only mode...
    I read that the default behavior for a standby database is to open read-only, which is what I am seeing, but I would like to know the right way to open it in read-write mode (I understand that after that my standby will not be a standby anymore and I will have to recreate it).
    Thanks,
    Mat

    Hello,
    There are features that allow you to open a standby database in read/write mode, but as far as I know they need Enterprise Edition.
    In Enterprise Edition you can use a logical standby database. Moreover, for a physical standby there is a way, using Flashback Database, to roll the database backward and avoid recreating the standby.
    In Standard Edition I'm afraid you'll have to recreate your standby database.
    Best regards,
    Jean-Valentin
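    For what it's worth, the usual sequence for permanently converting a manually maintained physical standby like this into a read/write database (after which it has to be rebuilt as a standby) looks roughly like the following. A minimal sketch, assuming all available archived logs have already been applied:
    SQL> STARTUP NOMOUNT PFILE='/u01/oradata/orcl/initorcl.stdby';
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    SQL> -- apply the last archived logs here (RECOVER STANDBY DATABASE, then CANCEL)
    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP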

  • Is it possible to have a 10g (R1) LOGICAL standby db in read-ONLY mode?

    From what I've gathered reading the 10g (R1) docs, the logical standby is read-only while applying SQL and in read-write mode otherwise. I have to meet a requirement that the remote database be strictly in read-only mode the entire time (so as not to risk writing to it), while also being as up to date as possible with the primary (hence I decided on the logical rather than the physical standby). Does anyone know if I can keep the logical standby read-only all the time?
    Much appreciated,
    Sophie

    Oh, OK. I guess I interpreted it differently. The direct quote from the docs: "Although the logical standby database is opened in read/write mode, its target tables for the regenerated SQL are available only for read-only operations." So I guess what it means, as you suggest, is that the entire set of tables from the primary - and not just the few that happen to be updated in a particular time frame - is kept read-only the entire time...

  • Refresh Schema in Primary (and Logical Standby)

    Does anyone have a recommendation for the best way to refresh a 16 GB schema in a primary database that has a logical standby?
    I attempted to refresh our main schema in the primary using IMPDP. I tried to follow the same steps as I would if I had to upgrade a primary database with a logical standby in place: I deferred the log archive destination associated with the logical standby, stopped SQL Apply on the standby and disabled Data Guard, then dropped the schema on the standby and re-created and imported it. I then performed the drop, re-create and import on the primary. After that I ran DBMS_LOGSTDBY.BUILD on the primary. Finally I enabled DG, restarted the apply and re-enabled the log archive destination on the primary.
    One issue I did have was that I did not defer the archive destination until after the import had started, so some archives were sent over - of course DG was disabled and the apply was off. Now the status of DG is normal, but I have a big gap between Last Received Log and Last Applied Log, and of the missing archives the oldest show 'Committed Transactions Applied', newer ones show 'Not Applied' and the newest show 'Not Received'.
    I think I'm hosed, but I am confused about the best approach. I did try simply performing the import on the primary previously (last year), but I remember that the volume of data killed the replication.
    This is 10.2.0.4 on Windows 2003 Server (64bit)
    Thanks
    Graeme

    I suppose you did ALTER DATABASE OPEN RESETLOGS, right?
    First, check for any errors in redo transport:
    alter system switch logfile;
    select status, error from v$archive_dest where dest_id = 2;
    Any errors?
    for more information, please check:
    http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_45.shtml
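    To see exactly which archived logs the logical standby has registered and applied (and where the gap starts), a minimal sketch of a check on the standby; the views and columns below are standard, but how to close the gap depends on what they show:
    SQL> SELECT SEQUENCE#, FIRST_CHANGE#, APPLIED
      2  FROM DBA_LOGSTDBY_LOG
      3  ORDER BY SEQUENCE#;
    SQL> -- applied vs. latest received SCN in a single row:
    SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;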

  • Finding exception with the read-write-backing-map-scheme configuration.

    I am getting an exception with the <read-write-backing-map-scheme> configuration, which is set up against a simple database cache store implementation. The class SimpleCacheEventStoreImpl implements the CacheStore interface.
    Exception in thread "main" java.lang.UnsupportedOperationException: configureCache: read-write-backing-map-scheme
         at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:995)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:277)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:689)
         at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:667)
         at Sample.SimpleEventStoreConsumer.main(SimpleEventStoreConsumer.java:10)
    The cache store is wired into the program SimpleEventStoreConsumer (where I have a put and a get operation) through the following cache configuration descriptor. On running SimpleEventStoreConsumer, the exception is thrown when trying to get the named cache from the cache factory:
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>Evt*</cache-name>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <read-write-backing-map-scheme>
                   <scheme-name>SampleDatabaseScheme</scheme-name>
                   <internal-cache-scheme>
                        <local-scheme>
                             <scheme-ref>SampleMemoryScheme</scheme-ref>
                        </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.emc.srm.cachestore.SimpleCacheEventStoreImpl</class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
              </read-write-backing-map-scheme>
              <local-scheme>
                   <scheme-name>SampleMemoryScheme</scheme-name>
              </local-scheme>
         </caching-schemes>
    </cache-config>

    You are missing a <backing-map-scheme>: a <read-write-backing-map-scheme> has to be nested inside a clustered scheme such as a <distributed-scheme>, and your <cache-mapping> should then reference that distributed scheme's <scheme-name>. Do it like the following:
    <caching-schemes>
         <distributed-scheme>
              <scheme-name>distributed-scheme</scheme-name>
              <service-name>DistributedQueryCache</service-name>
              <backing-map-scheme>
                   <read-write-backing-map-scheme>
                        <scheme-ref>rw-bm</scheme-ref>
                   </read-write-backing-map-scheme>
              </backing-map-scheme>
              <autostart>true</autostart>
         </distributed-scheme>
         <read-write-backing-map-scheme>
              <scheme-name>rw-bm</scheme-name>
              <internal-cache-scheme>
                   <local-scheme>
                   </local-scheme>
              </internal-cache-scheme>
         </read-write-backing-map-scheme>
    </caching-schemes>

  • Import a schema (tables, Views, Stored Procedures) on logical standby

    Hi,
    We have a logical standby for reporting purposes. The logical standby was built through Data Guard.
    We need to import a new user into the logical standby using the import utility. The user dump contains tables, views, procedures, packages and roles.
    The new user import has to go into the USERS tablespace.
    Is it possible to import a new user into the logical standby, and what are the steps?
    Thanks in advance

    Hi,
    Can you give me more details about your environment configuration, O/S and DB version?
    Generally, though, I don't think this is possible, because as you know the standby is only a clone of the primary database; you can import it on the production side and it will then be transferred to the logical standby database.
    Regards;

  • Recovering Logical Standby - Non-Guarded Schemas

    I'm making use of logical standby databases for our reporting environment.
    We are also making use of the capability of having a number of other schemas that haven't come from the primary and are therefore not managed by Data Guard.
    We recently had a problem on the primary which required us to perform a restore and a resetlogs - and as soon as this happened the logical standby diverged from the primary.
    So it seems I have to reinstantiate the logical standby from the primary, BUT since our non-guarded objects don't exist on the primary, they will be lost.
    I have a backup of the logical standby - what's the best way of rebuilding it so that it is synced with the primary and also still contains all the non-guarded objects?
    I was thinking schema export/import.
    Simon

    Found the following:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ls.htm#sthref1308
    Oracle® Data Guard Concepts and Administration
    10g Release 2 (10.2)
    Part Number B14239-04
    Managing a Logical Standby Database -- 9.5.4
    It indicates that the logical standby can recover through an OPEN RESETLOGS if it is running in Flashback Database mode.
    I'm not sure what the impact of this is on the non-guarded components - it could be that if it flashes the logical standby back too, it loses all the non-guarded components.
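    Before relying on that recovery path, it is worth confirming that Flashback Database is actually enabled on the logical standby. A minimal sketch (in 10g the database must be mounted, not open, to enable it, and a flash recovery area must be configured):
    SQL> SELECT FLASHBACK_ON FROM V$DATABASE;
    SQL> -- if it returns NO:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE FLASHBACK ON;
    SQL> ALTER DATABASE OPEN;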
