Streams on a source Database with DataGuard in place

Hello,
I want to know what the performance impact would be of setting up Streams on a 9i RAC database which already has DataGuard implemented to a single-instance database.
From the forums I have read that Streams works closely with the redo logs and can also be affected by archiving. So if my source database is archiving both locally and remotely because of DataGuard, could this affect functionality or performance?
Thank you in advance.

It surely will not have any effect on the functionality. Streams mines the redo logs (and archive logs if needed) for DDL and DML changes. The rest depends on which protection mode your Data Guard is configured in.
If it's Max Protection, then Streams has nothing to do with DG.
If it's Max Performance and you use LGWR ASYNC, then yes, both Streams and DG will be mining the redo logs and you may see some performance impact, but I think it will be negligible. In a DG environment the LNS process, outside the SGA, is responsible for transporting the changes, and in a Streams setup a capture process, again outside the SGA, is responsible for mining the changes.
Conceptually, this should not have any performance impact on your Primary.
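A quick way to confirm this on a running system is to compare the archive destinations used by Data Guard with the progress of the Streams capture process. This is only a sketch; the views are from the 10g data dictionary (9iR2 exposes V$STREAMS_CAPTURE with a reduced column set):

```sql
-- List the archive destinations (local ones plus the Data Guard remote one)
SELECT dest_id, status, target, destination
  FROM v$archive_dest
 WHERE status <> 'INACTIVE';

-- Check how far along the Streams capture process is
SELECT capture_name, state, total_messages_captured,
       capture_message_create_time
  FROM v$streams_capture;
```

If capture falls behind while both destinations stay VALID, the mining overlap rather than the transport is the more likely suspect.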

Similar Messages

  • Increase SGA on database with dataguard

    hi all,
I have a database (10.2.0.2.0) configured with dataguard and we want to increase the SGA. We want to upgrade the server's RAM ( Microsoft Windows Server 2003 R2 Server 5.2 Service Pack 2 (32-bit) ) to double the memory (currently 8G).
Would the steps be the following?:
    - SQL>alter system set sga_max_size=16G scope=spfile;
    - SQL>alter system set sga_target=16G scope=spfile;
    - shutdown immediate;
    - startup;
    Same in the standby database?
    Thanks in advance.

Hi;
What is your current SGA_MAX_SIZE value? The SGA_MAX_SIZE parameter is not dynamic in Oracle 10g.
To increase or decrease the SGA size for your instance:
1. Make the entry in the pfile as sga_max_size=value (generally the pfile is at 'ORACLE_HOME\database')
2. Shut down the DB
3. Start up the DB with the pfile: SQL> STARTUP PFILE='path of the edited pfile'
4. Your DB is now running on the pfile; to make the changes in the spfile: SQL> CREATE SPFILE='path of the spfile' FROM PFILE
5. Shutdown immediate
6. Start up the DB normally (SQL> STARTUP); it will pick up the newly created spfile from the default location
7. SQL> show parameter sga, to make sure your changes take effect
8. You can increase SGA_TARGET dynamically with the ALTER SYSTEM command.
Regards
Helios
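The same change can also be done purely through the spfile, as in the original question; a sketch (16G is the poster's target value, and whether the platform can actually address it is a separate question). Note that the standby instance has its own spfile, so the change has to be repeated there:

```sql
-- Both parameters are static, so scope=spfile plus a restart is required
ALTER SYSTEM SET sga_max_size = 16G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 16G SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
-- Verify the new values
SHOW PARAMETER sga
```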

  • Audit setup with dataguard in place

Auditing is planned to be enabled on one of our databases; however, this one has dataguard configured. Is there any difference in setting up auditing on a db with and without dataguard? Anything to watch out for?
    Thanks

I don't think there is anything different. If auditing is enabled on the primary DB then you should be all good, but if it is enabled on the standby DB (physical or logical) then I'm not sure about it.
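For reference, a minimal sketch of enabling standard auditing on the primary (the parameter value and audited user are examples only). With audit_trail=DB, the audit records go into SYS.AUD$, which reaches a physical standby through redo apply like any other table; an OS audit trail would not:

```sql
-- audit_trail is a static parameter: set it in the spfile and restart
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

-- Example audit option (scott is a placeholder user)
AUDIT SELECT TABLE, UPDATE TABLE BY scott BY ACCESS;
```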

  • Streams Replication:Source database Physical or Logical Standby DB

Can the source database in streams replication be a physical or logical standby database? If so, is the process of configuring streams the same as for a regular database? Are there any best practices or different configurations if the source is a logical or physical standby DB?
    Thanks in advance.

Never done it, but I don't see any reason why it should not work.
Streams, at the capture site, is only a data dictionary game, and in a logical standby your data dictionary is open read-write.
Streams, at the capture site, never touches the source tables; in fact they may not even exist from Streams' point of view, as it deals only with the redo that is generated.
So the Streams horizon is limited to the data dictionary, the log buffer, the archives and, in the SYSAUX tablespace, all the LOGMNR_% tables. All these structures are read-write in a logical standby. However, for the capture/propagation you may have to set the 'include_tagged_lcr' parameter to true.
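A sketch of what that last point looks like when adding rules; the schema, queue, and capture names here are hypothetical:

```sql
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'scott.emp',           -- hypothetical table
    streams_type       => 'capture',
    streams_name       => 'capture_stby',
    queue_name         => 'strmadmin.capture_q',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => TRUE);  -- also capture redo that carries a tag,
                                  -- e.g. redo generated by the standby apply
END;
/
```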

  • Streams with DataGuard design advice

    I have two 10gR2 RAC installs with DataGuard physical copy mode. We will call the main system A and the standby system B. I have a third 10gR2 RAC install with two-way Streams Replication to system A. We will call this RAC system C.
When I have a failure scenario with system A, planned or unplanned, I need system C's Streams replication to start replicating with system B. When system A is available again, I need system C to start replicating with system A again.
I am sure this is possible, and I am not the only one who wants to do something like this, but how? What are the pitfalls?
    Any advice on personal experience with this would be greatly appreciated!

Nice concept, and I can only applaud its ambitions.
"I am sure this is possible, and I am not the only one that wants to do something like this."
I would like to share your confidence, but I am afraid there are so many pitfalls that success will depend on how much pain you and your hierarchy can cope with.
Some thoughts:
Unless your Dataguard is synchronous, at the very moment A fails there will be transactions missing on C which may already have been applied on B, as Streams is quite fast. This alone tells us that a forced switch cannot be guaranteed consistent: you will have errors, and some nasty ones, such as sequence numbers consumed on A (or B) just before the crash, already replicated to B (or A) but never shipped to C. Upon awaking, C will re-emit values already known on B (dup key on index?). I hope you don't sell airplane tickets, for in such a case you could sell some seats twice.
Does C have to appear as another A, or is it allowed to have a different DB_NAME? (How will you set up C in B? Is C another A which retakes A's name, or is C a distinct source?) If C must have the same DB_NAME, the global name must be the same, and your TNS configuration will have to cope with 2 identical TNS entries in your network referring to 2 different hosts and DBs. Possible with cascaded lines at (ADDRESSES= ..., but to be tested. If C is another A, then C must have the same DB_NAME, as LCRs carry their origin DB name inside them.
If C has a distinct name from A, it must have its own apply process. Not a problem, it will be idle while A is alive; but it also needs a capture process that captures nothing while A is alive, for it is the capture site which is supposed to send the ticks that advance the counters on B. Since C will be down in normal times, you will have to emulate this feature by periodically resetting the first_scn manually for this standby capture (you can jump archives providing you jump to another archive with a built-in data dictionary), or accept to create a capture on B only when C wakes up. The best would be to consider C as a copy of B (multi-master DML+DDL?) and re-instantiate the tables without transferring any data, setting the apply and capture on C and B to whatever SCN is found to be the max on both sites when C wakes up.
All this is possible, with lots of fun and hard work.
As of the return of the Jedi, I mean A after it has recovered from its crash: you did not tell us if the re-sync C-->A is hot or cold. Cold is trivial, but if it is hot, it can be done by configuring a downstream capture from C to A with the init SCN set to around the crash and re-loading all archives produced on C. But then C and A must have a different DB_NAME, or maybe setting a different tag will be enough; to be tested. Also there will be the critical switch from the multi-master replication C<-->B to A<-->B. This alone is a masterpiece of table re-sync.
In all cases, I wish you a happy and great project, and I am eager to hear about it.
Last, there was a PDF describing how to deal with a Dataguard switch on a DB using Streams, but it is of little help, for it assumes that the switch is gentle: no SCN missing. I could not find it, but maybe somebody can point to a link.
Regards,
Bernard Polarski

  • Is it possible to create a Clone database with the same name of source db ?

Is it possible to create a clone database with the same name as the source db using RMAN ...
DB version is 11.2.0.2
Is it possible to clone an 11.2.0.2 database to an 11.2.0.3 home location directly on a new server? If it starts in upgrade mode, it is ok ....

user11919409 wrote:
Is it possible to create a clone database with the same name as the source db using RMAN ...
Yes.
DB version is 11.2.0.2
Is it possible to clone an 11.2.0.2 database to an 11.2.0.3 home location directly on a new server? If it starts in upgrade mode, it is ok ....
Yes.
    why do you waste time here when you rarely get any answers to your questions?

  • ADF (View Object) with open source database

    Hi all,
    Recently, I am evaluating JDeveloper 9.0.5.2 ADF with an open source database (Firebird).
    I follow the instruction to:
    1. Create an entity object
    2. Create a view object for the entity object in 1
    3. Create an application module for the view object in 2
4. Right-clicked on the application module and tested it -> all worked fine
5. Created a JSP and dragged the view (from 2) onto it from the Data Control Palette
6. Tested it in OC4J -> an error occurred (not because of the JDBC config)
    JBO-27122: SQL error during statement preparation. Statement: SELECT * FROM (SELECT Department.DEPT_NO, Department.DEPARTMENT AS DEPARTMENT1, Department.HEAD_DEPT, Department.MNGR_NO, Department.BUDGET, Department.LOCATION, Department.PHONE_NO FROM DEPARTMENT Department) QRSLT ORDER BY DEPT_NO
    GDS Exception. 335544569. Dynamic SQL Error SQL error code = -104 Token unknown - line 1, char 16 SELECT
To my knowledge, the error should be due to Firebird not supporting the query syntax " select * from (select col1, col2... from table) QRSLT where... order by... "
This is quite a headache, as the query is generated automatically by JDeveloper. I am trying to figure out whether I can change it. It seems that I have to play with the ViewObjectImpl class.
    I do think ADF is an amazing technology. However, is it only so amazing with the Oracle technologies? Please correct me if I am wrong.
    Hons

    Dear Shay Shmeltzer,
    Thank you for your reply. After hours of trying on Firebird, I still get the error:
    JBO-27122: SQL error during statement preparation. Statement: SELECT * FROM (SELECT Department.DEPT_NO, Department.DEPARTMENT AS DEPARTMENT1, Department.HEAD_DEPT, Department.MNGR_NO, Department.BUDGET, Department.LOCATION, Department.PHONE_NO FROM DEPARTMENT Department) QRSLT ORDER BY DEPT_NO
    GDS Exception. 335544569. Dynamic SQL Error SQL error code = -104 Token unknown - line 1, char 16 SELECT
I then downloaded MySQL 4.0, installed it on my system, and followed the exact same procedure. No error! Things work fine! Oh man...
For the Firebird case, a strange thing to note is that the data table (the one I dragged) contains data, but the error message is shown on top of it.
Actually, I don't have a particular database preference, but I do hope ADF can work equally well with these open-source databases. I think the problem may also be partly on Firebird's side. I hope I can figure it out this week.
    Regards,
    Hons

  • Creating Standby database with Oracle 10gR2 SE (no dataguard). Procedure

    Hello,
    I have problems in creating a standby database without Dataguard (Oracle Standard Edition)
    -Oracle 10gR2 SE (No DataGuard !!!!)
    - SUSE Enterprise 10.
    Both primary and standby databases are running in Virtual machines (lab).
    I will describe the exact steps I followed :
    1.---------------------------------------------------
    Both primary and standby databases have exactly the same file / folder structure
    2.---------------------------------------------------
    I enabled archive log mode with a new destination : /opt/oracle/oradata/orcl/archive_logs.
    Also : SQL> alter database force logging;
    3.---------------------------------------------------
    I shut down the primary database (shutdown immediate)
    4.---------------------------------------------------
    I created a standby controlfile on the primary database:
    SQL> startup mount
    SQL> alter database create standby controlfile as '/tmp/standby.ctl'
    SQL> shutdown immediate
    5.---------------------------------------------------
    I did a cold backup from the primary db to the standby db
    (I copied all the db files, control files, redo log)
6.---------------------------------------------------
I copied the standby control file from the primary db to the standby db
(from primary /tmp/standby.ctl)
7.---------------------------------------------------
I created a pfile from the spfile on the standby database:
SQL> create pfile from spfile;
8.---------------------------------------------------
I edited the pfile and changed the controlfile location (to the standby controlfile created in step 4):
>>>>> *.control_files='/opt/oracle/oradata/orcl/standby.ctl'
9.---------------------------------------------------
I started the standby db:
SQL> startup mount pfile='/opt/oracle/product/10.2/db_1/dbs/initorcl.ora'
SQL> alter database recover managed standby database disconnect from session;
SQL> quit
10.--------------------------------------------------
I made some changes on the primary db in the scott schema:
SQL> update table set .......
11.--------------------------------------------------
I switched the logfile on the primary db:
SQL> alter system switch logfile;
12.--------------------------------------------------
I manually copied the new archivelog to the standby db.
And then nothing !!!
The db changes are not applied.
Please help me fix the procedure !!!
Thanks
    Thanks
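Not a verified fix, but a likely gap in the final step: managed recovery only picks up archives it knows about, i.e. those landing in the standby's configured archive destination. The manually copied file may need to be registered, or the archives applied manually (the file name below is an example; use the file you actually copied):

```sql
-- Register the copied archive with the standby controlfile
ALTER DATABASE REGISTER LOGFILE
  '/opt/oracle/oradata/orcl/archive_logs/1_47_600000000.dbf';

-- Or cancel managed recovery and apply the archives interactively
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
RECOVER STANDBY DATABASE;  -- answer AUTO to apply the suggested archives
```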

Hi
Regarding your Oracle SE standby issue: we were in a similar situation a couple of years ago, needing to build an automated standby for our disaster recovery on Oracle SE. We finally found a solution provider, Anbul Technologies; one of their solutions, based on Oracle SE, provided a fully automated standby.
We have been running that solution successfully in our prod env for many years. You can visit their site www.anbultechnologies.co.uk or contact them for further details. They are UK based but provide support and services all over Europe.
Cheers

  • Integrate source db with target EBS database using ODI

    Hi
We have a requirement to integrate our legacy systems with the Oracle E-Business Suite 12.1.1 database (10g). We have several MS-SQL and Oracle databases that have to be integrated with our Oracle EBS database. Oracle says to have separate schemas for the Master and Work repositories, and also to create a new schema when connecting to the data store and to use that same schema as the WORK SCHEMA when creating a physical schema. My questions are:
1) What privileges should be assigned to the new schema used as the WORK SCHEMA? Oracle EBS holds many FND schemas, and APPS uses synonyms for all the tables. So what does Oracle recommend when using EBS with ODI?
2) Do we need to create a separate work schema for each and every SOURCE database as well?
3) We are not using a separate Oracle database for ODI; we will be creating schemas in PROD to hold the ODI data. Our management wants the ODI schema to hold the staging data. Can we use the same work schema mentioned in the first point as the staging area as well?
    Thank you
    Regards
    Shahrukh Yasin

Hi Shahrukh,
There are two ways (the first one is recommended):
1. Put the source schema in one database, and the target schema and work schema in another database.
The source schema needs only read-only access privileges.
You must have full ownership of the work schema and the target schema (minimum insert, update, delete).
With this approach you always have to create one work schema in every target database.
2. If the schemas are spread across three different databases, the same privilege requirements apply,
but you don't have to create a work schema every time; it is fixed.
This second approach takes much more time for the transformations and is not an efficient way.
Regards
Bhabani
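A minimal privilege sketch for such a work schema. The names (odi_work, the tablespace, the source table) are hypothetical; align them with your EBS security policy:

```sql
-- Work/staging schema used by ODI (hypothetical name and password)
CREATE USER odi_work IDENTIFIED BY odi_work_pwd;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SYNONYM TO odi_work;
ALTER USER odi_work QUOTA UNLIMITED ON users;

-- Read-only access on a source table for the ODI source connection
GRANT SELECT ON apps.some_source_table TO odi_work;  -- hypothetical table
```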

  • Clone of auxillary database using rman with the source database unavailable

    Hi,
Is it possible to clone a database with just the source database's backup pieces? The source database is currently unavailable. The source database backup was taken through RMAN in nocatalog mode. We have been provided the backup pieces. Any clues on how to go about it?

NAME                   TYPE    VALUE
---------------------- ------- --------------------------------------------------------
db_file_name_convert   string  /u34/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u34a/oracle/data/FRD9, +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u35/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u36/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u37/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u38/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u39/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE,
                               /u40/oracle/data/FRD9,  +DG_OAPPST1/OAPPST1/DATAFILE
log_file_name_convert  string  /u32/oracle/data/FRD9,  +DG_OAPPST1_AUX1/OAPPST1/ONLINELOG
SQL>
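On 11gR2 this is possible without connecting to the source database at all (backup-based duplication without a target connection); a sketch, with the duplicate database name and backup path as placeholders. For older releases you would restore the controlfile and datafiles from the pieces manually:

```sql
-- Connect to the auxiliary instance only, e.g.:  rman AUXILIARY /
DUPLICATE DATABASE TO dupdb
  SPFILE
  BACKUP LOCATION '/backups/prod'   -- directory holding the backup pieces
  NOFILENAMECHECK;
```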

  • LCRs in queue in source Database are not removed after propagation

    Hello,
I am using Oracle 10g Enterprise Edition 10.2.0.3 databases with Linux in VMware virtual machines. Replication (master-slave) with Streams does its job quite well, but LCRs in the source queue are not removed after propagation.
I think this is because the apply process cannot send the notification after applying the LCRs. Does anyone know if Oracle Streams uses a different port (different from the standard listener port 1521) for the notifications?
For some stupid reason I have to run my virtual machine with a NAT network, so I have to do port-forwarding for every single port the VM needs. Does anyone know of any ports besides the listener (1521) and Enterprise Manager (1158) that I need? Any other suggestions?
    Thanks in advance,
    Oliver Jelinski

    Hello again,
When I wrote "notification" I meant "acknowledgement". So again: does anyone know how acknowledgements are sent by the apply process (on which port)? And if I am right, they are sent to the propagation process, which gives another acknowledgement to the capture process. Does anyone know how this second acknowledgement is sent?
    Thanks again, looking forward to reading your comments,
    Oliver Jelinski

  • Special case with dataguard

    Greetings!
I would like to configure a dataguard standby database with a special primary. I have two servers; both access the same SAN storage and have the spfile (linked from $OH/dbs), redo logs, controlfile, and datafiles on this SAN. Because of that, only one of the servers at a time is able to mount and run the database. (This is NOT RAC.) And because of this situation it does not matter which server runs the instance, as it is effectively always the same database.
My Q:
Am I able to configure DataGuard to try to connect dynamically to both servers and use whichever one is running the instance?
    Thank you in advance.
    Best regards,
    Miklos

    Miklos
First of all, this is not a Dataguard configuration; it is just an active/passive configuration working with the same data. In the event of a failure at the datafile level, this topology won't protect you.
Dataguard is a standby database at a 'remote' site which maintains its own set of datafiles, controlfiles, and redo logfiles, and which is transactionally updated by means of an archivelog stream coming from the primary database, applied either physically or logically. In case of failure of the primary database, host, or storage, you can switch to the standby so operation continues with minimum downtime.
In the topology you are planning, in the event of a failure at the datafile level there will be no way to recover except by means of a restore/recover operation. If Dataguard is a means of maintaining continuous business operation, this approach provides nothing but redundancy at the node level, and is rather a cold failover cluster.
    ~ Madrid
    http://hrivera99.blogspot.com
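As for the connection part of the original question, that is usually handled with an address list in tnsnames.ora: since only one node can have the instance mounted at a time, the connection simply fails over to whichever address answers. A sketch (hostnames and service name are placeholders):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodeB)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )
```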

  • IWork/iLife?  Can I build a searchable database with PDF doclinks on my Mac

    Hi,
    I'm trying to figure out how to scan a library of PDF documents, where I can populate keywords into a searchable database with link(s) for every result back to the pdf file that pertains.
    I don't see that capability in any of the software packages but maybe I'm missing something.
    Thanks!

    I'm trying to figure out how to scan a library of PDF documents, where I can populate keywords into a searchable database with link(s) for every result back to the pdf file that pertains.
    If your PDF documents are themselves non-searchable, whether because the glyphs were turned into bitmaps in a PS driver or because you scanned printed pages, then you are concerned with the metadata in PDF and not with the content stream in PDF.
If you are concerned with the content stream in PDF, you need to think about how you generate the PDF that you populate your database with. As of 2002, the Adobe liaison to the Unicode Consortium was not aware of any PDF implementation that worked the way it should.
That is, if the source character string cannot be implicitly synthesised from the glyph identifiers in the glyph run, then the source character string must be explicitly embedded in the content stream of the PDF. In other words, searching is not guaranteed except perhaps with Adobe InDesign CS2 and CS3.
Anything generated from a PostScript stream in Adobe Acrobat Distiller is not guaranteed to be searchable, and nothing generated in Apple Mac OS X up to and including the present dot release is guaranteed to be searchable.
    /hh

  • Need help, implementing streams between oracle 10g databases

    Hello all.
Please help me, and give me instructions if anybody has implemented streams between databases (Oracle 10g).
I have implemented streams between schema tables (on 10g) and the result was a success.
First I want to know some things, such as:
1) Is it possible to make streams with conditions? I know about DML-only or DDL-only, but can I have DML without the DELETE operation, i.e. just INSERT and UPDATE operations on the table?
2) After the changes have been applied on the target database, can I delete the records which were copied (replicated) on the source database?
I have 2 databases, and one of them is for archive purposes (I want to use it as the target database for streams). The other one is the PROD database, where applications perform DML operations.
    I) record insert with null status
    II) processing (status 1)
    III) if success (status 2) unsuccess (status 3)
For now, I have a cron script (on a Linux host) containing a PL/SQL anonymous block, and it runs a couple of times during the day. This script does the archiving.
My task is to do this via Oracle Streams.
Thank you in advance.

For conditions on the type of operation (INSERT), check the documentation on apply handlers; you can associate whatever code and conditions you want. You can also choose to work with subset rules, but there are some restrictions, such as no LOBs:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_rules.htm#insertedID7
For a complete list of the restrictions:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/ap_restrictions.htm#i1012965
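For question 1 (DML without DELETE), one approach is an extra rule condition on the LCR command type; a sketch with hypothetical schema, table, capture, and queue names (10gR2 syntax):

```sql
BEGIN
  -- Capture-side table rule that excludes DELETE operations
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name    => 'prod.orders',          -- hypothetical table
    streams_type  => 'capture',
    streams_name  => 'capture_arch',
    queue_name    => 'strmadmin.capture_q',
    include_dml   => TRUE,
    include_ddl   => FALSE,
    and_condition => ':lcr.get_command_type() != ''DELETE''');
END;
/
```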

  • DBIF_RSQL_TABLE_UNKNOWN Table AUSPN_V1 does not exist in the database with ID R/3

    Hi All,
We are encountering numerous short dumps in our system caused by "Table AUSPN_V1 does not exist in the database with ID R/3".
Can anyone advise how to solve this issue? Please find below a portion of the short dump.
    Category               ABAP Programming Error
    Runtime Errors         DBIF_RSQL_TABLE_UNKNOWN
    ABAP Program           SAPLCLVF
    Application Component  CA-CL-CL
    Date and Time          11.06.2014 12:08:28
    Short text
         A table is unknown or does not exist.
    What happened?
         Error in the ABAP Application Program
         The current ABAP program "SAPLCLVF" had to be terminated because it has
         come across a statement that unfortunately cannot be executed.
    What can you do?
         Note down which actions and inputs caused the error.
         To process the problem further, contact you SAP system
         administrator.
         Using Transaction ST22 for ABAP Dump Analysis, you can look
         at and manage termination messages, and you can also
         keep them for a long time.
    Error analysis
         A table is referred to in an SAP Open SQL statement that either does not
          exist or is unknown to the ABAP Data Dictionary.
         The table involved is "AUSPN_V1" or another table accessed in the statement.
    Source Code Extract
    Line  SourceCde
      104     if dupl = kreuz.
      105       insert auspc_v2 client specified from table auspcv2
      106                              accepting duplicate keys.
      107     else.
      108       insert auspc_v2 client specified from table auspcv2.
      109       if syst-subrc ne 0.
      110         message a585 with tabausp.
      111       endif.
      112     endif.
      113     refresh auspcv2.
      114   endif.
      115   read table auspcv3 index 1.
      116   if syst-subrc = 0.
      117     if dupl = kreuz.
      118       insert auspc_v3 client specified from table auspcv3
      119                              accepting duplicate keys.
      120     else.
      121       insert auspc_v3 client specified from table auspcv3.
      122       if syst-subrc ne 0.
      123         message a585 with tabausp.
      124       endif.
      125     endif.
      126     refresh auspcv3.
      127   endif.
      128   read table auspnv1 index 1.
      129   if syst-subrc = 0.
      130     if dupl = kreuz.
      131       insert auspn_v1 client specified from table auspnv1
      132                              accepting duplicate keys.
      133     else.
    >>>>>       insert auspn_v1 client specified from table auspnv1.
      135       if syst-subrc ne 0.
      136         message a585 with tabausp.
      137       endif.
      138     endif.
      139     refresh auspnv1.
      140   endif.
      141   read table auspnv2 index 1.
      142   if syst-subrc = 0.
      143     if dupl = kreuz.
      144       insert auspn_v2 client specified from table auspnv2
      145                              accepting duplicate keys.
      146     else.
      147       insert auspn_v2 client specified from table auspnv2.
      148       if syst-subrc ne 0.
      149         message a585 with tabausp.
      150       endif.
      151     endif.
      152     refresh auspnv2.
      153   endif.

Hello
Please check in transaction SE11 whether this table exists and whether it is active.
It may also be a database issue. What is your database?
BR
Caetano
