About GoldenGate

Well, I have some experience working with Oracle Streams, and the technology has served me well. However, I am now reading that Oracle GoldenGate will be the future technology for data integration solutions, and some people say that Oracle Streams will disappear. So I want to learn about Oracle GoldenGate, and I have some questions for you:
Will Oracle Streams really disappear?
Where can I find good documentation about Oracle GoldenGate to learn about its configuration?
Thank you in advance.

The statements of direction that I've seen say that Oracle is going to integrate GoldenGate into the product line and add functionality that currently exists only in Streams into GoldenGate. They are not going to focus on enhancing Streams. I would expect this to play out much like the move from materialized views to Streams as the preferred replication solution: a very gradual shift, with customers considering the new technology for new projects but not generally rushing to rip out the old one. So I wouldn't expect Streams to disappear any more than I would expect materialized views to disappear. But you will probably see Oracle working to better integrate new technologies like GoldenGate with new products like Oracle Data Integrator (ODI).
I'd start here for white papers and discussions.
Justin

Similar Messages

  • About GoldenGate for SQL Server, and SQL Server to SQL Server replication. Help!

    I configured the ODBC DSN dsn_ggExtdb, but when I execute this command:
    DBLOGIN sourcedb dsn_ggExtdb USERID sa, PASSWORD 123456
    I get this error: Unrecognized parameter (SOURCEDB), expected USERID.
    Please help!

    Please ensure that you have downloaded the OGG software for SQL Server:
    OS> ggsci -v
    Oracle GoldenGate Command Interpreter for Oracle   <-------  here is the problem: it should reflect the database environment you want to connect to, in this case MS SQL Server
    Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230
    Windows x64 (optimized), Oracle 11g on Apr 23 2012 04:55:02
    If the build does not show SQL Server, please download the correct package for SQL Server from the Oracle site and try again.

  • GoldenGate replication performance

    Hi,
    This is about GoldenGate replication performance.
    I have configured GoldenGate replication between an OLTP database and a reporting database, and the business peak occurs for only one hour.
    During that hour I can see a lag on the Replicat side of around 10-15 minutes.
    The rest of the time there is no lag.
    I reviewed the AWR report on the target, and all the Replicat processes are executing with an elapsed time of 0.01-0.02 seconds.
    However, I can see a major wait event, db file sequential read, at 65%-71% of DB time, with the maximum waits; apart from this there are no major wait events contributing to % DB time (21% is DB CPU, which I believe is normal).
    I can also see a few queries hitting the database at that peak time, since it is a reporting server; they are SELECT queries that run for more than 15-20 minutes, especially on the high-transaction table.
    Can you please advise where I should look to resolve the lag during the peak hours?
    I believe the SELECT activity/wait event is causing the lag during those peak hours. Am I correct?
    Can you please advise from your experience?
    Thanks
    Surendran

    Hi Bobby,
    Thanks for your response.
    Please find my responses below.
    Environment details:
    1. Source and target DB: Oracle Database Enterprise Edition 11.2.0.4
    2. GoldenGate version: 12.1.2.0.0 (the same GoldenGate version is installed on source and target)
    3. A classic capture (CDC) process is configured between the source and target GoldenGate installations.
    Queries and responses:
    Are there any long-running transactions on the extract side?
    No long-running transactions are seen, but I can see a huge volume of transactions being populated (over 0.3M records in 30 minutes).
    Target environment information:
    High-transaction DML activity is seen on only 3 tables.
    As the target is a reporting environment, I can see many SQL statements querying those 3 heavily populated tables, and I can see the db file sequential read wait event spiking up to 65%-71%.
    I can also see in the AWR report that the GG sessions executing the insert/update transactions take less than 0.01-0.02 seconds.
    I had set the report interval to every 10 minutes. I will change it to 1 minute and share the report.
    My question is: is the SELECT activity on that high-transaction table on the reporting server during the high-transaction window causing the bottleneck and the lag during peak hours?
    Or do I need to look at other areas?
    Based on the above information, if you have any further comments/advice, please share.
    Thanks
    Surendran.
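    As a first step in narrowing down where the peak-hour lag builds up, GGSCI itself can report per-process lag and apply rates. A minimal sketch (the group name rfin is hypothetical; substitute your own Replicat group):

    ```
    GGSCI> LAG REPLICAT rfin
    -- Per-table totals with a per-minute apply rate, to see which table dominates:
    GGSCI> STATS REPLICAT rfin, TOTALSONLY *.*, REPORTRATE MIN
    -- Write an interim processing report to the group's report file:
    GGSCI> SEND REPLICAT rfin, REPORT
    ```

    Comparing the Replicat apply rate against the source capture rate during the peak hour shows whether the Replicat itself is the bottleneck or whether it is simply being starved by the reporting queries.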

  • Migration with GoldenGate

    Hi,
    We want to migrate our SAP system from Oracle 9i (Solaris) to SQL Server 2005 (Windows), and the export time with r3load is too high.
    Do you have any information about the GoldenGate Transactional Data Integration tool?
    Muhittin Çelik

    hi,
    Solutions for "Three States" of High Availability
    High availability of applications and their underlying systems should be assessed from the position of the end user. In other words, regardless of the root cause, does the user perceive the system to be down or unavailable?
    At any given time, the business can be concerned with any one of three (3) states for the same system -- each state with different underlying technical considerations and potential solutions, but all affecting the availability for end users in the same way (it's either seen as "down" or "running smoothly"). Understanding those three states makes it easier to categorize and evaluate potential solutions:
    "Unplanned Outage"
    Unplanned outages may be caused by one of two primary failure types: system-level failures, or data-level failures. The business will want to confidently failover to a backup as quickly as possible, and also be able to easily revert back to normal operating conditions once the primary system is back online.
    GoldenGate for Live Standby -- A high availability software implementation that significantly improves recovery time for business-critical systems, enabling a real-time high integrity backup system that can be re-synchronized with the primary system.
    "Planned Outage"
    There are essential events that still require the IT group to schedule downtime, such as modifying hardware or database software, upgrading applications or databases, applying software patches, migrating to different computing architectures, etc. Since such events are not conceived via an unplanned or unexpected event, they are aptly classified as "planned outages."
    GoldenGate for Zero-Downtime Operations -- An exceptional solution for reducing planned outages, by enabling uninterrupted business operations and application high availability during necessary system upgrade, migration, and maintenance activities.
    "Active"
    In this state, your application and/or database is technically up and running, but is experiencing some degree of degradation that is noticeably affecting performance, throughput, and subsequently response time.
    GoldenGate for Active-Active (Dual-Active) -- A load-sharing, high availability solution for improving performance and reliability for two (or more) systems, including data conflict detection and resolution capabilities.
    GoldenGate for Database Tiering -- An unlimited scalability solution for improved performance and availability of critical systems, achieved by off-loading reporting and read-only activity from critical primary transaction processing systems.
    GoldenGate Live Standby is a high availability software implementation that significantly improves recovery time for business-critical systems.
    KEY BENEFITS:
    Very minimal business interruption enabled by secondary database ready for immediate failover
    Do root-cause analysis of failure while backup system supports active users
    Seamless, real-time data flow between backup and primary systems
    Higher confidence in backup data
    No geographical distance constraints
    Point-in-time recovery and dynamic rollback of data
    Maximize backup system value to support live, real-time reporting needs
    Business and IT Challenges
    For business-critical transaction systems, traditional disaster recovery solutions are simply not enough for companies seeking to improve recovery time and recovery point objectives. Particularly for those systems that directly impact revenue, customer servicing, product delivery, and other key parts of the business, the IT organization must find ideal solutions for minimizing downtime and data loss.
    Overcoming Traditional Recovery Challenges
    Where traditional disaster recovery, backup, and replication methods are relied upon solely, the IT department is likely to find any combination of the following issues:
    It takes longer than expected to get the backup system online.
    The backup data is not current to within sub-seconds of the outage.
    There is no guarantee of data integrity and accuracy at the backup.
    The failover system is not available for real use and requires a substantial, manual recovery effort.
    If the backup system has started processing transactions after the outage, there is no easy, seamless way to re-synchronize data with the primary system.
    In an environment of increasing 24x7 business processes and declining user patience levels, improving the speed and reliability in getting key transactional systems back online must be an integral part of the overall IT strategy.
    GoldenGate Live Standby
    GoldenGate Software offers an excellent solution that allows the IT group to overcome common data recovery challenges, and thereby greatly improve their business's ability to tolerate any range of unexpected interruptions and disasters.
    GoldenGate's Live Standby is a high availability software implementation that captures transactional data from the primary system and applies it to the backup database in real time. If a failure or outage occurs at the primary system, the backup system is immediately ready and available to support users with the most current data committed at the source up to its failure point.
    Further, GoldenGate Live Standby provides for bi-directional data movement so that once the primary system is ready to be brought back online, any new data processed by the backup system is applied to the primary. With this Live Standby solution, the primary and secondary systems can be kept in synch at all times. In addition, the secondary database can now be leveraged to support other business activities -- particularly, real-time "live reporting" needs.
    GoldenGate Live Standby is an excellent, proven solution for unplanned outages and improving disaster tolerance for transactional data systems. Please contact us for more information on GoldenGate Live Standby, the TDM software platform, and other solutions and products.
    thanks
    swamy
    Reward me points if useful.

  • Help Needed on GoldenGate Capabilities

    Hi folks,
    Let me start off by saying I know nothing about GoldenGate. I am asking this question on behalf of one of our software developers, who has been tasked with converting data from an Oracle 10.2.0.5 64-bit database running on Windows 2003 64-bit to an IBM FileNet database. Does anyone know if GoldenGate has the capability to convert the data?
    I appreciate anyone's expertise in this area.
    Jim

    If you are storing files on the file system (as opposed to within the database), then no.
    The BFILE data type, for example, is not supported by GoldenGate (you can also see this in the install/setup guide for Oracle, page 14).
    From Oracle documentation:
    BFILE Data Type
    The BFILE data type enables access to binary file LOBs that are stored in file systems outside Oracle Database. A BFILE column or attribute stores a BFILE locator, which serves as a pointer to a binary file on the server file system. The locator maintains the directory name and the filename.
    You can change the filename and path of a BFILE without affecting the base table by using the BFILENAME function. Refer to BFILENAME for more information on this built-in SQL function.
    Binary file LOBs do not participate in transactions and are not recoverable. Rather, the underlying operating system provides file integrity and durability. BFILE data can be up to 2^64 - 1 bytes, although your operating system may impose restrictions on this maximum.
    The database administrator must ensure that the external file exists and that Oracle processes have operating system read permissions on the file.
    The BFILE data type enables read-only support of large binary files. You cannot modify or replicate such a file.
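    To make the read-only, outside-the-database nature of a BFILE concrete, here is a minimal sketch (the directory object, path, and file name are hypothetical):

    ```sql
    -- A BFILE column stores only a locator; the binary file stays on the OS file system.
    CREATE DIRECTORY docs_dir AS '/u01/app/docs';

    CREATE TABLE doc_refs (
      id  NUMBER PRIMARY KEY,
      doc BFILE
    );

    -- BFILENAME builds a locator from a directory object and a file name;
    -- no file content ever enters the database, which is why GoldenGate
    -- has nothing in the redo stream to capture for it.
    INSERT INTO doc_refs VALUES (1, BFILENAME('DOCS_DIR', 'manual.pdf'));
    ```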

  • Column mapping and conversion between dissimilar tables fails.

    Hello,
    I have two Oracle databases of different versions and replicate a single schema in one direction (9.2.0.8 -> 11.2.0.3).
    The replication instrument is GoldenGate 11.1.1.1.2_05.
    All objects in the two replicated schemas are identical except for one table.
    In 9.2.0.8 the table looks like this:
    create table test.t1 (id number, nm number);
    create index test.i_t1 on test.t1(id);
    However, in 11.2.0.3:
    create table test.t11g (id number, vch varchar2(5));
    create index test.i_t1 on test.t11g(id);
    All objects except this one table, in its two incarnations, replicate successfully.
    Now about the GoldenGate structure; it looks like this:
    (DB 9.2.0.8) -> (Extract) -> (Local trail) -> (Pump) -> (Network) -> (Remote trail) -> (Replicat) -> (DB 11.2.0.3)
    I decided to do the mapping and conversion at the target site, so I have these parameters for the Replicat:
    --Replicat group --
    REPLICAT REP2
    --source and target definitions
    SOURCEDEFS /u01/app/oracle/product/11.1.1.1.2ogg411g/dirdef/sourcedef
    --target database login --
    USERID ogg, PASSWORD XXX
    --file for discarded transactions --
    DISCARDFILE /u01/app/oracle/product/11.1.1.1.2ogg411g/discard/rep1_discard.txt, APPEND, MEGABYTES 10
    --ddl support
    DDL
    --Specify table mapping ---
    MAP test.t1, TARGET test.t11g, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.*, TARGET test.*;
    DDLERROR 24344 DISCARD;
    And, of course, I created the definitions file at the source site and copied it to the target site afterwards, so the SOURCEDEFS parameter reflects reality.
    For completeness, I'll provide the parameters of the Extract and of the pump Extract:
    --extract group--
    EXTRACT ext1
    --connection to database--
    USERID ogg, PASSWORD xxx
    EXTTRAIL /u01/app/oracle/product/11.1.1.12ogg/dirdat/ss
    SEQUENCE test.*
    --DDL support
    DDL INCLUDE MAPPED OBJNAME test.*
    --DML
    TABLE test.*;
    TABLEEXCLUDE test.DIFFTYPE;
    -- Identify the data pump group:
    EXTRACT pump11
    --connection to database--
    USERID ogg, PASSWORD xxx
    RMTHOST db-dev-2, MGRPORT 7869
    RMTTRAIL /u01/app/oracle/product/11.1.1.1.2ogg411g/dirdat/tt
    sequence test.*
    -- Allow mapping, filtering, conversion or pass data through as-is:
    PASSTHRU
    -- Specify tables to be captured:
    TABLE test.*;
    Before generating the definitions file, these parameters were created at the source site:
    DEFSFILE /u01/app/oracle/product/11.1.1.12ogg/dirdef/sourcedef
    USERID ogg, PASSWORD xxx
    TABLE test.*;
    Whenever I tried to insert a row at the source site, my Replicat abended immediately, and these lines appeared in $GG_HOME/ggserr.log:
    2012-10-11 23:49:17  WARNING OGG-00869  Oracle GoldenGate Delivery for Oracle, rep2.prm:  Failed to retrieve column list handle for table TEST.T1.
    2012-10-11 23:49:17  ERROR   OGG-00199  Oracle GoldenGate Delivery for Oracle, rep2.prm:  Table TEST.T1 does not exist in target database.
    2012-10-11 23:49:17  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, rep2.prm:  PROCESS ABENDING.
    It seems to me that the source of the issue is some kind of bug in GoldenGate, but upgrading the system would be my last resort.
    These are my first steps in this area, so I would appreciate any help.
    Edited by: ArtemKhisamiev on 12.10.2012 0:23

    I've tried to change the Replicat params slightly, from this:
    --Replicat group --
    REPLICAT REP2
    <...>
    --Specify table mapping ---
    MAP test.t1, TARGET test.t11g, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.*, TARGET test.*;
    DDLERROR 24344 DISCARD;
    to something like this:
    --Replicat group --
    REPLICAT REP2
      <...>
    --Specify table mapping ---
    MAP test.T1, TARGET test.T11G, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    MAP test.CITIES, TARGET test.CITIES;
    MAP test.COUNTRIES, TARGET test.COUNTRIES;
    MAP test.LOBEXP, TARGET test.LOBEXP;
    DDLERROR 24344 DISCARD;
    Now replication works. So the problem was the asterisk, which of course also matches test.T1, the table that is replaced by test.T11G at the target site.
    Then, how can I exclude test.T1 from the wildcard? Enumerating every table in the schema would be madness, especially if the schema contains a lot of objects.
    Edited by: ArtemKhisamiev on 12.10.2012 1:38
    Edited by: ArtemKhisamiev on 12.10.2012 3:22
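    If I read the GoldenGate reference correctly, MAPEXCLUDE exists for exactly this case: it excludes a source object from wildcard resolution in MAP statements, while an explicit MAP for that object still applies. A sketch of the Replicat mapping section (untested against this exact 11.1 build, so please verify):

    ```
    --Specify table mapping ---
    MAP test.T1, TARGET test.T11G, COLMAP (USEDEFAULTS, VCH=@STRNUM(NM));
    -- Keep the wildcard from resolving test.T1 to a nonexistent target table:
    MAPEXCLUDE test.T1
    MAP test.*, TARGET test.*;
    ```

    This would avoid enumerating every table in the schema.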

  • Want to know more about the working topology of GoldenGate, with examples

    Examples of how GoldenGate works: uni-directional, bidirectional, and others.
    Thank you

    The examples shown in the admin guide are easy to understand. What - specifically - is it you do not understand about unidirectional and bidirectional configurations?
    Unidirectional
    Source - primary extract and secondary extract (data pump)
    Target - replicat
    Bidirectional
    Source - primary extract and secondary extract for source/A database, and replicat to process trails sent from the target/B database
    Target - primary extract and secondary extract for target/B database, and replicat to process trails sent from the source/A database
    Do the tutorial in the link provided earlier. Put hands on keyboard and try.
    Using Oracle GoldenGate for Oracle to Oracle Database Synchronization
    This tutorial provides instructions on how to configure GoldenGate to provide Oracle to Oracle database synchronization.
    http://apex.oracle.com/pls/apex/f?p=44785:24:8700699511844680::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:5340,2
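    As a sketch of the unidirectional layout described above, the whole setup boils down to three parameter files; group names, schema, host, and paths here are hypothetical placeholders:

    ```
    -- Source: primary Extract, capturing to a local trail
    EXTRACT ext1
    USERID ogg, PASSWORD ogg
    EXTTRAIL ./dirdat/lt
    TABLE hr.*;

    -- Source: secondary Extract (data pump), shipping the local trail to the target
    EXTRACT pump1
    RMTHOST targethost, MGRPORT 7809
    RMTTRAIL ./dirdat/rt
    PASSTHRU
    TABLE hr.*;

    -- Target: Replicat, applying the remote trail
    REPLICAT rep1
    USERID ogg, PASSWORD ogg
    ASSUMETARGETDEFS
    MAP hr.*, TARGET hr.*;
    ```

    A bidirectional setup is simply this same set of processes configured in both directions, as the answer above describes.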

  • Doubt about SQL Server with GoldenGate

    I have a doubt about SQL Server:
    must the marker_setup.sql script be run in SQL Server or not?
    Edited by: 891982 on 11 มี.ค. 2555, 4:16 น.

    Thank you.
    My install was wrong: I needed to install GoldenGate for SQL Server on Windows.

  • GoldenGate COLMAP issue with a calculation

    Hi guys,
    I did some replication from Oracle to SQL Server.
    Source DB: Oracle table:
    create table myemps(
    id number,
    first_name varchar2(100),
    last_name varchar2(100),
    salary number
    );
    Target DB: SQL Server table:
    CREATE TABLE [dbo].[myemps](
      [id] [numeric](18, 0) NOT NULL,
      [full_name] [varchar](300) NULL,
      [wages] [numeric](18, 0) NULL,
      [first_name] [varchar](200) NULL,
      [last_name] [varchar](200) NULL,
    PRIMARY KEY CLUSTERED (
      [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    The Replicat params:
    REPLICAT MSREP1
    TARGETDB ggs_nc
    GETTRUNCATES
    APPLYNOOPUPDATES
    SOURCEDEFS dirdef/source.def
    MAP scott.customer, TARGET dbo.customer;
    MAP scott.myemps, TARGET dbo.myemps,
    COLMAP (USEDEFAULTS,
    WAGES = @COMPUTE(SALARY * 12),
    FULL_NAME = @STRCAT(LAST_NAME, ",", FIRST_NAME));
    The problem is that when an update changes only first_name on Oracle, the update is not reflected in full_name on SQL Server.
      Sample:
         I issued some SQL from Oracle:
    insert into myemps(id,first_name,last_name,salary) values(2, 'A', 'B', 100000);
    commit;
    update myemps set first_name = 'CCCC' where id = 2;
    commit;
    After the update, full_name on SQL Server should be 'B,CCCC', but it remains 'B,A'.
    Can anyone explain this?
    Thanks.

    I am guessing that it has something to do with not having all of the source columns with which to do the @STRCAT. If you did an ADD TRANDATA on the source table, then all that is getting passed in your update record to the Replicat is the key and the first_name column. But if first_name is updated, you also need last_name so that you can format the target full_name column correctly, and vice versa, because it needs to overwrite the entire full_name string. One way to accomplish this is to always supplementally log both first_name and last_name. That is overkill, but I don't think there is a way to tie supplemental logging of one column to another (i.e., to log last_name only when first_name is updated).
    Best regards,
    Marie
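    Following Marie's suggestion of always making both name columns available, there are two documented ways to do it (table name from this thread; please verify the syntax against your OGG version):

    ```
    -- Option 1: supplementally log both columns on the source (run in GGSCI):
    ADD TRANDATA scott.myemps, COLS (first_name, last_name)

    -- Option 2: have Extract fetch the columns from the database when they
    -- are absent from the log record (in the Extract parameter file):
    TABLE scott.myemps, FETCHCOLS (first_name, last_name);
    ```

    Either way, every update record carries both first_name and last_name, so the @STRCAT on the target can always rebuild the full_name string correctly.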

  • Problem installing GoldenGate Director on WebLogic 12c

    Hello friends,
    We have a server with WebLogic 12c and a basic domain, "base_domain", on it (I have no prior experience with WebLogic). Its path is "D:\Oracle\Middleware_HOME\wlserver_12.1".
    I can start its AdminServer and we can log in to its console.
    Now I want to install GoldenGate Director Server, but during installation, when I set the WebLogic location, it gives me the error:
    "Please Select a valid weblogic install Directory"
    I tried all of these paths:
    "D:\Oracle\Middleware_HOME\wlserver_12.1"
    "D:\Oracle\Middleware_HOME"
    "D:\Oracle"
    but the error remains.
    I installed WebLogic on another machine, but nothing changed.
    Another thing is that I checked all the places the Oracle documents say should be updated, namely:
    1. Update the registry.xml file in the WebLogic home directory to point to the correct WebLogic Server home directory.
    2. Update all of the scripts under %WLS_SERVER_HOME%/server/bin and %WLS_SERVER%/common/bin to point to the correct WebLogic Server home directory.
    3. Update the .product.properties file under %WLS_SERVER_HOME% to point to the correct WebLogic Server home directory.
    1 & 2 are correct (I think), but I couldn't find the third one, ".product.properties".
    Please help me. What's the problem?
    WebLogic servers: Windows Server 2008 64-bit SP2 and Windows 7 Ultimate 64-bit
    Weblogic version : oepe-indigo-installer-12.1.1.0.1.201203120349-12.1.1-win32
    GoldenGate Director : gg-director-serversetup_win_v11_1_1_1_0_001

    You should specify the Middleware home directory.
    The error is normally caused by the wrong version being installed, one that doesn't have the correct registry.xml in MW_HOME. I hit this problem in the past because I had installed a dev version of WLS from a zip file.
    After I installed the installable version of WLS (I think it is the same version you used: oepe-indigo-installer-12.1.1.0.1.201203120349-12.1.1-win32.exe), it worked fine.

  • GoldenGate calling a PL/SQL procedure

    Hi friends,
    As I am new to GoldenGate features, I need your help with this task.
    I actually need a PL/SQL procedure, since I am working on GoldenGate replication; I will call this procedure from my parameter file. The actual requirement is like this:
    I have tables A, B, C at the source, and at the target I have tables B, C, and D.
    TABLE A has columns ID, TT_STATUS, COUNTRY, DB_NAME
    TABLE B has columns OPEN_BY, CREATED
    TABLE C has columns NAME, DEPT
    TABLE D has columns NAME, DEPT, OPEN_BY, CREATED, ID, TT_STATUS, COUNTRY, DB_NAME
    At my source, whenever the TT_STATUS column on TABLE A changes (for example from open to closed, or to some other status), I have to compare before.tt_status with tt_status. If they differ, I have to do an insert like this:
    insert into table D as select NAME, DEPT, OPEN_BY, CREATED, ID, TT_STATUS, COUNTRY, DB_NAME from a, b, c
    If both statuses are the same, ignore it.
    So here I have two tables, B & C, as lookup tables at the target; the other table, A, I don't have at the target side, and I want to know how to achieve this without bringing table A over to the target side.
    I need your help in achieving this task, since I am using features like calling a procedure for the first time. I know it would be easy with pure PL/SQL, but I don't know how to pass the values into the procedure.
    Thanks and regards
    Tom

    Hi Tom,
    Check out the OGG 11.1 reference manual, pages 245-258, which cover SQLEXEC. It works the same for Extract and Replicat, with one significant difference: Extract does not support REPERROR yet.
    Here's an example from said doc:
    MAP sales.srctab, TARGET sales.targtab, &
    SQLEXEC (SPNAME lookup, ID lookup1, PARAMS (param1 = srccol)), &
    COLMAP (targcol1 = lookup1.param2), &
    SQLEXEC (SPNAME lookup, ID lookup2, PARAMS (param1 = srccol)), &
    COLMAP (targcol = lookup2.param2);
    A few things to point out here:
    1. The string after the reserved word SPNAME is the name of your procedure. If logged in as the owner you don't need to qualify the schema, but it's always good to do so.
    2. "param1" is the name of the IN parameter of your procedure.
    3. "param2" is the OUT parameter of your procedure.
    4. "srccol" is the name/value of a column in the table.
    5. You no longer need the ampersand (&) to continue lines in TABLE and MAP statements.
    If you want to write this using a TABLE statement in the Extract (MAP statements are used in the Replicat), you would store the data in a user-defined token. Tokens are declared on the fly, written with the OGG change record to the trail, and available at the target. You'll want to read the document section on tokens (use @TOKEN in MAP statements to pull out the value).
    We can write a simpler version, for example purposes, like this in the Extract:
    TABLE sales.srctab,
    SQLEXEC (SPNAME lookup, ID lookup1, PARAMS (param1 = srccol)),
    TOKENS (TKN_STATUS = lookup1.param2);
    If you want to do string comparisons on the target then check the @STR* functions. Otherwise you can do this logic in your procedure.
    Hope this helps and good luck.
    -joe
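    For completeness, the stored procedure behind SPNAME lookup needs one IN and one OUT parameter matching the PARAMS and COLMAP/TOKENS clauses above. A hypothetical sketch (the lookup table and its columns are invented for illustration):

    ```sql
    CREATE OR REPLACE PROCEDURE lookup (
      param1 IN  VARCHAR2,   -- receives the source column value (srccol)
      param2 OUT VARCHAR2    -- returned to GoldenGate as lookup1.param2
    ) AS
    BEGIN
      -- Resolve the incoming value against a (hypothetical) lookup table.
      SELECT status_desc
        INTO param2
        FROM status_lookup
       WHERE status_code = param1;
    END lookup;
    /
    ```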

  • How to speed up the GoldenGate Replicat during busy times

    Both DB versions are 11.2.0.3 on Linux Red Hat.
    Target tables replicate normally during slow times. However, last weekend there was a lot of activity on the source, and the target lag went up to 5 hours. We checked the error logs and reports; there were no errors on either source or target.
    After the weekend, the lag finally cleared, because weekends are slow.
    How do we improve this? How can we speed up the Replicat during peak hours?
    Thanks.

    Hi,
    1. Use of the @RANGE function
    GoldenGate has a feature to split the data in a schema using the @RANGE function. It preserves data integrity by guaranteeing that the same row will always be processed by the same process group. @RANGE tends to split data uniformly, similar to hash-partitioned tables.
    2. Asynchronous COMMITs
    Replicat performance can be further increased by modifying the way GoldenGate commits transactions on the target database. By default, Oracle GoldenGate waits for each transaction to be committed before allowing the session to continue; this is the synchronous way, and it leads to unnecessary delay or latency on the Replicat side.
    To overcome this, we can configure the Replicat processes to commit asynchronously at the session level by including the following SQLEXEC statement in each parameter file:
    SQLEXEC "alter session set commit_wait = 'NOWAIT'";
    There is nothing to worry about during database crashes or other problems: GoldenGate will automatically replay the uncommitted transactions using the checkpoint table, following database instance crash recovery.
    3. If you are using ASM, then use the DBLOGREADER option of TRANLOGOPTIONS. This eases access to ASM: via an OCI API, GoldenGate reads the redo and archived logs from the DB server, increasing Extract performance over the former PL/SQL API. There is therefore no need to specify the TRANLOGOPTIONS ASMUSER option when specifying DBLOGREADER.
    4. You can also use the parameters below to increase performance.
    DYNAMICRESOLUTION is an Extract and Replicat parameter that causes the object record to be built one table at a time, instead of all at once at process start-up. A table's attributes are added to the record the first time its object ID appears in the transaction log, which occurs with the first extracted transaction on that table. Record-building for other tables is deferred until activity occurs. DYNAMICRESOLUTION is the same as WILDCARDRESOLVE DYNAMIC. NODYNAMICRESOLUTION causes the object record to be built at startup. This option is not supported for Teradata.
    With WILDCARDRESOLVE IMMEDIATE, source tables that satisfy the wildcard definition are processed at startup. With WILDCARDRESOLVE DYNAMIC, source tables that satisfy the wildcard definition are resolved each time the wildcard rule is satisfied. Do not use DYNAMIC alone, and use the BOTH option instead, if SOURCEISTABLE is specified, or if some source tables are specified with wildcards while others are specified with explicit names. WILDCARDRESOLVE BOTH combines DYNAMIC and IMMEDIATE resolution: source tables defined by name are processed at startup (IMMEDIATE), and wildcarded source tables are processed when GoldenGate receives the first operation on them (DYNAMIC). This allows new source tables that satisfy the wildcard to be added later.
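    Point 1 above can be sketched as parallel Replicat groups splitting one busy table by key. A hypothetical example with three groups, each in its own parameter file (table and column names invented):

    ```
    -- Replicat group 1 of 3:
    MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (1, 3, order_id));

    -- Replicat group 2 of 3:
    MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (2, 3, order_id));

    -- Replicat group 3 of 3:
    MAP sales.orders, TARGET sales.orders, FILTER (@RANGE (3, 3, order_id));
    ```

    Because @RANGE hashes the key, each row always lands in the same group, so per-row ordering is preserved while the apply work is spread across three processes.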
    Regards,
    Veera

  • Oracle Streams versus Oracle GoldenGate

    Hi all,
    I just found out about Oracle GoldenGate and was wondering if any of you could share the differences between it and Oracle Streams when it comes to change data capture capabilities. Also, how does OWB come into play with Oracle GoldenGate? For instance, OWB 11gR2 has CDC capabilities; does that mean its CDC capabilities are based on Oracle Streams?

    Hi,
    With CDC/Streams you have two choices:
    process the Oracle logfiles on the source database/server and read the resulting change records from the target database/server, or
    transport the logfiles to the target database/server and process them there.
    The advantage of the latter case is that you relieve the source of the load of processing the logfiles, but target and source then need to have the same database and server versions. GoldenGate, if I understand correctly, converts the logfiles to its own format (with minimal load), and these can be processed by GoldenGate on a target database and server of a different version from the source.
    So you get the advantage (little load on the source) without the disadvantage (source and target having to be equal versions).
    Regards,
    Jaap.

  • GoldenGate for Standard Edition One

    Hi,
    I'm considering implementing GoldenGate for bidirectional active-active replication between 2 Oracle Standard Edition databases.
    I couldn't find any reference about the difference between GG on Enterprise Edition and GG on Standard Edition.
    So I would like to know: what are the restrictions for SE1?
    Can GG do bidirectional active-active replication, including DDL, on SE1?
    Thanks, Ofir.

    Hello,
    I cannot find documentation on the Streams (SE/SE1) licensing restrictions you speak of.
    Only this restriction is documented in the licensing manual (https://docs.oracle.com/database/121/DBLIC/toc.htm):
    with SE/SE1, capture is asynchronous; that is, you cannot perform real-time capture.
    No further restrictions are indicated.
    In this case, if you want to use real-time capture, GG would have to be used in classic mode.
    In any case, you need a GoldenGate licence.
    Arturo

  • What's the best way to cleanly stop Goldengate?

    For routine maintenance/upgrades, what's the best way to cleanly stop GoldenGate? I don't want to wait endlessly. I currently use this and have seen no issues:
    stop er *!
    kill er *!
    stop manager!
    Thanks,
    Shankar

    shiyer wrote:
    For routine maintenance/upgrades, what's the best way to cleanly stop GoldenGate? I don't want to wait endlessly. I currently use this and have seen no issues:
    stop er *!
    kill er *!
    stop manager!
    For routine maintenance/upgrades, just "stop er *" should be preferred; and when all processes are stopped, mgr can (optionally) be stopped. If you really are waiting endlessly for this to return, the real question is "why": perhaps there are other parameters that can be adjusted to make GG stop more quickly. (On the other hand, "stop mgr!" is harmless; the "!" simply prevents it from asking "are you sure?" before stopping the process.)
    I really wouldn't use "kill" unless you have a good reason to (and the reason itself, requiring the process to be killed, should be analyzed and resolved). Killing a process shouldn't cause data loss (GG always maintains checkpoints to prevent this), but it still seems unnecessary unless there really is something that must be killed. (Aside: I could kill -9 / "force quit" my browser and/or halt my laptop every time as well, and it would probably be "faster", but it can cause issues, and wasted time, upon restart: fsck, recovering sessions, whatever. Same basic idea, imo.)
    There's a reason there are different commands to stop processes (stop vs. stop! vs. kill). Just for example, "stop replicat !" causes current transactions to be rolled back; there is typically no reason for this, since the transaction will just be restarted when the process restarts; you might as well let the current one finish. And "kill extract" (I believe) will not warn about potentially long-running transactions that can cause painful issues at startup (missing old archive logs, etc.). There are probably other examples as well.
    So if this really is routine, then "stop", don't "stop!" or "kill". If there are long delays, see why first and see if they can be addressed independently. (This really is just a stock answer for a generic question; it would be irresponsible for me to answer otherwise :-) )
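    The graceful sequence described above can be sketched in GGSCI like this (confirm every group is stopped before touching the manager):

    ```
    GGSCI> STOP ER *       -- graceful stop of all Extract/Replicat groups
    GGSCI> INFO ALL        -- wait until every process shows STOPPED
    GGSCI> STOP MGR        -- optional; without "!" it asks for confirmation
    ```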
