Golden Gate Data Transformation and Filters

Hello All,
I have a scenario where I need to transform data on the fly. In short, I have two customers using the same software, and each of them is customer_id=1 in their own application.
However, Customer A's application stores Customer B as customer_id=55, and Customer B's application stores Customer A as customer_id=12.
I know I can already set up filters for the particular data I want to replicate; however, I don't know whether I can transform it before it is replicated.
Here is the scenario:
Replication from Customer A(customer_id=1) to Customer B(customer_id=55)
==========================================================
-->  FILTER inserts, updates, deletes where customer_id=55
---> TRANSFORM customer_id=55 to customer_id=1
    --> APPLY EXTRACTS to Customer B
Replication from Customer B(customer_id=1) to Customer A(customer_id=12)
==========================================================
-->  FILTER inserts, updates, deletes where customer_id=12
---> TRANSFORM customer_id=12 to customer_id=1
    --> APPLY EXTRACTS to Customer A
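Something along these lines is what I am hoping is possible, for example in the Replicat parameter file on the Customer B side (just a rough sketch on my part, with made-up schema/table names, and assuming the MAP statement's FILTER and COLMAP clauses can be combined like this):
-- keep only the rows that belong to Customer B (stored as customer_id=55 at Customer A)
-- and rewrite the id to Customer B's local value (customer_id=1)
MAP appowner.orders, TARGET appowner.orders,
  FILTER (customer_id = 55),
  COLMAP (USEDEFAULTS, customer_id = 1);
The Customer B -> Customer A direction would simply be the mirror image (FILTER on customer_id = 12, COLMAP customer_id = 1).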
Thanks in advance for any advice or recommendations you might have on the subject.
Sincerely
Jan S.

I've done some research and it looks like we are talking about Oracle Data Integrator... I hope not; I don't want to add another layer of complexity.
I hope there is someone out there who has an alternative solution.
Thanks in advance for any assistance in this matter.
Jan S.

Similar Messages

  • Using transforms and filters without device drivers

    Hello,
    I came across NIMS as a possible solution for some transforms and filtering, possibly even generating test signal data, for a seismic application. I'm in the process of evaluating NIMS for best possible fit for what we need/want to accomplish.
    Basically, we've got some seismic data, and we want to process that data through a series of transforms and filters to denoise and pick the data for seismic analysis. No sense reinventing the wheel if we can adopt and then adapt a third-party library like NIMS into our app.
    We do not necessarily need any device drivers, although I noticed installing NIMS requires them. Hopefully we can opt in or out depending on what's actually required. Can someone help clarify the nature of the driver dependency?
    Anyhow, like I said, I am evaluating it for the best possible fit in our application, but in the meantime, if someone can shed some light on the above concerns, questions, etc., that would be great.
    Thank you...
    Best regards.

    Glad to hear it!
    -Mike
    Applications Engineer
    National Instruments

  • SQL DM - Data Transformation and Data Movement option ?

    I am using SQL DM 3.0.0.665. I need your thoughts on the following.
    We find that Erwin introduced Data Transformation and Data Movement functionality to support ETL need. We were able to generate ETL spec using this feature.
    Does SQL DM have any plan to introduce such features?
    How do we use the current SQL DM to build ETL spec ?
    Thanks for helping us out.

    Hello,
    I am currently experimenting with SQL Data Modeler to produce high level solution designs and ETL specifications.
    Have not completed what I am doing but so far have come up with the following:
    Current assumption I am working on:
    All objects specified within the SQL Data Modeler will export to the Reporting Schema tables set up in an Oracle database. Once the data is within these tables then it will be a simple task to develop a SQL report to extract the data in any format required.
    1) There is nothing in the physical (Relational) Model section that supports this
    - though I have yet to fully use the Dimensional Modelling functionality which may have the mapping functionality required to specify an ETL
    2) We need diagrams of the processes as well as the ETL mapping
    - Process modelling is available in the Logical
    - Reverse Engineer all Physical objects to become Logical objects, i.e. one Table to one Entity
    - For each Entity set up an Information Structure
    (Currently this can only be done in a convoluted method via creating a diagram, creating a Flow and editing the Flow then drilling down)
    MESSAGE to SQL Data Modeler Support: Can things be set up so that Information Structures can be set up directly from the Browser, current method is a bit nonsensical
    - You are now set up to use the Logical Process Modeling functionality to capture the ETL requirements
    - Advise that you reference the training to understand what primitive, composite and transformation processes objects are
    - Also, take the time to understand what an external agent object is
    - Will assume you know what a Data Store is
    Here is the standard I am heading towards that seems feasible, will need to run a proof of concept within the larger team to ensure it works though:
    - A Logical is kept that is a one for one with the Physical
    (The only reason for this is that there is no process modeling functionality for the Physical objects
    MESSAGE to SQL Data Modeler Support: Can you duplicate the Process Modeling for the Logical to be available for the Physical objects too, would be a great help to specify ETL jobs.
    - An External Agent is used to represent an external source e.g. Billing application
    - A primitive process is used to represent the high Level design
    - A composite process is used to specify processes which can be further broken down to ETL jobs
    - A transformation process is used to represent an ETL job
    Within a Transformation process you can specify the mapping from multiple sources to a target table
    There are some negatives to this approach:
    - You lose the physical schemas the tables are part of, though a naming convention will get round this
    - You need to maintain a logical that is one for one with the physical, this is not a big overhead
    However, as I have stated in my message to the SQL Data Modeler support team, would all be resolved if the Process Modeling functionality were also made available within the Physical objects environment.
    Please note that we have not as yet adopted the above approach and are still assessing whether SQL Data Modeler will meet this requirement to our satisfaction. The critical bit will be whether the data exports to the Reporting Schema; if it does, then we have plenty of SQL resources that can produce the reports required, provided the data can be captured.
    Hope that all helps.
    Also, hope I have not missed the point of your email.
    Kind regards,
    Yusef

  • Golden Gate Data Type

    Hi,
    Is there any data type that can be used instead of CLOB or BLOB? I believe Golden Gate cannot replicate the CLOB and BLOB data types.
    Any suggestions?
    Regards,
    muddy

    Did you read the installation guide for Oracle (assuming that is what you're using since you didn't say otherwise), under Supported Oracle data types?
    Large object data types:
    ● CLOB
    ● NCLOB
    ● BLOB

  • Master Data: Transformation and DTP for compounded characteristic

    Good day
    Please assist, I am not sure what is the correct way.
    I extract master data from a db via DB connect.
    There are three fields in the db view for extraction. (1) Code (2) Kind and (3) Code Text.
    What I have done is the following. I created a datasource with a transformation and DTP for master data text for (1) and (3), and then a datasource with a master data attribute transformation and DTP for (1) and (3).
    Is this the correct way to handle extracts of compounded characteristics?
    Your assistance will be appreciated.
    Thanks
    Cj

    Hello,
    If the characteristic 'Code' is compounded with 'Kind',
    then for the text datasource you should have 1, 2 and 3; the table for this datasource should have 'Code' and 'Kind' as keys.
    For the attribute datasource you should have 1 and 2, followed by the required attributes of 'Code'.
    Regards,
    Dhanya

  • Data Mining on data specified and filtered by the user in runtime

    Hi Experts,
    I am new to Data Mining in SAP BI (we are on BI 7.0 SP level 20). I familiarised myself with APD and Data Mining by reading some interesting and useful threads in this forum and some other resources. I therefore got an understanding of the topic and was able to create a basic data mining model for an association analysis and a corresponding APD for it, and to write the results into a DSO by using the data source. But so far I have not been able to find a solution for a concrete customer requirement.
    The user shall be able to select an article, a retail location and a month and get the top n combinations sold with that article in the particular location and month. For that he may not access the data mining workbench or any other SAP-internal tools, but he shall be able to start the analysis out of the portal (preferably a query).
    We had some thoughts on the scenario. The first idea would be to create an APD for every location for the last month. As we need to cover more than 100 locations, this would not be practicable. Therefore I think it would be necessary, that the user can select the particular filters, and the data mining would then be executed with the given input.
    The other idea was to use a query as source. The user would start this query and filter location and month in it. The result of the query could then be used as the source for the APD with the association analysis. Therefore we would need to create a jump point from that query, which starts the APD with that results. After that the user should be able to start a result query, which displays the result of the association analysis (ideally this result query would start automatically, but starting it manually would be ok, too).
    So, I have the following questions for these scenarios:
    1.) Is it possible to create variants of a single APD, for automatically doing the data mining for the different locations?
    2.) is it possible to start an APD out of a query, with the particular results regarding filtering?
    3.) Can we place a query directly on the data mining results (how?) or do we need to write the data mining results in a DSO first?
    4.) What about the performance? Would it be practicable to do the data mining in runtime with the user waiting?
    5.) Is the idea realistic at all? Do you have any other idea how to accomplish the requirement (e.g. without APD but with a query, specific filter and conditions)?
    Edited by: Markus Maier on Jul 27, 2009 1:57 PM

    Hi,
    you can see the example: go to SE80, select BSP Application SBSPEXT_HTMLB, then open tableview.bsp; it should give you a clearer idea of the code you have written.
    DATA: tv TYPE REF TO CL_HTMLB_TABLEVIEW.
    " get a reference to the tableView element from the request
    tv ?= cl_htmlb_manager=>get_data(
                             request = runtime->server->request
                             name    = 'tableView'
                             id      = 'tbl_o_table' ).
    IF tv IS NOT INITIAL.
      DATA: tv_data TYPE REF TO CL_HTMLB_EVENT_TABLEVIEW.
      tv_data = tv->data.
      IF tv_data->prevSelectedRowIndex IS NOT INITIAL.
        FIELD-SYMBOLS: <row> LIKE LINE OF sflight.
        " read the previously selected row from the internal table (sflight here)
        READ TABLE sflight INDEX tv_data->prevSelectedRowIndex ASSIGNING <row>.
        DATA value TYPE STRING.
        value = tv_data->GET_CELL_ID( row_index    = tv_data->prevSelectedRowIndex
                                      column_index = '1' ).
      ENDIF.
    ENDIF.

  • Bridge meta data handling and filtering

    It would be great if you could add two features:
    1. Adding keywords to multiple files: currently, if two files share the same keywords but not in the same order, adding a new keyword is impossible without overwriting the existing keywords.
    2. Filtering by location data: it would be great to filter by country, say, just as by keyword, date or other criteria.
    Thanks, Erik

    For adding keywords without removing existing keywords, this script might be of use to you...
    http://www.ps-scripts.com/bb/viewtopic.php?f=19&t=2364&sid=57a50afd1b53b5fc195f0e5dfdbfab06

  • Color Application with Bitmap Data, Threshold and filters

    Hello,
    I want to simulate lipstick in an image over the lips.
    To apply colour to the image, we apply the threshold method to a BitmapData.
    Then we generate a Bitmap from the image to which we have applied the colour, and we set a BlendMode for the BitmapData.
    We apply the following filters:
    BlurFilter
    BevelFilter
    ColorMatrixFilter
    ConvolutionFilter
    GlowFilter
    GradientBevelFilter
    Then we create the image and assign the BitmapData, with the filters and the specific BlendMode applied, to its .source.
    We add this to a canvas with addChild().
    The problem is that we obtain an image that looks good, but it does not look like an image that glows, or like an image superimposed on the original.
    It's not very "real". I need it to look shinier.
    Can you help me?
    Thank you very much!

    Just to clarify, the site I'm having trouble with is at the link at the bottom of my first post.  The first one was my working example.  This one's broken: http://www.equestrianarts.org/resources.html
    I'm looking at it in Explorer 7, and all I get is the names of the column headers in {brackets} where there should be lists loading from my html table.  Does anyone else see the same thing?
    Thanks,
    Maria

  • Data Guard or Golden Gate

    Hello Fellows
    I want to migrate an Oracle 9i database to 11g and I am not sure of the best way to migrate it.
    Data Guard or Golden Gate
    Thanks in advance
    Panos
    Edited by: user9141133 on Sep 19, 2011 2:50 AM

    Although it is true that Golden Gate & Data Guard have different main purposes, they can very well be used during an upgrade or a migration to minimize the associated downtime.
    See for example how Data Guard can be used to minimize downtime for upgrade here:
    http://uhesse.wordpress.com/2011/08/10/rolling-upgrade-with-transient-logical-standby/
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com

  • Need pointers towards SOAP header based transformation and filtering in ESB

    Hi,
    Could you help me find sample projects that demonstrate SOAP header based transformation and filtering in ESB?
    Regards,

    Hi.
    You can find info on the ESB home page:
    http://www.oracle.com/technology/products/integration/esb/index.html
    There you can download esbsamples:
    http://www.oracle.com/technology/products/integration/esb/files/esbsamples.zip
    It has a project demo containing SOAP header examples.
    Denis

  • Doubt: Filters in Golden Gate Area

    Hi All,
    I'm an ODI developer (beginner) and have a doubt about Golden Gate.
    I have the following scenario:
    GG -> Staging Area -> Mastersaf
    The client told me not to apply filters in the first step (GG -> Staging), only in the step Staging Area -> Mastersaf.
    Is that correct (is it a best practice)? Why?
    Regards,

    This is a common requirement / request. Basically if you have GG capturing data from some database, you want to capture everything that you might need, including in the future, rather than just getting what you know you need today. So, yes, capture everything (or rather don't filter gratuitously) and send it all to the staging area. Next week you may have another client (or a change in requirements) that asks for additional data. If you're already capturing the data, you don't have to go back to the source & modify your configuration to get the additional tables, rows or columns. Yes, it takes a little bit of extra (temporary) storage space, but disk and bandwidth are (often) cheaper than development effort, regardless of how much effort it actually is. (In many database shops, it takes much longer to do the "paperwork" to make a change on the DB server than the time it would take to make the actual change. This type of proactive policy ("capture it in case we need it") prevents that.)
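    To illustrate the idea (a rough sketch only; the schema, trail and group names below are invented): the capture side stays broad, with no FILTER or WHERE clauses, and the selective logic lives downstream in the Staging Area -> Mastersaf step instead.
    EXTRACT EXT_SRC
    USERID ggadmin, PASSWORD ggpass
    EXTTRAIL ./dirdat/sa
    -- no FILTER/WHERE here: capture the whole schema into the staging trail,
    -- so a future requirement does not force a change on the source side
    TABLE finowner.*;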

  • Golden Gate for MySQL 5.5: extract is ABENDED, and no error in the file

    Dear All,
    Golden Gate from MySQL 5.5 to Oracle 11g: the extract is ABENDED, but there is no error in the log, and sometimes it successfully extracts some records.
    extract :
    EXTRACT EXT_M1               
    TRANLOGOPTIONS AltLogDest /mydata/mysqllog/binlog/binlog.index       
    SOURCEDB [email protected]:16052, USERID mama,PASSWORD mama        
    sqlexec "set names gbk;"       
    EXTTRAIL dirdat/m1                  
    Dynamicresolution               
    TABLE mama.merchants_member_card_customer;   
    datapump:
    EXTRACT DPRD_M1  
    SOURCEDB [email protected]:16052, USERID mama,PASSWORD mama  
    RMTHOST 192.168.2.57, MGRPORT 7089, compress --COMPRESSUPDATESETWHERE
    RMTTRAIL /home/oracle/goldengate/dirdat/m1
    NOPASSTHRU  
    TABLE mama.merchants_member_card_customer;
    GGSCI>>info all
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    MANAGER     RUNNING                                          
    EXTRACT     RUNNING     DPRD_M1     00:00:00      00:00:01   
    EXTRACT     ABENDED     EXT_M1      00:11:49      00:01:56
    REPORT:
    GGSCI>>view report ext_m1
                      Oracle GoldenGate Capture for MySQL
          Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230
    Linux, x64, 64bit (optimized), MySQL Enterprise on Apr 23 2012 05:23:34
    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.
                        Starting at 2013-09-29 18:38:08
    Operating System Version:
    Linux
    Version #1 SMP Wed Jun 13 18:24:36 EDT 2012, Release 2.6.32-279.el6.x86_64
    Node: M46
    Machine: x86_64
                             soft limit   hard limit
    Address Space Size   :    unlimited    unlimited
    Heap Size            :    unlimited    unlimited
    File Size            :    unlimited    unlimited
    CPU Time             :    unlimited    unlimited
    Process id: 6322
    Description:
    **            Running with the following parameters                  **
    2013-09-29 18:38:08  INFO    OGG-03035  Operating system character set identified as UTF-8. Locale: zh_CN, LC_ALL:.
    EXTRACT EXT_M1
    TRANLOGOPTIONS AltLogDest /mydata/mysqllog/binlog/binlog.index
    SOURCEDB [email protected]:16052, USERID mama100,PASSWORD ****************
    sqlexec "set names gbk;"
    Executing SQL statement...
    2013-09-29 18:38:08  INFO    OGG-00893  SQL statement executed successfully.
    EXTTRAIL dirdat/m1
    Dynamicresolution
    TABLE mama100.merchants_member_card_customer;
    2013-09-29 18:38:08  INFO    OGG-01815  Virtual Memory Facilities for: COM
        anon alloc: mmap(MAP_ANON)  anon free: munmap
        file alloc: mmap(MAP_SHARED)  file free: munmap
        target directories:
        /home/goldengate/dirtmp.
    CACHEMGR virtual memory values (may have been adjusted)
    CACHESIZE:                               64G
    CACHEPAGEOUTSIZE (normal):                8M
    PROCESS VM AVAIL FROM OS (min):         128G
    CACHESIZEMAX (strict force to disk):     96G
    Database Version:
    MySQL
    Server Version: 5.5.24-patch-1.0-log
    Client Version: 6.0.0
    Host Connection: 192.168.2.46 via TCP/IP
    Protocol Version: 10
    2013-09-29 18:38:08  INFO    OGG-01056  Recovery initialization completed for target file dirdat/m1000000, at RBA 1295, CSN 000086|000000065228677.
    2013-09-29 18:38:08  INFO    OGG-01478  Output file dirdat/m1 is using format RELEASE 11.2.
    2013-09-29 18:38:08  INFO    OGG-01026  Rolling over remote file dirdat/m1000000.
    2013-09-29 18:38:08  INFO    OGG-00182  VAM API running in single-threaded mode.
    2013-09-29 18:38:08  INFO    OGG-01515  Positioning to begin time 2013-9-29 06:26:18.
    **                     Run Time Messages                             **
    2013-09-29 18:38:08  INFO    OGG-01516  Positioned to Log Number: 86
        Record Offset: 65223906, 2013-9-29 06:26:18.
    2013-09-29 18:38:08  INFO    OGG-01517  Position of first record processed Log Number: 86
        Record Offset: 65223906, 2013-9-29 06:26:18.
    TABLE resolved (entry mama100.merchants_member_card_customer):
      TABLE mama100."merchants_member_card_customer";
    Using the following key columns for source table mama100.merchants_member_card_customer: id.
    2013-09-29 18:38:08  INFO    OGG-01054  Recovery completed for target file dirdat/m1000001, at RBA 1316, CSN 000086|000000065228677.
    2013-09-29 18:38:08  INFO    OGG-01057  Recovery completed for all targets.
    ggsevt:
    2013-09-29 18:38:08  INFO    OGG-00963  Oracle GoldenGate Manager for MySQL, mgr.prm:  Command received from GGSCI on host localhost (START EXTRACT EXT_M1 ).
    2013-09-29 18:38:08  INFO    OGG-00975  Oracle GoldenGate Manager for MySQL, mgr.prm:  EXTRACT EXT_M1 starting.
    2013-09-29 18:38:08  INFO    OGG-00992  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  EXTRACT EXT_M1 starting.
    2013-09-29 18:38:08  INFO    OGG-03035  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Operating system character set identified as UTF-8. Locale: zh_CN, LC_ALL:.
    2013-09-29 18:38:08  INFO    OGG-00893  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  SQL statement executed successfully.
    2013-09-29 18:38:08  INFO    OGG-01815  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Virtual Memory Facilities for: COM
        anon alloc: mmap(MAP_ANON)  anon free: munmap
        file alloc: mmap(MAP_SHARED)  file free: munmap
        target directories:
        /home/goldengate/dirtmp.
    2013-09-29 18:38:08  INFO    OGG-00993  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  EXTRACT EXT_M1 started.
    2013-09-29 18:38:08  INFO    OGG-01056  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Recovery initialization completed for target file dirdat/m1000000, at RBA 1295, CSN 000086|000000065228677.
    2013-09-29 18:38:08  INFO    OGG-01478  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Output file dirdat/m1 is using format RELEASE 11.2.
    2013-09-29 18:38:08  INFO    OGG-01026  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Rolling over remote file dirdat/m1000000.
    2013-09-29 18:38:08  INFO    OGG-00182  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  VAM API running in single-threaded mode.
    2013-09-29 18:38:08  INFO    OGG-01515  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Positioning to begin time 2013-9-29 06:26:18.
    2013-09-29 18:38:08  INFO    OGG-01516  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Positioned to Log Number: 86
        Record Offset: 65223906, 2013-9-29 06:26:18.
    2013-09-29 18:38:08  INFO    OGG-01517  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Position of first record processed Log Number: 86
        Record Offset: 65223906, 2013-9-29 06:26:18.
    2013-09-29 18:38:08  INFO    OGG-01054  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Recovery completed for target file dirdat/m1000001, at RBA 1316, CSN 000086|000000065228677.
    2013-09-29 18:38:08  INFO    OGG-01057  Oracle GoldenGate Capture for MySQL, ext_m1.prm:  Recovery completed for all targets.
    2013-09-29 18:38:09  INFO    OGG-01054  Oracle GoldenGate Capture for MySQL, dprd_m1.prm:  Recovery completed for target file /home/oracle/goldengate/dirdat/m1000002, at RBA 1435, CSN 000086|000000055512672.
    2013-09-29 18:38:09  INFO    OGG-01057  Oracle GoldenGate Capture for MySQL, dprd_m1.prm:  Recovery completed for all targets.

    GGSCI>>info ext_m1 showch
    EXTRACT    EXT_M1    Last Started 2013-09-29 18:38   Status ABENDED
    Checkpoint Lag       00:11:49 (updated 00:12:05 ago)
    VAM Read Checkpoint  2013-09-29 18:26:18.665841
    Current Checkpoint Detail:
    Read Checkpoint #1
      VAM External Interface
      Startup Checkpoint (starting position in the data source):
        Timestamp: 2013-09-29 18:26:18.665841
      Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
        Timestamp: 2013-09-29 18:26:18.665841
      Current Checkpoint (position of last record read in the data source):
        Timestamp: 2013-09-29 18:26:18.665841
    Write Checkpoint #1
      GGS Log Trail
      Current Checkpoint (current write position):
        Sequence #: 0
        RBA: 917
        Timestamp: 2013-09-29 18:30:55.655570
        Extract Trail: dirdat/m1
    CSN state information:
      CRC: 20-82-1D-34
      CSN: Not available
    Header:
      Version = 2
      Record Source = A
      Type = 8
      # Input Checkpoints = 1
      # Output Checkpoints = 1
    File Information:
      Block Size = 2048
      Max Blocks = 100
      Record Length = 20480
      Current Offset = 0
    Configuration:
      Data Source = 5
      Transaction Integrity = 1
      Task Type = 0
    Status:
      Start Time = 2013-09-29 18:38:08
      Last Update Time = 2013-09-29 18:38:08
      Stop Status = A
      Last Result = 0

  • Regarding Transformation and Data Transfer Process(DTP)

    Dear Gurus
    1) Transformation replaces the transfer rule and update rule.
    2) DTP replaces the info package.
    Hence, there are some advantages to Transformation and DTP.
    I couldn't understand them from help.sap.com.
    Could you please tell me, in simple language, what new features there are in Transformation and DTP that are not possible with transfer rules, update rules and InfoPackages?
    Thanks and Regards
    Raja

    Hi Raja,
    These are the advantages of DTP and Transformation over their predecessors.
    DTP:
    1) Improved transparency of staging processes across data warehouse layers (PSA, DWH layer, ODS layer, architected data marts)
    2) Improved performance: intrinsic parallelism
    3) Separation of the delta mechanism for different data targets: delta capability is controlled by the DTP
    4) Enhanced filtering in the data flow
    5) Repair modes based on temporary buffers (buffers keep the complete set of data)
    Transformation: SAP NetWeaver 2004s significantly improves transformation capabilities, and the improved graphical UI contributes to decreased TCO.
    1) Improved performance, flexibility and usability
    2) Graphical UI
    3) Unification and simplification of transfer and update rules
    4) New rule type: end routine
    5) New rule type: expert routine (pure coding of the transformation)
    6) Unit conversion capabilities during data load (and reporting)
    Assign points if it is helpful.
    Cheers,
    Bharath

  • Data Guard Vs Golden Gate

    Hi Experts,
    I am looking for High Availability and Disaster Recovery architecture for my data layer i.e. Oracle Database 11g R2
    We have two physical locations and the distance between two sites is around 20 miles.
    Site 1:
    We already implemented RAC setup with two node in site 1.
    Site 2:
    We are going to implement a standalone database (not RAC).
    My requirements:
    1. Both databases at Site 1 & Site 2 should be replicas of each other.
    2. Both databases should always be in sync.
    3. Site 1 is active and Site 2 is stand by.
    4. Client applications on Site 1 & Site 2 should always talk to RAC database on Site1.
    5. If RAC at Site 1 goes down completely, then ONLY client apps should connect to the Site 2 database, without human intervention.
    How can I achieve my requirements? I was doing some research and found two solutions: 1. Active Data Guard, 2. Golden Gate.
    Questions:
    1. Do Data Guard and Golden Gate offer the same features?
    2. Which product offers solutions to all my requirements, or do I need to use both?
    3. If Data Guard and Golden Gate are different from each other, then what is the difference between them and what are the overlapping features among them?
    Thanks

    1. Do Data Guard and Golden Gate offer the same features?
    No, there's a simple comparison here:
    http://www.oracle.com/technetwork/database/features/availability/dataguardgoldengate-096557.html
    2. Which product offers solutions to all my requirements, or do I need to use both?
    Data Guard will work and you don't need anything else. I cannot speak to Golden Gate.
    3. If Data Guard and Golden Gate are different from each other, then what is the difference between them and what are the overlapping features among them?
    Again, this document:
    http://www.oracle.com/technetwork/database/features/availability/dataguardgoldengate-096557.html
    1. Both databases at Site 1 & Site 2 should be replicas of each other.
    Data Guard can do this.
    2. Both databases should always be in sync.
    Data Guard can do this.
    3. Site 1 is active and Site 2 is stand by.
    Data Guard can do this.
    4. Client applications on Site 1 & Site 2 should always talk to RAC database on Site1.
    You can set your tnsnames to handle this and more. Using DBMS_SERVICE you can create an alias
    to handle this.
    Ex.
    ernie =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = primary.host)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = standby.host)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = ernie)
        )
      )
    5. If RAC at Site 1 goes down completely, then ONLY client apps should connect to the Site 2 database, without human intervention.
    You can set your tnsnames to handle this and more.
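    For example, a rough sketch of creating and starting such a service with DBMS_SERVICE (my own illustration, untested here; the service name is just the one used in the tnsnames entry above):
    BEGIN
      -- create a service whose network name matches the tnsnames alias, then start it
      DBMS_SERVICE.CREATE_SERVICE(service_name => 'ernie', network_name => 'ernie');
      DBMS_SERVICE.START_SERVICE(service_name => 'ernie');
    END;
    /
    Adding (FAILOVER = ON) and (LOAD_BALANCE = OFF) inside the ADDRESS_LIST makes the connect-time failover explicit, so clients try primary.host first and only fall back to standby.host when it is unreachable.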
    Best Regards
    mseberg

  • Golden Gate extract and replicat processes are not running.

    All,
    I am trying to replicate data between two Oracle databases using Golden Gate.
    I am trying this scenario on a single machine (the two databases and Golden Gate are on the same Windows machine).
    1. I have two databases, PROD and UAT, both running from an 11.2 Oracle home.
    2. Created the ggate user in both databases, and enabled supplemental logging.
    3. Ran the following scripts in both databases.
    SQL> @marker_setup.sql
    SQL> @ddl_setup.sql
    SQL> @role_setup.sql
    SQL> grant GGS_GGSUSER_ROLE to ggate;
    SQL> @ddl_enable.sql
    4. Connected to the source database (PROD) at the ggsci prompt
    GGSCI (home-c07402bbc5) 79> add extract ext1, tranlog, begin now
    add exttrail C:\app\Bhanu\Goldengate\lt, extract ext1
    edit params ext1
    EXTRACT ext1
    USERID ggate@PROD, PASSWORD 123456
    RMTHOST home-c07402bbc5, MGRPORT 7840
    rmttrail C:\app\Bhanu\Goldengate\lt
    ddl include mapped objname bhanu.* // bhanu is a schema in PROD database.
    TABLE bhanu.*;
    5. Connected to the target database (UAT) at the ggsci prompt
    add checkpointtable ggate.checkpoint
    edit params ./GLOBALS
    GGSCHEMA ggate
    CHECKPOINTTABLE ggate.checkpoint
    add replicat rep1, exttrail C:\app\Bhanu\Goldengate\Mt,checkpointtable ggate.checkpoint
    edit params rep1
    replicat rep1
    ASSUMETARGETDEFS
    userid ggate@UAT, password 123456
    discardfile C:\app\Bhanu\Goldengate\rep1_discard.txt, append, megabytes 10
    map bhanu.*, target kiran.*;
    After that I started the extract and replicat using
    start extract ext1, start replicate rep1
    Now the status.
    GGSCI (home-c07402bbc5) 103> info all
    Program Status Group Lag Time Since Chkpt
    MANAGER RUNNING
    EXTRACT STOPPED EXT1 00:00:00 00:11:43
    REPLICAT STOPPED REP1 00:00:00 00:21:16
    Can you please help me figure out what is wrong in my setup and why the extract and replicat processes are not running?
    Edited by: user12178861 on Nov 19, 2011 11:22 AM

    Thanks for your quick reply.
    I have made a few changes, but the extract and replicat processes are still not running.
    A couple of points I would like to share with you regarding my setup:
    1. I am using a single Golden Gate instance to replicate the data between the PROD and UAT databases.
    2. GGSCI (home-c07402bbc5) 1> dblogin userid ggate@PROD,PASSWORD 123456
    Successfully logged into database.
    GGSCI (home-c07402bbc5) 2> info all
    Program Status Group Lag Time Since Chkpt
    MANAGER RUNNING
    EXTRACT STOPPED EXT1 00:00:00 01:23:29
    REPLICAT STOPPED REP1 00:00:00 01:33:02
    GGSCI (home-c07402bbc5) 3> VIEW REPORT EXT1
    ERROR: REPORT file EXT1 does not exist.
    GGSCI (home-c07402bbc5) 4> start er *
    Sending START request to MANAGER ...
    EXTRACT EXT1 starting
    Sending START request to MANAGER ...
    REPLICAT REP1 starting
    GGSCI (home-c07402bbc5) 5> VIEW REPORT EXT1
    ERROR: REPORT file EXT1 does not exist.
    GGSCI (home-c07402bbc5) 6> info all
    Program Status Group Lag Time Since Chkpt
    MANAGER RUNNING
    EXTRACT STOPPED EXT1 00:00:00 01:24:10
    REPLICAT STOPPED REP1 00:00:00 01:33:44
    Target :
    GGSCI (home-c07402bbc5) 1> dblogin ggate@UAT,PASSWORD 123456
    ERROR: Unrecognized parameter (GGATE@UAT), expected USERID.
    GGSCI (home-c07402bbc5) 2> dblogin userid ggate@UAT,PASSWORD 123456
    Successfully logged into database.
    GGSCI (home-c07402bbc5) 5> add replicat rep1, exttrail C:\app\Bhanu\Goldengate/lt,checkpointtable ggate.checkpoint
    ERROR: REPLICAT REP1 already exists.
    GGSCI (home-c07402bbc5) 6> delete replicat rep1
    Deleted REPLICAT REP1.
    GGSCI (home-c07402bbc5) 7> add replicat rep1, exttrail C:\app\Bhanu\Goldengate/lt,checkpointtable ggate.checkpoint
    REPLICAT added.
    GGSCI (home-c07402bbc5) 8> edit params rep1
    GGSCI (home-c07402bbc5) 9> start er *
    Sending START request to MANAGER ...
    EXTRACT EXT1 starting
    Sending START request to MANAGER ...
    REPLICAT REP1 starting
    GGSCI (home-c07402bbc5) 10> info all
    Program Status Group Lag Time Since Chkpt
    MANAGER RUNNING
    EXTRACT STOPPED EXT1 00:00:00 01:29:46
    REPLICAT STOPPED REP1 00:00:00 00:00:48
    3. Is it mandatory to have two Golden Gate instances running, one on each side?
    Thanks for spending your time on this problem.
