I am on the Source side and DDL replication is enabled.
I will provide an initial dump of the TABLE and VIEW objects to the target using a Data Pump export,
and then start the GoldenGate Extract and Pump on the Source.
The Source PRM file has the table list:
EXTRACT ex1test
DDL INCLUDE MAPPED
EXTTRAIL ./trails/l1
TABLE ABCD.T100;
TABLE ABCD.T200;
TABLE ABCD.T300;
Objective: if I change a VIEW definition at the Source, it should be reflected at the Target.
Question: how can I include VIEWs in my PRM file?
If the source has 10 views, only 5 are replicated to the target.
DDL changes should be considered only for the views that exist on the target; the others must be excluded.
Thanks and Regards,
Kurian

Hi Kurian,
Oracle GoldenGate supports VIEW replication in both Classic and Integrated capture modes, but there are some limitations:
1. Capture from a view is supported only when Extract runs in initial-load mode. The data is captured from the source view itself and not from the redo logs (see the sketch after this list).
2. Changes made to the data of the view will not be captured.
3. View replication is only supported for inherently updateable views, in which case the source table and target view structures must be identical.
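A minimal sketch of what an initial-load Extract reading from a view might look like (the user, host, file, and object names are placeholders, and the RMTHOST/RMTFILE values depend on your setup, so treat this as an illustration rather than a tested configuration):
SOURCEISTABLE
USERID ogguser, PASSWORD oggpwd
RMTHOST target_host, MGRPORT 7809
RMTFILE ./dirdat/ia, MAXFILES 999, MEGABYTES 500
TABLE ABCD.V100;
This only moves the data visible through the view at load time; it does not keep the view data in sync afterwards.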
You can exclude DDL objects using ddl_filter.sql.
To learn more about ddl_filter.sql, see the link below:
http://docs.oracle.com/goldengate/1212/gg-winux/GIORA/ddl.htm#GIORA316
Under this, see section:
13.8.1 Filtering with PL/SQL Code
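As an alternative (or a complement) to editing ddl_filter.sql, the DDL parameter in the Extract itself accepts INCLUDE/EXCLUDE clauses with OBJTYPE and OBJNAME, so only the DDL of the views that actually exist on the target is captured. A rough sketch, assuming ABCD.V1 and ABCD.V2 are the replicated views (the view names are placeholders; verify the exact clause syntax against the DDL parameter reference for your GoldenGate release):
EXTRACT ex1test
DDL &
    INCLUDE MAPPED OBJTYPE 'TABLE' &
    INCLUDE OBJTYPE 'VIEW' OBJNAME ABCD.V1 &
    INCLUDE OBJTYPE 'VIEW' OBJNAME ABCD.V2
EXTTRAIL ./trails/l1
TABLE ABCD.T100;
TABLE ABCD.T200;
TABLE ABCD.T300;
Because the other five views are not named in any INCLUDE clause, their DDL is simply not captured, which matches the requirement of excluding DDL for views that are not on the target.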
Regards,
Veera

Similar Messages

  • REG: 4 different folders from the source side and we need to have the BPM t

    Hi All,
    We are using a BPM for a file-to-IDoc scenario. Previously we had one source folder which sent files, and the BPM worked fine. Now we have a requirement where the files come from 4 different folders on the source side, and we need the BPM to run independently for each folder.
    What changes need to be made in the IR and ID for this?
    Thanks & Regards,
    Kiran.

    The file adapter has an Advanced Selection for Source File option in the sender channel configuration, which can be used to pick up files from different folders.

  • System copy: RZ21 MTE has source SID and not the target SID

    A system copy from PRD to the QAS environment has been performed.
    Since then we receive the following messages in the CCMS monitor sets (transaction RZ20):
    Unable to determine the name of MTE class from <SID>\Background\...\SAP_SRT_COLLECTOR_FOR_MONITORIN or the message "Failed to determine MTE information for system QAS"
    I checked, and in transaction RZ21 the MTE class is pointing to the PRD system and not to the QAS system:
    RZ21 -> MTE-specific properties
    Monitoring: Properties Assigned to MTEs
    RZ21 -> Methods assigned to specific MTEs
    Monitoring: Properties and Methods
    Unable to determine the name of MTE class from PRD\Background\...\SAP_SRT_COLLECTOR_FOR_MONITORIN
    I would expect here the value "QAS\Background\...\SAP_SRT_COLLECTOR_FOR_MONITORIN".
    How can I adjust the MTE to check QAS instead of the source system PRD?

    Received the following answer from SAP:
    We found the following errors in the dev_disp developer trace file you attached:
    ERROR => e=13 semop(1048668,(0,-1,4096),1)
    (13: Permission denied) [semux.c 1196]
    ERROR => CCMS: AlCreateAttachShm_AS : lock semaphore return 1
    [alxxappl.c 665]
    ERROR => CCMS: sAlInit: could not create/attach shared memory key 13(rtc = 1) [alxx.c 1187]
    ERROR => AlDpAdmInit: sAlInit rtc = 1 [alxx.c 514]
    ERROR => DpLoopInit: AlDpAdmInit [dpxxdisp.c 2033]
    The shared memory area 13 is the one that is responsible for the alert area.
    An error within that area could cause the problem that you are facing when opening monitoring trees.
    If we look at your OS, we can see that the semaphore for this area is currently being held by the root user:
    [1]ipcs | grep 1048668
    s 1048668 0x00004e46 ra-r root system
    You need to delete this area as the root user using ipcrm and restart the instance, or reboot the OS and start the instance again.
    Performed these actions and the issue was solved.
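    For reference, the removal step from the SAP reply would look roughly like this, using the semaphore ID from the ipcs output above (run as root, then restart the instance):
    ipcrm -s 1048668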

  • DFS Shares Prepended by DFS SID and No Longer Accessible

    Hello,
    Environment: We use two dfs servers which replicate all namespaces and dfs folders between each other. 
    There are two namespaces: Data and Users.  First server (DC12) has referrals enabled the second server (DC07) has referrals disabled. 
    This is globally configured across all namespaces and dfs folders. 
    Both servers are Server 2008 R2 Standard x64 OS.  Other roles on these servers include AD DS, DNS, and DHCP.
    Issue: Within the Data namespace windows explorer location on DC12 (D:\Data\) the folder structure was mysteriously changed for all (9) dfs folders. 
    Dfs folders had “DFS.<DFS SID>” prepended to their folder names and are now no longer accessible. 
    For example: folder previously named “Accounts Receivable” were renamed to “DFS.8c654b7d-0246-4389-ab00-2b1b7027626fAccounts Receivable” within explorer, but are named "Accounts Receivable" within DFS Management. 
    Additionally, there was another, empty dfs folder created in D:\Data\ called “Accounts Receivable”, but when we try to access it from either D:\Data\ or through \\namespace\data we get an error “Location is not available: The network location cannot
    be reached.” 
    Background: Our server switch died and replication between DC12 and DC07 was interrupted for about 90 minutes. 
    We replaced the switch and the environment came back online. 
    When testing to confirm network, resource, and LOB functionality, we discovered this issue and have been thus far unsuccessful in resolving.
    Associated Event Log Found:
    Log Name:      DFS Replication
    Source:        DFSR
    Date:          5/14/2014 5:33:16 PM
    Event ID:      4004
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      DC12
    Description:
    The DFS Replication service stopped replication on the replicated folder at local path D:\Data\Lab Tech.
    Additional Information:
    Error: 87 (The parameter is incorrect.)
    Additional context of the error:  
    Replicated Folder Name: Lab Tech
    Replicated Folder ID: C6475450-CA1B-4AE2-929A-2C67F5EC79BF
    Replication Group Name: schaeffer.com\data\lab tech
    Replication Group ID: 478B691D-415F-4788-8D64-41DEBDDB76FD
    Member ID: 66B7D8A8-6A93-43B7-844D-DF77AB3EF31F
    Troubleshooting Steps Done So Far:
    Restarted DFS Namespace (Dfs), DFS Replication (DFSR), and Netlogon services on both DC12 and DC07
    Renaming folders - folders don't exist error
    Restarted DC12 and DC07
    This issue is ONLY isolated to DC12 and ONLY the Data namespace.  DC07 and Users namespace works just fine.
    We ended up having to disable DC12 as a referral target and in replication so that clients were pointing to DC07. 
    I’m hoping that I won’t have to rebuild the Data namespace because it’s massive. 
    Hoping for some guidance on troubleshooting.  Thanks for your time.

    Hi,
    As currently the DC07 is still working, a new initial replication should help in this situation. Please try the steps below:
    1. Stop the DFSR service on the server that is logging the 4004 event. 
    2. Navigate to the root of the DFSR folder. 
    3. Depending on OS, you may need to take ownership of the "System Volume Information" folder and grant yourself permissions (FULL) on the folder. 
    4. Navigate to <drive>:\System Volume Information\DFSR\ 
    5. Rename Database_GUID folder to Olddatabase_GUID 
    NOTE: For Windows Server 2008 R2 you will need to do this from an elevated command prompt, otherwise any changes made to items in this folder will get reversed by system. You can use command line: ren Database_GUID Olddatabase_GUID 
    6. Start the DFSR service. You should see an Event ID 2102 in the DFSR event log indicating the database is being recreated, and then an Event ID 2106 indicated it has been successfully recreated. 
    You can then monitor progress by checking the state of the replicated folder using WMIC command and backlog using dfsrdiag command: 
    Wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state 
    The "State" values can be: 
    0 = Uninitialized 
    1 = Initialized 
    2 = Initial Sync 
    3 = Auto Recovery 
    4 = Normal 
    5 = In Error 
    And:
    dfsrdiag backlog /SendingMember: /ReceivingMember: /RGName: /RFName: 
    Technet information on DFSR automatic database recovery 
    http://msdn2.microsoft.com/en-us/library/aa379506.aspx 
    If you have any feedback on our support, please send to [email protected]

  • GoldenGate - Oracle to MSSQL - handling DDL replication abends on replicat

    Sorry for the cross-post. I clearly failed to actually read the "there's a GoldenGate forum" sticky...
    Hello -
    Very much a GoldenGate noob here, so please excuse me if I fumble on the terms / explanations - still learning.
    We've recently been lucky enough to have a GoldenGate pump put into our campus environment to support our data warehouse. We don't manage the pump or source systems - just the target.
    Pump: GoldenGate 11.1.1.1
    Source: Oracle 11g
    Target: MSSQL 2008R2
    ~5,000 tables of hugely varying sizes
    The extract is apparently configured to push DDL changes, which is clearly not going to work with a MSSQL 2008 R2 target. We're getting abend messages on the replicat and I'd like to know if we can bypass them on the replicat or need to ask that the extract process be modified to exclude DDL operations.
    The replicat error logs show exception:
    OGG-00453: DDL Replication is not supported for this database
    On the replicat I've tried including:
    DDL EXCLUDE ALL
    DDLERROR DEFAULT DISCARD (or DDLERROR DEFAULT IGNORE - neither let the process continue)
    The replicat just abends with the same OGG-00453 exception.
    My question: Can I gracefully handle these abends on the replicat? Or do I need to request the extract be updated with "DDL EXCLUDE ALL." Ideally, we can handle this on the replicat - I'm trying to be considerate of the main GoldenGate admin's time and also avoid any disruption of the extract.
    Any direction / info / ideas much appreciated.
    Thank you,
    Eric

    924681 wrote:
    My question: Can I gracefully handle these abends on the replicat? Or do I need to request the extract be updated with "DDL EXCLUDE ALL."
    I find it strange that DDLERROR DEFAULT IGNORE does not work. Are you sure you placed it properly, and did you restart the Replicat after making the change?
    Why don't you try specifying the error explicitly, for example:
    DDLERROR <error> IGNORE

  • How to Reconcile Data Between SAP Source Systems and SAP NetWeaver BI

    Hi,
    I just read "How to Reconcile Data Between SAP Source Systems and SAP NetWeaver BI". I'm waiting for additional authorisation on R/3 to carry it out and test this functionality.
    I'd like to ask a question of anyone who has implemented this type of solution. On page 10 it talks about creating a view and then setting up the DataSource. The solution talks about running a query. I suspect that when we run the query, I would run it for only one period (using a variable) to reconcile.
    My question is this: will the DataSource extractor on R/3 select only the period in our variable, or will it do a full selection of the data, which is then passed to BW for filtering?
    Regards

    Dear Mark,
    There are several avenues where you can see and reconcile your data with the source system. You can view the data for a DataSource with transaction RSA3 and compare the values with the actual documents posted in the R/3 system. The respective functional consultant can help you confirm the data.
    On the BW side you can see the data in the PSA and then check the transformations, which subsequently change/update/reject data records based on the selection conditions.
    Hope this helps.
    Kindly assign points if it works.
    Revert back if you need further help/information.

  • Message mapping source side

    Hi Experts,
    Is it possible to copy the message on the source side of a message mapping using the source tab?
    Regards
    Sara

    You can't get the required target XML structure with values until you complete the mapping,
    because if you have any transformation in the mapping, the target values have to be generated according to it.
    So it is better to complete the mapping and then take the XML file.
    Regards
    Seshagiri
    Edited by: N V Seshagiri on Dec 2, 2008 7:05 AM

  • Database restore with same SID and different schema owner

    Dear all,
    I have a quality system on the HP-UX/Oracle platform which has been upgraded from 4.6C to ECC 5.0,
    and the schema owner for the database is still SAPR3.
    I have installed a new Test system with version ECC 5.0 with the same SID, and now I need to refresh its database with data from the QAS system ... the owner on the Test system is SAPC11, since it is a new ECC 5.0 installation where the SID is C11.
    I need to know what steps I need to carry out on the Test system once I restore the data,
    i.e. changing ENV settings from SAPC11 to SAPR3.
    Please note the SID is the same on both hosts, and the owners are different (SAPR3 and SAPC11).
    Regards,
    RR

    Dear all,
    Thanks for your views, but I have already installed ECC 5.0 on the
    target machine with the default schema owner SAPC11 (the SID).
    Is there any other way out ... instead of reinstalling with schema owner SAPR3, or instead of the export/import system copy method, which again is as good as a reinstallation?
    I would like to have your views on the following:
    when I restore the database from source to target,
    all tables on the target machine will have SAPR3 as the owner (which came from the source), but my DB owner on the target machine is SAPC11 (as far as the ENV and all profiles are concerned) .... can't I use the SAPR3 user that was restored with the backup from source to target to start my instance on the target machine... maybe by changing the ENV settings?
    I really appreciate it and thank you in advance for sharing your views.
    Regards,
    RR

  • SRM-PI Integration for Supplier and Company Replication ( EBP-SUS Scenario)

    Dear Experts,
    We are doing a scenario where we are trying to replicate the company and the vendors from EBP to SUS via transactions
    BBP_SP_COMP_INI and BBP_SP_SUPP_INI.
    When we do this in EBP, SLG1 shows the replication completed successfully. I have also checked in EBP via SXI_MONITOR that the corresponding XML messages have been generated successfully and the respective proxies are called.
    But these messages are failing in XI.
    In XI the corresponding service interfaces are used:
    from EBP (outbound): SupplierPortalTradingPartner_CreateOrChange_Out    http://sap.com/xi/EBP
    to SUS (inbound):    SupplierPortalTradingPartner_CreateOrChange_In     http://sap.com/xi/SRM/SupplierEnablement
    interface mapping:   EBP40Partner2SRM40Partner    http://sap.com/xi/SRM/SupplierEnablement/Global/IC
    message mapping:     EBP40Partner2SRM40Partner    http://sap.com/xi/SRM/SupplierEnablement/Global/IC
    data type:           TradingPartnerCreateOrChangeInternalOut    http://sap.com/xi/EBP
    We are doing this scenario with the SAP pre-delivered standard content Service_Procurement_SupplierEnablement.
    In XI the inbound message is failing at the mapping step, showing the error below:
    Cannot produce target element /ns1:SupplierPortalTradingPartner/Method. Check xml instance is valid for source xsd and target-field mapping fulfills requirements of target xsd
    I have also tried to test the mapping in the test tab with the same payload, but it shows the same error; even if I try it with a blank payload the error remains the same.
    Below is the error I got:
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Request Message Mapping
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>Application</SAP:Category>
      <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
      <SAP:P1>com/sap/xi/tf/_EBP40Partner2SRM40Partner_</SAP:P1>
      <SAP:P2>com.sap.aii.utilxi.misc.api.BaseRuntimeException</SAP:P2>
      <SAP:P3>RuntimeException in Message-Mapping transformatio~</SAP:P3>
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>com.sap.aii.utilxi.misc.api.BaseRuntimeException thrown during application mapping com/sap/xi/tf/_EBP40Partner2SRM40Partner_: RuntimeException in Message-Mapping transformatio~</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Trace -
    <Trace level="1" type="T">com.sap.aii.utilxi.misc.api.BaseRuntimeException: RuntimeException in Message-Mapping transformation: Cannot produce target element /ns1:SupplierPortalTradingPartner/Method. Check xml instance is valid for source xsd and target-field mapping fulfills requirements of target xsd at com.sap.aii.mappingtool.tf3.AMappingProgram.start(AMappingProgram.java:420) at com.sap.aii.mappingtool.tf3.Transformer.start(Transformer.java:142) at com.sap.aii.mappingtool.tf3.AMappingProgram.execute(AMappingProgram.java:118) at com.sap.aii.ibrun.server.mapping.JavaMapping.executeStep
    I have gone through the thread "SRM-SUS Scenerio Replicating company code and suppliers to SUS via XI",
    but I think we cannot change these interfaces as mentioned there, since we are using standard SAP content.
    Any help would be very much appreciated.
    Thanks
    SUgata B Majumder

    Hi All,
    I have resolved the issue.
    It was a cache problem. I changed the map (just edited the description) and activated it, even though it was a standard SAP map. I made the objects modifiable from the SWCV, and all the messages are now processed.
    Thanks

  • Cloning use the different gcc version between source system and target sys

    Hi All,
    Our system has the application tier and the database tier split across two servers.
    We need to run a cloning, but I found that the gcc versions on the application tier differ between the source system and the target system.
    The source application tier server is RedHat Linux ES3, gcc version is 3.2.3
    The target application tier server is RedHat Linux ES3, gcc version is 2.9.6
    There is the same gcc version on database tier on source system and target system, they are gcc 2.9.6.
    My question: can I use different gcc versions between the source system and target system when I run an ERP cloning?
    Thanks & Regards
    Owen

    Not necessarily, you might get some errors if the version is higher and it is not supported by the OS. An example is Note: 392311.1 - usr/lib/gcc/i386-redhat-linux/3.4.5/libgcc_s.so: undefined reference to 'dl_iterate_phdr@GLIBC_2.2.4'
    To be on the safe side, make sure you have the same version (unless you want to give it a try).

  • Rerunning script (DML and DDL) without ora-00001 and ora-00955 errors

    What is the best way to write a DML and DDL script so that it can be run multiple times without these ORA errors:
    ORA-00955: name is already used by an existing object
    ORA-00001: unique constraint (JILL.SYS_C00160247) violated
    I have just joined a product development company using SQL Server as their primary database. They have just completed a port to Oracle.
    Their product release upgrades (given to clients) include SQL scripts with database changes (structure and data). They require that the client be able to rerun the scripts more than once with no errors. In SQL Server, they accomplish it this way.
    For DDL:
    IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[MyTab]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
    BEGIN
        CREATE TABLE [dbo].[MyTab] (
            [ID] int IDENTITY(1,1),
            [InvID] uniqueidentifier NOT NULL
        )
    END
    For DML:
    IF NOT EXISTS (SELECT 1 FROM [dbo].[mytab] WHERE [Name] = 'Smith' AND [ID] = 3)
    BEGIN
        INSERT INTO [dbo].[mytab] ([ID], [Name])
        VALUES (3, 'Smith')
    END
    I am tasked with duplicating this logic on the Oracle side. The only way I can think of so far is using PL/SQL and checking for existence before every INSERT and CREATE statement. The other options I thought of cannot be used in this case:
    - "whenever sqlerror continue" - gives the same response for all errors. True errors should stop the code, so this is too risky.
    - "log errors into ... reject limit unlimited" on the insert - I thought this was my best solution until I found out that it doesn't support lobs.
    Do you know of any more elegant (and more efficient) solution other than plsql cursors to check for existence before running each insert/create?
    Any suggestions would be greatly appreciated.

    SELECT table_name FROM user_tables will tell you whether the table exists or not.
    You can also use ALL_TABLES/DBA_TABLES:
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10755/statviews_1190.htm#i1592091
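    For what it's worth, a minimal sketch of the usual Oracle pattern (table and column names here are just illustrative): wrap the CREATE in dynamic SQL and swallow only ORA-00955, and use MERGE for the insert so reruns do not raise ORA-00001.
    -- Re-runnable CREATE TABLE: ignore only "name is already used by an existing object"
    BEGIN
      EXECUTE IMMEDIATE 'CREATE TABLE mytab (id NUMBER PRIMARY KEY, name VARCHAR2(100))';
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE != -955 THEN RAISE; END IF;
    END;
    /
    -- Re-runnable insert: MERGE only inserts when the key is not already present
    MERGE INTO mytab t
    USING (SELECT 3 AS id, 'Smith' AS name FROM dual) s
    ON (t.id = s.id)
    WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);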

  • Differenece between oracle streams and oracle Replication

    Hi all,
    Can anyone tell me the difference between Oracle Streams and Oracle Replication?
    Regards

    Refer to the thread: Difference Between Oracle Replication & Oracle Streams.
    Oracle Replication is designed to replicate exact copies of data sets to various databases. Oracle Streams is designed to propagate individual data changes to various databases. Thus, Replication is probably easier if the end goal is to maintain identical copies of data, where Streams is easier if the end goal is to allow different databases to react differently to data changes.
    Oracle Replication is a significantly more mature product-- it is quite usable with older databases. Oracle Streams is a much newer technology and is only usable among different 9i databases. Most competent Oracle developers and DBA's are familiar with Oracle Replication, while many fewer have any real experience with Streams.
    The Streams architecture strikes me as a lot more flexible than Oracle Replication's. This leads me to suspect that Oracle will be pushing Streams over Replication in subsequent releases, so I would expect new features in Streams, like DDL changes, that aren't in Oracle Replication. Realistically, though, I don't expect any serious movement away from Replication for at least a few releases, so I wouldn't tend to be overly concerned on this front.
    answered by Justin
    Distributed Database Consulting, Inc.
    reference from forum thread:
    Difference Between Oracle Replication & Oracle Streams.

  • DDL Replication is not supported in DB2

    I found this error after I created a DDL process on DB2.
    Have you had the same problem, and how do you fix this issue?
    ERROR OGG-00453  DDL Replication is not supported for this database.
    Thanks, Riyas

    Hi Riyas,
    I just found a document which states that DDL replication is not supported for the DB2 database:
    Does Oracle GoldenGate(OGG) support DDL Replication for DB2 (Doc ID 1303729.1)
    Thanks,
    Kamal.

  • What is the data source name and the data target name for the table COSP

    Hi,
    Actually I am new to FICO/BW, and I have to create a report based on the actual and budget values and the variance between them.
    On the R/3 side the table they are using is COSP, so please let me know the DataSource name and the cube or ODS name on the BW side, and in which fields I can get the BUDGET value for the report.
    Here we are using all the Business Content extractors and BI Content cubes and ODSs.
    Please reply soon, as this is a very urgent issue.
    Thanks,
    ashok

    answered

  • Is there any restriction on DDL replication with 11.2.0.1 GG version on HP UNix

    Is there any restriction on DDL replication with 11.2.0.1 GG version on HP Unix

    Here are a few:
    1. ALTER TABLE ... MOVE TABLESPACE
    2. DDL on nested tables
    3. ALTER DATABASE and ALTER SYSTEM (these are not considered to be DDL)
    4. DDL on a standby database
    In addition, classic capture mode does not support DDL that involves password-based column encryption, such as:
    1. CREATE TABLE t1 (a number, b varchar2(32) ENCRYPT IDENTIFIED BY my_password);
    2. ALTER TABLE t1 ADD COLUMN c varchar2(64) ENCRYPT IDENTIFIED BY my_password
    I would request you to check the documentation; you can find it here: http://www.oracle.com/technetwork/middleware/goldengate/documentation/index.html
    -Onkar
