Schema replication in 5.2

Hi,
I have set up multi-master replication between two iDS 5.2 servers, but schema replication is not working, even though it should be automatic.
I can change the 99user.ldif files manually, but I want to make sure that schema replication is enabled and stays enabled afterwards (and never stops on its own).
Any hints?
Thanks

Thank you,
you understood my problem. I ran the Perl script to push the schema, but it didn't work the first time; I had to remove the reference to the database in the empty schema file and push again.
Now I'm having another problem: I can see the schema modifications only after restarting the server.
Any help with that?

Similar Messages

  • Schema replication method - urgent help pls

    Hi,
    I have a schema in database A with 40 tables.
    I want to replicate these tables into another replication schema in database B.
    The materialized views will be read-only (no DML on rep_schema).
    My question is: what are the steps to do this by creating a master group for those tables and
    refreshing that group (rather than setting an interval for each materialized view, which would make all those jobs difficult to manage later)?
    I read the replication API documentation, but its example creates the replication
    with the updatable option on the materialized views.
    Do I have to follow the documentation exactly as is,
    or can I bypass a few steps?
    Has anyone tried this method and can give me a guide on how to do it?
    Regards, Aris


  • Config Streams using Handler on Schema Replication

    Hi,
    we need to replicate a schema using Oracle Streams between two databases (source - target).
    We have already tested the capture - propagation - apply process for the schema,
    but we need to use handlers in this process to:
    - capture LCRs and save them into another table (monitor_lcrs) -> this is our table
    - transform tables - in the process of replicating the schema we need to denormalize many tables.
    Is it possible to use handlers to transform tables in schema replication?
    Please help.
    Regards,
    MQ

    Hi,
    Captured LCRs are placed in queues; do you want them to be placed directly into a normal table? If so, that is not possible.
    When you say "transform tables", what exactly do you mean by that?
    Thanks,
    Lalitha
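
    That said, Streams does let you see each LCR during apply via a custom procedure DML handler, which is one way to both log changes into your own table and transform them before they are applied. Below is a rough sketch only, assuming you have created the monitor_lcrs table yourself; the strmadmin owner and APP.SOME_TABLE are placeholders:

    -- Hedged sketch of a procedure DML handler: it logs basic LCR details
    -- into a user-created monitor_lcrs table, then applies the change as-is.
    CREATE OR REPLACE PROCEDURE strmadmin.monitor_dml_handler(in_any IN ANYDATA)
    IS
      lcr SYS.LCR$_ROW_RECORD;
      rc  PLS_INTEGER;
    BEGIN
      rc := in_any.GETOBJECT(lcr);  -- extract the row LCR from the ANYDATA payload
      INSERT INTO strmadmin.monitor_lcrs (logged_at, object_owner, object_name, command_type)
      VALUES (SYSTIMESTAMP, lcr.GET_OBJECT_OWNER, lcr.GET_OBJECT_NAME, lcr.GET_COMMAND_TYPE);
      lcr.EXECUTE(TRUE);            -- apply the change; transform the LCR first if needed
    END;
    /

    -- Register the handler for one table and operation; repeat per operation.
    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'APP.SOME_TABLE',
        object_type    => 'TABLE',
        operation_name => 'INSERT',
        error_handler  => FALSE,
        user_procedure => 'strmadmin.monitor_dml_handler');
    END;
    /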

  • DBA_CAPTURE_PREPARED_TABLES is not showing two tables in schema replication

    Hi,
    Using Oracle 10.2.0.3.0 on Linux 64-bit.
    I have set up one-way Streams replication (schema replication) from the production server to the destination server. When I query the DBA_CAPTURE_PREPARED_TABLES view, it does not show two of the tables, even though one of them has the same number of records as production. Do I need to do anything, given that replication is running fine?
    Or do I not need to worry, because I am doing schema-level replication and the following view shows the name of the schema:
    SELECT * FROM DBA_CAPTURE_PREPARED_SCHEMAS;
    Regards,

    Hello,
    If replication is working fine on these 2 tables, you don't need to worry about them. Usually the DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION API dumps the necessary Streams dictionary information and enables supplemental logging. If your tables are getting replicated properly, that means supplemental logging is already done and the Streams dictionary information is already available on the apply site.
    Still, if you want, you can run DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION again for these 2 tables on the capture site so that the view gets populated, but it doesn't add any value.
    Thanks,
    Rijesh
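
    If you do want the view populated, something like the following on the capture site should do it; the table names are placeholders for your two missing tables:

    -- Re-run instantiation preparation for the two tables (harmless if already
    -- done), then check the view again.
    BEGIN
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'APP.TABLE_ONE');
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'APP.TABLE_TWO');
    END;
    /
    SELECT table_owner, table_name, timestamp
    FROM   dba_capture_prepared_tables
    WHERE  table_name IN ('TABLE_ONE', 'TABLE_TWO');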

  • Schema replication in Sun DS 5.1 and 5.2

    Hi LDAPers,
    has anybody experienced a similar problem?
    Excerpt from the errors log on the supplier (master) server:
    NSMMReplicationPlugin - Schema replication update failed: Type or value exists
    NSMMReplicationPlugin - Warning: unable to replicate schema to host yy.yy.yy.yy, port 389. Continuing with replication session.
    Excerpt from the access log on the DS 5.2 consumer (client) server:
    conn=1844 op=-1 msgId=-1 - fd=31 slot=31 LDAP connection from xx.xx.xx.xx to yy.yy.yy.yy
    conn=1844 op=0 msgId=1 - BIND dn="cn=Replication Manager,cn=replication,cn=config" method=128 version=3
    conn=1844 op=0 msgId=1 - RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=replication manager,cn=replication,cn=config"
    conn=1844 op=1 msgId=2 - SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
    conn=1844 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=0
    conn=1844 op=2 msgId=3 - EXT oid="2.16.840.1.113730.3.5.3"
    conn=1844 op=2 msgId=3 - RESULT err=0 tag=120 nentries=0 etime=0
    conn=1844 op=3 msgId=4 - SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    conn=1844 op=3 msgId=4 - RESULT err=0 tag=101 nentries=1 etime=0
    conn=1844 op=4 msgId=5 - MOD dn="cn=schema"
    conn=1844 op=4 msgId=5 - RESULT err=20 tag=103 nentries=0 etime=0
    conn=1844 op=5 msgId=6 - EXT oid="2.16.840.1.113730.3.5.5"
    conn=1844 op=5 msgId=6 - RESULT err=0 tag=120 nentries=0 etime=0
    conn=1844 op=6 msgId=7 - UNBIND
    conn=1844 op=6 msgId=-1 - closing - U1
    conn=1844 op=-1 msgId=-1 - closed.
    More configuration details:
    hosts OS - Solaris 9
    Supplier - Sun DS 5.1 Service Pack 4
    Consumer - Sun DS 5.1 Service Pack 4, Sun DS 5.2_Patch_4
    How it happens:
    The consumer was already configured and used for other data located in a separate database and suffix. A replication manager, a new suffix, and a database for the replica were created on the consumer. A replication agreement was created on the supplier with "Always keep in sync" set.
    Immediately after the agreement was confirmed, a schema update was attempted and failed with the warning mentioned above.
    Yes, it is just a warning, but it can result in some problems.
    Changes in cn=schema are not stored in 99user.ldif on the consumer. The consumer DS keeps the updated changes in memory and uses them if needed. But if I decide to disable replication and restart the DS, the DS reads the user-defined schema from 99user.ldif, and the schema entries obtained earlier from the supplier are missing, although the replicated data is still available.
    My question and cry for help:
    What can cause the schema update to fail, and how do I eliminate it?
    Or am I trying to do something impossible?
    I have already gotten suggestions like "Forget about DS 5.1" and "Supplier 5.1 and consumer 5.2? DO NOT DO THAT!"
    Note: the same happened with a DS 5.1 consumer.
    I will appreciate any sort of reaction.
    Thanks

    This is a conflict in the OID or name of either an object class or an attribute.
    No attribute may have an OID or name identical to that of another defined attribute
    (if both the OID and the name are identical between two definitions, they are the same definition).
    The error log on the consumer should have the OID or the name of the schema element that is already defined.
    Schema replication between 5.2 and 5.1 servers is explained in the Administration Guide of Directory Server 5.2 and may require some settings (I don't recall which, but it's covered in the documentation).
    Regards,
    Ludovic.

  • Check list for upgrading from 10g to 11g when there is a schema replication

    Hi,
    We are looking to upgrade one of our production databases from 10g to 11g.
    Currently this database has one schema that is replicated to 2 other databases using Oracle Streams.
    The replication excludes DDL and also excludes several other application tables.
    What should I do before and after the upgrade?
    Should we remove the Streams configuration altogether and rebuild it after the upgrade?
    I was hoping that we could first upgrade the two target databases to 11g and then the source database, without impacting our Streams configuration at all.
    Is that possible?
    Is there any documentation on the subject?
    Thanks in advance,
    Orna

    Please post the OS versions of the source and target servers, along with the exact (4-digit) versions of "10g" and "11g". I do not have any experience with Streams, but the 11gR2 Upgrade Guide suggests that you upgrade the downstream target databases first, before upgrading the source - http://download.oracle.com/docs/cd/E11882_01/server.112/e10819/upgrade.htm#CIAFJJFC
    HTH
    Srini

  • Golden Gate Schema Replication

    Guys - my requirement is fairly simple. I have two schemas, GG [source] and GGR [target], on the same host, and one table called GG.SYNC_TABLE. I am having difficulty pushing data from GG to GGR.
    Below are the Extract and Replicat parameter files:
    EXTRACT EXT_AP1
    SETENV (ORACLE_SID=ERPA4)
    RMTHOST mcdeagaix825, mgrport 7809
    USERID GG@ERPA4, PASSWORD goldengate1
    DISCARDFILE ./dirrpt/ext_ap1_discard.rpt, append, megabytes 50
    RMTTRAIL ./dirdata/sa
    TABLE GG.AP_AE_HEADERS_ALL;
    TABLE GG.AP_AE_LINES_ALL;
    TABLE GG.AP_BANK_ACCOUNTS_ALL;
    TABLE GG.AP_BANK_BRANCHES;
    TABLE GG.AP_CARDS_ALL;
    TABLE GG.AP_CHECKS_ALL;
    TABLE GG.AP_CREDIT_CARD_TRXNS_ALL;
    TABLE GG.AP_EXPENSE_REPORTS_ALL;
    TABLE GG.AP_EXPENSE_REPORT_HEADERS_ALL;
    TABLE GG.AP_EXPENSE_REPORT_LINES_ALL;
    TABLE GG.AP_EXPENSE_REPORT_PARAMS_ALL;
    TABLE GG.AP_EXP_REPORT_DISTS_ALL;
    TABLE GG.AP_HOLDS_ALL;
    TABLE GG.AP_HOLD_CODES;
    TABLE GG.AP_INVOICES_ALL;
    TABLE GG.AP_INVOICE_DISTRIBUTIONS_ALL;
    TABLE GG.AP_INVOICE_LINES_ALL;
    TABLE GG.AP_INVOICE_PAYMENTS_ALL;
    TABLE GG.AP_NOTES;
    TABLE GG.AP_PAYMENT_HISTORY_ALL;
    TABLE GG.AP_PAYMENT_HIST_DISTS;
    TABLE GG.AP_PAYMENT_SCHEDULES_ALL;
    TABLE GG.AP_POL_VIOLATIONS_ALL;
    TABLE GG.AP_SELF_ASSESSED_TAX_DIST_ALL;
    TABLE GG.AP_SUPPLIERS;
    TABLE GG.AP_SUPPLIER_SITES_ALL;
    TABLE GG.AP_SYSTEM_PARAMETERS_ALL;
    TABLE GG.AP_TERMS_LINES;
    TABLE GG.AP_TOLERANCE_TEMPLATES;
    TABLE GG.SYNC_TABLE;
    REPLICAT REP_AP1
    SETENV (ORACLE_SID=ERPA4)
    USERID GG@ERPA4, PASSWORD goldengate1
    ASSUMETARGETDEFS
    REPORTCOUNT EVERY 1 MINUTES, RATE
    DISCARDFILE ./dirrpt/rep_ap1.dsc, PURGE
    MAP GG.AP_AE_HEADERS_ALL, TARGET GGR.AP_AE_HEADERS_ALL;
    MAP GG.AP_AE_LINES_ALL, TARGET GGR.AP_AE_LINES_ALL;
    MAP GG.AP_BANK_ACCOUNTS_ALL, TARGET GGR.AP_BANK_ACCOUNTS_ALL;
    MAP GG.AP_BANK_BRANCHES, TARGET GGR.AP_BANK_BRANCHES;
    MAP GG.AP_CARDS_ALL, TARGET GGR.AP_CARDS_ALL;
    MAP GG.AP_CHECKS_ALL, TARGET GGR.AP_CHECKS_ALL;
    MAP GG.AP_CREDIT_CARD_TRXNS_ALL, TARGET GGR.AP_CREDIT_CARD_TRXNS_ALL;
    MAP GG.AP_EXPENSE_REPORTS_ALL, TARGET GGR.AP_EXPENSE_REPORTS_ALL;
    MAP GG.AP_EXPENSE_REPORT_HEADERS_ALL, TARGET GGR.AP_EXPENSE_REPORT_HEADERS_ALL;
    MAP GG.AP_EXPENSE_REPORT_LINES_ALL, TARGET GGR.AP_EXPENSE_REPORT_LINES_ALL;
    MAP GG.AP_EXPENSE_REPORT_PARAMS_ALL, TARGET GGR.AP_EXPENSE_REPORT_PARAMS_ALL;
    MAP GG.AP_EXP_REPORT_DISTS_ALL, TARGET GGR.AP_EXP_REPORT_DISTS_ALL;
    MAP GG.AP_HOLDS_ALL, TARGET GGR.AP_HOLDS_ALL;
    MAP GG.AP_HOLD_CODES, TARGET GGR.AP_HOLD_CODES;
    MAP GG.AP_INVOICES_ALL, TARGET GGR.AP_INVOICES_ALL;
    MAP GG.AP_INVOICE_DISTRIBUTIONS_ALL, TARGET GGR.AP_INVOICE_DISTRIBUTIONS_ALL;
    MAP GG.AP_INVOICE_LINES_ALL, TARGET GGR.AP_INVOICE_LINES_ALL;
    MAP GG.AP_INVOICE_PAYMENTS_ALL, TARGET GGR.AP_INVOICE_PAYMENTS_ALL;
    MAP GG.AP_NOTES, TARGET GGR.AP_NOTES;
    MAP GG.AP_PAYMENT_HISTORY_ALL, TARGET GGR.AP_PAYMENT_HISTORY_ALL;
    MAP GG.AP_PAYMENT_HIST_DISTS, TARGET GGR.AP_PAYMENT_HIST_DISTS;
    MAP GG.AP_PAYMENT_SCHEDULES_ALL, TARGET GGR.AP_PAYMENT_SCHEDULES_ALL;
    MAP GG.AP_POL_VIOLATIONS_ALL, TARGET GGR.AP_POL_VIOLATIONS_ALL;
    MAP GG.AP_SELF_ASSESSED_TAX_DIST_ALL, TARGET GGR.AP_SELF_ASSESSED_TAX_DIST_ALL;
    MAP GG.AP_SUPPLIERS, TARGET GGR.AP_SUPPLIERS;
    MAP GG.AP_SUPPLIER_SITES_ALL, TARGET GGR.AP_SUPPLIER_SITES_ALL;
    MAP GG.AP_SYSTEM_PARAMETERS_ALL, TARGET GGR.AP_SYSTEM_PARAMETERS_ALL;
    MAP GG.AP_TERMS_LINES, TARGET GGR.AP_TERMS_LINES;
    MAP GG.AP_TOLERANCE_TEMPLATES, TARGET GGR.AP_TOLERANCE_TEMPLATES;
    MAP GG.SYNC_TABLE, TARGET GGR.SYNC_TABLE;
    The Extract, Replicat and Manager processes are running fine, but a commit is not propagating the data across to the GGR schema. Supplemental logging is enabled. Archiving is not [I hope it's not required]. What do you think I am missing here?

    No point in doing this if you are not running in archived log mode. If you get behind for whatever reason and GoldenGate has to look further into the past than what is currently in your online redo logs, game over.
    Have you tried the tutorial at the Oracle Learning Library?
    Another thing - why would you use your GoldenGate user as part of your schema/data replication? That is only asking for trouble and unnecessary complexity. The GoldenGate schema is used to drive the replication between other schemas, not to be replicated itself.
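
    To check the current mode and, if needed, enable ARCHIVELOG mode - a sketch; this requires SYSDBA and a bounce through the mount state:

    -- Check the current mode
    SELECT log_mode FROM v$database;

    -- Enable ARCHIVELOG mode (database must be mounted, not open)
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;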

  • Schema to Schema replication

    Hi,
    We have a requirement where we have 2 identical schemas with some 450 tables (on the same database). Whenever there is an insert/update/delete on schema1.table1, the same change should be applied to schema2.table1 (same database). The difference between the 2 schemas is that there is one extra column in all tables of schema1 which should not be present in the schema2 tables.
    One thing we could do is write a trigger for each of the 450 tables, but I don't think that would be the best solution. Can someone please suggest a better way to implement this?
    Thanks and regards!

    bLaK wrote:
    We have a requirement where we have 2 identical schemas with some 450 tables (on the same database). Whenever there is an insert/update/delete on schema1.table1, the same change should be applied to schema2.table1 (same database). The difference between the 2 schemas is that there is one extra column in all tables of schema1 which should not be present in the schema2 tables. One thing we could do is write a trigger for each of the 450 tables, but I don't think that would be the best solution. Can someone please suggest a better way to implement this?
    CREATE VIEW - leaving out the one column.
    Repeat 450 times, and no data needs to be duplicated.
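
    As a sketch of the idea - table and column names here are made up; the view simply selects every column except schema1's extra one:

    -- schema2's "table" is just a view over schema1's table, minus the extra column.
    CREATE OR REPLACE VIEW schema2.table1 AS
    SELECT col_a, col_b, col_c   -- list every column except the extra one
    FROM   schema1.table1;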

  • Is schema replicated in multimaster replication scenario?

    Hello,
    I am asking a basic question. I have set up multi-master replication (iDS 5.1 SP2) on database c=fi, and this works well.
    However, I know that in the future more object classes and attributes will be added to the schema.
    My simple question is: do I have to amend the 99user.ldif files on BOTH iDS 5.1 servers, or can I rely on the schema being replicated?
    Thanks.

    Schema is supposed to replicate in all replication scenarios.
    The documentation states:
    "The best way to guarantee schema consistency is to make schema modifications on a single master server, even in the case of a multi-master replication environment.
    Schema replication happens automatically. If replication has been configured between a supplier and a consumer, schema replication will happen by default."
    However, a few people in this forum have complained that this replication doesn't always work - so be sure to check that the schema does in fact replicate.
    For more information on schema replication, check out this link (it is at the bottom of the page):
    http://docs.sun.com/source/816-5609-10/rep.htm#1063947
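
    One quick check is to compare the nsSchemaCSN value under cn=schema on both servers; if schema replication is working, the values should converge. A sketch - the host names and bind credentials below are placeholders:

    ldapsearch -h master1.example.com -p 389 -D "cn=Directory Manager" -w secret \
        -b "cn=schema" -s base "(objectclass=*)" nsSchemaCSN
    ldapsearch -h master2.example.com -p 389 -D "cn=Directory Manager" -w secret \
        -b "cn=schema" -s base "(objectclass=*)" nsSchemaCSN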

  • Replication of user Id and Passwords Across Servers

    I am in the process of replicating our production application server to a disaster recovery site server.
    I am using GoldenGate software as the tool for bi-directional replication. Which Application Express tables do I need to replicate to support syncing of my user IDs and passwords within my APEX environment?
    Has anyone done this? Are there any known issues with keeping it in sync?

    Due to unsupported data types (CLOBs, BLOBs) and tables that have no primary keys or unique indexes, a blanket schema replication may not work. That is why I am looking for the exact tables that I need to replicate.
    I am sure that someone has found a solution that works. This would be a stumbling block for anyone trying to have an active/active DR site.

  • Question on best practice to extend schema

    We have a requirement to extend the directory schema. I wanted to know the standard practice:
    1) Is it good practice to manually create an LDIF file so that it can be run on every deployment machine at every stage?
    2) Or should the schema be created through the console the first time, and the LDIF file from this machine copied over to the schema directory of the target server?
    3) Should the custom schema be appended to the 99user.ldif file, or is it better to keep it in a separate LDIF file?
    Any info would be helpful.
    Thanks
    Mamta

    I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g. CVS). The only problem with this is that schema replication will not work - you have to manually copy the file to every server instance.
    If you create the schema through the console, schema replication will occur - schema replication only happens when schema is added over LDAP. In that case the schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
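
    For illustration, a minimal 60yourname.ldif might look like the following. The attribute name, class name, and OIDs are placeholders (1.3.6.1.4.1.32473 is the documentation example arc; substitute an OID arc you actually own):

    dn: cn=schema
    attributeTypes: ( 1.3.6.1.4.1.32473.1.1 NAME 'myDepartmentCode'
      DESC 'Internal department code' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
      SINGLE-VALUE X-ORIGIN 'user defined' )
    objectClasses: ( 1.3.6.1.4.1.32473.2.1 NAME 'myCompanyPerson'
      DESC 'Auxiliary class carrying company attributes' SUP top AUXILIARY
      MAY ( myDepartmentCode ) X-ORIGIN 'user defined' )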

  • Replication Failure - Master - Consumer

    We have a 5.2 Directory Server running as master and a 5.2 Directory Server as a Consumer.
    When setting up replication I followed the steps in the documentation, and everything worked fine up until I attempted to initialize the replica - it fails every time with the following errors:
    On the Consumer in the error log it reports:
    [10/Jun/2004:11:48:14 -0700] - ERROR<8303> - Replication - conn=10 op=3 msgId=4 - Schema replication error [C] Failed with error code 20
    On the Master in the error log it reports:
    [10/Jun/2004:11:51:08 -0700] - WARNING<10303> - Repl. Transport - conn=-1 op=-1 msgId=-1 - [S] Unable to push the schema on the consumer withresponse: Unknown rc (20)
    [10/Jun/2004:11:51:08 -0700] - WARNING<10247> - Total Protocol - conn=-1 op=-1 msgId=-1 - Unable to replicate schema to host ldapconsumer1.xxx.xxx, port 3890. Continuing with replication session.
    [10/Jun/2004:11:51:13 -0700] - WARNING<10303> - Repl. Transport - conn=-1 op=-1 msgId=-1 - [S] Unable to push the schema on the consumer withresponse: Unknown rc (20)
    I found 8303 in the Directory Server error code list and it just says:
    - Failed with error code error
    - Schema replication failed locally on the consumer.
    - Check error code and contact Sun ONE Technical Support.
    Can anyone help?
    Thanks,
    Dan

    I had a similar problem when I first set up replication between two DS 5.2 systems. The master was installed via tarball and the consumer was installed via the Sun packages. Upon investigation, I found that the schema files were different between the two systems. I manually copied the schema files from the master to the replica; the errors went away and replication started working.
    Is this a problem that should be reported to Sun? I don't know. I found a quick solution and kept on going.
    HTH,
    Roger S.
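
    If you suspect the same thing, a quick way to compare the two installations is to diff the schema directories directly. The paths below depend entirely on your install layout and are only an example:

    # Compare the schema files shipped with each instance
    diff -r /opt/master/slapd-master/config/schema \
            /opt/consumer/slapd-consumer/config/schema
    # With both servers stopped, copy over the differing files, then restart
    scp /opt/master/slapd-master/config/schema/*.ldif \
        consumer:/opt/consumer/slapd-consumer/config/schema/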

  • AM7.1 and DS with custom DIT/schema

    I am installing AM 7.1 in realm mode, using a remote DS 6. I use the "configure now" option and choose "no" when asked if the directory is provisioned with user data. Everything is OK, but I need to configure AM for my directory.
    My DIT is unusual because all my users are at the root, o=company. I also use a custom attribute for the RDN in user entries. How do I get AM to look at the root for people entries? By default it seems to use ou=people. And how do I get AM to see and use a custom DN for people entries?


  • Schema replication

    Hi all,
    what is the best way to configure schema replication between 2 databases, keeping in mind that the source database is not in archivelog mode? Both databases are 10gR2, and I need to configure one-way replication. Please, any help?

    Local Capture in Streams can capture from the Online Redo Logs.
    Downstream capture reads from the Archived Redo Logs.
    Another option is Advanced Replication. http://download.oracle.com/docs/cd/B19306_01/server.102/b14226/toc.htm
    Hemant K Chitale
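
    For the Streams route, the heart of schema-level capture is a schema rule on the capture side. A hedged sketch - the queue, process, and schema names are placeholders, and note that a capture process generally requires the source database to run in ARCHIVELOG mode even when it reads the online redo logs:

    BEGIN
      -- Create the Streams queue that the capture process will stage LCRs into
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.streams_queue_table',
        queue_name  => 'strmadmin.streams_queue');

      -- Capture all DML for one schema; DDL capture is switched off here
      DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
        schema_name  => 'APP_SCHEMA',
        streams_type => 'capture',
        streams_name => 'capture_app',
        queue_name   => 'strmadmin.streams_queue',
        include_dml  => TRUE,
        include_ddl  => FALSE);
    END;
    /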

  • OGG app

    Hi,
    Can you please clarify the queries below?
    1. Where should we put a filter condition in GoldenGate - in the extract process or in the data pump parameter files? Which one gives better performance?
    2. I want to add one more schema to the replication between two DBs. Currently my GoldenGate setup runs replication for a single schema, with a single extract and three pumps. How do I add the new schema to my GoldenGate setup without impacting OLTP?

    "Like a primary Extract group, a data pump can be configured for either online or batch processing. It can perform data filtering, mapping, and conversion, or it can be configured in pass-through mode, where data is passively transferred as-is, without manipulation."
    When are you using the data pump? If the filtering is a one-time shot, then put the load on the data pump. If it is ongoing, you'd have to do it in the extract anyway. Or, if it is a large amount, then, quoting the docs again:
    "Filtering and conversion: Data filtering and data conversion both add overhead, and these activities are sometimes prone to configuration errors. If Oracle GoldenGate must perform a large amount of filtering and conversion, consider using one or more data pumps to handle this work. You can use Replicat for this purpose, but you would be sending more data across the network that way, as it will be unfiltered. You can split filtering and conversion between the two systems by dividing it between the data pump and Replicat."
    "Oracle GoldenGate supports adding tables for synchronization without having to stop and start processes or perform special procedures. The procedure varies, depending on whether or not you used wildcards in the TABLE parameter to specify tables."
    See page 310 in the Windows/UNIX admin guide.
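
    For reference, a row filter in a data pump parameter file can look like the sketch below; the process name, remote host, trail, and the SYNC_FLAG column are all placeholders:

    EXTRACT PMP_AP1
    RMTHOST targethost, MGRPORT 7809
    RMTTRAIL ./dirdata/sb
    -- Filter in the pump so only matching rows cross the network
    TABLE GG.SYNC_TABLE, WHERE (SYNC_FLAG = 1);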
