Configuring Streams using Handlers in Schema Replication

Hi,
We need to replicate a schema between two databases (source and target) using Oracle Streams.
We have already tested the capture - propagation - apply process for the schema,
but we need to use handlers in this process to:
- capture LCRs and save them into another table (monitor_lcrs) -> this is our own table
- transform tables - as part of replicating the schema we need to denormalize many tables.
Is it possible to use handlers to transform tables in schema replication?
Please help.
Regards,
MQ

Hi,
Captured LCRs are placed in queues; do you want them to be placed directly into a normal table? If so, that is not possible.
When you say "transform tables", what exactly do you mean by that?
Thanks,
Lalitha
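
For what it's worth, an apply-side DML handler is the usual Streams mechanism for both requirements: the handler sees every row LCR before it is applied, so it can log it into a custom table and/or modify it. A minimal sketch only; the table name SCOTT.EMP and the monitor_lcrs columns are hypothetical:

CREATE OR REPLACE PROCEDURE strmadmin.monitor_lcr_handler(in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);   -- unwrap the row LCR
  -- log the LCR into our own monitoring table (assumed columns)
  INSERT INTO strmadmin.monitor_lcrs (logged_at, object_owner, object_name, command_type)
  VALUES (SYSTIMESTAMP, lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_COMMAND_TYPE());
  -- a transformation could modify the LCR here, e.g. lcr.SET_OBJECT_NAME(...)
  lcr.EXECUTE(TRUE);             -- apply the (possibly modified) change
END;
/
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'SCOTT.EMP',    -- hypothetical source table
    object_type    => 'TABLE',
    operation_name => 'INSERT',       -- register once per operation (INSERT/UPDATE/DELETE)
    user_procedure => 'strmadmin.monitor_lcr_handler');
END;
/

For denormalizing many tables, note that the declarative rule-based transformations only cover renames and column add/drop/rename; anything more needs a custom transformation function or a DML handler like the one above.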

Similar Messages

  • Oracle Streams setup for multiple schemas in the same database

    We are on 11.1.0.7 and will be using Oracle 11g Streams to replicate data in real time for two schemas between the source and target sets of schemas within the same database. We will be doing DDL as well as DML replication.
    I created the following plan and want your inputs. After implementing this, I created a table in SCOTT but it didn't get replicated to RPT_SCOTT; later I tried inserting a row into the table created under SCOTT, but that too didn't get replicated to RPT_SCOTT.
    Here are the steps that I used to set up my STREAMS -
    Database Instance: TESTDB
    Schemas:
         Source: SCOTT, HR
         Target: RPT_SCOTT, RPT_HR
    Configuring Streams:
    1.     Database is in Archive log mode
    2.     Set up the Streams administrator.
    create user STRMADMIN identified by STRMADMIN default tablespace USERS temporary tablespace temp;
    grant resource, dba, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'STRMADMIN',
    grant_privileges => TRUE);
    END;
    3.     Set up Streams queues
    CONNECT STRMADMIN/****
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUETAB',
    queue_user => 'STRMADMIN');
    END;
    4.     Add the Apply rule
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name      => 'RPT_SCOTT',
    streams_type     => 'APPLY',
    streams_name     => 'APPLY_CC_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name      => 'RPT_HR',
    streams_type     => 'APPLY',
    streams_name     => 'APPLY_AB_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name     => 'APPLY_CC_STREAM',
    apply_user     => 'STRMADMIN');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name     => 'APPLY_AB_STREAM',
    apply_user     => 'STRMADMIN');
    END;
    5.     Add the Capture Rule
    CONNECT STRMADMIN/*****
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type     => 'CAPTURE',
    streams_name     => 'CAPTURE_CC_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'HR',
    streams_type     => 'CAPTURE',
    streams_name     => 'CAPTURE_AB_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    6.     Set the instantiation system change number (SCN)
    CONNECT STRMADMIN/******
    DECLARE
    source_scn NUMBER;
    BEGIN
    source_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'SCOTT',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    END;
    7.     Start the Apply
    CONNECT STRMADMIN/******
    BEGIN
    DBMS_APPLY_ADM.START_APPLY('APPLY_CC_STREAM');
    END;
    BEGIN
    DBMS_APPLY_ADM.START_APPLY('APPLY_AB_STREAM');
    END;
    8.     Start the Capture
    CONNECT STRMADMIN/******
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_CC_STREAM');
    END;
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_AB_STREAM');
    END;
    Waiting for your inputs!

    If I understand your code, you want to do this on the same DB:
    SCOTT --> RPT_SCOTT
    HR --> RPT_HR
    So there is a schema transformation; where is it coded?
    General info: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_transform.htm
    More specific on schema rename: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_mtransform.htm#CHDGDHDE
    Next: where is the initialisation of both capture schemas?
    Missing:
    execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'SCOTT');
    execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'HR');
    This tells Streams from where to capture the SCN.
    Also, there is a (WRONG) instantiation of SCOTT for the apply, which tells the apply to consider as valuable candidates all LCRs after source_scn, but where is the code for RPT_HR? Alas, you put 'SCOTT' as the APPLY target schema while it should have been 'RPT_SCOTT'.
    Whether that is correct or false depends on where you put the schema transformation. If you put the transformation at apply time, then use the SOURCE schema names (SCOTT, HR), for the LCRs will contain those names. If you put the transformation at capture time, then put the TARGET schema names (RPT_SCOTT, RPT_HR), for the LCRs will contain those names.
    Let's say you put the schema transformation at capture time; then this is missing:
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'RPT_HR',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    If you attach the transformation to the apply process, then the code is:
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'HR',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    And this is useless:
    -- useless code
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_CC_STREAM',
    apply_user => 'STRMADMIN');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_AB_STREAM',
    apply_user => 'STRMADMIN');
    END;
    /
    Last: you are using the same queue for 2 separate capture/apply streams.
    Do yourself a favor and give each capture/transform/apply couple its own queue.
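    For the schema transformation itself, a declarative rule-based transformation can be attached to the capture (or apply) rule. A minimal sketch, assuming it is attached at capture time; the rule name 'SCOTT123' is hypothetical and must be looked up in DBA_STREAMS_RULES:
    BEGIN
    DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'STRMADMIN.SCOTT123',  -- hypothetical: the capture DML rule from DBA_STREAMS_RULES
    from_schema_name => 'SCOTT',
    to_schema_name   => 'RPT_SCOTT');
    END;
    /
    With the transformation at capture time, remember to set the apply instantiation SCNs for the TARGET schema names, as discussed above.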

  • Error while  using remote-invocation-scheme

    Hi
    Can you please help us with the following exception while we are trying to fetch from a remote cache server instance using Extend TCP (<remote-invocation-scheme>)?
    Please find below the client and server cache-config.xml along with the client Java code. We appreciate your response in this regard.
    Thanks
    vamsi
    Client config.xml
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
    xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config/coherence-cache-config.xsd">
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>example-local-cache</cache-name>
                   <scheme-name>example-local</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <local-scheme>
                   <scheme-name>example-local</scheme-name>
                   <eviction-policy>LRU</eviction-policy>
                   <high-units>10</high-units>
                   <low-units>3</low-units>
                   <unit-calculator>FIXED</unit-calculator>
                   <expiry-delay>10ms</expiry-delay>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.test.dao.xx</class-name>
                             <init-params>
                                  <!-- <init-param> <param-type>java.lang.String</param-type> <param-type>locations</param-type>
                                       </init-param> -->
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <pre-load>true</pre-load>
              </local-scheme>
          <remote-invocation-scheme>
               <scheme-name>extend-invocation</scheme-name>
               <service-name>ExtendTcpInvocationService</service-name>
               <initiator-config>
                    <tcp-initiator>
                         <remote-addresses>
                              <socket-address>
                                   <address>x.x.x.x</address>
                                   <port>9099</port>
                              </socket-address>
                         </remote-addresses>
                         <connect-timeout>10s</connect-timeout>
                    </tcp-initiator>
                    <outgoing-message-handler>
                         <request-timeout>5s</request-timeout>
                    </outgoing-message-handler>
               </initiator-config>
          </remote-invocation-scheme>
         </caching-schemes>
    </cache-config>
    Server config.xml
    <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
    xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config/coherence-cache-config.xsd">
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>example-local-cache</cache-name>
                   <scheme-name>example-local</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <local-scheme>
                   <scheme-name>example-local</scheme-name>
                   <eviction-policy>LRU</eviction-policy>
                   <high-units>10</high-units>
                   <low-units>3</low-units>
                   <unit-calculator>FIXED</unit-calculator>
                   <expiry-delay>10ms</expiry-delay>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.test.dao.xx</class-name>
                             <init-params>
                                  <!-- <init-param> <param-type>java.lang.String</param-type> <param-type>locations</param-type>
                                       </init-param> -->
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <pre-load>true</pre-load>
              </local-scheme>
          <proxy-scheme>
               <service-name>ExtendTcpProxyService</service-name>
               <thread-count>5</thread-count>
               <acceptor-config>
                    <tcp-acceptor>
                         <local-address>
                              <address>localhost</address>
                              <port>9099</port>
                         </local-address>
                    </tcp-acceptor>
               </acceptor-config>
               <autostart>true</autostart>
          </proxy-scheme>
    </caching-schemes>
    </cache-config>
    Client code
    ===========
    import java.util.Map;
    import com.tangosol.net.AbstractInvocable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.InvocationService;

    public class TestClient {
         /**
          * @param args
          */
         public static void main(String[] args) {
              InvocationService service = (InvocationService)
                   CacheFactory.getConfigurableCacheFactory()
                        .ensureService("ExtendTcpInvocationService");
              Map map = service.query(new AbstractInvocable() {
                   public void run() {
                        setResult(CacheFactory.getCache("example-local-cache").get("key"));
                   }
              }, null);
              int IValue = (Integer) map.get(service.getCluster().getLocalMember());
         }
    }
    Exception -
    Exception in thread "main" java.lang.IllegalArgumentException: Missing scheme for service: "ExtendTcpInvocationService"
         at com.tangosol.net.DefaultConfigurableCacheFactory.findServiceScheme(DefaultConfigurableCacheFactory.java:722)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:330)
         at com.test.dao.TestClient.main(TestClient.java:31)

    If you are pretty sure the code you posted was running on the client side (if you accidentally ran it on the server side, of course it couldn't find the service), check the IP address you specified in your client's cache config.
    On your server, you were using localhost. If you are not using localhost in your client's cache config (i.e. running from a different server), replace the localhost in your server's cache config with the correct IP.
    Or check the server's log to make sure the proxy service was started on a matching IP.

  • DBA_CAPTURE_PREPARED_TABLES is not showing two tables in schema replication

    Hi,
    Using Oracle 10.2.0.3.0 on Linux 64-bit.
    I have set up one-way Streams replication (schema replication) from production to a destination server. When I query the DBA_CAPTURE_PREPARED_TABLES view, it is not showing two of the tables, even though one of them has the same number of records as production. Do I need to do anything, given that replication is running fine?
    Or do I not need to worry, because I am doing schema-level replication and the following view is showing the name of the schema:
    SELECT * FROM DBA_CAPTURE_PREPARED_SCHEMAS;
    Regards,

    Hello,
    If replication is working fine on these 2 tables, you don't need to worry about them. Usually the DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION API dumps the necessary Streams dictionary information and enables supplemental logging. If your tables are getting replicated properly, that means supplemental logging is already in place and the Streams dictionary information is already available at the apply site.
    Still, if you need to, you can run DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION again for these 2 tables on the capture site so that the view gets populated, but it doesn't add any value.
    Thanks,
    Rijesh
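    For reference, re-running the preparation is a one-liner per table (the table name here is a hypothetical example):
    BEGIN
    DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'SCOTT.EMP');
    END;
    /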

  • Schema replication in Sun DS 5.1 and 5.2

    hi ldapies,
    has anybody experienced a similar problem?
    Excerpt from errors log on supplier (master) server:
    NSMMReplicationPlugin - Schema replication update failed: Type or value exists
    NSMMReplicationPlugin - Warning: unable to replicate schema to host yy.yy.yy.yy, port 389. Continuing with replication session.
    Excerpt from access DS 5.2 consumer (client) server log:
    conn=1844 op=-1 msgId=-1 - fd=31 slot=31 LDAP connection from xx.xx.xx.xx to yy.yy.yy.yy
    conn=1844 op=0 msgId=1 - BIND dn="cn=Replication Manager,cn=replication,cn=config" method=128 version=3
    conn=1844 op=0 msgId=1 - RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=replication manager,cn=replication,cn=config"
    conn=1844 op=1 msgId=2 - SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
    conn=1844 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=0
    conn=1844 op=2 msgId=3 - EXT oid="2.16.840.1.113730.3.5.3"
    conn=1844 op=2 msgId=3 - RESULT err=0 tag=120 nentries=0 etime=0
    conn=1844 op=3 msgId=4 - SRCH base="cn=schema" scope=0 filter="(objectClass=*)" attrs="nsSchemaCSN"
    conn=1844 op=3 msgId=4 - RESULT err=0 tag=101 nentries=1 etime=0
    conn=1844 op=4 msgId=5 - MOD dn="cn=schema"
    conn=1844 op=4 msgId=5 - RESULT err=20 tag=103 nentries=0 etime=0
    conn=1844 op=5 msgId=6 - EXT oid="2.16.840.1.113730.3.5.5"
    conn=1844 op=5 msgId=6 - RESULT err=0 tag=120 nentries=0 etime=0
    conn=1844 op=6 msgId=7 - UNBIND
    conn=1844 op=6 msgId=-1 - closing - U1
    conn=1844 op=-1 msgId=-1 - closed.
    More configuration details:
    hosts OS - Solaris 9
    Supplier - Sun DS 5.1 Service Pack 4
    Consumer - Sun DS 5.1 Service Pack 4, Sun DS 5.2_Patch_4
    How it happens:
    The consumer was already configured and used for other data located in a separate database and suffix. The replication manager, and a new suffix and database for the replica, were created on the consumer. A replication agreement was created on the supplier with "Always keep in sync" set.
    Immediately after the agreement was confirmed, an attempt at a schema update happened, with the warning mentioned above.
    Yes, it is just a warning, but it can result in some problems.
    Changes in cn=schema are not stored in 99user.ldif on the consumer. The consumer DS keeps the updated changes in memory and uses them if needed. But if I decide to disable replication and restart the DS, it reads the user-defined schema definitions from 99user.ldif, and the updated schema entries obtained earlier from the supplier are gone, although the replicated data is still available.
    My question and cry for help:
    What can cause the schema update to fail, and how can I eliminate it?
    Or am I trying to do something impossible?
    I have already got suggestions like "Forget about DS 5.1" and "Supplier 5.1 and consumer 5.2? DO NOT DO THAT!"
    Note: the same happened with a DS 5.1 consumer.
    I will appreciate any sort of reaction.
    Thanks

    This is a conflict in the OID or name of either an object class or an attribute.
    No attribute may have an OID or name identical to another defined attribute
    (both OID and name identical between 2 definitions means the same definition).
    The error log on the consumer should have the OID or name of the schema element that is already defined.
    Schema replication between 5.2 and 5.1 servers is explained in the Administration Guide of Directory Server 5.2 and may require some settings (I don't recall which, but it's covered in the documentation).
    Regards,
    Ludovic.

  • How to setup bidirectional streams using OEM.

    hi,
    can anyone guide me in setting up bi-directional Streams using OEM?
    Thanks in Advance.

    Click on the database --> Data Movement --> Setup under Streams. Follow the steps according to whether you want schema replication, table replication, etc. You need to populate the host credentials for oracle.

  • Problems using mathml XML Schema in XMLDB

    I have successfully loaded the hierarchy of XML Schema definition documents for the current 'mathml' by adjusting the relative paths in all include and import statements, and by forcing the load to overcome cyclic dependency issues.
    However, when I try to create a table using XMLTYPE, or register another schema which depends on mathml, I receive the following error:
    ERROR at line 1:
    ORA-31079: unable to resolve reference to group "Content-expr.class"
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 37
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 61
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 126
    ORA-06512: at line 14
    Having tried loading into both 9iR2 and 10gR2 databases, and pasting the document containing 'Content-expr.class' into the parent document (i.e. removing the include), I have come to the conclusion that there is a problem with the schema definition.
    However, being new to XML, I do not know what the issue is, as the file containing the definition (math.xsd) is included well before the references in the parent document.
    Here is the group definition corresponding to the offending reference:
    <xs:group name="Content-expr.class">
    <xs:choice>
    <xs:group ref="ContExpr.class"/>
    <xs:group ref="PresExpr.class"/>
    </xs:choice>
    </xs:group>
    I am also unable to trace which reference is causing the problem as there are several.
    Does anyone have any suggestions as to what could be causing this issue, or how I can obtain further diagnostics?
    Any help much appreciated.
    Robert Honeyman
    *********** New info ***************
    OK. I have tried further to register the schema, and have the following additional information.
    I determined that my problem was cyclic dependencies being resolved ONLY by virtue of the parent file mathml2.xsd. I therefore pasted all the files together in include order so that this was no longer an issue. I then performed the following actions:
    1. I tried to register into Oracle 9iR2 database (9.2.0.4) and received the following error:
    ERROR at line 1:
    ORA-31151: Cyclic definition encountered for group: "Content-expr.class"
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 0
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 26
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 131
    ORA-06512: at line 14
    2. I then tried registering the schema into an Oracle 10G Release 2 database as well, and received the following error:
    ORA-31084: error while creating table "MEDLINE"."math729_TAB" for element
    "math"
    ORA-01792: maximum number of columns in a table or view is 1000
    ORA-02310: exceeded maximum number of allowable columns in table
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 37
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 61
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 126
    ORA-06512: at line 14
    My conclusions are as follows:
    1. 9iR2 (at least the version I was using, 9.2.0.4) can't handle cyclic dependencies at all, or at least not complex ones.
    2. Neither 9iR2 nor 10gR2 can handle cyclic dependencies (or other dependent definitions resolved purely by virtue of a parent file with multiple include statements).
    3. 10gR2 can handle cyclic dependencies when they are all defined in the same file or resolved by an explicit include, regardless of complexity.
    4. 10gR2 cannot handle highly complex XML Schema definitions due to its limit of 1000 columns per table.
    Does anyone have any comments on these conclusions?
    I don't even want to handle MathML, just another schema definition that imports it. I may try to make my own adjustments.
    Message was edited by:
    rhoneyman

    Thanks, I found the relevant sections in the XML DB 10gR2 documentation (chapter 3, I think). As far as I can tell, my options are to use one of the following:
    - the top-down technique, creating tables for sub-elements in the schema definition
    - the bottom-up technique, collapsing some of the lower elements into CLOBs (I guess this is semi-structured storage)
    The problem with MathML is that it is quite complex, with only one global schema element and many nested elements; hence only one table is created by XML DB, and it logically follows that we run out of columns.
    Getting it to work is likely to involve a fair amount of manual schema modification, which incurs a maintenance overhead for me.
    It would be good if Oracle could provide a way of simplifying this maintenance of complex schemas to get them registered. I guess this would mean increasing the number of columns allowed for a table, or offering some other parametric option for DBMS_XMLSCHEMA.REGISTER_SCHEMA that allowed a threshold and either a top-down or bottom-up approach to be taken automatically.
    For the time being I am using a simplified schema that does not depend on MathML for my storage needs.
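    For anyone trying the same thing: one way to keep control of table creation during registration is to suppress automatic table generation and build the storage tables yourself afterwards. A sketch only, with a placeholder URL and repository path:
    BEGIN
    DBMS_XMLSCHEMA.REGISTERSCHEMA(
    schemaurl => 'http://example.com/mathml2.xsd',               -- placeholder URL
    schemadoc => XDBURIType('/home/XSD/mathml2.xsd').getClob(),  -- placeholder repository path
    gentables => FALSE);                                         -- skip automatic table creation
    END;
    /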

  • What is the best methodology to handle database schema changes after an application has been deployed?

    Hi,
    VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
    What is a reliable method to follow when there is a schema change for an end-user database used by a deployed application? In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc.
    I list here the steps it seems I must consider. If I've missed any, please let me know:
    (1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder in <user>\App Data.
    (2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
    (3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
    (4) Then the application can operate using the new schema.
    This may seem basic, but for those of us who haven't done it, it seems pretty complicated. Item (3) seems to be the operative issue for database schema changes. Existing user data needs to be preserved, but using the new schema. I'd like to understand the various ways it can be done, whether there are specific tools created to handle this process, and which method is considered best practice.
    (1) Should we handle the transfer in a 'one-time use' application method, i.e. do it in application code?
    (2) Should we handle the transfer using some type of 'one-time use' SQL query? If this is the best way, can you provide some guidance on the different alternatives for how to perform this in SQL, and where to learn/see examples?
    (3) Some other method?
    Thanks.
    Best Regards,
    Alan

    Hi Uri,
    Thank you kindly for your response.  Also thanks to Kalman Toth for showing the right forum for such questions.
    To clarify the scenario, I did not mean to imply the end user 'owns' the schema. I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
    Although I listed step 3 as transferring the data, I should have made it clearer that I was trying to express my limited understanding of how this process "might work", since at the present time I am not an expert with this. I suspected my thinking was limited and someone would correct me.
    This is basically the reason for my post; I am hoping an expert can point me to what I need to learn about to handle database schema changes when application upgrades are deployed. For example, if an SQL script needs to be created and deployed, then I need to learn how to do that. What's the best practice, or the most reliable/efficient way, to make sure the end user's database is changed to the new schema after the upgraded application is deployed? Correct me if I'm wrong on this, but updating the end-user database will have to be handled totally within the deployment tool or by the upgraded application when it first starts up.
    If it makes a difference, I'll be deploying application upgrades initially using Click Once from Visual Studio, and eventually I may also use Windows Installer or Wix.
    Again, thanks for your help.
    Best Regards,
    Alan
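    For what it's worth, one common pattern for item (3) is to keep a one-row schema-version table in the user database and have the application run idempotent upgrade scripts at start-up. A minimal sketch in plain SQL; all table and column names are made up for illustration:
    -- created once, with the original schema
    CREATE TABLE schema_version (version INT NOT NULL);
    INSERT INTO schema_version (version) VALUES (1);
    -- upgrade script: the application runs this only when it reads version = 1
    ALTER TABLE customer ADD middle_name VARCHAR(50) NULL;  -- hypothetical new column
    UPDATE schema_version SET version = 2;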

  • Golden Gate Schema Replication

    Guys - my requirement is fairly simple. I have two schemas, GG [source] and GGR [target], on the same host, and one table called GG.SYNC_TABLE. I am having difficulties pushing data from GG to GGR.
    Below are the extract and replicat configurations:
    EXTRACT EXT_AP1
    SETENV (ORACLE_SID=ERPA4)
    RMTHOST mcdeagaix825, mgrport 7809
    USERID GG@ERPA4, PASSWORD goldengate1
    DISCARDFILE ./dirrpt/ext_ap1_discard.rpt, append, megabytes 50
    RMTTRAIL ./dirdata/sa
    TABLE GG.AP_AE_HEADERS_ALL;
    TABLE GG.AP_AE_LINES_ALL;
    TABLE GG.AP_BANK_ACCOUNTS_ALL;
    TABLE GG.AP_BANK_BRANCHES;
    TABLE GG.AP_CARDS_ALL;
    TABLE GG.AP_CHECKS_ALL;
    TABLE GG.AP_CREDIT_CARD_TRXNS_ALL;
    TABLE GG.AP_EXPENSE_REPORTS_ALL;
    TABLE GG.AP_EXPENSE_REPORT_HEADERS_ALL;
    TABLE GG.AP_EXPENSE_REPORT_LINES_ALL;
    TABLE GG.AP_EXPENSE_REPORT_PARAMS_ALL;
    TABLE GG.AP_EXP_REPORT_DISTS_ALL;
    TABLE GG.AP_HOLDS_ALL;
    TABLE GG.AP_HOLD_CODES;
    TABLE GG.AP_INVOICES_ALL;
    TABLE GG.AP_INVOICE_DISTRIBUTIONS_ALL;
    TABLE GG.AP_INVOICE_LINES_ALL;
    TABLE GG.AP_INVOICE_PAYMENTS_ALL;
    TABLE GG.AP_NOTES;
    TABLE GG.AP_PAYMENT_HISTORY_ALL;
    TABLE GG.AP_PAYMENT_HIST_DISTS;
    TABLE GG.AP_PAYMENT_SCHEDULES_ALL;
    TABLE GG.AP_POL_VIOLATIONS_ALL;
    TABLE GG.AP_SELF_ASSESSED_TAX_DIST_ALL;
    TABLE GG.AP_SUPPLIERS;
    TABLE GG.AP_SUPPLIER_SITES_ALL;
    TABLE GG.AP_SYSTEM_PARAMETERS_ALL;
    TABLE GG.AP_TERMS_LINES;
    TABLE GG.AP_TOLERANCE_TEMPLATES;
    TABLE GG.SYNC_TABLE;
    REPLICAT REP_AP1
    SETENV (ORACLE_SID=ERPA4)
    USERID GG@ERPA4, PASSWORD goldengate1
    ASSUMETARGETDEFS
    REPORTCOUNT EVERY 1 MINUTES, RATE
    DISCARDFILE ./dirrpt/rep_ap1.dsc, PURGE
    MAP GG.AP_AE_HEADERS_ALL, TARGET GGR.AP_AE_HEADERS_ALL;
    MAP GG.AP_AE_LINES_ALL, TARGET GGR.AP_AE_LINES_ALL;
    MAP GG.AP_BANK_ACCOUNTS_ALL, TARGET GGR.AP_BANK_ACCOUNTS_ALL;
    MAP GG.AP_BANK_BRANCHES, TARGET GGR.AP_BANK_BRANCHES;
    MAP GG.AP_CARDS_ALL, TARGET GGR.AP_CARDS_ALL;
    MAP GG.AP_CHECKS_ALL, TARGET GGR.AP_CHECKS_ALL;
    MAP GG.AP_CREDIT_CARD_TRXNS_ALL, TARGET GGR.AP_CREDIT_CARD_TRXNS_ALL;
    MAP GG.AP_EXPENSE_REPORTS_ALL, TARGET GGR.AP_EXPENSE_REPORTS_ALL;
    MAP GG.AP_EXPENSE_REPORT_HEADERS_ALL, TARGET GGR.AP_EXPENSE_REPORT_HEADERS_ALL;
    MAP GG.AP_EXPENSE_REPORT_LINES_ALL, TARGET GGR.AP_EXPENSE_REPORT_LINES_ALL;
    MAP GG.AP_EXPENSE_REPORT_PARAMS_ALL, TARGET GGR.AP_EXPENSE_REPORT_PARAMS_ALL;
    MAP GG.AP_EXP_REPORT_DISTS_ALL, TARGET GGR.AP_EXP_REPORT_DISTS_ALL;
    MAP GG.AP_HOLDS_ALL, TARGET GGR.AP_HOLDS_ALL;
    MAP GG.AP_HOLD_CODES, TARGET GGR.AP_HOLD_CODES;
    MAP GG.AP_INVOICES_ALL, TARGET GGR.AP_INVOICES_ALL;
    MAP GG.AP_INVOICE_DISTRIBUTIONS_ALL, TARGET GGR.AP_INVOICE_DISTRIBUTIONS_ALL;
    MAP GG.AP_INVOICE_LINES_ALL, TARGET GGR.AP_INVOICE_LINES_ALL;
    MAP GG.AP_INVOICE_PAYMENTS_ALL, TARGET GGR.AP_INVOICE_PAYMENTS_ALL;
    MAP GG.AP_NOTES, TARGET GGR.AP_NOTES;
    MAP GG.AP_PAYMENT_HISTORY_ALL, TARGET GGR.AP_PAYMENT_HISTORY_ALL;
    MAP GG.AP_PAYMENT_HIST_DISTS, TARGET GGR.AP_PAYMENT_HIST_DISTS;
    MAP GG.AP_PAYMENT_SCHEDULES_ALL, TARGET GGR.AP_PAYMENT_SCHEDULES_ALL;
    MAP GG.AP_POL_VIOLATIONS_ALL, TARGET GGR.AP_POL_VIOLATIONS_ALL;
    MAP GG.AP_SELF_ASSESSED_TAX_DIST_ALL, TARGET GGR.AP_SELF_ASSESSED_TAX_DIST_ALL;
    MAP GG.AP_SUPPLIERS, TARGET GGR.AP_SUPPLIERS;
    MAP GG.AP_SUPPLIER_SITES_ALL, TARGET GGR.AP_SUPPLIER_SITES_ALL;
    MAP GG.AP_SYSTEM_PARAMETERS_ALL, TARGET GGR.AP_SYSTEM_PARAMETERS_ALL;
    MAP GG.AP_TERMS_LINES, TARGET GGR.AP_TERMS_LINES;
    MAP GG.AP_TOLERANCE_TEMPLATES, TARGET GGR.AP_TOLERANCE_TEMPLATES;
    MAP GG.SYNC_TABLE, TARGET GGR.SYNC_TABLE;
    Extract, Replicat and Manager processes are running fine, but a commit on the source is not propagating the data across to the GGR schema. Supplemental logging is enabled. Archiving is not [I hope it's not required]. What do you think I am missing here?

    There is no point in doing this if you are not running in archived log mode. If you get behind for whatever reason and GoldenGate has to look further into the past than what is currently in your online redo logs, game over.
    Have you tried the tutorial at the Oracle Learning Library?
    Another thing: why would you use your GoldenGate user as part of your schema/data replication? That is only asking for trouble and unnecessary complexity. The GoldenGate schema is used to drive the replication between other schemas, not to be replicated itself.
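    Checking and enabling archived log mode is quick (standard SQL*Plus as SYSDBA; this is the generic procedure, nothing specific to this setup):
    SELECT log_mode FROM v$database;
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;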

  • Problem using Handler chain with JAXWS

    Hi, I want to use a handler chain on a web service, and I'm not sure I understood correctly how it works. In every example I found on the web, the handler chain is applied on the client side (the binding is made on the generated client side). So when I send a request the handler catches an outbound message, and when the web service responds, it receives an inbound message. The problem is that I would like to attach the handler on the server side, i.e. when the web service receives a request, the handler receives an inbound message, and when the WS responds, it receives an outbound message.
    Is it possible to do so? I mean, it makes no sense to attach the handler only on the client side, since I don't have control over the client side; I produce web services and I want to apply some handlers at this level.
    Any idea? Or is there something that I didn't understand about handler chains?
    Thanks,
    Korg

    Hello again, I finally found how to do what I wanted. The problem came from the different examples I found on the web. In my handler.xml file, the structure must be something like:
    <jws:handler-chains xmlns:jws="http://java.sun.com/xml/ns/javaee">
    <jws:handler-chain>
    <jws:handler>
    <jws:handler-class>........</jws:handler-class>
    </jws:handler>
    </jws:handler-chain>
    </jws:handler-chains>
    In many examples, there is an element at the root called "handler-config", and it doesn't work if I use it that way. Here's an example of a handler.xml file that doesn't work:
    <handler-config>
    <handler-chains>
    <handler-chain>
    <handler>
    <handler-class>............</handler-class>
    </handler>
    </handler-chain>
    </handler-chains>
    </handler-config>
    Korg

  • Getting "invalid type: 169" errors when using POF with Push Replication

    I'm trying to get Push Replication - latest version - running on Coherence 3.6.1. I can get it working fine if I don't use POF with my objects, but when I try to use POF format for my objects I get this:
    2011-02-11 13:06:00.993/2.297 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 13:06:01.165/2.469 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 13:06:01.165/2.469 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    2011-02-11 13:06:01.243/2.547 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Service DistributedCacheForSequenceGenerators joined the cluster with senior service member 1
    2011-02-11 13:06:01.258/2.562 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForLiveObjects, member=1): Service DistributedCacheForLiveObjects joined the cluster with senior service member 1
    2011-02-11 13:06:01.274/2.578 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForSubscriptions, member=1): Service DistributedCacheForSubscriptions joined the cluster with senior service member 1
    2011-02-11 13:06:01.290/2.594 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForMessages, member=1): Service DistributedCacheForMessages joined the cluster with senior service member 1
    2011-02-11 13:06:01.305/2.609 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForDestinations, member=1): Service DistributedCacheForDestinations joined the cluster with senior service member 1
    2011-02-11 13:06:01.305/2.609 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Service DistributedCacheWithPublishingCacheStore joined the cluster with senior service member 1
    2011-02-11 13:06:01.321/2.625 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <Info> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): TcpAcceptor now listening for connections on 166.15.224.91:20002
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Started: TcpAcceptor{Name=Proxy:ExtendTcpProxyService:TcpAcceptor, State=(SERVICE_STARTED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.DefaultSerializer, PingInterval=0, PingTimeout=0, RequestTimeout=0, SocketProvider=SystemSocketProvider, LocalAddress=[/166.15.224.91:20002], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}, ListenBacklog=0, BufferPoolIn=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited), BufferPoolOut=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited)}
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService, member=1): Service ExtendTcpProxyService joined the cluster with senior service member 1
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <Info> (thread=main, member=1):
    Services
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.6, OldestMemberId=1}
    InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    PartitionedCache{Name=DistributedCacheForSequenceGenerators, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForLiveObjects, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForSubscriptions, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForMessages, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForDestinations, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheWithPublishingCacheStore, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    ProxyService{Name=ExtendTcpProxyService, State=(SERVICE_STARTED), Id=9, Version=3.2, OldestMemberId=1}
    Started DefaultCacheServer...
    2011-02-11 13:08:27.894/149.198 Oracle Coherence GE 3.6.1.1 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=csfb.cs-group.com, clusterName=SPTestCluster, cacheName=source-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=32, value=0x15A90F00004E07424F4F4B303038014E08494E535430393834024E0345535040), value=Binary(length=147, value=0x1281A30115AA0F0000A90F00004E07424F4F4B303038014E08494E535430393834024E03455350400248ADEEF99607060348858197BF22060448B4D8E9BE02060548A0D2CDC70E060648B0E9A2C4030607488DBCD6E50D060848B18FC1882006094E03303038402B155B014E0524737263244E1F637366622E63732D67726F75702E636F6D2D535054657374436C7573746572), originalValue=Binary(length=0, value=0x)}} to Cache passive-cache because of
    (Wrapped) java.io.StreamCorruptedException: invalid type: 169 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
    2011-02-11 13:08:27.894/149.198 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [source-cache]) java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
         at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:103)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
         ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:265)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
         at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
         at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
         at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
         ... 10 more
    Caused by: java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2265)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2253)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2703)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         ... 16 more
    (the same "Failed to publish EntryOperation ... invalid type: 169" error and stack trace repeat at 13:08:37.925 and 13:08:47.940)
    It seems to be loading my POF configuration file - which also includes the standard Coherence ones as well as those required for PR - just fine, as you can see at the top of the trace.
    Any ideas why POF format for my objects is giving this error? (NB. I've tested the POF stuff outside of PR and it all works fine.)
    EDIT: I've tried switching the "publisher" to the "file" publisher in PR, and that works fine. I see my POF-format cached data extracted and published to the directory I specify. So the "publish" part works when I use a file publisher.
    Cheers,
    Steve

    Hi Neville,
    I don't pass any POF config parameters on the command line. My POF file is called "pof-config.xml", so it seems to be picked up by default. The trace I showed in my post shows the file being picked up.
    My POF config file content is as follows:
    <pof-config>
         <user-type-list>
              <!-- Standard Coherence POF types -->
              <include>coherence-pof-config.xml</include>
               <!-- Coherence Push Replication Required POF types -->
               <include>coherence-common-pof-config.xml</include>
               <include>coherence-messagingpattern-pof-config.xml</include>
               <include>coherence-pushreplicationpattern-pof-config.xml</include>
              <!-- User POF types (must be above 1000) -->
              <user-type>
                   <type-id>1001</type-id>
                   <class-name>com.csg.gpc.domain.model.position.trading.TradingPositionKey</class-name>
                   <serializer>
                        <class-name>com.csg.gpc.coherence.pof.position.trading.TradingPositionKeySerializer</class-name>
                   </serializer>
              </user-type>
              <user-type>
                   <type-id>1002</type-id>
                   <class-name>com.csg.gpc.domain.model.position.trading.TradingPosition</class-name>
                   <serializer>
                        <class-name>com.csg.gpc.coherence.pof.position.trading.TradingPositionSerializer</class-name>
                   </serializer>
              </user-type>
              <user-type>
                   <type-id>1003</type-id>
                   <class-name>com.csg.gpc.domain.model.position.simple.SimplePosition</class-name>
                   <serializer>
                        <class-name>com.csg.gpc.coherence.pof.position.simple.SimplePositionSerializer</class-name>
                   </serializer>
              </user-type>
              <user-type>
                   <type-id>1004</type-id>
                   <class-name>com.csg.gpc.coherence.processor.TradingPositionUpdateProcessor</class-name>
              </user-type>
         </user-type-list>
    </pof-config>
    EDIT: I'm running both clusters here from within Eclipse. Here's the POF bits from the startup of the receiving cluster:
    2011-02-11 15:05:22.607/2.328 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    And here's the start-up POF bits from the sending cluster:
    2011-02-11 15:07:09.744/2.343 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    They both seem to be reading my pof-config.xml file.
    I have the following in my sending cluster cache config:
              <sync:provider pof-enabled="true">
                   <sync:coherence-provider />
              </sync:provider>
    And this in the receiving cache config:
    <introduce:config file="coherence-pushreplicationpattern-pof-cache-config.xml" />
    Cheers,
    Steve
    Edited by: stevephe on 11-Feb-2011 07:05

  • Any tutorial for Live RTMP Dynamic Streaming using Strobe Media Playback?

    Is there any tutorial for Live RTMP Dynamic Streaming using Strobe Media Playback available anywhere on the web?

    Thank you for the link, but it does not solve my problem. In that thread they work with manifest XML files like a playlist, which I tried, but it also doesn't work. Adobe advertises FMP as easy config and less code for non-geeks, and this is exactly what I want for my project.
    Can anybody post some HTML sample code, especially for dynamic streaming/MBR with Flash Media Playback? For live or on-demand.
    Best

  • Playing HLS stream using OSMF in Air for IOS

    Is there any sample app (source) that shows how to play HLS streams using AIR for iOS, that the OSMF team can make available?
    I am running into issues trying to get this to work, as outlined here: http://forums.adobe.com/message/4013821#4013821
    TIA,
    - Abey

    Thanks to everyone who replied.
    The conclusive answer is that there are only 2 ways to display H.264 video in AIR for iOS
    (more info here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#play%28%29)
    1. Progressive download
    2. HLS format (slight caveat: in my tests at least, OSMF 1.6.1 doesn't handle this, but if you use the NetStream directly with StageVideo enabled, it works)
    The updated matrix is:
    FMS 4.5 H.264 Streaming test matrix
                                 RTMP   HDS   HLS   HTTP Progressive Download
    AIR for Android              Yes    Yes   No    Yes
    AIR on Windows (Desktop)     Yes    Yes   No    Yes
    AIR on iOS                   No     No    Yes   Yes
    Safari Browser on iOS        No     No    Yes   No

  • Is there a way to view the Photo Stream using Android device?

    My iPhone 4 is currently broken and I want access to my Photo Stream. Is there a way I can access my iCloud Photo Stream using my Samsung Galaxy Tab?

    Not that I'm aware of.

  • Object reference not set to an instance of an object error when generating a schema using flat file schema wizard.

    I have a csv file that I need to generate a schema for. I am trying to generate the schema using the Flat File Schema Wizard, but I keep getting an "Object reference not set to an instance of an object." error when I click the Next button after specifying the properties of the child elements in the wizard. At the end a schema file is generated, but it contains an empty root record with no child elements.
    I thought maybe this was because I didn't have my project checked out from the Visual SourceSafe db first, but I tried again with the project checked out and got the same error.
    I also tried creating a brand-new project and generating a schema for it, but got the same error.
    I am not sure what is causing the NullReferenceException to be thrown, and there is nothing in the Windows event log that would tell me more about the problem.
    I am using Visual Studio 2008 for my BizTalk development.
    I would appreciate it if someone has any insights on this issue.

    Hi,
    To test your environment, create a new BizTalk project outside of source control.
    Create a simple csv file on the file system.
    Name,City,State
    Bob,New York,NY
    Use the Flat file schema Wizard to create the flat file schema from your simple csv instance.
    Validate the schema.
    Test the schema using your csv instance.
    This will help you determine if everything is ok with your environment.
    Thanks,
    William
