CDC on a view?

Hi all
1. Is it possible to apply CDC on views?
I was attempting to apply CDC on views.
When I executed Start Journal, I got the following error:
java.sql.SQLException: ORA-25001: cannot create this trigger type on this type of view
(The code snippet for creating a trigger in the JKM is:
triggerCmd = """
     create or replace trigger ODI_SOURCE_11G.T$VW_PARTY_PARTY_EMP
     after insert or update or delete on ODI_SOURCE_11G.VW_PARTY_PARTY_EMP
     for each row
     ...
""")
2. Can the code/JKM be customized to apply CDC on views (i.e. to generate an INSTEAD OF trigger)?
3. Will there be any other impacts due to changing the code?

Hi,
Duplicate the JKM and give it a name, say "JKM ON VIEW".
Now change the CREATE TRIGGER statement to CREATE INSTEAD OF TRIGGER.
Use it in your project.
Thanks,
Sutirtha
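The customized JKM step would then generate something along these lines. This is only a minimal sketch: the trigger and view names are taken from the snippet in the question, while the journal table columns and flag values are assumptions (the real JKM template fills in the actual J$ insert logic):

```sql
-- Sketch of an INSTEAD OF trigger for journalizing a view.
-- Journal table/column names below are assumptions, not ODI-generated code.
create or replace trigger ODI_SOURCE_11G.T$VW_PARTY_PARTY_EMP
instead of insert or update or delete on ODI_SOURCE_11G.VW_PARTY_PARTY_EMP
for each row
begin
  -- Record the change in the journal table: 'D' for deletes, 'I' otherwise,
  -- keyed on the (assumed) primary key column PARTY_ID.
  insert into ODI_SOURCE_11G.J$VW_PARTY_PARTY_EMP
    (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, PARTY_ID)
  values
    ('SUNOPSIS',
     case when deleting then 'D' else 'I' end,
     sysdate,
     case when deleting then :old.PARTY_ID else :new.PARTY_ID end);
end;
/
```

One caveat: an INSTEAD OF trigger only fires on DML issued against the view itself. Changes made directly to the underlying tables will not be captured, so this only works if all writes go through the view.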

Similar Messages

  • Changed Data Capture (CDC) when view as a Source

    Hello All,
We implemented Changed Data Capture (CDC) with a table as the source, using the JKM Oracle Simple, and it works fine. But we need to implement CDC with a view as the source. We included a primary key at the ODI level for this view, as CDC requires one on the source.
As we cannot create triggers on views, we get the following error while creating the journal view (prefixed JV$<table_name>):
    "1446 : 72000 : java.sql.SQLException: ORA-01446: cannot select ROWID from view with DISTINCT, GROUP BY, etc."
    How can we achieve CDC if our source is a view?
    Any suggestions..
    Thanks,
    -Vency

Hi,
It's not an issue of a "lock", so no luck.
It's definitely an issue with the view.
I also got the real error:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
I wonder why this is the error, as my view does not have DISTINCT, GROUP BY, etc.
I also checked
select * from USER_UPDATABLE_COLUMNS;
and found that none of the columns are updatable.
So how do I make them updatable and get my form to work?
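To see why Oracle considers the view non-updatable, the dictionary view can be queried per column (a sketch; VW_PARTY_PARTY_EMP is a placeholder name, substitute your own view):

```sql
-- Which columns of the view will Oracle allow DML on?
select column_name, updatable, insertable, deletable
from   user_updatable_columns
where  table_name = 'VW_PARTY_PARTY_EMP';
```

If every column shows NO, the view contains a construct (join, DISTINCT, GROUP BY, aggregate, set operator, etc.) that makes it inherently non-updatable, and only an INSTEAD OF trigger can make DML against it work.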

  • CDC on view

    Hi,
I have created a view based on a source table in the database and want to use that view as a source in ODI. If any changes are made to the source table, those changes have to be captured.
Can we apply CDC on a view database object to capture the changes? I am using Oracle 11g as source and target, and ODI 11g.
    Regards,
    mc

Hi,
Please follow the thread below. Hope this helps you.
    cdc
    Regards,
    Chaitanya.

  • Use of View in ODI

    Hi All,
We know that we can apply reverse-engineering on tables.
But suppose that instead of a table I have a view in the target datastore (Oracle). Is it possible to reverse-engineer the view from the target Oracle database into a model in ODI? If yes, how can we use it in an interface, and can we apply CDC on that view?
There is a restriction that we can only use tables of the target datastore.
Some background, for anyone who knows Oracle Workspace Manager (OWM):
OWM is used for data versioning, so if we add any table to OWM, the table gets converted into a view, and some columns are added to it for data versioning and maintenance.
So my target datastore tables reside in OWM, and the requirement is to use the views for inserting data into the target tables.
    Thanks
    Neeraj

Yes, we can reverse-engineer views in ODI.
Go to the data model, and on the Reverse tab, under "Types of objects to reverse-engineer", uncheck the Table option and check the View option, then reverse. You will be able to reverse-engineer the view.

  • Change Data Capture on a view

I am trying to do change data capture on a view. When I start the journal, I get an error in the "create journal" step.
BEGIN
  DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
    owner             => 'DMTRA_TEMPLATE',
    change_table_name => 'J$BIEO_HYP_EXP_ORG_WEEK',
    change_set_name   => 'DEMANTRA_SOURCE',
    source_schema     => 'DMTRA_TEMPLATE',
    source_table      => 'BIEO_HYP_EXP_ORG_WEEK',
    column_type_list  => 'SDATE DATE, LEVEL1 VARCHAR2(2000)',
    capture_values    => 'new',
    rs_id             => 'n',
    row_id            => 'n',
    user_id           => 'n',
    timestamp         => 'n',
    object_id         => 'n',
    source_colmap     => 'n',
    target_colmap     => 'n',
    options_string    => '');
END;
The error that I am getting is:
    java.sql.BatchUpdateException: ORA-31419: source table does not exist
    ORA-06512: at "SYS.DBMS_CDC_PUBLISH", line 611.
I want to know: is it possible to do CDC on a view? And if yes, is the procedure to be followed any different from the one used for a table?

I'm afraid this is not possible!

  • How to do incremental import via dump...

I have the table inf_products_dim.
As a one-time load I used the .dmp option (i.e. export/import).
From now onwards, can we do incremental loads via export/import?
Example: on 20-Aug-2009 we loaded 1,200 records into inf_products_dim using exp/imp (this was the initial load).
Now on 26-Aug-2009 we need to load 6 days of data into inf_products_dim, i.e. an incremental load which will have updated/new records, keyed on inventory_item_id.
Can we export those 6 days of data and import (MERGE) them into inf_products_dim using a .dmp file?

    Import will not be doing a merge. If you wanted to use export (classic or DataPump), you would have to specify a WHERE clause as a parameter to the export, import the data into a separate staging table, and write your own MERGE logic. Definitely possible, but less than ideal.
    Using LAST_UPDATE_DATE like this also has the potential for problems if there is any possibility of ever having transactions that run for more than a few seconds in the OLTP system (including data clean-up/ migration transactions) or if there is ever any potential that code bugs will fail to increment LAST_UPDATE_DATE. In the former case, LAST_UPDATE_DATE could be a few seconds or minutes before the time that the transaction was actually committed and thus visible to the export process. Making sure that you don't miss an update in that scenario is tough and requires setting an absolute limit on the length of a transaction which tends to be difficult to do.
    I would strongly prefer using an actual replication technology. Streams, Change Data Capture (CDC), even materialized views provide much more robust ways to track changes to rows and replicate the changes to the destination.
    Justin
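The staging-table approach Justin describes could be sketched like this. The table and column names are assumptions based on the question (the WHERE clause itself would go on the export, e.g. the QUERY parameter with Data Pump):

```sql
-- After restricting the export to the last 6 days and importing the
-- result into a staging table (stg_products_dim), merge it into the
-- real dimension table. All names here are placeholders.
merge into inf_products_dim d
using stg_products_dim s
on (d.inventory_item_id = s.inventory_item_id)
when matched then
  update set d.item_name        = s.item_name,
             d.last_update_date = s.last_update_date
when not matched then
  insert (d.inventory_item_id, d.item_name, d.last_update_date)
  values (s.inventory_item_id, s.item_name, s.last_update_date);
```

As Justin notes, this works but is fragile around long-running transactions; a real replication technology avoids hand-rolling this logic.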

  • Detect updated records

    Hi all,
    is there a way to find all the updated record of a table?
    (I don't have any field on the table that can help me such as update_date or similar).
    Thanks.

    I don't think we're disagreeing, just want to make sure I'm following
    rp0428 wrote:
    >
    If there was a timestamp on the row, your refresh process would need to know the timestamp that the destination was current as of and would need to pull the changes since that timestamp.
    >
Unfortunately that approach has a big, gaping hole which can render it unreliable.
True. But using a timestamp to do the replication was never a suggestion here. Dan correctly pointed out that if you wanted to convert the SCN to a timestamp, there are limits on the accuracy of the conversion and the time frame during which that conversion is possible. I was simply emphasizing that there was no need to do that conversion in the first place unless you needed to know when a row changed, rather than merely that it had changed, and I compared how you would identify changed data using ORA_ROWSCN to how you might do it if there were a timestamp that could be used.
    The value of the timestamp is typically set by using SYSDATE or SYSTIMESTAMP but the record containing that value isn't 'current' until the record is committed.
    So if a query/process begins before midnight tonight but is not commited until after midnight (e.g. 1am tomorrow) the timestamp will have today's value but will NOT get pulled by a query after midnight but before 1am and processed in tonight's batch process.
    That data will also not get processed in tomorrow's batch process because the timestamp makes it appear as if it has already been processed.
Timestamps only work reliably when it is known that the above use case cannot happen.
Or if you define a maximum transaction length and pull data since last_extract_time - maximum_transaction_length. Obviously not perfect, which is why there are tons of technologies that allow you to replicate changes (CDC, Streams, materialized views, etc.) rather than rolling your own solution. Using the SCN would, of course, be preferable from a correctness standpoint in the extremely rare case that there is a need to roll your own change data capture process.
    Justin
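The ORA_ROWSCN approach Justin refers to can be sketched as follows. This is a minimal illustration: my_table and the bind variable are placeholders, and note that ROWDEPENDENCIES must be specified at table creation to get per-row (rather than per-block) SCN accuracy:

```sql
-- Create the table with row-level SCN tracking; without ROWDEPENDENCIES
-- the SCN is tracked per block, which over-reports changed rows.
create table my_table (
  id  number primary key,
  val varchar2(100)
) rowdependencies;

-- Pull every row changed since the SCN the destination was current as of.
select id, val, ora_rowscn
from   my_table
where  ora_rowscn > :last_extract_scn;

-- Current SCN, to record as the new high-water mark for the next pull.
select current_scn from v$database;
```

One limitation worth noting: ORA_ROWSCN cannot see deleted rows, which is another reason a real replication technology is usually preferable.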

Unable to View Data in the Table of a CDC-Enabled Data Store

Hi all,
I am trying to import some tables from external metadata into my CDC-enabled datastore, but after importing the tables I am unable to view the data in them,
whereas the same table in a normal datastore (without CDC) shows data in the Designer.
Can anyone help me out?
Is there any setting we need to change in order to see the data in a CDC-enabled datastore?

Any help?

  • CDC feature for snapshots and views

    Hi,
Can we use the Oracle Change Data Capture (CDC) feature, with Oracle's publish-and-subscribe packages, for views and snapshots in Oracle 10g? Or is it only for tables?

This is a question for Oracle Support, as DI just uses the feature. And they will tell you that it is for tables only.

  • Create  cdc on views

    Hi Experts,
Can I know how to create CDC on views?
    Thanks,

Hi - I don't believe it's possible, as the underlying objects deployed during CDC are not applicable to views (triggers, Streams, GoldenGate, etc.).
You might be able to use the JKM Last Updated Date approach.
I suggest you first decide what type of CDC you want to use (i.e. choose a JKM), then view the Operator steps when starting the journal to get a feel for what is happening.
For example, if you choose the Streams JKM: you need primary keys defined on the source table, conditional log groups are created, etc. It's quite a thorough implementation (all hidden by ODI usually).
Good luck - you might want to identify the source tables in your view and set up CDC on those tables, then create your own view on the target (replicated) tables.
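The workaround in that last paragraph amounts to this (a sketch only; every table and view name below is a placeholder):

```sql
-- Journalize and replicate the base tables with CDC as usual, then
-- rebuild the view on top of the replicated copies on the target.
-- All names are placeholders.
create or replace view dw.vw_party_emp as
select p.party_id, p.party_name, e.emp_no, e.hire_date
from   dw.parties_rep   p
join   dw.employees_rep e on e.party_id = p.party_id;
```

This keeps CDC on objects it actually supports (tables) while still giving consumers the same view-shaped interface.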

  • Can't view changed data in journal data

    Hi,
    I have implemented JKM Oracle 10g Consistent Logminer on Oracle 10g with the following option.
    - Asynchronous_mode : yes
    - Auto_configuration : yes
    1. Change Data Capture -> Add to CDC, 2.Subscriber->subscribe (sunopsis),
    3. Start Journal
The journal started correctly without errors. The journalized table always shows the "green clock" symbol. Everything appears to be working.
Then I inserted one record into the source table, but I can't view the changed data in the journal data. I can't understand why no journal data was generated.
There are no errors.
    Help me !!!

Was your Designer in the right context?
Look at the list box at the top right of the Designer interface.
It must be set to the same context as the one where you defined your journalization.

  • Replace Materialized View with Flashback?

    I'm building a Data Warehouse with the following:
    1. Tables populated throughout the day using CDC from the Application DB
    2. MVs on those tables to keep a daily snapshot of the tables for reporting.
    3. End users access the data through views with VPD applied, on the MVs
    My systems team would like the solution to use as little storage as possible and currently I effectively have a copy of the app DB in the DW tables and would need another copy in the Daily MVs. (It is an insurance DB, so it is complex with lots of data, > 1.5 TB)
    One way to reduce the storage could be to use flashback to keep a static daily version of the tables, so
    At midnight I'd recreate the views like:
    CREATE OR REPLACE VIEW client
    AS SELECT *
       FROM   client_tab
   AS OF TIMESTAMP (TO_TIMESTAMP(TRUNC(SYSDATE)));
This would replace my refresh MV script. The end users would then refer to the client view in their reports.
We would obviously need enough undo to store a day's worth of data to ensure the flashback views remain consistent, but this is much less than the space required for a full copy. On a busy day there would be about 1% data change.
    No DDL will occur on the tables during the day
    Is there anything else I should be aware of? Can you let me know if (and why) this would not be a good idea?
    This will run on Oracle 11.2.0.1
    Thanks,
    Ben

    I guess I'm having some trouble visualizing the basic data model...
    In most data warehouses that I've seen in the financial industry, reporting the position/ balance/ etc. at a given date involves scanning a single daily partition of each fact table involved and then hitting dimension tables that may or may not be partitioned (slowly changing dimensions would often have effective and expiration date columns to store the range of time a row was valid for, for example). Year-over-year reporting, then, just has to scan two fact table partitions-- the one for today and the one for a year ago. You may not store every intermediate change if there are potentially hundreds of transactions per account per day, but you'd generally put the end state for a given day in a single partition.
    In one of your updates, it sounded like the 1.5 TB of data was just for the data that constituted end-of-day yesterday plus the 1% of changes made today which would imply that there was at least 15 GB of UNDO generated every day that would need to be applied to make flashback query work. That quantity of UNDO would make me pretty concerned from a performance perspective.
    I would also tend to wager that VPD policies applied to views that are doing flashback query would be problematic. I haven't tried it and haven't really worked through all the testing scenarios in my mind, but I would be somewhat surprised if that didn't introduce some sort of hurdle that you'd have to work through/ work around.
    Justin
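For reference, the VPD piece Justin is worried about would be attached roughly like this. It is a sketch only: the schema, view, function, and predicate are placeholders, and whether the policy behaves correctly on top of a flashback query is exactly what would need testing:

```sql
-- Hypothetical VPD policy function: restrict each session to its own region.
create or replace function client_vpd_fn (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2 as
begin
  return 'region_id = sys_context(''userenv'', ''client_identifier'')';
end;
/

-- Attach the policy to the flashback view (names are placeholders).
begin
  dbms_rls.add_policy(
    object_schema   => 'DW',
    object_name     => 'CLIENT',
    policy_name     => 'CLIENT_REGION_POLICY',
    function_schema => 'DW',
    policy_function => 'CLIENT_VPD_FN',
    statement_types => 'SELECT');
end;
/
```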

  • CDC synchronization - how to make it perform

    For synchronization between a transactional database and a data warehouse we've been using synchronous CDC. Performance is still somewhat an issue.
    The situation is as follows (simplified):
EMP table. This is a large table (40 million+ records) with a lot of mutations. The CDC log table contains about 3 million entries (records) each day, which are used to update the data warehouse. About 50% of the changes consist of updates.
The data of the describer view looks something like:
    operation$;empno;name
    I;1;John
    UU;1;John
    UN;2;John
Since PK updates are allowed, all data manipulations need to be propagated to the data warehouse (to another EMP-like table). At the moment this is done via an old-fashioned FOR loop.
Since PK updates are allowed (and an additional functional identifier is not an option at this point in time), the mutations need to be processed sequentially. Is there a faster 'bulk' option to do just this?

    Maybe you can try something like this:
    MERGE INTO emp e
       USING (SELECT   cep1.operation$, cep1.empno, cep1.NAME,
                       TO_NUMBER (NULL) chg_empno, TO_CHAR (NULL) chg_name
                  FROM cdc_v_emp cep1
                 WHERE cep1.operation$ = 'I'
              UNION ALL
              SELECT   cep2.operation$, cep1.empno, cep1.NAME, cep2.empno,
                       cep2.NAME
                  FROM cdc_v_emp cep1, cdc_v_emp cep2
                 WHERE cep1.cscn$ = cep2.cscn$
                   AND cep1.rsid$ = cep2.rsid$
                   AND cep1.operation$ = 'UU'
                   AND cep2.operation$ = 'UN'
              ORDER BY cep1.cscn$, cep1.rsid$) c
       ON (e.empno = c.empno)
       WHEN MATCHED THEN
          UPDATE
             SET e.empno = c.chg_empno, e.NAME = c.chg_name
       WHEN NOT MATCHED THEN
          INSERT (e.empno, e.NAME)
       VALUES (c.empno, c.NAME);
:)
PS: Note that I changed the order (and name) of some columns in the "USING" query.

  • How to ignore SDO_GEOMETRY but capture with CDC

    Hi,
    Is it possible to set up a table with an SDO_GEOMETRY for CDC, ignoring the SDO_GEOMETRY columns but capturing the remaining data?
I'm using ODI to deploy the CDC and the underlying apply + capture processes. I've tried removing the column in question from the ODI metadata, so for all intents and purposes it is ignored in any generated code (supplemental log groups etc.), but my DBA_CAPTURE view is, not surprisingly, showing the following:
"ORA-26744: STREAMS capture process "CDC$C_EBIZ_AR" does not support "DGDW_TEST"."HZ_LOCATIONS" because of the following reason:
ORA-26783: Column data type not supported"
    So can I somehow ignore the problem column but capture the rest ?
    Thanks in advance,
    Alastair

    Hi,
First check whether the given object is supported by Streams by querying DBA_STREAMS_UNSUPPORTED.
If it is supported, then we can set a negative rule to avoid the problematic column.
    Thanks and Regards,
    Satish.G.S
    http://gssdba.wordpress.com
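The first check could look like this; the owner and table name come from the error message above:

```sql
-- Is the table unsupported by Streams, and if so, why?
select table_name, reason, auto_filtered
from   dba_streams_unsupported
where  owner = 'DGDW_TEST'
and    table_name = 'HZ_LOCATIONS';
```

If only the one column is the problem, it may also be worth looking at declarative rule-based transformations (e.g. DBMS_STREAMS_ADM.DELETE_COLUMN) to strip that column from captured LCRs; check the documentation for your release before relying on it.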

  • Fast refreshable mviews vs. cdc

I'm currently working on a new data warehouse environment. I need to create an ODS schema for each of the operational systems in my warehouse.
I've looked at two of 10gR2's preferred technologies for this task:
fast-refreshable materialized views and Change Data Capture.
I would like to know what the difference is between these two approaches.
In my understanding, they both cause a performance overhead and require some amount of additional work on each DML operation. In addition, they both work by capturing changes from a table and applying them to a target database.
The only difference I could think of is that CDC captures changes from the redo log, so that commit time on the operational system won't be affected (as much as it would be when logging a DML operation in an mview log).

    dba_snapshot_refresh_times or dba_mview_refresh_times would help.
select job, last_date last_refresh, next_date next_refresh, total_time, what
from dba_jobs
where what like '%dbms_refresh%';
This will work only if the refresh is scheduled to run through the job queue.
