OdiWaitForLogData with CDC

Hi,
When I try to use OdiWaitForLogData and loop back to start the scenario again (with OdiStartScen), it ends up in an infinite loop.
Here are the parameters I defined on OdiWaitForLogData:
Context : Global
Global Row Count : 1
Logical Schema : ORCL_VM
Optimized Wait : AUTO
Polling Interval : 10000
Subscriber : SUNOPSIS
Table Name : []
CDC Set : ODI.ORADB_TEST
Timeout : 0
Timeout without Errors : Yes
Unit Row Count :
I want it to wait for one changed row, process that row to the target table, and loop back to check again.
But the result was that the scenario never stops executing (it does not even wait at the OdiWaitForLogData step).
How do I use it correctly to get this behaviour?
Thanks

Hi Som,
Can you explain this in detail please?
I am not using the CDC Set parameter, but I am facing the same problem when I specify the correct table name along with the subscriber name.
These are the parameters I use for OdiWaitForLogData:
Context:     HNM_CTX
Global Row Count:     1
Logical Schema:     HLS_POC
Optimized Wait:     AUTO
Polling Interval:     1000
Subscriber:     SUNOPSYS
Table Name:     [VTILSIM]
CDC Set:     
Timeout:     0
Timeout without Errors:     YES
Unit Row Count:
Thanks & Regards
Naveen

Similar Messages

  • How to start with CDC?

    Hi, friends:
    I have read a lot of online material about CDC, but I'm still confused.
    I know that I can download WTK 2.2 or higher to develop apps based on CLDC and MIDP.
    My question is how to start with CDC. For example, if I use JBuilder, what should be downloaded and installed? And what about Eclipse?
    Some online learning resources would also be welcome!
    Thanks in advance!

    See
    http://home.elka.pw.edu.pl/~pboetzel/
    for a HOWTO on running an SWT application on a PocketPC (emulator). I used CDC 1.1 with Personal Profile 1.1. You will find exact instructions on what to download and install to write and run CDC applications, as well as links to other guides and tutorials.

  • Issue with CDC and Replication enabled

    Hello,
    We have a strange issue with CDC and replication. Let me explain:
    1. We have a database on the write server and we replicate some tables to the read server. There are 15 tables that we replicate, and 8 of them have computed columns that are persisted.
    2. We also have CDC enabled on the same database where transactional replication is enabled. I know that both CDC and replication use the replication log reader. Somehow, log_reuse_wait always reports REPLICATION.
    3. If I add around 100-200 MB into these tables, with the persisted columns it comes to around 500 MB of data. But replication is queuing up 10-15 GB of data.
    4. I checked the CDC tables, and the updates are in them. Also, I don't see a CDC capture job. Is this because replication is already enabled?
    What might be causing the log to be held for such long extended periods of time? We don't see any issue with the log reader or CDC.

    2. log_reuse_wait will show REPLICATION for both CDC and replication.
    4. Yes; since you are using transactional replication, a Log Reader Agent is created for the database and the capture job won't exist.
    When the Log Reader Agent is used for both change data capture and transactional replication, replicated changes are first written to the distribution database.
    Then, captured changes are written to the change tables. Both operations are committed together. If there is any latency in writing to the distribution database, there will be a corresponding latency before changes appear in the change tables.
    https://msdn.microsoft.com/en-us/library/cc645938.aspx?f=255&MSPPError=-2147217396
    Since, as you said, the CDC updates are in the cdc tables, I don't see any issue there.
    You could run DBCC OPENTRAN to see the oldest active transaction; it will give you more info.
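    For reference, a minimal T-SQL sketch of those two checks, assuming a hypothetical database name MyDb:
    -- What is holding up log truncation; REPLICATION here covers both
    -- transactional replication and CDC, as noted above.
    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = 'MyDb';
    -- Report the oldest active transaction and the oldest
    -- non-distributed replicated transaction in the database.
    DBCC OPENTRAN ('MyDb');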

  • Problem with CDC capturing changes to LOB datatypes

    Greetings all,
    I've recently set up a CDC procedure (Asynchronous HotLog mode) to populate a data mart with several source database tables (Oracle Enterprise 11gR2 to 11gR1). When inserting new rows, all of the defined columns, including the LOB columns, are captured and populated successfully. However, if a row is updated, the CDC stream does not carry over the LOB data. Looking at the populated staging tables, the LOB columns have been nulled out (in both the UO row and the UN row). All other defined columns were captured correctly.
    Is there some known issue with CDC and LOB data types in 11g? I read some KB tips about CDC and LOBs in 10g, but it looks like all of that was resolved in 11g.
    Any ideas why the LOBs aren't being updated?
    Thanks,
    M/R

    I'm not aware of any. I would recommend opening an SR with Oracle and please post the resolution here so we can all learn from your experience.

  • How to ignore SDO_GEOMETRY but capture with CDC

    Hi,
    Is it possible to set up a table that has an SDO_GEOMETRY column for CDC, ignoring the SDO_GEOMETRY columns but capturing the remaining data?
    I'm using ODI to deploy the CDC and the underlying apply and capture processes. I've tried removing the column in question from the ODI metadata, so for all intents and purposes it is ignored in any generated code (supplemental log groups etc.), but my DBA_CAPTURE view is, not surprisingly, showing the following:
    ORA-26744: STREAMS capture process "CDC$C_EBIZ_AR" does not support "DGDW_TEST"."HZ_LOCATIONS" because of the following reason:
    ORA-26783: Column data type not supported
    So can I somehow ignore the problem column but capture the rest?
    Thanks in advance,
    Alastair

    Hi,
    First check whether the given object is supported by Streams or not by querying DBA_STREAMS_UNSUPPORTED.
    If it is supported, you can set a negative rule to avoid the problematic column.
    Thanks and Regards,
    Satish.G.S
    http://gssdba.wordpress.com
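    To illustrate the first check, a minimal sketch of the query, reusing the owner and table name from the ORA-26744 message above:
    -- Lists tables that Streams cannot capture, and why.
    SELECT owner, table_name, reason, auto_filtered
    FROM   dba_streams_unsupported
    WHERE  owner = 'DGDW_TEST'
    AND    table_name = 'HZ_LOCATIONS';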

  • Problem with CDC using SnpsWaitForLogData

    Hi,
    I have implemented CDC in ODI, with Oracle 10g as both source and target, using the following steps:
    1) OdiWaitForLogData
    2) Extend window
    3) Lock subscribers (sunopsis)
    4) Interface using the journalized data (JRN_SUBSCRIBER = 'sunopsis')
    5) Unlock subscribers (sunopsis)
    6) Purge journal
    7) OdiStartScen
    After starting the above package, I inserted 100 rows into the source table.
    In the Operator window, a total of 10 package executions completed without errors.
    But only the last execution transferred the changed data to the target datastore;
    the others executed but didn't transfer any data.
    I can't understand why 10 executions ran instead of one.
    The interface takes about 35-40 seconds to load the data into the target.
    Thanks and Regards,
    Han

    Hi,
    To try to help you:
    Are all the interfaces in journalized mode?
    How many target tables are there, 10?
    Is the source table that received the new rows the source of all the interfaces?
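    For context on step 4 of the package above: the journalizing filter typically amounts to a simple predicate on the J$ journal table. A minimal sketch, assuming ODI's default J$/JRN_* naming and a hypothetical source table ORDERS:
    -- Changes captured for this subscriber; JRN_FLAG is 'I' for
    -- inserts/updates and 'D' for deletes in ODI journal tables.
    SELECT jrn_flag, jrn_date, order_id
    FROM   j$orders
    WHERE  jrn_subscriber = 'sunopsis';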

  • J2ME - database connection with CDC

    How do I connect to a database using CDC in J2ME?

    I've typed in the IP address, such as 00.00.000.000.
    I've typed in the IP address with the username: 00.00.000.000/~example
    I get "Errors encountered when connecting to your database"
    "An unidentified error has occurred"
    "Click the 'Define...' button and correct the connection issues before continuing."

  • Recover databases synchronized with CDC

    Hi,
    I need to define a mechanism with RMAN to back up two databases where information is copied from one to the other using Oracle Data Integrator CDC.
    How can I back up and recover both databases so that they stay in sync?
    Thanks
    Homero

    Hi Doug,
    I think I figured out what causes the error here.
    You are using a parallel medium for backup and restore.
    When you start the recovery, e.g. via db_activate recover <MEDIUM NAME>, you get a reply from the dbm server as soon as the first data stream has been completely read.
    And since (obviously) not all of the parallel data streams (could be files, could be pipes) can finish at the exact same point in time, there will always be pages left to be read when the first stream is done.
    That's exactly what the -8020 message tells you here.
    It says "Hey, the first data stream is done and there is still more to be read."
    But you already started the other streams, since they are part of the parallel medium.
    So there is no need at all to manually add them with recover_replace.
    Instead, all you have to do is wait until the rest of the data is processed.
    To tell the dbm server this, the command recover_ignore (http://maxdb.sap.com/doc/7_6/30/f7c7e35be311d4aa1500a0c9430730/content.htm) is used.
    Even more comfortable would be to use the AUTOIGNORE flag when starting the recovery.
    Finally, the best option would be not to perform the recovery via DBMCLI at all.
    Use the GUI tools. They have all the knowledge built in, they are simple and easy to use, and they give you all the options that you have with DBMCLI.
    regards,
    Lars

  • RMI with CDC + foundation profile

    I'm trying to get RMI working with Java ME on Linux. I built the CDC with the Foundation Profile and installed that on the target device (which has an i486-class processor), and set LD_LIBRARY_PATH to include the path that my native library is in.
    If I run the standard java VM (1.5), it finds the library and runs my test app. However, if I run it with cvm, I get the following error:
    java.lang.UnsatisfiedLinkError: no adm1026 in java.library.path
    at java.lang.ClassLoader.loadLibraryInternal(Ljava/lang/Class;Ljava/lang/String;ZZ)Ljava/lang/Object;(ClassLoader.java:1465)
    at java.lang.ClassLoader.loadLibrary(Ljava/lang/Class;Ljava/lang/String;Z)V(ClassLoader.java:1349)
    at java.lang.Runtime.loadLibrary0(Ljava/lang/Class;Ljava/lang/String;)V(Runtime.java:814)
    at java.lang.System.loadLibrary(Ljava/lang/String;)V(System.java:854)
    at com.terascala.manager.server.EnclosureSensorReader.<clinit>()V(EnclosureSensorReader.java:71)
    at java.lang.Class.runStaticInitializers()V(Class.java:1523)
    at java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;(Method.java:314)
    at sun.misc.CVM.runMain()V(CVM.java:259)
    I've also tried explicitly setting java.library.path, and it makes no difference. Is there some change I have to make to compile my library so that CVM can load it? I know that Linux will often say it "can't find" a library when it really means that the library isn't compatible with the loader, so I assume something like that is going on here.
    Thanks in advance.

    Hi.
    Some iPAQs are delivered with the Jeode PersonalJava VM. It can also run applets. This is a somewhat "old" Java but still "state of the art" for WinCE/PPC.
    For the more up-to-date J2ME you need a proper VM. CLDC is for devices with restricted resources, like mobile phones; CDC is for devices with more resources, e.g. PDAs and set-top boxes. For these devices and J2ME/CDC I only know of the beta of NSIcom's CrE-ME VM, available as a 30-day trial at www.nsicom.com.
    Yours
    Michael

  • Setting Dialog's size with CDC-pp

    Hi
    I'm trying to adapt an AWT version of JOptionPane so it can run correctly on an iPAQ (running Linux Familiar 0.7.2, if you want to know everything), using J2ME CDC Personal Profile.
    On the PC, the class does what it's supposed to do (show a message/confirm/dialog box with the correct size). But when I run the class on my iPAQ, the dialog box is so small I can't see it. When I use the setSize() method I can see something, but it's not the best solution, since the size is not adapted to the content of the dialog box.
    Has somebody managed to resize a component to its preferred size on the iPAQ (pack() and getPreferredSize() don't seem to work, but maybe I haven't used them correctly)?
    thanks

    Actually, calling setResizable(false/true) before setting the dimensions causes this behaviour.
    Originally, I called setResizable(false); and a few lines later pack(); or setSize();.
    You must do it the other way round: first call pack(); or setSize();, then setResizable(false);.

  • Initial load with CDC

    I am now able to replicate changes made on the source table and make the changes visible on the staging site. Now I have two questions:
    1.) How do I manage the initial load of the source table?
    The table object gets replicated as an empty table; after the first change on the source table the changes are replicated, but not the whole contents of the table.
    2.) How do I manage to apply the changes to the target table?
    I have the changes in the change view, but with (a) all the additional columns ending with a $ sign and (b) only the changed values, if the modification was an update operation.
    rueisel

    Hi Rueisel,
    1. Before starting replication, initialize your destination tables with Data Pump. You can create a script based on impdp/expdp or write a PL/SQL package around the DataPump API (not very difficult, and well documented).
    2. To propagate the changes from the change tables to the destination tables, you have to write your own solution. The main principle is to create a job that reads the change tables and updates the destination tables (use MERGE statements). The job can be stored on the destination database or the staging one. A database link must be created between the destination and staging databases.
    Warning: if you have to propagate CLOB/BLOB objects, you have to create a specific solution, because you cannot access them through a database link.
    I hope this helps,
    Cyryl
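    As an illustration of point 2, a minimal MERGE sketch. The change view CHG_ORDERS, its columns, the target table ORDERS_TGT and the database link STG_LINK are all hypothetical names; OPERATION$ is one of the $-suffixed CDC control columns mentioned above:
    MERGE INTO orders_tgt t
    USING (
        SELECT order_id, amount, operation$
        FROM   chg_orders@stg_link
        -- keep inserts, post-update images and deletes; skip 'UO' pre-images
        WHERE  operation$ IN ('I', 'UN', 'D')
    ) c
    ON (t.order_id = c.order_id)
    WHEN MATCHED THEN
        UPDATE SET t.amount = c.amount
        DELETE WHERE (c.operation$ = 'D')
    WHEN NOT MATCHED THEN
        INSERT (order_id, amount)
        VALUES (c.order_id, c.amount)
        WHERE  (c.operation$ <> 'D');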

  • Which should I use, PersonalJava, J2ME with CDC or J2ME with CLDC?

    I want to write a program which can run on a Pocket PC (Compaq iPAQ),
    and I need support for the double type.
    What should I do?
    Who can help me, please? Thanks!

    It depends on which runtime (VM) you are using and what you want your program to do.
    I don't think the iPAQ has native support for floating-point arithmetic, so any implementation will do all the calculations in software anyway.

  • Problems with CDC/Foundation Profile making on Linux, help me please!!!

    I am new to J2ME. My OS is Red Hat 9.0 and my build tool is gcc 3.4.3. After downloading cdcfoundation-1_0_1-fcs for Linux, I unzipped it to /home/cdcfoundation/. Under the directory build/linux/, I typed the following command:
    make CVM_JAVABIN=/usr/java/jdk1.5.0/bin CVM_DEBUG=true J2ME_CLASSLIB=foundation
    But it failed to build; the error is:
    make: *** No targets specified and no makefile found. Stop.
    I have tried many times; I hope someone can help me!
    I am waiting...

    You need ksh (the Korn shell) in order to compile, because the build relies on a shell option (if I remember correctly, -h) which is not present in other shells like bash.
    So download it from http://www.kornshell.com and make it available in a folder on your PATH.
    Hope this helps.

  • Problems with CDC/Foundation Profile making on Linux

    I am new to J2ME. My OS is Red Hat 9.0 and my build tool is gcc 3.4.3. After downloading cdcfoundation-1_0_1-fcs for Linux, I unzipped it to /home/cdcfoundation/. Under the directory build/linux-i386/, I typed the following command:
    make CVM_JAVABIN=/usr/java/jdk1.5.0/bin CVM_DEBUG=true J2ME_CLASSLIB=foundation CVM_GNU_TOOLS_PATH=/usr/bin
    But it failed to build; the error is:
    make: ksh: Command not found
    make: ksh: Command not found
    ../share/rules.mk:235: ../../build/linux-i686/generated/empty.mk: No such file or directory
    make: ksh: Command not found
    make: *** [../../build/linux-i686/generated/javavm/runtime] Error 127
    I have tried many times; I hope someone can help me!
    I am waiting...

    You need ksh (the Korn shell) in order to compile, because the build relies on a shell option (if I remember correctly, -h) which is not present in other shells like bash.
    So download it from http://www.kornshell.com and make it available in a folder on your PATH.
    Hope this helps.

  • CDC - deletion of parent-child records

    Hi,
    I am working with the CDC-Consistent feature in ODI.
    My scenario is: I have a record, say 120 (the primary key), in table A (the parent source table), and it is used as a foreign key in table B.
    Both child and parent are inserted into the corresponding target tables.
    Now I want to delete this record 120 from the target parent and child tables.
    In the package I arranged the scenarios as follows:
    OdiWaitForLogData -> source model (with the Extend Window and Lock Subscriber options selected) -> parent pkg scenario -> child pkg scenario -> source model (with the Unlock Subscriber and Purge Journal options). This works fine for insert and update.
    OdiWaitForLogData -> source model (with the Extend Window and Lock Subscriber options selected) -> child pkg scenario -> parent pkg scenario -> source model (with the Unlock Subscriber and Purge Journal options). This works fine for delete.
    Can't I achieve both of these in one package?
    Please Guide.
    Regards,
    Chaitanya.

    Hi,
    kev374 wrote:
    Thanks, one question...
    I did a test and it seems the child rows also have to satisfy the parent row's WHERE clause. Take this example:
    EVENT_ID | PARENT_EVENT_ID | CREATED_DATE
    2438     | (null)          | April 9 2013
    2439     | 2438            | April 11 2013
    2440     | 2438            | April 11 2013
    select * from EVENTS where CREATED_DATE < sysdate - 9
    start with EVENT_ID = 2438
    connect by PARENT_EVENT_ID = prior EVENT_ID
    So you've changed the condition about only wanting roots and their children; now you want descendants at all levels.
    kev374 wrote:
    This pulls in record #2438 (per the sysdate - 9 condition), but 2439 and 2440 are not connected. Is there a way to suppress the WHERE clause evaluation for the child records? I just want to pull ALL child records associated with the parent and only want to do the date check on the parent.
    Since the roots (the only rows you want to exclude) have LEVEL = 1, you can get the results you requested like this:
    WHERE   created_date  < SYSDATE - 9
    OR      LEVEL         > 1
    However, since you're not ruling out the grandchildren and great-grandchildren any more, why wouldn't you just say:
    SELECT  *
    FROM    events
    WHERE   created_date     < SYSDATE - 9
    OR      parent_event_id  IS NOT NULL;
    CONNECT BY is slow. Don't use it if you don't need it.
    kev374 wrote:
    If you cross-reference my original query:
    select * from EVENTS where CREATED_DATE < sysdate - 90 and PARENT_EVENT_ID is null -- all parents
    union
    select * from EVENTS where PARENT_EVENT_ID in (select EVENT_ID from EVENTS where CREATED_DATE < sysdate - 90 and PARENT_EVENT_ID is null) -- include any children of the parents selected above
    the 2nd select does not apply created_date < sysdate - 90 to the children but rather pulls in all related children :)
    Sorry; my mistake. That's what happens when you don't post sample data and desired results; people can't test their solutions and find mistakes like that.
