Oracle Streams - First Load

Hi,
I have an Oracle Streams environment working well. I replicate and transform the data.
My problem is:
Initially I have a source database with 3 million records and a destination database with no records.
I have to equalize the source and destination databases before starting to synchronize them.
Do you know how I can replicate (and transform) this data for my first load?
It's not just copying all the data; it's copying and transforming it.
Is it possible to use the same transformation process for this first load?
If I didn't need to transform the data I would use a tool such as Data Pump. But I have to transform the data for my destination database.
Thanks
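
One approach, as a hedged sketch only (the schema, table, and db-link names are hypothetical, and it assumes the source is quiesced while the SCN is taken): capture the source SCN, load the transformed rows with plain SQL, then set the instantiation SCN so the apply process only applies changes made after that point. Note that the transformation has to be restated in the load SQL; Streams transformation handlers fire for captured LCRs, not for a bulk load.
-- run at the destination, over a database link SRC, per replicated table
DECLARE
  v_scn NUMBER;
BEGIN
  -- consistent point on the source
  v_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@SRC;
  -- first load, repeating the transformation the Streams handler applies
  INSERT INTO dest_schema.customers (id, full_name)
    SELECT id, first_name || ' ' || last_name
    FROM src_schema.customers@SRC;
  -- the apply process discards changes with SCN <= the instantiation SCN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'src_schema.customers',
    source_database_name => 'SRC',
    instantiation_scn    => v_scn);
  COMMIT;
END;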

I am in DAC and trying to run the Informatica ETL for one of the prebuilt execution plans (HR - Oracle R12). I built the project plan and ran it. I got a Failed status for all of the ETL steps (starting from the first one, 'Load Row into Run table'). I have attached the error log for Load Row into Run table below.
I took a closer look at all the steps, and it seems they all have this common "fail parent if this task fails" error message.
Error log for:
pmcmd startworkflow -u Administrator -p **** -s SOBI:4006 -f SILOS -lpf C:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
Status Desc : Failed
WorkFlowMessage :
=====================================
STD OUTPUT
=====================================
Informatica(r) PMCMD, version [8.1.1 SP5], build [186.0822], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2008
All Rights Reserved.
Invoked at Thu May 07 14:46:04 2009
Connected to Integration Service at [SOBI:4006]
Folder: [SILOS]
Workflow: [SIL_InsertRowInRunTable] version [1].
Workflow run status: [Failed]
Workflow run error code: [36331]
Workflow run error message: [WARNING: Session task instance [SIL_InsertRowInRunTable] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_InsertRowInRunTable] will be failed.]
Start time: [Thu May 07 14:45:43 2009]
End time: [Thu May 07 14:45:47 2009]
Workflow log file: [C:\Informatica\PowerCenter8.1.1\server\infa_shared\WorkflowLogs\SIL_InsertRowInRunTable.log]
Workflow run type: [User request]
Run workflow as user: [Administrator]
Integration Service: [Oracle_BI_DW_Base_Integration_Service]
Disconnecting from Integration Service
Completed at Thu May 07 14:46:04 2009
=====================================
ERROR OUTPUT
=====================================
Error Message : Unknown reason for error code 36331
ErrorCode : 36331
If you have any input on how to fix this issue, please let me know.

Similar Messages

  • Oracle Streams versus Oracle GoldenGate

    Hi all,
    I just found out about Oracle GoldenGate and was wondering if any of you could share the differences between it and Oracle Streams when it comes to change data capture capabilities? Also, how does OWB come into play with Oracle GoldenGate? For instance, OWB 11gR2 has CDC capabilities, so does that mean its CDC capabilities are based on Oracle Streams?

    Hi,
    With CDC/Streams you have two choices:
    process the Oracle logfiles in the source database/server and read the resulting change records from the target database/server, or
    transport the logfiles to the target database/server and process them there.
    The advantage of the latter case is that you relieve the source from the load of processing the logfiles, but target and source then need to have the same database and server versions. GoldenGate, if I understand correctly, converts the logfiles to its own format (with minimal load), and these can be processed by GoldenGate on a target database and server of a different version from the source.
    So you have the advantage (little load on the source) without the disadvantage (source and target having to be of equal versions).
    Regards,
    Jaap.
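
    A rough sketch of the second choice (downstream capture), with hypothetical service and db names; use_database_link => true assumes a database link from the downstream database to the source named after the source's global name:
    -- on the source: ship redo to the downstream database
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=dwnstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
    -- on the downstream (target) database: a capture that mines the shipped logs
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name        => 'strmadmin.dst_queue',
        capture_name      => 'downstream_capture',
        source_database   => 'src.example.com',
        use_database_link => true);
    END;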

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in the http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf doc that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing which will send LCRs from a source queue to a destination queue. And as I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.
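
    For the basic queue-to-queue configuration described above, a minimal sketch (queue and db-link names hypothetical): an ANYDATA queue on each side plus a propagation between them:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.src_queue_table',
        queue_name  => 'strmadmin.src_queue');
      -- assumes strmadmin.dst_queue was set up the same way at the destination
      DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
        propagation_name   => 'src_to_dst_prop',
        source_queue       => 'strmadmin.src_queue',
        destination_queue  => 'strmadmin.dst_queue',
        destination_dblink => 'dst.example.com');
    END;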

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Let's start with the simplest possible case of this, two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/ reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box, how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc).
    Justin

  • Register Oracle Streams with OID

    I'm trying to set up an Oracle Streams environment that registers queues with OID. I have the db registered, and set global_topic_enabled=true. The DB is in archivelog mode. When I try to set up a queue with:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'STREAMS_QUEUE_TABLE',
        queue_name  => 'STREAMS_QUEUE',
        queue_user  => 'STRMADMIN');
    END;
    I get
    Error report:
    ORA-00600: internal error code, arguments: [kcbgtcr_5], [52583], [4], [0], [], [], [], []
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 739
    ORA-06512: at line 2
    00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
    *Cause:  This is the generic internal error number for Oracle program exceptions. This indicates that a process has encountered an exceptional condition.
    *Action: Report as a bug - the first argument is the internal error number
    Has anyone run into this? I searched metalink, but couldn't find anything. I'm running 10.2.0.1 on Windows 2K.
    Thanks in advance.

    I've never used Streams with OID, but check Bug 4996133 - OERI[kcbgtcr_5] updating an IOT in RAC environment.
    I would consider upgrading the db to 10.2.0.3 - it is the first really stable release of 10gR2.
    Regards,
    Serge

  • Can Oracle Streams Replicate from Oracle 8i?

    Oracle Streams looks very cool for loading our ODS. But do all of the databases it replicates from need to be on Oracle 9i as well, or only the receiving database?
    Thanks,
    Scott Uhrick
    Oxford Health Plans

    Oracle Streams is an Oracle9i Release 2 feature. The new Streams features are not available in earlier releases of the database.

  • Oracle Streams vs. Updatable Materialized View

    Does anyone have an idea in which cases Oracle Streams is better than an updatable MV, or vice versa?

    Are you really talking about Updatable Materialized Views? Or multi-master replication? Personally, I'm rather hard-pressed to come up with a situation where updatable materialized views would be useful unless you're taking the next step and doing multi-master replication.
    In general, Streams is going to put less load on the source system than materialized views and is going to replicate data more quickly. The downside tends to be that it's a relatively new technology, so it's not appropriate for environments that have older versions of Oracle. Going along with that, you'll find a lot more people/ organizations/ setups using materialized views than Streams, which can be a good thing if you need to hire new staff/ get support from a local user group/ etc. Streams also tends to be more flexible, which can be a good thing, but also tends to make things a bit more complicated.
    If you can outline the particular problem you're trying to solve, we can probably be a lot more specific...
    Justin
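
    For contrast, a minimal updatable materialized view sketch (table and db-link names hypothetical). Pushing local changes back to the master additionally requires registering the MV in an advanced replication group via DBMS_REPCAT, which is part of why updatable MVs usually go hand in hand with multi-master replication:
    -- at the master site:
    CREATE MATERIALIZED VIEW LOG ON scott.emp;
    -- at the materialized view site:
    CREATE MATERIALIZED VIEW scott.emp_mv
      REFRESH FAST
      FOR UPDATE
      AS SELECT * FROM scott.emp@master_db;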

  • Reconfigure Oracle Streams in case of server move from 192 to 191

    Hi All,
    We have a bidirectional Oracle Streams setup between two databases.
    Recently we moved our server from cin192 to cin191. After the server move we checked all the Streams processes.
    The capture process shows "Waiting for Dictionary Redo: First SCN XXXXXXXXX" on both databases.
    When I checked this SCN, it pointed to a cin192 archive log.
    Can you please help with how I can resolve this issue?
    Do we need to reconfigure Streams, or can we assign a new SCN to the capture process, without dropping anything, using the cin191 server's archive log files?
    In other words, how do we point the capture process at the new server's archive log files?
    Any help would be appreciated.
    It's urgent, please help.
    Thanks,
    Singh
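
    A hedged diagnostic sketch using the standard dictionary views (replace :scn with the SCN the capture reports); it shows which SCN the capture needs and which archived log covers it, i.e. whether the old cin192 logs are still required:
    SQL> select capture_name, state from v$streams_capture;
    SQL> select capture_name, first_scn, required_checkpoint_scn, status from dba_capture;
    SQL> select name, first_change#, next_change#
         from v$archived_log
         where :scn between first_change# and next_change#;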

    Hi Singh,
    If I knew what cin191 and cin192 are, I would probably be able to redirect you to the right forum.
    If you are looking for Oracle Streams, I suggest you try the Database - General forum: General Database Discussions
    If it is Oracle replication you are looking for, please check here: Replication
    This is the Berkeley DB High Availability forum ( http://www.oracle.com/technology/documentation/berkeley-db/db/ref/rep/intro.html )
    Bogdan

  • Does Oracle Streams really work?

    Hi everybody,
    I suggested to my client that he use Oracle Streams in his data integration project.
    But my client asked me to check whether this tool is really reliable... As it is very new, I'd like to hear from those who have already used it whether Oracle Streams really works, ok?
    I will be waiting for any reply...

    I would be interested in the same thing. I've been testing Streams for almost a month now and seem to be running into many problems. Today, for some unknown reason, supplemental logging was removed from my replicated tables, causing the capture process to fail. It seems to work when I replicate small amounts of data, but when I start using it for larger loads I get memory errors and processes seem to abort. The problems come mostly from trying to debug it. From my perspective it is very hard to see what's going on "under the hood". It seems you almost need to be an expert with Advanced Queuing, LogMiner, etc. in order to sort through Streams problems.

  • Prevent Queries When Page First Loads

    Hi,
    How do I prevent queries when the page first loads? In this case #{!adfFacesContext.initialRender}
    didn't work. I'm using JDeveloper 11.1.1.2.0.
    Please help me.
    Thanks
    Anup

    Hi Mohammad Jabr,
    I have also set the Refresh property, but it's not working.
    I have followed the link: https://blogs.oracle.com/shay/entry/preventing_queries_when_page_f
    I want to prevent the user from searching without selecting any criteria,
    but this is not working.
    Please suggest any other solution.
    Thanks
    Anup
    Edited by: 888679 on Mar 13, 2013 10:46 PM

  • Oracle Streams b/w MS-Access 2007 and Oracle 10g.

    Can we set up Oracle Streams between MS-Access 2007 and Oracle 10g, with MS-Access as the source and Oracle 10g as the destination database? If so, can anyone please give me a little heads-up with supporting docs or any other source of info?

    Help Help....!!!

  • Doubt in Oracle Streams

    I have a doubt about Oracle Streams. Can you please help me understand these terms:
    1. Message
    2. User-defined event
    3. Event
    4. Rules
    5. Oracle-supplied PL/SQL packages
    6. Subscriber, consumer

    Hi
    Message
    A message is the smallest unit of information that is inserted into and retrieved from a queue.
    Queue
    A queue is a repository for messages. Queues are stored in queue tables.
    Enqueue
    To place a message in a queue.
    Dequeue
    To consume a message.
    Agent
    An agent is an end user or application that uses a queue.
    Thanks
    Venkat
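
    To make the terms concrete, a hedged sketch (queue names hypothetical) that creates an ANYDATA queue, enqueues one message, and dequeues (consumes) it:
    DECLARE
      enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
      msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      msgid     RAW(16);
      payload   SYS.ANYDATA;
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'demo_qt',
        queue_payload_type => 'SYS.ANYDATA');
      DBMS_AQADM.CREATE_QUEUE(queue_name => 'demo_q', queue_table => 'demo_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'demo_q');
      -- enqueue: place a message in the queue
      DBMS_AQ.ENQUEUE(
        queue_name         => 'demo_q',
        enqueue_options    => enq_opts,
        message_properties => msg_props,
        payload            => SYS.ANYDATA.CONVERTVARCHAR2('hello'),
        msgid              => msgid);
      COMMIT;  -- by default messages become visible to consumers only after commit
      -- dequeue: consume the message
      deq_opts.wait := DBMS_AQ.NO_WAIT;
      DBMS_AQ.DEQUEUE(
        queue_name         => 'demo_q',
        dequeue_options    => deq_opts,
        message_properties => msg_props,
        payload            => payload,
        msgid              => msgid);
      COMMIT;
    END;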

  • SQL Query report region that only queries on first load

    Hello all,
    Is there any way to prevent a SQL Query report region from querying data after every refresh?
    I would like to make a report that queries on the first load; then I would like to change the individual values and reload to show the change, but every time I reload the page the columns are queried and the original values are displayed once again...
    any ideas?
    -Mux

    Chet,
    I created a header process to create the HTMLDB_COLLECTION. It is something like:
    HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
      p_collection_name => 'Course_Data',
      p_query => 'SELECT DISTINCT COURSE_ID, HTMLDB_ITEM.CHECKBOX(14,COURSE_ID) as "checker", TITLE, SUBJECT, COURSE_NUMB, SECTION, ENROLLED, null as "temp_term", null as "temp_title", null as "temp_crse_id", null as "temp_subj", null as "temp_crse_numb", null as "temp_sect" FROM DB_TBL_A, DB_TBL_B, DB_TBL_C, DB_TBL_D, DB_TBL_E, DB_TBL_F WHERE ...');
    The names were changed, for obvious reasons.
    I then created an SQL Report Region to see if it would work. The SQL is:
    SELECT c001, c002, c003
    FROM htmldb_collections
    WHERE collection_name = 'COURSE_DATA'
    When I run the page it says:
    ORA-20104: create_collection_from_query Error:ORA-20104: create_collection_from_query ExecErr:ORA-01008: not all variables bound
    Any idea why this is happening?
    I'm new to HTMLDB_COLLECTIONS, so I may be doing something wrong
    -Mux
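
    For what it's worth, ORA-01008 from CREATE_COLLECTION_FROM_QUERY usually means the query string contains bind references (e.g. :P1_TERM in the WHERE clause) that the collections API cannot bind. A hedged workaround sketch (the item name is hypothetical) is to concatenate session state in with the v() function instead:
    HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
      p_collection_name => 'Course_Data',
      p_query => 'SELECT ... FROM ... WHERE term_id = ' || v('P1_TERM'));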

  • I'm trying to install Adobe Photoshop CS3 on a new ASUS ASM11BB001O computer equipped with 64-bit Windows 7.  I use a "bundle" approach, first loading Photoshop 6, then upgrading with CS upgrade and then CS3 upgrade.  The installation goes smoothly until I

    I'm trying to install Adobe Photoshop CS3 on a new ASUS ASM11BB001O computer equipped with 64-bit Windows 7.  I use a "bundle" approach, first loading Photoshop 6, then upgrading with CS upgrade and then CS3 upgrade.  The installation goes smoothly until I add PS CS, and then it balks at the "accept" Adobe conditions screen.  I have two other (Gateway) computers with Windows 7, 64-bit, on both of which the PS-6, CS-CS3 pathway worked fine.  Any thoughts on how to get this working?  Thanks

    If your goal is to install and activate CS5, there is no need to install anything preceding it. You will only need the serial number from the preceding version that you upgrade from (CS3 I guess).

  • I have an iPod touch 5th gen. I first loaded iTunes on a Windows 7 64-bit machine. I then got a new Windows 8 64-bit machine but iTunes will not load on it. I keep getting 2503

    I have an iPod touch 5th gen. I first loaded iTunes on a Windows 7 64-bit machine. I then got a new Windows 8 64-bit machine, but iTunes will not load on it. I keep getting 2503 & 2502 errors. Some Apple software loaded, but it will not let me uninstall it from the machine. Any thoughts on how to get iTunes to run on my machine?

    Try:
    Trouble installing iTunes or QuickTime for Windows
    Next try posting in the iTunes forum

  • Help on Oracle Streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the AR
    schema from one database to another. Both DML and DDL shall be
    replicated.
    This is about an Oracle Streams 11g configuration. I would also need your help
    with the maintenance steps, controls, and procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    replication type 1 way of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to the FaeterBR schema on all the
    tables?
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
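    On the step 7 question: instead of altering every table, a hedged alternative is DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION, which can add supplemental logging at schema level (and, if I remember correctly, ADD_SCHEMA_RULES in step 9 prepares the schema implicitly anyway):
    SQL> BEGIN
    DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
    schema_name => 'FaeterBR',
    supplemental_logging => 'keys');
    END;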
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/
    import. Can I use Data Pump to do that?
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
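    On the Data Pump question in step 10, a hedged sketch (the directory object is hypothetical and <scn> is a placeholder for the value selected below): exporting with FLASHBACK_SCN gives a consistent dump, and the Data Pump import should record the instantiation SCN for Streams; if it does not, DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN sets it manually.
    ---AT SRC:
    SQL> select dbms_flashback.get_system_change_number from dual;
    expdp system/superman schemas=FaeterBR flashback_scn=<scn>
    directory=dp_dir dumpfile=FaeterBR.dmp logfile=FaeterBR.log
    ---AT DST (after creating the tablespaces and user):
    impdp system/123db schemas=FaeterBR directory=dp_dir dumpfile=FaeterBR.dmp
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name => 'FaeterBR',
    source_database_name => 'SRC',
    instantiation_scn => <scn>);
    END;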
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'streams_adm.FaeterBR_src_queue',
    destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there an easier method applicable to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test streams propagation of rows from source (src) to
    replication (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other test that can be made?
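    A few hedged health checks that can complement the row test (standard Streams dictionary views; verify the column names on your version):
    SQL> select capture_name, status, error_message from dba_capture;
    SQL> select propagation_name, status from dba_propagation;
    SQL> select apply_name, status, error_message from dba_apply;
    SQL> select apply_name, local_transaction_id, error_message from dba_apply_error;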

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
