Oracle Streams 10gR2 Schedule Propagation or Application

Hi all,
Is there a way to schedule propagation and apply when configuring Oracle Streams? I mean, I don't want to run the replication continuously, because I have other objects outside Oracle that need to be replicated along with the database to keep my (read-only) application in sync.
So, where can I do that?
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'shm',
    streams_type    => 'apply',
    streams_name    => 'apply_from_db1',
    queue_name      => 'strmadmin.from_db1',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'db1.world',
    inclusion_rule  => true);
END;
/
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'shm',
    streams_name           => 'db1_to_db2',
    source_queue_name      => 'strmadmin.captured_db1',
    destination_queue_name => '[email protected]',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'db1.world',
    inclusion_rule         => true,
    queue_to_queue         => true);
END;
/

Hello, am I not being clear with my question? Or maybe I'm going about the Streams propagation and apply processes the wrong way?
I looked into all the available documentation and I can't find how to schedule the processes... what part am I misunderstanding?
As long as I have other non-Oracle objects (file system objects, say some ECM file system objects) to replicate along with the data schema, I can't replicate all the changes occurring in the source schema's tables to the destination database schema automatically. I'm already replicating the ECM file system objects with another external tool, which has Windows scheduler integration... but how can I schedule the propagation and/or apply processes in a 2-way Streams environment?
Please, I need a hint from somebody.
Thanks.
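A sketch of one possible approach: let the propagation schedule wake up periodically instead of propagating continuously, and open the apply window from scheduler jobs. The queue, destination, and apply names are taken from the post above; the duration/latency values and the scheduler job name are illustrative assumptions.
-- Propagation: wake up once an hour, propagate for up to 5 minutes
BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
    queue_name  => 'strmadmin.captured_db1',
    destination => 'db2.world',   -- assumed db link to the destination
    duration    => 300,           -- propagate for 300 seconds per window
    latency     => 3600);         -- then wait an hour before checking again
END;
/
-- Apply: start it at a fixed time from a scheduler job (a matching job
-- calling DBMS_APPLY_ADM.STOP_APPLY would close the window again)
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'strmadmin.start_apply_job',  -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_APPLY_ADM.START_APPLY(''apply_from_db1''); END;',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE);
END;
/
Note that for a queue_to_queue propagation the schedule is per destination queue, so the destination_queue parameter of ALTER_PROPAGATION_SCHEDULE may also be needed.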

Similar Messages

  • Java application (tomcat) and Oracle RAC 10gR2

    Hi, I have an Oracle RAC 10gR2 (10.2.0.3) on SUSE Linux Enterprise Server (3 nodes).
    I have several applications running on Tomcat 5.x and 6.x with Java 1.5 and 1.6. Sometimes, because of a hardware failure, network problems, etc., one of the nodes fails; the other 2 nodes keep working and my database stays up. However, most of the applications lose their connection to the database and I have to restart Tomcat. I want a more reliable and robust system, and I want to prepare the Tomcat and Java applications to prevent this. I have read http://drdobbs.com/java/222700353?pgno=1 and http://db360.blogspot.com/2007/01/is-your-java-application-failoverproof.html. I tried the example from the first URL, but I can't connect to my database; however, the same sample works against another Oracle RAC (10.2.0.4). The samples use ucp.jar (11g), ons.jar (10g), and ojdbc6.jar (the 11g driver).
    Can anyone help me configure my applications so that I don't have to restart the Tomcats?
    Thank you very much!

    The best solution for your problem is Oracle Support. Please raise an SR and coordinate with them. Good luck!
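    On the database side, Fast Connection Failover with UCP/ONS generally also needs the applications to connect through a database service that publishes FAN/HA events. A minimal sketch of creating such a service (the service name is an assumption, and on RAC this is more commonly done with srvctl):
    BEGIN
      DBMS_SERVICE.CREATE_SERVICE(
        service_name        => 'app_svc',   -- hypothetical service name
        network_name        => 'app_svc',
        aq_ha_notifications => TRUE,        -- publish FAN up/down events for FCF
        failover_method     => 'BASIC',
        failover_type       => 'SELECT',
        failover_retries    => 180,
        failover_delay      => 5);
      DBMS_SERVICE.START_SERVICE('app_svc');
    END;
    /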

  • OpenVMS Alpha 10GR2 Oracle Streams

    We are trying to determine whether Oracle Streams in 10gR2 for OpenVMS has all of its advertised functionality implemented. The 9i release did not meet the customer's expectations.
    Any comments?
    Peter Johnson
    SDM

    I also wonder about the feedback on this thread, since on Solaris we tried to implement a Change Data Capture based ETL (http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/cdc.htm#i1028295) for our data warehouse, but during proof-of-concept work we faced lots of problems; some had workarounds, some are documented as bugs with operating-system patches on Metalink, but some had no solution at all, so project management decided that CDC is an immature product on 10gR2.
    That said, the product works fine when your needs don't get complicated, as shown in the two great cookbooks below, especially on the HR demo tables :)
    Asynchronous Autolog Archive CDC
    http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_autolog_cdc_cookbook_0107.pdf
    Asynchronous Distributed Hotlog CDC
    http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf
    Best regards.

  • Oracle Streams & Oracle Real Application Clusters

    Hello... I'm developing a new replication system for my company using Oracle Streams. I have already achieved data replication to a downstream database, but now I would like to do the same in a RAC environment. I will appreciate any help you can give me. Best regards, walny

    I've been researching, but now I have another doubt. I have a cluster of five instances, two of them downstream: one primary and one secondary. I don't know whether the standby redo log files configured for the primary instance will be the same for the secondary instance. What I want to achieve is HA for the replication environment, so if I configure standby redo log files in both instances and the problem could be solved with just one group, I'd be wasting resources.
    I hope you can help me.
    Regards

  • Oracle stream with rac

    Hi,
    I'm trying to configure one-direction Oracle Streams (table level).
    My source and destination databases are 10.2.0.4; the destination is RAC (three nodes) and the source is a single node.
    Please help if there is some configuration required for RAC.

    Hello
    Please find the Oracle RAC specific configuration for an Oracle bidirectional Streams setup.
    # Propagation: use the queue_to_queue parameter.
    -- Assign primary/secondary instance IDs:
    BEGIN
      dbms_aqadm.alter_queue_table(
        queue_table        => 'capture_srctab',
        primary_instance   => 1,
        secondary_instance => 2);
      dbms_aqadm.alter_queue_table(
        queue_table        => 'apply_srctab',
        primary_instance   => 1,
        secondary_instance => 2);
    END;
    /
    All Streams processing is done at the owning instance of the queue used by the Streams client. To determine the owning instance of each ANYDATA queue in a database, run the following query:
    SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
      FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
     WHERE t.OBJECT_TYPE = 'SYS.ANYDATA'
       AND q.QUEUE_TABLE = t.QUEUE_TABLE
       AND q.OWNER = t.OWNER;
    # tnsnames.ora
    service_name = global_name = db_name
    Please see the Metalink document:
    10gR2 Streams Recommended Configuration [ID 418755.1]
    Regards
    Hitgon
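    For reference, a queue-to-queue propagation between two ANYDATA queues can be created like this (a sketch; the propagation name, queue names, and db link are assumptions, not from the post):
    BEGIN
      DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
        propagation_name   => 'src_to_dst_prop',              -- hypothetical name
        source_queue       => 'strmadmin.capture_src_queue',  -- assumed source queue
        destination_queue  => 'strmadmin.apply_dst_queue',    -- assumed destination queue
        destination_dblink => 'dst.world',                    -- assumed db link
        queue_to_queue     => TRUE);  -- propagation follows the queue's owning instance on RAC
    END;
    /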

  • Doubt in Oracle streams

    I have a doubt about Oracle Streams. Can you please help me understand the following terms?
    1. Message
    2. User-defined event
    3. Event
    4. Rules
    5. Oracle-supplied PL/SQL packages
    6. Subscriber, consumer

    Hi
    Message
    A message is the smallest unit of information that is inserted into and retrieved from a queue.
    Queue
    A queue is a repository for messages. Queues are stored in queue tables.
    Enqueue
    To place a message in a queue.
    Dequeue
    To consume a message.
    Agent
    An agent is an end user or application that uses a queue.
    Thanks
    Venkat
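    To make enqueue and dequeue concrete, here is a minimal enqueue sketch against an ANYDATA queue (the queue name is a placeholder, not from this thread):
    DECLARE
      enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      msg_id    RAW(16);
    BEGIN
      -- enqueue: place one message into the queue
      DBMS_AQ.ENQUEUE(
        queue_name         => 'strmadmin.demo_queue',  -- hypothetical queue
        enqueue_options    => enq_opts,
        message_properties => msg_props,
        payload            => SYS.ANYDATA.CONVERTVARCHAR2('hello'),
        msgid              => msg_id);
      COMMIT;
    END;
    /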

  • Help on Oracle streams 11g configuration

    Hi Streams experts,
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the AR
    schema from one database to another. Both DML and DDL changes
    shall be replicated.
    This is for an Oracle Streams 11g configuration. I would also need
    your help on the maintenance steps, controls, and procedures.
    2 databases:
    1 src as source database
    1 dst as destination database
    Replication type: 1-way, of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    /
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_names set to true, the db link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control which schema shall be replicated:
    FaeterBR is the schema to be replicated.
    Step 7. Add supplemental logging to all the tables of the FaeterBR
    schema:
    SQL> alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    /
    ---at DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    /
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'streams_adm.FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    /
    Step 10. Instantiate the FaeterBR schema at DST by doing an
    export/import. (Can I use Data Pump to do that instead?)
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create the FaeterBR tablespaces and user:
    create tablespace FaeterBR datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR_
    default tablespace FaeterBR
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'stream_admin.FaeterBR_src_queue',
    destination_queue_name=>'stream_admin.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    /
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    /
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    /
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there any easier method applicable to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
    /
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    /
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    /
    Step 17. Test Streams propagation of rows from the source (SRC) to the
    replica (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    commit;
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor = 'DR Lejetoej';
    Any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
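    A quick way to validate such a setup once the processes are started is to check the status of each component. A sketch of the usual monitoring queries (standard dictionary views; run the first two on SRC and the last two on DST):
    -- on SRC: capture and propagation status
    SELECT capture_name, status FROM dba_capture;
    SELECT propagation_name, status FROM dba_propagation;
    -- on DST: apply status and any apply errors
    SELECT apply_name, status FROM dba_apply;
    SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;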

  • Upload file in JSP with Oracle Database 10gR2

    How do I upload a file into an Oracle Database 10gR2?
    I can't find how to do the upload.
    I've tried creating a procedure in Oracle and executing it from NetBeans, but the file is saved to a directory first and then loaded from the directory into the database.
    That means the file is saved in 2 locations: the directory and the database.
    Does anybody know how to save a file directly from the JSP into the database, without saving it to a directory first?
    This is the procedure:
    create or replace PROCEDURE load_file (
      p_id number,
      p_photo_name in varchar2) IS
      src_file BFILE;
      dst_file BLOB;
      lgh_file BINARY_INTEGER;
    BEGIN
      src_file := bfilename('DIR_TEMP', p_photo_name);
      -- insert a row with an empty BLOB
      INSERT INTO temp_photo
        (id, photo_name, photo)
      VALUES
        (p_id, p_photo_name, EMPTY_BLOB())
      RETURNING photo INTO dst_file;
      -- lock the record
      SELECT photo
        INTO dst_file
        FROM temp_photo
       WHERE id = p_id
         AND photo_name = p_photo_name
         FOR UPDATE;
      -- open the file
      dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
      -- determine length
      lgh_file := dbms_lob.getlength(src_file);
      -- read the file
      dbms_lob.loadfromfile(dst_file, src_file, lgh_file);
      -- update the blob column
      UPDATE temp_photo
         SET photo = dst_file
       WHERE id = p_id
         AND photo_name = p_photo_name;
      -- close the file
      dbms_lob.fileclose(src_file);
    END load_file;
    /

    Well your Oracle procedure is designed to load a file, so that's what it does. If you want it to load from a data stream such as an upload, you need to rewrite it accordingly.
    So far this is not a Java question at all.
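    One way to follow that advice on the database side is to accept the bytes as a BLOB parameter, so the JSP can stream the upload straight into the table without an intermediate file. A sketch, reusing the temp_photo table from the post (the procedure name is made up):
    create or replace PROCEDURE save_photo (
      p_id         number,
      p_photo_name varchar2,
      p_photo      blob) IS   -- bytes streamed from the JSP; no BFILE involved
    BEGIN
      INSERT INTO temp_photo (id, photo_name, photo)
      VALUES (p_id, p_photo_name, p_photo);
    END save_photo;
    /
    From the Java side this would be called through a CallableStatement, binding the upload's InputStream with setBinaryStream.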

  • Oracle Streaming in same database - 10g

    Hello,
    Hello,
    I am trying to do streaming at the table level for all DML changes from one schema (source) to another schema (target), controlled by an admin schema (stream_admin). All these schemas are in the same database (10g Enterprise Edition).
    As given in the documentation, I created:
    1. two queues (in_Q and out_Q) in stream_admin,
    2. two processes (capture and apply) in stream_admin,
    3. a propagation rule for propagation between in_Q and out_Q,
    4. did instantiation,
    5. started both the capture and apply processes.
    Having done that, I insert into the source table and check for the same row in the target, but alas, nothing happens. I fail to achieve streaming.
    I am not getting any error: neither in the processes, nor in the propagation or queues. And all queues, rules, and processes are enabled.
    Please help.
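    With no errors reported anywhere, the usual first checks are the capture state, the instantiation SCN, and whether messages are accumulating in the buffered queues. A sketch of those checks (standard dictionary and dynamic views; nothing here is specific to this setup):
    -- is capture actually mining and enqueuing?
    SELECT capture_name, state, total_messages_captured, total_messages_enqueued
      FROM v$streams_capture;
    -- without an instantiation SCN the apply process silently ignores LCRs
    SELECT source_schema, instantiation_scn FROM dba_apply_instantiated_schemas;
    -- are messages stuck in the queues?
    SELECT queue_schema, queue_name, num_msgs, spill_msgs FROM v$buffered_queues;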

    Datapump uses dbms_metadata extensively.
    The problem is twofold:
    - the amount of data
    - why on earth you need to 'copy' these 1.2 TB a 'number of times'
    One would expect you to have tested the upgrade on a smaller test database, and you wouldn't need to fix bugs in a 1.2 TB database.
    A better idea would be to duplicate the complete database using RMAN. This doesn't perform conventional INSERTs and doesn't create redo.
    Sybrand Bakker
    Senior Oracle DBA

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying to set up replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
    For step 5, I used 'set_up_queue' in the 'dbms_streams_adm' package. This procedure creates a queue table and an associated queue.
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked, and the queue table and its associated queue are created as shown:
    BEGIN
      sys.dbms_aqadm.create_queue_table(
        queue_table        => 'CAPTURE_SFQTAB',
        queue_payload_type => 'SYS.ANYDATA',
        sort_list          => '',
        comment            => '',
        multiple_consumers => TRUE,
        message_grouping   => DBMS_AQADM.TRANSACTIONAL,
        storage_clause     => 'TABLESPACE STREAMSTS LOGGING',
        compatible         => '8.1',
        primary_instance   => '0',
        secondary_instance => '0');
      sys.dbms_aqadm.create_queue(
        queue_name     => 'CAPTURE_SFQ',
        queue_table    => 'CAPTURE_SFQTAB',
        queue_type     => sys.dbms_aqadm.NORMAL_QUEUE,
        max_retries    => '5',
        retry_delay    => '0',
        retention_time => '0',
        comment        => '');
    END;
    /
    The capture process is capturing changes, but it seems these changes cannot be enqueued into the capture queue because the data type does not match.
    As far as I know, a 'sys.anydata' payload type and the 'normal_queue' queue type are the right parameters for a successful configuration.
    I would be really grateful for any ideas!

    Hi
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
    DECLARE
      rc BINARY_INTEGER;
    BEGIN
      DBMS_AQADM.VERIFY_QUEUE_TYPES(
        src_queue_name  => 'np_out_onlinex',
        dest_queue_name => 'np_out_onlinex',
        rc              => rc,
        destination     => 'scnp.pfa.dk',
        transformation  => 'TransformDim2JMS_001x');
      DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
    END;
    /
    If you don't have transformations and/or a remote destination, then delete those parameters.
    Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified and what hasn't.
    regards
    Mette
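    A quick way to confirm what payload type each queue actually has is to compare the queue tables on both sides; ORA-25215 during propagation usually means the source and destination queues disagree on this. A sketch (standard dictionary view; the queue table name is the one from the question, run the same check on the destination):
    SELECT owner, queue_table, object_type
      FROM dba_queue_tables
     WHERE queue_table = 'CAPTURE_SFQTAB';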

  • Anyone have experience in Oracle's Manufacturing Scheduling?

    Can anyone tell me whether Oracle's Manufacturing Scheduling provides an effective way to schedule complex manufacturing operations and roughly what it costs?
    Our company, PlanetTogether, provides multi-user, add-on production order constraint-based scheduling and we're wondering whether it would make sense to become an Oracle Partner or to look for other Oracle resellers who might be interested in selling our product as an add-on to Oracle.
    Also, if you could recommend any resellers who might be interested in working with us that would be most appreciated too!
    Thanks!

    The "Unable to connect.." message can be a result of the protocol you selected. As mentioned above, HP Jetdirect is often the best because this is the same protocol used by Windows - called Standard TCP/IP or RAW Port 9100.
    If you use LPD then sometimes you need a specific queue name. And IPP often has the same condition.
    The other thing to check that the IP address you have entered is the copier and not some other device, like a Windows print server.
    As for the FTP, you can open the Terminal application and type;
    sudo -s launchctl load -w /System/Library/LaunchDaemons/ftp.plist
    Press Return and when prompted, enter your admin account password and press Return.
    Now that you have the FTP service running on the Mac you can create an address book entry in the copier pointing to a folder in the user's home folder. The format for the address book is:
    Protocol: FTP
    Host Name: IP address of the Mac
    File Path: Desktop/Scans *1
    User: John Doe *2
    Password: The password for the user account entered
    Notes:
    The file path is entered relative to the user's home folder. In the example above, the user John Doe has a folder called Scans on his Desktop. If the Scans folder were directly under his home folder, the path would just be Scans. Also note that no slash is required at the beginning of the file path, and mind the type of slash used to separate folders.
    You can use the full name or short name.

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing, which will send LCRs from a source queue to a destination queue. As I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Let's start with the simplest possible case of this, two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box, how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.).
    Justin

  • Complex Oracle Streams issue - Update conflicts

    This is for Oracle Streams replication on 11gR2.
    I am facing update conflicts in a table. The conflicts arise from both technical and business-logic issues. The business-logic conflicts pass through the replication/apply process successfully, but we want to arrest and resolve them before replication, for our requirements. These are typically somewhat complex cases, and we are exploring the possibility of having both DML handlers and error handlers: the DML handlers would take care of business-logic conflicts, and the error handler of technical issues, before Streams pushes them to the error queue. Based on our understanding and verification, we found a limitation: a procedure DML handler and an error handler cannot be configured together for the same table operation.
    Statement handlers cannot be used for our conflict scenarios.
    Following are my questions:
    1. Has anyone implemented or faced such a scenario in a real application? If yes, can you please share some insights or inputs?
    2. Is there a custom way to handle this complex problem of configuring both a DML handler and an error handler?
    3. Is there any alternative way to resolve this situation in an Oracle Streams environment with other handlers?

    Dear All
    I too have a similar requirement. Could anyone help?
    We can handle the erroring transactions via error handler procedures.
    But we cannot configure a DML handler procedure for transactions that are successfully replicated; Streams does not allow us to configure a handler for this. Is there any other handler/procedure/hook in Streams where we can implement the desired functionality, which includes changing the values in the LCR before invoking lcr.execute(), and being able to discard the LCR if required?
    Regards
    Velmurugan
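    For context, both handler types are registered through the same call, DBMS_APPLY_ADM.SET_DML_HANDLER, with the error_handler flag deciding which role the procedure plays per object/operation, which is why the two collide. A sketch (the table and handler procedure names are placeholders, not from the thread):
    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'scott.emp',            -- hypothetical table
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => FALSE,  -- TRUE would register it as an error handler instead
        user_procedure => 'strmadmin.emp_update_handler');  -- hypothetical procedure
    END;
    /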

  • Do I need to install oracle database 10gr2 before installing Oracle applica

    I am new to Oracle Application Server. I am going to install Oracle Application Server 10g. From what I read, it looks like I need to install Oracle Database 10gR2 first, on the SAME server? Is that correct?
    If I do so, where will ORACLE_HOME point? To the Oracle Application Server home directory, or to the Oracle DB directory?
    Thank you very much for your help.

    Hi,
    Even I am new to Oracle Application Server and had the same doubt that you have.
    When you install Application Server, two tiers are installed, the infrastructure tier and the middle tier; these tiers can run on the same machine or on different machines.
    These tiers have a metadata repository to store Application Server related data; this database is installed along with Application Server.
    It is up to you to install Oracle Database 10gR2 before or after the installation of Application Server. It will be an independent database with its own home, but after this installation you will have three homes, e.g. infra_home, mid_home, and 10gR2_home.
    By default ORACLE_HOME is set to the last installation that happens, so if you install the 10gR2 database after installing Application Server, it will point to 10gR2_home; but if you install 10gR2 first and Application Server later, then ORACLE_HOME will be infra_home (if both the middle tier and infra tier are on the same machine).
    You can always change the Oracle home by using set ORACLE_HOME=path/to/oracle/home (in Windows)
    or export ORACLE_HOME=path/to/oracle/home (in Unix).
    Hope this helps...
    regards

  • Execution of Row level trigger in Oracle Streams.

    Hi All,
    Oracle Database version: 9.2.0.4 on a Windows NT/2000 environment.
    We managed to install and configure Oracle Streams.
    Oracle Streams seems to be working fine for replication of DML & DDL changes from the source database to the target database.
    Following are the details at the source end:
    Source SID = acc
    Source schema = stream
    Source table = dept
    Structure of the dept table:
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    Streams admin user = strmadmin
    Following are the details at the target end:
    Target SID = fin
    Target schema = stream
    Target table = dept
    Structure of the dept table:
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    TRAN_DATE NULL DATE DEFAULT SYSDATE
    I checked insert/update/delete of rows in the dept table at the source database; the changes are correctly replicated to the target dept table.
    I wrote a simple trigger on the dept table at the target database, as follows:
    create or replace trigger dept_upd_del
    before delete or update of dname, loc on stream.dept
    for each row
    begin
      dbms_output.put_line('Inside Trigger');
      if updating then
        dbms_output.put_line('Update');
        insert into stream.dept_change values (:old.deptno, 'U', sysdate);
      end if;
      if deleting then
        dbms_output.put_line('Delete');
        insert into stream.dept_change values (:old.deptno, 'D', sysdate);
      end if;
    end;
    /
    I expect this trigger to be executed whenever changes occur in the dept table at the target database, i.e. whenever DML changes are propagated from the source to the target table. However, I found that the above trigger is not executed at all.
    I was further surprised, since if I update/delete rows in the target dept table directly, the trigger executes correctly.
    Can someone please explain this?
    I believed Streams uses INSERT/UPDATE/DELETE statements when changes are applied at the target table, but this doesn't seem to be the case.
    Thanks in advance.
    Regards,
    Vidyanand

    The trigger at the destination will not fire because it has already fired at the source site. Read about that in the Streams documentation on page 4-25. To change the "fire once" property of the trigger, use the procedure SET_TRIGGER_FIRING_PROPERTY in the DBMS_DDL package.
    Hope this helps.
    Claudine
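    For reference, the call Claudine mentions would look like this for the trigger in the question (a sketch):
    BEGIN
      DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
        trig_owner => 'STREAM',
        trig_name  => 'DEPT_UPD_DEL',
        fire_once  => FALSE);  -- also fire for changes made by the apply process
    END;
    /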
