Publishing SYS.aq$_jms_text_message to Oracle Streams Queue
I've created a streams queue using dbms_streams_adm, and by default the payload type for the queue created is Sys.AnyData. How do I publish a message of type aq$_jms_text_message in PL/SQL to this streams queue? I guess it all comes down to converting aq$_jms_text_message to AnyData in PL/SQL, but Sys.AnyData does NOT have a conversion function for aq$_jms_text_message.
Any help would be appreciated.
Thanks,
Das
This has been asked a lot of times - I'm not sure how my initial searching missed all of the other questions/answers related to this topic.
In our case, the solution was to:
1) Leave the queue as a sys.aq$_jms_text_message type
2) Construct a sys.xmltype object with our desired payload
3) Do a getStringVal() on the xmltype object and use that string as the payload for our queue message
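A minimal PL/SQL sketch of that approach (the queue name and XML content here are illustrative, not from our system):

```sql
DECLARE
  enq_opt  DBMS_AQ.enqueue_options_t;
  msg_prop DBMS_AQ.message_properties_t;
  msg_id   RAW(16);
  msg      SYS.AQ$_JMS_TEXT_MESSAGE;
BEGIN
  -- build the payload as XML, then flatten it to a string
  msg := SYS.AQ$_JMS_TEXT_MESSAGE.construct;
  msg.set_text(XMLTYPE('<order><id>42</id></order>').getStringVal());
  DBMS_AQ.ENQUEUE(
    queue_name         => 'MY_JMS_QUEUE',   -- hypothetical queue name
    enqueue_options    => enq_opt,
    message_properties => msg_prop,
    payload            => msg,
    msgid              => msg_id);
  COMMIT;
END;
/
```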
- Nathan
Similar Messages
-
Oracle Streaming Queues in Oracle 10G standard Edition
I would like to configure and implement Oracle Streaming Queues in Oracle 10G standard Edition. If it is possible then please guide me and give me some clues and if not then please advise me some alternate method.
Here is the guidance you requested.
License information:
http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/toc.htm
Technical information:
http://tahiti.oracle.com/
Since I don't even know what version you have ... this is as far as I can take you. -
Hi,
I have setup a streams environment, can see statistics on changes being enqueued but cannot figure out how to see the actual LCR data. I have queried against my queue table and get no rows. I am hoping this is something very simple I just cannot find it late on a Friday. Please enlighten me.
Thanks
Tom
Captured LCRs are maintained in memory in the buffer queue, and the contents of the LCR are not available for viewing.
Use the dynamic views V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER to see the progress of LCRs through the systems.
If an error is encountered on apply, the error transaction will be placed in the Error queue. Once the error is in the error queue, you can use the scripts in the documentation (Display details of error transaction) to see the information in the LCR.
Only user-enqueued LCRs will be visible in the Streams queue table. A user-enqueued LCR is an LCR that is constructed and explicitly enqueued into the streams queue, as opposed to an LCR implicitly captured by a streams capture process. -
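To see what is sitting in the error queue, a query along these lines (column list abridged) is usually the starting point:

```sql
-- list failed transactions waiting in the apply error queue
SELECT apply_name, local_transaction_id, error_number,
       error_message, message_count
FROM   dba_apply_error;
```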
Capture Changes from Sql Server using Oracle Streams - Destination Oracle
Is it possible to capture changes made to tables in Sql Server database and propagate the changes to Oracle Database using Oracle Streams and Heterogeneous Gateway. I see plenty of information about pushing data from Oracle to Sql server, but I haven't been able to find much information about going the other way. Currently we are using sql server 2005 replication to accomplish this. We are looking into the possibility of replacing it with streams.
The brief understanding I have is that Oracle provides nothing out of the tin to stream between SQL Server and Oracle. The scenario is documented in the Oracle docs, however, and says you need to implement the SQL Server side to grab changes and submit them to Oracle Streams queues.
I'm sure I've seen third parties who sell software to do this.
If you know otherwise, please let me know. Also, I wasn't aware one could push from SQL Server to Oracle. Is this something only available in SQL Server 2005, or does 2000 also have it? How are you doing this?
Cheers -
Oracle Streams 'ORA-25215: user_data type and queue type do not match'
I am trying replication between two databases (10.2.0.3) using Oracle Streams.
I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
The main steps are:
1. Set up ARCHIVELOG mode.
2. Set up the Streams administrator.
3. Set initialization parameters.
4. Create a database link.
5. Set up source and destination queues.
6. Set up supplemental logging at the source database.
7. Configure the capture process at the source database.
8. Configure the propagation process.
9. Create the destination table.
10. Grant object privileges.
11. Set the instantiation system change number (SCN).
12. Configure the apply process at the destination database.
13. Start the capture and apply processes.
For step 5, I have used 'set_up_queue' in the 'dbms_streams_adm' package. This procedure creates a queue table and an associated queue.
The problem is that, in the propagation process, I get this error:
'ORA-25215: user_data type and queue type do not match'
I have checked it, and the queue table and its associated queue are created as shown:
sys.dbms_aqadm.create_queue_table (
queue_table => 'CAPTURE_SFQTAB'
, queue_payload_type => 'SYS.ANYDATA'
, sort_list => ''
, COMMENT => ''
, multiple_consumers => TRUE
, message_grouping => DBMS_AQADM.TRANSACTIONAL
, storage_clause => 'TABLESPACE STREAMSTS LOGGING'
, compatible => '8.1'
, primary_instance => '0'
, secondary_instance => '0');
sys.dbms_aqadm.create_queue(
queue_name => 'CAPTURE_SFQ'
, queue_table => 'CAPTURE_SFQTAB'
, queue_type => sys.dbms_aqadm.NORMAL_QUEUE
, max_retries => '5'
, retry_delay => '0'
, retention_time => '0'
, COMMENT => '');
The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
I would be really grateful for any idea!
Hi
You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
DECLARE
rc BINARY_INTEGER;
BEGIN
DBMS_AQADM.VERIFY_QUEUE_TYPES(
src_queue_name => 'np_out_onlinex',
dest_queue_name => 'np_out_onlinex',
rc => rc,
destination => 'scnp.pfa.dk',
transformation => 'TransformDim2JMS_001x');
DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
END;
/
If you don't have transformations and/or a remote destination, then delete those parameters.
Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified or not.
regards
Mette -
I am trying to write a web-service to Enqueue/Dequeue messages from an AQ with payload type SYS.AQ$_JMS_TEXT_MESSAGE defined in Oracle DB.
In my understanding is that I need to create a JMSModule within weblogic with a ForeignServer defined within it to enqueue/dequeue message to/from the AQ.
I have created Datasource, JMSServer, JMSModule, ForeignServer (created ConnectionFactory with localJNDIName="MyQueueCF" and RemoteJNDIName as "QueueConnectionFactory" and Destination with localJNDIName="MyQueueDest" and RemoteJNDIName="Queues/<queue_name_in_DB>")
My business service has an endpoint "http://localhost:7001/MyQueueCF/MyQueueDest"
When I am testing my service to populate a message onto the queue, I get the following error:
The error was oracle.jms.AQjmsException: Error creating the db_connection
My questions are:
* Am I following the correct procedure to talk to AQ with JMS text message type payload?
* If yes, how can I get around the issue I am stuck with?
Please help!
Thanks.
Example:
conn / as sysdba
begin
dbms_aqadm.create_queue_table
( queue_table => 'SCOTT.AQJMS'
, queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE'
, compatible => '9.1');
end;
/
This worked fine for me after a standard DB-installation. -
Help with Oracle Streams. How to uniquely identify LCRs in queue?
We are using Streams for data replication in our shop.
When an error occurs in our processing procedures, the LCR is moved to the error queue.
The problem we are facing is that we don't know how to uniquely identify LCRs in that queue, so we can run them again when we think the error is corrected.
LCRs contain SCN, but as I understand it, the SCN is not unique.
What is the easy way to keep track of LCRs? Any information is helpful.
Thanks
Hi,
When you correct the data,you would have to execute the failed transaction in order.
To see what information the apply process has tried to apply, you have to print that LCR. Depending on the size (MESSAGE_COUNT) of the transaction that has failed, it could be interesting to print the whole transaction or a single LCR.
To do this print you can make use of procedures print_transaction, print_errors, print_lcr and print_any documented on :
Oracle Streams Concepts and Administration
Chapter - Monitoring Streams Apply Processes
Section - Displaying Detailed Information About Apply Errors
These procedures are also available through Note 405541.1 - Procedure to Print LCRs
To print the whole transaction, you can use print_transaction procedure, to print the error on the error queue you can use procedure print_errors and to print a single_transaction you can do it as follows:
SET SERVEROUTPUT ON;
DECLARE
lcr SYS.AnyData;
BEGIN
lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE
(<MESSAGE_NUMBER>, <LOCAL_TRANSACTION_ID>);
print_lcr(lcr);
END;
Thanks -
Hi all,
I am trying to get PL/SQL notification working on a multi-subscriber queue with sys.aq$_jms_text_message as the payload type. The commands to create my queue are as follows:
dbms_aqadm.create_queue_table(
queue_table => 'SOA_JMS.RJMTESTxx_QTAB',
multiple_consumers => true,
queue_payload_type => 'sys.aq$_jms_text_message');
dbms_aqadm.create_queue(
queue_name => 'RJMTESTQ',
queue_table => 'SOA_JMS.RJMTESTxx_QTAB',
retention_time => 86400, --Keep processed messages for 24 hours
max_retries => 3,
retry_delay => 1);
dbms_aqadm.start_queue('RJMTESTQ');
dbms_aqadm.add_subscriber(
queue_name => 'SOA_JMS.RJMTESTQ',
subscriber => sys.aq$_agent('SUBSCRIP1',null,0),
rule => NULL,
transformation => NULL,
queue_to_queue => FALSE,
delivery_mode => dbms_aqadm.persistent);
I then create a procedure with the following signature:
create or replace procedure SOA_JMS.EXCEPTION_QUEUE_NOFIFYCB_1(
p_context in raw,
p_reginfo in sys.aq$_reg_info,
p_descr in sys.aq$_descriptor,
p_payload in raw,
p_payloadl in number)
And register it as follows:
reginfo := sys.aq$_reg_info(
'SOA_JMS.RJMTESTQ:SUBSCRIP1',
DBMS_AQ.NAMESPACE_AQ,
'plsql://SOA_JMS.EXCEPTION_QUEUE_NOFIFYCB_1?PR=0',
--utl_raw.cast_to_raw('STANDARDJMS')
HEXTORAW('FF'));
reg_list := sys.aq$_reg_info_list(reginfo);
dbms_aq.register(reg_list,1);
The problem is the notifications are not firing as they should be.
I have done some tracing and found an error:
Error in PLSQL notification of msgid:BA964334E5A057A4E040C69BAF397075
Queue :"SOA_JMS"."RJMTESTQ"
Consumer Name :SUBSCRIP1
PLSQL function :SOA_JMS.EXCEPTION_QUEUE_NOFIFYCB_1
: Exception Occured, Error msg:
ORA-00604: error occurred at recursive SQL level 2
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'EXCEPTION_QUEUE_NOFIFYCB_1'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
This says that the parameters of my procedure are wrong. Looking through the documentation I think it is something to do with the ?PR=0 used in the register call, but I can't find any documentation telling me what the required parameters are.
Does anyone here know?
Thanks
Robert
Hi,
I have found the solution and I am posting here in case it helps anyone else.
The parameter names must match the callback's expected names, not just the types and in/out modes.
So the following works:
create or replace procedure SOA_JMS.EXCEPTION_QUEUE_NOFIFYCB_2(
context in raw,
reginfo in sys.aq$_reg_info,
descr in sys.aq$_descriptor,
payload in raw,
payloadl in number)
Robert -
Sys.aq$_jms_text_message type queues message size limitations
Are there any size limitations for enqueing text messages into a sys.aq$_jms_text_message type queue using AQ?
Yes, I understand how to describe something...
My question is how to get a list of properties.
This data type has get functions that take the name of a property. How do I know the property name to pass to them?
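One way to enumerate them is to walk the header's user-property array yourself. A sketch (assuming `msg` has already been populated by a prior DBMS_AQ.DEQUEUE):

```sql
DECLARE
  msg   SYS.AQ$_JMS_TEXT_MESSAGE;  -- assumed populated by a prior dequeue
  props SYS.AQ$_JMS_USERPROPARRAY;
BEGIN
  props := msg.header.properties;
  FOR i IN 1 .. props.COUNT LOOP
    -- each entry carries the property name plus its string/numeric value
    DBMS_OUTPUT.PUT_LINE(props(i).name || ' = ' ||
                         NVL(props(i).str_value, TO_CHAR(props(i).num_value)));
  END LOOP;
END;
/
```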
Oracle Advanced Queuing - Propagation problem - 11g
Hi,
I have a problem when propagation messages between queues. When the message is propagated, it stays on the source queue with READY state.
I have created two queues on 11g with a propagation rule that any message from queue A are sent to queue B. My problem is that the message from the source queue stays in the source queue even after propagation, which isn't what I was expecting. The problem doesn't occur if the queues are on a different database. This problem only happens if the queues are on the same database.
the script I use is this:
For USERB (which has the destination queue)
create type EVENT_MESSAGE as object (
eventsource VARCHAR2(30),
eventname VARCHAR2(255),
eventid NUMBER(19,0),
message CLOB);
DECLARE
an_agent sys.aq$_agent;
BEGIN
-- create the publish/subscribe queue table
dbms_aqadm.create_queue_table(
queue_table => 'DESTINATION_QUEUE_TABLE',
queue_payload_type => 'EVENT_MESSAGE',
sort_list => 'ENQ_TIME',
message_grouping => DBMS_AQADM.NONE,
multiple_consumers => true);
-- create the queue
dbms_aqadm.create_queue(
queue_name => 'DESTINATION',
queue_table => 'DESTINATION_QUEUE_TABLE',
queue_type => DBMS_AQADM.NORMAL_QUEUE,
max_retries => 5);
dbms_aqadm.create_aq_agent(agent_name => 'DEQUEUE_AGENT');
an_agent := sys.aq$_agent('DEQUEUE_AGENT', null, null);
dbms_aqadm.enable_db_access(
agent_name => 'DEQUEUE_AGENT',
db_username => 'USERB');
dbms_aqadm.add_subscriber(
queue_name => 'DESTINATION',
subscriber => an_agent,
queue_to_queue => FALSE,
delivery_mode => DBMS_AQADM.PERSISTENT);
-- start the queues
dbms_aqadm.start_queue('DESTINATION');
END;
For USERA
create type EVENT_MESSAGE as object (
eventsource VARCHAR2(30),
eventname VARCHAR2(255),
eventid NUMBER(19,0),
message CLOB);
BEGIN
-- create the publish/subscribe queue table
dbms_aqadm.create_queue_table(
queue_table => 'SOURCE_QUEUE_TABLE',
queue_payload_type => 'EVENT_MESSAGE',
sort_list => 'ENQ_TIME',
message_grouping => DBMS_AQADM.NONE,
multiple_consumers => true);
-- create the queue
dbms_aqadm.create_queue(
queue_name => 'SOURCE',
queue_table => 'SOURCE_QUEUE_TABLE',
queue_type => DBMS_AQADM.NORMAL_QUEUE,
max_retries => 5);
-- start the queues
dbms_aqadm.start_queue('SOURCE');
-- create the propagation
dbms_aqadm.add_subscriber(queue_name => 'SOURCE',
subscriber => sys.aq$_agent('DEQUEUE_AGENT','USERB.DESTINATION',null),
queue_to_queue => true);
dbms_aqadm.schedule_propagation(queue_name => 'SOURCE',
start_time => sysdate,
latency => 25,
destination_queue => 'USERB.DESTINATION');
END;
When I enqueue a message to the source on USERA with this:
declare
rc binary_integer;
nq_opt dbms_aq.enqueue_options_t;
nq_pro dbms_aq.message_properties_t;
datas EVENT_MESSAGE;
msgid raw(16);
begin
nq_pro.expiration := dbms_aq.never;
nq_pro.sender_id := sys.aq$_agent('ENQUEUE_AGENT', null, null);
datas := EVENT_MESSAGE('message','eventname',1,null);
dbms_aq.enqueue('SOURCE',nq_opt,nq_pro,datas,msgid);
end;
The message is propagated to the destination queue, no problem, but the message state on the source queue is kept as ready. I would have expected it to be marked as processed and disappear from the queue table.
When I look at the AQ$_SOURCE_QUEUE_TABLE_S the I see these records:
QUEUE_NAME  NAME           ADDRESS                          PROTOCOL  SUBSCRIBER_TYPE
SOURCE      (null)         "USERB"."DESTINATION"@AQ$_LOCAL  0         1736
SOURCE      DEQUEUE_AGENT  "USERB"."DESTINATION"            0         577
Can anyone help?
I was talking about the following Oracle documentation:
Oracle Database 11g: Advanced Queuing (Technical Whitepaper)
Streams Advanced Queuing: Best Practices (Technical Whitepaper)
Oracle Streams Advanced Queuing and Real Application Clusters: Scalability and Performance Guidelines (Technical Whitepaper)
They are available at.. http://www.oracle.com/technetwork/database/features/data-integration/default-159085.html -
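For diagnosing propagation between queues, the scheduling views are usually the first stop; for example (a sketch):

```sql
-- check whether the propagation schedule is running and whether it has errored
SELECT schema, qname, destination, schedule_disabled,
       failures, last_error_msg
FROM   dba_queue_schedules;
```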
Data is not replicated on target database - oracle stream
I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows.
The steps for setting up one-way replication between two Oracle databases using Streams at schema level followed the Metalink doc.
I entered a few records in the source db, and the data is not getting replicated to the destination db. Could you please guide me as to how I analyse this problem to reach the solution?
Steps for configuration (as followed from the Metalink doc):
==================
Set up ARCHIVELOG mode.
Set up the Streams administrator.
Set initialization parameters.
Create a database link.
Set up source and destination queues.
Set up supplemental logging at the source database.
Configure the capture process at the source database.
Configure the propagation process.
Create the destination table.
Grant object privileges.
Set the instantiation system change number (SCN).
Configure the apply process at the destination database.
Start the capture and apply processes.
Section 2 : Create user and grant privileges on both Source and Target
2.1 Create Streams Administrator :
connect SYS/password as SYSDBA
create user STRMADMIN identified by STRMADMIN;
2.2 Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
In 10g :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
2.3 Create streams queue :
connect STRMADMIN/STRMADMIN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
Section 3 : Steps to be carried out at the Destination Database PLUTO
3.1 Add apply rules for the Schema at the destination database :
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'SCOTT',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'REP2');
END;
3.2 Specify an 'APPLY USER' at the destination database:
This is the user who would apply all DML statements and DDL statements.
The user specified in the APPLY_USER parameter must have the necessary
privileges to perform DML and DDL changes on the apply objects.
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'SCOTT');
END;
3.3 Start the Apply process :
DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
if (v_started = 0) then
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
end if;
END;
Section 4 :Steps to be carried out at the Source Database REP2
4.1 Move LogMiner tables from SYSTEM tablespace:
By default, all LogMiner tables are created in the SYSTEM tablespace.
It is a good practice to create an alternate tablespace for the LogMiner
tables.
CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
MAXSIZE UNLIMITED;
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
4.2 Turn on supplemental logging for DEPT and EMPLOYEES table :
connect SYS/password as SYSDBA
ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
Note: If the number of tables are more the supplemental logging can be
set at database level .
4.3 Create a database link to the destination database :
connect STRMADMIN/STRMADMIN
CREATE DATABASE LINK PLUTO connect to
STRMADMIN identified by STRMADMIN using 'PLUTO';
Test the database link to be working properly by querying against the
destination database.
Eg : select * from global_name@PLUTO;
4.4 Add capture rules for the schema SCOTT at the source database:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'SCOTT',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'REP2');
END;
4.5 Add propagation rules for the schema SCOTT at the source database.
This step will also create a propagation job to the destination database.
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'SCOTT',
streams_name => 'STREAM_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
include_dml => true,
include_ddl => true,
source_database => 'REP2');
END;
Section 5 : Export, import and instantiation of tables from
Source to Destination Database
5.1 If the objects are not present in the destination database, perform
an export of the objects from the source database and import them
into the destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each
individual object at a particular system change number (SCN).
exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
Import into the Destination Database:
Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate
information in the destination database corresponding to the SCN that
is recorded in the export file.
imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
5.2 If the objects are already present in the destination database, there
are two ways of instantiating the objects at the destination site.
1. By means of Metadata-only export/import :
Specify ROWS=N during Export
Specify IGNORE=Y during Import along with above import parameters.
2. By manually instantiating the objects
Get the Instantiation SCN at the source database:
connect STRMADMIN/STRMADMIN@source
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
Instantiate the objects at the destination database with
this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
controls which LCRs for a table are to be applied by the
apply process. If the commit SCN of an LCR from the source
database is less than or equal to this instantiation SCN,
then the apply process discards the LCR. Else, the apply
process applies the LCR.
connect STRMADMIN/STRMADMIN@destination
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
SOURCE_SCHEMA_NAME => 'SCOTT',
source_database_name => 'REP2',
instantiation_scn => &iscn );
END;
Enter value for iscn:
<Provide the value of SCN that you got from the source database>
Note:In 9i, you must instantiate each table individually.
In 10g recursive=true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
is used for instantiation...
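For reference, a sketch of the 10g call with the recursive option (schema and database names as in this example):

```sql
BEGIN
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn    => &iscn,
    recursive            => TRUE);  -- 10g: also sets the SCN for tables in the schema
END;
/
```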
Section 6 : Start the Capture process
begin
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
end;
/
You must have imported a JKM, and after that these are the steps:
1. Go to source datastrore and click on CDC --> Add to CDC
2. Click on CDC --> Start Journal
3. Now go to the interface Choose the source table and select Journalized data only and then click on ok
4. Now execute the interface
If it still doesn't work, are you using transactions in your interface?
Problem enqueuing with SYS.AQ$_JMS_TEXT_MESSAGE payload
We are trying to enqueue a message into a SYS.AQ$_JMS_TEXT_MESSAGE AQ from java using the standard AQjmsQueueSender. The message arrives onto the queue but the USER_DATA is always 'oracle.sql.STRUCT@5a41ec' instead of the actual message. When we enqueue from PL/SQL the USER_DATA contains the actual message. Any help would be greatly appreciated. Thanks.
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
"CORE 10.2.0.4.0 Production"
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
QUEUE DETAILS
OWNER: EIMGR
QUEUE_TABLE: AQ_VENDORS_IN_TABLE
TYPE: OBJECT
OBJECT_TYPE: SYS.AQ$_JMS_TEXT_MESSAGE
SORT_ORDER: ENQUEUE_TIME
RECIPIENTS: SINGLE
MESSAGE_GROUPING: NONE
COMPATIBLE: 8.1.3
PRIMARY_INSTANCE: 0
SECONDARY_INSTANCE: 0
OWNER_INSTANCE: 1
USER_COMMENT: EI_QUEUE
SECURE: NO
CREATED: 09-09-03
LAST_DDL_TIME: 09-09-03
-------------------------------------------------------------------------------------
We've discovered with PL/SQL that setting any headers on messages causes the same problem that we're seeing from Java. There is no option in Java to send messages without headers (it's part of the JMS spec). This can be seen with the following PL/SQL script (use null instead of HEADER in the message constructor to see it work):
DECLARE
Enqueue_options DBMS_AQ.enqueue_options_t;
Message_properties DBMS_AQ.message_properties_t;
Message_handle RAW(16);
User_prop_array SYS.AQ$_JMS_USERPROPARRAY;
Agent SYS.AQ$_AGENT;
Header SYS.AQ$_JMS_HEADER;
Message SYS.AQ$_JMS_TEXT_MESSAGE;
Message_text VARCHAR2(500);
BEGIN
Agent := SYS.AQ$_AGENT('',NULL,0);
User_prop_array := SYS.AQ$_JMS_USERPROPARRAY();
Header := SYS.AQ$_JMS_HEADER( Agent, '', 'dave', '', '', '', User_prop_array);
Message_text := 'Message from PL/SQL created at '|| TO_CHAR(SYSDATE, 'mm/dd/yyyy hh24:mi:ss');
Message := SYS.AQ$_JMS_TEXT_MESSAGE(Header, LENGTH(Message_text), Message_text, NULL);
DBMS_AQ.ENQUEUE(queue_name => 'AQ_VENDORS_IN',
Enqueue_options => enqueue_options,
Message_properties => message_properties,
Payload => message,
Msgid => message_handle);
COMMIT;
END; -
Applying subset rules in Oracle streams
Hi All,
I am working to configure Streams. I am able to do replication on a table, unidirectional & bidirectional. I am facing a problem with add_subset_rules, as the capture, propagation & apply processes are not showing errors. The following is the script I am using to configure add_subset_rules. Please guide me on what is wrong & how to go about it.
The Global Database Name of the Source Database is POCSRC. The Global Database Name of the Destination Database is POCDESTN. In the example setup, the DEPT table belonging to the SCOTT schema has been used for demonstration purposes.
Section 1 - Initialization Parameters Relevant to Streams
• COMPATIBLE: 9.2.0.
• GLOBAL_NAMES: TRUE
• JOB_QUEUE_PROCESSES : 2
• AQ_TM_PROCESSES : 4
• LOGMNR_MAX_PERSISTENT_SESSIONS : 4
• LOG_PARALLELISM: 1
• PARALLEL_MAX_SERVERS:4
• SHARED_POOL_SIZE: 350 MB
• OPEN_LINKS : 4
• Database running in ARCHIVELOG mode.
Steps to be carried out at the Destination Database (POCDESTN.)
1. Create Streams Administrator :
connect SYS/pocdestn@pocdestn as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
2. Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
3. Create streams queue :
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
4. Add apply rules for the table at the destination database :
BEGIN
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
TABLE_NAME=>'SCOTT.EMP',
STREAMS_TYPE=>'APPLY',
STREAMS_NAME=>'STRMADMIN_APPLY',
QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
DML_CONDITION=>'empno =7521',
INCLUDE_TAGGED_LCR=>FALSE,
SOURCE_DATABASE=>'POCSRC');
END;
5. Specify an 'APPLY USER' at the destination database:
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'SCOTT');
END;
6. BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N' );
END;
7. Start the Apply process :
BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
Section 3 - Steps to be carried out at the Source Database (POCSRC.)
1. Move LogMiner tables from SYSTEM tablespace:
By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.
CREATE TABLESPACE LOGMNRTS DATAFILE 'd:\oracle\oradata\POCSRC\logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
2. Turn on supplemental logging for DEPT table :
connect SYS/password as SYSDBA
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_pk
(empno) ALWAYS;
3. Create Streams Administrator and Grant the necessary privileges :
3.1 Create Streams Administrator :
connect SYS/password as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
3.2 Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
4. Create a database link to the destination database :
connect STRMADMIN/STRMADMIN@pocsrc
CREATE DATABASE LINK POCDESTN connect to
STRMADMIN identified by STRMADMIN using 'POCDESTN';
Verify that the database link works by querying the destination database.
E.g. : select * from global_name@POCDESTN;
5. Create streams queue:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
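As a quick sanity check (not part of the original walkthrough; the views used are standard Oracle data-dictionary objects), the new ANYDATA queue can be verified with a query such as:

```sql
-- Confirm the ANYDATA queue and its queue table exist for STRMADMIN
SELECT q.owner, q.name, q.queue_table, t.object_type
  FROM dba_queues q
  JOIN dba_queue_tables t
    ON t.owner = q.owner AND t.queue_table = q.queue_table
 WHERE q.owner = 'STRMADMIN';
```

The OBJECT_TYPE column should show SYS.ANYDATA for a queue created by DBMS_STREAMS_ADM.SET_UP_QUEUE.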
6. Add capture rules for the table at the source database:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMP',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
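To confirm the capture rules were created (a quick check, not part of the original steps), query the Streams rule view:

```sql
-- List the capture rules just created for SCOTT.EMP
SELECT streams_name, streams_type, table_owner, table_name,
       rule_type, rule_name
  FROM dba_streams_table_rules
 WHERE streams_name = 'STRMADMIN_CAPTURE';
```

You should see one DML and one DDL rule, since both include_dml and include_ddl were set to true.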
7. Add propagation rules for the table at the source database.
This step will also create a propagation job to the destination database.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMP',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@POCDESTN',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
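As an optional check (not in the original note), the propagation and its job can be confirmed on the source database:

```sql
-- Check that the propagation was created and is enabled
SELECT propagation_name, source_queue_name, destination_queue_name,
       destination_dblink, status
  FROM dba_propagation
 WHERE propagation_name = 'STRMADMIN_PROPAGATE';
```

STATUS should read ENABLED; a DISABLED or ABORTED propagation will silently stop LCRs from reaching the destination queue.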
Section 4 - Export, import and instantiation of tables from Source to Destination Database
1. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=DEPT.dmp GRANTS=Y ROWS=Y LOG=exportDEPT.log OBJECT_CONSISTENT=Y INDEXES=Y STATISTICS = NONE
Import into the Destination Database:
Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file.
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y CONSTRAINTS=Y FILE=DEPT.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importDEPT.log STREAMS_INSTANTIATION=Y
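After the import, the instantiation SCN recorded for each table can be checked on the destination database (a hedged sanity check, not part of the original procedure):

```sql
-- On the destination, confirm the instantiation SCN recorded per table
SELECT source_object_owner, source_object_name,
       source_database, instantiation_scn
  FROM dba_apply_instantiated_objects
 WHERE source_object_owner = 'SCOTT';
```

If a table has no row here, the apply process has no instantiation SCN for it and will not apply its LCRs.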
2. If the objects are already present in the destination database, check that they are also consistent at the data level; otherwise the apply process may fail with error ORA-1403 when applying DML to an inconsistent row. There are 2 ways of instantiating the objects at the destination site.
1. By means of Metadata-only export/import :
Export from the Source Database by specifying ROWS=N
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.DEPT FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.EMP FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
For Test table -
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
Import into the destination database using IGNORE=Y
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y FILE=tables.dmp IGNORE=Y
LOG=importTables.log STREAMS_INSTANTIATION=Y
2. By manually instantiating the objects
Get the Instantiation SCN at the source database:
connect STRMADMIN/STRMADMIN@POCSRC
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
Instantiate the objects at the destination database with this SCN value.
The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, the apply process discards the LCR; otherwise, it applies the LCR.
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.DEPT',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.EMP',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
Enter value for iscn:
<Provide the value of SCN that you got from the source database>
Finally start the Capture Process:
connect STRMADMIN/STRMADMIN@POCSRC
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
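Once started, the capture process can be monitored from the source database (a suggested check, not in the original steps):

```sql
-- Verify the capture process is enabled and track its progress
SELECT capture_name, status, captured_scn, applied_scn
  FROM dba_capture
 WHERE capture_name = 'STRMADMIN_CAPTURE';
```

STATUS should show ENABLED; CAPTURED_SCN advancing over time indicates the process is mining the redo stream.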
Please mail me [email protected]
Thanks.
Raghunath

What you are trying to do that you cannot do is unclear. You wrote:
"I am facing problem in add_subset rules as capture, propagation & apply process is not showing error."
Personally I don't consider it a problem when my code doesn't raise an error. So what is not working the way you think it should? Also, what version are you on (to 4 decimal places)?
Register Oracle Streams with OID
I'm trying to set up an Oracle Streams environment that registers queues with OID. I have the database registered and have set GLOBAL_TOPIC_ENABLED=TRUE. The DB is in ARCHIVELOG mode. When I try to set up a queue with:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
I get
Error report:
ORA-00600: internal error code, arguments: [kcbgtcr_5], [52583], [4], [0], [], [], [], []
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 739
ORA-06512: at line 2
00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
*Cause: This is the generic internal error number for Oracle program
exceptions. This indicates that a process has encountered an
exceptional condition.
*Action: Report as a bug - the first argument is the internal error number
Has anyone run into this? I searched Metalink but couldn't find anything. I'm running 10.2.0.1 on Windows 2K.
Thanks in advance.

I have never used Streams with OID, but check Bug 4996133 - OERI[kcbgtcr_5] updating an IOT in RAC environment.
I would consider upgrading the database to 10.2.0.3 - it is the first really stable release of 10gR2.
Regards,
Serge -
Hi,
I'm trying to configure Oracle Streams in one direction (table level).
My source and destination databases are 10.2.0.4; the destination is RAC (three nodes)
and the source database is a single node.
Please help if there is some configuration required for RAC.

Hello,
Please find below the Oracle RAC-specific configuration for implementing an Oracle bidirectional Streams setup.
#Propagation : use the queue_to_queue parameter
-- Assign primary / secondary instance IDs
BEGIN
DBMS_AQADM.ALTER_QUEUE_TABLE(queue_table => 'capture_srctab',
primary_instance => 1,
secondary_instance => 2);
DBMS_AQADM.ALTER_QUEUE_TABLE(queue_table => 'apply_srctab',
primary_instance => 1,
secondary_instance => 2);
END;
All Streams processing is done at the owning instance of the queue used by
the Streams client. To determine the owning instance of each ANYDATA queue
in a database, run the following query:
SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
q.QUEUE_TABLE = t.QUEUE_TABLE AND
q.OWNER = t.OWNER;
#tnsnames.ora
service_name = global_name = db_name
Please find the metalink document
10gR2 Streams Recommended Configuration [ID 418755.1]
Regards
Hitgon