Applying subset rules in Oracle streams
Hi All,
I am working on configuring Streams. I am able to replicate a table unidirectionally and bidirectionally, but I am facing a problem with ADD_SUBSET_RULES: the capture, propagation and apply processes do not show any error. The following is the script I am using to configure ADD_SUBSET_RULES. Please guide me on what is wrong and how to go about it.
The Global Database Name of the Source Database is POCSRC. The Global Database Name of the Destination Database is POCDESTN. In the example setup, the DEPT table belonging to the SCOTT schema has been used for demonstration purposes.
Section 1 - Initialization Parameters Relevant to Streams
• COMPATIBLE: 9.2.0.
• GLOBAL_NAMES: TRUE
• JOB_QUEUE_PROCESSES : 2
• AQ_TM_PROCESSES : 4
• LOGMNR_MAX_PERSISTENT_SESSIONS : 4
• LOG_PARALLELISM: 1
• PARALLEL_MAX_SERVERS:4
• SHARED_POOL_SIZE: 350 MB
• OPEN_LINKS : 4
• Database running in ARCHIVELOG mode.
Section 2 - Steps to be carried out at the Destination Database (POCDESTN)
1. Create Streams Administrator :
connect SYS/pocdestn@pocdestn as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
2. Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
3. Create streams queue :
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
4. Add apply rules for the table at the destination database :
BEGIN
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
TABLE_NAME=>'SCOTT.EMP',
STREAMS_TYPE=>'APPLY',
STREAMS_NAME=>'STRMADMIN_APPLY',
QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
DML_CONDITION=>'empno =7521',
INCLUDE_TAGGED_LCR=>FALSE,
SOURCE_DATABASE=>'POCSRC');
END;
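ADD_SUBSET_RULES creates separate INSERT, UPDATE and DELETE rules per table, with the DML condition folded into each rule condition. A quick way to confirm they were created as expected (a sketch; run as STRMADMIN at the destination):

```sql
-- List the rules created for the EMP subset; the condition
-- (empno = 7521) should appear inside each rule_condition.
SELECT r.rule_name, r.rule_condition
FROM   dba_rules r
WHERE  r.rule_name LIKE 'EMP%';
```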
5. Specify an 'APPLY USER' at the destination database:
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'SCOTT');
END;
6. Set the DISABLE_ON_ERROR apply parameter so the apply process keeps running on error:
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N' );
END;
7. Start the Apply process :
BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
END;
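Once started, it is worth confirming that the apply process is actually running and has no queued errors; silent failures usually surface in these views rather than in the foreground session. A hedged check:

```sql
-- Apply process state at the destination.
SELECT apply_name, status FROM dba_apply;

-- Any LCRs that failed to apply land here with the Oracle error text.
SELECT apply_name, source_database, local_transaction_id, error_message
FROM   dba_apply_error;
```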
Section 3 - Steps to be carried out at the Source Database (POCSRC)
1. Move LogMiner tables from SYSTEM tablespace:
By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.
CREATE TABLESPACE LOGMNRTS DATAFILE 'd:\oracle\oradata\POCSRC\logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
2. Turn on supplemental logging for DEPT table :
connect SYS/password as SYSDBA
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_pk
(empno) ALWAYS;
3. Create Streams Administrator and Grant the necessary privileges :
3.1 Create Streams Administrator :
connect SYS/password as SYSDBA
create user STRMADMIN identified by STRMADMIN default tablespace users;
3.2 Grant the necessary privileges to the Streams Administrator :
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
GRANT SELECT ANY DICTIONARY TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
4. Create a database link to the destination database :
connect STRMADMIN/STRMADMIN@pocsrc
CREATE DATABASE LINK POCDESTN connect to
STRMADMIN identified by STRMADMIN using 'POCDESTN';
Verify that the database link works by querying against the destination database.
E.g.: select * from global_name@POCDESTN;
5. Create streams queue:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
6. Add capture rules for the table at the source database:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.EMP',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
7. Add propagation rules for the table at the source database.
This step will also create a propagation job to the destination database.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.EMP',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@POCDESTN',
include_dml => true,
include_ddl => true,
source_database => 'POCSRC');
END;
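The propagation job runs in the background, so its failures show up in the queue schedule rather than in your session. A sketch of how to check it (9.2/10g view names):

```sql
-- Propagation definition.
SELECT propagation_name, destination_dblink FROM dba_propagation;

-- Job-queue schedule for the propagation; repeated failures and the
-- last error message appear here.
SELECT qname, destination, failures, last_error_msg
FROM   dba_queue_schedules;
```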
Section 4 - Export, import and instantiation of tables from Source to Destination Database
1. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database
Export from the Source Database:
Specify the OBJECT_CONSISTENT=Y clause on the export command.
By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=DEPT.dmp GRANTS=Y ROWS=Y LOG=exportDEPT.log OBJECT_CONSISTENT=Y INDEXES=Y STATISTICS = NONE
Import into the Destination Database:
Specify STREAMS_INSTANTIATION=Y clause in the import command.
By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file.
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y CONSTRAINTS=Y FILE=DEPT.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importDEPT.log STREAMS_INSTANTIATION=Y
2. If the objects are already present in the destination database, check that they are also consistent at the data level; otherwise the apply process may fail with ORA-01403 when applying DML to an inconsistent row. There are two ways of instantiating the objects at the destination site.
1. By means of Metadata-only export/import :
Export from the Source Database by specifying ROWS=N
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.DEPT FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.EMP FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
For Test table -
exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=tables.dmp
ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
Import into the destination database using IGNORE=Y
imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y FILE=tables.dmp IGNORE=Y
LOG=importTables.log STREAMS_INSTANTIATION=Y
2. By manually instantiating the objects
Get the Instantiation SCN at the source database:
connect STRMADMIN/STRMADMIN@POCSRC
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
Instantiate the objects at the destination database with this SCN value.
The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, the apply process discards the LCR; otherwise, it applies the LCR.
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.DEPT',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
connect STRMADMIN/STRMADMIN@POCDESTN
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.EMP',
source_database_name => 'POCSRC',
instantiation_scn => &iscn);
END;
Enter value for iscn:
<Provide the value of SCN that you got from the source database>
Finally start the Capture Process:
connect STRMADMIN/STRMADMIN@POCSRC
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
END;
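After starting the capture, you can watch it move from dictionary initialization to capturing changes, and verify that its SCNs are advancing; a sketch:

```sql
-- Static status and SCN bookkeeping for the capture.
SELECT capture_name, status, captured_scn, applied_scn FROM dba_capture;

-- Runtime state of the capture (LogMiner) process.
SELECT capture_name, state, total_messages_captured
FROM   v$streams_capture;
```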
Please mail me [email protected]
Thanks.
Raghunath
It is unclear what you are trying to do that you cannot do. You wrote:
"I am facing problem in add_subset rules as capture,propagation & apply process is not showing error."
Personally, I don't consider it a problem when my code doesn't raise an error. So what is not working the way you think it should? Also, what version are you on (to four decimal places)?
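When nothing errors but nothing replicates, the usual suspects are a capture that is stalled, an instantiation SCN that discards every LCR, or a subset condition that matches no rows. A hedged first pass for the setup above:

```sql
-- 1. Is the capture actually capturing? (run at the source)
SELECT capture_name, state FROM v$streams_capture;

-- 2. Was the instantiation SCN set, and is it below the LCR commit SCNs?
--    (run at the destination)
SELECT source_object_name, instantiation_scn
FROM   dba_apply_instantiated_objects;

-- 3. Does the subset condition match the rows being changed? (source)
SELECT COUNT(*) FROM scott.emp WHERE empno = 7521;
```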
Similar Messages
-
I want to replicate two tables (A, B) using Oracle Streams from one database to another database using Oracle Streams. One column in table A contains salary information for Hourly employees and salaried employees. I do not want to replicate the salary information for salaried employees in the target system. Is there any way that I can set the salary to null for salaried employees at the same time applying any other changes to the table in the target database?
Any advice is appreciated.
Yes, I did:
SQL> select name,queue_table from user_queues;
NAME QUEUE_TABLE
AQ$_STREAMS_QUEUE_TABLE_E STREAMS_QUEUE_TABLE
STREAMS_QUEUE STREAMS_QUEUE_TABLE
AQ$_SYS$SERVICE_METRICS_TAB_E SYS$SERVICE_METRICS_TAB
SYS$SERVICE_METRICS SYS$SERVICE_METRICS_TAB
AQ$_KUPC$DATAPUMP_QUETAB_E KUPC$DATAPUMP_QUETAB
AQ$_AQ_PROP_TABLE_E AQ_PROP_TABLE
AQ_PROP_NOTIFY AQ_PROP_TABLE
AQ$_AQ_SRVNTFN_TABLE_E AQ_SRVNTFN_TABLE
AQ_SRVNTFN_TABLE_Q AQ_SRVNTFN_TABLE
AQ$_AQ_EVENT_TABLE_E AQ_EVENT_TABLE
AQ_EVENT_TABLE_Q AQ_EVENT_TABLE
The second row in the listing above shows the STREAMS_QUEUE entry. -
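For the salary-masking question at the top of this sub-thread: there is no declarative "set column to null" transformation, but a custom rule-based transformation (10g+) can blank the column as the LCR passes through. A sketch; the function name, rule name, and column names are hypothetical, and in practice you would first inspect the employee-type column via GET_VALUE before blanking:

```sql
-- Hypothetical transformation: null out SALARY in row LCRs for table A
-- before they are applied at the target.
CREATE OR REPLACE FUNCTION strmadmin.null_salary(in_any IN ANYDATA)
RETURN ANYDATA IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  IF lcr.GET_OBJECT_NAME = 'A' THEN
    -- Replace the NEW value of SALARY with a NULL number.
    lcr.SET_VALUE('NEW', 'SALARY', ANYDATA.CONVERTNUMBER(NULL));
  END IF;
  RETURN ANYDATA.CONVERTOBJECT(lcr);
END;
/

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'STRMADMIN.A_APPLY_RULE',  -- hypothetical rule
    transform_function => 'strmadmin.null_salary');
END;
/
```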
Granularity of change rules in CDC (using Oracle streams) at columns level.
Is it possible to implement granularity of change rules in CDC (using Oracle streams) at columns level.
E.g. table abc with columns a1, b1, c1 where c1 is some kind of auditing column. I want to use CDC on table abc but want to ignore changes on c1.
Is it possible? My other option is to split table abc into a child table that has the Primary key and c1 only but it needs additional table and joins.
Thanks
Shyam
The requirement can be implemented by a simple trigger.
You seem to plan to kill a mosquito by using a nuclear bomb.
Sybrand Bakker
Senior Oracle DBA -
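Sketching the trigger approach suggested above (the log table and pk column are hypothetical): a row trigger declared on the business columns only, so churn on the audit column c1 is ignored:

```sql
-- Log changes to abc.a1/abc.b1 only; updates that touch just c1 never fire.
CREATE OR REPLACE TRIGGER abc_change_trg
AFTER INSERT OR DELETE OR UPDATE OF a1, b1 ON abc
FOR EACH ROW
BEGIN
  INSERT INTO abc_change_log (pk, old_a1, new_a1, old_b1, new_b1, changed_at)
  VALUES (COALESCE(:NEW.pk, :OLD.pk),
          :OLD.a1, :NEW.a1, :OLD.b1, :NEW.b1, SYSDATE);
END;
/
```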
Oracle Streaming in same database - 10g
Oracle Streaming in 10g database
Posted: Sep 21, 2006 4:25 AM Reply
Hello,
I am trying to do streaming at table level for all DML changes from one schema (source) to another schema (target), controlled by an admin schema (stream_admin). I have all these schemas in the same database (10g Enterprise Edition).
As given in documentations , I created
1. two queues(in_Q and out_Q) in Stream_admin,
2. two process( capture and apply process) in Stream_admin ,
3. A propagation rule for propagation between in_Q and out_Q,
4. did instantiation,
5. started both capture and apply process.
Having done that, I insert into the source table and check for the same row in the target, but nothing happens; I fail to achieve streaming.
I am not getting any error, neither in the processes nor in the propagation or queues. And all queues, rules and processes are enabled.
Please help.
Datapump uses dbms_metadata extensively.
Problem is twofold
- the amount of data
- why on earth do you need to 'copy' these 1.2 Tb a 'number of times'
One would expect you would have tested the upgrade on a smaller test database. and you wouldn't need to fix bugs in a 1.2 Tb database.
A better idea would be to duplicate the complete database using RMAN. This doesn't perform conventional INSERTs and doesn't create redo.
Sybrand Bakker
Senior Oracle DBA -
Oracle stream - first_scn and start_scn
Hi,
My first_scn is 7669917207423 and start_scn is 7669991182403 in DBA_CAPTURE view.
Once I start the capture, from which SCN will it start capturing from the archive logs?
Regards,
I am using Oracle Streams on version 10.2.0.4. It is an Oracle downstream setup; both the capture and the apply run on the target database.
Regards,
Below is the setup doc.
1.1 Create the Streams Queue
conn STRMADMIN
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'NIG_Q_TABLE',
queue_name => 'NIG_Q',
queue_user => 'STRMADMIN');
END;
1.2 Create apply process for the Schema
BEGIN
DBMS_APPLY_ADM.CREATE_APPLY(
queue_name => 'NIG_Q',
apply_name => 'NIG_APPLY',
apply_captured => TRUE);
END;
1.3 Setting up parameters for Apply
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
/********** STEP 2.- Downstream capture process *****************/
2.1 Create the downstream capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE (
queue_name => 'NIG_Q',
capture_name => 'NIG_CAPTURE',
rule_set_name => null,
start_scn => null,
source_database => 'PNID.LOUDCLOUD.COM',
use_database_link => true,
first_scn => null,
logfile_assignment => 'IMPLICIT');
END;
2.2 Setting up parameters for Capture
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
2.3 Add the table level rule for capture
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'NIG.BUILD_VIEWS',
streams_type => 'CAPTURE',
streams_name => 'NIG_CAPTURE',
queue_name => 'STRMADMIN.NIG_Q',
include_dml => true,
include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
/**** Step 3 : Initializing SCN on Downstream database—start from here *************/
import
=================
impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
/********** STEP 4.- Start the Apply process ********************/
sqlplus STRMADMIN
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY'); -
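For a downstream capture like the one above, a common follow-up check is whether archived logs from the source are being registered and mined at the downstream site; a sketch (10.2 view names):

```sql
-- Logs shipped from the source and registered for the downstream capture.
SELECT consumer_name, name, first_scn, next_scn, dictionary_begin
FROM   dba_registered_archived_log
ORDER  BY first_scn;
```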
Setup between Oracle streams and MQ
Hi All,
I am trying to create the setup between Oracle Streams and Message Queuing (MQ). I have already installed the MQ client on my machine and the messaging gateway is also working fine. Following the setup docs, I created a user with all the grants they list, created the queue table, queue, DML handler and table rules, and started the queue; after that we set up the capture and apply processes for that user. In other words, the entire setup is in place, but our data is not propagating messages to MQ.
So if anyone has specific setup code, please provide it, or any suggestions you can give.
Thanks & regards
Sanjeev
Hi Sanjeev,
There is a special forum dedicated to Oracle Streams: Streams
You may want to try there.
Extra tip: post the code you used. That way it is easier for people to see what you might have done wrong.
Regards,
Rob. -
Oracle Streams and CLOB column
Hi there,
We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit". My question is "Does Oracle Streams captures, propagates (source capture method) and applies CLOB column changes?"
If yes, is this default behavior? Can we tell Streams to exclude the CLOB column from the whole (capture-propagate-apply) process?
Thanks in advance!
You can exclude columns via a rule (dbms_streams_adm.delete_column).
CLOBs are captured.
http://download.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#i1006263 -
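A sketch of the delete_column approach mentioned above, assuming 10.2+ and a table rule named EMP_RULE on a table with a CLOB column named NOTES (both names hypothetical):

```sql
-- Strip the CLOB column from captured LCRs so it is never
-- propagated or applied.
BEGIN
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'STRMADMIN.EMP_RULE',
    table_name  => 'SCOTT.EMP',
    column_name => 'NOTES',
    value_type  => '*');  -- remove from both OLD and NEW values
END;
/
```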
Oracle stream - Downstream new table setup
Hi,
I want to add a new table to my existing Oracle Streams setup. Below are the steps. Is this OK?
1) stop apply/capture
2) Add a new rule to the existing capture, which internally will also call DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION (I guess).
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'NIG.BUILD_VIEWS',
streams_type => 'CAPTURE',
streams_name => 'NIG_CAPTURE',
queue_name => 'STRMADMIN.NIG_Q',
include_dml => true,
include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
3) Import (which will instantiate the table):
impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
4) start apply/capture
Have you applied this way? What was the result?
regards -
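A hedged outline for the add-a-table question above: the key extra step is setting the instantiation SCN for the new table at the apply site, so the apply process knows which LCRs to discard. The SCN value shown is a bind placeholder, not a literal:

```sql
-- After ADD_TABLE_RULES and the import, record the instantiation SCN for
-- the new table (use the SCN the import recorded, or the value of
-- DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER taken at the source).
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'NIG.BUILD_VIEWS',
    source_database_name => 'PNID.LOUDCLOUD.COM',
    instantiation_scn    => :iscn);  -- SCN captured at the source
END;
/
```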
Doubt in Oracle Streams. Can you please help me in understanding the terms:
1. Message
2.User defined Event
3. Event
4.Rules
5.Oracle supplied PL/SQL packages
6. Subscriber, Consumer
Hi
Message
A message is the smallest unit of information that is inserted into and retrieved from a queue.
Queue
A queue is a repository for messages. Queues are stored in queue tables.
Enqueue
To place a message in a queue.
Dequeue
To consume a message.
Agent
An agent is an end user or application that uses a queue.
Thanks
Venkat -
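The terms above fit together in a few lines of PL/SQL; a minimal sketch using a RAW-payload queue (queue and queue-table names are made up):

```sql
-- One-time setup: a queue with a RAW payload.
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'demo_qt',
    queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'demo_q', queue_table => 'demo_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'demo_q');
END;
/

-- One agent enqueues a message; a consumer agent dequeues it.
DECLARE
  enq_opts DBMS_AQ.ENQUEUE_OPTIONS_T;
  deq_opts DBMS_AQ.DEQUEUE_OPTIONS_T;
  props    DBMS_AQ.MESSAGE_PROPERTIES_T;
  msgid    RAW(16);
  payload  RAW(100) := UTL_RAW.CAST_TO_RAW('hello');
  got      RAW(100);
BEGIN
  DBMS_AQ.ENQUEUE('demo_q', enq_opts, props, payload, msgid);
  COMMIT;
  DBMS_AQ.DEQUEUE('demo_q', deq_opts, props, got, msgid);
  COMMIT;
END;
/
```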
Help on Oracle streams 11g configuration
Hi Streams experts
Can you please validate the following creation process steps ?
What I need Streams to do is one-way replication of the AR
schema from one database to another database. Both DML and DDL
changes shall be replicated.
Help on Oracle streams 11g configuration. I would also need your help
on the maintenance steps, controls and procedures
2 databases
1 src as source database
1 dst as destination database
replication type 1 way of the entire schema FaeterBR
Step 1. Set all databases in archivelog mode.
Step 2. Change initialization parameters for Streams. The Streams pool
size and NLS_DATE_FORMAT require a restart of the instance.
SQL> alter system set global_names=true scope=both;
SQL> alter system set undo_retention=3600 scope=both;
SQL> alter system set job_queue_processes=4 scope=both;
SQL> alter system set streams_pool_size= 20m scope=spfile;
SQL> alter system set NLS_DATE_FORMAT=
'YYYY-MM-DD HH24:MI:SS' scope=spfile;
SQL> shutdown immediate;
SQL> startup
Step 3. Create Streams administrators on the src and dst databases,
and grant required roles and privileges. Create default tablespaces so
that they are not using SYSTEM.
---at the src
SQL> create tablespace streamsdm datafile
'/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
---at the replica:
SQL> create tablespace streamsdm datafile
'/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
---at both sites:
SQL> create user streams_adm
identified by streams_adm
default tablespace strepadm01
temporary tablespace temp;
SQL> grant connect, resource, dba, aq_administrator_role to
streams_adm;
SQL> BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
grantee => 'streams_adm',
grant_privileges => true);
END;
Step 4. Configure the tnsnames.ora at each site so that a connection
can be made to the other database.
Step 5. With the tnsnames.ora squared away, create a database link for
the streams_adm user at both SRC and DST. With the init parameter
global_names set to TRUE, the db_link name must be the same as the
global_name of the database you are connecting to. Use a SELECT from
the table global_name at each site to determine the global name.
SQL> select * from global_name;
SQL> connect streams_adm/streams_adm@SRC
SQL> create database link DST
connect to streams_adm identified by streams_adm
using 'DST';
SQL> select sysdate from dual@DST;
SQL> connect streams_adm/streams_adm@DST
SQL> create database link SRC
connect to streams_adm identified by streams_adm
using 'SRC';
SQL> select sysdate from dual@SRC;
Step 6. Control what schema shall be replicated
FaeterBR is the schema to be replicated
Step 7. Add supplemental logging to the FaeterBR schema on all the
tables?
SQL> Alter table FaeterBR.tb1 add supplemental log data
(ALL) columns;
SQL> alter table FaeterBR.tb2 add supplemental log data
(ALL) columns;
etc...
Step 8. Create Streams queues at the primary and replica database.
---at SRC (primary):
SQL> connect stream_admin/stream_admin@ORCL
SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streams_adm.FaeterBR_src_queue_table',
queue_name => 'streams_adm.FaeterBR_src_queue');
END;
---At DST (replica):
SQL> connect stream_admin/stream_admin@STR10
SQL> BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'stream_admin.FaeterBR_dst_queue_table',
queue_name => 'stream_admin.FaeterBR_dst_queue');
END;
Step 9. Create the capture process on the source database (SRC).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name =>'FaeterBR',
streams_type =>'capture',
streams_name =>'FaeterBR_src_capture',
queue_name =>'FaeterBR_src_queue',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database => NULL,
inclusion_rule => true);
END;
Step 10. Instantiate the FaeterBR schema at DST by doing export/
import. Can I use Data Pump to do that now?
---AT SRC:
exp system/superman file=FaeterBR.dmp log=FaeterBR.log
object_consistent=y owner=FaeterBR
---AT DST:
---Create FaeterBR tablespaces and user:
create tablespace FaeterBR datafile
'/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
create tablespace ws_app_idx datafile
'/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
create user FaeterBR identified by FaeterBR_
default tablespace FaeterBR_
temporary tablespace temp;
grant connect, resource to FaeterBR;
imp system/123db file=FaeterBR_.dmp log=FaeterBR.log fromuser=FaeterBR
touser=FaeterBR streams_instantiation=y
Step 11. Create a propagation job at the source database (SRC).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name =>'FaeterBR',
streams_name =>'FaeterBR_src_propagation',
source_queue_name =>'stream_admin.FaeterBR_src_queue',
destination_queue_name=>'stream_admin.FaeterBR_dst_queue@dst',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database =>'SRC',
inclusion_rule =>true);
END;
Step 12. Create an apply process at the destination database (DST).
SQL> BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name =>'FaeterBR',
streams_type =>'apply',
streams_name =>'FaeterBR_Dst_apply',
queue_name =>'FaeterBR_dst_queue',
include_dml =>true,
include_ddl =>true,
include_tagged_lcr =>false,
source_database =>'SRC',
inclusion_rule =>true);
END;
Step 13. Create substitution key columns for all the tables of the
FaeterBR schema on DST that don't have a primary key.
The column combination must provide a unique value for Streams.
SQL> BEGIN
DBMS_APPLY_ADM.SET_KEY_COLUMNS(
object_name =>'FaeterBR.tb2',
column_list =>'id1,names,toys,vendor');
END;
Step 14. Configure conflict resolution at the replication db (DST).
Is there any easier method applicable to the whole schema?
DECLARE
cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
cols(1) := 'id';
cols(2) := 'names';
cols(3) := 'toys';
cols(4) := 'vendor';
DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
object_name =>'FaeterBR.tb2',
method_name =>'OVERWRITE',
resolution_column=>'FaeterBR',
column_list =>cols);
END;
Step 15. Enable the capture process on the source database (SRC).
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'FaeterBR_src_capture');
END;
Step 16. Enable the apply process on the replication database (DST).
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'FaeterBR_DST_apply');
END;
Step 17. Test streams propagation of rows from source (src) to
replication (DST).
AT ORCL:
insert into FaeterBR.tb2 values (
31000, 'BAMSE', 'DR', 'DR Lejetoej');
AT STR10:
connect FaeterBR/FaeterBR
select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
Any other test that can be made?
Check the metalink doc 301431.1 and validate:
How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Cheers. -
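To validate the schema-level setup above end to end, one convenient 10g+ view collects the capture, propagation and apply rules in one place (a sketch):

```sql
-- All Streams rules, grouped by the process that owns them; check that
-- each process has the schema rules you expect.
SELECT streams_type, streams_name, rule_name
FROM   dba_streams_rules
ORDER  BY streams_type, streams_name;
```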
Oracle streams configuration problem
Hi all,
I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
ERROR at line 1:
ORA-01353: existing Logminer session
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
ORA-06512: at line 1
When checking some docs, they say I have to destroy all LogMiner sessions, but when I look in the v$session view I cannot identify a LogMiner session. If someone can help me: I need this Streams tool for schema synchronization between my production database and my data warehouse database.
What I want to know is how to destroy or stop the LogMiner session.
Thnaks for your help
regards
raitsarevo
Thanks Werner, it's OK now, my problem is solved; below is the output of your script.
It would also help if you have any docs or advice for my database schema synchronization: is using Oracle Streams best, or can I use anything else (but not the Data Guard concept or a standby database), because I only want to apply DML changes, not DDL? If you have docs for Oracle Streams, especially for schema (not table) synchronization, please share them.
many thanks again, and please send to my email address [email protected] if needed
ABILLITY>DELETE FROM system.logmnr_uid$;
1 row deleted.
ABILLITY>DELETE FROM system.logmnr_session$;
1 row deleted.
ABILLITY>DELETE FROM system.logmnrc_gtcs;
0 rows deleted.
ABILLITY>DELETE FROM system.logmnrc_gtlo;
13 rows deleted.
ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
PL/SQL procedure successfully completed.
regards
raitsarevo -
How to apply business rules to patch missing data
Hello Experts,
I am very new to SQL and I have no idea how to apply the rules below.
Data is polled every 5 minutes, so there are 12 time intervals per hour:
6:00 --> 1
6:05 --> 2
6:50 --> 11
6:55 --> 12
The patching rules (missing value / patching action / rule number):
Rule 0 - No values missing: no patching.
Rule 1.1 - Time interval 1 is null: patch it with the value from time interval 2, subject to this not being null.
Rule 1.2 - Time interval 12 is null: patch it with the value of time interval 11, subject to this not being null.
Rule 1.3 - Time intervals 1 and 2 are null: patch both with the value of time interval 3, subject to this not being null.
Rule 1.4 - Two consecutive time intervals (excluding both 1 & 2 or both 11 & 12) are null, e.g. intervals 3 and 4: average the preceding and succeeding intervals around the 2 missing values; if intervals 3 and 4 are null they both become the average of intervals 2 and 5.
Rule 1.5 - Time intervals 11 and 12 are missing: patch both with the value of time interval 10, subject to this not being null.
Rule 1.6 - Some intervals between 2 and 11 are null, with 6 or more non-null intervals: patch each with the average of interval - 1 and interval + 1, subject to these not being null; e.g. if interval 5 is null it is patched with the average of intervals 4 and 6. N.b. this rule can apply up to a maximum of 5 times.
Rule 2.1 - Three consecutive time intervals are missing: set all time intervals for the period to null.
Rule 2.2 - More than 6 time intervals are null: set all time intervals for the period to null.
This will be more info table structure
CREATE TABLE DATA_MIN (
DAYASNUMBER INTEGER,
TIMEID INTEGER,
COSIT INTEGER,
LANEDIRECTION INTEGER,
VOLUME INTEGER,
AVGSPEED INTEGER,
PMLHGV INTEGER,
CLASS1VOLUME INTEGER,
CLASS2VOLUME INTEGER,
CLASS3VOLUME INTEGER,
LINK_ID INTEGER
);
Sample data:
DAYASNUMBER TIMEID COSIT LANEDIRECTION VOLUME AVGSPEED PMLHGV CLASS1vol LINK_ID
20140110 201401102315 5 1 47 12109 0 45 5001
20140110 201401102325 5 1 33 12912 0 29 5001
20140110 201401102330 5 1 39 14237 0 37 5001
20140110 201401102345 5 1 45 12172 0 42 5001
20140110 201401102350 5 1 30 12611 0 29 5001
20140111 201401100000 5 1 30 12611 0 29 5001
Expected output for the sample data above (something like):
DAYASNUMBER TIMEID COSIT LANEDIRECTION VOLUME AVGSPEED PMLHGV CLASS1 LINK_ID Rule
20140110 201401102315 5 1 47 12109 0 45 5001 0
20140110 201401102320 5 1 40 12109 0 45 5001 1.4 (patched row)
20140110 201401102325 5 1 33 12912 0 29 5001 0
20140110 201401102330 5 1 39 14237 0 37 5001 0
20140110 201401102335 5 1 42 14237 0 37 5001 1.4 (patched row)
20140110 201401102345 5 1 45 12172 0 42 5001 0
20140110 201401102350 5 1 30 12611 0 29 5001 0
20140110 201401102355 5 1 30 12611 0 29 5001 1.2 (patched row)
20140111 201401100000 5 1 30 12611 0 29 5001
Any help and suggestions to extend the code to achieve would be greatly appreciate.
Note:Here the key value is Volume to be patched for missed time intervals
Thanks in advance.
row_number() OVER (PARTITION BY LANEDIRECTION ORDER BY TIMEID) AS RN, DAYASNUMBER, to_date(timeid,'yyyymmdd hh24miss') as cte, COSIT,
Are you in the right place? to_date is an Oracle function, and this forum is for Microsoft SQL Server which is a different product.
Erland Sommarskog, SQL Server MVP, [email protected] -
Junk Mail: applying custom rules disables junk database
Using default settings, my junk mail filter works most of the time. But there are several persistent streams of spam that never get recognized as junk, and custom rules would identify them easily. But every time I try, suddenly all the other junk that was being flagged comes through to my inbox. I've selected "Trust junk mail headers in messages" and "Filter junk mail before applying my rules", but it has no effect.
Also, the minute I add custom rules, "Erase Junk Mail" greys out and cannot be selected anymore. I have to first send my junk box to trash, then erase my trash.
I've read the help files and they don't address this at all. (Same problem in 10.6)
I don't have 3.0 on my computer, but go to the Mail menu and choose Preferences.
Then click on Junk Mail, then on Advanced, and see what color is chosen in the setting and the conditions under which it is set.
Oracle Streams 9i Database Link Error
Can anyone help me with Oracle Streams 9i?
Oracle Version: 9.2.0.5.0
Initial parameters:
global_names=true
aq_tm_processes=1
job_queue_processes=10
log_parallelism=1 scope=spfile
database is in archivelog;
I executed the following SQL statements:
CONNECT sys/xxxxx@db1 AS SYSDBA
CREATE USER strmadmin IDENTIFIED BY strmadminpw
DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
CONNECT strmadmin/strmadminpw@db1
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
CREATE DATABASE LINK db2 CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'DB2';
CONNECT sys/xxxxx@db2 AS SYSDBA
CREATE USER strmadmin IDENTIFIED BY strmadminpw
DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
CONNECT strmadmin/strmadminpw@db2
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
CONNECT sys/xxxxx@db2 as sysdba
GRANT ALL ON scott.dept TO strmadmin;
CONNECT sys/xxxxx@db1 AS SYSDBA
CREATE TABLESPACE logmnr_ts DATAFILE 'D:\oracle\oradata\db1\logmnr01.dbf'
SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_ts');
CONNECT sys/xxxxx@db1 AS SYSDBA
ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP log_group_dept_pk (deptno) ALWAYS;
CONNECT strmadmin/strmadminpw@db1
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'scott.dept',
streams_name => 'db1_to_db2',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue@db2',
include_dml => true,
include_ddl => true,
source_database => 'db1');
END;
CONNECT strmadmin/strmadminpw@db1
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'scott.dept',
streams_type => 'capture',
streams_name => 'capture_simp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true);
END;
I did this exp/imp for instantiation:
exp userid=scott/tiger@db1 FILE=dept_instant.dmp TABLES=dept OBJECT_CONSISTENT=y ROWS=n
imp userid=scott/tiger@db2 FILE=dept_instant.dmp IGNORE=y COMMIT=y LOG=import.log STREAMS_INSTANTIATION=y
and then
CONNECT sys/xxxxx@db2 AS SYSDBA
ALTER TABLE scott.dept DROP SUPPLEMENTAL LOG GROUP log_group_dept_pk;
CONNECT strmadmin/strmadminpw@db2
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'scott.dept',
streams_type => 'apply',
streams_name => 'apply_simp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'db1');
END;
CONNECT strmadmin/strmadminpw@db2
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_simp',
parameter => 'disable_on_error',
value => 'n');
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_simp');
END;
CONNECT strmadmin/strmadminpw@db1
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_simp');
END;
Here is my problem:
After I configured it, Streams still does not replicate. When I check the EM GUI for Streams monitoring, it displays a broken database link and says that the database link is not active.
Can anyone please help?
Thanks.
I don't know what Datacomp is, but you should be able
to connect to any non-Oracle database if there is
an ODBC driver for it. You would use Generic
Heterogeneous Services to do this. See the following
link to the documentation for details:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96544/gencon.htm#1656
There are also several articles on the Net. Do a google
search on oracle generic heterogeneous services /
connectivity.
Hope this helps.
Kailash. -
Oracle Streams 'ORA-25215: user_data type and queue type do not match'
I am trying replication between two databases (10.2.0.3) using Oracle Streams.
I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
The main steps are:
1. Set up ARCHIVELOG mode.
2. Set up the Streams administrator.
3. Set initialization parameters.
4. Create a database link.
5. Set up source and destination queues.
6. Set up supplemental logging at the source database.
7. Configure the capture process at the source database.
8. Configure the propagation process.
9. Create the destination table.
10. Grant object privileges.
11. Set the instantiation system change number (SCN).
12. Configure the apply process at the destination database.
13. Start the capture and apply processes.
For step 5, I used the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. This procedure creates a queue table and an associated queue.
The problem is that, in the propagation process, I get this error:
'ORA-25215: user_data type and queue type do not match'
I have checked it, and the queue table and its associated queue are created as shown:
sys.dbms_aqadm.create_queue_table (
queue_table => 'CAPTURE_SFQTAB'
, queue_payload_type => 'SYS.ANYDATA'
, sort_list => ''
, COMMENT => ''
, multiple_consumers => TRUE
, message_grouping => DBMS_AQADM.TRANSACTIONAL
, storage_clause => 'TABLESPACE STREAMSTS LOGGING'
, compatible => '8.1'
, primary_instance => '0'
, secondary_instance => '0');
sys.dbms_aqadm.create_queue(
queue_name => 'CAPTURE_SFQ'
, queue_table => 'CAPTURE_SFQTAB'
, queue_type => sys.dbms_aqadm.NORMAL_QUEUE
, max_retries => '5'
, retry_delay => '0'
, retention_time => '0'
, COMMENT => '');
The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
I would be really grateful for any ideas!
Hi,
You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
DECLARE
rc BINARY_INTEGER;
BEGIN
DBMS_AQADM.VERIFY_QUEUE_TYPES(
src_queue_name => 'np_out_onlinex',
dest_queue_name => 'np_out_onlinex',
rc => rc,
destination => 'scnp.pfa.dk',
transformation => 'TransformDim2JMS_001x');
DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
END;
/
If you don't have transformations and/or a remote destination, then delete those parameters.
Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified and what has not.
regards
Mette