RDBMS replication
Hi,
I have ACS 3.3 in a test bed and I have set up RDBMS replication to import and create user accounts from the accountactions.csv file that is part of ACS. I have tested this with 3 or 4 accounts and all is working OK: it creates the accounts and assigns them a PAP password using the appropriate action codes. However, when I try to use more action codes to assign the user to a group, assign a dial-in access filter, and assign a CLID number, it doesn't work. There is no error; the username and password work, but the other changes don't appear. I have attached a sample accountactions.csv file that I have used. If anyone else has had an experience like this or knows what the issue is, I would very much appreciate the help.
Rgds,
Russell.
It's important that the status column has a zero on the end.
I've modified the file to include the zero, and it appears to work now; it's attached. The status column value determines whether the row is processed or not: 0 means process it. The omission of a valid value means it doesn't get processed.
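As a sanity check before feeding the file to RDBMS synchronization, the status field can be normalized in a few lines. This is a hypothetical helper, not part of ACS, and it assumes the status field is the last column of each accountActions row:

```python
import csv
import io

def mark_rows_for_processing(csv_text):
    """Return CSV text with the status field (assumed to be the
    last column of each row) forced to '0', so that RDBMS sync
    picks the row up; rows with a missing status are otherwise skipped."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    for row in rows:
        if row:                      # ignore blank lines
            row[-1] = "0"            # 0 == "process this action"
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

sample = "1,user1,ADD_USER,pap_password,\n2,user2,ADD_USER,pap_password,9"
print(mark_rows_for_processing(sample))  # both rows now end in ,0
```

Running the file through a check like this before each import avoids the silent "row ignored" behaviour described above.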
Similar Messages
-
(V7.3) Q&A on RDBMS 7.3 Enterprise Edition NEW FEATURES
Product: ORACLE SERVER
Date written: 2004-08-13
(V7.3) Q&A on RDBMS 7.3 Enterprise Edition NEW FEATURES
=========================================================
1. Q) I would like a brief summary of the new features in Oracle 7 Release 7.3.
A) They can be summarized as follows.
New features of 7.3.3 are:
direct load to cluster
ability to use backups from before RESETLOGS
New features of 7.3 are:
histograms
hash joins
star join enhancement
standby databases
parallel union-all
dynamic init.ora configuration
direct path export
compiled triggers
fast create index
multiple LRU latches
updatable join views
PL/SQL cursor variable enhancement
replication enhancement
OPS processor affinity
SQL*Net 2 load balancing
XA scaling/recovery
thread safe pro*c/oci
DB verify
new pl/sql packages
new pl/sql features
bitmap indexes
2. Q) What new parallel features are there in Oracle 7 Releases 7.2 and 7.3?
A) Oracle 7 Parallel Query provides the following parallel operations:
> Parallel Data Loading : conventional and direct-path, to the same
table or multiple tables concurrently.
> Parallel Query : table scans, sorts, joins, aggregates, duplicate
elimination, UNION and UNION ALL(7.3)
> Parallel Subqueries : in INSERT, UPDATE, DELETE statements.
> Parallel Execution : of application code(user-defined SQL functions)
> Parallel Joins :
nested loop,
sort-merge,
star join optimization(creation of cartesian products plus the
nested loop join),
hash joins(7.3).
> Parallel Anti-Joins : NOT IN(7.3).
> Parallel Summarization(CREATE TABLE AS SELECT) :
query and insertion of rows into a rollup table.
> Parallel Index Creation(CREATE INDEX) :
table scans, sorts, index fragment construction.
3. Q) What optimization features were added in Releases 7.2 and 7.3?
A) The following features were added.
1> Direct Database Reads
Parallel query processes must scan very large tables to perform operations such as filtering, sorting, and joins. Direct database reads enable contiguous memory reads, improving read efficiency and performance. They also bypass the buffer cache to eliminate the contention that would otherwise arise with concurrent OLTP work.
2> Direct Database Writes
Parallel query processes often have to write results to disk, such as intermediate sort runs, summarization (CREATE TABLE AS SELECT), and index creation (CREATE INDEX).
Direct database writes turn contiguous memory writes directly into contiguous disk writes, improving write efficiency and performance.
They also bypass the buffer cache to eliminate contention with concurrent OLTP work and the DBWR process.
In short, direct database reads and writes let the Oracle 7 server be tuned separately and optimally while balancing the mixed load of concurrent OLTP and DSS work.
3> Asynchronous I/O
Oracle 7 already provides asynchronous writes for sorts, summarization, index creation, and direct-path loading.
Starting with Release 7.3, an asynchronous read-ahead capability is also provided, maximizing the overlap of processing and I/O for better performance.
4> Parallel Table Creation
The CREATE TABLE ... AS SELECT ... syntax is provided to create a temporary table holding the queried result of a large table of detailed data.
This is typically used during drill-down analysis to store the results of intermediate operations.
5> Support for the Star Query Optimization
When a star schema is present, Oracle 7 invokes the star query optimization to improve execution speed. A star query first joins several small tables, and then joins that result to one large table.
6> Intelligent Function Shipping
Starting with Release 7.3, the coordinator process for a parallel query is aware of the affinity between the disks and the data on the nodes of a non-shared-memory machine (a cluster or MPP).
Based on this, the coordinator can assign parallel query operations to the processes running against a particular node-disk pair, so that data does not need to travel across the machine's shared interconnect.
This improves efficiency, performance, and scalability, delivering the benefits of a 'shared nothing' software architecture without the associated cost or overhead.
7> Histograms
Starting with Release 7.3, the Oracle optimizer can use more information about the distribution of data values within a table's columns. A histogram of values and their relative frequencies gives the optimizer information about the relative 'selectivity' of an index and a better idea of which index to use. The right choice can cut minutes, or even hours, from a query's execution time.
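As a toy illustration (plain Python, not the optimizer's actual algorithm), a value/frequency histogram makes it possible to estimate the selectivity of an equality predicate, which is exactly the kind of signal described above:

```python
from collections import Counter

def selectivity(column_values, literal):
    """Estimate the fraction of rows matching `col = literal`
    from a frequency histogram of the column's values."""
    hist = Counter(column_values)          # value -> frequency
    return hist[literal] / len(column_values)

# A skewed column: the same equality predicate can be very
# selective (index-friendly) or not selective at all (full scan).
col = ["common"] * 98 + ["rare"] * 2
print(selectivity(col, "rare"))    # selective: an index pays off
print(selectivity(col, "common"))  # not selective: scanning is cheaper
```

Without the histogram, an optimizer can only assume a uniform distribution, which misjudges exactly these skewed cases.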
8> Parallel Hash Joins
Starting with Release 7.3, Oracle 7 provides hash joins to reduce join processing time. The hashing technique removes the need to sort data for a join and works 'on the fly', without relying on existing indexes.
This speeds up the small-to-large table joins that are typical of star schema databases.
9> Parallel UNION and UNION ALL
Starting with Release 7.3, Oracle 7 can execute queries that use the UNION and UNION ALL set operators fully in parallel. These operators make it much easier to break large tables into collections of smaller tables for processing.
4. Q) What products are included in Release 7.3?
A) The product list for Oracle 7 Server Release 7.3.3 is as follows.
Note, however, that not all platforms support all of the listed products.
[ Product ] [ Revision ]
Advanced replication option 7.3.3.0.0
Parallel Query Option 7.3.3.0.0
Parallel Server Option 7.3.3.0.0
Oracle 7 Server 7.3.3.0.0
Distributed Database Option 7.3.3.0.0
Oracle*XA 7.3.3.0.0
Oracle Spatial Data Option 7.3.3.0.0
PL/SQL 2.3.3.0.0
ICX 7.3.3.0.0
OWSUTL 7.3.3.0.0
Slax 7.3.3.0.0
Context Option 2.0.4.0.0
Pro*C 2.2.3.0.0
Pro*PL/I 1.6.27.0.0
Pro*Ada 1.8.3.0.0
Pro*COBOL 1.8.3.0.0
Pro*Pascal 1.6.27.0.0
Pro*FORTRAN 1.8.3.0.0
PRO*CORE 1.8.3.0.0
Sqllib 1.8.3.0.0
Codegen 7.3.3.0.0
Oracle CORE 2.3.7.2.0
SQL*Module Ada 1.1.5.0.0
SQL*Module C 1.1.5.0.0
Oracle CORE 3.5.3.0.0
NLSRTL 2.3.6.1.0
Oracle Server Manager 2.3.3.0.0
Oracle Toolkit II(Dependencies of svrmgr) DRUID 1.1.7.0.0
Multi-Media APIs(MM) 2.0.5.4.0
OACORE 2.1.3.0.0
Oracle*Help 2.1.1.0.0
Oracle 7 Enterprise Backup Utility 2.1.0.0.2
NLSRTL 3.2.3.0.0
SQL*Plus 3.3.3.0.0
Oracle Trace Daemon 7.3.3.0.0
Oracle MultiProtocol Interchange 2.3.3.0.0
Oracle DECnet Protocol Adapter 2.3.3.0.0
Oracle LU6.2 Protocol Adapter 2.3.3.0.0
Oracle Names 2.0.3.0.0
Advanced Networking Option 2.3.3.0.0
Oracle TCP/IP Protocol Adapter 2.3.3.0.0
Oracle Remote Operations 1.3.3.0.0
Oracle Named Pipes Protocol Adapter 2.3.3.0.0
Oracle Intelligent Agent 7.3.3.0.0
SQL*Net APPC 2.3.3.0.0
SQL*Net/DCE 2.3.3.0.0
Oracle OSI/TLI Protocol Adapter 2.3.3.0.0
Oracle SPX/IPX Protocol Adapter 2.3.3.0.0
NIS Naming Adapter 2.3.3.0.0
NDS Naming Adapter 2.3.3.0.0
Oracle Installer 4.0.1
P.S. I have checked the CD-ROM itself by doing the installation in our classroom on a Windows XP Pro machine and it loaded like a charm. I have been emailing and calling Compaq customer support on this issue, and I've done everything they've suggested, including a quick format of the hard drive. I am still getting the same results. I have been able to load a small program (Palm Desktop) on the Compaq without a problem, so I don't think it's the CD drive that's the problem, either. Thanks for any help you can give me!!! Deborah
-
Replication in SAP R/3 on SQL server
hi,
We want to read R/3 data for a legacy system. I know there are many ways to do this, but are there any limitations on one-way replication of data from SAP R/3 to a middleware database?
We use MS SQL Server as the R/3 RDBMS.
Thanks a lot in advance,
I would not set up traditional replication on the SAP DB.
Instead, it would be better to set up either a DTS package (on SQL 2000) or an Integration Services package (SQL 2005) to collect the data via a direct read query.
We have several 3rd-party tools for which we pull data directly from the DB, but make sure the user has read-only access to the data so that the developer who writes the code cannot make any changes by mistake. -
GoldenGate replication of creating users, roles
System Specs:
O/S : RHEL 5 (Tikanga)
RDBMS: 11.2.0.3 (Standalone, ASM, Archivelog)
GoldenGate v11.1.1.0 (getting ready to upgrade to v11.1.1.2; want to utilize ADD SCHEMATRANDATA and the updated sequence support)
I have read the documentation in GG that says DDL support for Oracle restricted schemas (including SYS and SYSTEM) is not provided. According to that document, a CREATE statement is considered DDL.
However, I would think that when a user or role is created on the source, you want that action replicated to the targets, so you don't have to rerun the action on however many targets you have. This is the benefit of replication, correct?
Without this ability, replication is somewhat restrictive.
Can someone please shed some light on how a user/role statement could be replicated?
Thanks!
Jason
Okay, so I reviewed the documentation and need some help replicating DDL for 2 schemas.
Here are my Extract and Replicat parameter files:
EXTRACT EXT1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST db2, MGRPORT 7809
RMTTRAIL /home/oracle/goldengate/dirdat/gg
---ddl---
DDL INCLUDE MAPPED
---dml---
table schema1.*;
table exclude schema1.*_sq;
REPLICAT REP1
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
DDL INCLUDE MAPPED
DDLERROR DEFAULT IGNORE RETRYOP
REPERROR (1403, DISCARD)
MAP schema1.*, TARGET schema1.* ;
MAP schema2.*, TARGET schema2.*;
My issue is that DDL replication is only working for schema1. No DDL is being replicated for schema2. I know there must be something small I am overlooking.
I was successful with
DDL INCLUDE ALL, EXCLUDE "schema3.*", EXCLUDE "schema4.*", EXCLUDE "schema5.*", etc...
However, I don't want to list out every schema that could potentially perform DDL. I would think DDL INCLUDE MAPPED should then include just those schemas that you want mapped.
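For what it's worth, DDL INCLUDE MAPPED scopes DDL capture to objects that match a TABLE statement (on the Extract) or a MAP statement (on the Replicat), and the Extract above only lists schema1 tables. A sketch of an Extract that maps both schemas (same syntax as the files above; whether schema2's DML should also be captured this way is an assumption):

```
EXTRACT EXT1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST db2, MGRPORT 7809
RMTTRAIL /home/oracle/goldengate/dirdat/gg
DDL INCLUDE MAPPED
table schema1.*;
table exclude schema1.*_sq;
table schema2.*;
```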
Any ideas? -
Golden Gate 11: cross platform replication
hello,
I wonder if there are any major cross-platform issues in establishing two-way replication between SPARC Solaris 5.10 and x86 Linux based hosts?
The RDBMS versions are the same, or 11.2 vs 12.1.
We are planning to use GoldenGate 12c on the new installs.
Thank you
Hi,
It would be better to ask this question in the GoldenGate community forum -
GoldenGate
or if you have access to My Oracle Support in the MOS forum -
GoldenGate, Streams and Distributed Database (MO
Regards,
Mike -
Errors in file /apps/oh2/diag/rdbms/bsdp12/BSDP122/trace/BSDP122_q01c_5899122.trc:
ORA-00904: "UTL_RAW"."CAST_FROM_NUMBER": invalid identifier
DB version 11.2.0.4,
and replication is getting abended with the below error:
SourceLine
: [817]
2015-03-29 14:07:12 ERROR OGG-00665 OCI Error executing single row select (status = 26695-ORA-26695: error on call to dbms_lock.release: return code 4
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 197
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 240
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 493
ORA-06512: at "SYS.DBMS_APPLY_ADM_INTERNAL", line 2375
ORA-26790: Requesting a lock on set_dml_conflict_handler "" timed out
ORA-06512: at "SYS.DBMS_XSTREAM_GG_ADM", line 335
ORA-06512: at line 1), SQL<DECLARE PRAGMA AUTONOMOUS_TRANSACTION;BEGIN dbms_xstream_gg_adm.set_dml_conflict_handler(
apply_name=>:1, source_object=>:2, object=>:
3,
conflict_handler_name=>:4, operation_name=>:5, conflict_type=>:6,
method_name=>:7, column_list=>:8, resolution_column=>:9);END;>.
Thanks -
The user guide for ACS for Windows version 4.0 states that Cisco ACS can use RDBMS synchronization to synchronize its database with a third-party RDBMS system, and that only one primary ACS server needs to interact with the third-party system; the other ACSs in the network can be updated by this primary ACS using RDBMS synchronization.
However, like many other features that are supposed to work (e.g. domain stripping for MS AD), this too does not seem to work, and there is no detailed documentation on how it actually does it.
The procedure stated in the user guide fails and there are gaps in the documentation.
Can someone refer me to any documentation other than the User Guide for instructions/details on this functionality?
Thanks in advance.
I think the easiest solution is to have a single ACS that is populated via RDBMS Sync. This ACS becomes the replication "master" that then pushes its config down to a set of "slaves".
That is the easiest method, but replication is a destructive write onto the slave, so you may choose not to do this.
An alternative is to use the Sync Partners config (part of RDBMS Sync), which attempts to process actions in the sync table on multiple ACSs. For this to work you need the "other" ACSs to have the RDBMS-syncing ACS server in their network config DB.
You need to make sure that ACS can write to the transaction table too (note: CSV data sources are no good) in case one of the other ACSs is down.
If you're having problems check the rdbms sync CSV & service log on the "master" ACS and the csauth service log on the "slave" for errors. -
ACS 4.2 For Windows DB Replication
Hi Folks.
I have a pair of ACS for Windows 4.2 servers, and we also have a few mappings (ACS group --> AD group).
The replication process was configured and it replicates all the settings except the group mappings.
Is this the way it's supposed to be, or should it replicate the group mappings as well?
Best regards,
AL
The following items cannot be replicated:
•IP pool definitions (for more information, see About IP Pools Server).
•ACS certificate and private key files.
•Unknown user group mapping configuration.
•Dynamically-mapped users.
•Settings on the ACS Service Management page in the System Configuration section.
•RDBMS Synchronization settings.
User guide
http://www.ciscosystems.com/en/US/docs/net_mgmt/cisco_secure_access_control_server_for_windows/4.2/user/guide/SCAdv.html#wp756078
Regards,
Jatin
Do rate helpful posts. -
Simple Single-Source Replication(error occurred when looking up remote obj)
Dears
I am following the "Simple Single-Source Replication Example"
(http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/repsimpdemo.htm#g1033597)
In Configure Capture, Propagation, and Apply for Changes to One Table, when I set the instantiation SCN for the hr.jobs table at str2.net using the following procedure:
SQL> DECLARE
2 iscn NUMBER; -- Variable to hold instantiation SCN value
3 BEGIN
4 iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
5 [email protected](
6 source_object_name => 'scott.emp',
7 source_database_name => 'str1.net',
8 instantiation_scn => iscn);
9 END;
10 /
DECLARE
ERROR at line 1:
ORA-04052: error occurred when looking up remote object
[email protected]
ORA-00604: error occurred at recursive SQL level 1
ORA-02085: database link STR2.NET connects to
ORCL.REGRESS.RDBMS.DEV.US.ORACLE.COM
Please also advise where the above should be executed (STR1.NET or STR2.NET).
regards;
Good morning Lenin,
Have you checked the implementation of the connectors and are your locations well registered?
Has a similar setup ever worked well?
Can you access the source table using SQL (e.g. with SQL*Plus or TOAD)?
Regards, Patrick -
Oracle 11R2 replication set up
Hi all,
I am new to Oracle 11i and would like some thoughts on implementing Oracle replication.
I am planning to install Oracle 11R2 64-bit on Windows 2008 64-bit. We will have another server in another location. How can I set up replication between the two servers?
What do I need to set up on both servers?
Please guide me, or any good doc would be helpful.
thanks
Pranav,
Telling someone they "need" to install an expensive product is bad advice. Although your high-level view of setting up GoldenGate is technically accurate, it also leaves out quite a bit (using a data pump, as an example).
For the OP:
Do you really mean 11i, as in E-Business Suite, or did you mean 11g, as in the RDBMS?
Your choices are:
Use what you've already paid for - Oracle Streams
License another Oracle product - GoldenGate
Buy a third party tool - Quest Shareplex, as an example
Do your own replication - materialized views, triggers, database links, and so on. -
HANA Live on ECC - Should ECC be on Hana DB or any other RDBMS
Hi,
Going through some of the blogs, i need a clarification to install HANA Live.
If ECC is on another RDBMS, should that database be retired and moved to HANA DB, or can it work side by side?
Thanks
Hari Prasad
Hi,
HANA Live is an additional SAP HANA Delivery Unit which you can install on your HANA One server.
HANA Live has SAP-delivered views that are based on standard SAP tables.
So you need to replicate those tables first, then import HANA Live.
The best fit here would be SAP LT (SLT) for replication.
you can get more information from here
SAP HANA Live for SAP Business Suite 1.0 – SAP Help Portal Page -
ORA-01034: Message 1034 not found; No message file for product=RDBMS
Hi!
After creating a RAC Database (10.2.0.5) on Redhat Enterprise Linux 64-bit, we checked: Listeners, Processes and Environment variables.
Everything was set up properly and running.
When trying to connect as sysdba the following message appeared:
$sqlplus / as sysdba
sql>select * from v$instance;
select * from v$instance
ERROR at line 1:
ORA-01034: Message 1034 not found; No message file for product=RDBMS,
facility=ORA
A shutdown and startup did not help.
What could be the issue here?
Thanks!
Here is an excerpt of the alert log. No ORA- errors.
ALTER DATABASE OPEN
Picked broadcast on commit scheme to generate SCNs
Fri Feb 24 11:26:50 CET 2012
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=32, OS id=4930
Fri Feb 24 11:26:50 CET 2012
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=33, OS id=4932
Fri Feb 24 11:26:50 CET 2012
SUCCESS: diskgroup LOG01 was mounted
SUCCESS: diskgroup LOG02 was mounted
Thread 1 opened at log sequence 10
Current log# 11 seq# 10 mem# 0: +LOG01/omdemo/onlinelog/group_11.265.776085353
Current log# 11 seq# 10 mem# 1: +LOG02/omdemo/onlinelog/group_11.265.776085353
Successful open of redo thread 1
Fri Feb 24 11:26:50 CET 2012
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Fri Feb 24 11:26:50 CET 2012
ARC1: Becoming the heartbeat ARCH
Fri Feb 24 11:26:50 CET 2012
SMON: enabling cache recovery
Fri Feb 24 11:26:50 CET 2012
Successfully onlined Undo Tablespace 1.
Fri Feb 24 11:26:50 CET 2012
SMON: enabling tx recovery
Fri Feb 24 11:26:50 CET 2012
Database Characterset is WE8MSWIN1252
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=34, OS id=4934
Fri Feb 24 11:26:52 CET 2012
Completed: ALTER DATABASE OPEN
Sat Feb 25 02:00:23 CET 2012
Thread 1 advanced to log sequence 11 (LGWR switch)
Current log# 12 seq# 11 mem# 0: +LOG01/omdemo/onlinelog/group_12.266.776085355
Current log# 12 seq# 11 mem# 1: +LOG02/omdemo/onlinelog/group_12.266.776085355
$ ps -ef | grep mon
oracle 4694 1 0 Feb24 ? 00:00:10 /oracle/app/oracle/product/10.2.0/db_1/bin/racgimon startd OMDEMO
oracle 4773 1 0 Feb24 ? 00:00:04 ora_pmon_OMDEMO1
oracle 4779 1 0 Feb24 ? 00:00:03 ora_lmon_OMDEMO1
oracle 4819 1 0 Feb24 ? 00:00:05 ora_smon_OMDEMO1
oracle 4825 1 0 Feb24 ? 00:00:00 ora_mmon_OMDEMO1
root 10429 1 0 Jan24 ? 03:29:11 /oracle/app/grid/11.2.0/grid/bin/osysmond.bin
root 10475 1 0 Jan24 ? 00:01:44 /oracle/app/grid/11.2.0/grid/bin/cssdmonitor
grid 11214 1 0 Jan24 ? 00:00:46 asm_pmon_+ASM1
grid 11301 1 0 Jan24 ? 00:00:19 asm_lmon_+ASM1
grid 11335 1 0 Jan24 ? 00:00:00 asm_smon_+ASM1
grid 11343 1 0 Jan24 ? 00:00:00 asm_gmon_+ASM1
grid 11347 1 0 Jan24 ? 00:00:00 asm_mmon_+ASM1
dbus 11972 1 0 Jan17 ? 00:00:03 dbus-daemon --system
xfs 14493 1 0 Jan17 ? 00:00:00 xfs -droppriv -daemon
avahi 14594 1 0 Jan17 ? 00:00:00 avahi-daemon: running [at1srv0080.local]
avahi 14595 14594 0 Jan17 ? 00:00:00 avahi-daemon: chroot helper
root 14671 1 0 Jan17 ? 00:00:00 /usr/sbin/gdm-binary -nodaemon
root 14736 14671 0 Jan17 ? 00:00:00 /usr/sbin/gdm-binary -nodaemon
oracle 17059 12052 0 10:42 pts/1 00:00:00 grep mon
[oracle@at1srv0080 /]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Mon Feb 27 10:46:51 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/oracle/app/oracle/product/10.2.0/db_1/dbs/initOCTA1.ora' -
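The last line is the real clue: sqlplus is looking for initOCTA1.ora while the running instance is OMDEMO1, which usually means ORACLE_SID is set to the wrong value in that shell. A sketch of the check (hypothetical helper; it only assumes the usual $ORACLE_HOME/dbs/init<SID>.ora naming convention):

```python
def expected_pfile(oracle_home, oracle_sid):
    """The parameter file sqlplus falls back to when no spfile is
    found: $ORACLE_HOME/dbs/init<ORACLE_SID>.ora."""
    return f"{oracle_home}/dbs/init{oracle_sid}.ora"

home = "/oracle/app/oracle/product/10.2.0/db_1"
# With ORACLE_SID=OCTA1 you get exactly the missing file from LRM-00109:
print(expected_pfile(home, "OCTA1"))
# Exporting ORACLE_SID=OMDEMO1 would point sqlplus at the running instance:
print(expected_pfile(home, "OMDEMO1"))
```

Comparing `echo $ORACLE_SID` against the instance name in `ps -ef | grep pmon` is the quickest way to confirm this mismatch.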
Replication based on condition for whole Schema
Hi There
Oracle 11.1.0.7
We are using Oracle Spatial and we have huge tables, more than 50 GB in size. I just started configuring GG on test and I would like to know if it suits our requirements.
In this regard I have certain queries:
1) Can I configure GG for the whole schema based on conditions? For example, the schema has 300 tables; I want 250 of them replicated to the target as-is, and the remaining 50 replicated only where approved = 'YES'. Is this possible?
2) If it is possible to filter data during replication, I would like to know if we can filter using a query which has a JOIN.
3) Should the target database (Oracle 11.1.0.7) have the tables with the data as of before starting GG, so that when we start replicating, both source and target are exactly the same?
I appreciate it if someone can post a response.
Thanks
You can filter data using a WHERE clause, or even using SQLEXEC, mapping OUT values to target columns. A join could be handled in the output from a stored procedure.
You can do all tables or a portion thereof. If names are similar, they can be wildcarded. Otherwise, list them.
Not sure what you mean by APPROVED.
For initial load, the stock answer is this:
GoldenGate recommends using native RDBMS tools for the initial load. Restore a backup, that kind of drill. You can use one of the four GG initial-load options, but given the size you mentioned, it would probably be much faster to create the target via Oracle Data Pump or RMAN. -
Using Berkeley DB as a frontend for Oracle RDBMS
Hey there,
One of the JE documents (possibly the architecture one) makes a remark about using JE as a frontend for a centralized RDBMS. Assuming that's done using a replication-based technology (and I'm posting on the correct forum), can you guys provide me some pointers as to how that works? Perhaps in the same vein as TimesTen does, with an add-on; I can't remember the name now.
If replication is not meant in that context, what other (if any) options are there to have a centralized datastore and local copies maintained over JE?
Thx,
Gokhan Ergul
Software Architect
Hello,
I'm not sure which document you're referring to, but in general Berkeley DB is often used as a cache or front-end for RDBMS information or other information accessible from a central server. For example, it might be used for fast indexed access to a user directory on front-end machines.
Berkeley DB does not have any support for this built-in -- it is up to you to retrieve the data from the central server, store it in Berkeley DB, and synchronize with the central server as required by your application. So if you are asking whether there is built-in support (or an add on) for replication or synchronization between Berkeley DB and an RDBMS, no there is not.
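The application-managed pattern described above (local store consulted first, central RDBMS as the source of truth) can be sketched as a read-through cache. This is an illustration only: a plain dict stands in for the Berkeley DB store, and `fetch_from_server` is a placeholder for whatever central-server lookup the application uses:

```python
class ReadThroughCache:
    """Local store consulted first; misses go to the central
    server and the result is cached for subsequent lookups."""
    def __init__(self, fetch_from_server):
        self.fetch = fetch_from_server
        self.store = {}          # stand-in for a Berkeley DB database

    def get(self, key):
        if key not in self.store:
            self.store[key] = self.fetch(key)   # cache miss: go remote
        return self.store[key]

    def invalidate(self, key):
        # Synchronization is the application's job: drop stale
        # entries when the central copy is known to have changed.
        self.store.pop(key, None)

calls = []
def fetch(key):
    calls.append(key)
    return key.upper()

cache = ReadThroughCache(fetch)
# second get() is served locally; the central server is hit once
print(cache.get("alice"), cache.get("alice"), len(calls))
```

The invalidation/refresh policy is the hard part in practice, and, as the answer notes, it is entirely up to the application.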
If you have multiple machines running Berkeley DB and you wish to replicate information between them, the Berkeley DB C Edition does have built-in support for this. Berkeley DB Java Edition does not yet have support for replication between nodes, but this feature is in the works.
If you have additional questions on Berkeley DB Java Edition please use the forum for that product:
Berkeley DB Java Edition
Mark -
Hi Gurus,
Just thought of checking with you all for a possible solution to a problem.
The problem is with Streams replication.
The client has come back stating that data is not replicating to the target database.
Could someone please suggest the possible factors for this failure?
I checked that the DB links are working fine.
Apart from this, any suggestion will be highly appreciated.
Thanks in advance.
Not much information, so let's grasp whatever we can.
Can you post the results of the following queries?
(Please enclose the response with the tags [ code] and [ /code],
otherwise we simply cannot read the response):
set lines 190 pages 66 feed off pause off verify off
col rsn format A28 head "Rule Set name"
col rn format A30 head "Rule name"
col rt format A64 head "Rule text"
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10
col CHECKPOINT_RETENTION_TIME head "Checkpoint|Retention|time" justify c
col LAST_ENQUEUED_SCN for 999999999999 head "Last scn|enqueued" justify c
col las format 999999999999 head "Last remote|confirmed|scn Applied" justify c
col REQUIRED_CHECKPOINT_SCN for 999999999999 head "Checkpoint|Require scn" justify c
col nam format A31 head 'Queue Owner and Name'
col table_name format A30 head 'table Name'
col queue_owner format A20 head 'Queue owner'
col table_owner format A20 head 'table Owner'
col rsname format A34 head 'Rule set name'
col cap format A22 head 'Capture name'
col ti format A22 head 'Date'
col lct format A18 head 'Last|Capture time' justify c
col cmct format A18 head 'Capture|Create time' justify c
col emct format A18 head 'Last enqueued|Message creation|Time' justify c
col ltme format A18 head 'Last message|Enqueue time' justify c
col ect format 999999999 head 'Elapsed|capture|Time' justify c
col eet format 9999999 head 'Elapsed|Enqueue|Time' justify c
col elt format 9999999 head 'Elapsed|LCR|Time' justify c
col tme format 999999999999 head 'Total|Message|Enqueued' justify c
col tmc format 999999999999 head 'Total|Message|Captured' justify c
col scn format 999999999999 head 'Scn' justify c
col emn format 999999999999 head 'Enqueued|Message|Number' justify c
col cmn format 999999999999 head 'Captured|Message|Number' justify c
col lcs format 999999999999 head 'Last scn|Scanned' justify c
col AVAILABLE_MESSAGE_NUMBER format 999999999999 head 'Last system| scn' justify c
col capture_user format A20 head 'Capture user'
col ncs format 999999999999 head 'Captured|Start scn' justify c
col capture_type format A10 head 'Capture |Type'
col RULE_SET_NAME format a15 head "Rule set Name"
col NEGATIVE_RULE_SET_NAME format a15 head "Neg rule set"
col status format A8 head 'Status'
-- For each table in APPLY site, given by this query
col SOURCE_DATABASE format a30
set linesize 150
select distinct SOURCE_DATABASE,
source_object_owner||'.'||source_object_name own_obj,
SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
apply_database_link lnk
from DBA_APPLY_INSTANTIATED_OBJECTS order by 1,2;
-- do
execute DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN( source_object_name=> 'owner.table',
source_database_name => 'source_database' , instantiation_scn => NULL );
-- List instantiation objects at source
select TABLE_OWNER, TABLE_NAME,SCN, to_char(TIMESTAMP,'DD-MM-YYYY HH24:MI:SS') ti
from dba_capture_prepared_tables order by table_owner;
col LOGMINER_ID head 'Log|ID' for 999
select LOGMINER_ID, CAPTURE_USER, start_scn ncs, to_char(STATUS_CHANGE_TIME,'DD-MM HH24:MI:SS') change_time
,CAPTURE_TYPE,RULE_SET_NAME, negative_rule_set_name , status from dba_capture
order by logminer_id;
set lines 190
col rsname format a22 head 'Rule set name'
col delay_scn head 'Delay|Scanned' justify c
col delay2 head 'Delay|Enq-Applied' justify c
col state format a24
col process_name format a8 head 'Process|Name' justify c
col LATENCY_SECONDS head 'Lat(s)'
col total_messages_captured head 'total msg|Captured'
col total_messages_enqueued head 'total msg|Enqueue'
col ENQUEUE_MESG_TIME format a17 head 'Row creation|initial time'
col CAPTURE_TIME head 'Capture at'
select a.logminer_id , a.CAPTURE_NAME cap, queue_name , AVAILABLE_MESSAGE_NUMBER, CAPTURE_MESSAGE_NUMBER lcs,
AVAILABLE_MESSAGE_NUMBER-CAPTURE_MESSAGE_NUMBER delay_scn,
last_enqueued_scn , applied_scn las , last_enqueued_scn-applied_scn delay2
from dba_capture a, v$streams_capture b where a.capture_name = b.capture_name (+)
order by logminer_id;
SELECT c.logminer_id,
SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
c.sid,
c.serial#,
c.state,
to_char(c.capture_time, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,
to_char(c.enqueue_message_create_time,'HH24:MI:SS MM/DD/YY') ENQUEUE_MESG_TIME ,
(SYSDATE-c.capture_message_create_time)*86400 LATENCY_SECONDS,
c.total_messages_captured,
c.total_messages_enqueued
FROM V$STREAMS_CAPTURE c, V$SESSION s
WHERE c.SID = s.SID
AND c.SERIAL# = s.SERIAL#
order by logminer_id ;
-- Which is the lowest required archive:
-- this query assume you have only one logminer_id or your must add ' and session# = <id_nn> '
set serveroutput on
DECLARE
hScn number := 0;
lScn number := 0;
sScn number;
ascn number;
alog varchar2(1000);
begin
select min(start_scn), min(applied_scn) into sScn, ascn
from dba_capture ;
DBMS_OUTPUT.ENABLE(2000);
for cr in (select distinct(a.ckpt_scn)
from system.logmnr_restart_ckpt$ a
where a.ckpt_scn <= ascn and a.valid = 1
and exists (select * from system.logmnr_log$ l
where a.ckpt_scn between l.first_change# and
l.next_change#)
order by a.ckpt_scn desc)
loop
if (hScn = 0) then
hScn := cr.ckpt_scn;
else
lScn := cr.ckpt_scn;
exit;
end if;
end loop;
if lScn = 0 then
lScn := sScn;
end if;
-- select min(name) into alog from v\$archived_log where lScn between first_change# and next_change#;
-- dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in log '||alog);
dbms_output.put_line('Capture will restart from SCN ' || lScn ||' in the following file:');
for cr in (select name, first_time , SEQUENCE#
from DBA_REGISTERED_ARCHIVED_LOG
where lScn between first_scn and next_scn order by thread#)
loop
dbms_output.put_line(to_char(cr.SEQUENCE#)|| ' ' ||cr.name||' ('||cr.first_time||')');
end loop;
end;
/
-- List all archives
prompt If 'Name' is empty then the archive is not on disk anymore
prompt
set linesize 150 pagesize 0 heading on embedded on
col name form A55 head 'Name' justify l
col st form A14 head 'Start' justify l
col end form A14 head 'End' justify l
col NEXT_CHANGE# form 9999999999999 head 'Next Change' justify c
col FIRST_CHANGE# form 9999999999999 head 'First Change' justify c
col SEQUENCE# form 999999 head 'Logseq' justify c
select thread#, SEQUENCE# , to_char(FIRST_TIME,'MM-DD HH24:MI:SS') st,
to_char(next_time,'MM-DD HH24:MI:SS') End,FIRST_CHANGE#,
NEXT_CHANGE#, NAME name
from ( select thread#, SEQUENCE# , FIRST_TIME, next_time,FIRST_CHANGE#,
NEXT_CHANGE#, NAME name
from v$archived_log order by first_time desc )
where rownum <= 30;
-- APPLY status
col apply_tag format a8 head 'Apply| Tag'
col QUEUE_NAME format a24
col DDL_HANDLER format a20
col MESSAGE_HANDLER format a20
col NEGATIVE_RULE_SET_NAME format a20 head 'Negative|rule set'
col apply_user format a20
col uappn format A30 head "Apply name"
col queue_name format A30 head "Queue name"
col apply_captured format A14 head "Type of|Applied Events" justify c
col rsn format A24 head "Rule Set name"
col sts format A8 head "Apply|Process|Status"
set linesize 150
select apply_name uappn, queue_owner, DECODE(APPLY_CAPTURED, 'YES', 'Captured', 'NO', 'User-Enqueued') APPLY_CAPTURED,
RULE_SET_NAME rsn , apply_tag, STATUS sts from dba_apply;
select QUEUE_NAME,DDL_HANDLER,MESSAGE_HANDLER, NEGATIVE_RULE_SET_NAME, APPLY_USER, ERROR_NUMBER,
to_char(STATUS_CHANGE_TIME,'DD-MM-YYYY HH24:MI:SS')STATUS_CHANGE_TIME
from dba_apply ;
set head off
select ERROR_MESSAGE from dba_apply;
-- propagation status
set head on
col rsn format A28 head "Rule Set name"
col rn format A30 head "Rule name"
col rt format A64 head "Rule text"
col d_dblk format A40 head 'Destination dblink'
col nams format A41 head 'Source queue'
col namd format A66 head 'Remote queue'
col prop format A40 head 'Propagation name '
col rsname format A20 head 'Rule set name'
COLUMN TOTAL_TIME HEADING 'Total Time Executing|in Seconds' FORMAT 999999
COLUMN TOTAL_NUMBER HEADING 'Total Events Propagated' FORMAT 999999999
COLUMN TOTAL_BYTES HEADING 'Total MB|Propagated' FORMAT 9999999999
COL PROPAGATION_NAME format a26
COL SOURCE_QUEUE_NAME format a34 head "Source| queue name" justify c
COL DESTINATION_QUEUE_NAME format a24 head "Destination| queue name" justify c
col QUEUE_TO_QUEUE format a9 head "Queue to| Queue"
col RULE_SET_NAME format a18
set linesize 125
prompt
set lines 190
select PROPAGATION_NAME prop, RULE_SET_NAME rsname , nvl(DESTINATION_DBLINK,'Local to db') d_dblk,NEGATIVE_RULE_SET_NAME
from dba_propagation ;
select SOURCE_QUEUE_OWNER||'.'|| SOURCE_QUEUE_NAME nams , DESTINATION_QUEUE_OWNER||'.'|| DESTINATION_QUEUE_NAME||
decode( DESTINATION_DBLINK,null,'','@'|| DESTINATION_DBLINK) namd, status , QUEUE_TO_QUEUE
from dba_propagation ;
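The second query above labels each propagation as owner.queue@dblink, using DECODE to append the '@dblink' suffix only when a database link is set. A minimal Python sketch of that labelling logic (the function name and sample values are illustrative, not part of the script):

```python
def remote_queue_label(owner, queue, dblink=None):
    """Mirror DECODE(destination_dblink, NULL, '', '@'||destination_dblink):
    append '@dblink' only when a database link is present."""
    suffix = "@" + dblink if dblink else ""
    return "{0}.{1}{2}".format(owner, queue, suffix)

# Local destination queues (NULL dblink) come out without a suffix.
```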
-- List queues
col queue_table format A26 head 'Queue Table'
col nam format A32 head 'Queue Name'
col primary_instance format 9999 head 'Prim|inst'
col secondary_instance format 9999 head 'Sec|inst'
col owner_instance format 99 head 'Own|inst'
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
select
a.owner||'.'|| a.name nam, a.queue_table,
decode(a.queue_type,'NORMAL_QUEUE','NORMAL', 'EXCEPTION_QUEUE','EXCEPTION',a.queue_type) qt,
trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq,
(x.num_msgs - x.spill_msgs) mem_msg, x.spill_msgs, x.num_msgs,
x.inst_id owner_instance
from dba_queues a, sys.gv_$buffered_queues x
where a.qid = x.queue_id (+)
and a.owner not in ('SYS','SYSTEM','WMSYS','SYSMAN')
order by a.owner, qt desc;
-- List instantiated objects
set linesize 150
select distinct SOURCE_DATABASE,
source_object_owner||'.'||source_object_name own_obj,
SOURCE_OBJECT_TYPE objt, instantiation_scn, IGNORE_SCN,
apply_database_link lnk
from DBA_APPLY_INSTANTIATED_OBJECTS
order by 1,2;
-- Propagation senders : to run on source site
prompt
prompt ++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++
prompt
COLUMN TOTAL_NUMBER HEADING 'Total Events|Propagated' FORMAT 9999999999999999
COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
column elapsed_dequeue_time HEADING 'Total Dequeue|Time (Secs)'
column elapsed_propagation_time HEADING 'Total Propagation|Time (Secs)' justify c
column elapsed_pickle_time HEADING 'Total Pickle| Time(Secs)' justify c
column high_water_mark HEADING 'High|Water|Mark'
column acknowledgement HEADING 'Target |Ack'
prompt pickle : Pickling is the action of building the message, i.e. wrapping the LCR before enqueuing it
prompt
set linesize 150
SELECT p.propagation_name,q.message_delivery_mode queue_type, DECODE(p.STATUS,
'DISABLED', 'Disabled', 'ENABLED', 'Enabled') SCHEDULE_STATUS, q.instance,
q.total_number TOTAL_NUMBER, q.TOTAL_BYTES/1048576 total_bytes,
q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
q.total_time/100 elapsed_propagation_time
FROM DBA_PROPAGATION p, dba_queue_schedules q
WHERE p.DESTINATION_DBLINK = NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
AND q.QNAME = p.SOURCE_QUEUE_NAME
order by q.message_delivery_mode, p.propagation_name;
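The join condition above extracts the database-link portion of q.destination: REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2) returns the second run of non-'@' characters (the part after the '@'), and NVL falls back to the whole string when there is no '@'. A minimal Python sketch of the same parsing (the function name and sample strings are illustrative):

```python
def dblink_part(destination):
    """Mimic NVL(REGEXP_SUBSTR(destination, '[^@]+', 1, 2), destination):
    return the second '@'-delimited field if present, else the whole string."""
    parts = [p for p in destination.split("@") if p]
    return parts[1] if len(parts) > 1 else destination

# 'OWNER.QUEUE@DBLINK' matches on the dblink; a bare dblink matches itself.
```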
-- propagation receiver : to run on apply site
COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN DST_QUEUE_NAME HEADING 'Target|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A15
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
SELECT SRC_QUEUE_NAME,
SRC_DBNAME,DST_QUEUE_NAME,
(ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
(ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
(ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME, TOTAL_MSGS,HIGH_WATER_MARK
FROM V$PROPAGATION_RECEIVER;
-- Apply reader
col rsid format 99999 head "Reader|Sid" justify c
COLUMN CREATION HEADING 'Message|Creation time' FORMAT A17 justify c
COLUMN LATENCY HEADING 'Latency|in|Seconds' FORMAT 9999999
col deqt format A15 head "Last|Dequeue Time" justify c
SELECT APPLY_NAME, sid rsid , (DEQUEUE_TIME-DEQUEUED_MESSAGE_CREATE_TIME)*86400 LATENCY,
TO_CHAR(DEQUEUED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD') CREATION, TO_CHAR(DEQUEUE_TIME,'HH24:MI:SS MM/DD') deqt,
DEQUEUED_MESSAGE_NUMBER FROM V$STREAMS_APPLY_READER;
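In the apply-reader query, subtracting two Oracle DATE values yields a difference in days, so the script multiplies by 86400 to report latency in seconds. The equivalent computation in Python (sample timestamps are illustrative):

```python
from datetime import datetime

def latency_seconds(create_time, dequeue_time):
    """Oracle DATE arithmetic returns days, hence the *86400 in the query;
    a timedelta gives the same result in seconds directly."""
    return (dequeue_time - create_time).total_seconds()

# A message created at 12:00:00 and dequeued at 12:00:05 -> 5.0 seconds.
```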
-- Any non-idle session waits (e.g. library cache locks)?
set head on pause off feed off linesize 150
column event format a48 head "Event type"
column wait_time format 999999 head "Last wait|(cs)"
column seconds_in_wait format 999999 head "Seconds|in wait"
column sid format 9999 head "Sid"
column state format A17 head "State"
col seq# format 999999
select
w.sid, s.status,w.seq#, w.event
, w.wait_time, w.seconds_in_wait , w.p1 , w.p2 , w.p3 , w.state
from
v$session_wait w , v$session s
where
s.sid = w.sid and
w.event not in (
'pmon timer', 'rdbms ipc message', 'PL/SQL lock timer',
'SQL*Net message from client', 'client message', 'pipe get',
'Null event', 'wakeup time manager', 'slave wait', 'smon timer',
'class slave wait', 'LogMiner: wakeup event for preparer',
'Streams AQ: waiting for time management or cleanup tasks',
'LogMiner: wakeup event for builder',
'Streams AQ: waiting for messages in the queue',
'ges remote message', 'gcs remote message',
'Streams AQ: qmn slave idle wait', 'Streams AQ: qmn coordinator idle wait',
'ASM background timer', 'DIAG idle wait' )
and w.seconds_in_wait > 0
/