Table level supplemental logging
How does table level supplemental logging differ from database level supplemental logging? Is database level supplemental logging required to enable table level supplemental logging?
I have run 3 test cases; please advise!
Case 1
Enabled only DB level supplemental logging (SL).
Observation ---> DML on all tables can be tracked with LogMiner. I find this perfect.
Case 2
Enabled only table level supplemental logging.
Setting ----> 2 tables: AAA (with table level SL) & BBB (without table level SL).
Observation ---> Only DDL is recorded with the help of LogMiner, & a few of the operations are listed as internal.
Case 3
Enabled database level SL first & then table level SL on only one table (AAA); no table level SL on BBB.
Observation ---> DDL & DML on all the tables are getting tracked. The point is: if this gives the same result as DB level SL, what is the significance of enabling table level SL? Or am I missing something?
I have the same experience: when database level supplemental logging is enabled, adding supplemental logging at the table level does not affect functionality or performance. Inserting 1M rows into a test table takes 25 sec (measured on the target database) with table level supplemental logging, and 26 sec without it. My GoldenGate version is 11.2; the Oracle database version is 11.2.0.3.0.
If someone can show the benefit of having table level supplemental logging in addition to database level logging, I would very much appreciate it.
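For reference, the statements behind the three cases above can be sketched as follows (a sketch only; the table names AAA/BBB are taken from the test cases, and the verification query is standard):

```sql
-- Case 1: database level supplemental logging only (minimal logging)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Case 2: table level supplemental logging only, on AAA
ALTER TABLE aaa ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Case 3: both -- database level first, then table level on AAA only
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE aaa ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Verify what is enabled at the database level
SELECT supplemental_log_data_min, supplemental_log_data_pk
  FROM v$database;
```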
Similar Messages
-
Schema level and table level supplemental logging
Hello,
I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
SQL>alter database add supplemental log data (primary key) columns;
Database altered.
SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
SUPPLEME SUP SUP
IMPLICIT YES NO
My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I run the command below as well?
GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
Successfully logged into database.
GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
What is the difference between schema level and table level supplemental logging?
For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or function-based columns, but can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
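As a rough illustration (the schema and table names here are hypothetical, and the exact clause varies with the table's constraint definitions), ADD TRANDATA on a table with a primary key issues something along these lines:

```sql
-- Roughly what ADD TRANDATA translates to for a table with a primary key
ALTER TABLE scott.orders ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Confirm the resulting table-level log groups
SELECT log_group_name, log_group_type
  FROM dba_log_groups
 WHERE owner = 'SCOTT' AND table_name = 'ORDERS';
```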
When to use ADD TRANDATA for an Oracle source database
Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
● You can stop DML activity on any and all tables before users or applications perform DDL on them.
● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP
statement.
❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
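For example (hypothetical table and column names), non-key columns needed for FILTER or KEYCOLS can be added to the supplemental log group with the COLS option:

```sql
GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
GGSCI> ADD TRANDATA schema.orders, COLS (order_status, region)
```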
Additional requirements when using ADD TRANDATA
Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and
chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL
statement:
SQL> alter database add supplemental log data;
To verify that supplemental logging is enabled at the database level, issue the following statement:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0). -
Hi,
Presently we copy prod tables into a reporting database using mviews, and they work like a charm. There are a couple of problems with this setup: first, it is not feasible when we keep adding/altering 20-30 tables every release, dropping/recreating the mlogs/mviews; second, during the busy season the mview refresh takes a very long time copying changes, as there is just too much data to move around.
People are not liking this delay in copying the tables (not often, a couple of times every month), so we are planning to implement a logical standby; we already have a physical standby.
After reading the Oracle docs, for some reason I get the feeling that supplemental logging is going to generate so much redo that it might eventually affect production performance. Can this happen?
Also, in our database we have several history tables that have neither primary keys nor unique keys, in which case I will have to enable supplemental logging for ALL columns, at least for those tables.
Has anyone had a similar situation? If so, does it make sense to go with a logical standby or to use Streams instead (of course they both use the same technology)?
Please, I need some insight into which might work best in my case.
We are running Oracle 10.2.0.3 on RHEL5.
Thanks,
Ramki
Thanks Justin!
>>1) While generally I'd certainly rather deal with a logical standby than a bunch of materialized views if your reporting database needs to have most or all the tables in the production database
Pretty much we copy all the tables over to reporting with fast refreshable mviews, and we have a 100m line between prod and reporting which will eventually be moved to a GIG network in the fall, so this will not be a bottleneck any more.
>>It is not obvious to me why that should substantially reduce the amount of data that needs to be moved around. It really doesn't matter whether the changes are coming via logical change records or MV log entries -- it's going to be roughly the same amount of change data flowing over.
But with a logical standby there is no additional I/O on prod to figure out what has changed/been updated/been deleted, as there is when refreshing the reporting database using MLOGs. When you have 6 million changed rows sitting in the MLOGs to be copied over to reporting, all the additional I/O required on prod to get those rows over is minimized with a logical standby, as everything happens on reporting, e.g. looking for LCRs in the redo logs that are copied over.
>>2) Supplemental logging does increase redo volume a bit, so if you have an I/O bound source database, adding additional redo could certainly affect performance. Of course, you're also getting rid of the redo generated by writing to MV logs, so it may be a wash.
That's a good point: the additional redo generated by supplemental logging will be offset by no redo being generated by writing into the MLOGs.
>>Is there a reason that you have history tables without a primary key? It may be substantially easier to add a primary key that just gets populated via a sequence than to supplementally log every column in the table. Of course, it may only matter if you update history table rows.
The application only inserts into these history tables and never updates them, and most are not queried within the application; some of them are, but those have primary keys -- not all do.
So when we don't have a PK or UK available on these tables, do we need to enable supplemental logging for all columns on them, or can we do minimal supplemental logging at the database level plus table level supplemental logging?
I am still unable to get the whole picture of how supplemental logging works for tables that don't have any means of identifying rows uniquely. Does it write the whole row to the redo? If so, does this mean more redo gets generated?
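For tables without a usable key, the usual approach is an unconditional ALL-columns log group, which does log every column's before-image for updates and deletes (a sketch with a hypothetical table name):

```sql
-- Hypothetical history table with no PK/UK: log all columns so each
-- update's or delete's before-image identifies the row in the redo stream
ALTER TABLE app.order_history ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

The extra redo comes mainly from updates and deletes; insert-only tables should see little additional redo, since inserts already carry all column values.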
I am doing a POC right now, but as usual I cannot really replay all the things that are happening in production on this test database (I wish I was running 11g).
Thanks,
Ramki -
What level of supplemental logging is required to set up Streams at the schema level
Hi,
I am working on setting up Streams from a 10g to an 11g DB at the schema level. The session hangs on the statement "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" while running the following command, generated using DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS:
Begin
dbms_streams_adm.add_schema_rules(
schema_name => '"DPX1"',
streams_type => 'CAPTURE',
streams_name => '"CAPTURE_DPX1"',
queue_name => '"STRMADMIN"."CAPTURE_QUEUE"',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => TRUE,
source_database => 'DPX1DB',
inclusion_rule => TRUE,
and_condition => get_compatible);
END;
The generated script also sets table-level logging on each table: 'ALTER TABLE "DPX1"."DEPT" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX) COLUMNS'.
So my question is: is database level supplemental logging required to set up schema-level replication? If the answer is no, then why is the script invoking the "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" command?
Thanks in advance.
Regards,
Sridhar
Hi Sridhar,
From what I found, "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" is required for the first capture you create in a database. Once it has been run, you'll see V$DATABASE with the column SUPPLEMENTAL_LOG_DATA_MIN set to YES. It requires a strong level of locking -- for example, you cannot run this ALTER DATABASE while an index rebuild is running (perhaps even an online rebuild).
I know it is called implicitly by DBMS_STREAMS_ADM.add_table_rules for the first rule created.
So, you can just run the statement once in a maintenance window and you'll be all set.
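A minimal pre-check before the maintenance window, so the ALTER DATABASE is only attempted when it is actually needed:

```sql
-- Run the database-level statement only if minimal supplemental
-- logging is not already enabled
SELECT supplemental_log_data_min FROM v$database;

-- If the result is NO, run (once, in a quiet window):
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```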
Minimal Supplemental Logging - http://www.oracle.com/pls/db102/to_URL?remark=ranked&urlname=http:%2F%2Fdownload.oracle.com%2Fdocs%2Fcd%2FB19306_01%2Fserver.102%2Fb14215%2Flogminer.htm%23sthref2006
NOT to be confused with database level supplemental log group.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/mon_rep.htm#BABHHCCC
Hope this helps,
Regards, -
Avoid SUPPLEMENTAL LOG while comparing 2 tables using dbms_metadata_diff()
Hi,
I am using Oracle Database 11g R2 and the built-in package DBMS_METADATA_DIFF.COMPARE_ALTER() to compare 2 tables and get the ALTER statements for them. I have applied GoldenGate to one of the schemas, and as part of that process we need to apply SUPPLEMENTAL LOGGING on the database. So when 2 tables are compared, it also gives me the difference for the SUPPLEMENTAL LOG groups. I want to compare the 2 tables while ignoring the SUPPLEMENTAL LOG group difference.
Below is a part of the code I use:
dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, -- Parameter to keep the DDL pretty.
'PRETTY',
TRUE);
dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM, -- To put an SQL terminator(;) at the end of SQL.
'SQLTERMINATOR',
TRUE);
dbms_metadata.set_transform_param(dbms_metadata.session_transform, -- Not to consider the SEGMENT attributes for comparison.
'SEGMENT_ATTRIBUTES',
false);
dbms_metadata.set_transform_param(dbms_metadata.session_transform, -- Not to include the STORAGE clause.
'STORAGE',
false);
dbms_metadata.set_transform_param(dbms_metadata.session_transform, -- Not to include the TABLESPACE Info.
'TABLESPACE',
false);
-- Here I want some parameter which should avoid the SUPPLEMENTAL LOG group difference.
SELECT dbms_metadata_diff.compare_alter('TABLE', -- Compare 2 tables with respect to above parameters and give output as ALTER STATEMENTS.
V_OBJECT_NAME,
V_OBJECT_NAME,
V_DEST_SCHEMA_NAME,
V_SOURCE_SCHEMA_NAME,
null,
'DBLINK_TEMP')
into V_TAB_DIFF_ALTER
FROM dual;
In the current case, for all tables I get output like the following (sample table output):
ALTER TABLE "BANK"."BA_EOD_SHELL_DRIVER" DROP SUPPLEMENTAL LOG GROUP GGS_BA_EOD_SHELL_DR_199689;
I don't want such ALTER statements in my output, as I am not going to execute them on the schema; I need the SUPPLEMENTAL LOG groups for GoldenGate.
Please suggest me some solution on it.
Thanks in advance.
It probably won't answer the question, but:
The DBMS_METADATA_DIFF.COMPARE_ALTER function will return a CLOB containing all the ALTER TABLE statements.
I have noticed that you hold your result in the V_TAB_DIFF_ALTER variable. Why don't you search it for what you need and remove the rest?
"The DBMS_LOB package provides subprograms to operate on BLOBs, CLOBs, NCLOBs, BFILEs, and temporary LOBs. You can use DBMS_LOB to access and manipulate specific parts of a LOB or complete LOBs." -
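Along those lines, one way to post-process the CLOB (a sketch; the pattern assumes the statements look like the sample above, and V_TAB_DIFF_ALTER is the variable from the posted code) is:

```sql
-- Strip "DROP SUPPLEMENTAL LOG GROUP ..." statements from the diff CLOB
V_TAB_DIFF_ALTER := REGEXP_REPLACE(
    V_TAB_DIFF_ALTER,
    'ALTER TABLE "[^"]+"\."[^"]+" DROP SUPPLEMENTAL LOG GROUP [^;]+;\s*',
    '');
```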
Enabling supplemental logging for many tables
Hi All,
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for Solaris: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
I have 200 tables where I need to enable supplemental logging.
ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS ==>not working==>ORA-00905: missing keyword
So I am manually enabling supplemental logging:
alter table EMP_PER ADD SUPPLEMENTAL LOG GROUP EMP_PE_SLOG3(END_ADDR_ORG_ID,LOA_END_DT,LAST_PROMO_DT,LAST_ANNUAL_RVW_DT,HIRE_DT,CURR_SALARY_AMT,CURR_BONUS_TGT_PCT,CURR_AVAIL_UNTIL,COST_PER_HR)always;
But as a few of the tables have more than 400 columns, it takes a lot of time to break the query into many groups.
Can anyone help me with a PL/SQL block to generate the script for all the tables?
Thanks CJ for your reply.
I have checked the whole presentation, but the issue is that when we have more than 33 columns we need to create a new log group to fit them in; otherwise we get a max-columns-exceeded error.
So now I am writing the queries manually for all 200 tables, which is taking more time.
So could you help me out with a procedure, script, or dynamic SQL which can generate the supplemental-logging queries for many tables? -
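A sketch of such a generator, which chunks each table's columns into named log groups of at most 33 columns (assumptions: the schema name APP_OWNER is a placeholder, and the chunk size comes from the post; review the generated statements before running them):

```sql
-- Generate ALTER TABLE ... ADD SUPPLEMENTAL LOG GROUP statements,
-- chunking each table's columns into groups of at most 33
SET SERVEROUTPUT ON
DECLARE
  v_cols VARCHAR2(4000);
  v_grp  PLS_INTEGER;
  v_cnt  PLS_INTEGER;
BEGIN
  FOR t IN (SELECT table_name FROM dba_tables WHERE owner = 'APP_OWNER') LOOP
    v_cols := NULL; v_grp := 1; v_cnt := 0;
    FOR c IN (SELECT column_name FROM dba_tab_columns
               WHERE owner = 'APP_OWNER' AND table_name = t.table_name
               ORDER BY column_id) LOOP
      v_cols := v_cols || CASE WHEN v_cols IS NOT NULL THEN ',' END
                       || c.column_name;
      v_cnt  := v_cnt + 1;
      IF v_cnt = 33 THEN
        DBMS_OUTPUT.PUT_LINE('ALTER TABLE APP_OWNER.' || t.table_name ||
          ' ADD SUPPLEMENTAL LOG GROUP ' || SUBSTR(t.table_name, 1, 20) ||
          '_SLG' || v_grp || ' (' || v_cols || ') ALWAYS;');
        v_cols := NULL; v_cnt := 0; v_grp := v_grp + 1;
      END IF;
    END LOOP;
    -- flush the last partial group for this table
    IF v_cols IS NOT NULL THEN
      DBMS_OUTPUT.PUT_LINE('ALTER TABLE APP_OWNER.' || t.table_name ||
        ' ADD SUPPLEMENTAL LOG GROUP ' || SUBSTR(t.table_name, 1, 20) ||
        '_SLG' || v_grp || ' (' || v_cols || ') ALWAYS;');
    END IF;
  END LOOP;
END;
/
```

The table name is truncated to 20 characters in the group name so the generated identifier stays under Oracle's 30-character limit.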
Supplemental Logging in Redo Log
I enabled supplemental logging at both the database level and the table level, then executed some SQL statements.
But after dumping the redo file using "ALTER SYSTEM DUMP LOGFILE", I can't see the effect in the trace file.
In my expectation, the primary key value should appear in the OP:5.1 change.
Does supplemental logging have no effect if row chaining or row migration occurs (I mean OP:11.6)?
Must it be in the undo segment of the update change, OP:11.5?
Black Thought
>>Please provide your Oracle version and how you enabled supplemental logging; this presentation may assist you.
Thank you TongucY for the response. I had already seen Julian's presentation, and I know the internals.
>>As you will see from Julian Dyke's presentation, the supplemental log goes into the undo change vector (and the undo block). The last time I checked, it was not made visible in the formatted log dump. The only clue in the formatted trace about what had happened was that the undo change vector LEN (and the redo record LEN) were larger.
Really? I had this suspicion last weekend; I am going to check it today. Thank you, Lewis, for the notification.
What is the overhead of Supplemental Logging?
We would like to copy data from a 600 GB Oracle database (9i) to a separate database for reporting. We want to use Data Guard in Logical Standby mode. The source database is heavily used and we can't afford a significant increase in system load (e.g. I/O activity) on that system.
To set up a logical standby, we need to put the JD Edwards database into supplemental logging mode. I am concerned that this will noticeably increase the load on the source server.
Has anyone analyzed the additional overhead of Supplemental Logging?
I have done some testing using Oracle 10.2 (Oracle XE on my computer) which indicates that when I turn on Supplemental Logging, the size of the archive logs grows by 40%. I have not yet tested this on our 9i database.
Thank you in advance for your help!
Best Regards,
Mike
=================================
The code below demonstrates the symptoms mentioned above:
RESULTS - size of archive logs generated:
- With Supplemental Logging: 120 MB
- Without: 80 MB
=================================
CREATE TABLE "EMP"
( "EMPLOYEE_ID" NUMBER(6,0),
  "FIRST_NAME" VARCHAR2(20),
  "LAST_NAME" VARCHAR2(25) NOT NULL ENABLE,
  "EMAIL" VARCHAR2(25) NOT NULL ENABLE,
  "PHONE_NUMBER" VARCHAR2(20),
  "HIRE_DATE" DATE NOT NULL ENABLE,
  "JOB_ID" VARCHAR2(10) NOT NULL ENABLE,
  "SALARY" NUMBER(8,2),
  "COMMISSION_PCT" NUMBER(2,2),
  "MANAGER_ID" NUMBER(6,0),
  "DEPARTMENT_ID" NUMBER(4,0)
);
ALTER TABLE emp ADD CONSTRAINT "EMP_EMP_ID_PK" PRIMARY KEY ("EMPLOYEE_ID");
CREATE TABLE "STAT"
( "F1" NUMBER,
  "ID" VARCHAR2(10)
);
The "employee" table is from Oracle XE samples
The procedure below generates transactions to test archive log size.
To run, put the database in archive log mode. Then pop an archive log by executing
ALTER SYSTEM ARCHIVE LOG CURRENT;
To flip between running with Supplemental Logging, use one of:
ALTER DATABASE drop SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
declare
i number;
begin
i := 0;
while i < 1000
loop
delete from emp;
insert into emp select * from employees;
update emp set COMMISSION_PCT = COMMISSION_PCT * .5;
update stat set f1 = i where id = 'UPD';
commit;
if mod(i, 1000) = 0 then
dbms_output.put_line(i);
end if;
i := i + 1;
end loop;
end;
/***********************************************/
Unless the bottleneck of your system is related in any way to the redo log files, I don't see any risk in generating supplemental logging. A good way to find out is to look at a Statspack report and see which events are in the top 5 time-wise.
Daniel -
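To quantify the overhead on a specific workload, one can compare the session's 'redo size' statistic before and after a run, with and without supplemental logging (a sketch; run the test workload between the two snapshots):

```sql
-- Snapshot the session's redo generation; the delta between the two
-- snapshots is the redo attributable to the workload
SELECT s.value AS redo_bytes
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

-- ... run the test workload here ...

SELECT s.value AS redo_bytes
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
```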
One to Many table level replication
Hi All,
I was configuring Streams replication from one table to many (3) tables in the same database (10.2.0.4).
Below figure states my requirement.
|--------->TEST2.TAB2(Destination)
|
TEST1.TAB1(Source) ---------------->|--------->TEST3.TAB3(Destination)
|
|--------->TEST4.TAB4(Destination)
Below are the steps I followed, but replication is not working.
CREATE USER strmadmin
IDENTIFIED BY strmadmin
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE, DBA to strmadmin;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
check that the streams admin is created:
SELECT * FROM dba_streams_administrator;
SELECT supplemental_log_data_min,
supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk,
supplemental_log_data_all FROM v$database;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
alter table test1.tab1 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test2.tab2 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test3.tab3 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
alter table test4.tab4 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
conn strmadmin/strmadmin
var first_scn number;
set serveroutput on
DECLARE scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
exec dbms_capture_adm.prepare_table_instantiation(table_name=>'test1.tab1');
begin
dbms_streams_adm.set_up_queue(
queue_table => 'strm_tab',
queue_name => 'strm_q',
queue_user => 'strmadmin');
end;
var first_scn number;
exec :first_scn:= 2914584
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strm_q',
capture_name => 'capture_tab1',
rule_set_name => NULL,
source_database => 'SIVIN1',
use_database_link => false,
first_scn => :first_scn,
logfile_assignment => 'implicit');
END;
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test1.tab1',
streams_type => 'capture',
streams_name => 'capture_tab1',
queue_name => 'strm_q',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'SIVIN1',
inclusion_rule => true);
END;
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test2.tab2',
streams_type => 'apply',
streams_name => 'apply_tab2',
queue_name => 'strm_q',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'SIVIN1',
inclusion_rule => true);
END;
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test3.tab3',
streams_type => 'apply',
streams_name => 'apply_tab3',
queue_name => 'strm_q',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'SIVIN1',
inclusion_rule => true);
END;
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test4.tab4',
streams_type => 'apply',
streams_name => 'apply_tab4',
queue_name => 'strm_q',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'SIVIN1',
inclusion_rule => true);
END;
select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;
begin
dbms_streams_adm.rename_table(
rule_name => 'TAB245' ,
from_table_name => 'test1.tab1',
to_table_name => 'test2.tab2',
step_number => 0,
operation => 'add');
end;
begin
dbms_streams_adm.rename_table(
rule_name => 'TAB347' ,
from_table_name => 'test1.tab1',
to_table_name => 'test3.tab3',
step_number => 0,
operation => 'add');
end;
begin
dbms_streams_adm.rename_table(
rule_name => 'TAB448' ,
from_table_name => 'test1.tab1',
to_table_name => 'test4.tab4',
step_number => 0,
operation => 'add');
end;
col apply_scn format 999999999999
select dbms_flashback.get_system_change_number apply_scn from dual;
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'test1.tab1',
source_database_name => 'SIVIN1',
instantiation_scn => 2916093);
end;
exec dbms_capture_adm.start_capture('capture_tab1');
exec dbms_apply_adm.start_apply('apply_tab2');
exec dbms_apply_adm.start_apply('apply_tab3');
exec dbms_apply_adm.start_apply('apply_tab4');
Could anyone please help me? Please let me know where I have gone wrong.
If the above steps are not correct, please let me know the desired steps.
-Yasser
First of all, I suggest implementing it with one destination side.
Here is a good example which I have done. Just use it and test; then prepare your other schemas' tables (the 3 destinations, I mean).
alter system set global_names =TRUE scope=both;
oracle@ulfet-laptop:/MyNewPartition/oradata/my$ mkdir Archive
shutdown immediate
startup mount
alter database archivelog
alter database open
ALTER SYSTEM SET log_archive_format='MY_%t_%s_%r.arc' SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_1='location=/MyNewPartition/oradata/MY/Archive MANDATORY' SCOPE=spfile;
# alter system set streams_pool_size=25M scope=both;
create tablespace streams_tbs datafile '/MyNewPartition/oradata/MY/streams_tbs01.dbf' size 25M autoextend on maxsize unlimited;
grant dba to strmadmin identified by streams;
alter user strmadmin default tablespace streams_tbs quota unlimited on streams_tbs;
exec dbms_streams_auth.grant_admin_privilege( -
grantee => 'strmadmin', -
grant_privileges => true)
grant dba to demo identified by demo;
create table DEMO.EMP as select * from HR.EMPLOYEES;
alter table demo.emp add constraint emp_emp_id_pk primary key (employee_id);
begin
dbms_streams_adm.set_up_queue (
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
end;
select name, queue_table from dba_queues where owner='STRMADMIN';
set linesize 150
col rule_owner for a10
select rule_owner, streams_type, streams_name, rule_set_name, rule_name from dba_streams_rules;
BEGIN
dbms_streams_adm.add_table_rules(
table_name => 'HR.EMPLOYEES',
streams_type => 'CAPTURE',
streams_name => 'CAPTURE_EMP',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => TRUE,
include_ddl => FALSE,
inclusion_rule => TRUE);
END;
select capture_name, rule_set_name,capture_user from dba_capture;
BEGIN
DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
capture_name => 'CAPTURE_EMP',
attribute_name => 'USERNAME',
include => true);
END;
select source_object_owner, source_object_name, instantiation_scn from dba_apply_instantiated_objects;
--no rows returned - why?
DECLARE
iscn NUMBER;
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_nUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'HR.EMPLOYEES',
source_database_name => 'MY',
instantiation_scn => iscn);
END;
conn strmadmin/streams
SET SERVEROUTPUT ON
DECLARE
emp_rule_name_dml VARCHAR2(30);
emp_rule_name_ddl VARCHAR2(30);
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employees',
streams_type => 'apply',
streams_name => 'apply_emp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
source_database => 'my',
dml_rule_name => emp_rule_name_dml,
ddl_rule_name => emp_rule_name_ddl);
DBMS_OUTPUT.PUT_LINE('DML rule name: '||emp_rule_name_dml);
DBMS_OUTPUT.PUT_LINE('DDL rule name: '||emp_rule_name_ddl);
END;
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_emp',
parameter => 'disable_on_error',
value => 'n');
END;
SELECT a.apply_name, a.rule_set_name, r.rule_owner, r.rule_name
FROM dba_apply a, dba_streams_rules r
WHERE a.rule_set_name=r.rule_set_name;
-- select the rule_name field's value and write it below -- example EMPLOYEES16
BEGIN
DBMS_STREAMS_ADM.RENAME_TABLE(
rule_name => 'STRMADMIN.EMPLOYEES14',
from_table_name => 'HR.EMPLOYEES',
to_table_name => 'DEMO.EMP',
operation => 'ADD'); -- can be ADD or REMOVE
END;
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_emp');
END;
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_emp');
END;
alter user hr identified by hr;
alter user hr account unlock;
conn hr/hr
insert into employees values
(400,'Ilqar','Ibrahimov','[email protected]','123456789',sysdate,'ST_MAN',30000,0,110,110);
insert into employees values
(500,'Ulfet','Tanriverdiyev','[email protected]','123456789',sysdate,'ST_MAN',30000,0,110,110);
conn demo/demo
grant all on emp to public;
select last_name, first_name from emp where employee_id=300;
conn strmadmin/streams
select apply_name, queue_name,status from dba_apply;
select capture_name, status from dba_capture; -
What are the different levels at which logging can be done? Can we do it at field level, or is it restricted to table level only?
hi,
We can do the logging at the field level also:
http://help.sap.com/saphelp_erp2005vp/helpdata/en/2a/fa015b493111d182b70000e829fbfe/frameset.htm
Example program:
REPORT zvinay_cd .
DATA : lv_objectid TYPE cdhdr-objectid,
lv_trnscode TYPE cdhdr-tcode.
DATA : lt_icdtxt_cnvxpracd TYPE TABLE OF cdtxt,
ls_icdtxt_cnvxpracd TYPE cdtxt.
DATA : lt_n_xkdb_comp TYPE TABLE OF cnv_xkdb_comp,
lt_o_xkdb_comp TYPE TABLE OF cnv_xkdb_comp,
ls_n_xkdb_comp TYPE cnv_xkdb_comp,
ls_o_xkdb_comp TYPE cnv_xkdb_comp.
ls_o_xkdb_comp-dlvunit = 'DFGDFG'.
ls_o_xkdb_comp-planrel = 'GDFGDF'.
ls_o_xkdb_comp-suppack = 'DFGDFG'.
*APPEND ls_xkdb_comp TO lt_o_xkdb_comp.
ls_n_xkdb_comp-dlvunit = 'DFGDFG'.
ls_n_xkdb_comp-planrel = 'GDFGDF'.
ls_n_xkdb_comp-suppack = 'DFGDFG'.
ls_n_xkdb_comp-orgsystem = 'CLR'.
*APPEND ls_xkdb_comp TO lt_n_xkdb_comp.
lv_objectid = 'CNVXPRACD'.
lv_trnscode = 'CNV_XPRA_KDB_COMP'.
ls_icdtxt_cnvxpracd-teilobjid = 'CNVXPRACD'.
ls_icdtxt_cnvxpracd-textart = 'VINAY'.
ls_icdtxt_cnvxpracd-textspr = 'EN'.
ls_icdtxt_cnvxpracd-textart = 'U'.
APPEND ls_icdtxt_cnvxpracd TO lt_icdtxt_cnvxpracd.
CALL FUNCTION 'CNVXPRACD_WRITE_DOCUMENT'
EXPORTING
objectid = lv_objectid
tcode = lv_trnscode
utime = sy-uzeit
udate = sy-datum
username = sy-uname
PLANNED_CHANGE_NUMBER = ' '
object_change_indicator = 'U'
PLANNED_OR_REAL_CHANGES = ' '
NO_CHANGE_POINTERS = ' '
UPD_ICDTXT_CNVXPRACD = ' '
n_cnv_xkdb_action = 'CNV_XKDB_ACTION'
o_cnv_xkdb_action = 'CNV_XKDB_ACTION'
UPD_CNV_XKDB_ACTION = ' '
n_cnv_xkdb_comp = ls_n_xkdb_comp
o_cnv_xkdb_comp = ls_o_xkdb_comp
upd_cnv_xkdb_comp = 'U'
n_cnv_xkdb_xpra = 'CNV_XKDB_XPRA'
o_cnv_xkdb_xpra = 'CNV_XKDB_XPRA'
UPD_CNV_XKDB_XPRA = ' '
TABLES
icdtxt_cnvxpracd = lt_icdtxt_cnvxpracd.
IF sy-subrc NE 0.
WRITE: 'not working'.
ENDIF.
Regards,
Sourabh -
I set up Streams for 5 tables, using unconditional supplemental log groups. 4 tables are okay; one is not working. I checked the DBA_LOG_GROUPS view: for four of the tables, 3 additional supplemental log groups were generated for each (primary key, unique key, foreign key); one table has only the supplemental log group created by me.
What is the problem with this table?
Please help, it is an emergency; tomorrow we need to go to production.
Thanks
Hi Mary,
Are you qualifying the pk for each of the tables?
Make sure you have defined your rules correctly (ex.: query dba_streams_table_rules).
The 3 additional log groups are normal. What is not normal is the table that does not have them.
My guess would be that that table does not have a capture rule correctly defined.
I have noticed that the 3 log groups appear when you create the capture rule, and they disappear when you drop the capture rule.
Regards, -
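To spot the odd table out, the implicitly created groups can be listed side by side (a sketch; OWNER1 is a hypothetical schema name):

```sql
-- Compare the supplemental log groups across the 5 replicated tables;
-- implicitly created groups are flagged in the GENERATED column
SELECT table_name, log_group_name, log_group_type, always, generated
  FROM dba_log_groups
 WHERE owner = 'OWNER1'
 ORDER BY table_name, log_group_name;
```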
SQL Server - Extract Error - OGG-00868 Supplemental logging is disabled
Hello,
We are trying to replicate from a SQL Server 2008 database to Oracle database, but when trying to start the extract process we are getting the following error message:
OGG-00868 Supplemental logging is disabled for database 'GoldenGate'. To enable logging, perform the following: 1) Set 'trunc. log on chkpt.' to false. 2) Create a full backup of the database. Please refer to the "Oracle GoldenGate For Windows and UNIX Administration Guide" for details.
I have read that to enable supplemental logging it is enough to "add trandata table_name", and this has been done. The extract process we are using is the following:
EXTRACT cap_or4
SOURCEDB GoldenGate
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
EXTTRAIL c:\GoldenGate\V28983-01-GG-111112-SQLServer-Windows-x64\dirdat\C4
TABLE GoldenGate.dbo.DES_T1;
And the 'trunc.log on chkpt' is set to false.
We don't know what else to do or what to check... does anyone have any idea?
Thank you very much, best regards,
Araitz.
Have you followed the whole installation process as per the guide? Clearly you missed something.
Please follow below steps.
Installation & Configuration of Oracle GoldenGate for MS SQL Server:
Pre-requisites:
1.Change Data Capture (CDC) must be enabled for Oracle GoldenGate and will be enabled by Oracle GoldenGate by means of the ADD TRANDATA command.
2.SQL Server source database must be set to use the full recovery model.
3.Oracle GoldenGate does not support system databases.
4.After the source database is set to full recovery, a full database backup must be taken.
5.SQL Server 2008 ODBC/OLE DB: SQL Server Native Client 10.0 driver
6.Oracle GoldenGate processes can use either Windows Authentication or SQL Server Authentication to connect to a database.
7.Before installing Oracle GoldenGate on a Windows system, install and configure the Microsoft Visual C ++ 2005 SP1 Redistributable Package. Make certain it
is the SP1 version of this package, and make certain to get the correct bit version for your server. This package installs runtime components of Visual C++
Libraries. For more information, and to download this package, go to http://www.microsoft.com.
Privileges:
1.Required SQL Server privileges for Manager when using Windows authentication
Extract(source system)
BUILTIN\Administrators account must be a member of the SQL Server fixed server role System Administrators.
Account must be a member of the SQL Server fixed server role System Administrators
Replicat (target system)
BUILTIN\Administrators account must be at least a member of the db_owner fixed database role of the target database.
Account must be at least a member of the db_owner fixed database role of the target database.
2.Required SQL Server privileges for Extract and Replicat when using SQL Server authentication
Extract - Member of the SQL Server fixed server role System Administrators.
Replicat - At least a member of the db_owner fixed database role of the target database.
Downloading Oracle GoldenGate
Download the appropriate Oracle GoldenGate build to each system that will be part of the Oracle GoldenGate configuration.
1. Navigate to http://edelivery.oracle.com.
2. On the Welcome page:
--Select your language.
--Click Continue.
3. On the Export Validation page:
--Enter your identification information.
--Accept the Trial License Agreement (even if you have a permanent license).
--Accept the Export Restrictions.
--Click Continue.
4. On the Media Pack Search page:
--Select the Oracle Fusion Middleware Product Pack.
--Select the platform on which you will be installing the software.
--Click Go.
5. In the Results List:
--Select the Oracle GoldenGate Media Pack that you want.
--Click Continue.
6. On the Download page:
--Click Download for each component that you want. Follow the automatic download
process to transfer the mediapack.zip file to your system.
Installing the Oracle GoldenGate files
1. Unzip the downloaded file(s) by using WinZip or an equivalent compression product.
2. Move the files in binary mode to a folder on the drive where you want to install Oracle GoldenGate. Do not install Oracle GoldenGate into a folder that contains spaces in its name, even if the path is in quotes. For example:
C:\"Oracle GoldenGate" is not valid.
C:\Oracle_GoldenGate is valid.
3. From the Oracle GoldenGate folder, run the GGSCI program.
4. In GGSCI, issue the following command to create the Oracle GoldenGate working
directories.
CREATE SUBDIRS
a.Create the necessary working directories for GG.
Source DB:
GGSCI>create subdirs
Target DB:
GGSCI>create subdirs
Install the GoldenGate Manager process
1.Create a GLOBALS parameter file
--Execute the following commands from the <install location>.
GGSCI> EDIT PARAMS ./GLOBALS
--In the text editor, type the following:
MGRSERVNAME <mgr service>
Using a GLOBALS file in each GoldenGate instance allows you to run multiple Managers as services on Windows. When the service is installed, the Manager name
is referenced in GLOBALS, and this name will appear in the Windows Services control panel.
Note! Check to ensure that the GLOBALS file has been added in the GoldenGate installation directory and that it does not have an extension.
--Execute the following command to exit GGSCI.
GGSCI> EXIT
2. Install the Manager service
Execute the following command to run GoldenGate's INSTALL.EXE. This executable installs Manager as a Windows service and adds GoldenGate events to the
Windows Event Viewer.
Shell> INSTALL ADDSERVICE ADDEVENTS
Note: Adding the Manager as a service is an optional step used when there are multiple environments on the same system or when you want to control the name
of the manager for any reason.
Configuring an ODBC connection
A DSN stores information about how to connect to a SQL Server database through ODBC (Open Database Connectivity). Create a DSN on each SQL Server source
and target system.
NOTE: Replicat will always use ODBC to query the target database for metadata.
To create a SQL Server DSN
1. Run one of the following ODBC clients:
--If using a 32-bit version of Oracle GoldenGate on a 64-bit system, create the DSN by running the ODBCAD32.EXE client from the %SystemRoot%\SysWOW64
folder.
--If using a 64-bit version of Oracle GoldenGate on a 64-bit system, create a DSN by running the default ODBCAD32.EXE client in Control Panel>Administrative
Tools>Data Sources (ODBC).
--If using a version of Oracle GoldenGate other than the preceding, use the default ODBC client in Control Panel>Administrative Tools>Data Sources (ODBC).
2. In the ODBC Data Source Administrator dialog box of the ODBC client, select the System DSN tab, and then click Add.
3. Under Create New Data Source, select the correct SQL Server driver as follows:
--SQL Server 2000: SQL Server driver
--SQL Server 2005: SQL Native Client driver
--SQL Server 2008: SQL Server Native Client 10.0 driver
4. Click Finish. The Create a New Data Source to SQL Server wizard is displayed.
5. Supply the following:
--Name: Can be of your choosing. In a Windows cluster, use one name across all nodes in the cluster.
--Server: Select the SQL Server instance name.
6. Click Next.
7. For login authentication, select With Windows NT authentication using the network login ID for Oracle GoldenGate to use Windows authentication, or select
With SQL Server authentication using a login ID and password entered by the user for Oracle GoldenGate to use database credentials. Supply login information
if selecting SQL Server authentication.
8. Click Next.
9. If the default database is not set to the one that Oracle GoldenGate will connect to,
click Change the default database to, and then select the correct name. Set the other
settings to use ANSI.
10. Click Next.
11. Leave the next page set to the defaults.
12. Click Finish.
13. Click Test Data Source to test the connection.
14. Close the confirmation box and the Create a New Data Source box.
15. Repeat this procedure from step 1 on each SQL Server source and target system.
Setting the database to full recovery model
Oracle GoldenGate requires a SQL Server source database to be set to the full recovery model.
To verify or set the recovery model
1. Connect to the SQL Server instance with either Enterprise Manager for SQL Server 2000 or SQL Server Management Studio for SQL Server 2005 and 2008.
2. Expand the Databases folder.
3. Right-click the source database, and then select Properties.
4. Select the Options tab.
5. Under Recovery, set Model to Full if it is not already.
6. If the database was in Simple recovery or never had a full database backup, take a full database backup before starting Extract.
7. Click OK.
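The same check and change can be scripted in T-SQL instead of clicking through the GUI. A minimal sketch; the database name GoldenGate and the backup path are assumptions:

```sql
-- Verify the current recovery model (SQL Server 2005/2008)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'GoldenGate';

-- Switch to full recovery if needed, then take the required full backup
ALTER DATABASE [GoldenGate] SET RECOVERY FULL;
BACKUP DATABASE [GoldenGate]
TO DISK = 'c:\backup\GoldenGate_full.bak';
```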
Enabling supplemental logging
These instructions apply to new installations of Oracle GoldenGate for all supported SQL Server versions. You will enable supplemental logging with the ADD
TRANDATA command so that Extract can capture the information that is required to reconstruct SQL operations on the target. This is more information than
what SQL Server logs by default.
--SQL Server 2005 updated to CU6 for SP2 or later: ADD TRANDATA calls the sys.sp_extended_logging stored procedure.
--SQL Server 2005 pre-CU6 for SP2: ADD TRANDATA creates the following:
A replication publication named [<source database name>]: GoldenGate<source database name> Publisher. To view this publication, look under Replication>Local
Publications in SQL Server Management Studio. This procedure adds the specified table to the publication as an article.
A SQL Server Log Reader Agent job for the publication. This job cannot run concurrently with an Extract process in this configuration.
--SQL Server 2008: ADD TRANDATA enables Change Data Capture (CDC) and creates a minimal Change Data Capture on the specified table.
a.Oracle GoldenGate does not use the CDC tables other than as necessary to enable supplemental logging.
b.As part of enabling CDC, SQL Server creates two jobs per database: <dbname>_capture and <dbname>_cleanup. The <dbname>_capture job adjusts the secondary
truncation point and captures data from the log to store in the CDC
tables. The <dbname>_cleanup job ages and deletes data captured by CDC.
c.Using the TRANLOGOPTIONS parameter with the MANAGESECONDARYTRUNCATIONPOINT option for Extract removes the <dbname>_capture job, preventing the overhead of
the job loading the CDC tables.
d.The alternative (using TRANLOGOPTIONS with NOMANAGESECONDARYTRUNCATIONPOINT) requires the SQL Server Agent to be running and requires the <dbname>_capture and <dbname>_cleanup jobs to be retained. You will probably need to adjust the <dbname>_cleanup data retention period if the default of three days is not acceptable for storage concerns.
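Whether ADD TRANDATA has actually enabled CDC can be confirmed from the SQL Server catalog views. A sketch; the database name GoldenGate is taken from the thread above:

```sql
-- Database-level CDC flag set by ADD TRANDATA on SQL Server 2008
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'GoldenGate';

-- Tables that have a CDC capture instance
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE is_tracked_by_cdc = 1;
```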
To enable supplemental logging
1. On the source system, run GGSCI.
2. Log into the database from GGSCI.
DBLOGIN SOURCEDB <DSN>[, USERID <user>, PASSWORD <password>]
Where:
-- SOURCEDB <DSN> is the name of the SQL Server data source.
-- USERID <user> is the Extract login and PASSWORD <password> is the password that is required if Extract uses SQL Server authentication.
3. In GGSCI, issue the following command for each table that is, or will be, in the Extract configuration. You can use a wildcard to specify multiple table
names, but not owner names.
ADD TRANDATA <owner>.<table>
NOTE: The Log Reader Agent job cannot run concurrently with the GoldenGate Extract process.
4.Configuration
a.Create and start manager on the source and the destination.
Source DB:
shell>ggsci
GGSCI> edit params mgr
PORT 7809
DYNAMICPORTLIST 7900-7950
DYNAMICPORTREASSIGNDELAY 5
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
LAGCRITICALMINUTES 60
LAGREPORTMINUTES 30
PURGEOLDEXTRACTS c:\ogg\dirdat\T*, USECHECKPOINTS, MINKEEPFILES 10
GGSCI> start manager
GGSCI>info all
b. Create the extract group on the source side:
GGSCI> edit params EXT1
Add the following lines to the new parameter file
EXTRACT EXT1
SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
EXTTRAIL c:\ogg\dirdat\T1
DISCARDFILE c:\ogg\dirrpt\EXT1.DSC, PURGE, MEGABYTES 100
TABLE dbo.TCUSTMER;
TABLE dbo.TCUSTORD;
GGSCI>ADD EXTRACT EXT1, TRANLOG, BEGIN NOW
GGSCI>ADD EXTTRAIL c:\ogg\dirdat\T1, EXTRACT EXT1, MEGABYTES 100
GGSCI> edit params PMP1
Add the following lines to the new parameter file
EXTRACT PMP1
SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
PASSTHRU
RMTHOST dr, MGRPORT 7810
RMTTRAIL c:\ogg\dirdat\P1
TABLE dbo.TCUSTMER;
TABLE dbo.TCUSTORD;
GGSCI> ADD EXTRACT PMP1, EXTTRAILSOURCE c:\ogg\dirdat\T1
GGSCI> ADD EXTTRAIL c:\ogg\dirdat\P1, EXTRACT PMP1, MEGABYTES 100
Target DB:
===========
shell>ggsci
GGSCI> edit params mgr
PORT 7810
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
LAGCRITICALMINUTES 60
LAGREPORTMINUTES 30
PURGEOLDEXTRACTS c:\ogg\dirdat\P*, USECHECKPOINTS, MINKEEPFILES 10
GGSCI> start manager
GGSCI>info all
Create parameter file for replicat:
GGSCI> edit params REP1
REPLICAT REP1
ASSUMETARGETDEFS
TARGETDB <dsn>, USERID ogg@DR, PASSWORD ogg@321!
DISCARDFILE c:\ogg\dirrpt\REP1.DSC, append, megabytes 100
HANDLECOLLISIONS
MAP dbo.TCUSTMER, TARGET dbo.TCUSTMER;
MAP dbo.TCUSTORD, TARGET dbo.TCUSTORD;
GGSCI>ADD REPLICAT REP1, RMTTRAIL c:\ogg\dirdat\P1, nodbcheckpoint
# Start extract and replicat:
Source:
GGSCI> start er *
Destination:
GGSCI> start er *
Greetings,
N K -
Redo Log and Supplemental Logging related doubts
Hi Friends,
I am studying supplemental logging in detail. I have read lots of articles and the Oracle documentation about it and about redo logs, but could not find answers to some doubts.
Please help me clear them up.
Scenario: we have one table with a primary key, and we execute an update query on that table that does not use the primary key column in any clause.
Question: In this case, does the redo log entry generated for the changes made by the update query contain the primary key column values?
Question: If we have a table with a primary key, do we need to enable supplemental logging on the primary key columns of that table? If yes, in which circumstances do we need to enable it?
Question: If we have to configure Streams replication on that table (having a primary key), why do we actually need to enable supplemental logging? (I have read documentation saying that Streams requires some more information, but what information does it actually need? This question is closely related to the first one.)
Also, please suggest any good article/site that provides in-depth details of redo logs and supplemental logging, if you know of one.
Regards,
Dipali..
1) Assuming you are not updating the primary key column and supplemental logging is not enabled, Oracle doesn't need to log the primary key column to the redo log, just the ROWID.
2) Is rather hard to answer without being tautological. You need to enable supplemental logging if and only if you have some downstream use for additional columns in the redo logs. Streams, and those technologies built on top of Streams, are the most common reason for enabling supplemental logging.
3) If you execute an update statement like
UPDATE some_table
SET some_column = new_value
WHERE primary_key = some_key_value
AND <<other conditions as well>>
and look at an update statement that LogMiner builds from the redo logs in the absence of supplemental logging, it would basically be something like
UPDATE some_table
SET some_column = new_value
WHERE rowid = rowid_of_the_row_you_updated
Oracle doesn't need to replay the exact SQL statement you issued, so it doesn't have to write the SQL statement to the redo log, and it doesn't have to worry if the UPDATE takes a long time to run (otherwise it would take as long to apply an archived log as it did to generate the log, which would be disastrous in a recovery situation). It just needs to reconstruct the SQL statement from the information in redo, which is just the ROWID and the column(s) that changed.
If you try to run this statement on a different database (via Streams, for example) the ROWIDs on the destination database are likely totally different (since a ROWID is just a physical address of a row on disk). So adding supplemental logging tells Oracle to log the primary key column to redo and allows LogMiner/ Streams/ etc. to reconstruct the statement using the primary key values for the changed rows, which would be the same on both the source and destination databases.
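To make this concrete, supplemental logging of the primary key can be enabled per table and inspected in the data dictionary. A minimal sketch; the table scott.emp is a hypothetical example:

```sql
-- Log the primary key columns with every change to this table,
-- even when the key columns themselves are not updated
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- The resulting system-generated log group is visible in the dictionary
SELECT log_group_name, log_group_type, always
FROM   dba_log_groups
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP';
```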
Justin -
New column add in supplemental log group
Hi All,
How to add the new column in existing supplemental log group.
Ex.
SQL> desc scott.emp
Name Null? Type
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)
And the existing supplemental group is
Alter table emp add supplemental log group emp_log_grp (empno,ename,sal) always;
Now I want to add comm column in log group.
Pls. help.
Thanks
Naresh
Did you try dropping and recreating the group? You cannot add a column to an existing supplemental log group, so drop it and recreate it with the full column list:
ALTER TABLE scott.emp DROP SUPPLEMENTAL LOG GROUP emp_log_grp;
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_log_grp (empno, ename, sal, comm) ALWAYS;
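Whichever way the group is (re)created, the columns it actually captures can be verified from the data dictionary. A sketch; the group name emp_log_grp is taken from the question above:

```sql
SELECT column_name, position, logging_property
FROM   dba_log_group_columns
WHERE  owner = 'SCOTT'
AND    log_group_name = 'EMP_LOG_GRP'
ORDER  BY position;
```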
Steps for table level reorganisation
Dear Friends.
We are going for Archiving on big tables like EDI40 table.
So I have decided to regain space after archiving by doing a table-level reorg. Is that OK?
Can you please suggest the necessary steps and prerequisites?
Thanks in advance.
Regards
JIggi
Dear All,
I have done the following steps:
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.
C:\Documents and Settings\wiqadm.WSECCQA>brtools
BR0651I BRTOOLS 7.00 (24)
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.21
BR0656I Choice menu 1 - please make a selection
BR*Tools main menu
1 = Instance management
2 - Space management
3 - Segment management
4 - Backup and database copy
5 - Restore and recovery
6 - Check and verification
7 - Database statistics
8 - Additional functions
9 - Exit program
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
3
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.25
BR0663I Your choice: '3'
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.25
BR0656I Choice menu 7 - please make a selection
Database segment management
1 = Reorganize tables
2 - Rebuild indexes
3 - Export tables
4 - Import tables
5 - Alter tables
6 - Alter indexes
7 - Additional segment functions
8 - Reset program status
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
1
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.28
BR0663I Your choice: '1'
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.28
BR0657I Input menu 91 - please check/enter input values
BRSPACE options for reorganization of tables
1 - BRSPACE profile (profile) ...... [initWIQ.sap]
2 - Database user/password (user) .. [/]
3 ~ Reorganization action (action) . []
4 ~ Tablespace names (tablespace) .. []
5 ~ Table owner (owner) ............ []
6 ~ Table names (table) ............ []
7 - Confirmation mode (confirm) .... [yes]
8 - Extended output (output) ....... [no]
9 - Scrolling line count (scroll) .. [20]
10 - Message language (language) .... [E]
11 - BRSPACE command line (command) . [-p initWIQ.sap -s 20 -l E -f tbreorg]
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
6
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.38
BR0663I Your choice: '6'
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.38
BR0681I Enter string value for "table" []:
BDCP
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.41
BR0683I New value for "table": 'BDCP'
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.41
BR0657I Input menu 91 - please check/enter input values
BRSPACE options for reorganization of tables
1 - BRSPACE profile (profile) ...... [initWIQ.sap]
2 - Database user/password (user) .. [/]
3 ~ Reorganization action (action) . []
4 ~ Tablespace names (tablespace) .. []
5 ~ Table owner (owner) ............ []
6 ~ Table names (table) ............ [BDCP]
7 - Confirmation mode (confirm) .... [yes]
8 - Extended output (output) ....... [no]
9 - Scrolling line count (scroll) .. [20]
10 - Message language (language) .... [E]
11 - BRSPACE command line (command) . [-p initWIQ.sap -s 20 -l E -f tbreorg -t "
BDCP"]
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
c
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.45
BR0663I Your choice: 'c'
BR0259I Program execution will be continued...
BR0291I BRSPACE will be started with options '-p initWIQ.sap -s 20 -l E -f tbreo
rg -t "BDCP"'
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.45
BR0670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to abort:
c
BR0280I BRTOOLS time stamp: 2009-04-01 15.14.46
BR0257I Your reply: 'c'
BR0259I Program execution will be continued...
BR1001I BRSPACE 7.00 (24)
BR1002I Start of BRSPACE processing: seagjxpe.tbr 2009-04-01 15.14.46
BR0484I BRSPACE log file: F:\oracle\WIQ\sapreorg\seagjxpe.tbr
BR0280I BRSPACE time stamp: 2009-04-01 15.14.48
BR1009I Name of database instance: WIQ
BR1010I BRSPACE action ID: seagjxpe
BR1011I BRSPACE function ID: tbr
BR1012I BRSPACE function: tbreorg
BR0280I BRSPACE time stamp: 2009-04-01 15.14.52
BR0657I Input menu 353 - please check/enter input values
Options for reorganization of tables: SAPSR3.BDCP (degree 1)
1 ~ New destination tablespace (newts) ..... []
2 ~ Separate index tablespace (indts) ...... []
3 - Parallel threads (parallel) ............ [1]
4 ~ Table/index parallel degree (degree) ... []
5 - Create DDL statements (ddl) ............ [yes]
6 ~ Initial extent size category (initial) . []
7 ~ Sort by fields of index (sortind) ...... []
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
c
BR0280I BRSPACE time stamp: 2009-04-01 15.14.55
BR0663I Your choice: 'c'
BR0259I Program execution will be continued...
BR0280I BRSPACE time stamp: 2009-04-01 15.14.55
BR1108I Checking tables for reorganization...
BR0280I BRSPACE time stamp: 2009-04-01 15.14.56
BR1112I Number of tables selected/skipped for reorganization: 1/0
BR0280I BRSPACE time stamp: 2009-04-01 15.14.56
BR0370I Directory F:\oracle\WIQ\sapreorg\seagjxpe created
BR0280I BRSPACE time stamp: 2009-04-01 15.14.56
BR1101I Starting online table reorganization...
BR0280I BRSPACE time stamp: 2009-04-01 15.14.56
BR1124I Starting reorganization of table SAPSR3.BDCP ...
BR0280I BRSPACE time stamp: 2009-04-01 15.17.24
BR1105I Table SAPSR3.BDCP reorganized successfully
BR0280I BRSPACE time stamp: 2009-04-01 15.17.24
BR1141I 1 of 1 table reorganized - 2152767 of 2152767 rows processed
BR0204I Percentage done: 100.00%, estimated end time: 15:17
BR0001I **************************************************
BR0280I BRSPACE time stamp: 2009-04-01 15.17.24
BR1102I Number of tables reorganized successfully: 1
BR0280I BRSPACE time stamp: 2009-04-01 15.17.24
BR0670I Enter 'c[ont]' to continue, 'b[ack]' to go back, 's[top]' to abort:
c
BR0280I BRSPACE time stamp: 2009-04-01 15.18.05
BR0257I Your reply: 'c'
BR0259I Program execution will be continued...
BR0280I BRSPACE time stamp: 2009-04-01 15.18.05
BR1022I Number of tables processed: 1
BR1003I BRSPACE function 'tbreorg' completed
BR1008I End of BRSPACE processing: seagjxpe.tbr 2009-04-01 15.18.05
BR0280I BRSPACE time stamp: 2009-04-01 15.18.05
BR1005I BRSPACE completed successfully
BR0292I Execution of BRSPACE finished with return code 0
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.05
BR0256I Enter 'c[ont]' to continue, 's[top]' to cancel BRTOOLS:
c
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.10
BR0257I Your reply: 'c'
BR0259I Program execution will be continued...
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.10
BR0656I Choice menu 7 - please make a selection
Database segment management
1 + Reorganize tables
2 - Rebuild indexes
3 - Export tables
4 - Import tables
5 - Alter tables
6 - Alter indexes
7 - Additional segment functions
8 - Reset program status
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
b
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.13
BR0663I Your choice: 'b'
BR0673I Going back to the previous menu...
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.13
BR0656I Choice menu 1 - please make a selection
BR*Tools main menu
1 = Instance management
2 - Space management
3 + Segment management
4 - Backup and database copy
5 - Restore and recovery
6 - Check and verification
7 - Database statistics
8 - Additional functions
9 - Exit program
Standard keys: c - cont, b - back, s - stop, r - refr, h - help
BR0662I Enter your choice:
9
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.15
BR0663I Your choice: '9'
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.15
BR0680I Do you really want to exit BRTOOLS? Enter y[es]/n[o]:
y
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.17
BR0257I Your reply: 'y'
BR0280I BRTOOLS time stamp: 2009-04-01 15.18.17
BR0652I BRTOOLS completed successfully
And I compared the space before and after the reorg.
I got 5 GB of space back.
I have done the whole procedure on the BDCP table on WIQ.
Shall I do the same procedure on the production server?
Thanks in advance.
JIggi