Supplemental logging

Hi,
Presently we copy production tables into a reporting database using materialized views, and they work like a charm. There are a couple of problems with this setup, though. First, it is not feasible when we keep adding/altering 20-30 tables every release, dropping and recreating the mlogs/mviews each time. Second, during the busy season the mview refreshes take a very long time to copy the changes, as there is just too much data to move around.
People are not happy with this delay in copying the tables; it happens a couple of times every month. So we are planning to implement a logical standby (we already have a physical standby).
After reading the Oracle docs, I get the feeling that supplemental logging is going to dump so much redo that it might eventually affect production performance. Can this happen?
Also, in our database we have several history tables that have neither primary keys nor unique keys, in which case I will have to enable supplemental logging for ALL columns, at least for those tables.
Has anyone had a similar situation? If so, does it make sense to go with logical standby or to use Streams instead (of course they both use the same underlying technology)?
I need some insights into which might work best in my case.
We are running Oracle 10.2.0.3 on RHEL5.
Thanks,
Ramki

Thanks Justin!
1) >> While generally I'd certainly rather deal with a logical standby than a bunch of materialized views if your reporting database needs to have most or all the tables in the production database
Pretty much we copy all the tables over to reporting with fast-refreshable mviews, and we have a 100 Mb line between prod and reporting which will eventually be moved to a gigabit network in the fall, so bandwidth will no longer be a bottleneck.
>> it is not obvious to me why that should substantially reduce the amount of data that needs to be moved around. it really doesn't matter whether the changes are coming via logical change records or MV log entries -- it's going to be roughly the same amount of change data flowing over.
But with logical standby there is no additional I/O happening on prod to figure out what has changed/updated/deleted, the way there is when refreshing the reporting database using the mlogs. When you have 6 million changed rows sitting in the mlogs waiting to be copied over to reporting, all the additional I/O required on prod to get those rows across is avoided with logical standby, because everything (such as mining the LCRs out of the redo logs that are shipped over) happens on the reporting side.
2) >> Supplemental logging does increase redo volume a bit, so if you have an I/O-bound source database, adding additional redo could certainly affect performance. Of course, you're also getting rid of the redo generated by writing to MV logs, so it may be a wash.
That's a good point: the additional redo generated by supplemental logging will be offset by the redo no longer generated by writing into the mlogs.
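One thing I plan to measure on the POC box is the actual redo volume either way. A rough sketch (nothing production-grade): read the session's 'redo size' statistic before and after a representative batch, once with supplemental logging and once with the mlogs, and compare the two runs.
select n.name, s.value
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name = 'redo size';
-- run the representative workload, then re-run the query;
-- the difference is the redo (in bytes) generated by this session.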
3) >> Is there a reason that you have history tables without a primary key? It may be substantially easier to add a primary key that just gets populated via a sequence than to supplementally log every column in the table. Of course, it may only matter if you update history table rows.
The application only inserts into these history tables and never updates them, and most are not queried within the application. Some of them are queried, but those have primary keys; not all of them do.
So when we don't have a PK or UK available on these tables, do we need to enable supplemental logging for all columns on them, or can we do minimal supplemental logging at the database level plus table-level supplemental logging?
I am still unable to get the whole picture of how supplemental logging works for tables that have no means of identifying rows uniquely. Does it write the whole row to the redo, and if so, does that mean more redo gets generated?
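For reference, this is the statement I would have to run for each of the key-less history tables (table name changed); with no PK/UK, the whole before-image of each changed row goes into the redo, so this is the case I expect to cost the most extra redo:
alter table hist.order_history add supplemental log data (all) columns;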
I am doing a POC right now, but as usual I cannot really replay everything that happens in production on this test database (I wish I were running 11g).
Thanks,
Ramki

Similar Messages

  • What is the overhead of Supplemental Logging?

    We would like to copy data from a 600 GB Oracle database (9i) to a separate database for reporting. We want to use Data Guard in Logical Standby mode. The source database is heavily used and we can't afford a significant increase in system load (e.g. I/O activity) on that system.
    To set up a Logical Standby, we need to put the JD Edwards database into Supplemental Logging mode. I am concerned that this will noticeably increase the load on the source server.
    Has anyone analyzed the additional overhead of Supplemental Logging?
    I have done some testing using Oracle 10.2 (Oracle XE on my computer) which indicates that when I turn on Supplemental Logging, the size of the archive logs grows by 40%. I have not yet tested this on our 9i database.
    Thank you in advance for your help!
    Best Regards,
    Mike
    =================================
    The code below demonstrates the symptoms mentioned above:
    RESULTS - size of archive logs generated:
    - With Supplemental Logging: 120 MB
    - Without: 80 MB
    =================================
    CREATE TABLE "EMP"
    (     "EMPLOYEE_ID" NUMBER(6,0),
         "FIRST_NAME" VARCHAR2(20),
         "LAST_NAME" VARCHAR2(25) NOT NULL ENABLE,
         "EMAIL" VARCHAR2(25) NOT NULL ENABLE,
         "PHONE_NUMBER" VARCHAR2(20),
         "HIRE_DATE" DATE NOT NULL ENABLE,
         "JOB_ID" VARCHAR2(10) NOT NULL ENABLE,
         "SALARY" NUMBER(8,2),
         "COMMISSION_PCT" NUMBER(2,2),
         "MANAGER_ID" NUMBER(6,0),
         "DEPARTMENT_ID" NUMBER(4,0)
    alter table emp add CONSTRAINT "EMP_EMP_ID_PK" PRIMARY KEY ("EMPLOYEE_ID")
    CREATE TABLE "STAT"
    (     "F1" NUMBER,
         "ID" VARCHAR2(10)
    The "employee" table is from Oracle XE samples
    The procedure below generates transactions to test archive log size.
    To run, put the database in archive log mode. Then pop an archive log by executing
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    To toggle Supplemental Logging between runs, use one of:
    ALTER DATABASE drop SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
    declare
      i number;
    begin
      i := 0;
      while i < 1000
      loop
        delete from emp;
        insert into emp select * from employees;
        update emp set commission_pct = commission_pct * .5;
        update stat set f1 = i where id = 'UPD';  -- assumes a row with id = 'UPD' exists
        commit;
        if mod(i, 1000) = 0 then   -- mod() is a function in PL/SQL, not an infix operator
          dbms_output.put_line(i);
        end if;
        i := i + 1;
      end loop;
    end;
    /
    /***********************************************/

    Unless the bottleneck of your system is related in any way to the redo log files, I don't see any risk in enabling supplemental logging. A good way to find out is to look at a Statspack report and see which events are in the top 5 time-wise.
    Daniel
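    If a full Statspack snapshot is overkill, a crude approximation of the top-5 timed events can be read straight from v$system_event (a sketch; the idle-event filter below is deliberately minimal and should be extended):
    SELECT *
      FROM (SELECT event, time_waited, total_waits
              FROM v$system_event
             WHERE event NOT IN ('SQL*Net message from client',
                                 'rdbms ipc message',
                                 'pmon timer',
                                 'smon timer')
             ORDER BY time_waited DESC)
     WHERE ROWNUM <= 5;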

  • Table level supplemental logging

    How is table level supplemental logging different from Database level supplemental logging? Is Database level supplemental logging required for enabling table level supplemental logging?
    I have done 3 test cases, please suggest!
    Case 1
    Enabled only DB-level supplemental logging (SL).
    Observation: DML on all tables can be tracked with LogMiner. I find this perfect.
    Case 2
    Enabled only table-level supplemental logging.
    Setting: 2 tables -- AAA (with table-level SL) and BBB (without table-level SL).
    Observation: only DDL is recorded by LogMiner, and a few of the operations are listed as internal.
    Case 3
    Enabled database-level SL first, then table-level SL only on one table (AAA), with no table-level SL on BBB.
    Observation: DDL and DML on all the tables are tracked. The point is, if this gets the same result as DB-level SL alone, what is the significance of enabling table-level SL? Or am I missing something?
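    For reference, a minimal LogMiner session of the kind used in these tests might look like this (the archived log path is a placeholder):
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/1_123.arc', DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    SELECT operation, seg_name, sql_redo
      FROM v$logmnr_contents
     WHERE seg_name IN ('AAA', 'BBB');
    EXECUTE DBMS_LOGMNR.END_LOGMNR;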

    I have the same experience: when database-level supplemental logging is enabled, adding supplemental logging at the table level does not affect functionality or performance. Inserting 1 M rows into a test table takes 25 seconds (measured on the target database) with table-level supplemental logging and 26 seconds without it. My GoldenGate version is 11.2, Oracle Database version 11.2.0.3.0.
    If someone can show the benefit of having table-level supplemental logging in addition to database-level logging, I would very much appreciate it.

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
    SQL> alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    -------- --- ---
    IMPLICIT YES NO
    My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I run the command below as well?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key.
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns.
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns.
    4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database:
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature. If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA:
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you require more details, refer to the Oracle GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0).

  • Avoid SUPPLEMENTAL LOG while comparing 2 tables using dbms_metadata_diff()

    Hi,
    I am using Oracle Database 11g R2 and the built-in package dbms_metadata_diff.compare_alter() to compare 2 tables and get the ALTER statements for them. I have GoldenGate applied on one of the schemas, and as part of that process we had to enable SUPPLEMENTAL LOGGING on the database. So when 2 tables are compared, the output also includes the difference in the SUPPLEMENTAL LOG groups. I want to compare the 2 tables while ignoring the SUPPLEMENTAL LOG group differences.
    Below is a part of code which I use :-
    dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,  -- keep the DDL pretty
                                      'PRETTY', TRUE);
    dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,  -- put a SQL terminator (;) at the end of each statement
                                      'SQLTERMINATOR', TRUE);
    dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,  -- ignore SEGMENT attributes in the comparison
                                      'SEGMENT_ATTRIBUTES', FALSE);
    dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,  -- do not include the STORAGE clause
                                      'STORAGE', FALSE);
    dbms_metadata.set_transform_param(DBMS_METADATA.SESSION_TRANSFORM,  -- do not include TABLESPACE info
                                      'TABLESPACE', FALSE);
    -- Here I want some parameter which would suppress the SUPPLEMENTAL LOG group difference.
    SELECT dbms_metadata_diff.compare_alter('TABLE',  -- compare the 2 tables under the above parameters; output is ALTER statements
                                            V_OBJECT_NAME,
                                            V_OBJECT_NAME,
                                            V_DEST_SCHEMA_NAME,
                                            V_SOURCE_SCHEMA_NAME,
                                            null,
                                            'DBLINK_TEMP')
      INTO V_TAB_DIFF_ALTER
      FROM dual;
    In the current case, for all tables I get output like the following (sample table output):
    ALTER TABLE "BANK"."BA_EOD_SHELL_DRIVER" DROP SUPPLEMENTAL LOG GROUP GGS_BA_EOD_SHELL_DR_199689;
    I don't want such ALTER statements in my output, as I am not going to execute them on the schema, because I need SUPPLEMENTAL LOGGING for GoldenGate.
    Please suggest me some solution on it.
    Thanks in advance.

    It probably won't answer the question...
    The DBMS_METADATA_DIFF.COMPARE_ALTER function returns a CLOB containing all the ALTER TABLE statements.
    I have noticed that you hold your result in the V_TAB_DIFF_ALTER variable. Why don't you search it for what you don't need and remove it?
    "The DBMS_LOB package provides subprograms to operate on BLOBs, CLOBs, NCLOBs, BFILEs, and temporary LOBs. You can use DBMS_LOB to access and manipulate specific parts of a LOB or complete LOBs."

  • Supplemental Logging in Redo Log

    I enabled Supplemental Logging at both the database level and the table level.
    Then I executed some SQL statements.
    But after dumping the redo file using "alter system dump logfile", I can't see the effect in the trace file.
    I expected the primary key values to appear in the OP:5.1 change.
    Does supplemental logging have no effect when row chaining or row migration occurs, i.e. OP:11.6?
    Must it be in the undo segment of the update change, OP:11.5?
    Black Thought

    > please provide your Oracle version and how you enable supplemental logging. this presentation may assist you
    Thank you TongucY for the response. I had already seen Julian's presentation, and I know the internals.
    > As you will see from Julian Dyke's presentation, the supplemental log goes into the undo change vector (and the undo block). The last time I checked, it was not made visible in the formatted log dump. The only clue in the formatted trace about what had happened was that the undo change vector LEN (and the redo record LEN) were larger.
    Really? I had the same suspicion last weekend. I am going to check it today. Thank you Lewis for the notification.

  • Supplemental log group

    I set up Streams for 5 tables, using unconditional supplemental log groups. Four tables are okay; one is not working. I checked the dba_log_groups view: for four of the tables, 3 additional supplemental log groups were generated each (primary key, unique key, foreign key), while the remaining one has only the supplemental log group I created myself.
    What is the problem with this table?
    Please help, it is an emergency; tomorrow we need to go to production.
    Thanks

    Hi Mary,
    Are you qualifying the PK for each of the tables?
    Make sure you have defined your rules correctly (e.g., query dba_streams_table_rules).
    The 3 additional log groups are normal; what is not normal is the table that does not have them.
    My guess would be that that table does not have a capture rule correctly defined.
    I have already noticed that the 3 log groups appear when you create the capture rule and disappear when you drop the capture rule.
    Regards,
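    Two quick checks along those lines (owner/table names are placeholders):
    SELECT log_group_name, log_group_type, always, generated
      FROM dba_log_groups
     WHERE owner = 'SCOTT' AND table_name = 'PROBLEM_TABLE';
    SELECT streams_name, streams_type, rule_name
      FROM dba_streams_table_rules
     WHERE table_owner = 'SCOTT' AND table_name = 'PROBLEM_TABLE';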

  • SQL Server - Extract Error - OGG-00868  Supplemental logging is disabled

    Hello,
    We are trying to replicate from a SQL Server 2008 database to Oracle database, but when trying to start the extract process we are getting the following error message:
    OGG-00868 Supplemental logging is disabled for database 'GoldenGate'. To enable logging, perform the following: 1) Set 'trunc. log on chkpt.' to false. 2) Create a full backup of the database. Please refer to the "Oracle GoldenGate For Windows and UNIX Administration Guide" for details.
    I have read that to enable supplemental logging it is enough to "add trandata table_name", and this has been done. The extract process we are using is the following:
    EXTRACT cap_or4
    SOURCEDB GoldenGate
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    EXTTRAIL c:\GoldenGate\V28983-01-GG-111112-SQLServer-Windows-x64\dirdat\C4
    TABLE GoldenGate.dbo.DES_T1;
    And the 'trunc.log on chkpt' is set to false.
    We don’t know what else to do, or to check... does anyone have any idea?!
    Thank you very much, best regards,
    Araitz.-

    Have you followed the whole installation process as per the guide? Clearly you missed something.
    Please follow the steps below.
    Installation & Configuration of Oracle GoldenGate for MS SQL Server:
    Pre-requisites:
    1. Change Data Capture (CDC) must be enabled for Oracle GoldenGate; it is enabled by Oracle GoldenGate by means of the ADD TRANDATA command.
    2. The SQL Server source database must be set to use the full recovery model.
    3. Oracle GoldenGate does not support system databases.
    4. After the source database is set to full recovery, a full database backup must be taken.
    5. SQL Server 2008 ODBC/OLE DB: SQL Server Native Client 10.0 driver.
    6. Oracle GoldenGate processes can use either Windows Authentication or SQL Server Authentication to connect to a database.
    7. Before installing Oracle GoldenGate on a Windows system, install and configure the Microsoft Visual C++ 2005 SP1 Redistributable Package. Make certain it is the SP1 version of this package, and make certain to get the correct bit version for your server. This package installs runtime components of Visual C++ libraries. For more information, and to download this package, go to http://www.microsoft.com.
    Privileges:
    1. Required SQL Server privileges for Manager when using Windows authentication:
    Extract (source system):
    --The BUILTIN\Administrators account must be a member of the SQL Server fixed server role System Administrators.
    --The account must be a member of the SQL Server fixed server role System Administrators.
    Replicat (target system):
    --The BUILTIN\Administrators account must be at least a member of the db_owner fixed database role of the target database.
    --The account must be at least a member of the db_owner fixed database role of the target database.
    2. Required SQL Server privileges for Extract and Replicat when using SQL Server authentication:
    Extract: member of the SQL Server fixed server role System Administrators.
    Replicat: at least a member of the db_owner fixed database role of the target database.
    Downloading Oracle GoldenGate
    Download the appropriate Oracle GoldenGate build to each system that will be part of the Oracle GoldenGate configuration.
    1. Navigate to http://edelivery.oracle.com.
    2. On the Welcome page:
    --Select your language.
    --Click Continue.
    3. On the Export Validation page:
    --Enter your identification information.
    --Accept the Trial License Agreement (even if you have a permanent license).
    --Accept the Export Restrictions.
    --Click Continue.
    4. On the Media Pack Search page:
    --Select the Oracle Fusion Middleware Product Pack.
    --Select the platform on which you will be installing the software.
    --Click Go.
    5. In the Results List:
    --Select the Oracle GoldenGate Media Pack that you want.
    --Click Continue.
    6. On the Download page:
    --Click Download for each component that you want. Follow the automatic download
    process to transfer the mediapack.zip file to your system.
    Installing the Oracle GoldenGate files
    1. Unzip the downloaded file(s) by using WinZip or an equivalent compression product.
    2. Move the files in binary mode to a folder on the drive where you want to install Oracle GoldenGate. Do not install Oracle GoldenGate into a folder that contains spaces in its name, even if the path is in quotes. For example:
    C:\“Oracle GoldenGate” is not valid.
    C:\Oracle_GoldenGate is valid.
    3. From the Oracle GoldenGate folder, run the GGSCI program.
    4. In GGSCI, issue the following command to create the Oracle GoldenGate working
    directories.
    CREATE SUBDIRS
    a.Create the necessary working directories for GG.
    Source DB:
    GGSCI>create subdirs
    Target DB:
    GGSCI>create subdirs
    Install the GoldenGate Manager process
    1.Create a GLOBALS parameter file
    --Execute the following commands from the <install location>.
    GGSCI> EDIT PARAMS ./GLOBALS
    --In the text editor, type the following:
    MGRSERVNAME <mgr service>
    Using a GLOBALS file in each GoldenGate instance allows you to run multiple Managers as services on Windows. When the service is installed, the Manager name
    is referenced in GLOBALS, and this name will appear in the Windows Services control panel.
    Note! Check to ensure that the GLOBALS file has been added in the GoldenGate installation directory and that it does not have an extension.
    --Execute the following command to exit GGSCI.
    GGSCI> EXIT
    2. Install the Manager service
    Execute the following command to run GoldenGate’s INSTALL.EXE . This executable installs Manager as a Windows service and adds GoldenGate events to the
    Windows Event Viewer.
    Shell> INSTALL ADDSERVICE ADDEVENTS
    Note: Adding the Manager as a service is an optional step used when there are multiple environments on the same system or when you want to control the name
    of the manager for any reason.
    Configuring an ODBC connection
    A DSN stores information about how to connect to a SQL Server database through ODBC (Open Database Connectivity). Create a DSN on each SQL Server source
    and target system.
    NOTE: Replicat will always use ODBC to query the target database for metadata.
    To create a SQL Server DSN
    1. Run one of the following ODBC clients:
    --If using a 32-bit version of Oracle GoldenGate on a 64-bit system, create the DSN by running the ODBCAD32.EXE client from the %SystemRoot%\SysWOW64
    folder.
    --If using a 64-bit version of Oracle GoldenGate on a 64-bit system, create a DSN by running the default ODBCAD32.EXE client in Control Panel>Administrative
    Tools>Data Sources (ODBC).
    --If using a version of Oracle GoldenGate other than the preceding, use the default ODBC client in Control Panel>Administrative Tools>Data Sources (ODBC).
    2. In the ODBC Data Source Administrator dialog box of the ODBC client, select the System DSN tab, and then click Add.
    3. Under Create New Data Source, select the correct SQL Server driver as follows:
    --SQL Server 2000: SQL Server driver
    --SQL Server 2005: SQL Native Client driver
    --SQL Server 2008: SQL Server Native Client 10.0 driver
    4. Click Finish. The Create a New Data Source to SQL Server wizard is displayed.
    5. Supply the following:
    --Name: Can be of your choosing. In a Windows cluster, use one name across all nodes in the cluster.
    --Server: Select the SQL Server instance name.
    6. Click Next.
    7. For login authentication, select With Windows NT authentication using the network login ID for Oracle GoldenGate to use Windows authentication, or select
    With SQL Server authentication using a login ID and password entered by the user for Oracle GoldenGate to use database credentials. Supply login information
    if selecting SQL Server authentication.
    8. Click Next.
    9. If the default database is not set to the one that Oracle GoldenGate will connect to,
    click Change the default database to, and then select the correct name. Set the other
    settings to use ANSI.
    10. Click Next.
    11. Leave the next page set to the defaults.
    12. Click Finish.
    13. Click Test Data Source to test the connection.
    14. Close the confirmation box and the Create a New Data Source box.
    15. Repeat this procedure from step 1 on each SQL Server source and target system.
    Setting the database to full recovery model
    Oracle GoldenGate requires a SQL Server source database to be set to the full recovery model.
    To verify or set the recovery model
    1. Connect to the SQL Server instance with either Enterprise Manager for SQL Server 2000 or SQL Server Management Studio for SQL Server 2005 and 2008.
    2. Expand the Databases folder.
    3. Right-click the source database, and then select Properties.
    4. Select the Options tab.
    5. Under Recovery, set Model to Full if not already.
    6. If the database was in Simple recovery or never had a full database backup, take a full database backup before starting Extract.
    7. Click OK.
    Enabling supplemental logging
    These instructions apply to new installations of Oracle GoldenGate for all supported SQL Server versions. You will enable supplemental logging with the ADD
    TRANDATA command so that Extract can capture the information that is required to reconstruct SQL operations on the target. This is more information than
    what SQL Server logs by default.
    --SQL Server 2005 updated to CU6 for SP2 or later: ADD TRANDATA calls the sys.sp_extended_logging stored procedure.
    --SQL Server 2005 pre-CU6 for SP2: ADD TRANDATA creates the following:
    A replication publication named [<source database name>]: GoldenGate <source database name> Publisher. To view this publication, look under Replication > Local Publications in SQL Server Management Studio. This procedure adds the specified table to the publication as an article.
    A SQL Server Log Reader Agent job for the publication. This job cannot run concurrently with an Extract process in this configuration.
    --SQL Server 2008: ADD TRANDATA enables Change Data Capture (CDC) and creates a minimal Change Data Capture on the specified table.
    a. Oracle GoldenGate does not use the CDC tables other than as necessary to enable supplemental logging.
    b. As part of enabling CDC, SQL Server creates two jobs per database: <dbname>_capture and <dbname>_cleanup. The <dbname>_capture job adjusts the secondary truncation point and captures data from the log to store in the CDC tables. The <dbname>_cleanup job ages and deletes data captured by CDC.
    c. Using the TRANLOGOPTIONS parameter with the MANAGESECONDARYTRUNCATIONPOINT option for Extract removes the <dbname>_capture job, preventing the overhead of that job loading the CDC tables.
    d. The alternative (using TRANLOGOPTIONS with NOMANAGESECONDARYTRUNCATIONPOINT) requires the SQL Server Agent to be running and requires the <dbname>_capture and <dbname>_cleanup jobs to be retained. You will probably need to adjust the <dbname>_cleanup data retention period if the default of three days is not acceptable for storage reasons.
    To enable supplemental logging
    1. On the source system, run GGSCI.
    2. Log into the database from GGSCI.
    DBLOGIN SOURCEDB <DSN>[, USERID <user>, PASSWORD <password>]
    Where:
    -- SOURCEDB <DSN> is the name of the SQL Server data source.
    -- USERID <user> is the Extract login and PASSWORD <password> is the password that is required if Extract uses SQL Server authentication.
    3. In GGSCI, issue the following command for each table that is, or will be, in the Extract configuration. You can use a wildcard to specify multiple table
    names, but not owner names.
    ADD TRANDATA <owner>.<table>
    NOTE:The Log Reader Agent job cannot run concurrently with the GoldenGate Extract process.
    4.Configuration
    a.Create and start manager on the source and the destination.
    Source DB:
    shell>ggsci
    GGSCI> edit params mgr
    PORT 7809
    DYNAMICPORTLIST 7900-7950
    DYNAMICPORTREASSIGNDELAY 5
    AUTOSTART ER *
    AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
    LAGCRITICALMINUTES 60
    LAGREPORTMINUTES 30
    PURGEOLDEXTRACTS c:\ogg\dirdat\T*, USECHECKPOINTS, MINKEEPFILES 10
    GGSCI> start manager
    GGSCI>info all
    b. Create the extract group on the source side:
    GGSCI> edit params EXT1
    Add the following lines to the new parameter file
    EXTRACT EXT1
    SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    EXTTRAIL c:\ogg\dirdat\T1
    DISCARDFILE c:\ogg\dirrpt\EXT1.DSC, PURGE, MEGABYTES 100
    TABLE dbo.TCUSTMER;
    TABLE dbo.TCUSTORD;
    GGSCI>ADD EXTRACT EXT1, TRANLOG, BEGIN NOW
    GGSCI>ADD EXTTRAIL c:\ogg\dirdat\T1, EXTRACT EXT1, MEGABYTES 100
    GGSCI> edit params PMP1
    Add the following lines to the new parameter file
    EXTRACT PMP1
    SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
    PASSTHRU
    RMTHOST dr, MGRPORT 7810
    RMTTRAIL c:\ogg\dirdat\P1
    TABLE dbo.TCUSTMER;
    TABLE dbo.TCUSTORD;
    GGSCI> ADD EXTRACT PMP1, EXTTRAILSOURCE c:\ogg\dirdat\T1
    GGSCI> ADD EXTTRAIL c:\ogg\dirdat\P1, EXTRACT PMP1, MEGABYTES 100
    Target DB:
    ===========
    shell>ggsci
    GGSCI> edit params mgr
    PORT 7810
    AUTOSTART ER *
    AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
    LAGCRITICALMINUTES 60
    LAGREPORTMINUTES 30
    PURGEOLDEXTRACTS c:\ogg\dirdat\P*, USECHECKPOINTS, MINKEEPFILES 10
    GGSCI> start manager
    GGSCI>info all
    Create parameter file for replicat:
    GGSCI> edit params REP1
    REPLICAT REP1
    TARGETDB <dsn>, USERID ogg@DR, PASSWORD ogg@321!
    DISCARDFILE c:\ogg\dirrpt\REP1.DSC, APPEND, MEGABYTES 100
    HANDLECOLLISIONS
    ASSUMETARGETDEFS
    MAP dbo.TCUSTMER, TARGET dbo.TCUSTMER;
    MAP dbo.TCUSTORD, TARGET dbo.TCUSTORD;
    GGSCI>ADD REPLICAT REP1, RMTTRAIL c:\ogg\dirdat\P1, nodbcheckpoint
    # Start extract and replicat:
    Source:
    GGSCI> start er *
    Destination:
    GGSCI> start er *
    Greetings,
    N K

  • Redo Log and Supplemental Logging related doubts

    Hi Friends,
    I am studying supplemental logging in detail. I have read lots of articles and the Oracle documentation about it and about redo logs, but could not find answers to some doubts.
    Please help me clear them up.
    Scenario: we have one table with a primary key, and we execute an update query on that table which does not use the primary key column in any clause.
    Question: in this case, does the redo log entry generated for the changes made by the update query contain the primary key column values?
    Question: if we have a table with a primary key, do we need to enable supplemental logging on the primary key columns of that table? If yes, under which circumstances do we need to enable it?
    Question: if we have to configure Streams replication on that table (having a primary key), why do we actually need to enable supplemental logging for it? (I have read the documentation saying that Streams requires some more information, but what information does it actually need? This question is closely related to the first one.)
    Also, please suggest any good article/site which provides inside details of redo logs and supplemental logging, if you know one.
    Regards,
    Dipali..

    1) Assuming you are not updating the primary key column and supplemental logging is not enabled, Oracle doesn't need to log the primary key column to the redo log, just the ROWID.
    2) This is rather hard to answer without being tautological. You need to enable supplemental logging if and only if you have some downstream use for additional columns in the redo logs. Streams, and the technologies built on top of Streams, are the most common reason for enabling supplemental logging.
    3) If you execute an update statement like
    UPDATE some_table
       SET some_column = new_value
     WHERE primary_key = some_key_value
       AND <<other conditions as well>>
    and look at the update statement that LogMiner builds from the redo logs in the absence of supplemental logging, it would basically be something like
    UPDATE some_table
       SET some_column = new_value
     WHERE rowid = rowid_of_the_row_you_updated
    Oracle doesn't need to replay the exact SQL statement you issued (and thus it doesn't have to write the SQL statement to the redo log, and it doesn't have to worry if the UPDATE takes a long time to run -- otherwise it would take as long to apply an archived log as it did to generate it, which would be disastrous in a recovery situation). It just needs to reconstruct the SQL statement from the information in the redo, which is just the ROWID and the column(s) that changed.
    If you try to run this statement on a different database (via Streams, for example), the ROWIDs on the destination database are likely totally different (since a ROWID is just the physical address of a row on disk). So adding supplemental logging tells Oracle to log the primary key columns to redo and allows LogMiner/Streams/etc. to reconstruct the statement using the primary key values for the changed rows, which are the same on both the source and destination databases.
    Justin
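    For completeness, the table-level command that adds this key information to redo (table name from the example above):
    ALTER TABLE some_table ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    -- with this in place, every update also logs the PK columns, so LogMiner/
    -- Streams can build a PK-based WHERE clause instead of a ROWID-based one.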

  • Enabling supplemental logging for many tables

    Hi All,
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE     9.2.0.8.0     Production
    TNS for Solaris: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production
    I have 200 tables where I need to enable supplemental logging.
    ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS ==> not working ==> ORA-00905: missing keyword
    So I am enabling supplemental logging manually:
    alter table EMP_PER add supplemental log group emp_pe_slog3 (END_ADDR_ORG_ID, LOA_END_DT, LAST_PROMO_DT, LAST_ANNUAL_RVW_DT, HIRE_DT, CURR_SALARY_AMT, CURR_BONUS_TGT_PCT, CURR_AVAIL_UNTIL, COST_PER_HR) always;
    But as a few of the tables have more than 400 columns, it is taking a lot of time to break the columns into many groups.
    Can anyone help me with a PL/SQL block to generate the script for all the tables?

    Thanks CJ for your reply.
    I have checked the whole presentation, but the issue is that when a table has more than 33 columns we need to create a new log group for the extra columns to fit in; otherwise we get a "maximum number of columns exceeded" error.
    So for now I am writing the queries manually... for all 200 tables... which is taking more time.
    So could you help me out with a procedure, script, or dynamic SQL which can generate the supplemental-logging statements for many tables?
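    One possible generator; a rough sketch only (the schema filter, the group-name pattern, and the 32-column cap are assumptions to adjust, and output is printed one column per line to stay under the 9i dbms_output 255-byte line limit):
    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      c_max_cols CONSTANT PLS_INTEGER := 32;  -- assumed per-group column cap
      l_in_grp PLS_INTEGER;
      l_grp    PLS_INTEGER;
      l_seq    PLS_INTEGER;
      l_total  PLS_INTEGER;
    BEGIN
      FOR t IN (SELECT owner, table_name
                  FROM dba_tables
                 WHERE owner = 'SCOTT')       -- placeholder: restrict to your 200 tables
      LOOP
        -- LONG/LOB columns cannot be part of a supplemental log group
        SELECT COUNT(*) INTO l_total
          FROM dba_tab_columns
         WHERE owner = t.owner AND table_name = t.table_name
           AND data_type NOT IN ('LONG', 'LONG RAW', 'CLOB', 'BLOB', 'NCLOB');
        l_in_grp := 0;  l_grp := 1;  l_seq := 0;
        FOR c IN (SELECT column_name
                    FROM dba_tab_columns
                   WHERE owner = t.owner AND table_name = t.table_name
                     AND data_type NOT IN ('LONG', 'LONG RAW', 'CLOB', 'BLOB', 'NCLOB')
                   ORDER BY column_id)
        LOOP
          IF l_in_grp = 0 THEN  -- open a new log group
            dbms_output.put_line('ALTER TABLE ' || t.owner || '.' || t.table_name);
            dbms_output.put_line('  ADD SUPPLEMENTAL LOG GROUP ' ||
                                 substr(t.table_name, 1, 20) || '_SLG' || l_grp || ' (');
          END IF;
          l_in_grp := l_in_grp + 1;
          l_seq    := l_seq + 1;
          IF l_in_grp = c_max_cols OR l_seq = l_total THEN  -- close the group
            dbms_output.put_line('    ' || c.column_name);
            dbms_output.put_line('  ) ALWAYS;');
            l_in_grp := 0;
            l_grp    := l_grp + 1;
          ELSE
            dbms_output.put_line('    ' || c.column_name || ',');
          END IF;
        END LOOP;
      END LOOP;
    END;
    /
    Spool the output to a file and review it before running; group names are truncated so they stay within the 30-character identifier limit.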

  • New column add in supplemental log group

    Hi All,
    How do I add a new column to an existing supplemental log group?
    Example:
    SQL> desc scott.emp
     Name                  Null?    Type
     --------------------- -------- ------------
     EMPNO                 NOT NULL NUMBER(4)
     ENAME                          VARCHAR2(10)
     JOB                            VARCHAR2(9)
     MGR                            NUMBER(4)
     HIREDATE                       DATE
     SAL                            NUMBER(7,2)
     COMM                           NUMBER(7,2)
     DEPTNO                         NUMBER(2)
    The existing supplemental log group is:
    alter table emp add supplemental log group emp_log_grp (empno, ename, sal) always;
    Now I want to add the COMM column to the log group.
    Please help.
    Thanks
    Naresh

    Did you try:
    ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP log_group_dep_pk (comm) ALWAYS;
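    Note that a supplemental log group cannot be altered in place; the usual approach (a sketch using the table and group names from the question) is to drop the group and recreate it with the extra column:
    ALTER TABLE scott.emp DROP SUPPLEMENTAL LOG GROUP emp_log_grp;
    ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_log_grp
      (empno, ename, sal, comm) ALWAYS;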

  • SUPPLEMENTAL LOG DATA (ALL)  CREATING "?" Constraint

    Hi all,
    In my application we are adding SUPPLEMENTAL LOGGING for all columns. After this, a miscellaneous constraint is created on each table. The constraint details are:
    Constraint Type: ?
    Search Condition: NULL
    Status: DISABLED
    Even though it is in a disabled state, it sometimes prevents us from committing the operation.
    One way to overcome this is to write a SQL script to drop this type of constraint after the deployment, but we can't do that because of our app dependencies.
    Can anyone please help me get rid of this?

    OK. With some more investigation I have found the following to be true.
    The following commands were run in Oracle SQL Developer:
    1) Run an insert/update statement.
    2) It appears immediately in the source table and in v$logmnr_contents, but not in the target table.
    3) Run a commit statement, and it replicates to the target.
    Why does it only appear in the target after the commit statement, while it appears in the source and in v$logmnr_contents immediately?
    Thanks

  • Supplemental logging with Oracle 10gR2 Streams and Data Guard

    Hello,
    I have an environment with Oracle DB 10gR2 and a physical standby in a Data Guard DR configuration. This environment is now going to be extended to a replication setup using 2-way Oracle Streams replication (for replication from this branch office to the central office; other branches will be added soon). The primary DB will be replicated to the other primary DB (in the remote central office).
    So here is my question: is it strictly necessary to enable supplemental logging on the source (primary) databases to set up 2-way Streams replication? And if it is, can I enable supplemental logging on the primaries without affecting their physical standbys, or do I need to do something special?
    Thanks in advance.

    Sorry, it's a duplicate post, because of a browser connection problem.

  • Stream supplemental log group (10gR2).

    How can I find out all the specific tables/columns (primary key, unique key, foreign key) that have been added to the supplemental log groups/data?

    dba_log_group_columns
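    For example, with an outer join so that groups whose column lists are resolved implicitly (and so may have no rows in dba_log_group_columns) still show up:
    SELECT g.owner, g.table_name, g.log_group_name, g.log_group_type,
           c.column_name, c.position
      FROM dba_log_groups g, dba_log_group_columns c
     WHERE c.owner (+)          = g.owner
       AND c.table_name (+)     = g.table_name
       AND c.log_group_name (+) = g.log_group_name
     ORDER BY g.owner, g.table_name, g.log_group_name, c.position;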

  • Logminer supplemental data logging

    Hi,
    What information will LogMiner give me if supplemental log data is not enabled?
    Is LogMiner completely useless in such a case?
    thx

    sb is correct. Please do not expect volunteers to read the docs for you.
    http://tahiti.oracle.com
    If you have a specific question about a specific version after reading the docs where you don't understand something then ask.
    But do not expect a lot of love from us if you ask us to teach you how to drive your car.
