SUPPLEMENTAL LOG DATA (ALL) CREATING "?" Constraint

Hi all,
In my application we are adding SUPPLEMENTAL LOGGING for all columns. After this, a miscellaneous constraint is created on each table. The constraint details are:
Constraint Type : ?
Search Condition : NULL
Status : DISABLED
Even though this constraint is in a disabled state, it sometimes prevents us from committing the operation.
One way to overcome this would be a SQL script that drops such constraints after the deployment, but we can't do that because of our app dependencies.
Can any one of you please help me to get rid of this?
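For what it's worth, a hedged way to see these objects in the dictionary (this assumes a 10g-or-later dictionary; verify the type codes on your version):

-- Supplemental log groups are recorded as constraints of type 'S'
-- (supplemental logging) in the *_CONSTRAINTS views; some GUI tools
-- render that type as '?'. The groups are also visible in DBA_LOG_GROUPS.
SELECT owner, table_name, constraint_name, status
  FROM dba_constraints
 WHERE constraint_type = 'S';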

OK. With some more investigation I have found the following to be true.
The following commands were run in Oracle SQL Developer:
1) Run an insert/update statement.
2) The change appears immediately in the source table and in v$logmnr_contents, but not in the target table.
3) Run a commit statement, and the change replicates to the target.
Why does the change only appear in the target after the commit statement, but appear in the source and v$logmnr_contents immediately?
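That part is expected behavior: the redo (and therefore v$logmnr_contents) records changes as they are generated, while the apply side only applies committed transactions. If you want LogMiner itself to show only committed work, the way the target sees it, a hedged sketch (it assumes the redo logs have already been registered with DBMS_LOGMNR.ADD_LOGFILE):

BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
             + DBMS_LOGMNR.COMMITTED_DATA_ONLY);  -- suppress uncommitted rows
END;
/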
Thanks

Similar Messages

  • Enabling supplemental logging for many tables

    Hi All,
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE     9.2.0.8.0     Production
    TNS for Solaris: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production
    I have 200 tables where I need to enable supplemental logging.
    ALTER TABLE table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS ==> not working ==> ORA-00905: missing keyword
    So I am enabling supplemental logging manually:
    alter table EMP_PER ADD SUPPLEMENTAL LOG GROUP EMP_PE_SLOG3 (END_ADDR_ORG_ID, LOA_END_DT, LAST_PROMO_DT, LAST_ANNUAL_RVW_DT, HIRE_DT, CURR_SALARY_AMT, CURR_BONUS_TGT_PCT, CURR_AVAIL_UNTIL, COST_PER_HR) ALWAYS;
    But since a few of the tables have more than 400 columns, it is taking a lot of time to break the statement into many groups.
    Can anyone help me with a PL/SQL block to generate the script for all the tables?

    Thanks Cj for your reply.
    I have checked the whole presentation, but the issue is that when we have more than 33 columns we need to create a new log group to fit them in; otherwise we get a maximum-columns-exceeded error.
    So for now I am writing the statements manually for all 200 tables, which is taking more time.
    Could you help me out with a procedure, script, or dynamic SQL that can generate the supplemental-logging statements for many tables? (A sketch follows below.)
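    One possible generator, sketched under these assumptions: it reads user_tables/user_tab_columns for the current schema, chunks columns 32 per group to stay under the 33-column limit mentioned above, and invents a <table>_SLGn naming scheme for the groups (adjust it if truncated table names collide). Spool and review the output before running it; on 9i, dbms_output lines are capped at 255 bytes, which is why each column is printed on its own line.

    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
      l_grp PLS_INTEGER;
    BEGIN
      FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
        l_grp := 0;
        FOR c IN (SELECT column_name,
                         ROW_NUMBER() OVER (ORDER BY column_id) AS rn,
                         COUNT(*) OVER () AS total
                    FROM user_tab_columns
                   WHERE table_name = t.table_name
                   ORDER BY column_id) LOOP
          IF MOD(c.rn - 1, 32) = 0 THEN
            -- start a new log group every 32 columns
            l_grp := l_grp + 1;
            dbms_output.put_line('ALTER TABLE ' || t.table_name);
            dbms_output.put_line('  ADD SUPPLEMENTAL LOG GROUP '
                                 || SUBSTR(t.table_name, 1, 20) || '_SLG' || l_grp || ' (');
            dbms_output.put_line('    ' || c.column_name);
          ELSE
            dbms_output.put_line('  , ' || c.column_name);
          END IF;
          IF MOD(c.rn, 32) = 0 OR c.rn = c.total THEN
            -- close the group at the chunk boundary or at the last column
            dbms_output.put_line('  ) ALWAYS;');
          END IF;
        END LOOP;
      END LOOP;
    END;
    /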

  • What is the overhead of Supplemental Logging?

    We would like to copy data from a 600 GB Oracle database (9i) to a separate database for reporting. We want to use Data Guard in Logical Standby mode. The source database is heavily used and we can't afford a significant increase in system load (e.g. I/O activity) on that system.
    To set up a Logical Standby, we need to put the JD Edwards database into Supplemental Logging mode. I am concerned that this will noticeably increase the load on the source server.
    Has anyone analyzed the additional overhead of Supplemental Logging?
    I have done some testing using Oracle 10.2 (Oracle XE on my computer) which indicates that when I turn on Supplemental Logging, the size of the archive logs grows by 40%. I have not yet tested this on our 9i database.
    Thank you in advance for your help!
    Best Regards,
    Mike
    =================================
    The code below demonstrates the symptoms mentioned above:
    RESULTS - size of archive logs generated:
    - With Supplemental Logging: 120 MB
    - Without: 80 MB
    =================================
    CREATE TABLE "EMP"
    (    "EMPLOYEE_ID" NUMBER(6,0),
         "FIRST_NAME" VARCHAR2(20),
         "LAST_NAME" VARCHAR2(25) NOT NULL ENABLE,
         "EMAIL" VARCHAR2(25) NOT NULL ENABLE,
         "PHONE_NUMBER" VARCHAR2(20),
         "HIRE_DATE" DATE NOT NULL ENABLE,
         "JOB_ID" VARCHAR2(10) NOT NULL ENABLE,
         "SALARY" NUMBER(8,2),
         "COMMISSION_PCT" NUMBER(2,2),
         "MANAGER_ID" NUMBER(6,0),
         "DEPARTMENT_ID" NUMBER(4,0)
    );
    ALTER TABLE emp ADD CONSTRAINT "EMP_EMP_ID_PK" PRIMARY KEY ("EMPLOYEE_ID");
    CREATE TABLE "STAT"
    (    "F1" NUMBER,
         "ID" VARCHAR2(10)
    );
    INSERT INTO stat VALUES (0, 'UPD');  -- seed the row that the test loop updates (assumed)
    The "employee" table is from the Oracle XE samples.
    The procedure below generates transactions to test archive log size.
    To run it, put the database in archive log mode, then switch to a fresh archive log by executing:
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    To flip between running with and without Supplemental Logging, use one of:
    ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
    DECLARE
      i NUMBER;
    BEGIN
      i := 0;
      WHILE i < 1000
      LOOP
        DELETE FROM emp;
        INSERT INTO emp SELECT * FROM employees;
        UPDATE emp SET commission_pct = commission_pct * .5;
        UPDATE stat SET f1 = i WHERE id = 'UPD';
        COMMIT;
        IF MOD(i, 1000) = 0 THEN  -- MOD is a function in PL/SQL, not an operator
          dbms_output.put_line(i);
        END IF;
        i := i + 1;
      END LOOP;
    END;
    /
    /***********************************************/

    Unless the bottleneck of your system is related in any way to the redo log files, I don't see any risk in enabling supplemental logging. A good way to find out is to look at a Statspack report and see which events are the top 5 time-wise.
    Daniel

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
    SQL> alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    -------- --- ---
    IMPLICIT YES NO
    My question is: should I also enable supplemental logging at the table level (for DML replication only)? Should I run the commands below as well?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key.
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns.
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns.
    4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
    If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0). A minimal end-to-end sketch of the sequence follows.
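    This sketch just strings together the commands shown above; schema and table names are placeholders:

    -- 1. Database level, once, via SQL:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SELECT supplemental_log_data_min FROM v$database;  -- expect YES or IMPLICIT
    -- 2. Table level, from GGSCI:
    --    GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    --    GGSCI> ADD SCHEMATRANDATA <schema>           (when using DDL replication)
    --    GGSCI> ADD TRANDATA <schema>.<table-name>    (otherwise, per table)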

  • Supplemental logging

    Hi,
    Presently we copy prod tables into a reporting database using mviews, and they work like a charm. There are a couple of problems with this setup, though: first, it is not feasible when we keep adding/altering 20-30 tables every release, dropping and recreating the mlogs/mviews; second, during the busy season the mview refresh takes a very long time copying changes, as there is just too much data to move around.
    It looks like people are not liking this delay in copying the tables (not often, a couple of times every month), so we are planning to implement a logical standby; we already have a physical standby.
    After reading the Oracle docs, I get the feeling that supplemental logging is going to dump so much redo that it might eventually affect production performance. Can this happen?
    Also, our database has several history tables that don't have either primary keys or unique keys, in which case I will have to enable supplemental logging for ALL columns, at least for those tables.
    Has anyone had a similar situation? If so, does it make sense to go with logical standby or to use Streams instead (of course they both use the same technology)?
    Please, I need some insights into which might work best in my case.
    We are running Oracle 10.2.0.3 on RHEL5.
    Thanks,
    Ramki

    Thanks Justin!
    1) "While generally I'd certainly rather deal with a logical standby than a bunch of materialized views if your reporting database needs to have most or all the tables in the production database..."
    Pretty much we copy all the tables over to reporting with fast-refreshable mviews, and we have a 100m line between prod and reporting which will eventually be moved to a GIG network in the fall, so this will not be a bottleneck any more.
    "...it is not obvious to me why that should substantially reduce the amount of data that needs to be moved around. It really doesn't matter whether the changes are coming via logical change records or MV log entries; it's going to be roughly the same amount of change data flowing over."
    But with a logical standby there is no additional I/O on prod to figure out what has changed/updated/deleted when refreshing the reporting database using the MLOGs. When you have 6 million changed rows sitting in the MLOGs to be copied over to reporting, all the additional I/O required on prod to get those rows over is avoided with a logical standby, as everything (such as looking for LCRs in the shipped redo logs) happens on reporting.
    2) "Supplemental logging does increase redo volume a bit, so if you have an I/O-bound source database, adding additional redo could certainly affect performance. Of course, you're also getting rid of the redo generated by writing to MV logs, so it may be a wash."
    That's a good point: the additional redo generated by supplemental logging will be offset by no redo being generated by writing into the MLOGs.
    "Is there a reason that you have history tables without a primary key? It may be substantially easier to add a primary key that just gets populated via a sequence than to supplementally log every column in the table. Of course, it may only matter if you update history table rows."
    The application only inserts into these history tables and never updates them, and most are not queried within the application; some of them are, but those have primary keys, not all.
    So when we don't have a PK or UK available on these tables, do we need to enable supplemental logging for all columns on those tables, or can we do minimal supplemental logging at the database level plus table-level supplemental logging? (See the sketch after this post.)
    I am still unable to get the whole picture of how supplemental logging works for tables that have no means of identifying rows uniquely. Does it write whole rows to the redo, and if so, does this mean more redo gets generated?
    I am doing a POC right now, but as usual I cannot really replay everything that happens in production on this test database (I wish I was running 11g).
    Thanks,
    Ramki
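    Regarding the keyless history tables discussed above, the usual approach is an unconditional all-column log group per table; a hedged sketch (hist_orders is a hypothetical table name, and this is exactly the operation that generates the extra redo being debated):

    -- 10g syntax; on 9i you must spell out a named log group instead.
    -- Rows with no PK/UK can then still be identified at apply time:
    ALTER TABLE hist_orders ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;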

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    For example, in Notes, I have written three notes; however, if I click on 'All On My Mac' in the sidebar, I see about 10 different versions of each note I make; it saves a version every time I add or delete a sentence.
    I also noticed that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss, but I was wondering: is there a function to clean up journal logs once in a while?
    Thanks
    Roz

    Are you using Microsoft Word? Microsoft thinks its users are idiots; it puts up a lot of pointless messages that annoy and worry users. I have seen this message from Microsoft Word, and it's annoying.
    As BDaqua points out...
    When you copy information via Edit > Copy (command + C) or Edit > Cut (command + X), you place the information on the clipboard. When you paste information, Edit > Paste (command + V), you copy information from the clipboard to your data file.
    If you Edit > Cut (command + X), do not paste the information, and then quit Word, you could be losing information. Microsoft is very worried about this: when you quit Word, it checks whether there is information on the clipboard and, if so, puts out this message.
    You should be saving your work more than once a day. I'd save every 5 minutes; command + S does a save.
    Robert

  • .tmp file is created by NI Report generation toolkit for logging data to MS Excel

    Hi,
    I am getting a ".tmp" file while logging data to MS Excel via the NI Report Generation Toolkit. Instead of the assigned file name, the toolkit generates a junk name with a .tmp extension, but I am able to open this .tmp file with MS Excel.
    Can anyone help to solve this issue?
    Thanks,
    Thirumala

    Hi Thirumala,
    You may experience issues when you try to save a Microsoft Excel file if one or more of the following conditions are true:
    1. You save an Excel file to a network drive where you have restricted permissions.
    2. You save an Excel file to a location that does not have sufficient drive space.
    3. The connection to the Excel file has been lost.
    4. There is a conflict with an antivirus software program.
    5. You save an Excel file that is shared.
    6. The 218-character path limitation has been exceeded when you save an Excel file.
    7. The Transition Formula Evaluation feature is turned on in Excel.
    8. The file was created from a template that contains embedded objects.
    Hope this will help to solve your problem.
    Thanks,
    Sanad MM

  • Data Logging a programmatically created shared variable of cluster or array datatype into citadel DB

    Hi,
    Is there a way to log a programmatically created shared variable of cluster or array datatype into the Citadel DB?
    I have attempted to programmatically create a shared variable of type 'double' and was able to successfully log it into the Citadel DB. In the attachment, please refer to the attached project SV_TC.lvproj and specifically to SV_W.vi for the code that I have used (the W.png file shows the DB in MAX and the shared variable in NI Distributed System Manager).
    However, when I tried the same approach to create a shared variable of type 'array of double', I noticed that traces are not getting created, and hence data logging isn't happening into the Citadel DB. I was, however, able to write and read the shared variable array without issues. The same is true with the cluster datatype. Please refer to SN_NW.vi for the code that I have used; the only difference from SV_W.vi is that I have tried to create a shared variable of type 'array of double'.
    One observation: if I create an 'array of double' or a cluster in a shared variable library using the project explorer and deploy it, I can see these being logged into the Citadel DB.
    Hence I want to understand what could be done to achieve the same programmatically. Could you please advise?
    Regards,
    Sridhar
    Attachments:
    SV_TC.zip ‏3925 KB

    Why is the transaction happening over remote?
    Because we need the information to be on our database immediately when they submit it.
    I don't know anything about streams or replication.....
    The web database is run by the company that hosts our web site. We use PL/SQL to show the pages and output the HTML. It fetches the data the customer requests from our database, but I'm struggling to find the best way to insert into our database from there.

  • Supplemental logging with Oracle 10gR2 Streams and Data Guard

    Hello,
    I have an environment with Oracle DB 10gR2 and a Physical Standby in a Data Guard DR configuration. Right now, this environment is going to be extended to a replication schema using 2-way Oracle Streams Replication (for replication from this branch office to the central office; other branches will be added soon). The primary DB will be replicated to the other primary DB (in the remote central office).
    So, here is my question: is it completely necessary to specify supplemental logging on the source (primary) databases for setting up 2-way Streams Replication? And if it is, can I set supplemental logging on the primaries without affecting their physical standbys, or do I need to do something special?
    Thanks in advance.

    Sorry, it's a repeated post, because of a browser connection problem.

  • SQL Server - Extract Error - OGG-00868  Supplemental logging is disabled

    Hello,
    We are trying to replicate from a SQL Server 2008 database to an Oracle database, but when trying to start the extract process we get the following error message:
    OGG-00868 Supplemental logging is disabled for database 'GoldenGate'. To enable logging, perform the following: 1) Set 'trunc. log on chkpt.' to false. 2) Create a full backup of the database. Please refer to the "Oracle GoldenGate For Windows and UNIX Administration Guide" for details.
    I have read that to enable supplemental logging it is enough to "add trandata table_name", and this has been done; the extract process we are using is the following:
    EXTRACT cap_or4
    SOURCEDB GoldenGate
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    EXTTRAIL c:\GoldenGate\V28983-01-GG-111112-SQLServer-Windows-x64\dirdat\C4
    TABLE GoldenGate.dbo.DES_T1;
    And the 'trunc.log on chkpt' is set to false.
    We don’t know what else to do, or to check... does anyone have any idea?!
    Thank you very much, best regards,
    Araitz.-

    Have you followed the whole installation process as per the guide? It seems you missed something.
    Please follow the steps below.
    Installation & Configuration of Oracle GoldenGate for MS SQL Server:
    Pre-requisites:
    1.Change Data Capture (CDC) must be enabled for Oracle GoldenGate and will be enabled by Oracle GoldenGate by means of the ADD TRANDATA command.
    2.SQL Server source database must be set to use the full recovery model.
    3.Oracle GoldenGate does not support system databases.
    4.After the source database is set to full recovery, a full database backup must be taken.
    5.SQL Server 2008 ODBC/OLE DB: SQL Server Native Client 10.0 driver
    6.Oracle GoldenGate processes can use either Windows Authentication or SQL Server Authentication to connect to a database.
    7.Before installing Oracle GoldenGate on a Windows system, install and configure the Microsoft Visual C++ 2005 SP1 Redistributable Package. Make certain it is the SP1 version of this package, and make certain to get the correct bit version for your server. This package installs runtime components of Visual C++ Libraries. For more information, and to download this package, go to http://www.microsoft.com.
    Privileges:
    1.Required SQL Server privileges for Manager when using Windows authentication
    Extract(source system)
    BUILTIN\Administrators account must be a member of the SQL Server fixed server role System Administrators.
    Account must be a member of the SQL Server fixed server role System Administrators
    Replicat (target system)
    BUILTIN\Administrators account must be at least a member of the db_owner fixed database role of the target database.
    Account must be at least a member of the db_owner fixed database role of the target database.
    2.Required SQL Server privileges for Extract and Replicat when using SQL Server authentication
    Extract - Member of the SQL Server fixed server role System Administrators.
    Replicat - At least a member of the db_owner fixed database role of the target database.
    Downloading Oracle GoldenGate
    Download the appropriate Oracle GoldenGate build to each system that will be part of the Oracle GoldenGate configuration.
    1. Navigate to http://edelivery.oracle.com.
    2. On the Welcome page:
    --Select your language.
    --Click Continue.
    3. On the Export Validation page:
    --Enter your identification information.
    --Accept the Trial License Agreement (even if you have a permanent license).
    --Accept the Export Restrictions.
    --Click Continue.
    4. On the Media Pack Search page:
    --Select the Oracle Fusion Middleware Product Pack.
    --Select the platform on which you will be installing the software.
    --Click Go.
    5. In the Results List:
    --Select the Oracle GoldenGate Media Pack that you want.
    --Click Continue.
    6. On the Download page:
    --Click Download for each component that you want. Follow the automatic download
    process to transfer the mediapack.zip file to your system.
    Installing the Oracle GoldenGate files
    1. Unzip the downloaded file(s) by using WinZip or an equivalent compression product.
    2. Move the files in binary mode to a folder on the drive where you want to install Oracle GoldenGate. Do not install Oracle GoldenGate into a folder that contains spaces in its name, even if the path is in quotes. For example:
    C:\"Oracle GoldenGate" is not valid.
    C:\Oracle_GoldenGate is valid.
    3. From the Oracle GoldenGate folder, run the GGSCI program.
    4. In GGSCI, issue the following command to create the Oracle GoldenGate working directories:
    CREATE SUBDIRS
    a.Create the necessary working directories for GG.
    Source DB:
    GGSCI>create subdirs
    Target DB:
    GGSCI>create subdirs
    Install the GoldenGate Manager process
    1.Create a GLOBALS parameter file
    --Execute the following commands from the <install location>.
    GGSCI> EDIT PARAMS ./GLOBALS
    --In the text editor, type the following:
    MGRSERVNAME <mgr service>
    Using a GLOBALS file in each GoldenGate instance allows you to run multiple Managers as services on Windows. When the service is installed, the Manager name is referenced in GLOBALS, and this name will appear in the Windows Services control panel.
    Note! Check to ensure that the GLOBALS file has been added in the GoldenGate installation directory and that it does not have an extension.
    --Execute the following command to exit GGSCI.
    GGSCI> EXIT
    2. Install the Manager service
    Execute the following command to run GoldenGate's INSTALL.EXE. This executable installs Manager as a Windows service and adds GoldenGate events to the Windows Event Viewer.
    Shell> INSTALL ADDSERVICE ADDEVENTS
    Note: Adding the Manager as a service is an optional step used when there are multiple environments on the same system or when you want to control the name of the Manager for any reason.
    Configuring an ODBC connection
    A DSN stores information about how to connect to a SQL Server database through ODBC (Open Database Connectivity). Create a DSN on each SQL Server source and target system.
    NOTE: Replicat will always use ODBC to query the target database for metadata.
    To create a SQL Server DSN
    1. Run one of the following ODBC clients:
    --If using a 32-bit version of Oracle GoldenGate on a 64-bit system, create the DSN by running the ODBCAD32.EXE client from the %SystemRoot%\SysWOW64 folder.
    --If using a 64-bit version of Oracle GoldenGate on a 64-bit system, create a DSN by running the default ODBCAD32.EXE client in Control Panel > Administrative Tools > Data Sources (ODBC).
    --If using a version of Oracle GoldenGate other than the preceding, use the default ODBC client in Control Panel > Administrative Tools > Data Sources (ODBC).
    2. In the ODBC Data Source Administrator dialog box of the ODBC client, select the System DSN tab, and then click Add.
    3. Under Create New Data Source, select the correct SQL Server driver as follows:
    --SQL Server 2000: SQL Server driver
    --SQL Server 2005: SQL Native Client driver
    --SQL Server 2008: SQL Server Native Client 10.0 driver
    4. Click Finish. The Create a New Data Source to SQL Server wizard is displayed.
    5. Supply the following:
    --Name: Can be of your choosing. In a Windows cluster, use one name across all nodes in the cluster.
    --Server: Select the SQL Server instance name.
    6. Click Next.
    7. For login authentication, select "With Windows NT authentication using the network login ID" for Oracle GoldenGate to use Windows authentication, or select "With SQL Server authentication using a login ID and password entered by the user" for Oracle GoldenGate to use database credentials. Supply login information if selecting SQL Server authentication.
    8. Click Next.
    9. If the default database is not set to the one that Oracle GoldenGate will connect to, click "Change the default database to", and then select the correct name. Set the other settings to use ANSI.
    10. Click Next.
    11. Leave the next page set to the defaults.
    12. Click Finish.
    13. Click Test Data Source to test the connection.
    14. Close the confirmation box and the Create a New Data Source box.
    15. Repeat this procedure from step 1 on each SQL Server source and target system.
    Setting the database to full recovery model
    Oracle GoldenGate requires a SQL Server source database to be set to the full recovery model.
    To verify or set the recovery model
    1. Connect to the SQL Server instance with either Enterprise Manager for SQL Server 2000 or SQL Server Management Studio for SQL Server 2005 and 2008.
    2. Expand the Databases folder.
    3. Right-click the source database, and then select Properties.
    4. Select the Options tab.
    5. Under Recovery, set Model to Full if not already.
    6. If the database was in Simple recovery or never had a full database backup, take a full database backup before starting Extract.
    7. Click OK.
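    Equivalently, a hedged T-SQL sketch of steps 5 and 6 above (the database name GoldenGate and the backup path are assumptions based on this thread):

    ALTER DATABASE [GoldenGate] SET RECOVERY FULL;
    -- take the required full backup before starting Extract:
    BACKUP DATABASE [GoldenGate] TO DISK = N'C:\backup\GoldenGate_full.bak';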
    Enabling supplemental logging
    These instructions apply to new installations of Oracle GoldenGate for all supported SQL Server versions. You will enable supplemental logging with the ADD TRANDATA command so that Extract can capture the information that is required to reconstruct SQL operations on the target. This is more information than what SQL Server logs by default.
    --SQL Server 2005 updated to CU6 for SP2 or later: ADD TRANDATA calls the sys.sp_extended_logging stored procedure.
    --SQL Server 2005 pre-CU6 for SP2: ADD TRANDATA creates the following:
    A replication publication named [<source database name>]: GoldenGate <source database name> Publisher. To view this publication, look under Replication > Local Publications in SQL Server Management Studio. This procedure adds the specified table to the publication as an article.
    A SQL Server Log Reader Agent job for the publication. This job cannot run concurrently with an Extract process in this configuration.
    --SQL Server 2008: ADD TRANDATA enables Change Data Capture (CDC) and creates a minimal Change Data Capture on the specified table.
    a. Oracle GoldenGate does not use the CDC tables other than as necessary to enable supplemental logging.
    b. As part of enabling CDC, SQL Server creates two jobs per database: <dbname>_capture and <dbname>_cleanup. The <dbname>_capture job adjusts the secondary truncation point and captures data from the log to store in the CDC tables. The <dbname>_cleanup job ages and deletes data captured by CDC.
    c. Using the TRANLOGOPTIONS parameter with the MANAGESECONDARYTRUNCATIONPOINT option for Extract removes the <dbname>_capture job, preventing the overhead of that job loading the CDC tables.
    d. The alternative (using TRANLOGOPTIONS with NOMANAGESECONDARYTRUNCATIONPOINT) requires the SQL Server Agent to be running and requires the <dbname>_capture and <dbname>_cleanup jobs to be retained. You will probably need to adjust the <dbname>_cleanup data retention period if the default of three days is not acceptable for storage reasons.
    To enable supplemental logging
    1. On the source system, run GGSCI.
    2. Log into the database from GGSCI.
    DBLOGIN SOURCEDB <DSN>[, USERID <user>, PASSWORD <password>]
    Where:
    -- SOURCEDB <DSN> is the name of the SQL Server data source.
    -- USERID <user> is the Extract login and PASSWORD <password> is the password that is required if Extract uses SQL Server authentication.
    3. In GGSCI, issue the following command for each table that is, or will be, in the Extract configuration. You can use a wildcard to specify multiple table names, but not owner names.
    ADD TRANDATA <owner>.<table>
    NOTE: The Log Reader Agent job cannot run concurrently with the GoldenGate Extract process.
    4.Configuration
    a.Create and start manager on the source and the destination.
    Source DB:
    shell>ggsci
    GGSCI> edit params mgr
    PORT 7809
    DYNAMICPORTLIST 7900-7950
    DYNAMICPORTREASSIGNDELAY 5
    AUTOSTART ER *
    AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
    LAGCRITICALMINUTES 60
    LAGREPORTMINUTES 30
    PURGEOLDEXTRACTS c:\ogg\dirdat\T*, USECHECKPOINTS, MINKEEPFILES 10
    GGSCI> start manager
    GGSCI>info all
    b. Create the extract group on the source side:
    GGSCI> edit params EXT1
    Add the following lines to the new parameter file
    EXTRACT EXT1
    SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    EXTTRAIL c:\ogg\dirdat\T1
    DISCARDFILE c:\ogg\dirrpt\EXT1.DSC, PURGE, MEGABYTES 100
    TABLE dbo.TCUSTMER;
    TABLE dbo.TCUSTORD;
    GGSCI>ADD EXTRACT EXT1, TRANLOG, BEGIN NOW
    GGSCI>ADD EXTTRAIL c:\ogg\dirdat\T1, EXTRACT EXT1, MEGABYTES 100
    GGSCI> edit params PMP1
    Add the following lines to the new parameter file
    EXTRACT PMP1
    SOURCEDB <DSN>, USERID ogg, PASSWORD ogg@321!
    PASSTHRU
    RMTHOST dr, MGRPORT 7810
    RMTTRAIL c:\ogg\dirdat\P1
    TABLE dbo.TCUSTMER;
    TABLE dbo.TCUSTORD;
    GGSCI> ADD EXTRACT PMP1, EXTTRAILSOURCE c:\ogg\dirdat\T1
    GGSCI> ADD EXTTRAIL c:\ogg\dirdat\P1, EXTRACT PMP1, MEGABYTES 100
    Target DB:
    ===========
    shell>ggsci
    GGSCI> edit params mgr
    PORT 7810
    AUTOSTART ER *
    AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 30
    LAGCRITICALMINUTES 60
    LAGREPORTMINUTES 30
    PURGEOLDEXTRACTS c:\ogg\dirdat\P*, USECHECKPOINTS, MINKEEPFILES 10
    GGSCI> start manager
    GGSCI>info all
    Create parameter file for replicat:
    GGSCI> edit params REP1
    REPLICAT REP1
    ASSUMETARGETDEFS
    TARGETDB <dsn>, USERID ogg@DR, PASSWORD ogg@321!
    DISCARDFILE c:\ogg\dirrpt\REP1.DSC, append, megabytes 100
    HANDLECOLLISIONS
    MAP dbo.TCUSTMER, TARGET dbo.TCUSTMER;
    MAP dbo.TCUSTORD, TARGET dbo.TCUSTORD;
    GGSCI>ADD REPLICAT REP1, RMTTRAIL c:\ogg\dirdat\P1, nodbcheckpoint
    # Start extract and replicat:
    Source:
    GGSCI> start er *
    Destination:
    GGSCI> start er *
    Greetings,
    N K

  • Why multiple log files are created when using transactions in Berkeley DB

    We are using the Berkeley DB Java Edition Base API. We have already read/written a CDR file of 9 lakh (900,000) rows, both with transactions and without transactions, implementing the secondary database concept. The issues we are seeing are as follows:
    With transactions: the size of the database environment is 1.63 GB, which is due to the number of log files created, each of 10 MB.
    Without transactions: the size of the database environment is 588 MB, and here only one log file is created, which is 10 MB. We want to understand the concrete reason for this difference.
    How are log files created, what does it mean to use or not use transactions in a DB environment, and what are these db files (__db.001, __db.002, __db.003, __db.004, __db.005) and log files like log.0000000001? Please reply soon.

    "we are using berkeleydb java edition db base api"
    If you are seeing __db.NNN files in your environment root directory, these are the environment's shared region files. And since you see these, you are using Berkeley DB Core (with the Java/JNI Base API), not Berkeley DB Java Edition.
    "with transaction ... without transaction ..."
    First of all, do you need transactions or not? Review the documentation section called "Why transactions?" in the Berkeley DB Programmer's Reference Guide.
    "without transaction: size of database environment 588mb and here only one log file is created which is of 10mb"
    There should be no logs created when transactions are not used. That single log file has likely remained there from a previous transactional run.
    "how log files are created ... and what are these db files ... and log files like log.0000000001"
    Have you reviewed the basic documentation references for Berkeley DB Core?
    - Berkeley DB Programmer's Reference Guide, in particular the sections: The Berkeley DB products, Shared memory regions, Chapter 11 (Berkeley DB Transactional Data Store Applications), and Chapter 17 (The Logging Subsystem).
    - Getting Started with Berkeley DB (Java API Guide) and Getting Started with Berkeley DB Transaction Processing (Java API Guide).
    If you had, you would have the answers to these questions: the __db.NNN files are the environment shared region files needed by the environment's subsystems (transaction, locking, logging, memory pool buffer, mutexes), and the log.MMMMMMMMMM files are the log files needed for recoverability, created when running with transactions.
    --Andrei

  • Best way to log data

    I have an application that receives data from a source, and I would
    like to create and maintain a log file with this data. Small bits of
    data will be received at least every minute, and often once every
    second. The user can also view the logged data at any time.
    Lastly, each entry is timestamped, and I would like to be able to
    remove log entries that are x hour/days old.
    What I would like advice on is: what is the best way to accomplish this?
    Clearly a bad way to do it is, every time I get data, to read the log in,
    remove old entries, rewrite all the remaining entries, and finally
    append the new entry. This uses way too many file operations.
    I have briefly looked at the java.util.logging stuff, but it appears that
    its purpose is more for code debugging and the like. It also doesn't
    appear that there are methods to read the logs back. So, this does not
    look like an "easy way out".
    I know how to append to a file, but I am not sure if I should leave
    the file open during execution of the program and simply flush
    the stream/buffer, or open and close the file every time I write.
    The latter is clearly more expensive, but safer.
    I don't know how to delete data from the beginning of a file.
    To summarize, what is the best way to read and write data to
    a file such that:
    1) the data is appended to the end of the file (easy)
    2) the data is removed from the beginning of the file
    3) the data that is written is not lost if the program crashes/etc.
    4) the whole logging business is not too expensive
    Any suggestions?
    Thanks,
    Chuck.

    Here's what we did at my last job:
    Incoming messages are logged to disk via a FileOutputStream (maybe with a BufferedOutputStream encasing that, I don't remember); this output stream is not closed until the applet is stopped. System.out is also reassigned to this FileOutputStream to capture all messages.
    At the same time, there is an InputStream reading from that file. As data is read, we stick the text into a TextArea.
    When the TextArea size hits some certain amount (maybe 20,000 characters) we'd remove maybe 5,000 from the top. The log file stays the same, though.
    When the day rolls over, we switch log files.
    Your questions:
    1) the data is appended to the end of the file (easy)
    Yes. But if you're not using streams, you should check that out.
    2) the data is removed from the beginning of the file
    That's pretty easy. Open the file for reading, open a new file for writing. Read the first n bytes but don't write them; then write the rest.
    3) the data that is written is not lost if the program crashes/etc.
    Data should be fine if it's logged to disk.
    4) the whole logging business is not too expensive
    Saving to disk isn't very expensive at all. In fact, normally writing to disk is slow enough that the data just gets put into a buffer somewhere to be sent to the disk while the processor goes on with the program.

  • Logging Data from USB 6000 without Labview

    Greetings all.
    I am sure this question is buried somewhere, so I apologize if I am repeating a question that has already been posed and addressed. If this is a redundant question, simply pointing me to the relevant post will help.
    I've got a USB-6000. I've got it set up and working just fine in a MAX test panel. Is there any way to log acquired data to any sort of file without LabVIEW, Measurement Studio, SignalExpress, or LabWindows/CVI? In a similar vein, I'd like to acquire and log data without a programming environment like .NET or C++. Maybe there is a way to create a task and call it from an Excel macro?
    I know I've dug my own grave by getting the bottom-of-the-line USB DAQ. However, even with the USB-6000, there must be a way to log the data being acquired without making additional investments in development software. I'm not trying to do anything fancy like triggering or data manipulation; I'm just looking to log measured voltages to a file and do post-processing in Excel.
    Again, I am sure this (or at least a similar) question has been posed before. Any help would be appreciated.
    Thanks,
    Ben

    I think SignalExpress is what you really want to use here. It was designed for very high-level, simple measurement acquisition.
    In MAX, you can set up the task to log to a TDMS file. With that enabled, you should be able to just let SignalExpress run your task, tell it to stop whenever you want, and the data will be logged.

  • Archive log gap is created is standby when ever audit trail is set to DB

    Hi
    I am a new DBA. I am facing a problem on the production server: whenever the audit_trail parameter is set to db, an archive log gap is created at the standby site.
    My database version is 10.2.0.4.
    The OS is Windows 2003 R2.
    The audit_trail parameter is set to db only on the primary site. After setting the parameter to db, I bounced the database and switched the logfile, and an archive log gap was created on the standby. I am using the LGWR mode of log transport.
    Is there any relation between audit_trail and log transport?
    Please note that the archive log locations on both sites have sufficient disk space and the drives are working fine. Also, my primary and standby communicate over a WAN.
    Please help me with this; any help will be highly appreciated.
    Here is a trace file which may be helpful:
    Dump file d:\oracle\admin\sbiofac\bdump\sbiofac_lns1_6480.trc
    Tue Jun 05 13:46:02 2012
    ORACLE V10.2.0.4.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:16504M/18420M, Ph+PgF:41103M/45775M, VA:311M/2047M
    Instance name: sbiofac
    Redo thread mounted by this instance: 1
    Oracle process number: 21
    Windows thread id: 6480, image: ORACLE.EXE (LNS1)
    *** SERVICE NAME:() 2012-06-05 13:46:02.703
    *** SESSION ID:(534.1) 2012-06-05 13:46:02.703
    *** 2012-06-05 13:46:02.703 58902 kcrr.c
    LNS1: initializing for LGWR communication
    LNS1: connecting to KSR channel
    Success
    LNS1: subscribing to KSR channel
    Success
    *** 2012-06-05 13:46:02.750 58955 kcrr.c
    LNS1: initialized successfully ASYNC=1
    Destination is specified with ASYNC=61440
    *** 2012-06-05 13:46:02.875 73045 kcrr.c
    Sending online log thread 1 seq 2217 [logfile 1] to standby
    Redo shipping client performing standby login
    *** 2012-06-05 13:46:03.656 66535 kcrr.c
    Logged on to standby successfully
    Client logon and security negotiation successful!
    Archiving to destination sbiofacdr ASYNC blocks=20480
    Allocate ASYNC blocks: Previous blocks=0 New blocks=20480
    Log file opened [logno 1]
    *** 2012-06-05 13:46:44.046
    Error 272 writing standby archive log file at host 'sbiofacdr'
    ORA-00272: error writing archive log
    *** 2012-06-05 13:46:44.078 62692 kcrr.c
    LGWR: I/O error 272 archiving log 1 to 'sbiofacdr'
    *** 2012-06-05 13:46:44.078 60970 kcrr.c
    kcrrfail: dest:2 err:272 force:0 blast:1
    *** 2012-06-05 13:47:37.031
    *** 2012-06-05 13:47:37.031 73045 kcrr.c
    Sending online log thread 1 seq 2218 [logfile 2] to standby
    *** 2012-06-05 13:47:37.046 73221 kcrr.c
    Shutting down [due to no more ASYNC destination]
    Redo Push Server: Freeing ASYNC PGA buffer
    LNS1: Doing a channel reset for next time around...

    OK
    Great details, thanks!!
    Are the SDU/TDU settings configured in the Oracle Net files on both primary and standby? I will see if I have an example.
    The parameters appear fine.
    There was an Oracle document, 386417.1, on this; I have not double-checked whether it's still available. (CHECK - Oracle 9, but worth a glance.)
    Will check and post here.
    I have these listed too (will check all three and see if they still exist):
    When to modify, when not to modify the Session Data Unit (SDU) [ID 99715.1] (CHECK - still there, but very old)
    SQL*Net Packet Sizes (SDU & TDU Parameters) [ID 44694.1] (CHECK - best by far, WOULD REVIEW FIRST)
    Any chance your firewall limit the Packet size?
    Best Regards
    mseberg
    Additional document
    The relation between MTU (Maximum Transmission Unit) and SDU (Session Data Unit) [ID 274483.1]
    Not sure if this helps, but I played around with this on Oracle 11 a little; here is that example:
    # listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/network/admin/listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = yourdomain.com)(PORT = 1521))
        )
      )
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = STANDBY)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
          (SDU = 32767)
          (GLOBAL_DBNAME = STANDBY_DGMGRL.yourdomain.com)
        )
      )
    ADR_BASE_LISTENER = /u01/app/oracle
    INBOUND_CONNECT_TIMEOUT_LISTENER = 120
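    On the client side, the SDU is typically matched in tnsnames.ora as well; a sketch under the same hypothetical host and service names:

    STANDBY =
      (DESCRIPTION =
        (SDU = 32767)
        (ADDRESS = (PROTOCOL = TCP)(HOST = yourdomain.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = STANDBY)
        )
      )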
    Also of interest, on redo transport in 10gR2:
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf

  • Please Help - Avoid default name when creating constraints

    Here I am creating two very simple tables:
    create table supplier
    (
        id number not null,
        constraint pk_supplier primary key (id)
    );
    create table product
    (
        id number not null,
        supplier_id number not null,
        constraint pk_product primary key (id),
        constraint fk_product_supplier_id
          foreign key (supplier_id) references supplier (id)
    );
    Afterwards, I view the table constraint information using:
    select table_name, constraint_name
    from user_cons_columns
    where lower(table_name) = 'product';
    I get result:
    TABLE_NAME                     CONSTRAINT_NAME
    PRODUCT                        SYS_C005441
    PRODUCT                        SYS_C005442
    PRODUCT                        PK_PRODUCT
    PRODUCT                        FK_PRODUCT_SUPPLIER_ID
    What is the "SYS_C005441" and "SYS_C005442" for? Can I avoid creating them?
    Thanks
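    For reference: SYS_C005441 and SYS_C005442 are the system-generated NOT NULL check constraints on ID and SUPPLIER_ID. You can confirm this from the dictionary, and give them explicit names instead of letting Oracle generate them; a sketch:

    -- constraint_type: C = check (including NOT NULL), P = primary key, R = foreign key
    select constraint_name, constraint_type, search_condition
    from user_constraints
    where table_name = 'PRODUCT';
    -- To avoid the SYS_C names, declare the NOT NULL constraints inline with
    -- explicit (hypothetical) names, e.g.:
    --   supplier_id number constraint nn_product_supplier_id not null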

    Justin Cave wrote:
    "Are you suggesting not explicitly naming all your constraints? Or just the NOT NULL ones?"
    No, not at all. I did not even think that far when I responded; a bit of foot-in-mouth, as I only thought of NOT NULL constraints. Blame it on a lack of coffee. :-)
    I usually name primary key, foreign key, and check constraints, but not NOT NULL constraints; too much of an effort to name them, IMO.
    But you do raise an interesting topic. Why bother naming some constraints (e.g. PK and FK) and not others (e.g. NOT NULL)? So, playing devil's advocate, why should we be naming constraints at all?
    "It sounds like you're suggesting that all constraints should get system default names."
    Well, one can argue that the effort of naming other constraints is likewise not worth it; why have NOT NULL constraints be the exception to implicit naming? I would think that the issue I raised applies to all constraints.
    You have multiple foreign key constraints for INVOICE_ID; how do you name these? I would like both the table and column names, as this provides the meaning needed when seeing the constraint being violated. But with only 30 characters, that is a problem. So invariably one needs to start abbreviating table and column names in order to make a meaningful constraint name. Been there many times; I disliked the constraint name I came up with every time, as it was a compromise and not the actual naming format I would have preferred.
    "If that's what you're suggesting, doesn't that create problems for you when different environments have different constraint names? I know personally that I'd much rather have named constraints so that when there are constraint violation errors in production, I can start troubleshooting the problem in my local environment without first determining how to map the production constraint name to my local database's constraint name."
    I understand your point, and yes, it can be an issue. But by the same token, why should an application user ever see a default Oracle exception? With the exception of system-type exceptions (e.g. no more tablespace space, end-of-file on communication channel), all other exceptions should be custom application exceptions.
    Showing, for example, an "ORA-00001: unique constraint (xxx.xxxxx) violated" is wrong IMO. The app code should trap that exception and raise a meaningful and unique application exception instead. The user seeing anything other than a custom app exception should itself be an exception.
    "And I'd much rather be able to put the constraint name in a script that is to be promoted through the environments rather than coming up with a process that uses dynamic SQL to figure out the name of the constraint I want to do something to."
    Well, I would not rely on a constraint name itself to determine what it is. Just because it says fk_invoiceid_invoices does not mean that it is a foreign key on column invoice_id referencing the invoices table. The safe and proper thing to do is to query the data dictionary and confirm what that constraint actually is.
    "I suppose if you know that all the lower environments are very recent clones of production rather than running all the scripts in source control that these problems may not be particularly large. But I'm curious if you have some better approach to handling them (or if I'm completely misinterpreting what you're suggesting)."
    Not sure if you recall a feature discussion with Tom and others (was it on AskTom?) where someone raised this core issue and suggested it be addressed by allowing one to define an exception message with the definition of the constraint. I cannot recall the exact syntax proposed, but I do remember thinking that it was not a shabby idea, as it solves the problem of having to invent a meaningful name using only 30 characters. It also removes the need for trapping that constraint violation in all app code that may cause the exception and raising a custom, meaningful app exception instead. Not to mention that plain, direct SQL access will show the same exception message.
    Perhaps in Oracle 12c? I assume the "c" as it seems that The Next Big Thing is cloud computing, and surely Oracle 12 will somehow try to exploit that buzzword, as it has with the i (Internet) and g (Grid) versions. ;-)
