GoldenGate Extracts read slowly during Table Data Archiving and Index Rebuilding operations.

We have configured OGG on a near-DR server. The Extracts are configured to work in ALO (Archived Log Only) mode.
During the day, the Extracts work as expected and stay in sync. But during any daily maintenance task, the Extracts start lagging and read the same archives very slowly.
This usually happens during table data archiving (DELETE from prod tables, INSERT into history tables) and during index rebuilds on those tables.
Points to be noted:
1) The tables on which archiving is done and whose indexes are rebuilt are not captured by the GoldenGate Extracts.
2) The Extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the Extracts. DDL extraction is not configured.
3) There is no connection to the PROD or DR database.
4) The system functions normally all the time; it starts lagging only during table data archiving and index rebuilds.
Q 1. As mentioned above, the tables are not part of the capture, yet the Extracts lag. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation; how does it still induce lag into the system?
Q 3. We have been trying to find a way to overcome this lag, which ideally shouldn't have arisen. Is there any Extract parameter or workaround for this situation?

Hi Nick.W,
The amount of redo generated is huge: approximately 200-250 GB in 45-60 minutes.
I agree that the Extract has to parse the extra object IDs. During the day there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated per hour.
The reason for mentioning this: while reading the daytime archives, the Extract would also be parsing extra object IDs, since we capture data for only 3 tables. So the effect of parsing extra object IDs should have been visible during the day as well; the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The Extract slows down and reads at half the speed: if it normally takes 45-50 seconds to read an archive log from a normal day's activity, it takes approximately 90-100 seconds to read the archives generated by the maintenance activities mentioned above.
Regarding the 3rd point:
a. The Extract is a classic Extract; the archived logs are on a local file system. No ASM, no SAN/NAS.
b. We have added the "TRANLOGOPTIONS BUFSIZE" parameter to our Extract. We'll update as soon as we see any kind of improvement.

Similar Messages

  • Data archival and purging for OLTP database

    Hi All,
    Need your suggestion regarding a data archival and purging solution for an OLTP db.
    Currently we are planning to generate flat files from the tables before purging the inactive data, move them to tapes/disks for archiving, and then purge the data from the system. We have many retention requirements and conditions before archival of data, so partitioning alone is not sufficient.
    Is there any better approach for archival and purging other than this flat file approach?
    thank you.
    regards,
    vara

    user11261773 wrote:
    Is there any better approach for archival and purging other than this flat file approach?
    FBDA (Flashback Data Archive) is the better option. Check the link below:
    http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive
    Good luck
    --neeraj
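    For reference, a minimal sketch of how FBDA is typically enabled (the tablespace, quota, retention, and table names below are placeholders):

    -- create a flashback archive with a retention window
    CREATE FLASHBACK ARCHIVE fda_5yr TABLESPACE fda_ts QUOTA 10G RETENTION 5 YEAR;
    -- point a table at it; old row versions are then tracked automatically
    ALTER TABLE app.orders FLASHBACK ARCHIVE fda_5yr;
    -- query historical data later with a flashback query
    SELECT * FROM app.orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' DAY);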

  • Procedure for creating transparent table, data element and domain

    Hi,
    Can anybody let me know the procedure for creating transparent table, data element and domain.
    Thanks,
    Mahathi

    Hi
    Database table and its components
    A database table is the central data structure of the ABAP/4 data dictionary.
    The structures of application development objects are mapped in tables on the underlying relational database.
    The attributes of these objects correspond to fields of the table.
    A table consists of columns (fields) and rows (entries). It has a name and different attributes, such as delivery class and maintenance authorization.
    A field has a unique name and attributes; for example it can be a key field.
    A table has one or more key fields, called the primary key.
    The values of these key fields uniquely identify a table entry.
    You must specify a reference table for fields containing a currency (data type CURR) or quantity (data type QUAN). It must contain a field (reference field) with the format for currency keys (data type CUKY) or the format for units (data type UNIT). The field is only assigned to the reference field at program runtime.
    The basic objects for defining data in the ABAP Dictionary are tables, data elements and domains. The domain is used for the technical definition of a table field (for example field type and length) and the data element is used for the semantic definition (for example short description).
    A domain describes the value range of a field. It is defined by its data type and length. The value range can be limited by specifying fixed values.
    A data element describes the meaning of a domain in a certain business context. It contains primarily the field help (F1 documentation) and the field labels in the screen.
    A field is not an independent object. It is table-dependent and can only be maintained within a table.
    You can enter the data type and number of places directly for a field. No data element is required in this case. Instead the data type and number of places is defined by specifying a direct type.
    The data type attributes of a data element can also be defined by specifying a built-in type, where the data type and number of places is entered directly.
    Two-Level Domain Example
    A domain defines a field technically and therefore may be used at different business levels.
    A data element describes the meaning of a domain in a certain business context.
    A domain, however, is used for the technical definition of a table field (for example field type and length).
    Therefore, although a take-off airport (data element S_FROMAIRP) would have a different business meaning from an airport where a plane lands (data element S_TOAIRP), they could still have the same domain (here S_AIRPID), because technically we could assign the same number of characters whether the airport is a take-off or a landing airport.
    Definitions of Tables in the Database
    In SAP R/3, tables are defined as:
    A) Transparent tables: all of the fields of a dictionary table correspond to a field in the real database table.
    B) Pooled tables: different tables which are not linked to each other with a common key are combined into a TABLE POOL. Several logical tables thus exist as a single real database table.
    C) Cluster tables: several tables linked by a common key may sometimes be combined by the data dictionary and made to exist on the database schema as a single table.
    SAP is evolving R/3 tables into transparent tables.
    Elaboration on Each of the Definitions
    A transparent table is automatically created on the database when it is activated in the ABAP Dictionary. At this time the database-independent description of the table in the ABAP Dictionary is translated into the language of the database system used.
    The database table has the same name as the table in the ABAP Dictionary. The fields also have the same name in both the database and the ABAP Dictionary. The data types in the ABAP Dictionary are converted to the corresponding data types of the database system.
    The order of the fields in the ABAP Dictionary can differ from the order of the fields on the database. This permits you to insert new fields without having to convert the table. When a new field is added, the adjustment is made by changing the database catalog (ALTER TABLE). The new field is added to the database table, whatever the position of the new field in the ABAP Dictionary.
    Tables can also reside on the database as Pooled tables or cluster tables
    Pooled Tables: Different tables which are not linked to each other with a common key can be combined into a Table Pool. The tables contained within this pool are called pooled tables. A table pool is stored in the database as a simple table. The table's data sets contain, in separate fields, the actual key for the data set to be stored, the name of the pooled table, and the contents of the data set to be stored.
    Using this schema, several logical tables are combined into a single real database table. Although the data structure of each set is lost during the write to the table pool, it is restored during the read by the ABAP/4 Data Dictionary. The ABAP/4 Data Dictionary utilizes its meta-data to accomplish this.
    Since information must be prepared (defined) within the ABAP/4 Data Dictionary when it is read or written (or otherwise accessed), this process itself defines these as non-transparent tables.
    Cluster Tables: Occasionally, several tables may be linked by a common key. The ABAP/4 Data Dictionary can also combine these tables into a single table. Each data set of the real table within the database contains a key and in a single data field, several data sets of the subsequent table for this key.
    As mentioned above, these table types require special data handling, therefore they are not transparent tables.
    Technical Settings in the Dictionary
    The data class logically defines the physical area of the database (for ORACLE the table space) in which your table should be created. If you choose the data class correctly, the table will automatically be created in the appropriate area on the database when it is activated in the ABAP Dictionary.
    The most important data classes are master data, transaction data, organizational data and system data.
    Master data is data that is rarely modified. An example of master data is the data of an address file, for example the name, address and telephone number.
    Transaction data is data that is frequently modified. An example is the material stock of a warehouse, which can change after each purchase order.
    Organizational data is data that is defined during customizing when the system is installed and that is rarely modified thereafter. The country keys are an example.
    System data is data that the R/3 System itself needs. The program sources are an example.
    Further data classes, called customer data classes (USER, USER1), are provided for customers. These should be used for customer developments. Special storage areas must be allocated in the database.
    The size category describes the expected storage requirements for the table on the database.
    An initial extent is reserved when a table is created on the database. The size of the initial extent is identical for all size categories. If the table needs more space for data at a later time, extents are added. These additional extents have a fixed size that is determined by the size category specified in the ABAP Dictionary.
    You can choose a size category from 0 to 4. A fixed extent size, which depends on the database system used, is assigned to each category.
    Correctly assigning a size category therefore ensures that you do not create a large number of small extents. It also prevents storage space from being wasted when creating extents that are too large.
    Modifications to the entries of a table can be recorded and stored using logging.
    To activate logging, the corresponding field must be selected in the technical settings. Logging, however, will only take place if the R/3 System was started with a profile containing the parameter 'rec/client'. Selecting the flag in the ABAP Dictionary alone is not sufficient to trigger logging.
    Parameter 'rec/client' can have the following settings:
    rec/client = ALL All clients should be logged.
    rec/client = 000[...] Only the specified clients should be logged.
    rec/client = OFF Logging is not enabled on this system.
    The data modifications are logged independently of the update. The logs can be displayed with the Transaction Table History (SCU3).
    Logging creates a 'bottleneck' in the system: there is additional write access for each modification to tables being logged.
    This can result in lock situations even though the users are accessing different application tables!
    Create Transparent Table
    Go to transaction SE11. Enter the name of the table you want to create (beginning with Y or Z) and click on the Create pushbutton.
    Enter the delivery class and the table maintenance criteria.
    The delivery class controls the transport of table data when installing or upgrading, in a client copy, and when transporting between customer systems.
    The display/maintenance indicator specifies whether it is possible to display/maintain a table/view using the maintenance tools Data Browser (transaction SE16) and table view maintenance (transactions SM30 and SM31).
    Enter the name of the table field and the data element. The system automatically populates the technical details for existing data elements.
    As far as possible, it is advisable to use existing data elements that fit the business requirements.
    However, we may create data elements if need be, as described below.
    To create a data element, simply double-click on it. Alternatively, create a data element by choosing the Data type radio button on the SE11 initial screen.
    Create Data Element
    The system prompts you to create a new data element. Choose the Yes pushbutton.
    Under the Data type tab, enter the domain name, which determines the technical characteristics of the field.
    Further characteristics tab: allows you to specify a search help assigned to the data element. It also allows you to specify a parameter ID, which helps you populate a field from SAP memory.
    Field label: can be assigned as prefixed text to a screen field referring to the ABAP Dictionary. The text is displayed on the screen in the logon language of the user (if the text was translated into this language).
    Create Domain
    If the domain does not exist in the data dictionary, the system prompts you to create one.
    Give the technical characteristics under the Definition tab. The value range allows you to restrict values at domain level.
    Value range tab: as explained in the section Consistency through input checks, one can restrict the possible values for a field at domain level by either entering fixed values or by specifying a value table under the Value range tab.
    Currency/Quantity Fields in a Table
    A currency or quantity field must be assigned a reference field from a reference table containing the applicable quantity unit or currency.
    A field containing currency amounts (data type CURR) must be assigned a reference field including the currency key (data type CUKY).
    A field containing quantity specifications (data type QUAN) must be assigned a reference field including the associated quantity unit (data type UNIT).
    Create Transparent Table (continued)
    Maintain the technical settings of the table by clicking on the Technical settings tab.

  • What is data archiving and DMS(Data Management System) in SAP

    what is data archiving and DMS(Data Management System) in SAP
    Welcome to SCN. Before posting questions please search for available information here and in the web. Please also read the Rules of Engagement before further posting.

    Hi,
    Filtering at the IDoc level: identify the filter object (BD59), then modify the distribution model.
    Segment filtering: specify the segments to be filtered (BD56).
    The reduced IDoc type: analyze the data, then reduce the IDoc type (BD53).
    Thanks and regards.

  • Data archiving and data cleansing

    hi experts,
    Can anyone give me a step-by-step guide for data archiving and data cleansing of SAP IS-U objects?
    What is the difference between data archiving and data cleansing?
    Thanx & Rgds

    Data archiving: there are many archiving objects; you can look at some of them below.
    ISU_BBP IS-U Archiving: Budget Billing Plan
    ISU_BCONT Business Partner Contacts (Contract A/R + A/P)
    ISU_BILL IS-U Archiving: Billing Document Header
    ISU_BILLZ IS-U Archiving: Billing Line Item
    ISU_EABL IS-U Archiving: Meter Reading Results
    ISU_EORDER IS-U Archiving: Waste Disposal Order
    ISU_EUFASS Archiving of Usage Factors
    ISU_FACTS Installation Facts
    ISU_INSPEC IS-U Archiving: Campaigns for Inspection List
    ISU_PPM Prepayment Documents
    ISU_PRDOCH IS-U Archiving: Print Document Header
    ISU_PRDOCL IS-U Archiving: Print Document Line Item
    ISU_PROFV IS-U Archiving: EDM Profile Values
    ISU_ROUTE IS-U Archiving: Waste Disposal Route
    ISU_SETTLB Settlement Document
    ISU_SWTDOC Archive Switch Document
    Go to the SARA t-code and enter the object CA_BUPA (business partner), then press F6; you will get step-by-step documentation. Please follow the same procedure for all the objects.
    Regards,
    Siva

  • What should i do to implement Data Archiving and Data Reporting

    We want to implement data archiving and data reporting in our product. Can someone tell me what techniques or approaches people generally take to implement data archiving and data reporting?
    I am currently looking into data warehousing. Is this the right approach? I have no idea where I should start on this. Can someone give me a good direction as a starting point?
    thank you,
    Puja

    Did you set up Find My Mac before it was stolen?

  • Importing data tables into data tablespace and indexes into tablespaces

    Hi
    I want to import data into a new schema, storing the tables in a data tablespace and the indexes in an index tablespace. Can anyone tell me how this is possible?

    imp userid=/user/passwd show=y indexfile=import.sql indexes=n full=y
    imp userid=/user/passwd show=y indexfile=import2.sql full=y
    Edit import.sql and import2.sql to modify the tables' tablespace and the indexes' tablespace.
    Execute the import.sql script in the database; this will create the tables in their respective tablespaces.
    imp userid=/user/passwd full=y ignore=y indexes=n constraints=y (imports just the data, since the tables have already been created)
    imp userid=/user/passwd full=y ignore=y rows=n (imports just the indexes, since the tables and data have already been imported)
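    To illustrate the edit step, here is a hypothetical fragment of the generated DDL scripts before and after the tablespace change (all object and tablespace names are made up):

    REM  CREATE TABLE "SCOTT"."EMP" ("EMPNO" NUMBER(4), "ENAME" VARCHAR2(10))
    REM  TABLESPACE "USERS" ;
    -- after uncommenting and retargeting the data tablespace:
    CREATE TABLE "SCOTT"."EMP" ("EMPNO" NUMBER(4), "ENAME" VARCHAR2(10))
    TABLESPACE "APP_DATA" ;
    -- index DDL from import2.sql, retargeted to the index tablespace:
    CREATE INDEX "SCOTT"."EMP_IDX" ON "SCOTT"."EMP" ("EMPNO")
    TABLESPACE "APP_INDEX" ;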

  • Reading dynamic html table data

    Hi friends, I am creating an ASP.NET project and I want to read HTML table data which is created dynamically.

    This forum is for language questions. For help with ASP.NET, visit forums.asp.net.
    Visual C++ MVP

  • A question on different options for data archival and compression

    Hi Experts,
    I have a production database of about 5 terabytes in size, and about 50 GB in development/QA. I am on Oracle 11.2.0.3 on Linux. We have RMAN backups configured. I have a question about data archival strategy. To keep the OLTP database size optimal, what options can be suggested for data archival?
    1) Should each table have an archival strategy?
    2) What is the best way to archive data? Should it be sent to a separate archival database?
    In our environment, an archival strategy is defined for only about 15 tables. For these tables, we copy their data every night to a separate schema meant to store this archived data, and then eventually transfer it to a different archival database. For all other tables, there is no data archival strategy.
    What are the different options and best practices that can be reviewed to put a practical data archival strategy in place? I will be most thankful for your inputs.
    Also, what are the different data compression strategies? For example, we have about 25 tables that are read-only. Should they be compressed using the default Oracle 9i basic compression (alter table ... compress)?
    Thanks,
    OrauserN

    You are using 11g, and in 11g you can compress read-only as well as read-write tables. Both are candidates for compression. This will save space and could increase performance, but always test it first. This was not an option in 10g. Read the following docs.
    http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf
    http://www.oracle.com/technetwork/database/options/compression/faq-092157.html
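    A minimal sketch of the two syntaxes on 11.2 (table and index names are placeholders; Advanced Compression is separately licensed):

    -- basic compression (no extra license; best for read-mostly data);
    -- MOVE rewrites existing rows into compressed blocks
    ALTER TABLE sales_history MOVE COMPRESS BASIC;
    -- OLTP compression (Advanced Compression option): rows stay
    -- compressed under conventional INSERT/UPDATE as well
    ALTER TABLE orders MOVE COMPRESS FOR OLTP;
    -- indexes go UNUSABLE after a MOVE and must be rebuilt
    ALTER INDEX sales_history_ix REBUILD;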

  • Freeze during 10.4 archive and install

    Hi. I'm doing an Archive and Install on my PBG4, and the installation was going fine until I put in the second disk (as requested); it is now hung up on "Installing GarageBand Application", and has been for about 30 minutes: "Writing files: 8% complete". Does anyone have any advice? I think I need to bail out of the install, but I'm concerned about doing that. I DO have a complete back-up. Any advice is greatly appreciated. Thanks.

    Welcome to the forums!
    Should you need full service help, you can start a New Topic here...
    http://discussions.apple.com/forum.jspa?forumID=752&start=0
    That being said, nobody can say for certain, only guess. Most of the time you won't lose everything by Force Quitting, but it CAN happen; and since the install is already having problems, you may have had disk problems before you started.
    First, try to get DiskWarrior (the CD); second, get an external FireWire drive for backups.

  • Data Archiving and ABAP

    Hi All,
    I am an ABAPer and recently came across the process of data archiving.
    Can anyone tell me what significance data archiving has with respect to ABAP? Are they interrelated at any point?
    Waiting for Reply..
    Shilpa

    Hi Shilpa,
    As a developer you must be aware of the fact that whatever applications are provided by SAP, they all use code written in ABAP in the background.
    Similarly, we have a tool, the Archive Development Kit (ADK), provided by SAP, which uses various programs written in ABAP, grouped in the form of archiving objects, to perform archiving successfully in any system.
    Yes, archiving and ABAP are interrelated, and an ABAPer can very well understand how these programs actually function at runtime while we archive data from the database.
    Another important thing: apart from the standard archiving objects provided by SAP, requirements sometimes come up for custom objects to be created, which requires good in-depth knowledge of ABAP.
    I hope my answer helps you understand the association between ABAP and data archiving.
    For more you can go through the below link ::
    http://help.sap.com/saphelp_47x200/helpdata/en/8d/3e4d22462a11d189000000e8323d3a/frameset.htm
    -Supriya

  • Should we use LOGGING or NOLOGGING for table, lob segment, and indexes?

    We have some DML performance issues with 'enq: CF - contention' on tables that also include LOB segments. In this case, should we define LOGGING on the tables, LOB segments, and/or indexes?
    Based on the MOS note 'Performance Degradation as a Result of enq: CF - contention' (Doc ID 1072417.1), it looks like we need to turn on logging for at least the table and LOB segment. What about the indexes?
    Thanks!

    >
    These tables that have nologging are likely from the application team. Yes, we need to turn on the logging from nologging for tables and lob segments. What about the indexes?
    >
    Indexes only get modified when the underlying table is modified. When you need recovery you don't want to do things that can interfere with Oracle's ability to perform its normal recovery. For indexes there will never be loss of data that can't be recovered by rebuilding the index.
    But use of NOLOGGING means that NO RECOVERY is possible. For production objects you should ALWAYS use LOGGING. And even for those use cases where use of NOLOGGING is appropriate for a table (loading a large amount of data into a staging table) the indexes are typically dropped (or at least disabled) before the load and then rebuilt afterward. When they are rebuilt NOLOGGING is used during the rebuild. Normal index operations will be logged anyway so for these 'offline' staging tables the setting for the indexes doesn't really matter. Still, as a rule of thumb you only use NOLOGGING during the specific load (for a table) or rebuild (for an index) and then you would ALTER the setting to LOGGING again.
    This is from Tom Kyte in his AskTom blog from over 10 years ago and it still applies today.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
    >
    NO NO NO -- it does not make sense to leave objects in NOLOGGING mode in a production
    instance!!!! it should be used CAREFULLY, and only in close coordination with the guys
    responsible for doing backups -- every non-logged operation performed makes media
    recovery for that segment IMPOSSIBLE until you back it up.
    >
    Use of NOLOGGING is a special-case operation. It is mainly used in data warehouse (OLAP) systems during truncate-and-load operations on staging tables. Those are background or even offline operations, and the tables are NOT accessible by end users; they are work tables used to prepare the data that will be merged into the production tables.
    1. TRUNCATE a table
    2. load the table with data
    3. process the data in the table
    In those operations the table load is seldom backed up and rarely needs recovery. So use of NOLOGGING enhances the performance of the data load and the data can be recovered, if necessary, from the source it was loaded from to begin with.
    Use of NOLOGGING is rarely, if ever, used for OLTP systems since that data needs to be recovered.
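    A minimal sketch of the checks and fixes discussed above (the owner, table, LOB column, and index names are placeholders):

    -- find application segments still set to NOLOGGING
    SELECT table_name FROM dba_tables WHERE owner = 'APP' AND logging = 'NO';
    SELECT table_name, column_name FROM dba_lobs WHERE owner = 'APP' AND logging = 'NO';
    SELECT index_name FROM dba_indexes WHERE owner = 'APP' AND logging = 'NO';
    -- switch them back to LOGGING (then take a fresh backup, since prior
    -- non-logged operations remain unrecoverable until you do)
    ALTER TABLE app.orders LOGGING;
    ALTER TABLE app.orders MODIFY LOB (doc_blob) (NOCACHE LOGGING);
    ALTER INDEX app.orders_ix LOGGING;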

  • Book with Table of Contents and Index

    I want to write a genealogical book over 100 pages long, including photos, charts, and lots of text. My last book was on a PC using Adobe PageMaker, and I liked its ability to create a Table of Contents and an Index, both of which adjusted automatically as you added pages in the middle. An index is essential to genealogy, as people are mentioned in different places. To put a name in the Index, you selected the name, entered different possible versions of the name, and from a menu picked "Place in Index". When ready, you selected "Create Index" and it went through and picked the words that you had marked, noted what page they were on, and created a neat index. I need something like that for the Mac.
    I tried Adobe InDesign on the PC, and found that it was mostly for graphics and fancy lettering and did not have a good Table of Contents or Index function. Mac's Pages program does not seem to have any indexing function at all, and gets very difficult to use as the document gets large.
    Is there any Mac program which can write a serious book?

    That is not a feature of iPhoto
    And iPhoto books are limited to 100 pages (50 sheets of paper)
    You can create custom pages in other software and print to PDF using the send PDF to iPhoto function to create an image in iPhoto to place in your book
    see Old Toad's tutorial #19 for more details - http://web.me.com/toad.hall/NewTutorials/main19.html
    LN

  • Character override in Table of Contents and Index

    Am using a Stone Sans Open Type phonetic font for several characters in headings and in index markers. When Table of Contents or Index generates, the phonetic characters are not displaying in the override font. All characters display in the base font, and the characters that have the phonetic font override display as errors. Have tried using a Character Style in the heading to change those characters (and making sure the same Character Style is in the Index document), and have tried using manual overrides to change those characters. Same results. Any suggestions?

    @KH Allstars – What kind of error do you see displayed? Missing glyphs?
    One possible solution would be a GREP Search/Replace or a GREP Style, that is applying the right Character Style for the phonetic forms.
    We need two things to make that work:
    1. A "yes" to the question:
    Are all Unicode values of the phonetic font distinct from the Unicode values of the regular font?
    If yes, we could:
    2. Make a GREP representation of a list, or better: a range of Unicode values and apply a Character Style to it, when found. Could be a GREP Style in a Paragraph Style.
    To test this, just apply the base font to text that is already well formatted with the phonetic font. If ALL glyphs show as missing, we are on the safe side and can check for their Unicode range(s).
    Uwe
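    As a hypothetical example, if the phonetic characters all fall in the IPA Extensions block (U+0250-U+02AF; verify the actual range(s) for your font), the expression for the GREP Style could be a Unicode range class:

    [\x{0250}-\x{02AF}]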

  • System slow during loading data from source system

    hi,
    I am trying to load master data from R/3 into BW in the quality environment by means of a process chain. The problem is that, being a master data load, it is consuming a lot of time. My development and quality environments are maintained on the same server, so I suspect this is related to memory. It would be very helpful if anybody could mention ways to monitor memory, along with possible reasons for the slowness of the system.
    Source system: R/3
    Environment: Q03 (quality)
    Load: Master data (full load)

    Hi,
    I suggest you check a few places where you can see the status:
    1) SM37 job log (in the source system if the load is from R/3, or in BW if it is a datamart load): give the request name, and it should give you the details about the request. If it is active, make sure the job log is being updated at frequent intervals.
    Also see if there is any 'sysfail' for any data packet in SM37.
    2) SM66: get the job details (server name, PID, etc. from SM37) and see in SM66 whether the job is running or not (in the source system if the load is from R/3, or in BW if it is a datamart load). See if it is accessing/updating some tables or not doing anything at all.
    3) RSMO: see what is available in the Details tab. It may be stuck in the update rules.
    4) ST22: check whether any short dump has occurred (in the source system if the load is from R/3, or in BW if it is a datamart load).
    5) SM58 and BD87: check for pending tRFCs.
    Once you identify the problem you can rectify the error.
    If all the records are in the PSA, you can pull them from the PSA to the target. Otherwise you may have to pull the data again from the source InfoProvider.
    If the load is running and you can see it active in SM66, you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening at the system level, such as reading/inserting into tables.
    If you feel it is active and running, you can verify this by checking whether the number of records in the data tables has increased.
    Thanks,
    JituK
