Active Table Logging T000 performance impact

Hi fellow SAP experts,
I need some advice on the system performance impact of switching on table logging for T000 (client configuration) in production.
We have decided to turn on table logging for auditing purposes, and to allow developer configuration in production only after a sufficient volume of supporting evidence has been supplied.
I need to know how much this activation is going to impact the performance of the company's production environments - how much storage, memory, and processing this function is going to consume, and how much of each I need to cater for now and in the future.
We have a dual-track environment: BAU want to switch on table logging for fix-on-fail, and I want to switch it on for project deliveries.
Please advise, with references if possible.
Thank you kindly
Paul

Hi Paul,
There has been constant debate over whether table logging affects system performance, especially in a production environment. Please see my comments below:
1) To turn on logging for table T000, you have to activate the profile parameter rec/client, with values for one client or for all clients in the system, depending on your requirement.
2) This parameter setting will log changes not only to T000, but also to over 28,000 other tables.
3) However, these are customizing tables, which usually contain a relatively small amount of data that is changed only occasionally.
4) After activating it, if you suddenly find performance issues, you can check which tables are causing them via transaction SCU3.
5) If you find too many entries for a particular table in SCU3, you can go to transaction SE13 and deactivate logging for that table.
So logging tables doesn't necessarily impact performance. Hope this helps. Please also refer to SAP Note 608835 on this topic.
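For reference, the setting itself is just a profile parameter; a minimal sketch (the client numbers below are hypothetical):
    # instance or default profile
    rec/client = ALL
    # or restrict logging to specific clients, for example:
    # rec/client = 100,200
    # rec/client = OFF disables table logging (the default)
The parameter takes effect after an instance restart; from then on, changes to tables flagged "Log data changes" in SE13 are written to table DBTABLOG.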
Best Regards,
Savitha

Similar Messages

  • Activating Table Logging

    Hi All
    We are currently looking at activating table logging on our ECC system, as we have some tables that will require logging for an internal auditing process. So far I have activated "Log Data Changes" in SE13; however, I am a little concerned about implementing the rec/client parameter, as I am not sure how this will affect performance and database size.
    I am aware that SAP have logging already set on quite a few tables, but without the parameter set these will not log.
    Does anyone have any experience of doing this, what was the effect, and is there anything I should be aware of?
    I know all systems are different, but any advice would be great.
    Thanks
    Phil

    Hi Phil,
    I've not been able to measure any change in performance on systems where I have implemented table logging.
    The data is stored in DBTABLOG and its growth is entirely dependent on what you are logging and changing.  From memory, the SAP-defined lists are all config/customisation tables, and as there will/should be infrequent change to these in the prod environment, the log size will not grow in the way that the Security Audit Log files do.
    If you log tables containing transactional data then you will get a large growth in the table size and you will need to think about archiving.
    The approach that I usually take is to activate logging on config tables in the dev environment and a specific set of customer defined tables in production. 
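    If you want to see where the growth is actually coming from, a rough sketch (assuming you can query the database directly; transaction SCU3 gives you the same picture from inside SAP):
        -- count log entries per logged table in DBTABLOG
        SELECT tabname, COUNT(*) AS log_entries
        FROM   dbtablog
        GROUP  BY tabname
        ORDER  BY log_entries DESC;
    Any table that shows up here with a large, fast-growing count is a candidate for switching logging off again in SE13, or for archiving.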
    Hope that helps
    Cheers
    Alex

  • What is the impact - Activating Change Logs for Material Classification Data

    To activate change logs for material classification data, I must first set the "Multiple Objs Allowed" flag for class type 001 (material class). I am curious about what the impact of setting this flag will be. Also, is there a way to measure the DASD impact of activating change logging for this data?
    Thank you
    Kevin

    In addition to the performance impact the process of turning on Change Logs for Classification data will convert the tables KSSK and AUSP to use an internal SAP number as a key instead of the Material number. This will cause problems for any custom programs that access those files directly and expect the material number to be the key. This file conversion occurs during the execution of program RCCLUKA2 which is used to Flag the change logs for existing records. This can be reversed by running report RMCLINOB. Because of the above impact we decided here to NOT implement this change in production.

  • Impact of Table Logging on Standard SAP Tables

    Table logging is not currently active in our system, so if we activate this parameter, what will be the impact on the standard SAP tables?
    Please see the example below.
    For some standard SAP tables, the "Log data changes" flag is already enabled in the technical settings, but because overall logging is not activated in the system, no logs are being saved for these tables either.
    So, if overall logging is enabled, what impact will it have on the standard SAP tables?

    Please move this to BW area of SCN, BI platform space is for Analytics/Business Objects platform.

  • Difference between Delta "Change Log" and "Active Table (Without Archive)"?

    In our BI 7.0 environment, we perform our delta loads every day across all our DSOs (in the DTP settings under the Extraction tab there is a field called Extraction Mode, and its value is set to "Delta").
    Under the same tab in the DTP there is a section called "Delta Init. Extraction From..." with four radio buttons:
    Active Table (With Archive)
    Active Table (Without Archive)
    Archive (Full Extraction Only)
    Change Log
    So what is the difference between "Change Log" and "Active Table (Without Archive)" if the extraction mode is "Delta" for both delta loads?
    Thanks!

    Hi ,
    The new options available as of SP16 are (check SAP Note 1096771):
    Active Table (with Archive)
    The data is read from the active table and from the archive or from a near-line storage if one exists. You can choose this option even if there is no active data archiving process yet for the DataStore object.
    Active Table (Without Archive)
    The data is only read from the active table. If there is data in the archive or in a near-line storage at the time of extraction, this data is not extracted.
    Archive (Only Full Extraction)
    The data is only read from the archive or from a near-line storage. Data is not extracted from the active table.
    Change Log
    The data is read from the change log of the DataStore object.
    Deltas will always be picked up from the change log table. Only during initialization can you choose between getting data from the change log or from the active table. If you are doing the load for the first time and are initializing the delta into subsequent data targets, then pulling the data from the active table will fetch a smaller volume of data than it would from the change log table. All subsequent deltas will be picked up from the change log. And when we need to reload data into the data target (which would be a full load), we use the active table.
    From the change log, you can use the following as targets:
    1) Cube  2) DSO with addition as the update mode for the key figures
    From the active table, you can use the following as targets:
    1) Cube, if and only if the records are never changed in the source after creation
    2) DSO with addition as the update mode for the key figures, if and only if the records are never changed in the source after creation
    3) DSO with overwrite as the update mode for the key figures (in case deletions are not happening in the source system)
    Please check this link:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/47/e8c56ecd313c86e10000000a42189c/frameset.htm
    Regards,
    CSM Reddy

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables based on object types, which are passed between multiple functions in the package.
    Will this have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User
    Edited by: user9080289 on Jun 30, 2011 6:07 AM
    Edited by: user9080289 on Jun 30, 2011 6:42 AM

    user9080289 wrote:
    > While creating a package, iam using lot of nested tables created based on objects which will be passed between multiple functions in the package..
    Not the best of ideas in general in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
    > Will it have any performance impact since all the data is stored in the memory.
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA, i.e. private process memory. This means each process copy running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as the data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on having it in physical memory, that it trashes memory management: the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory, and most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    > How can i measure the performance impact when the data grows ?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is a factor - just how much private process memory your code needs in order to execute.
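    As a starting point for measuring that, a sketch using the standard dynamic performance views (run it while the package is executing; the statistic name is 'session pga memory'):
        -- PGA currently consumed per session
        SELECT s.sid, s.username, ROUND(st.value/1024/1024) AS pga_mb
        FROM   v$sesstat  st
               JOIN v$statname n ON n.statistic# = st.statistic#
               JOIN v$session  s ON s.sid = st.sid
        WHERE  n.name = 'session pga memory'
        ORDER  BY st.value DESC;
    Watching how pga_mb grows as your collections grow gives you a direct measure of the cost of holding all that data in nested tables.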

  • Impact of Query Logging on Performance of Queries in OBIEE

    I see from [An Oracle BI Blog post|http://obieeblog.wordpress.com/2009/01/19/obiee-performance-tuning-tip-%e2%80%93-turn-off-query-logging/] that Query Logging has a performance impact in OBIEE.
    What is the experience with Query Logging at different levels in a Production environment with, say, 50 or 100 or 500 concurrent users ?
    I am completely new to OBIEE, I know the Database. So, please bear with me.
    Hemant K Chitale

    Kumar's blog that you reference says it all really.
    I don't know if anyone's going to be able to give you the kind of information you're looking for, because it's a no-brainer not to enable this level of logging :)
    Is there a reason you're even considering it?
    Imagine running a low-level trace or debug log in the database for every user session... you just wouldn't do it.

  • Table has 80 million records - Performance impact if we stop archiving

    Hi All,
    I have a table (Oracle 11g) which has around 80 million records. Until now we have done weekly archiving to keep its size down, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
    I was just wondering whether that is true and, moreover, what kind of effect there would be on querying and insertion if the table holds 80 million rows and is growing every day.
    Any comments welcomed.

    It is true that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes or if it is 5 KB.
    As for the performance impact, it depends on the queries that access this table: the more data the queries need to process and/or return as a result set, the bigger the impact can be on the performance of those queries.
    You don't give enough input for a good answer. Ideally you should provide the DDL statements that create this table and its indexes, and the SQL queries that use this table.
    In some cases using table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition plus additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
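    For illustration, a range-partitioning sketch (the table, columns and date ranges are hypothetical; whether it helps depends entirely on whether your queries and purge jobs can prune on the partition key):
        CREATE TABLE orders_part (
          order_id    NUMBER,
          order_date  DATE,
          amount      NUMBER
        )
        PARTITION BY RANGE (order_date) (
          PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01'),
          PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'),
          PARTITION pmax  VALUES LESS THAN (MAXVALUE)
        );
    With a scheme like this, old data can be dropped or archived a partition at a time instead of row by row.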

  • DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?

    Hi All,
    To free up space, I have been advised to use the command below on the primary database in a log shipping setup. I just want to clarify whether it has any performance impact on the primary database in log shipping, and whether it is recommended practice to run this command at regular intervals when the log is using a lot of space on the drive. Please advise. Thank you.
    "DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)"
    Regards,
    Kalyan
    ----Learners Curiosity Never Ends----

    Hi Kalyan \ Shanky
    I was not clear in the linked conversation, so I am adding something:
    As per http://msdn.microsoft.com/en-us//library/ms189493.aspx
    -----> TRUNCATEONLY is applicable only to data files.
    BUT
    As per http://technet.microsoft.com/en-us/library/ms190488.aspx
    TRUNCATEONLY affects the log file.
    And I also tried it; it does work.
    Now, TRUNCATEONLY: releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified with TRUNCATEONLY.
    So:
    1. If I am removing unused space, it will not affect log shipping and no log chain will be broken.
    2. If you clear unused space, it will not touch existing data, so there is no performance issue.
    3. If you clear space and the log file then auto-grows again because of other operations, it puts unnecessary pressure on the database to allocate disk space every time. So once you know the maximum size the log file grows to, leave it at that size, as it will only grow back to it anyway.
    4. Shrinking the log file is not recommended if it keeps growing back to the same size again and again, unless you have a space crunch.
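    For completeness, a sketch of what that looks like in practice (the file name is the one from your question; whether you pass a target size or use TRUNCATEONLY is up to you):
        -- how full is each database's transaction log?
        DBCC SQLPERF(LOGSPACE);
        -- release only the unused space at the end of the file back to the OS
        DBCC SHRINKFILE ('CommonDB_LoadTest_log', TRUNCATEONLY);
    Checking LOGSPACE first tells you whether the log is mostly empty (a one-off shrink may be worthwhile) or mostly full (shrinking will just trigger auto-growth again).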
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • Need of Change log deletion for a DSO, when DTP delta Init with Active Table

    Hi,
    Thanks for your time, please help me with the following
    Scenario: I have had Cube1 being updated from DSO1 via a delta DTP for quite some time. As per a new requirement, I need to create a new data mart such that Cube2 also gets updated from DSO1.
    In order to configure the delta, I am considering the following options; please let me know which one is right.
    Note: DSO1 is in overwrite mode.
    Create one DTP with extraction mode "Delta" and "Active Table". I know delta changes are captured in the change log, but my understanding is that when you execute the DTP for the first time, it fetches the records from the active table, and from the second run onwards the very same DTP, without any changes, gets the records from the change log, even if we chose "Active Table".
    Is my understanding right?
    Do I have to consider deleting the change log, given that DSO1 has been in production for quite some time and is in overwrite mode?
    Or do I have to create a DTP with Delta and Active Table, run the DTP once (init), and then explicitly choose "Change Log" for subsequent runs?
    Please help me with these. I have reviewed many SAP help articles and SDN threads, but none of them were helpful; all of them explain the difference between the active table and the change log table, or give scenarios like using a full DTP and then a second DTP with delta without data transfer. I am not looking for that kind of detail.
    Appreciate any help, Thanks again
    Edited by: curious maven on Mar 28, 2011 8:35 AM

    Hi Uma,
    Thanks for your response.
    To your question of whether I need to load all the data from DSO1 to Cube1: the answer is no. If you read my post once again:
    Existing flow: DSO1 --> Cube1 (meaning the active table and change log of DSO1 are already filled up).
    Requirement: DSO1 --> Cube2 (need to initialize the delta, and then deltas going forward).
    My questions:
    (1) Imagine DSO1 has 10 requests, so we see the respective data in the active table as well as in the change log. Data is being updated to DSO1 in overwrite mode, so all the changes (the equivalent of 10 requests) would be captured in the change log, which in turn is used when we do a delta from DSO1 to any cube - in my case it used to update Cube1, and as per the new requirement it needs to update Cube2.
    My assumption is that if I choose "Active Table" and "Delta" in the Extraction tab of the DTP, then during the first run (delta init) the data will be fetched from the active table, and from the second run onwards delta records will be fetched from the change log automatically, even if we don't change the selection from Active Table to Change Log.
    Is my assumption right?
    In this process, do I have to delete the change log?
    (2) Do I have to explicitly change the DTP setting from "Active Table" to "Change Log" once the delta init has been run with the "Active Table" setting, in order to get deltas from DSO1 --> Cube2?
    Appreciate your help
    Edited by: curious maven on Mar 30, 2011 3:14 PM
    Edited by: curious maven on Mar 30, 2011 3:29 PM

  • Performance impacts of activating MDTB for MRP lists

    Does anyone have any input on the performance impact of activating MDTB? Clearly it will impact performance, but by how much? What are the main drivers of the impact - the number of MRP elements / MDTB records?
    What methods can be used to mitigate it? Increasing the DB buffering?
    Thanks

    Hi,
    I doubt there can be a generic answer to your query. It would depend on a lot of parameters, e.g. the number of materials being planned, the frequency of planning, the system resources available, etc.
    All I can say is: load the entire set of materials into the system and then work with your Basis personnel to fine-tune the system.
    Regards,
    Vivek

  • Audit Log - Table Log

    Hi everyone,
    Can anyone tell me: if I activate table logging on a table that is not a customizing table, such as MARC, what information is saved in the system?
    Can I check or know, prior to activating the audit log on this table, which fields or what information will be recorded?
    thanks,
    HEPC

    This is table logging for customizing-type entries and is not necessarily what you are looking for.
    For master data you need to use the application change documents (table CDHDR etc) which is a different concept (I would use that route and protect the object S_ARCHIVE).
    What you are actually looking for (and waiting for) is [the package concept at runtime|http://forums.sdn.sap.com/click.jspa?searchID=58483939&messageID=4675719] which developers can already see as warnings. It also means that the package which the table is assigned to must have a complete set of APIs.
    I would personally not look for workarounds with performance impacts, but rather clean up the code to make it package concept conform, and then use the application change documents and not the table change records.
    This is a better design - more sustainable, less hassles and auditable (via where-used-lists).
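    For illustration, the change-document route means reading CDHDR/CDPOS rather than DBTABLOG; a rough sketch (object class 'MATERIAL' is the one used for material master changes, and the material number below is hypothetical):
        -- change document headers for one material
        SELECT objectclas, objectid, changenr, username, udate, tcode
        FROM   cdhdr
        WHERE  objectclas = 'MATERIAL'
        AND    objectid   = '000000000000012345';
    The field-level old and new values then sit in CDPOS under the same OBJECTCLAS / OBJECTID / CHANGENR keys.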
    My 2 cents,
    Julius

  • Usage Tracking performance impact

    Hi all,
    I have just started to implement the OBIEE usage tracking functionality.
    Has anyone implemented it who can share some experience of the performance impact with and without usage tracking enabled?
    Thanks
    BCHK

    If you use the direct insert option, the impact will be minimized. In general, usage tracking captures report-run activity and logs it into a table, so there is overhead on the system to insert into the table or write to a file. The trade-off is the information you get from usage tracking.
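    For reference, the switches live in the [USAGE_TRACKING] section of NQSConfig.INI; a minimal sketch (the physical table and connection pool entries, which point at the S_NQ_ACCT table, are omitted here and depend on your repository):
        [ USAGE_TRACKING ]
        ENABLE = YES;
        DIRECT_INSERT = YES;
    With direct insert the BI Server writes each record straight to the usage tracking table instead of buffering to flat files, which is what keeps the overhead low.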
    hope this helps..
    Edited by: Kasyap on Mar 23, 2013 10:32 PM

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not in RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer
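    A quick way to confirm whether redo writes are actually the problem is to look at the redo-related wait events; a sketch using the cumulative instance statistics (run it on each RAC node):
        -- average wait per redo write / commit wait since instance startup
        SELECT event,
               total_waits,
               ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
        FROM   v$system_event
        WHERE  event IN ('log file parallel write', 'log file sync');
    If 'log file parallel write' averages well above a few milliseconds, the redo log devices themselves are the bottleneck; if only 'log file sync' is high, look at commit frequency and CPU as well.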

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), although they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    > Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    > Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    > Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    > Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    > Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID-protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    > Slower than conventional disks on sequential I/O
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    > Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • Index creation online - performance impact on database

    hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table which has more than 255 columns and is about 400 GB in size; it is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: does index creation on an object during DML operations on the same object have a performance impact on the database, and is there a major difference in impact between creating the index online and not online?
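    (For reference, the statement in question is of this form - the index and column names are hypothetical:)
        CREATE INDEX tbl_col_idx ON tbl (col)
          ONLINE
          PARALLEL 4;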
    2. I tried to build an index on a column which has NULL values, on this same table 'TBL' (more than 255 columns, about 400 GB in size, highly fragmented, roughly 140 million rows).
    I requested that the applications be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a pre-prod database which holds an exported and re-imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
    When I created the same index on the same column there, it only took 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented prod copy compared to only 15 minutes on the highly defragmented pre-prod copy.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
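    (As a footnote to the single pass / multi pass question above, a rough instance-level check is sketched below; per-statement detail would come from v$sql_workarea:)
        -- how many large sort/hash operations ran optimal, one-pass or multi-pass since startup
        SELECT name, value
        FROM   v$sysstat
        WHERE  name LIKE 'workarea executions%';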
    Regards
    Jonathan Lewis
