Change Tracker Performance Impact

MDM: 7.1
CE: 7.2
ERP: 6.0 EHP4
Hi,
We are currently using CE/BPM-based central master data management. A custom application is being developed for collaborative master data authoring.
As part of the MDM configuration, the client wishes to track changes on MDM records for audit purposes. We are looking at the MDM Change Tracking facility provided by SAP, but we are not sure about the performance impact it will have on the MDM server.
We have over 300 attributes on the object we wish to track for changes. Not all attributes will change all the time, but it is expected that the overall number of changes every month will be over 1,000, each change including approx. 20-30 fields. The number of users is expected to be approx. 15 initially but will increase over time.
I have seen people on the SCN forums talking about potential performance degradation from enabling change tracking. Has anyone actually experienced performance degradation due to enabling change tracking for MDM records? If so, have you tried any means to keep the impact low, e.g. by allocating more resources to the MDM server?
Thanks and regards,
Shehryar
Edited by: Shehryar Khan on Dec 2, 2010 1:39 PM

Hi,
Change history does have an impact on system performance. This can be controlled via regular archiving of the change history database. For example, change history data older than 3 months can be moved to another repository.
A second option could be to export all change history data older than 1-2 days to a BI system (using a regularly scheduled job) and run the change history reporting there. That will bring a drastic improvement in MDM system performance; the most recent 1-2 days of data can still be viewed in the MDM change history database itself.
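For example, a rough sketch of the first (archiving) option at the database level, assuming the change history lives in the A2I_CM_HISTORY table (the table name the SAP Notes on MDM change tracking use); the timestamp column name EventTime is an assumption, so verify it against your repository before using anything like this:

-- Hedged sketch: move change-history rows older than 3 months to an archive
-- schema, then purge them from the live table. Names are illustrative.
INSERT INTO ARCHIVE.A2I_CM_HISTORY
SELECT * FROM A2I_CM_HISTORY
WHERE  EventTime < ADD_MONTHS(SYSDATE, -3);  -- Oracle syntax; adjust per DBMS

DELETE FROM A2I_CM_HISTORY
WHERE  EventTime < ADD_MONTHS(SYSDATE, -3);

COMMIT;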
Regards,
Prashant
Edited by: Prashant Malik on Dec 3, 2010 1:58 AM

Similar Messages

  • Usage Tracking performance impact

    Hi all,
    I just started to implement the OBIEE usage tracking functionalities.
    Has anyone implemented it who can share some experience on the performance impact with/without usage tracking enabled?
    Thanks
    BCHK

    If you use the direct_insert option, the impact will be minimized. In general, usage tracking captures report-run activity and logs it into a table, so there is overhead on the system to insert into the table or write to a file. The trade-off is the information you get with usage tracking.
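    Once tracking is writing to the usage tracking table (S_NQ_ACCT by default), you can mine it for report activity. A hedged sketch; the column names here are the commonly documented ones, so verify them against your OBIEE version:

    -- Top reports by average runtime from the usage tracking table.
    SELECT   SAW_SRC_PATH        AS report_path,
             COUNT(*)            AS executions,
             AVG(TOTAL_TIME_SEC) AS avg_seconds
    FROM     S_NQ_ACCT
    GROUP BY SAW_SRC_PATH
    ORDER BY avg_seconds DESC;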
    hope this helps..
    Edited by: Kasyap on Mar 23, 2013 10:32 PM

  • Performance impact after changing the AWR snapshot interval from 1 hour to 15 minutes

    We want to know the performance impact after changing the AWR snapshot interval from 1 hour to 15 minutes.

    Hi,
    1) typically the performance impact is negligible
    2) we have no way of knowing whether your system fits into the definition of "typical"
    3) the best way would be to do that on a test system and measure the impact (see the sketch below)
    4) I would be concerned more about SYSAUX growth than performance impact -- you need to make sure that you won't run out of space because of 4x more frequent snapshots
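    For reference, the interval is changed with the standard DBMS_WORKLOAD_REPOSITORY API (both values are in minutes; the retention value here is illustrative):

    -- Take AWR snapshots every 15 minutes, keep e.g. 8 days of history.
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        interval  => 15,
        retention => 8 * 24 * 60);
    END;
    /

    -- Verify the current settings.
    SELECT snap_interval, retention FROM dba_hist_wr_control;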
    Best regards,
      Nikolay

  • Change Tracking internals behave differently, SQL Server 2012 vs SQL Server 2008

    <original post by Glenn Estrada>
    Reposting an issue from Stack Overflow that a coworker and I are dealing with.
    In troubleshooting an issue with synchronizing disconnected devices with a central database server using Sync Framework 1.0, we are experiencing a problem after upgrading to SQL Server 2012 on the server. It appears that CHANGE_TRACKING_MIN_VALID_VERSION is returning a value 1 higher than it should (or at least than it did prior to the upgrade).
    I have been working through Arshad Ali's great walkthrough example of how to set up a simple example.
    I have run the scripts from #1 through #5 to insert, delete, and update a row in the Employee table in both a SQL Server 2008 and a 2012 environment.
    In 2008, the following statement returns a 0:
    SELECT CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('Employee'))
    In 2012, it returns a 1.
    In working through a few more scripts (6-8) in the tests, I set the retention period to 1 minute, hoping to force a cleanup action. I left for the day, and apparently the cleanup ran overnight.
    In the 2008 instance, CHANGE_TRACKING_CURRENT_VERSION and CHANGE_TRACKING_MIN_VALID_VERSION are equal (11). In the 2012 instance, CHANGE_TRACKING_MIN_VALID_VERSION is one higher (12) than CHANGE_TRACKING_CURRENT_VERSION (11). This could have an impact on the synchronization process when a database is idle for extended periods of time. And we have found the process could get caught in a loop, especially when the following test is performed to determine whether a re-initialization, as opposed to synchronization, is required:
    IF CHANGE_TRACKING_MIN_VALID_VERSION(object_id(N'dbo.Employee')) > @sync_last_received_anchor
    RAISERROR (N'SQL Server Change Tracking has cleaned up tracking information for table ''%s''...
    Has anyone else experienced this change in behavior? Does anyone have an explanation?
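    For anyone comparing notes, the guard in question follows the documented change tracking sync pattern; a hedged sketch against the Employee example (the key column name EmployeeID is an assumption):

    -- Decide between reinitialization and incremental sync.
    DECLARE @last_sync bigint = 11;  -- version persisted after the previous sync

    IF CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID(N'dbo.Employee')) > @last_sync
    BEGIN
        -- Cleanup removed versions we still need: take a full snapshot, then
        -- persist CHANGE_TRACKING_CURRENT_VERSION() as the new anchor.
        RAISERROR (N'Reinitialization required for dbo.Employee', 16, 1);
    END
    ELSE
    BEGIN
        -- Safe to pull only the rows changed since the last sync.
        SELECT ct.EmployeeID, ct.SYS_CHANGE_OPERATION, e.*
        FROM CHANGETABLE(CHANGES dbo.Employee, @last_sync) AS ct
        LEFT JOIN dbo.Employee AS e ON e.EmployeeID = ct.EmployeeID;
    END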


  • Active Table Logging T000 performance impact

    Hi fellow SAP experts,
    I need some advice on the system performance impact of switching on table logging for T000 (configuration) in production, please.
    We have decided to turn on table logging for auditing purposes, only allowing developer configuration in production after a volume of evidence has been supplied.
    I need to know how much this activation is going to impact the performance of the company's production environments: how much storage, memory, performance, etc. this function is going to consume, and how much of the above consumables I need to cater for now and in the future.
    We have a dual-track environment; BAU want to switch on table logging for fix-on-fail, and I want to switch it on for project deliveries.
    Please advise, with referencing if possible?
    Thank you kindly
    Paul

    Hi Paul,
    There has been a constant debate about whether table logging affects system performance, especially in a production environment. Please see my comments below:
    1) To turn on logging for table T000, you will have to activate the parameter rec/client, with values for one client or all the clients in the System depending on your requirement.
    2) This parameter setting will log changes not only to T000, but to over 28,000 tables.
    3) But these are customizing tables which usually contain a relatively small amount of data which is changed occasionally.
    4) After activating, if you suddenly find performance issues, you can check which tables are causing issues via transaction SCU3.
    5) You can go to transaction SE13 and deactivate logging for a table, if you find too many entries for any particular table in SCU3.
    So, table logging doesn't necessarily impact performance. Hope this helps. Please refer to SAP Note 608835 on this topic.
    Best Regards,
    Savitha

  • MS Sync Framework - SQL Change Tracking issue

    Need some recommendations or suggestions on the SQL change tracking retention period.
    We are using MS Sync Framework 2.1 to synchronize data from SQL Server to WinCE 3.5 devices (SQL CE 3.5).
    As part of the solution, we have used the SQL change tracking feature to keep track of data changes that happen on the server, so that they can be downloaded.
    For change tracking, we have set the database retention period to 10 days.
    Everything works fine in the normal scenarios.
    Issue: when a new device is installed more than 10 days after data changes happened on the server, the changes older than 10 days are not downloaded to the device because of the retention period.
    If you increase the retention period, it retains the data for that period.
    Suggestions required:
    What is the best configuration value for the retention period?
    Can we set it to 2 or 3 years? Is there any performance impact if retention is set to the maximum period?
    Is there an alternative approach: configure a minimum period and still synchronize the required changes?
    Note: we are downloading changes only from the server; it is a download-only configuration.

    Hi,
    When you are setting the change retention value, you should consider how often applications will synchronize with the tables in the database. The specified retention period must be at least as long as the maximum time period between synchronizations.
    If an application obtains changes at longer intervals, the results that are returned might be incorrect because some of the change information has probably been removed. To avoid obtaining incorrect results, an application can use the CHANGE_TRACKING_MIN_VALID_VERSION
    system function to determine whether the interval between synchronizations has been too long.
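    For reference, retention is a database-level setting; a hedged example (the database name is illustrative):

    -- Lengthen retention to 30 days so devices syncing less often than every
    -- 10 days do not fall outside the valid version range.
    ALTER DATABASE SalesDB
    SET CHANGE_TRACKING (CHANGE_RETENTION = 30 DAYS, AUTO_CLEANUP = ON);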
    As for the performance impact, it is related to what changes are being tracked and the size of each increment.
    For more information, see:
    http://msdn.microsoft.com/en-us/library/bb964713.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size which is also highly fragmented because of constant DML activities.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database from creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically, will index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in impact between creating the index online and offline?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which holds an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column there, it only took 15 minutes to complete.
    Not sure why it took more than 6 hours on the highly fragmented copy in Prod, compared to only 15 minutes on the highly defragmented copy in Pre-Prod.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, one-pass, or multi-pass in the two environments? (A quick check is sketched below.)
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
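    On the in-memory / one-pass / multi-pass question, a quick system-wide check is the workarea execution statistics; a hedged sketch (compare the counters before and after the build):

    -- 'optimal' = in-memory sort; 'onepass'/'multipass' = spilled to temp.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name LIKE 'workarea executions%';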
    Regards
    Jonathan Lewis

  • Performance Impact of Unique Constraint on a Date Column

    In a table I have a compound unique constraint which extends over 3 columns. As a part of functionality I need to add another DATE column to this unique constraint.
    I would like to know the performance implications of adding a DATE column to the unique constraint. Would the DATE column behave like another VARCHAR2 or NUMBER column, or would it degrade the performance significantly?
    Thanks
    Message was edited by:
    user627808

    What performance are you concerned about degrading? Inserts? Or queries? If you're talking about queries, what sort of access path are you concerned about?
    Are you concerned that merely changing the definition of the unique constraint would impact performance? Or are you worried that whatever functional change you are making would impact performance (i.e. if you are now retaining historical data in the table rather than just updating it)?
    Regardless of the performance impact, unique indexes (and unique constraints) need to be correct. If you need to allow duplicates on the 3 current columns with different dates, then you would need to change the unique constraint definition regardless of the performance impact. Fast and wrong generally isn't going to be preferable to slow and right.
    Generally, though, there probably is no reason to be terribly concerned about performance here. Indexing a date is no different than indexing any other primitive data type.
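    If you do change the definition, it is just a drop and re-add of the constraint; a hedged sketch (table, constraint, and column names are illustrative):

    -- Extend a 3-column unique constraint with a DATE column.
    ALTER TABLE orders DROP CONSTRAINT orders_uk;
    ALTER TABLE orders ADD CONSTRAINT orders_uk
      UNIQUE (customer_id, product_id, location_id, order_date);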
    Justin

  • Example of Change Tracking in Master Data Services

    I've read through, and performed the steps in, this Microsoft article on change tracking in MDS using business rules:
    MDS Change Tracking
    The problem is that it's not really tracking anything. The most I can do here is place a value in a column when some value in that row has changed. It's super limited and adds very little value:
    - I can't make this value a date.
    - I can't create a new row and set a field in the original row to "expired".
    Basically, I can't see how the product can do anything that one would actually need change tracking for.
    I've searched but found no demos on how this is actually used.
    Is there a demo out there, perhaps by Microsoft, that shows the extent to which change tracking in MDS can be used?

    The feature allows you to run a business rule whenever a change is made to a tracked entity member attribute. Business rules are the main extensibility mechanism in MDS, and can do way more than just setting a member attribute. For instance, you can send email, kick off a workflow, or run custom code through the misleadingly-named "External Workflow" functionality.
    If you just need to enable downstream systems to extract master data that has changed recently, then you can simply filter using the LastChgDateTime column in the subscription views.
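    A hedged sketch of that incremental extract (the subscription view name mdm.Product_Leaf is illustrative; LastChgDateTime is the standard column in MDS subscription views):

    -- Pull only members changed since the last extract.
    DECLARE @last_extract datetime2 = '2013-01-01';  -- persisted watermark

    SELECT *
    FROM   mdm.Product_Leaf
    WHERE  LastChgDateTime > @last_extract;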
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Change Tracking at Record Level?? URGENT

    Hi Experts,
    I have a requirement where I need to activate change tracking for 6 fields.
    Suppose I update 3 field values in 4 records in Data Manager.
    How will I know from the history table that a particular record was updated, with the old value, new value, user name, etc.?
    I think it just says the field was updated from this value to that value, but how will I know in which record this field was updated from this value to that value?
    Can anyone help?!
    Kind Regards
    John

    Hi,
    Is the workflow solution something related to creating time stamps for each field that we wish to track? We don't want to create time stamps for each individual field for which we require tracking!
    Also, in the notes below I can see:
    "By default, the MDM change tracking database has a single index which is applied on the 'Id' field."
    Does this mean the record ID in the MDM main table? If it gives the record ID, then that solves my problem.
    Note 1343132 - Archiving MDM change tracking A2I_CM_HISTORY best practices
    Note 1405410 - MDM change tracking A2I_CM_HISTORY performance and indexes  
    KR
    John

  • Queries Performance impact

    Hi Team,
    We have a few queries which were running well until last week, but for the past 3 days these queries have been facing some severe performance issues and timeout dumps in the back end.
    For some selections a query runs long, for some it executes quickly, and for some it times out.
    Three days ago we made a complete data rebuild for the queries' connected data targets (a data rebuild from the source); the query performance issues appeared after that.
    No changes were made to the queries or objects for the last 2 months.
    Data Flow -  Query -> Multi Provider -> Infoset -> InfoCube -> DSO -> Datasource (DB Connect).
    Note:
    In the query we have nested aggregation to handle the result rows, but again, no changes have been made to it for the past 2 months.
    We have loaded the data in one single request at the InfoCube level.
    I mean some 2 million records with different plants in one single request: does that have a performance impact while reading data?
    Can anyone please throw light on the possible cause for the performance issue?
    Thanks
    Regards
    San

    Hi San,
    As you said your performance issues started only after the complete data load, can you please tell us whether you are using a BIA for reporting?
    If not BIA, can you please delete the DB statistics for those InfoCubes and then re-create the DB statistics for them.
    Also, since you completely rebuilt the data, which means a drop and reload, your PSA/temp space or temporary file space might have filled up completely. Ask your Basis team to check the space in the tables.
    Regards,
    Rajesh

  • How to handle Integrated Configuration performance impact on AAE/Java AS

    Hi there,
    Recently I moved a configuration scenario from the standard flow involving both ABAP and Java stacks to Integrated Configuration usage. Undoubtedly, this will increase the load on the AAE/Java stack. However, do you have a link to some clear (official, even better) guidelines on what configuration changes should be made on the Java side in order to handle the performance impact of such a transition?
    Best Regards,
    Lalo

    Hi Lalo,
    In fact, using the AAE generates no traffic in the ABAP stack at all (it is omitted when processing a message), while the traffic in the Java stack should be lower than for a normal scenario. The performance should be noticeably better, thanks to a smaller number of persistence steps and no costly HTTP connections between the stacks. For more details, please refer to this document:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7
    Important quotation from this document:
    Since the Integration Engine is bypassed for local message processing in the AAE, the resource consumption both in memory and CPU is lower. This leads to higher message throughput, and faster response times which especially is important for synchronous scenarios.
    Moreover, have a look at this document, especially its beginning, for details about the architecture of AAE processing:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/70066f78-7794-2c10-2e8c-cb967cef407b
    Hope this helps,
    Greg

  • Partition Change Tracking

    Hi
    While reading the Oracle 11g documentation, I found out that materialized views can be refreshed through the Partition Change Tracking mechanism. I tried to understand it, but to no avail. Can anyone help me understand what Partition Change Tracking is and how it differs from a snapshot materialized view?
    thanks
    regards
    Nick

    Hi
    IMO the following section in the documentation provides a good description:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/advmv.htm#i1006635
    Have you already read it?
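    In one sentence: PCT tracks which partitions of the base table have changed, so a fast refresh can recompute only the materialized view rows derived from those partitions instead of reprocessing the whole table. A hedged sketch using an SH-style SALES table partitioned by TIME_ID (including the partition key in the MV is what makes it PCT-capable; names illustrative):

    -- An MV that carries the partition key of the partitioned table SALES.
    CREATE MATERIALIZED VIEW sales_by_day_mv
      REFRESH FORCE ON DEMAND
    AS
    SELECT time_id, SUM(amount_sold) AS total_sold
    FROM   sales
    GROUP  BY time_id;

    -- After partition maintenance on SALES, method => 'P' forces a PCT refresh.
    BEGIN
      DBMS_MVIEW.REFRESH(list => 'SALES_BY_DAY_MV', method => 'P');
    END;
    /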
    HTH
    Chris Antognini
    Author of Troubleshooting Oracle Performance, Apress 2008 (http://top.antognini.ch)

  • Block Change Tracking Performance

    Hi,
    I'm planning to use incremental backup with Block Change Tracking(BCT) enabled.
    Well, I see the good points of BCT, like reducing network traffic and CPU cycles: when RMAN checks datafiles for changed blocks, it will use the BCT file instead of checking the datafiles block by block.
    We cannot implement it yet because of this question:
    "What is the performance hit of BCT? If a block or blocks are updated/changed, BCT will record it in the file; how much of the server's resources will be used? Is it so much that it will cause slowness in daily transactions?"
    Hope someone will help,
    Jay

    There will be a "small" overhead, as per the documentation.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup004.htm#BRBSC125
    <quote>
    Change tracking is disabled by default, because it does introduce some minimal performance overhead on your database during normal operations.
    </quote>
    I also have seen some bugs in 10gR1 related to database hanging issues after enabling BCT but those have been fixed in 10gR2.
    All said, you can carry out some testing in your test environment to measure the performance overheads, if any.
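    For the test, enabling, checking, and disabling BCT is straightforward (the file location is illustrative; with OMF the USING FILE clause can be omitted):

    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/oradata/PROD/bct.chg';

    -- Check status and the size of the tracking file.
    SELECT status, filename, bytes FROM v$block_change_tracking;

    ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;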

  • Will there be any performance impact

    Hi All,
    Currently I'm having a table employee with 1 million records (emp ID is the primary key). In a process, I want to insert a new employee ID, use it in the program, and finally delete it (simplifying changes in the current program). Every day this will involve 100K transactions.
    I'm planning to commit only after the delete (i.e. insert -> make some updates -> delete the same row -> commit).
    Will these emp IDs be added to the index and have a performance impact, even though I'm committing the transaction only after deleting the rows?
    database : oracle 10g.
    Thanks!!!

    If I understand you correctly, this sounds like a use case for a global temporary table (with the same structure as your employee table); a sketch follows at the end of this post.
    As you insert, update and delete the same row within one single transaction (for the convenience of your code, I assume), those rows will only ever be visible to the session that (temporarily) inserts them into the table.
    The design you are suggesting has (at least) the following performance impact:
    1) it will inhibit concurrency
         - other sessions reading the table while transient rows are inserted and are being updated may have to clone some data buffers and apply UNDO to get read consistent clones of the buffers being modified.
         - you may cause buffer busy wait events as you modify the blocks belonging to your employee table while other sessions want to read the blocks affected by these modifications (the severity of this depends on how your 100K transactions are spread throughout the day and what activity runs on the database in parallel).
         - you will increase activity on the hash chain latches protecting the buffers of your employee table (the same applies to the severity as for the previous point).
    2) You increase the amount of REDO generated by your code. Using a global temporary table your 100K transactions will also generate some REDO, but significantly less.
    3) Using the global temporary table approach you don't need to delete the rows once you are done with your processing - you simply define your global temporary table as "ON COMMIT DELETE ROWS".
    4) You'll have to do all the work associated with the index maintenance to insert and delete the corresponding index entry (see my post from  Jun 24, 2013 8:16 PM)
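    A minimal sketch of the suggested approach (table and column names are illustrative):

    -- A GTT whose rows vanish at COMMIT, so no explicit DELETE is needed.
    CREATE GLOBAL TEMPORARY TABLE employee_work
      ON COMMIT DELETE ROWS
    AS
    SELECT * FROM employee WHERE 1 = 0;   -- copy the structure only

    -- Per transaction: the rows are private to the session.
    INSERT INTO employee_work
    SELECT * FROM employee WHERE emp_id = 12345;   -- illustrative key

    UPDATE employee_work SET emp_id = emp_id;      -- stand-in for real updates

    COMMIT;   -- ON COMMIT DELETE ROWS clears the table automatically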
