Data Warehouse performance since changing retention settings?

Hi,
I don't know if it's a coincidence, but I have managed to get into a bit of a state with our data warehouse.
Firstly, when the server was specced I don't think anybody actually worked out what size the databases would turn out to be. I started troubleshooting because of a lack of disk space: the DW was set to the defaults of 400 days etc. and had grown to around 700GB. Our operations DB is around 50GB in size. The disk at that point had around 50GB of space left.
Anyway, I did some work on the retention settings in the DW to knock a lot of the data down to, say, 9 months as needed. A week later the data is down to 500GB, although the physical file size is still 700GB.
Now, I don't know if it's coincidence, but in the last couple of days I am getting performance alerts, such as not being able to store data in the DW in a timely manner, failures to perform maintenance, and visible signs that things have slowed down on the DW. For example, an SLA report for the month for all servers now times out, where before it ran in a few minutes.
So I am wondering if the "blank" space in the DW is now causing issues, as there is perhaps data at both ends of the database. I would like to get this blank space back, but I am no SQL expert and am wondering whether any other considerations need to be taken for SCOM "to get this back".
I also understand that more disk space may be required for the actual grooming, so maybe I need to get down to 6 months of data before that can happen.
The performance problem may not be tied to this, but either way I would like to get the space back if possible.
thanks

There are several possible causes:
1) Check for any event or performance collections that generate a huge DB and DW size, using the SQL queries in
http://blogs.technet.com/b/kevinholman/archive/2007/10/18/useful-operations-manager-2007-sql-queries.aspx (a sketch of such a query follows this list)
2) You can also refer to the following post:
http://deploymentportal.blogspot.ru/2012/08/operations-manager-data-warehouse.html
3) Check the SQL logs on the data warehouse, especially for blocking problems
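A minimal sketch of such a "largest tables" query (standard SQL Server DMVs; an illustration, not the exact query from the blog post). Run it inside the OperationsManagerDW database:

-- List the 20 largest tables, so you can see which datasets dominate the DW.
SELECT TOP 20
    OBJECT_NAME(ps.object_id)                                         AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024                            AS reserved_mb,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;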
Also, check disk I/O on the data warehouse (the Windows MP collects these metrics). If it affects all management servers and the message does say "timeout", then the problem is likely to be at the SQL end. It may not be maxed out on CPU or memory, but there are likely to be other bottlenecks on the SQL box. What is the disk queue length for the disks on which the Operations Manager data warehouse data and log files reside? These are on separate physical disks, aren't they?
My guess is that it is a SQL issue: temporary timeouts suggest that SQL is busy doing something else, and I'd tend to concentrate on disk I/O rather than memory or CPU.
Roger
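To put numbers on the disk I/O question, a minimal sketch using a standard SQL Server DMV (the database name is an assumption; adjust it to your DW name):

-- Per-file average I/O latency for the data warehouse database.
-- High averages (tens of ms or more) point at a disk bottleneck.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE DB_NAME(vfs.database_id) = 'OperationsManagerDW';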

Similar Messages

  • Alert data is not present in SCOM 2012 Data Warehouse database for two weeks

    Alert data has not been present in the SCOM 2012 data warehouse database for a week, though I can see performance data for the latest dates. Old alert data is present, but I think the latest alert data is not being inserted into the data warehouse. No activity was done on the day from which we are missing data.
    I can see event 31554 on all my management servers, which proves that data insertion is happening. I am not sure why only alert data is missing (or not getting inserted) in the DW database. I am trying to use SQL queries to fetch the data, as I don't have Reporting currently. The same query works for other dates, so there is no issue with the query itself.
    I have noticed that the alert data is present in the SCOM OperationsManager DB but NOT in the OperationsManagerDW database.
    In SCOM 2007, data was inserted into both the Ops DB and the DW simultaneously; I believe the methodology is the same in 2012.
    Please help me to fetch alert data from the DW. Any suggestions?
    Regards, Suresh

    Hi,
    Generally, the data warehouse stores long-term data and by default keeps 400 days of data. I suggest you check your configuration:
    How to Configure Grooming Settings for the Reporting Data Warehouse Database
    http://technet.microsoft.com/en-us/library/hh212806.aspx
    Alex Zhao
    TechNet Community Support
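    To see what retention is actually configured per dataset, a minimal sketch against the warehouse's standard dataset tables (Dataset and StandardDatasetAggregation exist in OperationsManagerDW; the aggregation-type values in the comment are the commonly documented ones):

    -- Current retention (MaxDataAgeDays) per dataset and aggregation type.
    USE OperationsManagerDW;
    SELECT ds.DatasetDefaultName,
           sda.AggregationTypeId,   -- 0 = raw, 20 = hourly, 30 = daily
           sda.MaxDataAgeDays
    FROM StandardDatasetAggregation AS sda
    JOIN Dataset AS ds
      ON ds.DatasetId = sda.DatasetId
    ORDER BY ds.DatasetDefaultName, sda.AggregationTypeId;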

  • Data Warehouse Performance impact

    Hi, everyone.
    I have a model that I'm designing for our Data Warehouse team.
    It is a star schema (not a snowflake), and it is my understanding that all of the FK fields together should define the FACT table's PK.
    However, my developer doesn't want to do that: he'd rather have only the fields which define the actual uniqueness of the FACT data in the PK, leaving the other DIM FKs as attributes. He would also set up indexes on those columns according to query needs, but he doesn't see any performance or other benefit to the structure I'm suggesting.
    What benefits can I tell him will be realized by having all the dimensional FKs in the FACT table's PK? Will it help query performance? Will it otherwise enhance BI objectives? Our DBAs were a little unclear on what the performance benefits would be...
    Thanks for the input!
    David

    Hi David,
    These are probably the questions we all have to take time over when designing a system. However, if you forget for a moment that we are looking at a 'star schema' and simply look at it from a data modeling perspective, you quickly realize that only the attributes (FKs) that make up the combination uniquely identifying exactly one row in the fact table can be in the PK.
    Let's say, for example, that you add another FK to the PK even though it is not actually part of the combination that identifies one row uniquely. That would allow two rows to be inserted with the same combination of columns that should uniquely identify exactly one row: the 'extra' FK can be the only thing distinguishing the two rows, so the database stores both without throwing a PK constraint violation.
    When writing the fact mappings that load your fact tables, a lookup will be done to see if the new input data already exists. That must be a lookup on the fact table's PK, so the fewer columns in the PK index, the better. Regarding performance, test with adding indexes on other columns later, when some data is loaded.
    Looking back, I made this mistake myself somewhere in the past, and it cost me some extra work later to 'redo' things and correct my design. The advantage is: you do such things only once. :-)
    Hope this helps.
    Regards,
    Ed
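    To illustrate Ed's point, a hedged sketch (all table and column names are hypothetical): the PK covers only the columns that define the fact's grain, and the remaining dimension FK is a plain indexed attribute.

    -- Fact table whose grain is (date, product, store); promo_key is a
    -- dimension FK but NOT part of the grain, so it stays out of the PK.
    -- Assumes dim_promo(promo_key) already exists.
    CREATE TABLE fact_sales (
        date_key     INT            NOT NULL,
        product_key  INT            NOT NULL,
        store_key    INT            NOT NULL,
        promo_key    INT            NOT NULL,
        sales_amount DECIMAL(12, 2) NOT NULL,
        CONSTRAINT pk_fact_sales PRIMARY KEY (date_key, product_key, store_key),
        CONSTRAINT fk_sales_promo FOREIGN KEY (promo_key) REFERENCES dim_promo (promo_key)
    );
    -- Index the non-grain FK separately, driven by actual query needs.
    CREATE INDEX ix_fact_sales_promo ON fact_sales (promo_key);

    Adding promo_key to the PK would let two rows share the same (date, product, store) combination, which is exactly the duplication described above.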

  • Strange problem since changing Connection Pool settings

    Since changing the following settings in my connection pool, I have been seeing strange behavior with an application that has been deployed for over a year.
    The settings I changed were the following:
    Maximum Capacity: changed from 25 to 100
    Statement Cache Size: changed from 10 to 200
    Shrink Frequency: changed from 900 to 300
    Connection Reserve Timeout: changed from 10 to 5
    Maximum Waiting for Connection: changed from 2147483647 to 50
    I was wondering if anyone had any comments on these settings, as well as any insight as to why I am seeing the result of a prepared statement
    ("select count(*) from event where event_id = ?")
    come back as 0 for the user that just created a record a few minutes before. At the same time, another user can log in to the application, cause the same query to run, and get a count of 1 for the record the other user just created. Then, if I restart WebLogic, both users get a count of 1.
    Driver 9.0.2.0.0
    WebLogic 8.1.3.0

    Try:
    select * from v$session where lower(module) like 'jdbc%';
    If you don't find any rows, then there are no connections coming from JDBC. When the JDBC pool starts, it should create its connections and hold them open on the database.

  • My brother used his Verizon Wireless SIM to activate my factory unlocked iphone 5s.  Now, I can't access the cellular data network option to change APN settings for my new carrier.  Already tried configuration utility and factory resets.

    So, as the title says, after I got my iPhone 5s from Apple, factory unlocked, my brother used his SIM card in it for a moment to allow it to be registered and usable on WiFi until I made it to Japan. Now that I have a data SIM card, I have to input specific APN settings, which are located in Settings > Cellular > Cellular Data Network; however, Cellular Data Network is not available as an option. I have reset the phone multiple times, with and without backups, and with and without the new SIM in the phone. I have also used the configuration utility to try to put the APN settings on the phone. I contacted Apple and they didn't really have an answer as to what I should do. My brother checked his Verizon account and my phone does not show up as one of his devices, so right now I'm stuck with an incredibly expensive iPod. Does anyone have any idea what I can do to remedy this? I am unable to contact Verizon via telephone myself, as I have no phone service. Any help would be greatly, greatly appreciated.

    You only have to clone your MAC address when using certain cable modems. You don't clone your MAC address when using DSL.
    Greetings from Northern Ontario, Canada

  • Only Alert Data is not being inserted in SCOM 2012 Data Warehouse database

    Hi All,
    Alert data has not been inserted into the SCOM data warehouse database for 10 days, though I can see the latest performance data in the DW DB. No changes were made, as far as I know, on the SCOM servers or DBs. I had this issue a few months back, and it was resolved by executing a query to create an entry for the Data Warehouse Synchronisation server.
    Now I have checked the discovered inventory and can see that the entry is present and healthy. Still, the latest alert data is not getting inserted into the DW DB. Please help me out.
    http://social.technet.microsoft.com/Forums/en-US/2dac4f45-4911-40dc-a220-702993188832/alert-data-is-not-present-in-scom-2012-data-warehouse-database-since-two-weeks?forum=operationsmanagergeneral
    Regards, Suresh

    Hi,
    Generally, the data warehouse stores long-term data and by default keeps 400 days of data. I suggest you check your configuration:
    How to Configure Grooming Settings for the Reporting Data Warehouse Database
    http://technet.microsoft.com/en-us/library/hh212806.aspx
    Alex Zhao
    TechNet Community Support
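    Before digging further, it may be worth confirming directly what the newest alert row in the warehouse is. A minimal sketch (Alert.vAlert is a standard view in OperationsManagerDW; treat the query as an illustration):

    -- Newest alerts that actually made it into the data warehouse.
    -- If the latest RaisedDateTime is ~10 days old, insertion really has stopped.
    SELECT TOP 10 RaisedDateTime, AlertName
    FROM OperationsManagerDW.Alert.vAlert
    ORDER BY RaisedDateTime DESC;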

  • Foreign keys in SCD2 dimensions and fact tables in data warehouse

    Hello.
    I have a data warehouse in a snowflake schema. All dimensions are SCD2, with columns like this:
    ID (PK)  SID  NAME  ...  START_DATE  END_DATE    IS_ACTUAL
    1        1    XXX        01.01.2000  01.01.2002  0
    2        1    YYX        02.01.2002  01.01.2004  1
    3        2    SYX        02.01.2002              1
    4        3    AYX        02.01.2002  01.01.2004  0
    5        3    YYZ        02.01.2004              1
    Other dimension tables and the fact table have relationships to this table.
    Do I need to create foreign keys for these relationships?
    And if I do, on which columns? SID (the serial ID) is not unique. If I create them on ID, I have to look up the SID and the actual row in every query.

    >
    I have a data warehouse in a snowflake schema. All dimensions are SCD2, with columns like this:
    ID (PK)  SID  NAME  ...  START_DATE  END_DATE    IS_ACTUAL
    1        1    XXX        01.01.2000  01.01.2002  0
    2        1    YYX        02.01.2002  01.01.2004  1
    3        2    SYX        02.01.2002              1
    4        3    AYX        02.01.2002  01.01.2004  0
    5        3    YYZ        02.01.2004              1
    Other dimension tables and the fact table have relationships to this table.
    Do I need to create foreign keys for these relationships?
    >
    Are you still designing your system? Why did you choose NOT to use a star schema? Star schemas are simpler and have some performance benefits over snowflakes. Although there may be some data redundancy, that is usually not an issue for data warehouse systems, since any DML is usually well managed and normalization is often sacrificed for better performance.
    Only YOU can determine what foreign keys you need. Generally you will create foreign keys between any child table and its parent table, and those need to be created on a primary key or unique key value.
    >
    And if I do, on which columns? SID (the serial ID) is not unique. If I create them on ID, I have to look up the SID and the actual row in every query.
    >
    I have no idea what that means. There isn't any way to tell from just the DDL for the one dimension table that you provided.
    It is not clear whether you are saying that your fact table will have a direct relationship to the snowflake dimension tables or will only link to them through the top-level dimensions.
    Some types of snowflakes do nothing more than normalize a dimension table to eliminate redundancy. For those types the dimension table is, in a sense, a 'mini' fact table and the other normalized tables become its children. The fact table only has a relation to the main dimension table; any data needed from the dimension's 'child' tables is obtained by joining them to their 'parent'.
    Other snowflake types have the main fact table holding relations to one or more of the dimension's 'child' tables. That complicates the maintenance of the fact table, since any change to the dimension 'child' table impacts the fact table as well. Using that type of snowflake is not recommended.
    See the 'Snowflake Schemas' section of the Data Warehousing Guide
    http://docs.oracle.com/cd/B28359_01/server.111/b28313/schemas.htm
    >
    Snowflake Schemas
    The snowflake schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake.
    Snowflake schemas normalize dimensions to eliminate redundancy. That is, the dimension data has been grouped into multiple tables instead of one large table. For example, a product dimension table in a star schema might be normalized into a products table, a product_category table, and a product_manufacturer table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance. Figure 19-3 presents a graphical representation of a snowflake schema.
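    To make the usual answer concrete, a hedged sketch (all names are hypothetical): the fact table's FK points at the SCD2 dimension's surrogate key (ID), which is unique, so each fact row is pinned to one historical version of the dimension row.

    -- Fact rows reference the surrogate key ID, never the business key SID.
    ALTER TABLE fact_enrollment
      ADD CONSTRAINT fk_enrollment_student
      FOREIGN KEY (student_dim_id) REFERENCES dim_student (id);

    -- Queries that only want the current version filter on the flag:
    SELECT f.measure_value, d.name
    FROM fact_enrollment f
    JOIN dim_student d ON d.id = f.student_dim_id
    WHERE d.is_actual = 1;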

  • Apply CDC to an existing data warehouse

    Hi all,
    I'm building an education data warehouse. I have various dimensions that I would like to track changes to, i.e. DimStudent, DimClass, DimCollege, DimExamPaper, etc. My solution currently drops keys, truncates tables, and recreates keys before loading the staging tables, which then go on to load the dimension and fact tables.
    My question is: what are the minimum physical changes I need to put in place to truncate staging tables prior to load, track changes between staging tables, dim tables, and fact tables, and incrementally load rows to the dim tables and fact tables?
    Do I need to add a table (or tables) to track changes for EACH dimension or fact table? I have looked at examples, but there seems to be a fair amount of work for just one table. I have 11 dimensions, 4 fact tables, and 22 staging tables. Also, do I need a CDC state table for EACH table? A fast response would be greatly appreciated, as this is holding me up. I basically want to know the best method to apply to a data warehouse to track changes with lowish volumes (a few million rows in the total DWH).

    Can someone help with this, please?
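    If this is SQL Server, note that CDC is enabled once per database and then once per source table. A minimal sketch with a hypothetical table (sys.sp_cdc_enable_db and sys.sp_cdc_enable_table are the standard procedures):

    -- Enable CDC for the database, then for each table you want tracked.
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Student',   -- hypothetical source table
         @role_name     = NULL;         -- no gating role

    SQL Server then maintains one change table per tracked source table, which is where the per-table work you noticed comes from.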

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse load ETL process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    You should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    Make sure your sequence caches values (ALTER SEQUENCE s CACHE 10000).
    Drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct-path load? Or are you already doing a direct load?
    Dim
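    A hedged sketch of the suggestions above strung together (Oracle; all object names are hypothetical):

    -- 1. Cache sequence values so PK generation doesn't become a bottleneck.
    ALTER SEQUENCE sales_seq CACHE 10000;
    -- 2. Skip index maintenance during the load.
    ALTER INDEX ix_sales_cust UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    -- 3. Direct-path insert bypasses the buffer cache for the inserted blocks.
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT * FROM sales_stage;
    COMMIT;
    -- 4. Rebuild the index once, after the load.
    ALTER INDEX ix_sales_cust REBUILD;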

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further. Is this correct?
    I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; I just want to increase report performance.
    Has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%. As we only insert into the table, I am considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
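    Translated into statements, the documented sequence looks roughly like this (a sketch, assuming a partitioned fact table with a local bitmap index; all names are hypothetical):

    -- Step 1: mark the bitmap index unusable before first enabling compression.
    ALTER INDEX bix_sales_prod UNUSABLE;
    -- Step 2: compress the partition (MOVE rewrites the existing rows compressed).
    ALTER TABLE sales_fact MOVE PARTITION sales_2012 COMPRESS;
    -- Step 3: rebuild the affected bitmap index partition.
    ALTER INDEX bix_sales_prod REBUILD PARTITION sales_2012;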

  • Updated the latest version of iTunes on Windows XP. Can't open it. The Data Execution Prevention window comes up. I have followed all the directions for changing the settings. I have updated QuickTime. Still can't open it.

    I updated to the latest version of iTunes on my PC (Windows XP) and all of a sudden I can't open it. Every time I try to open it I get a window that says: "Data Execution Prevention - Microsoft Windows... To help protect your computer, Windows has closed this program. Change settings." I have tried to change the settings, uninstalled and reinstalled iTunes, and updated to the latest version of QuickTime. Nothing has helped. If anyone has a suggestion I would appreciate it. I am at my wits' end with this. Thank you in advance for all your suggestions.

    If anyone is looking at this and would like to know what I did to finally get it to work: I right-clicked on the iTunes icon on my desktop, clicked on "Run as...", and made sure that it ran as administrator. This finally worked for me. Hope you all find this info helpful.

  • Since upgrading to the newest version of iOS 8, I can no longer share my photos on Facebook. I have tried going to Settings > Photos, but the Facebook app does not show up for me to change the settings. Any suggestions?

    Since upgrading to the newest version of iOS 8, I can no longer share my photos on Facebook. I have tried going to Settings > Privacy > Photos, but the Facebook app does not show up for me to change the settings. Any suggestions?

    When you opened the shared library with the newer version of iPhoto (iPhoto '11), you were given a warning that your library would be converted and could not be used by older versions, and you clicked OK to go ahead. There is no undo available: either upgrade the MBP to iPhoto '11 or load your backup of the iPhoto '09 library onto it. Older versions of iPhoto cannot read newer libraries.
    LN

  • APEX Error: Current version of data in database has changed since user init

    Hi:
    APEX 4.1
    I have a page with 2 regions. The first region is built with custom SQL using the APEX APIs. I have a process that can successfully update records.
    I built the second region with the tabular form wizard. This created the multi-row update process.
    I created a region button, and the two processes respond to that button: the process for the first region, then the process for the second region.
    When I add data to the second region and click the region button, I get the following error:
    Current version of data in database has changed since user initiated update process. current row version identifier = "A884FA378C851786DDFE3A33709CB23C" application row version identifier = "9ED06A0F09F80F054AB781CA24CC4CBF"
    I know it has something to do with these two types of regions being on the same page, because when I create a page with just the tabular form, the data is updated.
    Can anyone suggest what I might be doing wrong?
    Thanks.

    Hello,
    If you try to update the same data from 2 places you will get this message, because APEX forms have a locking mechanism:
    1. While fetching data into the form, APEX calculates a checksum from every item on the form.
    2. Before the update process, APEX fetches the data from the database again (in the background) and recalculates the checksum. If it matches the checksum from point 1, APEX really updates the data with the new item values. If not, you get your error (APEX is protecting you: you really don't know what you would be updating).
    Accordingly, if the data used in the form changes between points 1 and 2, you will get this "error". You can verify this, for example, by changing the data from SQL*Plus or from another form.
    Probably you forgot about this locking mechanism when designing your process with the APEX API ("successfully update records": maybe successful, but you don't know what you updated) :)
    If I helped you, please mark the answer correct or helpful :)
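    Conceptually, the mechanism works like optimistic locking. A sketch in plain SQL (an illustration only, not APEX's internal implementation; APEX uses its own checksums, and the names below are hypothetical):

    -- At fetch time, remember a checksum of the row as it was displayed.
    SELECT ORA_HASH(student_id || '|' || section_id || '|' || final_grade) AS row_checksum
    FROM enrollment
    WHERE student_id = 1 AND section_id = 10;

    -- At save time, update only if the row still matches that checksum.
    UPDATE enrollment
    SET final_grade = 'A'
    WHERE student_id = 1 AND section_id = 10
      AND ORA_HASH(student_id || '|' || section_id || '|' || final_grade) = :saved_checksum;
    -- Zero rows updated means someone changed the data in between: that is the "error".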

  • MRU Error : Current version of data in database has changed since user init

    When I use the HTML DB wizard to create a master-detail form
    (you can see step by step what I am doing at this URL:
    http://jroller.com/resources/w/wildan83/MRU%20Error.pdf)
    there is an error in the MRU:
    Error in mru internal routine: ORA-20001: Error in MRU: row= 1, ORA-20001: ORA-20001: Current version of data in database has changed since user initiated update process. current checksum = "A884FA378C851786DDFE3A33709CB23C", item checksum = "0EEFFABE8252B0B279DB14A77F567F5D"., update "CNAP2"."ENROLLMENT" set "STUDENT_ID" = :b1, "SECTION_ID" = :b2, "ENROLL_DATE" = :b3, "FINAL_GRADE" = :b4.
    If there is something I am missing, just say so.
    Thanks for the help.

    Oh, I see. I never thought that could be the source of the MRU error.
    OK.
    Now I have a new question: in the detail table (the enrollment table) I have one primary key made up of two columns. If you don't understand what I mean, check this ALTER TABLE statement:
    ALTER TABLE ENROLLMENT
    ADD CONSTRAINT ENR_PK PRIMARY KEY
    (STUDENT_ID, SECTION_ID);
    Now the question is: when I use the wizard to create the master-detail form, HTML DB automatically makes student_id and section_id the primary key at the application level (HTML DB). How can I change this behaviour? I want HTML DB not to treat these two columns as the primary key.
    Do I have to create the master-detail form manually to accomplish this?
    Thanks in advance, and sorry if my English is not too good.

  • ORA-20001: Current version of data in database has changed since user.....

    Hi,
    I have a tabular form which I created using the wizard.
    I am facing the error below when I try to update or 'Add Row':
    Error in mru internal routine: ORA-20001: Error in MRU: row= 1, ORA-20001: ORA-20001: Current version of data in database has changed since user initiated update process."
    I have not changed the query, but I made certain columns select-list based, gave some default values, etc.
    How can I solve this problem?
    Also, I get the above error after changing the query of another tabular form; the client wanted some more fields to be displayed on the page.
    How can I solve this problem too?
    Thanks and Regards,
    K.tanna

    Can somebody help me out?
