Data warehouse monitor initial state data synchronization process failed to write state.

Data Warehouse monitor initial state data synchronization process failed to write state to the Data Warehouse database. Failed to store synchronization process state information in the Data Warehouse database. The operation will be retried.
Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
One or more workflows were affected by this. 
Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.MonitorInitialState
Instance name: Data Warehouse Synchronization Service
Instance ID: {0FFB4A13-67B7-244A-4396-B1E6F3EB96E5}
Management group: SCOM2012R2BIZ
Could you please help me with this issue?

Hi,
It seems that you are encountering event 31552; you can check the Operations Manager event logs for more information regarding this issue.
There can be many causes of this 31552 event, such as:
A sudden flood (or excessive sustained amounts) of data to the warehouse that is causing aggregations to fail moving forward. 
The Exchange 2010 MP is imported into an environment with lots of state changes happening. 
Excessively large ManagedEntityProperty tables causing maintenance to fail because they cannot be processed quickly enough in the time allotted.
Too much data in the warehouse staging tables that was not processed due to an earlier issue and is now too much to process at one time.
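One quick way to check the last cause is to look at the row counts of the warehouse staging tables. A minimal T-SQL sketch, run against OperationsManagerDW (the staging table names below are the usual SCOM ones; verify them against your own DW schema first):

    -- Approximate backlog in the DW staging tables; sustained counts in the
    -- millions usually indicate data that maintenance cannot drain in time.
    SELECT 'Alert.AlertStage' AS StagingTable, COUNT(*) AS StagedRows FROM Alert.AlertStage
    UNION ALL
    SELECT 'Event.EventStage', COUNT(*) FROM Event.EventStage
    UNION ALL
    SELECT 'Perf.PerformanceStage', COUNT(*) FROM Perf.PerformanceStage
    UNION ALL
    SELECT 'State.StateStage', COUNT(*) FROM State.StateStage;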
Please go through the links below to get more information about troubleshooting this issue:
The 31552 event, or “why is my data warehouse server consuming so much CPU?”
http://blogs.technet.com/b/kevinholman/archive/2010/08/30/the-31552-event-or-why-is-my-data-warehouse-server-consuming-so-much-cpu.aspx
FIX: Failed to store data in the Data Warehouse due to an Exception 'SqlException': Timeout expired.
Regards,
Yan Li

Similar Messages

  • Reinstall Data Warehouse to remove test data

    We want to go live with SCSM soon, hopefully in a couple of weeks.
    Unfortunately, we don't have the infrastructure that would have let me set up separate test and production environments. I've managed to remove the test data from the ServiceManager database via the SMLets, but we want to remove the test data from the
    data warehouse as well. I've attempted manually deleting it from the database (I know, unsupported), though this did not work and just gave many database errors (as expected).
    What I'm wondering is whether it's possible to uninstall/reinstall the DW. Upon reinstall, I know I would likely need to re-register it with SCSM.
    In doing this, will all of the custom classifications/statuses/templates/etc. stay in place?
    Once installed, would all of the sync jobs pick up as expected and sync the current data in Service Manager (no Incidents/Changes/etc.) to the DW?
    We have all databases/Service Manager/Portal running on one server. DW running on a 2nd server.
    If anyone could provide some insight around this, it would be much appreciated. Any direction towards documentation would also be great!
    Thanks

    The best scenario (and my recommendation) would be to export and take a backup of all your management packs, reinstall the entire Service Manager environment and import them again. That way you should get a fresh, functional Service
    Manager with all your settings retained.
    However, to answer your question, you would need to:
    - Unregister with SCSM DW
    - Uninstall DW
    - Delete the three DW databases
    - Install DW
    - Register with DW
    This would not affect any settings in your SCSM environment and you would have an empty DW.
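    If you do go down this path, the database-deletion step might look like the following T-SQL sketch (DWStagingAndConfig, DWRepository and DWDataMart are the default SCSM DW database names; confirm yours before dropping anything):

        -- Run on the DW SQL instance after uninstalling the DW role.
        USE master;
        ALTER DATABASE DWStagingAndConfig SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE DWStagingAndConfig;
        ALTER DATABASE DWRepository SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE DWRepository;
        ALTER DATABASE DWDataMart SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE DWDataMart;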
    Regards
    //Anders
    Anders Asp | Lumagate | www.lumagate.com | Sweden

  • On demand process fails with session state checksum protection on

    I'm doing something like this: http://apex.oracle.com/pls/otn/f?p=56976:2
    but the On Demand Application Process errors out if session state checksum security is turned on.
    It works fine if I turn it off.
    any ideas?

    Anyone?

  • Event data collection process unable to write data to the Data Warehouse

    Alert Description:
    Event data collection process unable to write data to the Data Warehouse. Failed to store data in the Data Warehouse. The operation will be retried.
    Exception 'InvalidOperationException': The given value of type Int32 from the data source cannot be converted to type tinyint of the specified target column.
    Running SCOM 2007 R2 on Server 2008 R2 with SQL Server 2008 R2. I can only find a single reference to this exact error on the Internet. It started occurring on a weekend, and no changes were made to the SCOM server directly before it began. Does anyone know
    what the error means and/or how to fix it?

    Hello,
    I would suggest the following threads for your reference:
    Troubles with DataWarehouse database
    http://social.technet.microsoft.com/Forums/en-US/operationsmanagergeneral/thread/5e7005ae-d5d8-4b5c-a51c-740634e3da4e
    Data Warehouse configuration synchronization process failed to read state
    http://social.technet.microsoft.com/Forums/en-US/systemcenter/thread/8ea1f4b9-115b-43cd-b66f-617533703047
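    As an aside, this particular Int32-to-tinyint error is sometimes caused by the identity value on one of the small DW lookup tables (EventChannel, for example, whose row id is a tinyint) having grown past 255. A hedged T-SQL check against OperationsManagerDW:

        -- Report the current identity value without reseeding it; anything
        -- above 255 can no longer fit in a tinyint row id column.
        DBCC CHECKIDENT ('EventChannel', NORESEED);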
    Thanks,
    Yog Li
    TechNet Community Support

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with the solutions to be proposed to my client for upgrading the data warehouse.
    Current configuration: Oracle database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. Sizes are respectively 6 terabytes (retention policy of 3 years + current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    Client cannot go for a major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate out the databases on separate hosts from the ETL tools and BO objects. Also we are planning to upgrade the database to 10gR2 to attain stability, better performance and overcome current headaches.
    We cannot upgrade the database to 11g as the BO is at a version 6.5 which isn't compatible with Oracle 11g. And Client cannot afford to upgrade anything else other than the database.
    So, my role is vital in providing a solution for better performance and in carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install new Oracle database 10g on the new host and move the data to it.
    Exploring all the new features of 10gR2 that help data warehousing, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    Also thinking of RAC to provide a better solution, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, excluding the snaps around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Update data automatically in fact table in Data Warehouse

    Hi,
    I'm working on the creation of a data warehouse that includes different data sources like SQL Server performance (more than one), Active Directory users, server performance (more than one), and Exchange Server mailboxes. The problem is that performance data changes
    frequently (like CPU and memory), so my question is how to update the data in the fact table every 5 seconds automatically with SSIS.
    Thank you for any advice  

    I'm assuming you have already figured out how to capture the data (e.g. PowerShell, Extended Events, MDW) and just need to know what dimensions or fact tables you need.
    You need to decide how often you are going to capture this data, and based on that you will have dimensions with the appropriate grain. Don't try to cram everything into the same fact table if it is not of the same granularity. Also, separate processes usually
    have separate fact tables.
    In addition to the Date dimension, you will need a Time dimension with a grain of 1 second (or maybe 5 seconds if that is when you get your data) then run the SSIS every 5 seconds to capture and append that data in the fact table.
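    A minimal T-SQL sketch of such a seconds-grain Time dimension (the table and column names here are illustrative, not from the thread):

        -- Build a DimTime table with one row per second of the day (86,400 rows).
        CREATE TABLE dbo.DimTime (
            TimeKey  INT     NOT NULL PRIMARY KEY,  -- seconds since midnight
            [Hour]   TINYINT NOT NULL,
            [Minute] TINYINT NOT NULL,
            [Second] TINYINT NOT NULL
        );
        WITH Seconds AS (
            SELECT TOP (86400) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS s
            FROM sys.all_objects a CROSS JOIN sys.all_objects b
        )
        INSERT INTO dbo.DimTime (TimeKey, [Hour], [Minute], [Second])
        SELECT s, s / 3600, (s % 3600) / 60, s % 60
        FROM Seconds;

    The fact rows then carry a TimeKey (seconds since midnight) alongside the DateKey, and the 5-second SSIS run only appends new rows to the fact table.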
    - Aalamjeet Rangi | (Blog)

  • How to create a new data warehouse?

    Hi,
    I have a set up where multiple SCOM management groups all report to a shared data warehouse database. For various reasons, we need to “unplug” the management groups from this shared resource and give them each their own data warehouse to use (locally, rather
    than centrally).
    Is there a way of creating a new, clean data warehouse without running a full installation? I imagine I could probably put a copy of the existing shared one into each local management group and then use this process (http://technet.microsoft.com/EN-US/library/hh268492.aspx)
    to repoint each locally - but it would be nicer to start fresh for each one. Is that possible? Is there an installation I can run for JUST the DW elements of the SCOM installation?
    Thanks.

    Breaking a shared data warehouse configuration into separate data warehouses is really not supported - there is no installation path for this and no guidance available. If you absolutely need to do this, then you are basically stuck with reinstalling the
    data warehouse and reporting server for each management group. You will lose all historical reporting in this scenario. Sure, the data can be saved and used elsewhere, but it will not be available as you would expect to see it in the reporting space of the
    Operations Console.
    Moving the data warehouse as Mai suggested will not solve the problem, and may actually end up in other problems with retention and reports being unavailable for some instances. I would not suggest moving/copying the existing data warehouse to each management
    group.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Tables between OBAW and Oracle Data Warehouse in OBIA rpd are different

    Hi,
    The tables in the Data Warehouse DB are different from the tables in the physical layer of the Oracle Data Warehouse in OracleBIAnalyticsApps.rpd.
    When I click Update Row Count, it gives me the error message below:
    There was an error while updating row count for "Oracle Data Warehouse"."Catalog"."DBO"."W_CTRY_REGN_D":
    [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist at OCI call OCIStmtExecute
    Whereas when I click on some other tables for row count, there is no error.
    Also, when I look for a specific table that exists in the Data Warehouse (and has data), it does not exist in the rpd.
    Kindly assist me.

    hi,
    Have a look
    Not able to View Data in Answers
    Let me know
    thanks,
    saichand.v
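    Separately, an ORA-00942 on Update Row Count for a single table usually means the table exists in the rpd's physical layer but not under the schema the connection pool logs in with. A quick hedged check from SQL*Plus (the table name is taken from the error above):

        -- Oracle SQL: does the table exist anywhere, and under which owner?
        SELECT owner, table_name
        FROM   all_tables
        WHERE  table_name = 'W_CTRY_REGN_D';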

  • How to reset data Warehouse table

    To fix some mistakes, I'd like to reset all of the Data Warehouse tables (clear all data in the tables).
    Is there any way to achieve this with DAC client?
    Roger

    Nothing to worry about. We have an option for this in DAC. Navigate to Tools -> ETL Management -> Reset Data Sources and enter the confirmation text in the dialog box. This action sets the refresh dates to Null for all tables, which causes a full load in the next ETL run.

  • How do I design this in data warehouse?

    I am working on building a data warehouse for insurance quote data.
    Each quote will have an applicant and can have an optional co-applicant. Each applicant and co-applicant will have prior auto insurance history, prior home insurance history, current auto insurance information and current home insurance information.
    So do I create Applicant and Insurance dimensions here?

    Hi Ashan,
    Just so you know,
    I completely reworked our methodology for building data warehouses back in 2012. The new way of building data warehouses is quite different from the old way, the way you listed.
    The methodology presentation is on this link.
    https://www.youtube.com/watch?v=Df4CgOtrFq8
    Video channels are here. http://www.instantbi.com/videos/
    Downloads are here: http://www.instantbi.com/company/downloads/
    I have been doing BI since '91, and what we have done now is industry leading.
    I am an MSDN subscriber, so we do our development on MSFT first and then deploy wherever our clients want us to deploy.
    Best Regards 
    Peter Nolan

  • Data Warehouse SQL error log shows failed login

    In addition to the above title, on our management servers (x2 Win 2012 R2 - SCOM 2012 R2), I am seeing the event ID 31551 stating:
    Failed to store data in the Data Warehouse. The operation will be retried. Exception 'SqlException': Login failed for user 'xx'.
    One or more workflows were affected by this.
    Workflow name: Microsoft.SystemCenter.DataWarehouse.CollectEntityHealthStateChange
    Instance name: management server
    Instance ID: {xxxxxxxxxxxxxxxxx}
    Management Group: XXXX
    I've logged onto the Data Warehouse server using the account referenced in the error message, loaded SQL Management Studio (2012 Std), logged in, and am able to see and view tables within the OperationsManagerDW database. So I'm trying to establish what's going
    on: if I can access the DW DB using the account, why am I getting these errors?

    Hi
    Unfortunately, this hasn't resolved the issue. I've run the query DBCC CHECKIDENT ("EventChannel"); and got the following response back: 
    Checking identity information: current identity value '1', current column value '1'.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    I've revisited the Run As account for 'Data Warehouse SQL Account' - this is a domain account. I've checked the Data Warehouse DB and can confirm that the account has write access to the database. I'm using the same account as the 'Data Warehouse Action Account'.
    However, the SQL log on the data warehouse server is saying failed login, see below:
    Login failed for user 'sv-scom-dw'. Reason: Could not find a login matching the name provided. [CLIENT: Management server 1 IP]
    I've checked the 'Management Group' table and can confirm the WriterLoginName is DOMAIN\sv-scom-dw
    However, the SQL error looks like it's looking for a local SQL login. The database is set to Mixed mode authentication.
    Any ideas?
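    For what it's worth, the wording 'Could not find a login matching the name provided' means SQL Server could not resolve the incoming name at all, and a bare 'sv-scom-dw' (no DOMAIN\ prefix) in that message often indicates the client attempted SQL authentication rather than Windows authentication. A hedged T-SQL check on the DW instance:

        -- The Windows login should appear as DOMAIN\sv-scom-dw
        -- with type_desc = WINDOWS_LOGIN.
        SELECT name, type_desc
        FROM   sys.server_principals
        WHERE  name LIKE '%sv-scom-dw%';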

  • Data Warehouse issue after deploaying Server 2012 MP in SCOM 2007R2

    Hi all,
    I have recently installed Server 2012 MP in SCOM 2007R2. But now I get DataWarehouse warnings that the specified schema name "sdk" either does not exist or I do not have permission to use it.
    It's event 31565 with source Operations Manager Health Service Modules
    Can anyone help me?
    Failed to perform Data Warehouse component deployment operation: Install; Component: Script, Id: '79c157e5-9aa8-7045-d0b8-19e2d713d6c7', Management Pack Version-dependent Id: '01f04523-ee6c-1f3e-08a1-a108dda43507'; Target: Database, Server name: 'SQL\SCOM,10100', Database name: 'OperationsManagerDW'. Batch ordinal: 0; Exception: The specified schema name "sdk" either does not exist or you do not have permission to use it.
    Best regards,
    Oliver
    Gruss Oliver Scholl

    Hi Gruss,
    Was there any recent permission change on the SQL server or the Data Warehouse?
    Do your data reader and data writer accounts have sysadmin permissions on the SQL instance?
    Also, is the Reporting tab working, and are the reports related to this and the other management packs working?
    Gautam.75801
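    A quick hedged check for this particular 31565, run against OperationsManagerDW (the 'sdk' schema is normally created by the SCOM installer; verify before changing anything):

        -- Does the sdk schema exist, and which principal owns it?
        SELECT s.name AS schema_name, p.name AS owner_name
        FROM   sys.schemas s
        JOIN   sys.database_principals p ON p.principal_id = s.principal_id
        WHERE  s.name = 'sdk';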

  • How i create oracle9i data Warehouse

    hi master
    sir, I am using Oracle 9i with Developer 6i.
    sir, how do I create a data warehouse with my 9i database?
    Please give me the steps:
    what software do I install?
    how do I use Portal?
    how do I create a data mart?
    how do I create a cube?
    how do I connect my Oracle 9i to the warehouse?
    how do I show my portal on the net?
    how do I import my Oracle 9i data into the warehouse?
    how do I create multidimensional reports for the net, because my other office is in another city?
    how do I import different types of data from different types of databases?
    Please give me an idea of how I should start and how I can get the above results.
    thanks,
    aamir

    You need the Enterprise Edition of Oracle DB 9.2.0.2 in order for DM4J to work fully. It will run on Oracle DB 9.2.0.1, but you would need to install a JMS patch to have the components run successfully. This is better done by just upgrading to 9.2.0.2.
    If you have the correct DB, then you need to establish a connection to the db using the "ODM" account and password. You cannot access data mining in the db other than through this account.
    Additionally, all ODM_MTR tables are granted to ODM for access out of the box. If you want more data accessible to ODM, you will need to grant it individually, as sketched below.
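    A minimal sketch of such a grant (the schema and table names here are illustrative only):

        -- Oracle SQL: make one additional table visible to the ODM account.
        GRANT SELECT ON myschema.my_sales_data TO ODM;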

  • Data warehouse implementation misconceptions

    In their book "Mastering the SAP Business Information Warehouse" the authors identify 5 common misconceptions of data warehouse implementations as:
    1) Data warehouse implementations are IT projects
    2) "Quick-win" iterative implementations will lead to a successful data warehouse
    3) Business content is a proper solution to accommodate all BI demands
    4) Governance can be introduced later
    5) Operations is not so important
    I'd like those of you who have experience in data warehouse implementation to rank these five misconceptions in terms of their importance.  You may just rank them by their numbers, for example, 5,1,3, 2,4.
    Please also indicate:
    1) your job title
    2) the number of years of experience you have in data warehousing
    3) the number of years of experience you have in SAP BW/BI
    Thank you!

    Hi Mark,
    Your questions below seem very odd to me. May I know what your purpose is?
    Please also indicate:
    1) your job title
    2) the number of years of experience you have in data warehousing
    3) the number of years of experience you have in SAP BW/BI

  • How create oracle data Warehouse

    hi master
    sir, I am using Oracle 9i. How do I create a data warehouse?
    And how do I use Oracle Portal for the net?
    Please give me an idea of how I should start and which software I should use.
    thanks,
    aamir

    Do you want to create a Data Warehouse database, or do you want to learn how to design a Data Warehouse?
    Creating a Data Warehouse database is no different from creating an OLTP database; the only thing to keep in mind is that DWs are mostly read-only, so you should set initialization parameters suited to a DW.
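    As a rough sketch of "initialization parameters suiting DW", these are the kinds of settings typically tuned for a read-mostly warehouse (the values are placeholders, not recommendations for any specific system):

        -- Oracle SQL: example warehouse-oriented init parameters (spfile assumed).
        ALTER SYSTEM SET star_transformation_enabled = 'TRUE' SCOPE = SPFILE;
        ALTER SYSTEM SET workarea_size_policy = 'AUTO' SCOPE = SPFILE;
        ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = SPFILE;
        ALTER SYSTEM SET parallel_max_servers = 64 SCOPE = SPFILE;
        ALTER SYSTEM SET db_file_multiblock_read_count = 32 SCOPE = SPFILE;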
