Data warehouse configuration doubt

Hi guys,
I'm studying data warehousing and I would like to know if there is any documentation showing the correct way to configure a DW database, covering things like parameters, block size and partitioned tables.
Or, if you have experience with this, please post your suggestions here.
Thank you,
Felipe

Oracle's Data Warehousing Guide is a great place to start:
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96520/toc.htm
You can also search for data warehousing books on Google or any other search engine.
Jaffar

Similar Messages

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help on solutions to be provided to my Client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. Their sizes are 6 TB (retention policy of 3 years + current year) and 1 TB respectively. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    The client cannot go for major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate the databases out onto hosts separate from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability, get better performance and overcome the current headaches.
    We cannot upgrade the database to 11g as BO is at version 6.5, which isn't compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
    So my role is vital in providing a solution for better performance and carrying out a successful migration of the Oracle database from one host to another (similar platform and OS) in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle Database 10g on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning (see the sketch below), Data Pump, and SPA to study pre- and post-migration performance.
    Also thinking of RAC, as our main motive is to show a tremendous performance improvement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan
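    Since the plan above mentions partitioning and parallelism, here is a minimal, hedged sketch of a range-partitioned fact table with a parallel setting. The table and column names are purely illustrative (not from the actual system), and the real partition boundaries and degree of parallelism would have to be sized against the 6 TB warehouse and the Superdome's CPU count:
    -- illustrative 10gR2 DDL: range partitioning by date plus a default parallel degree
    CREATE TABLE sales_fact (
      sale_id     NUMBER        NOT NULL,
      sale_date   DATE          NOT NULL,
      store_id    NUMBER,
      amount      NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2010  VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')),
      PARTITION p2011  VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    )
    PARALLEL 8;
    -- aged-out data can then be dropped by partition instead of deleted row by row
    ALTER TABLE sales_fact DROP PARTITION p2010;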

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, eliminating the snaps around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880
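    Two things stand out in the report above: the PGA is only about 50 MB against a 27.5 GB SGA, and there are 2,657 disk sorts. Below is a hedged sketch of the first adjustment most people would try on a warehouse instance; the 2048M figure is purely illustrative and assumes an spfile is in use and that the host has memory to spare:
    -- let the instance manage sort/hash work areas automatically and give them more room
    ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE=BOTH;
    ALTER SYSTEM SET pga_aggregate_target = 2048M SCOPE=BOTH;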

  • EXTRACTing to TEXT file in Data Warehouse - Simple doubts!

    Hi Experts,
    Please clarify my doubts about a data extraction program (data is extracted from SAP to a text file on the application server; the program runs in the background).
    For the data warehouse mapping, I have been asked to make the following changes:
    1) At present there are no column headings in the text file, so I need to add the column headings. How do I do that?
    2) At present the file is not tab-delimited, so I need to make it tab-delimited. How do I achieve that?
    I am pasting a piece of the code here so you can see what it currently does.
    PERFORM open_dataset_zdata_whouse_04.
        DESCRIBE FIELD i_tab LENGTH tfr_length IN BYTE MODE.
        LOOP AT i_itab.
          TRANSFER i_itab TO transfer_file1 LENGTH tfr_length.
        ENDLOOP.
        CLOSE DATASET transfer_file1.
    Thank you.

    See the code below:
    parameters: d1 type localfile default '/usr/sap/TST/SYS/Test.txt'.
    data: begin of itab occurs 0,
            field1(20) type c,
            field2(20) type c,
            field3(20) type c,
          end of itab.
    data: str type string.
    constants: con_tab type x value '09'.
    " on a newer (Unicode) release use this instead:
    " constants: con_tab type c value cl_abap_char_utilities=>horizontal_tab.
    start-of-selection.
      itab-field1 = 'ABC'.
      itab-field2 = 'DEF'.
      itab-field3 = 'GHI'.
      append itab.
      itab-field1 = '123'.
      itab-field2 = '456'.
      itab-field3 = '789'.
      append itab.
      open dataset d1 for output in text mode.
      loop at itab.
        if sy-tabix = 1.
          " write the column headings as the first line of the file
          concatenate 'FIELD1' 'FIELD2' 'FIELD3' into str separated by con_tab.
          transfer str to d1.
        endif.
        " build one tab-delimited line per record
        concatenate itab-field1 itab-field2 itab-field3 into str separated by con_tab.
        transfer str to d1.
      endloop.
      close dataset d1.
    The code above produces a tab-delimited file, and the IF sy-tabix = 1 block inside the loop writes the column headings as the first line.
    Thanks
    Seshu

  • How to create a new data warehouse?

    Hi,
    I have a set up where multiple SCOM management groups all report to a shared data warehouse database. For various reasons, we need to “unplug” the management groups from this shared resource and give them each their own data warehouse to use (locally, rather
    than centrally).
    Is there a way of creating a new clean data warehouse without running a full installation? I imagined that I could probably put a copy of the existing shared one into each local management group and then use this process (http://technet.microsoft.com/EN-US/library/hh268492.aspx)
    to repoint each locally – but it would be nicer to start fresh for each one. Is that possible, is there an installation I can run for JUST the DW elements of the SCOM installation?
    Thanks.

    Breaking a shared data warehouse configuration into separate data warehouses is really not supported - there is no installation path for this and no guidance available. If you absolutely need to do this, then you are basically stuck with reinstalling the
    data warehouse and reporting server for each management group. You will lose all historical reporting in this scenario. Sure, the data can be saved and used elsewhere, but it will not be available as you would expect to see it in the reporting space of the
    Operations Console.
    Moving the data warehouse as Mai suggested will not solve the problem, and may actually end up causing other problems with retention and reports being unavailable for some instances. I would not suggest moving/copying the existing data warehouse to each management group.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Event data collection process unable to write data to the Data Warehouse

    Alert Description:
    Event data collection process unable to write data to the Data Warehouse. Failed to store data in the Data Warehouse. The operation will be retried.
    Exception 'InvalidOperationException': The given value of type Int32 from the data source cannot be converted to type tinyint of the specified target column.
    Running SCOM 2007 R2 on Server 2008 R2 with SQL Server 2008 R2. I can only find a single reference to this exact error on the Internet. It started occurring on a weekend. No changes were made to the SCOM server directly before this occurred. Anyone know
    what the error means and/or how to fix?

    Hello,
    I would suggest the following threads for your reference:
    Troubles with DataWarehouse database
    http://social.technet.microsoft.com/Forums/en-US/operationsmanagergeneral/thread/5e7005ae-d5d8-4b5c-a51c-740634e3da4e
    Data Warehouse configuration synchronization process failed
    to read state 
    http://social.technet.microsoft.com/Forums/en-US/systemcenter/thread/8ea1f4b9-115b-43cd-b66f-617533703047
    Thanks,
    Yog Li
    TechNet Community Support

  • Configuration Dataset = 90% of Data Warehouse - Event Errors 31552

    Hi All,
    I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We currently have around 800 servers in our production environment and no network devices. We use Orchestrator for integration with our call logging system, and I believe this is where our problems started. We had a runbook which got itself into a loop and was constantly updating alerts; it also contributed to a large number of state changes. We have resolved that problem now, but I started to receive alerts saying SCOM couldn't sync alert data, under event 31552.
    Failed to store data in the Data Warehouse.
    Exception 'SqlException': Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
    Instance name: Alert data set 
    Instance ID: XX
    Management group: XX
    I have been researching problems with syncing alert data and came across the queries to do the database maintenance manually. I ran that on the alert instance and it took around 16.5 hours on the first night; then it ran fast (2 seconds) for most of the day, but when it got to about the same time the next day it took another 9.5 hours, so I'm not sure why it's giving different results.
    Initially it appeared all of our datasets were out of sync; after the first night all appear to be in sync except the Hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear to fix it (it runs successfully in about 2 seconds).
    I recently ran DWDatarp on the database to see how the Alert dataset was looking, and to my surprise I found that the Configuration dataset has blown out to take up 90% of the data warehouse; see the table below. Does anyone have any ideas on what might cause this or how I can fix it?
    Dataset name                   Aggregation name     Max Age     Current Size, Kb
    Alert data set                 Raw data                 400       132,224 (  0%)
    Client Monitoring data set     Raw data                  30             0 (  0%)
    Client Monitoring data set     Daily aggregations       400            16 (  0%)
    Configuration dataset          Raw data                 400   683,981,456 ( 90%)
    Event data set                 Raw data                 100    17,971,872 (  2%)
    Performance data set           Raw data                  10     4,937,536 (  1%)
    Performance data set           Hourly aggregations      400    28,487,376 (  4%)
    Performance data set           Daily aggregations       400     1,302,368 (  0%)
    State data set                 Raw data                 180       296,392 (  0%)
    State data set                 Hourly aggregations      400    17,752,280 (  2%)
    State data set                 Daily aggregations       400     1,094,240 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Raw data                    7             0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations          3             0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations         182             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data                 400           176 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations       400             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data 7             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations       400             0 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data                  3        84,864 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations        7       407,416 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations       182       143,128 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data                   7         6,088 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations       31        20,056 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations       182         3,720 (  0%)
    I have one other 31553 event showing up on one of the Management servers as follows,
    Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
    Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
    object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013  6:02AM). 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity 
    Instance name: XX 
    Instance ID: XX
    Management group: XX
    which, from my reading, means I'm likely in for an MS support call. :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.

    Hi All,
    The results of the MS support call were as follows. I don't recommend doing these steps without an MS support case (any damage you do is your own fault); these particular actions resolved our problems:
    1. Regarding the Configuration dataset being so large.
    This was caused by our AlertStage table, which was also very large; we truncated the AlertStage table and ran the maintenance tasks manually to clear this up (a sketch of the maintenance call is at the end of this post). As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation of the table. The document linked by MHG above shows the process of doing a backup & restore of the AlertStage table if you need to. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken place, the Configuration dataset dropped in size to less than a gig.
    2. Error 31553 Duplicate Key Error
    This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information, which could be gathered from the Events being logged on the Management Server.
    We then updated a few of these rows to have a slightly different time to what was already in the Database. We noticed that the event kept logging with a different row each time we updated the previous row. We ran the following query to find out how many rows
    actually had duplicates:
    select * from ManagedEntityProperty mep
    inner join ManagedEntity me on mep.ManagedEntityRowId = me.ManagedEntityRowId
    inner join ManagedEntityStage mes on mes.ManagedEntityGuid = me.ManagedEntityGuid
    where mes.ChangeDateTime = mep.FromDateTime
    order by mep.ManagedEntityRowId
    This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database. (Best to have MS Check this one out for you if you have a lot of data)
    After doing this there was a lot of data moving around the Staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done everything worked
    properly and all the queues stayed under control.
    To confirm things had been cleared up we checked the AlertStage table had no entries and the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the Management server.
    Hopefully this can help someone, or provide a bit more information on these problems.
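    For reference, a hedged sketch of the manual maintenance call mentioned in step 1 is below. It uses the StandardDataset table and StandardDatasetMaintenance procedure in OperationsManagerDW (the same maintenance the 31552 workflow runs); the schema name 'Alert' corresponds to the "Alert data set" instance named in the event text. The AlertStage truncation itself is deliberately not shown, and none of this should be run outside a support engagement:
    USE OperationsManagerDW;

    DECLARE @DataSetId uniqueidentifier;

    -- look up the dataset to maintain by its schema name ('Alert', 'Perf', 'State', ...)
    SELECT @DataSetId = DatasetId
    FROM   StandardDataset
    WHERE  SchemaName = 'Alert';

    -- run the standard maintenance (aggregation, grooming, index work) for that dataset
    EXEC StandardDatasetMaintenance @DataSetId;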

  • Service Manager Data Warehouse Install - Analysis Server Configuration For OLAP Cubes Fail

    Hello everyone,
    I have an issue with my installation of the Data Warehouse for System Center Service Manager 2012 SP1.
    My install environment is the following:
    Windows Server 2012 – System Center Service Manager (Successfully Installed) - Virtual
    Windows Server 2012 – System Center Data Warehouse (Pending) - Virtual
    Windows Server 2012 – MS SQL Server 2012 – Physical, clustered, 1st of four servers
    The SQL Server is a clustered installation with named instances, specifically for SharePoint and Service Manager. Each instance has its own IP address and dynamic ports are turned off. I’m installing using the domain administrator account and I also chose
    to run the installer as administrator. The domain admin has sysadmin rights to the service manager server and instance I’m trying to install on. However, the account does not have sysadmin rights to some of the other instances.
    The install is smooth up until it needs to connect to the Analysis server database. I have tried connecting to the analysis servers on other SQL servers on site and all were successful. The only difference between the older SQL servers, the SQL 2012 development
    server and the SQL 2012 production server I'm trying to install to is that the domain admin account doesn't have sysadmin access on all the databases on the new production server. The SQL server is being installed and configured by a contractor, so
    if you all have troubleshooting suggestions, I’ll need to coordinate with the contractor.
    Starting with the screen below, I began searching for help online. There seems to be no one else with this issue or it is not documented properly. I opened a ticket with MS, called the contractor and troubleshot with him, troubleshot as far as I could on
    my own and I’m still at a loss as to what is preventing the installer from connecting specifically to the analysis server.
    I first thought the installer was at issue or that the data warehouse sever was at issue. But all signs are pointing at the SQL server. The installer is able to connect to all the other SQL servers – including other 2012 servers (same versions) – so it can’t
    be the installer. I’m pretty sure the SQL server is going to be at issue.
    After looking at this error, I opened the resource monitor and clicked the dropdown to see if it was trying to connect to the correct server and it was. I then connected to the old and new test and development servers successfully. Then connected to the
    SQL 2008 R2 production cluster successfully. I then compared the two servers. The only difference other than the version numbers is that the admin account doesn’t have sysadmin rights on all the SQL 2012 database servers. But the database servers are not the
    problem. The analysis servers are.
    I then checked the event logs and they are empty as far as this issue is concerned. Actually, there are no errors on the SQL 2012 production box and the Data Warehouse box. I then checked the log that the installer creates during every step of the installation
    and this is what is created when the dropdown is clicked for the analysis server configuration screen. The log file location is:
    “C:\Users\admin\AppData\Local\Temp\2\SCSMSetupWizard01.txt”
    In the file is the following text.
    01:03:34:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012
    01:03:34:Using SQL Server 2012 management scope on SCSMSQL2012
    01:03:36:Collecting SQL instances on server SCSMSQL2012
    01:03:36:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:36:Using SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:38:Found SQL Instance: SCSMSQL2012\PWGSQL2012
    01:03:38:Found SQL Instance: SCSMSQL2012\SCSMSQL2012
    01:03:39:Error:GetSqlInstanceList(), Exception Type: Microsoft.AnalysisServices.ConnectionException, Exception Message: A connection cannot be made. Ensure that the server is running.
    01:03:39:StackTrace:   at Microsoft.AnalysisServices.XmlaClient.GetTcpClient(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenConnection(ConnectionInfo connectionInfo, Boolean& isSessionTokenNeeded)
       at Microsoft.AnalysisServices.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
       at Microsoft.AnalysisServices.Server.Connect(String connectionString, String sessionId, ObjectExpansion expansionType)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetASVersion(StringBuilder sqlInstanceServiceName)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetSqlInstanceList(String sqlServerName, Int32 serviceType)
    I’m now investigating the issue according to this output, and decided to ask you all if you’ve run into this issue and found a resolution.

    I am running into the same issue, but I don't see anything in the instances section related to port/IPv6. I do see it in the listener section; I tried to remove it, but it comes up again. Please help.
    <ConfigurationSettings>
    <Security>
    <RequireClientAuthentication>0</RequireClientAuthentication>
    <SecurityPackageList/>
    </Security>
    <Network>
    <Listener>
    <RequestSizeThreshold>4095</RequestSizeThreshold>
    <MaxAllowedRequestSize>0</MaxAllowedRequestSize>
    <ServerSendTimeout>60000</ServerSendTimeout>
    <ServerReceiveTimeout>60000</ServerReceiveTimeout>
    <IPV4Support>2</IPV4Support>
    <IPV6Support>2</IPV6Support>
    </Listener>
    <TCP>
    <MaxPendingSendCount>12</MaxPendingSendCount>
    <MaxPendingReceiveCount>4</MaxPendingReceiveCount>
    <MinPendingReceiveCount>2</MinPendingReceiveCount>
    <MaxCompletedReceiveCount>9</MaxCompletedReceiveCount>
    <ScatterReceiveMultiplier>5</ScatterReceiveMultiplier>
    <MaxPendingAcceptExCount>10</MaxPendingAcceptExCount>
    <MinPendingAcceptExCount>2</MinPendingAcceptExCount>
    <InitialConnectTimeout>10</InitialConnectTimeout>
    <SocketOptions>
    <SendBufferSize>0</SendBufferSize>
    <ReceiveBufferSize>0</ReceiveBufferSize>
    <DisableNonblockingMode>1</DisableNonblockingMode>
    <EnableNagleAlgorithm>0</EnableNagleAlgorithm>
    <EnableLingerOnClose>0</EnableLingerOnClose>
    <LingerTimeout>0</LingerTimeout>
    </SocketOptions>
    </TCP>
    <Requests>
    <EnableBinaryXML>0</EnableBinaryXML>
    <EnableCompression>0</EnableCompression>
    </Requests>
    <Responses>
    <EnableBinaryXML>1</EnableBinaryXML>
    <EnableCompression>1</EnableCompression>
    <CompressionLevel>9</CompressionLevel>
    </Responses>
    <ListenOnlyOnLocalConnections>0</ListenOnlyOnLocalConnections>
    </Network>
    <Log>
    <File>msmdredir.log</File>
    <FileBufferSize>0</FileBufferSize>
    <MessageLogs>Console;System</MessageLogs>
    <Exception>
    <CreateAndSendCrashReports>0</CreateAndSendCrashReports>
    <CrashReportsFolder/>
    <SQLDumperFlagsOn>0x0</SQLDumperFlagsOn>
    <SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
    <MiniDumpFlagsOn>0x0</MiniDumpFlagsOn>
    <MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
    <MinidumpErrorList>0xC1000000, 0xC1000001, 0xC100000C, 0xC1000016, 0xC1360054, 0xC1360055</MinidumpErrorList>
    <ExceptionHandlingMode>0</ExceptionHandlingMode>
    <MaxExceptions>500</MaxExceptions>
    <MaxDuplicateDumps>1</MaxDuplicateDumps>
    </Exception>
    </Log>
    <Memory>
    <HandleIA64AlignmentFaults>0</HandleIA64AlignmentFaults>
    <PreAllocate>0</PreAllocate>
    <VertiPaqPagingPolicy>0</VertiPaqPagingPolicy>
    <PagePoolRestrictNumaNode>0</PagePoolRestrictNumaNode>
    </Memory>
    <Instances/>
    <VertiPaq>
    <DefaultSegmentRowCount>0</DefaultSegmentRowCount>
    <ProcessingTimeboxSecPerMRow>-1</ProcessingTimeboxSecPerMRow>
    <SEQueryRegistry>
    <Size>0</Size>
    <MinKCycles>0</MinKCycles>
    <MinCyclesPerRow>0</MinCyclesPerRow>
    <MaxArbShpSize>0</MaxArbShpSize>
    </SEQueryRegistry>
    </VertiPaq>
    </ConfigurationSettings>

  • ACI Setup - How to Configure Data Warehouse Database - Partitioning

    After reading the ACI Install Guide & Data Warehouse documentation, I have some questions regarding how to set up the database:
    - Should database partitioning be set up? If so, what tables should be partitioned and what should they be partitioned by?
    - Are there any other best practices or tips for setting up & tuning the database?
    We are trying to avoid the (painful) situation of having to add partitioning later on; it is much easier to add it up front (if done correctly up front).
    Thanks in advance for any advice!

    On the tables recommended for partitioning, the partition key is nullable. If ATG inserts a null value into the timestamp column of one of the partitioned tables, we'll receive an ORA-14300 or ORA-14440 error. Oracle isn't able to figure out what partition to map that record to.
    Can the columns be changed to NOT NULL? Or can the application guarantee that a null value won't be inserted?
    Here are some example columns:
    ARF_SITE_VISIT.START_VISIT_TIMESTAMP --> TIMESTAMP(6) null
    ARF_REGISTRATION.REGISTRATION_TIMESTAMP --> TIMESTAMP(6) null
    ARF_LINE_ITEM.SUBMIT_TIMESTAMP --> TIMESTAMP(6) null
    ARF_PROMOTION_USAGE.USAGE_TIMESTAMP --> TIMESTAMP(6) null
    ARF_RETURN_ITEM.SUBMIT_TIMESTAMP --> TIMESTAMP(6) null
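    For illustration, a hedged sketch of the two usual ways to keep NULLs out of a range partition key (assuming the application can tolerate the constraint, which is exactly the open question above):
    -- option 1: forbid NULLs in the partition key column outright
    ALTER TABLE ARF_SITE_VISIT MODIFY (START_VISIT_TIMESTAMP NOT NULL);

    -- option 2: give stray NULL keys somewhere to land; for range partitioning a NULL
    -- key sorts above every real value, so it falls into a MAXVALUE partition, e.g.
    --   PARTITION p_overflow VALUES LESS THAN (MAXVALUE)
    -- added as the last partition when the table is created or split.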
    Thanks

  • Error while creating data warehouse tables.

    Hi,
    I am getting an error while creating data warehouse tables.
    I am using OBIA 7.9.5.
    The contents of the generate_clt log are as below.
    >>>>>>>>>>>>>>>>>>>>>>>>>>
    Schema will be created from the following containers:
    Oracle 11.5.10
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different :[keyTypeCode]
    Success!
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    There are two rows in the DAC repository schema for the column and the table.
    The w_etl_table_col.KEY_TYPE_CD value for DW application is UNKNOWN and for the ORA_11i application it is NULL.
    Could this be the cause of the issue? If yes, why could the values be different and how to resolve this?
    If not, then what could be the problem?
    Any responses will be appreciated.
    Thanks and regards,
    Manoj.

    Strange. The OBIA 7.9.5 Installation and Configuration Guide says the following:
    4.3.4.3 Create ODBC Database Connections
    Note: You must use the Oracle Merant ODBC driver to create the ODBC connections. The Oracle Merant ODBC driver is installed by the Oracle Business Intelligence Applications installer. Therefore, you will need to create the ODBC connections after you have run the Oracle Business Intelligence Applications installer and have installed the DAC Client.
    Several other users are getting the same message creating DW tables.

  • Service manager console can't connect to Service manager data warehouse SQL reporting services

    When I start Service manager console, it gives this kind of error:
    The Service Manager data warehouse SQL Reporting Services server is currently unavailable. You will be unable to execute reports until this server is available. Please contact your system administrator. After the server becomes available please close your
    console and re-open to view reports.
    Also, Event Viewer says:
    cannot connect to SQL Reporting Services Server. Message= An unexpected error occured while connecting to SQL Reporting Services server: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.EnterpriseManagement.Reporting.ReportingService.ReportingService2005.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String searchPath, IList`1 criteria, Boolean And)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String itemPath)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItem(String itemPath, ItemTypeEnum[] desiredTypes)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.GetFolder(String path)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReportingGroup.Initialize()
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath, NetworkCredential credentials)
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath)
    at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup() Remediation = Please contact your Administrator.
    We have a four-server setup where SCSM, SCDW, and the SQL servers for both are on different servers. Also, I have read that this could be an SPN problem, but this was working last week without the SPNs.

    On the computer where you get the "SQL Reporting Services server is currently unavailable" message, please open Internet Explorer and try to connect to the URL http://<NameOfReportingServer>/reports
    This should open the reporting website in IE. If this isn't working you should check the proxy settings in IE. If the URL doesn't work in IE it won't work in the SCSM console either (and vice versa).
    Andreas Baumgarten | H&D International Group
    Actually, I can't access the reporting website. It asks me for credentials 3 times and then returns a blank page. An error also appears in the Event Viewer System log with ID 4 and source Security-Kerberos.
    The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server "accountname".
    The target name used was HTTP/"reporting services fqn". This indicates that the target server failed to decrypt the ticket provided by the client.
    This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using.
    Ensure that the target SPN is only registered on the account used by the server.
    This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service.
    Ensure that the service on the server and the KDC are both configured to use the same password.
    If the server name is not fully qualified, and the target domain (domain.com) is different from the client domain (domain.com), check if there are identically named server accounts in these two domains,
    or use the fully-qualified name to identify the server.
    I can access the website directly from the server which hosts Reporting Services.
    Also, I queried setspn -Q HTTP/"reporting services fqdn" with the result NO SUCH SPN FOUND.

  • I am getting the error "Unable to connect to data warehouse management server" when I try to register the DWMS

    I have a Data Warehouse server that appears to be functioning but is running System Center Service Manager 2010 (without SP1). I also have a functioning Configuration Management server with the Reporting Services point installed that is running System Center 2012. Both systems are VMs running Windows Server 2008 R2 Enterprise with SP1, fully patched. Both systems are running SQL Server 2008 R2 as well. When I try to register the Data Warehouse server via the GUI using the console, or in PowerShell, it errors out, particularly in the GUI with the error "Unable to connect to data warehouse management server". I can browse to it, ping it, get the configuration management reports to run and show my AD assets, etc., but it will not register the DWS. I have tried every suggestion TechNet has to offer and I am hitting a wall. Can someone please, please, please help!?!

    unplug modem and router and reboot.
    check setting for network, verify password.
    sign in.  Enter computer information.
    let me know if this works.

  • Unable to connect to Data Warehouse Server

    I have a Data Warehouse server that appears to be functioning but is running System Center Service Manager 2010 (without SP1). I also have a functioning Configuration Management server with the Reporting Services point installed that is running System Center 2012. Both systems are VMs running Windows Server 2008 R2 Enterprise with SP1, fully patched. Both systems are running SQL Server 2008 R2 as well. When I try to register the Data Warehouse server via the GUI using the console, or in PowerShell, it errors out, particularly in the GUI with the error "Unable to connect to data warehouse management server". I can browse to it, ping it, get the configuration management reports to run and show my AD assets, etc., but it will not register the DWS. I have tried every suggestion TechNet has to offer and I am hitting a wall. Can someone please, please, please help!?!

    These are the types of errors I am getting in the Event Logs:
    Log Name:      Operations Manager
    Source:        Console Operations
    Date:          4/22/2014 11:18:53 AM
    Event ID:      33569
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      IS-V004.CH.ElSegundo.org
    Description:
    Cannot connect to SQL Reporting Services Server.  Message= Cannot display reporting wunderbar because the information is not yet available in DW CMDB.  Remediation = Please wait for MP sync process to finish and try again later.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Console Operations" />
        <EventID Qualifiers="49152">33569</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-04-22T18:18:53.000000000Z" />
        <EventRecordID>71</EventRecordID>
        <Channel>Operations Manager</Channel>
        <Computer>IS-V004.CH.ElSegundo.org</Computer>
        <Security />
      </System>
      <EventData>
        <Data>Cannot display reporting wunderbar because the information is not yet available in DW CMDB.</Data>
        <Data>Please wait for MP sync process to finish and try again later.</Data>
      </EventData>
    </Event>
    Log Name:      Operations Manager
    Source:        Console Operations
    Date:          4/22/2014 11:18:42 AM
    Event ID:      33569
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      IS-V004.CH.ElSegundo.org
    Description:
    Cannot connect to SQL Reporting Services Server.  Message= An unexpected error occured while connecting to SQL Reporting Services server: System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup()  Remediation = Please contact your Administrator.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Console Operations" />
        <EventID Qualifiers="49152">33569</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-04-22T18:18:42.000000000Z" />
        <EventRecordID>68</EventRecordID>
        <Channel>Operations Manager</Channel>
        <Computer>IS-V004.CH.ElSegundo.org</Computer>
        <Security />
      </System>
      <EventData>
        <Data>An unexpected error occured while connecting to SQL Reporting Services server: System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup()</Data>
        <Data>Please contact your Administrator.</Data>
      </EventData>
    </Event>

  • Syntax for WriterLoginName in Data Warehouse DB

    Hello
    I'm having a few issues with our management servers writing to the Data Warehouse DB. I've checked the 'Management Group' table and can see the WriterLoginName is set to DOMAIN\sv-scom-dw; however, I'm just querying whether that field should read sv-scom-dw.
    The account is in fact a domain account. It's listed as the 'Data Warehouse SQL Account' & 'Data Warehouse Action Account' (under Administration > Run As configuration > Accounts). 
    We have two entries in the database security (rights over OperationsManagerDW): one as DOMAIN\sv-scom-dw and a local SQL login called sv-scom-dw. Both accounts have the following permissions: apm_datareader, apm_datawriter, db_datareader, db_owner, OpsMgrReader, OpsMgrWriter, public.
    We're a SCOM 2012 R2 environment. All servers are 2012 R2, SQL is also 2012 standard. 
    Anyone faced a similar issue before? I'm seeing a lot of alerts in the Monitoring section for the Data Warehouse. One in particular:
    Data Warehouse failed to discover performance standard data set. Failed to enumerate (discover) Data Warehouse objects and relationships among them. The operation will be retried.
    Exception 'SqlException': Management Group with id ''5F201AB2-4B10-7FCC-C716-B2361102248D'' is not allowed to access Data Warehouse under login ''sv-scom-dw''
    One or more workflows were affected by this.
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Discovery.StandardDataSet
    Instance name: Performance data set
    Instance ID: {B81C47FB-A80D-0FE5-A8DB-DC4544FC8DA6}
    Management group: ******
    As you can see from the alert, the account referenced is 'sv-scom-dw' and not 'DOMAIN\sv-scom-dw', which is why I originally asked: should the field in the management table be updated?
    Thanks, David.
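    For what it's worth, a hedged sketch of checking (and, only if really necessary, correcting) the value, using the table and column already referenced above in OperationsManagerDW. The UPDATE is left commented out because changing it should be a last resort done with Microsoft's guidance:
    USE OperationsManagerDW;

    -- the writer login SCOM expects; normally the domain-qualified account
    SELECT WriterLoginName FROM dbo.ManagementGroup;

    -- UPDATE dbo.ManagementGroup SET WriterLoginName = 'DOMAIN\sv-scom-dw';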

    Hi guys.
    Thanks for the responses, I shall provide an event ID shortly. In response to Mai, I've followed the link you posted and I'm now checking the 'data source and related settings', so I've gone to http://localhost/reports on the warehouse server (which also hosts the reporting), and I've got the following error:
    The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled)
    Keyset does not exist (Exception from HRESULT: 0x80090016)
    Have you come across this before?

  • Accessing Data Warehouse with HTML DB

    I have a test data warehouse database 10g comprising seven dimension tables and one fact table. When I access one table at a time, the query runs fine, but when I join two or more dimension tables to the fact table, the result set comes out wrong. The performance is also very poor. Is HTML DB not capable of properly accessing data warehouse data?
    Here is the query I'm having problem with:
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
    FROM sales sl, product p, period pr, store s
    WHERE p.prodkey = sl.prodkey
    AND pr.perkey = sl.perkey
    AND p.prod_name LIKE 'Assam Gold%'
    OR p.prod_name LIKE 'Earl%'
    AND s.store_name LIKE 'Instant%'
    AND pr.month = 'NOV'
    AND pr.year = 2003
    ORDER BY p.prod_name, sl.dollars DESC
    Your input would be appreciated.

    I doubt this was intentional, but you are not joining the store table to anything. You do filter the rows from that table with the AND s.store_name LIKE 'Instant%' predicate, but it is not joined to any of the other 3 tables. Your query will essentially return the number of rows from the other 3 tables multiplied by the number of rows returned from store. You might also think about grouping some of your predicates for readability and possibly for correct logic.
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
      FROM sales sl, product p, period pr, store s
     WHERE p.prodkey = sl.prodkey
       AND pr.perkey = sl.perkey
       -- Add missing predicate here
       -- AND s.something = sl, p, or pr .something
       -- end missing predicate
       AND (p.prod_name LIKE 'Assam Gold%'
            OR
            p.prod_name LIKE 'Earl%')
       AND s.store_name LIKE 'Instant%'
       AND pr.month = 'NOV'
       AND pr.year = 2003
     ORDER BY p.prod_name, sl.dollars DESC
    Hope this helps,
    Tyler

  • Only Alert Data is not being inserted in SCOM 2012 Data Warehouse database

    Hi All,
    Alert data has not been getting inserted into the SCOM Data Warehouse database for 10 days, though I can see the latest performance data in the DW DB. No changes were made, as far as I know, on the SCOM servers or DBs. I had this issue a few months back and it was resolved by executing a query to create a Data Warehouse Synchronization Server entry.
    Now I have checked the discovered inventory and can see the entry present and healthy. Still, the latest alert data is not getting inserted into the DW DB. Please help me out.
    http://social.technet.microsoft.com/Forums/en-US/2dac4f45-4911-40dc-a220-702993188832/alert-data-is-not-present-in-scom-2012-data-warehouse-database-since-two-weeks?forum=operationsmanagergeneral
    Regards, Suresh

    Hi,
    Generally, the data warehouse stores long-term data and by default keeps 400 days of data. I suggest you check your configuration:
    How to Configure Grooming Settings for the Reporting Data Warehouse Database
    http://technet.microsoft.com/en-us/library/hh212806.aspx
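    For reference, a hedged sketch of how the current retention settings can be inspected directly in OperationsManagerDW (the Dataset and StandardDatasetAggregation tables are the usual place these live; change the values through supported tooling rather than ad-hoc updates):
    USE OperationsManagerDW;

    -- MaxDataAgeDays is the retention per dataset and aggregation type
    -- (AggregationTypeId: 0 = raw, 20 = hourly, 30 = daily)
    SELECT ds.DatasetDefaultName,
           sda.AggregationTypeId,
           sda.MaxDataAgeDays
    FROM   dbo.Dataset ds
    JOIN   dbo.StandardDatasetAggregation sda
           ON sda.DatasetId = ds.DatasetId
    ORDER  BY ds.DatasetDefaultName;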
    Alex Zhao
    TechNet Community Support
