Data Warehouse Cursor Problem

I am trying to complete a piece of work for college but am having trouble completing a cursor. The object of this small project is to create a very basic data warehouse from an operational system.
I have populated all of the dimension tables except one, which is to be populated along with the FACT table. These tables are to be populated by the cursor I am trying to complete.
I am having difficulty understanding what the first select statement in the cursor does. For the region dimension table, I was asked to create a sequence to use as the primary key (region_id). The region_id in the operational table has different values, e.g. 6000, 6001.
'dw_op' is the schema of the operational tables, which are accessed through the DB link 'q_link'.
Any thoughts on what is required to complete this cursor would be a big help.
Here is the incomplete anonymous block and cursor:
declare
     cursor c_sales is
          select order_line.product_id, order_line.quantity,
          product.unit_cost, product.unit_price, ord.client_id,
          ord.SALES_REP_ID, ord.order_date from dw_op.ord@q_link, dw_op.order_line@q_link,
          dw_op.product@q_link
          where ord.order_id = order_line.order_id AND
          order_line.product_id = product.product_id...
     r_id number;   -- region surrogate key looked up per row
     s_time number; -- time dimension key from time_seq
     s_value number;
     s_cost number;
begin
     for c_rec in c_sales loop
          select region_id into r_id
          from region where region_name =
               (select region_name from dw_op.sales_region@q_link,
               dw_op.sales_rep@q_link where sales_region.region_id =
               sales_rep.region_id and SR_id = c_rec.SALES_REP_ID);
          select time_seq.nextval into s_time from dual;
          insert into time values (s_time, s_day, s_month, s_year ... );
          s_value := ... -- how much it costs the company, unit_price * something?
          s_cost := ... -- times something by the quantity
          insert into sale values (c_rec.client_id, c_rec.product_id, c_rec.SALES_REP_ID,
               r_id, s_time, c_rec.quantity, s_value, s_cost); -- cursor columns are read via the loop record c_rec
     end loop;
end;
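
For what it's worth, here is a minimal sketch of how the gaps could be filled, assuming the warehouse tables are shaped as the block above implies (time(time_id, day, month, year) and sale(client_id, product_id, sales_rep_id, region_id, time_id, quantity, value, cost)), that the region dimension was already loaded via the region_id sequence, and that value is quantity times unit_price while cost is quantity times unit_cost. Treat it as a sketch, not the assignment's official answer:

declare
     cursor c_sales is
          select ol.product_id, ol.quantity, p.unit_cost, p.unit_price,
                 o.client_id, o.sales_rep_id, o.order_date
          from dw_op.ord@q_link o, dw_op.order_line@q_link ol, dw_op.product@q_link p
          where o.order_id = ol.order_id
          and ol.product_id = p.product_id;
     r_id number;
     s_time number;
     s_value number;
     s_cost number;
begin
     for c_rec in c_sales loop
          -- map the operational sales rep to the warehouse region surrogate key
          select region_id into r_id
          from region
          where region_name = (select sr.region_name
                               from dw_op.sales_region@q_link sr, dw_op.sales_rep@q_link rep
                               where sr.region_id = rep.region_id
                               and rep.sr_id = c_rec.sales_rep_id);
          -- one time dimension row per fact row, keyed by the sequence, as in the block above
          select time_seq.nextval into s_time from dual;
          insert into time values (s_time,
               to_number(to_char(c_rec.order_date, 'DD')),
               to_number(to_char(c_rec.order_date, 'MM')),
               to_number(to_char(c_rec.order_date, 'YYYY')));
          s_value := c_rec.quantity * c_rec.unit_price; -- what the sale brings in
          s_cost  := c_rec.quantity * c_rec.unit_cost;  -- what the goods cost the company
          insert into sale values (c_rec.client_id, c_rec.product_id, c_rec.sales_rep_id,
               r_id, s_time, c_rec.quantity, s_value, s_cost);
     end loop;
     commit;
end;
/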

You may have an IO problem, but you may also have a design or configuration issue. What you are seeing is multiple sessions waiting for the same block: if 20 sessions all request the same block, one will read it from disk ('db file scattered read' or 'db file sequential read') and the other 19 sessions will wait on 'read by other session' and then get the block from the cache.
There does seem to be a very high number of waits for 'read by other session' in the database, so you may want to investigate exactly which SQL is waiting on this event and whether you could benefit from either a larger buffer cache or from using the keep and recycle pools to manage frequently accessed tables better. Otherwise, investigate the SQL that is performing the most IO and tune it to do less work. A starting query is sketched below.
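A hedged example of that investigation, assuming Active Session History is available (10g or later, Diagnostics Pack licensed); on older releases you would sample v$session_wait instead:

     select sql_id, count(*) as samples
     from v$active_session_history
     where event = 'read by other session'
     group by sql_id
     order by samples desc;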
Chris

Similar Messages

  • Problem of querying a data warehouse

Hi,
I need to create a data warehouse. I used Oracle Warehouse Builder version 10.2, but I ran into problems when querying ("interrogating") my warehouse, so I used Excel to work around them.
Could someone help me and suggest an alternative? I do not know if there is another version that resolves this problem.
Thanks

Can you explain the problem in more detail so we can understand it better?
What exactly does "interrogation" mean in OWB?

  • Data warehouse problem plz help

Hi, I have a problem building my first warehouse.
First of all, I have many operational databases, and I want to build a warehouse that pulls the data from these databases and stores it organized by time.
I also want to know how to connect a VB .NET application to run queries and retrieve data from the data warehouse.
Is this possible? Can anyone help, please?

"Because of this our server gets shut down automatically" No. Just because the connection pool got suspended, the server should not go down; there is some other issue which you did not notice. For the Data Source to function properly, make sure that the initial and maximum connection limits have been set appropriately (preferably both should be equal), and make sure that the database is always up and running and has that many connections open. Check with the DBA for the DB connection limit settings.
Raise an SR with support if you are not able to figure out the exact issue.
    Regards,
    Anuj

  • Error while creating data warehouse tables.

    Hi,
    I am getting an error while creating data warehouse tables.
    I am using OBIA 7.9.5.
    The contents of the generate_clt log are as below.
    >>>>>>>>>>>>>>>>>>>>>>>>>>
    Schema will be created from the following containers:
    Oracle 11.5.10
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different :[keyTypeCode]
    Success!
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    There are two rows in the DAC repository schema for the column and the table.
    The w_etl_table_col.KEY_TYPE_CD value for DW application is UNKNOWN and for the ORA_11i application it is NULL.
    Could this be the cause of the issue? If yes, why could the values be different and how to resolve this?
    If not, then what could be the problem?
    Any responses will be appreciated.
    Thanks and regards,
    Manoj.

    Strange. The OBIA 7.9.5 Installation and Configuration Guide says the following:
    4.3.4.3 Create ODBC Database Connections
    Note: You must use the Oracle Merant ODBC driver to create the ODBC connections. The Oracle Merant ODBC driver is installed by the Oracle Business Intelligence Applications installer. Therefore, you will need to create the ODBC connections after you have run the Oracle Business Intelligence Applications installer and have installed the DAC Client.
    Several other users are getting the same message creating DW tables.

  • Service Manager data warehouse management server Installation fails

    Hi there,
In a virtual machine running Windows Server 2012 R2 Standard, with my user being a Local Admin and SQL Admin, I tried a first installation of the Service Manager data warehouse management server and ran into the error shown in the screenshot (not reproduced here):
    In the event viewer I get the following error:
    "Microsoft System Center 2012 R2 Service Manager -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 25211
    The arguments are: -2147024809, The parameter is incorrect."
    In the Setup log, some of the errors are:
    WixRemoveFoldersEx:  Entering WixRemoveFoldersEx in C:\Windows\Installer\MSI35E.tmp, version 3.7.1224.0
    WixRemoveFoldersEx:  Error 0x80070057: Missing folder property: PSCONFIGFOLDER.A591E3B4_D228_431D_BF89_99D52C8FFB76 for row: wrf4582BC4C5CC47B1D2380408CD7A752DC.A591E3B4_D228_431D_BF89_99D52C8FFB76
    CustomAction WixRemoveFoldersEx.A591E3B4_D228_431D_BF89_99D52C8FFB76 returned actual error code 1603 but will be translated to success due to continue marking
    CAStartServices: CAStartServices was passed . OMCFG
    CAStartServices: Checking if service already started. OMCFG
    CAStartServices: Attempting to start service. OMCFG
    CAStartServices: StartService failed. Error Code: 0x8007042D.
    ConfigureSDKConfigService: CAStartServices failed, trying again.... Error Code: 0x8007042D. OMCFG
    Action start 17:47:05: _SetHealthServiceConfig.80B659D9_F758_4E7D_B4FA_E53FC737DCC9.
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMServer
    MSI (s) (EC!4C) [17:47:05:483]: Note: 1: 2711 2: MOMGateway
    SetHealthServiceConfig: Failed to get Feature State.. Error Code: 0x80070646. MOMServer
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMGateway
    I have checked the following post but it did not help me:
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/c42bb04d-a51e-4037-a8a3-37d714d6faac/scsm-management-server-installation-fails?forum=systemcenterservicemanager
    Could you please help me with this issue?
    Thanks a lot,
    M

    Hi,
Sorry, I cannot post the full log. I have also found these errors in the log:
    Calling custom action CAManaged!Microsoft.MOMv3.Setup.MOMv3ManagedCAs.RegisterSdkSCP
    RegisterSdkSCP: There is no previous serviceConnectionPoint
    RegisterSdkSCP: Creating New serviceConnectionPoint
    RegisterSdkSCP: Adding ACL for current user: DOMAIN\InstallationAccount
    RegisterSdkSCP: Adding ACL for SM Admini: DOMAIN\SCSMDWadmins
    RegisterSdkSCP: Error: Access is denied.
    InstallCounters: LoadPerfCounterTextStrings() failed . Error Code: 0x80070057. momv3 "D:\Program Files\Microsoft System Center 2012 R2\Service Manager\MOMConnectorCounters.ini"
    InstallPerfCountersHelper: pcCounterInstaller->InstallCounters() for the default counters failed. Error Code: 0x80070057. MOMConnector
    InstallPerfCountersLib: InstallHealthServicePerfCounters() failed . Error Code: 0x80070057.
    InstallPerfCountersLib: Retry Count : .
    InstallHSPerfCounters: Failed to install agent perf counters. Error Code: 0x80070057.
    Thanks for your reply.

  • What are the best solutions for data warehouse configuration in 10gR2

I need help on the solutions to be proposed to my client for upgrading their data warehouse.
Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. The sizes are respectively 6 TB (retention policy of 3 years plus the current year) and 1 TB. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
The client cannot make major architectural or configuration changes to the existing environment now due to some constraints.
However, they have agreed to separate the databases from the ETL tools and BO objects onto separate hosts. We are also planning to upgrade the database to 10gR2 to attain stability, better performance, and to overcome the current headaches.
We cannot upgrade the database to 11g, as BO is at version 6.5, which isn't compatible with Oracle 11g. And the client cannot afford to upgrade anything other than the database.
So my role is vital in providing a solid solution for better performance and in carrying out a successful migration of the Oracle database from one host to another (similar platform and OS) in addition to the upgrade.
I have so far thought of the following:
Move the Oracle database and data mart to a separate host.
The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
Install a new Oracle Database 10g on the new host and move the data to it.
Explore the new 10gR2 features that help a data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
Also thinking of RAC, as our main motive is to show a tremendous performance enhancement.
I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
Also, I am pasting part of the STATSPACK report below, excluding the snaps around a DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Configuration Dataset = 90% of Data Warehouse - Event Errors 31552

    Hi All,
I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We currently have around 800 servers in our production environment and no network devices. We use Orchestrator for integration with our call logging system, and I believe this is where our problems started: we had a runbook which got itself into a loop and was constantly updating alerts, and it also contributed to a large number of state changes. We have resolved that problem now, but I started to receive alerts saying SCOM couldn't sync alert data, under event 31552.
    Failed to store data in the Data Warehouse.
    Exception 'SqlException': Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
    Instance name: Alert data set 
    Instance ID: XX
    Management group: XX
I have been researching problems with syncing alert data and came across the queries to manually run the dataset maintenance (a sketch of the commonly posted version is below). I ran that on the Alert instance and it took around 16.5 hours the first night; after that it ran fast (about 2 seconds) for most of the day, but at about the same time the next day it took another 9.5 hours, so I'm not sure why the results vary so much.
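For reference, a hedged version of that manual maintenance snippet, run in the OperationsManagerDW database (the SchemaName value, e.g. 'Alert', 'Perf', 'State', selects the dataset to maintain):

     USE OperationsManagerDW;
     DECLARE @DataSetId uniqueidentifier;
     SELECT @DataSetId = DatasetId
     FROM StandardDataset
     WHERE SchemaName = 'Alert';
     EXEC StandardDataSetMaintenance @DataSetId;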
Initially it appeared all of our datasets were out of sync; after the first night all appear to be in sync bar the Hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear to fix it (it runs in about 2 seconds, successfully).
I recently ran DWDatarp on the database to see how the Alert dataset was looking, and to my surprise I found that the Configuration dataset has blown out to take up 90% of the data warehouse; see the table below. Does anyone have any ideas on what might cause this or how I can fix it?
    Dataset name                   Aggregation name     Max Age     Current Size, Kb
    Alert data set                 Raw data                 400       132,224 (  0%)
    Client Monitoring data set     Raw data                  30             0 (  0%)
    Client Monitoring data set     Daily aggregations       400            16 (  0%)
    Configuration dataset          Raw data                 400   683,981,456 ( 90%)
    Event data set                 Raw data                 100    17,971,872 (  2%)
    Performance data set           Raw data                  10     4,937,536 (  1%)
    Performance data set           Hourly aggregations      400    28,487,376 (  4%)
    Performance data set           Daily aggregations       400     1,302,368 (  0%)
    State data set                 Raw data                 180       296,392 (  0%)
    State data set                 Hourly aggregations      400    17,752,280 (  2%)
    State data set                 Daily aggregations       400     1,094,240 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Raw data                     7             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Hourly aggregations          3             0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact Daily aggregations         182             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Raw data                 400           176 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability Daily aggregations       400             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Raw data 7             0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping Daily aggregations       400             0 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Raw data                   3        84,864 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Hourly aggregations        7       407,416 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data Daily aggregations       182       143,128 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Raw data                   7         6,088 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Hourly aggregations       31        20,056 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data Daily aggregations       182         3,720 (  0%)
    I have one other 31553 event showing up on one of the Management servers as follows,
    Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
    Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
    object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013  6:02AM). 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity 
    Instance name: XX 
    Instance ID: XX
    Management group: XX
which from my reading means I'm likely in for an MS support call. :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.

    Hi All,
The results of the MS support call were as follows. I don't recommend doing these steps without an MS support case; any damage you do is your own fault. These particular actions resolved our problems:
1. Regarding the Configuration dataset being so large.
This was caused by our AlertStage table, which was also very large. We truncated the AlertStage table and ran the maintenance tasks manually to clear this up. As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation of the table. The document linked by MHG above shows the process of doing a backup and restore of the AlertStage table if you need to keep them. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken place, the Configuration dataset dropped in size to less than a gig.
    2. Error 31553 Duplicate Key Error
    This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information, which could be gathered from the Events being logged on the Management Server.
    We then updated a few of these rows to have a slightly different time to what was already in the Database. We noticed that the event kept logging with a different row each time we updated the previous row. We ran the following query to find out how many rows
    actually had duplicates:
    select * from ManagedEntityProperty mep
    inner join ManagedEntity me on mep.ManagedEntityRowId = me.ManagedEntityRowId
    inner join ManagedEntityStage mes on mes.ManagedEntityGuid = me.ManagedEntityGuid
    where mes.ChangeDateTime = mep.FromDateTime
    order by mep.ManagedEntityRowId
This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database. (Best to have MS check this one out for you if you have a lot of data.)
    After doing this there was a lot of data moving around the Staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done everything worked
    properly and all the queues stayed under control.
    To confirm things had been cleared up we checked the AlertStage table had no entries and the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the Management server.
    Hopefully this can help someone, or provide a bit more information on these problems.

  • Service Manager Data Warehouse Install - Analysis Server Configuration For OLAP Cubes Fail

    Hello everyone,
    I have an issue with my installation of the Data Warehouse for System Center Service Manager 2012 SP1.
    My install environment is the following:
    Windows Server 2012 – System Center Service Manager (Successfully Installed) - Virtual
    Windows Server 2012 – System Center Data Warehouse (Pending) - Virtual
Windows Server 2012 – MS SQL Server 2012 – Physical, Clustered, 1st of Four Servers
The SQL Server is a clustered installation with named instances, specifically for SharePoint and Service Manager. Each instance has its own IP address and dynamic ports are turned off. I'm installing using the domain administrator account and I also chose to run the installer as administrator. The domain admin has sysadmin rights on the Service Manager server and the instance I'm trying to install on. However, the account does not have sysadmin rights on some of the other instances.
The install is smooth up until it needs to connect to the Analysis Services database. I have tried connecting to the Analysis servers on the other SQL Servers on site and all were successful. The only difference between the older SQL Servers, the SQL 2012 development server, and the SQL 2012 production server I'm trying to install to is that the domain admin account doesn't have sysadmin access on all the databases on the new production server. The SQL Server is being installed and configured by a contractor, so if you have troubleshooting suggestions, I'll need to coordinate with the contractor.
Starting with the error screen (screenshot not reproduced here), I began searching for help online. There seems to be no one else with this issue, or it is not documented properly. I opened a ticket with MS, called the contractor and troubleshot with him, troubleshot as far as I could on my own, and I'm still at a loss as to what is preventing the installer from connecting specifically to the Analysis server.
I first thought the installer was at issue, or that the data warehouse server was at issue. But all signs are pointing at the SQL Server. The installer is able to connect to all the other SQL Servers, including other 2012 servers (same versions), so it can't be the installer. I'm pretty sure the SQL Server is going to be at issue.
After looking at this error, I opened the resource monitor and clicked the dropdown to see if it was trying to connect to the correct server, and it was. I then connected to the old and new test and development servers successfully, then connected to the SQL 2008 R2 production cluster successfully. I then compared the two servers. The only difference, other than the version numbers, is that the admin account doesn't have sysadmin rights on all the SQL 2012 database servers. But the database servers are not the problem; the Analysis servers are.
I then checked the event logs, and they are empty as far as this issue is concerned. Actually, there are no errors on the SQL 2012 production box or the data warehouse box. I then checked the log that the installer creates during every step of the installation, and this is what is written when the dropdown on the Analysis server configuration screen is clicked. The log file location is:
    “C:\Users\admin\AppData\Local\Temp\2\SCSMSetupWizard01.txt”
    In the file is the following text.
    01:03:34:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012
    01:03:34:Using SQL Server 2012 management scope on SCSMSQL2012
    01:03:36:Collecting SQL instances on server SCSMSQL2012
    01:03:36:Attempting connection to SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:36:Using SQL Server 2012 management scope on SCSMSQL2012.johnsonbrothers.com
    01:03:38:Found SQL Instance: SCSMSQL2012\PWGSQL2012
    01:03:38:Found SQL Instance: SCSMSQL2012\SCSMSQL2012
    01:03:39:Error:GetSqlInstanceList(), Exception Type: Microsoft.AnalysisServices.ConnectionException, Exception Message: A connection cannot be made. Ensure that the server is running.
    01:03:39:StackTrace:   at Microsoft.AnalysisServices.XmlaClient.GetTcpClient(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenTcpConnection(ConnectionInfo connectionInfo)
       at Microsoft.AnalysisServices.XmlaClient.OpenConnection(ConnectionInfo connectionInfo, Boolean& isSessionTokenNeeded)
       at Microsoft.AnalysisServices.XmlaClient.Connect(ConnectionInfo connectionInfo, Boolean beginSession)
       at Microsoft.AnalysisServices.Server.Connect(String connectionString, String sessionId, ObjectExpansion expansionType)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetASVersion(StringBuilder sqlInstanceServiceName)
       at Microsoft.SystemCenter.Essentials.SetupFramework.HelperClasses.SetupValidationHelpers.GetSqlInstanceList(String sqlServerName, Int32 serviceType)
    I’m now investigating the issue according to this output, and decided to ask you all if you’ve run into this issue and found a resolution.

I am running into the same issue, but I don't see anything in the <Instances/> section related to port IPv6. I do see it in the <Listener> section; I tried to remove it, but it comes back again. Please help.
    <ConfigurationSettings>
    <Security>
    <RequireClientAuthentication>0</RequireClientAuthentication>
    <SecurityPackageList/>
    </Security>
    <Network>
    <Listener>
    <RequestSizeThreshold>4095</RequestSizeThreshold>
    <MaxAllowedRequestSize>0</MaxAllowedRequestSize>
    <ServerSendTimeout>60000</ServerSendTimeout>
    <ServerReceiveTimeout>60000</ServerReceiveTimeout>
    <IPV4Support>2</IPV4Support>
    <IPV6Support>2</IPV6Support>
    </Listener>
    <TCP>
    <MaxPendingSendCount>12</MaxPendingSendCount>
    <MaxPendingReceiveCount>4</MaxPendingReceiveCount>
    <MinPendingReceiveCount>2</MinPendingReceiveCount>
    <MaxCompletedReceiveCount>9</MaxCompletedReceiveCount>
    <ScatterReceiveMultiplier>5</ScatterReceiveMultiplier>
    <MaxPendingAcceptExCount>10</MaxPendingAcceptExCount>
    <MinPendingAcceptExCount>2</MinPendingAcceptExCount>
    <InitialConnectTimeout>10</InitialConnectTimeout>
    <SocketOptions>
    <SendBufferSize>0</SendBufferSize>
    <ReceiveBufferSize>0</ReceiveBufferSize>
    <DisableNonblockingMode>1</DisableNonblockingMode>
    <EnableNagleAlgorithm>0</EnableNagleAlgorithm>
    <EnableLingerOnClose>0</EnableLingerOnClose>
    <LingerTimeout>0</LingerTimeout>
    </SocketOptions>
    </TCP>
    <Requests>
    <EnableBinaryXML>0</EnableBinaryXML>
    <EnableCompression>0</EnableCompression>
    </Requests>
    <Responses>
    <EnableBinaryXML>1</EnableBinaryXML>
    <EnableCompression>1</EnableCompression>
    <CompressionLevel>9</CompressionLevel>
    </Responses>
    <ListenOnlyOnLocalConnections>0</ListenOnlyOnLocalConnections>
    </Network>
    <Log>
    <File>msmdredir.log</File>
    <FileBufferSize>0</FileBufferSize>
    <MessageLogs>Console;System</MessageLogs>
    <Exception>
    <CreateAndSendCrashReports>0</CreateAndSendCrashReports>
    <CrashReportsFolder/>
    <SQLDumperFlagsOn>0x0</SQLDumperFlagsOn>
    <SQLDumperFlagsOff>0x0</SQLDumperFlagsOff>
    <MiniDumpFlagsOn>0x0</MiniDumpFlagsOn>
    <MiniDumpFlagsOff>0x0</MiniDumpFlagsOff>
    <MinidumpErrorList>0xC1000000, 0xC1000001, 0xC100000C, 0xC1000016, 0xC1360054, 0xC1360055</MinidumpErrorList>
    <ExceptionHandlingMode>0</ExceptionHandlingMode>
    <MaxExceptions>500</MaxExceptions>
    <MaxDuplicateDumps>1</MaxDuplicateDumps>
    </Exception>
    </Log>
    <Memory>
    <HandleIA64AlignmentFaults>0</HandleIA64AlignmentFaults>
    <PreAllocate>0</PreAllocate>
    <VertiPaqPagingPolicy>0</VertiPaqPagingPolicy>
    <PagePoolRestrictNumaNode>0</PagePoolRestrictNumaNode>
    </Memory>
    <Instances/>
    <VertiPaq>
    <DefaultSegmentRowCount>0</DefaultSegmentRowCount>
    <ProcessingTimeboxSecPerMRow>-1</ProcessingTimeboxSecPerMRow>
    <SEQueryRegistry>
    <Size>0</Size>
    <MinKCycles>0</MinKCycles>
    <MinCyclesPerRow>0</MinCyclesPerRow>
    <MaxArbShpSize>0</MaxArbShpSize>
    </SEQueryRegistry>
    </VertiPaq>
    </ConfigurationSettings>

  • Service manager console can't connect to Service manager data warehouse SQL reporting services

When I start the Service Manager console, it gives this kind of error:
    The Service Manager data warehouse SQL Reporting Services server is currently unavailable. You will be unable to execute reports until this server is available. Please contact your system administrator. After the server becomes available please close your
    console and re-open to view reports.
Also, Event Viewer says:
    cannot connect to SQL Reporting Services Server. Message= An unexpected error occured while connecting to SQL Reporting Services server: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.EnterpriseManagement.Reporting.ReportingService.ReportingService2005.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String searchPath, IList`1 criteria, Boolean And)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String itemPath)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItem(String itemPath, ItemTypeEnum[] desiredTypes)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.GetFolder(String path)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReportingGroup.Initialize()
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath, NetworkCredential credentials)
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath)
    at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup() Remediation = Please contact your Administrator.
We have a four-server setup where SCSM, SCDW, and the SQL databases for both are on different servers. Also, I have read that this could be an SPN problem, but this was worked on last week without the SPNs.

On the computer where you get the "SQL Reporting Services server is currently unavailable" message, please open Internet Explorer and try to connect to the URL http://<NameOfReportingServer>/reports
This should open the reporting website in IE. If this isn't working, you should check the proxy settings in IE. If the URL doesn't work in IE, it won't work in the SCSM console either (and vice versa).
    Andreas Baumgarten | H&D International Group
Actually, I can't access the reporting website. It asks me for credentials 3 times and then returns a blank page. An error message also appears in the Event Viewer System log with ID 4 and source Security-Kerberos.
    The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server "accountname".
    The target name used was HTTP/"reporting services fqn". This indicates that the target server failed to decrypt the ticket provided by the client.
    This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using.
    Ensure that the target SPN is only registered on the account used by the server.
    This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service.
    Ensure that the service on the server and the KDC are both configured to use the same password.
    If the server name is not fully qualified, and the target domain (domain.com) is different from the client domain (domain.com), check if there are identically named server accounts in these two domains,
    or use the fully-qualified name to identify the server.
I can access the website directly from the server which hosts Reporting Services.
Also, when I query setspn -Q HTTP/<reporting services fqn>, the result is NO SUCH SPN FOUND.
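If the SPN really is missing, the usual fix (a hedged sketch; the names are placeholders, and the SPN must be registered on the account the Reporting Services service actually runs as) is:

     setspn -S HTTP/<reporting services fqn> DOMAIN\RSServiceAccount
     setspn -S HTTP/<reporting server netbios name> DOMAIN\RSServiceAccount

setspn -S (Windows Server 2008 and later) checks for duplicates before adding; on older systems use -A and check for duplicates yourself with setspn -Q first.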

  • Table and Index compression in data warehouse - thoughts?

    Hi,
    We have a data warehouse with large fact tables and materialized views of this data.
Approx. 3 million inserts per day; at weekends, about 12 million.
The fact tables are expected to have around 200 million rows, and a couple will have 1-3 billion.
Tables are partitioned and have bitmap indexes.
Just wondered what the thoughts were about compressing large fact tables and mviews, both from the point of view of ETL into them and of reporting from them afterwards.
I take it one can compress/uncompress accordingly without any problem?
    Many Thanks

After compression, most SELECT statements will not get slower. Actually, many can get faster due to reduced IO and buffer needs.
The situation with DML is more complex. It depends on the exact compression options (basic or advanced) and the DML type (INSERT, UPDATE, direct load, ...), but generally DML is negatively affected by compression.
In a data warehouse (DW), it is usually quite beneficial to compress partitions or tables that contain data that is not supposed to be modified (read only or read mostly). Please note that in many cases you do not have to compress while you are loading the data; you can do that later, as in the sketch below.
    You can also consider compressing some of your B-tree indexes (if you use them in your DW system).
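A minimal sketch of the load-then-compress approach, with placeholder table, partition and index names; note that MOVE rewrites the segment, so local (bitmap) index partitions go UNUSABLE and must be rebuilt afterwards:

     ALTER TABLE sales_fact MOVE PARTITION sales_2013_q1 COMPRESS;
     ALTER INDEX sales_fact_prod_bix REBUILD PARTITION sales_2013_q1;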
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • Update data automatically in fact table in Data Warehouse

    Hi,
I'm working on the creation of a data warehouse that includes different data sources: SQL Server performance (more than one server), Active Directory users, server performance (more than one server), and Exchange Server mailboxes. The problem is that performance data changes frequently (like CPU and memory), so my question is how to update the data in the fact table every 5 seconds automatically with SSIS.
    Thank you for any advice  

I'm assuming you have already figured out how to capture the data (e.g. PowerShell, Extended Events, MDW, etc.) and just need to know what dimension and fact tables you need.
You need to decide how often you are going to capture this data, and based on that you will have dimensions with the appropriate grain. Don't try to cram everything into the same fact table if it is not of the same granularity. Also, separate processes usually have separate fact tables.
In addition to the Date dimension, you will need a Time dimension with a grain of 1 second (or maybe 5 seconds, if that is when you get your data), then run the SSIS package every 5 seconds to capture and append that data to the fact table; a sketch of such a Time dimension follows.
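A hedged T-SQL sketch of populating such a 1-second-grain Time dimension; dbo.DimTime and its columns are placeholder names:

     -- 86,400 rows, one per second of the day
     WITH secs AS (
         SELECT TOP (86400)
                ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS sec_of_day
         FROM sys.all_objects a CROSS JOIN sys.all_objects b
     )
     INSERT INTO dbo.DimTime (TimeKey, [Hour], [Minute], [Second])
     SELECT sec_of_day,
            sec_of_day / 3600,
            (sec_of_day % 3600) / 60,
            sec_of_day % 60
     FROM secs;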
    - Aalamjeet Rangi | (Blog)

  • Accessing Data Warehouse with HTML DB

I have a test data warehouse database (10g) comprising seven dimension tables and one fact table. When I access one table at a time, the query runs fine, but when I join two or more dimension tables to the fact table, the result set comes out wrong. The performance is also very poor. Is HTML DB not capable of properly accessing data warehouse data?
    Here is the query I'm having problem with:
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
    FROM sales sl, product p, period pr, store s
    WHERE p.prodkey = sl.prodkey
    AND pr.perkey = sl.perkey
    AND p.prod_name LIKE 'Assam Gold%'
    OR p.prod_name LIKE 'Earl%'
    AND s.store_name LIKE 'Instant%'
    AND pr.month = 'NOV'
    AND pr.year = 2003
    ORDER BY p.prod_name, sl.dollars DESC
    Your input would be appreciated.

I doubt this was intentional, but you are not joining the store table to anything. You do filter the rows from that table with the AND s.store_name LIKE 'Instant%' predicate, but it is not joined to any of the other 3 tables. Your query will essentially return the number of rows from the other 3 tables multiplied by the number of rows returned from store. You might think about grouping some of your predicates for readability and possibly for correct logic.
SELECT p.prod_name, s.store_name, pr.week, sl.dollars
  FROM sales sl, product p, period pr, store s
 WHERE p.prodkey = sl.prodkey
   AND pr.perkey = sl.perkey
   -- Add missing predicate here
   -- AND s.something = sl, p, or pr .something
   -- end missing predicate
   AND (p.prod_name LIKE 'Assam Gold%'
        OR
        p.prod_name LIKE 'Earl%')
   AND s.store_name LIKE 'Instant%'
   AND pr.month = 'NOV'
   AND pr.year = 2003
 ORDER BY p.prod_name, sl.dollars DESC
Hope this helps,
    Tyler

  • Permanent Job Opportunity - Oracle BI Data Warehouse Developer Chicago, IL

    Submit Resumes to [email protected]
    The Business Intelligence Specialist will play a critical role in designing, developing, deploying, and supporting data warehouse/data mart applications. In this role, the person will be responsible for all BI aspects of a data warehouse/data mart application. Primary duties will be to create reporting standards, as well as coach and support power users with selected Oracle tool. The ideal candidate will have 3+ years demonstrated experience in data warehousing and Business Intelligence tools. Must also possess excellent communication skills and an outstanding track record with the user.
    Principal Duties:
    Participates with internal clients to define software requirements for development, maintenance and/or improvements
    Maintains accuracy, integrity, and availability of the data warehouse
    Tests, monitors, manages, and validates data warehouse activity, including data extraction, transformation, movement, loading, cleansing, and updating processes
    Designs and optimizes data mart models for Oracle Business Intelligence Suite.
    Translates the reporting requirements into data analysis and reporting solutions.
    Reviews and sign off on project plan(s).
    Reviews and sign off on technical design(s).
    Defines and develops BI reports for accessing/analyzing data in warehouse.
    Customizes BI tools and data sets for different types of users.
    Designs and develop UAT (User Acceptance Testing).
    Drives improvement of BI system architecture and development process.
    Develops and maintains internal relationships. Actively champions teamwork. Uses internal resources to enhance knowledge and expertise of industry, research, products and services. Provides information and support to others in the company.
    Required Skills:
    Education and Experience:
    BS/MS in Computer Science or equivalent.
    3+ years of experience with Oracle, PL/SQL Development and Data Warehousing.
Experience with Oracle Business Intelligence Suite and Crystal Reports is a plus.
    2-3 years dimensional modeling experience.
    Demonstrated hands on experience with Unix/Linux, SQL required.
    Demonstrated hands on experience with Oracle reporting tools.
    Demonstrated experience with translating business requirements into data analysis and reporting solutions.
    Experience in training programs/teach users to use tools.
    Expertise with software development process.
    Effective mediator - able to facilitate constructive and productive discussions with internal customers, external clients, and development personnel pertaining to feature definition, project scope, and status
Problem solving: identifies and resolves problems in a timely manner; gathers and analyzes information skillfully and maintains confidentiality.
Planning/organizing: prioritizes and plans work activities and uses time efficiently. Work requires continual attention to detail in composing and proofing materials, establishing priorities and meeting deadlines. Must be able to work in a fast-paced environment with demonstrated ability to juggle multiple competing tasks and demands.
Quality control: demonstrates accuracy and thoroughness and monitors own work to ensure quality.
Adaptability: adapts to changes in the work environment, manages competing demands and is able to deal with frequent change, delays or unexpected events.
    Benefits/Compensation:
    Employees enjoy competitive compensation. We have a full benefits package including medical and dental insurance, long-term disability and life insurance and a 401(k) plan.
    The client operates within the healthcare industry.
    This is a permanent full-time position. After ensuring your availability and qualifications we will put you in direct contact with the client to move forward in the process.

    FORWARD THE UPDATED RESUME AS SOON AS POSSIBLE.

  • How to Troubleshoot why data is not moving over into the Data Warehouse after Sql Server Agent Job Run

    Hello,
    Here is my problem:
Data was imported into the staging area. After resolving some errors and running the job, I got the data to move over to the next area. From there, data should be moving over into the DW. I have been troubleshooting for hours and cannot resolve this issue. I have restarted the SQL Server services, I have run a couple of packages manually, and the job is running successfully.
What are some reasons why data is not getting into the data warehouse? Where should I be looking?
    Your help is greatly appreciated!!

    Anything is possible.
So, just to reiterate: running the job manually works, and the scheduled job runs without errors but no data arrives in the DW, right? And it used to arrive, correct?
If so, the first step would be to examine the configuration(s). But not before you inspect the package. Do you have the ability to export it to the file system and open it in BIDS?
    Arthur My Blog

  • Management Data Warehouse Data Collection fails due to login failure

    Hello,
    I am trying to set up a Management Data Warehouse on a server other than the one I want statistics of.  Unfortunately, each upload fails because the data collector upload job cannot log onto the warehouse server.  For some inexplicable reason the process is trying to log on using domain\serverName.  Obviously no such user exists, and the process fails.  Below is the error message I see in the logs:
    Description: An error occurred with the following error message: "An error occurred while verifying the result set schema against the output table schema. The data collector cannot connect to the management data warehouse. : Login failed for user 'domain\server name$'.".
    Any help would be greatly appreciated.
    Thanks,
    Zachary

    http://technet.microsoft.com/en-us/library/bb677211.aspx says
    The data warehouse is installed on a different computer from the data collector. Probable causes are network connectivity problems or an unavailable host server. This error only affects upload packages.
    Handling: Because there is no advance notification about a server shutdown, this error cannot be anticipated and handled automatically. The error is logged and after a brief interval, the upload is restarted. After four unsuccessful upload attempts, the collection set is disabled and its state is written to the execution log.
    Note:
    Any data that is collected while the collection set is running is kept and accumulated. If the upload package can connect to the data warehouse, the accumulated data is uploaded.
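If the upload keeps failing with the 'domain\server name$' login error, that name is the machine account of the collecting server, so a hedged workaround is to grant that account write access to the warehouse database (all names below are placeholders; MDW stands in for your warehouse database name):

     USE [master];
     CREATE LOGIN [DOMAIN\CollectorServer$] FROM WINDOWS;
     USE [MDW];
     CREATE USER [DOMAIN\CollectorServer$] FOR LOGIN [DOMAIN\CollectorServer$];
     EXEC sp_addrolemember N'mdw_writer', N'DOMAIN\CollectorServer$';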
    Blog: http://dineshasanka.spaces.live.com
