Syntax for WriterLoginName in Data Warehouse DB

Hello
I'm having a few issues with our management servers writing to the Data Warehouse DB. I've checked the 'Management Group' table and can see that WriterLoginName is set to DOMAIN\sv-scom-dw; however, I'm wondering whether that field should instead read sv-scom-dw.
The account is in fact a domain account. It's listed as the 'Data Warehouse SQL Account' & 'Data Warehouse Action Account' (under Administration > Run As configuration > Accounts). 
We have two entries in the database security (with rights over OperationsManagerDW): one as DOMAIN\sv-scom-dw and a local SQL login called sv-scom-dw. Both accounts have the following permissions: apm_datareader, apm_datawriter, db_datareader, db_owner, OpsMgrReader, OpsMgrWriter, public.
We're a SCOM 2012 R2 environment. All servers are Windows Server 2012 R2, and SQL Server is 2012 Standard.
Has anyone faced a similar issue before? I'm seeing a lot of alerts in the Monitoring section for the Data Warehouse. One in particular:
Data Warehouse failed to discover performance standard data set. Failed to enumerate (discover) Data Warehouse objects and relationships among them. The operation will be retried.
Exception 'SqlException': Management Group with id ''5F201AB2-4B10-7FCC-C716-B2361102248D'' is not allowed to access Data Warehouse under login ''sv-scom-dw''
One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.Discovery.StandardDataSet
Instance name: Performance data set
Instance ID: {B81C47FB-A80D-0FE5-A8DB-DC4544FC8DA6}
Management group: ******
As you can see from the alert, the account referenced is 'sv-scom-dw' and not 'DOMAIN\sv-scom-dw', which is why I originally asked whether the field in the ManagementGroup table should be updated.
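For context, the check I'm running is essentially this (a sketch; it assumes the default dbo.ManagementGroup table name in OperationsManagerDW):
-- Inspect the writer login recorded for the management group
USE OperationsManagerDW;
SELECT * FROM dbo.ManagementGroup;   -- check the WriterLoginName column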
Thanks, David.

Hi guys.
Thanks for the responses; I shall provide an event ID shortly. In response to Mai: I've followed the link you posted and I'm now checking the 'data source and related settings', so I've gone to http://localhost/reports on the Warehouse server (which also hosts the reporting), and I get the following error:
The report server cannot decrypt the symmetric key that is used to access sensitive or encrypted data in a report server database. You must either restore a backup key or delete all encrypted content. (rsReportServerDisabled)
Keyset does not exist (Exception from HRESULT: 0x80090016)
Have you come across this before?

Similar Messages

  • Syntax for NOW() in date function

    Hi all
    What is the syntax for NOW() in the Date function? Can anybody help me?
    Thanks & Regards
    Hema

    Here's how I used it.
    In VC, select an expression box. When creating it, ensure you have the data type DATE selected and provide a field name. If you don't select a date data type it will not work (the default is text).
    In the Data source field section, under the Expression field, select Formula, and under Date Functions select NOW(). You can also format the date.
    If you want to select yesterday's date, use the following formula:
    DADD(NOW(),-1,'d')

  • How to convert number datatype to raw datatype for use in data warehouse?

    I am picking up the work of another grad student who assembled the initial data for a data warehouse, mapped out a dimensional DW and then created the initial fact and dimension tables. I am using Oracle Enterprise Edition 11gR2. The student was new to Oracle and used a datatype of NUMBER (without a length, defaulting to NUMBER(38)) for the dimension keys. The DW has 1 fact table and about 20 dimension tables at this point.
    Before refining the dw further, I have to translate all these dimension tables and convert all columns of Number and Number(n) (where n=1-38) to raw datatype with a length. The goal is to compact the size of the dw database significantly. With only a few exceptions every number column is a dimension key or attribute.
    The entire DW database is now sitting in a Data Pump dmp file. This has to be imported into the DB instance and then somehow converted so that all occurrences of the NUMBER datatype become RAW datatypes. BTW, there are other datatypes present, such as VARCHAR2 and DATE.
    I discovered that datapump cannot convert number to raw in an import or export, so the instance tables once loaded using impdp will be the starting point.
    I found there is a UTL_RAW package delivered with Oracle to facilitate using the RAW datatype. It has a number-to-raw function; I have never used it and am unsure how to incorporate it in the table conversions. I also hope to use OWB capabilities at some point, but I have never used it and only know that it has a lot of analytical capabilities. As a preliminary step I have done partial imports and determined the max length of every NUMBER column, so I can alter the present schema's NUMBER columns to an appropriate max length for each column in each table.
    Right now I am not sure what the next step is. Any suggestions for the data conversion steps would be appreciated.
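    For concreteness, the kind of per-column conversion I have in mind looks roughly like this (a sketch only; table and column names are made up, and I'm assuming UTL_RAW.CAST_FROM_NUMBER / CAST_TO_NUMBER are the relevant NUMBER-to-RAW calls):
    -- hypothetical dimension table: add a RAW shadow column, populate it from
    -- the existing NUMBER key, verify, then drop/rename the original
    ALTER TABLE dim_customer ADD (customer_key_raw RAW(22));
    UPDATE dim_customer
       SET customer_key_raw = UTL_RAW.CAST_FROM_NUMBER(customer_key);
    -- spot-check the round trip before dropping the NUMBER column
    SELECT customer_key,
           UTL_RAW.CAST_TO_NUMBER(customer_key_raw) AS round_trip
      FROM dim_customer
     WHERE ROWNUM <= 10;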

    Hi there,
    The post about "Convert Numbers" might help in your case. You might also be interested in "Anydata cast" or transformations.
    Thanks,

  • Using OBIEE for a custom Data Warehouse

    Hi Everyone,
    I am very new to OBIEE and I have a few questions about this product family.
    1. I have an existing custom-built data warehouse, and I would like to know: is it possible to build reports on this data warehouse?
    2. I understand that OBIEE comes with pre-built ETL jobs in Informatica; what kind of license is it? Is it possible to modify them, or even build new jobs that load into a non-OBIEE data warehouse?
    Your answers will be greatly appreciated.
    Jeffrey
    Edited by: user3265404 on Oct 13, 2009 12:50 PM

    It's the same Informatica and can do everything a standalone Informatica installation can. Additionally, it has prebuilt adapters for source systems such as Siebel, APPL, PSFT, JDE and SAP, plus some universal adapters, so the license includes these as well, which will cost more than getting an Informatica licence from Informatica Corp. Moreover, OBI Apps 7.9.6 comes with Informatica 8.6, which is a slightly older version of the tool; Informatica is going to release version 9 in a couple of weeks.
    I see that you already have a data warehouse, so why do you need an ETL tool again?
    OBI EE can report directly out of a data warehouse, and also out of transactional systems, as long as the metadata layer is built.
    PS: Am I clear?

  • BW Extractors for NON-BW Data Warehouse

    Hi,
    I am working with a client who wishes to use a custom-developed data warehouse. Is there any way to use SAP's standard extractors to extract data to these non-SAP DW systems (or to download the data in flat-file format from the extractor after a run)? We are mainly looking for the LO Cockpit and Finance extractors.
    Best regards,
    Nikhil

    One option I know of: using Informatica, you can extract data from SAP R/3 and load a non-BW data warehouse via the SAP BCI adapters. The BCI adapter makes use of the delivered extractors.
    Check out the WebEx replay from the InformaticaWorld conference:
    http://www11.informatica.com/replays/IW06_BOsession_Kato.wrf

  • Syntax for previous months data

    For some reason I cannot get the syntax for the previous month correct. Here is what I have used:
    ("Date"."Calendar Month" IN (SELECT case when 1=0 then "Date"."Calendar Month" else timestampadd (sql_tsi_month, -1, current_date) end FROM "RProBIEE"))
    However, it doesn't like this; I get an error.
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 22027] Union of non-compatible types. (HY000)
    SQL Issued: SELECT s_0, s_1, s_2, s_3, s_4 FROM ( SELECT 0 s_0, "RProBIEE"."Date"."Calendar Month" s_1, "RProBIEE"."Stores"."Store Name" s_2, "RProBIEE"."Sales Amounts"."Local Revenue" s_3, REPORT_SUM("RProBIEE"."Sales Amounts"."Local Revenue" BY "RProBIEE"."Stores"."Store Name") s_4 FROM "RProBIEE" WHERE (("Date"."Calendar Month" IN (SELECT case when 1=0 then "Date"."Calendar Month" else timestampadd (sql_tsi_month, -1, current_date) end FROM "RProBIEE"))) ) djm
    I think it is saying that it doesn't like (sql_tsi_month, -1, CURRENT_DATE); however, if I use CURRENT_MONTH it doesn't like that either.
    Any help is appreciated.

    Hello... The data type for the "Date"."Calendar Month" value is TINYINT. So I assume you are right that I need to CAST this so that it matches CURRENT_DATE. Or perhaps instead I need to CAST CURRENT_DATE as TINYINT to match? Do you happen to know the syntax? This is what I have, but it's not working; I get syntax errors.
    "Date"."Calendar Month" IN (SELECT case when 1=0 then "Date"."Calendar Month" else (cast (timestampadd (sql_tsi_month, -1, current_date))))

  • Syntax for querying between dates with ADO

    Hello,
    I am connecting to Oracle tables using ADO in Microsoft Access. I am not familiar with Oracle SQL. I am trying to execute the following query string but am not retrieving any records:
    strSql = "SELECT COUNT(*) " _
    & "FROM CCC2.CASE_EPRP WHERE CALL_DATE >= '1/1/2002' " _
    & "AND CALL_DATE <= '2/1/2002'"
    I can retrieve a record count if I only have the first date, but if I use the date range above the query returns zero records even though there are records for that date range. Could someone explain the correct way to write this query?
    Thanks,
    Rich

    You really don't want to rely on implicit string to date conversion ever. Oracle will use your NLS date settings to do the conversion, but different users (and different databases) may have this set differently, so one user might have '2/1/2002' convert to February 1, 2002 while another user might have it convert to January 2, 2002. A third user might not be able to convert the string at all.
    The proper way to do this is to use either Oracle syntax
    "where call_date >= to_date( '1/1/2002', 'MM/DD/YYYY' )
    and call_date <= to_date( '2/1/2002', 'MM/DD/YYYY' )"
    or to use the ODBC date escape sequence, {d }, to create the dates.
    If this isn't the problem, there may be issues because Oracle dates have a time component. If you don't specify a time, Oracle will default to midnight, so any call_date records after 12:00am on 2/1/2002 won't be found.
    Justin
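    Putting those two points together, the query itself could look like this (a sketch; note the exclusive upper bound so rows with a time component during 1 Feb 2002 are still counted):
    SELECT COUNT(*)
      FROM CCC2.CASE_EPRP
     WHERE CALL_DATE >= TO_DATE('01/01/2002', 'MM/DD/YYYY')
       AND CALL_DATE <  TO_DATE('02/02/2002', 'MM/DD/YYYY')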

  • Advice for a pseudo-data warehouse?

    Hello:
    We are looking into setting up a database that is a mirror of our production environment, for querying only. Running queries in our production environment has proven to be too much of a strain on resources. I have been researching setting up this mirrored machine as a managed standby database; it could then be queried when in "read only" mode. It seems like it would offer a lot of advantages. We are already generating redo logs on the production machine, so I don't think there would be much overhead added to it. One concern I have is the robustness of the Net8 transfer of the redo logs from the production machine to the standby machine. The redo logs would go over a WAN, and thus might be susceptible to blips in the connection. Is the transfer of the redo logs over Net8 robust enough to recover from this, or would the transfer just fail and not restart?
    Also, an additional requirement is for data from another database (Microsoft SQL Server) to be available in this standby database. Is this possible? It seems that when a database is set up as a standby, it is basically a slave to the primary database and no operations can be performed on it. Is that the case?
    I know these are kind of vague questions. I can provide more info on our requirements if anybody is curious...
    Thanks!
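    On the robustness concern raised above: the archive destination can be configured to retry after a failure rather than give up. A sketch of the relevant attribute (the standby service name is hypothetical; check the Data Guard documentation for the exact syntax in your release):
    -- REOPEN retries the failed destination after the given number of seconds
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=standby_db LGWR ASYNC REOPEN=300'
      SCOPE=BOTH;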


  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with the solutions to be proposed to my client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. The sizes are respectively 6 TB (retention policy of 3 years + the current year) and 1 TB. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    Client cannot go for a major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate the databases onto their own hosts, away from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability, improve performance and overcome the current headaches.
    We cannot upgrade the database to 11g, as BO is at version 6.5, which isn't compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
    So my role is vital in providing a solution for better performance and carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle Database 10g on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse: the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration behaviour.
    Also thinking of RAC, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
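    To make the partitioning point concrete, the kind of DDL we would be looking at for the big fact tables is roughly this (a sketch; table and column names are hypothetical):
    -- range-partition the fact table by date so old periods can be archived or
    -- compressed and queries prune to the partitions they touch
    CREATE TABLE sales_fact (
      sale_date   DATE          NOT NULL,
      store_key   NUMBER(10)    NOT NULL,
      product_key NUMBER(10)    NOT NULL,
      amount      NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_2012 VALUES LESS THAN (TO_DATE('01-01-2013','DD-MM-YYYY')),
      PARTITION p_2013 VALUES LESS THAN (TO_DATE('01-01-2014','DD-MM-YYYY')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );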
    Thanks,
    Tapan

    SGA = 27.5 GB and PGA = 50 MB.
    Also, I am pasting part of the STATSPACK report, excluding the snaps around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • RAC for Data Warehouse

    Hello,
    We have a research project for restructuring our data warehouse system.
    I would like to get some opinions about whether a RAC architecture can be a good solution for a data warehouse application.
    We use parallel queries massively. Would running these kinds of queries across the multiple servers of a RAC cluster result in performance degradation, compared with running them on a single monolithic server with multiple CPUs?
    I would appreciate any comments on using a RAC architecture for data warehouse systems.
    Regards,

    Maurice Muller wrote:
    Just keep in mind that during the last 4 years (I guess your current system is about 4 years old) the CPUs became much faster.
    A CPU can't work without data, which means that the I/O throughput has to be fast enough to feed all your cores with data.
    The main bottleneck of all the DWHs I have seen during the last 8 years was always the I/O, never the CPUs. And that's not just data warehousing, Maurice, but a basic principle for any data processing platform: the slowest layer is always the I/O layer... and it can be the most expensive one to solve too.
    Which is why newer technology like InfiniBand is exciting, as it can also serve as the I/O layer. Instead of using a traditional HBA, typically configured with 2Gb fibre channels to the storage layer, with HCA cards you can wire directly into an InfiniBand storage array... and this can run at speeds of up to 40Gb. Dual connections mean a total theoretical pipe size of 80Gb. I do not know of any other standard technology (like GigE) that can provide similar bandwidth.
    Back to RAC though: with RAC, when you add a new server, it comes with a new set of I/O pipes, plus of course more RAM and more CPU cores. SMP server architecture does not scale like this at all. You only have x number of slots for PCI cards, CPUs and RAM - a very specific ceiling that cannot be moved. With MPP this ceiling is a lot higher and more flexible.
    You can also replace dual-core, dual-CPU nodes with 6-core AMD Istanbul CPUs next year... and possibly 12-core CPUs the year after that. So even a smallish 4-node cluster with 16 cores in total can be grown significantly and remain a 4-node cluster, together with advances in HPC (High Performance Computing) like InfiniBand.
    I'm not seeing much use of non-RAC RDBMS architecture in the future. Databases are getting ever bigger because we have the technology to crunch more data, and crunch it a lot more intelligently than ever before. My first production database was 4MB in size, and ran on a Novell File Server with two 20MB disks. I'm currently testing a 24TB array for use for a single database.
    Technology is inevitable, as is the growth in data volumes. And I cannot see a non-RAC architecture rising to that challenge. Especially not in something like data warehousing.

  • Only Alert Data is not being inserted in SCOM 2012 Data Warehouse database

    Hi All,
    Alert data has not been getting inserted into the SCOM Data Warehouse database for the last 10 days, though I can see the latest Performance data in the DW DB. No changes were made, as far as I know, on the SCOM servers or DBs. I had this issue a few months back and it was resolved by executing a query to create a Data Warehouse Synchronization Server entry.
    Now I have checked the discovered inventory and can see an entry present, and it is healthy. Still, the latest Alert data is not getting inserted into the DW DB. Please help me out.
    http://social.technet.microsoft.com/Forums/en-US/2dac4f45-4911-40dc-a220-702993188832/alert-data-is-not-present-in-scom-2012-data-warehouse-database-since-two-weeks?forum=operationsmanagergeneral
    Regards, Suresh

    Hi,
    Generally, the data warehouse stores long-term data and by default keeps 400 days of data. I suggest checking your grooming configuration:
    How to Configure Grooming Settings for the Reporting Data Warehouse Database
    http://technet.microsoft.com/en-us/library/hh212806.aspx
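    To see the current retention without leaving SSMS, something along these lines can be run against OperationsManagerDW (a sketch; table names as commonly documented for the 2012 warehouse):
    -- days of data kept per data set and aggregation type
    SELECT ds.DatasetDefaultName,
           sda.AggregationTypeId,   -- 0 = raw, 20 = hourly, 30 = daily
           sda.MaxDataAgeDays
      FROM StandardDatasetAggregation sda
      JOIN Dataset ds ON ds.DatasetId = sda.DatasetId
     ORDER BY ds.DatasetDefaultName;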
    Alex Zhao
    TechNet Community Support

  • Alert data is not present in SCOM 2012 Data Warehouse database since two weeks

    Alert data has not been present in the SCOM 2012 Data Warehouse database for a week, though I can see Performance data for the latest dates. Old Alert data is present, but I think the latest Alert data is not being inserted into the Data Warehouse. No activity was done on the day from which we are missing data.
    I can see 31554 events on all my Management Servers, which proves that data insertion is happening. I am not sure why only Alert data is missing (or not getting inserted) in the DW database. I am trying to use SQL queries to fetch the data, as I don't have Reporting currently. The same query works for other dates, so there is no issue with the query itself.
    I have noticed that the Alert data is present in the SCOM OperationsManager DB but NOT present in the OperationsManagerDW database.
    In SCOM 2007, data is inserted into both the Ops DB and the DW simultaneously; I believe the same applies in 2012 too.
    Please help me to fetch Alert data from DW. Any suggestion pls?
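    For reference, the sort of check I'm running against the warehouse to see the newest alert that actually made it across (a sketch; it assumes the Alert.vAlert view in OperationsManagerDW):
    SELECT MAX(RaisedDateTime) AS NewestAlertInDW
      FROM Alert.vAlert;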
    Regards, Suresh

    Hi,
    Generally, the data warehouse stores long-term data and by default keeps 400 days of data. I suggest checking your grooming configuration:
    How to Configure Grooming Settings for the Reporting Data Warehouse Database
    http://technet.microsoft.com/en-us/library/hh212806.aspx
    Alex Zhao
    TechNet Community Support

  • Service Manager 2012 R2 - Data warehouse Issue

    I have an issue with a customer and their Data Warehouse server. Whenever we generate a report using Service Manager, we are not seeing data in the report. For example, we only see 4 incidents on reports when we generate them, and these are records that are many months old. Within the database there are 1000+ incidents, yet generating a report only shows us 4 of them. I'm trying to figure out why it's only showing a few records when it should show all of them. I have this issue now with two customers.
    I can see that the Data Warehouse jobs are running without issues; they are not failing. Please let me know how I can get this issue fixed.

    Open SQL Server Management Studio, connect to the instance that hosts the data warehouse database, and run a query against the following view.
    Incident
    SELECT * FROM [DWDataMart].[dbo].[IncidentDimvw]
    If the Incident query only returns 4 incidents, matching your report, then the sync to the data warehouse is not working correctly. I would recommend running Travis's ETL script to run all the data warehouse jobs in the correct order. You can find it here: https://gallery.technet.microsoft.com/PowerShell-Script-to-Run-a4a2081c
    If that still does not help, there are a few more blog posts for troubleshooting the data warehouse, but let's try this first and go from there.
    Cheers,
    Thomas Strömberg
    System Center Specialist
    Please remember to 'Propose as answer' if you find a reply helpful

  • Architectural Design - New Data Warehouse

    Hello All,
    This is my first post to the oracle discussion forums and I'm looking forward to the interactions with other ODWB users.
    I am just beginning to implement a design for a new data warehouse. Our team has already defined user requirements for a subset of the business (Sales/Marketing) and has committed a logical model to paper. We have installed our dev environment and are now ready to begin the work of creating our prototype.
    I've read all the Oracle documentation I can get my hands on regarding implementing DW objects and have been pondering the approach: ROLAP or MOLAP.
    It seems to make sense to deploy into a ROLAP environment, bringing in all our data from the staging area to create a stable relational data store, and then select the most-used or most-queried dimensions and facts to deploy in a MOLAP environment. Has anyone used this approach? Any lessons learned? Do you have to choose one method or the other, or can you take a blended approach? Would you deploy both in the same database instance or separate the two?
    thx

    I'm somewhat new to OWB coming from an Informatica background but in our environment, we are doing the same thing. Our Enterprise Data Warehouse will be based on ROLAP and I intend to use MOLAP for subsets of the EDW.
    Dimensions in Oracle are somewhat interesting in that they are "leveled" and you can tie cubes or "fact tables" to any level of the dimension. This is a bit un-Kimball-like and has taken some getting used to. I think it is a powerful feature but I will have to experiment some until I understand it better.
    One critical bug with 10.2 I've run into is with dimension roles - the time dimension, for instance. Typically this is one table that is aliased many, many times. If you exceed roughly 5 roles for the time dimension, generation of the object fails, since OWB generates a single anonymous PL/SQL block that exceeds 64k. It's a documented bug in development with no workaround, according to Metalink.
    Other gotchas are that table changes always try to generate "create table" scripts even if you only add an index or change parallelism. We have had to do table maintenance outside OWB and then keep the metadata in sync up until now.
    I haven't done any of the MOLAP yet but from what I read there are some restrictions - such as you can't have roles on dimensions for MOLAP and I believe you can't have SCDs in MOLAP. I don't know how Time dimensions are handled in MOLAP without roles! Do people really generate tables for every single time dimension in OWB???
    Hope you share your experiences here!
    - Mike Taylor

  • Design the data warehouse around the reporting system?

    Hi All,
    A junior data warehouse developer resisted my suggestion to flatten out activity tables of differing grains into a single fact table (think sales order header, sales order detail, and even a third level of detail under each sales order detail). Although he agreed that flattening the fact tables into a single fact would be proper for a data warehouse, he's concerned that report developers will have an easier time querying the data warehouse with the 3 separate fact tables. I'm not sure if that's because the report developers don't like learning new schemas or because their reporting tool is just severely limited, mainly because I've never used Cognos. I assured him that a properly designed data warehouse will save on query execution time, but he's concerned about the reporting tool and how it may not work so well with the data warehouse.
    Did I give him the proper advice? It seems like a data warehouse should be built properly regardless of reporting-tool shortcomings. Assuming this tool is lousy, maybe they need a new reporting system for their new data warehouse.
    Thanks,
    Eric

    Hi Eric,
    One of the hard-and-fast rules of building a data warehouse is that, from a logical point of view, a fact table presents data at a certain level of granularity and you do not mix facts in fact tables. This is data warehousing 101.
    From your comment you seem to be suggesting mixing data of different granularity in the one table.
    Now, we have ways and means of co-habiting data that will appear as different fact tables in the one physical table; we control the physical placement of data in fact tables. But on SQL Server we would never mix facts at different granularities, or facts representing different data, in the one fact table. SQL Server supports that quite poorly.
    It is sad that in 2015 people are still messing up data warehouse projects from pure ignorance of what is available. We have data warehouse data models that are extremely extensive, but people just have to start from scratch, reinvent the wheel and fail over and over again. Sad but true.
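    To make that concrete, "one fact table per grain" looks roughly like this (a sketch; table and column names are hypothetical):
    -- grain: one row per sales order
    CREATE TABLE FactSalesOrder (
        OrderKey     BIGINT        NOT NULL,
        OrderDateKey INT           NOT NULL,
        CustomerKey  INT           NOT NULL,
        OrderTotal   DECIMAL(18,2) NOT NULL
    );
    -- grain: one row per order line; header-level measures are not repeated here
    CREATE TABLE FactSalesOrderLine (
        OrderLineKey BIGINT        NOT NULL,
        OrderKey     BIGINT        NOT NULL,  -- link back to the order grain
        ProductKey   INT           NOT NULL,
        Quantity     INT           NOT NULL,
        LineAmount   DECIMAL(18,2) NOT NULL
    );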
    Best Regards 
    Peter Nolan
