Adding racks/nodes to Parallel Data Warehouse V2

Hi,
Wondering what the mechanism is for redistributing data among nodes when you add a rack to PDW V2. I recall that the way mentioned to do it in V1 was backup and restore, and I didn't see anything new discussed on the topic at PASS 2012. Thanks.

Hi SQLpro26,
Could you have a look at the following blog post as well as the white paper for SQL Server 2012 Parallel Data Warehouse?
http://www.jamesserra.com/archive/2013/03/parallel-data-warehouse-pdw-version-2/
http://download.microsoft.com/download/6/1/B/61B70E27-29AD-4A81-BF29-4945D523B967/SQL_Server_2012_Parallel_Data_Warehouse_Solution_Brief.pdf
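As a side note, one table-level approach often mentioned for redistributing data in PDW is CREATE TABLE AS SELECT (CTAS) into a new distribution layout. A minimal sketch, with hypothetical table and column names:
CREATE TABLE dbo.FactSales_redistributed
WITH (DISTRIBUTION = HASH(CustomerKey))
AS SELECT * FROM dbo.FactSales;
-- Swap the new table in once the copy is verified.
RENAME OBJECT dbo.FactSales TO FactSales_old;
RENAME OBJECT dbo.FactSales_redistributed TO FactSales;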
Hope this helps.
Regards,
Mike Yin
TechNet Community Support

Similar Messages

  • SQL Server Parallel Data Warehouse Appliance - Cross-domain Watcher node

    Hi,
    Is it supported to have the PDW appliance in one domain and the watcher node in a different domain?
    If so, is a domain trust required or can certificates be used?
    Thanks!

    Thanks for the response Mai, but it does not answer my question.
    I had already read the MP guide for System Center Monitoring Pack for SQL Server 2012 Parallel Data Warehouse Appliance before asking the question.
    The guide identifies that all monitoring of the SQL PDW appliance is performed from a dedicated server that acts as a "watcher node". What the guide does not indicate is whether the watcher node must be in the same domain as the SCOM management
    servers or if it can be located across an untrusted boundary.
    In my case, the untrusted domain that requires monitoring includes the PDW appliance, which has already been deployed, and I plan to leverage a new SCOM gateway to simplify the configuration and minimize the number of required certificates, firewall exceptions,
    etc., as other servers also require monitoring.
    I want to locate the watcher node within the untrusted boundary and need to determine if this is a Microsoft-supported configuration or not.
    Is there a more appropriate forum for this question, please?
    Much thanks!

  • SQL Server Parallel Data Warehouse (PDW) Licensing

    Hi All,
    We have a customer that's interested in SQL Server Parallel Data Warehouse
    (PDW). I'm told this is an appliance sold by a manufacturer like Dell or HP, but
    I also see a licensing price on the EA price list.
    Can they also purchase PDW under a VL Agreement?
    Regards,
    DSarao

    Yes. Microsoft sells PDW as an appliance; the purchase includes both the software and the hardware, and both are required.

  • The Parallel Data Warehouse (PDW) features are not enabled.

    Trying to use the OVER clause and receiving the following message:
    Msg 11305, Level 15, State 10, Line 2
    The Parallel Data Warehouse (PDW) features are not enabled.
    Microsoft SQL Server Management Studio 10.0.5500.0
    Microsoft Analysis Services Client Tools 10.0.5500.0
    Microsoft Data Access Components (MDAC) 6.1.7601.17514
    Microsoft MSXML 3.0 5.0 6.0 
    Microsoft Internet Explorer 9.0.8112.16421
    Microsoft .NET Framework 2.0.50727.5456
    Operating System 6.1.7601

    That's only the information about your client tools; the SQL Server version is more of interest. You can query it with
    SELECT @@VERSION
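    If the version string is awkward to parse, SERVERPROPERTY returns the pieces separately (a small sketch; these are standard SQL Server properties):
    SELECT SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version;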
    Olaf Helper

  • SQL Server PDW (Parallel Data Warehouse)

    Dear,
    There are several features that are unsupported in SQL Server PDW (Parallel Data Warehouse),
    such as UDFs (user-defined functions), extended stored procedures, CLR, cursors, triggers,
    @@SPID, SUSER_SNAME(), and so on.
    I'm wondering when these features will be supported.
    Could you let me know an approximate target date or roadmap?
    Thanks, Jungwon

    As far as I can tell, these features do not exist because the appliance's purpose is to exclude components that can introduce a chance of performance degradation, such as CLR and extended stored procedures, and to focus on the core features for now.
    The word on the street is that you can keep your own CLR code in a business layer, while PDW handles the mass data analytics.
    For something like @@SPID, you should use the management web console and the DMVs, which provide richer information.
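    For example, a sketch of a session lookup through the PDW DMVs (column names assumed from the PDW documentation; check them against your appliance):
    -- Sessions currently known to the appliance; roughly the PDW
    -- counterpart of what @@SPID/sysprocesses give you elsewhere.
    SELECT session_id, login_name, status, login_time
    FROM sys.dm_pdw_exec_sessions;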
    HTH

  • SQL Server 2012's Parallel Data Warehouse (PDW) Optional Add-on?

    Hi, I work for the State of Illinois and though we have a progressive IT shop, we tend to be a bit behind the curve. We are still using SQL Server 2005 for all our data needs and have been planning to move to 2012 for the past two years (this may actually
    occur soon, even though I have been trying to influence our department to go straight to the new SQL Server 2014). Because I was selected to work on our new Data Warehouse team, I have a keen interest in learning what I can about PDW.
    However, I suspect that just because we are moving up to 2012 does not mean that any PDW licensing will automatically be included, or does it? Can someone point me to an article on PDW licensing (I have seen something hinting at more than one
    level of PDW licensing)?

    Hi Ron. PDW is a scale-out SQL Server data warehouse workload on the Microsoft Analytics Platform System appliance. You can learn more about APS here (http://www.microsoft.com/aps). As for the licensing, PDW is licensed separately from SQL Server 2012/2014. 

  • Using SQLCMD, need to SET NOCOUNT OFF for SQL Server Parallel Data Warehouse (PDW) V2

    We are using SQLCMD to copy data to a flat file that is then imported into Oracle using SQL*Loader.
    At the bottom of the files created by SQLCMD there's a blank row (all NULLs) followed by a row that shows the total number of rows, e.g., (12571 rows affected).
    SQL*Loader allows us to remove headers (SKIP = 2), but it does not allow us to skip trailing records. Using the WHEN TABLE_ID != BLANKS clause I can remove the blank row above the count marker, but SQLLDR still tries to load the count marker into the first
    column, which fails the import.
    When using SET NOCOUNT in PDW I receive the error 'NoCount is not a recognized option'.
    Any suggestions on how to get around this and remove the trailing count?

    Hi Waldropj,
    Thank you for your question. I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Elvis Long
    TechNet Community Support

  • Adding new node to the Clusterware fails with the root.sh script.

    Dear All,
    I successfully added a third node to the existing 2-node cluster. After adding the new node I needed to run the root.sh script, but it was failing with the error below.
    Please help me with this issue:
    Instantiating scripts for add node (Monday, April 8, 2013 3:23:14 PM EDT)
    . 1% Done.
    Instantiation of add node scripts complete
    Copying to remote nodes (Monday, April 8, 2013 3:23:16 PM EDT)
    ............................................................................................... 96% Done.
    Home copied to new nodes
    Saving inventory on nodes (Monday, April 8, 2013 3:31:40 PM EDT)
    . 100% Done.
    Save inventory complete
    WARNING:
    The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
    /u01/app/11.2.0/grid/root.sh #On nodes svphxwgdbprd06
    To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
    root.sh script log:
    [root@svphxwgdbprd06 ~]# /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    User ignored Prerequisites during installation
    OLR initialization - successful
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'svphxwgdbprd06'
    CRS-2676: Start of 'ora.mdnsd' on 'svphxwgdbprd06' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'svphxwgdbprd06'
    CRS-2676: Start of 'ora.gpnpd' on 'svphxwgdbprd06' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'svphxwgdbprd06'
    CRS-2672: Attempting to start 'ora.gipcd' on 'svphxwgdbprd06'
    CRS-2676: Start of 'ora.cssdmonitor' on 'svphxwgdbprd06' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'svphxwgdbprd06' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'svphxwgdbprd06'
    CRS-2672: Attempting to start 'ora.diskmon' on 'svphxwgdbprd06'
    CRS-2676: Start of 'ora.diskmon' on 'svphxwgdbprd06' succeeded
    CRS-2676: Start of 'ora.cssd' on 'svphxwgdbprd06' succeeded
    ASM created and started successfully.
    Disk Group DATA created successfully.
    clscfg: -install mode specified
    clscfg: EXISTING configuration version 5 detected.
    clscfg: version 5 is 11g Release 2.
    Successfully accumulated necessary OCR keys.
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Failed to initialize Oracle Cluster Registry for cluster, rc 105
    Oracle Grid Infrastructure Repository configuration failed at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6818.

    The document references posted already are very good ones. However, I would say from personal experience (on Solaris and 10gR2) that the addnode tools gave me nothing but problems. Luckily, I was able to build a parallel cluster (with three nodes) on other hardware and then move the databases across via Data Guard. It was quicker and cleaner (and easier!) that way...
    Good luck!

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with solutions to propose to my client for upgrading their data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse plus one more data mart on the same host. Their sizes are, respectively, 6 terabytes (retention policy of 3 years + current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    The client cannot make major architectural or configuration changes to the existing environment now due to some constraints.
    However, they have agreed to separate the databases onto hosts separate from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability, improve performance, and overcome the current headaches.
    We cannot upgrade the database to 11g because BO is at version 6.5, which isn't compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
    So my role is vital in providing a solution for better performance and in carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have thought of the following so far:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit because the ETL tool doesn't support it).
    Install a new Oracle 10g database on the new host and move the data to it.
    Explore all the new features of 10gR2 that help data warehouses, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration behavior.
    Also thinking of RAC to provide an even better solution, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA = 27.5 GB and PGA = 50 MB.
    I am also pasting part of the STATSPACK report, excluding the snapshots around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                       % Total
    Event                           Waits      Time (s)    Ela Time
    CPU time                                    492,062       43.66
    db file sequential read   157,418,414       343,549       30.49
    library cache pin              92,339        66,759        5.92
    PX qref latch                  63,635        43,845        3.89
    db file scattered read      2,506,806        41,677        3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Oracle Development Survey: Data Warehouse Customers

    At the start of most data warehouse projects, or even during a project, I am sure you as customers try to find answers to the following questions to help you plan and manage your environments:
    * Where can I find trend and comparison information to help me plan for future growth of my data warehouse?
    * How many CPUs do other customers use per terabyte?
    * How many partitions are typically used in large tables? How many indexes?
    * How much should I allocate for memory for buffer cache?
    * How does my warehouse compare to others of similar and larger scale?
    The data warehouse development team here at Oracle would like to help provide answers to these questions. However, to do this we need your help. If you have an existing data warehouse environment, we would like to obtain more technical information about your environment(s): run a simple measurement script and return the output files to us here at Oracle. This will allow our developers to provide comprehensive documents that explain best practices and to get a better understanding of which features our customers use the most. It will also allow you, as customers, to benchmark your environments against other customers’ environments.
    From a company perspective we are also interested in feedback on the features we have added to the database: are these features used, how are they used, etc. For example, we are keen to understand:
    * Which initialization parameters are most frequently used at what values?
    * How many Oracle data warehouses run on RAC? on single nodes?
    * Is there a trend one-way or the other, especially as data volumes increase?
    * Does this change with newer releases of the database?
    All results from these scripts will be held confidential. No customers will be mentioned by name; only summaries and trends will be reported (e.g., “X percent of tables are partitioned and Y percent are indexed in data warehouses that are Z terabytes and larger in size”, or “X percent of Oracle9i and Y percent of Oracle10g data warehouses surveyed run RAC”). Results will be written up as a summarized report, and every participating customer will receive a copy.
    Terabyte and larger DWs are the primary interest, but information on any data warehouse environment is useful. We would like to have as many customers as possible submit results, ideally by the end of this week. However, this will be an ongoing process, so regular feedback after this week is extremely useful.
    To help our developers and product management team please download and run the DW measurement script kit from OTN which is available from the following link:
    http://www.oracle.com/technology/products/bi/db/10g/dw_survey_0206.html
    Please return the script outputs using the link shown on the above web page, see the FAQ section, or alternatively mail them directly to me: [email protected].
    Thank you and we look forward to your responses.

  • Oracle Development Survey on Data Warehouses: How Does Yours Compare?

    969224 wrote:
    Hi guys, just a quick question: when we have a primary key on 4 columns and we have, say, 20 million rows and we want to add one extra row, how does Oracle check that the primary key of the record being added is unique compared to the 20 million rows? Does it actually compare the record being added to all the rows present in the table?
    Not the whole row; it compares the 4 columns in the INDEX against the 4 columns in the new row.
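    A quick illustration of that behavior (hypothetical table; the duplicate is rejected by an index lookup, not by scanning all existing rows):
    CREATE TABLE t (a NUMBER, b NUMBER, c NUMBER, d NUMBER,
                    CONSTRAINT t_pk PRIMARY KEY (a, b, c, d));
    INSERT INTO t VALUES (1, 2, 3, 4);  -- succeeds
    INSERT INTO t VALUES (1, 2, 3, 4);  -- fails with ORA-00001: unique constraint (T_PK) violated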

  • RAC for Data Warehouse

    Hello,
    We have a research project for restructuring our data warehouse system.
    I would like to get some opinions about whether a RAC architecture can be a good solution for a data warehouse application.
    We use parallel queries massively. Does running these kinds of queries on different servers in a RAC setup with multiple nodes result in performance degradation compared to running them on a single monolithic server with multiple CPUs?
    I will appreciate any comments on using a RAC architecture for data warehouse systems.
    Regards,

    Maurice Muller wrote:
    Just keep in mind that during the last 4 years (I guess your current system is about 4 years old) the CPUs became much faster. A CPU can't work without data, which means that the I/O throughput has to be fast enough to feed all your cores with data. The main bottleneck of all DWHs I have seen during the last 8 years was always the I/O, never the CPUs.
    And not just data warehousing, Maurice, but a basic principle for any data processing platform: the slowest layer is always the I/O layer... and it can be the most expensive one to solve too.
    Which is why newer technology like InfiniBand is exciting, as it can also serve as the I/O layer. Instead of using the traditional HBA, which is typically configured with 2Gb fibre channels to the storage layer, with HCA cards you can wire this directly into an InfiniBand storage array... and this can run at speeds of up to 40Gb. Dual connections mean a total theoretical pipe size of 80Gb. I do not know of any other standard technology (like GigE) that provides similar bandwidth.
    Back to RAC though: with RAC, when you add a new server, it comes with a new set of I/O pipes... plus of course more RAM and more CPU cores. SMP server architecture does not scale like this at all. You only have x number of slots for PCI cards, CPUs and RAM: a very specific ceiling that cannot be moved. With MPP this ceiling is a lot higher and more flexible.
    You can also replace dual-core, dual-CPU nodes with 6-core AMD Istanbul CPUs next year... and possibly 12-core CPUs the year after that. So even a smallish 4-node cluster with 16 cores in total can be grown significantly and remain a 4-node cluster, together with advances in HPC (High Performance Computing) like InfiniBand.
    I'm not seeing much use for a non-RAC RDBMS architecture in the future. Databases are getting ever bigger because we have the technology to crunch more data, and to crunch it a lot more intelligently than ever before. My first production database was 4MB in size and ran on a Novell file server with two 20MB disks. I'm currently testing a 24TB array for use with a single database.
    Technology is inevitable, as is the growth in data volumes. And I cannot see a non-RAC architecture rising to that challenge, especially not in something like data warehousing.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run
    ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use STATSPACK since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000);
    drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already load direct?
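    A rough sketch of those suggestions, with hypothetical object names:
    -- Cache sequence values so PK generation is not a recursive call per row.
    ALTER SEQUENCE sales_pk_seq CACHE 10000;
    -- Disable triggers and mark unneeded indexes unusable for the load
    -- (older versions need ALTER SESSION SET skip_unusable_indexes = TRUE).
    ALTER TABLE sales_fact DISABLE ALL TRIGGERS;
    ALTER INDEX sales_fact_ix1 UNUSABLE;
    -- Direct-path insert bypasses the buffer cache, and with NOLOGGING
    -- it also avoids most redo.
    INSERT /*+ APPEND */ INTO sales_fact SELECT * FROM staging_sales;
    COMMIT;
    -- Rebuild indexes and re-enable triggers afterwards.
    ALTER INDEX sales_fact_ix1 REBUILD;
    ALTER TABLE sales_fact ENABLE ALL TRIGGERS;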
    Dim

  • Service Manager data warehouse management server Installation fails

    Hi there,
    In a Windows Server 2012 R2 Standard virtual machine, with my user being a local admin and SQL admin, I tried to do a first installation of the Service Manager data warehouse management server,
    which fails with an error.
    In the event viewer I get the following error:
    "Microsoft System Center 2012 R2 Service Manager -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 25211
    The arguments are: -2147024809, The parameter is incorrect."
    In the Setup log, some of the errors are:
    WixRemoveFoldersEx:  Entering WixRemoveFoldersEx in C:\Windows\Installer\MSI35E.tmp, version 3.7.1224.0
    WixRemoveFoldersEx:  Error 0x80070057: Missing folder property: PSCONFIGFOLDER.A591E3B4_D228_431D_BF89_99D52C8FFB76 for row: wrf4582BC4C5CC47B1D2380408CD7A752DC.A591E3B4_D228_431D_BF89_99D52C8FFB76
    CustomAction WixRemoveFoldersEx.A591E3B4_D228_431D_BF89_99D52C8FFB76 returned actual error code 1603 but will be translated to success due to continue marking
    CAStartServices: CAStartServices was passed . OMCFG
    CAStartServices: Checking if service already started. OMCFG
    CAStartServices: Attempting to start service. OMCFG
    CAStartServices: StartService failed. Error Code: 0x8007042D.
    ConfigureSDKConfigService: CAStartServices failed, trying again.... Error Code: 0x8007042D. OMCFG
    Action start 17:47:05: _SetHealthServiceConfig.80B659D9_F758_4E7D_B4FA_E53FC737DCC9.
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMServer
    MSI (s) (EC!4C) [17:47:05:483]: Note: 1: 2711 2: MOMGateway
    SetHealthServiceConfig: Failed to get Feature State.. Error Code: 0x80070646. MOMServer
    GetMsiFeatureState: Failed to get feature state. Error Code: 0x80070646. MOMGateway
    I have checked the following post but it did not help me:
    http://social.technet.microsoft.com/Forums/systemcenter/en-US/c42bb04d-a51e-4037-a8a3-37d714d6faac/scsm-management-server-installation-fails?forum=systemcenterservicemanager
    Could you please help me with this issue?
    Thanks a lot,
    M

    Hi,
    Sorry, I cannot post the full log. I have also found these errors in the log:
    Calling custom action CAManaged!Microsoft.MOMv3.Setup.MOMv3ManagedCAs.RegisterSdkSCP
    RegisterSdkSCP: There is no previous serviceConnectionPoint
    RegisterSdkSCP: Creating New serviceConnectionPoint
    RegisterSdkSCP: Adding ACL for current user: DOMAIN\InstallationAccount
    RegisterSdkSCP: Adding ACL for SM Admini: DOMAIN\SCSMDWadmins
    RegisterSdkSCP: Error: Access is denied.
    InstallCounters: LoadPerfCounterTextStrings() failed . Error Code: 0x80070057. momv3 "D:\Program Files\Microsoft System Center 2012 R2\Service Manager\MOMConnectorCounters.ini"
    InstallPerfCountersHelper: pcCounterInstaller->InstallCounters() for the default counters failed. Error Code: 0x80070057. MOMConnector
    InstallPerfCountersLib: InstallHealthServicePerfCounters() failed . Error Code: 0x80070057.
    InstallPerfCountersLib: Retry Count : .
    InstallHSPerfCounters: Failed to install agent perf counters. Error Code: 0x80070057.
    Thanks for your reply.

  • Implementing hierarchical structure in a data warehouse

    I want to create a data warehouse for a credit card application. Each user can have a credit card and multiple supplementary credit cards. Each credit card has a main limit, which can be subdivided into sub-limits for supplementary credit cards as requested by the user. Let us consider the following example:
    User “A” has a credit card “CC” with limit “L”, which is $100,000.
    User “A” requested a supplementary credit card “CC1”, which is assigned limit
    “L1” = $50,000. He requests another supplementary credit card “CC2”, which is assigned limit “L2” = $100,000.
    Source tables contain data like this:
    1. src_client_card_trans: contains transaction data of client/user credit card usage (client_id, credit_card_number, balance_acquired)
    Client_id     Credit_card_number     Balance_acquired
    A     CC1     $20,000
    A     CC2     $50,000
    A     CC     $70,000
    2. src_card_limits: contains client’s credit cards linked to credit limits.
    Credit_card_number     Limit_id
    CC1     L1
    CC2     L2
    CC     L
    3. src_limit_structure: contains the relationship of limits and sub-limits.
    Limit_id     Sub_Limit_id
    L     L1
    L     L2
    I have designed two dimensions and one fact table. Dimensions are:
    1. LIMITS: contains the limit_id.
    2. CLIENTS: contains credit card user’s information.
    And the fact table is LIMIT_BALANCES_FACT, which has some fact columns joined to the above dimensions.
    How can I implement the above scenario of limit hierarchy in data warehouse? Need your suggestions.
    Thanks in advance

    Much depends on how you want to analyze the data and there are a few options:
    1) Use credit limit as an attribute of the customer dimension. This would allow you to create query filters that can just show those customers with a $100,000 credit limit. This would return a list of credit cards (since the attribute would be assigned to each credit card) and then you can simply add or just keep the parents of that result set.
    However, this assumes you do not want to measure data specifically relating to the credit card limit. For example, it would not be possible to view the total amount spent by all customers who had a credit limit of $100,000.
    In this case the attribute, credit limit, is simply used to filter a result set
    2) Create a separate dimension called Credit Limit and create three levels:
    All
    Range
    Credit Limit
    The level Range would contain groupings of credit limits such as 100-500, 501-1200, and so on.
    This would allow you to analyse your data by customer and by credit limit over time, allowing you to slice and dice quickly and easily. A rough sketch of such a dimension follows.
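    As a minimal sketch of option 2 (hypothetical names, Oracle-style DDL), with each credit limit rolling up into a range bucket and everything rolling up into the All level:
    CREATE TABLE credit_limit_dim (
      limit_key    NUMBER PRIMARY KEY,
      limit_id     VARCHAR2(20),   -- e.g. 'L', 'L1', 'L2'
      limit_amount NUMBER,
      limit_range  VARCHAR2(30),   -- e.g. '100-500', '501-1200'
      all_level    VARCHAR2(10) DEFAULT 'All'
    );
    The fact table (LIMIT_BALANCES_FACT in your design) would then carry limit_key as a foreign key.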
    3) A second customer hierarchy could be added to the customer dimension. This would allow you to drill down through different credit limits to customers to individual credit cards. It would be advisable to follow the same approach as option 2 and create some groupings of the credit limits to make the drill-down easier for your business users to navigate:
    All
    Range
    Credit Limit
    Customer
    Credit Card
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/
