Possible solutions for data warehouse backup

We have a data warehouse with about 4TB of data hosted on a SPARC M9000 server, DB version 11.0.2. Currently our database backup is a weekly Data Pump export. We have tried switching the DB to archivelog mode and using RMAN, creating a weekly baseline and daily incrementals, but there was a pretty big impact on end-user performance. Are there any suggestions for backing up the database daily with less impact on end-user performance?
Thanks

So you have a lot of DML (and possibly DDL) while user queries are running? Is this a real-time data warehouse with data feeds running continuously? Or are there other jobs that execute the DML at the same time as user queries?
How big are your Online Redo Logs? Do you see "cannot allocate new log" and/or "checkpoint not completed" messages in the alert.log?
Is your archive log destination directory on slower disks?
Hemant K Chitale
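
For reference, a minimal sketch of the approach the question describes (weekly level 0, daily level 1), with two knobs that often reduce the end-user impact: block change tracking, so daily incrementals read only changed blocks instead of scanning every datafile, and a load-minimized backup window. The tracking-file path and window lengths below are placeholders, not recommendations.

    -- One-time setup in SQL*Plus (file path is a placeholder)
    ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
      USING FILE '/u01/oradata/DWH/change_tracking.f';

    # RMAN: weekly baseline, spread across a 6-hour window
    BACKUP INCREMENTAL LEVEL 0 DURATION 6:00 MINIMIZE LOAD DATABASE;
    # RMAN: daily incremental; with change tracking this reads only changed blocks
    BACKUP INCREMENTAL LEVEL 1 DURATION 2:00 MINIMIZE LOAD DATABASE;
    # RMAN: sweep the archived logs
    BACKUP ARCHIVELOG ALL DELETE INPUT;

Note that MINIMIZE LOAD spreads the backup I/O evenly over the stated window instead of finishing as fast as possible, so it trades a longer backup for less user-visible impact.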

Similar Messages

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help with solutions to propose to my client for upgrading their data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one data mart on the same host; their sizes are 6 terabytes (retention policy of 3 years plus the current year) and 1 terabyte respectively. The ETL tool and BO reporting tools are also hosted on the same host. This configuration is performing really poorly.
    The client cannot make major architectural or configuration changes to the existing environment right now due to some constraints.
    However, they have agreed to separate the databases onto hosts of their own, away from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability and better performance and to overcome the current headaches.
    We cannot upgrade the database to 11g because BO is at version 6.5, which isn't compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
    So my role is vital: provide a solid solution for better performance and carry out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    So far I have thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with 32-bit HP-UX (we cannot change to 64-bit because the ETL tool doesn't support it).
    Install a new Oracle 10g database on the new host and move the data to it.
    Explore all the new 10gR2 features that help a data warehouse: the SQL MODEL clause, parallel processing, partitioning, Data Pump (see the sketch after this post), and SPA to study pre- and post-migration behavior.
    Also considering RAC to provide an even better solution, as our main motive is to show a tremendous performance enhancement.
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan
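
    Since Data Pump is on the feature list for the move, here is a minimal sketch of a parallel full export/import between hosts; the directory object, dump-file names, parallel degree and log names are placeholders to adapt. (One caveat: Data Pump is new in 10g, so the initial move off 9.2.0.8 itself would have to use original exp/imp or a transportable-tablespace approach.)

        expdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=dw_%U.dmp PARALLEL=8 LOGFILE=dw_exp.log
        impdp system FULL=Y DIRECTORY=dp_dir DUMPFILE=dw_%U.dmp PARALLEL=8 LOGFILE=dw_imp.log

    With PARALLEL=8, Data Pump writes eight dump files concurrently (hence the %U substitution variable in DUMPFILE), which is usually the biggest single lever on export/import elapsed time.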

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, excluding the snapshots around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884159/71439/10582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/146748/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/67806/7405/2037/0
    shared pool 158,471,190 1,755,922 55,268 1704342/48369/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808/58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/19520/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/136/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • RAC for Data Warehouse

    Hello,
    We have a research project for restructuring our data warehouse system.
    I would like to get some opinions on whether a RAC architecture can be a good solution for a data warehouse application.
    We use parallel queries massively. Does running these kinds of queries across the multiple servers of a RAC cluster result in performance degradation, rather than running them on a single monolithic server with multiple CPUs?
    I would appreciate any comments on using a RAC architecture for data warehouse systems.
    Regards,
    Regards,

    Maurice Muller wrote:
    Just keep in mind that during the last 4 years (I guess your current system is about 4 years old) CPUs became much faster.
    A CPU can't work without data, which means that the I/O throughput has to be fast enough to feed all your cores with data.
    The main bottleneck of all DWHs I have seen during the last 8 years was always the I/O, never the CPUs.
    And not just data warehousing, Maurice, but a basic principle for any data processing platform: the slowest layer is always the I/O layer... and it can be the most expensive one to solve too.
    Which is why newer technology like InfiniBand is exciting, as it can also serve as the I/O layer. Instead of using a traditional HBA, typically configured with 2Gb fibre channels to the storage layer, with HCA cards you can wire directly into an InfiniBand storage array... and this can run at speeds of up to 40Gb. Dual connections mean a total theoretical pipe size of 80Gb. I do not know of any other standard technology (like GigE) that provides similar bandwidth.
    Back to RAC though - with RAC, when you add a new server, it comes with a new set of I/O pipes... plus of course more RAM and more CPU cores. SMP server architecture does not scale like this at all. You only have x number of slots for PCI cards, CPUs and RAM - a very specific ceiling that cannot be moved. With MPP this ceiling is a lot higher and more flexible.
    You can also replace dual-core, dual-CPU nodes with 6-core AMD Istanbul CPUs next year... and possibly 12-core CPUs the year after that. So even a smallish 4-node cluster with 16 cores in total can be grown significantly and remain a 4-node cluster, together with advances in HPC (High Performance Computing) like InfiniBand.
    I'm not seeing much use for non-RAC RDBMS architecture in the future. Databases are getting ever bigger because we have the technology to crunch more data, and crunch it a lot more intelligently, than ever before. My first production database was 4MB in size and ran on a Novell file server with two 20MB disks. I'm currently testing a 24TB array for use by a single database.
    Technology is inevitable, as is the growth in data volumes. And I cannot see a non-RAC architecture rising to that challenge, especially not in something like data warehousing.

  • Is RAC suited for Data Warehouse

    Hi,
    I have to propose the best architecture for our data warehouse project. As I googled, I found documents about Oracle RAC for data warehouses and the features Oracle RAC provides for a warehouse environment, like performance, availability...
    But I also see some documents that say Oracle RAC is not the best choice for data warehousing.
    I want to produce a flexible design that won't need to change after a few years, but our resources are limited: we have two IBM Power7 servers with 32 cores and 62GB of RAM each, and we estimate about 1 to 2TB of data yearly to be loaded into the warehouse. Since each server has just 62GB of RAM, what should we do?
    Implement Oracle RAC, or just run the warehouse as a single-instance database on one server and increase that server's resources?

    I am not sure which documents you have read that gave you the impression that RAC is for DWH. RAC is part of Oracle's high-availability solution architecture. It is not designed to cater to any specific kind of workload, but to provide HA in the case of a node or instance crash. Also, RAC itself doesn't give any performance benefit. If you have a database that is performing poorly, it will remain the same, or may even become worse, after implementing RAC, because many more things are involved in the architecture, e.g. storage, network, etc. So if you do want to implement RAC, give it careful thought, do some testing, and only then propose it. If you are determined to go for RAC and you are a pure DWH shop, I would suggest going for Exadata running RAC. Smart Scan and compression are just a few of the benefits that come with Exadata, along with RAC's HA solution, and they would serve your requirements better.
    HTH
    Aman....

  • Are there any TimesTen installations for data warehouse environments?

    Hi,
    I wonder if there is a way to install TimesTen as an in-memory database for a data warehouse environment?
    The DW today consists of a large Oracle database, and I wonder if and how a TimesTen implementation could be done.
    What kind of application changes would be involved in such an implementation, and so on?
    I know that the answer is probably complex, but if anyone knows about such an implementation and has some information about it, it would be great to learn from that experience.
    Thanks,
    Adi

    Adi,
    It depends on what you want to do with the data in the TimesTen database. If you know the "hot" dataset that you want to cache in TimesTen, you can use Cache Connect to Oracle to cache a subset of your Oracle tables in TimesTen. The key is to figure out which queries you want to run and check whether those queries are supported in TimesTen.
    Assuming you know the dataset you need to cache and you have control of your application code to change the connection to TimesTen (using ODBC or JDBC), you can give it a try. If you are using a third-party tool, you need to check whether the tool supports JDBC or ODBC access to the database, and point the tool at your TimesTen database instead of the Oracle database.
    If you are using the TimesTen Cache Connect to Oracle product option, data synchronization between Oracle and TimesTen is handled automatically by the product.
    Without further details of what you'd like to do, it's difficult to provide a more detailed recommendation.
    -scheung
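
    To make the reply concrete, here is a rough sketch of what a Cache Connect read-only cache group looks like on the TimesTen side; the schema, table and columns are made-up examples, and AUTOREFRESH is what keeps the cached subset synchronized from Oracle:

        CREATE READONLY CACHE GROUP dw_hot_sales
        AUTOREFRESH INTERVAL 10 MINUTES
        FROM dwh.daily_sales (
          sale_id    NUMBER NOT NULL,
          cust_id    NUMBER,
          sale_date  DATE,
          amount     NUMBER,
          PRIMARY KEY (sale_id)
        );

    The application then connects to TimesTen via ODBC/JDBC and queries dwh.daily_sales like an ordinary table, while TimesTen refreshes it from the Oracle base table on the stated interval.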

  • Data Access Object for Data Warehouse?

    Hi,
    Does anyone know what the DAO pattern looks like when it is used for a data warehouse rather than a normal transactional database?
    Normally we have something like CustomerDAO or ProductDAO in the DAO pattern, but in data warehouse applications JOINs are used and multiple tables are queried; for example, a query may contain data from the Customer, Product and Time tables. What should the DAO class be named - CustomerProductTimeDAO? Are there any differences in other parts of the pattern?
    Thanks in advance.
    SK

    In my opinion, there are no differences in the Data Access Object design pattern that have anything to do with any characteristic of its implementation or the storage format of the data the pattern is designed to work with.
    The core purpose of the DAO design pattern is to encapsulate data access code and separate it from the business logic code of the application. A DAO implementation might vary from application to application. The design pattern does not specify any implementation details. A DAO implementation can be applied to a group of XML data files, an Excel-based CSV file, a relational database, or an OS file system. The design is the same for all of these; it is the implementation that varies.
    The core difference between an operational database and a strategic data warehouse is why and how the data is used; it is not so much a technical difference. The relational design may vary, however: there may be more tables and ternary relationships in a data warehouse to support more fine-tuned queries, and fewer tables in an operational database to support insert/update efficiency.
    The DAO implementation for a data warehouse would be based on the model of the database. However the tables are set up, that is how the DAO is coded.

  • RMAN crosscheck and expire guidelines for data warehouse environment

    O/S: Windows Server 2008
    DB: Oracle 11gR2
    Are there any guidelines for how often one should run an RMAN crosscheck and set an expiration on archivelogs in data warehouse environments?
    It would seem once a day would be enough for the crosscheck in a data warehouse environment that gets refreshed nightly. For expiration I would expect no less than 1 week.
    Cheers!

    I agree with damorgan.
    Refer to the links below for best practices:
    http://www.oracle.com/technetwork/database/features/availability/311394-132335.pdf
    https://blogs.oracle.com/datawarehousing/entry/data_warehouse_in_archivelog_m
    Hope this helps,
    Regards
    http://www.oracleracexpert.com
    Understand the Power of Oracle RMAN
    http://www.oracleracexpert.com/2011/10/understand-power-of-oracle-rman.html
    Duplicating RAC database using RMAN
    http://www.oracleracexpert.com/2009/12/duplicate-rac-database-using-rman.html
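
    As a concrete starting point, a minimal sketch of the kind of nightly RMAN housekeeping discussed in this thread; the 7-day recovery window mirrors the one-week expiration suggested above and should be adapted to your own retention policy:

        CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
        -- Flag backup pieces and archived logs that have vanished from media
        CROSSCHECK BACKUP;
        CROSSCHECK ARCHIVELOG ALL;
        -- Drop catalog entries for vanished files, then backups outside the window
        DELETE NOPROMPT EXPIRED BACKUP;
        DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
        DELETE NOPROMPT OBSOLETE;

    Running this once a day, as suggested above, is usually plenty for a warehouse refreshed nightly.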

  • PrintWindow API with a possible solution for capturing a screenshot of a Google Chrome window

    Hi,
    as you all know, the PrintWindow API gives us a black image when we want to capture a screenshot of a Google Chrome window. So a friend told me that a possible solution to this problem is:
    Shrink the Google Chrome window by 1px on both sides and, after this, reset it to the original size. It will then repaint.
    Based on the code below, could someone help me make this work? Sincerely, I don't know where to begin.
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindowEx(IntPtr hwndParent, IntPtr hwndChildAfter, string lpszClass, string lpszWindow);
    [DllImport("user32.dll")]
    private static extern IntPtr GetDC(IntPtr WindowHandle);
    [DllImport("user32.dll")]
    private static extern void ReleaseDC(IntPtr WindowHandle, IntPtr DC);
    [DllImport("user32.dll")]
    private static extern IntPtr GetWindowRect(IntPtr WindowHandle, ref Rect rect);
    [DllImport("User32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool PrintWindow(IntPtr hwnd, IntPtr hDC, uint nFlags);
    [StructLayout(LayoutKind.Sequential)]
    private struct Rect
    {
        public int Left;
        public int Top;
        public int Right;
        public int Bottom;
    }
    public static Bitmap Capture(IntPtr handle)
    {
        // Measure the target window, then ask it to paint itself into our bitmap
        Rect rect = new Rect();
        GetWindowRect(handle, ref rect);
        Bitmap Bmp = new Bitmap(rect.Right - rect.Left, rect.Bottom - rect.Top);
        Graphics memoryGraphics = Graphics.FromImage(Bmp);
        IntPtr dc = memoryGraphics.GetHdc();
        bool success = PrintWindow(handle, dc, 0);
        memoryGraphics.ReleaseHdc(dc);
        return Bmp;
    }
    private void button1_Click(object sender, EventArgs e)
    {
        IntPtr WindowHandle = FindWindowEx(IntPtr.Zero, IntPtr.Zero, "Chrome_WidgetWin_1", null);
        Bitmap BMP = Capture(WindowHandle);
        BMP.Save("C:\\Foo.bmp");
        BMP.Dispose();
    }
    Any suggestions here are appreciated.

    Hello,
    I would prefer to capture the screen rather than get it from the application directly.
    This has been discussed in the following thread:
    Is there any way to hide the Chrome window and capture a screenshot, or convert the Chrome window to an image?
    In this case, you could remove the line "ShowWindowAsync(mainHandle, 0);" since you don't want to hide the window.
    using System;
    using System.Runtime.InteropServices;
    using System.Diagnostics;
    using System.Drawing;
    using System.Drawing.Imaging;

    [DllImport("user32.dll")]
    private static extern bool ShowWindowAsync(IntPtr hWnd, int nCmdShow);
    [DllImport("user32.dll")]
    public static extern bool GetWindowRect(IntPtr hWnd, out RECT lpRect);
    [DllImport("user32.dll")]
    public static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, int nFlags);

    public void WhateverMethod()
    {
        // initialize process and get hWnd
        Process chrome = Process.Start("chrome.exe", "http://www.cnn.com");
        // wait for the chrome window to open AND the page to load (important for process refresh);
        // you might need to increase the sleep time for the page to load, or monitor the "loading" title on Chrome
        System.Threading.Thread.Sleep(4000);
        chrome.Refresh();
        IntPtr mainHandle = chrome.MainWindowHandle;
        RECT rc;
        GetWindowRect(mainHandle, out rc);
        Bitmap bmp = new Bitmap(rc.Width, rc.Height, PixelFormat.Format32bppArgb);
        Graphics gfxBmp = Graphics.FromImage(bmp);
        IntPtr hdcBitmap = gfxBmp.GetHdc();
        PrintWindow(mainHandle, hdcBitmap, 0);
        gfxBmp.ReleaseHdc(hdcBitmap);
        gfxBmp.Dispose();
        bmp.Save("c:\\temp\\test.png", ImageFormat.Png);
        ShowWindowAsync(mainHandle, 0);
    }

    [StructLayout(LayoutKind.Sequential)]
    public struct RECT
    {
        private int _Left;
        private int _Top;
        private int _Right;
        private int _Bottom;

        public RECT(RECT Rectangle)
            : this(Rectangle.Left, Rectangle.Top, Rectangle.Right, Rectangle.Bottom)
        {
        }
        public RECT(int Left, int Top, int Right, int Bottom)
        {
            _Left = Left;
            _Top = Top;
            _Right = Right;
            _Bottom = Bottom;
        }

        public int X { get { return _Left; } set { _Left = value; } }
        public int Y { get { return _Top; } set { _Top = value; } }
        public int Left { get { return _Left; } set { _Left = value; } }
        public int Top { get { return _Top; } set { _Top = value; } }
        public int Right { get { return _Right; } set { _Right = value; } }
        public int Bottom { get { return _Bottom; } set { _Bottom = value; } }
        public int Height { get { return _Bottom - _Top; } set { _Bottom = value + _Top; } }
        public int Width { get { return _Right - _Left; } set { _Right = value + _Left; } }
        public Point Location
        {
            get { return new Point(Left, Top); }
            set { _Left = value.X; _Top = value.Y; }
        }
        public Size Size
        {
            get { return new Size(Width, Height); }
            set { _Right = value.Width + _Left; _Bottom = value.Height + _Top; }
        }

        public static implicit operator Rectangle(RECT Rectangle)
        {
            return new Rectangle(Rectangle.Left, Rectangle.Top, Rectangle.Width, Rectangle.Height);
        }
        public static implicit operator RECT(Rectangle Rectangle)
        {
            return new RECT(Rectangle.Left, Rectangle.Top, Rectangle.Right, Rectangle.Bottom);
        }
        public static bool operator ==(RECT Rectangle1, RECT Rectangle2)
        {
            return Rectangle1.Equals(Rectangle2);
        }
        public static bool operator !=(RECT Rectangle1, RECT Rectangle2)
        {
            return !Rectangle1.Equals(Rectangle2);
        }

        public override string ToString()
        {
            // reconstructed: the posted string literal was garbled
            return "{Left: " + _Left + "; Top: " + _Top + "; Right: " + _Right + "; Bottom: " + _Bottom + "}";
        }
        public override int GetHashCode()
        {
            return ToString().GetHashCode();
        }
        public bool Equals(RECT Rectangle)
        {
            return Rectangle.Left == _Left && Rectangle.Top == _Top && Rectangle.Right == _Right && Rectangle.Bottom == _Bottom;
        }
        public override bool Equals(object Object)
        {
            if (Object is RECT)
            {
                return Equals((RECT)Object);
            }
            else if (Object is Rectangle)
            {
                return Equals(new RECT((Rectangle)Object));
            }
            return false;
        }
    }
    And the key method used is the one shared in
    Get a screenshot of a specific application.
    Regards,
    Carl

  • Are analytic functions useful only for data warehouses?

    Hi,
    I deal with reporting queries on Oracle databases but I don't work on data warehouses, so I'd like to know whether learning analytic functions (SQL for analysis such as ROLLUP, CUBE, GROUPING, ...) might be useful in helping me develop better reports, or whether analytic functions are usually useful only for data warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... useful on an operational database too, or do they make sense only on a DWH?
    Thanks!

    Mark1970 wrote:
    thus is it worth learning them for improving report queries not on a DWH but on common operational databases?
    Why pigeonhole report queries as "operational" or "data warehouse"? Do you tell a user/manager, "No, this report cannot be done as it looks like a data warehouse report and we have an operational database!"?
    Data processing and data reporting requirements do not care what label you assign to your database.
    A simple real-world example of using analytical queries on a non-warehouse: we supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply, e.g. 100 customer elements (together with all their child elements) per file. Analytical SQL lets this be done by creating "buckets" that contain only 100 parent elements at a time (see the sketch below). The complete process is SQL driven - no slow-by-slow, row-by-row processing in PL/SQL with nested cursor loops and silly approaches like that.
    Analytical SQL is a tool in the developer's toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work to be done.
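
    A small sketch of the bucketing trick just described (table and column names are made up): ROW_NUMBER numbers the parent rows, and dividing by the bucket size assigns each parent to a file:

        SELECT customer_id,
               CEIL(ROW_NUMBER() OVER (ORDER BY customer_id) / 100) AS file_no
        FROM   customers;

    Each file_no value groups exactly 100 customers (the last bucket may hold fewer), so the XML extraction can be driven by a plain GROUP BY or a cursor partitioned on file_no - no procedural looping required.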

  • Example database for data warehouse

    Hi!
    Does anybody know where an example database for a data warehouse can be found?
    - a schema is good
    - a schema with data is better
    Best regards

    slkLinuxUser wrote:
    Does anybody know where an example database for a data warehouse can be found?
    Just like an OLTP database, the schema design and its data are 100% dependent on the business needs and (if done properly) the result of a thorough data analysis. Any kind of pre-designed sample would be nearly worthless to any actual application.

  • OBE for Data Warehouse 9.2

    Dear All
    Can anyone tell me the link to the Oracle By Example: Hands-On Tutorials for Data Warehouse 9.2?
    I have found the Oracle By Example: Hands-On Tutorials for Data Warehouse OWB 10g Release 1, 10g Release 2 and 11g at the following link:
    http://www.oracle.com/technology/obe/admin/owb_main.html
    Thanks in Advance.
    Best Regards
    Thunder2777

    Dear Dallan,
    I was looking for the Oracle By Example sort of tutorials for OWB 9.2.
    Actually, I want to learn OWB from 9.2 to 10.2. I have OWB 10.1 and OWB 10.2; what was missing was OWB 9.2.
    That's why I put it on the forum, so that anyone who knows where to find them can guide me.
    But given your comment that there is little difference between OWB 9.2 and OWB 10.1, if I can't get OWB 9.2 then it's better to start with OWB 10.1.
    Anyway, thanks.
    Thunder2777

  • Data Warehouse Backups

    We want to keep our database up as much as possible due to the number of data feeds coming in, and also so we can recover tablespaces at any given point in time. We tried running in archivelog mode but with NOLOGGING enabled so our archives would not grow enormously.
    When we recovered one of the tablespaces, we got block corruption errors because it could not apply some of the redo information. We then tried doing a block recovery through RMAN, but it wasn't able to fix it.
    It looks like we only have the option of taking cold backups, but we would like to recover tablespaces without doing a full recovery. It seems to me that taking periodic Data Pump exports would be the only way to recover tablespaces without a full recovery.
    Does anyone have any ideas? Is trying to run in archivelog mode futile? How does everyone else run their backups for their warehouse?

    For our DW (about 5 TB), we use archivelog mode with logging enabled for all tables. We take a weekly full level 0 backup, differential backups three times a week, and cumulative backups three times a week. Our full backup takes 14 hours; our differential takes 6 hours and our cumulative about 6 hours. You could make the incremental backups faster if you are running 10g by enabling block change tracking. We haven't implemented that because it would take some time to test all scenarios for recovery - we just haven't had time yet, and our backups run in an acceptable amount of time (for us).
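
    As an aside on the corruption described in the question: after a NOLOGGING operation, media recovery marks the affected blocks as logically corrupt because no redo exists for them. One way to spot datafiles carrying unrecoverable changes newer than your last backup, using the standard v$datafile columns:

        SELECT file#, unrecoverable_change#, unrecoverable_time
        FROM   v$datafile
        WHERE  unrecoverable_change# > 0;

    Any file listed needs a fresh backup taken after the NOLOGGING load; otherwise a later recovery through that load will surface exactly the corrupt-block symptom described above.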

  • Solution for Data transfer from Non-network location

    If I needed 10 non-networked locations around Australia to submit data (say, text files ranging from 20 to 100 MB each) to my server each day, with the data then loaded and manipulated into a database in a secure environment, could you describe the key points and a possible solution to this problem?

    Hi Amar,
    FTP [File Transfer Protocol] alone isn't a viable option to give the insight, security, performance and, ultimately, the risk mitigation necessary to responsibly conduct business. One solution for protecting and transferring sensitive or mission-critical data securely is Managed File Transfer (MFT). Managed File Transfer solutions provide a greater level of security, meet strict regulatory compliance standards and give you the reliability you need in a data transfer solution.
    Points to look for in a solution:
    a. Providing file transfer transparency throughout your entire organization.
    b. The solution should support the most modern security standards and methodologies, including SSL encryption, X.509 certificates and proxy certificates.
    c. The solution should streamline the audit process while also making that audit information accessible from a central point, saving you time and money.
    d. The solution should include functionality that allows data to be pre- and post-processed.
    e. The solution should ensure that all interrupted file transfers resume where they left off after a connection failure, without manual intervention.
    f. It should tightly integrate with your existing job scheduling solution to issue alerts if connections are not re-established within an acceptable time interval.
    g. It should adhere to current security and audit requirements, including SOX, GLBA and HIPAA.
    Hope this helps.
    Regards,
    Jimmy

  • Data warehouse backups and read only tablespaces

    Hi all,
    I am working on a data warehouse database with following specs:
    Version: Oracle 10.2.0.3 Enterprise
    OS: Solaris
    App: Data warehouse
    We use RMAN to take 'level 0' & 'level 1' backups. We have block change tracking enabled and RMAN backups up data files and archive logs straight to tape.
    I am exploring ways of reducing the 'level 0' backups and was specifically focusing on read-only tablespaces for this purpose.
    I have often seen it mentioned that a best practice in DWHs is to store the old static partitions of fact tables in read-only tablespaces so as to reduce the backup size.
    In case you have already implemented such a scheme, I would like to know how you have implemented it.
    I am thinking of the following mechanism:
    -- Start using backups at the tablespace level rather than 'level 0' at the database level.
    -- Record the latest SCNs of all datafiles prior to backup.
    -- If the latest SCN has not changed since the last backup and the tablespace is in read-only mode, then:
    -- Check whether a backup copy of the tablespace has been taken within the recovery window and is accessible.
    -- If the copy exists then don't back up that tablespace; otherwise back it up.
    -- If the tablespace is read/write then back it up.
    I haven't delved into the low-level details, but this seems to be a lot of work. So I just wanted to know whether there's a ready-made feature that makes all this easier.
    Many thanks in advance.

    Thank you so much for your help.
    Backup optimization was indeed the thing I was looking for (see the sketch at the end of this post). To be honest, I had done a bit of RTFM, but I didn't check the advanced user guide.
    Although my specific question has been answered, it would be interesting to know what other people are implementing to reduce backups, etc.
    I am also thinking of following options:
    -- Turn on index monitoring to get rid of unused indexes.
    -- Stop the backups of 'index' tablespaces.
    -- Archive off old data.
    Any other ideas for reducing DW backup size?
    Many thanks.
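
    Following up on the backup optimization answer, a minimal sketch of the read-only tablespace scheme with RMAN doing the bookkeeping itself; the tablespace name is a made-up example:

        -- SQL*Plus: freeze the tablespace holding old static partitions
        ALTER TABLESPACE sales_2008 READ ONLY;

        -- RMAN: with optimization on, a read-only tablespace that already has a
        -- backup satisfying the retention policy is skipped automatically
        CONFIGURE BACKUP OPTIMIZATION ON;
        BACKUP DATABASE;

    This removes the need for the hand-rolled SCN tracking outlined in the question: RMAN itself compares each file's checkpoint SCN against existing backups and the retention policy.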
