OBIEE Data Warehouse Architecture

Is it common to install the BI server and the database on different servers, or both on the same server?
I have installed OBIEE on OEL version 5 update 2 (64-bit) and Oracle's Retail Data Warehouse (RDW) on another server, also OEL version 5 update 2 (64-bit).
There is a bug that requires a 32-bit client to be installed for the database. The fix is to create a soft link from the OracleBI/server/Bin/xxxxx.so file on the BI server to the xxxxx10.1.so file in the $ORACLE_HOME/client_32 lib32 libraries. I don't see anything in the documentation about creating a soft link between servers, so I am thinking maybe it's not possible to have the database and BI server on different servers.
Thanks.

The error I got was
nQSError: 46029 Failed to load the DLL /u01/app/oracle/OracleBI/server/Bin/libnqsdbgatewayoci10g.so. Check if 'Oracle OCI 10G' database client is installed.
Fix
The symlink needs to go from libclntsh.so (which is the generic library name OBIEE will try to load) to your libclntsh.so.xxx library (libclntsh.so.10.1 in your case).
This is from another post: Re: HPUX 11.31 / OBIEE 10.1.3.4 / 11g client - Unsatisfied code symbol error.
However, that post refers to the database and BI server both being on the same machine.
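For reference, a minimal sketch of the workaround on the BI server host, assuming a 32-bit Oracle client is installed there (the lib32 path and library version are examples; adjust them to your installation):

    # Point the generic client library name at the versioned one so that
    # libnqsdbgatewayoci10g.so can resolve it when the BI Server loads the gateway.
    cd $ORACLE_HOME/lib32                  # 32-bit client library directory (example path)
    ln -s libclntsh.so.10.1 libclntsh.so
    # Ensure this directory is on the BI Server's library path, e.g.
    # export LD_LIBRARY_PATH=$ORACLE_HOME/lib32:$LD_LIBRARY_PATH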

Similar Messages

  • Good data warehouse architecture books or documentation

    Hi,
    I am looking for good data warehousing architecture/concept books or documentation links. Can you suggest some, please? Your advice is greatly appreciated!
    Thanks,
    shashi

    See if the following helps:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/toc.htm
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96520/concept.htm
    http://principlepartners.com/presentations/DataWarehouseConceptsAndArchitecture.pdf

  • Data warehouse architecture

    Hi DW gurus!
    I'm a bit confused; could you help me, please? I don't want to debate which is better, CIF or BUS. All I want is to know the differences between these two ideas.
    1) So, Inmon uses a relational DB (3NF) for atomic data and a dimensional model for aggregated data?
    2) Kimball uses a dimensional DB for both atomic and aggregated data?
    Is this the only difference between these two ideas?
    3) What are the differences between Inmon's architecture and the hybrid architecture, then?
    regards and thanks,
    rustam

    My own inclination is to keep warehouses and marts in a different database from the application (ETL and front-end) metadata schemas. The query types really are very different, and I think that with the user community it really makes a difference if you can provide snappy front-end performance even when the query database is heavily loaded -- this implies a different server entirely, of course.
    Another issue is logging -- metadata operations definitely need to run on a database with archive logging so that you don't lose your records of ETL operations and of user reports created and modified, whereas with a warehouse or mart there is not so much need. For one thing you are probably doing a lot of direct path inserts with nologging anyway, and ought to be using RMAN for backups.
    Which brings us to the point that a separate metadata database is a great place to use as an RMAN catalog. Backup of a metadata database is a simple job due to the small size and stability of schemas, and can easily be performed with regular (non-RMAN) hot backups.
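    To illustrate the nologging point (the table and column names below are made up), a direct-path load into a NOLOGGING table generates minimal redo, which is exactly why such segments are better protected by a backup taken after the load than by archive logs:

        -- Hypothetical warehouse load with minimal redo generation
        ALTER TABLE sales_fact NOLOGGING;

        INSERT /*+ APPEND */ INTO sales_fact (sale_date, branch_id, amount)
        SELECT sale_date, branch_id, amount
        FROM   stg_sales;
        COMMIT;

        -- Little redo was written, so media recovery cannot rebuild these blocks;
        -- back the tablespace up (e.g. with RMAN) once the load has finished.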

  • Using OBIEE for a custom Data Warehouse

    Hi Everyone,
    I am very new to OBIEE and I have a few questions about this product family.
    1. I have an existing custom-built data warehouse, and I would like to know: is it possible to build reports on this data warehouse?
    2. I understand that OBIEE comes with pre-built ETL jobs in Informatica. What kind of license is it? Is it possible to modify them, or even build new jobs that load into a non-OBIEE data warehouse?
    your answer will be greatly appreciated.
    Jeffrey

    It's the same Informatica, which can do everything a standalone Informatica installation can. Additionally, it has prebuilt adapters for source systems like Siebel, APPL, PSFT, JDE and SAP, plus some universal adapters, so the license includes these as well, which is going to cost more than getting an Informatica licence from Informatica Corp. Moreover, OBI Apps 7.9.6 comes with Informatica 8.6, which is a slightly older version of the tool; Informatica is going to release version 9 in a couple of weeks.
    I see that you already have a data warehouse, so why do you need an ETL tool again?
    OBIEE can report directly out of a data warehouse, and also out of transactional systems, as long as the metadata layer is built.
    PS: Am I clear?

  • OBIEE reverse engineering to go from SQL Server to a data warehouse

    Hi,
    I'm new to data modeling for warehouses. We currently have an OBIEE environment set up where the data source was SQL Server transactional tables. The SQL Server data is to be moved to a non-Oracle data warehouse, and I need to produce a logical data model for the warehouse folks at my company. Unfortunately, the SQL Server data was never modeled, so I'm basing the model on the logical and physical diagrams/relationships in OBIEE.
    My question is in regard to the validity of the following relationships for use in a data warehouse, based on what's currently in OBIEE. When I model this via Erwin, I wonder if I'm way off base in the relationships (modeling, not personal):
    Dimension 1 has a 0:M with Dimension 2
    Dimension 1 has a 0:M with Dimension 3
    Dimension 2 has a 0:M with Dimension 3
    Both Dimension 2 and Dimension 3 have a 0:M with Fact 1
    Through the use of aliases and such, this does work in OBIEE. Will this work as a data model for a data warehouse environment?
    Thanks!

    I think you started off on the wrong foot. I suggest you search Google for "kimball methodology" and read a few articles. Your DWH model should not be based on your transactional tables. You should ask your business users what "questions" they want to answer in the DWH, then model your DWH based on that. You cannot model a DWH without knowing what questions you need to answer. For instance, if your business users want to know the sales per day and per branch, you will need a sales fact with a sales amount measure joining to two dimensions, branch and time. The number of facts will depend on the questions you need to answer, the type of data and their granularity.
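    As a minimal sketch of that example (all object names here are made up), "sales per day and per branch" leads to something like:

        -- Hypothetical star schema: one fact, two dimensions
        CREATE TABLE dim_time (
          time_key    NUMBER PRIMARY KEY,
          calendar_dt DATE
        );

        CREATE TABLE dim_branch (
          branch_key  NUMBER PRIMARY KEY,
          branch_name VARCHAR2(100)
        );

        CREATE TABLE fact_sales (
          time_key     NUMBER REFERENCES dim_time (time_key),
          branch_key   NUMBER REFERENCES dim_branch (branch_key),
          sales_amount NUMBER(12,2)
        );

        -- "Sales per day and per branch"
        SELECT t.calendar_dt, b.branch_name, SUM(f.sales_amount) AS sales_amount
        FROM   fact_sales f
        JOIN   dim_time   t ON t.time_key   = f.time_key
        JOIN   dim_branch b ON b.branch_key = f.branch_key
        GROUP  BY t.calendar_dt, b.branch_name;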

  • Is the OBIEE used to create a data warehouses dynamically?

    Management where I work wants to use the OBIEE Administrator to source a 3NF normalized database and create a "virtual data warehouse" in the Business Model and Mapping layer of the OBI Administrator, since a star schema model is required by the OBI business model layer. They claim they were told by an Oracle sales rep that the Administrator tool could do this.
    Is this possible? As OBI issues only SQL and not PL/SQL, how can one "create" dimensions, lookup tables and fact tables dynamically? And even if it could, the performance hit of recreating the virtual data warehouse each time a query is issued would be huge.
    Having used Prism Warehouse Builder and DataStage in the past to create data warehouses, I am aware that one needs a procedural programming language to create and maintain the star schema tables (surrogate key maintenance, controlling workflows, maintaining slowly changing dimensions, intermediate lookup tables, etc.). SQL was not meant to do this heavy-lifting programming. After all, isn't this why Oracle Warehouse Builder (and previously Informatica) is shipped with the OBIEE suite -- because OBI is not an ETL tool for creating dimensional models? One uses an ETL tool to create the dimensional data model for OBI to access, and to pass the metadata along to OBI Answers.
    So is it normal practice to use the Administrator's Business Mapping/Modeling layer for creating a virtual star schema logical tables from physical tables that are 3NF? Or is the tool used to access already denormalized tables in the physical layer that were created using Informatica or OWB or other ETL tool?

    I asked an "Expert" in OBIEE. Here are snippets of his response:
    "Be aware though that the transformation ability is fairly limited, and
    will only really work with data that is very close to a star schema, i.e.
    the data can be easily transformed through a couple of denormalizations and
    table joins. If your source data is very normalized and cannot easily be
    transformed into a star schema, you would need to use a tools such as
    Informatica, OWB or similar to extract data from your source systems, load
    and then transform it into a data warehouse or data mart and report of of
    that. The more that your data needs to be transformed (i.e. the closer it
    is to a 3NF model) the more likely it is that you'll need to use an ETL
    tool, and a data warehouse or data mart, to host your data."
    And in response to my noting the lack of documentation on how to model a 3NF to Star Schema his response was:
    "No, you're right, the documentation doesn't really go into "how to" turn a 3NF model into a dimensional model. If you look back to when OBIEE was a Siebel product, the documentation was really aimed at either Siebel consultants or customers who had been on the training, they didn't want customers "off the street" to try and implement OBIEE as it would hit their services revenue. That's where the blog posts we do, things like the Oracle-by-example training courses on OTN and so on come in, otherwise as you say there's little out there on the best way to transform your model - it's mostly passed on "word of mouth" or is built up from experience on working on projects."

  • Architectural Design - New Data Warehouse

    Hello All,
    This is my first post to the oracle discussion forums and I'm looking forward to the interactions with other ODWB users.
    I am just beginning to implement a design for a new data warehouse. Our team has already defined user requirements for a subset of the business (Sales/Marketing) and has committed a logical model to paper. We have installed our dev environment and are now ready to begin the work of creating our prototype.
    I've read all the Oracle documentation I can get my hands on regarding implementing DW objects and have been pondering the approach: ROLAP or MOLAP?
    It seems to make sense that we should deploy into a ROLAP environment, bringing in all our data from our staging area to create a stable relational data store, then select the most used or most queried dimensions and facts to deploy in a MOLAP environment. Has anyone used this approach? Any lessons learned? Do you have to choose one method or the other, or can you take a blended approach? Would you deploy both in the same database instance or separate the two?
    thx

    I'm somewhat new to OWB coming from an Informatica background but in our environment, we are doing the same thing. Our Enterprise Data Warehouse will be based on ROLAP and I intend to use MOLAP for subsets of the EDW.
    Dimensions in Oracle are somewhat interesting in that they are "leveled" and you can tie cubes or "fact tables" to any level of the dimension. This is a bit un-Kimball-like and has taken some getting used to. I think it is a powerful feature but I will have to experiment some until I understand it better.
    One critical bug with 10.2 I've run into is with dimension roles -- the time dimension, for instance. Typically this is one table that is aliased many, many times. If you exceed roughly 5 roles for the time dimension, the generation of the object fails, since OWB generates a single anonymous PL/SQL block that exceeds 64k. It's a documented bug in development with no workaround, according to Metalink.
    Other gotchas are that table changes always try to generate "create table" scripts even if you only add an index or change parallelism. We have had to do table maintenance outside OWB and then keep the metadata in sync up until now.
    I haven't done any of the MOLAP yet but from what I read there are some restrictions - such as you can't have roles on dimensions for MOLAP and I believe you can't have SCDs in MOLAP. I don't know how Time dimensions are handled in MOLAP without roles! Do people really generate tables for every single time dimension in OWB???
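    For reference, the relational pattern behind a role-played time dimension (a sketch only; the table and column names are made up) is a single date table joined under several aliases:

        -- Hypothetical role-playing date dimension: one physical table, many aliases
        SELECT o.order_id,
               od.calendar_dt AS order_date,
               sd.calendar_dt AS ship_date
        FROM   fact_orders o
        JOIN   dim_date od ON od.date_key = o.order_date_key   -- role: order date
        JOIN   dim_date sd ON sd.date_key = o.ship_date_key;   -- role: ship date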
    Hope you share your experiences here!
    - Mike Taylor

  • Best practice for a metadata table in a data warehouse environment?

    Hi gurus,
    In our data warehouse we have 1. a stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema we are creating only the tables that are required from a reporting perspective. Some of the tables in the staging schema have been created in the DWH schema as well, with different table and column names. The naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables we are creating a metadata table in the DWH schema, for example:
    Stage                DWH_schema
    Table_1             Table_A
    Table_2             Table_B
    Table_3             Table_C
    Table_4             Table_D
    My question is: how do we handle the column names in each of these tables? The stage_1, stage_2 and stage_3 column names have been renamed in the DWH schema, where they are part of Table_A, Table_B and Table_C.
    As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables, and in some of the tables we have 200 columns,
    so my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table, and not column names?
    Any ideas will be greatly appreciated.
    Thanks!

    Hi,
    this seems to be quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have three layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for the tables there and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, like calculated ones, are business-driven, and metadata are standard-driven.
    Data Modeler fits perfectly for modelling L1.
    L2 is the dimensional schema; business names are used for tables and columns, possibly rewritten at the presentation layer (front-end tool).
    Hope this helps, D.
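    On the original question of table names versus column names, one possible shape for the metadata (a sketch only; all names are made up) keeps two levels, so you are not forced to choose between tracking tables and tracking columns:

        -- Hypothetical metadata tables mapping staging names to DWH business names
        CREATE TABLE meta_table_map (
          map_id       NUMBER PRIMARY KEY,
          stage_table  VARCHAR2(30) NOT NULL,    -- e.g. TABLE_1
          dwh_table    VARCHAR2(30) NOT NULL     -- e.g. TABLE_A
        );

        CREATE TABLE meta_column_map (
          map_id       NUMBER REFERENCES meta_table_map (map_id),
          stage_column VARCHAR2(30) NOT NULL,    -- e.g. STAGE_1
          dwh_column   VARCHAR2(30) NOT NULL     -- e.g. the business name used in Table_A
        );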

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help on solutions to be provided to my Client for upgrading the data warehouse.
    Current configuration: Oracle Database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. The sizes are respectively 6 terabytes (retention policy of 3 years plus the current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    Client cannot go for a major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate the databases onto hosts separate from the ETL tools and BO objects. We are also planning to upgrade the database to 10gR2 to attain stability and better performance and to overcome current headaches.
    We cannot upgrade the database to 11g, as BO is at version 6.5, which isn't compatible with Oracle 11g. And the client cannot afford to upgrade anything other than the database.
    So my role is vital in providing a solution for better performance and carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install new Oracle database 10g on the new host and move the data to it.
    Exploring all the new features of 10gR2 that help a data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration performance.
    Also thinking of RAC to provide an even better solution, as our main motive is to show a tremendous performance enhancement.
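    As an illustration of the partitioning idea above (all table, column and partition names are hypothetical), the large fact tables could be range-partitioned so that whole years can be pruned by queries or dropped when they age out of the retention window:

        -- Hypothetical range partitioning of a large fact table by year
        CREATE TABLE wh_sales_fact (
          sale_date  DATE NOT NULL,
          product_id NUMBER,
          amount     NUMBER(12,2)
        )
        PARTITION BY RANGE (sale_date) (
          PARTITION p_2010    VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')),
          PARTITION p_2011    VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')),
          PARTITION p_2012    VALUES LESS THAN (TO_DATE('01-01-2013','DD-MM-YYYY')),
          PARTITION p_current VALUES LESS THAN (MAXVALUE)
        );

        -- Ageing out a year becomes a partition drop instead of a large delete
        ALTER TABLE wh_sales_fact DROP PARTITION p_2010;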
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
    Also, I am pasting part of the STATSPACK report, excluding the snapshots around the DB bounce. Please suggest the scope for improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Permanent Job Opportunity - Oracle BI Data Warehouse Developer Chicago, IL

    Submit Resumes to [email protected]
    The Business Intelligence Specialist will play a critical role in designing, developing, deploying, and supporting data warehouse/data mart applications. In this role, the person will be responsible for all BI aspects of a data warehouse/data mart application. Primary duties will be to create reporting standards, as well as to coach and support power users with the selected Oracle tool. The ideal candidate will have 3+ years of demonstrated experience in data warehousing and Business Intelligence tools. Must also possess excellent communication skills and an outstanding track record with users.
    Principal Duties:
    Participates with internal clients to define software requirements for development, maintenance and/or improvements
    Maintains accuracy, integrity, and availability of the data warehouse
    Tests, monitors, manages, and validates data warehouse activity, including data extraction, transformation, movement, loading, cleansing, and updating processes
    Designs and optimizes data mart models for Oracle Business Intelligence Suite.
    Translates the reporting requirements into data analysis and reporting solutions.
    Reviews and signs off on project plan(s).
    Reviews and signs off on technical design(s).
    Defines and develops BI reports for accessing/analyzing data in warehouse.
    Customizes BI tools and data sets for different types of users.
    Designs and develops UAT (User Acceptance Testing).
    Drives improvement of BI system architecture and development process.
    Develops and maintains internal relationships. Actively champions teamwork. Uses internal resources to enhance knowledge and expertise of industry, research, products and services. Provides information and support to others in the company.
    Required Skills:
    Education and Experience:
    BS/MS in Computer Science or equivalent.
    3+ years of experience with Oracle, PL/SQL Development and Data Warehousing.
    Experience with Oracle Business Intelligence Suite and Crystal Reports is a plus.
    2-3 years of dimensional modeling experience.
    Demonstrated hands on experience with Unix/Linux, SQL required.
    Demonstrated hands on experience with Oracle reporting tools.
    Demonstrated experience with translating business requirements into data analysis and reporting solutions.
    Experience in training programs/teach users to use tools.
    Expertise with software development process.
    Effective mediator - able to facilitate constructive and productive discussions with internal customers, external clients, and development personnel pertaining to feature definition, project scope, and status
    Problem solving: identifies and resolves problems in a timely manner, gathers and analyzes information skillfully and maintains confidentiality.
    Planning/organizing: prioritizes and plans work activities and uses time efficiently. Work requires continual attention to detail in composing and proofing materials, establishing priorities and meeting deadlines. Must be able to work in a fast-paced environment with demonstrated ability to juggle multiple competing tasks and demands.
    Quality control: demonstrates accuracy and thoroughness and monitors own work to ensure quality.
    Adaptability: adapts to changes in the work environment, manages competing demands and is able to deal with frequent change, delays or unexpected events.
    Benefits/Compensation:
    Employees enjoy competitive compensation. We have a full benefits package including medical and dental insurance, long-term disability and life insurance and a 401(k) plan.
    The client operates within the healthcare industry.
    This is a permanent full-time position. After ensuring your availability and qualifications we will put you in direct contact with the client to move forward in the process.

    FORWARD THE UPDATED RESUME AS SOON AS POSSIBLE.

  • Oracle Data Warehouse DB and OLAP_OBIEE DB [OBIEE11g]

    Hi Experts,
    I have a problem with joining the two databases in an Analysis report.
    The scenario is this:
    Oracle Data Warehouse is DB_1 and OLAP_OBIEE is DB_2
    DB_1
    -col1
    -col2
    DB_2
    -col3
    In an Analysis report, when they are queried or run separately, a report is generated as expected.
    But when columns from these two databases are joined together in one Analysis report, I always get 'No Results'.
    Please help; how do I fix this issue? :(
    Thanks,

    Hi,
    check the following links:
    http://www.rittmanmead.com/2007/10/reporting-against-multiple-datasources-in-obiee/
    http://108obiee.blogspot.com/2009/01/fragmentation-in-obiee.html
    http://gerardnico.com/wiki/dat/obiee/multiple_subject_area

  • Data Warehouse using MSSQL - SSIS : Installation best practices

    Hi All,
    I am working on building a data warehouse based on MSSQL 2008 R2. The requirement is to read source data from files, put it in a staging database, perform data cleansing, etc., and then move the
    data to the data warehouse DB. Now the question is about the required number of physical servers, and which component (MSSQL, SSIS) should be installed on which server, based on best practices:
    Store source files --> Stage database --> Data warehouse DB
    The data volume will be high (20-30 k transactions per day). Please suggest.
    Thank you
    MSSQL.Arc

    Microsoft documentation: "Use a Reference Architecture to Build An Optimal Warehouse.
    Microsoft SQL Server 2012 Fast Track is a reference architecture data warehouse solution giving you a step-by-step guide to build a balanced hardware configuration and the exact software setup.
    Step-by-step instructions on what hardware to buy and how to put the server together.
    Setup instructions on installing the software and all the specific settings to configure.
    Pre-certified by Microsoft and industry hardware partners for the most optimal hardware and software configuration."
    LINK:
    https://www.microsoft.com/en-us/sqlserver/solutions-technologies/data-warehousing/reference-architecture.aspx
    Kalman Toth Database & OLAP Architect
    IPAD SELECT Query Video Tutorial 3.5 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Efficiency of data warehouse sql and star/snowflake schema

    Hi,
    We are using 11.2.0.3 and need to improve the query performance of reports against a data warehouse star/snowflake schema.
    In addition to indexing, partitioning, having star_transformation enabled, etc., I am considering the impact of the following on query performance.
    The central fact table (over a billion rows) joins to a customer dimension (a few hundred thousand rows), which in turn is joined to the latest version of the dimension (which has circa 30,000 rows).
    The table with a few hundred thousand rows (the customer dimension) must always be queried, as data is stored against the version of the customer applicable at the time - we then query latest_customer because users want to see the
    latest version of the customer attributes, to stop data being fragmented across several rows in the report.
    I am considering whether it would be more efficient to create a dimension which is the equivalent of customer but also stores the latest version of the customer attributes on the same row - this would mean the customer dimension would have far more columns, but queries could avoid the additional lookup of this 30k-row table.
    Would this be a material benefit?
    At the moment users would query latest_customer to, say, get all customers belonging to a certain multiple chain.
    If changed as above, they would instead query the customer dimension with a few hundred thousand rows directly.
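    A sketch of what that wider dimension could look like (the object and column names are illustrative; latest_customer is the existing 30k-row table):

        -- Hypothetical: widen the versioned customer dimension with the latest-version attributes
        CREATE TABLE customer_dim_wide AS
        SELECT c.customer_key,
               c.customer_id,
               c.customer_name,                    -- attributes as at the time of the transaction
               l.customer_name AS latest_name,     -- latest-version attributes folded in at load time
               l.chain_cd      AS latest_chain_cd
        FROM   customer_dim   c
        JOIN   latest_customer l ON l.customer_id = c.customer_id;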
    Thoughts?
    Thanks

    We are using 11.2.0.3 and need to improve the query performance of reports against a data warehouse star/snowflake schema.
    That is NOT a realistic or even meaningful goal.
    And until you identify and document an actual PROBLEM or specific goal you should not even be considering possible solutions.
    Anything you do to improve one report might degrade the performance of several other reports.
    You need to start over and gather information about WHAT Oracle is doing for the reports now, HOW that work is being done and capture metrics that validate how the reports are currently performing.
    Your first step should be to document the performance you are getting now for each report.
    The second step would be to identify which of those reports is a possible target for tuning.
    The third step is to prioritize the reports: which is most important to tune, which is next, etc.
    Then you need to generate the execution plans for those reports to identify EXACTLY how Oracle is executing the queries now.
    At this point you should have enough information to know what your possible options are.
    So then you create a prioritized list of options. The top of the list should be additions to what you already have.
    1. New indexes - regular or bitmapped (if appropriate)
    2. Dropping indexes that aren't being used.
    3. Report-ready summary tables or materialized views.
    IMHO modifying your basic architecture should be your LAST resort and undertaken only if you can't solve your (unstated) problem using solutions that have less impact and risk.
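    For the "generate the execution plans" step, a minimal sketch (the query shown is just a stand-in for one of your report queries):

        -- Capture the plan Oracle would choose for a report query
        EXPLAIN PLAN FOR
        SELECT b.branch_name, SUM(f.sales_amount)
        FROM   fact_sales f
        JOIN   dim_branch b ON b.branch_key = f.branch_key
        GROUP  BY b.branch_name;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

        -- For a query that has already run, pull the actual plan from the cursor cache
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));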

  • RAC for Data Warehouse

    Hello,
    We have a research project for restructuring our data warehouse system. I would like to get some opinions about whether a RAC architecture can be a good solution for a data warehouse application.
    We use parallel queries massively. Does running these kinds of queries across the different servers of a RAC cluster result in performance degradation, compared to running them on a single monolithic server with multiple CPUs?
    I would appreciate any comments on using a RAC architecture for data warehouse systems.
    Regards,

    Maurice Muller wrote:
    Just keep in mind that during the last 4 years (I guess your current system is about 4 years old) the CPUs became much faster.
    A CPU can't work without data, which means that the I/O throughput has to be fast enough to feed all your cores with data.
    The main bottleneck of all the DWHs I have seen during the last 8 years was always the I/O, never the CPUs. And not just data warehousing, Maurice, but a basic principle for any data processing platform - the slowest layer is always the I/O layer... and it can be the most expensive one to solve, too.
    Which is why newer technology like Infiniband is exciting, as it can also serve as the I/O layer. Instead of using the traditional HBA, which is typically configured with 2Gb fibre channels to the storage layer, with HCA cards you can wire this directly into an Infiniband storage array... and this can run at speeds of up to 40Gb. Dual connections mean a total theoretical pipe size of 80Gb. I do not know of any other standard technology (like GigE) that can provide similar bandwidth.
    Back to RAC though - with RAC, when you add a new server, that server comes with a new set of I/O pipes... plus of course more RAM and more CPU cores. SMP server architecture does not scale like this at all. You only have x number of slots for PCI cards, CPUs and RAM - a very specific ceiling that cannot be moved. With MPP this ceiling is a lot higher and more flexible.
    You can also replace dual-core, dual-CPU nodes with 6-core AMD Istanbul CPUs next year... and possibly 12-core CPUs the year after that. So even a smallish 4-node cluster with 16 cores in total can be grown significantly and remain a 4-node cluster, together with advances in HPC (High Performance Computing) like Infiniband.
    I'm not seeing much use of non-RAC RDBMS architecture in the future. Databases are getting ever bigger because we have the technology to crunch more data, and crunch it a lot more intelligently than ever before. My first production database was 4MB in size, and ran on a Novell File Server with two 20MB disks. I'm currently testing a 24TB array for use for a single database.
    Technology is inevitable, as is the growth in data volumes. And I cannot see a non-RAC architecture rising to that challenge. Especially not in something like data warehousing.

  • Sample of cost/benefit analysis of data warehouse

    Greeting!
    I am currently doing research on data warehouses; kindly give me a sample of a cost/benefit analysis.
    Looking forward to your immediate response.
    e-mail me at "[email protected]"
    Thank you and God Bless!
    very truly your,
    Benedict

    Good day Colin.
    Your question is like trying to guess the next day's stock quotes...
    There are many parameters combined together, and not all of them are constants; in fact, most of them are variables.
    The issues vary, from, let's say:
    1. The COBOL programmers' pay needed to support and maintain your current AS/400 specially written applications and interfacing...
    2. The number of systems you want to integrate in your organization's business scenarios...
    3. The monitoring capabilities currently available in your organization's spaghetti architecture.
    4. Predicting the growth of your company and adding business partners' systems...
    5. Trying to estimate the re-use of already-written XI interfaces (not possible...)
    6. The duplicated data and data inconsistency in your IT environment before XI and after XI...
    So my friend, I believe this imaginary generic TCO document can't be created...
    Good luck with your project
    Nimrod
    If it helps, I recently asked an ABAP programmer
    how long it would take him to develop a simple File-to-IDoc scenario... he said that, including design documents and testing, approximately 8 days.
    I believe I can do the first one in 5 days and the second in 50% of that time...
