Basic Modeling in Data Warehouse Workbench

Can anybody explain, step by step, how to load data from a flat file into a BI system by creating an InfoArea, InfoObject Catalog, InfoObjects and DataSource?
I have already created the InfoArea, InfoObject Catalog and InfoObjects. I am confused about what the next step is. How do I create a DataSource to load the data from a local flat file?
Please help me with this.

Hi,
Before creating the DataSource you need to create an Application Component and select your source system for the flat file.
Generally it will be named something like CLNT90.
After that, create the DataSource for master data or for transaction data.
If you are loading transaction data, create an InfoCube by right-clicking on your InfoArea.
When creating the InfoCube, assign your characteristics, key figures, time characteristics etc. and activate it.
Create a transformation from the source to the target.
Then create the InfoPackage to load the data from the flat file.
After that, create and execute the DTP, then check the data via Manage on the InfoCube to verify that it matches the data in your flat file.
Hope it helps.
Regards,
Raghu

Similar Messages

  • Differences between operational systems data modeling and data warehouse data modeling

    Hello Everyone,
    Can anybody help me understand the differences between operational systems data modeling and data warehouse data modeling?
    Thanks

    Hello A S!
    Do you mean the difference between modelling in normal forms (e.g. 3NF), as used in operational systems (OLTP), and modelling an InfoCube in a data warehouse (OLAP)?
    In an OLTP system you want data tables free of redundancy and ready for transactions, meaning you write and read a few records very often. In an OLAP system you need to read a lot of data for every query you run against the database, and you often aggregate these amounts of data.
    Therefore you use a different principle for the database schema, called the star schema. You have one central table (the fact table) which holds the key figures and carries keys to other tables holding the characteristics. These other tables are called dimension tables; they hold combinations of the characteristics. Normally you design the dimensions to be small, so that access to the data is more efficient.
    The star schema in SAP BI is a little more complex than described here, but it follows the same concept.
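    For illustration only, here is a minimal star schema sketched in generic SQL. The table and column names (sales_fact, dim_date, dim_customer and so on) are invented for this example and do not correspond to the SAP BI extended star schema.
        -- Hypothetical minimal star schema: one central fact table with the key
        -- figures, surrounded by small dimension tables holding the characteristics.
        CREATE TABLE dim_date (
            date_id      INTEGER PRIMARY KEY,
            calendar_day DATE,
            month        INTEGER,
            year         INTEGER
        );
        CREATE TABLE dim_customer (
            customer_id   INTEGER PRIMARY KEY,
            customer_name VARCHAR(60),
            region        VARCHAR(30)
        );
        CREATE TABLE sales_fact (
            date_id     INTEGER REFERENCES dim_date (date_id),
            customer_id INTEGER REFERENCES dim_customer (customer_id),
            quantity    NUMERIC(15,3),   -- key figure
            revenue     NUMERIC(15,2)    -- key figure
        );
        -- A typical OLAP query reads many fact rows and aggregates them:
        SELECT d.year, c.region, SUM(f.revenue) AS total_revenue
        FROM   sales_fact f
        JOIN   dim_date d     ON d.date_id = f.date_id
        JOIN   dim_customer c ON c.customer_id = f.customer_id
        GROUP BY d.year, c.region;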
    Best regards,
    Peter

  • Data Models and Data Flow diagrams.

    Hi  Gurus,
    Can anybody brief me on the concept of data models and data flow diagrams and their development, with illustrations? And whose responsibility is it, a technical or a functional consultant's, to translate business requirements and functional specifications into technical specifications, data flow diagrams and data models?
    Your valuable answers will be rewarded.
    Thanks in advance.

    Hi,
    Concept of Data Models
    A data model, or data modelling, is basically how you define or design your BW architecture based on business requirements. It deals with designing and creating an efficient BW architecture while sticking to standard practices.
    Multi-Dimensional Modeling with SAP NetWeaver BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    /people/githen.ronney3/blog/2008/02/13/modeling-strategies
    Modeling the Data Warehouse Layer with BI
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3668618d-0c01-0010-1ab5-aa75c3a4dfc2
    /people/gilad.weinbach2/blog/2007/02/23/a-beginners-guide-to-your-first-bi-model-in-nw2004s
    Data Flow Diagrams
    This shows the path of the data flow for each individual object in BW: how data gets loaded into that object and how it leaves the object, etc.
    Right-click on the data target > Show data flow.
    It shows all the intermediate layers through which data comes into that particular object.
    Responsibility of a Technical or a Functional consultant
    This is generally done in the design phase itself by a senior technical consultant, with the help of a functional consultant or a techno-functional consultant interacting with the business.
    Hope this helps.
    Thanks,
    JituK

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept. But having a strong database background, my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately I didn't find good information material on this. So here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is quite difficult to map a relational database to an MDM one. But then, MDM is not 'just' a database; it holds much more 'master' information than any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    Yes. Here you need to use multivalued (MV) lookup tables, or you can also try qualified tables if it gets more complex.
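    For comparison, this is roughly what the same n:m relationship would look like in a purely relational model, using a junction table; the sketch below uses invented table and column names and is only meant to show what the multivalued lookup (or qualified table) replaces on the MDM side.
        -- Hypothetical relational equivalent of the Users / SapAccounts n:m case.
        CREATE TABLE users (
            user_id   INTEGER PRIMARY KEY,
            user_name VARCHAR(40)
        );
        CREATE TABLE sap_accounts (
            account_id INTEGER PRIMARY KEY,
            sap_system VARCHAR(10),
            account_no VARCHAR(12)
        );
        -- Junction table resolving the n:m relationship; in MDM a multivalued
        -- lookup field on the main table plays this role for you.
        CREATE TABLE user_sap_accounts (
            user_id    INTEGER REFERENCES users (user_id),
            account_id INTEGER REFERENCES sap_accounts (account_id),
            PRIMARY KEY (user_id, account_id)
        );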
    2) Does a record always have to be unique in MDM repositories (e.g. should we use Auto IDs in every table, or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented by marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called display fields (DF). A combination of DFs can also be treated as unique. For instance, while importing records, if you select these DFs as a matching combination you will eliminate possible duplicates based on that combination. Auto ID is one way to have a unique ID once the record is within MDM, while you use unique fields (UF) or DFs to eliminate possible duplicates at import level.
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one. Referential integrity: what I assume you are talking about is that if you have relationships between tables, then removing a record is not possible while it is a foreign key of some other record. MDM does not work that way, as relationships within MDM are physical and not conceptual. For instance, a material can have components; if the material does not exist, any relationship to its components is not worthwhile to maintain, so the relationship is eliminated. In a relational model, relationships are more conceptual. Hence, with MDM's use of lookups and the main table, you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    No, it is not possible to convert the main table. There is only one main table and it cannot be changed.
    I went for the same option but it did not work. What I suggest is to look at your legacy systems one by one and see which fields can in general be classified as master, reference or transactional data - you will start getting answers immediately.

  • Data warehouse modeling

    I am stuck on some of these points and have no clue what I should do. Please, if you can find out from someone already working in this area, they should know something.
    1. What do you do with flag indicators and different code attributes in your entity tables? I mean, do you include them in your dimensions?
    2. How do you handle dependent or weak entities when transferring from an ERD to a dimensional star schema? For example, my Account table has dependents (Agreement, Suitability, Qualification, Name/Address) with one-to-many relationships; how do I handle them? Should I include these entities inside the Account dimension or attach them directly to the fact?
    3. My dimensions are User, Account, Account Activity and Time; how do we identify which are slowly changing dimensions?
    When an account goes through its steps there are changes made to it; after being saved it can be put into the cycle again for more changes, i.e. very frequent changes. Where do I put the start date and end date if it is indeed a slowly changing dimension?
    I would really appreciate your help.
    plz reply at [email protected]

    My 2 cents: I think Gopi is about right on the OLTP side, but I have to disagree with Gopi on some of the data warehouse points.
    A data warehouse is generally a broader concept than just an OLAP / multi-dimensional model, which would be regarded as just a component of most DWs.
    Data warehouses run SQL queries all the time. I would bet the overwhelming majority of BW queries are SQL queries, even for querying OLAP cubes, although MDX is starting to be used more. Operationally, SAP uses SQL to perform a lot of the processes in BW - loading data, rollup, compression, etc.
    The majority of data warehouses are perhaps in the hundreds of GBs, although large enterprises can easily have TBs of data.
    BW can incorporate real time data from R3 with remote cubes.
    BW has transactional InfoCubes where users enter data for budgeting, forecasting etc.
    You can google these topics and find lots of info on data warehouse design.
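    On question 3, one common approach (not addressed above) is a Type 2 slowly changing dimension, where the start and end dates live in the dimension row itself. The sketch below is generic SQL with invented names, just to show the shape.
        -- Hypothetical Type 2 slowly changing Account dimension: each change to an
        -- account creates a new row, delimited by effective-date columns.
        CREATE TABLE dim_account (
            account_sk     INTEGER PRIMARY KEY, -- surrogate key used by the fact table
            account_id     VARCHAR(20),         -- natural/business key
            account_status VARCHAR(10),
            effective_from DATE,
            effective_to   DATE,                -- e.g. 9999-12-31 for the current row
            current_flag   CHAR(1)
        );
        -- The fact rows store the surrogate key that was valid at the time of the
        -- event, so history is preserved even when the account changes again later.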

  • Modelling tables in Transactional database and data warehouse

    Hello,
    Can anyone please tell me the differences between modelling tables in a transactional database and in a data warehouse? I would also like to know the process of multi-dimensional modelling...

    Check here:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    BI Data Modeling and Frontend Design [original link is broken]
    Regards,
    Luis

  • Oracle Retail Data Warehouse (RDW) Vs Oracle Retail Data Model??

    The Oracle Retail suite of applications includes the Oracle Retail Data Warehouse (RDW) application. It is integrated with Point of Sale via TLOG and with other applications. Does Oracle Retail Data Model leverage the existing data in RDW? If we use Oracle Retail Data Model, does RDW become redundant?

    Oracle Retail Data Model is independent of RDW and the RGBU applications. It can be used with RMS and Point of Sale, but there is no pre-built ETL to load the data included with the product, although there are some options available through partners.

  • Core data warehouse-component in SAP BW

    Hi experts,
    A typical data warehouse reference architecture (as described in many books) basically consists of a core data warehouse, an ODS and data marts.
    As far as I understand, in SAP BW an ODS object acts as a storage location for consolidated and cleaned-up data (transaction data or master data, for example) at the document (atomic) level. This data is not stored permanently but is overwritten as soon as changes are applied to the data in the operational systems.
    But where in SAP BW is a component like a core data warehouse that stores consolidated and cleaned-up transaction data at the atomic level permanently, without overwriting it?
    Thanx
    Axel

    hi Dinesh
    thanx for the PDF! Does it refer to SAP BW 3.5 or NW2004?
    Do I get it right that in SAP BW version 3.5 there is no "core data warehouse" that provides persistent historic data on a granular level, but only ODS objects that can provide only the latest integrated, but NOT historic, data on a granular level?
    And do they actually introduce a new component in NW2004 (referring to page 20 in the PDF), or do they only suggest a different way of modelling a set of ODS objects and InfoCubes (which already existed in version 3.5) with granular data into a "Data Warehouse Layer"?
    Thanx
    Axel

  • Best practice of metadata table in data warehouse environment ?

    Hi guru's,
    In our data warehouse we have 1. a stage schema and 2. a DWH (data warehouse reporting) schema. In staging we have about 300 source tables. In the DWH schema we create only the tables required from a reporting perspective. Some of the tables in the staging schema have been created in the DWH schema as well, with different table and column names. The naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables we are creating a metadata table in the DWH schema, for example:
    Stage        DWH_schema
    Table_1      Table_A
    Table_2      Table_B
    Table_3      Table_C
    Table_4      Table_D
    My question is how do we handle the column names in each of these tables. The stage_1, stage_2 and stage_3 column names have been renamed in the DWH schema, where they are part of Table_A, Table_B and Table_C.
    As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables. Some of the tables have 200 columns.
    So my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table, and not column names?
    Any ideas will be greatly appreciated.
    Thanks!

    hi
    Seems like quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have 3 layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, like calculated ones, are business driven and the metadata is standard driven.
    Data Modeler fits the purpose of modelling L1 perfectly.
    L2 is the dimensional schema; business names are used for tables and columns, and they may be rewritten at the presentation layer (front-end tool).
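    One possible way to handle the column-name question is to keep two metadata tables, one for table mappings and one for column mappings, and to record column rows only where a column was actually renamed. This is just a sketch with invented names:
        -- Hypothetical metadata tables tracking stage -> DWH renames.
        CREATE TABLE meta_table_map (
            stage_table VARCHAR(30),
            dwh_table   VARCHAR(30),
            PRIMARY KEY (stage_table, dwh_table)
        );
        CREATE TABLE meta_column_map (
            stage_table  VARCHAR(30),
            stage_column VARCHAR(30),
            dwh_table    VARCHAR(30),
            dwh_column   VARCHAR(30),
            PRIMARY KEY (stage_table, stage_column)
        );
        -- Example row: Table_1 / CUST_NO maps to Table_A / CUSTOMER_NUMBER.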
    hope this helps D.

  • What are the best solutions for data warehouse configuration in 10gR2

    I need help on solutions to be provided to my client for upgrading the data warehouse.
    Current configuration: Oracle database 9.2.0.8. This database contains the data warehouse and one more data mart on the same host. The sizes are respectively 6 terabytes (retention policy of 3 years plus the current year) and 1 terabyte. The ETL tool and BO reporting tools are also hosted on the same host. This current configuration is performing really poorly.
    The client cannot go for major architectural or configuration changes to its existing environment now due to some constraints.
    However, they have agreed to separate the databases from the ETL tools and BO objects onto separate hosts. We are also planning to upgrade the database to 10gR2 to attain stability, get better performance and overcome the current headaches.
    We cannot upgrade the database to 11g as BO is at version 6.5, which isn't compatible with Oracle 11g, and the client cannot afford to upgrade anything other than the database.
    So my role is vital in providing a solution for better performance and carrying out a successful migration of the Oracle database from one host to another (similar platform and OS), in addition to the upgrade.
    I have till now thought of the following:
    Move the Oracle database and data mart to a separate host.
    The host will be the same platform, that is, HP Superdome with HP-UX 32-bit OS (we cannot change to 64-bit as the ETL tool doesn't support it).
    Install a new Oracle 10g database on the new host and move the data to it.
    Explore the new 10gR2 features that help the data warehouse, that is, the SQL MODEL clause, parallel processing, partitioning, Data Pump, and SPA to study pre- and post-migration behaviour.
    Also thinking of RAC to provide an even better solution, as our main motive is to show a tremendous performance enhancement.
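    As an illustration of the partitioning and parallelism points only, a range-partitioned fact table in Oracle syntax might look like the sketch below; the table and column names are invented and this is not a recommendation for this specific system.
        -- Hypothetical range-partitioned fact table with parallel query enabled;
        -- old partitions can later be compressed or dropped per the retention policy.
        CREATE TABLE sales_fact (
            sale_date  DATE,
            product_id NUMBER,
            amount     NUMBER(15,2)
        )
        PARTITION BY RANGE (sale_date) (
            PARTITION p2010 VALUES LESS THAN (TO_DATE('01-01-2011','DD-MM-YYYY')),
            PARTITION p2011 VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')),
            PARTITION pmax  VALUES LESS THAN (MAXVALUE)
        )
        PARALLEL;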
    I need all your help to prepare a good road map for my assignment. Please suggest.
    Thanks,
    Tapan

    SGA=27.5 GB and PGA=50 MB
    Also I am pasting part of STATSPACK Report, eliminating the snaps of DB bounce. Please suggest the scope of improvement in this case.
    STATSPACK report for
    Snap Id Snap Time Sessions Curs/Sess Comment
    Begin Snap: 582946 11-Mar-13 20:02:16 46 12.8
    End Snap: 583036 12-Mar-13 18:24:24 60 118.9
    Elapsed: 1,342.13 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
    Buffer Cache: 21,296M Std Block Size: 16K
    Shared Pool Size: 6,144M Log Buffer: 16,384K
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 1,343,739.01 139,883.39
    Logical reads: 100,102.54 10,420.69
    Block changes: 3,757.42 391.15
    Physical reads: 6,670.84 694.44
    Physical writes: 874.34 91.02
    User calls: 1,986.04 206.75
    Parses: 247.87 25.80
    Hard parses: 5.82 0.61
    Sorts: 1,566.76 163.10
    Logons: 10.99 1.14
    Executes: 1,309.79 136.35
    Transactions: 9.61
    % Blocks changed per Read: 3.75 Recursive Call %: 43.34
    Rollback per transaction %: 3.49 Rows per Sort: 190.61
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.90 Redo NoWait %: 100.00
    Buffer Hit %: 96.97 In-memory Sort %: 100.00
    Library Hit %: 99.27 Soft Parse %: 97.65
    Execute to Parse %: 81.08 Latch Hit %: 99.58
    Parse CPU to Parse Elapsd %: 3.85 % Non-Parse CPU: 99.34
    Shared Pool Statistics Begin End
    Memory Usage %: 7.11 50.37
    % SQL with executions>1: 62.31 46.46
    % Memory for SQL w/exec>1: 26.75 13.47
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~ % Total
    Event Waits Time (s) Ela Time
    CPU time 492,062 43.66
    db file sequential read 157,418,414 343,549 30.49
    library cache pin 92,339 66,759 5.92
    PX qref latch 63,635 43,845 3.89
    db file scattered read 2,506,806 41,677 3.70
    Background Wait Events for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc (idle events last)
    Avg
    Total Wait wait Waits
    Event Waits Timeouts Time (s) (ms) /txn
    log file sequential read 176,386 0 3,793 22 0.2
    log file parallel write 2,685,833 0 1,813 1 3.5
    db file parallel write 239,166 0 1,350 6 0.3
    control file parallel write 33,432 0 79 2 0.0
    LGWR wait for redo copy 478,120 536 75 0 0.6
    rdbms ipc reply 10,027 0 47 5 0.0
    control file sequential read 32,414 0 40 1 0.0
    db file scattered read 4,101 0 30 7 0.0
    db file sequential read 13,946 0 29 2 0.0
    direct path read 203,694 0 14 0 0.3
    log buffer space 363 0 13 37 0.0
    latch free 3,766 0 9 2 0.0
    direct path write 80,491 0 6 0 0.1
    async disk IO 351,955 0 4 0 0.5
    enqueue 28 0 1 21 0.0
    buffer busy waits 1,281 0 1 0 0.0
    log file single write 172 0 0 1 0.0
    rdbms ipc message 10,563,204 251,286 992,837 94 13.7
    pmon timer 34,751 34,736 78,600 2262 0.0
    smon timer 7,462 113 76,463 10247 0.0
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    CPU used by this session 49,206,154 611.0 63.6
    CPU used when call started 49,435,735 613.9 63.9
    CR blocks created 6,740,777 83.7 8.7
    Cached Commit SCN referenced 423,253,503 5,256.0 547.2
    Commit SCN cached 19,165 0.2 0.0
    DBWR buffers scanned 48,276,489 599.5 62.4
    DBWR checkpoint buffers written 6,959,752 86.4 9.0
    DBWR checkpoints 454 0.0 0.0
    DBWR free buffers found 44,817,183 556.5 57.9
    DBWR lru scans 137,149 1.7 0.2
    DBWR make free requests 162,528 2.0 0.2
    DBWR revisited being-written buff 4,220 0.1 0.0
    DBWR summed scan depth 48,276,489 599.5 62.4
    DBWR transaction table writes 5,036 0.1 0.0
    DBWR undo block writes 2,989,436 37.1 3.9
    DDL statements parallelized 3,723 0.1 0.0
    DFO trees parallelized 4,157 0.1 0.0
    DML statements parallelized 3 0.0 0.0
    OS Block input operations 29,850 0.4 0.0
    OS Block output operations 1,591 0.0 0.0
    OS Characters read/written 182,109,814,791 2,261,447.1 235,416.9
    OS Integral unshared data size ################## 242,463,432.4 ############
    OS Involuntary context switches 188,257,786 2,337.8 243.4
    OS Maximum resident set size 43,518,730,619 540,417.4 56,257.5
    OS Page reclaims 159,430,953 1,979.8 206.1
    OS Signals received 5,260,938 65.3 6.8
    OS Socket messages received 79,438,383 986.5 102.7
    OS Socket messages sent 93,064,176 1,155.7 120.3
    OS System time used 10,936,430 135.8 14.1
    OS User time used 132,043,884 1,639.7 170.7
    OS Voluntary context switches 746,207,739 9,266.4 964.6
    PX local messages recv'd 55,120,663 684.5 71.3
    PX local messages sent 55,120,817 684.5 71.3
    Parallel operations downgraded 1 3 0.0 0.0
    Parallel operations not downgrade 4,154 0.1 0.0
    SQL*Net roundtrips to/from client 155,422,335 1,930.0 200.9
    SQL*Net roundtrips to/from dblink 18 0.0 0.0
    active txn count during cleanout 16,529,551 205.3 21.4
    background checkpoints completed 43 0.0 0.0
    background checkpoints started 43 0.0 0.0
    background timeouts 280,202 3.5 0.4
    branch node splits 4,428 0.1 0.0
    buffer is not pinned count 6,382,440,322 79,257.4 8,250.7
    buffer is pinned count 9,675,661,370 120,152.8 12,507.9
    bytes received via SQL*Net from c 67,384,496,376 836,783.4 87,109.3
    bytes received via SQL*Net from d 6,142 0.1 0.0
    bytes sent via SQL*Net to client 50,240,643,657 623,890.4 64,947.1
    bytes sent via SQL*Net to dblink 3,701 0.1 0.0
    calls to get snapshot scn: kcmgss 145,385,064 1,805.4 187.9
    calls to kcmgas 36,816,132 457.2 47.6
    calls to kcmgcs 3,514,770 43.7 4.5
    change write time 369,373 4.6 0.5
    cleanout - number of ktugct calls 20,954,488 260.2 27.1
    cleanouts and rollbacks - consist 6,357,174 78.9 8.2
    cleanouts only - consistent read 10,078,802 125.2 13.0
    cluster key scan block gets 69,403,565 861.9 89.7
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    cluster key scans 41,311,211 513.0 53.4
    commit cleanout failures: block l 413,776 5.1 0.5
    commit cleanout failures: buffer 414 0.0 0.0
    commit cleanout failures: callbac 41,194 0.5 0.1
    commit cleanout failures: cannot 174,382 2.2 0.2
    commit cleanouts 11,469,056 142.4 14.8
    commit cleanouts successfully com 10,839,290 134.6 14.0
    commit txn count during cleanout 17,155,424 213.0 22.2
    consistent changes 145,418,277 1,805.8 188.0
    consistent gets 8,043,252,188 99,881.4 10,397.7
    consistent gets - examination 3,180,028,047 39,489.7 4,110.9
    current blocks converted for CR 9 0.0 0.0
    cursor authentications 14,926 0.2 0.0
    data blocks consistent reads - un 143,706,500 1,784.6 185.8
    db block changes 302,577,666 3,757.4 391.2
    db block gets 336,562,217 4,179.4 435.1
    deferred (CURRENT) block cleanout 2,912,793 36.2 3.8
    dirty buffers inspected 627,174 7.8 0.8
    enqueue conversions 1,296,337 16.1 1.7
    enqueue releases 13,053,200 162.1 16.9
    enqueue requests 13,239,092 164.4 17.1
    enqueue timeouts 185,878 2.3 0.2
    enqueue waits 114,120 1.4 0.2
    exchange deadlocks 7,390 0.1 0.0
    execute count 105,475,101 1,309.8 136.4
    free buffer inspected 1,604,407 19.9 2.1
    free buffer requested 258,126,047 3,205.4 333.7
    hot buffers moved to head of LRU 22,793,576 283.1 29.5
    immediate (CR) block cleanout app 16,436,010 204.1 21.3
    immediate (CURRENT) block cleanou 2,860,013 35.5 3.7
    index fast full scans (direct rea 12,375 0.2 0.0
    index fast full scans (full) 3,733 0.1 0.0
    index fast full scans (rowid rang 192,148 2.4 0.3
    index fetch by key 1,321,024,486 16,404.5 1,707.7
    index scans kdiixs1 406,165,684 5,043.8 525.1
    leaf node 90-10 splits 50,373 0.6 0.1
    leaf node splits 697,235 8.7 0.9
    logons cumulative 884,756 11.0 1.1
    messages received 3,276,719 40.7 4.2
    messages sent 3,257,171 40.5 4.2
    no buffer to keep pinned count 569 0.0 0.0
    no work - consistent read gets 4,406,092,172 54,715.0 5,695.8
    opened cursors cumulative 20,527,704 254.9 26.5
    parse count (failures) 267,088 3.3 0.4
    parse count (hard) 468,996 5.8 0.6
    parse count (total) 19,960,548 247.9 25.8
    parse time cpu 323,024 4.0 0.4
    parse time elapsed 8,393,422 104.2 10.9
    physical reads 537,189,332 6,670.8 694.4
    physical reads direct 292,545,140 3,632.8 378.2
    physical writes 70,409,002 874.3 91.0
    physical writes direct 59,248,394 735.8 76.6
    physical writes non checkpoint 69,103,391 858.1 89.3
    pinned buffers inspected 11,893 0.2 0.0
    prefetched blocks 95,892,161 1,190.8 124.0
    prefetched blocks aged out before 1,495,883 18.6 1.9
    Instance Activity Stats for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    Statistic Total per Second per Trans
    process last non-idle time ################## ############## ############
    queries parallelized 417 0.0 0.0
    recursive calls 122,323,299 1,519.0 158.1
    recursive cpu usage 3,144,533 39.1 4.1
    redo blocks written 180,881,558 2,246.2 233.8
    redo buffer allocation retries 5,400 0.1 0.0
    redo entries 164,728,513 2,045.6 213.0
    redo log space requests 1,006 0.0 0.0
    redo log space wait time 2,230 0.0 0.0
    redo ordering marks 2,563 0.0 0.0
    redo size 108,208,614,904 1,343,739.0 139,883.4
    redo synch time 558,520 6.9 0.7
    redo synch writes 2,343,824 29.1 3.0
    redo wastage 1,126,585,600 13,990.0 1,456.4
    redo write time 718,655 8.9 0.9
    redo writer latching time 7,763 0.1 0.0
    redo writes 2,685,833 33.4 3.5
    rollback changes - undo records a 522,742 6.5 0.7
    rollbacks only - consistent read 335,177 4.2 0.4
    rows fetched via callback 1,100,990,382 13,672.1 1,423.3
    session connect time ################## ############## ############
    session cursor cache count 1,061 0.0 0.0
    session cursor cache hits 1,687,796 21.0 2.2
    session logical reads 8,061,057,193 100,102.5 10,420.7
    session pga memory 1,573,228,913,832 19,536,421.0 2,033,743.8
    session pga memory max 1,841,357,626,496 22,866,054.4 2,380,359.0
    session uga memory 1,074,114,630,336 13,338,399.4 1,388,529.0
    session uga memory max 386,645,043,296 4,801,374.0 499,823.6
    shared hash latch upgrades - no w 410,360,146 5,095.9 530.5
    sorts (disk) 2,657 0.0 0.0
    sorts (memory) 126,165,625 1,566.7 163.1
    sorts (rows) 24,048,783,304 298,638.8 31,088.3
    summed dirty queue length 5,438,201 67.5 7.0
    switch current to new buffer 1,302,798 16.2 1.7
    table fetch by rowid 6,201,503,534 77,010.5 8,016.8
    table fetch continued row 26,649,697 330.9 34.5
    table scan blocks gotten 1,864,435,032 23,152.6 2,410.2
    table scan rows gotten 43,639,997,280 541,923.3 56,414.3
    table scans (cache partitions) 26,112 0.3 0.0
    table scans (direct read) 246,243 3.1 0.3
    table scans (long tables) 340,200 4.2 0.4
    table scans (rowid ranges) 359,617 4.5 0.5
    table scans (short tables) 9,111,559 113.2 11.8
    transaction rollbacks 4,819 0.1 0.0
    transaction tables consistent rea 824 0.0 0.0
    transaction tables consistent rea 1,386,848 17.2 1.8
    user calls 159,931,913 1,986.0 206.8
    user commits 746,543 9.3 1.0
    user rollbacks 27,020 0.3 0.0
    write clones created in backgroun 7 0.0 0.0
    write clones created in foregroun 4,350 0.1 0.0
    Buffer Pool Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> Standard block size Pools D: default, K: keep, R: recycle
    -> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
    Free Write Buffer
    Number of Cache Buffer Physical Physical Buffer Complete Busy
    P Buffers Hit % Gets Reads Writes Waits Waits Waits
    D 774,144 95.6############ 233,869,082 10,089,734 0 0########
    K 504,000 99.9############ 3,260,227 1,070,338 0 0 65,898
    R 63,504 96.2 196,079,539 7,511,863 535 0 0 0
    Buffer wait Statistics for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by wait time desc, waits desc
    Tot Wait Avg
    Class Waits Time (s) Time (ms)
    data block 7,791,121 14,676 2
    file header block 587 101 172
    undo header 151,617 71 0
    segment header 299,312 58 0
    1st level bmb 45,235 7 0
    bitmap index block 392 1 3
    undo block 4,250 1 0
    2nd level bmb 14 0 0
    system undo header 2 0 0
    3rd level bmb 1 0 0
    Latch Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
    willing-to-wait latch get requests
    ->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
    ->"Pct Misses" for both should be very close to 0.0
    Pct Avg Wait Pct
    Get Get Slps Time NoWait NoWait
    Latch Requests Miss /Miss (s) Requests Miss
    Consistent RBA 2,686,230 0.0 0.2 0 0
    FAL request queue 86 0.0 0 0
    FAL subheap alocation 0 0 2 0.0
    FIB s.o chain latch 1,089 0.0 0 0
    FOB s.o list latch 4,589,986 0.5 0.0 2 0
    NLS data objects 1 0.0 0 0
    SQL memory manager worka 5,963 0.0 0 0
    Token Manager 0 0 2 0.0
    active checkpoint queue 719,439 0.3 0.1 0 1 0.0
    alert log latch 184 0.0 0 2 0.0
    archive control 4,365 0.0 0 0
    archive process latch 1,808 0.6 0.6 0 0
    begin backup scn array 3,387,572 0.0 0.0 0 0
    cache buffer handles 1,577,222 0.2 0.0 0 0
    cache buffers chains ############## 0.5 0.0 430 354,357,972 0.3
    cache buffers lru chain 17,153,023 0.1 0.0 1 385,505,654 0.5
    cas latch 538,804,153 0.3 0.0 7 0
    channel handle pool latc 1,776,950 0.5 0.0 0 0
    channel operations paren 2,901,371 0.3 0.0 0 0
    checkpoint queue latch 99,329,722 0.0 0.0 0 11,153,369 0.1
    child cursor hash table 3,927,427 0.0 0.0 0 0
    commit callback allocati 8,739 0.0 0 0
    dictionary lookup 7,980 0.0 0 0
    dml lock allocation 6,767,990 0.1 0.0 0 0
    dummy allocation 1,898,183 0.2 0.1 0 0
    enqueue hash chains 27,741,348 0.1 0.1 4 0
    enqueues 17,450,161 0.3 0.1 6 0
    error message lists 132,828 2.6 0.2 1 0
    event group latch 884,066 0.0 0.7 0 0
    event range base latch 1 0.0 0 0
    file number translation 34 38.2 0.9 0 0
    global tx hash mapping 577,859 0.0 0 0
    hash table column usage 4,062 0.0 0 8,757,234 0.0
    hash table modification 16 0.0 0 2 0.0
    i/o slave adaptor 0 0 2 0.0
    job workq parent latch 4 100.0 0.3 0 494 8.7
    job_queue_processes para 1,950 0.0 0 2 0.0
    ksfv messages 0 0 4 0.0
    ktm global data 8,219 0.0 0 0
    lgwr LWN SCN 2,687,862 0.0 0.0 0 0
    library cache 310,882,781 0.9 0.0 34 104,759 4.0
    library cache load lock 30,369 0.0 0.3 0 0
    library cache pin 153,821,358 0.1 0.0 2 0
    library cache pin alloca 126,316,296 0.1 0.0 4 0
    list of block allocation 2,730,808 0.3 0.0 0 0
    loader state object free 566,036 0.1 0.0 0 0
    longop free list parent 197,368 0.0 0 8,390 0.0
    message pool operations 14,424 0.0 0.0 0 0
    messages 25,931,764 0.1 0.0 1 0
    mostly latch-free SCN 40,124,948 0.3 0.0 5 0
    Latch Sleep breakdown for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    -> ordered by misses desc
    Get Spin &
    Latch Name Requests Misses Sleeps Sleeps 1->4
    cache buffers chains ############## 74,770,083 1,062,119 73803903/884
    159/71439/10
    582/0
    redo allocation 170,107,983 3,441,055 149,631 3292872/1467
    48/1426/9/0
    library cache 310,882,781 2,831,747 89,240 2754499/6780
    6/7405/2037/
    0
    shared pool 158,471,190 1,755,922 55,268 1704342/4836
    9/2826/385/0
    cas latch 538,804,153 1,553,992 6,927 1547125/6808
    /58/1/0
    row cache objects 161,142,207 1,176,998 27,658 1154070/1952
    0/2560/848/0
    process queue reference 1,893,917,184 1,119,215 106,454 78758/4351/1
    36/0/0
    Library Cache Activity for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    ->"Pct Misses" should be very low
    Get Pct Pin Pct Invali-
    Namespace Requests Miss Requests Miss Reloads dations
    BODY 3,137,721 0.0 3,137,722 0.0 0 0
    CLUSTER 6,741 0.1 4,420 0.2 0 0
    INDEX 353,708 0.8 361,065 1.2 0 0
    SQL AREA 17,052,073 0.3 54,615,678 0.9 410,682 19,628
    TABLE/PROCEDURE 3,521,884 0.2 12,922,737 0.1 619 0
    TRIGGER 1,975,977 0.0 1,975,977 0.0 1 0
    SGA Memory Summary for DB: P7IN1 Instance: P7IN1 Snaps: 582946 -583036
    SGA regions Size in Bytes
    Database Buffers 22,330,474,496
    Fixed Size 779,288
    Redo Buffers 17,051,648
    Variable Size 7,180,648,448
    sum 29,528,953,880

  • Table and Index compression in data warehouse - thoughts?

    Hi,
    We have a data warehouse with large fact tables and materialized views of this data.
    There are approximately 3 million inserts per day, and about 12 million at weekends.
    The fact tables are expected to have around 200 million rows, and a couple will have 1-3 billion.
    The tables are partitioned and have bitmap indexes.
    I just wondered what your thoughts were about compressing large fact tables and mviews, both from the point of view of ETL into them and of reporting from them afterwards.
    I take it we can compress/uncompress accordingly without any problem?
    Many Thanks

    After compression, most SELECT statements would not get slower. Actually, many can get faster due to reduced IO and buffer needs.
    The situation with DML is more complex. It depends on the exact compression options (basic or advanced) and the DML (INSERT, UPDATE, direct load, ...), but generally DML is negatively affected by compression.
    In a Data Warehouses (DWs), it is usually quite beneficial to compress partitions or tables that contain data that is not supposed to be modified (read only or read mostly). Please note that in many cases you do not have to compress while you are loading the data – you can do that later.
    You can also consider compressing some of your B-tree indexes (if you use them in your DW system).
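    As a purely illustrative sketch in Oracle syntax (invented object names; the exact options depend on your version and edition):
        -- Compress an existing partition after the load rather than during it.
        -- MODIFY only affects future direct-path loads; MOVE rebuilds and
        -- compresses the data that is already there.
        ALTER TABLE sales_fact MODIFY PARTITION p2011 COMPRESS;
        ALTER TABLE sales_fact MOVE PARTITION p2011 COMPRESS UPDATE INDEXES;
        -- B-tree index (key) compression is a separate option:
        ALTER INDEX sales_fact_ix REBUILD COMPRESS;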
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • Modeling transaction data

    Hello all,
    I have 2 questions that I was hoping to get an answer to:
    Question 1:
    What is the normal way of modeling transaction data with a changing status in a BW system? Are there any links/threads to read?
    I thought that transaction data would go into the DSO, so that any changes to the transaction data would be recorded there (very granular), while the aggregated data would be placed in the cube.
    Question 2:
    For what reason would someone place a navigation attribute in the dimension of a cube?
    TIA
    PS - this is for BI 7.0
    Edited by: Casey Harris on Feb 4, 2008 10:15 PM

    Casey,
    A couple of quick answers that aren't links:
    1)  Ideally BW 7.0 allows you to create an Enterprise Data Warehouse (EDW), where you have granular data loaded to DSO's that then aggregate the data into cubes.    That is what we strive for.  In practice it doesn't always work out that way.  Do some searches on EDW and you should find some info.
    2)  Navigation attributes are essentially links to master data attributes.  By not putting them directly in a cube, you save a little bit of space in the cube.  The most common use of them that we have is when users tell us they want to filter on a field that is not directly in a cube, but is in the master data attributes.  We can then easily make that field a navigation attribute.  Otherwise if you wanted to add the field to the cube, you'd have to reload all the data, which can be quite painful.
    Michael

  • Permanent Job Opportunity - Oracle BI Data Warehouse Developer Chicago, IL

    Submit Resumes to [email protected]
    The Business Intelligence Specialist will play a critical role in designing, developing, deploying, and supporting data warehouse/data mart applications. In this role, the person will be responsible for all BI aspects of a data warehouse/data mart application. Primary duties will be to create reporting standards, as well as coach and support power users with selected Oracle tool. The ideal candidate will have 3+ years demonstrated experience in data warehousing and Business Intelligence tools. Must also possess excellent communication skills and an outstanding track record with the user.
    Principal Duties:
    Participates with internal clients to define software requirements for development, maintenance and/or improvements
    Maintains accuracy, integrity, and availability of the data warehouse
    Tests, monitors, manages, and validates data warehouse activity, including data extraction, transformation, movement, loading, cleansing, and updating processes
    Designs and optimizes data mart models for Oracle Business Intelligence Suite.
    Translates the reporting requirements into data analysis and reporting solutions.
    Reviews and signs off on project plan(s).
    Reviews and signs off on technical design(s).
    Defines and develops BI reports for accessing/analyzing data in the warehouse.
    Customizes BI tools and data sets for different types of users.
    Designs and develops UAT (User Acceptance Testing).
    Drives improvement of BI system architecture and development process.
    Develops and maintains internal relationships. Actively champions teamwork. Uses internal resources to enhance knowledge and expertise of industry, research, products and services. Provides information and support to others in the company.
    Required Skills:
    Education and Experience:
    BS/MS in Computer Science or equivalent.
    3+ years of experience with Oracle, PL/SQL Development and Data Warehousing.
    Experience with Oracle Business Intelligence Suite and Crystal Reports is a plus.
    2-3 years dimensional modeling experience.
    Demonstrated hands on experience with Unix/Linux, SQL required.
    Demonstrated hands on experience with Oracle reporting tools.
    Demonstrated experience with translating business requirements into data analysis and reporting solutions.
    Experience in training programs/teach users to use tools.
    Expertise with software development process.
    Effective mediator - able to facilitate constructive and productive discussions with internal customers, external clients, and development personnel pertaining to feature definition, project scope, and status
    Problem solving - identifies and resolves problems in a timely manner, gathers and analyzes information skillfully and maintains confidentiality.
    Planning/organizing - prioritizes and plans work activities and uses time efficiently. Work requires continual attention to detail in composing and proofing materials, establishing priorities and meeting deadlines. Must be able to work in a fast-paced environment with demonstrated ability to juggle multiple competing tasks and demands.
    Quality control - demonstrates accuracy and thoroughness and monitors own work to ensure quality.
    Adaptability - adapts to changes in the work environment, manages competing demands and is able to deal with frequent change, delays or unexpected events.
    Benefits/Compensation:
    Employees enjoy competitive compensation. We have a full benefits package including medical and dental insurance, long-term disability and life insurance and a 401(k) plan.
    The client operates within the healthcare industry.
    This is a permanent full-time position. After ensuring your availability and qualifications we will put you in direct contact with the client to move forward in the process.

    FORWARD THE UPDATED RESUME AS SOON AS POSSIBLE.

  • OBIEE reverse engineering to go from SQL Server to a data warehouse

    Hi,
    I'm new to data modeling for warehouses. We currently have an OBIEE environment set up where the data source was SQL Server transactional tables. The SQL Server data is to be moved to a non-Oracle data warehouse, and I need to produce a logical data model for the warehouse folks at my company. Unfortunately, the SQL Server data was never modeled, so I'm basing the model on the logical and physical diagrams/relationships in OBIEE.
    My question is in regard to the validity of the following relationships for use in a data warehouse, based on what's currently in OBIEE. When I model this via Erwin, I'm wondering if I'm way off base in the relationships (modeling-wise, not personally):
    Dimension 1 has a 0:M with Dimension 2
    Dimension 1 has a 0:M with Dimension 3
    Dimension 2 has a 0:M with Dimension 3
    Both Dimension 2 and Dimension 3 have a 0:M with Fact 1
    Through the use of aliases and such, this does work in OBIEE. Will this work as a data model for a data warehouse environment?
    Thanks!

    I think you started off on the wrong foot. I suggest you search Google for "Kimball methodology" and read a few articles. Your DWH model should not be based on your transactional tables. You should ask your business users what "questions" they want to answer in the DWH, and then model your DWH based on that. You cannot model a DWH without knowing what questions you need to answer. For instance, if your business users want to know the sales per day and per branch, you will have a sales fact with a sales amount measure joining to two dimensions, a branch dimension and a time dimension. The number of facts will depend on the questions you need to answer, the type of data and their granularity.
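    To make the sales-per-day-per-branch example concrete, that business question maps to roughly the following query shape against the star schema; the table and column names below are invented for illustration.
        -- "Sales per day and per branch" answered from a sales fact joined to
        -- branch and date dimensions.
        SELECT d.calendar_day,
               b.branch_name,
               SUM(f.sales_amount) AS total_sales
        FROM   sales_fact f
        JOIN   dim_date   d ON d.date_key   = f.date_key
        JOIN   dim_branch b ON b.branch_key = f.branch_key
        GROUP BY d.calendar_day, b.branch_name;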

  • Is OBIEE used to create data warehouses dynamically?

    Management where I work wants to use the OBIEE Administrator to source a 3NF normalized database and create a "virtual data warehouse" in the Business Model and Mapping layer of the OBI Administrator, since a star schema model is required by the OBI business model layer. They claim they were told by an Oracle sales rep that the Administrator tool could do this.
    Is this possible? As OBI issues only SQL and not PL/SQL, how can one "create" dimensions, lookup tables and fact tables dynamically? And even if it could, the performance hit of recreating the virtual data warehouse each time a query is issued would be huge.
    Having used Prism Warehouse Builder and DataStage in the past to create data warehouses, I am aware that one needs a procedural programming language to create and maintain the star schema tables (surrogate key maintenance, controlling workflows, maintaining slowly changing dimensions, intermediate lookup tables, etc.). SQL was not meant to do this heavy-lifting programming. After all, isn't this why Oracle Warehouse Builder (and previously Informatica) is shipped with the OBIEE suite, because OBI is not an ETL tool for creating dimensional models? One uses an ETL tool to create the dimensional data model for OBI to access, and passes the metadata along to OBI Answers.
    So is it normal practice to use the Administrator's Business Model and Mapping layer to create virtual star schema logical tables from physical tables that are in 3NF? Or is the tool used to access already denormalized tables in the physical layer that were created using Informatica, OWB or another ETL tool?

    I asked an "Expert" in OBIEE. Here are snippets of his response:
    "Be aware though that the transformation ability is fairly limited, and
    will only really work with data that is very close to a star schema, i.e.
    the data can be easily transformed through a couple of denormalizations and
    table joins. If your source data is very normalized and cannot easily be
    transformed into a star schema, you would need to use a tools such as
    Informatica, OWB or similar to extract data from your source systems, load
    and then transform it into a data warehouse or data mart and report of of
    that. The more that your data needs to be transformed (i.e. the closer it
    is to a 3NF model) the more likely it is that you'll need to use an ETL
    tool, and a data warehouse or data mart, to host your data."
    And in response to my noting the lack of documentation on how to model a 3NF to Star Schema his response was:
    "No, you're right, the documentation doesn't really go into "how to" turn a 3NF model into a dimensional model. If you look back to when OBIEE was a Siebel product, the documentation was really aimed at either Siebel consultants or customers who had been on the training, they didn't want customers "off the street" to try and implement OBIEE as it would hit their services revenue. That's where the blog posts we do, things like the Oracle-by-example training courses on OTN and so on come in, otherwise as you say there's little out there on the best way to transform your model - it's mostly passed on "word of mouth" or is built up from experience on working on projects."
