OWB Lookup/Join

We are using OWB repository 10.2.0.2.0 and OWB client 10.2.0.2.8 against Oracle 10g (10.2.0.2.0); OWB is installed on a 64-bit Sun server.
When we use lookups in an OWB mapping, we could instead create a database view that joins the source table with the lookup tables, and use that view as the source of the mapping. This would give us fewer lookup operators in the mapping, and since the join is done in the database server, we might get better performance.
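A minimal sketch of that approach (table, column, and view names are hypothetical):

```sql
-- Pre-join the source with its lookup table in the database,
-- then register this view as the source of the OWB mapping.
CREATE OR REPLACE VIEW src_with_lookups_v AS
SELECT s.sourcekey1,
       s.amount,
       l.lookup_desc
  FROM source_tab s
  LEFT OUTER JOIN ltab l
    ON l.lookup_key = s.sourcekey1;
```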
Has anyone in this forum used the above approach in large projects?
Any ideas?
What is the best approach when a lookup returns multiple values for a single lookup key?
Thanks in advance.
RI

Hi RI,
> This will help us to have less lookups in OWB mapping. As we Join the lookup in Database server, we could get better performance. Did any one in this forum use above approach in large projects?

OWB uses the Oracle database as its ETL engine, so in set-based execution mode (for a PL/SQL mapping) you will not get better performance with a view used instead of lookup or join operators.
An additional negative impact of the "view" approach is the loss of complete information for Lineage/Impact analysis (I think that for large projects these are very helpful features).

> What is the best approach, when lookup returns multiple values for single lookup key?

I think this is a design mistake - you should define a unique key on the lookup table.
Regards,
Oleg
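Defining the unique key Oleg suggests could look like this (table and column names are placeholders):

```sql
-- Guarantee at most one row per lookup key, so a key lookup
-- can never return multiple values for the same input.
ALTER TABLE ltab
  ADD CONSTRAINT ltab_uk UNIQUE (lookup_key);
```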

Similar Messages

  • OWB Lookup

    Hi
    I am relatively new to OWB, so if my question is very basic, please bear with me. I would like to know the difference between the lookup option and the join we have in the mapping editor. Can anybody share the knowledge with some example?
    I appreciate your help on this regard
    Regards
    Balaji

    The key lookup is similar to the join, but it has two important differences:
    - It results in an outer join, not an equi-join: if no output record is found for an input record, a null record is generated anyway.
    - You can (from the GUI) set the output value for this null-record case by setting a property named 'Default value'.
    This is useful, for example, when you define a mapping that loads a cube and when you define surrogate keys on the dimension. In this example, you create a Key Lookup operator that looks up the surrogate key in the dimension table and returns the corresponding original record to form the foreign key relationship. You can also define the Default Value in this scenario to point to a key referring an 'Unknown' or 'Others' record in the dimension. So for example, if during the loading process you encounter an unknown product in the sources, you can still generate a record for the fact table (cube) that will point to the 'Unknown' record in the products table.
    Regards:
    Igor
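The SQL that a key lookup with a Default Value effectively produces can be sketched roughly as follows (names are illustrative, not the exact OWB-generated code):

```sql
-- Outer join to the dimension; unmatched source rows fall back
-- to the surrogate key of the 'Unknown' dimension record.
SELECT s.order_id,
       NVL(d.dimension_key, -1) AS product_key   -- -1 = 'Unknown' row
  FROM sales_src s
  LEFT OUTER JOIN product_dim d
    ON d.product_code = s.product_code;
```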

  • OWB Newbie - Joining Dimensions to Fact Tables

    Hello Forum,
    This may seem like a simple question, but the documentation is so lacking that I can't seem to find the answer to how the tool works.
    I am creating a simple data mart starting with a star schema and choosing a deployment option of ROLAP.
    When using the OWB Tool to create dimensions and fact tables, do you need to define your primary business key (coming from your source system) in each fact table for each dimension that will join to that fact table? I am assuming yes, at least at the lowest leaf level of the dimension hierarchy, in this case Store_ID. How else would you be able to correctly join a particular sales order record to the store that it was sold in? That is a simple query we know our users will execute for reporting purposes. To make myself clear, here is a simple example:
    Dimension = Store
    Hierarchy = Store ----> Sales Territory ---->Region ----->Country
    Attributes = Store ID (primary business key), Store_Name, Sales_Territory_ID, Sales_Territory_Description, Store_Manager_Name, Store_Location etc.,
    Cube or Fact Table = Sales Orders
    Measures = sale_price, quantity, drop_ship_cost, sales_tax, etc.,
    Do I also need to create an attribute for Store ID so that when I load each sales order record into the fact table, its proper Store ID comes along with it?
    I understand how the tool uses the surrogate key and business identifier to create a foreign key lookup from dimension to fact table and pull in the correct store description (i.e. name), but unless you have the Store ID as part of the sales record being loaded into the fact table, I don't see how you can traverse the table and join a sales order to its proper store.
    Can someone who is farther along in the process or has more experience with other components of the tool confirm my assumption or set me straight on how the tool accomplishes this?
    thanks
    monalisa

    Hi Monalisa, I'll reply assuming you're using OWB 10gR2.
    First off, for each of the dimensions, you'll define a business key (as you noted below). Then, when you define the cube object, you'll tell it which dimensions you are using, along with the measures you will be storing.
    When you drop a "cube" object into a map, it will show the measures, but it will replace the dimensions with their natural keys. I.e. in your example below, if your sales order cube had ONLY the store dimension, then the cube object will show up in a mapping with the measures of sales_price, quantity, drop_ship_cost, etc., but instead of showing the dimension for store, it will instead show the natural key Store_ID. When you map your data to load (which should be by store_id), OWB will automagically look up the proper dimension keys and do everything else for you.
    Hope this helps!
    Scott

  • OWB Outer Join Error

    I am working with OWB 10gR2. My source is Oracle 8i. My target db is 10g.
    Here is the code in the joiner expression window:
    APS_CUSTOMER.ID = APS_PROF_MEMBER.ID (+)
    AND APS_CUSTOMER.SEGMENT_ID = APS_CUSTOMER_SEGMENT.SEGMENT_ID (+)
    AND APS_CUSTOMER.ADDR_ID = APS_CUST_ADDR.ADDR_ID (+)
    AND APS_CUSTOMER.ETHNICITY_ID = APS_ETHNICITY.ETHNICITY_ID (+)
    AND APS_CUSTOMER.ID = APS_CUSTOMER_SPECIALTY.ID (+)
    AND APS_CUSTOMER_SPECIALTY.SPECIALTY_ID = APS_SPECIALTY.SPECIALTY_ID (+)
    AND APS_MEMBER_CLASSIFICATION.CLASSIFICATION_ID = APS_CLASSIFICATION.CLASSIFICATION_ID (+)
    AND APS_CUSTOMER.ID = APS_MEMBER_CLASSIFICATION.ID (+)
    AND APS_CUSTOMER.ID = DPS_USER_ROLES.USER_ID
    AND DPS_USER_ROLES.ATG_ROLE = DPS_ROLE.ROLE_ID
    AND DPS_USER_ROLES.ATG_ROLE = '1800003'
    AND (APS_PROF_MEMBER.DEGREES IS NOT NULL OR
    APS_MEMBER_CLASSIFICATION.CLASSIFICATION_ID IS NOT NULL OR
    APS_CUSTOMER_SPECIALTY.SPECIALTY_ID IS NOT NULL)
    I am getting the following error:
    VLD-1512: Outer join marker(+) can only be applied to column names. It cannot be applied to more complex expressions.
    I have tried to remove the last 4 rows of the join condition to no avail. Can someone please provide some guidance. Thanks.

    Next problem. The mapping "works", but is not giving me the correct results.
    I had previously written the query in TOAD and know the results are correct. The joiner code is almost exactly the same as the SQL query, but the input order is not the same as the table order in the query. Does the order of the input groups in the joiner matter that much? I think I remember reading that in the OWB help.
    Thanks.

  • Lookup Idea??

    We are using OWB repository 10.2.0.2.0 and OWB client 10.2.0.2.8. The Oracle version is 10 G (10.2.0.2.0). OWB is installed on Sun 64 bit server.
    As we use lookups in OWB mappings, we have a situation where we need lookups from the same table returning different results within the same OWB map. Here is the situation:
    1) Table Ltab
    Lookup key = sourcekey1
    and lookupcode in ( 'A', 'M')
    2) Table Ltab
    Lookup key = sourcekey1
    and lookupcode in ( 'K', 'V')
    We could use (lookupcode = 'A' OR lookupcode = 'M') instead of lookupcode IN ('A', 'M') as well.
    I do not see a way to express this in the OWB lookup operator.
    Is it doable in OWB via the lookup operator?
    Alternatively, we could create multiple views to support the above situation and attach the corresponding views to the lookups.
    Has anyone in this forum used the above approach in large projects?
    Any ideas?
    Thanks in advance.
    RI

    Hi,
    I suggest using a joiner operator instead of the lookup. The lookup operator generates a left outer join anyway and in the join condition you have much more flexibility.
    I would not recommend using views, since this splits your ETL logic into two different locations.
    Regards,
    Carsten.
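Using the column names from the question, the joiner approach Carsten describes might generate SQL along these lines (a sketch for illustration, not the exact OWB output):

```sql
-- One outer join per lookup result set; the IN-list restriction
-- goes straight into the join condition, which a lookup operator
-- cannot express.
SELECT s.sourcekey1,
       l1.result_col AS result_am,
       l2.result_col AS result_kv
  FROM source_tab s
  LEFT OUTER JOIN ltab l1
    ON l1.lookup_key = s.sourcekey1
   AND l1.lookupcode IN ('A', 'M')
  LEFT OUTER JOIN ltab l2
    ON l2.lookup_key = s.sourcekey1
   AND l2.lookupcode IN ('K', 'V');
```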

  • Distinct count using lookup table

    How can I get a distinct count of column values using a different table?
    Let's say I want to get a distinct count of all "company_name" records in table "emp" that correspond (match) with the "lookup" table, "state" category.
    What I want is to find counts for all companies that have a value of "california" in the "state" column of the "lookup" Table. I want the output to look like:
    Sears 17
    Pennys 22
    Marshalls 6
    Macys 9
    I want the result to show me the company names dynamically as I don't know what they are, just that they are part of the "state" group in the lookup Table. Does this make sense?
    M

    Mark,
    In the future you might consider creating test cases for us to work with, something similar to the following, where sample data is created for each table as the UNION ALL of multiple SELECT statements:

    select 'INIT_ASSESS' lookup_type
         , 1 lookup_value
         , 'Initial Assessment' lookup_value_desc
      from dual union all
    select 'JOB_REF', 2, 'Job Reference' from dual union all
    select 'SPEC_STA', 3, 'SPEC STA' from dual;

    select 'INIT_ASSESS' rfs_category
         , 1 val
      from dual union all
    select 'JOB_REF', 1 from dual union all
    select 'JOB_REF', 1 from dual union all
    select 'SPEC_STA', null from dual;

    Then we can either take your select statements and make them the source of a CTAS (create table as) statement:

    create table lookup as
    select 'INIT_ASSESS' lookup_type
         , 1 lookup_value
         , 'Initial Assessment' lookup_value_desc
      from dual union all
    select 'JOB_REF', 2, 'Job Reference' from dual union all
    select 'SPEC_STA', 3, 'SPEC STA' from dual;

    or include them as subfactored queries by using the WITH statement:

    with lookup as (
    select 'INIT_ASSESS' lookup_type
         , 1 lookup_value
         , 'Initial Assessment' lookup_value_desc
      from dual union all
    select 'JOB_REF', 2, 'Job Reference' from dual union all
    select 'SPEC_STA', 3, 'SPEC STA' from dual
    ), RFS as (
    select 'INIT_ASSESS' rfs_category
         , 1 val
      from dual union all
    select 'JOB_REF', 1 from dual union all
    select 'JOB_REF', 1 from dual union all
    select 'SPEC_STA', null from dual
    )
    select lookup_value_desc, count_all, count_val, dist_val
      from lookup
      join (select rfs_category
                 , count(*) count_all
                 , count(val) count_val
                 , count(distinct val) dist_val
              from RFS group by rfs_category)
        on rfs_category = lookup_type;

    Edited by: Sentinel on Nov 17, 2008 3:38 PM
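For the question as originally stated, the query might look like this; note the post does not say which column joins "emp" to "lookup", so the join key and the lookup column names below are assumptions:

```sql
-- Count emp records per company, restricted to companies whose
-- lookup row carries 'california' in the state category.
SELECT e.company_name,
       COUNT(*) AS record_count
  FROM emp e
  JOIN lookup l
    ON l.company_name = e.company_name   -- assumed join key
 WHERE l.category = 'state'
   AND l.value    = 'california'
 GROUP BY e.company_name;
```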

  • LOOKUP TABLES

    Should both the source and lookup tables be in the same schema?
    I chose the source table from one schema and the lookup table from another schema, so when I try to save it, I get an error saying
    *"In order to create a lookup, you must be sure that both tables are on the same connection and the source technology has the ability to use a lookup"*, and I am not able to save the lookup.
    Edited by: 851305 on May 26, 2011 12:21 AM

    Hi,
    Please execute the lookup join condition in the staging area.
    Thanks

  • ODI-15605: Multiple ordered joins in this dataset use the same order number.

    I get this error but only have 3 ordered joins numbered 10, 25 and 30; I also have ten lookups specified as left-outer joins.
    I presume the lookup joins also have order numbers. Does anyone know how to find out those order numbers?
    (For the moment I'm using a work-around of doing the lookups in the staging area, which seems to avoid the problem.)
    ODI version: 11.1.1, Build ODI_11.1.1.5.0_GENERIC_110422.1001
    Java: 1.6.0_45
    Windows: 7 Pro SP 1, 64-bit
    Keith H.

    Forgot to mention, I get this error when I save the interface, not when I run it. K

  • OWB 11.1 Cube Operator 'Active Date' column

    Hi Experts,
    I recently defined a cube in OWB 11.1 using the cube editor. The result object contained a column 'Active Date' which was only visible in the cube object within a mapping. Neither the underlying table nor the Cube object editor showed the 'Active Date' column.
    Has anyone a short explanation on what the idea of that column is and how it could / should be used?
    Thanks for your help
    Regards
    Andy

    Hi Alex,
    thanks for your reply.
    Does that mean that I should fill the 'Active Date' column with the date value of my fact row? OWB then joins all type II or type III dimensions like this:
    cube.active_date between dimension_start_date and dimension_end_date.
    Correct?
    The 'Active Date' column will never be visible in the underlying cube-table as it's only used to create the correct join?
    Thanks for your help.
    Regards
    Andy
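Assuming the Active Date behaves as described in the thread, the generated dimension join could be sketched like this (column names hypothetical):

```sql
-- Effective-date join against a type II dimension: pick the
-- dimension version that was current on the fact's active date.
SELECT f.sale_amount,
       d.dimension_key
  FROM fact_stg f
  JOIN customer_dim d
    ON d.business_key = f.customer_code
   AND f.active_date BETWEEN d.effective_date
                         AND NVL(d.expiration_date, DATE '9999-12-31');
```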

  • Look Up Error

    Hi All,
    Can anyone tell me as to how I can guess by seeing the mapping itself (that uses a lookup) that it will suffer from the following runtime error:
    "A table can be outer joined with at most one other table".
    After developing a long enough mapping, I discover it suffers from the above error at runtime. The obvious workaround is then to replace the lookups with functions or joiners.
    If I could anticipate the above error beforehand, it would enable me to structure my mapping logic to avoid it, rather than building the mapping and then repeatedly changing its structure.
    I have tried to reproduce the above error in some simple mappings using some lookups, but it does not occur.
    How can I anticipate such errors? What I feel is that OWB deploys code whose structure is very different from the way it looks in the mapping editor.
    Any hints/suggestions are welcome.
    Regards
    -AP

    Hello,
    Functions are definitely not the best choice, as using them involves SQL - PL/SQL context switching, which is very expensive. This will influence overall mapping performance, especially on large data volumes. So joins are the most straightforward and best-performing way to implement lookups.
    Views (inline or presented by a database object) are a nice way of "splitting" a mapping. But currently OWB does not provide an INLINE VIEW operator; actually this is the thing I need most in OWB. Currently I use the ORDER BY operator to facilitate this functionality. On the other hand, if used properly, the ORDER operator should not lead to heavy performance degradation; in the end we need ALL records to be inserted into the target. Anyway, this is the price for using the tool consistently - to accept its approach.
    Sergey

  • Star schema question

    Hi,
    I have a question about the realization of the star schema. I have familiarized myself with the basic concepts of dimensions and fact tables. But what I don't get is how I "combine" the dimensions with the fact table. I know that the fact table includes the dimension IDs and measures. But do I use the joiner operator in OWB, with the dimension IDs as the criteria for the join condition, to create the fact table?
    So my understanding is when I have for example 3 dimensions (product dimension, sales dimension, time dimension) and one fact table.
    The realization looks like this:
    product dim ->
    sales dim -> joiner operator = fact table with the IDs of the dims and measure
    time dim ->
    Please correct me if I am wrong.
    If there is something that I can read to this subject of matter it would be very nice if someone could post it.
    Thx

    Hi,
    first you load the dimensions. Every entry has an id (surrogate key) and some business key (coming from the data source).
    When you load the fact, you use the business key from the data source to join (using a joiner or lookup operator) the dimension and get the id (surrogate key) from it. You only load the id and the measures into the fact table.
    Make sure to handle the case that the business key is null or no entry in the dimension can be found.
    If you query the fact table you must always join the dimensions.
    Regards,
    Carsten.
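The load Carsten describes can be sketched as a single INSERT ... SELECT (all object names are hypothetical; -1 stands for the key of an 'Unknown' dimension row):

```sql
INSERT INTO sales_fact (product_key, store_key, time_key,
                        sale_price, quantity)
SELECT NVL(p.dimension_key, -1),      -- fall back to 'Unknown' rows
       NVL(st.dimension_key, -1),
       NVL(t.dimension_key, -1),
       src.sale_price,
       src.quantity
  FROM sales_src src
  LEFT JOIN product_dim p  ON p.business_key  = src.product_id
  LEFT JOIN store_dim   st ON st.business_key = src.store_id
  LEFT JOIN time_dim    t  ON t.calendar_date = TRUNC(src.sale_date);
```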

  • Load data in a fact table

    Hello,
    I have implemented SCD2 dimension and mapping executing works fine.
    Now I have question about loading data in a fact table.
    How do I need to use OWB (maybe JOINER operator - Join condition - between dimensions and source table) in case of:
    - update on source table
    - delete on source table
    I think the most simple is insert on source table. It is probably to_char(source_transaction_date,'dd.mm.yyyy') = to_char(sysdate,'dd.mm.yyyy'), if I load once a day..
    What is the procedure for fact table mapping to handle updates and deletes on source table?
    Regards

    Some discussions in previous forums should help you
    http://forums.sdn.sap.com/thread.jspa?threadID=2019448
    http://forums.sdn.sap.com/thread.jspa?threadID=1908902
    In the SAP tutorial, you can see a sample example of making fact tables.
    http://help.sap.com/businessobject/product_guides/boexir32SP1/en/xi321_ds_tutorial_en.pdf
    Arun

  • How do I call a SP that returns a sequence number from CMD

    Hi, I have a requirement to call the Sequence.NEXTVAL function from CMD. I believe I would have to call a SP to achieve this. I was also told that there is a generic mapplet for invoking a SP from CMD. Could somebody please point me to the right place? Thanks. -Bhim

    Using the Cloud / PowerCenter hybrid approach, one can cleanse address data by calling the AddressDoctor webservice. The attached sample code (PowerCenter workflow XML) leverages AddressDoctor's Batch Mode to cleanse individual address components (Discrete). The mapplet in the code can be uploaded into the Cloud platform and be used in a DSS or MCT task. Here is an example illustrating address data in a flat file being cleansed and written to another flat file in a Cloud mapping. Note, in this mapping the AddressDoctor login credentials are sourced from a properties file via a lookup. The data flows from source -> expression(1) -> lookup(2) -> mapplet(3) -> target:
    (1) expression - The expression sets an output port to a default value equal to 1 for use in the lookup join condition.
    (2) lookup - The lookup gets the login credentials from a file on the secure agent host machine. Sample file:
    (3) mapplet - The mapplet, imported from the PowerCenter XML, calls the AddressDoctor webservice and returns the process status codes, the original input data and a new cleansed data set. Note, the RecordID must be mapped and unique for each source row.

  • Performance issue loading data out of AS400

    Hi,
    For loading data out of AS400, I have created a view containing a join between three AS400 tables connected with a database link (And some more changes in the file tnsnames.ora and the listener. Hell of a job with Oracle, but it works finally)
    When I use the tool Toad, the results of this query will be shown in about 20 seconds.
    When I use this view in OWB to load this data into a target table, then the load takes about 15 MINUTES!
    Why is this so slow?
    Do I have to configure something in OWB to make this load faster?
    Other loads when I'm using views (to Oracle tables) to load data are running fast.
    It seems that Oracle internally does more than just running the view.
    Who knows?
    Regards,
    Maurice

    Maurice,
    OWB generates optimized code based on whether sources are local or remote. With remote sources, Warehouse Builder will generate code that uses inline views in order to minimize network traffic.
    In your case, you confuse the generation by creating a view that does some remote/local joins, while telling OWB that the object is local (which is only partly true).
    Perhaps what you could do is create one-to-one views and leave it up to OWB to join the objects. One additional advantage you gain with this approach is that you can keep track of the impact analysis based on your source tables rather than views that are based on the tables with flat text queries.
    Mark.

  • ODI keeps on running at the load data step

    Hi ,
    I am loading the data from flat file to Oracle DB.
    When I check the operator, it is always running at the "load data" step and is unable to go to the next step (Insert new records). But when I check the work table, all the data is loaded into the work table (C$ table). Is there any way to check the log to see at what step my scenario is running?
    thanks ,

    Could you put some more information on
    1. Staging Area: Is it different from the Target? (The standard Sunopsis Memory Engine / in-memory engine is too slow.)
    2. What are the mappings (any lookups/joins that are performed), and where are they performed: source, staging, or target? (High-volume lookups or joins can determine the run time depending on where they are implemented.)
    3. LKM: Try using bulk loading or a technology-specific LKM instead of a generic one.
    Regards,
    NJ
