ODI Data Quality - Relationship Linker Process

Hi All,
I have been trying to use the relationship linker process in the ODI data quality interface with limited success.
My Problem:
The Process tab of the relationship linker asks for two inputs: 1) Attribute containing record type and 2) Attribute containing window key.
The attribute containing the window key is pre-filled with WINDOW_KEY_01 (the window key you set up in the prior step). I am not able to determine which exact field I should specify in the "ATTRIBUTE CONTAINING RECORD TYPE" column.
Things that I have tried:
I have used the following columns as the attribute containing record type:
1) PR_BUSNAME_RECODED01
2) US_POSTAL_MATCH_INPUT_AREA
3) WINDOW_KEY_01
4) PR_STREET_NAME
The window key I am using is built as follows (a rough SQL illustration appears after the list):
1) Business name - 5 chars (first character + subsequent non-repeating consonants of the business name)
2) State name - 2 chars
3) City name - 5 chars (first character + subsequent consonants)
4) Postal code - 5 chars (numerics only)
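For illustration only, this key could be approximated in Oracle SQL roughly as follows (hypothetical table and column names; ODQ builds the key itself in the window key step, and this sketch keeps repeated consonants rather than collapsing them):
    -- Rough sketch of the 17-character window key described above.
    SELECT UPPER(SUBSTR(business_name, 1, 1))
           || SUBSTR(REGEXP_REPLACE(UPPER(SUBSTR(business_name, 2)),
                                    '[^BCDFGHJKLMNPQRSTVWXYZ]', ''), 1, 4) -- business name: 5 chars
           || UPPER(SUBSTR(state, 1, 2))                                   -- state: 2 chars
           || UPPER(SUBSTR(city, 1, 1))
           || SUBSTR(REGEXP_REPLACE(UPPER(SUBSTR(city, 2)),
                                    '[^BCDFGHJKLMNPQRSTVWXYZ]', ''), 1, 4) -- city: 5 chars
           || SUBSTR(REGEXP_REPLACE(postal_code, '[^0-9]', ''), 1, 5)      -- postal code: 5 chars
           AS window_key_01
      FROM customer_input;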
Guidance needed:
1) Which attribute should be selected for the "ATTRIBUTE CONTAINING RECORD TYPE" column? If possible, can you also include the reason to use that attribute?
2) In the advanced features window of the relationship linker process, the linking rules are also not very clear. I am still trying to figure out what lev2_matched_in_lev1_matched and the other similar options correspond to.
The help files do not explain this at length. I am having some difficulty understanding the relationship linker process and have tried out many options.
If any ODI Data Quality gurus or pros can help me out with this, I'll be really grateful.
Thanks,
Chapanna

Hi,
Thanks for your reply.
Yes, I got the second connection; my question was about the procedure described in the manual.
The solution I found is really a trick, because the way described in the docs is not limited to a second repository; it could also be used for several entries.
Because I am trying to develop my knowledge of ODI, I would like to resolve any unexpected behavior. Do you see the entry described in the docs in your setup?
Looking at the Oracle Store, ODQ is quoted at $70,000 per processor, which is one more reason I would expect customers to want to be able to follow the procedure described in the documentation, or to have it removed from there (I try to anticipate the questions someone could ask me).
Thanks
Fabio D'Alfonso

Similar Messages

  • ODI Data Quality and Data Profiling

    Would an ODI-EE license be sufficient to use ODI Data Quality and Data Profiling?
    We plan to use ODI to migrate data from legacy databases to an Oracle 11g database.
    Do we have to use ODI Data Quality and Profiling?

    Hi, after the 11.1.1.3 release of ODI, Oracle no longer supports ODP/ODQ, which is a Trillium Software product and is not owned by Oracle. Oracle recommends using OEDQ for quality purposes. It's better to spend time on OEDQ rather than trying to learn and implement ODP/ODQ in ODI.

  • ODI Data Quality metabase Load connections

    Hi Guys
    I am trying to get started with ODI Data Quality and Profiling. I would like to connect from the ODI Metabase Manager load connections to the database on my local machine using the following:
    username: thiza
    Password:
    url:10.12.12.12:1521:EDW
    The problem is that the ODI Metabase Manager requires a TNS name. I tried to put the following string in the TNS field, but it still does not work:
    EDW =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = tvictor-za)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = EDW)
        )
      )
    Can anyone give step-by-step instructions on how to connect to an Oracle database from the ODI Metabase Manager (load connections)?
    Any help will be highly appreciated.
    Thanks
    Umapada
    I also tried only "EDW" in place of the TNS name in the Metabase Administrator. But when testing the connection at the time of creating an entity, it says "Please wait, Validating Connection" and this wait never ends, continuing for hours.
    It seems the following shared library is missing. I don't know how to get it from Oracle.
    Has anybody encountered this problem before?
    2009-03-03 12:12:15 02837 WARNING CONNECT Remote oracle connection failure, couldn't load file "/ora/ora10g/odi101350/oracledq/metabase_server/metabase/lib/pkgOracleAdapter/pkgOracleAdapter.sl": no such file or directory - couldn't load file "/ora/ora10g/odi101350/oracledq/metabase_server/metabase/lib/pkgOracleAdapter/pkgOracleAdapter.sl": no such file or directory
    2009-03-03 12:12:15 02837 WARNING ADAPTER Authentication failed. - couldn't load file "/ora/ora10g/odi101350/oracledq/metabase_server/metabase/lib/pkgOracleAdapter/pkgOracleAdapter.sl": no such file or directory
    2009-03-03 12:12:15 02837 INFO CLIENT_DISCONNECT interpffffec50
    2009-03-03 12:12:15 02837 INFO METABASE removing session directory ->/ora/ora10g/odi101350/oracledq/metabase_data/tmp/session/3fb551e8-4c99-4be3-860d-5953ef6512fe<-
    2009-03-03 12:12:25 27177 INFO CLIENT_DISCONNECT interp0029c060

    If you are trying to connect to an Oracle box, try this; it worked for me.
    Go to Add Loader Connections.
    Instead of pasting the entire connection string, use only the TNS name.
    In your case this would be EDW.
    Once added, save your metabase connections and go into the Data Quality and Profiling part.
    Type in the name of the metabase you recently created, along with the username and password.
    It should log you on. It worked for me, and I have tried a lot of Oracle connections. The only problem for me is that I have never been able to configure an ODBC loader connection to SQL Server.
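    One quick sanity check before going back into the Metabase Manager is to confirm that the bare alias actually resolves on that machine; a hypothetical pair of commands, run from the metabase server host using the alias and username from the posts above:
        tnsping EDW
        sqlplus thiza@EDW
    If tnsping cannot resolve EDW, the alias needs to be added to that host's tnsnames.ora first.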
    Hope this helps.
    Chapanna

  • Use of Trace key in transformer and parser of ODI Data Quality

    Hi All,
    Can someone explain the significance of the Trace key in the transformer and parser processes (modules) of the ODI Data Profiling and Quality part?
    I have read all the documentation in the ODI Data Quality Getting Started guide and the user guide, but was not able to find an answer.
    It would be great if anyone could answer this question.
    Thanks in advance,
    Chapanna

    Hi Julien,
    The Trace key option is present in the "transformer" and "transformer address reconstructor" steps when executing an ODQ Name and Address Quality project.
    The purpose of this trace key is to help the developer debug and identify records that have been transformed incorrectly. A primary key/surrogate key is usually chosen as the trace key, so that the exact record can be pinpointed while debugging.
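    For instance, if the input has no natural primary key, a surrogate one can be added on extraction and then selected as the trace key. A minimal sketch, assuming an Oracle source and a hypothetical CUSTOMER_INPUT table:
        -- Tag every record with a stable surrogate key before feeding it
        -- to the ODQ project; the TRACE_KEY column then identifies the
        -- exact input record behind any bad output.
        CREATE TABLE customer_input_keyed AS
        SELECT ROWNUM AS trace_key, t.*
          FROM customer_input t;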
    Thanks,
    Chapanna

  • Problem launching ODI Data Quality/Profiling in Vista

    Hello
    I am having a weird problem with an Oracle Data Integrator installation on Vista. I hope somebody can help me.
    I have installed ODI (10.1.3.5.0) along with Data Quality and Profiling. The installation was successful; however, when trying to launch Data Quality, the Metabase Server, etc., I get the errors "admin.exe is not found" and "discovery.exe not found". Moreover, the bin directory under the oracledq directory is empty. There must be "something" in the bin directory if the installation was error free!
    Could you please let me know if anything is wrong with the installation, or is this a Vista problem? Do I need to install anything else as a prerequisite?
    Your assistance is much appreciated.
    Shibaji Gupta

    Hello Rathish
    Thanks for the info. First of all, please let me know where I should copy those files from.
    In my case the "/oracledq/bin" directory is just empty!
    As the installation did not raise any problem, I can probably assume that all the .dll files (if any) are correctly installed, except for the "bin" directory contents. Is there a way to copy the entire bin directory contents? I am not sure how Oracle performs the installation; perhaps you can help from your experience.
    Anyway, thanks again for the assistance. I would like to try all available options.
    Shibaji Gupta

  • Data Services and Data Quality Recommended Install Process

    Hi Experts,
    I have a few questions. Some groups have requested that Data Quality be implemented, along with a separate request to implement Data Services. I have seen requests for Data Services to be installed on the desktop, but from what I've read it appears best to install it on the server side, so the benefit is central to everyone.
    My questions are:
    1. Can Data Services (server) XI 3.2 be installed on the same server as XI 3.1 SP3 Enterprise?
    2. Is the Data Services (client) version dependent on the Data Services (server) install being completed? Basically, can the "Data Services Designer" be used without the server install?
    3. Do we require a new license key for this, or can I use the Enterprise server license key?
    4. At this time we are not using this to move data in and out of SAP, just to read data that is coming from SAP.
    From what I read, Data Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it seems we don't have a need for the SAP connection supplement, but it is definitely something we would implement in the near future. What would be the recommended architecture? A new server with Tomcat and CMC (separate from our current BOBJ Enterprise servers)? Or can Data Services be installed on the same?
    Thank you,
    Teresa

    Hi Teresa,
    I hope you are referring to BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) installation on the same server machine.
    I am not an expert on BODS installation, but this is my observation:
    We recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before upgrading our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep both on the same server.
    So on this test machine, which already had the XI 3.1 SP3 build, when I installed the BODS server, what we observed was that all the menus of BOE went away and only the menus of BODS were seen.
    Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
    So we have kept BODS and BOE on two different machines running independently, and we do not see any problem.
    Cheers
    indu

  • Verification of data quality in migration process

    Hi All,
    I am on a project migrating data from SQL Server to an Oracle database. My question is not about performance but about checking data quality.
    My procedure to move the data is: a) extract the data to a flat file from SQL Server via a GUI tool; b) ftp it to UNIX; c) load it into Oracle temp tables with sqlldr; d) copy the data from the temp tables to the fact tables.
    My plan is to check only the SQL Server log file and the sqlldr log file; if there are no errors in them and the row counts match between SQL Server and Oracle, then we can say a, b, and c were successful.
    And since d is a third-party stored procedure, we can trust its correctness. I don't see any point where an error could happen.
    But the QA team thinks we have to do at least two more verifications: 1. compare some rows column by column; 2. sum some numeric columns and compare the results.
    Can someone give me suggestions on how you check data quality in your migration projects, please?
    Best regards,
    Leon

    Without wishing to repeat anything that's already been said by Kim and Frank, this is exactly the type of thing you need checks around.
    1. Exporting from SQL Server into a CSV
    Potential to lose precision in data types such as numbers, dates, and timestamps, or in character sets (Unicode, UTF-8, etc.)
    2. Moving from Windows to UNIX
    Immediately there are differences in EOL characters
    Potential for differences in character sets
    Potential for an incomplete ftp of files
    3. CSV into temp tables with SQL*Loader
    Potential to lose precision in data types such as numbers, dates, and timestamps, or in character sets (Unicode, UTF-8, etc.)
    Potential for control files not catering for special characters
    4. Copy from temp tables to fact tables
    Potential to get column mappings wrong
    Potential to lose precision in data types such as numbers, dates, and timestamps, or in character sets (Unicode, UTF-8, etc.)
    And I'm sure there are plenty more things that could go wrong at any stage. You have to cater not only for things going wrong in the disaster sense, i.e. disk fails, network fails, data precision lost, but also consider that there could be an obscure bug in any of the technologies you're working with. These are not things you can directly predict, but you should have verification in place so you know if something has gone wrong, however subtle.
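    As a minimal sketch of the kind of reconciliation the QA team is asking for (hypothetical table and column names; run each query on its own side and compare the output):
        -- On SQL Server (source):
        SELECT COUNT(*)   AS row_cnt,
               SUM(amount) AS amount_sum,
               SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts
          FROM dbo.sales_fact;
        -- On Oracle (target), the same aggregates on the loaded fact table:
        SELECT COUNT(*)   AS row_cnt,
               SUM(amount) AS amount_sum,
               SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts
          FROM sales_fact;
    Any mismatch in the counts or sums flags a stage worth investigating.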
    HTH
    David

  • ODI Data Quality Licensing!!!

    Hi All,
    Can anyone please let me know about ODI DQ licensing? Is it part of ODI?
    Thanks,
    Guru

    Hi Guru,
    ODI DQ licensing is based on the number of transactions/records; it does not depend on year-based licensing.
    I don't think it will work with the demo, though I have not tried the demo.
    Hope this helps.
    Sorry for the delay in response.
    Thanks
    AK

  • Data integrator for HP-UX missing data quality and data profiling

    Hi All,
    I have installed ODI 10.1.3.4.0 from odi_unix_generic_10.1.3.4.0.zip on HP-UX 11.23.
    Data Quality and Data Profiling are missing from that zip. Could anyone please tell me how to get the Data Quality and Data Profiling installer for Oracle Data Integrator on HP-UX 11.23?
    Any response will be highly appreciated.
    Thanks in advance.
    regards
    Umapada

    The integrated package (ODI, Data Quality, and Data Profiling) for HP-UX won't be available until 10.1.3.5.0, which will be released by the end of this year.

  • ODI Data Profiling and Data Quality

    Hi experts,
    Searching for ODI features for data profiling and data quality, I found (I think) many extensions for the product, which confuse me, because I think there are at least three different ways to do data profiling and data quality:
    First, I found that ODI has out-of-the-box features for data profiling and data quality but, according to the paper, these features are quite limited.
    Second, there is the product Oracle Data Profiling and Oracle Data Quality for Oracle Data Integrator 11gR1 (11.1.1.3.0), which is on the ODI download page. According to the page, this product extends the existing inline data quality and data profiling features of ODI.
    Finally, the third way is Oracle Enterprise Data Quality, another product that can be integrated with ODI.
    I don't know if I have understood my alternatives correctly. In fact, I need a general explanation of what ODI offers for data quality and data profiling. Can you help me understand this?
    Many thanks in advance.

    Hi, after the 11.1.1.3 release of ODI, Oracle no longer supports ODP/ODQ, which is a Trillium Software product and is not owned by Oracle. Oracle recommends using OEDQ for quality purposes. It's better to spend time on OEDQ rather than trying to learn and implement ODP/ODQ in ODI.

  • Oracle Enterprise Data Quality for ODI

    Hi,
    1. Can we install the EDQ products below on a single machine?
    - Enterprise Data Quality Profiling for Oracle Data Integrator
    - Enterprise Data Quality Batch Processing for Oracle Data Integrator
    2. How do we size EDQ, and what is the preferred architecture?
    3. Do "Enterprise Data Quality Profiling for Oracle Data Integrator" and "Enterprise Data Quality Batch Processing for Oracle Data Integrator" have the same installation media?
    Thanks & Regards,
    NC

    You should find all you need in the Getting Started guide in the online help.
    I recommend using the introductory wizard when first launching Director; it will take you through creating a project, connecting to some data, and creating a profiling process.

  • Enterprise Data Quality - stuck/crash when processing high volume

    I am using Enterprise Data Quality, trying to run a data profiling process over 1 million rows. However, the process (which contains Group and Merge processors) always appears to get stuck halfway through, or crashes. I have tried other data sources and the result is the same.
    It seems that Enterprise Data Quality does not handle high volumes very well. Please assist, and let me know what other details you require.

    Hi,
    It is certainly not the case that EDQ does not handle large volumes of data well. We have a large number of customers running huge data volumes through the product, and we have benchmarked it running services on massive volumes, including matching of 250m records.
    However, if you want to run large jobs, you need to make sure the system is installed and tuned correctly. How did you install the product? Are you running 32-bit or 64-bit? How much memory is allocated to EDQ?
    With regard to best practice, did you throw all the profiling processors at all of your data? The better approach is to 'open profile' a sample of records and pick more carefully which processors to run on larger volumes; otherwise you are telling the server to do a huge amount of work, some of which may not be necessary. Data profiling is an iterative exercise to help you find data issues that you can then check for and fix using audit and transformation processors. Profilers are used mostly for simple reporting when it comes to production jobs on larger volumes.
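    As a concrete illustration of that approach, something like the following pulls roughly a 1% sample from an Oracle source for the initial 'open profile' pass (hypothetical table name; EDQ snapshots can also be configured to sample directly):
        -- Profile this subset first, then run only the processors that
        -- proved useful against the full million rows.
        SELECT * FROM customer_source SAMPLE (1);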
    Note that there are some profiling processors that will cause progress reporting to appear to 'pause' at 50% of the way through a process. These are two-phase processors, such as the Record Duplication Profiler, which needs to spool all data to a table and then analyze it, rather than work record by record. Such processors are typically slower than the simpler profilers that add flags to the data, with a counting phase at the end (Frequency Profiler, Patterns Profiler, etc.).
    Regards,
    Mike

  • Data Quality Process

    Hi:
    I'm very interested in data quality, and I'd like to know where to begin and which Oracle tools help support activities like data auditing, data profiling, and data cleansing.
    I'm an OWB user, but I don't like the way this tool performs data profiling: it creates a materialized view for every column of the table to be analyzed, so it takes a long time and then produces a lot of tables with the gathered statistics. I think there could be a better implementation of this.
    Thanks for the forum,
    Hazbleydi C.

    One problem is that the data quality and profiling functions built into the Oracle database cannot be used in a heterogeneous implementation like Fusion, and most of the good data quality vendors have already been acquired (FirstLogic, Vality, Similarity, Fuzzy Logistics). Informatica would be a multi-billion-dollar acquisition, which may be too expensive for an Oracle data integration stack. Oracle may rely on partnerships with Informatica, IBM, and Trillium to provide a wide range of data quality and profiling functions.
    Gartner thinks everyone is looking up in the Magic Quadrant for Data Quality Tools, 2007.

  • Data quality in ODI 11g

    Hi all,
    I want to use a DQ tool for validating the source (a complex file). All my validations are mathematical and complicated.
    Is this possible with the Oracle Data Quality tool included in ODI 11g?
    Suresh

    I once used a small ETL tool, esProc, combined with Data Quality (to analyse stock data).
    It is known for handling complicated mathematical computation and statistical analysis, and its performance is also acceptable.
    Check here for details about esProc.

  • Question on CKM and Data Quality

    As I understand it, the CKM only supports checks based on DB constraints. If I want more complicated business logic built into the checking, does Data Quality sound like a good choice? Or are there other suggestions?
    In my case, I need to check the data in the source table against table data from different sources (both target and source tables). This should be doable through Data Quality, correct? I am new to ODI. When I first installed ODI, I chose not to install the Data Quality module. I suppose I can install DQ separately and link it back to ODI? Do they share the same master repository?
    Sorry for the naive questions; your help is greatly appreciated.
    -Wei

    Hi,
    My idea is something like the following, for instance an FK validation:
    create or replace function F$_validate_FK (value1 number) return number
    as
      v_return number;
    begin
      select my_fk
        into v_return
        from any_table
       where column_fk = value1;
      return v_return;
    exception
      when NO_DATA_FOUND then
        return -1;
    end;
    /
    And as the constraint condition you would have:
    -1 <> (select F$_validate_FK(table1.column_to_be_validated) from dual)
    Any record for which the function returns -1 (no matching foreign key) fails the condition and is rejected from the flow.
    The F$ function can be created by an ODI procedure before the interface runs, and dropped at the end if you think that necessary.
    Does that make sense?
    (I wrote this without compiling it, just to show the idea.)
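    For completeness, the cleanup step mentioned above would just be the matching drop, using the hypothetical function name from the sketch:
        DROP FUNCTION F$_validate_FK;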
