GEOCODING: modify matchmode to meet data quality needs?

Hi.
We use Oracle Spatial 10gR2 for geocoding Europe.
Currently I find that SDO_GCDR.GEOCODE() does not return the matching record, but another one that is very similar yet differs in the settlement, which is not as exact as we would like to be. Using GEOCODE_ALL, I found that two records are actually returned: the second one is the exact match, and the first one is very similar but differs in the settlement name.
Is there a matchmode that will return only the total match? For our data, 'EXACT' did not quite do the trick, as we do not have/use any data on base_name (which is the same as name), street_prefix, street_suffix, street_type and so on. So we would like to be 'EXACT' in terms of name, postal_code, house number and settlement. Could one possibly create such a matchmode?
Cheers
Sebastian
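
For reference, here is a minimal PL/SQL sketch of working around this with GEOCODE_ALL, assuming a placeholder geocoder schema ('GCUSER') and placeholder address lines: GEOCODE_ALL returns an SDO_ADDR_ARRAY, and a MATCHCODE of 1 is documented as an exact match on all address attributes, so the similar-but-wrong candidates can be skipped. (As the follow-up below explains, though, this particular data set may never produce match code 1, in which case the post-filtering sketch further down is the better fit.)

  DECLARE
    candidates SDO_ADDR_ARRAY;
  BEGIN
    -- GEOCODE_ALL returns every matching candidate, not just the first one
    candidates := SDO_GCDR.GEOCODE_ALL(
                    'GCUSER',                                -- placeholder schema
                    SDO_KEYWORDARRAY('Hauptstrasse 10',
                                     '69117 Heidelberg'),    -- placeholder address
                    'DE',
                    'DEFAULT');
    FOR i IN 1 .. candidates.COUNT LOOP
      IF candidates(i).MATCHCODE = 1 THEN   -- 1 = exact match on all attributes
        DBMS_OUTPUT.PUT_LINE(candidates(i).STREETNAME || ', ' ||
                             candidates(i).POSTALCODE || ' ' ||
                             candidates(i).SETTLEMENT);
      END IF;
    END LOOP;
  END;
  /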

In the data we use, base_name is always the same as name in the GC_ROAD_XX table. Additionally, the fields stype_after, stype_before etc. of that table are never filled, the reason being lack of data quality. We rely entirely on the field name in the GC_ROAD_XX table, which works okay.
Concerning the MATCHMODE, however, the fields base_name, stype etc. in that table will never match exactly, as base_name is set to the same value as the field name and stype_after and stype_before are empty. E.g. for the road 'Hauptstrasse', base_name and name are both set to 'Hauptstrasse' and stype_after is empty, while the geocoder, I believe, expects base_name to be 'Haupt' and stype_after to be 'strasse' in order to report an exact match.
So we would like to modify the matchmode to be exact for the following fields only:
- NAME (GC_ROAD_XX)
- POSTAL_CODE, SETTLEMENT_NAME (GC_POSTAL_CODE_XX)
- HN_FIELDS (GC_ROAD_SEGMENT_XX)
Hope that helps to help me.
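
Since no built-in matchmode checks exactly that field set, one possible workaround (a sketch only, with a hypothetical wrapper name and placeholder schema and country; there is no supported way to define a brand-new matchmode) is to geocode with a relaxed mode and then accept a candidate only when the four trusted fields echo the input:

  CREATE OR REPLACE FUNCTION geocode_exact4 (   -- hypothetical helper
    p_name        VARCHAR2,                     -- road NAME
    p_housenumber VARCHAR2,
    p_postalcode  VARCHAR2,
    p_settlement  VARCHAR2
  ) RETURN SDO_GEO_ADDR
  AS
    candidates SDO_ADDR_ARRAY;
  BEGIN
    candidates := SDO_GCDR.GEOCODE_ALL(
                    'GCUSER',                   -- placeholder schema
                    SDO_KEYWORDARRAY(p_name || ' ' || p_housenumber,
                                     p_postalcode || ' ' || p_settlement),
                    'DE',
                    'RELAX_ALL');               -- cast a wide net first
    FOR i IN 1 .. candidates.COUNT LOOP
      -- ... then be 'EXACT' only on the four fields we actually trust
      IF UPPER(candidates(i).STREETNAME) = UPPER(p_name)
         AND candidates(i).HOUSENUMBER = p_housenumber
         AND candidates(i).POSTALCODE = p_postalcode
         AND UPPER(candidates(i).SETTLEMENT) = UPPER(p_settlement)
      THEN
        RETURN candidates(i);
      END IF;
    END LOOP;
    RETURN NULL;   -- no candidate satisfied all four criteria
  END;
  /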

Similar Messages

  • Need descriptions for data quality error codes

    I'm getting the following address error codes and I don't have the corresponding messages: F440, F441, F505. I have the descriptions for all the other codes.
    Thanks,
    Nathan

    Nathan,
    Can you tell me which data quality product you are using? If this is for International Address Correction & Encoding, you can find all error codes in the Quick Reference for Views and Job-File Products. A copy should be installed with your software; otherwise you may locate a copy at http://help.sap.com/ > Business Objects > Data Quality > Quick Reference for Views and Job-File Products.
    F440 - Insufficient input data; cannot choose between multiple street-level matches.
    F441 - Insufficient input data; street-level match occurred using partial-range matching.
    F505 - Matched to undeliverable default record. The generated undeliverable (F505) has no ZIP+4 listed, an invalid CART, and is flagged as undeliverable. (United States)
    Regards,
    Bob Minard
    Engineer, Technical Customer Assurance

  • Address verification - Data Quality

    Hi guys,
    I am trying to do some research to understand whether you (ORPOS customers) see a need for address, phone and email verification to improve data quality.
    If you do, please let me know where your biggest pain with data quality is: in which forms or modules would an integrated address, phone or email verification solution make your life easier and improve ROI for your company?
    Thanks!

    Hello Ida,
    Address Verification in OEDQ is comprised of the Address Verification API and a Global Knowledge Repository (also known as the Postal Address File).
    A subscription to a Postal Address File must be purchased directly from a provider, and Oracle's preferred partner for this is Loqate (http://www.loqate.com/).
    See the explanation here for details: https://blogs.oracle.com/mdm/entry/enterprise_data_quality_integration_ready
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process; this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleansing and geocoding as part of a check on individual data.
    The installation and configuration of Address Verification with OEDQ and Loqate is documented here: Installing and Configuring Address Verification
    Best regards,
    Oliver.
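
    As an aside on the "simple web page calling address cleaning" scenario above: an EDQ real-time process is typically exposed as a web service, so any HTTP-capable client can invoke it. The following rough PL/SQL sketch uses UTL_HTTP; the host, port, path and payload are hypothetical placeholders, since the real endpoint and message format depend entirely on how the EDQ web service is defined.

      DECLARE
        req  UTL_HTTP.REQ;
        resp UTL_HTTP.RESP;
        body VARCHAR2(4000) :=
          '<address>Hauptstrasse 10, 69117 Heidelberg</address>';  -- hypothetical payload
        buf  VARCHAR2(32767);
      BEGIN
        -- hypothetical endpoint of a deployed EDQ real-time service
        req := UTL_HTTP.BEGIN_REQUEST('http://edqserver:9002/services/cleanaddress',
                                      'POST', 'HTTP/1.1');
        UTL_HTTP.SET_HEADER(req, 'Content-Type', 'text/xml; charset=utf-8');
        UTL_HTTP.SET_HEADER(req, 'Content-Length', LENGTH(body));
        UTL_HTTP.WRITE_TEXT(req, body);
        resp := UTL_HTTP.GET_RESPONSE(req);
        BEGIN
          LOOP
            UTL_HTTP.READ_TEXT(resp, buf, 32767);   -- read the cleansed address back
            DBMS_OUTPUT.PUT_LINE(buf);
          END LOOP;
        EXCEPTION
          WHEN UTL_HTTP.END_OF_BODY THEN
            UTL_HTTP.END_RESPONSE(resp);
        END;
      END;
      /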

  • Data quality check or automation

    Apart from passing the report to the user for testing, are there ways the process can be automated for a data quality check, and how?
    Thanks.

    Hi Dre01,
    According to your description, you want to check the report data quality. Right?
    In Reporting Services, the only way to check the report data is to view the report. So for your requirement, if you want this data processing to happen automatically, we suggest creating a subscription; it will process the data automatically based on the schedule, and you will get the subscribed report to check whether it shows the data properly.
    Reference:
    Create, Modify, and Delete Standard Subscriptions (Reporting Services in Native Mode)
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Data Quality Services

    Hi All,
    I have used DQS for my organization's data cleansing project, and we have built data quality reports using the DQ output.
    We also use the Profiler Summary report in the Activity Monitor, but this is a manual process where data owners need to manually go into the Activity Monitor and generate the profiler report.
    My question is: which DQS table stores this Profiler Summary report, so that I can automate the report from that source?
    Here is the screenshot of the report we are looking at; we need to find out which DQS source table holds this information.
    Thanks for the help


  • Data Quality Installation

    Post Author: Hexagram16
    CA Forum: Data Quality and Data Insight
    Hi Guys,
    I'm new to the data quality module and need some help: how do I upload or update Data Quality licenses? Can anyone here give a step-by-step procedure?
    Thanks in Advance.

    Are you referring to Data Quality XI or Data Insight?
    Also what version?

  • Data Quality Services - Summary report

    Hi,
    Does anyone have an idea about which table stores the summary information coming out of the Activity Monitor?
    The example is below.
    When I exported the data, the summary was as follows:
    My purpose is to automate this report, if it is stored in the DQS databases.
    Field                      | Domain                     | Corrected Values | Suggested Values | Completeness  | Accuracy
    EmployeeName               | EmpName                    | 5 (0.06%)        | 0 (0%)           | 7303 (88.73%) | 8222 (99.89%)
    EmployeeKey                | EmployeeKey                | 1 (0.01%)        | 0 (0%)           | 8231 (100%)   | 8215 (99.81%)
    CostCentreKey              | CostCentreKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8141 (98.91%)
    CostGroupKey               | CostCentreGroupKey         | 0 (0%)           | 0 (0%)           | 7188 (87.33%) | 7094 (86.19%)
    LeaveGroupKey              | LeaveGroupKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8129 (98.76%)
    EmployeeStatusKey          | EmployeeStatusKey          | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8212 (99.77%)
    EmployeePositionNumber     | EmployeePositionNumber     | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8117 (98.61%)
    EmployeeEmail              | EmployeeEmail              | 0 (0%)           | 0 (0%)           | 5133 (62.36%) | 8220 (99.87%)
    HoursPerWeek               | HoursPerWeek               | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
    Gender                     | Gender                     | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8231 (100%)
    EmployeeEFT                | EmployeeEFT                | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
    EmployeePostCode           | EmployeePostCode           | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8124 (98.7%)
    EmployeeSuburb             | EmployeeSuburb             | 133 (1.62%)      | 0 (0%)           | 8152 (99.04%) | 8134 (98.82%)
    ReportToManager            | ReportToManager            | 0 (0%)           | 0 (0%)           | 7037 (85.49%) | 7036 (85.48%)
    PositionClassificationCode | PositionClassificationCode | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
    PositionClassificationDesc | PositionClassificationDesc | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
    PositionLocation           | PositionLocation           | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8122 (98.68%)
    Age                        | Age                        | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8229 (99.98%)
    CurrentClassCode           | CurrentClassCode           | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7906 (96.05%)
    CurrentCLassDescription    | CurrentCLassDescription    | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7907 (96.06%)
    EmpState                   | EmpState                   | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8137 (98.86%)


  • Data Quality for name and address

    Hello,
    We have OWB10G and would like to use the Name and Address operator. It seems to me that we need to buy a data quality library from some 3rd party (FirstLogic etc.). Our client is not ready to pay money to buy these 3rd-party libraries. How would we proceed and still be able to use the Name and Address operator?
    I was under the impression that OWB has some built-in data quality libraries. Do we have to write some PL/SQL to clean, parse, match and merge data, or can it be done using OWB? Why does OWB not have these libraries built in?
    Is there some freely available Data Quality tool?
    Thanks
    Suhail Ahmad

    bump again,
    Oracle used to have Pure*Extract and Pure*Integrate; could we use these libraries with OWB10G?
    Syed

  • Data Quality , Name and address

    I would like to know if someone has used the Name and Address functionality in OWB. I would like to clean our data, possibly link two or more records to one record, and also standardise the addresses, such as Parkway to Pkwy, St. to Street, etc. Is this all possible in OWB? Could I use the Name and Address server to do all this?
    While reading the FAQ at http://otn.oracle.com/products/warehouse/htdocs/ORACLE92_WAREHOUSE_BUILDER_DQ_FAQ.htm , it seems to me that I need to buy some kind of data quality software from a third party; am I right?
    Thanks
    Suhail

    Syed,
    Many OWB customers are using OWB's Name and Address and Match-Merge capabilities to perform the tasks you described.
    You are right: as the Data Quality FAQ states, Name and Address, while modelled in OWB at design time, requires third-party software at run time. That software is licensed separately (directly with third-party vendors; previously Oracle re-sold the third-party technology as an extra option to OWB). However, Match-Merge does not rely on any third-party technology.
    There are introductory viewlets and self-paced exercises for both of these features at http://otn.oracle.com/products/warehouse/htdocs/OTN_viewlet.html
    Nikolai

  • Question on CKM and Data Quality

    As I understand it, the CKM only supports checks based on DB constraints. If I want to build more complicated business logic into the checking, does Data Quality sound like a good choice, or are there other suggestions?
    In my case, I will need to check the data in the source table against table data from different sources (both target and source tables). This should be doable through Data Quality, correct? I am new to ODI. When I first installed ODI, I didn't choose to install the data quality module. I suppose I can install DQ separately and link it back to ODI? Do they share the same master repository?
    Sorry for the naive questions; your help is greatly appreciated.
    -Wei

    Hi,
    My idea is just something like the following, for instance for an FK validation:
    CREATE OR REPLACE FUNCTION f$_validate_fk (value1 NUMBER) RETURN NUMBER
    AS
      v_return NUMBER;
    BEGIN
      -- look up the candidate value in the referenced (parent) table
      SELECT my_fk INTO v_return FROM any_table WHERE column_fk = value1;
      RETURN v_return;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        RETURN -1;  -- no matching parent row found
    END;
    /
    And in the constraint you will have:
    -1 <> (select f$_validate_fk(table1.column_to_be_validated) from dual)
    Any record for which the function returns -1 will then not be valid for the flow.
    The F$ function can be created in an ODI procedure before the interface and dropped at the end if you think it necessary.
    Does that make any sense?
    (Maybe there are several syntax errors in this example; I just wrote it down without compiling it, only to show the idea.)
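    A quick way to try the idea outside ODI, keeping the placeholder table and column names from the sketch above: rows whose lookup fails come back flagged with -1.

      SELECT t.column_to_be_validated,
             f$_validate_fk(t.column_to_be_validated) AS fk_check  -- -1 means no parent row
      FROM   table1 t;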
    Edited by: Cezar Santos on 28/04/2009 10:20

  • ODI Data Profiling and Data Quality

    Hi experts,
    Searching for ODI features for data profiling and data quality, I found (I think) many extensions for the product, which confused me, because it seems there are at least three different ways to do data profiling and data quality:
    First, I found that ODI has out-of-the-box features for data profiling and data quality but, according to the paper, these features are quite limited.
    The second way I found was the product Oracle Data Profiling and Oracle Data Quality for Oracle Data Integrator 11gR1 (11.1.1.3.0), which is on the ODI download page. According to the page, this product extends the existing inline data quality and data profiling features of ODI.
    Finally, the third way is Oracle Enterprise Data Quality, another product that can be integrated with ODI.
    I don't know if I have understood my alternatives correctly, but in fact I need a general explanation of what ODI offers for data quality and data profiling. Can you help me understand this?
    Many thanks in advance.

    Hi, after the 11.1.1.3 release of ODI, Oracle no longer supports ODP/ODQ, which is a Trillium Software product and is not owned by Oracle. Oracle recommends using OEDQ for quality purposes. It would be better to spend your time on OEDQ rather than trying to learn and implement ODP/ODQ in ODI.

  • Data Services and Data Quality Recommnded Install process

    Hi Experts,
    I have a few questions. We have some groups that have requested Data Quality be implemented, along with another request for Data Services to be implemented. I've seen the request for Data Services to be installed on the desktop, but from what I've read, it appears best to install this on the server side to provide a more central benefit to all.
    My questions are:
    1. Can Data Services (server) XI 3.2 be installed on the same server as XI 3.1 SP3 Enterprise?
    2. Is the Data Services (client) version dependent on whether the Data Services (server) install is completed? Basically, can the "Data Services Designer" be used without the server install?
    3. Do we require a new license key for this, or can I use the Enterprise server license key?
    4. At this time we are not using this to move data in and out of SAP; we are just using it to read data that is coming from SAP.
    From what I read, Data Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it seems we don't have a need for the SAP connection supplement, but it is definitely something we would implement in the near future. What would be the recommended architecture? A new server with Tomcat and CMC (separate from our current BOBJ Enterprise servers), or can Data Services be installed on the same?
    Thank you,
    Teresa

    Hi Teresa.
    I hope you are referring to BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) installation on the same server machine.
    I am not an expert on BODS installation, but this is my observation:
    We recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before an upgrade of our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep it on the same server.
    On this test machine, which already had the BOXI 3.1 SP3 build, when I ran the BODS server installation, what we observed was that all the menus of BOE went away and only the menus of BODS were seen.
    Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
    So we have kept BODS and BOE on two different machines running independently, and we do not see any problem.
    Cheers
    indu

  • Data Quality tab for migration isn't working as expected.

    I was doing a test migration from a DB2 (9.7) database to Oracle 11g using SQL Developer Version 3.2.20.09, Build MAIN-09.87. I found that the Data Quality tab for record-count comparison in the migration project wasn't working as expected.
    It's not showing the record count from the source database, and it shows the following error on the logging page after every refresh. This happens because the tool populates SOURCENAME as "DB2"."SCHEMANAME"."TABLENAME", whereas SOURCENAME should consist of the schema name and table name only. I have gone through the MD_META package and the database views written for the repository and found that the catalog name is appended when the source database is DB2.
    Is this a known issue? Do we have a fix available? I think we need a change in the QUOTE function of the MD_META package and the DB views. Please advise.
    SEVERE     1377     2     oracle.dbtools.db.DBUtil     Warning, unhandled exception: DB2 SQL error: SQLCODE: -204, SQLSTATE: 42704, SQLERRMC: DB2.CDSWEB.PRODUCTLICTYPE

    Hello,
    Is this a known issue?
    Yes, it is a known issue: Bug 11778359 - DB2: RUN DATA QUALITY REPORT GET UNHANDLED EXCEPTION: DB2 SQL ERROR. The bug is unpublished, so you can't see it in My Oracle Support; I just mention it for reference.
    Do we have a fix available?
    Not yet. A fix shall be available in a future version of SQL Developer. Don't ask me in which one and when; I have no idea.
    I think we need a change in the QUOTE function of the MD_META package and the DB views.
    I don't agree. I don't know what might break if you manipulate that package.
    Sorry that I have no better answer.
    Best regards
    Wolfgang

  • The 2014 ASUG Data Quality Survey is Live

    This is the third ASUG survey from the Data Governance Special Interest Group (SIG)/ Metric Working Group. Like the other two surveys from 2009 and 2011, this survey is polling how companies are managing quality control for their master data, in particular the number of controls and in what areas.
    The main difference between this and previous surveys is that we are asking that you respond for a single domain (customer, material, vendor) per survey (you can take it more than once if you have input for multiple domains). In previous surveys, the majority of respondents were only able to complete the questions for one domain with a level of confidence. This change will allow for a cleaner analysis.
    This survey will be used for benchmarking what companies are measuring, and not the quality of your data. That will be a future topic focused on the popular areas of measurement.
    Before you take the survey, we recommend you have the list of what your company is measuring and the ASUG Taxonomy Handbook (available on the ASUG Data Governance SIG discussion board as well as at www.DataIntent-LLC.com, or on the LinkedIn discussion group: ASUG Data Quality Controls Taxonomy). Some questions will refer to the number of queries you are measuring by the taxonomy categories.
    Please do not get caught up in precision; it is better to have many participants and be close, than for us to lose your input because you are overly precise about the numbers of queries by taxonomy node.
    This survey is divided into three main parts: demographics, a master data controls poll (customer, material or vendor) and operations, followed by a few questions about the survey itself, including an opt-in opportunity should you like to see the results. In order to qualify to see the composite data, you must complete 29 of the 43 questions. Everyone qualifies for the summary report, which will be available in the near future on the ASUG website and through various presentations.
    We expect this survey to take about 20 minutes and assure you all individual responses are confidential.
    As long as you return to the survey from the same PC, you may save and return to the same survey (this will help you to collaborate).
    If at any time you would like assistance with this survey or the ASUG Taxonomy Handbook, please reply to the discussion thread, or email [email protected] or any member on the steering committee for data quality metrics found at the end of the survey.
    We welcome your participation in this survey - please contact any steering committee member with questions or respond to the discussion on the website.
    Thank you,
    ASUG Data Governance Team

    Just updating in case anyone else has this problem.
    1) The missing help files are apparently a known issue: our Oracle account manager says they can get a complete copy from somewhere and provide it to us. So if you have this problem, as far as I know you'll need to chase up someone in Oracle at this stage.
    2) I think that the Linux install is the server component only; the client is Windows-only; and the help files come with the client. So the help files are not likely to be available from some other install, as far as I can work out.

  • In Data Quality transform please explain Associate transform with the help of any example.

    In Data Quality transform please explain Associate transform with the help of any example.

    Hi Neha,
    If we are using multiple Match transforms and want to consolidate the final output, we use the Associate transform.
    Let me explain with one example based on the data quality blueprints for the USA.
    We have customer/vendor data, and we need to find the duplicates:
    1. First, we find the duplicates on Name and Address.
    2. Second, we find the duplicates on Name and Email.
    3. Third, we find the duplicates on Name and Phone.
    Why do we need to find the duplicates in multiple stages? If we look for duplicates on the combination of Name, Address, Email and Phone all at once, we may not catch the proper duplicates, since we are looking for potential duplicates. That is why we find the duplicates on different combinations.
    In this case we get a different group number for each match combination, each under its own combination name.
    We want to consolidate these and give one group number to the whole set of duplicates, so we pass these 3 match groups to the Associate transform and generate the consolidated match group for the input data.
    I hope you understand the concept.
    Thanks & Regards,
    Ramana.
