Data quality check or automation

Apart from passing the report to the user for testing, are there ways the process can be automated for a data quality check, and if so, how?
Thanks.

Hi Dre01,
According to your description, you want to check the report data quality, right?
In Reporting Services, the only way to check the report data is to view the report. So for your requirement, if you want this data processing to run automatically, we suggest creating a subscription: it will process the report on a schedule, and you can check the delivered copy of the report to confirm that it shows the data properly.
Reference:
Create, Modify, and Delete Standard Subscriptions (Reporting Services in Native Mode)
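A related option, if the report is delivered by subscription, is to check the outcome of the subscription runs automatically instead of opening each delivered report. The following is only a minimal sketch against the ReportServer catalog database; dbo.Subscriptions and dbo.Catalog exist in a default native-mode install, but column names can vary between SSRS versions, so verify them on your instance before relying on this.

-- List subscriptions whose last run reported an error (run against the ReportServer DB)
select c.Name        as ReportName,
       s.Description as SubscriptionDescription,
       s.LastRunTime,
       s.LastStatus
from   dbo.Subscriptions s
       join dbo.[Catalog] c on c.ItemID = s.Report_OID
where  s.LastStatus like '%error%'
   or  s.LastStatus like '%fail%';

A query like this could be scheduled as a SQL Server Agent job that alerts whenever any rows are returned.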
If you have any question, please feel free to ask.
Best Regards,
Simon Hou

Similar Messages

  • How to do data quality check on XI?

    Hi XI guru,
    I am working on a project, and our architect wants XI to perform some data quality checks to make sure the message contains correct data for further processing. Is there any existing solution or workaround for XI to perform the data quality check?
    For example: if field A and field B do not exist, then XI needs to send an email to remind someone who supports this interface.
    Is this kind of scenario possible for XI to handle? What's the best option for XI?

    Hi,
    Check all the conditions in a UDF and, based on the result, raise an alert.
    Follow the steps below (Michal Krawczyk's blog on this topic; the original link is broken):
    Configuration steps (transaction ALRTCATDEF):
    1) Define an alert category.
    2) Create container elements that are used to hold the error messages.
    3) Set up recipient determination.
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/alert%2bconfiguration%2bin%2bxi
    Alerts can be triggered in different ways, for example by calling a function module directly; see "Triggering XI Alerts from a User Defined Function".
    chirag

  • CDS Data Quality Health Check Error -(Code: 207,103)

    Using the CDS Individual Data Quality Health Check.
    Created a new data source, snapshot and mapping. Cloned the job and modified the snapshot and mapping. Ran the job and got the following: An error occurred whilst creating task '[I2A] Profile Individual Misc Data' : Failed to load process "[I2A] Profile Individual Misc Data" (Code: 207,103). Any suggestions?

    Determined the error to be a missing processor; I must have deleted it by mistake. You can identify which one is missing by the processor name: [I2A] Profile Individual Misc Data.

  • DATA QUALITY connection on INGRES

    Post Author: ryefrancis
    CA Forum: Data Quality and Data Insight
    Hi,
    Would like to ask if any of you have any idea about the error prompt "NO COLUMNS SHOULD BE RETRIEVED FOR THIS TABLE. A POSSIBLE REASON IS THAT YOU DO NOT HAVE SELECT PERMISSION FOR THIS TABLE OR DATABASE"?
    The database being used is INGRES. We have already tried updating its security settings and granting access, but it still won't establish a connection.
    However, when we try using SQL statements instead of SQL HELPER, we can get some reports.
    Hope you have some suggestions and solutions for this problem.
    Good day!

    Ingres is not supported for Data Insight; however, if you send some additional information, we may be able to check whether anything obvious is missing.
    Two items that may help are the connection information from the Data Insight repository (iqi_connection table) for this source and screenshots of the populated connection dialogs.

  • Data Quality - Failed to create connection to MDR.

    Hi All,
    I have installed Data Quality 11.7.2 on a Windows 2003 server using SQL 2005 (both the SQL 2005 server and Data Quality are on the same server). The install was successful, but when I try to open 'Data Quality Project Architect' it throws an error:
    Failed to create connection to MDR.
    Driver error message = qodbc_mssql:
    Unable to connect. Database error message = [Microsoft][ODBC SQL Server Driver]
    [TCP/IP Sockets] SQL Server does not exist or access denied
    I checked the ODBC connection and tested it successfully too.
    I am not able to get past the 'Meta Data Repository Connection' step, as it fails at the 'Setup SQL connection' option.
    Any help is appreciated!
    Thanks
    Ranjit Krishnan

    Hi Paul,
    Thanks for your reply. The ODBC connection was created under System DSN; I also tried the User DSN option, but it did not work.
    I have recreated the ODBC connection many times, still no luck.
    The 'Project Architect' client and the MDR (SQL 2005 Server) are on the same machine.
    Thanks
    Ranjit Krishnan

  • Question on CKM and Data Quality

    As I understand it, CKM only supports checks based on DB constraints. If I want to build more complicated business logic into the checking, does Data Quality sound like a good choice, or are there other suggestions?
    In my case, I will need to check the data in the source table based on table data from different sources (both target and source tables). This should be doable through Data Quality, correct? I am new to ODI. When I first installed ODI, I didn't choose to install the Data Quality module. I suppose I can install DQ separately and link it back to ODI? Do they share the same master repository?
    Sorry for the naive questions; your help is greatly appreciated.
    -Wei

    Hi,
    My idea is something like this, for instance for an FK validation:
    create or replace function F$_validate_FK (value1 number) return number
    as
      v_return number;
    begin
      -- look up the candidate value in the referenced (parent) table
      select my_fk into v_return from any_table where column_fk = value1;
      return v_return;
    exception
      when NO_DATA_FOUND then
        return -1;  -- no matching parent row: the value is invalid
    end;
    And in the constraint (condition) you will have:
    -1 <> (select F$_validate_FK(table1.column_to_be_validated) from dual)
    Any record for which the function returns -1 fails the condition and is rejected from the flow.
    The F$ function can be created in an ODI procedure before the interface and dropped at the end if you think it necessary.
    Does that make sense?
    (There may be syntax errors in this example; I wrote it without compiling, just to show the idea.)
    Edited by: Cezar Santos on 28/04/2009 10:20

  • Data Services and Data Quality Recommnded Install process

    Hi Experts,
    I have a few questions. We have some groups that have requested that Data Quality be implemented, along with another request for Data Services to be implemented. I've seen the request for Data Services to be installed on the desktop, but from what I've read, it appears best to install this on the server side to provide more of a central benefit to all.
    My questions are:
    1. Can the Data Services (server) XI 3.2 install be installed on the same server as XI 3.1 SP3 Enterprise?
    2. Is the Data Services (client) version dependent on the Data Services (server) install being completed? Basically, can the "Data Services Designer" be used without the server install?
    3. Do we require a new license key for this, or can I use the Enterprise Server license key?
    4. At this time we are not using this to move data in and out of SAP, just to read data that is coming from SAP.
    From what I read, Data Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it seems we don't have a need for the SAP connection supplement, but it is definitely something we would implement in the near future. What would be the recommended architecture? A new server with Tomcat and the CMC (separate from our current BOBJ Enterprise servers)? Or can Data Services be installed on the same?
    Thank you,
    Teresa

    Hi Teresa,
    Hope you are referring to installing BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) on the same server machine.
    I am not an expert on BODS installation, but this is my observation:
    We recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before upgrading our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep both on the same server.
    On this test machine, which already had the XI 3.1 SP3 build, when I ran the BODS server installation, what we observed was that all the menus of BOE went away and only the menus of BODS were visible.
    Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
    So we have kept BODS and BOE on two different machines running independently, and we do not see any problem.
    Cheers
    indu

  • Data quality in ODI 11g

    Hi all,
    I want to use a DQ tool for validating the source (a complex file). All my validations are mathematical and complicated.
    Is this possible with the Oracle Data Quality tool that comes with ODI 11g?
    Regards,
    Suresh

    I once used a small ETL tool, esProc, combined with Data Quality (to analyse stock data).
    It is known for handling complicated mathematical computation and statistical analysis, and its performance is also acceptable.
    Check here for details about esProc.

  • The Data quality knowledge base property is empty

    Hello,
    I built a knowledge base and published it successfully (using the DQS client wizard, as normal).
    But when I open a new SSIS project and try to use the DQS component, I can't use the knowledge base I built; I get the message "The Data Quality Knowledge Base property is empty".
    How can I be sure that I published it successfully (is there any query on the repository)?
    What am I missing?

    Hello,
    Use the Data Quality Client to check in the Open Knowledge Base screen whether the knowledge base was published. To do so, click Open Knowledge Base in the home screen of Data Quality Client, and then check the Date Published column.
    Alternatively, you can check the PUBLISH_DATE column in the DQS_MAIN.dbo.A_KNOWLEDGEBASE table.
    Thanks,
    Vivek
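    To make the second option concrete, a query along the following lines can be run against the DQS catalog. The DQS_MAIN.dbo.A_KNOWLEDGEBASE table and its PUBLISH_DATE column come from the reply above; the ID and NAME columns are assumptions and may differ between DQS versions, so check the table definition first.
    -- Knowledge bases that have been published at least once
    SELECT [ID], [NAME], [PUBLISH_DATE]
    FROM DQS_MAIN.dbo.A_KNOWLEDGEBASE
    WHERE PUBLISH_DATE IS NOT NULL;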

  • SAP Data Quality Management: how to replace input fields

    Hi all,
    the input fields in the SAP DQM 4.0 Real time job Job_Realtime_DQ_SAP_Name_And_Address_Match are:
    REC_NUM
    NAME_FIRST
    NAME_LAST
    NAME1
    NAME2
    HOUSE_NUM1
    STR_SUPPL1
    STREET
    CITY1
    COUNTRY
    POST_CODE1
    REGION
    ADDR_TYPE
    MTCSTD1
    MTCSTD2
    PO_BOX
    POST_CODE2    
    THRESHOLD
    I want to replace one of the fields (i.e. ADDR_TYPE) with the "Account_group" column.
    The idea is to replace a field, not to create additional fields (as described in "SAP BusinessObjects Data Quality Management (DQM): Enhancements for more fields other than address data for Duplicate Check").
    Could you please let me know whether the standard XML file sent by the RFC server contains the Account_group column?
    Any other suggestions and ideas are welcome
    Thanks in advance
    Goran

    Hi,
    The additional data tab is controlled by means of classes created in DMS.
    If your requirement is something like project name, responsible person name, etc.:
    In CL01 you can create a class, say Project XYZ.
    In CT04 you can create characteristics such as project name and person name for this class.
    In each characteristic you can maintain the values that will serve the requirement.
    Hope this helps!
    Thank you,
    Manoj
    Award points if useful!

  • Verification of data quality in migration process

    Hi All,
    I am on a project migrating data from SQL Server to an Oracle database. My question is not about performance but about checking data quality.
    My procedure for moving the data is: a) extract the data to a flat file from SQL Server via a GUI tool; b) ftp it to UNIX; c) load it with sqlldr into Oracle temp tables; d) copy the data from the temp tables to the fact tables.
    My point is to check only the SQL Server log file and the sqlldr log file; if there are no errors in them and the row counts match between SQL Server and Oracle, then we can say a, b and c are successful.
    And since d is a third-party stored procedure, we can trust its correctness. I don't see any point where an error could happen.
    But the QA team thinks we have to do at least two more verifications: 1. compare some rows column by column; 2. sum some numeric columns and compare the results.
    Can someone give me some suggestions on how you check data quality in your migration projects, please?
    Best regards,
    Leon

    Without wishing to repeat anything that's already been said by Kim and Frank, this is exactly the type of thing you need checks around.
    1. Exporting from SQL Server into a CSV
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    2. Moving from Windows to UNIX
    Immediately there are differences in EOL characters
    Potential for differences in character sets
    Potential issues with incomplete ftp of files
    3. CSV into temp tables with SQL*Loader
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    Potential to have control files not catering for special characters
    4. Copy from temp tables to fact tables
    Potential to have column mappings wrong
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    And I'm sure there are plenty more things that could go wrong at any stage. You have to cater not only for things going wrong in the disaster sense (disk fails, network fails, data precision lost), but also consider that there could be an obscure bug in any of the technologies you're working with. These are not things you can directly predict, but you should have verification in place to make sure you know if something has gone wrong, however subtle.
    HTH
    David
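    To make the QA team's two extra checks from the question concrete, here is a minimal SQL sketch. The table and column names (stg_orders for the sqlldr temp table, orders_fact for the target, order_id / customer_id / order_amount) are placeholders for illustration only; substitute your own, and run the aggregate query on the SQL Server side as well (with its own syntax) to compare.
    -- Check 2 from the question: aggregate reconciliation on the Oracle side
    select count(*)          as row_cnt,
           sum(order_amount) as amount_total
    from   orders_fact;
    -- Check 1 from the question: column-by-column comparison between temp and fact tables;
    -- any row returned indicates a difference introduced in step d
    (select order_id, customer_id, order_amount from stg_orders
     minus
     select order_id, customer_id, order_amount from orders_fact)
    union all
    (select order_id, customer_id, order_amount from orders_fact
     minus
     select order_id, customer_id, order_amount from stg_orders);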

  • Trillium Data quality connector for SAP

    Hi,
    This is my first time in this forum, so if this is not the correct area to post this, mods please move it to the appropriate area.
    As the subject mentions, I just want to know if anyone has any business documents regarding this Data Quality connector for SAP. Any kind of documentation will be helpful. Thanks.

    Please check the newly created MDM Data Enrichment page on SDN. It provides useful information: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/60368290-4d9e-2910-1480-bb55167ee3c3. [original link is broken]
    Regards,
    Markus

  • Address verification - Data Quality

    Hi guys,
    I am trying to do some research to understand whether you (ORPOS) customers see a need for address, phone and email verification to improve data quality.
    If you do, please let me know where your biggest pain with data quality is: in which forms or modules would an integrated address, phone or email verification solution make your life easier and improve ROI for your company?
    Thanks!

    Hello Ida,
    Address Verification in OEDQ is composed of the Address Verification API and a Global Knowledge Repository (also known as a Postal Address File).
    A subscription to a Postal Address File must be purchased directly from a provider, and Oracle's preferred partner for this is Loqate (http://www.loqate.com/).
    See explanation here for details: https://blogs.oracle.com/mdm/entry/enterprise_data_quality_integration_ready
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
    Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    Optimization of address verification by pre-standardizing data where required
    Formatting of output addresses into the input address fields normally used by applications
    Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application; such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
    The installation and configuration of Address Verification with OEDQ and Loqate is documented here: Installing and Configuring Address Verification
    Best regards,
    Oliver.

  • Implementation of Oracle Coding Standards and Code Quality Checks

    I wanted to implement a list of coding standards and code quality checks for my Oracle packages, functions, views, tables, etc.
    For example:
    All variables with a NUMBER datatype should start with N_ and character types with C_ in all my table and view definitions.
    This can be identified during peer review and corrected, but it is a repetitive process with which I don't want to burden the developers; rather, I want a tool that does all these kinds of checks automatically.
    Is there any tool that does this, or can someone give me an idea of how I can automate it by creating a generic Oracle procedure that runs through all the tables and views and generates an error report for those that deviate from the standards? (A sketch of one such check is shown below.)
    That way we can reduce the manual effort spent on peer review; please suggest.
    Thanks in advance
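    One way to automate a column-naming check like the N_/C_ rule above, without a third-party tool, is to query the Oracle data dictionary and report deviations. This is only a sketch of the idea using the standard USER_TAB_COLUMNS view; extend the data-type list and prefixes to match your own standard.
    -- Columns whose names do not follow the N_/C_ prefix convention
    select table_name, column_name, data_type
    from   user_tab_columns
    where  (data_type = 'NUMBER'
            and column_name not like 'N\_%' escape '\')
       or  (data_type in ('CHAR', 'VARCHAR2', 'NVARCHAR2')
            and column_name not like 'C\_%' escape '\');
    A similar query against USER_ARGUMENTS, or USER_IDENTIFIERS if PL/Scope is enabled, could cover procedure and package variables.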

    maru wrote:
    "I wanted to implement a list of coding standards and code quality checks for my Oracle packages, functions, views, tables, etc. For example: all variables with a NUMBER datatype should start with N_ and character types with C_ in all my table and view definitions."
    Hungarian notation is dead. It has no place in modern programming languages. It has no place in PL/SQL. And it ain't just me saying that.
    "Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged—the compiler knows the types anyway and can check those, and it only confuses the programmer."
    Linus Torvalds
    "No I don't recommend 'Hungarian'. I regard 'Hungarian' (embedding an abbreviated version of a type in a variable name) a technique that can be useful in untyped languages, but is completely unsuitable for a language that supports generic programming and object-oriented programming—both of which emphasize selection of operations based on the type of an argument (known to the language or to the run-time support). In this case, 'building the type of an object into names' simply complicates and minimizes abstraction."
    Bjarne Stroustrup
    maru wrote:
    "2) Conditional Statements
    IF (x = 1) --> Wrong
    IF ((x = 1) AND (y = 2)) --> wrong
    IF x = 1 AND y = 2 --> Right"
    Idiotic rules. The simple rule should be readability of code, not how many brackets to use and when not to use brackets. Minute standards like these detract from designing and writing proper code fast and efficiently.
    maru wrote:
    "There are many more rules (which are specific to your application) which can be incorporated in the tool, thereby giving consistency, readability and ease of maintenance for the developers."
    Bull. The more rules there are, the more difficult it becomes for programmers to write code. It is no longer about writing readable, flexible and performing code - it is about double-checking every single statement line against a huge list of rules about dos and don'ts. It is not about getting the programmer to focus on solving the problem - it is about distracting the programmer with a complex and large rule list of how the code should look.
    Sorry - but this rubs me the wrong way. In that environment, I would be the first to tell you to shove your "many more rules".
    I've developed systems in over a dozen languages over the years. I've seen all kinds of standards. The standards that work are those that are short, simple and sensible. Hungarian notation is not sensible. Writing reserved words in uppercase is not sensible. Dictating how brackets should be used is not sensible.
    What is sensible is using the de facto naming standards in use today - as per the .NET Guidelines for Names (MSDN) and the Code Conventions for the Java Programming Language.
    What is sensible is providing guidelines, such as: bulk collection needs to be justified (used only where SQL alone is not possible) and should use the LIMIT clause to manage the memory spent on the collection variable; or: packages should be used to modularise code, providing a public interface and a private implementation.
    Standards are about creating a sensible and easy-to-use framework for writing code. It is not about creating a list of 1001 rules that a developer needs to remember and adhere to, as if the developer is part of some weird religious sect that has rules for every single aspect of human behaviour.
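    As an illustration of the kind of guideline mentioned above (justify bulk collection and use the LIMIT clause to cap memory), here is a minimal PL/SQL sketch; the table name big_source_table is a placeholder.
    declare
      cursor c_src is
        select * from big_source_table;   -- placeholder source table
      type t_rows is table of c_src%rowtype;
      l_rows t_rows;
    begin
      open c_src;
      loop
        -- fetch in capped batches so the collection never grows unbounded
        fetch c_src bulk collect into l_rows limit 500;
        exit when l_rows.count = 0;
        for i in 1 .. l_rows.count loop
          null;  -- process each row here (or use FORALL for bulk DML)
        end loop;
      end loop;
      close c_src;
    end;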

  • Enterprise Data Quality - stuck/crash when processing high volume

    I am using Enterprise Data Quality and trying to run a data profiling process over 1 million rows. However, the process (which contains group and merge processors) always appears to get stuck halfway through, or it crashes. I have tried other data sources and the result is the same.
    It seems that Enterprise Data Quality does not handle high volumes very well. Please assist, and let me know what other details you require.

    Hi,
    It is certainly not the case that EDQ does not handle large volumes of data well. We have a large number of customers running huge data volumes in the product and have benchmarked the product running services on massive volumes, including matching of 250m records.
    However, if you want to run large jobs, you need to make sure the system is installed and tuned correctly. How did you install the product? Are you running 32-bit or 64-bit? How much memory is allocated to EDQ?
    With regard to best practice, did you throw all profiling processors at all of your data? The better approach is to 'open profile' a sample of records and pick more carefully which processors to run on larger volumes... otherwise you are telling the server to do a huge amount of work, and some of it may not be necessary. Data profiling is an iterative exercise to help you find data issues that you can then check for and fix using audit and transformation processors. Profilers are used mostly for simple reporting when it comes to production jobs on larger volumes.
    Note that there are some profiling processors that will cause progress reporting to appear to 'pause' at 50% of the way through a process. These are 2-phase processors such as the Record Duplication Profiler which needs to spool all data to a table and then analyze it rather than work record by record. Such processors are typically slower than the simpler profilers that add flags to the data with a counting phase at the end (Frequency Profiler, Patterns Profiler etc.)
    Regards,
    Mike
