Verification of data quality in migration process

Hi All,
I am on a project that migrates data from SQL Server to an Oracle database. My question is not about performance but about checking data quality.
My procedure for moving the data is: a) extract the data from SQL Server to a flat file via a GUI tool, b) FTP it to UNIX, c) load it into Oracle temp tables with sqlldr, d) copy the data from the temp tables to the fact tables.
My view is that we only need to check the SQL Server log file and the sqlldr log file: if there are no errors in them and the row counts match between SQL Server and Oracle, then we can say a, b, and c were successful.
And since d is a third-party stored procedure, we can trust its correctness. I don't see any point where an error could occur.
But the QA team thinks we have to do at least two more verifications: 1. compare some rows column by column, and 2. sum some numeric columns and compare the results.
Can someone give me some suggestions on how you check data quality in your migration projects, please?
Best regards,
Leon

Without wishing to repeat anything that's already been said by Kim and Frank, this is exactly the type of thing you need checks around.
1. Exporting from SQL Server into a CSV
Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF-8, etc.)
2. Moving from Windows to UNIX
Immediately there are differences in EOL characters
Potential for differences in character sets
Potential issues with incomplete ftp of files
3. CSV into temp tables with SQL Loader
Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF-8, etc.)
Potential to have control files that don't cater for special characters (see the control file sketch after this list)
4. Copy from temp tables to fact tables
Potential to have column mappings wrong
Potential to loose precision in data types such as numbers, dates, timestamps, or in character sets (unicode, utf etc)
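To make point 3 concrete, here's a minimal SQL*Loader control file sketch (the file, table and column names are made up for illustration): pin down the character set, the field enclosures and explicit datatype/format masks rather than relying on defaults.

LOAD DATA
CHARACTERSET AL32UTF8                -- be explicit rather than inheriting NLS_LANG defaults
INFILE 'customers.csv'
BADFILE 'customers.bad'              -- rejected rows land here; verify it stays empty
TRUNCATE
INTO TABLE tmp_customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  customer_id    INTEGER EXTERNAL,
  created_at     DATE "YYYY-MM-DD HH24:MI:SS",  -- an explicit mask avoids silent date mangling
  amount         DECIMAL EXTERNAL,
  customer_name  CHAR(4000)          -- sqlldr defaults CHAR to 255; size it to the column
)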
And I'm sure there are loads more things that could go wrong at any stage. You have to cater not only for things going wrong in the disaster sense (disk fails, network fails, data precision lost) but also consider that there could be an obscure bug in any of the technologies you're working with. These aren't things you can directly predict, but you should have verification in place to make sure you know if something has gone wrong, however subtle.
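For the verification itself, the checks QA is asking for are cheap to script. A minimal sketch, assuming a temp table tmp_orders with a numeric, a date and a character column (all names hypothetical); run the equivalent aggregates (COUNT, SUM, MIN/MAX, LEN) on the SQL Server source and compare the two result rows, bearing in mind LEN vs LENGTH trailing-space differences:

-- Oracle side: run after sqlldr, and again after the copy to the fact table
SELECT COUNT(*)                   AS row_cnt,        -- matches the source row count?
       SUM(order_amount)          AS sum_amount,     -- catches lost numeric precision
       MIN(order_date)            AS min_date,       -- catches date/format mangling
       MAX(order_date)            AS max_date,
       COUNT(customer_name)       AS non_null_names, -- catches values nulled in transit
       SUM(LENGTH(customer_name)) AS total_name_len  -- catches truncation/charset loss
FROM   tmp_orders;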
HTH
David

Similar Messages

  • ODI Data Quality - Relationship Linker Process

    Hi All,
    I have been trying to use the relationship linker process in the ODI data quality interface with limited success.
    My Problem:
The process tab in the relationship linker asks for two inputs: 1) Attribute containing record type, and 2) Attribute containing window key.
The attribute containing the window key is pre-filled with window_key_01 (the window key you can set up in the prior step). I am not able to determine which exact field I should specify in the "ATTRIBUTE CONTAINING RECORD TYPE" column.
    Things that i have tried out:
I have used the following columns for the attribute containing record type:
    1) PR_BUSNAME_RECODED01
    2) US_POSTAL_MATCH_INPUT_AREA
    3) WINDOW_KEY_01
4) PR_STREET_NAME
The window key I am using is as follows (a sketch of deriving the first component follows this list):
1) Business name - 5 chars (first character + subsequent non-repeating consonants of the business name)
    2) State Name - 2 chars
    3) City Name - 5 chars (first character + subsequent consonants)
    4) postal code - 5 chars (numerics only)
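To illustrate the construction, here is the business-name component expressed as an Oracle SQL sketch (illustrative only; ODI Data Quality does not build keys this way internally, business_name and customers are hypothetical names, and I am reading 'non-repeating' as collapsing adjacent duplicate consonants):

SELECT SUBSTR(UPPER(business_name), 1, 1) ||
       SUBSTR(
         REGEXP_REPLACE(                    -- collapse runs of the same consonant
           REGEXP_REPLACE(                  -- strip everything except consonants
             UPPER(SUBSTR(business_name, 2)),
             '[^BCDFGHJKLMNPQRSTVWXYZ]', ''),
           '(.)\1+', '\1'),
         1, 4) AS name_key
FROM customers;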
    Guidance Needed
1) Which attribute should be selected for the "ATTRIBUTE CONTAINING RECORD TYPE" column? If possible, can you also include the reason for using that attribute?
2) In the advanced features window of the relationship linker process, the linking rules are also not very clear. I am still trying to figure out what lev2_matched_in_lev1_matched and other similar options correspond to.
    The help files do not explain this at length.
I am having some difficulty understanding the relationship linker process, and I have tried out many options.
If any ODI data quality gurus or pros can help me out with this, I'll be really grateful.
    Thanks,
    Chapanna

    Hi,
    thanks for your reply.
Yes, I got the second connection; the question was about the procedure described in the manual.
The solution I found is something of a trick, because the approach described in the docs is not limited to a second repository but could also be used for several entries.
Since I am trying to develop my knowledge of ODI, I would like to resolve any possible unexpected behavior. Do you see the entry described in the docs in your setup?
Looking at the Store, ODQ is quoted at $70,000 per processor, which is one more reason I would expect customers to want to be able to follow the procedure depicted in the documentation, or to have it removed from there (I try to anticipate questions that someone could ask me).
    Thanks
    Fabio D'Alfonso

  • Data Services and Data Quality Recommnded Install process

    Hi Experts,
I have a few questions. We have some groups that have requested Data Quality be implemented, along with another request for Data Services to be implemented. I've seen the request for Data Services to be installed on the desktop, but from what I've read, it appears best to install this on the server side to provide more of a central benefit to all.
    My questions are:
1. Can the Data Services (Server) XI 3.2 install be installed on the same server as XI 3.1 SP3 Enterprise?
2. Is the Data Services (Client) version dependent on the Data Services (Server) install being completed? Basically, can the "Data Services Designer" be used without the Server install?
3. Do we require a new license key for this, or can I use the Enterprise Server license key?
4. At this time we are not using this to move data in and out of SAP, just using it to read data that is coming from SAP.
From what I've read, Data Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it seems we don't have a need for the SAP connection supplement, but it's definitely something we would implement in the near future. What would be the recommended architecture? A new server with Tomcat and the CMC (separate from our current BOBJ Enterprise servers)? Or can Data Services be installed on the same?
    Thank you,
    Teresa

    Hi Teresa.
Hope you are referring to BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) installation on the same server machine.
I am not an expert on BODS installation, but this is my observation:
We recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before an upgrade of our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep it on the same server.
On this test machine, which already had the XI 3.1 SP3 build, when I ran the BODS server installation, what we observed was that all the menus of BOE went away and only the menus of BODS were visible.
Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
So we have kept BODS and BOE on two different machines running independently, and we do not see any problems.
    Cheers
    indu

  • Address verification - Data Quality

    Hi guys,
I am trying to do some research to understand whether you (ORPOS customers) see a need for address, phone and email verification to improve data quality.
If you do, please let me know where your biggest pain is with data quality, and which forms or modules, if they had an address, phone or email verification solution integrated, would make your life easier and improve ROI for your company.
    Thanks!


  • Data Quality tab for migration isn't working as expected.

I was doing a test migration from a DB2 (9.7) database to Oracle 11g using SQL Developer Version 3.2.20.09, Build MAIN-09.87. I found that the Data Quality tab for record count comparison in the migration project wasn't working as expected.
It's not showing the record count from the source database, and it shows the following error on the logging page after every refresh. This happens because the tool populates SOURCENAME as "DB2"."SCHEMANAME"."TABLENAME", whereas SOURCENAME should consist of the schema name and table name only. I have gone through the MD_META package and the database views written for the repository, and found that the catalog name is appended when the source database is DB2.
Is this a known issue? Is a fix available? I think we need a change in the QUOTE function of the MD_META package and the DB views. Please suggest.
    SEVERE     1377     2     oracle.dbtools.db.DBUtil     Warning, unhandled exception: DB2 SQL error: SQLCODE: -204, SQLSTATE: 42704, SQLERRMC: DB2.CDSWEB.PRODUCTLICTYPE

    Hello,
    Is this a known issue?
    Yes, it is a known issue. Bug 11778359: DB2:RUN DATA QUALITY REPORT GET UNHANDLED EXCEPTION: DB2 SQL ERROR
    The bug is unpublished so you can't see it in My Oracle Support, I just mentioned it for reference.
Do we have a fix available?
Not yet. A fix should be available in a future version of SQL Developer. Don't ask me in which one or when, I have no idea.
I think we need a change in the QUOTE function of the MD_META package and the DB views.
I don't agree. I don't know what might break if you manipulate that package.
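As a stopgap while the bug stands, you can do the record count comparison manually outside the Data Quality tab. A sketch using the table from your log; run the first statement on DB2 (plain two-part name) and the second on the migrated Oracle schema, then compare:

SELECT COUNT(*) FROM CDSWEB.PRODUCTLICTYPE;   -- on DB2
SELECT COUNT(*) FROM CDSWEB.PRODUCTLICTYPE;   -- on Oracle after migration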
    Sorry that I have no better answer.
    Best regards
    Wolfgang

  • Address verification in Oracle Enterprise Data Quality

    Hi,
I am new to OEDQ. I need to do address verification using OEDQ. Please guide me on how to proceed with the process.
    Thanks in advance.
Regards,
    Ida.

    Hello Ida,
Address Verification in OEDQ consists of the Address Verification API and a Global Knowledge Repository (also known as the Postal Address File).
A subscription to a Postal Address File must be purchased directly from a provider, and Oracle's preferred partner for this is Loqate (http://www.loqate.com/).
    See explanation here for details: https://blogs.oracle.com/mdm/entry/enterprise_data_quality_integration_ready
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
    Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    Optimization of address verification by pre-standardizing data where required
    Formatting of output addresses into the input address fields normally used by applications
    Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application; such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
The installation and configuration of Address Verification with OEDQ and Loqate is documented here: Installing and Configuring Address Verification
    Best regards,
    Oliver.

  • SLcM and Experian Data Quality (QAS) Pro address verification service

Does SLcM integrate with the Experian Data Quality (QAS) Pro address verification service? If so, how, and is there any documentation on it? Also, are there any institutions that are doing it and are available as a reference?
    Thanks,
    Stan

    Stan,
I don't think there is any documentation out there, but it can be integrated using web services, interfaces and scripting. We integrated SLcM with QAS for one of our clients.
    Thanks,
    Prabhat Singh

  • Enterprise Data Quality - stuck/crash when processing high volume

I am using Enterprise Data Quality, trying to run a data profiling process over 1 million rows. However, the process (which contains group and merge processors) always appears to get stuck halfway through, or crashes. I have tried other data sources and the result is the same.
It seems that Enterprise Data Quality does not handle high volume very well. Please assist, and let me know what other details you require.

    Hi,
    It is certainly not the case that EDQ does not handle large volumes of data well. We have a large number of customers running huge data volumes in the product and have benchmarked the product running services on massive volumes, including matching of 250m records.
    However, if you want to run large jobs, you need to make sure the system is installed and tuned correctly. How did you install the product? Are you running 32-bit or 64-bit? How much memory is allocated to EDQ?
With regard to best practice, did you throw all profiling processors at all of your data? The better approach is to 'open profile' a sample of records and pick more carefully which processors to run on larger volumes... otherwise you are telling the server to do a huge amount of work, some of which may not be necessary. Data profiling is an iterative exercise to help you find data issues that you can then check for and fix using audit and transformation processors. Profilers are used mostly for simple reporting when it comes to production jobs on larger volumes.
    Note that there are some profiling processors that will cause progress reporting to appear to 'pause' at 50% of the way through a process. These are 2-phase processors such as the Record Duplication Profiler which needs to spool all data to a table and then analyze it rather than work record by record. Such processors are typically slower than the simpler profilers that add flags to the data with a counting phase at the end (Frequency Profiler, Patterns Profiler etc.)
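If your source data happens to be staged in an Oracle schema, pulling a sample for that kind of open profiling is a one-liner. A sketch (table names assumed; SAMPLE takes a percentage):

CREATE TABLE src_sample AS
SELECT *
FROM   src_table SAMPLE (1);   -- roughly a 1% random sample of the rows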
    Regards,
    Mike

  • Data Quality Process

    Hi:
I'm very interested in data quality and I'd like to know where to begin and which Oracle tools help support activities like data auditing, data profiling and data cleansing.
I'm an OWB user, but I don't like the way this tool performs data profiling: it creates a materialized view for every column of the table to be analyzed, so it spends a lot of time doing that and then produces a lot of tables with the gathered statistics. I think there could be a better implementation of this.
    Thanks for the forum,
    Hazbleydi C.

    One problem is that the data quality and profiling functions built into the Oracle database cannot be used in a heterogeneous implementation like Fusion - and most of the good data quality vendors have already been acquired (FirstLogic, Vality, Similarity, Fuzzy Logistics). Informatica would be a multi-billion dollar acquisition which may be too expensive for an Oracle data integration stack. Oracle may rely on partnerships with Informatica, IBM and Trillium to provide a wide range of data quality and profiling functions.
    Gartner thinks everyone is looking up in the Magic Quadrant for Data Quality Tools, 2007

  • Popularity trend/usage report is not working in sp2013. Data was not being processed to EVENT STORE folder which was present under the Analytics_GUID folder.

    Hi
I am working on a SharePoint migration project. We have migrated one SharePoint project from MOSS 2007 to SP2013. The issue is that when we click on Popularity Trend > Usage Report, it throws an error.
Issue: the data was not being processed to the EVENT STORE folder, which is present under the Analytics_GUID folder. Also, data was not present in the Analytical Store database.
In the log viewer I found the errors below.
HIGH - SearchServiceApplicationProxy::GetAnalyticsEventTypeDefinitions--Error occured: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
HIGH - Getting Error Message for Exception System.Web.HttpUnhandledException (0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
CRITICAL - A failure was reported when trying to invoke a service application: EndpointFailure Process Name: w3wp Process ID: 13960 AppDomain Name: /LM/W3SVC/767692721/ROOT-1-130480636828071139 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-
UNEXPECTED - Could not retrieve analytics event definitions for https://XXX System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
I have verified a few things on the server, mentioned below:
The two timer jobs (Microsoft SharePoint Foundation Usage Data Processing, Microsoft SharePoint Foundation Usage Data Import) are running fine.
The AppFabric Caching Service has been started.
The Analytics_GUID folder has been shared with WSS_ADMIN_WPG and WSS_WPG, and Read/Write access was granted.
The .usage files are getting created, and the temporary (.tmp) files are also being created.
The usage logging database is receiving the transported usage data; the data is available.
    Please provide pointers on what needs to be done.

    Hi Nabhendu,
    According to your description, my understanding is that you could not use popularity trend after you migrated SharePoint 2007 to SharePoint 2013.
    In SharePoint 2013, the analytics functionality is a part of the search component. There is an article for troubleshooting SharePoint 2013 Web Analytics, please take a look at:
    Troubleshooting SharePoint 2013 Web Analytics
    http://blog.fpweb.net/troubleshooting-sharepoint-2013-web-analytics/#.U8NyA_kabp4
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Data quality check or automation

Apart from passing the report to the user for testing, are there ways the process can be automated for a data quality check, and how?
    Thanks.

    Hi Dre01,
According to your description, you want to check the report data quality, right?
In Reporting Services, the only way to check the report data is by viewing the report. So for your requirement, if you want to make this data check automatic, we suggest creating a subscription: it will process the data automatically on a schedule, and you will receive the subscribed report to check whether it shows the data properly.
    Reference:
    Create, Modify, and Delete Standard Subscriptions (Reporting Services in Native Mode)
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Data Quality Services

    Hi All,
I have used DQS for my organization's data cleansing project, and we have built data quality reports using the DQ output.
We also use the Profiler Summary report in the Activity Monitor, but this is a manual process where data owners need to manually go into the Activity Monitor and generate the profiler report.
My question is: which DQS table stores this Profiler Summary report, so that I can automate the report from that source?
Here is the screenshot of the report we are looking at; we need to find out which DQS source table holds this information.
    Thanks for the help


  • Data Quality Services - Summary report

    Hi,
Does anyone have an idea of which table stores the summary information coming out of the Activity Monitor?
An example is below; when I exported the data, the summary was as follows.
My purpose is to automate this report, if it is stored in the DQS databases:
Field                      | Domain                     | Corrected Values | Suggested Values | Completeness  | Accuracy
EmployeeName               | EmpName                    | 5 (0.06%)        | 0 (0%)           | 7303 (88.73%) | 8222 (99.89%)
EmployeeKey                | EmployeeKey                | 1 (0.01%)        | 0 (0%)           | 8231 (100%)   | 8215 (99.81%)
CostCentreKey              | CostCentreKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8141 (98.91%)
CostGroupKey               | CostCentreGroupKey         | 0 (0%)           | 0 (0%)           | 7188 (87.33%) | 7094 (86.19%)
LeaveGroupKey              | LeaveGroupKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8129 (98.76%)
EmployeeStatusKey          | EmployeeStatusKey          | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8212 (99.77%)
EmployeePositionNumber     | EmployeePositionNumber     | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8117 (98.61%)
EmployeeEmail              | EmployeeEmail              | 0 (0%)           | 0 (0%)           | 5133 (62.36%) | 8220 (99.87%)
HoursPerWeek               | HoursPerWeek               | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
Gender                     | Gender                     | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8231 (100%)
EmployeeEFT                | EmployeeEFT                | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
EmployeePostCode           | EmployeePostCode           | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8124 (98.7%)
EmployeeSuburb             | EmployeeSuburb             | 133 (1.62%)      | 0 (0%)           | 8152 (99.04%) | 8134 (98.82%)
ReportToManager            | ReportToManager            | 0 (0%)           | 0 (0%)           | 7037 (85.49%) | 7036 (85.48%)
PositionClassificationCode | PositionClassificationCode | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
PositionClassificationDesc | PositionClassificationDesc | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
PositionLocation           | PositionLocation           | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8122 (98.68%)
Age                        | Age                        | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8229 (99.98%)
CurrentClassCode           | CurrentClassCode           | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7906 (96.05%)
CurrentCLassDescription    | CurrentCLassDescription    | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7907 (96.06%)
EmpState                   | EmpState                   | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8137 (98.86%)
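I can't say which internal table holds the profiler summary, but one way to hunt for candidates is to search the DQS catalogs for likely table and column names. A sketch (the LIKE patterns are guesses, and the internal DQS schema is undocumented, so treat anything you find as read-only):

USE DQS_PROJECTS;   -- repeat against DQS_MAIN
SELECT s.name AS schema_name,
       t.name AS table_name,
       c.name AS column_name
FROM   sys.tables  t
JOIN   sys.schemas s ON s.schema_id = t.schema_id
JOIN   sys.columns c ON c.object_id = t.object_id
WHERE  t.name LIKE '%profil%'
   OR  c.name LIKE '%accura%'
   OR  c.name LIKE '%complete%'
ORDER  BY t.name, c.column_id;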


  • Data Quality Report Error

    In DS 12.1.1.0,
when I try to open a Data Quality Report in the Management Console I get a new window with the following error message:
Fehler
Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.
(In English: "Error: Error while formatting the page: FormulaFunction threw an unexpected exception from the 'evaluate' method.")
    Tomcat log stdout says:
    19-05-09 10:58:59:708 - {ERROR} sdk.JRCCommunicationAdapter Thread [http-28080-Processor24];  JRCAgent5 detected an exception: Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.
         at com.crystaldecisions.sdk.occa.report.lib.ReportSDKException.throwReportSDKException(Unknown Source)
         at com.businessobjects.reports.sdk.b.i.byte(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.y.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.r.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.cf.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.AdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.NonDCPAdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b2.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b7.broadcast(Unknown Source)
         at com.crystaldecisions.report.web.event.av.a(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.do(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.doLifecycle(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.a(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.processHttpRequest(Unknown Source)
         at org.apache.jsp.jsp.dqcrystalviewer_jsp._jspService(dqcrystalviewer_jsp.java:274)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
         at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
         at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
         at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
         at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1063)
         at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:386)
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:229)
         at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at com.acta.webapp.mdreport.servlet.JSFilter.doFilter(Unknown Source)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
         at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
         at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
         at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
         at java.lang.Thread.run(Thread.java:595)
    Any idea how this can happen?
    Regards,
        Martin
    Edited by: Martin Bernhardt on May 19, 2009 11:03 AM

    Martin,
can you try changing the settings of your Internet Explorer:
Tools --> Internet Options --> General/Appearance/Languages
Add "English (United States) [en-US]" and move it to first place, above the "German (Germany) [de]" entry?
    Niels

  • Migrating from Oracle to SQL Server - data won't migrate but no errors

I'm attempting to migrate a database from Oracle 11g (11.1.0.7.0 64-bit) to SQL Server 2008 R2 (10.50.4033.0). I'm using SSMA, and the Oracle tool was installed on the target SQL Server by the DBA. I've completed all the steps up to and including synchronizing the database, and the empty tables all exist in SQL Server. When I run the step to migrate the data, I get:
    Migrating data...
    Analyzing metadata...
    Preparing table POLICYS.LOGINS...
    Preparing data migration package...
    Starting data migration Engine
    Starting data migration...
    Data migration operation has finished.
        0 table(s) successfully migrated.
        0 table(s) partially migrated.
        1 table(s) failed to migrate.
Then a summary box appears, and there is a red X next to the table (I'm trying to convert just one small table to begin with). What immediately appears to be wrong is that the "To" table is blank.
I've attempted this multiple times with different tables (initially I tried all tables) and tried both server-side and client-side migrations with the same result. There are no errors or warnings in the Error List.
I've found another online post where the user had the same issue; in that instance they fixed it by changing the case of the table name. I tried that, but it didn't help. I'm not sure what is wrong that it won't migrate any data. Any help is appreciated. TIA
    -George

    Hello,
Please review the migration process steps in the following tutorial:
http://blogs.msdn.com/b/ssma/archive/2010/08/27/migrating-oracle-s-sample-hr-schema.aspx
You can also get help by writing to
[email protected]
    For more information, please read the following article:
    http://msdn.microsoft.com/en-us/library/hh313173(v=sql.110).aspx
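Also, since the post you found pointed at table-name case: mixed-case object names on the Oracle side must be quoted everywhere, and mappings can trip over them. A quick check worth running on the source (a sketch; POLICYS is the schema from your log):

SELECT table_name
FROM   all_tables
WHERE  owner = 'POLICYS'
AND    table_name <> UPPER(table_name);   -- any hit needs quoted identifiers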
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
