Data Quality Process

Hi:
I'm very interested in data quality and I'd like to know where to begin and which Oracle tools help to support activities like data auditing, data profiling and data cleansing.
I'm an OWB user, but I don't like the way this tool performs data profiling: it creates a materialized view for every column of the table to be analyzed, so profiling takes a long time and then produces a lot of tables holding the gathered statistics. I think there must be a better implementation for this.
Thanks for the forum,
Hazbleydi C.
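
For what it's worth, the per-column statistics OWB materializes can often be gathered in a single pass over the table. A minimal sketch, assuming Python with the oracledb driver; the customers table, its columns, and the connection details are hypothetical:

    # Single-pass profiling sketch: one query collects the row count plus
    # fill counts and distinct counts for several columns at once, instead
    # of one materialized view per column.
    import oracledb  # assumed driver; connection details are made up

    PROFILE_SQL = """
        SELECT COUNT(*)                    AS total_rows,
               COUNT(cust_name)            AS cust_name_filled,
               COUNT(DISTINCT cust_name)   AS cust_name_distinct,
               COUNT(postal_code)          AS postal_code_filled,
               COUNT(DISTINCT postal_code) AS postal_code_distinct
        FROM customers
    """

    with oracledb.connect(user="dq", password="secret", dsn="orcl") as conn:
        cur = conn.cursor()
        cur.execute(PROFILE_SQL)
        total, nf, nd, pf, pd = cur.fetchone()

    print(f"cust_name: {nf/total:.2%} filled, {nd} distinct values")
    print(f"postal_code: {pf/total:.2%} filled, {pd} distinct values")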

One problem is that the data quality and profiling functions built into the Oracle database cannot be used in a heterogeneous implementation like Fusion, and most of the good data quality vendors have already been acquired (FirstLogic, Vality, Similarity, Fuzzy Logistics). Informatica would be a multi-billion-dollar acquisition, which may be too expensive for an Oracle data integration stack. Oracle may instead rely on partnerships with Informatica, IBM and Trillium to provide a wide range of data quality and profiling functions.
Gartner, for its part, thinks everyone is looking up in the Magic Quadrant for Data Quality Tools, 2007.

Similar Messages

  • ODI Data Quality - Relationship Linker Process

    Hi All,
    I have been trying to use the relationship linker process in the ODI data quality interface with limited success.
    My Problem:
    The process tab in the relationship linker asks for two inputs: 1) Attribute containing record type and 2) Attribute containing window key.
    The attribute containing window key is pre-filled with window_key_01 (the window key you can set up in the prior step). I am not able to determine which exact field I should specify in the "ATTRIBUTE CONTAINING RECORD TYPE" column.
    Things that I have tried:
    I have used the following columns for attribute containing record type:
    1) PR_BUSNAME_RECODED01
    2) US_POSTAL_MATCH_INPUT_AREA
    3) WINDOW_KEY_01
    4) PR_STREET_NAME
    The window key I am using is as follows (a sketch of how such a key might be built appears after the list):
    1) Business name - 5 chars (first character + subsequent non-repeating consonants of the business name)
    2) State name - 2 chars
    3) City name - 5 chars (first character + subsequent consonants)
    4) Postal code - 5 chars (numerics only)
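    For illustration, a window key following the recipe above could be built roughly like this (a hedged sketch in Python; the function names are mine, not ODI's, and the real ODQ key-building options differ):

        def consonant_key(text: str, length: int) -> str:
            """First character, then subsequent non-repeating consonants,
            padded/truncated to the requested length."""
            letters = "".join(c for c in text.upper() if c.isalpha())
            if not letters:
                return " " * length
            key = letters[0]
            for c in letters[1:]:
                if c not in "AEIOU" and c != key[-1]:
                    key += c
            return (key + " " * length)[:length]

        def window_key(business: str, state: str, city: str, postal: str) -> str:
            digits = "".join(c for c in postal if c.isdigit())
            return (consonant_key(business, 5)      # 5 chars from business name
                    + state.upper()[:2].ljust(2)    # 2-char state
                    + consonant_key(city, 5)        # 5 chars from city name
                    + digits[:5].ljust(5))          # 5 numerics from postal code

        # window_key("Brightwater Ltd", "CA", "Sacramento", "94203-1234")
        # -> "BRGHTCASCRMN94203"  (17 chars: 5 + 2 + 5 + 5)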
    Guidance Needed
    1) Which attribute should be selected for the "ATTRIBUTE CONTAINING RECORD TYPE" column? If possible, can you also include the reason for using that attribute?
    2) In the advanced features window of the relationship linker process, the linking rules are also not very clear. I am still trying to figure out what lev2_matched_in_lev1_matched and other similar options correspond to.
    The help files do not explain this at length.
    I am having some difficulty understanding the relationship linker process, and I have tried out many options.
    If any ODI Data Quality gurus or pros can help me out with this, I'll be really grateful.
    Thanks,
    Chapanna

    Hi,
    thanks for your reply.
    Yes, I got the second connection; the question was about the procedure described in the manual.
    The solution I found is also something of a workaround, because the way described in the docs is not limited to a second repository but could be used for several entries.
    Because I am trying to develop my knowledge of ODI, I would like to resolve any unexpected behavior. Do you see in your setup the entry described in the docs?
    Looking at the Store, ODQ is quoted at $70,000 per processor, which is one more reason I would expect customers to want to follow the procedure depicted in the documentation, or to have it removed from there (I try to answer the questions that someone could ask me).
    Thanks
    Fabio D'Alfonso

  • Enterprise Data Quality - stuck/crash when processing high volume

    I am using Enterprise Data Quality, trying to run a data profiling process over 1 million rows. However, the process (which contains group and merge processors) always appears to get stuck halfway through, or crashes. I have tried other data sources and the result is the same.
    It seems that Enterprise Data Quality does not handle high volumes very well. Please assist, and let me know what other details you require.

    Hi,
    It is certainly not the case that EDQ does not handle large volumes of data well. We have a large number of customers running huge data volumes in the product and have benchmarked the product running services on massive volumes, including matching of 250m records.
    However, if you want to run large jobs, you need to make sure the system is installed and tuned correctly. How did you install the product? Are you running 32-bit or 64-bit? How much memory is allocated to EDQ?
    With regard to best practice, did you throw all profiling processors at all of your data? The better approach is to 'open profile' a sample of records and pick more carefully which processors to run on larger volumes... otherwise you are telling the server to do a huge amount of work, some of which may not be necessary. Data profiling is an iterative exercise to help you find data issues that you can then check for and fix using audit and transformation processors. Profilers are used mostly for simple reporting when it comes to production jobs on larger volumes.
    Note that there are some profiling processors that will cause progress reporting to appear to 'pause' at 50% of the way through a process. These are 2-phase processors, such as the Record Duplication Profiler, which need to spool all data to a table and then analyze it, rather than work record by record. Such processors are typically slower than the simpler profilers that add flags to the data with a counting phase at the end (Frequency Profiler, Patterns Profiler etc.); the sketch below illustrates the difference.
    Regards,
    Mike
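
    To make the one-phase/two-phase distinction above concrete, here is a minimal sketch in plain Python (hypothetical records; this only mimics the behavior of the EDQ processors, not their implementation):

        from collections import Counter

        records = [
            {"name": "ACME Ltd", "city": "Leeds"},
            {"name": "Acme Ltd", "city": "Leeds"},
            {"name": "ACME Ltd", "city": "Leeds"},
        ]

        # One-phase profiler (like the Frequency Profiler): results accumulate
        # record by record, with only a light counting step at the end.
        city_counts = Counter(r["city"] for r in records)
        print(city_counts)                       # Counter({'Leeds': 3})

        # Two-phase profiler (like the Record Duplication Profiler): all records
        # must be spooled first, then analyzed as a whole, which is why progress
        # can appear to pause partway through a process.
        spooled = [tuple(sorted(r.items())) for r in records]         # phase 1
        dupes = {k: n for k, n in Counter(spooled).items() if n > 1}  # phase 2
        print(len(dupes), "exact-duplicate group(s)")                 # 1 group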

  • Data Services and Data Quality Recommended Install Process

    Hi Experts,
    I have a few questions. We have some groups that have requested Data Quality be implemented along with another request for Data Services to be implemented. I've seen the requested for Data Services to be installed on the desktop, but from what I've read, it appears to be best to install this on the server side to allow for more of a central benefit to all.
    My questions are:
    1. Can Data Services (Server) install X.1 3.2 be installed on the same server as X.I 3.1 SP3 Enterprise?
    2. Is the Data Services (CLIENT) Version dependent on if the Data Services (Server) install is completed? Basically can the u201CData Services Designeru201D be used without the Server install?
    3. Do we require a new License key for this or can I use the Enterprise Server license key?
    4. At this time we are not using this to move data in and out of SAP, just using this to read data that is coming from SAP.
    From what I read, DATA Services comes with the SAP BusinessObjects Data Integrator or SAP BusinessObjects Data Quality Management solutions. Right now it's seems we dont have a need for the SAP Connection supplement, but definetly something we would implement in the near future. What would be the recommended architecture? A new Server with tomcat and cmc (seperate from our current BOBJ Enterprise servers)? or can DataServices be installed on the same?
    Thank you,
    Teresa

    Hi Teresa,
    I assume you are referring to BOE 3.1 (BusinessObjects Enterprise) and BODS (BusinessObjects Data Services) installed on the same server machine.
    I am not an expert on BODS installation, but this is my observation:
    We recently tested a BOE XI 3.1 SP3 (full build) installation on a test machine before an upgrade of our BOE system. We also have BODS in our environment, and we wanted to check whether we could keep both on the same server.
    On this test machine, which already had the XI 3.1 SP3 build, when I ran the BODS server installation, what we observed was that all the menus of BOE went away and only the menus of BODS were visible. Maybe the BODS installation overwrites or uninstalls BOE if it already exists? I don't know. I could not find any documentation saying that we cannot have BODS and BOE on the same server machine, but this is what we observed.
    So we have kept BODS and BOE on two different machines running independently, and we do not see any problem.
    Cheers,
    indu

  • Verification of data quality in migration process

    Hi All,
    I am in a project that migrates data from SQL Server to an Oracle database. My question is not about performance but about checking data quality.
    My procedure for moving data is: a) extract data to a flat file from SQL Server via a GUI tool, b) ftp it to UNIX, c) sqlldr it into Oracle temp tables, d) copy the data from the temp tables to the fact tables.
    My point is that we only need to check the SQL Server log file and the sqlldr log file; if there are no errors in them and the row counts match between SQL Server and Oracle, then we can say a, b and c were successful.
    And since d is a third-party stored procedure, we can trust its correctness. I don't see any point where an error could happen.
    But the QA team thinks we have to do at least two more verifications: 1. compare some rows column by column, 2. sum some numeric columns and compare the results.
    Can someone give me some suggestion on how you check the data quality in your migration projects, please?
    Best regards,
    Leon

    Without wishing to repeat anything that's already been said by Kim and Frank, this is exactly the type of thing you need checks around.
    1. Exporting from SQL Server into a CSV
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    2. Moving from Windows to UNIX
    Immediately there are differences in EOL characters
    Potential for differences in character sets
    Potential issues with incomplete ftp of files
    3. CSV into temp tables with SQL*Loader
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    Potential for control files not catering for special characters
    4. Copy from temp tables to fact tables
    Potential to have column mappings wrong
    Potential to lose precision in data types such as numbers, dates and timestamps, or in character sets (Unicode, UTF etc.)
    And I'm sure there are loads more things that could go wrong at any stage. You have to cater not only for things going wrong in the disaster sense (disk fails, network fails, data precision lost) but also consider that there could be an obscure bug in any of the technologies you're working with. These are not things you can directly predict, but you should have verification in place to make sure you know if something has gone wrong, however subtle.
    HTH
    David
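
    The two extra verifications the QA team asked for (row counts plus sums of numeric columns) are easy to script. A minimal sketch, assuming Python with the pyodbc and oracledb drivers; the connection details, table and column names are hypothetical:

        import pyodbc      # SQL Server driver (assumed available)
        import oracledb    # Oracle driver (assumed available)

        def scalar(conn, sql):
            """Run a single-value query and return the result."""
            cur = conn.cursor()
            cur.execute(sql)
            return cur.fetchone()[0]

        # Hypothetical connection details; same table name on both sides.
        mssql = pyodbc.connect("DSN=sqlserver_src;UID=etl;PWD=secret")
        oracle = oracledb.connect(user="etl", password="secret", dsn="orcl")

        checks = {
            # Row counts must match exactly after stages a) to d).
            "row_count": "SELECT COUNT(*) FROM sales_fact",
            # Sums of numeric columns catch precision/truncation problems
            # that matching row counts alone would never reveal.
            "amount_sum": "SELECT SUM(amount) FROM sales_fact",
        }

        for name, sql in checks.items():
            src, dst = scalar(mssql, sql), scalar(oracle, sql)
            print(f"{name}: source={src} target={dst} "
                  + ("OK" if src == dst else "MISMATCH"))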

  • Data quality check or automation

    Apart from passing the report to the user for testing, are there ways the process can be automated for a data quality check, and how?
    Thanks.

    Hi Dre01,
    According to your description, you want to check the report data quality, right?
    In Reporting Services, the only way to check the report data is to view the report. So for your requirement, if you want this data processing to happen automatically, we suggest creating a subscription. It will process the data automatically based on the schedule, and you will get the subscription of the report to check whether it shows the data properly.
    Reference:
    Create, Modify, and Delete Standard Subscriptions (Reporting Services in Native Mode)
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Data Quality Services

    Hi All,
    I have used DQS for my organization's data cleansing project, and we have built data quality reports using the DQ output.
    We also use the Profiler Summary report in the Activity Monitor, but this is a manual process where data owners need to go into the Activity Monitor and generate the profiler report by hand.
    My question is: which DQS table stores this Profiler Summary report, so that I can automate the report from that source?
    Here is the screenshot of the report we are looking at; we need to find out which DQS source table holds this information.
    Thanks for the help


  • Data Quality Services - Summary report

    Hi,
    Does anyone have an idea of which table stores the summary information coming out of the Activity Monitor?
    An example is below; when I exported the data, the summary was as follows.
    My purpose is to automate this report, if it is stored in the DQS databases:
    Field                      | Domain                     | Corrected Values | Suggested Values | Completeness  | Accuracy
    ---------------------------+----------------------------+------------------+------------------+---------------+--------------
    EmployeeName               | EmpName                    | 5 (0.06%)        | 0 (0%)           | 7303 (88.73%) | 8222 (99.89%)
    EmployeeKey                | EmployeeKey                | 1 (0.01%)        | 0 (0%)           | 8231 (100%)   | 8215 (99.81%)
    CostCentreKey              | CostCentreKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8141 (98.91%)
    CostGroupKey               | CostCentreGroupKey         | 0 (0%)           | 0 (0%)           | 7188 (87.33%) | 7094 (86.19%)
    LeaveGroupKey              | LeaveGroupKey              | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8129 (98.76%)
    EmployeeStatusKey          | EmployeeStatusKey          | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8212 (99.77%)
    EmployeePositionNumber     | EmployeePositionNumber     | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8117 (98.61%)
    EmployeeEmail              | EmployeeEmail              | 0 (0%)           | 0 (0%)           | 5133 (62.36%) | 8220 (99.87%)
    HoursPerWeek               | HoursPerWeek               | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
    Gender                     | Gender                     | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8231 (100%)
    EmployeeEFT                | EmployeeEFT                | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8213 (99.78%)
    EmployeePostCode           | EmployeePostCode           | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8124 (98.7%)
    EmployeeSuburb             | EmployeeSuburb             | 133 (1.62%)      | 0 (0%)           | 8152 (99.04%) | 8134 (98.82%)
    ReportToManager            | ReportToManager            | 0 (0%)           | 0 (0%)           | 7037 (85.49%) | 7036 (85.48%)
    PositionClassificationCode | PositionClassificationCode | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
    PositionClassificationDesc | PositionClassificationDesc | 0 (0%)           | 0 (0%)           | 8144 (98.94%) | 8144 (98.94%)
    PositionLocation           | PositionLocation           | 0 (0%)           | 0 (0%)           | 8214 (99.79%) | 8122 (98.68%)
    Age                        | Age                        | 0 (0%)           | 0 (0%)           | 8231 (100%)   | 8229 (99.98%)
    CurrentClassCode           | CurrentClassCode           | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7906 (96.05%)
    CurrentCLassDescription    | CurrentCLassDescription    | 0 (0%)           | 0 (0%)           | 7908 (96.08%) | 7907 (96.06%)
    EmpState                   | EmpState                   | 0 (0%)           | 0 (0%)           | 8153 (99.05%) | 8137 (98.86%)
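
    For reference, the Completeness and Accuracy figures above look like simple ratios over the total record count (8231 here). A sketch of the arithmetic in Python, using the EmployeeName row, and assuming (as a hedge) that DQS defines the measures this way:

        total = 8231
        name_complete = 7303   # records with EmployeeName populated
        name_accurate = 8222   # records whose value passed the EmpName domain

        print(f"Completeness: {name_complete / total:.2%}")  # -> 88.73%
        print(f"Accuracy:     {name_accurate / total:.2%}")  # -> 99.89%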


  • Data Quality Report Error

    In DS 12.1.1.0,
    when I try to open a Data Quality Report in the Management Console I get a new window with the following error message:
    Fehler
    Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.
    (In English: "Error during page formatting: FormulaFunction threw an unexpected exception from the 'evaluate' method.")
    The Tomcat stdout log says:
    19-05-09 10:58:59:708 - {ERROR} sdk.JRCCommunicationAdapter Thread [http-28080-Processor24];  JRCAgent5 detected an exception: Fehler bei der Seitenformatierung: FormulaFunction hat eine unerwartete Ausnahme von der 'evaluate'-Methode ausgelöst.
         at com.crystaldecisions.sdk.occa.report.lib.ReportSDKException.throwReportSDKException(Unknown Source)
         at com.businessobjects.reports.sdk.b.i.byte(Unknown Source)
         at com.businessobjects.reports.sdk.JRCCommunicationAdapter.request(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.y.a(Unknown Source)
         at com.crystaldecisions.proxy.remoteagent.r.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.cf.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.a(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.ReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.AdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.sdk.occa.report.application.NonDCPAdvancedReportSource.getPage(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.ac.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b2.a(Unknown Source)
         at com.crystaldecisions.report.web.event.b7.broadcast(Unknown Source)
         at com.crystaldecisions.report.web.event.av.a(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.do(Unknown Source)
         at com.crystaldecisions.report.web.WorkflowController.doLifecycle(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.a(Unknown Source)
         at com.crystaldecisions.report.web.ServerControl.processHttpRequest(Unknown Source)
         at org.apache.jsp.jsp.dqcrystalviewer_jsp._jspService(dqcrystalviewer_jsp.java:274)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
         at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
         at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:672)
         at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
         at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
         at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:301)
         at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1063)
         at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:386)
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:229)
         at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at com.acta.webapp.mdreport.servlet.JSFilter.doFilter(Unknown Source)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
         at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
         at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
         at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
         at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
         at java.lang.Thread.run(Thread.java:595)
    Any idea how this can happen?
    Regards,
        Martin

    Martin,
    can you try changing the settings of your Internet Explorer:
    Tools --> Internet Options --> General/Appearance/Languages
    Add "English (United States) [en-US]" and move it to first place, above the "German (Germany) [de]" entry?
    Niels

  • How to do data quality check on XI?

    Hi XI gurus,
    I am working on a project where our architect wants XI to perform some data quality checks to make sure the message contains correct data for future processing. Is there any existing solution or workaround for XI to perform the data quality check?
    For example: if field A and field B do not exist, then XI needs to send an email to remind someone who supports this interface.
    Is this kind of scenario possible for XI to handle? What's the best option for XI?

    Hi,
    check all the conditions in a UDF and, based on the result, raise an alert.
    Follow the steps below; see also Michal Krawczyk's blog on the topic (the original link no longer resolves).
    Configuration steps are: go to transaction ALRTCATDEF
    1) Define the alert category.
    2) Create container elements which are used for holding the error messages.
    3) Recipient determination.
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/alert%2bconfiguration%2bin%2bxi
    Alerts can be triggered in different ways, e.g. by calling a function module directly, or from a user-defined function ("Triggering XI Alerts from a User Defined Function").
    chirag
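
    Outside of XI itself (where this logic would live in a Java UDF plus the alert framework above), the underlying pattern is simple. A minimal Python sketch of the field-presence check and email notification; all field names, addresses and the mail host are hypothetical:

        import smtplib
        import xml.etree.ElementTree as ET
        from email.message import EmailMessage

        REQUIRED = ["FieldA", "FieldB"]      # the fields the check demands

        def missing_fields(payload: str) -> list:
            """Return the required fields absent or empty in the message."""
            root = ET.fromstring(payload)
            return [f for f in REQUIRED if not root.findtext(f)]

        def notify(missing) -> None:
            msg = EmailMessage()
            msg["Subject"] = "Interface data quality alert: missing %s" % missing
            msg["From"] = "xi-monitor@example.com"         # hypothetical
            msg["To"] = "interface-support@example.com"    # hypothetical
            msg.set_content("Required fields missing from the inbound message.")
            with smtplib.SMTP("mailhost.example.com") as smtp:
                smtp.send_message(msg)

        missing = missing_fields("<Order><FieldA>42</FieldA></Order>")
        if missing:        # FieldB is absent here, so a mail would go out
            notify(missing)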

  • Data Transfer Process filter: how to view the filter values?

    Hi,
    does anyone know how to view (and download) the values of a filter of a data transfer process?
    I want to compare the filter values of data transfer processes in the productive system with those in the development system. Normally DTPs are created in a development system and transported with TMS to the quality and productive systems. In the productive system there is often a need to slightly change the values in the filter, and to do this directly in the productive system without a transport.
    If the same DTP is used and changed by another project after a while, it will overwrite the changes made in productive maintenance.
    By comparing the filter values one could evaluate the changes.
    For InfoPackages there is the same need; there you can use table RSLDPSEL. Is there something like that for DTPs?
    thanks for any help
    Jean

    Hi,
    the difference between InfoPackages and DTPs is that you can have any number of InfoPackages between the source and the PSA, but only one DTP between the PSA and the data target.
    So there is no other way than changing the filter values in the existing DTP.
    Just check the filter option, choose the required characteristics for your load, and activate the DTP.
    Ramesh

  • Oracle enterprise data quality for product(PDQ)

    Hi,
    I am looking for tutorials or training material for Oracle PDQ (Product Data Quality).
    Can anybody please suggest where I can find good material or hands-on tutorials?
    Thank you
    Raghu.Vujjini

    You should find all you need in the Getting Started guide in the online help.
    I recommend using the introductory wizard when first launching Director: it will take you through creating a project, connecting to some data, and creating a profiling process.

  • Oracle Enterprise Data Quality for ODI

    Hi,
    1. Can we install the EDQ products below on a single machine?
    - Enterprise Data Quality Profiling for Oracle Data Integrator
    - Enterprise Data Quality Batch Processing for Oracle Data Integrator
    2. How do we size EDQ, and what is the preferred architecture?
    3. Do "Enterprise Data Quality Profiling for Oracle Data Integrator" and "Enterprise Data Quality Batch Processing for Oracle Data Integrator" have the same installation media?
    Thanks & Regards,
    NC

    You should find all you need in the Getting Started guide in the online help.
    I recommend using the introductory wizard when first launching Director: it will take you through creating a project, connecting to some data, and creating a profiling process.

  • Oracle Enterprise Data Quality

    Has anyone installed and used Oracle Enterprise Data Quality?
    Can someone provide guidance on where to download it and the file name?
    We are planning to use it for profiling product data (home-grown system).

    You should find all you need in the Getting Started guide in the online help.
    I recommend using the introductory wizard when first launching Director: it will take you through creating a project, connecting to some data, and creating a profiling process.

  • Reconciliation data quality management tools

    When you start a new IDM implementation project, or when you expand the scope of an IDM solution in an organization, one of the problems you are always faced with is determining the quality of the identity data already present. Is the primary key that you think you can use actually present in every record? If not, how many records lack it? Are there any primary key collisions? How are the secondary attributes?
    In theory you could connect your OIM instances, configure the recon and see what results you get, but in many cases you want quick, simple answers, and configuring the whole recon process is not always quick and easy. I have mostly been using Excel or a separate database and Java/SQL to do the transformations.
    Has anyone found a good tool to do this kind of work?
    Thanks
    /Martin
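
    As a stopgap, the checks described above are easy to script over a flat extract. A minimal sketch, assuming Python and a CSV export of the identity data with a hypothetical employee_id key column:

        import csv
        from collections import Counter

        KEY = "employee_id"                 # hypothetical primary-key column

        with open("identity_extract.csv", newline="") as f:
            rows = list(csv.DictReader(f))

        total = len(rows)
        keys = [(r.get(KEY) or "").strip() for r in rows]

        missing = sum(1 for k in keys if not k)
        collisions = {k: n for k, n in Counter(keys).items() if k and n > 1}
        print(f"{total} records, {missing} lack a {KEY}, "
              f"{len(collisions)} colliding key value(s)")

        # Secondary attributes: per-column fill rate as a first quality signal.
        for col in rows[0]:
            filled = sum(1 for r in rows if (r.get(col) or "").strip())
            print(f"{col}: {filled / total:.1%} populated")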

    In any case, there is SAP Master Data Governance for Financial Data as an out-of-the-box application on top of SAP ERP when it comes to centrally creating master data (such as charts of accounts). See the following [blog|/people/andreas.seifried/blog/2010/05/05/governance-for-financial-master-data].
    Regards,
    Markus
